Peirce the Logician

by Hilary Putnam

Following is an excerpt (the last five pages) of an article originally published in Historia Mathematica, vol. 9, 1982, pp. 290-301. It was reprinted in H. Putnam, Realism with a Human Face, Harvard University Press, 1990, pp. 252-260.

When I started to trace the later development of logic, the first thing I did was to look at Schröder's Vorlesungen über die Algebra der Logik. This book, which appeared in three volumes, has a third volume on the logic of relations (Algebra und Logik der Relative, 1895). The three volumes were the best-known logic text in the world among advanced students, and they can safely be taken to represent what any mathematician interested in the study of logic would have had to know, or at least become acquainted with in the 1890s.

As the title suggests, the approach was algebraic (Boole's logic, as we saw, grew out of abstract algebra), and the great problem was to develop a logic of Relative (that is, relations). (The influence of the German word Relativ is, perhaps, the reason Peirce always wrote "relatives" and not "relations.") Peirce, although himself a member of the algebraic school (he criticized himself for this in his correspondence), had reservations about Schröder's close assimilation of logical problems to algebraic ones. "While I am not at all disposed to deny that the so called 'solution problems', consisting in the ascertainment of the general forms of relatives which satisfy given conditions, are often of considerable importance, I cannot admit that the interest of logical study centers in them," Peirce wrote. And "Since Professor Schröder carries his algebraicity so very far, and talks of 'roots', 'values', 'solutions', etc., when, even in my opinion, with my bias towards algebra, such phrases are out of place..." But my purpose in consulting this reference work was narrower; I simply wished to see how Schröder presented the quantifier.

Well, Schröder does mention Frege's discovery, though just barely; but he does not explain Frege's notation at all. The notation he both explains and adopts (with credit to Peirce and his students, O. H. Mitchell and Christine Ladd-Franklin) is Peirce's. And this is no accident: Frege's notation (like one of Peirce's schemes, the system of existential graphs) repelled everyone (although Whitehead and Russell were to study it with consequential results). Peirce's notation, in contrast, was a typographical variant of the notation we use today. Like modern notation, it lends itself to writing formulas on a line (Frege's notation is two-dimensional) and to a simple analysis of normal-form formulas into a prefix (which Peirce calls the Quantifier) and a matrix (which Peirce calls the "Boolean part" of the formula).
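To illustrate the prefix/matrix analysis (the particular formula is my own example, not one of Peirce's), a normal-form formula splits into a string of quantifiers followed by a quantifier-free Boolean combination:

```latex
\underbrace{(\exists x)(y)}_{\text{prefix: the Quantifier}}\;
\underbrace{\bigl(Fx \wedge (Gy \vee \neg Rxy)\bigr)}_{\text{matrix: the ``Boolean part''}}
```

Written on a line in this way, the formula wears its prenex structure on its face, which is exactly what Frege's two-dimensional notation obscured.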

Moreover, as Warren Goldfarb has emphasized in a fine paper on the history of the quantifier, the Boolean school, including Peirce, was willing to apply logical formulas to different "universes of discourse," and Peirce was willing (unlike Frege) to treat first-order logic by itself, and not just as part of an ideal language (with a fixed universe of discourse, namely, "all objects," for Frege). In fact — and this may be as surprising to others as it was to me — the term "first-order logic" is due to Peirce! (It has nothing to do with either Russell's theory of types or Russell's theory of orders, although the way Peirce distinguished between first-order and second-order formulas — by whether the "relative" is quantified over or not — obviously has something to do with logical type.) In summary, Frege tried to "sell" a grand logical-metaphysical scheme with a dubious ontology, while Peirce (and, following him, Schröder) was busy "selling" a modest, flexible, and extremely useful notation.
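Peirce's criterion can be displayed with a pair of formulas of my own devising (not drawn from Peirce's texts): a formula is first-order when only individuals are quantified over, and second-order when the relative itself is bound by a quantifier.

```latex
\text{first-order:}\quad (x)(y)\,(Rxy \supset Ryx) \\
\text{second-order:}\quad (R)(x)(y)\,(Rxy \supset Ryx)
```

The first says that a particular relation R is symmetric; the second, quantifying over the relative R, says that every relation is symmetric.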

The success they experienced was impressive. While, to my knowledge, no one except Frege ever published a single paper in Frege's notation, many famous logicians adopted Peirce-Schröder notation, and famous results and systems were published in it. Löwenheim stated and proved the Löwenheim theorem (later reproved and strengthened by Skolem, whose name became attached to it together with Löwenheim's) in Peircian notation. In fact, there is no reference in Löwenheim's paper to any logic other than Peirce's. To cite another example, Zermelo presented his axioms for set theory in Peirce-Schröder notation, and not, as one might have expected, in Russell-Whitehead notation.

One can sum up these simple facts (which anyone can quickly verify) as follows: Frege certainly discovered the quantifier first (four years before O. H. Mitchell, going by publication dates, which are all we have as far as I know). But Leif Erikson probably discovered America "first" (forgive me for not counting the native Americans, who of course really discovered it "first"). If the effective discoverer, from a European point of view, is Christopher Columbus, that is because he discovered it so that it stayed discovered (by Europeans, that is), so that the discovery became known (by Europeans). Frege did "discover" the quantifier in the sense of having the rightful claim to priority; but Peirce and his students discovered it in the effective sense. The fact is that until Russell appreciated what he had done, Frege was relatively obscure, and it was Peirce who seems to have been known to the entire world logical community. How many of the people who think that "Frege invented logic" are aware of these facts?

The example of Löwenheim shows something else: metamathematical work (of a certain kind) did not have to wait for Russell and Whitehead to make Frege's work known (and to extend it and repair it). First-order logic (and its metamathematical study) would have existed without Frege. (Zermelo even denied that his set-theoretic work depended on Whitehead and Russell; he claimed to have been aware of the "Russell paradox" on his own.)

The Peircian Influence on Whitehead and Russell

Still, I thought, Russell and Whitehead themselves certainly learned their logic from Frege. To check this I turned to Russell's autobiographical writings. The result was frustrating. In My Philosophical Development, Russell describes the impact that meeting Peano had upon his logical development. Strangely enough, he does not mention the quantifier, which seems so very central from our present point of view, at all. Peano taught Russell what was a commonplace in the Peirce-Schröder logical community, the difference in logical form between "all men are mortal" and "Socrates is a man." And it is clear that one of the notations used in Principia for a universally quantified conditional — writing the variable of quantification under the sign of the conditional — came from Peano. But the quantifier as such is not something that Russell singled out for discussion (unless there is something in the unpublished Nachlass in the Russell Archives in Ontario). Even when Russell discusses his debt to Frege (in a peculiar way: Russell is unstinting in his praise of Frege's genius, but claims to have thought of the definition of number quite independently), he does not mention the quantifier. Principia is no more help on this score, although there is an indication in it that most of the specific notations were invented by Whitehead rather than Russell.
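The difference in logical form that Peano taught Russell can be set out in modern (Peirce-descended) notation; the predicate names here are my own choices for illustration:

```latex
\text{``Socrates is a man'':}\quad \mathrm{Man}(\mathrm{socrates}) \\
\text{``All men are mortal'':}\quad (x)\,(\mathrm{Man}\,x \supset \mathrm{Mortal}\,x)
```

The first is an atomic predication of an individual; the second is a universally quantified conditional, with no individual named at all — a distinction traditional subject-predicate analysis obliterated.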

[Footnote: Subsequent to writing this essay I discovered that, in "Whitehead and Principia Mathematica," Mind (1948), p. 137, Russell says that Whitehead contributed the notation for the universal quantifier.]

Since I have mentioned Peano, I should remark that he was not only well acquainted with Peirce-Schröder logic, but he had actually corresponded with Peirce.

In desperation, I looked at Whitehead's Universal Algebra. This is a work squarely in the tradition to which Boole, Schröder, and Peirce belonged, the tradition that treated general algebra and logic as virtually one subject. And here, before Whitehead worked with Russell, there is no mention of Frege, but there is a citation of "suggestive papers" by Peirce's students O. H. Mitchell and Christine Ladd-Franklin. The topic, of course, is the quantifier.

In sum, Whitehead certainly came to his knowledge of quantification through "Peirce and his students." On the other hand, the axioms in Principia are almost certainly derived from Frege's Begriffsschrift; Peirce gave no system of axioms for first-order logic, although his "existential graphs" are a complete proof procedure for first-order logic (an early form of natural deduction).

I have, if anything, minimized Frege's contribution and played up the Boolean contribution for reasons which I have explained. But to leave matters here would be as unjust to Frege and to a third tradition, the Hilbert tradition (proof theory), as Quine's unfortunate remark was to the Boolean tradition.

Frege's work is sometimes disparaged today (I mean Frege's logical achievement; Frege's stock as a philosopher has never been higher), though not, of course, by Quine. It is conceded that Frege was far more rigorous and, in particular, far more consistently free of use-mention confusions than other logicians; but such domestic virtue is no longer felt to be impressive. The central charge laid against his work (and that of Whitehead and Russell) is that what they called logic is not logic but "set theory," and that reducing arithmetic to set theory is a bad idea.

This raises philosophical issues far too broad for this essay. But let me just make two comments on this: (1) Where to draw the line between logic and set theory (or predicate theory) is not an easy question. The statement that a syllogism is valid, for example, is a statement of second-order logic. (Barbara is valid just in case

(F)(G)(H)((x)(Fx ⊃ Gx) ∧ (x)(Gx ⊃ Hx) ⊃ (x)(Fx ⊃ Hx)),

for example.) If second-order logic is "set theory," then most of traditional logic thus becomes "set theory." (2) The full intuitive principle of mathematical induction is definitely second-order in anybody's view. Thus there is a higher-order element in arithmetic whether or not one chooses to "identify numbers with sets" (just as Frege realized).
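The second-order character of Barbara's validity can be made concrete by brute-force model checking: over a finite domain, quantifying over the predicates F, G, H simply means ranging over all subsets of the domain. The following sketch is mine, not Putnam's; the finite-domain restriction and the function names are assumptions made for illustration.

```python
from itertools import combinations

def subsets(domain):
    """Every subset of the domain -- each one a possible
    extension of a monadic predicate F, G, or H."""
    return [set(c) for r in range(len(domain) + 1)
            for c in combinations(domain, r)]

def schema_valid(domain, entails):
    """Check a syllogistic schema over a finite domain by
    quantifying (by brute force) over all predicates F, G, H."""
    subs = subsets(domain)
    return all(entails(F, G, H)
               for F in subs for G in subs for H in subs)

# Barbara: (x)(Fx > Gx) and (x)(Gx > Hx) entail (x)(Fx > Hx).
# On a finite domain, "(x)(Fx > Gx)" is just F <= G (subset).
barbara = lambda F, G, H: not (F <= G and G <= H) or F <= H

# A fallacy for contrast (undistributed middle): all F are G
# and all H are G do NOT entail all F are H.
fallacy = lambda F, G, H: not (F <= G and H <= G) or F <= H

print(schema_valid({1, 2, 3}, barbara))   # True
print(schema_valid({1, 2, 3}, fallacy))   # False
```

Since the universally quantified conditional (x)(Fx ⊃ Gx) collapses to the subset relation F ⊆ G on a finite domain, Barbara's validity here reduces to the transitivity of ⊆, while the fallacious schema is refuted by a concrete counter-interpretation.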

But, philosophical questions aside, Frege certainly undertook one of the most ambitious logical investigations in all history. Its enormous sweep made it (after its repair by Whitehead and Russell, and its translation into a notation resembling Peirce's) a great stimulus to all future work in the field. The Hilbert school certainly put it in the center of their proof-theoretic investigations: Gödel's most famous paper, after all, bears the title "On Principia Mathematica and Related Systems." That all its achievements could be imitated successfully by the Cantorians (Zermelo and von Neumann) does not take away either its priority or its influence. If Peirce and Schröder were the cutting edge of the logical world prior to Russell and Whitehead's Principia Mathematica (or a cutting edge — the Hilbert school was already under way), after the appearance of Principia their work lost its importance — or lost it except for one important thing: its influence on Hilbert, who followed Peirce in separating off first-order logic from the higher system for metamathematical study.

Principia in turn was to lose its cutting-edge position when interest shifted from the construction of systems (and the derivation of mathematics within them) to the metamathematical study of properties of systems. Nothing remains forever the cutting edge in a healthy science. But a fair-minded statement of the historical importance of the different schools of work, a statement that does justice to each without slighting the others, should not be impossible. Such a statement was given by Hilbert and Ackermann:

The first clear idea of a mathematical logic was formulated by Leibniz. The first results were obtained by A. de Morgan (1806-1871) and G. Boole (1815-1864). The entire later development goes back to Boole. Among his successors, W. S. Jevons (1835-1882) and especially C. S. Peirce (1839-1914) enriched the young science. Ernst Schröder systematically organized and supplemented the various results of his predecessors in his Vorlesungen über die Algebra der Logik (1890-1895), which represents a certain completion of the series of developments proceeding from Boole.

In part independently of the development of the Boole-Schröder algebra, symbolic logic received a new impetus from the need of mathematics for an exact foundation and strict axiomatic treatment. G. Frege published his Begriffsschrift in 1879 and his Grundgesetze der Arithmetik in 1893-1903. G. Peano and his co-workers began in 1894 the publication of the Formulaire des Mathematiques, in which all the mathematical disciplines were to be presented in terms of the logical calculus. A high point of this development is the appearance of the Principia Mathematica (1910-1913) by A. N. Whitehead and B. Russell. Most recently Hilbert, in a series of papers and university lectures, has used the logical calculus to find a new way of building up mathematics which makes it possible to recognize the consistency of the postulates adopted. The first comprehensive account of these researches has appeared in the Grundlagen der Mathematik (1934-1939), by D. Hilbert and P. Bernays.

If Quine had produced a statement like this in his book, I should not have had a topic for this essay!