(Part 3) Peircean Induction and the Error-Correcting Thesis

C. S. Peirce: 10 Sept, 1839-19 April, 1914

Last third of “Peircean Induction and the Error-Correcting Thesis”

Deborah G. Mayo
Transactions of the Charles S. Peirce Society 41(2) 2005: 299-319

Part 2 is here.

8. Random sampling and the uniformity of nature

We are now in a position to address the final move in warranting Peirce’s SCT. The severity or trustworthiness assessment, on which the error-correcting capacity depends, requires an appropriate link (qualitative or quantitative) between the data and the data-generating phenomenon, e.g., a reliable calibration of a scale in a qualitative case, or a probabilistic connection between the data and the population in a quantitative case. Establishing such a link, however, is commonly regarded as assuming that observed regularities will persist, or as making some “uniformity of nature” assumption—the bugbear of attempts to justify induction.

But Peirce contrasts his position with those favored by followers of Mill, and “almost all logicians” of his day, who “commonly teach that the inductive conclusion approximates to the truth because of the uniformity of nature” (2.775). Inductive inference, as Peirce conceives it (i.e., severe testing) does not use the uniformity of nature as a premise. Rather, the justification is sought in the manner of obtaining data. Justifying induction is a matter of showing that there exist methods with good error probabilities. For this it suffices that randomness be met only approximately, that inductive methods check their own assumptions, and that they can often detect and correct departures from randomness.

… It has been objected that the sampling cannot be random in this sense. But this is an idea which flies far away from the plain facts. Thirty throws of a die constitute an approximately random sample of all the throws of that die; and that the randomness should be approximate is all that is required. (1.94)

Peirce backs up his defense with robustness arguments. For example, in an (attempted) Binomial induction, Peirce asks, “what will be the effect upon inductive inference of an imperfection in the strictly random character of the sampling” (2.728). What if, for example, a certain proportion of the population had twice the probability of being selected? He shows that “an imperfection of that kind in the random character of the sampling will only weaken the inductive conclusion, and render the concluded ratio less determinate, but will not necessarily destroy the force of the argument completely” (2.728). This is particularly so if the sample mean is near 0 or 1. In other words, violating experimental assumptions may be shown to weaken the trustworthiness or severity of the proceeding, but this may only mean we learn a little less.
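
Peirce’s robustness claim can be given a quick numerical gloss (my own sketch, not in the original paper). If members of the “success” class are twice as likely to be drawn, the long-run observed ratio is not the true p but 2p/(2p + (1 − p)) = 2p/(1 + p): biased, but boundedly so, and least distorted when p is near 0 or 1, just as Peirce notes.

```python
def distorted_ratio(p: float) -> float:
    """Long-run observed proportion when members of the 'success' class
    are twice as likely to be selected: 2p / (2p + (1 - p))."""
    return 2 * p / (1 + p)

for p in (0.01, 0.1, 0.3, 0.5, 0.9, 0.99):
    print(f"true p = {p:0.2f}  ->  observed ratio = {distorted_ratio(p):0.3f}")
```

The concluded ratio is “less determinate”—the observed proportion overstates p—but the distortion is bounded and vanishes at the extremes, so the force of the argument is weakened rather than destroyed.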

Yet a further safeguard is at hand:

Nor must we lose sight of the constant tendency of the inductive process to correct itself. This is of its essence. This is the marvel of it. …even though doubts may be entertained whether one selection of instances is a random one, yet a different selection, made by a different method, will be likely to vary from the normal in a different way, and if the ratios derived from such different selections are nearly equal, they may be presumed to be near the truth. (2.729)

Here, the marvel is an inductive method’s ability to correct the attempt at random sampling. Still, Peirce cautions, we should not depend so much on the self-correcting virtue that we relax our efforts to get a random and independent sample. But if our effort is not successful, and neither is our method robust, we will probably discover it. “This consideration makes it extremely advantageous in all ampliative reasoning to fortify one method of investigation by another” (ibid.).
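
A toy simulation (my own illustration, with hypothetical numbers) of fortifying one method by another: two different selection procedures will generally depart from randomness in different ways, so near-agreement of their estimated ratios is itself evidence that both are near the truth.

```python
import random

random.seed(42)
population = [1] * 300 + [0] * 700          # true success ratio: 0.30

def random_draws(pop, n):
    """Method 1: independent uniform draws with replacement."""
    return [random.choice(pop) for _ in range(n)]

def systematic_draws(pop, n):
    """Method 2: shuffle once, then take every k-th item."""
    shuffled = pop[:]                        # copy; leave the population intact
    random.shuffle(shuffled)
    k = len(shuffled) // n
    return shuffled[::k][:n]

n = 200
r1 = sum(random_draws(population, n)) / n
r2 = sum(systematic_draws(population, n)) / n
print(f"method 1: {r1:0.3f}   method 2: {r2:0.3f}   gap: {abs(r1 - r2):0.3f}")
```

Had the two ratios diverged badly, that would itself flag a departure from randomness in at least one method—the self-correcting feature Peirce describes.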

“The Supernal Powers Withhold Their Hands And Let Me Alone”

Peirce turns the tables on those skeptical about satisfying random sampling—or, more generally, satisfying the assumptions of a statistical model. He declares himself “willing to concede, in order to concede as much as possible, that when a man draws instances at random, all that he knows is that he tried to follow a certain precept” (2.749). There might be a “mysterious and malign connection between the mind and the universe” that deliberately thwarts such efforts. He considers betting on the game of rouge et noire: “could some devil look at each card before it was turned, and then influence me mentally” to bet or not, the ratio of successful bets might differ greatly from 0.5. But, as Peirce is quick to point out, this would equally vitiate deductive inferences about the expected ratio of successful bets.

Consider our informal example of weighing with calibrated scales. By checking the scales against known, standard weights, I can determine whether they are working properly in a particular case. Were the scales infected by systematic error, I would discover this through systematic mismatches with the known weights; the error could then be subtracted out of subsequent measurements. That the scales have given properties when I know an object’s weight indicates they have the same properties when the weight is unknown, lest I be forced to assume that my knowledge or ignorance somehow influences the properties of the scale. More generally, Peirce’s insightful argument goes, an experimental procedure thus confirmed where the measured property is known must work as well where it is unknown, unless a mysterious and malign demon deliberately thwarts my efforts.
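
A minimal sketch of the calibration step (hypothetical readings, my own construction): estimate the scale’s systematic offset from standards of known weight, then subtract it from readings of unknown objects.

```python
# true weight (g) -> measured reading (g) for known standard weights
standards = {10.0: 10.4, 20.0: 20.5, 50.0: 50.3}

offsets = [measured - true for true, measured in standards.items()]
bias = sum(offsets) / len(offsets)       # estimated systematic error

def corrected(reading: float) -> float:
    """Subtract the estimated systematic error from a raw reading."""
    return reading - bias

raw = 33.4                               # reading on an object of unknown weight
print(f"estimated bias: {bias:0.2f} g; corrected weight: {corrected(raw):0.2f} g")
```

The procedure is validated on cases where the weight is known; Peirce’s point is that nothing short of a malign demon could make it fail precisely when the weight is unknown.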

Peirce therefore grants that the validity of induction is based on assuming “that the supernal powers withhold their hands and let me alone, and that no mysterious uniformity … interferes with the action of chance” (ibid.). But this is very different from the uniformity of nature assumption.

…the negative fact supposed by me [no mysterious force interferes with the action of chance] is merely the denial of any major premise from which the falsity of the inductive conclusion could be deduced. Actually so long as the influence of this mysterious source not be overwhelming, the wonderful self-correcting nature of the ampliative inference would enable us, even so, to detect and make allowance for them. (2.749)

Not only do we not need the uniformity of nature assumption, Peirce declares “That there is a general tendency toward uniformity in nature is not merely an unfounded, it is an absolutely absurd, idea in any other sense than that man is adapted to his surroundings” (2.750). In other words, it is not nature that is uniform, it is we who are able to find patterns enough to serve our needs and interests. But the validity of inductive inference does not depend on this.

9. Conclusion

For Peirce, “the true guarantee of the validity of induction” is that it is a method of reaching conclusions which corrects itself; inductive methods—understood as methods of severe testing—are justified to the extent that they are error-correcting methods (SCT). I have argued that the well-known skepticism as regards Peirce’s SCT is based on erroneous views concerning the nature of inductive testing as well as what is required for a method to be self-correcting. Once these two theses are revisited, justifying the SCT boils down to showing that severe testing methods exist and that they provide reliable means for learning from error.

An inductive inference to hypothesis H is warranted to the extent that H passes a severe test, that is, one which, with high probability, would have detected a specific flaw or departure from what H asserts, and yet it did not. By deliberately making use of known flaws and fallacies in reasoning with limited and uncertain data, we can construct tests that are highly trustworthy probes for detecting and discriminating errors in particular cases. Modern statistical methods (e.g., statistical significance tests) based on controlling a test’s error probabilities provide tools which, when properly interpreted, afford severe tests. While on the one hand, contemporary statistical methods increase the mathematical rigor and generality of Peirce’s SCT, on the other, Peirce provides something current statistical methodology lacks: an account of inductive inference and a philosophy of experiment that links the justification for statistical tests to a more general rationale for scientific induction. Combining the mathematical contributions of modern statistics with the inductive philosophy of Peirce sets the stage for developing an adequate solution to the age-old problem of induction. To carry out this project fully is a topic for future work.*
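
To make the error-probability idea concrete (my own illustration, reusing Peirce’s dice example from section 8): an exact binomial significance test reports how improbable so large a departure from the hypothesized ratio would be, were the hypothesis true.

```python
from math import comb

def binom_tail(n: int, k: int, p: float) -> float:
    """P(X >= k) for X ~ Binomial(n, p): the probability of at least
    k successes if the hypothesized ratio p is correct."""
    return sum(comb(n, i) * p**i * (1 - p) ** (n - i) for i in range(k, n + 1))

# 30 throws of a die, 9 sixes observed: how often would a fair die
# (p = 1/6, expecting about 5 sixes) do at least this well by chance?
p_value = binom_tail(30, 9, 1 / 6)
print(f"p-value = {p_value:0.4f}")
```

A small p-value means the test would, with high probability, not have produced so extreme a result were the die fair—the kind of error-probability guarantee on which a severity assessment rests.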

[You can find a pdf version of this paper here.]

REFERENCES and Notes (see part 1)

*That was 2005; I think (hope) I’ve made headway since then.

Categories: C.S. Peirce, Error Statistics, phil/history of stat


6 thoughts on “(Part 3) Peircean Induction and the Error-Correcting Thesis”

  1. My favorite line of all times: “The Supernal Powers Withhold Their Hands And Let Me Alone” (Then he adds, “Actually so long as the influence of this mysterious source not be overwhelming, the wonderful self-correcting nature of the ampliative inference would enable us, even so, to detect and make allowance for them”.) (2.749)

  2. Yes, it looks like you have made some headway since then, from what I am able to comprehend. However, I think a better indication of making more (or less) headway will come when your account of inductive inference is actually applied in practice; and the development of ‘experimental philosophy’ (or what you called ‘the new experimentalism’) renders such application a plausible option. In addition, I see my future studies (in statistics education) as an opportunity to put your account of inductive inference into practice.

    This blog post makes me want to read Peirce’s writings. Do you have any suggestions on where to begin with reading Peirce? (Also, I might ask Prof. Lydia Patton whether she would be willing to include Peirce in the readings of the class I am taking (i.e., PHIL/STS 6314: History of Philosophy of Science).)

    • Nicole: I would say to check the references in this paper for 2 or 3 good places to begin reading Peirce. There are also collections around.

  3. Clark Glymour reminds me, in relation to part 2 of this paper, that (a) and (b) (randomization and pre-designation) should not be exaggerated. Peirce himself even qualified them (saying when they can be violated). But what matters is that the method allow for, or make use of, (a) creating a probabilistic connection of the sort randomization can provide, and (b) picking up on the ways data-dependent selection effects can modify error-probing capacities. One may be able to accomplish what’s needed without literally satisfying (a) or (b), but consciously developing those alternative methods grows out of the appreciation of (a) and (b).
    I just thought it noteworthy that the many discussions of Peirce’s error correcting thesis overlook these requirements…or don’t seem to get the message they’re intended to send about his view of error-correcting.

  4. Anon

    Could you elaborate further on the topic below?

    Let’s suppose we have two statistical tests T1 and T1, and data X, and we want to test H.

    The first test gives us p-value(H) = 0%

    The second test gives us p-value(H) = 20%

    How can we decide which test result we should follow?

  5. Anon

    ** T1 and T2
