Today is C.S. Peirce’s birthday. He’s one of my all-time heroes. You should read him: he’s a treasure chest on essentially any topic, and he anticipated several major ideas in statistics (e.g., randomization, confidence intervals) as well as in logic. I’ll reblog the first portion of a (2005) paper of mine. Links to Parts 2 and 3 are at the end. It’s written for a very general philosophical audience; the statistical parts are pretty informal. *Happy birthday Peirce*.

**Peircean Induction and the Error-Correcting Thesis**

Deborah G. Mayo

*Transactions of the Charles S. Peirce Society: A Quarterly Journal in American Philosophy*, Volume 41, Number 2, 2005, pp. 299-319

Peirce’s philosophy of inductive inference in science is based on the idea that what permits us to make progress in science, what allows our knowledge to grow, is the fact that science uses methods that are self-correcting or error-correcting:

Induction is the experimental testing of a theory. The justification of it is that, although the conclusion at any stage of the investigation may be more or less erroneous, yet the further application of the same method must correct the error. (5.145)

Inductive methods—understood as methods of experimental testing—are justified to the extent that they are error-correcting methods. We may call this Peirce’s error-correcting or self-correcting thesis (SCT):

**Self-Correcting Thesis SCT:** methods for inductive inference in science are error correcting; the justification for inductive methods of experimental testing in science is that they are self-correcting.

Peirce’s SCT has been a source of fascination and frustration. By and large, critics and followers alike have denied that Peirce can sustain his SCT as a way to justify scientific induction: “No part of Peirce’s philosophy of science has been more severely criticized, even by his most sympathetic commentators, than this attempted validation of inductive methodology on the basis of its purported self-correctiveness” (Rescher 1978, p. 20).

In this paper I shall revisit the Peircean SCT: properly interpreted, I will argue, Peirce’s SCT not only serves its intended purpose, it also provides the basis for justifying (frequentist) statistical methods in science. While on the one hand, contemporary statistical methods increase the mathematical rigor and generality of Peirce’s SCT, on the other, Peirce provides something current statistical methodology lacks: an account of inductive inference and a philosophy of experiment that links the justification for statistical tests to a more general rationale for scientific induction. Combining the mathematical contributions of modern statistics with the inductive philosophy of Peirce sets the stage for developing an adequate justification for contemporary inductive statistical methodology.

**2. Probabilities are assigned to procedures not hypotheses**

Peirce’s philosophy of experimental testing shares a number of key features with the contemporary (Neyman and Pearson) Statistical Theory: statistical methods provide, not means for assigning degrees of probability, evidential support, or confirmation to hypotheses, but procedures for testing (and estimation), whose rationale is their predesignated high frequencies of leading to correct results in some hypothetical long run. A Neyman and Pearson (N-P) statistical test, for example, instructs us “To decide whether a hypothesis, *H*, of a given type be rejected or not, calculate a specified character, *x*_{0}, of the observed facts; if *x* > *x*_{0} reject *H*; if *x* < *x*_{0} accept *H*.” Although the outputs of N-P tests do not assign hypotheses degrees of probability, “it may often be proved that if we behave according to such a rule … we shall reject *H* when it is true not more, say, than once in a hundred times, and in addition we may have evidence that we shall reject *H* sufficiently often when it is false” (Neyman and Pearson, 1933, p. 142).[i]
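The long-run rationale of such a rule can be checked by simulation. The sketch below is my own illustration, not from the paper: a rule of the form “reject *H*_{0} when the sample mean exceeds a cutoff *x*_{0}”, with hypothetical normal data, examined for how often it rejects a true null and a false one.

```python
import random
import statistics

# Illustrative sketch (not from the paper): an N-P style decision rule
# "reject H0 if the sample mean exceeds a cutoff x0", checked by simulation.
# Assumed setup: under H0 the data are N(0, 1); under the alternative, N(0.5, 1).

random.seed(1)

def sample_mean(mu, n=100):
    return statistics.fmean(random.gauss(mu, 1) for _ in range(n))

x0 = 1.645 / (100 ** 0.5)   # cutoff chosen to give ~5% type I error for n = 100

trials = 10_000
type1 = sum(sample_mean(0.0) > x0 for _ in range(trials)) / trials
power = sum(sample_mean(0.5) > x0 for _ in range(trials)) / trials

print(f"reject H0 when true:  {type1:.3f}")   # near 0.05: rare erroneous rejection
print(f"reject H0 when false: {power:.3f}")   # high: H0 rejected "sufficiently often"
```

The point of the simulation is only that the error probabilities attach to the *procedure*, not to any one hypothesis: no probability is assigned to *H*_{0} itself.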

The relative frequencies of erroneous rejections and erroneous acceptances in an actual or hypothetical long run sequence of applications of tests are error probabilities; we may call the statistical tools based on error probabilities, error statistical tools. In describing his theory of inference, Peirce could be describing that of the error-statistician:

The theory here proposed does not assign any probability to the inductive or hypothetic conclusion, in the sense of undertaking to say how frequently that conclusion would be found true. It does not propose to look through all the possible universes, and say in what proportion of them a certain uniformity occurs; such a proceeding, were it possible, would be quite idle. The theory here presented only says how frequently, in this universe, the special form of induction or hypothesis would lead us right. The probability given by this theory is in every way different—in meaning, numerical value, and form—from that of those who would apply to ampliative inference the doctrine of inverse chances. (2.748)

The doctrine of “inverse chances” alludes to assigning (posterior) probabilities to hypotheses by applying the definition of conditional probability (Bayes’s theorem), a computation that requires starting out with a (prior or “antecedent”) probability assignment to an exhaustive set of hypotheses:

If these antecedent probabilities were solid statistical facts, like those upon which the insurance business rests, the ordinary precepts and practice [of inverse probability] would be sound. But they are not and cannot be statistical facts. What is the antecedent probability that matter should be composed of atoms? Can we take statistics of a multitude of different universes? (2.777)

For Peircean induction, as in the N-P testing model, the conclusion or inference concerns a hypothesis that either is or is not true in this one universe; thus, assigning a frequentist probability to a particular conclusion, other than the trivial ones of 1 or 0, for Peirce, makes sense only “if universes were as plentiful as blackberries” (2.684). The Bayesian inverse probability calculation thus seems forced to rely on subjective probabilities for computing inverse inferences, but “subjective probabilities”, Peirce charges, “express nothing but the conformity of a new suggestion to our prepossessions, and these are the source of most of the errors into which man falls, and of all the worse of them” (2.777).

Hearing Peirce contrast his view of induction with the more popular Bayesian account of his day (the Conceptualists), one could be listening to an error statistician arguing against the contemporary Bayesian (subjective or other)—with one important difference. Today’s error statistician seems to grant too readily that the only justification for N-P test rules is their ability to ensure we will rarely take erroneous actions with respect to hypotheses in the long run of applications. This so-called inductive behavior rationale seems to supply no adequate answer to the question of what is learned in any particular application about the process underlying the data. Peirce, by contrast, was very clear that what is really wanted in inductive inference in science is the ability to control error probabilities of test procedures, i.e., “the trustworthiness of the proceeding”. Moreover, it is only by a faulty analogy with deductive inference, Peirce explains, that many suppose that inductive (synthetic) inference should supply a probability to the conclusion: “… in the case of analytic inference we know the probability of our conclusion (if the premises are true), but in the case of synthetic inferences we only know the degree of trustworthiness of our proceeding” (“The Probability of Induction”, 2.693).

Knowing the “trustworthiness of our inductive proceeding”, I will argue, enables determining the test’s probative capacity, how reliably it detects errors, and the severity of the test a hypothesis withstands. Deliberately making use of known flaws and fallacies in reasoning with limited and uncertain data, tests may be constructed that are highly trustworthy probes in detecting and discriminating errors in particular cases. This, in turn, enables inferring which inferences about the process giving rise to the data are and are not warranted: an inductive inference to hypothesis *H* is warranted to the extent that with high probability the test would have detected a specific flaw or departure from what *H* asserts, and yet it did not.

**3. So why is justifying Peirce’s SCT thought to be so problematic?**

You can read Section 3 here (it’s not necessary for understanding the rest).

**4. Peircean induction as severe testing**

… [I]nduction, for Peirce, is a matter of subjecting hypotheses to “the test of experiment” (7.182).

The process of testing it will consist, not in examining the facts, in order to see how well they accord with the hypothesis, but on the contrary in examining such of the probable consequences of the hypothesis … which would be very unlikely or surprising in case the hypothesis were not true. (7.231)

When, however, we find that prediction after prediction, notwithstanding a preference for putting the most unlikely ones to the test, is verified by experiment,…we begin to accord to the hypothesis a standing among scientific results.

This sort of inference it is, from experiments testing predictions based on a hypothesis, that is alone properly entitled to be called induction. (7.206)

While these and other passages are redolent of Popper, Peirce differs from Popper in crucial ways. Peirce, unlike Popper, is primarily interested not in falsifying claims but in the positive pieces of information provided by tests, with “the corrections called for by the experiment” and with the hypotheses, modified or not, that manage to pass severe tests. For Popper, even if a hypothesis is highly corroborated (by his lights), he regards this as at most a report of the hypothesis’ past performance and denies it affords positive evidence for its correctness or reliability. Further, Popper denies that he could vouch for the reliability of the method he recommends as “most rational”—conjecture and refutation. Indeed, Popper’s requirements for a highly corroborated hypothesis are not sufficient for ensuring severity in Peirce’s sense (Mayo 1996, 2003, 2005). Where Popper recoils from even speaking of warranted inductions, Peirce conceives of a proper inductive inference as what had passed a severe test—one which would, with high probability, have detected an error if present.

In Peirce’s inductive philosophy, we have evidence for inductively inferring a claim or hypothesis *H* when not only does *H* “accord with” the data *x*, but also, so good an accordance would very probably not have resulted, were *H* not true. In other words, we may inductively infer *H* when it has withstood a test of experiment that it would not have withstood, or withstood so well, were *H* not true (or were a specific flaw present). This can be encapsulated in the following severity requirement for an experimental test procedure, ET, and data set *x*:

*Hypothesis H passes a severe test with x* iff (firstly) *x* accords with *H*, and (secondly) the experimental test procedure ET would, with very high probability, have signaled the presence of an error were there a discordancy between what *H* asserts and what is correct (i.e., were *H* false).

The test would “have signaled an error” by having produced results less accordant with *H* than what the test yielded. Thus, we may inductively infer *H* when (and only when) *H* has withstood a test with high error detecting capacity, the higher this probative capacity, the more severely *H* has passed. What is assessed (quantitatively or qualitatively) is not the amount of support for *H* but the probative capacity of the test of experiment ET (with regard to those errors that an inference to *H* is declaring to be absent)……….

You can read the rest of Section 4 here.

**5. The path from qualitative to quantitative induction**

In my understanding of Peircean induction, the difference between qualitative and quantitative induction is really a matter of degree, according to whether their trustworthiness or severity is quantitatively or only qualitatively ascertainable. This reading not only neatly organizes Peirce’s typologies of the various types of induction, it underwrites the manner in which, within a given classification, Peirce further subdivides inductions by their “strength”.

*(I) First-Order, Rudimentary or Crude Induction*

Consider Peirce’s First Order of induction: the lowest, most rudimentary form that he dubs, the “pooh-pooh argument”. It is essentially an argument from ignorance: Lacking evidence for the falsity of some hypothesis or claim *H*, provisionally adopt *H*. In this very weakest sort of induction, crude induction, the most that can be said is that a hypothesis would eventually be falsified if false. (It may correct itself—but with a bang!) It “is as weak an inference as any that I would not positively condemn” (8.237). While uneliminable in ordinary life, Peirce denies that rudimentary induction is to be included as scientific induction. Without some reason to think evidence of *H*‘s falsity would probably have been detected, were H false, finding no evidence against *H* is poor inductive evidence *for* *H*. *H* has passed only a highly unreliable error probe.

*(II) Second Order (Qualitative) Induction*

It is only with what Peirce calls “the Second Order” of induction that we arrive at a genuine test, and thereby scientific induction. Within second order inductions, a stronger and a weaker type exist, corresponding neatly to viewing strength as the severity of a testing procedure.

The weaker of these is where the predictions that are fulfilled are merely of the continuance in future experience of the same phenomena which originally suggested and recommended the hypothesis… (7.116)

The other variety of the argument … is where [results] lead to new predictions being based upon the hypothesis of an entirely different kind from those originally contemplated and these new predictions are equally found to be verified. (7.117)

The weaker type occurs where the predictions, though fulfilled, lack novelty; whereas, the stronger type reflects a more stringent hurdle having been satisfied: the hypothesis has had “novel” predictive success, and thereby higher severity. (For a discussion of the relationship between types of novelty and severity see Mayo 1991, 1996). Note that within a second order induction the assessment of strength is qualitative, e.g., very strong, weak, very weak.

The strength of any argument of the Second Order depends upon how much the confirmation of the prediction runs counter to what our expectation would have been without the hypothesis. It is entirely a question of how much; and yet there is no measurable quantity. For when such measure is possible the argument … becomes an induction of the Third Order [statistical induction]. (7.115)

It is upon these and like passages that I base my reading of Peirce. A qualitative induction, i.e., a test whose severity is qualitatively determined, becomes a quantitative induction when the severity is quantitatively determined; when an objective error probability can be given.

*(III) Third Order, Statistical (Quantitative) Induction*

We enter the Third Order of statistical or quantitative induction when it is possible to quantify “how much” the prediction runs counter to what our expectation would have been without the hypothesis. In his discussions of such quantifications, Peirce anticipates to a striking degree later developments of statistical testing and confidence interval estimation (Hacking 1980, Mayo 1993, 1996). Since this is not the place to describe his statistical contributions, I move to more modern methods to make the qualitative-quantitative contrast.

**6. Quantitative and qualitative induction: significance test reasoning**

*Quantitative Severity*

A statistical significance test illustrates an inductive inference justified by a quantitative severity assessment. The significance test procedure has the following components: (1) a *null hypothesis* *H*_{0}, which is an assertion about the distribution of the sample *X* = (*X*_{1}, …, *X*_{n}), a set of *random variables*, and (2) a function of the sample, *d(X)*, the *test statistic*, which reflects the difference between the data *x* = (*x*_{1}, …, *x*_{n}) and the null hypothesis *H*_{0}. The observed value of *d(X)* is written *d(x)*. The larger the value of *d(x)*, the further the outcome is from what is expected under *H*_{0}, with respect to the particular question being asked. We can imagine that the null hypothesis is

*H*_{0}: there are no increased cancer risks associated with hormone replacement therapy (HRT) in women who have taken it for 10 years.

Let *d(x)* measure the increased risk of cancer in *n* women, half of whom were randomly assigned to HRT. *H*_{0} asserts, in effect, that it is an error to take as genuine any positive value of *d(x)*; any observed difference is claimed to be “due to chance”. The test computes (3) the *p*-value, which is the probability of a difference larger than *d(x)*, under the assumption that *H*_{0} is true:

*p*-value = Prob(*d(X)* > *d(x)*; *H*_{0}).

If this probability is very small, the data are taken as evidence that

*H*: cancer risks are higher in women treated with HRT.
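As a numerical illustration (with made-up counts, not the actual study data), a *p*-value of this kind, Prob(*d(X)* > *d(x)*; *H*_{0}), can be computed for two observed cancer rates with a standard normal approximation:

```python
from math import erfc, sqrt

# Hypothetical numbers for illustration only (not the actual study data):
# cancer cases among n women on HRT versus n controls.
def p_value(cases_trt, cases_ctl, n):
    """One-sided p-value, Prob(d(X) > d(x); H0), via the normal
    approximation to the difference of two proportions."""
    p1, p0 = cases_trt / n, cases_ctl / n
    pooled = (cases_trt + cases_ctl) / (2 * n)
    se = sqrt(2 * pooled * (1 - pooled) / n)
    d = (p1 - p0) / se                      # observed test statistic d(x)
    return 0.5 * erfc(d / sqrt(2))          # upper-tail normal probability

p = p_value(cases_trt=60, cases_ctl=35, n=1000)
print(f"p-value = {p:.4f}")   # small p: the observed excess would rarely arise under H0
```

A small *p* says the observed excess of cases is one the chance hypothesis *H*_{0} would very rarely produce; it is not a probability assigned to *H*_{0} itself.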

The reasoning is a statistical version of *modus tollens*.

If the hypothesis *H*_{0} is correct then, with high probability, 1 − *p*, the data would not be statistically significant at level *p*.

*x* is statistically significant at level *p*.

Therefore, *x* is evidence of a discrepancy from *H*_{0}, in the direction of an alternative hypothesis *H*.

(i.e., *H* severely passes, where the severity is 1 minus the *p*-value)[iii]

For example, the results of recent, large, randomized treatment-control studies showing statistically significant increased risks (at the 0.001 level) give strong evidence that HRT, taken for over 5 years, increases the chance of breast cancer, the severity being 0.999. If a particular conclusion is wrong, subsequent severe (or highly powerful) tests will with high probability detect it. In particular, if we are wrong to reject *H*_{0} (and *H*_{0} is actually true), we would find we were rarely able to get so statistically significant a result to recur, and in this way we would discover our original error.

It is true that the observed conformity of the facts to the requirements of the hypothesis may have been fortuitous. But if so, we have only to persist in this same method of research and we shall gradually be brought around to the truth. (7.115)

The correction is not a matter of getting higher and higher probabilities, it is a matter of finding out whether the agreement is fortuitous; whether it is generated about as often as would be expected were the agreement of the chance variety.
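This self-correcting point can be put in a small simulation (my illustration, not Peirce’s own example): if a “significant” result obtained when the null is in fact true was fortuitous, then persisting in the same method rarely reproduces the significance.

```python
import random
from math import erfc, sqrt

# Illustrative sketch of the self-correcting idea (assumed setup, not from
# the paper): H0 says the mean of N(mu, 1) data is 0, and H0 is true.
# How often do fresh replications of the same test reach p < 0.05?

random.seed(7)

def one_sided_p(n=50, mu=0.0):
    """p-value of a one-sided z-test on the mean of n N(mu, 1) draws."""
    xbar = sum(random.gauss(mu, 1) for _ in range(n)) / n
    z = xbar * sqrt(n)
    return 0.5 * erfc(z / sqrt(2))

reps = 5_000
recur = sum(one_sided_p() < 0.05 for _ in range(reps)) / reps
print(f"significant replications under a true H0: {recur:.3f}")  # near 0.05
```

A fortuitous agreement, in other words, recurs only about as often as the chance hypothesis itself predicts, which is how persisting in the method exposes the original error.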

[Part 2 and Part 3 are here; you can find the full paper here.]

**REFERENCES:**

Hacking, I. 1980 “The Theory of Probable Inference: Neyman, Peirce and Braithwaite”, pp. 141-160 in D. H. Mellor (ed.), *Science, Belief and Behavior: Essays in Honour of R.B. Braithwaite*. Cambridge: Cambridge University Press.

Laudan, L. 1981 *Science and Hypothesis: Historical Essays on Scientific Methodology*. Dordrecht: D. Reidel.

Levi, I. 1980 “Induction as Self Correcting According to Peirce”, pp. 127-140 in D. H. Mellor (ed.), *Science, Belief and Behavior: Essays in Honor of R.B. Braithwaite*. Cambridge: Cambridge University Press.

Mayo, D. 1991 “Novel Evidence and Severe Tests”, *Philosophy of Science*, 58: 523-552.

———- 1993 “The Test of Experiment: C. S. Peirce and E. S. Pearson”, pp. 161-174 in E. C. Moore (ed.), *Charles S. Peirce and the Philosophy of Science*. Tuscaloosa: University of Alabama Press.

——— 1996 *Error and the Growth of Experimental Knowledge*, The University of Chicago Press, Chicago.

———–2003 “Severe Testing as a Guide for Inductive Learning”, in H. Kyburg (ed.), *Probability Is the Very Guide in Life*. Chicago: Open Court Press, pp. 89-117.

———- 2005 “Evidence as Passing Severe Tests: Highly Probed vs. Highly Proved” in P. Achinstein (ed.), *Scientific Evidence*, Johns Hopkins University Press.

Mayo, D. and Kruse, M. 2001 “Principles of Inference and Their Consequences,” pp. 381-403 in *Foundations of Bayesianism*, D. Cornfield and J. Williamson (eds.), Dordrecht: Kluwer Academic Publishers.

Mayo, D. and Spanos, A. 2004 “Methodology in Practice: Statistical Misspecification Testing” *Philosophy of Science*, Vol. II, PSA 2002, pp. 1007-1025.

———- (2006). “Severe Testing as a Basic Concept in a Neyman-Pearson Theory of Induction”, *The British Journal for the Philosophy of Science* 57: 323-357.

Mayo, D. and Cox, D.R. 2006 “The Theory of Statistics as the ‘Frequentist’s’ Theory of Inductive Inference”, *Institute of Mathematical Statistics (IMS) Lecture Notes-Monograph Series, Contributions to the Second Lehmann Symposium*, *2005*.

Neyman, J. and Pearson, E.S. 1933 “On the Problem of the Most Efficient Tests of Statistical Hypotheses”, in *Philosophical Transactions of the Royal Society*, A: 231, 289-337, as reprinted in J. Neyman and E.S. Pearson (1967), pp. 140-185.

———- 1967 *Joint Statistical Papers*, Berkeley: University of California Press.

Niiniluoto, I. 1984 *Is Science Progressive*? Dordrecht: D. Reidel.

Peirce, C. S. *Collected Papers: Vols. I-VI*, C. Hartshorne and P. Weiss (eds.) (1931-1935). Vols. VII-VIII, A. Burks (ed.) (1958), Cambridge: Harvard University Press.

Popper, K. 1962 *Conjectures and Refutations: the Growth of Scientific Knowledge*, Basic Books, New York.

Rescher, N. 1978 *Peirce’s Philosophy of Science: Critical Studies in His Theory of Induction and Scientific Method*, Notre Dame: University of Notre Dame Press.

[i] Others who relate Peircean induction and Neyman-Pearson tests are Isaac Levi (1980) and Ian Hacking (1980). See also Mayo 1993 and 1996.

[ii] This statement of (b) is regarded by Laudan as the strong thesis of self-correcting. A weaker thesis would replace (b) with (b’): science has techniques for determining unambiguously whether an alternative *T’* is closer to the truth than a refuted *T*.

[iii] If the *p*-value were not very small, then the difference would be considered statistically insignificant (generally small values are 0.1 or less). We would then regard *H*_{0} as consistent with data *x*, but we may wish to go further and determine the size of an increased risk *r* that has thereby been ruled out with severity. We do so by finding a risk increase such that Prob(*d(X)* > *d(x)*; risk increase *r*) is high, say. Then the assertion “the risk increase is less than *r*” passes with high severity, we would argue.

If there were a discrepancy from hypothesis *H*_{0} of *r* (or more), then, with high probability, 1 − *p*, the data would be statistically significant at level *p*.

*x* is not statistically significant at level *p*.

Therefore, *x* is evidence that any discrepancy from *H*_{0} is less than *r*.

For a general treatment of effect size, see Mayo and Spanos (2006).
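A rough numerical sketch of that severity computation (my notation and assumed setup: the test statistic *d(X)* is taken to be approximately normal with unit variance, centered at the discrepancy from *H*_{0} in standard-error units):

```python
from math import erfc, sqrt

# Sketch (assumed model, not from the paper): after an insignificant result,
# the severity with which "discrepancy < r" passes is
#     SEV(r) = Prob(d(X) > d_obs; discrepancy r),
# where d(X) ~ N(delta_r, 1) and delta_r is r in standard-error units.
def severity(d_obs, delta_r):
    """High severity means the test would very probably have signaled a
    discrepancy as large as delta_r, yet it did not."""
    return 0.5 * erfc((d_obs - delta_r) / sqrt(2))

d_obs = 0.5   # a statistically insignificant observed statistic
for delta_r in (1.0, 2.0, 3.0):
    print(f"SEV(discrepancy < {delta_r}) = {severity(d_obs, delta_r):.3f}")
```

Larger hypothesized discrepancies are ruled out with higher severity: the same insignificant result is good evidence against a large discrepancy but poor evidence against a small one.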

[Ed. Note: A not-bad biographical sketch can be found on Wikipedia.]

I have not found it possible to understand Peirce’s concept of induction outside the context of his full theory of inquiry, involving as it does all three types of inference, abductive, deductive, and inductive. And that account of inquiry cannot be understood outside the context of his theory of signs and the information they bear. I have recently returned to my study of Peirce’s early formulations of these intertwined ideas, as he presented them in his Harvard and Lowell lectures of 1865–1866. Here is a link to the initial blog post:

{ Information = Comprehension × Extension }

I think I appreciate the cited and discussed bits of Peirce by and large, and certainly he was a true pioneer of such ideas. The issue that I have, however, is that he uses, for me, all too optimistic wording. This is probably better discussed based on Section 3, which I read following your link; my objection may be related to Laudan’s, but I am skeptical about 2(a) as well.

In any case, any probabilistic hypothesis can be false in very many different ways, some of which (like general wild dependence or non-identity structures) cannot be falsified by data. Tests on data can only ever compare “regular” hypotheses (with some element of repetition in them) against regular alternatives (and even ruling out some aspects of regular hypotheses can require amounts of data that are just not available), nothing of the kind “every other day the random mechanism can change completely”. So I have no strong objections against the course of action that is apparently implied, but I’m less optimistic about what it can eventually achieve in terms of “getting close to the truth” (where, as a side aspect, “close” would need to be defined if it is meant in a way that is not purely pragmatic), and I find myself drawn more to Popper’s apparently more pessimistic rendering. At least at “first” (well not really first ;-)) sight.

Christian:

> he uses for me all too optimistic wording

I would agree, but more so in his earlier rather than later writing.

In his later writing it was more “if inquiry was persisted in sufficiently, it _would_ eventually get you close[r] to the truth”

He called this assumption a regulative assumption; it was not being made for any reason other than that not to make it was to give up on inquiry completely. He gave an example of a military commander who, when surrounded by a formidable enemy, drew out his pistol and shot himself in the head. But Ian Hacking reminds us “Peirce started everything and finished nothing”.

I recently put it like this: the closer the connections to reality the better, and even though one will never know how close (and cannot take lack of resistance from reality as evidence of closeness), one should act as if one could always get closer. Now, one can no more get reality right than maintain one’s health; eventually we all die, some quite early despite their or anyone else’s efforts. But to commit suicide upon this realization is simply bonkers.

Let me know if you would like to continue by email.

Keith O’Rourke

Keith: I reply here for the moment unless Mayo tells us to take this off list.

As long as “getting closer to the truth” means pragmatic things such as improving precision and reliability of predictions, or inspiring innovations and inventions that do something good, I’m fine with that. I wouldn’t use the word “truth” for this, though, and I’d think that it’s somewhat misleading to do so. Nothing in the world is truly independent of anything else (and neither are things truly exchangeable), but we don’t need to get rid of models assuming independence in order to do something useful. I don’t see why there isn’t enough motivation for inquiry in doing useful things, be it “close to the truth” (whatever that means) or not.

Christian and others: Please continue to discuss this here, and I’ll jump back in momentarily. I’m very happy to discuss Peirce.

Christian: There’s a confusion, I think, between the incompleteness of knowledge and the supposition we don’t know true things.

Mayo: Can you elaborate? There can easily be confusion because people don’t necessarily mean the same when they talk about “knowing true things”.

What I don’t see is how the statement “we’re getting closer to the truth” can be made meaningful, or, let’s say, clear and precise, unless we’re already there and know what it is to which we get “closer”.

Christian: Why would we need to be there? That seems silly. We will continue to learn what is the case about objects and processes of inquiry insofar as our probing is not blocked and we test claims severely. Induction for Peirce is severe testing, in what seems to be my sense, and it’s error-correcting. This is discussed in the rest of the paper to which I link.

By the way, people shouldn’t include “abduction” under induction for Peirce. As he made clear, only the latter requires randomization and predesignation (or procedures that accomplish the same).

Abduction is like inference to the best explanation.

It has long been my sense that Peirce’s more optimistic theses about inquiry converging on true descriptions of reality have to be understood as regulative principles, the minimal hopes that are necessary for inquiry to proceed unhindered. They are thus more normative than descriptive even though they amount to hypotheses about the properties of descriptions.

Jon: I think it’s rather that if inquiry is allowed to proceed unhindered, that the truth will be arrived at.

Yes, that is the thesis, succinctly stated. Can we establish that statement as the conclusion of a deductive argument? No, it’s a maxim, a piece of advice about the appropriate attitude to adopt at the inception of any inquiry. As such it comports quite well with the rest of Peirce’s pragmatism, but it becomes problematic for interpreters who read it as a theorem to prove.

Jon: Why is it the appropriate attitude to adopt? A famous interpretation of his point is that’s just how we define truth, i.e., whatever would eventually be agreed upon after sufficient inquiry, but I deny that’s what he meant. Showing why it’s an appropriate or fruitful way to proceed is itself to give it a justification (given the goal of learning about the world) which suffices to remove it from the category of mere maxim or theorem.

Mayo: It is not that there is no justification, no warrant for a normative recommendation. It is just that we evaluate the recipe based on the data that arises in its application and not a priori, before any action or risk is taken. There is an irreducibly injunctive quality to experimental procedure: “Try this and see if you do not see what others have seen.”

As far as defining truth goes, and all such troublesome concepts, Peirce will recommend his Pragmatic Maxim as the way to clear up what meaning, or lack thereof, it may bear.

Jon: This seems to be helpful regarding what I wrote earlier; will read it, thanks!

Mayo: That is correct. Taking a cue from Aristotle, Peirce regarded abduction, deduction, and induction as three independent and irreducible types of inference, each of which can be justified only by appealing to more basic inferences of the same type. Logicians who recognize only two dimensions of inference, deductive and inductive, tend to get a distorted picture of these 3-dimensional logical architectures, typically conflating abduction and induction.

Mayo & All: Here are some notes I collected on the three types of reasoning in Aristotle and Peirce back when I was beginning to view scientific inquiry from systems theoretic and software engineering perspectives.

Functional Logic : Inquiry and Analogy
