R.A. Fisher: “It should never be true, though it is still often said, that the conclusions are no more accurate than the data on which they are based”


A final entry in a week of recognizing R.A. Fisher (February 17, 1890 – July 29, 1962). Fisher is among the very few thinkers I have come across who recognized this crucial difference between induction and deduction:

In deductive reasoning all knowledge obtainable is already latent in the postulates. Rigour is needed to prevent the successive inferences growing less and less accurate as we proceed. The conclusions are never more accurate than the data. In inductive reasoning we are performing part of the process by which new knowledge is created. The conclusions normally grow more and more accurate as more data are included. It should never be true, though it is still often said, that the conclusions are no more accurate than the data on which they are based. Statistical data are always erroneous, in greater or less degree. The study of inductive reasoning is the study of the embryology of knowledge, of the processes by means of which truth is extracted from its native ore in which it is infused with much error. (Fisher, “The Logic of Inductive Inference,” 1935, p. 54).

Reading or rereading this paper is very worthwhile for interested readers. Some of the fascinating historical/statistical background may be found in a guest post by Aris Spanos: “R.A. Fisher: How an Outsider Revolutionized Statistics.”

Categories: Fisher, phil/history of stat


30 thoughts on “R.A. Fisher: “It should never be true, though it is still often said, that the conclusions are no more accurate than the data on which they are based””

  1. I think we find that germ of insight already developing in Aristotle and coming of age in Peirce, though their line of thinking credits abduction before induction with the creative spark.

    • Jon: Peirce was the second scholar I had in mind, but it isn’t abduction that achieves this, only induction viewed as severe testing (even though it’s true abduction might first be used to reach a hypothesis to test). Peirce anticipated a number of statistical methods, e.g., confidence intervals. As for Aristotle, can you give me a passage you have in mind?

      • Mayo,

        The way I understand it, abductive hypothesis formation initiates the flight “beyond the data given” and inductive testing trims the excess fancies to keep the course of inquiry in a stable orbit.

        Fisher’s phrase seemed to resonate with Aristotle’s idea that “we are brought nearer to knowledge” by the abductive step.

        Here is a place where I quoted and discussed the relevant passage from Prior Analytics 2.25.

        Aristotle’s “Apagogy”: Abductive Reasoning as Problem Reduction

        • Jon: I think you are right and below is the relevant passage from https://en.wikisource.org/wiki/A_Neglected_Argument_for_the_Reality_of_God *

          “Observe that neither Deduction nor Induction contributes the smallest positive item to the final conclusion of the inquiry. They render the indefinite definite; Deduction Explicates; Induction evaluates: that is all. … we are building a cantilever bridge of induction, held together by scientific struts and ties. Yet every plank of its advance is first laid by Retroduction alone, that is to say, by the spontaneous conjectures of instinctive reason; and neither Deduction nor Induction contributes a single new concept to the structure.”

          * I used to ignore this paper, as Ramsey called it a “sad” paper given the topic, but as a concise review of Peirce’s method of inquiry I now think it is quite good.

          • Phan: I wouldn’t look for Peirce’s best treatments of inductive inference in science within a discussion of a pragmaticist arg for God, which is why I’m not familiar with this (plus I’m not a Peirce scholar). But it would miss his point, even in this quirky piece, to suppose guessing, abduction or retroduction are key in finding new things out. As he emphasizes here, he holds that humans have a habit of making good conjectures and intuitions (why should we be different in this from animals, he asks?). That’s all we get from retroduction. But “Retroduction does not afford security. The hypothesis must be tested”, and subjecting conjectures to the test of experiment is, for Peirce, what enables “ampliative” learning. It’s here that truth is extracted from its native ore, and even if raw starting points stem from our lucky good intuitions, what emerges goes beyond that fuzzy, error-filled clay.
            As far as the Neglected Argument for God goes, do you suppose this is a kind of inference to the best explanation (of our ability to wisely conjecture, test, achieve successful inductions, and find satisfaction in the faith that all inquiry will succeed if allowed to go on long enough)? I’m not big on args for God, but these pragmaticists and pragmatists were spiritual guys, and Peirce had to earn his keep by writing such articles.
            Please share what more you obviously find in it. I only read it quickly.

          • Phan —

            The paper affectionately known as NA or sometimes NAFTROG was one of the first my undergrad philosophy advisor gave me to puzzle over, many years ago. I had already cut my teeth on a hearty helping of Peirce’s logical and mathematical work, so I probably came at it from a different angle than most. The word God, used in that deistic ens necessarium way, could just as easily be replaced by The Cosmos, Demiurge, Spindle of Necessity, or Wheel of Dharma without changing anything essential to its meaning. At least, that’s how I have always read it, whether in the Ancients, Leibniz, Peirce, or anyone who understands theology as a way of referring to ends, goals, norms, or values. It’s the abstract structure of the articulated system that matters as a model for understanding phenomena.

            • Jon:

              I agree and my footnote was meant to discourage dismissal of the paper.

              If you are not already aware, Dennis Rohatyn wrote a paper, “Resurrecting Peirce’s ‘Neglected Argument’ for God,” arguing that it deserved more attention.

              Keith O’Rourke

          • Phan —

            Turning to your main observation, Peirce is making a point here that is often missed, namely, there is a considerable array of conceptual, logical, and mathematical infrastructure that has to be set up before we can talk about probability spaces, probability measures, reference classes of distributions, and all that. Those choices are made abductively, that is, they take us from a state of immeasurable uncertainty to a frame of reference where we can begin to quantify probability, uncertainty, information, etc.

  2. Also worth reading is Popper, who argued (rigorously) that knowledge is not “extracted” but constructed (as applies also, for example, to the visual process). He admired the Presocratics, including Democritus who, before Fisher, understood that “Everything in the world is the product of chance and necessity,” but also that “Nothing do we know from having seen it; the truth is hidden in the deep.” We don’t induce, we guess, and then look for reflections of our underlying assumptions. Those reflections are “statistical” but they don’t contain the “ore.”

    • Lydia: I have read Popper very thoroughly over the years, but he never got past his logical empiricism to see this point. He would say that we can say nothing about reliable methods, nor about methods moving from less to more accurate conclusions. Corroboration was only about past successes.

  3. Accuracy refers to the data, not the conclusions. For example, if we weigh something and then use the average of our measurements as the “conclusion” about the weight, then our conclusion may well be more accurate than the data.
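
    A quick numerical illustration of this point (a minimal Python sketch; the true weight and noise level are invented for illustration): the standard error of the average of n measurements is σ/√n, so the averaged conclusion is typically closer to the truth than any single datum.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)
    true_weight = 10.0   # hypothetical true weight (illustrative)
    sigma = 0.5          # hypothetical measurement noise (illustrative)
    n = 25

    measurements = rng.normal(true_weight, sigma, size=n)

    # Typical error of one datum vs. error of the averaged "conclusion"
    single_errors = np.abs(measurements - true_weight)
    mean_error = abs(measurements.mean() - true_weight)

    print(f"mean |error| of individual data: {single_errors.mean():.3f}")  # ~ sigma*sqrt(2/pi) ~ 0.40
    print(f"|error| of the average:          {mean_error:.3f}")            # ~ sigma/sqrt(n) = 0.10
    ```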

  4. StanYoung

    Deborah: This is a very good post. I was just talking to an editor about this sort of thing a few days ago. He is strongly against junk science/statistics, very unlike some editors I deal with. Stan

  5. Sander Greenland

    Mayo: Thanks for posting such fascinating classics.

    Understandably given the time, I think Fisher’s technical comments missed some important issues, like the (now-called) sparse-data problem pointed out by Neyman & Scott (1948): how to do consistent estimation when each sampled unit introduces new unknown parameters, without introducing full distributional constraints on the parameters (as dealt with today under semi-parametric theory). Then, too, the discussants raised more fundamental objections to Fisher’s nascent pure-likelihoodism, along with the anticipation by Edgeworth (1908–1909) of Fisher’s information measure (p. 56), to which of course Fisher did not respond kindly.
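
    A sketch of the Neyman–Scott phenomenon, for readers who haven’t met it: with pairs X_i1, X_i2 ~ N(μ_i, σ²), each new pair brings a new nuisance mean μ_i, and the maximum-likelihood estimate of σ² converges to σ²/2 rather than σ², however many pairs are collected. A minimal simulation (parameter values are arbitrary) makes the inconsistency visible:

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    sigma2, n_pairs = 4.0, 100_000          # arbitrary illustrative values

    # Each pair has its own unknown mean mu_i (a new parameter per unit)
    mu = rng.normal(0.0, 10.0, size=n_pairs)
    x = rng.normal(mu[:, None], np.sqrt(sigma2), size=(n_pairs, 2))

    # The MLE of sigma^2 profiles out each mu_i via the pair mean
    pair_means = x.mean(axis=1, keepdims=True)
    sigma2_mle = np.mean((x - pair_means) ** 2)

    print(sigma2_mle)        # ~ 2.0 = sigma2/2: inconsistent despite huge n
    print(2 * sigma2_mle)    # the degrees-of-freedom correction recovers ~ 4.0
    ```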

    Nonetheless, several advanced concepts Fisher mentioned were only fully appreciated and applied much later, such as linear estimating equations on p. 45, and the passage on p. 47 culminating with “As a mathematical quantity information is strikingly similar to entropy in the mathematical theory of thermo-dynamics…” Indeed, in the postwar period Good, Lindley, Kullback and others worked out the entropy relation to statistics quite thoroughly in light of then-new information theory in computing and communication (to which Fisher’s theory could be viewed as a second-order approximation).
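
    The entropy connection Fisher points to can be made precise: the Kullback–Leibler divergence between nearby members of a parametric family is, to second order, ½ I(θ) δ², with I(θ) the Fisher information; this is one sense in which Fisher’s theory can be viewed as a second-order approximation to the information-theoretic one. A minimal numeric check for a Bernoulli(θ) model (values chosen only for illustration):

    ```python
    import numpy as np

    def kl_bernoulli(t, u):
        """KL divergence KL(Bern(t) || Bern(u))."""
        return t * np.log(t / u) + (1 - t) * np.log((1 - t) / (1 - u))

    theta, delta = 0.3, 0.01                    # illustrative values
    fisher_info = 1.0 / (theta * (1 - theta))   # I(theta) for Bernoulli

    print(kl_bernoulli(theta, theta + delta))   # ~ 2.35e-4: exact divergence
    print(0.5 * fisher_info * delta ** 2)       # ~ 2.38e-4: second-order approximation
    ```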

    Sadly, I think, the rise of “objective Bayes” has rendered false Fisher’s astute remark: “I must doubt whether any living statistician agrees with Dr. Jeffreys that the prior probability that two unknown quantities are equal is the same as that they are unequal. If this were, indeed, a property of prior probabilities this fact would, in my opinion, alone suffice to justify their exclusion from any argument having practical aims.” (p. 81) There are now many statisticians who have followed Jeffreys in the mistake Fisher decried. Still, based on subsequent frequentist studies (e.g., Firth, Biometrika 1993, and many works by Berger and colleagues), I imagine Fisher might have approved of the resulting O-Bayes estimation theory as an improvement on maximum likelihood.
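
    For readers curious about the Firth (1993) reference: the idea is to maximize the log-likelihood plus a Jeffreys-prior penalty, ½ log det I(β), which removes the leading-order bias of maximum likelihood and keeps logistic-regression estimates finite even under complete separation. A minimal sketch (the toy data are invented; this illustrates the penalty, not Firth’s own code):

    ```python
    import numpy as np
    from scipy.optimize import minimize

    def neg_firth_loglik(beta, X, y):
        """Negative penalized log-likelihood: -(loglik + 0.5 * log det I(beta))."""
        eta = X @ beta
        loglik = np.sum(y * eta - np.logaddexp(0.0, eta))
        p = 1.0 / (1.0 + np.exp(-eta))
        W = p * (1.0 - p)                        # logistic variance weights
        _, logdet = np.linalg.slogdet(X.T @ (X * W[:, None]))
        return -(loglik + 0.5 * logdet)          # Jeffreys-prior (Firth) penalty

    # Toy, completely separated data: plain ML estimates diverge, Firth's stay finite
    X = np.column_stack([np.ones(6), [-2.0, -1.5, -1.0, 1.0, 1.5, 2.0]])
    y = np.array([0, 0, 0, 1, 1, 1])

    fit = minimize(neg_firth_loglik, x0=np.zeros(2), args=(X, y), method="BFGS")
    print(fit.x)    # finite coefficients despite separation
    ```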

    • Sander:
      Thanks for your comment. On the first point, Fisher never says that just any way of adding data, or performing an ampliative inference, counts as a good inductive inference. If your successive inferences grew less and less accurate, it would be a warning that you were not fulfilling inductive inquiry’s promise.* Would he recommend better planning, redesign, or reining things in by various constraints? His claim, in a way, points to how you don’t want to proceed; but he never promised you’d be able to inductively infer all claims, models, theories.

      Your last point is especially interesting; thanks for highlighting it.

      *Peirce, who makes similar claims, as another commentator noted, would say that an inference counts as inductive only if it has the “self-correcting” property. So it has to be minimally good; if it is too weak, he doesn’t really count it as inductive. Justifying inductive inference, Peirce says, requires randomization and predesignation, or equivalent strategies.

    • Hi Sander (I owe you some links, sorry!).

      RE: Neyman-Scott. I like Barnard’s story about discussing the problem with Neyman and deciding he could talk to Neyman about politics but not statistics and with Fisher about statistics but not politics.

      • Sander Greenland

        Never met Fisher (I was 11 when he died) but that anecdote matches my experience with Neyman if one approached him with a Fisherian argument.

        • Interesting, though not surprising I suppose.

          His discussion here of course makes this clear since he mentions that his interest is

          “to construct a theory of mathematical statistics independent of the conception of likelihood…entirely based on the classical theory of probability.”

  6. Mayo, Sander —

    Peirce was a pioneer, too, in that brave new world of information theory, presenting his ideas about information in lectures at Harvard University and the Lowell Institute in 1866 and 1867. He reasoned that “the puzzle of the validity of scientific inference … is … entirely removed by a consideration of the laws of information.”

    Here’s a link to my ongoing study of those lectures:

    Information = Comprehension × Extension

    • Just as a little sidelight: I was involved in a National Security Agency project to study Peirce, induction, information and error correction (with Aris Spanos). The NSA was keen to study/use some of his logical ideas, I think for matters of homeland security (but they were all very hush-hush about it). I’m allowed to say this much.

  7. Maybe it’s the time of year, or maybe it’s the season of inquiry, but the same sorts of questions about the role of abduction in Peirce’s trio of inference types arose around the same time last year in several niches of the web I frequent. I recorded the more substantive bits of these discussions on my blog. Here is the anchor post of the series:

    Abduction, Deduction, Induction, Analogy, Inquiry

    • Thanks Jon.

      To clarify my take on the main observation you refer to: it came from J.C. Gardin’s view of his SNARK automatic inference program (AI in France in the 1960s, applied to archaeology), which he felt forced to publish in protest.

      He thought it was useless or even harmful, given that it had only deduction and induction processes but no abduction (he used different words for this).

      He related this in a graduate seminar in 1983 (before I went into statistics). He would slam the pointer on the floor and state: “You can’t rule out a hypothesis by the way it was generated (even by his SNARK program), but I would not choose to spend any of my time entertaining hypotheses generated this way. Nor do I think anyone else should.”

      I was aware of some of Peirce’s work on this; Gardin was not (he only read a few chapters of Brent’s biography of Peirce just before his death). Actually, this was one of the motivations for me to go into statistics and fix up EDA.

      So it’s the “inquiry without good abduction being unprofitable for science” that is my take (more than general conceptual, logical, and mathematical infrastructure, but specific to the given inquiry). Inquiry also needs good deduction and good induction, of course.

      Now the only justification that Peirce ever offered for this was that, just as animals have good instincts they don’t question, we have good abductions. Or perhaps we should just hope we do, so we won’t give up on inquiry (just a regulative assumption).

      Keith O’Rourke


      • Phan: I’m missing the upshot of most of your comment, probably because I don’t know Gardin or SNARK. I always associate abduction with H-D inference and inference to the best explanation, all of which I dislike. I also question the idea that we just come out with conjectures to test and that there’s no method for arriving at fruitful “guesses”. I took Peirce to hold this view (as against Popper), at least in his later work, when he got things clearer.

        I’m interested to hear that Peirce motivated you to go into stat.

  8. Mayo, Keith —

    That same issue came up last year with respect to a controversy in (philosophy of) physics circles, where a number of misunderstandings about abductive hypothesis formation have apparently become more persistent and widespread than I realized.

    My comments on that began at № 5 in the series I linked above, and the difference between “inference to the best explanation” and abduction as Peirce understood it is pointed out in № 7.

    The phrase “inference to the best explanation” was coined by Gilbert Harman in his attempt to explain abductive inference but it conveys the wrong impression to anyone who takes it as a substitute for the whole course of inquiry rather than just its starting point. Peirce himself was always very clear about this.

    • Jon: I did distinguish them, but whether it makes a difference turns entirely on the subsequent testing. If you’re a Bayesian who permits probabilistic affirming of the consequent, then the fact that H entails e, or makes e more probable, will suffice for H to get a Bayes bump. So the fact that you say there’s a separate stage of testing matters not at all if that separate stage isn’t a stage of severe or stringent testing (of the sort that Peirce, and I, would call for).
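
      To make the “Bayes bump” concrete: if H entails e, then P(e|H) = 1 and Bayes’ theorem gives P(H|e) = P(H)/P(e) ≥ P(H), so the posterior rises no matter how H was cooked up. A tiny illustration (the prior and P(e) are arbitrary numbers):

      ```python
      prior_H = 0.05       # arbitrary prior for an H constructed to entail e
      prob_e = 0.20        # arbitrary marginal probability of the evidence

      posterior_H = 1.0 * prior_H / prob_e    # P(e|H) = 1 since H entails e
      print(posterior_H)   # 0.25: a "Bayes bump" from 0.05, with no severe test in sight
      ```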

      • Mayo —

        I’m getting the feeling I must have taken my statistical orders under a different canon. I learned theory from math and stat depts, method from psych and computer science depts, and real-world practice from a series of jobs consulting on research, mostly in medical, nursing, and public health depts. There was never this thing about Bayesian vs. frequentist. Bayes’ Rule was understood as a deductive identity, what Peirce called “explicative”, and deduction can at best preserve information, never amplify it. Bayesian methods could in principle be used for estimation and differential diagnosis, but in practice there was never enough data to fill in the more obscure cells, so it wasn’t really all that much use. Still, that was a different world from real hypothesis testing, which demanded the full experimental protocols of random sampling, t-tests, ANOVA, F-tests, etc. Exploratory data analysis was allowed by some and encouraged by others — I spent a decade writing an AI-type program that I later applied to EDA — but everyone respected the difference between that and the hard stuff.


        • Jon: I don’t know that any of my remarks on Peirce would be at odds with this.

          • Mayo —

            Oh, no, I wasn’t talking about you, or any of the people I’ve interacted with here. I was just talking about the general tenor of discussions that I’ve encountered on the web of late.
