February is a good time to read or reread these pages from Popper’s Conjectures and Refutations. Below are (a) some of my newer reflections on Popper after rereading him in the graduate seminar I taught one year ago with Aris Spanos (Phil 6334), and (b) my slides on Popper and the philosophical problem of induction, first posted here. I welcome reader questions on either.
As is typical in rereading any deep philosopher, I discover (or rediscover) different morsels of clues to understanding—whether fully intended by the philosopher or a byproduct of his other insights and a more contemporary reading. So it is with Popper. A couple of key ideas to emerge from the seminar discussion (my slides are below) are:

 Unlike the “naïve” empiricists of the day, Popper recognized that observations are not just given unproblematically, but also require an interpretation, an interest, a point of view, a problem. What came first, a hypothesis or an observation? Another hypothesis, if only at a lower level, says Popper. He draws the contrast with Wittgenstein’s “verificationism”. In typical positivist style, the verificationist sees observations as the given “atoms,” and other knowledge is built up out of truth-functional operations on those atoms.[1] However, scientific generalizations beyond the given observations cannot be so deduced, hence the traditional philosophical problem of induction isn’t solvable. One is left trying to build a formal “inductive logic” (generally a deductive affair, ironically) that is thought to capture intuitions about scientific inference (a largely degenerating program). The formal probabilists, as well as philosophical Bayesians, may be seen as descendants of the logical positivists: instrumentalists, verificationists, operationalists (and the corresponding “isms”). So understanding Popper throws a great deal of light on current-day philosophy of probability and statistics.
 The fact that observations must be interpreted opens the door to interpretations that prejudge the construal of data. With enough interpretive latitude, anything (or practically anything) that is observed can be interpreted as in sync with a general claim H. (Once your eyes are opened, you see confirmations everywhere, as with a gestalt conversion, as Popper put it.) For Popper, positive instances of a general claim H, i.e., observations that agree with or “fit” H, do not even count as evidence for H if virtually any result could be interpreted as according with H.
Note a modification of Popper here: Instead of putting the “riskiness” on H itself, it is the method of assessment or testing that bears the burden of showing that something (ideally quite a lot) has been done in order to scrutinize the way the data were interpreted (to avoid “verification bias”). The scrutiny needs to ensure that it would be difficult (rather than easy) to get an accordance between data x and H (as strong as the one obtained) if H were false (or specifiably flawed).
Note the second modification of Popper that goes along with the first: It isn’t that GTR opened itself to literal “refutation” (as Popper says), because even if true, a positive result could scarcely be said to follow, or even to have been expected, in 1919 (or long afterward). (Poor fits, at best, were expected.) So failing to find the “predicted” phenomenon (the Einstein deflection effect) would not falsify GTR. There were too many background explanations for observed anomalies (Duhem’s problem). This is so even though observing a deflection effect does count! (This is one of my main shifts on Popper–or rather, I think Popperians make a mistake when they say otherwise.) Of course, even when they observed a “deflection effect”—an apparently positive result—a lot of work was required to rule out any number of other explanations for the “positive” result (if interested, see refs [2]). Nor is there anything “un-Popperian” about the fact that no eclipse result would have refuted GTR (certainly not in 1919). (Paul Meehl and other Popperians are wrong about this.) Admittedly, Popper was not clear enough on this issue. Nevertheless, and this is my main point today, he was right to distinguish the GTR testing case from the “testing” of the popular theories he describes, wherein any data could be interpreted in light of the theory. My reading (or improvement?) of Popper, so far as this point goes, then, is that he is demarcating those empirical assessments or tests of a claim that are “scientific” (probative) from those that are “pseudoscientific” (insevere or questionable). To claim positive evidence for H from test T requires (minimally) indicating outcomes that would have been construed as evidence against, or as counterinstances to, H. The onus is on testers or interpreters of data to show how the charge of questionable science has been avoided.
[1] The verificationist’s view of meaning: the meaning of a proposition is its method of verification. Popper contrasts his problem of demarcating science from nonscience with this question of “meaning”. Were the verificationist’s account of meaning used as a principle of “demarcation”, it would be both too narrow and too wide (see Popper).
[2] For discussion of background theories in the early eclipse tests, see EGEK chapter 8:
For more contemporary experiments, see my discussion in Error and Inference.
NOTE: I have a “no pain philosophy” 3-part tutorial (very short) on Popper on this blog. If you search under that, you’ll find it. Questions are welcome.
Problem of Induction & some Notes on Popper
Thanks a lot for this, very interesting!
One comment/question about the slides. I wonder what “accept a theory” is supposed to mean. I think that the wording “we accept the null hypothesis” is very misleading for a situation in which a test didn’t reject the hypothesis, and I try to teach my students not to use this wording.
Even a severe test is only severe in distinguishing a hypothesis from a specific set of alternatives, not from everything that is conceivable. I think most people who say “I accept this or that hypothesis” refer to some kind of belief that either the hypothesis is true, or at least certain aspects of it (e.g., “a certain drug doesn’t work better than a placebo”; although they wouldn’t necessarily also believe that the underlying distributions are indeed normal).
For me, if “acceptance” should be given any sense that I could accept, it would be something like: “we base our actions on this hypothesis for the time being, keeping in mind that it can still fall down at the next hurdle – which means that our actions should come with some additional safeguards that wouldn’t really be necessary if we indeed knew that the hypothesis was true”.
Not sure whether this is what Popper or you had/have in mind, too, but in this case I’d still think that “acceptance” is not a good word and comes with a considerable temptation to overinterpret.
And another comment regarding induction. Although “method A has worked well in the past and therefore it will work well in the future” is not justified logically, in practice there can be a lot of good and useful discussion about the reasons why a method that has worked well in the past can be expected to work more or less well in the future, which obviously depends on what kind of change of circumstances can be expected, etc. We can’t know all this perfectly, and people occasionally get it wrong, but in practice a stronger case can be made than by appealing to the inductive statement alone.
Christian:
Nice to get a comment on this blogpost. I’m not sure where “accept” comes up in the slides. Of course, wrt statistical inference, “accept” was N-P’s shorthand for “do not reject”. What I add there is to cash it out in terms of discrepancies from the null that are well ruled out. So even though you can’t (usually) rule them all out, you can set upper bounds, e.g., the mean improvement may be said to be less than mu’ if there’s a high probability that a larger observed mean would have occurred if the data were generated from a process with mu as high as mu’.
This is analogous to power-analytic reasoning except that I use the actual outcome. (Readers can search the blog under power and Neyman.)
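To put rough numbers on this kind of reasoning, here is a minimal sketch (my illustration; the Normal model, known sigma, sample size, and observed mean are assumptions for the example, not from the post). A non-rejection of a null mu <= 0 is cashed out by asking, for each candidate mu’, how probable a larger observed mean would have been were the true mean as high as mu’:

```python
# Severity reasoning for a non-rejection (illustrative sketch).
# Assumed setup: X ~ N(mu, sigma^2), known sigma, test H0: mu <= 0 vs H1: mu > 0.
from math import erf, sqrt

def phi(z):
    """Standard normal CDF."""
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

def severity_mu_less_than(mu_prime, x_bar, sigma, n):
    """SEV(mu < mu'): probability of a sample mean larger than
    the observed x_bar, were the true mean as high as mu'."""
    se = sigma / sqrt(n)
    return 1.0 - phi((x_bar - mu_prime) / se)

# Illustrative numbers: n = 100, sigma = 1, observed mean 0.1 (no rejection).
x_bar, sigma, n = 0.1, 1.0, 100
for mu_prime in (0.1, 0.2, 0.3):
    print(f"SEV(mu < {mu_prime}) = {severity_mu_less_than(mu_prime, x_bar, sigma, n):.3f}")
```

On these numbers, the claim “mu < 0.3” passes with severity about 0.98, while “mu < 0.1” passes with severity only 0.5 — so the non-rejection warrants ruling out large discrepancies, not all of them.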
But you were asking about accepting a theory? Popper might say we corroborate a theory, and that of course requires that it’s passed a severe test. Now my view on theories (the link on this post talks a fair bit about it) is that we infer local aspects severely, rarely the entire theory. The question of the fruitfulness of using a theory, or acting as if it’s approximately true, or such things, is, for me, very different from the inferential claim.
On induction by enumeration, as I say, it’s warranted just when we can add something like: it’s very improbable we’d be generating A’s that are B’s unless all or most A’s are B’s in the given population. We wouldn’t want to apply induction by enumeration without the severity stipulation, because it’s a highly unreliable rule. Remember that for the confirmation theorists it was to be a formal rule that holds for cats or bosons or whatever. It was to be context free. Big mistake, but an essential part of the logic-of-confirmation game, a program that is essentially dead. Even Carnap added things like “there are many and varied instances”, but even that didn’t work. There were always counterexamples.
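A toy computation may help here (my illustration; the sampling model and numbers are assumed, not from the comment): if a fraction theta of A’s are B’s and A’s are sampled at random, the chance that n observed A’s are all B’s is theta^n, which is tiny unless theta is close to 1. That is the sense in which the improbability clause can back up the enumeration:

```python
# Probability that n randomly sampled A's are all B's, when a fraction
# theta of A's are B's (independent sampling; purely illustrative).
def prob_all_Bs(theta: float, n: int) -> float:
    return theta ** n

n = 20
for theta in (0.5, 0.8, 0.95, 0.99):
    print(f"theta = {theta}: P(all {n} sampled A's are B's) = {prob_all_Bs(theta, n):.2e}")
```

Twenty straight B’s is all but impossible (under one in a million) if only half the A’s are B’s, but quite probable if 99% are — so the run of positive instances discriminates well between those states of affairs.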
Thanks.
> I’m not sure where “accept” comes up in the slides.
On each of pp. 12–15.
“The question of the fruitfulness of using a theory or acting as if it’s approx true, or such things, is, for me, very different from the inferential claim.”
Can you perhaps explain a bit more what you think the difference is? I’d be particularly interested in differences in terms of (expected) observations and actions.
“One is left trying to build a formal “inductive logic” (generally deductive affairs, ironically) that is thought to capture intuitions about scientific inference (a largely degenerating program) “
That ‘degenerating program’ is just the idea that if our intuitions about inference conflict with what the sum and product rules of probability theory (the Kolmogorov axioms applied to any well-defined propositions, basically) say, then it’s our intuition that is wrong, not the foundational equations for all of probability and statistics.
Any Frequentist who wants to reject this sum/product-rule reasoning by logical Bayesians in favor of their own intuition has a subtle problem. All statisticians believe in the sum/product rules. Frequentists just believe they’re restricted to “random variables”. Remember that Frequentist “probabilities” satisfy the same mathematical properties as Bayesian probabilities. There is no mathematical ‘hook’ which can be used to distinguish between them; the only difference between them is that Bayesian probabilities are more general. Hence, every equation that logical Bayesians write down in their degenerate program has to hold for Frequentist probabilities as well!
The problem Frequentists face is a bit like a mathematician trying to argue that an equation provable from the universal axioms of arithmetic (such as 0 = X - X) is not true for all integers, but is true when X is prime.
Your slide 6 illustrates this dilemma. There you describe Hempel’s paradox, which starts with the intuition that “a case of a hypothesis supports the hypothesis”. There is no justification for this other than that it seems intuitive. You base a great deal on this example, saying it’s the “kind of problem that leads to requiring severity”. I think you even had a blog post on it. I. J. Good, who was a colleague of yours I believe, showed through a simple example, using nothing more than the sum/product rules, that it’s not true in general (see here). In some circumstances a case of a hypothesis supports the opposite.
Good’s example only used probabilities interpretable as frequencies and the sum/product rules. Any Frequentist who accepts the sum/product rules thus has to accept Good’s resolution of Hempel’s paradox. If anyone wants to believe “a case of a hypothesis supports the hypothesis”, and resulting consequences such as the need for severity measures, they must reject the two equations which are the foundation of every probability/statistical calculation ever made.
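For readers who want to check the arithmetic, here is a sketch of the kind of two-world example Good used (the bird counts here are my illustration, not Good’s exact numbers): a randomly drawn black raven can lower the probability that all ravens are black.

```python
# A Good-style counterexample to "a case of a hypothesis supports it"
# (bird counts illustrative). Two possible worlds, equally probable a priori:
#   W1: 100 black ravens, no non-black ravens, 1,000,000 other birds
#       ("all ravens are black" is TRUE here)
#   W2: 1000 black ravens, 1 white raven, 1,000,000 other birds
#       ("all ravens are black" is FALSE here)
# A bird is drawn at random from all birds; it turns out to be a black raven.

prior_w1 = prior_w2 = 0.5
p_black_raven_w1 = 100 / (100 + 1_000_000)
p_black_raven_w2 = 1000 / (1000 + 1 + 1_000_000)

# Bayes' theorem, i.e., nothing beyond the sum and product rules:
evidence = prior_w1 * p_black_raven_w1 + prior_w2 * p_black_raven_w2
posterior_w1 = prior_w1 * p_black_raven_w1 / evidence

print(f"P(all ravens black) before: {prior_w1:.3f}")
print(f"P(all ravens black) after seeing a black raven: {posterior_w1:.3f}")
```

The black raven is about ten times more probable in W2, where the hypothesis is false, so the “positive instance” drags the probability of “all ravens are black” down from 0.5 to roughly 0.09.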
The fact that every equation that logical Bayesians write down has to hold for Frequentist probabilities as well guarantees their ‘degenerating program’ will have legs no matter what any of us believe or hope.
Alan: Please see the discussion at this post: https://errorstatistics.com/2013/10/19/bayesianconfirmationphilosophyandthetackingparadoxini/
I read it and now think you’re wrong about the “tacking paradox” as well. My point doesn’t require reference to any particular paradox. It’s quite general. These are the key points. Which do you disagree with?
(A) Everyone agrees with the sum/product rules. Bayesians think they apply generally; non-Bayesians think they only apply to “random variables”.
(B) The Logical Bayesians are deriving everything from the sum/product rules. That’s it.
(C) Non-Bayesians can’t simply say the Logical Bayesians are wrong, because they believe the Logical Bayesians’ results are true for “random variables”.
(D) There is no mathematical definition, axiom, or property of “random variables” which can be used to distinguish them from general Bayesian probabilities. This contrasts with “even” integers, which can be given a defining axiom that objectively separates them from other integers. Unlike the property of “being an even integer”, the property of “being a random variable” doesn’t exist in the mathematics at all and only exists subjectively in the minds of statisticians. It is far from unambiguous in practice. You can program a computer to tell you whether an integer is even; you can’t program a computer to tell you whether a probability is an RV or not.
(E) (A)–(D) put non-Bayesians in an awkward situation. Non-Bayesians have to say that sometimes the Logical Bayesians’ results are right and usable, but other times you should use non-Bayesian intuitions instead. There is no objective rule which can be used to decide unambiguously which to use.
The alternative to (E) is to accept that the sum/product rules are generally usable and are a more powerful tool for studying inference than anyone’s intuition. After studying dozens of examples where they seem to conflict, I would wholeheartedly endorse this latter alternative.
There is a practical consequence to all this. Anyone who accepts the awkward situation in (E) will forever be limited by their intuition. Anyone who accepts the alternative, can use the sum/product rules to derive results about inference beyond their intuition, or if you like, use them to educate and improve their intuition.
Pingback: Deutsch, Popper, Gelman and Shalizi, with a side of Mayo, on Bayesian ideas, models and fallibilism in the philosophy of science and in statistics (I) | Wine, Physics, and Song
Missing the Target: The Unhappy Story of the Criticisms of Falsificationism (PDF: uvtext.pdf)
But Popper also missed the target, because he did not solve the problem of induction (does it really need solving when you take theories tentatively?), nor did he solve the demarcation problem. It’s easy for a crank theory to be falsifiable, and easy to test such theories in a naive way and get positive results even though good testing would falsify them. So pseudoscience can carry on following Popper’s philosophy and get away with nonsense. Nowadays people think better about this problem than Popper did; see “Philosophy of Pseudoscience: Reconsidering the Demarcation Problem”:
http://press.uchicago.edu/ucp/books/book/chicago/P/bo15996988.html
I have also reinterpreted Popper on induction, pseudoscience, and falsification. The latest will be in my new book: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars.
I do not understand what you mean. Are you saying he never thought induction was a problem, but liked deduction better, so he wanted to see if he could do scientific research just that way? That it’s just some alternative he wanted to pursue? If so, then I doubt many would agree.