February is a good time to read or reread these pages from Popper’s Conjectures and Refutations. Below are (a) some of my newer reflections on Popper after rereading him in the graduate seminar I taught one year ago with Aris Spanos (Phil 6334), and (b) my slides on Popper and the philosophical problem of induction, first posted here. I welcome reader questions on either.
As is typical in rereading any deep philosopher, I discover (or rediscover) morsels of clues to understanding, whether fully intended by the philosopher or a byproduct of his other insights and of a more contemporary reading. So it is with Popper. A couple of key ideas to emerge from the seminar discussion (my slides are below) are:
- Unlike the “naïve” empiricists of the day, Popper recognized that observations are not just given unproblematically; they require an interpretation, an interest, a point of view, a problem. Which came first, a hypothesis or an observation? Another hypothesis, if only at a lower level, says Popper. He draws the contrast with Wittgenstein’s “verificationism”. In typical positivist style, the verificationist sees observations as the given “atoms,” and other knowledge is built up out of truth-functional operations on those atoms. However, scientific generalizations beyond the given observations cannot be so deduced, hence the traditional philosophical problem of induction isn’t solvable. One is left trying to build a formal “inductive logic” (generally a deductive affair, ironically) that is thought to capture intuitions about scientific inference (a largely degenerating program). The formal probabilists, as well as philosophical Bayesians, may be seen as descendants of the logical positivists: instrumentalists, verificationists, operationalists (and the corresponding “isms”). So understanding Popper throws a great deal of light on current-day philosophy of probability and statistics.
- The fact that observations must be interpreted opens the door to interpretations that prejudge the construal of data. With enough interpretive latitude, anything (or practically anything) that is observed can be interpreted as in sync with a general claim H. (Once your eyes were opened, you saw confirmations everywhere, as with a gestalt conversion, as Popper put it.) For Popper, positive instances of a general claim H, i.e., observations that agree with or “fit” H, do not even count as evidence for H if virtually any result could be interpreted as according with H.
Note a modification of Popper here: Instead of putting the “riskiness” on H itself, it is the method of assessment or testing that bears the burden of showing that something (ideally quite a lot) has been done in order to scrutinize the way the data were interpreted (to avoid “verification bias”). The scrutiny needs to ensure that it would be difficult (rather than easy) to get an accordance between data x and H (as strong as the one obtained) if H were false (or specifiably flawed).
Note a second modification of Popper that goes along with the first: It isn’t that GTR opened itself to literal “refutation” (as Popper says), because even if GTR were true, a positive result could scarcely be said to follow, or even to have been expected, in 1919 (or long afterward). (Poor fits, at best, were expected.) So failing to find the “predicted” phenomenon (the Einstein deflection effect) would not falsify GTR; there were too many background explanations for observed anomalies (Duhem’s problem). This is so even though observing a deflection effect does count! (This is one of my main shifts on Popper, or rather, I think Popperians make a mistake when they say otherwise.) Of course, even when they observed a “deflection effect” (an apparently positive result), a lot of work was required to rule out any number of other explanations for the “positive” result (if interested, see refs). Nor is there anything “unPopperian” about the fact that no eclipse result would have refuted GTR (certainly not in 1919). (Paul Meehl and other Popperians are wrong about this.) Admittedly, Popper was not clear enough on this issue. Nevertheless, and this is my main point today, he was right to distinguish the GTR testing case from the “testing” of the popular theories he describes, wherein any data could be interpreted in light of the theory.

My reading of (or improvement on?) Popper, so far as this point goes, is that he is demarcating those empirical assessments or tests of a claim that are “scientific” (probative) from those that are “pseudoscientific” (insevere or questionable). To claim positive evidence for H from test T requires (minimally) indicating outcomes that would have been construed as evidence against H, or as counterinstances to it. The onus is on testers or interpreters of data to show how the charge of questionable science has been avoided.
The verificationist’s view of meaning: the meaning of a proposition is its method of verification. Popper contrasts his problem of demarcating science from non-science with this question of “meaning”. Were the verificationist’s account of meaning used as a principle of demarcation, it would be both too narrow and too wide (see Popper).
For more contemporary experiments, see my discussion in Error and Inference.
NOTE: I have a “no pain philosophy” 3-part tutorial (very short) on Popper on this blog. If you search under that, you’ll find it. Questions are welcome.
Problem of Induction & some Notes on Popper