(See Part 1)
5. Duhemian Problems of Falsification
Any interesting case of hypothesis falsification, or even a severe attempt to falsify, rests on both empirical and inductive hypotheses or claims. Consider the simplest form of deductive falsification, an instance of the valid form modus tollens: “If H entails O, and not-O, then not-H.” (To infer “not-H” is to infer that H is false or, more often, that there is some discrepancy in what H claims regarding the phenomenon in question.)
As with any argument, the premises must be true, or at least approximately true, in order to detach the conclusion (without which there is no inference). But in fact the data are derived only with the help of various auxiliary claims A1, …, An, so the argument more closely takes the form:
1. If H & A1 & … & An, then O
2. not-O
Therefore, either not-H or not-A1 or … or not-An.
Thus, at best, we may infer the disjunction: either H or one of the auxiliaries used in deriving observation O is to blame. (Note that this is still an instance of modus tollens.) To regard H as falsified requires ruling out the various auxiliary hypotheses; this is often referred to as Duhem’s problem. In effect, we need to replace the “?” in the following argument in order to detach “not-H”:

1. either not-H or not-A1 or … or not-An
2. ?
Therefore, not-H.

(By the way, this argument form is called a disjunctive syllogism.)
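The two argument forms can be made fully explicit in a proof assistant. The following is a minimal sketch of my own in Lean 4 (treating the conjunction of auxiliaries as a single proposition A for brevity):

```lean
-- H is the hypothesis, A stands in for the conjunction of auxiliaries A1 & … & An,
-- and O is the predicted observation.

-- Duhemian modus tollens: from "H together with A entails O" and "not-O",
-- we can detach only the disjunction "not-H or not-A".
theorem duhemian_mt (H A O : Prop) (entails : H ∧ A → O) (nO : ¬O) : ¬H ∨ ¬A :=
  Classical.byCases
    (fun hA : A => Or.inl (fun hH : H => nO (entails ⟨hH, hA⟩)))
    (fun nA : ¬A => Or.inr nA)

-- Disjunctive syllogism: only after warranting the auxiliaries (filling in the "?")
-- can "not-H" itself be detached.
theorem detach_not_H (H A : Prop) (disj : ¬H ∨ ¬A) (auxOk : A) : ¬H :=
  disj.elim id (fun nA => absurd auxOk nA)
```

The first theorem makes vivid why falsification alone only spreads blame across the disjunction; the second shows that detaching not-H requires a warrant (auxOk) for the auxiliaries as an additional premise.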
But we are not done. Further empirical assumptions are needed to warrant the claim that our observation (in “not-O”) is reliable, and that it counts as genuinely conflicting with the conjunction of H and the auxiliaries. When should it count as a genuine conflict? Even where a scientific theory or hypothesis is thought to be deterministic, the uncertainties, inaccuracies, and limitations of any actual test of H yield an error-laden prediction, so, strictly speaking, no outcome deductively contradicts it. Hypothesis H together with auxiliaries A1…An might be seen to entail a statistical claim about expected observations, perhaps with a range of error. Our falsification rules will have to be probabilistic in some sense. But it is not enough to regard the anomaly as genuine simply because the outcome is highly improbable under a hypothesis: individual outcomes described in detail may easily have very small probabilities under H without being genuine anomalies for H.
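To see the last point concretely, consider a toy illustration of my own (the coin-flip setup and numbers are purely illustrative, not from the text). Under the hypothesis that a coin is fair, every exact sequence of 100 flips is enormously improbable, so “improbable under H” cannot by itself be the falsification rule; a probabilistic rule instead looks at a test statistic and asks how probable a result at least as discordant would be under H:

```python
from math import comb

n = 100          # number of coin flips
p = 0.5          # H: the coin is fair

# Any *specific* sequence of 100 flips has probability 2^-100 under H,
# including perfectly unremarkable ones.
prob_specific_sequence = p ** n
print(f"P(any exact sequence | H) = {prob_specific_sequence:.3e}")  # ~7.9e-31

# A probabilistic falsification rule instead uses a test statistic, e.g. the
# number of heads X, and asks how probable a result at least this discordant
# would be under H.
def tail_prob(k_observed: int) -> float:
    """P(X >= k_observed | H) for X ~ Binomial(n, p), p = 0.5."""
    return sum(comb(n, k) * p**n for k in range(k_observed, n + 1))

print(f"P(X >= 60 | H) = {tail_prob(60):.4f}")  # ~0.0284: not yet a genuine anomaly
print(f"P(X >= 80 | H) = {tail_prob(80):.2e}")  # tiny: candidate anomaly, if reproducible
```

The detailed sequence is always astronomically improbable; only the improbability of a relevant *class* of outcomes (here, “at least k heads”) bears on whether the result is discordant with H.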
Popper was well aware of these problems[i] (and one cannot be doing “something like Popper” if one ignores them). While extremely rare events may occur, Popper notes, “such occurrences would not be physical effects, because, on account of their immense improbability, they are not reproducible at will…. If, however, we find reproducible deviations from a macro effect … deduced from a probability estimate … then we must assume that the probability estimate is falsified” (Popper 1959, 203). Thus, to infer that we’ve gotten hold of a reproducible or genuine effect—sufficiently to count as an anomaly for H—is to make an inference to the existence of a reproducible effect. Popper called it a falsifying hypothesis.
Before considering a discordancy an actual anomaly for H, we need a falsifying hypothesis asserting that an observed disagreement represents a systematic and reproducible effect.
This is so even granting that the falsifying hypothesis may be closer to what Popper called the “empirical basis” than some theory. This just means it is thought to involve less high-level theory; notice that it still involves warranting an inductive generalization. For instance, in testing general relativity, a falsifying hypothesis might be: there is a systematic departure d from the predicted light deflection effect in radioastronomical experiments.
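One crude way to picture warranting such a falsifying hypothesis is as a reproducibility check across replications. The sketch below is my own; the deviation figures and the 3-standard-error cutoff are invented for illustration and are not drawn from actual eclipse or radio-astronomical data:

```python
import statistics

# Illustrative numbers only: predicted deflection and per-experiment error
# are assumptions of this sketch, not reported measurements.
measurement_error = 0.1   # assumed standard error of a single experiment (arcsec)

# Deviations (observed minus predicted) from several independent replications.
replications = [0.31, 0.28, 0.35, 0.30, 0.33]

mean_dev = statistics.mean(replications)
se_mean = measurement_error / len(replications) ** 0.5

# One crude falsifying-hypothesis rule: infer a systematic departure d only if
# the mean deviation is large relative to its standard error AND the deviation
# recurs with the same sign in every replication (Popper's reproducibility).
systematic = abs(mean_dev) > 3 * se_mean and all(d > 0 for d in replications)
print(f"mean deviation d = {mean_dev:.2f} arcsec, systematic effect inferred: {systematic}")
```

The point of the rule is exactly Popper’s: a single discrepant outcome, however improbable, is not yet a “physical effect”; the inference to a reproducible deviation d is itself an inductive generalization that must be warranted.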
Admittedly, this was problematic for Popper qua “deductivist”. That’s because to infer a falsifying hypothesis is an evidence-transcending, general (i.e., inductive) inference, and because Popper didn’t want to have to justify induction[ii]. But he still needed to warrant the rationality of various falsification rules.
We can group these problems together as the Duhemian problems of falsification: the need to warrant the data as reliable, and as genuinely anomalous for H (ruling out blaming the auxiliaries A1…An).
6. Methodological Falsificationism
The recognition that we need methodological rules and methods to warrant falsifying hypotheses led Lakatos to dub Popper’s philosophy “methodological falsificationism.” The task would be to warrant methods for determining or deciding when data and falsifying hypotheses are sufficiently well supported. (Lakatos delineated something like 7 different methodological “decisions” in linking data and hypotheses, not that the numbering matters; see EGEK 1996, chapter 1.) That statistical tests would be relevant to this task was recognized by Popperians such as Lakatos: “The philosophical basis of some of the most interesting developments in modern statistics, the Neyman-Pearson approach rests completely on methodological falsificationism” (1978, 25). Still, neither Popper nor Lakatos made explicit use of statistical methods, a mistake in my judgment.[iii]
Mini Test 2
Why are we led to probabilistic falsification rules?
What are the Duhemian problems of falsification?
What is a falsifying hypothesis?
What is methodological falsificationism?
Lakatos, I. 1978. The Methodology of Scientific Research Programmes. Vol. 1 of Philosophical Papers, edited by J. Worrall and G. Currie. Cambridge: Cambridge University Press.
Mayo, D. 1996. Error and the Growth of Experimental Knowledge. Chicago: University of Chicago Press.
Mayo, D. 2011. “Statistical Science and Philosophy of Science: Where Do/Should They Meet in 2011 (and Beyond)?” Rationality, Markets and Morals (RMM) 2: 79-109.
Musgrave, A. 1999. Essays in Realism and Rationalism. Amsterdam and Atlanta, GA: Rodopi.
Popper, K. 1959. The Logic of Scientific Discovery. New York: Basic Books.