Popper on pseudoscience: a comment on Pigliucci (i), (ii) 9/18, (iii) 9/20



Jump to Part (ii) 9/18/15 and (iii) 9/20/15 updates

I heard a podcast the other day in which the philosopher of science, Massimo Pigliucci, claimed that Popper’s demarcation of science fails because it permits pseudosciences like astrology to count as scientific! Now Popper requires supplementing in many ways, but we can get far more mileage out of Popper’s demarcation than Pigliucci supposes.

Pigliucci has it that, according to Popper, mere logical falsifiability suffices for a theory to be scientific, and this prevents Popper from properly ousting astrology from the scientific pantheon. Not so. In fact, Popper’s central goal is to call our attention to theories that, despite being logically falsifiable, are rendered immune from falsification by means of ad hoc maneuvering, sneaky face-saving devices, “monster-barring” or “conventionalist stratagems”. Lacking space on Twitter (where the “Philosophy Bites” podcast was linked), I’m placing some quick comments here. (For other posts on Popper, please search this blog.) Excerpts from the classic two pages in Conjectures and Refutations (1962, pp. 36-7) will serve our purpose:

It is easy to obtain confirmations, or verifications, for nearly every theory–if we look for confirmations.

Confirmations should count only if they are the result of risky predictions; that is [if the theory or claim H is false] we should have expected an event which was incompatible with the theory [or claim]….

Every genuine test of a theory is an attempt to falsify it, or to refute it. Testability is falsifiability; but there are degrees of testability: some theories are more testable…

Confirming evidence should not count except when it is the result of a genuine test of the theory, and this means that it can be presented as a serious but unsuccessful attempt to falsify the theory. (I now speak of such cases as ‘corroborating evidence’).

Some genuinely testable theories, when found to be false, are still upheld by their admirers–for example by introducing ad hoc some auxiliary assumption, or by re-interpreting the theory ad hoc in such a way that it escapes refutation. Such a procedure is always possible, but it rescues the theory from refutation only at the price of destroying, or at least lowering its scientific status. (I later described such a rescuing operation as a ‘conventionalist twist’ or a ‘conventionalist stratagem’.)…

Einstein’s theory of gravitation clearly satisfied the criterion of falsifiability. Even if our measuring instruments at the time did not allow us to pronounce on the results of the tests with complete assurance, there was clearly a possibility of refuting the theory.

Astrology did not pass the test. Astrologers were greatly impressed, and misled, by what they believed to be confirming evidence–so much so that they were quite unimpressed by any unfavourable evidence. Moreover, by making their interpretations and prophecies sufficiently vague they were able to explain away anything that might have been a refutation of the theory had the theory and the prophecies been more precise. In order to escape falsification they destroyed the testability of their theory. It is a typical soothsayer’s trick to predict things so vaguely that the predictions can hardly fail: that they become irrefutable.

The Marxist theory of history, in spite of the serious efforts of some of its founders and followers, ultimately adopted this soothsaying practice. In some of its earlier formulations…their predictions were testable, and in fact falsified. Yet instead of accepting the refutations the followers of Marx re-interpreted both the theory and the evidence in order to make them agree. In this way they rescued the theory from refutation…. They thus gave a ‘conventionalist twist’ to the theory; and by this stratagem they destroyed its much advertised claim to scientific status.

The two psycho-analytic theories were in a different class. They were simply non-testable, irrefutable. There was no conceivable human behavior which could contradict them….I personally do not doubt that much of what they say is of considerable importance, and may well play its part one day in a psychological science which is testable. But it does mean that those ‘clinical observations’ which analysts naively believe confirm their theory cannot do this any more than the daily confirmations which astrologers find in their practice.

Only in the third case does Popper take up theories that he considers non-testable due to (self-sealing) features in the theories themselves. The only difference, for Popper, is that in the third case the ad hoc saves are already part and parcel of the theory, but little turns on that. Popper’s central thesis is that it makes no difference how you immunize a theory from having its flaws uncovered by data: the result is that the theory is not actually being critically tested; the data aren’t really being taken seriously. Such theories, or theory appraisals, are therefore unscientific.

Thus, Popper is quite clear that appraisals of theories in these domains are unscientific because, far from subjecting claims to severe criticism, far from accepting that flaws have been unearthed when predictions fail, far from giving the theories a hard time, their adherents retain the theories by means of ad hoc saves and conventionalist stratagems. The claims are logically falsifiable, but they are rendered immune to criticism. In these arenas theories are not being appraised in a scientific (critical) manner; the fact that data are involved fails utterly to make the appraisals scientific. An unscientific appraisal merely tells us that data “could be interpreted in the light of” the theory. In a genuine science, passing a test must be difficult to achieve if the claim under test has specifiable flaws.

Observations or experiments can be accepted as supporting a theory (or a hypothesis, or a scientific assertion) only if these observations or experiments are severe tests of the theory—or in other words, only if they result from serious attempts to refute the theory, and especially from trying to find faults where these might be expected in the light of all our knowledge. (Popper, 1994, p. 89)

Popper demands more than logical falsifiability for a theory (or theory appraisal) to be properly scientific. In fact, Popper intends his demarcation to capture a condition that “cannot be formalized”; it is “material” or “historical”, and it is located in the methodological process by which data are brought to bear on theories. Popper’s demarcation, remember, is intended as a contrast with the conception of science he finds in verificationists, inductivists and “confirmation theorists”. Reject verificationism and beware of verification biases, says Popper: confirmations are “too cheap to be worth having” and should count only if they are the result of severe testing. Genuine evidence for a claim H requires (at minimum) spelling out those outcomes that would have been construed as counterevidence to H. The onus is on interpreters of data to show how the charge of questionable science has been avoided. The “ability” in Popper’s falsifiability refers to a capability of the testing methodology, not merely a logical property of the theory.

Pigliucci is right that philosophers of science these days have tended to shy away from the task of demarcating science from pseudoscience. The task is left to those courageous committees reviewing fraud cases[1]–and they invariably turn to Popper!

There is much more that is required to flesh out Popper’s view of the demarcation of science. Here I am simply responding to this one point I heard on the podcast. I will likely update this with further remarks (look for (ii), etc.), and naturally invite Pigliucci to comment.

(ii) 9/18/15

Popper confuses things by making it sound as if he’s asking “When is a theory unscientific?” when he is actually asking “When is an appraisal of a theory or claim H unscientific?” Unscientific appraisals of H are those that lack severity, often as a result of various face-saving stratagems. Nowadays these are better known as cherry picking, P-hacking, trying and trying again, multiple testing, and a slew of other biasing selection effects. Popper’s main shortcoming is that he never provided an adequate account of severe testing, whether for falsifying (discorroborating) or for corroborating. He defined “H passes a severe test with data x” as: H entails (or statistically accords with) x, and x is a novel fact (one that would be surprising under existing rivals to H).
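Popper’s point that confirmations are easy to come by if we look for them has a sharp quantitative face in multiple testing. As a toy illustration (my own, not anything in the podcast exchange): if a researcher runs k independent tests of true null hypotheses at the conventional 0.05 level, the chance of at least one spurious “confirmation” grows rapidly with k. A minimal sketch in Python:

```python
# Family-wise error rate: the probability that at least one of k
# independent tests of true null hypotheses yields p < alpha by chance.
def familywise_error_rate(k: int, alpha: float = 0.05) -> float:
    return 1 - (1 - alpha) ** k

for k in (1, 10, 20, 60):
    print(f"{k:3d} tests -> P(at least one 'significant' result) = "
          f"{familywise_error_rate(k):.2f}")
```

With 20 such tests the chance of some nominally significant result is already about 0.64; reporting only the “hit” while leaving the other 19 outcomes in the file drawer is exactly the kind of cheap confirmation that, on Popper’s view, cannot count.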

Pigliucci sees his position on demarcation as reacting to Laudan, who declared in 1983 that the demarcation problem was dead. It died, apparently, because philosophers couldn’t provide a set of necessary and sufficient conditions to define “science”. But such an analytic activity is not what’s involved in identifying minimal requirements for good or terrible tests. Nor would Laudan disagree. (He will correct me if I’m wrong.)

Laudan had just come to Virginia Tech around the time of the demarcation paper; I was fresh out of graduate school. I think Laudan’s point was mainly that we should stick to identifying reliable/unreliable methods, rather than try to identify which practices get to wear the label “science”. The question of “what is science?” used to occupy years of brown bags here, way back when, at the very start of the STS program. Thanks to Laudan, I quickly discovered how to use my work in philosophy of statistics to help solve these core problems in philosophy of science. Notably, the error statistical methodology can be used to supply an adequate account of stringent, reliable or severe tests–just what Popper lacked. With a severe testing account in hand, the demarcation task becomes one of scrutinizing particular inquiries and inferences, not whole fields. How well do they accomplish the task of severely probing errors? Can they reliably solve their Duhemian problems (of where to lay the blame for anomalies)? Or do they permit researchers ample degrees of freedom to explain away anomalies? It’s when an inquiry is incapable of learning from anomaly and error that it slips into the “questionable science” category–or so I argue.[2]

(iii) 9/20/15

Pigliucci goes in a different direction. Lacking necessary and sufficient conditions, he proposes to map out an array of sciences in the spirit of a (Wittgensteinian) family resemblance. The trouble is that, lacking criteria for answering the above questions, the pigeon-holing tends to reflect someone’s assessment of plausibility, and would differ depending on who formulates the array. This gets us to Laudan’s worry: who is in and who is out, and whether a field is deemed scientific or fringe, may be largely a reflection of one group’s values, be they political, economic, religious, life-style or other. The danger is that determining what counts as junk science or good science itself turns into a rather non-scientific enterprise. Rival positions typically allege that the “other side” is politicizing the science. See “Will the real junk science please stand up?”

To be clear, I deny this needs to happen; it occurs when we fail to identify at least minimal requirements for passing a severe test. With those in hand, a cluster of ways of violating severity is forthcoming, e.g., cherry picking, monster barring, multiple testing, post-data selection effects, exception incorporation, barn-hunting, data reinterpretation, etc. In each, the capability of the practice to be genuinely self-critical–to find flaws in its models, hypotheses and data, even where they exist–is absent or low.
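One of these violations, “trying and trying again” (optional stopping), is easy to exhibit numerically. The following rough simulation is my own illustration, with assumed details not in the post (a fair-coin null, a normal-approximation z-test, peeking after every 20 flips): although each individual test is run at the 0.05 level, stopping at the first nominally significant result pushes the actual false-rejection rate well above 0.05.

```python
import math
import random

def two_sided_p(heads: int, n: int) -> float:
    """Normal-approximation two-sided p-value under a fair-coin null."""
    z = abs(heads - n / 2) / math.sqrt(n / 4)
    return 2 * (1 - 0.5 * (1 + math.erf(z / math.sqrt(2))))

def peeking_rejects(rng: random.Random, peeks: int = 10, batch: int = 20) -> bool:
    """Flip a fair coin in batches, testing after each batch;
    stop and declare 'significance' at the first p < 0.05."""
    heads = n = 0
    for _ in range(peeks):
        heads += sum(rng.random() < 0.5 for _ in range(batch))
        n += batch
        if two_sided_p(heads, n) < 0.05:
            return True
    return False

rng = random.Random(0)  # fixed seed for reproducibility
trials = 2000
rate = sum(peeking_rejects(rng) for _ in range(trials)) / trials
print(f"False-rejection rate with 10 peeks: {rate:.3f} (nominal level 0.05)")
```

Under these settings the simulated rate lands in the vicinity of three times the nominal 0.05: the quantitative face of what the Stapel committee (footnote [1] below) calls “continuing an experiment until it works as desired.”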


[1] The committee reviewing fraudster Diederik Stapel does quite a good job:

One of the most fundamental rules of scientific research is that an investigation must be designed in such a way that facts that might refute the research hypotheses are given at least an equal chance of emerging as do facts that confirm the research hypotheses. Violations of this fundamental rule, such as continuing an experiment until it works as desired, or excluding unwelcome experimental subjects or results, inevitably tends to confirm the researcher’s research hypotheses, and essentially render the hypotheses immune to the facts…. [T]he use of research procedures in such a way as to ‘repress’ negative results by some means may be called verification bias. (Report, 48).

[2] For my deconstruction of Kuhn in light of Popper on demarcation, see “Ducks, Rabbits, and Normal Science: Recasting the Kuhn’s-eye view of Popper” (EGEK Chapter 2).

Laudan, L. (1983). “The Demise of the Demarcation Problem.” In R. S. Cohen and L. Laudan (eds.), Physics, Philosophy and Psychoanalysis, pp. 111–127. Dordrecht: D. Reidel.

Popper, K. (1962). Conjectures and Refutations: The Growth of Scientific Knowledge. New York: Basic Books.

Popper, K. (1994). The Myth of the Framework: In Defence of Science and Rationality (edited by M. A. Notturno). London: Routledge.

Categories: Error Statistics, Popper, pseudoscience, Statistics


7 thoughts on “Popper on pseudoscience: a comment on Pigliucci (i), (ii) 9/18, (iii) 9/20”

  1. Terrya

    For what it’s worth, I’ve always thought that falsification is a necessary but not sufficient criterion.

  2. Pigliucci tweeted: “@learnfromerror @philosophybites thanks Deborah. I actually don’t disagree with much of what you say there. Hopefully more comments later.”
    So I guess he agrees with most of what I say.

  3. Pingback: Friday links: against quit lit, ecologist fired, syllabus easter egg, and more | Dynamic Ecology

  4. Wanted to “like” this article, however you don’t seem to have that WordPress function activated. So LIKE. 🙂

    • Well, the posts are automatically announced on Twitter and can be liked there. I’m not too keen to add a way to appraise posts according to # likes–hits and interesting discussion matter more. But I appreciate your “liking” it.

  5. I recently was directed to Pigliucci’s criticisms of Popper’s demarcation criterion, and Googling around for more information led me to your excellent discussion here. If I had been called upon to respond to P, I would certainly have written something like this, drawing especially from C&R, but here you’ve done it much better than I would have. Thanks.
