Scientism and Statisticism: a conference* (i)

A lot of philosophers and scientists seem to be talking about scientism these days–either championing it or worrying about it. What is it? It’s usually a pejorative term describing an unwarranted deference to the so-called scientific method over and above other methods of inquiry. Some push it as a way to combat postmodernism (is that even still around?). Steven Pinker gives scientism a positive spin (and even offers it as a cure for the malaise of the humanities!)[1]. Anyway, I’m to talk at a conference on Scientism (*not statisticism, that’s my word) taking place in NYC May 16-17. It is organized by Massimo Pigliucci (chair of philosophy at CUNY-Lehman), who has written quite a lot on the topic in the past few years. Information can be found here. In thinking about scientism for this conference, however, I was immediately struck by this puzzle:

How can we worry about science being held in too high a regard when every day we’re confronted with articles shouting that “most scientific findings are false”?

Too much kowtowing toward science? Gee, in the fields I’m most closely involved in, scarcely a day goes by when I’m not reading headlines: “Bad Science”, “Trouble in the Lab”, and “Science Fails to Self-correct.” Not to mention the “Crisis of Replication”.

The more I thought about it, the more I realized it was not really puzzling; yet my way of unraveling the puzzle points in a somewhat different direction from the one in which writers on scientism appear to be heading. Even those of us utterly allergic to postmodernism can grant legitimate worries about scientism, and the most noteworthy of them, I say, grow out of methodological abuses of (broadly) statistical methodology—“lies, damned lies, and statistics.” Big data and high-powered computers allow statistical techniques to be performed with a click of a mouse in any “data driven” inquiry, both in science and beyond (culturomics, philosophometrics)—but with all sorts of methodological-philosophical loopholes. It’s the false veneer of science, it’s statistics as window-dressing, that rightly bothers (most of) us; it’s the misuse and overreach of statistical methods (QRPs[2]) that is objectionable, as are presuppositions about “what we really, really want” in using probability to express and control errors.
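To see how easily those loopholes open up, here is a minimal sketch (my own toy simulation, with hypothetical settings, not anything from the conference materials): dredge twenty pure-noise predictors at the 0.05 level, and a “discovery” turns up most of the time.

```python
# Toy illustration of mouse-click data dredging (hypothetical setup):
# every variable is pure noise, yet screening 20 predictors at p < 0.05
# flags at least one "significant" association in most datasets.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_trials, n_predictors, n_obs = 1000, 20, 50
hits = 0
for _ in range(n_trials):
    y = rng.normal(size=n_obs)                   # outcome: noise
    X = rng.normal(size=(n_obs, n_predictors))   # predictors: noise
    # Correlate each predictor with y; keep the smallest p-value.
    pvals = [stats.pearsonr(X[:, j], y)[1] for j in range(n_predictors)]
    if min(pvals) < 0.05:
        hits += 1
print(f"Datasets with a 'significant' predictor: {hits/n_trials:.0%}")
# Roughly 1 - 0.95**20, about 64%, far above the nominal 5%.
```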

Here’s the blurb I wrote before fleshing out any of the details…. Send me your thoughts, ideally, by Saturday. (I may blog on the conference later on; if I update this, I’ll use (ii) in the title. See the update in a new post here.)

“The Science Wars and the Statistics Wars: scientism, popular statistics, and the philosophers”

I will explore the extent to which concerns about ‘scientism’ – an unwarranted obeisance to scientific over other methods of inquiry – are intertwined with issues in the foundations of the statistical data analyses on which (social, behavioral, medical and physical) science increasingly depends. The rise of big data, machine learning, and high-powered computer programs has extended statistical methods and modeling across the landscape of science, law and evidence-based policy, but this has been accompanied by enormous hand-wringing as to the reliability, replicability, and valid use of statistics. Legitimate criticisms of scientism often stem from insufficiently self-critical uses of statistical methodology, broadly construed—i.e., from what might be called “statisticism”–particularly when those methods are applied to matters of controversy.

  • While provocative articles written for popular consumption give useful exposés of classic fallacies and foibles (p-values are not posterior probabilities, statistical significance is not substantive significance, association is not causation), they often lack a depth of understanding of the underlying philosophical, statistical, and historical issues. (A sketch of the first of these fallacies follows this list.)
  • While “big data” journalism offers novel ways to present information, its correlational and causal headlines rest on a host of statistical associations and regressions, which may inadvertently allow biased or shaky claims to appear under the guise of hard-nosed, “just the facts” journalism.
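A back-of-the-envelope sketch of the first bullet’s point (the base rate and power below are my own illustrative assumptions, not figures from any study): even a genuine p < 0.05 result can be far from a 95% posterior probability that the effect is real.

```python
# Hedged illustration of "p-values are not posterior probabilities".
# The prior (share of tested hypotheses that are true) and the power
# are assumed for illustration only.
alpha = 0.05   # P(significant | no effect): the type I error rate
power = 0.80   # P(significant | true effect)
prior = 0.10   # assumed P(true effect) among hypotheses tested

p_sig = prior * power + (1 - prior) * alpha   # total P(significant)
ppv = prior * power / p_sig                   # P(true effect | significant)
print(f"P(true effect | p < 0.05) = {ppv:.2f}")   # about 0.64, not 0.95
```

Calculations of just this form drive the “most scientific findings are false” headlines; whether they indict the methods or their users is part of what’s at issue.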

Are philosophies about science relevant here? I say yes. To me, “getting philosophical” about uncertain inference is not articulating rarefied concepts divorced from statistical practice, but providing tools to avoid obfuscating philosophically tinged notions about evidence, induction, testing, and objectivity/subjectivity, while offering a critical illumination of flaws and foibles surrounding technical statistical concepts. Warranting empirical methods of inquiry–in day-to-day learning and in science alike–demands assessing and controlling misleading, biased, and erroneous interpretations of data. But such a meta-level scrutiny is itself theory-laden–only here the theories are philosophical. Understanding and resolving these issues, I argue, calls for interdisciplinary work linking philosophers of science, statistical practitioners, and science journalists. Not only would this help to make progress in the debates–the science wars and the statistics wars–it would promote philosophies of science genuinely relevant for practice.

[1] See his 2013 New Republic article, “Science is not your enemy”, here. But I wonder why he’s issuing “an impassioned plea to neglected novelists, embattled professors, and tenure-less historians”. What does he want from the humanities anyway? Why is he trying to woo the tenure-less humanities professors? Surely they pose no threat to evolutionary psychology.

[2] Questionable research practices.

 



15 thoughts on “Scientism and Statisticism: a conference* (i)”

  1. Sleepy

    Interesting. There does seem to have been a shift to focus on these issues in recent years. My understanding is that earlier scientism complaints focused on a different class of problems from the ones targeted in big data/replicability/statistical testing criticisms; the frustration over “The Moral Landscape” comes to mind, with critics more concerned about science providing the wrong kind of explanation for what we’re interested in than they were about matters like reliability. I’m curious to see where this goes.

    • Sleepy: You’re absolutely right. My twist is to begin with this puzzle to bring out some philosophical problems behind the methodological claims of scientism more generally–be the defenders scientists or philosophers.

  2. James

    Good observation, but I don’t think the misuse of statistics is the full story. I’ve seen otherwise clear thinkers commit the fallacy of reasoning from a methodological restriction. For example:

    1. Reasoning from the assumptions of game theory to claim that people are instrumentally rational.
    2. Reasoning from a materialist framework to reach the conclusion that moral claims have no content.
    3. Reasoning from Boyle’s and Charles’ laws to reach the conclusion that all gases are noble gases.

    Actually, I’ve only seen one of these. For whatever reason, the people who commit 2 never have trouble seeing the question-begging nature of 1 and 3.

    • James: Thanks for your comment. Your 3 examples fall under my statisticism, or at any rate, questionable methodology: begging the question or ill-warranted extrapolation from non-rigorous empirical assessments. Philosophers ought to be outing these things rather than being put on the defensive.
      I’m interested to learn more about the dialogue in this arena as of late. Recently I’ve been immersed largely in philosophy of statistics, arguments for the likelihood principle, and writing a book, “How to Tell What’s True About Statistical Inference”.

  3. Anonymous

    This reminds me of Gelman’s recent discussion with Pinker “on research that is attached to data that are so noisy as to be essentially uninformative”.
    Pinker: I don’t think that evolutionary psychology is a worse offender at noise-mining than social psychology in general.
    Quite an admission. In explaining some of the weaknesses, Gelman points up one of the main problems error statisticians are on about: the multiple comparisons problem. He also links to an excellent paper of his. (A small simulation after this exchange illustrates the point.)
    Gelman: Multiple comparisons is the answer, and the point of our garden of forking paths paper is to explain how this problem can arise even for studies that are well motivated by substantive theory.
    http://andrewgelman.com/2014/05/08/discussion-steven-pinker-research-attached-data-noisy-essentially-uninformative/

    • “I’m no worse than social psychology” is scarcely a ringing endorsement of the scientific status of a field. Maybe they shouldn’t call so much attention to themselves or a Dean might overhear.
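    A rough simulation of the forking-paths point quoted above (the setup is my own toy example, not anything from Gelman’s paper): the analyst runs only one test per dataset, but which test is run is chosen after seeing the data, so the nominal 5% error rate no longer holds even though no explicit multiple testing is performed.

    ```python
    # Toy "garden of forking paths" sketch (hypothetical setup):
    # all four subgroups are pure noise; the analyst tests only the
    # subgroup that looks most promising after seeing the data.
    import numpy as np
    from scipy import stats

    rng = np.random.default_rng(7)
    n_trials, n_subgroups, n_per_group = 2000, 4, 25
    false_positives = 0
    for _ in range(n_trials):
        groups = rng.normal(size=(n_subgroups, n_per_group))  # all null
        # Data-dependent choice of which single comparison to run:
        chosen = groups[np.argmax(np.abs(groups.mean(axis=1)))]
        if stats.ttest_1samp(chosen, 0.0).pvalue < 0.05:
            false_positives += 1
    print(f"False-positive rate: {false_positives/n_trials:.1%} (nominal: 5.0%)")
    ```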

  4. Will

    My experience with the non-technical pop-culture expression of scientism is that it is not really promoting the scientific method, but a bastardized version of it. Its fans, in general, leave out the introspection, the need for repeatability, etc. For example, someone will post some article about a study that confirms their belief, and say “I f’ing love science”. Rarely do I see “This study is interesting. I wonder if it has been replicated yet, and I’d like to know more about the methods used…”

    I also never see anyone post retraction/revision news on social media, but that is part of the scientific process too! Retractions or refutations should be celebrated – that’s what should separate science from dogmatic beliefs! No need for egos, nothing to be embarrassed about (unless the retraction is a result of unethical practices).

    Just my 2 cents… it may only be worth 1 cent, or 1/2.

    • Will: What you say is consistent with my suggestion that they be outed on scientific grounds. On the other hand, if that’s all there is to their stance, it becomes less interesting to pursue altogether. I assume there must be more.

  5. Gelman forwarded this to me:
    http://andrewgelman.com/2014/05/17/forum-ecology-p-values-model-selection/

    I seem to remember seeing some of these before, even though I know none of the authors except Spanos and wasn’t asked to contribute (Gelman didn’t include his).

  6. Nathan Schachtman

    Thanks for posting your slides. I enjoyed your talk, even if I did not have the opportunity to weigh in and rebut your “off-label” comments about Dr. Harkonen. The Harkonen case was affirmed by the 9th Circuit, but the Supreme Court denied the petition for review. I think you suggested it had gone to the Supreme Court, which I suppose is true of the petition, but the case was never called up from the Circuit because the writ was denied. (Technically, the writ of certiorari is the directive to the lower court to send up the record to the reviewing court; writ denied means that the record stayed down below.)

    Dr. Harkonen’s use of the verb “demonstrate” in a press release was at worst a diction error and at best an inspired scientific inference based upon two randomized clinical trials and a large body of clinical and pre-clinical research. The government had and has much better targets for its efforts to rein in fraudfeasors. How about George W. Bush for “weapons of mass destruction” in Iraq? Susan Rice for fibs told about Benghazi? Or Barack Obama for the little white lie that Snowden could have gone up the chain of command with his complaints about surveillance run amok?

    I suppose science is held in such a high regard that it has led to the rise of scientific poseurs. In the life sciences, there is money, prestige, power, and influence to be had through the research grant process. In the pharmaceutical industry, there are profits to be had. In judicial proceedings, there is money to be had by a class of lawyers that do not create, invent, or develop, but live and thrive like vultures on carrion. In regulatory matters, there are those who want to assume control over the engine of government, by dominating agency action that is determined by “substantial evidence,” a technical legal term that means many different things in different agencies and in different contexts. All these processes require creating the appearance of science without its content, or its commitment to a process. Perhaps because I live in a world of lawyers and public policy makers, I see retrograde moves all the time by scholars who want to misrepresent the scientific process and transform it into something that is more amenable to their political and social goals. See, e.g., Erica Beecher-Monas, Evaluating Scientific Evidence, for an account of science that strips away prediction and confirmation, and leaves explanation as the touchstone for science. All that is required is to be a good story teller.

    Nathan

    • Nate: Thanks for your comment. Yes, I realize the Supreme Court didn’t agree to hear the Harkonen case. I hope we can post your latest legal reflections on the case. Do you have a link for Beecher-Monas? Good story telling is good scientific explanation?

      • Nathan Schachtman

        Here you go:

        E. Beecher-Monas, Evaluating Scientific Evidence: An Interdisciplinary Framework for Intellectual Due Process (New York 2007).

        EBM, not to be confused with evidence-based medicine, doesn’t make the equation you give, but she describes science as providing explanation, leaving out the confirmation and prediction part.

        Nathan

        • Nate: OK, but you said she favored mere story-telling. Warranted explanations require testing, and thus testable implications, which might be ruled out by the no-prediction stance. Mere story-telling is bad science. Anyway, I’ll look for EBM.

  7. There was a very interesting response to Pinker called “Crimes Against the Humanities” (under “book brawl”) also in the New Republic: http://www.newrepublic.com/article/114548/leon-wieseltier-responds-steven-pinkers-scientism
    There are yet further rounds which the interested reader can find. But there must be some reconciliation of these positions, each of which contains many good points.

  8. Nathan Schachtman

    Mayo,

    As for E. Beecher-Monas, I didn’t write “mere,” but I might well have. If you insert the all-important adjective “warranted” in front of “explanation,” then you have added what I found missing in EBM’s account.

    Nathan

