Here are the slides from my presentation (May 17) at the Scientism workshop in NYC. (They’re sketchy, since we were aiming for 25-30 minutes.) Below them are some mini notes on some of the talks.
Now for my informal notes. Here’s a link to the Speaker abstracts; the presentations may now be found at the conference site here. Comments, questions, and corrections are welcome.
Friday, May 16 (pm): Carol Cleland argued that inference to possible worlds is the best explanation for how we commonly assign truth conditions to counterfactual conditionals, yet these are not reducible to empirical science. (Physicists seem perfectly happy with multiverses, unfortunately.) Noretta Koertge wonders why hard-headed Susan Haack is concerned to combat scientism, especially given that the general public seems “increasingly reluctant to employ even the simplest principles of scientific reasoning”. Koertge certainly agrees that we should avoid faux trappings of science, but thinks philosophers of science should lean more closely to the side of the sciences rather than the science-doubters. I suggested in the discussion that Haack was concerned (among other things) with overreliance on quantified science in judicial rulings on expertise (e.g., applications of Daubert), when it might seem preferable to use a more qualitative, flexible sense of “weight of evidence”. (My own view is that judges should study “statistics in the law,” as taught, for example, by Nate Schachtman, who attended my talk.[i])

Tom Nickles argued against scientism viewed as strong scientific realism because it downplays or ignores the historical dependencies of science, and tends to assume context-free “end-of-history” conclusions. (I share Nickles’ preference for a philosophy and history of science (PHS) perspective, which I like better than HPS, but I find that strong scientistic arguments are generally non-realist, if not radically instrumentalist, if only because arguments for realism are philosophical and not purely scientific.) Rik Peels argued against the kind of scientism that says only the natural sciences provide reliable knowledge (“epistemological scientism”) by questioning several empirical arguments purporting to show that introspection is not a reliable source of knowledge. (I think the upshot, going by his blurb, was that “epistemological scientism about introspection as a common sense source of knowledge should be rejected”.)

Saturday, May 17: Massimo Pigliucci said that scientistic types harken back to extreme, naïve logical positivism, and I agree (compare with Nickles above). I agree as well on the importance of demarcation projects (see my slides above). He discussed, as did the next talk by Justin Kalef, the way the “is/ought” distinction argues against reducing value questions to science, especially in the realm of ethical judgments. (The reductionists like to argue that since “human flourishing” is a factual matter, we can answer even ethical questions scientifically. My problem is that, for any non-trivial question, these applications are invariably imbued with ideology, so they fail on methodological grounds. Admittedly, almost all of the meta-ethical philosophers I know are naturalists of some sort, which for some reason always strikes me as kind of a cop-out.) Moti Mizrahi said that if you object to defenses of scientific induction on grounds of circularity, then you should realize that even modus ponens is defended circularly. (My own view is that inductive accounts that must be defended circularly are not worth defending.) Don Ross argued that the tendency of economists like Leamer to deny that economics is a science is based on “the fact that economists do not produce timeless generalizations” the way physicists do, but Ross thinks such economists are assuming a wrong-headed view of science.
(Is this really a “rhetoric of modesty,” as he claims, as opposed to a way to deflect blame for their lack of clear policy guidance, especially in the past decade? There was a lot more to his rich paper, which I will study more carefully later on. That goes for the other papers as well.) Mariam Thalos argued that if we view science as theoretical reasoning, searching for truth, as distinct from practical reasoning and common sense, then there should be no concern about science encroaching on any area. (But if one requires this distinction, isn’t one back to an anti-scientism position, reflecting concerns about science encroachments similar to those arising from the is/ought distinction? I agree with Thalos that anyone worried about scientism should care about demarcating science, in contrast to what Haack apparently suggests.)
[i] Schachtman has often been at odds with Haack on legal points, judging from his blog (see [ii]). I am not familiar with those cases. I have my own disagreement with him on the Harkonen case, and in this connection I found it interesting that the audience laughed when I mentioned the Supreme Court turning down Harkonen’s appeal on grounds of free speech. Search this blog under Harkonen for details.
[ii] See for example: http://schachtmanlaw.com/haacks-holism-vs-too-much-of-nothing/ I entirely agree with Haack’s anti-probabilism, as readers of this blog know: http://schachtmanlaw.com/haack-attack-on-legal-probabilism/
Apologies for picking at a tangent, but multiverse conjectures are only considered useful/scientific by a small subset of physicists. And even in the sub-specialties where these arguments are regularly invoked, “The Multiverse” is nowhere near accepted science. It may seem otherwise because physics popularizers are (sadly) predominantly drawn from the theoretical community most prone to unrestrained speculation.
West: Thanks for your comment. I should note right away that multiverse hypotheses did not come up at Cleland’s talk (but I was thinking of them). I didn’t suggest multiverse hypotheses were accepted among physicists, but note, in relation to Cleland’s talk, that physicists don’t think they’re going outside physics in discussing them either. But moving away entirely from the conference, here’s the kind of thing that suggests this concept is no longer deemed a wildly far-fetched theoretical gambit by physicists:
https://errorstatistics.com/2013/08/28/is-being-lonely-unnatural-for-slim-particles-a-statistical-argument/
It’s linked to the discovery of properties of the Higgs particle (to whom I write an imaginary letter), and to discussions on Matt Strassler’s blog. Perhaps it’s only theoretical physicists–I hope so.
Wondering if you have any thoughts on this?
http://maximum-entropy-blog.blogspot.com/2014/05/the-calibration-problem-why-science-is.html
Just one small comment on slide 15: I have always felt uncomfortable with five sigmas and the corresponding p-values, because they depend so heavily on the behaviour of the distribution in the extreme tails. If the tail is just a bit fatter than the normal approximation would suggest, the claim sort of falls apart. I believe that in most cases the CLT holds, or the data are well behaved enough, for p < 0.05 to be stated with good conscience, but I tend to disbelieve statements like p < 0.000001.
Obviously this has little to do with frequentist vs. Bayesian statistics; I have similar feelings about the tails of the posterior distribution. However, I think it might be one of the most overlooked problems in statistical practice, one that affects a lot of what we do. Just look at the -omics technologies, where we rely on the distribution of p-values to correct for false positives.
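To see this worry in numbers, here is a minimal sketch (my own illustration; the Student-t(30) is an arbitrary stand-in for “a bit fatter” tails): at z = 2 the two models give nearly the same p-value, while at z = 5 they differ by more than an order of magnitude.

```python
# Minimal sketch (illustration only): how much an extreme p-value inflates
# if the true tail is slightly fatter than the normal approximation assumes.
# The Student-t(30) is an arbitrary stand-in for "a bit fatter".
from scipy import stats

for z in (2.0, 5.0):
    p_normal = stats.norm.sf(z)       # one-sided tail area under normality
    p_fatter = stats.t.sf(z, df=30)   # same cutoff, modestly fatter tails
    print(f"z = {z}: normal p = {p_normal:.2e}, t(30) p = {p_fatter:.2e}, "
          f"inflation ~{p_fatter / p_normal:.0f}x")
```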
Erik: I think they’re extremely different cases, and I’ll come back to it this evening.
Erik: I think the use of the 5 sigmas in high energy physics (HEP) is very different. To begin with, it has to be a signal one can repeatedly generate–it’s not just one extremely far out difference from background rates. I’m working on this a bit now.
The “omics” technologies strike me as an erratic, error-filled mess, without the understanding of background mechanisms that they have in HEP. I want to study it some more now that I keep reading about it. But what do you mean by “we rely on the distribution of p-values to correct for false positives”? Do you mean things like FDRs?
@Erik: Though it is a complicated and laborious exercise, heavily reliant on Monte Carlo simulation methods, HEP analysts map out the likelihood function for each mass/energy bin in their search. Let’s say that after all the effort in simulating the likely results under the null H_0, taking into account calibration and other uncertainties, the likelihood function is a Poisson distribution with a mean rate of 2 events in a particular bin. The result from the actual experiment is 12 events in that bin, for a p-value of 2e-7 or 5.1 “sigma”.
But the search is done over multiple mass bins, say 50. Assuming the bins are independent (a conservative but often wrong assumption), one gets a global p-value of 1e-5 or 4.3 “sigma”. And if we really want to be pessimistic, we can assume that large numbers of events are more likely than first estimated by an order of magnitude. This brings the significance down to 3.7 “sigma”.
While your suspicion of VERY extreme p-values is perfectly justified, I think in this case they aren’t as problematic as they may seem at first.
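For concreteness, here is a minimal sketch of the arithmetic above (my own, not the commenter’s). It reproduces the quoted figures assuming the local p-value is taken as P(X > 12) under the Poisson(2) background model; under the convention P(X >= 12) the numbers come out somewhat larger.

```python
# Rough check of the figures in the comment above, assuming the local
# p-value convention P(X > n_obs) under a Poisson background model.
from scipy import stats

mu_bkg, n_obs = 2, 12                          # background mean, observed count
p_local = stats.poisson.sf(n_obs, mu_bkg)      # ~2e-7
z_local = stats.norm.isf(p_local)              # ~5.1 "sigma"

# Look-elsewhere correction, treating 50 mass bins as independent:
n_bins = 50
p_global = 1 - (1 - p_local) ** n_bins         # ~1e-5
z_global = stats.norm.isf(p_global)            # ~4.3 "sigma"

# Pessimistic scenario: large counts 10x more likely than estimated:
p_pessimistic = 10 * p_global                  # ~1e-4
z_pessimistic = stats.norm.isf(p_pessimistic)  # ~3.7 "sigma"

print(f"local:       p = {p_local:.1e} ({z_local:.1f} sigma)")
print(f"global:      p = {p_global:.1e} ({z_global:.1f} sigma)")
print(f"pessimistic: p = {p_pessimistic:.1e} ({z_pessimistic:.1f} sigma)")
```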
Well, admittedly I do not understand the full details of how the p-value for the Higgs boson was calculated. But in general, I would be surprised if there weren’t assumptions about the background noise that are not ideally true (small biases in software and/or hardware, and the reproducibility and appropriateness of what they consider standard background noise). Now, even if these factors play a role, their effect is minimal within the bulk of the H0 distribution. But the farther one goes into the tails of the distribution, the more skeptical I become. It’s always hard for a model (and that includes H0 models for background noise) to be correct about the extremes. As I said, I did not check it in detail and will do so if I have more time later.
Regarding the -omics fields, I think it depends a bit. Metabolomics, for example, is in a much better place and more stable than transcriptomics. And all -omics fields have the potential to offer a lot of understanding of biological processes. Even so, I only start to have real faith in my results if I can reproduce them and, ideally, use them for prediction in additional data sets.
And indeed, I was referring to corrections for false positives like Benjamini-Hochberg (but also Bonferroni), which depend strongly on how small the p-values actually get. That’s also one of the reasons why I have more faith in metabolomics, with a few hundred variables, than in transcriptomics, with thousands of variables, since you don’t have to rely quite so strongly on the tails of your supposed H0 distribution.
Disclaimer: I work in the metabolomics field myself.
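Since the Benjamini-Hochberg step-up procedure is doing the work in this exchange, here is a minimal sketch of it on made-up p-values (an illustration, not anyone’s production code). Note that with m tests the smallest p-value is compared against q/m, so with thousands of variables the procedure leans on exactly the deep tail of the H0 distribution the comment worries about.

```python
# Minimal sketch of the Benjamini-Hochberg step-up procedure on made-up
# p-values (illustration only).
import numpy as np

def benjamini_hochberg(pvals, q=0.05):
    """Return a boolean mask of rejections, controlling the FDR at level q."""
    p = np.asarray(pvals, dtype=float)
    m = len(p)
    order = np.argsort(p)                      # indices that sort the p-values
    thresholds = q * np.arange(1, m + 1) / m   # BH line: q * k / m
    passed = p[order] <= thresholds
    reject = np.zeros(m, dtype=bool)
    if passed.any():
        k_max = np.max(np.nonzero(passed)[0])  # largest k with p_(k) <= q*k/m
        reject[order[:k_max + 1]] = True       # reject the k_max+1 smallest
    return reject

pvals = [0.0001, 0.008, 0.039, 0.041, 0.20, 0.74]  # hypothetical p-values
print(benjamini_hochberg(pvals))  # [ True  True False False False False]
```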
Hi Mayo, I’ve been reading through everything you’ve published lately, and it is an enlightening experience even for a layman like me. I’m a bit puzzled, however, about your stance on FDRs, referred to as “science-wise error rates” on presentation slide 43, and would be grateful if you could clarify it for me.
When I hear “FDR” I think of Benjamini-Hochberg’s procedure for controlling false discovery rate and the accompanying p-value adjustments.
Since it is simply a more lax procedure (FWER-controlling in the weak sense) compared to Bonferroni (FWER-controlling in the strong sense), you surely can’t be speaking about that, right? If you are, I’d be grateful for links to information that could help me understand your criticism.
Or are you saying Ioannidis is simply applying B-H FDR in the wrong scenario?
Or are you speaking about a completely different thing: pFDR as proposed by J. Storey (2005)? (That’s my understanding so far.)
Thank you.
Geo: Yes, my criticism is closest to your remark:
“Or are you saying Ioannidis is simply applying B-H FDR in the wrong scenario?”
It’s as if science is just about controlling “science-wise error rates”. I will grant that in screening contexts this kind of computation may have a role, especially given all I’m reading about the unthinking nature of (some?) research that arrives at predictors by data-mining microarrays.
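To spell out the kind of “science-wise error rate” computation at issue, here is a back-of-envelope sketch; the level, power, and prevalence are hypothetical placeholders, not numbers from the slides or from Ioannidis.

```python
# Back-of-envelope "science-wise" false discovery rate, in the style of
# the computations criticized above. All three inputs are hypothetical.
alpha = 0.05        # significance level
power = 0.80        # probability of rejecting when the effect is real
prevalence = 0.10   # assumed fraction of tested hypotheses that are true

false_pos = alpha * (1 - prevalence)  # rate of true nulls wrongly rejected
true_pos = power * prevalence         # rate of real effects detected
science_wise_fdr = false_pos / (false_pos + true_pos)
print(f"fraction of rejections that are erroneous: {science_wise_fdr:.2f}")  # 0.36
```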