StatSci meets PhilSci

“A [very informal] Conversation Between Sir David Cox & D.G. Mayo”

In June 2011, Sir David Cox agreed to a very informal ‘interview’ on the topics of the 2010 workshop, Statistical Science and Philosophy of Science, which I co-ran at the London School of Economics (CPNSS) and where he was a speaker. Soon after I began taping, Cox stopped me in order to show me how to do a proper interview. He proceeded to ask me questions, beginning with:

COX: Deborah, in some fields foundations do not seem very important, but we both think foundations of statistical inference are important; why do you think that is?

MAYO: I think because they ask about fundamental questions of evidence, inference, and probability. I don’t think that foundations of different fields are all alike; because in statistics we’re so intimately connected to the scientific interest in learning about the world, we invariably cross into philosophical questions about empirical knowledge and inductive inference.

Continue reading

Categories: Birnbaum, Likelihood Principle, Sir David Cox, StatSci meets PhilSci

If you think it’s a scandal to be without statistical falsification, you will need statistical tests (ii)


1. PhilSci and StatSci. I’m always glad to come across statistical practitioners who wax philosophical, particularly when Karl Popper is cited. Best of all is when they get the philosophy somewhere close to correct. So, I came across an article by Burnham and Anderson (2014) in Ecology:

“While the exact definition of the so-called ‘scientific method’ might be controversial, nearly everyone agrees that the concept of ‘falsifiability’ is a central tenant [sic] of empirical science (Popper 1959). It is critical to understand that historical statistical approaches (i.e., P values) leave no way to ‘test’ the alternative hypothesis. The alternative hypothesis is never tested, hence cannot be rejected or falsified!… Surely this fact alone makes the use of significance tests and P values bogus. Lacking a valid methodology to reject/falsify the alternative science hypotheses seems almost a scandal.” (Burnham and Anderson, p. 629)

Well, I am (almost) scandalized by this easily falsifiable allegation! I can’t think of a single “alternative”, whether in a “pure” Fisherian or a Neyman-Pearson hypothesis test (explicit or implicit), that’s not falsifiable; nor do the authors provide any. I grant that understanding testability and falsifiability is far more complex than the kind of popularized accounts we hear about; granted as well, theirs is just a short paper.[1] But then why make bold declarations on the topic of the “scientific method and statistical science,” on falsifiability and testability?
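To see how easily the allegation is falsified, here is a minimal numerical sketch (mine, not theirs; the numbers are invented for illustration). In a one-sided Normal test, the very same tail-area reasoning that probes the null can be turned on an alternative:

```python
# Minimal sketch (invented numbers): "falsifying" the alternative in a
# one-sided Normal test of H0: mu = 0 vs H1: mu = 0.5, sigma known.
from math import sqrt
from scipy.stats import norm

sigma, n = 1.0, 25
se = sigma / sqrt(n)          # standard error of the sample mean
mu0, mu1 = 0.0, 0.5           # null and (point) alternative values
xbar = -0.1                   # observed sample mean (hypothetical)

# Usual p-value against the null (upper tail):
p_null = 1 - norm.cdf((xbar - mu0) / se)

# The alternative is just as testable: the probability, under H1, of a
# sample mean as small as (or smaller than) the one observed.
p_alt = norm.cdf((xbar - mu1) / se)

print(f"p-value against H0 (mu = {mu0}): {p_null:.3f}")  # ~0.69: no evidence against H0
print(f"p-value against H1 (mu = {mu1}): {p_alt:.3f}")   # ~0.001: H1 statistically falsified
```

Whatever one thinks of tail areas, nothing in the logic of tests prevents turning them on the alternative. Continue reading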

Categories: P-values, Severity, statistical tests, Statistics, StatSci meets PhilSci

Mayo & Parker “Using PhilStat to Make Progress in the Replication Crisis in Psych” SPSP Slides

Here are the slides from our talk at the Society for Philosophy of Science in Practice (SPSP) conference. I covered the first 27 slides, Parker the rest. The abstract is here:

Categories: P-values, reforming the reformers, replication research, Statistics, StatSci meets PhilSci

Announcing Kent Staley’s new book, An Introduction to the Philosophy of Science (CUP)


Kent Staley has written a clear and engaging introduction to PhilSci that manages to blend the central topics of philosophy of science with current philosophy of statistics. Quite possibly, in his 10-page section 9.4, Staley explains Error Statistics more clearly in many ways than I do. CONGRATULATIONS STALEY*

You can get this book for free by merely writing one of the simpler palindromes in the December contest.

Here’s an excerpt from that section:


9.4 Error-statistical philosophy of science and severe testing

Deborah Mayo has developed an alternative approach to the interpretation of frequentist statistical inference (Mayo 1996). But the idea at the heart of Mayo’s approach is one that can be stated without invoking probability at all. ….

Mayo takes the following “minimal scientific principle for evidence” to be uncontroversial:

Principle 3 (Minimal principle for evidence) Data x0 provide poor evidence for H if they result from a method or procedure that has little or no ability of finding flaws in H, even if H is false. (Mayo and Spanos, 2009, 3)
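For readers who want the formal counterpart to Principle 3, the severity requirement is standardly put along the following lines (my gloss on Mayo and Spanos 2009, so treat the notation as a sketch rather than a quotation):

```latex
% Severity requirement (sketch): H passes a severe test T with data x_0 iff
% (S-1) x_0 accords with H, and
% (S-2) with very high probability, T would have produced a result that
%       accords less well with H than x_0 does, were H false. That is,
\mathrm{SEV}(T, x_0, H) = P\big(\, d(X) \text{ accords less well with } H
  \text{ than } d(x_0) \text{ does} \,;\, H \text{ false} \,\big) \text{ is high.}
```

Continue reading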

Categories: Announcement, Palindrome, Statistics, StatSci meets PhilSci

Has Philosophical Superficiality Harmed Science?


I have been asked what I thought of some criticisms of the scientific relevance of philosophy of science, as discussed in the following snippet from a recent Scientific American blog. My title elicits the appropriate degree of ambiguity, I think. 

Quantum Gravity Expert Says “Philosophical Superficiality” Has Harmed Physics

By John Horgan | August 21, 2014

“I interviewed Rovelli by phone in the early 1990s when I was writing a story for Scientific American about loop quantum gravity, a quantum-mechanical version of gravity proposed by Rovelli, Lee Smolin and Abhay Ashtekar.[i]

Horgan: What’s your opinion of the recent philosophy-bashing by Stephen Hawking, Lawrence Krauss and Neil deGrasse Tyson?

Rovelli: Seriously: I think they are stupid in this.   I have admiration for them in other things, but here they have gone really wrong.  Look: Einstein, Heisenberg, Newton, Bohr…. and many many others of the greatest scientists of all times, much greater than the names you mention, of course, read philosophy, learned from philosophy, and could have never done the great science they did without the input they got from philosophy, as they claimed repeatedly. You see: the scientists that talk philosophy down are simply superficial: they have a philosophy (usually some ill-digested mixture of Popper and Kuhn) and think that this is the “true” philosophy, and do not realize that this has limitations.

Here is an example: theoretical physics has not done great in the last decades. Why? Well, one of the reasons, I think, is that it got trapped in a wrong philosophy: the idea that you can make progress by guessing new theory and disregarding the qualitative content of previous theories.  This is the physics of the “why not?”  Why not studying this theory, or the other? Why not another dimension, another field, another universe?  Science has never advanced in this manner in the past.  Science does not advance by guessing. It advances by new data or by a deep investigation of the content and the apparent contradictions of previous empirically successful theories.  Quite remarkably, the best piece of physics done by the three people you mention is Hawking’s black-hole radiation, which is exactly this.  But most of current theoretical physics is not of this sort.  Why?  Largely because of the philosophical superficiality of the current bunch of scientists.”

I find it intriguing that Rovelli suggests that “Science does not advance by guessing. It advances by new data or by a deep investigation of the content and the apparent contradictions of previous empirically successful theories.” I think this is an interesting and subtle claim with which I agree. Continue reading

Categories: StatSci meets PhilSci, strong likelihood principle

What did Nate Silver just say? Blogging the JSM 2013

Memory Lane: August 6, 2013. My initial post on JSM13 (8/5/13) was here.

Nate Silver gave his ASA Presidential talk to a packed audience (with questions tweeted[i]). Here are some quick thoughts—based on scribbled notes (from last night). Silver gave a list of 10 points that went something like this (turns out there were 11):

1. statistics are not just numbers

2. context is needed to interpret data

3. correlation is not causation

4. averages are the most useful tool

5. human intuitions about numbers tend to be flawed and biased

6. people misunderstand probability

7. we should be explicit about our biases and (in this sense) should be Bayesian?

8. complexity is not the same as not understanding

9. being in the in crowd gets in the way of objectivity

10. making predictions improves accountability Continue reading

Categories: Statistics, StatSci meets PhilSci

Sir David Hendry Gets Lifetime Achievement Award

Sir David Hendry, Professor of Economics at the University of Oxford [1], was given the Celebrating Impact Lifetime Achievement Award on June 8, 2014. Professor Hendry presented his automatic model selection program (Autometrics) at our conference, Statistical Science and Philosophy of Science (June, 2010). (Site is here.) I’m posting an interesting video and related links. I invite comments on the paper Hendry published, “Empirical Economic Model Discovery and Theory Evaluation,” in our special volume of Rationality, Markets, and Morals (abstract below). [2]

One of the world’s leading economists, INET Oxford’s Prof. Sir David Hendry received a unique award from the Economic and Social Research Council (ESRC)…
Continue reading

Categories: David Hendry, StatSci meets PhilSci

“Statistical Science and Philosophy of Science: where should they meet?”


Four score years ago (!) we held the conference “Statistical Science and Philosophy of Science: Where Do (Should) They meet?” at the London School of Economics, Center for the Philosophy of Natural and Social Science (CPNSS), where I’m visiting professor.[1] Many of the discussions on this blog grew out of contributions from the conference, and conversations initiated soon after. The conference site is here; my paper on the general question is here.[2]

My main contribution was “Statistical Science Meets Philosophy of Science Part 2: Shallow versus Deep Explorations” SS & POS 2. It begins like this: 

1. Comedy Hour at the Bayesian Retreat[3]

 Overheard at the comedy hour at the Bayesian retreat: Did you hear the one about the frequentist… Continue reading

Categories: Error Statistics, Philosophy of Statistics, Severity, Statistics, StatSci meets PhilSci

The Science Wars & the Statistics Wars: More from the Scientism workshop

Here are the slides from my presentation (May 17) at the Scientism workshop in NYC. (They’re sketchy since we were trying for 25-30 minutes.) Below them are some mini notes on some of the talks.

Now for my informal notes. Here’s a link to the Speaker abstracts; the presentations may now be found at the conference site here. Comments, questions, and corrections are welcome. Continue reading

Categories: evidence-based policy, frequentist/Bayesian, Higgs, P-values, scientism, Statistics, StatSci meets PhilSci

Scientism and Statisticism: a conference* (i)

A lot of philosophers and scientists seem to be talking about scientism these days–either championing it or worrying about it. What is it? It’s usually a pejorative term describing an unwarranted deference to the so-called scientific method over and above other methods of inquiry. Some push it as a way to combat postmodernism (is that even still around?). Steven Pinker gives scientism a positive spin (and even offers it as a cure for the malaise of the humanities!)[1]. Anyway, I’m to talk at a conference on Scientism (*not statisticism, that’s my word) taking place in NYC May 16-17. It is organized by Massimo Pigliucci (chair of philosophy at CUNY-Lehman), who has written quite a lot on the topic in the past few years. Information can be found here. In thinking about scientism for this conference, however, I was immediately struck by this puzzle: Continue reading

Categories: Announcement, PhilStatLaw, science communication, Statistical fraudbusting, StatSci meets PhilSci

Statistical Science meets Philosophy of Science: blog beginnings

“StatSci meets PhilSci”. (1/14/14)
As “Wasserman on Wasserman” (and links within) continues to rack up record hits (N.D.: you see you shouldn’t quit blogging)*, I’ve been asked about the origins of his and related discussions on this blog. For a quick answer:** many grew out of  attempts to tackle the general question: “Statistical Science and Philosophy of Science: Where Do (Should) They meet?”–the title of a conference I organized (with A. Spanos***) at the London School of Economics, Center for the Philosophy of Natural and Social Science, CPNSS, in June 2010. In tackling this question, errorstatistics.com regularly returns to a set of contributions stemming from the conference, and conversations initiated soon after (with Andrew Gelman and Larry Wasserman)****. The conference site is here.  My reflections in this general arena (Sept. 26, 2012) are here.

Opening with an informal (recorded) exchange, “A statistical scientist meets a philosopher of science: a conversation between Sir David Cox and Deborah Mayo”, this special topic of the on-line journal, Rationality, Markets and Morals (RMM), edited by Max Albert[i] (also a conference participant), has been an excellent home for continual updates (to which we may return at some point!).

Authors are: David Cox, Andrew Gelman, David F. Hendry, Deborah G. Mayo, Stephen Senn, Aris Spanos, Jan Sprenger, Larry Wasserman

To those who ask me what to read as background to some of the issues, have a look at those contributions. Many of them are discussed in specific blogposts (with “deconstructions” [by me], responses by authors, and insightful “U-Phil” analyses by readers) and comments.[ii] (Search under U-Phil.) I have gathered a list of issues that we either haven’t taken up, or need to return to.

Here is the RMM blurb:

Rationality, Markets and Morals: Studies at the Intersection of Philosophy and Economics
Guest Editors: Deborah G. Mayo, Aris Spanos and Kent W. Staley

Statistical Science Meets Philosophy of Science: The Two-Way Street

At one level of analysis, statisticians and philosophers of science ask many of the same questions: What should be observed and what may justifiably be inferred from the resulting data? How well-tested or confirmed are hypotheses with data? How can statistical models and methods bridge the gaps between data and scientific claims of interest? These general questions are entwined with long-standing philosophical debates, so it is no wonder that statistics crosses over so often into philosophical territory.

The “meeting grounds” of statistical science and philosophy of science are or should be connected by a two-way street: while general philosophical questions about evidence and inference bear on statistical questions (about methods to use, and how to interpret them), statistical methods bear on philosophical problems about inference and knowledge. As interesting as this two-way street has been over many years, we seem to be in need of some entirely new traffic patterns! That is the basis for this forum. 


[i] Along with Hartmut Kliemt and Bernd Lahno.

[ii] The “deconstruction” activity on this blog began with my reaction to a paper by Jim Berger, in a recently reblogged post. Berger had replied in ‘Jim Berger on Jim Berger’. 

*From the WordPress 2013 “annual report”: The busiest day of the year was February 18th. The most popular post that day was R. A. Fisher: how an outsider revolutionized statistics.

Also attracting huge hits was the guest post by Larry Laudan: Why Presuming Innocence is Not a Bayesian Prior: https://errorstatistics.com/2013/07/20/guest-post-larry-laudan-why-presuming-innocence-is-not-a-bayesian-prior/ Many other biggies (especially from guest posters) have attracted a large number of comments and views.

** This post adapts an earlier one here. This blog is on philosophy, after all: only careful and frequent rereading brings illumination.

***For a full list of collaborators, sponsors, logisticians, and related collaborations, see the conference page. The full list of speakers is found there as well. Should we do a 2015 update? or wait for ERROR 2016?

****Conference participants who never got around to sending papers: I think there’s still time.

Categories: StatSci meets PhilSci

Two Severities? (PhilSci and PhilStat)

The blog “It’s Chancy” (Corey Yanofsky) has a post today about “two severities” which warrants clarification. Two distinctions are being blurred: between formal and informal severity assessments, and between a statistical philosophy (something Corey says he’s interested in) and its relevance to philosophy of science (which he isn’t). I call the latter an error statistical philosophy of science. The former requires formal, semi-formal, and informal severity assessments. Here’s his post:

In the comments to my first post on severity, Professor Mayo noted some apparent and some actual misstatements of her views. To avert misunderstandings, she directed readers to two of her articles, one of which opens by making this distinction:

“Error statistics refers to a standpoint regarding both (1) a general philosophy of science and the roles probability plays in inductive inference, and (2) a cluster of statistical tools, their interpretation, and their justification.”

In Mayo’s writings I see  two interrelated notions of severity corresponding to the two items listed in the quote: (1) an informal severity notion that Mayo uses when discussing philosophy of science and specific scientific investigations, and (2) Mayo’s formalization of severity at the data analysis level.

One of my besetting flaws is a tendency to take a narrow conceptual focus to the detriment of the wider context. In the case of Severity, part one, I think I ended up making claims about severity that were wrong. I was narrowly focused on severity in sense (2) — in fact, on one specific equation within (2) — but used a mish-mash of ideas and terminology drawn from all of my readings of Mayo’s work. When read through a philosophy-of-science lens, the result is a distorted and misstated version of severity in sense (1).

As a philosopher of science, I’m a rank amateur; I’m not equipped to add anything to the conversation about severity as a philosophy of science. My topic is statistics, not philosophy, and so I want to warn readers against interpreting Severity, part one as a description of Mayo’s philosophy of science; it’s more of a wordy introduction to the formal definition of severity in sense (2). (It’s Chancy, Jan 11, 2014)

A needed clarification may be found in a post of mine which begins: 

Error statistics: (1) There is a “statistical philosophy” and a philosophy of science. (a) An error-statistical philosophy alludes to the methodological principles and foundations associated with frequentist error-statistical methods. (b) An error-statistical philosophy of science, on the other hand, involves using the error-statistical methods, formally or informally, to deal with problems of philosophy of science: to model scientific inference (actual or rational), to scrutinize principles of inference, and to address philosophical problems about evidence and inference (the problem of induction, underdetermination, warranting evidence, theory testing, etc.).

I assume the interest here* is on the former, (a). I have stated it in numerous ways, but the basic position is that inductive inference—i.e., data-transcending inference—calls for methods of controlling and evaluating error probabilities (even if only approximate). An inductive inference, in this conception, takes the form of inferring hypotheses or claims to the extent that they have been well tested. It also requires reporting claims that have not passed severely, or have passed with low severity. In the “severe testing” philosophy of induction, the quantitative assessment offered by error probabilities tells us not “how probable” but, rather, “how well probed” hypotheses are.  The local canonical hypotheses of formal tests and estimation methods need not be the ones we entertain post data; but they give us a place to start without having to go “the designer-clothes” route.

The post-data interpretations might be formal, semi-formal, or informal.
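To make (a) concrete, here is what a formal post-data severity assessment looks like in the simplest canonical case, a one-sided Normal test with known standard deviation (a sketch with invented numbers, not a quotation of anyone’s analysis):

```python
# Sketch of a post-data severity calculation for a one-sided Normal test of
# H0: mu <= 0 vs H1: mu > 0, sigma known. Numbers invented for illustration.
from math import sqrt
from scipy.stats import norm

sigma, n = 1.0, 100
se = sigma / sqrt(n)
xbar = 0.25                   # observed mean; z = 2.5, so H0 is rejected

def severity(mu1: float) -> float:
    """Severity for inferring mu > mu1 after rejection: the probability of a
    result according *less* well with that claim (a smaller sample mean than
    the one observed), computed under mu = mu1."""
    return norm.cdf((xbar - mu1) / se)

for mu1 in (0.0, 0.1, 0.2, 0.3):
    print(f"SEV(mu > {mu1:.1f}) = {severity(mu1):.3f}")
# -> 0.994, 0.933, 0.691, 0.309: "mu > 0.1" has passed severely,
#    "mu > 0.3" has not, even though H0 was rejected.
```

The report is thus about how well probed various discrepancies are, not how probable they are.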

See also: Staley’s review of Error and Inference (Mayo and Spanos eds.)

Categories: Review of Error and Inference, Severity, StatSci meets PhilSci

Gandenberger: How to Do Philosophy That Matters (guest post)

Greg Gandenberger
Philosopher of Science
University of Pittsburgh
gandenberger.org

Genuine philosophical problems are always rooted in urgent problems outside philosophy,
and they die if these roots decay
Karl Popper (1963, 72)

My concern in this post is how we philosophers can use our skills to do work that matters to people both inside and outside of philosophy.

Philosophers are highly skilled at conceptual analysis, in which one takes an interesting but unclear concept and attempts to state precisely when it applies and when it doesn’t.

What is the point of this activity? In many cases, this question has no satisfactory answer. Conceptual analysis becomes an end in itself, and philosophical debates become fruitless arguments about words. The pleasure we philosophers take in such arguments hardly warrants scarce government and university resources. It does provide good training in critical thinking, but so do many other activities that are also immediately useful, such as doing science and programming computers.

Conceptual analysis does not have to be pointless. It is often prompted by a real-world problem. In Plato’s Euthyphro, for instance, the character Euthyphro thought that piety required him to prosecute his father for murder. His family thought on the contrary that for a son to prosecute his own father was the height of impiety. In this situation, the question “what is piety?” took on great urgency. It also had great urgency for Socrates, who was awaiting trial for corrupting the youth of Athens.

In general, conceptual analysis often begins as a response to some question about how we ought to regulate our beliefs or actions. It can be a fruitful activity as long as the questions that prompted it are kept in view. It tends to degenerate into merely verbal disputes when it becomes an end in itself.

The kind of goal-oriented view of conceptual analysis I aim to articulate and promote is not teleosemantics: it is a view about how philosophy should be done rather than a theory of meaning. It is consistent with Carnap’s notion of explication (one of the desiderata of which is fruitfulness) (Carnap 1963, 5), but in practice Carnapian explication seems to devolve into idle word games just as easily as conceptual analysis. Our overriding goal should not be fidelity to intuitions, precision, or systematicity, but usefulness.

How I Became Suspicious of Conceptual Analysis

When I began working on proofs of the Likelihood Principle, I assumed that following my intuitions about the concept of “evidential equivalence” would lead to insights about how science should be done. Birnbaum’s proof showed me that my intuitions entail the Likelihood Principle, which frequentist methods violate. Voila! Scientists shouldn’t use frequentist methods. All that remained to be done was to fortify Birnbaum’s proof, as I do in “A New Proof of the Likelihood Principle” by defending it against objections and buttressing it with an alternative proof. [Editor: For a number of related materials on this blog see Mayo’s JSM presentation, and note [i].]
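For reference, the principle at stake can be stated roughly as follows (a standard formulation, not a quotation; exact statements vary, so treat this as a sketch):

```latex
% Likelihood Principle (rough statement): for experiments E_1, E_2 about the
% same parameter theta, with outcomes x, y:
\text{If } L(\theta \mid x) = c\, L(\theta \mid y) \text{ for all } \theta
\text{ (some } c > 0\text{), then } \mathrm{Ev}(E_1, x) = \mathrm{Ev}(E_2, y).
% Birnbaum (1962) argues Sufficiency + Conditionality entail this principle;
% frequentist error probabilities, which depend on the sampling plan, violate it.
```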

After working on this topic for some time, I realized that I was making simplistic assumptions about the relationship between conceptual intuitions and methodological norms. At most, a proof of the Likelihood Principle can show you that frequentist methods run contrary to your intuitions about evidential equivalence. Even if those intuitions are true, it does not follow immediately that scientists should not use frequentist methods. The ultimate aim of science, presumably, is not to respect evidential equivalence but (roughly) to learn about the world and make it better. The demand that scientists use methods that respect evidential equivalence is warranted only insofar as it is conducive to achieving those ends. Birnbaum’s proof says nothing about that issue.

  • In general, a conceptual analysis–even of a normatively freighted term like “evidence”–is never enough by itself to justify a normative claim. The questions that ultimately matter are not about “what we mean” when we use particular words and phrases, but rather about what our aims are and how we can best achieve them.

How to Do Conceptual Analysis Teleologically

This is not to say that my work on the Likelihood Principle or conceptual analysis in general is without value. But it is nothing more than a kind of careful lexicography. This kind of work is potentially useful for clarifying normative claims with the aim of assessing and possibly implementing them. To do work that matters, philosophers engaged in conceptual analysis need to take enough interest in the assessment and implementation stages to do their conceptual analysis with the relevant normative claims in mind.

So what does this kind of teleological (goal-oriented) conceptual analysis look like?

It can involve personally following through on the process of assessing and implementing the relevant norms. For example, philosophers at Carnegie Mellon University working on causation have not only provided a kind of analysis of the concept of causation but also developed algorithms for causal discovery, proved theorems about those algorithms, and applied those algorithms to contemporary scientific problems (see e.g. Spirtes et al. 2000).

I have great respect for this work. But doing conceptual analysis does not have to mean going so far outside the traditional bounds of philosophy. A perfect example is James Woodward’s related work on causal explanation, which he describes as follows (2003, 7-8, original emphasis):

My project…makes recommendations about what one ought to mean by various causal and explanatory claims, rather than just attempting to describe how we use those claims. It recognizes that causal and explanatory claims sometimes are confused, unclear, and ambiguous and suggests how those limitations might be addressed…. we introduce concepts…and characterize them in certain ways…because we want to do things with them…. Concepts can be well or badly designed for such purposes, and we can evaluate them accordingly.

Woodward keeps his eye on what the notion of causation is for, namely distinguishing between relationships that do and relationships that do not remain invariant under interventions. This distinction is enormously important because only relationships that remain invariant under interventions provide “handles” we can use to change the world.
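A toy simulation (mine, purely illustrative, with an invented setup) makes the point vivid: a correlation produced by a common cause predicts well observationally but vanishes under an intervention, so it offers no handle for change:

```python
# Toy illustration (invented setup) of invariance under intervention: X and Y
# are correlated via a common cause Z, but intervening on X (setting it
# exogenously) destroys the X-Y association.
import numpy as np

rng = np.random.default_rng(0)
n = 100_000

# Observational regime: Z is a common cause of X and Y (no X -> Y arrow).
z = rng.normal(size=n)
x_obs = z + rng.normal(scale=0.5, size=n)
y_obs = z + rng.normal(scale=0.5, size=n)

# Interventional regime: do(X = x_int), i.e., X is set independently of Z;
# Y's own mechanism is unchanged.
x_int = rng.normal(size=n)
y_int = z + rng.normal(scale=0.5, size=n)

print(f"corr(X, Y) observational: {np.corrcoef(x_obs, y_obs)[0, 1]:.2f}")  # ~0.8
print(f"corr(X, Y) under do(X):   {np.corrcoef(x_int, y_int)[0, 1]:.2f}")  # ~0.0
```

On Woodward’s account, the X-Y relationship fails the invariance test and so is not causal: wiggling X does nothing to Y.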

Here are some lessons about teleological conceptual analysis that we can take from Woodward’s work. (I’m sure this list could be expanded.)

  1. Teleological conceptual analysis puts us in charge. In his wonderful presidential address at the 2012 meeting of the Philosophy of Science Association, Woodward ended a litany of metaphysical arguments against regarding mental events as causes by asking “Who’s in charge here?” There is no ideal form of Causation to which we must answer. We are free to decide to use “causation” and related words in the ways that best serve our interests.
  2. Teleological conceptual analysis can be revisionary. If ordinary usage is not optimal, we can change it.
  3. The product of a teleological conceptual analysis need not be unique. Some philosophers reject Woodward’s account because they regard causation as a process rather than as a relationship among variables. But why do we need to choose? There could just be two different notions of causation. Woodward’s account captures one notion that is very important in science and everyday life. If it captures all of the causal notions that are important, then so much the better. But this kind of comprehensiveness is not essential.
  4. Teleological conceptual analysis can be non-reductive. Woodward characterizes causal relations as (roughly) correlation relations that are invariant under certain kinds of interventions. But the notion of an intervention is itself causal. Woodward’s account is not circular because it characterizes what it means for a causal relationship to hold between two variables in terms of different causal processes involving different sets of variables. But it is non-reductive in the sense that it does not allow us to replace causal claims with equivalent non-causal claims (as, e.g., counterfactual, regularity, probabilistic, and process theories purport to do). This fact is a problem if one’s primary concern is to reduce one’s ultimate metaphysical commitments, but it is not necessarily a problem if one’s primary concern is to improve our ability to assess and use causal claims.

Conclusion

Philosophers rarely succeed in capturing all of our intuitions about an important informal concept. Even if they did succeed, they would have more work to do in justifying any norms that invoke that concept. Conceptual analysis can be a first step toward doing philosophy that matters, but it needs to be undertaken with the relevant normative claims in mind.

Question: What are your best examples of philosophy that matters? What can we learn from them?


Citations

  • Birnbaum, Allan. “On the Foundations of Statistical Inference.” Journal of the American Statistical Association 57.298 (1962): 269-306.
  • Carnap, Rudolf. Logical Foundations of Probability. U of Chicago Press, 1963.
  • Gandenberger, Greg. “A New Proof of the Likelihood Principle.” The British Journal for the Philosophy of Science (forthcoming).
  • Plato. Euthyphro. http://classics.mit.edu/Plato/euthyfro.html.
  • Popper, Karl. Conjectures and Refutations. London: Routledge & Kegan Paul, 1963.
  • Spirtes, Peter, Clark Glymour, and Richard Scheines. Causation, Prediction, and Search. Vol. 81. The MIT Press, 2000.
  • Woodward, James. Making Things Happen: A Theory of Causal Explanation. Oxford University Press, 2003.

[i] Earlier posts are here and here. Some U-Phils are here, here, and here. For some amusing notes, see, e.g., “Don’t Birnbaumize that experiment my friend” and “Midnight with Birnbaum”.

Some related papers:

  • Cox, D. R. and Mayo, D. G. (2010). “Objectivity and Conditionality in Frequentist Inference” in Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (D. Mayo and A. Spanos, eds.), Cambridge: Cambridge University Press: 276-304.
Categories: Birnbaum Brakes, Likelihood Principle, StatSci meets PhilSci

11th bullet, multiple choice question, and last thoughts on the JSM

I. Apparently I left out the last bullet in my scribbled notes from Silver’s talk. There was an 11th. Someone sent it to me from the Revolution Analytics blog:

11. Like scientists, journalists ought to be more concerned with the truth rather than just appearances. He suggested that maybe they should abandon the legal paradigm of seeking an adversarial approach and behave more like scientists looking for the truth.

OK. But, given some of the issues swirling around the last few posts, I think it’s worth noting that scientists are not disinterested agents looking for the truth—it’s only thanks to science’s (adversarial!) methods that they advance upon truth. Question: What’s the secret of scientific progress (in those areas that advance learning)? Answer: Even if each individual scientist were to strive mightily to ensure that his/her theory wins out, the stringent methods of the enterprise force that theory to show its mettle or die (or at best remain in limbo). You might say, “But there are plenty of stubborn hard cores in science”. Sure, and they fail to advance. In those sciences that lack sufficiently stringent controls, the rate of uncorrected spin is as bad as Silver suggests it is in journalism. Think of social psychologist Diederik Stapel setting out to show what is already presumed to be believable. (See here and here, and search this blog.)

There’s a strange irony when the same people who proclaim, “We must confront those all too human flaws and foibles that obstruct the aims of truth and correctness”, turn out to be enablers, championing methods that allow those flaws and foibles to seep through. It may be a slip of logic. Here’s a multiple choice question:

Multiple choice: Circle all phrases that correctly complete the “conclusion“:

Let’s say that factor F is known to obstruct the correctness/validity of solutions to problems, or that factor F is known to adversely impinge on inferences.

(Examples of such factors include: biases, limited information, incentives—of various sorts).

Factor F is known to adversely influence inferences.

Conclusion: Therefore any adequate systematic account of inference should _______

(a) allow F to influence inferences.
(b) provide a formal niche by which F can influence inferences.
(c) take precautions to block (or at least be aware of) the ability of F to adversely influence inferences.
(d) none of the above.

(For an example, see the discussion of #7 in the previous post.)

II. I may be overlooking sessions (inform me if you know of any), but I would have expected more on the statistics in the Higgs boson discoveries at the JSM 2013. Especially given the desire to emphasize the widespread contributions of statistics to the latest sexy science[i]. (At one point I was asked by David Banks, because of my related blog posts (e.g., here), about being part of a session on the five sigma effect in the Higgs boson discovery–not that I’m any kind of expert–but people were already in other sessions. But I’m thinking about something splashy by statisticians in particle physics.) Did I miss it? [ii]

III. I think it’s easy to see why lots of people showed up to hear Nate Silver: It’s fun to see someone “in the news”, be it from politics, finance, high tech, acting, TV, or even academics–I, for one, was curious. I’m sure as many would have come out to hear Esther Duflo, Sheryl Sandberg, Fabiola Gianotti, or even Huma Abedin–to list some that happen to come to mind–or any number of others who have achieved recent recognition (and whose work intersects in some way with statistics). It’s interesting that I don’t see pop philosophers invited to give keynote addresses at yearly philosophy meetings; maybe because philosophers eschew popularity. I may be unaware of some; I don’t attend so many meetings.

IV. Other thoughts: I’ve only been to a handful of “official” statistics meetings. Obviously the # of simultaneous sessions makes the JSM a kind of factory experience, but that’s to be expected. But do people really need to purchase those JSM backpacks? I don’t know how much of the $400 registration fee goes to that, but it seems wasteful…. I saw people tossing theirs out, which I didn’t have the heart to do. Perhaps I’m just showing my outsider status.

V. Montreal: I intended to practice my French, but kept bursting into English too soon. Everyone I met (who lives there) complained about the new money and doing away with pennies in the near future. I wonder if we’re next.

[i] On Silver’s remark (in response to a “tweeted” question) that “data science” is a “sexed-up” term for statistics, I don’t know. I can see reflecting deeply over the foundations of statistical inference, but over the foundations of data analytics?

[ii] You don’t suppose the controversy about particle physics being “bad science” had anything to do with downplaying the Higgs statistics?

Categories: Higgs, Statistics, StatSci meets PhilSci

What should philosophers of science do? (Higgs, statistics, Marilyn)

Marilyn Monroe not walking past a Higgs boson and not making it decay, whatever philosophers might say.

My colleague, Lydia Patton, sent me this interesting article, “The Philosophy of the Higgs,” (from The Guardian, March 24, 2013) when I began the posts on “statistical flukes” in relation to the Higgs experiments (here and here); I held off posting it partly because of the slightly sexist attention-getter pic  of Marilyn (in reference to an “irrelevant blonde”[1]), and I was going to replace it, but with what?  All the men I regard as good-looking have dark hair (or no hair). But I wanted to take up something in the article around now, so here it is, a bit dimmed. Anyway apparently MM was not the idea of the author, particle physicist Michael Krämer, but rather a group of philosophers at a meeting discussing philosophy of science and science. In the article, Krämer tells us:

For quite some time now, I have collaborated on an interdisciplinary project which explores various philosophical, historical and sociological aspects of particle physics at the Large Hadron Collider (LHC). For me it has always been evident that science profits from a critical assessment of its methods. “What is knowledge?”, and “How is it acquired?” are philosophical questions that matter for science. The relationship between experiment and theory (what impact does theoretical prejudice have on empirical findings?) or the role of models (how can we assess the uncertainty of a simplified representation of reality?) are scientific issues, but also issues from the foundation of philosophy of science. In that sense they are equally important for both fields, and philosophy may add a wider and critical perspective to the scientific discussion. And while not every particle physicist may be concerned with the ontological question of whether particles or fields are the more fundamental objects, our research practice is shaped by philosophical concepts. We do, for example, demand that a physical theory can be tested experimentally and thereby falsified, a criterion that has been emphasized by the philosopher Karl Popper already in 1934. The Higgs mechanism can be falsified, because it predicts how Higgs particles are produced and how they can be detected at the Large Hadron Collider.

On the other hand, some philosophers tell us that falsification is strictly speaking not possible: What if a Higgs property does not agree with the standard theory of particle physics? How do we know it is not influenced by some unknown and thus unaccounted factor, like a mysterious blonde walking past the LHC experiments and triggering the Higgs to decay? (This was an actual argument given in the meeting!) Many interesting aspects of falsification have been discussed in the philosophical literature. “Mysterious blonde”-type arguments, however, are philosophical quibbles and irrelevant for scientific practice, and they may contribute to the fact that scientists do not listen to philosophers.

I entirely agree that philosophers have wasted a good deal of energy maintaining that it is impossible to solve Duhemian problems of where to lay the blame for anomalies. They misrepresent the very problem by supposing there is a need to string together a tremendously long conjunction consisting of a hypothesis H and a bunch of auxiliaries Ai which are presumed to entail observation e. But neither scientists nor ordinary people would go about things in this manner. The mere ability to distinguish the effects of different sources suffices to pinpoint blame for an anomaly. For some posts on falsification, see here and here*.
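Schematically, the reconstruction I have in mind runs as follows (a standard rendering, supplied here for reference):

```latex
% A hypothesis H entails an observable prediction e only together with
% auxiliary assumptions A_1, ..., A_n:
(H \wedge A_1 \wedge \cdots \wedge A_n) \models e
% so an anomalous outcome licenses only an undifferentiated disjunction:
\neg e \;\Rightarrow\; \neg H \vee \neg A_1 \vee \cdots \vee \neg A_n
```

Logic alone stops there; it is the piecemeal probing of the auxiliaries, each with its own error properties, that lets us go further.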

The question of why scientists do not listen to philosophers was also a central theme of the recent inaugural conference of the German Society for Philosophy of Science. I attended the conference to present some of the results of our interdisciplinary research group on the philosophy of the Higgs. I found the meeting very exciting and enjoyable, but was also surprised by the amount of critical self-reflection. Continue reading

Categories: Higgs, Statistics, StatSci meets PhilSci

Statistical Science meets Philosophy of Science

Many of the discussions on this blog have revolved around a cluster of issues under the general question: “Statistical Science and Philosophy of Science: Where Do (Should) They meet? (in the contemporary landscape)?” In tackling these issues, this blog regularly returns to a set of contributions growing out of a conference with the same title (June 2010, London School of Economics, Center for the Philosophy of Natural and Social Science, CPNSS), as well as to conversations initiated soon after. The conference site is here. My most recent reflections in this arena (Sept. 26, 2012) are here. Continue reading

Categories: Statistics, StatSci meets PhilSci
