Philosopher of Science
University of Pittsburgh
Genuine philosophical problems are always rooted in urgent problems outside philosophy,
and they die if these roots decay
Karl Popper (1963, 72)
My concern in this post is how we philosophers can use our skills to do work that matters to people both inside and outside of philosophy.
Philosophers are highly skilled at conceptual analysis, in which one takes an interesting but unclear concept and attempts to state precisely when it applies and when it doesn’t.
What is the point of this activity? In many cases, this question has no satisfactory answer. Conceptual analysis becomes an end in itself, and philosophical debates become fruitless arguments about words. The pleasure we philosophers take in such arguments hardly warrants scarce government and university resources. It does provide good training in critical thinking, but so do many other activities that are also immediately useful, such as doing science and programming computers.
Conceptual analysis does not have to be pointless. It is often prompted by a real-world problem. In Plato’s Euthyphro, for instance, the character Euthyphro thought that piety required him to prosecute his father for murder. His family thought on the contrary that for a son to prosecute his own father was the height of impiety. In this situation, the question “what is piety?” took on great urgency. It also had great urgency for Socrates, who was awaiting trial for corrupting the youth of Athens.
In general, conceptual analysis often begins as a response to some question about how we ought to regulate our beliefs or actions. It can be a fruitful activity as long as the questions that prompted it are kept in view. It tends to degenerate into merely verbal disputes when it becomes an end in itself.
The kind of goal-oriented view of conceptual analysis I aim to articulate and promote is not teleosemantics: it is a view about how philosophy should be done rather than a theory of meaning. It is consistent with Carnap’s notion of explication (one of the desiderata of which is fruitfulness) (Carnap 1963, 5), but in practice Carnapian explication seems to devolve into idle word games just as easily as conceptual analysis. Our overriding goal should not be fidelity to intuitions, precision, or systematicity, but usefulness.
How I Became Suspicious of Conceptual Analysis
When I began working on proofs of the Likelihood Principle, I assumed that following my intuitions about the concept of “evidential equivalence” would lead to insights about how science should be done. Birnbaum’s proof showed me that my intuitions entail the Likelihood Principle, which frequentist methods violate.
Voilà! Scientists shouldn’t use frequentist methods. All that remained to be done was to fortify Birnbaum’s proof, as I do in “A New Proof of the Likelihood Principle” by defending it against objections and buttressing it with an alternative proof. [Editor: For a number of related materials on this blog see Mayo’s JSM presentation, and note [i].]
After working on this topic for some time, I realized that I was making simplistic assumptions about the relationship between conceptual intuitions and methodological norms. At most, a proof of the Likelihood Principle can show you that frequentist methods run contrary to your intuitions about evidential equivalence. Even if those intuitions are true, it does not follow immediately that scientists should not use frequentist methods. The ultimate aim of science, presumably, is not to respect evidential equivalence but (roughly) to learn about the world and make it better. The demand that scientists use methods that respect evidential equivalence is warranted only insofar as it is conducive to achieving those ends. Birnbaum’s proof says nothing about that issue.
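The violation at issue can be made concrete with the standard coin-flip illustration (a classic example from the statistics literature, not from the post itself). Two experimenters observe the same data, 9 heads and 3 tails, under different stopping rules: one fixed the number of flips at 12, the other flipped until the 3rd tail. Their likelihood functions are proportional, so the Likelihood Principle deems the outcomes evidentially equivalent, yet their frequentist p-values against the fair-coin hypothesis land on opposite sides of 0.05. A minimal sketch using only the Python standard library:

```python
from math import comb

theta = 0.5  # null hypothesis: the coin is fair

# Design 1: flip exactly n = 12 times (binomial stopping rule).
# p-value = P(at least 9 heads | theta = 0.5)
p_binom = sum(comb(12, k) * theta**k * (1 - theta)**(12 - k) for k in range(9, 13))

# Design 2: flip until the 3rd tail (negative binomial stopping rule);
# the 3rd tail arrived on trial 12, i.e. after 9 heads.
# p-value = P(at least 9 heads before the 3rd tail | theta = 0.5)
p_negbin = 1 - sum(comb(k + 2, 2) * theta**k * (1 - theta)**3 for k in range(0, 9))

print(round(p_binom, 4), round(p_negbin, 4))  # 0.073 vs 0.0327

# Yet for every value of theta the two likelihoods differ only by a
# constant factor (220/55 = 4), so the Likelihood Principle says the
# two results carry exactly the same evidence about theta.
for t in (0.3, 0.5, 0.7):
    lik_binom = comb(12, 9) * t**9 * (1 - t)**3
    lik_negbin = comb(11, 9) * t**9 * (1 - t)**3
    assert abs(lik_binom / lik_negbin - 4.0) < 1e-12
```

At the conventional 0.05 threshold, the same data “reject” the fair coin under one stopping rule and not the other, which is exactly the kind of intuition-violating behavior a Birnbaum-style proof trades on.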
In general, a conceptual analysis, even of a normatively freighted term like “evidence,” is never enough by itself to justify a normative claim. The questions that ultimately matter are not about “what we mean” when we use particular words and phrases, but rather about what our aims are and how we can best achieve them.
How to Do Conceptual Analysis Teleologically
This is not to say that my work on the Likelihood Principle, or conceptual analysis in general, is without value. But by itself it is little more than a kind of careful lexicography. This kind of work is potentially useful for clarifying normative claims with the aim of assessing and possibly implementing them. To do work that matters, philosophers engaged in conceptual analysis need to take enough interest in the assessment and implementation stages to do their conceptual analysis with the relevant normative claims in mind.
So what does this kind of teleological (goal-oriented) conceptual analysis look like?
It can involve personally following through on the process of assessing and implementing the relevant norms. For example, philosophers at Carnegie Mellon University working on causation have not only provided a kind of analysis of the concept of causation but also developed algorithms for causal discovery, proved theorems about those algorithms, and applied those algorithms to contemporary scientific problems (see e.g. Spirtes et al. 2000).
I have great respect for this work. But doing conceptual analysis does not have to mean going so far outside the traditional bounds of philosophy. A perfect example is James Woodward’s related work on causal explanation, which he describes as follows (2003, 7-8, original emphasis):
My project…makes recommendations about what one ought to mean by various causal and explanatory claims, rather than just attempting to describe how we use those claims. It recognizes that causal and explanatory claims sometimes are confused, unclear, and ambiguous and suggests how those limitations might be addressed…. we introduce concepts…and characterize them in certain ways…because we want to do things with them…. Concepts can be well or badly designed for such purposes, and we can evaluate them accordingly.
Woodward keeps his eye on what the notion of causation is for, namely distinguishing between relationships that do and relationships that do not remain invariant under interventions. This distinction is enormously important because only relationships that remain invariant under interventions provide “handles” we can use to change the world.
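The invariance idea can be shown in a toy simulation (my illustration, not Woodward’s; the structural model Y = 2X + noise is an assumption for the example). The regression of Y on X stays the same whether X arises naturally or is set by intervention, so it is a usable “handle” on Y; the reverse regression of X on Y, though present observationally, collapses when we intervene on Y:

```python
import random
random.seed(0)

def slope(xs, ys):
    """Ordinary least-squares slope of ys regressed on xs."""
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    return (sum((x - mx) * (y - my) for x, y in zip(xs, ys))
            / sum((x - mx) ** 2 for x in xs))

n = 20000
# Assumed structural model: X causes Y, via Y = 2*X + noise.
x_obs = [random.gauss(0, 1) for _ in range(n)]
y_obs = [2 * x + random.gauss(0, 1) for x in x_obs]

# Observationally, BOTH regressions show a relationship.
b_yx_obs = slope(x_obs, y_obs)  # about 2.0
b_xy_obs = slope(y_obs, x_obs)  # about 0.4, also clearly nonzero

# Intervene on X: set X by external randomization; Y still responds,
# and the Y-on-X slope is unchanged -- the relationship is invariant.
x_do = [random.gauss(0, 1) for _ in range(n)]
y_do = [2 * x + random.gauss(0, 1) for x in x_do]
b_yx_do = slope(x_do, y_do)  # still about 2.0

# Intervene on Y: set Y directly; X is generated as before, ignoring Y,
# so the X-on-Y regression vanishes -- not invariant, not a handle.
y_set = [random.gauss(0, 1) for _ in range(n)]
x_after = [random.gauss(0, 1) for _ in range(n)]
b_xy_do = slope(y_set, x_after)  # about 0.0
```

Only the X-to-Y relationship survives intervention, which is why, on Woodward’s account, it (and not the merely correlational reverse regression) counts as causal.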
Here are some lessons about teleological conceptual analysis that we can take from Woodward’s work. (I’m sure this list could be expanded.)
- Teleological conceptual analysis puts us in charge. In his wonderful presidential address at the 2012 meeting of the Philosophy of Science Association, Woodward ended a litany of metaphysical arguments against regarding mental events as causes by asking “Who’s in charge here?” There is no ideal form of Causation to which we must answer. We are free to decide to use “causation” and related words in the ways that best serve our interests.
- Teleological conceptual analysis can be revisionary. If ordinary usage is not optimal, we can change it.
- The product of a teleological conceptual analysis need not be unique. Some philosophers reject Woodward’s account because they regard causation as a process rather than as a relationship among variables. But why do we need to choose? There could just be two different notions of causation. Woodward’s account captures one notion that is very important in science and everyday life. If it captures all of the causal notions that are important, then so much the better. But this kind of comprehensiveness is not essential.
- Teleological conceptual analysis can be non-reductive. Woodward characterizes causal relations as (roughly) correlation relations that are invariant under certain kinds of interventions. But the notion of an intervention is itself causal. Woodward’s account is not circular because it characterizes what it means for a causal relationship to hold between two variables in terms of different causal processes involving different sets of variables. But it is non-reductive in the sense that it does not allow us to replace causal claims with equivalent non-causal claims (as, e.g., counterfactual, regularity, probabilistic, and process theories purport to do). This fact is a problem if one’s primary concern is to reduce one’s ultimate metaphysical commitments, but it is not necessarily a problem if one’s primary concern is to improve our ability to assess and use causal claims.
Philosophers rarely succeed in capturing all of our intuitions about an important informal concept. Even if they did succeed, they would have more work to do in justifying any norms that invoke that concept. Conceptual analysis can be a first step toward doing philosophy that matters, but it needs to be undertaken with the relevant normative claims in mind.
Question: What are your best examples of philosophy that matters? What can we learn from them?
- Birnbaum, Allan. “On the Foundations of Statistical Inference.” Journal of the American Statistical Association 57.298 (1962): 269-306.
- Carnap, Rudolf. Logical Foundations of Probability. U of Chicago Press, 1963.
- Gandenberger, Greg. “A New Proof of the Likelihood Principle.” The British Journal for the Philosophy of Science (forthcoming).
- Plato. Euthyphro. http://classics.mit.edu/Plato/euthyfro.html.
- Popper, Karl. Conjectures and Refutations. London: Routledge & Kegan Paul, 1963.
- Spirtes, Peter, Clark Glymour, and Richard Scheines. Causation, Prediction, and Search. Vol. 81. The MIT Press, 2000.
- Woodward, James. Making Things Happen: A Theory of Causal Explanation. Oxford University Press, 2003.
[i] Earlier posts are here and here. Some U-Phils are here, here, and here. For some amusing notes (e.g., Don’t Birnbaumize that experiment my friend, and Midnight with Birnbaum).
Some related papers:
- Mayo, D. G. (2012). “Statistical Science Meets Philosophy of Science Part 2: Shallow versus Deep Explorations”, Rationality, Markets, and Morals (RMM) 3, Special Topic: Statistical Science and Philosophy of Science, 71–107.
- Mayo, D. G. (2011) “Statistical Science and Philosophy of Science: Where Do/Should They Meet in 2011 (and beyond).” Rationality, Markets and Morals (RMM) 2, Special Topic: Statistical Science and Philosophy of Science, 79–102.
- Mayo, D. G. (2010). “An Error in the Argument from Conditionality and Sufficiency to the Likelihood Principle” in Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (D Mayo and A. Spanos eds.), Cambridge: Cambridge University Press: 305-14.
- Cox D. R. and Mayo. D. G. (2010). “Objectivity and Conditionality in Frequentist Inference” in Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (D Mayo and A. Spanos eds.), Cambridge: Cambridge University Press: 276-304.
Greg: Thanks so much for the guest post. I couldn’t agree more about the importance of philosophy of science (PoS) that matters. I have often discussed on this blog the sterility of a lot of analytic epistemology (including formal epistemology, as typically pursued). I hope your work will encourage other grad students to rediscover and update the area of PoS/statistical science (e.g., https://errorstatistics.com/2012/11/04/philstat-so-youre-looking-for-a-ph-d-dissertation-topic/).
Since I’ve said so much about that issue already, let me mention something further afield. Last night I commented on Gelman’s blog about a rather different way in which PoS has impacted scientific practice—negatively! This was on the topic of Kuhn’s work and the rise of current-day fraud in applied stat science (link is below).
Here’s from my comment:
“I’ve often had the thought that Stapel is the perfect exemplar of pure deconstructionism/radical social-constructivism. That is, the more extreme interpretations of some of Kuhn’s theses—ones Kuhn never quite denies (perhaps because his fame was built upon them). Stapel makes this fairly clear in his book and in speeches: it’s all a matter of selling what people want to believe…. The following is from the NYT interview:
“Several times in our conversation, Stapel alluded to having a fuzzy, postmodernist relationship with the truth, which he agreed served as a convenient fog for his wrongdoings. “It’s hard to know the truth,” he said. “When somebody says, ‘I love you,’ how do I know what it really means?” …“People have lost faith in the church, but they haven’t lost faith in science. My behavior shows that science is not holy.”
What the public didn’t realize, he said, was that academic science, too, was becoming a business. …I am a salesman. I am on the road. People are on the road with their talk. With the same talk. It’s like a circus.”
The connection to Bayesian PoS also arose:
“Separately, the Bayesian model in philosophy was/is regarded by many as a way to cope with the challenges Kuhn brought to logical empiricism. It was thought/hoped (by such hard-headed philosophers of science as Wesley Salmon) that one could incorporate Kuhnian factors into science while not robbing it entirely of having a logic based in part on empirical evidence. Kuhn, however, explicitly denied there was any argument for supposing a shared algorithm of the Bayesian or any other sort (in science). There was a big session at the Amer Philo Assoc where Kuhn met with Salmon and Hempel!! Well, this is too long already…”
Link to Gelman comment: http://andrewgelman.com/2013/08/15/blaming-scientific-fraud-on-the-kuhnians/#comment-149326
Enjoyed reading this. Can you fix the (amusing) “Viola!” typo?
I don’t understand how philosophical “clarifications” and “analyses” are going to matter to the sciences without the hard work involved in “going so far outside the traditional bounds of philosophy.” The “traditional bounds” are merely post-Kantian blinkers that are an excuse for intellectual laziness or incapacity. Is it any wonder that so many scientists regard philosophers of science as useless, intellectual kibitzers?