Announcing Kent Staley’s new book, An Introduction to the Philosophy of Science (CUP)


Kent Staley has written a clear and engaging introduction to PhilSci that manages to blend the central topics of philosophy of science with current philosophy of statistics. Quite possibly, Staley explains Error Statistics in many ways more clearly than I do, in his 10-page section, 9.4. CONGRATULATIONS, STALEY*

You can get this book for free merely by writing one of the simpler palindromes in the December contest.

Here’s an excerpt from that section:



9.4 Error-statistical philosophy of science and severe testing

Deborah Mayo has developed an alternative approach to the interpretation of frequentist statistical inference (Mayo 1996). But the idea at the heart of Mayo’s approach is one that can be stated without invoking probability at all. ….

Mayo takes the following “minimal scientific principle for evidence” to be uncontroversial:

Principle 3 (Minimal principle for evidence) Data x0 provide poor evidence for H if they result from a method or procedure that has little or no ability of finding flaws in H, even if H is false. (Mayo and Spanos, 2009, 3)

Philosophical accounts of scientific reasoning have, however, generally failed to satisfy this principle. Philosophers of science have constructed theories of evidence that divorce consideration of the evidential import of data from consideration of the methods used to generate those data. Mayo argues that we should reject the project of the logical positivists that aimed to allow one, for any data or observation E, to calculate the degree of support or confirmation afforded to any hypothesis H. We should not, however, abandon the ideals of neutrality and objectivity themselves. An account of scientific reasoning that respects Principle 3 will better promote these ideals by emphasizing that reliable inferences from data require consideration of the properties of the method that produced them. (Staley 2014, pp. 153-4)
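The severity idea sketched in the excerpt can be made numerically concrete. Below is a minimal sketch (my own illustration, not code from Staley's book) of Mayo's severity assessment for the standard one-sided test of a normal mean: after an observed result, the severity with which the claim mu &gt; mu1 passes is the probability of a result less extreme than the one observed, were mu only mu1. The test setup and numbers are purely illustrative.

```python
from statistics import NormalDist

def severity(x_bar, mu1, sigma, n):
    """SEV(mu > mu1): probability of a sample mean no larger than x_bar,
    were mu equal to mu1 -- i.e., P(X_bar <= x_bar; mu = mu1)."""
    se = sigma / n ** 0.5          # standard error of the sample mean
    return NormalDist(mu1, se).cdf(x_bar)

# Test T+: H0: mu <= 0 vs H1: mu > 0, with sigma = 1, n = 100,
# and observed x_bar = 0.2 (a 2-standard-error result).
print(round(severity(0.2, 0.0, 1, 100), 3))  # SEV(mu > 0)   ~ 0.977
print(round(severity(0.2, 0.1, 1, 100), 3))  # SEV(mu > 0.1) ~ 0.841
```

The same data thus warrant "mu &gt; 0" with high severity but "mu &gt; 0.1" with less: the evidential import of the data depends on the capability of the test, exactly the point of Principle 3.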

Note: This post was accidentally put up before its time, while still incomplete. It was thought to be merely a draft; it seems the new, improved WordPress has a mind of its own. Sorry, Kent.

Categories: Announcement, Palindrome, Statistics, StatSci meets PhilSci


10 thoughts on “Announcing Kent Staley’s new book, An Introduction to the Philosophy of Science (CUP)”

  1. omaclaren

    Mayo –

    Looks interesting, thanks for pointing this out. I’ve also generally found it easier to understand your position by first thinking about the non-probability structure of your main principles.

    I wonder if you ever came across Nozick’s (1981; Chapter 3) ‘tracking theory’ of knowledge in epistemology? It seems to share a similar insight to that of severity, and he even presents a modified account which explicitly takes into account the method of arriving at a belief. He gives the conditions (pg. 179) for S to know, via method M, that p, as

    1. p is true
    2. S believes, via method M, that p.
    3. if p weren’t true, and S were to use M, then S wouldn’t believe, via M, that p.
    4. if p were true, and S were to use M, then S would believe, via M, that p.

    Slightly different concerns, yes, but if you replace ‘believe’ with ‘sees a good fit’ or similar then this has at least a superficial similarity. What do you think? Might be interesting to compare some of the subsequent analogous arguments pro/against this position in the epistemology literature? Closure, Kripke and all that…

    • Omaclaren: There’s too much to explain about the radical difference between these analytical (i.e., definitional) approaches that we were mostly raised on (“X believes P iff ______”, with their iterations of counterexamples and amendments to deal with them) and the break by some philosophers of science toward actually saying something about those methods M. (I joined the latter group, but we’re still in the minority, except maybe in certain fields.) I like the tracking idea and the counterfactual definitions, just as many of the reliabilists, e.g., Dretske and co., give interesting definitions of things. It’s part of the whole idea that philosophy is nothing more than conceptual analysis, that philosophy can’t solve real problems, or that there are no real philosophical problems, only problems of language (as Wittgenstein said). By the way, Popper tried to fight Wittgenstein on this (literally once, with a poker).

      Thus, the analytic philosophers never give any “forward-looking methods”, just definitions. I don’t mind this as a definition of a good method M (replacing belief with something like warrant), but what good is an account of knowledge if you already have to start out with “p is true”? See what I mean?

      More specifically, within the umbrella of analytic epistemology, this is part of a movement that strove to avoid Gettier problems in defining what it is for “S to know P”, and, amazingly enough, they never really solved them. I do happen to think Gettier problems relate to error statistics–not noticed apparently by anyone else (corrections welcome).

      A lot (but not all) of today’s formal epistemology is analytic epistemology only using probability as part of the analytic definition. I could have had a very successful career in that field (using what I know from statistics and writing better analytic definitions). Staley is supposed to be the link between error statistics and analytic epistemology (being an Achinstein student and well versed in that mode, whereas I quickly lost patience with it, and found an advisor who liked to do applied philosophy of science and knew statistics, a Suppes student*). Staley attended my NEH Summer seminar on philosophy of experiment and induction (not the full name) in 1999, and he’s been part of many (nearly all) of my conferences and workshops ever since. Maybe he and I will write something together some day (as we’ve occasionally contemplated) on revolutionizing analytic epistemology, if I ever finish my book.

      *Suppes died last month at 92.

  2. Thanks for your fascinating response! The divisions you mention correspond roughly to what I thought, still surprising given some of the broadly similar concerns. It would be interesting to see your take on analytic epistemology developed someday.

    Good luck with finishing the book!

    • Omaclaren: I might warn you, as well, that their counterfactuals are cashed out in terms of possible world semantics, whereas appealing to random variables and sampling distributions would provide what’s needed without traveling into the la la land of possible worlds, proportions of nearest worlds, etc. (e.g., David Lewis).

      • Yes! I didn’t want to go too far astray but was going to mention that I’d tried reading some Dretske-based work but have never really understood/liked (same thing?) possible world semantics.

        Definitely think probability concepts would be a nice alternative. Would have to make sure hypotheses themselves don’t become random variables though 😉

        It does raise the point to me that elaborating some sort of ‘hypothesis space’, probably with some sort of ‘geometry’ or ‘topology’, becomes inevitable, rather than simply H and not-H. The biggest appeal of the likelihood-style approaches to me (I don’t fully subscribe, though, I should say) is that they make some attempt at this.

  3. Omaclaren: Don’t you see, the counterfactuals relate to the capabilities of methods to avoid erroneous interpretations of data, and are formally captured by the associated sampling distribution (of the relevant test statistic or the like). The error probabilities are probabilities given by the sampling distribution, and you don’t need or want possible worlds to make them out (although one could talk of simulated results or the like). We have a hypothesis space, but I don’t see how that is relevant to giving you a way to characterize the properties of method M that would be needed to cash out, statistically, accounts like tracking. This is just what you won’t find with likelihoods at all. So I’m mystified about the ‘geometry’ or ‘topology’.
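The remark that “one could talk of simulated results” can be made concrete: the error probabilities at issue are just features of the sampling distribution of the test statistic, which a short simulation recovers directly. A minimal sketch (my own illustration, not anything from the thread), assuming a one-sided z-test of a normal mean with known sigma:

```python
import random

random.seed(1)

def type1_error_rate(mu0=0.0, sigma=1.0, n=100, z_crit=1.96, trials=10_000):
    """Simulate the sampling distribution of the test statistic under H0
    and estimate how often the test would (erroneously) reject."""
    se = sigma / n ** 0.5
    rejections = 0
    for _ in range(trials):
        # draw one sample of size n from the null distribution
        x_bar = sum(random.gauss(mu0, sigma) for _ in range(n)) / n
        if (x_bar - mu0) / se > z_crit:
            rejections += 1
    return rejections / trials

print(type1_error_rate())  # close to the nominal 0.025 for this cutoff
```

No possible-world semantics is needed: the counterfactual “how often would this method reject, were H0 true?” is cashed out as a long-run frequency over simulated repetitions of the data-generating procedure.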

  4. Yes I get this, I’m mixing in comments on various different things. A bad habit which only confuses.

  5. My comment is rather late, I’m afraid, but I do want to say thank you, Deborah, for mentioning my book on your blog!

    Your comments about the importance of philosophical engagement with problems of scientific practice are fitting in this context. One of my principal motivations in writing the book was to show students, especially science majors, how philosophy can have a bearing on practical problems of research and data analysis. Hence the two long chapters on probability and statistics, significantly more space than most introductory books on the subject devote to those topics.

    • Kent: True, and I’m so glad you did. I hope someone will provide a palindrome in order to win it on my birthday Jan 6.

    • Mark

      Just want to say that I bought Staley’s book simply based on this post, just received it through the mail, and am anxious to read it (although, unfortunately, it will have to wait its turn). Thanks for posting about it!
