# Phil 6334: Slides from Day #1: Four Waves in Philosophy of Statistics

First installment 6334 syllabus (Mayo and Spanos)
D. Mayo slides from Day #1: Jan 23, 2014

I will post seminar slides here (they will generally be ragtag affairs); links to the papers are in the syllabus.

### 14 thoughts on “Phil 6334: Slides from Day #1: Four Waves in Philosophy of Statistics”

1. Dr. Mayo, I would have loved to have attended your course. Unfortunately, distance prevents me. In any case thank you for posting slides and the syllabus.

• Aaron: You’re very welcome. We will try to find some means to video seminars later on.

2. Terrific group of students and other participants in our first seminar! A perfect mix of around 1/3 philosophers, 1/3 economists, and a great combination of interdisciplinary people in engineering, computer science, bioinformatics, forestry, statistics*…I know I’ve forgotten some. We’re really looking forward to learning a lot!
* Had neglected him.

• Mark

But no statisticians? That’s disappointing (to me). Anyway, I’m really enjoying your slides, thanks for sharing!!

• Yes, there’s a statistician.

• Mark

Oh, great!

3. David Pattison

Thanks for the syllabus and slides.
I notice that one of the readings for next week (Mayo, 2005, “Philosophy of Statistics”) covers much of the material in this week’s slides, which is handy for those of us following at a distance. Many of the references (e.g., Hacking 1965) can be located from that article.

• David: Yes, that’s the point: it wasn’t feasible to ask the students to read that paper prior to the class, and I knew it would take more than one class. I owe a correction sheet on that paper; they really messed it up. There will also be something on probability that Spanos will provide. We’re still playing it by ear here.

4. vl

I must be missing something basic. Couldn’t one argue that the sampling process should be part of the likelihood function? In the coin flip example, couldn’t one interpret the stopping rule as a conditioning step in the data generation model?

More generally, in the counterexamples where the likelihood leads to a problematic interpretation, isn’t this because we are artificially excluding a part of the data generation process and calling it the sampling distribution?

• Let me chime in here as a Bayesian. The answers to your questions are yes and no. That is, yes, the sampling process should be part of the likelihood function; but no, in the counterexamples where the likelihood leads to a problematic interpretation, the problem isn’t that we are artificially excluding a part of the data generation process.

In general, the sampling design must be part of the data probability model (and hence the likelihood function). However, for the class of sampling plans called ignorable, the term in the likelihood function capturing the effect of the sampling plan on the data probability is constant with respect to the parameter. Optional stopping sampling plans are ignorable: the stopping rule drops out of the likelihood, so likelihood-based inference is unaffected by it, whereas the error-statistical sampling distribution is not. The treatment of optional stopping therefore presents an irreconcilable difference between likelihood-based paradigms and error statistics.
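A minimal sketch of the point above, using the standard textbook coin-flip illustration (the specific numbers, 9 heads and 3 tails, are my own choice, not from the thread). Under a fixed-n design (binomial) and a stop-at-the-3rd-tail design (negative binomial), the likelihoods differ only by a constant in p, so likelihood-based inference ignores the stopping rule; the error-statistical p-values, computed from the two different sampling distributions, do not agree:

```python
from math import comb

# Hypothetical data: 9 heads and 3 tails, unknown heads-probability p.
# Design A: flip exactly n = 12 times (binomial sampling).
# Design B: flip until the 3rd tail appears (negative binomial sampling).

def lik_binomial(p, heads=9, n=12):
    """Likelihood of 9 heads in a fixed run of 12 flips."""
    return comb(n, heads) * p**heads * (1 - p)**(n - heads)

def lik_negbinom(p, heads=9, tails=3):
    """Likelihood of 9 heads before the 3rd tail (the last flip is a tail)."""
    return comb(heads + tails - 1, heads) * p**heads * (1 - p)**tails

# Ignorability: the two likelihoods differ only by a constant in p,
# so any likelihood-based inference about p is identical under A and B.
ratios = [lik_binomial(p) / lik_negbinom(p) for p in (0.1, 0.3, 0.5, 0.7, 0.9)]
print(ratios)  # the same constant, comb(12, 9) / comb(11, 9) = 4.0, every time

# Error statistics: the sampling distributions differ, so the one-sided
# p-values for testing p = 0.5 against p > 0.5 differ between designs.
pval_binomial = sum(comb(12, k) for k in range(9, 13)) / 2**12
pval_negbinom = 1 - sum(comb(y + 2, y) * 0.5**(y + 3) for y in range(9))
print(pval_binomial, pval_negbinom)  # ~0.073 vs ~0.033
```

The same data thus yield one answer for a likelihoodist or Bayesian regardless of the stopping rule, but two different error-statistical assessments, which is exactly the clash the comment describes.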

5. Christian Hennig

Thanks for sharing this!
A quick comment: Although I know you don’t think the issue is very important (so I’m not surprised), in my opinion the list on p. 8 would preferably have been embedded in a discussion of different interpretations of probability, particularly of course relative-frequency vs. epistemic interpretations (elaborating that in practice there is a certain tendency, particularly among Bayesians, to mix them up).

• Yes, but I think it’s best to consider that after you have an idea of the role probabilities should (or might) play in statistical inference (probabilism, performance, probativeness). Thanks, Christian; combining seminar and blog could prove risky…we’ll see.

6. Thanks for posting these slides, Mayo. I look forward to the next installment!

• Thanks Corey, I appreciate your serious interest and contributions.