A. Spanos Probability/Statistics Lecture Notes 7: An Introduction to Bayesian Inference (4/10/14)


If the data are a typical realization of a particular stochastic process, they will lie in the typical set with high (sampling) probability. Therefore, the set of models ruled out with high severity is exactly the set for which the sample is not in the typical set. Does anything more need to be said about M-S testing?
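A minimal sketch of this check, assuming for illustration an i.i.d. N(0, 1) model and the average log-density as the typicality statistic (the comment fixes neither the model nor the statistic):

```python
# Sketch: is the observed sample a typical realization of the posited
# i.i.d. N(0, 1) process? Operationalize the "typical set" as the central
# 95% band of the average log-density statistic under that model.
import numpy as np

rng = np.random.default_rng(0)

def avg_loglik(x):
    # Average log-density of the sample under the posited N(0, 1) model.
    return np.mean(-0.5 * np.log(2 * np.pi) - 0.5 * x**2)

n = 100
observed = rng.normal(loc=1.0, scale=1.0, size=n)  # secretly a shifted process

# Monte Carlo the statistic's sampling distribution under the model.
sims = np.array([avg_loglik(rng.normal(size=n)) for _ in range(10_000)])
lo, hi = np.quantile(sims, [0.025, 0.975])

stat = avg_loglik(observed)
print(f"statistic {stat:.3f}, typical band [{lo:.3f}, {hi:.3f}]")
print("typical realization" if lo <= stat <= hi else "NOT a typical realization")
```

With these settings the shifted data will, with high probability, land below the band, so the posited N(0, 1) model fails the check.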

Corey: if one could operationalize the joint distribution of all possible statistical models that could have given rise to these particular data, then your idea would be a step in the right direction. However, no such joint distribution can be operationalized, and thus M-S testing needs to be applied in a piecemeal fashion to different subsets of model assumptions, cleverly grouped in ways that secure a reliable diagnosis of what systematic statistical information in the data the statistical model in question fails to account for.

Spanos: “The joint distribution of a set of statistical models” — that’s an oddly Bayesian turn of phrase…

The fact remains that for any *particular* model under consideration, you can identify the typical set and then see whether the data lie within it.

Corey: the first thing one needs to know about M-S testing is that it constitutes testing outside the boundaries of the “particular” model in question. The null is that model and the alternative is the set of all other possible models that could have generated the particular data.

Spanos: I am indeed aware that M-S testing is aimed at securing the auxiliary assumptions upon which the primary inference rests. But within any given family of models, one can examine the location of the data relative to the typical set for any given parameter value. If the observed data are not in the typical set corresponding to any possible parameter value, then by definition the entire family is inadequate according to the M-S testing criterion that the data must appear to be a typical realization of the posited stochastic process. To put it another way, the hypothesis “no member of the family is the ‘true’ distribution” has passed a severe test.
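A sketch of that family-wide check, continuing the hypothetical setup above with the N(mu, 1) location family (sigma fixed at 1 purely for simplicity):

```python
# Sketch: does ANY member of the N(mu, 1) family put the data in its
# typical set? Since avg_loglik(x, mu) is maximized at mu = mean(x) and
# decreases continuously to -inf as mu moves away, the family misses
# every typical band exactly when its best-fitting member falls below it.
import numpy as np

rng = np.random.default_rng(1)
n = 100
observed = rng.normal(loc=0.0, scale=2.0, size=n)  # over-dispersed for sigma = 1

def avg_loglik(x, mu):
    return np.mean(-0.5 * np.log(2 * np.pi) - 0.5 * (x - mu)**2)

# The statistic's null distribution is the same for every mu (location
# family), so one Monte Carlo band serves for the whole family.
sims = np.array([avg_loglik(rng.normal(size=n), 0.0) for _ in range(10_000)])
lo, hi = np.quantile(sims, [0.025, 0.975])

best = avg_loglik(observed, observed.mean())
print(f"best member's statistic {best:.3f}, typical band [{lo:.3f}, {hi:.3f}]")
print("some member survives" if best >= lo else "entire family ruled out")
```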

This presentation of the notion of admissibility is far from charitable. The motivating idea behind admissibility is that it’s desirable to switch from an inadmissible estimator to one that dominates it, and provided that the loss function is deemed relevant, that seems hard to argue with. The presentation also obscures the fact that admissibility was only ever meant to filter out certain “bad” estimators from consideration, and was never intended to be a *sufficient* condition for a “good” estimator. (This point vitiates the “crystal ball estimator” argument; your arguments questioning the relevance of canned loss functions like squared-error loss, or indeed any loss function, are better, albeit not dispositive.)

Also, since the notion of admissibility only explicitly references the risk function, i.e., a sample-space expectation, you might want to make it clear why Bayesians would care about it. It’s far from obvious that the risk-Pareto-optimal set of estimators is risk-equivalent to the set of (generalized) Bayes estimators.
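For reference, the standard definitions in play here: the risk of an estimator is a sample-space expectation of the loss, and an estimator is admissible exactly when no competitor weakly beats it at every parameter value:

$$R(\theta, \delta) = \mathbb{E}_{\theta}\!\left[L(\theta, \delta(X))\right],$$

$$\delta \text{ is admissible iff there is no } \delta' \text{ with } R(\theta, \delta') \le R(\theta, \delta) \text{ for all } \theta \text{ and } R(\theta, \delta') < R(\theta, \delta) \text{ for some } \theta.$$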

Corey: I was very careful in the exposition to refer to admissibility as a “minimal” property, which coincides with your claim that admissibility “was only ever meant to filter out certain ‘bad’ estimators from consideration”! My argument is that, in addition to relying on the wrong definition of the MSE, admissibility should not be used as a minimal property for frequentist point estimators because it is bad at filtering out “bad” estimators. My example illustrates that point estimators do not come any worse than the crystal ball estimator (which ignores the data completely and is also inconsistent), yet admissibility did not “filter it out”! In that sense, admissibility is not a pertinent criterion for filtering out “bad” estimators: it let in the worst possible estimator while kicking out many consistent ones. Indeed, consistency, not admissibility, is the pertinent minimal property for frequentist point estimators. As you know, there are many examples of admissible but inconsistent estimators in the statistics literature, including the famous James-Stein estimator.
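A sketch of the crystal ball example under specifics I’m supplying for illustration (the comment names no model): X_1, ..., X_n i.i.d. N(theta, 1) with squared-error loss. The constant guess c has risk (theta - c)^2, which is zero at theta = c, so nothing can dominate it; the admissibility filter lets it through, inconsistency and all:

```python
# Sketch: risk functions of the "crystal ball" (constant) estimator
# versus the sample mean for X_1..X_n ~ N(theta, 1), squared-error loss.
import numpy as np

n, c = 25, 0.0
thetas = np.linspace(-2, 2, 9)

risk_crystal = (thetas - c)**2           # guesses c, ignores the data
risk_mean = np.full_like(thetas, 1 / n)  # risk 1/n at every theta

for th, rc, rm in zip(thetas, risk_crystal, risk_mean):
    winner = "crystal ball" if rc < rm else "sample mean"
    print(f"theta={th:+.1f}  crystal={rc:.3f}  mean={rm:.3f}  lower risk: {winner}")
```

Neither risk curve lies everywhere below the other, which is exactly why admissibility cannot reject the constant estimator.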

Aris: I deny that admissibility is worse at filtering out “bad” estimators than consistency: consistency is a tail event, so there are literally an infinity of consistent estimators that are terrible for any practical sample size. My point: these kinds of objections aren’t worth crediting because they rest on uncharitable technicalities.

You seem to be overlooking the idea that we can ask for more than one “minimal” property. (The word “desideratum” does have a plural form!) Given two consistent estimators, one of which dominates the other with respect to “wrongly” defined MSE, are you *really* going to assert that there’s no reason to prefer the dominant one?

(There *is* literally an infinity. Yeesh.)
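A concrete pair for the dominance question above, of my own choosing: the full-sample mean and the half-sample mean are both consistent, yet the former dominates, with MSE 1/n against 2/n at every theta:

```python
# Sketch: two consistent estimators of theta where one dominates the
# other in MSE uniformly (full-sample mean vs half-sample mean).
import numpy as np

rng = np.random.default_rng(2)
n, theta, reps = 100, 1.3, 50_000

x = rng.normal(loc=theta, size=(reps, n))
mse_full = np.mean((x.mean(axis=1) - theta)**2)
mse_half = np.mean((x[:, : n // 2].mean(axis=1) - theta)**2)
print(f"MSE of full-sample mean: {mse_full:.4f} (theory {1 / n:.4f})")
print(f"MSE of half-sample mean: {mse_half:.4f} (theory {2 / n:.4f})")
```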

Aris: I’m not following your logic. If a criterion is meant to be necessary but not sufficient, then the fact that it fails to invalidate the crystal ball estimator does not show that it is not pertinent.

There’s no free lunch: unbiasedness comes at a cost in variance, per the bias-variance tradeoff. By elevating unbiasedness and neglecting risk, the emphasis on unbiased estimation has arguably done more harm than good in statistical practice.

I think Larry W’s adage applies – “if we elevate lessons from toy examples into grand principles we will be led astray.”
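A sketch of that cost, for a concrete case I’m adding (not from the thread): estimating a normal variance, where the biased divide-by-(n + 1) estimator beats the unbiased divide-by-(n - 1) estimator in MSE at every parameter value:

```python
# Sketch: bias-variance tradeoff when estimating sigma^2 from normal
# data. Dividing the centered sum of squares by (n + 1) is biased but
# has uniformly lower MSE than the unbiased division by (n - 1).
import numpy as np

rng = np.random.default_rng(3)
n, sigma2, reps = 10, 4.0, 100_000

x = rng.normal(scale=np.sqrt(sigma2), size=(reps, n))
ss = np.sum((x - x.mean(axis=1, keepdims=True))**2, axis=1)

for divisor, label in [(n - 1, "unbiased (n - 1)"), (n + 1, "biased (n + 1)")]:
    est = ss / divisor
    print(f"{label}: bias {np.mean(est) - sigma2:+.3f}, "
          f"MSE {np.mean((est - sigma2)**2):.3f}")
```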

Aris: A bit late, but I also want to correct the claim that the James-Stein estimator is inconsistent. Whether the estimator is consistent depends on how one assumes the amount of data grows without bound. For concreteness, suppose the data form an n-by-m matrix in which the elements of each column share a mean parameter. Your claim of inconsistency rests on fixing n = 1 and letting m grow without bound, but of course no estimator is consistent under this assumption, since each mean parameter only ever receives one observation. If you allow n to grow without bound too, then consistency is recovered.
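A sketch of the two regimes, with concreteness I’m adding: unit-variance noise and the positive-part James-Stein estimator applied to the m column means (which are N(mu_j, 1/n)):

```python
# Sketch: James-Stein consistency depends on the asymptotic regime.
# Data: n-by-m matrix; column j has mean mu_j and unit-variance noise.
import numpy as np

rng = np.random.default_rng(4)

def james_stein(z, var):
    # Positive-part James-Stein shrinkage of z ~ N_m(mu, var * I), m >= 3.
    shrink = max(0.0, 1.0 - (len(z) - 2) * var / np.sum(z**2))
    return shrink * z

def max_error(n, m):
    mu = np.linspace(-1.0, 1.0, m)
    col_means = rng.normal(loc=mu, size=(n, m)).mean(axis=0)
    return np.max(np.abs(james_stein(col_means, 1.0 / n) - mu))

# Regime 1: m fixed, n grows without bound -- errors shrink (consistency).
for n in (10, 1_000, 100_000):
    print(f"n = {n:>6}, m = 10: max error {max_error(n, 10):.4f}")

# Regime 2: n = 1 fixed, m grows -- errors stay order one (no consistency).
for m in (10, 1_000, 100_000):
    print(f"n = 1, m = {m:>6}: max error {max_error(1, m):.4f}")
```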