Failing to Apply vs Violating the Likelihood Principle

While writing a new chapter on the Strong Likelihood Principle[i] over the past few weeks, I noticed a passage in G. Casella and R. Berger (2002) that recalled a puzzling remark noted in my Jan. 3, 2012 post. The post began:

A question arose from a Bayesian acquaintance:

“Although the Birnbaum result is of primary importance for sampling theorists, I’m still interested in it because many Bayesian statisticians think that model checking violates the (strong) likelihood principle (SLP), as if this principle is a fundamental axiom of Bayesian statistics”.

But this is puzzling for two reasons. First, if the LP does not preclude testing for assumptions (and he is right that it does not[ii]), then why not simply explain that, rather than appeal to a disproof of something that never actually precluded model testing? To take the disproof of the LP as grounds to announce, “So there! Now even Bayesians are free to test their models,” would seem only to ingrain the original fallacy.

You can read the rest of the original post here.

The remark in G. Casella and R. Berger seems to me equivocal on this point:

“Most data analysts perform some sort of ‘model checking’ when analyzing a set of data. Most model checking is, necessarily, based on statistics other than a sufficient statistic. For example, it is common practice to examine residuals from a model…Such a practice immediately violates the Sufficiency Principle, since the residuals are not based on sufficient statistics. (Of course such a practice directly violates the Likelihood Principle also.) Thus, it must be realized that before considering the Sufficiency Principle (or the Likelihood Principle), we must be comfortable with the model.” (G. Casella and R. Berger 2002, 295-6)

Now for Casella and Berger, the Likelihood Principle refers to what we call the weak LP, which concerns just a single experiment and so is equivalent to Sufficiency. What we call the “Strong Likelihood Principle” (SLP), they call the “Formal Likelihood Principle”. But I don’t see how that alters the point, since the Strong entails the Weak, and since we would want to say the same thing about Sufficiency. That is, we would deny that sufficiency is “violated” when we use statistics that are not sufficient (relative to some primary model) in order to test the assumptions of that very model. Yes? I’m guessing they meant their passage to express that we cannot even apply the Sufficiency or Likelihood principles without first accepting the model, as opposed to saying it would violate sufficiency to look at these other statistics. But, given the confusion surrounding the SLP and other principles, I wonder whether such remarks have anything to do with the Bayesian point with which I began the Jan. 3 post.
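To make the sufficiency point concrete, here is a minimal sketch of my own (not from Casella and Berger), using a normal model with known variance: the sample mean is sufficient for θ, while the residuals examined in model checking are ancillary, carrying no information about θ at all.

```python
# Sketch: in the model N(theta, 1), the sample mean is sufficient for theta,
# while the residuals examined in model checking are ancillary to theta.
import numpy as np

rng = np.random.default_rng(1)
z = rng.standard_normal(20)      # fixed noise: the two samples below differ only in theta

for theta in (0.0, 5.0):
    x = theta + z                # a sample from N(theta, 1)
    xbar = x.mean()              # sufficient statistic: carries all the info about theta
    residuals = x - xbar         # what we examine to check the model (e.g., for normality)
    print(f"theta = {theta}: xbar = {xbar:.3f}, residuals[:3] = {np.round(residuals[:3], 3)}")

# The printed xbar shifts with theta while the residuals are identical in both
# cases: examining them is not drawing an inference about theta within the model.
```

On this reading, looking at residuals falls outside the scope of the Sufficiency Principle rather than contravening it, which is just the “failing to apply” reading in the title.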

Do other texts put it this way? I’m not aware of any. There is a deeper reason that this might matter….


[i] Strong Likelihood Principle (SLP): For any two experiments E’ and E” with different probability models f’, f” but with the same unknown parameter θ, if the likelihoods of outcomes x*’ and x*” (from E’ and E” respectively) are proportional to each other, then x*’ and x*” should have identical evidential import for any inference concerning parameter θ.

“The only contribution of the data is through the likelihood function…In particular, if we have two pieces of data x’ and x” with the same likelihood function…the inferences about θ from the two data sets should be the same. This is not usually true in the orthodox theory, and its falsity in that theory is an example of its incoherence.” (Lindley 1976, 361; x’, x” replace his x1, x2, θ replaces his q).
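To fix ideas, here is the standard binomial/negative-binomial pairing used to illustrate the SLP, as a small numerical sketch of my own (assuming numpy and scipy are available): E’ records 3 successes in a fixed n = 12 Bernoulli trials, while E” samples until the 3rd success, which happens to arrive on trial 12 (9 failures). Both likelihoods are proportional to θ³(1−θ)⁹, so the SLP says the two outcomes carry identical evidential import about θ.

```python
# Sketch of the classic SLP illustration: fixed-n binomial vs. inverse
# (negative binomial) sampling yielding proportional likelihoods in theta.
import numpy as np
from scipy.stats import binom, nbinom

theta = np.linspace(0.05, 0.95, 10)   # grid of parameter values

lik_E1 = binom.pmf(3, 12, theta)      # E':  C(12,3) * theta^3 * (1-theta)^9
lik_E2 = nbinom.pmf(9, 3, theta)      # E":  C(11,2) * theta^3 * (1-theta)^9
                                      #      (9 failures before the 3rd success)

print(lik_E1 / lik_E2)                # constant ratio C(12,3)/C(11,2) = 4.0 at every
                                      # theta: the likelihoods are proportional,
                                      # so the SLP's antecedent is satisfied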

[ii] The LP (strong or weak) always contains a statement to the effect “assuming the model for the likelihood is adequate”, or something of that sort. To actually determine a model’s adequacy, therefore, a consideration of outcomes other than the one observed (the sampling distribution) is no violation of the LP (even for those who accept it). [I realize there is debate as to whether the role of a (nonsubjective) prior is to be regarded as part of the model, but nothing in the current issue would seem to turn on that.]

Casella, G. and Berger, R. 2002, Statistical Inference, 2nd ed., Pacific Grove, CA: Duxbury Press.

Lindley, D. V. 1976, “Bayesian Statistics”. In Foundations of Probability Theory, Statistical Inference and Statistical Theories of Science, Volume 2, edited by W. L. Harper and C. A. Hooker, Dordrecht, The Netherlands: D. Reidel: 353-362.

Categories: Likelihood Principle, Philosophy of Statistics, Statistics


2 thoughts on “Failing to Apply vs Violating the Likelihood Principle”

  1. “Most data analysts perform some sort of ‘model checking’ when analyzing a set of data.”

    Really? I wish that were true. I see lots and lots of data analysis with little to no model checking.

  2. Well, that’s what Casella and Berger say, but of course the issue I was getting at was whether they were correct to suggest that this model-checking activity constitutes a “violation” of the SLP, as opposed to something more like “an inability to apply it”, since some of the required “givens” were not in place. I think the latter makes the most sense.
