U-Phil: Deconstructions [of J. Berger]: Irony & Bad Faith 3

Memory Lane: 2 years ago:
My efficient Errorstat Blogpeople have put forward the following three reader-contributed interpretive efforts, resulting from the “deconstruction” exercise of December 11 (mine, from the earlier blog, is at the end), concerning what I consider:

“….an especially intriguing remark by Jim Berger that I think bears upon the current mindset (Jim is aware of my efforts):

Too often I see people pretending to be subjectivists, and then using “weakly informative” priors that the objective Bayesian community knows are terrible and will give ridiculous answers; subjectivism is then being used as a shield to hide ignorance. . . . In my own more provocative moments, I claim that the only true subjectivists are the objective Bayesians, because they refuse to use subjectivism as a shield against criticism of sloppy pseudo-Bayesian practice. (Berger 2006, 463)” (From blogpost, Dec. 11, 2011)
Andrew Gelman:

The statistics literature is big enough that I assume there really is some bad stuff out there that Berger is reacting to, but I think that when he’s talking about weakly informative priors, Berger is not referring to the work in this area that I like, as I think of weakly informative priors as specifically being designed to give answers that are _not_ “ridiculous.”

Keeping things unridiculous is what regularization’s all about, and one challenge of regularization (as compared to pure subjective priors) is that the answer to the question, What is a good regularizing prior?, will depend on the likelihood.  There’s a lot of interesting theory and practice relating to weakly informative priors for regularization, a lot out there that goes beyond the idea of noninformativity.

To put it another way:  We all know that there’s no such thing as a purely noninformative prior:  any model conveys some information.  But, more and more, I’m coming across applied problems where I wouldn’t want to be noninformative even if I could, problems where some weak prior information regularizes my inferences and keeps them sane and under control.
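Gelman's point about weak prior information keeping inferences “sane and under control” can be seen in the simplest conjugate setting. The sketch below is my illustration, not his (the data, σ = 10, and the N(0, 20²) prior are all made up for the example): with a flat prior the posterior mean just reproduces the raw sample mean, while a weakly informative prior pulls an extreme-looking estimate modestly toward zero.

```python
import numpy as np

def posterior_normal(x, sigma, prior_mean=0.0, prior_sd=None):
    """Posterior mean and sd for a normal mean with known sigma.

    prior_sd=None means a flat (improper, noninformative) prior.
    """
    n = len(x)
    like_prec = n / sigma**2                       # precision contributed by the data
    prior_prec = 0.0 if prior_sd is None else 1.0 / prior_sd**2
    post_prec = like_prec + prior_prec
    # Precision-weighted average of sample mean and prior mean
    post_mean = (like_prec * np.mean(x) + prior_prec * prior_mean) / post_prec
    return post_mean, np.sqrt(1.0 / post_prec)

x = np.array([28.0, 32.0])   # two extreme-looking observations, sigma = 10

flat_mean, _ = posterior_normal(x, sigma=10.0)                 # flat prior: 30.0
weak_mean, _ = posterior_normal(x, sigma=10.0, prior_sd=20.0)  # weak N(0, 20^2) prior: ~26.7
```

The weak prior does not dominate (the estimate moves only from 30 to about 26.7), but it regularizes: the less data there is, the more the precision-weighted average leans on the prior.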

Finally, I think subjectivity and objectivity both are necessary parts of research.  Science is objective in that it aims for reproducible findings that exist independent of the observer, and it’s subjective in that the process of science involves many individual choices.  And I think the statistics I do (mostly, but not always, using Bayesian methods) is both objective and subjective in that way.  That said, I think I see where Berger is coming from:  objectivity is a goal we are aiming for, whereas subjectivity is an unavoidable weakness that we try to minimize.  I think weakly informative priors are, or can be, as objective as many other statistical choices, such as assumptions of additivity, linearity, and symmetry, choices of functional forms such as in logistic regression, and so forth.  I see no particular purity in fitting a model with unconstrained parameter space:  to me, it is just as scientifically objective, if not more so, to restrict the space to reasonable values.  It often turns out that soft constraints work better than hard constraints, hence the value of continuous and proper priors.  I agree with Berger that objectivity is a desirable goal, and I think we can get closer to that goal by stating our assumptions clearly enough that they can be defended or contradicted by scientific theory and data—a position to which I expect Deborah Mayo would agree as well.

(see also Gelman’s blog)


David Rohde:

This comment was published in Bayesian Analysis, which has an obviously specialist audience; the two articles and the comments on them reveal a near-unanimous preference for subjective Bayes as the foundation of statistics.  To this narrow specialist audience, “subjective” is a compliment, an idealized limiting case of an optimal statistical analysis.  If you have a philosophical objection to subjective Bayes (or Bayes in general) as the foundation of statistics, then you are really far outside the target audience, and understandably the comment will be opaque.

I think Berger is saying that an objective Bayesian might understand the consequences of diffuse priors better than a subjective Bayesian; he is probably employing both Bayesian and non-Bayesian criteria to investigate the consequences of priors, making objective Bayes a bit of a piecemeal “theory”.  My reading of the article is that Berger is a subjectivist who is promoting tools outside standard subjective Bayesian theory (objective Bayes and frequentist) on practical grounds.  It is interesting that the more extreme objective Bayes arguments of Jeffreys and Jaynes seem to be largely abandoned now.

Of course the article reveals differences among Bayesians, but I think it also reveals a remarkable convergence of opinion.  Subjective Bayes is the foundation of statistics, but in an operational sense fully specifying subjective probabilities and then conditioning on observables is not remotely practical.  Berger and Goldstein suggest different tools for dealing with this problem, and the debate is largely carried out within this context (excluding Wasserman’s comments).



My guess is that there is a typo, and Berger meant to say  “the only true objectivists are the objective Bayesians…” in the quote above.  Mystery solved!


Deborah Mayo (from blogpost, December 11, 2011):

How might we deconstruct this fantastic remark of Berger’s?  (Granted, this arises in his rejoinder to others, but this only heightens my interest in analyzing it.)

Here, “objective Bayesians” are understood as using (some scheme of) default or conventionally derived priors.  One aspect of his remark is fairly clear: pseudo-Bayesian practice allows “terrible” priors to be used, and it would be better for such practitioners to appeal to conventional “default” priors that at least will not be so terrible (but in what respect?). It is the claim he makes in his “more provocative moments” that really invites deconstruction. Why would using the recommended conventional priors make them more like “true subjectivists”?  I can think of several reasons—but none is really satisfactory, and all are (interestingly) perplexing. I am reminded of Sartre’s remarks in Being and Nothingness on bad faith and irony:

“In irony a man annihilates what he posits within one and the same act; he leads us to believe in order not to be believed; he affirms to deny and denies to affirm; he creates a positive object but it has no being other than its nothingness.” (Sartre)

So true!  (Of course I am being ironic!) Back to teasing out what’s behind Berger’s remarks.

Now, it would seem that if an agent used priors that correctly reflected her beliefs (call these priors “really informed by subjective opinions”, riso for short), and that satisfied the Bayesian formal coherency requirements, then that would be defensible for a subjective Bayesian. But Berger notices that, in actuality, many Bayesians (the pseudo-Bayesians) do not use riso priors. Rather, they use various priors (the origins of which they’re unsure) as if these really reflected their subjective judgments. In doing so, the agent (thinks that she) doesn’t have to justify them: she claims that they reflect subjective judgments (and who can argue with that?).

According to Berger here, the Bayesian community (except for the pseudo-Bayesians?) knows that these priors are terrible, according to a shared criterion (is it non-Bayesian? frequentist?). But I wonder: if, as far as the agent knows, these priors really do reflect her beliefs, then would they still be terrible? It seems not. Or, if they would still be terrible, doesn’t that suggest a criterion distinct from using “really informed” (as far as the agent knows) opinions or beliefs?

Berger, J. (2006), “The Case for Objective Bayesian Analysis” and “Rejoinder”, Bayesian Analysis 1(3), 385–402; 457–464.

Sartre, J.-P. (1943), Being and Nothingness: An Essay in Phenomenological Ontology, Gallimard; English translation 1956, Philosophical Library Inc.

See also:

Irony and Bad Faith: deconstructing Bayesians

Jim Berger on Jim Berger:

Categories: Gelman, Irony and Bad Faith, J. Berger, Statistics, U-Phil


3 thoughts on “U-Phil: Deconstructions [of J. Berger]: Irony & Bad Faith 3”

  1. It’s a myth that any Bayesian can survive on uninformative priors only. In practice, what people mean by this is that the only priors they use are either hardly informative at all or completely informative. Any factor that is not in your model has a completely informative prior that the effect is zero. Some Bayesians then put uninformative prior distributions on those factors that are allowed into the model. This can work quite well, but a notorious exception is nuisance parameters. You often get very poor predictions if you make these uninformative. See ‘Trying to be precise about vagueness’ http://onlinelibrary.wiley.com/doi/10.1002/sim.2639/abstract

    • Stephen: It’s back to Fisher’s saying (to the effect that): if the priors alter the interpretation of the data, why do you want them? If they don’t, why do you need them? (unless of course one is dealing with empirical priors with a corresponding frequentist question or screening task).

  2. Perhaps. But I would be hypocritical if I did not admit that nuisance parameters are a problem for frequentists also. Look at the long history of the Behrens–Fisher problem, for instance, or the problem with subgroups in clinical trials.
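The observation in the first comment, that any factor left out of your model implicitly receives a completely informative prior concentrated at zero, has a tidy one-parameter illustration. The sketch below is my toy example, not anything from the cited paper: a ridge penalty λ on a regression slope corresponds to a N(0, σ²/λ) prior, so λ = 0 plays the role of a flat prior, a moderate λ a weakly informative one, and λ → ∞ recovers “the variable is not in the model.”

```python
import numpy as np

rng = np.random.default_rng(0)
x = rng.normal(size=50)
y = 2.0 * x + rng.normal(size=50)   # simulated data with true slope 2

def ridge_coef(x, y, lam):
    # Posterior mode of the slope under a beta ~ N(0, sigma^2 / lam) prior:
    # the usual one-predictor ridge estimate (x'y) / (x'x + lam).
    return (x @ y) / (x @ x + lam)

b_flat = ridge_coef(x, y, 0.0)    # lam = 0: flat prior, plain least squares
b_weak = ridge_coef(x, y, 1.0)    # moderate lam: weakly informative, mild shrinkage
b_drop = ridge_coef(x, y, 1e12)   # lam -> infinity: point mass at 0, variable omitted
```

The three estimates sit on a continuum: the “excluded” factor is just the infinitely penalized endpoint, which is the sense in which omission is itself a (maximally) informative prior choice.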
