“What ever happened to Bayesian foundations?” was one of the final topics of our seminar (Mayo/Spanos Phil6334). In the past 15 years or so, not only have (some? most?) Bayesians come to accept violations of the Likelihood Principle, they have also tended to disown Dutch Book arguments, and the very idea of inductive inference as updating beliefs by Bayesian conditionalization has evanesced. In one of Thursday’s readings, by Bacchus, Kyburg, and Thalos (1990), it is argued that under certain conditions it is never a rational course of action to change belief by Bayesian conditionalization. Here’s a short snippet for your Saturday night reading (the full paper is at https://errorstatistics.files.wordpress.com/2014/05/bacchus_kyburg_thalos-against-conditionalization.pdf):
“We will argue that to change one’s beliefs always by [Bayesian] conditionalization on evidence is to determine once and for all the impact or import of evidence. For the temporarily irrational believer, this is epistemically fatal.
If a believer starts out doxastic life with an unreasonable set of beliefs, there is no telling when, if ever, that believer may achieve rationality just by conditionalizing on new evidence. Consider an agent who believes an outright contradiction, and suppose that this agent is a perfect logician. If this believer is in possession of contradictory beliefs, then she will know this fact about herself. Now if [Bayesian] conditionalization is the truth about rational change of belief, then such a believer has no rational way of simply ‘converting’ to rationality. So in the case of this believer, we are inclined to say that conditionalization is never a rational way to change her belief. The exceedingly rational option, and the only rational one available to her in our view, is just ‘conversion’ to rationality.
What is that you say, gentle reader? You think that it is just not possible for someone to believe a contradiction? All right. But surely you believe that it is possible that someone be in possession of a distribution, call it P, over an algebra of beliefs which, though it does not yield a contradiction outright, is nonetheless incoherent—in the technical sense that it violates the probability axioms. Now this unfortunate believer can never come to have coherent beliefs merely by conditionalization. How is this so? Let P’ be any member of the set of probability distributions over the set of sentences in our poor believer’s body of belief which are coherent. But since P’ is coherent and P is not, it can never be that
(*) P′(A) = P(A | ∧Eᵢ) = P(A & ∧Eᵢ)/P(∧Eᵢ),
where ∧Eᵢ names the set of all those propositions which our unhappy agent ever does (or can, if you like) come to learn and upon which she conditionalizes; for P is just incoherent, by hypothesis, and if (*) were true, then our hypothesis would be false and the example altered. Hence the incoherent conditionalizer can never achieve coherence.
Now we should think that if one advocated coherence (in the sense that one championed the probability axioms in one’s own doxastic life and enjoined them upon others), then one would and ought to say that in this case it is never a rational change of belief to change belief by conditionalization. We do not tout the probability axioms in the same way, but even so we say this: the only rational course of action for a believer who believes irrationally and knows himself to believe irrationally is to ‘convert’ to rationality.”
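To make their argument concrete, here is a minimal numerical sketch of my own (the propositions and numbers are invented for illustration, not taken from the paper): an agent whose degrees of belief violate finite additivity conditionalizes on evidence E, and the conditioned beliefs inherit the incoherence, just as (*) says they must.

```python
# A minimal sketch (my illustration, not from the paper): an incoherent
# belief function P that violates additivity, and what conditionalizing
# on E does to it.

# The agent's raw degrees of belief over a few sentences. Note the
# incoherence: P(A & E) + P(~A & E) = 0.4, yet P(E) is set to 0.5.
P = {
    "A&E": 0.2,
    "~A&E": 0.2,
    "E": 0.5,   # additivity would require this to be 0.4
}

def conditionalize(P, joint, evidence):
    """P(X | E) = P(X & E) / P(E), computed from the agent's own numbers."""
    return P[joint] / P[evidence]

post_A = conditionalize(P, "A&E", "E")      # 0.4
post_notA = conditionalize(P, "~A&E", "E")  # 0.4

# A coherent P' would need P'(A) + P'(~A) = 1; here the sum is 0.8,
# so the conditionalized beliefs are still incoherent.
print(post_A + post_notA)  # 0.8
```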
Share your thoughts. This calls to mind a remark of Stephen Senn’s:
“A related problem is that of Bayesian conversion. Suppose that you are not currently a Bayesian. What does this mean? It means that you currently own up to a series of probability statements that do not form a coherent set. How do you become Bayesian? This can only happen by eliminating (or replacing or modifying) some of the probability statements until you do have a coherent set. However, this is tantamount to saying that probability statements can be disowned, and if they can be disowned once, it is difficult to see why they cannot be disowned repeatedly, but this would seem to be a recipe for allowing individuals to pick and choose when to be Bayesian.” (Senn 2011, p. 59)
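Senn’s point that “conversion” is underdetermined can be seen in a toy case (my example, not his): an incoherent set of probability statements admits more than one coherent repair, so coherence alone cannot dictate which statements get disowned.

```python
# A toy illustration (mine, not Senn's): one incoherent set of
# probability statements, two different "conversions" to coherence.

# A and B are treated as mutually exclusive and exhaustive, so
# coherence requires P(A) + P(B) = 1. The agent asserts:
beliefs = {"A": 0.6, "B": 0.5}   # sums to 1.1: incoherent

def is_coherent(b):
    return abs(b["A"] + b["B"] - 1.0) < 1e-9

# Repair 1: disown the statement about B and replace it.
repair1 = {"A": 0.6, "B": 0.4}

# Repair 2: disown the statement about A instead.
repair2 = {"A": 0.5, "B": 0.5}

print(is_coherent(beliefs), is_coherent(repair1), is_coherent(repair2))
# False True True -- coherence cannot say which statement to give up.
```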
A blog post on Senn’s article when it first appeared is here; you can also search for U-Phil contributions on Senn’s article, e.g., here.
Philosopher Henry Kyburg, Jr. was an old friend (and an important supporter when I was just starting out). He had his own Kyburgian frequentist philosophy. Kyburg (1993, p. 146) shows that for any body of evidence there are prior probabilities in a hypothesis H that, while non-extreme, will result in two scientists having posterior probabilities in H that differ by as much as one wants, thereby turning the tables on popular convergence claims.
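I won’t reproduce Kyburg’s construction here, but the flavor is easy to convey with a hedged sketch (the likelihoods and priors below are invented for illustration): hold the evidence fixed and let two scientists pick non-extreme priors; their posteriors can be pushed as far apart as desired.

```python
# Illustration in the spirit of Kyburg (1993), not his construction:
# with the evidence (the likelihoods) held fixed, non-extreme priors
# can yield posteriors as far apart as we like.

def posterior(prior, like_H, like_notH):
    """Bayes' theorem for a simple hypothesis H versus its negation."""
    return prior * like_H / (prior * like_H + (1 - prior) * like_notH)

like_H, like_notH = 0.8, 0.2    # the same evidence for both scientists

for prior in (0.001, 0.999):    # non-extreme: strictly between 0 and 1
    print(prior, posterior(prior, like_H, like_notH))
# Priors 0.001 and 0.999 give posteriors of about 0.004 and 0.9997:
# a gap that can be made as close to 1 as we please.
```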
Recall that violations of the Likelihood Principle lead to incoherence: “if we have two pieces of data x* and y* with [proportional] likelihood function … the inferences about m from the two data sets should be the same. This is not usually true in the orthodox [frequentist] theory and its falsity in that theory is an example of its incoherence” (Lindley 1976, p. 36).
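The textbook case (my sketch, not Lindley’s wording): 9 heads in 12 tosses has likelihood proportional to θ⁹(1 − θ)³ whether the design fixed n = 12 in advance (binomial) or tossed until the 3rd tail (negative binomial), yet the two designs yield different p-values for H₀: θ = 1/2.

```python
from scipy.stats import binom, nbinom

# Textbook illustration of a Likelihood Principle violation:
# 9 heads in 12 tosses, likelihood proportional to theta^9 (1-theta)^3
# under either design, tested against H0: theta = 0.5.

# Design 1: n = 12 tosses fixed in advance, count heads.
p_binomial = binom.sf(8, 12, 0.5)       # P(X >= 9) ~ 0.073

# Design 2: toss until the 3rd tail appears, count heads along the way.
# scipy's nbinom counts "failures" (here: heads) before the n-th
# "success" (here: tails); under H0 both outcomes have probability 0.5.
p_negbinomial = nbinom.sf(8, 3, 0.5)    # P(X >= 9) ~ 0.033

print(p_binomial, p_negbinomial)
# Same likelihood function, different p-values: at the 5% level one
# design "rejects" and the other does not.
```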
Bacchus, F., Kyburg Jr., H.E., and Thalos, M. (1990). “Against Conditionalization,” Synthese 85: 475–506.
Kyburg Jr., H.E. (1993). “The Scope of Bayesian Reasoning,” PSA 1992, vol. 2: 139–152.
Lindley, D.V. (1976). “Bayesian Statistics,” in W.L. Harper and C.A. Hooker (eds.), Foundations of Probability Theory, Statistical Inference and Statistical Theories of Science, Volume 2. Dordrecht, The Netherlands: D. Reidel: 353–362.
Senn, S. (2011). “You May Believe You Are a Bayesian But You Are Probably Wrong,” Rationality, Markets and Morals 2: 48–66.
I get really impatient with all of this philosophical wanking, both pro- and anti-subjective-Bayesian. If I discover that two claims I hold are contradictory, I do not immediately hold that all claims are simultaneously true or false, contra the principle of explosion. Is exception-handling really so hard to understand?
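For what it’s worth, the commenter’s analogy is easy to spell out in a sketch (mine, and not anyone’s official proposal; the renormalizing repair is just one arbitrary handler): treat detected incoherence as an exception to be handled by a repair step, the paper’s “conversion,” rather than as a license to infer everything.

```python
# A sketch of the commenter's analogy (not anyone's official proposal):
# treat detected incoherence as an exception to be handled by a repair
# step ("conversion"), not as grounds for explosion.

class IncoherenceError(Exception):
    pass

def check(beliefs):
    """Raise if degrees of belief in A and ~A fail to sum to 1."""
    if abs(beliefs["A"] + beliefs["~A"] - 1.0) > 1e-9:
        raise IncoherenceError(beliefs)

def repair(beliefs):
    """One possible handler: renormalize so the two beliefs sum to 1."""
    total = beliefs["A"] + beliefs["~A"]
    return {k: v / total for k, v in beliefs.items()}

beliefs = {"A": 0.4, "~A": 0.7}   # incoherent: sums to 1.1
try:
    check(beliefs)
except IncoherenceError:
    beliefs = repair(beliefs)     # "convert", then carry on updating

print(beliefs)  # {'A': 0.3636..., '~A': 0.6363...}
```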