(Billy Joel, “She’s Always a Woman”)
If we agree that we have degrees of belief in any and all propositions, then, it is often argued (by Bayesians), if your beliefs do not conform to the probability calculus, you are being incoherent, and can be made to lose money for sure (by a clever enough bookie). We can accept the claim that, were we required to take bets on our degrees of belief, then, given that we prefer not to lose, we would not accept bets that ensured our losing. But this is a tautology, as others have pointed out, and entails nothing about degree of belief assignments. “That an agent ought not to accept a set of wagers according to which she loses come what may, if she would prefer not to lose, is a matter of deductive logic and not a property of beliefs” (Bacchus, Kyburg, and Thalos 1990: 476).[i] Nor need coerced (or imaginary) betting rates actually measure an agent’s degrees of belief in the truth of scientific hypotheses.
Nowadays, surprisingly, most Bayesian philosophers seem to dismiss as irrelevant the variety of threats of being Dutch-booked. Confronted with counterexamples in which violating Bayes’s rule seems perfectly rational on intuitive grounds, Bayesians contort themselves into a great many knots in order to retain the underlying Bayesian philosophy while sacrificing updating rules, long held to be the very essence of Bayesian reasoning. To face contemporary positions squarely calls for rather imaginative deconstructions. I invite your deconstructions (to email@example.com) by April 23 (see So You Want to Do a Philosophical Analysis). Says Howson:
It used to be that frequentists and others who sounded the alarm about temporal incoherency were declared irrational. Now, it seems, it is the traditional insistence on updating by Bayes’s rule that was irrational all along.
“There is nothing inconsistent in planning to entertain a degree of belief that is inconsistent with what I now hold, I am merely changing my mind” (Howson 1997: 287).
But one would have thought that the point of the inductive rule was to show how one ought to change one’s mind rationally. This, apparently, it does not do. The Bayesian never gives in, he just changes his mind.
The “Motley Jumble”
A main reason Howson and Urbach (2006: 83) dismiss what they call a “motley jumble of ‘justifications’ [for Bayes’s rule] in the literature” is this:
While an agent may assign probability 1 to event S at time t, i.e., P(S) = 1, he also may believe that at some time in the future, say, t’, he may assign a low probability, say, .1, to S, i.e., P’(S) = .1, where P’ is the agent’s belief function at later time t’.
Let E be the assertion: P’(S) = .1.
So at time t, P(E) > 0
But P(S|E) = 1, since P(S) = 1.
Now, Bayesian updating says:
If P(E) > 0, then P’(·) = P(·|E).
But at t’ we have P’(S) = .1,
which contradicts P’(S) = P(S | P’(S) = .1) = 1, by updating.
It is assumed, by the way, that learning E does not change any of the other degree of belief assignments held at t (never mind how one knows this).
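To make the contradiction concrete, here is a minimal numerical sketch. The .3 credence assigned to E is a hypothetical number chosen for illustration; everything else follows the derivation above.

```python
# Minimal numerical sketch of the conditionalization counterexample.
# At time t the agent is certain of S: P(S) = 1.
# E is the proposition "P'(S) = .1", and the agent gives E
# positive credence at t (the 0.3 is an arbitrary illustrative value).

def conditionalize(p_e, p_s_and_e):
    """Bayesian updating: P'(S) = P(S | E) = P(S & E) / P(E)."""
    return p_s_and_e / p_e

p_e = 0.3          # P(E) > 0 at time t
p_s_and_e = p_e    # since P(S) = 1, P(S & E) = P(E)

p_prime_s_by_rule = conditionalize(p_e, p_s_and_e)
print(p_prime_s_by_rule)  # 1.0: updating on E forces certainty in S at t'

p_prime_s_stated = 0.1    # but E itself asserts P'(S) = .1
assert p_prime_s_by_rule != p_prime_s_stated  # the contradiction
```

Whatever positive value P(E) takes, conditionalizing on E returns probability 1 for S, while E itself says the updated probability is .1; no assignment satisfies both.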
The examples at the heart of this variation on the counterexamples are found in William Talbott (1991), and have been sketched by many others. In one of them, S is:
S: Mayo eats spaghetti at 6 p.m. on April 15, 2012.
P(S) = 1,
where P is now my degree of belief in S (time t), and E is:
E: P’(S) = r, where r is the proportion of times I eat spaghetti (in some appropriate period), say, r = .1.
As certain as I am of spaghetti today, April 15, 2012, I also believe, rationally, that by this time next year I will have forgotten about it, and to obtain P’, I will (rationally) turn to the relative frequency with which I eat spaghetti[iv]. Or so the example goes. Variations on the counterexample involve current beliefs about impairment by alcohol or drugs.
One might wonder how examples like this could cause an account to founder at the fundamental level. I’m not claiming to be up on the latest twists and turns of this saga. Ironically, though, the error statistician has no trouble accommodating the two probabilities of events causing the trouble here. But we are considering the Bayesians, and in particular Bayesian philosophers. They generally want to be able to assign probabilities to any propositions (in a language), not just events within statistical models.
One way some Bayesian philosophers explain the problem is that there is both relative frequency information such as P’(S) = .1 and also, since this is known, P’(P’(S) = .1) = 1. Bayesian epistemologists, to my knowledge, grant the counterexamples, but do not give up on the project, only on Bayesian updating. Bayes’s rule holds, they seem to allow, just when it holds. (Of course it is not just philosophers who have thrown over “betting coherence”; default Bayesian statistician Jim Berger (2006) says that it is certainly too strong.) What is the current state of play here?
Howson (1997) endorses “the possibly surprising thesis that the Bayesian theory has no such rules….This does not immediately cast doubt, or more than there was before, on the validity of such classical results as the convergence of opinion theorems, since these are framed in terms of your prior probability that your prior conditional probability will exhibit suitable convergence” (Howson 1997: 288-9).
Is he saying that those results were always a matter of your believing in your beliefs converging (using Bayes), and you’re still free to believe this? What am I missing?
“Bayes’s Rule Is ‘Completely Arbitrary,’” Say (some) Bayesians
To say there is a large amount of work by Bayesian philosophers on so-called Dutch-book arguments is a vast understatement. Howson and Urbach (2006: 83) express frustration that the field never tires of generating slight variations on the same counterexample:
“Invalid rules breed counterexamples. What is surprising in the case of conditionalization is that nobody seems to have realized why it . . . is anything other than a completely arbitrary rule when expressed unconditionally.”
Is it true that “invalid rules breed counterexamples”? Usually, a rule is found to be invalid on the basis of one counterexample. That’s the definition of an invalid rule. Then we move on and do not have to keep proving that it’s invalid. Howson is surprised by the volume and variety of counterexamples that continue to crowd the philosophical literature, along with new statements of conditions under which Bayes’s rule, and spin-off rules, hold. But I do not think he should be surprised. By the time students work through these conundrums as graduate students, the game can have a (fascinating[v]) life of its own. Overthrowing the research paradigm is not on. “And she never gives in, / She just changes her mind.”
- Bacchus, F., H. E. Kyburg, and M. Thalos (1990). “Against Conditionalization,” Synthese 85:475-506.
- Berger, J. (2006). “The Case for Objective Bayesian Analysis (with discussion),” Bayesian Analysis 1, 385-402.
- Howson, C. (1997). “A Logic of Induction,” Philosophy of Science 64(2):268-290.
- Howson, C. and Urbach, P. (2006). Scientific Reasoning: The Bayesian Approach, 3rd ed. La Salle, Illinois: Open Court.
- Laudan, L. (1997). “How About Bust? Factoring Explanatory Power Back into Theory Evaluation,” Philosophy of Science 64(2): 306-316.
- Mayo, D. G. (1996). Error and the Growth of Experimental Knowledge. Chicago: University of Chicago Press.
- Mayo, D. G. (1997). “Duhem’s Problem, The Bayesian Way, and Error Statistics, or ‘What’s Belief Got To Do With It?’” and “Response to Howson and Laudan,” Philosophy of Science 64(2): 222-244 and 323-333.
- Talbott, W. (1991). “Two Principles of Bayesian Epistemology”, Philosophical Studies 62: 135-150.
[i] On these grounds EGEK (Mayo 1996) dismissed the Dutch Book literature as irrelevant for scientific inference. But maybe the newest disavowals are of interest.
[ii] This occurred in a Philosophy of Science 64(2) exchange shortly after EGEK appeared.
[iii] I am not certain about the work that’s performed by “being induced”, but I’ll leave this to one side.
[iv] These are Talbott’s numbers; I virtually never eat spaghetti.
[v] In somewhat the same sense that puzzle solving can be addictive, but here you can also get publications!