Betting, Bookies and Bayes: Does it Not Matter?

In a post on his blog today, Gelman offers a simple rejection of Dutch Book arguments for Bayesian inference:

“I have never found this argument appealing, because a bet is a game not a decision. A bet requires 2 players, and one player has to offer the bets.”

But what about dynamic Bayesian Dutch book arguments, which are thought to be the basis for advocating updating by Bayes’s theorem? Betting scenarios, even if hypothetical, are often offered as the basis for making Bayesian measurements operational, and for claiming Bayes’s rule is a warranted representation of updating “uncertainty”. The question I had asked in an earlier (April 15) post (and then placed on hold) is: does it not matter that Bayesians increasingly seem to debunk betting representations?
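For concreteness, here is the sure-loss scenario a (static) Dutch book exploits, with illustrative numbers of my own: announced betting rates on an event and its complement that sum to more than 1 guarantee the bettor a loss in every state of the world.

```python
# Sure-loss ("Dutch book") sketch with illustrative numbers: an agent
# announces betting rates on an event A and on not-A that sum to more
# than 1; a bookie who sells the agent both tickets guarantees the
# agent a loss however A turns out.
q_A, q_not_A = 0.6, 0.5   # announced rates (incoherent: 0.6 + 0.5 > 1)
stake = 1.0               # each ticket pays `stake` if its event occurs

cost = (q_A + q_not_A) * stake  # agent pays 1.1 up front for both tickets
payoff = stake                  # exactly one of A, not-A occurs, paying 1.0
net = payoff - cost             # -0.1: a sure loss in every state of the world
```

The coherence requirement (rates on an exhaustive, exclusive pair summing to exactly 1) is precisely what rules this out.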

Categories: Statistics


27 thoughts on “Betting, Bookies and Bayes: Does it Not Matter?”

  1. Stephen Senn

    I think that the Dutch book argument is powerful in theory but far from compelling in practice. It is an argument about how to remain perfect, in this sense: a necessary condition of being a perfect inferential machine is coherence. If you think that coherence is all, it is also sufficient, but I am not convinced that it is all. However, if you don’t have the necessary condition you can’t be perfect. On the other hand, we applied statisticians have accepted that we belong to a fallen race but are interested in being better. Dutch book arguments can be nice to check, but they can’t be the be-all and end-all of our programme of self-improvement.

  2. The counterexamples to the dynamic Dutch book arguments seem to falsify Bayes’ rule as a rational/normative, or even desirable, way of updating. So it isn’t clear how it stands as a role model for being better, if Bayesians are taking the counterexamples to show that something else is actually better. What do you think of them? (See my earlier post.) There is no rejection, by the way, of deductive logic and non-contradiction where relevant: indeed, from deductive logic it is a tautology that one should avoid accepting a particular sure loss if one wants to avoid that particular sure loss.

  3. David Rohde

    de Finetti used betting arguments in his famous paper “Foresight: Its Logical Laws, Its Subjective Sources”, but then dropped them in “Theory of Probability”; they are not necessary. Frank Lad’s Operational Subjective Statistical Methods is the most accessible book on the subject that I know, and it discusses subjective prevision and probability without betting arguments, with nice examples.

    de Finetti also did not advocate “updating”: P(X_2|X_1) is simply a different assertion from P(X_2).

    Quite a lot of arguments made in de Finetti’s name are not exactly the position he endorsed…. for example he also was not interested in P(H|D). Presumably you, Gelman and de Finetti all agree on this, but for very different reasons….

    To answer your question, it doesn’t matter, because it was never fundamental. Of course coherence is fundamental, but this is quite distinct from betting arguments.

  4. I think that the most (in)famous use of Dutch book arguments in “practice” is in the theory of option pricing and we all know where that led.

    • See my last “phil stock” post for something on credit default swaps/JP Morgan.

      • David Rohde

        I think arbitrage is closer to the idea of a Dutch book argument…. which of course works if you can trade instantly.

        As I see it, pricing is more an application of decision theory.

  5. Christian Hennig

    I always liked de Finetti’s insistence on an operational way to determine probabilities, although for me it is important never to forget that this can only be an idealisation. In this respect, I think that the betting metaphor works for Bayesians. Relative frequencies work, too, despite the obvious idealisation of requiring infinite repetition in frequentism. I don’t see anything else that the Bayesians have that would work as well, and therefore I’m always puzzled about the meaning of, e.g., “the probability of A is 0.42” if this refers neither to an (imagined/idealised) betting rate nor to (infinite, and therefore also idealised) relative frequencies. The whole “generalising binary logic to [0,1]” approach seems too abstract to me.

    Dutch book arguments have been used as a somewhat artificial device to establish formal rules for probability that correspond to a much older intuition about fair betting rates. Something like this was needed in order to motivate the required axioms. There is nothing “natural” about it, but on the other hand there is no way to translate an apparently helpful and reasonable intuition into axioms you can work with without having such an artificial, but hopefully well-motivated, device. It is a “bridge” between reality and formalism.

    I think that Gelman is right in the sense that the Dutch book setup is not realistic in most cases, and that in some cases even a Bayesian is well advised to do things that would allow a Dutch book against her if there were an opponent who would use this. I think, however, that he underestimates the role of arguments like the Dutch book one for giving probabilities a meaning. It is clearer to say “what we are doing is based on a formalism that has such and such meaning in an artificial but well explained and understandable setup and we may violate its rules from time to time because they are an idealisation that doesn’t always work well” than “we get rid of such bridges to reality altogether because sometimes they don’t work well” without explaining what should be in their place.

    • Christian: I think by your comment that we (mostly) agree, but I’d go much further. Formal models, in general, may strictly speaking be regarded as only approximate or idealized representations of actual entities, processes or phenomena, but I don’t see this as analogous to the “bridge” required by the subjective Bayesian use of probabilistic models. I don’t even “get” the purported bridge between probabilities and some kind of measure of evidence (or warranted belief or the like) in a hypothesis, if only because the truth of a hypothesis is not equivalent to the occurrence of an “event”. (I think it stems from a deep equivocation of language, but I put that aside).
      Standard statistical models of data-generating processes work because, for one thing, they need only capture rather coarse properties of the phenomena being probed (e.g., the relative frequencies of events need to be specifiably close to those computed under the statistical models). For another, their adequacy for the task at hand may be checked by distinct tests. So even if I imagine the definition of degrees of belief in terms of bets or selling tickets or what have you, the bridge isn’t there: even if I could figure out exactly what I would offer for a ticket or a bet on a theory, I fail to see that the result would latch on to how strongly I (do or should) believe in the truth of the theory. Worse, in trying to figure out how much to pay for a ticket that “wins” if general relativity theory is true, say, I would be looking in totally the wrong place for capturing the nature and extent of the current evidence for GTR.

      • Christian Hennig

        OK, yes. This is why de Finetti emphasises that probabilities are ultimately about observable events, not about general theories (and one may wonder, and I even recall having seen some literature on how to adjust Bayesian probabilities in a potentially incoherent way if one realises that one loses all the time).

        • But Bayesians claim to be assigning probabilities to any “proposition” and are billing themselves as relevant for causal hypotheses and theories. None of these are observable events. I think your point gets to the very heart of the long-running confusion within Bayesianism of all stripes. We can also assign probabilities to proper events, and we do so by warranted inferences to statistical hypotheses (that assign those events probabilities).

  6. Christian Hennig

    Well not all Bayesians do this, but you’re probably right about most of them. (If anything, I’d certainly rather be a “de Finettian” than a Bayesian in this sense.)
    By the way, another thing I find puzzling about this is if you assign a Bayesian probability of 0.2 to “a certain general theory is true” and it is actually not precisely but approximately true (whatever that could mean to a constructivist like me ;-), does this rather mean, in the sense of these Bayesians, that the 20% or the 80% event occurs? (OK, perhaps this point is not that original as many Bayesians are aware of the problems that come with assigning nonzero probabilities to point null hypotheses…)

    • Christian: Your point, whether original or not, is an important part of what makes probability, in the sense of the probability calculus, a wrong-headed way to capture how well warranted a theory is (or how much evidence we have for it, or any of the typical phrases that bear upon epistemological stances). If you have excellent grounds for accepting a binomial model for n coin tosses, with p = .5, you might derive .2 as the probability for an outcome consisting of some number of successes, but it would seem odd to say the amount of evidence you have for that number of successes is .2. It is relatively harmless to talk that way when dealing with events derived from a probability model M that has been warranted (and note that M itself is not assigned a probability). Yet this is far too limited for science. But even here, and this is my main point just now, it would seem an odd way to talk (am I the only one who thinks this?). It would seem more sensible to say that, having warranted M, your evidence for the event with derived probability .8 and your evidence for the event with derived probability .2 are both equal to the evidence for statistical model M—and, again, the evidence for M is not the result of Bayesian updating, but of some account that allows inferring M (e.g., as well corroborated or severely tested or the like). There’s no reason to suppose the evidence for binomial model M here is even quantifiable, but if it is, it’s not a posterior probability, and it wouldn’t obey the probability calculus, nor should it. (One may of course report relevant error probabilities.)
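The binomial arithmetic above can be made concrete (the particular n is my choice, picked so the derived probability comes out near .2): under a warranted model M = Binomial(10, .5), the event “exactly 6 heads” has probability 210/1024 ≈ .205, derived from M; on the view just sketched, the evidence attaches to M, not to that number.

```python
from math import comb

def binom_pmf(k, n, p):
    """P(exactly k successes in n trials) under a Binomial(n, p) model."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

# With the warranted model M = Binomial(10, 0.5), the derived probability
# of the event "exactly 6 heads" is 210/1024, roughly 0.205.
p_event = binom_pmf(6, 10, 0.5)
```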

  7. I don’t think I understand Gelman’s complaint. Why can’t the second player — the opponent who sets the bets — be Nature? If the bookie *can* be Nature, then how is betting different from making a decision?

    • Yes, and that’s why the problem also holds for the “subjective prevision” representation that a commentator earlier noted (as a way to avoid bets). I hope Gelman will weigh in here.

      • David Rohde

        Nature won’t normally try to turn you into a “money pump”. Removing the other player removes some game theoretic complications…

        Subjective probability is a primitive in decision theory; that is what is important. Betting is a convenient but imperfect way to illustrate the idea…

      • I find betting to be an enjoyable pastime but I don’t see it as foundational to anything. When somebody tells me I should put my money where my mouth is, I’m like . . . huh? Whatever. I’ve bet against people but I’ve never bet against Nature.

        • Gelman: I totally agree. I don’t know if others have this experience, but whenever I find myself thinking or saying, “I’d bet X (will occur)” it is always as a way to express, “I don’t have anything like the evidence needed to back up a claim here, so I’d be taking a wild guess, just for the fun of it, to see if (an ill founded) intuition holds!” It’s a way of saying this is a sheer leap. Things are only a bit better betting on stocks.

  8. David Rohde

    Some Bayesians do these things, but the stricter operational subjectivists do not (assign probabilities to propositions).

    This paper by David Freedman gives an overview of the distinction from a more or less non-Bayesian point of view: http://www.stat.berkeley.edu/~census/fos.pdf (he calls operational subjectivists radical subjectivists, although this is strange terminology, as it is arguably a very conservative position…). You seem well connected; maybe you knew David personally…

    If by “We can also assign probabilities to proper events” you mean frequentists can, then I don’t understand you… It seems to me that frequentists distinguish themselves by refusing to do this, and as a consequence problems of prediction and decision making are largely outside of frequentist statistics. At least this is true in a formal sense; informally, frequentists can do these things by liberally abusing strict frequentist interpretations.

    If you criticise Bayes on the basis of rejecting the use of P(H|D), there are at least some Bayesians who also do this and won’t recognise what you are criticising as Bayes.

    • Rohde:
      —You seem well connected maybe you knew David personally…
      Not well connected (remember, I’m in exile), but I did know him.

      —It seems to me that frequentists distinguish themselves by refusing to do this [assign probabilities to events].

      That’s absurd! That’s the only thing we frequentists assign probabilities to. By inferring a statistical or probabilistic model, we assign probabilities to all the outcomes defined by the rvs in the model. A statistical hypothesis assigns probabilities to the relevant events; that’s how we compute things like P(X > x; H), p-values, etc.
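As a minimal sketch (the numbers are illustrative) of the kind of computation meant by P(X > x; H): a one-sided p-value under a binomial hypothesis, in which H assigns a probability to the tail event.

```python
from math import comb

def p_value(x_obs, n, p0):
    """One-sided p-value P(X >= x_obs; H) under H: X ~ Binomial(n, p0)."""
    return sum(comb(n, k) * p0**k * (1 - p0)**(n - k)
               for k in range(x_obs, n + 1))

# e.g., 15 or more successes in 20 trials under H: p = 0.5
pv = p_value(15, 20, 0.5)   # = 21700/2**20, about 0.021
```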

      • David Rohde

        I see what you are saying, but …

        A textbook definition of frequentism is usually the refusal to apply probability to a unique event, such as “it will rain tomorrow.”

        A frequentist usually assigns a distribution to a test statistic, rather than to the events themselves. The test statistic is a function over a number of events or measurements under an iid assumption.

        If a less trivial specification (such as an exchangeable specification) is used instead, then the predictive distribution P(X_{N+1}|X_1,…,X_N) is very attractive. This is absent in non-Bayesian statistics and, as I said, prediction is normally not part of the frequentist paradigm. What I see in practice is a plug-in approach (estimate the parameter and substitute it back into the model). This is sometimes useful in practice and sometimes it isn’t, but it isn’t very satisfying philosophically.
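The plug-in heuristic described above, next to the uniform-prior Bayesian predictive it is being contrasted with (Laplace’s rule of succession), can be sketched in a few lines; the data are illustrative:

```python
# N = 8 Bernoulli observations X_1, ..., X_N (illustrative data)
data = [1, 0, 1, 1, 0, 1, 1, 1]
n_success, n_total = sum(data), len(data)

# Plug-in approach: estimate p by maximum likelihood, substitute it back
# into the model, and use it for X_{N+1}; estimation uncertainty is ignored.
p_hat = n_success / n_total          # 6/8 = 0.75
plug_in_pred = p_hat                 # plug-in P(X_{N+1} = 1) = 0.75

# Bayesian predictive P(X_{N+1} = 1 | X_1..X_N) with a uniform prior on p
# (Laplace's rule of succession), which does reflect estimation uncertainty:
bayes_pred = (n_success + 1) / (n_total + 2)   # 7/10 = 0.7
```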

        • The distribution of the rv entails the probabilities/distribution of events; so by obtaining the former one gets the latter. I really think this is one of the big misconceptions about frequentist inference (i.e., the claim it doesn’t assign probabilities to events). Once again, as a tour of some popular Bayesian texts shows, it is common to start out with this charge. I find it ironic that they do not point out the real difference in the two approaches is that Bayesians assign probabilities to hypotheses whereas frequentists do not. However, frequentists employ error probabilistic properties of methods in order to determine how well and poorly tested various hypotheses are, as I’ve noted many times on this blog and elsewhere. This essentially makes good on Popper’s idea of methodological falsification rules.
          Frequentist methods can and certainly do serve for prediction, even if they also want tools for theoretical understanding and explanation. The predictive distribution, as I understand it, uses the sampling distribution–taking an entity from frequentist statistics. When you have time, read some of the literature, e.g., by Jamie Robins, J. Berger.

          • David Rohde

            …as I and others have said several times, operational subjectivists _do_ _not_ assign probabilities to hypotheses (or parameters).

            If you can supply a specific reference where frequentist methods are used for prediction I would be very happy to read it. (I know Berger somewhat, Robins not at all, and have never seen frequentist prediction except through an ad hoc device, such as plugging in an estimate.)

            • I couldn’t even begin… this isn’t the place for trying to fill so very many gaps. I appreciate your interest; please do your homework when you have time. Thanks.

      • Corey

        I think the usual charge is that frequentists refuse to assign probabilities to events that occur outside of the context of a repeatable random experiment. That is, if there’s no way to define a long run frequency, then there’s no probability there. De Finettians assign probabilities to any observable, and Jaynesians assign probabilities to propositions that have definite truth values, even if that truth value is not directly observable.

        • David Rohde

          I guess it is inevitable to have difficulties communicating when we probably agree on little more than that the topic is interesting, and it is great to have a forum such as this to discuss it. Although I think it is more accurate to refer to this as a difference in considered opinions rather than a gap.

          I am sincerely unsure which part of the literature you are directing me to read; is there a topic that these authors have written about to focus on? I fail to see any controversial element when I say that prediction is key to Bayesian methods and the philosophy and practice work well together; for non-Bayesian methods this is not so, and the dominant method in practice is a plug-in heuristic (with well-known problems). Yes, there is some theoretical work on prediction intervals, but this is of little practical importance and only available in very stylised situations…

          The definition of Bayes you give above, and defend in a subsequent post by claiming I. J. Good is also on your side, is in my opinion overly narrow, excluding one of the most admired approaches to the foundations of statistics, the operational subjective approach. Similarly, your focus upon hypothesis testing in most of your discussions is in fact irrelevant to this school.

          The problem of hypothesis testing may be of more interest in philosophy of science, and it is relevant to the “howlers”, and I think you make some reasonable points here, but this focus on hypothesis testing also excludes an important school of thought which in my opinion you do not address at all. I agree that the literature is messy and in places even sloppy, but criticising sloppy versions of an argument is easy. A criticism is much more credible if first you accurately describe what you are criticising. I accept this gives you the job of disentangling a large messy and sometimes contradictory literature, but amongst the sloppy arguments there are also good ones.

          Also, if I. J. Good at some point gave a definition of Bayesian that excluded de Finetti, I don’t think that definition should be considered accurate (I haven’t tried to find the paper).

  9. Corey: It suffices that the setup identify a hypothetical relative frequency or, some would say, a propensity. How do the Bayesians you mention assign their probabilities (to hypotheses or events)? Don’t repeat someone’s line; ponder it anew.
