Is Bayesian Inference a Religion?

Reblogging a stimulating post from the Normal Deviate!

Normal Deviate

Time for a provocative post.

There is a nice YouTube video with Tony O’Hagan interviewing Dennis Lindley. Of course, Dennis is a legend and his impact on the field of statistics is huge.

At one point, Tony points out that some people liken Bayesian inference to a religion. Dennis claims this is false. Bayesian inference, he correctly points out, starts with some basic axioms and then the rest follows by deduction. This is logic, not religion.

I agree that the mathematics of Bayesian inference is based on sound logic. But, with all due respect, I think Dennis misunderstood the question. When people say that “Bayesian inference is like a religion,” they are not referring to the logic of Bayesian inference. They are referring to how adherents of Bayesian inference behave.

(As an aside, detractors of Bayesian inference do not deny the correctness of the logic. They just don’t think…




23 thoughts on “Is Bayesian Inference a Religion?”

  1. I think the Bayesian religionists form a larger group than Normal Deviate tactfully suggests. My current blogpost points up a perfect example of how wild and fallacious charges pass as legitimate criticisms of non-Bayesian methods. What kind of illogic is taken as gospel (by some students)? The truth is: frequentist methodological principles, correctly used, offer the public the greatest power to hold statistically-based policy making accountable.

    • original_guest

      Why do you think there are more fundamentalists out there?

      This opinion surely can’t be based on what gets published, or talked about at conferences; the proportion of the statistics literature that berates other statisticians is mercifully small.

      Have you put your idea up against any form of severe test?

      • OG: Yes, frankly I have (subjected this hypothesis to a severe test), and there’s considerable data within this blog. I also take the evidence to be strong that Wasserman has years of first-hand experience, and wouldn’t dream of writing such a post if this were a slight phenomenon. He writes:
        “They are very cliquish.
        They have a strong emotional attachment to Bayesian inference.
        They are overly sensitive to criticism.
        They are unwilling to entertain the idea that Bayesian inference might have flaws.
        When someone criticizes Bayes, they think that critic just ‘doesn’t get it.’
        They mock people with differing opinions.
        To repeat: I am not referring to most Bayesians. I am referring to a small subgroup. And, yes, this subgroup does treat it like a religion. I speak from experience because I went to all the Bayesian conferences for many years,….”

        What I object to is the irrationality and illogic (as in my last post): that people wouldn’t think through things on their own, that they would vilify well-meaning persons and methods… Normal Deviate is courageous to tell it like it is (and to attach his name to it).

        • original_guest

          I agree about irrationality and illogic, and that Wasserman has lots of experience. I also read his post.

          But my question to you was about the number of fundamentalists – Wasserman explicitly says the subgroup is small; you disagreed – apparently based on what’s on this blog. But posts/comments and links from this blog, which attracts those who wish to debate foundations, are surely a biased sample from which to draw inference about this number, no?

          Looking instead, say, at the JSM program, one sees that the amount of work presented by fundamentalist pro-Bayes (or indeed anti-Bayes) authors is dwarfed by the amount of work where statisticians take a pragmatic approach just to get the job done. Looking at leading journals, one sees the same pattern. Both suggest that few statisticians are actively seeking to vilify or mock colleagues of other philosophical persuasions.

          • OG: The bottom line is that the object level differs from the meta-level.
            On the JSM, I could only react as an outsider. Here is one remark from an interesting exchange with Norm Matloff: https://errorstatistics.com/2013/08/06/what-did-nate-silver-just-say-blogging-the-jsm/#comments
            “Your idea that the Bayesian Way was the real emperor of the JSM is intriguing. I don’t know what to compare it with, but I assume you would. I supposed that people were mostly using Bayesian techniques in relatively non-controversial ways, conjugate priors or technical tricks to get estimates with good error probabilities (as in the session I chaired).
            On the other hand, in submitting my paper I searched for a category of ‘methodology’ and found only ‘Bayesian methodology’. This supports my contention that when it comes to foundations, frequentists are ‘in exile’”.

            • original_guest

              “The bottom line is that the object level differs from the meta-level.”

              Huh? Sorry, but I remain unconvinced; you are providing anecdotes cherry-picked to support your stance.

              • OG: The “object level” here would be the object of study of the papers/contributions themselves.

                I don’t have a stance; I wish the evidence weren’t there. Strangely, this blogpost has resulted in some people sending me stuff (privately) that is at a whole different level, but I will dismiss it as fringe…while keeping an eye out.

  2. I thought it was well-known that militant Bayesians have exactly the properties Larry described, notably the cliquishness. I might even use the term “Messiah complex.”

    • Norm: Good to hear from you. (They will aver it is a persecution complex, a holdover from 50 years ago.)
      What I especially dislike is how they personalize it, as when one Bayesian* shouted at me (in a public forum): “One day, you’ll realize we are right!” It’s like some of my old boyfriends: “Some day, you’ll realize how great I really was!” The same confident smile…

      *This was not any of the Bayesians at the forum mentioned in my Senn comment.

  3. Many applied statisticians are not fundamentalist. At the famous statisticians-versus-philosophers conference at the LSE in the early 90s, Colin Howson said that he was dismayed at the spirit of ecumenism amongst the statisticians, pointing out that it was illogical: frequentism and Bayesianism were two pure positions; they could not both be right. However, he failed to note that it was possible for them both to be wrong.

    I think that Normal Deviate is not quite right that there is no doggerel mocking Bayesians. Here is some by Guernsey McPearson: http://www.senns.demon.co.uk/wpoetry.html

    • Stephen: Your poetry is fantastic!

      Now that you mention it, there’s one open (sitting in my rejected posts):

      http://rejectedpostsofdmayo.com/2011/10/08/probability-poetry/
      A toast is due to one who slays
      Misguided followers of Bayes,
      And in their heart strikes fear and terror,
      With probabilities of error!

      The answer to that query (as to the author) is Erich Lehmann. Erich wrote it for me as a result of conversations we had shortly after EGEK came out. It was a few days before I was to face Colin Howson at a Philosophy of Science Association symposium on our recent work, and I had just gotten wind of the criticisms he was planning to lob. Erich was a prince to help me, on very short notice, with a response to one of Howson’s examples; and he gave me permission to use the poem in the published paper.
      “Response to Howson and Laudan”
      http://www.phil.vt.edu/dmayo/personal_website/(1997)%20Response%20to%20Howson%20and%20Laudan.pdf

      If statisticians were behaving at your conference, the opposite was true at a conference shortly afterwards (maybe around 2001?). During the questioning/discussion period of MY paper, one and then two Bayesians jumped up and took over the floor to declare just how wrong I was. They just moved in, and I wasn’t inclined to a yelling match. By the time I could reclaim my place as the speaker, the discussion period was over. But that wasn’t the end of the shouting among some four Bayesians and one or two frequentist defenders, all male (Howson was one of the more vocal ones). Gillies was running the conference and was mortified.

  4. Alexandre

    “This is logic, not religion”.

    Well, is assigning probability to every uncertain event the only way to reasonably treat the problem of uncertainties?

    No, it is not. It is just another belief. Probability *may* be used to model uncertain events. However, if you change *may* (or *can*) in that sentence to *should* (or *must*), you are being dogmatic and irrational, since there is no ultimate proof of such a claim (and there won’t be one, as you would agree if you really understand how math works). Any desiderata used to build such justifications can be reconsidered: the Dutch book argument assumes a particular linear betting game, and we can easily create other games where probability would be the incoherent tool (a toy sure-loss game is sketched after this comment); the Cox axiomatization is too strong for modeling general uncertainties; and so on and so forth.

    We can use probability, but we can’t impose it as the unique tool. If we impose probability as the unique tool for modeling uncertainties, we are acting like priests, closing off other possibilities in favor of a single tool that has never been proved to be the only one.

    Hard-core Bayesians use probability to model every uncertain event, while classical statisticians (frequentists, likelihoodists, and so on) consider probabilities only for some (and not all) uncertain events. In the Bayesian community, coherence is always defined in a biased way that tries to demonstrate that probability is the unique right tool; but we can always define coherence by convenience. There is no unique way of defining coherence or any other abstract concept; there are infinitely many consistent ways to do so.

    Best,
    Alexandre.
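
    To make the Dutch book point concrete, here is a minimal sketch (my own toy example in Python; the quotients and the function name are made up purely for illustration) of the sure-loss game that argument relies on: an agent whose betting quotients for A and not-A sum to more than 1 can be sold both bets at linear stakes and then loses on every outcome.

        # Toy Dutch book: incoherent betting quotients guarantee a sure loss.
        def net_payoff(price, stake, wins):
            # Buying a bet: pay price*stake up front; collect stake if the bet wins.
            return (stake if wins else 0.0) - price * stake

        q_A, q_not_A = 0.6, 0.6  # incoherent: q(A) + q(not A) = 1.2 > 1
        stake = 1.0

        for A_occurs in (True, False):
            total = (net_payoff(q_A, stake, A_occurs)
                     + net_payoff(q_not_A, stake, not A_occurs))
            print(f"A occurs: {A_occurs}, net payoff: {total:+.2f}")
        # Prints -0.20 in both cases: a guaranteed loss, i.e. a Dutch book.

    Note that the sure loss materializes only if the agent must accept any such bet at linear stakes, which is exactly the kind of assumption the comment above says can be reconsidered.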

  5. Since I posted this, I’ve received a number of strange e-mails, some of which reveal quackiness that even I had been only dimly aware of, if at all. In a couple of YouTube videos sent to me, we hear about groups linked to the “Less Wrong” blog (which I’d seen before) who espouse the view that humans will be faced with the threat of being overrun by evil robots, and that the only way to fight them is with Bayes’ theorem. Something like that. I never knew about this*, and am not sure what to make of it. I’m guessing readers know more. Here’s one video (by a critic) that I was sent:

    *except that Jack Good used to go on about something like this. He originated the idea, or so he said, but then changed his mind about the right way to prevent it. But I was never sure if I should take him seriously; he had a lot of speculative ideas.

    • Corey

      Mayo: I noted a number of bad arguments and outright falsehoods in the video, but it’s not worth my time to go through them case by case. I’ll trouble myself to point out one reasonably representative flaw in this dude’s style of argument (starting at 6:26). He thinks that *just* pointing to the libertarian political views of Yudkowsky (who is just one researcher at MIRI, albeit the one who originated the Friendly AI idea) and of the philanthropist providing most of the funding, Peter Thiel, is enough to make the case that MIRI is not apolitical, as it claims to be. To actually make the case that MIRI is politically active, it’s necessary to highlight some actual, y’know, political activity.

      I’ll also point out that MIRI’s advisory board is not lacking in university professors, including philosopher Nick Bostrom. As far as I can tell, MIRI is about as wacky and out of the mainstream as Bostrom; I don’t have a good feel for exactly how far out that is.

      In short, this video gives a rather jaundiced and inaccurate view of LessWrong, CFAR, and MIRI. (These are separate entities for good reasons, a fact which seems to escape the narrator — the whole point of splitting CFAR off from MIRI was to separate the fairly mainstream game-theory-based notions promulgated by CFAR from the more tendentious AI theorizing of MIRI.)

      • Corey: So you’re familiar with these groups then? Never mind the politics, which looks to be a jumble; it’s the neon “Bayes’ theorem” signs and the Bayesian rationality bootcamps that interest me. Seriously.

        • Corey

          Mayo: Yup; I go by Cyan on LessWrong. I’ve read all of the “Sequences”. A buddy and I initiated the Ottawa LW meetup group, and I attended one of the early CFAR rationality bootcamps. I don’t doubt that you’d have little good to say about the course material, but I can attest that they’re not cult indoctrination camps, at least.

          • Corey: So are these groups actually connected to Bayesianism or is that just a symbol for a group interested in AI-type thinking, and maybe futuristic studies? Why would I have little good to say about the course material? We have usually agreed on things to a large extent (on this blog anyway).

  6. Corey

    Mayo: Let’s distinguish between LessWrong, CFAR, and MIRI. LessWrong is a group blog. Registration is open, and contributor reputation is managed by a reddit-style “karma point” system. LessWrong is accurately described by its about page. One piece of LessWrong’s philosophical baggage is the stance that probability theory is the right tool for dealing with one’s uncertainty about the way reality actually is, with Bayes’ Theorem being the tool for incorporating new information into one’s model (a toy numerical update is sketched after this comment). That said, it’s not necessary to take on the entire philosophical baggage of LessWrong to be a well-regarded contributor.

    CFAR is a non-profit spun off from the erstwhile Singularity Institute. It exists to promote LessWrong-style rationality by offering training in epistemic rationality (Bayesian-style), instrumental rationality (game theory and maximizing subjective expected utility), and “internal” rationality (techniques to cope with the fact that a typical human mind is not very rational-agent-like). CFAR’s curriculum is pretty mainstream; its Bayesian stuff is the sort of thing from which this blog is in exile. CFAR, in my view, has pretty good ideas and the capability to improve them where necessary.

    MIRI, formerly the Singularity Institute, is organized around the implications of I.J. Good’s intelligence explosion. “Evil robot overlords”, as the above video puts it, are not the fear here — more like a nanotechnological-engineering-capable optimization algorithm with an accurate and comprehensive model of reality. The canonical example is a paperclip-making AI — it only cares about you to the extent that you are made of atoms it can re-organize to improve its paperclip output. Modeling human values is, like face recognition, something humans do with little conscious effort but that turns out to be quite difficult to encode in algorithmic form. Getting that part of the program right is very important if we only get one shot at it, as I.J. Good’s intelligence-explosion conjecture implies.

    I’m not fully on board with MIRI’s program; I agree that certain alarming conclusions follow given some premises, but I can’t follow the arguments for those premises. I further observe that many people far smarter than myself have given the arguments serious thought and do not accept those premises.
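
    For readers who want the update rule spelled out, here is a minimal numerical sketch (hypothetical numbers chosen only for illustration) of Bayes’ Theorem folding a piece of evidence E into a prior on a hypothesis H:

        # Toy Bayesian update via Bayes' Theorem:
        # P(H | E) = P(E | H) P(H) / [P(E | H) P(H) + P(E | not H) P(not H)]
        prior = 0.01             # P(H): prior probability of the hypothesis
        p_e_given_h = 0.90       # P(E | H): chance of the evidence if H is true
        p_e_given_not_h = 0.05   # P(E | not H): chance of the evidence otherwise

        posterior = (p_e_given_h * prior) / (
            p_e_given_h * prior + p_e_given_not_h * (1 - prior))
        print(f"P(H | E) = {posterior:.3f}")  # ~0.154: the evidence lifts 1% to ~15%

    Whether this rule is *the* right tool for all uncertainty is, of course, exactly what this thread disputes.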

    • Corey: Oy! I just had a chance to read this fully. You wrote: “Its Bayesian stuff is the sort of thing from which this blog is in exile.” Rather than being in exile from it, that is something I’d actively run away from! So it is a reference to I.J. Good’s theory—I missed that, since I’m far too busy these days. But IJG changed his mind! Even though I could never take it all seriously, he was into lots of speculative areas: numerology and the paranormal. Too bad he never went on-line, and would barely even e-mail.

      If you build an account that cannot pick up on intentions, it’s little wonder you can’t get it to recognize humans.

      • Corey

        Mayo: Above, you wrote that Good changed his mind “about the right way to prevent [an intelligence explosion];” did he change his mind about the notion that an intelligence explosion was possible/likely/inevitable?

        When I write that the AI is assumed to have an accurate and comprehensive model of reality, this should be taken as including human intentions. It’s not that the AI doesn’t understand humans — it’s that it has been given good enough instructions about how to care about them.

        Maybe a good way to expand one’s intuition about how this might be possible is to point to James Fallon. He’s a smart and accomplished man, devoted to the advancement of scientific knowledge and the betterment of humankind (abstract notions, note), and he has no natural compassion whatsoever. It’s mere happenstance that he didn’t end up a serial killer.

        • Corey: Did you mean it has (or that it has not) been given good enough instructions?
          Look, we already have robots running the markets in ways we don’t understand, and out-of-control disasters like this:
          http://fukushimaupdate.com

          On the former, I don’t think we should have handed everything over to the robots as we have. The futurists should study what’s happening right now. There are essentially no new controls since the last crash…but it remains hidden. Is spending vast resources on fanciful futuristic speculation overlooking the current threats?

  7. Mark

    Great… now I’m terrified of futuristic robots (but I highly doubt that Bayes’ theorem can save us from them… or anything). Especially if they’re made of this stuff: http://www.zmescience.com/research/plastic-becomes-stronger-every-stress-042432/

  8. Mark: Great! They should call it Nietzschean plastic!

    “highly doubt that Bayes’ theorem can save us from them… or anything”. True; in fact, it’s instructing them to maximize expected utility and the like that can justifiably lead to the decision to enslave humans (or worse), or so I.J. Good sometimes speculated…

I welcome constructive comments for 14-21 days. If you wish to have a comment of yours removed during that time, send me an e-mail.
