What should philosophers of science do? (Higgs, statistics, Marilyn)

Marilyn Monroe not walking past a Higgs boson and not making it decay, whatever philosophers might say.

My colleague, Lydia Patton, sent me this interesting article, “The Philosophy of the Higgs” (from The Guardian, March 24, 2013), when I began the posts on “statistical flukes” in relation to the Higgs experiments (here and here); I held off posting it partly because of the slightly sexist attention-getting pic of Marilyn (in reference to an “irrelevant blonde”[1]), and I was going to replace it, but with what? All the men I regard as good-looking have dark hair (or no hair). But I wanted to take up something in the article around now, so here it is, a bit dimmed. Anyway, apparently MM was not the idea of the author, particle physicist Michael Krämer, but rather of a group of philosophers at a meeting discussing philosophy of science and science. In the article, Krämer tells us:

For quite some time now, I have collaborated on an interdisciplinary project which explores various philosophical, historical and sociological aspects of particle physics at the Large Hadron Collider (LHC). For me it has always been evident that science profits from a critical assessment of its methods. “What is knowledge?”, and “How is it acquired?” are philosophical questions that matter for science. The relationship between experiment and theory (what impact does theoretical prejudice have on empirical findings?) or the role of models (how can we assess the uncertainty of a simplified representation of reality?) are scientific issues, but also issues from the foundation of philosophy of science. In that sense they are equally important for both fields, and philosophy may add a wider and critical perspective to the scientific discussion. And while not every particle physicist may be concerned with the ontological question of whether particles or fields are the more fundamental objects, our research practice is shaped by philosophical concepts. We do, for example, demand that a physical theory can be tested experimentally and thereby falsified, a criterion that has been emphasized by the philosopher Karl Popper already in 1934. The Higgs mechanism can be falsified, because it predicts how Higgs particles are produced and how they can be detected at the Large Hadron Collider.

On the other hand, some philosophers tell us that falsification is strictly speaking not possible: What if a Higgs property does not agree with the standard theory of particle physics? How do we know it is not influenced by some unknown and thus unaccounted factor, like a mysterious blonde walking past the LHC experiments and triggering the Higgs to decay? (This was an actual argument given in the meeting!) Many interesting aspects of falsification have been discussed in the philosophical literature. “Mysterious blonde”-type arguments, however, are philosophical quibbles and irrelevant for scientific practice, and they may contribute to the fact that scientists do not listen to philosophers.

I entirely agree that philosophers have wasted a good deal of energy maintaining that it is impossible to solve Duhemian problems of where to lay the blame for anomalies. They misrepresent the very problem by supposing there is a need to string together a tremendously long conjunction consisting of a hypothesis H and a bunch of auxiliaries Ai which are presumed to entail observation e. But neither scientists nor ordinary people would go about things in this manner. The mere ability to distinguish the effects of different sources suffices to pinpoint blame for an anomaly. For some posts on falsification, see here and here*.

The question of why scientists do not listen to philosophers was also a central theme of the recent inaugural conference of the German Society for Philosophy of Science. I attended the conference to present some of the results of our interdisciplinary research group on the philosophy of the Higgs. I found the meeting very exciting and enjoyable, but was also surprised by the amount of critical self-reflection.

In the opening talk Peter Godfrey-Smith from the City University of New York emphasized three roles for philosophy: an integrative role, whereby philosophy can assess and connect various fields with an emphasis on generic categories and perspectives; an incubator role, where philosophy develops new ideas in a broad and speculative form, which are then pursued in a more focussed and specific way within an individual science; and an educative role, where philosophy teaches various general skills, including critical and abstract thinking. The problem I see with the integrative and incubator roles of philosophy is the high degree of specialization in modern science. It is very hard for a philosopher to keep up with scientific progress, and how could one integrate various fields without having fully appreciated the essential features of the individual sciences? As Margaret Morrison from the University of Toronto pointed out in her talk, if philosophy steps back too far from the individual sciences, the account becomes too general and isolated from scientific practice. On the other hand, if philosophy is too close to an individual science, it may not be philosophy any longer.

I think philosophy of science should not consider itself primarily as a service to science, but rather identify and answer questions within its own domain. I certainly would not be concerned if my own research went unnoticed by biologists, chemists, or philosophers, as long as it advances particle physics. On the other hand, as Morrison pointed out, science does generate its own philosophical problems, and philosophy may provide some kind of broader perspective for understanding those problems.

So then, should we physicists listen to philosophers?

An emphatic “No!”, if philosophers want to impose their preconceptions of how science should be done. I do not subscribe to Feyerabend’s provocative claim that “anything goes” in science, but I believe that many things go, and certainly many things should be tried.

But then, “Yes!”, we should listen, as philosophy can provide a critical assessment of our methods, in particular if we consider physics to be more than predicting numbers and collecting data, but rather an attempt to understand and explain the world. And even if philosophy might be of no direct help to science, it may be of help to scientists through its educational role, and sharpen our awareness of conceptional problems in our research**.

What I want to talk about are the roles of philosophers of science. While I do not disagree with the roles Godfrey-Smith allots philosophers of science, to incubate, integrate, and educate (about things like logic and critical thinking), and his list would not preclude what I have in mind, I would press to go much further. To focus just on one of my own areas of interest, there is enormous unclarity in discussions by statistical practitioners regarding such philosophical notions as objectivity, truth, falsifiability, evidence, inductive inference, and the roles of probability in modeling and inference. It is as if a certain trepidation and groupthink take over when it comes to philosophically tinged notions, and philosophers are rarely consulted to lend insight. When they are, I’m afraid, they do not escape the criticism Steven Weinberg raises in the linked Godfrey-Smith article (i.e., being wedded to a position that grows out of “theory-laden” philosophy, where the theories are philosophical). Fresh methodological problems arise in practice, but philosophers of science are not consulted. Nor is it surprising. Peter Achinstein[2] has often said that scientists do not and should not consult philosophical accounts about evidence, because while scientists evaluate evidence empirically, philosophical accounts are merely based on a priori computations. Sad, if still true.

By and large, philosophers of science have reneged on the promise of the 80s to be relevant to science. In some areas, in particular the one I know best, philosophers of science have gone backwards. Philosophers of statistics were ahead of their time in the 70s and early 80s, engaging in discussions side by side with statistical practitioners (Godambe and Sprott 1971, and Harper and Hooker 1976, come to mind). Contributions to the field were as likely to be by a philosopher as by a statistician. I talk about this much more elsewhere (e.g., the introduction to Mayo and Spanos, Error and Inference (CUP 2010)), so I’m being quick here. Soon after I got my Ph.D., things seemed to dissipate…

Nowadays, while the foundations of statistics are being considered anew by many statisticians, philosophers of statistics are almost nowhere to be found.  Arguments given for some very popular slogans (mostly by non-philosophers), are too readily taken on faith as canon by others, and are repeated as gospel. Examples are easily found: all models are false, no models are falsifiable, everything is subjective, or equally subjective and objective, and the only properly epistemological use of probability is to supply posterior probabilities for quantifying actual or rational degrees of belief. Then there is the cluster of “howlers” allegedly committed by frequentist error statistical methods repeated verbatim (discussed on this blog). Margaret Morrison is right that many ask: is truly relevant philosophy really philosophy? I and a few others[3] think the answer is Yes! I have organized conferences[4] and published papers that address these issues, and it is the focus of a current book, nearing completion.

Even in the Higgs example, recall the controversy about whether particle physicists were misinterpreting their p-values, and the letter-writing campaign by subjective Bayesians, etc. [5]. There is a valid question as to whether it is the philosopher of X’s responsibility to solve philosophical problems in domain X, and the answer will surely depend on the field. But in statistical science—itself sometimes regarded as “applied philosophy of science”—I say the answer is, emphatically, yes! Philosophers’ failure to do so has left them out of one of the most interesting periods in statistical science as well as machine learning.

*For a unit on Popper that includes Duhem’s problem and falsification, see http://errorstatistics.com/2012/02/01/no-pain-philosophy-skepticism-rationality-popper-and-all-that-part-2-duhems-problem-methodological-falsification/

**Michael Krämer is a theoretical particle physicist at RWTH Aachen University and likes philosophy. Follow him on Twitter at @mikraemer


[1] The article’s subtitle is: “Particle physicist Michael Krämer hangs out with philosophers and learns that one should be wary of irrelevant blondes” (whatever that means).

[2] See, for example, p. 2 of the intro to E&I.

[3] Clark Glymour, Jim Woodward come to mind. They, as well as Godfrey-Smith, will be at our O&M conference at Virginia Tech next weekend!

[4] E.g., Statistical Science and Philosophy of Science: Where Do/Should They Meet? For selected contributions and related papers see http://www.rmm-journal.de/htdocs/st01.html. Several of these papers have been discussed in “U-Phils” on this blog. Search for the author or title.

[5] O’Hagan: digest of  responses, discussed on this blog here and here.

  • Achinstein, P. (2001), The Book of Evidence, Oxford: Oxford University Press.
  • Cox, D. R. and Mayo, D. G. (2010). “Objectivity and Conditionality in Frequentist Inference” in Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (D. Mayo and A. Spanos eds.), Cambridge: Cambridge University Press.
  • Godambe, V. and Sprott, D. (eds.) (1971). Foundations of Statistical Inference. Toronto: Holt, Rinehart and Winston of Canada.
  • Harper, W. L. and Hooker, C. A. (eds.) (1976). Foundations of Probability Theory, Statistical Inference and Statistical Theories of Science, Vol. 2. Dordrecht, The Netherlands: D. Reidel.
  • Mayo, D. G. and Spanos, A. (2010). “Introduction and Background: Part I: Central Goals, Themes, and Questions; Part II The Error-Statistical Philosophy” in Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (D Mayo and A. Spanos eds.), Cambridge: Cambridge University Press: 1-14, 15-27.
Categories: Higgs, Statistics, StatSci meets PhilSci | 88 Comments

88 thoughts on “What should philosophers of science do? (Higgs, statistics, Marilyn)”

  1. I think the bit about the irrelevant blonde was an example of a fair comment. :-)

  2. Entsophy

    What should philosophers of science do?

    They should try to read the greatest paper on the philosophy of science (and statistics) written in the 20th century, namely “Where do we stand on Maximum Entropy?” (http://bayes.wustl.edu/etj/articles/stand.on.entropy.pdf).

    Frankly, I don’t think there are any philosophers of science who can read it and understand both the ideas and the mathematics, let alone carry the work any further.

    So when you say “Nowadays, while the foundations of statistics are being considered anew by many statisticians, philosophers of statistics are almost nowhere to be found,” they are nowhere to be found because they weren’t even able to engage with yesterday’s discussion about the foundations (note the paper was published in 1979), and just went back to criticizing Savage et al. because that was a hell of a lot easier.

    • Entsophy: I think many of us read, with understanding, that and other papers from the Bayesian high priests, and were even brought up on them. That’s not the problem. On the contrary, the trouble grows out of the associated supposition that only a Bayesian logic can provide adequate philosophical foundations for statistics, and finding that scientific practice doesn’t reflect the closed world of that discredited philosophy of science. The popular “default” Bayesians remain somewhere in the middle, denying prior distributions reflect beliefs or even probabilities, while striving mightily to demonstrate that their methods have good frequentist performance. In so doing they must renounce the Bayesian foundations, e.g., the likelihood principle. See, for just one of many relevant posts, my comments on Gelman and Shalizi:

      http://errorstatistics.com/2012/06/19/the-error-statistical-philosophy-and-the-practice-of-bayesian-statistics-comments-on-gelman-and-shalizi/

      Also, “Bayesian alternatives” in my 2010 paper with David Cox:

      Cox and Mayo 2010.

  3. Was this the very same Jaynes who denied the truth of the marginalisation paradox? See http://ba.stat.cmu.edu/~isba/articles/2004-Oct-15-02-1.pdf

    • Corey

      stephensenn: Yes, that Jaynes. If you’re interested in the marginalization paradox, you may be interested to read this note by Kevin S. Van Horn. Money quote:

      “Jaynes argues that the problem is not the use of an improper prior, but that the two derivations implicitly condition on different states of information, and hence we should expect to get different answers. In this note I show that Jaynes is wrong: the problem does in fact stem from the use of an improper prior. To Jaynes’s credit, however, the paradox is resolved by carefully following the procedures he advocates in [Probability Theory: The Logic of Science].”

  4. Jaynes’ remarks are given plausibility just when he is speaking about how statistically significant deviations may be taken as evidence of changed physical constraints in real experiments, about probing the systematic influences representing physical constraints and about how probabilities agree with measured frequencies when we have properly represented systematic influences at work in the real experiment (Jaynes, p. 57). But there he is just presenting what frequentist or error statistical modelers show without the distorted “middle man” of modeling knowledge, lack of information and the rest. That is, we can find out about the physical constraints, and in so doing are not merely modeling knowledge of constraints. I have never understood why Jaynes finds it superior to mask what’s going on when it leads to such distortions. By misunderstanding what the frequentist modeler is doing—going by the accusations in this paper and in the rest of his work—Jaynes has foisted an unwarranted representation and misleading language on the scientific goal of detecting statistical discrepancies from statistical models. Everything needed to understand the relationship between derived relative frequencies and probabilities, the approximate nature of statistical modeling of phenomena, etc. may be found clearly expressed in such works as Neyman’s (1952) Lectures and Conferences (among others) without any of the pompousness or straw-man attacks so unjustly raised by Jaynes. And think of all the effort at finding the holy grail of “uninformative” priors, now recognized as a failure. I could go on, but let me just recommend searching this blog.

    I think I do understand what Gelman had in mind just the other day in giving this blog a shout out in a post of his listing related blogs: http://andrewgelman.com/2013/04/29/the-blogroll/
    “I learned about her through Shalizi. Mayo believes in learning through model checking, just like Jaynes (and me)” (He also noted Senn is a frequent guest poster). But after a large number of comments and exchanges between us, our points of overlap invariably reflect shared error statistical goals and not the Bayesian ones Jaynes purports to champion. It is when Gelman (and co-authors) veer off into talk of “the knowledge about these unknowns that Bayesians model as random” that the approach acquires a fuzziness with which he himself so often expresses discomfort. The link to this quote is here: http://errorstatistics.com/2013/04/14/does-statistics-have-an-ontology-does-it-need-one-draft-1/
    (Again, search the blog for more examples).

  5. Christian Hennig

    Nice posting. I wonder, reading this, how important it actually is to assign oneself to a “discipline”, and how important it is to distinguish whether what one does is rather “philosophy” or “science” (or “statistics” or “physics” etc.).
    Obviously we have different backgrounds and qualifications, and a number of things I’m doing are quite clearly statistics and not philosophy, but I don’t see why what is statistics and what is philosophy should drive what I or anybody else is doing. We should treat *interesting* problems that we can treat convincingly, regardless of which field they belong to. Or not?

    • Christian: Sure, but there are philosophically tinged problems and concepts that arise in practice that could be illuminated/disentangled by philosophers—assuming of course they had the background needed for the problem in question. Hopefully we have some examples of this on the current blog. But I agree it depends greatly on the field. The Higgs example, recall, gave rise to the controversy about whether particle physicists were misinterpreting their p-values, and a couple of philosophers jumped in (I was one). I think there’s an increasing interest these days in popular science writings on “bad science”, “bad pharma”, “bad statistics”, “bad economics”—there may be others*. That alone invites philosophical scrutiny as to what’s going on. Of course, many philosophers write on evidence-based policy and values. But I do think we sell ourselves short, often, and follow rather than lead, even wrt philosophical-methodological issues.

      * I just noticed Gelman’s blog today includes some of this:

      http://andrewgelman.com/2013/05/02/7-ways-to-separate-errors-from-statistics/

  6. Cosmologist Sean Carroll, talking on the interpretation of Quantum Mechanics, mentions how embarrassing it is that after 80 years physicists have not come even close to an agreement. ( http://www.youtube.com/watch?v=ZacggH9wB7Y )

    I just wonder if philosophers have a say on this one, and do you?

    I remember talking once to a Physicist who worked with particle detectors placed on satellites and who was finishing a PhD in Statistics (he already had two… there are all kinds of collectors) about the suitability of Calculus in physics.

    I argued that, given that the universe is supposed to be quantized, using Calculus, a continuous and not quantized tool, might lead to delusional results about reality. He mentioned then that many Physicists give way too much importance to the math and that they focus so much on the models that they forget what the purpose of the models is.

    Also, on the reason why scientists do not listen to philosophers Richard Feynman might give us a clue:

    “We cannot define anything precisely! If we attempt to, we get into that paralysis of thought that comes to philosophers, who sit opposite each other, one saying to the other, ‘You don’t know what you are talking about!’ The second one says ‘What do you mean by know? What do you mean by talking? What do you mean by you?’, and so on.”

    In my opinion the solution for this Philosopher/Scientist disconnection is to go back to the root meaning of PhD and make a body of philosophical knowledge mandatory for anyone aspiring to such a title.

    Just the same way that back in the day Surgeons and Doctors were two separate professions, until we realized Surgeons were better Surgeons if they were also Doctors, maybe Scientists would be better scientists if they were also Philosophers.

  7. Fran: Thanks for your comment. To your question, “I just wonder if philosophers have a say on this one, and do you?” I have no dog in any QM debates, but philosopher Prof. Laura Ruetsche, an invited speaker at our O & M conference this weekend, and author of Interpreting Quantum Theories (Oxford 2011), has many novel insights in this arena.
    On the suggestion that it should be mandatory for “doctors of philosophy” to study a body of philosophical knowledge—great idea! But how would we ever agree on the “body of philosophical knowledge”? Maybe it could vary by the program…

    • Thanks for the reference. This QM thing got me thinking… Could we solve the “Bayesian problem” by redefining the question they are answering, turning an inductive process into deduction? Much debate and criticism have gone into the use of priors when doing inductive inference, due to the introduction of information into the process that is nowhere in the problem.

      For example, when trying to infer the probability for a coin to be heads (or tails) when tossed Bayesians will make use of their indifference principle and say “I have no preference for any value of p and therefore I use a uniform distribution as a prior”, then a classic statistician would say “Well, you’re actually giving preference to p=0.5 since this is the expected value of the prior you just put in place, so you’re messing with the inductive process”.
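
The arithmetic behind this exchange is the standard conjugate Beta-Binomial setup; here is a minimal sketch (the helper functions are my own illustration, not anything from the thread):

```python
# A Beta(a, b) prior on a coin's p has mean a / (a + b); after observing
# h heads and t tails the posterior is Beta(a + h, b + t). (Illustrative
# helpers; nothing here comes from the original comment.)

def beta_mean(a, b):
    return a / (a + b)

def beta_posterior(a, b, heads, tails):
    """Conjugate update of a Beta(a, b) prior on p for coin-toss data."""
    return a + heads, b + tails

# The "indifference" prior is uniform on [0, 1], i.e. Beta(1, 1), and its
# prior mean is indeed 0.5 -- the classical statistician's complaint:
print(beta_mean(1, 1))  # 0.5

# It is nonetheless updated by data like any other prior:
a_post, b_post = beta_posterior(1, 1, heads=7, tails=3)
print(beta_mean(a_post, b_post))  # 8/12 = 0.666...
```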

      But how about if, instead of asking what is the most likely value of p given the data, we ask what is the value of p in the most likely universe this data comes from? Since in QM anything that can happen does actually happen, there must be one universe for every possible value of p, and since they do exist, our starting point would be the uniform distribution.

      But by changing the question, the uniform distribution now merely expresses the Quantum Mechanical information we have about how the universe behaves, and by doing so, Bayes’ theorem merely enters into a deductive process, making the inference completely valid.

      Any thoughts on this one? Maybe a bit far-fetched, but I guess blogs like yours are the place for such reckless ideas :P

      • Corey

        Fran: It does not follow that since in QM anything that can happen does actually happen then there must be one universe for every possible value of p, and so on — measure theory doesn’t work that way. Suppose a different Bayesian (me, for instance) says that the appropriate parameter on which to assign a uniform distribution is actually the log-odds, not p? (And in this scenario, what happens to your original criticism that the uniform prior picks out 0.5 as the prior expectation of p?)

        • Hi Corey: I believe it follows for a particular Gedanken where the QM multiple universe is true and someone (or a system) makes an equally likely choice for p (imagine a physicist fabricating the coin with a p based on a quantum random process).

          I am not sure why you believe we cannot use a measure, but you cannot apply a Uniform distribution to the values of the log-odds since its support is +-infinity and Uniform distributions must have limits.

          My criticism holds as long as you introduce in your model information you don’t have, but if we assume this QM framework (or others) Bayesians can find a question for their answers.

          In other words, Bayesians would not have to resort anymore to the principle or mathematical property they fancy the most to set up a prior, but rather use the appropriate prior that comes out of their Gedanken which, by the way, wouldn’t have to be a uniform distribution.

          This way Classical Statisticians and Bayesians would both be right in their own domain, and both answers would make sense from each other’s perspective.

          • Corey

            Fran: The meat of your idea seems to be that the parameter to be inferred is actually sampled according to a QM-derived probability distribution prior to data collection. Multiple universes are beside the point — the key notion is that the prior measure comes from the Born rule, which is independent of any particular interpretation of QM. Making the source of the prior a Gedanken and deriving the Born-rule-implied prior measure is an interesting notion, but it’s not clear to me how (or if!) it restricts the class of possible priors.

            On measure theory: you’ve misunderstood me; perhaps I was too succinct. Your original description of the idea has the form, “Since A, therefore B,” where A is, “in QM anything that can happen [does] actually happen,” and B is, “there [is] one universe for every possible value of p and [] our starting point [is] the uniform distribution.” I was pointing out that B does not follow from A — the appropriate prior measure, howsoever defined, has nothing to do with the asserted one-to-one correspondence between worlds and values of p. The original statement falls prey to the standard objection to the principle of indifference as applied to continuous parameters. Your restatement is free of this problem since it implicitly invokes the Born rule to define the prior measure.

            Perhaps it will shock you to learn that Bayesians have been using improper priors — like the one I suggested — successfully since Laplace’s era. By “successfully” I mean that the resulting posterior distributions are proper probability distributions and give perfectly sensible inferences. The particular prior I suggested gives a reasonable result provided that the observed data contains at least one head and one tail. Now reflect: in this scenario, what happens to your original criticism that the uniform prior picks out 0.5 as the prior expectation of p?
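
A quick numerical check of this point (my own sketch; the integration helper is invented): a flat improper prior on the log-odds transforms to the Haldane prior, proportional to 1/(p(1-p)), and the resulting posterior is a proper Beta(h, t) once the data contain at least one head and one tail.

```python
import math

# A flat prior on the log-odds u = log(p / (1 - p)) transforms to
# prior(p) proportional to 1 / (p * (1 - p)) on (0, 1): the improper
# Haldane prior, Beta(0, 0). With h heads and t tails the unnormalized
# posterior is p**(h-1) * (1-p)**(t-1), which integrates to the finite
# Beta function B(h, t) whenever h >= 1 and t >= 1.

def posterior_normalizer(h, t, n=200_000):
    """Crude midpoint-rule integral of the unnormalized posterior on (0, 1)."""
    total = 0.0
    for i in range(n):
        p = (i + 0.5) / n
        total += p ** (h - 1) * (1 - p) ** (t - 1) / n
    return total

h, t = 7, 3
approx = posterior_normalizer(h, t)
exact = math.gamma(h) * math.gamma(t) / math.gamma(h + t)  # B(7, 3)
print(approx, exact)  # both about 0.00397: the posterior is proper

# Its mean is h / (h + t) = 0.7; no prior expectation of 0.5 is smuggled in.
```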

            • Corey:

              Making the source of the prior a Gedanken and deriving the Born-rule-implied prior measure is an interesting notion, but it’s not clear to me how (or if!) it restricts the class of possible priors.

              I don’t think it does restrict the class of possible priors; it simply creates a framework where the choice of a prior makes sense for a Classical Statistician. For example, imagine you just give me such a coin and ask me “what’s the p?” Then me, being a Classical Statistician, I’ll say “I haven’t tossed it yet, I cannot possibly tell you anything about p”. Now comes a Bayesian and kidnaps me into Bayesian QM hyperspace and shows me that for every value of p there is one universe, so, being a Classical statistician, now I have information that allows me to establish a frequency of 1 for every universe and that leads me, in this particular experiment, to a uniform distribution of the values of p.

              But actually we don’t need a QM Bayesian perspective. Imagine you have a basket filled with coins with different values of p distributed uniformly; then you pick one and give it to me asking “what’s the p?” Then, being a Classical Statistician, I’ll say p=0.5, and I can say that because now I know about the basket and that you picked the coin from it.

              From a QM perspective the basket in the former example is filled with universes (or Born rule’s measures if you want) and each one with a different p, so the mere fact the coin exists (in the way it was created in our Gedanken) gives us grounds to fairly assume a prior distribution from a Classical perspective because we know our universe was picked among a known distribution of many others.

              Your restatement is free of this problem since it implicitly invokes the Born rule to define the prior measure. Perhaps it will shock you to learn that Bayesians have been using improper priors — like the one I suggested — successfully since Laplace’s era. By “successfully” I mean that the resulting posterior distributions are proper probability distributions and give perfectly sensible inferences.

              Yeah, I am aware of improper probability distributions being successful… sometimes. Because they are not actually probability distributions, they might or might not render a posterior probability distribution, which means the results must always be checked, just in case. To me, calling these mathematical artifacts improper probability distributions makes as much sense as calling a dog an “improper human being”, and, by the way, it always reminds me of the “Shut up and Calculate” principle many physicists adopt in QM, in the sense that “as long as it is useful it goes”, without much worrying about the meaning of what they are doing.

              • Corey

                Fran: I’m not sure what the value of the idea is, then — it seems moot.

                I can’t help but notice that you haven’t given any answer to the question I posed at the end of my two comments upthread.

                • Corey: Oh, it might very well be moot; the idea, though, was to explore whether we might find common ground for different answers to the same question, based on the idea that maybe we are answering different questions.

                  About your question, I believe I did answer, though maybe not in the way you expected.

                  • Corey

                    Fran: The only thing you’ve written about the prior I suggested is “you cannot apply a Uniform distribution to the values of the log-odds since its support is +-infinity and Uniform distributions must have limits,” and this, I have pointed out, is false: this prior “gives a reasonable result provided that the observed data contains at least one head and one tail.” So… what happens to your original criticism that the uniform prior picks out 0.5 as the prior expectation of p?

                    • Corey: Yeah, you pointed out it is false, and I just pointed out that I disagree that these artifacts are probability distributions, “improper” or otherwise.

                      Nonetheless, if I am not wrong, using that “probability distribution” would be equivalent to using the Haldane prior (another artifact) for the p. And then, again, I will ask you: on what information do you base your decision to use a Haldane, unless you assume a Gedanken where someone decides to build a coin with a p following a Haldane? (if that is even possible, given that a Haldane is not a probability distribution)

                      Why should my criticism be updated based on what distribution you use in our Gedanken to build the coin?

                    • Corey

                      Fran: I can’t nest under your comment for some reason. This is intended to be a reply to your comment that starts, “Yeah, you pointed out…”

                      Let me just say that the question I posed is intended to be a parallel discussion to our discussion of your Gedanken approach; I don’t want to mix the discussions together.

                      Anyway, in this instance, balking at the impropriety of the prior will avail you naught. A more conceptually laborious but more mathematically correct way of doing the analysis is to use a general Beta prior and then consider the limit of the posterior distribution as both prior hyperparameters go to zero. This procedure involves no actual improper prior (just like taking a derivative involves no actual division by zero) and yields the same posterior as using the “raw” Haldane prior from the start. (It is often the case that this sort of limiting procedure gives the same answer as simply starting with the improper prior. The marginalization paradox referred to above occurs when this correspondence fails to hold.) So no prior mean is being singled out and no improper prior is involved. Does this give you sufficient reason to update your criticism?
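The limiting procedure described above can be checked numerically; here is a minimal Python sketch (my own illustration, with hypothetical data of 7 heads and 3 tails):

```python
# Sketch of the limiting-Beta procedure: with a Beta(eps, eps) prior and data
# of h heads and t tails, the posterior is Beta(h + eps, t + eps), whose mean
# is (h + eps) / (h + t + 2*eps).

def posterior_mean(h: int, t: int, eps: float) -> float:
    """Posterior mean of p under a Beta(eps, eps) prior."""
    return (h + eps) / (h + t + 2 * eps)

h, t = 7, 3  # hypothetical data with at least one head and one tail
for eps in (1.0, 0.1, 1e-3, 1e-9):
    print(f"eps={eps:g}  posterior mean={posterior_mean(h, t, eps):.6f}")
# As eps -> 0 every prior in the sequence is proper, yet the posterior mean
# approaches h / (h + t) = 0.7, the same answer the "raw" Haldane prior gives.
```

Every prior in the sequence is a proper Beta distribution, yet the limit reproduces the Haldane-prior posterior, which is the point of the limiting argument.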

              • Fran: quite aside from QM space, I don’t see why I’d be interested in an urn of hypotheses, p% of which are true, when it comes to evaluating the evidence for a given hypothesis H. Even if I imagine H was randomly selected from this urn, it would be false to say the frequentist probability for the truth of H = p. Nor is p a measure of the evidence we have in the truth of H. These are all different animals, and different too from assessing what we do and do not know about H and the phenomenon it describes. This was brought up in my discussion of Reichenbach’s attempt.

                So I deny the assumption this would be a fertile meeting ground for Bayesian and frequentists.
                A different place to begin, I claim, is with a principle like this:
                (F) x provides no evidence for H if the procedure used (to generate H and x) would find such good agreement with H even if H were false.
                I don’t know of people who would reject this. By contrast, I can think of many who would reject the idea:
                (B) that knowledge or inference is well represented by updating degrees of belief, based on an exhaustive set of hypotheses that could fit or account for data, and science warrants believing claims that agents(?) accord high probability.

                Thus, we should start with (F) not (B).
                One then moves to “would find such good agreement with high probability”, where the probability depends on how often the tool would do a good or lousy job. Then there’s a positive side: we learn what has severely passed, that is, what has passed tests that would, with high probability, have found flaws but did not.

                Moreover, science isn’t especially interested in highly probable hypotheses; we want highly explanatory, highly fertile, highly interesting hypotheses which can be probed, and where we can learn a lot from probing and finding them false. What we learn, we detach, and do not assign a probability to.

                • Hi Mayo:

                  Fran: quite aside from QM space, I don’t see why I’d be interested in an urn of hypotheses, p% of which are true, when it comes to evaluating the evidence for a given hypothesis H. Even if I imagine H was randomly selected H from this urn, it would be false to say the frequentist probability for the truth of H = p.

                  Oh no no, you see, I claim all of them are true! Not just p% of them. I just want to know which of them I got.

                  Also, I didn’t say frequentist but Classical. The reason I make this distinction is that once I can properly establish a prior distribution, Bayes’ Theorem is a totally valid procedure and arguably the best one. Since I am absolutely entitled to use Bayes’ Theorem when the necessary conditions are in place, I don’t identify at all with the nickname “frequentist”, which Bayesians often use when you question their baseless choices for a prior.

                  When the choice is not baseless I can agree with them and say things like p=0.5.

                  *****

                  About the principles you lay out…

                  On (F): say you have H false and H’ true but very, very close to H. Wouldn’t x for H’ now be providing evidence for H despite H being false?

                  Q: Curiosity… Why (F) and (B)?

                  • Fran: I don’t understand how you are using “frequentist”—surely not as someone who would deny the theorem on conditional probability (which is what applying Bayes’s theorem “when the necessary conditions are in place” would boil down to).

                    • Mayo: You probably have noticed that every time there is a comparison between, let’s say, NHST and Bayes factors, they call the first method “Frequentist” and the second “Bayesian”…. As if a “Frequentist” could not possibly make use of Bayes’ theorem when information for a proper prior is in place!

                      For example, a few days ago I got into an argument and asked in which fields/applications the Bayesian approach excels, and I was given as an example applications that use “Bayesian networks”, as if, again, I had to be a Bayesian to use a properly informed Bayesian network.

                      Well, “Frequentist” is the word Bayesians have chosen to baptize those not using their ways and a particular set of techniques. Since they coined the word, it is only fair that they give it whatever meaning they want, and since it seems they would never consider using Bayes for inference a Frequentist thing to do, I decided to detach myself from the word… Also, I don’t like it.

                      So, because in the Gedanken I proposed there was information for a proper prior, I could join the Bayesians in their estimation and say p=0.5 with them, and I used the Classical Statistician name for the reason explained above.

                    • Fran: I haven’t really heard that usage with any consistency, if anything, Bayesians lo-o-ove to call non-Bayesians classical statisticians. I dislike that term because of the connection to the classical interpretation of probability. Anyway, you begin to see why I’ve coined a new term, error statistician, for one who employs probabilistic ideas in inference in order to evaluate error probabilities. Within that heading are behavioristic contexts and scientific ones. In the former, the concern is controlling long-run error rates; in the latter, error probabilities are used to assess the severity with which various claims, and discrepancies from those claims, are warranted. The basis for the second is the severity principle I’ve alluded to before…

          • JohnQ

            Fran, I’m not sure I understand what you’re saying. If you flip a coin 400 times, then for every way in which you could get a frequency of heads f = .25 there are 10^22 ways to get f = .5.

            Similarly, for every way you get f = 0, there are 10^119 ways to get f = .5.

            You seem to be saying that a Bayesian looking at these numbers has no legitimate reasons for favoring .5, but they would have solid reasons once multiple universes are considered.
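These counts are easy to check with exact integer arithmetic; a short sketch using Python’s standard-library `math.comb` (the orders of magnitude, not the exact values, are what matter):

```python
import math

# The number of length-400 sequences with exactly k heads is C(400, k).
ways_half    = math.comb(400, 200)  # frequency f = 0.5
ways_quarter = math.comb(400, 100)  # frequency f = 0.25
ways_zero    = math.comb(400, 0)    # frequency f = 0 (exactly one sequence)

print(f"f=.5 vs f=.25: {ways_half / ways_quarter:.1e}")  # on the order of 10^22
print(f"f=.5 vs f=0:   {ways_half / ways_zero:.1e}")     # on the order of 10^119
```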

            • Hi JohnQ:

              Those numbers are irrelevant. For example, if the physicist in our Gedanken manufactures a coin with p=0, then we would only get tails every time we toss the coin. So flipping a coin 400 times would give us 400 tails regardless of how many other ways there are to get f=0.5.

              If a Bayesian gives any relevance to the numbers you cite, he would already be assuming the coin is close to fair and therefore f close to 0.5, with no information for doing so.

              • JohnQ

                Hello again,

                The Bayesian isn’t trying to accurately predict f directly. Lacking a crystal ball they are forced to have a more modest goal. They’re merely trying to identify those frequencies most warranted by the knowledge they really do possess.

                It’s hard to imagine knowing nothing about a coin, but if you really did know nothing about it, you’d still know that for every way f=0 there are 10^119 ways for f=.5. It seems reasonable to identify f=.5 as more warranted than f=0 on the grounds that whatever special causes generate the next sequence seen, it’s safer to assume it will be like one of those 10^119 examples than it would be to assume it’s the one example where f=0. Until they learn more, that’s about the best, and most robust, answer anyone can give.

                Are you saying the Bayesian’s goal is pie-in-the-sky nonsense, but you can make it less absurd using multiple universes?

                • JohnQ

                  I wanted to add to the robustness comment above. A Bayesian using the uniform distribution (out of ignorance) will predict with high confidence that for the next sequence f is between .2 and .8.

                  How robust is this conclusion? Well, the true frequency distribution on the space of 2^400 sequences could have almost any shape whatsoever (even if it radically differs from the uniform distribution in almost unimaginable ways) and the Bayesian prediction would still be borne out, with very high probability, by the next sequence.

                  There is practical value in answering the question “what values are most warranted by what I know” even when you don’t know very much. If any value is highly warranted by this state of information, then predicting that value will necessarily be highly robust with respect to alternate assumptions about what is causing it.
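The prediction above can be made concrete: under the uniform distribution over all 2^400 sequences, the probability that .2 < f < .8 differs from 1 by less than 10^-30. A hedged check using exact rational arithmetic:

```python
import math
from fractions import Fraction

n = 400
# Under the uniform distribution over all 2^400 sequences, exactly k heads
# has probability C(400, k) / 2^400. Sum the "center" where .2 < f < .8,
# i.e. 81 <= k <= 319 heads.
center = sum(math.comb(n, k) for k in range(81, 320))
prob_center = Fraction(center, 2**n)  # exact rational, no rounding

print(float(1 - prob_center))  # the tail probability is astronomically small
```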

                • When talking about coins in this context we should have in mind ideal coins, for which f can be anything within [0,1]. Because we know real coins are symmetrical, we intuitively know that f cannot be far from 0.5. That is why, instead of a coin, try to imagine a black box that screams “heads” or “tails” every time we press a button. There’s no difference from the ideal coin we are talking about, but this might help to clarify the issue.

                  About it being hard to imagine knowing nothing about a coin: well, imagine I send you the black box described above with a preset f. What do you know about f now? Nothing. On what grounds do you consider f=0.5 more warranted than f=0? On none.

                  Think of the example I gave above with the basket filled with coins (or black boxes, if you prefer), each one with a different p, where the values of p follow a uniform distribution. Then we pick a coin from the basket to infer p…

                  What would a Bayesian do? Use a uniform prior.
                  What would a Classical Statistician do? Use a uniform prior.

                  How about if I tell the Bayesian nothing about where the coin comes from?

                  What would a Bayesian do? Use a uniform prior.
                  What would a Classical Statistician do? Use nothing.

                  Bayesians have no choice but to always introduce information into their models.

                  The QM framework idea is meant to play the role of the basket filled with coins, and, since we live in a QM universe, we can always set up a Gedanken with a particular “basket of coins” that kicks off the Bayesian machinery.

                  So instead of Classicals and Bayesians fighting over who has the right answer to the same question, we would be in a situation where both have different answers to different questions, and both would be correct.

                  • JohnQ

                    Fran,
                    “What do you know about f now? Nothing. On what grounds you consider f=0.5 more warranted than f=0? On none.”

                    There are definite reasons. See the robustness comment above. Remember the goal wasn’t to predict which f would happen, but merely which f is most warranted by what you know.

                    In practice the next sequence of n=400 won’t be drawn from some entirely imaginary distribution. It will be described by the time evolution of a massive coupled system of physics equations. We may be ignorant about those equations and what constants they depend on, but whatever they are, we at least know the outcome has to be in the space of 2^400 possibilities. And we know that almost all those possible outcomes lead to f closer to .5 than 0. That is a definite reason to favor .5 over 0. And happily it also explains why we tend to see exactly that when we do toss a coin 400 times.

                    P.S. It’s a technical point, but the frequency of heads in real coin flips is affected more by the subset of phase space (position-momentum space) which defines the allowed set of initial conditions than by the symmetry of the coin. The old idea that the frequency of heads was a physical property of the coin, like mass, has been thoroughly debunked by both physicists and statisticians at this point.

                    • JohnQ

                      Maybe another way of saying it helps. There are 2^400 possibilities. Almost all of them lead to f’s closer to .5 than 0. Unless that coupled system of physics equations conspires to put the next n=400 sequence into one of those highly unusual minority outcomes where f is closer to 0, we’ll see f closer to .5 when we actually perform the experiment with a coin.

                      The key thing to realize (and this is just simple math) is that the above statement is true almost no matter what the true frequency distribution on the space of 2^400 outcomes is. Our prediction based on ignorance will be accurate when we test it unless there are some physics constraints which conspire to make the sequences we observe part of that tiny minority of the 2^400 possibilities in which f is closer to 0.

                      Or to put it another way, our prediction will be accurate unless the true frequency distribution on that 2^400 set of outcomes, puts almost all of its probability mass over that incredibly small subset of outcomes where f is closer to 0. Any true frequency distribution that doesn’t do this, will make the prediction from ignorance an accurate one no matter how far it differs from the uniform distribution.

                    • JohnQ: The “technical point” is obvious, whatever the geometry of a coin or anything else. Applying the same force, in the same direction, in the same place… will render the same result. That is why I was trying to move to the black box example, so that physics doesn’t get in the way.

                      You say: “Maybe another way of saying it helps. There are 2^400 possibilities. Almost all of them lead to f’s closer to .5 than 0. Unless those coupled system of physics equations conspire to put the next n=400 sequence into one of those highly unusual minority outcomes where f is closer to 0, then we’ll see f closer to .5 when we actually perform the experiment with a coin.”

                      This is really irrelevant. You are granting every possibility equal rights when you don’t have any information whatsoever about f. You need no conspiracy for it to fail. That the value of f could be anything does not mean that this “anything” has to be uniform, as you assume.

                      But now we are repeating ourselves so I guess we have reached that point when we kindly agree to disagree :)

                    • JohnQ

                      “You are granting every possibility equal rights”

                      Actually I did the exact opposite. And since this is a mathematical question whose truth is independent of what either of us want to believe I’ll state the relevant mathematical fact again:

                      Almost no matter what the true frequency distribution on the set of 2^400 outcomes is (even if it differs by extreme amounts from uniform), it will be true that the predictions made from the uniform distribution about f (i.e. .2<f<.8) will turn out to be accurate with very high probability (i.e. the next sequence we generate will satisfy this inequality).

                      This happens because almost all outcomes in that 2^400 space make f closer to .5 than 0. So almost no matter where the true frequency distribution places its probability mass, or how much or how little it spreads that mass out, it will be over a subset that mostly makes f closer to .5 than 0, hence giving a similar prediction to the uniform distribution. The only exception is a true frequency distribution which puts almost all of its probability mass over that small (by percentage) subset of the 2^400 that makes f closer to 0. This subset is incredibly small, as evidenced by that 10^119 number given before.

                      Those numbers you think are irrelevant actually imply an extreme robustness property which is highly relevant. Statistics would be useless in practice if these facts weren't true, and Bayesians simply don't need multiple universes to be on a sound footing here.

              • Corey

                Fran: JohnQ has just repeated the text making his (correct) point, but I feel that progress might be made by a more concrete explanation.

                We’re considering the set of strings of length 400 containing only the symbols “H” and “T”. This set has 2^400 members. Let E be the “edge” subset containing only strings with either less than 80 H symbols or less than 80 T symbols, and let C be the complementary “center” subset. Note that C contains such strings as (80 T symbols followed by 320 H symbols) and (“TTTH” repeated 100 times); these two strings are vastly unlikely under any Bernoulli process. JohnQ’s point is just that for every string in E there are about 1.5 x 10^35 strings in C. This is straight-up combinatorics; it uses nothing more complicated than the binomial coefficient.

                Let’s be even more concrete: consider a distribution which puts probability p_C on each string in C and probability p_E on each string in E. Even if p_E is larger than p_C by a factor of a quintillion, the probability of sampling a string in C is ~1, just because there are so many more of them.
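Both numbers in this example can be reproduced exactly; a hedged Python sketch of the E/C count and the quintillion-fold “conspiracy”:

```python
import math
from fractions import Fraction

n = 400
# E: strings with fewer than 80 H symbols or fewer than 80 T symbols; C: the rest.
edge   = 2 * sum(math.comb(n, k) for k in range(80))  # k = 0..79 heads, or tails
center = 2**n - edge

print(f"strings in C per string in E: {center / edge:.1e}")  # on the order of 10^35

# The "conspiracy": each string in E is a quintillion (10^18) times more
# probable than each string in C. Exact probability of drawing a string in C:
prob_C = Fraction(center, center + edge * 10**18)
print(float(1 - prob_C))  # still essentially certain to land in C
```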

                • Hi Corey: I believe I understood JohnQ the first time, but since I disagree about the relevance of the argument, it seems you believe I did not understand the math, and that’s why you keep repeating the same argument in different flavors. Could this be?

                  Also, I obviously failed to communicate why I believe this argument is irrelevant, and I find myself doing the same: looking for different flavors of my disagreement to explain myself.

                  So now, instead of trying to convince you my argument is right, I’ll try to convince you that I understand your argument.

                  A way to generate random numbers following a Normal distribution consists of using a Bernoulli process (tossing coins) and moving up or down by 1 unit (the unit being the accuracy of the generated number, e.g. 0.0001).

                  This way of generating random Normal numbers is highly inefficient (I have actually tested it), but it is good to know it works, because it allows you to see how a Normal distribution can be built by simply tossing a fair coin.

                  So, in your example, any solution with the same number of H and T will be more likely than any other where there is an imbalance; in other words, the most likely frequency of H is 0.5. And in the random number generator that I described, that means the most likely outcome is 0 (the expected value of the Normal distribution).

                  I would like to believe that, at this point, I have shown you that at least I understand the math of your argument so you don’t have to repeat it again. So, if I continue to fail in understanding your point it cannot possibly be the math. Do we agree on this?
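The generator described above can be sketched in a few lines: summing fair-coin steps of +1/-1 gives an approximately Normal result by the central limit theorem. Names and parameters here are my own illustration:

```python
import random
import statistics

random.seed(1)  # reproducible illustration

def coin_walk(n: int = 400) -> int:
    """Sum n fair-coin steps of +1/-1; approximately Normal(0, n) by the CLT."""
    return sum(1 if random.random() < 0.5 else -1 for _ in range(n))

samples = [coin_walk() for _ in range(10_000)]
print(statistics.mean(samples), statistics.stdev(samples))
# The mean should come out near 0 and the standard deviation near
# sqrt(400) = 20, the most likely outcome being a balanced H/T count.
```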

                  • JohnQ

                    Fran, there is no doubt that you understand what you describe, but that’s not the math for the point we were making. I don’t see a way to make it any clearer than Corey’s example (using his notation):

                    You think it has to be the case that p_E/p_C ~1 to make everything work. Even with p_E/p_C~10^15, which is a radical departure from the uniform distribution, you still get Pr(C)~1 effectively.

                    By figuring out which values are most warranted given a low informational state, the Bayesian is also simultaneously delivering the values most likely to be seen in real life even if the true frequencies differ radically from the assumed ones.

                    This point isn’t academic either. In reality the true frequency of occurrence of most sequences in the 2^400 will be zero (i.e. they will never be visited in all of history). Yet even though the true frequency distribution differs radically from the uniform distribution it’s still true that the prediction .2<f<.8 is a very good one when flipping real coins 400 times.

                    This is a mathematical claim. The claim is true. It's highly relevant for explaining how such predictions can be true even though we can't possibly know the true frequency distribution on the set of 2^400 outcomes.

                    • Hi JohnQ, I understand the veracity of the mathematical claim, so really, no need to go on about that. Wherever our disagreement lies, it is not in the veracity of the math claim. I simply believe that the claim that you can make predictions from those truths when no other information is available is a ‘non sequitur’.

                      In fact, when you begin to draw conclusions, it is merely because you assume information is nowhere in the problem to be seen… I am tempted to find another flavor for my explanation… mmm… okay, what the hell, here I go:

                      Imagine that I just want to mess with you, so when I build my “heads”/“tails” talking machine, and before I send it to you, I set it to the frequency that I believe will take you the most trials to estimate well.

                      So, knowing you are a Bayesian, I know you believe some f values are more warranted than others, so I set my machine to f=1 or f=0, the values furthest from your belief.

                      But how about if you are a Classical Statistician? How can I mess with you? Well… I just can’t, you see. It does not matter what value I choose for f, because you assume nothing, and it will take you roughly the same number of tosses to get a good inference.

                      So you see, it is irrelevant how many ways you can combine H and T, since I picked only H (or only T) out of a gigantic set of possibilities to mess with you. And because you don’t know where the box (or the coin) comes from, you cannot infer any value for f from your combinatoric truth.

                    • JohnQ

                      To summarize: you believe that unless p_E/p_C ~ 1 then Bayesian answers are crazy. To make them less crazy you invoke multiple universes.

                      I’ve pointed out that the Bayesian conclusion holds for almost every departure from p_E/p_C ~1 that is consistent with knowledge we actually have, even radical departures such as p_E/p_C ~10^15. You think this is irrelevant.

                      I’ve stated that the Bayesian is merely trying to identify those values which are most warranted by the information they have available. You’ve responded (correctly) that if the Bayesian’s information state is missing key facts, then the values most warranted won’t be those actually observed.

                      You’ve then gone on to say Classical Statisticians don’t have this problem because they can always get the right answer even when they’re missing highly relevant facts. Or maybe you’re saying that Classical Statisticians always possess all the highly relevant facts. Additionally, there is no way for nature or man to fool a Classical Statistician, which is why their conclusions are always correct and why classical statistical studies always lead to highly replicable scientific conclusions. In fact “classical statistical methods” and “replicable conclusions” are pretty much synonyms.

                      Incidentally, how much would you say your opinion on this was informed by your training in classical statistical methods in college? How much by your readings in philosophy, such as those found on this blog?

                    • JohnQ:

                      Frequentists seek the conclusions most warranted by things they don’t know because they’re afraid their conclusions are wrong.

                      Those darn cowards… ;)

                      Bayesians seek the conclusions most warranted by what they know in order to see if those conclusions are wrong.

                      And sometimes they are dead wrong… literally…

                      But they are bravehearts “Challengers”.

                    • JohnQ

                      “And sometimes they are dead wrong… literally…”

                      You got me there, Fran. It’s true that if a Bayesian’s state of knowledge contains things which are false, or if they are missing relevant facts needed to make an accurate prediction, then when they use Bayesian methods to find which predictions are most warranted by their information, they will likely get the wrong answer. I plead guilty.

                  • Corey

                    Fran: You’ve shown me that you understand a bit of the math.

                    I wonder what you now think of your previous assertion: “You are granting every possibility [out of 2^400] equal rights when you don’t have information whatsoever about the f. You need no conspiracy for it to fail.” I showed you a conspiracy in which each string in E is incredibly more probable than each string in C, and yet the prediction 0.2 < f < 0.8 has probability ~1. You further suggest that "[you could] set [your] machine with f=1 or f=0, the values further away from [JohnQ's] believe." What is this but a conspiracy? I claim that a conspiracy is exactly what is needed to make the prediction fail.

                    I think we both recognize that many (most?) frequentist methods are designed to work even under the worst-case sampling distribution in some restricted set, whereas the Bayesian approach is average-case optimal for whatever prior is selected. The problem with methods with worst-case guarantees is that unless Nature really does present the (nearly) worst-case instances quite often, such methods underperform relative to a “reasonable” average-case optimizer. It’s as if frequentist methods are incredibly <a href="http://en.wikipedia.org/wiki/Risk_aversion">risk-averse</a> investors, and so turn down positive-expectation investment opportunities because of a little bit of variance.

                    • JohnQ: Now you are being naughty. You blow what I have said out of proportion into ridicule in your summary. So I guess I can wish you fun with that ;)

                      And about where my opinions come from… I have no King.

                      Corey: Oh! but life is full of conspiracies! And the ways you can make Bayesian beliefs fail are vast. You can check on the disastrous Bayesian approach taken by engineers at NASA to assess the safety of the Shuttle and how it was criticized by Richard Feynman… Was that a conspiracy or just life?

                      And, by the way, you can be risky and a Classical Statistician. I might get a 0.5 estimate while my gut feeling, based on experience and knowledge, tells me the real value should be around 0.7, and go for it…

                      But if I do that, I know that extra 0.2 is mine and mine alone. I don’t give it a fancy prior dress to make it look mathy.

                    • JohnQ

                      This discussion seems to have highlighted the main difference between Bayesians and Classical Statisticians:

                      Bayesians seek the conclusions most warranted by what they know in order to see if those conclusions are wrong.

                      Frequentists seek the conclusions most warranted by things they don’t know because they’re afraid their conclusions are wrong.

                    • Corey: I haven’t kept up with the train of this conversation, but it would be wrong to suppose that frequentists do not use background knowledge which enables detaching inferences, and finding things out. This is not a matter of expected losses, worst cases, and the like. But, y’all might be talking about a certain class of decision theorists; I didn’t want to leave your remark without any comment.

                    • Corey

                      Fran: I can’t seem to nest under your comment for some reason. This is intended to be a reply to your comment that starts, “JohnQ: Now you are being naughty…”

                      So we’ve gone from “you need no conspiracy to make it fail” to “life is full of conspiracies”. Weakening your argument without acknowledging the weakening as a concession is a bit rude, but okay, whatever. The real point is that we are most interested in exactly those cases where the prediction fails, since that failure indicates that a conspiracy is present. If we’re looking at an experiment rather than a game against an opponent, (and by the way, mixing in game theory as you’ve done is the real non-sequitur,) that conspiracy amounts to a new and interesting scientific law. And that’s all I’ve got to say about JohnQ’s argument from combinatorics. (In the literature, it goes by the name “maximum entropy”.)

                      The Challenger disaster is irrelevant: the innumerate managers responsible for the wildly unrealistic reliability estimates were not working engineers and probably couldn’t even spell “Bayesian,” much less use the Bayesian approach — just look at how they mangled the concept of a safety factor.

                      My own statistical philosophy has nothing to do with gut feelings, but the dominant “subjective Bayesian” paradigm can certainly be interpreted that way. Blech.

                    • Corey

                      Mayo: When making my worst-case vs. average-case comment, the frequentist technique I had in mind is the two-sided confidence interval. The frequentist demands a coverage inequality guaranteed to hold pointwise over the parameter space; the Bayesian is satisfied with an expected coverage guarantee.

                    • Corey: yes, that is in sync with my objection. A frequentist error statistician would no more set a fixed confidence level than she would set fixed type 1 and 2 errors and rule up or down. A severity assessment improves on CIs, but even a CI estimator could and should evaluate a series of CIs at different “benchmarks”. That, at any rate, is my recommendation.

                      On your other point, you make it sound as if “being satisfied with an expected coverage guarantee” is a similar but less demanding way to proceed—but they aren’t similar at all.

                    • Corey

                      Mayo: At least y’all can be reassured that I wasn’t talking about a certain class of decision theorists.

                    • Corey:

                      “So we’ve gone from “you need no conspiracy to make it fail” to “life is full of conspiracies”. Weakening your argument without acknowledging the weakening as a concession is a bit rude, but okay, whatever.”

                      Both statements are true. You need no conspiracy for it to fail, because your conclusion is a non sequitur, and life is full of events that might seemingly look like a “conspiracy”.

                      So you live in a world where, when you fail to understand your counterpart’s argument, however wrong it might be, that constitutes rudeness on their side?

                      Okay, whatever to you too.

                    • Corey

                      Fran:
                      -_-

                      Let’s recap.

                      Fran: Maximum entropy prediction needs no conspiracy to fail.
                      Corey: Here’s a strong “conspiracy” that does not cause MaxEnt to fail; an even stronger “conspiracy” is needed to make it fail.
                      Fran: Life is full of conspiracies, and the ways you can make Bayesians’ beliefs fail is vast. (Doesn’t defend original claim. Gives a game theory example irrelevant to the problem of scientific inference. Misstates the events leading up to the Challenger disaster.)
                      Corey: It’s rude to weaken your argument without noting the concession. The real point of MaxEnt is that the conclusion it offers is “Either the prediction is correct or some unknown strong physical constraint is eliminating nearly all of the outcomes considered possible a priori.” It’s the latter case which is most interesting, since we learn something new.
                      Fran: I didn’t weaken my argument. (Apparently offered weaker claim instead of defending original claim for the lulz.) You just don’t understand my argument: the conclusion is a non sequitur. You said I’m rude — I has a sad. (Fails to take note of the actual conclusion MaxEnt offers. Fails to argue against the relevance of Corey’s “conspiracy” example. Fails to defend game as relevant.)

                      Look. I get that you think Bayesians are fanatics and therefore deserve as much trolling as you can muster. But you seem to be a pretty smart guy, so if you have any genuine, well-founded arguments, I’m open to them.
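[Editorial sketch, not Corey’s own calculation: the “conspiracy” framing can be made concrete. Under the maximum-entropy assignment for a coin about which nothing is known, every length-n sequence gets equal weight, and the fraction of sequences whose heads frequency falls within ±0.05 of 1/2 rushes to one as n grows; so any mechanism defeating the MaxEnt prediction must exclude nearly all of the a-priori-possible outcomes. The ±0.05 tolerance and the sample sizes are arbitrary illustrative choices.]

```python
from math import comb

# Under the maximum-entropy (uniform) assignment over all 2**n coin
# sequences, what fraction have a heads frequency within 0.05 of 1/2?
fracs = {}
for n in (20, 100, 500, 2000):
    lo, hi = int(0.45 * n), int(0.55 * n)
    fracs[n] = sum(comb(n, k) for k in range(lo, hi + 1)) / 2 ** n
    print(f"n={n:5d}  fraction of sequences near 1/2: {fracs[n]:.4f}")
```

The fraction climbs monotonically toward 1, which is the entropy-concentration point: failure of the prediction is informative precisely because it signals an unknown physical constraint wiping out almost every outcome.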

                    • Corey:

                      Let’s recap.
                      Fran: Maximum entropy prediction needs no conspiracy to fail.

                      MaxEnt makes no predictions; and if it does, you have assigned probabilities; and if you have, you used information you don’t have; and if you have it, then we should not disagree. Stephen John Senn gave a nice example in this post; maybe reading chapter 4 of his book will make it clearer to you why this is so.

                      Fran: Life is full of conspiracies, and the ways you can make Bayesians’ beliefs fail is vast. (Doesn’t defend original claim. Gives a game theory example irrelevant to the problem of scientific inference. Misstates the events leading up to the Challenger disaster.)

                      It does, in the sense that it claims that apparently near-impossible results under your MaxEnt predictions, like an ideal coin having p=0, may perfectly well occur with no higher or lower chance than p=0.5. Now, even if this were not true, that would make me wrong, not rude.

                      Corey: It’s rude to weaken your argument without noting the concession. The real point of MaxEnt is that the conclusion it offers is “Either the prediction is correct or some unknown strong physical constraint is eliminating nearly all of the outcomes considered possible a priori.” It’s the latter case which is most interesting, since we learn something new.

                      Are you a physicist? It seems as if you cannot deal with abstract problems, as if the second law of thermodynamics had to apply to any statistical problem you pose.

                      Fran: I didn’t weaken my argument. (Apparently offered weaker claim instead of defending original claim for the lulz.) You just don’t understand my argument: the conclusion is a non sequitur. You said I’m rude — I has a sad. (Fails to take note of the actual conclusion MaxEnt offers. Fails to argue against the relevance of Corey’s “conspiracy” example. Fails to defend game as relevant.) Look. I get that you think Bayesians are fanatics and therefore deserve as much trolling as you can muster.

                      Oh, so besides being rude now I am a troll as well! haha… Jesus :D Well, anyone reading our posts can decide how much rudeness and ‘trollness’ I have used in my comments.

                      About your statement that I think Bayesians are fanatics, based on a post on my blog… well… maybe you missed that the post is in the category Humor, though maybe Laplace wearing a turban should have given you a clue. But, to be fair, maybe I should update the category to Psychiatry after reading papers like this one from illustrious Bayesians asking that “Frequentism” stop being taught.

                      But you seem to be a pretty smart guy, so if you have any genuine, well-founded arguments, I’m open to them.

                      Sure, just let’s begin by accepting that when someone disagrees with you it does not necessarily mean he is a troll or rude, and that maybe he just has a point that has not been expressed in a way that is convincing to you. So instead of going “AHA! Got you!” when you see a discrepancy or a seemingly flawed argument on his side, try to clarify what he actually meant. This way, hopefully, you two might reach the root of your disagreement and go somewhere from there.

                    • Fran: what’s your blog?

                    • Corey

                      Fran: “Just let’s begin by accepting that when someone disagrees with you it does not necessarily mean he is a troll or rude, and that maybe he just has a point that has not been expressed in a way that is convincing to you.”

                      I do accept that; if you check my comments, you’ll see that’s not what I considered rude. (The key phrase in that essay is “evasion of a responsibility to answer criticism on the merits, when that evasion is authorized by the theory criticized”. Your apparent retreat from the claim that no conspiracy is needed to make MaxEnt fail is a quite mild form of the kind of rudeness Suber describes, which is why I described it as only “a bit rude”.)

                      “Well, anyone reading our posts can decide how much rudeness and ‘trollness’ I have used in my comments.”

                      I can live with that. I’m tapping out.

  8. Jean

    There’s the Society for Philosophy of Science in Practice (SPSP) whose mission is to encourage and give a forum for science-relevant philosophy of science.

  9. I will return to comments after our conference. Thanks.

  10. See Dicing with Death chapter 4 http://www.senns.demon.co.uk/DICE.html . If you start with a vague prior distribution, once you have tossed a coin a million times the observed relative frequency must be (pretty much) what you now believe the probability is. Thus the relative frequency is the probability. But if you state that each probability is equally likely, it follows that each relative frequency must be equally likely. In fact, it is not hard to show that, for the Bayesian holding a uniform prior distribution, the event of 500,000 heads and 500,000 tails (in any order) must be exactly as likely as one million heads and no tails. As de Finetti was always keen to point out, any assumption about a parameter plus conditional independence leads swiftly to a statement about the probability of sequences. These statements may challenge intuition.
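[Editorial sketch, not Senn’s own code: his claim can be verified exactly for a small number of tosses. Under a uniform (Beta(1,1)) prior on the coin’s probability of heads, the prior predictive probability of k heads in n tosses works out to 1/(n+1) for every k, so every head-count is equally likely a priori; n = 10 below is an arbitrary illustrative choice.]

```python
from fractions import Fraction
from math import comb, factorial

def predictive(k, n):
    """Prior predictive P(k heads in n tosses) under a uniform prior on p:
    C(n,k) * Integral_0^1 p^k (1-p)^(n-k) dp = C(n,k) * k!(n-k)!/(n+1)!"""
    return Fraction(comb(n, k) * factorial(k) * factorial(n - k), factorial(n + 1))

n = 10
probs = [predictive(k, n) for k in range(n + 1)]
print(probs[0], probs[5], probs[10])  # each is 1/11
```

Exact rational arithmetic shows every value equals 1/(n+1), which is Senn’s point: a “flat” prior on the probability forces a flat distribution over relative frequencies, including the extreme ones.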

    • Oh! Well, another flavor, how do you like that… thanks! :)

    • Stephen: Thanks for directing me back to chapter 4 of Dicing with Death, de Finetti and the “faithful Blackshirts” and so on. You wrote: “if you state that each probability is equally likely it follows that it then must be the case that each relative frequency is equally likely”. What about after the million tosses?

      • The point is that given an uninformative prior, the relative frequency of a large number of tosses is what you believe a posteriori the probability is. Since Bayesian forecasting obeys martingale requirements, your a priori probabilities must have an expectation equal to the posterior probability, which is (see above) the relative frequency. Hence every relative frequency is equally likely. I find it better to “see” this but, of course, one can do the maths.

    • Hi Stephen,

      Nice way (flavor) to explain the probabilities of the sequences; hopefully people will understand it better now. Nonetheless, I’ve been trying to access the links to your book and courses but they are dead. I just mention it in case you hadn’t noticed… It happens.

  11. All: It’s funny how this blog–I mean this particular blogpost–which was intended to catch the attention of philosophers of science, has turned into a discussion of formal probability matters. (I realize it was the QM link.) Not that there’s anything wrong with that! (I’m reminded it’s the physics link behind some of the cults of personality…)

    Corey: Regarding “y’all”: I read a scary (because serious) article the other day arguing that “y’all” ought* to be adopted to avoid various gendered terms.
    *or should I say “might ought”.

    At this point, it seems that the comments are rather randomly lined up, because the # of indents is maxed out, or at any rate, I don’t know how to create more.

  12. Fran: Thanks for the link to your blog (couldn’t get this reply under yours). I love your: “There is no Theorem but Bayes’ and Laplace is His Prophet” except for the fact that I should have thought of it first, or maybe I keep it a bit lighter (more show tunes from a great play):

    http://errorstatistics.com/2011/09/03/overheard-at-the-comedy-hour-at-the-bayesian-retreat/

    http://errorstatistics.com/2012/06/02/anything-tests-can-do-cis-do-better-cis-do-anything-better-than-tests/

    After all these years, I’ve never quite understood the virulence of the Bayesian hard and medium-hard core. I noted it in the preface of Mayo 1996, but thought things would change. Now, despite backtracking all over the place (even on the use of Bayes’s theorem in inference), there’s still the religionist’s fervor, reason be damned. Blind allegiance and over-the-top hostility to all who do not pledge allegiance. Like you, I too would have thought such things out of place in science and philosophy.

    • JohnQ

      Yes, Corey and I are still Bayesians because we’re fanatics who aren’t able to reason. You’ve got it all figured out.

      We can’t all be as brilliant, error-free, and as moderate as you and Fran, so please forgive our ignorance. We know not what we do.

        • JohnQ: I wasn’t alluding to you, I don’t even know you, and didn’t think your comments especially extreme; and certainly not Corey, who so often shows error statistical leanings. I was alluding to the type of “let’s not teach frequentist methods anymore…” cited by Fran. I don’t know of another example where advocates of one methodology try to ban and bar another approach which, by and large, doesn’t even wind up in a very different place. And those howlers published again and again verbatim (I just saw a new one in psychology). Frequentist error statisticians have always been eclectic. Some have been scared off foundational discussions because of the divisiveness, and that’s what mainly bothers me. A lot. If methods could just be evaluated on their properties in relation to a variety of goals, there might be more progress, or at least normalcy. It’s not an accident that people have said this about many (not all) Bayesians; it’s not a mere figment of so many people’s imaginations.

        • JohnQ

          That article was talking about teaching introductory statistics to non-stat majors, not statistics majors. Since there is overwhelming evidence that such teaching has been a disaster, I don’t see how this is a reason-free fanatical viewpoint. It’s interesting to note that Bayesian methods are, for all practical purposes, banned in introductory statistics courses for non-stat majors. The only banning actually done, in other words, was carried out by classical statisticians.

          Those howlers you mentioned are not as ridiculous as you describe. Your solutions to them so far depend on examples where high severity is equivalent to high posterior probability. In the spirit of “methods could just be evaluated on their properties in relation to a variety of goals,” I’ve actually looked at those situations where severity differs significantly from the posterior probability and can report that “SEV” doesn’t come off well at all. As things stand, the only thing shown is that Bayesian posteriors answer those howlers adequately, but then we knew that already.

          “Some have been scared off foundational discussions because of the divisiveness, and that’s what mainly bothers me. A lot.”

          There’s no getting around this: people are generally cowards and shrink from plainly stating things others dislike. Bayesians and frequentists genuinely disagree, so there is no way to avoid the divisiveness. As long as Bayesians aren’t refusing to hire frequentists or refusing to publish their articles or refusing to accept their research findings on purely philosophical grounds (which was a common experience for Bayesians at the hands of frequentists), then it’s all just words. Sticks and stones and all that.

          Bayesians reject frequentist methods, despite having a great deal of experience with them, for reasons given in the first paragraph. Frequentist objections to Bayesian methods always seem to reduce to your “I have never understood why Jaynes finds it superior to mask what’s going on when it leads to such distortions”. Don’t you feel at least a little responsibility to understand the “why” before dismissing them? Especially since many of these people were wrestling with problems at the edge of statistical science when they “felt the need”? Indeed they were often successfully solving problems you deny are even statistical in nature, yet whose solutions work just the same despite your denials. Or are you like that priest who refused to look through Galileo’s telescope at the moons of Jupiter because his theology/philosophy already told him there couldn’t possibly be such moons?

          Fran clearly is like the priest (oh how ironic!) since his later posts demonstrate he still had no idea what Corey and I were even saying and didn’t intend to spend 2 seconds trying to figure it out, but what about you?

          • JohnQ: If you read this blog rather than cherry picking from this post, you’d see what I mean about the howlers…But you’d have to want to get it, rather than be driven by anger. I don’t know who Fran is, not even his name. Thanks for your comments.

          • JohnQ:

            Fran clearly is like the priest (oh how ironic!) since his later posts demonstrate he still had no idea what Corey and I were even saying and didn’t intend to spend 2 seconds trying to figure it out…

            Actually, I look through the telescope, and I claim that what you see is not actually a moon but dirt on the glass.

            …I don’t see how this is a reason-free fanatical viewpoint. It’s interesting to note that Bayesian methods are, for all practical purposes, banned in introductory statistics courses for non-stat majors. The only banning actually done, in other words, was carried out by classical statisticians.

            That is patently false. For example, professors in Spain enjoy what is called “Libertad de Cátedra,” which means that they cannot legally be told what to teach in their courses. So if they don’t teach Bayesian inference in their introductory statistics courses, it is not because of any ban but because of their own personal choice.

            Also, the reason for that choice is not that we don’t know about Bayes in Spain; in fact one of the Bayesian heavyweights, Mr. Bernardo, teaches in Valencia (Spain), and nobody can tell him how or what to teach.

            The college where I got my degree and master’s degree in statistics is not Bayesian at all, yet Bayesian statistics courses are available, and when I asked professors about the Bayesian vs frequentist debate, they would shrug and say the debate is senseless since we should use one way or the other according to the problem at hand… which I agree with.

            If asking for a ban of non-Bayesian methods in every field of science is not fanaticism for you, I would like to know what is. Maybe blowing oneself to pieces at a frequentist conference after yelling BayeshuAkbar?

            Oh, by the way, the guy asking for a ban of Frequentism… the guy you say is not a fanatic for doing so… turns out he is a strong objective Bayesian and considers Bayesianism wrong too!!! haha… Irony. Maybe he didn’t spend those two seconds either to understand you and Corey?

            Oh, finally, I don’t consider objective Bayesians to be Bayesians at all… They use data to inform the prior in a variety of ways, so I see nothing evidently wrong in that. The fact that they use Bayes’ theorem does not make them more Bayesian than I am when I use it myself and have a way to inform the prior. So, you see, I can use a variety of techniques based on the problem’s features, whereas hardcore Bayesians use a wrench for everything… So please, do not confuse the huge success of non-Bayesian techniques with “a ban” of the Bayesian alternatives.

            Anyhow, Peace out to you and Corey… it was fun. :)

              • Note: When I said “I don’t consider objective Bayesians to be Bayesians at all…” I should have said empirical Bayesians…. Lately there are so many groups and sub-groups among Bayesians that I lose track.

                • Fran: Yes, that remark puzzled me. There’s a big difference. It’s interesting, though, that Bayesians call empirical Bayesians non-Bayesian (wasn’t it Lindley who said there’s no one less Bayesian than an empirical Bayesian?). Empirical Bayesians tend to be interested in contexts where what I call behavioristic goals are paramount, e.g., microarrays. An example is Efron. Other contexts provide legitimate frequentist priors as well, but I’m not so familiar with them. Anyone have references?

            • JohnQ

              We were talking about the teaching of introductory statistics to non-stat majors. The current situation is that non-Bayesian methods are universally taught in those classes. This situation arose because in times past there was a ban on Bayesian methods (they were regularly denounced, on purely philosophical grounds, as unscientific, for example, despite their spectacular success two hundred years ago in Laplace’s astronomy). Once this “ban” became solidified, it was reinforced by journal article policies which required those methods for any article published. That’s how we arrive at the situation today where basic Bayesian methods clearly improve on the basic frequentist methods in those introductory classes, but they still aren’t being taught there. Everyone says we need to continue teaching the same old blah because that’s what’s needed to get published. Obviously everyone is still free to study whatever they want. Bayesians can still study their methods in a few grad courses despite being indoctrinated into statistics using frequentist methods. The reverse will be true whenever they finally dump frequentist significance testing and so on in introductory courses. You’ll still be just as free to study your methods as Bayesians are today.

              There is a lot of water under the frequentist vs Bayesian debate at this point, and two things are clear. The Bayesian replacements for the methods usually taught in introductory courses are demonstrably better than what’s currently being taught. In addition, the teaching of those frequentist methods has been a disaster for psychology, drug testing, finance, and quite a few other fields.

              Pointing out these results and their history is not fanatical. The fanatical position is the one you take, which seems to be something like “we can’t change what we’re doing because I don’t really get the Bayesian philosophy”.

              • JohnQ: Let me just say that I DO think a lot should be changed, because it appears that standard methods are often/generally taught without a full understanding/appreciation of the underlying philosophy. That is to say, disingenuously. But it’s precisely teaching them with eyes rolling, if not nausea, that may be contributing to the “disaster” you claim. That’s really too bad.
                I am not any kind of fanatic, and I’m vastly opposed to unthinking uses of statistics. It would be better not to use them at all.

              • JohnQ:

                There is a huge difference between “people don’t want to use it” and a “ban”… Frequentists could have criticized Laplace to death and scientists could have ignored them but, well, they didn’t; scientists all over the world rejected Bayesianism willingly. Including the French! I mean… it’s not easy to turn your back on Grand Laplace to smile at the very knighted roast beef Fisher if you are French, and yet, just like everybody else, they dumped Bayesianism.

                two things are clear.

                Excellent, let’s see them…

                The Bayesian replacements for the methods usually taught in introductory courses are demonstrably better than what’s currently being taught.

                Where is the demonstration? And what do you mean by better? And this is clear to whom… Bayesians? This is like saying that for Muslims it is clear that Allah is the true God.

                The teaching of those frequentist methods has been a disaster for psychology, drug testing, finance, and quite a few other fields.

                Yeah… those damn scientists not choosing Bayesianism for their research. Evil people! If it wasn’t for them there would be no mental illnesses, cancer would be cured, there would be no financial crises… and quite a few other goodies too.

                I’ve heard worse, though; there was once this Bayesian guy saying Frequentism aids terrorism, so knock yourself out.

                Pointing out these results and their history is not fanatical. The fanatical position is the one you take, which seems to be something like “we can’t change what we’re doing because I don’t really get the Bayesian philosophy”.

                Oh! So we can’t change because I don’t get it. Well, am I not super-powerful? I wonder what I would do to make sure we don’t change… mmm… let’s see… How about if I start by writing a paper asking for a ban of Bayesianism in every field of science? I hope this is not going too far.

                But, you know, if it was just me then you would really have a point, but there is a long line of brilliant mathematicians through history who just don’t get it either. As a matter of fact, among you Bayesians there are bitch-slapping contests on which flavor of Bayesianism is the true one.

                So it is not just me, or the long line of brilliant scientists and mathematicians criticizing Bayesian foundations, but many among you Bayesians who seem not to get “true” Bayesianism either. Because, at this point, you have probably figured out that if your Bayesian friend fulfills his ban dream he would impose his flavor of Bayesianism on everybody else.

                But you’re right about something: I don’t get it. I would love to but I just can’t. I read your arguments to justify uninformative priors (the variety of them according to each Bayesian flavor) and none of them ring a bell for me… or for legions of mathematicians through history. So, even if you were correct (whatever flavor of Bayesianism you root for), there must be something you don’t explain quite right for so many people not to get it… including the Bayesians who disagree with your flavor of Bayes.

                Bayesians criticizing things like NHST while showing sheer ignorance about its purpose, or even ill intent… well… this kind of marketing trick does not help in taking you all seriously. I have caught Bayesians in so many lies and claiming so many crazy things (like the aiding-terrorists one) that I always have the feeling they are trying to bullshit me at the first opportunity, which is unfair to those among you who truly would like to engage in a serious debate. But I guess that, just like in religions, the fanatical ones give you all a bad name… though in this case it seems the 99% give a bad name to the other 1%.

    • Mayo: haha.. thanks :D.

      The fervor comes when you are placed in a group. I remember a social experiment where two groups of teenagers, randomly assigned, were nicknamed the ‘red group’ and the ‘blue group’ while camping in the countryside… Well, they had to stop the experiment because they began to beat the hell out of each other… Human, All Too Human.

      • Corey

        Fran: You’re probably thinking of the Robbers Cave Experiment. A short summary can be found here.

      • Fran/Corey: No, I don’t see it that way at all. It’s more like that story told one time (the source being a stat professor; I’d have to look it up on Gelman’s blog) about a group of Bayesian leaders having consciously decided, maybe 30 years ago, to relentlessly push the Bayesian philosophy/methodology as vastly superior in all ways (everything else discredited), knowing it was a gross exaggeration, but figuring that after it monopolized a generation or so, people would later discover its weaknesses. Somehow this was to accomplish their goals. Corey will likely know….

        • I found one link (an O’Rourke comment on Gelman’s blog–not quite as extreme as I remembered it).

          http://andrewgelman.com/2012/12/21/two-reviews-of-nate-silvers-new-book-from-kaiser-fung-and-cathy-oneil/

          “Many years ago someone suggested that it was a good strategy to get clinical researchers to use Bayesian methods and only after they have been doing it for a while point out the limitations and challenges (the usual new technology ploy of exagerate the benefits, hide the costs/challenges and overstate the existing technology’s capacity to do essentially the same things.)” (O’Rourke)

        • JohnQ

          Hopefully, it’s needless to say that no one should be hiding the limitations of any method under any circumstances.

  13. JohnQ: In connection with my last, I might note my attempts to cultivate meeting grounds, and to seriously tackle even the wildest howlers (again and again), rather than just dismiss them. Even small concessions or signs of progress are worthy of mention, e.g.

    http://errorstatistics.com/2013/03/31/10081/

  14. All: I want to just make a coupla three points regarding the commentary to this post, which I’ve reviewed:
    (1) Because I’d been focussed on the conference I co-ran last week, I let this commentary more or less take on a life of its own, but it wasn’t that constructive a life, I’m afraid. Sorry.

    (2) Responding to disembodied comments. Remember, when a comment from an older post (like this one) is sent disembodied, as mine are, I might not remember, or have time to check, the other comments around it. So, for example, responding to the disembodied Fran link to his blog resulted in my making a remark on a particular link which was erroneously taken as a remark on the commentators. Unintended.

    (3) As I’d said on May 7: “It’s funny how this blog–I mean this particular blogpost–which was intended to catch the attention of philosophers of science, has turned into a discussion of formal probability matters.” Not that there’s anything wrong with that. However, I don’t happen to think that the central issues or debates of interest regarding statistical inference have a lot to do with assigning probabilities to coin toss outcomes, so it is somewhat odd to have that be the focus of disagreement here. I assumed Fran was just raising some classic probability puzzles, and people were giving fanciful reflections.

    I’m still a novice at blogging, and anyway time shortages mean I can’t tend to the commentaries as much as might be desirable after a certain point. I assume people understand that.

  15. Philippe

    Dear Fran,

    Could you please tell me the name of that physicist you mention above, who seems to question the role of calculus in physics. I would be very interested to read his arguments.

I welcome constructive comments for 14-21 days
