Posts Tagged With: Bayesian probability

Matching Numbers Across Philosophies

The search for an agreement on numbers across different statistical philosophies is an understandable pastime in foundations of statistics. Perhaps identifying matching or unified numbers, apart from what they might mean, would offer a glimpse into shared underlying goals? Jim Berger (2003) assures us there is no sacrilege in agreeing on methodology without philosophy, claiming “while the debate over interpretation can be strident, statistical practice is little affected as long as the reported numbers are the same” (Berger, 2003, p. 1).

Do readers agree?

Neyman and Pearson (or perhaps it was mostly Neyman) set out to determine when tests of statistical hypotheses may be considered “independent of probabilities a priori” (p. 201). In such cases, frequentist and Bayesian may agree on a critical or rejection region.

The agreement between “default” Bayesians and frequentists in the case of one-sided Normal (IID) testing (known σ) is very familiar. As noted in Ghosh, Delampady, and Samanta (2006, p. 35), if we wish to reject a null value when “the posterior odds against it are 19:1 or more, i.e., if posterior probability of H0 is < .05”, then the rejection region matches that of the corresponding frequentist test of H0 at the .05 level. By contrast, they go on to note the also familiar fact that frequentist and Bayesian would disagree if one were instead testing the two-sided H0: μ = μ0 vs. H1: μ ≠ μ0 with known σ. In fact, the very outcome that would be regarded as evidence against the null in the one-sided test (by both the default Bayesian and the frequentist) can, in the two-sided test, be construed by the Bayesian as no evidence against the null, or even as evidence for it (due to a spiked prior).[i]
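To make the contrast concrete, here is a minimal numerical sketch (not from Ghosh, Delampady, and Samanta; the sample size, σ, μ0, and the N(μ0, τ²) prior on μ under the two-sided alternative are illustrative assumptions) of the one-sided agreement under a flat prior on μ and the two-sided disagreement under a 0.5 spike on H0:

# A minimal numerical sketch of the agreement and disagreement described
# above. Assumptions (not from the post): n = 100 observations, sigma = 1,
# mu0 = 0, an observed z of 1.96 (just significant at the two-sided .05
# level), and, for the two-sided Bayesian test, a 0.5 prior spike on H0
# with a N(mu0, tau^2) prior on mu under H1, tau = sigma.
import numpy as np
from scipy.stats import norm

n, sigma, mu0 = 100, 1.0, 0.0
se = sigma / np.sqrt(n)           # standard error of the sample mean
z = 1.96                          # observed standardized distance from mu0
xbar = mu0 + z * se

# One-sided case, H0: mu <= mu0 vs H1: mu > mu0. With an (improper) flat
# prior on mu the posterior is N(xbar, se^2), so the posterior probability
# of H0 equals the one-sided p-value.
p_one_sided = 1 - norm.cdf(z)
post_H0_flat = norm.cdf(mu0, loc=xbar, scale=se)
print(f"one-sided p-value:             {p_one_sided:.4f}")
print(f"posterior P(H0), flat prior:   {post_H0_flat:.4f}")

# Two-sided case, H0: mu = mu0 vs H1: mu != mu0, with a spiked prior.
tau = sigma                       # prior sd of mu under H1 (an assumption)
m0 = norm.pdf(xbar, loc=mu0, scale=se)                       # marginal under H0
m1 = norm.pdf(xbar, loc=mu0, scale=np.sqrt(se**2 + tau**2))  # marginal under H1
bf01 = m0 / m1                    # Bayes factor in favour of H0
post_H0_spike = bf01 / (1 + bf01) # posterior P(H0) with prior P(H0) = 0.5
p_two_sided = 2 * (1 - norm.cdf(z))
print(f"two-sided p-value:             {p_two_sided:.4f}")
print(f"posterior P(H0), spiked prior: {post_H0_spike:.4f}")

With z = 1.96 the one-sided p-value and the flat-prior posterior probability of H0 both come out near .025, while the spiked-prior posterior probability of H0 in the two-sided test comes out around .6: the statistically significant result ends up favoring the null.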


U-Phil: Jon Williamson: Deconstructing Dynamic Dutch Books


I am posting Jon Williamson’s* (Philosophy, Kent) U-Phil from 4-15-12

In this paper http://www.springerlink.com/content/q175036678w17478 (Synthese 178:67–85) I identify four ways in which Bayesian conditionalisation can fail. Of course not all Bayesians advocate conditionalisation as a universal rule, and I argue that objective Bayesianism as based on the maximum entropy principle should be preferred to subjective Bayesianism as based on conditionalisation, where the two disagree.

Conditionalisation is just one possible way of updating probabilities and I think it’s interesting to see how different formal approaches compare.
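As a rough, self-contained illustration of how the two update rules can come apart (a toy sketch, not one of the four cases from the paper; the three-state space and the constraint P(w1) = 0.5 are assumptions of the example), compare conditionalising on the news that w3 is false with re-maximizing entropy subject to the total evidence:

# A toy comparison of the two update rules: conditionalisation versus
# re-maximizing entropy subject to the total evidence. The setup is an
# assumption of this sketch: three atomic states w1, w2, w3; the agent's
# evidence fixes P(w1) = 0.5; she then learns that w3 is false.
import numpy as np
from scipy.optimize import minimize

def max_entropy(constraints, dim=3):
    """Probability vector of maximal entropy satisfying the given equality
    constraints (each a callable that must equal zero at the solution)."""
    def neg_entropy(p):
        p = np.clip(p, 1e-12, 1.0)
        return np.sum(p * np.log(p))
    cons = [{"type": "eq", "fun": lambda p: p.sum() - 1.0}]
    cons += [{"type": "eq", "fun": c} for c in constraints]
    res = minimize(neg_entropy, np.full(dim, 1.0 / dim),
                   bounds=[(0.0, 1.0)] * dim, constraints=cons, method="SLSQP")
    return res.x

# Prior: maximum entropy given only the constraint P(w1) = 0.5.
prior = max_entropy([lambda p: p[0] - 0.5])        # approx (0.5, 0.25, 0.25)

# Update 1: conditionalise on E = {w1, w2}, i.e. on w3 being ruled out.
E = np.array([1.0, 1.0, 0.0])
conditionalised = prior * E / np.dot(prior, E)     # approx (2/3, 1/3, 0)

# Update 2: re-maximize entropy subject to the total evidence,
# P(w1) = 0.5 together with P(w3) = 0.
maxent_updated = max_entropy([lambda p: p[0] - 0.5, lambda p: p[2]])

print("conditionalisation:", np.round(conditionalised, 3))
print("maximum entropy:   ", np.round(maxent_updated, 3))

Conditionalisation preserves the prior’s 2:1 ratio between w1 and w2, giving roughly (0.667, 0.333, 0); re-maximizing entropy subject to the full constraint set instead returns (0.5, 0.5, 0).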

*Williamson participated in our June 2010 “Phil-Stat Meets Phil Sci” conference at the LSE, and we jointly ran a conference at Kent in June 2009.

