Monthly Archives: April 2012

Comedy Hour at the Bayesian Retreat: P-values versus Posteriors

Did you hear the one about the frequentist significance tester when he was shown the nonfrequentist nature of p-values?

JB: I just simulated a long series of tests on a pool of null hypotheses, and I found that among tests with p-values of .05, at least 22%—and typically over 50%—of the null hypotheses are true!

Frequentist Significance Tester (scratches head): But rejecting the null with a p-value of .05 ensures erroneous rejection no more than 5% of the time!

Raucous laughter ensues!

(Hah, hah…. I feel I’m back in high school: “So funny, I forgot to laugh!”)

The frequentist tester should retort:

Frequentist significance tester: But you assumed that 50% of the null hypotheses are true, computed P(H0|x) (imagining P(H0) = .5), and then assumed my p-value should agree with the number you get!

But, our significance tester is not heard from as they move on to the next joke….
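Behind jokes like JB’s is a simple screening (base-rate) calculation: the proportion of true nulls among rejections depends on the assumed prior proportion of true nulls and on the tests’ power, not just on the significance level. (The “at least 22%” figure comes from conditioning on p-values near .05 under particular priors; conditioning on p ≤ .05 gives different numbers.) A minimal sketch, with all inputs assumed purely for illustration:

```python
def prob_null_given_reject(pi0, alpha, power):
    """P(H0 true | rejection) when a fraction pi0 of the nulls in the
    imagined pool are true, each test has size alpha, and tests of
    false nulls reject with the stated power."""
    return pi0 * alpha / (pi0 * alpha + (1 - pi0) * power)

# Assumed numbers: half the nulls true, alpha = .05, power = .5
print(round(prob_null_given_reject(0.5, 0.05, 0.5), 3))  # 0.091
```

Note that nothing in this arithmetic contradicts the significance tester: the 5% error rate is a property of each individual test, while the screening proportion is a property of the imagined urn of nulls.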

Of course it is well known that, for a fixed p-value, with a sufficiently large n, even a statistically significant result can correspond to a large posterior probability for H0.[i] Somewhat more recent work generalizes the result, e.g., J. Berger and Sellke (1987). Although from their Bayesian perspective it appears that p-values come up short as measures of evidence, significance testers balk at the fact that use of the recommended priors allows highly significant results to be interpreted as no evidence against the null — or even evidence for it! An interesting twist in recent work is to try to “reconcile” the p-value and the posterior, e.g., Berger (2003).[ii]

The standard example of the conflict between p-values and Bayesian posteriors concerns the two-sided test of the Normal mean, H0: μ = μ0 versus H1: μ ≠ μ0.

“If n = 50 one can classically ‘reject H0 at significance level p = .05,’ although Pr(H0|x) = .52 (which would actually indicate that the evidence favors H0).” (Berger and Sellke, 1987, p. 113)

If n = 1000, a result statistically significant at the .05 level leads to a posterior probability for the null of .82!


Table 1 (modified) from J. O. Berger and T. Sellke (1987), “Testing a Point Null Hypothesis,” JASA 82(397): 113.
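The Berger and Sellke numbers can be reproduced in a few lines. This sketch assumes one of the priors they consider: a .5 spike on H0 and, under H1, a N(μ0, σ²) prior on μ, which gives Bayes factor B01 = √(1+n)·exp(−z²n/(2(n+1))) for the usual z statistic:

```python
from math import exp, sqrt

def posterior_null(z, n, pi0=0.5):
    """P(H0: mu = mu0 | data) for a two-sided test on n IID Normal
    observations, with spiked prior pi0 on H0 and a N(mu0, sigma^2)
    prior on mu under H1 (one of the priors Berger & Sellke examine)."""
    b01 = sqrt(1 + n) * exp(-0.5 * z ** 2 * n / (n + 1))  # Bayes factor for H0
    return pi0 * b01 / (pi0 * b01 + (1 - pi0))

for n in (50, 1000):
    print(n, round(posterior_null(1.96, n), 2))  # .52 at n = 50, .82 at n = 1000
```

Holding z fixed at 1.96 (p = .05) while n grows drives the posterior on the null toward 1, which is the Jeffreys-Lindley phenomenon in footnote [i].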

Many find the example compelling evidence that the p-value “overstates evidence against a null” because it claims to use an “impartial” or “uninformative” (?) Bayesian prior probability assignment of .5 to H0, the remaining .5 being spread out over the alternative parameter space. Others charge that the problem is not p-values but the high prior (Casella and R. Berger, 1987). Moreover, the “spiked concentration of belief in the null” is at odds with the prevailing view that “we know all nulls are false”. Note too the conflict with confidence-interval reasoning, since the value zero (0) lies outside the corresponding confidence interval (Mayo 2005).

But often, as in the opening joke, the prior assignment is claimed to keep to the frequentist camp and frequentist error probabilities: it is imagined that we sample randomly from a population of hypotheses, some proportion of which are assumed to be true (50% is a common number). We randomly draw a hypothesis and get this particular one; maybe it concerns the mean deflection of light, or perhaps it asserts the bioequivalence of two drugs, or whatever. The percentage “initially true” (in this urn of nulls) serves as the prior probability for H0. I see this gambit in statistics, psychology, philosophy, and elsewhere, and yet it commits a fallacious instantiation of probabilities:

50% of the null hypotheses in a given pool of nulls are true.

This particular null H0 was randomly selected from this urn (and, it may be added, nothing else is known, or the like).

Therefore P(H0 is true) = .5.

It isn’t that one cannot play a carnival game of reaching into an urn of nulls (and one can imagine lots of choices for what to put in the urn), and use a Bernoulli model for the chance of drawing a true hypothesis (assuming we could even tell), but this “generic hypothesis” is no longer the particular hypothesis one aims to use in computing the probability of data x0 (be it on eclipse data, risk rates, or whatever) under hypothesis H0.[iii] In any event, .5 is not the frequentist probability that the chosen null H0 is true. (Note that the selected null would get the benefit of being drawn from an urn of nulls in which few have yet been shown false: “innocence by association”.)

Yet J. Berger claims his applets are perfectly frequentist, and that by adopting his recommended O-priors, we frequentists can become more frequentist (than by using our flawed p-values).[iv] We get what he calls conditional p-values (of a special sort). This is a reason for coining a different name, e.g., frequentist error statistician.

Upshot: Berger and Sellke tell us they will cure the significance tester’s tendency to exaggerate the evidence against the null (in two-sided testing) by using some variant on a spiked prior. But the result of their “cure” is that outcomes may too readily be taken as no evidence against, or even evidence for, the null hypothesis, even if it is false. We actually don’t think we need a cure. Faced with conflicts between error probabilities and Bayesian posterior probabilities, the error statistician may well conclude that the flaw lies with the latter measure. This is precisely what Fisher argued:

Discussing a test of the hypothesis that the stars are distributed at random, Fisher takes the low p-value (about 1 in 33,000) to “exclude at a high level of significance any theory involving a random distribution” (Fisher, 1956, p. 42). Even if one were to imagine that H0 had an extremely high prior probability, Fisher continues—never minding “what such a statement of probability a priori could possibly mean”—the resulting high a posteriori probability of H0, he thinks, would only show that “reluctance to accept a hypothesis strongly contradicted by a test of significance” (p. 44) “is not capable of finding expression in any calculation of probability a posteriori” (p. 43). This is not to say sampling theorists deny there could ever be a legitimate frequentist prior probability distribution for a statistical hypothesis: one may consider hypotheses about such distributions and subject them to probative tests. Indeed, Fisher says, if one were to consider the claim about the a priori probability to be itself a hypothesis, it would be rejected by the data!

[i] A result my late colleague I.J. wanted me to call the Jeffreys-Good-Lindley Paradox.

[ii] An applet is available at∼berger

[iii] Bayesian philosophers, e.g., Achinstein, allow that this does not yield a frequentist prior, but he claims it yields an acceptable prior for the epistemic probabilist (e.g., see Error and Inference, 2010).

[iv] Does this remind you of how the Bayesian is said to become more subjective by using the Berger O-Bayesian prior? See the Berger deconstruction.


Berger, J. O.  (2003). “Could Fisher, Jeffreys and Neyman have Agreed on Testing?” Statistical Science 18: 1-12.

Berger, J. O. and Sellke, T.  (1987). “Testing a point null hypothesis: The irreconcilability of p values and evidence,” (with discussion). J. Amer. Statist. Assoc. 82: 112–139.

Casella, G. and Berger, R. (1987). “Reconciling Bayesian and Frequentist Evidence in the One-sided Testing Problem,” (with discussion). J. Amer. Statist. Assoc. 82: 106–111, 123–139.

Fisher, R. A., (1956) Statistical Methods and Scientific Inference, Edinburgh: Oliver and Boyd.

Jeffreys, H. (1939). Theory of Probability, Oxford: Oxford University Press.

Mayo, D. G. 2005  “Philosophy of Statistics” in S. Sarkar and J. Pfeifer (eds.) Philosophy of Science: An Encyclopedia, London: Routledge: 802-815. NOTE: THERE ARE LOTS OF PRINTER’S ERRORS IN THIS


Matching Numbers Across Philosophies

The search for an agreement on numbers across different statistical philosophies is an understandable pastime in foundations of statistics. Perhaps identifying matching or unified numbers, apart from what they might mean, would offer a glimpse as to shared underlying goals? Jim Berger (2003) assures us there is no sacrilege in agreeing on methodology without philosophy, claiming “while the debate over interpretation can be strident, statistical practice is little affected as long as the reported numbers are the same” (Berger, 2003, p. 1).

Do readers agree?

Neyman and Pearson (or perhaps it was mostly Neyman) set out to determine when tests of statistical hypotheses may be considered “independent of probabilities a priori” (p. 201). In such cases, frequentist and Bayesian may agree on a critical or rejection region.

The agreement between “default” Bayesians and frequentists in the case of one-sided Normal (IID) testing (known σ) is very familiar. As noted in Ghosh, Delampady, and Samanta (2006, p. 35), if we wish to reject a null value when “the posterior odds against it are 19:1 or more, i.e., if posterior probability of H0 is < .05”, then the rejection region matches that of the corresponding .05-level test of H0. By contrast, they go on to note the also familiar fact that frequentist and Bayesian would disagree if one were instead testing the two-sided H0: μ = μ0 vs. H1: μ ≠ μ0 with known σ. In fact, the same outcome that would be regarded as evidence against the null in the one-sided test (for the default Bayesian and the frequentist) can result in statistically significant results being construed as no evidence against the null—for the Bayesian—or even evidence for it (due to a spiked prior).[i] Continue reading
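The one-sided agreement is easy to verify numerically: with an improper uniform prior on μ, the posterior of μ given x̄ is N(x̄, σ²/n), so the posterior probability of H0: μ ≤ μ0 is exactly the one-sided p-value Φ(−z). A sketch (the z statistic and known σ are the only assumptions):

```python
from math import erf, sqrt

def phi(x):
    """Standard Normal CDF."""
    return 0.5 * (1 + erf(x / sqrt(2)))

def one_sided_p(z):
    """p-value for H0: mu <= mu0 vs H1: mu > mu0, z = sqrt(n)(xbar - mu0)/sigma."""
    return 1 - phi(z)

def flat_prior_posterior(z):
    """P(mu <= mu0 | xbar) under a flat prior: posterior mu ~ N(xbar, sigma^2/n)."""
    return phi(-z)

z = 1.645
print(one_sided_p(z), flat_prior_posterior(z))  # the two numbers coincide
```

So in the one-sided case the “matching numbers” come for free, which is what makes the two-sided disagreement above stand out.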


U-Phil: Jon Williamson: Deconstructing Dynamic Dutch Books

Jon Williamson

I am posting Jon Williamson’s* (Philosophy, Kent) U-Phil from 4-15-12.

In this paper (Synthese 178:67–85) I identify four ways in which Bayesian conditionalisation can fail. Of course not all Bayesians advocate conditionalisation as a universal rule, and I argue that objective Bayesianism as based on the maximum entropy principle should be preferred to subjective Bayesianism as based on conditionalisation, where the two disagree.

Conditionalisation is just one possible way of updating probabilities and I think it’s interesting to see how different formal approaches compare.

*Williamson participated in our June 2010 “Phil-Stat Meets Phil Sci” conference at the LSE, and we jointly ran a conference at Kent in June 2009.


Jean Miller: Happy Sweet 16 to EGEK #2 (Hasok Chang Review of EGEK)

Jean Miller here, reporting back from the island. Tonight we complete our “sweet sixteen” celebration of Mayo’s EGEK (1996) with the book review by Dr. Hasok Chang (currently the Hans Rausing Professor of History and Philosophy of Science at the University of Cambridge). His was chosen as our top favorite in the category of ‘reviews by philosophers’. Enjoy!

REVIEW: British Journal for the Philosophy of Science 48 (1997), 455-459
DEBORAH MAYO Error and the Growth of Experimental Knowledge, 
The University of Chicago Press, 1996
By: Hasok Chang

Deborah Mayo’s Error and the Growth of Experimental Knowledge is a rich, useful, and accessible book. It is also a large volume which few people can realistically be expected to read cover to cover. Considering those factors, the main focus of this review will be on providing various potential readers with guidelines for making the best use of the book.

As the author herself advises, the main points can be grasped by reading the first and the last chapters. The real benefit, however, would only come from studying some of the intervening chapters closely. Below I will offer comments on several of the major strands that can be teased apart, though they are found rightly intertwined in the book. Continue reading


Jean Miller: Happy Sweet 16 to EGEK! (Shalizi Review: “We Have Ways of Making You Talk”)

Jean Miller here.  (I obtained my PhD with D. Mayo in Phil/STS at VT.) Some of us “island philosophers” have been looking to pick our favorite book reviews of EGEK (Mayo 1996; Lakatos Prize 1999) to celebrate its “sweet sixteen” this month. This review, by Dr. Cosma Shalizi (CMU, Stat) has been chosen as the top favorite (in the category of reviews outside philosophy).  Below are some excerpts–it was hard to pick, as each paragraph held some new surprise, or unique way to succinctly nail down the views in EGEK. You can read the full review here. Enjoy.

“We Have Ways of Making You Talk, or, Long Live Peircism-Popperism-Neyman-Pearson Thought!”
by Cosma Shalizi

After I’d bungled teaching it enough times to have an idea of what I was doing, one of the first things students in my introductory physics classes learned (or anyway were taught), and which I kept hammering at all semester, was error analysis: estimating the uncertainty in measurements, propagating errors from measured quantities into calculated ones, and some very quick and dirty significance tests, tests for whether or not two numbers agree, within their associated margins of error. I did this for purely pragmatic reasons: it seemed like one of the most useful things we were supposed to teach, and also one of the few areas where what I did had any discernible effect on what they learnt. Now that I’ve read Mayo’s book, I’ll be able to offer another excuse to my students the next time I teach error analysis, namely, that it’s how science really works.

I exaggerate her conclusion slightly, but only slightly. Mayo is a dues-paying philosopher of science (literally, it seems), and like most of the breed these days is largely concerned with questions of method and justification, of “ampliative inference” (C. S. Peirce) or “non-demonstrative inference” (Bertrand Russell). Put bluntly and concretely: why, since neither can be deduced rigorously from unquestionable premises, should we put more trust in David Grinspoon‘s ideas about Venus than in those of Immanuel Velikovsky? A nice answer would be something like, “because good scientific theories are arrived at by employing thus-and-such a method, which infallibly leads to the truth, for the following self-evident reasons.” A nice answer, but not one which is seriously entertained by anyone these days, apart from some professors of sociology and literature moonlighting in the construction of straw men. In the real world, science is alas fallible, subject to constant correction, and very messy. Still, mess and all, we somehow or other come up with reliable, codified knowledge about the world, and it would be nice to know how the trick is turned: not only would it satisfy curiosity (“the most agreeable of all vices” — Nietzsche), and help silence such people as do, in fact, prefer Velikovsky to Grinspoon, but it might lead us to better ways of turning the trick. Asking scientists themselves is nearly useless: you’ll almost certainly just get a recital of whichever school of methodology we happened to blunder into in college, or impatience at asking silly questions and keeping us from the lab. If this vice is to be indulged in, someone other than scientists will have to do it: namely, the methodologists. Continue reading


Earlier U-Phils and Deconstructions

Dear Reader: If you wish to see some previous rounds of philosophical analyses and deconstructions on this blog, we’ve listed them below:

Introductory explanation:

Mayo on Jim Berger:

Contributed deconstructions of J. Berger:

J. Berger on J. Berger:

Mayo on Senn:

Others on Senn:

Gelman on Senn:

Senn on Senn:

Mayo, Senn & Wasserman on Gelman:

Hennig on Gelman:

Deconstructing Dutch books:

Deconstructing Larry Wasserman

Aris Spanos on Larry Wasserman

Hennig and Gelman on Wasserman

Wasserman replies to Spanos and Hennig

concluding the deconstruction: Wasserman-Mayo

There are others, but this should do; if you care to write on my previous post (send directly to


D Mayo


A. Spanos: Jerzy Neyman and his Enduring Legacy

A Statistical Model as a Chance Mechanism

Aris Spanos

Jerzy Neyman (April 16, 1894 – August 5, 1981), was a Polish/American statistician[i] who spent most of his professional career at the University of California, Berkeley. Neyman is best known in statistics for his pioneering contributions in framing the Neyman-Pearson (N-P) optimal theory of hypothesis testing and his theory of Confidence Intervals.

One of Neyman’s most remarkable, but least recognized, achievements was his adaptation of Fisher’s (1922) notion of a statistical model to render it pertinent for non-random samples. Fisher’s original parametric statistical model Mθ(x) was based on the idea of ‘a hypothetical infinite population’, chosen so as to ensure that the observed data x0:=(x1,x2,…,xn) can be viewed as a ‘truly representative sample’ from that ‘population’:

“The postulate of randomness thus resolves itself into the question, ‘Of what population is this a random sample?’” (ibid., p. 313), underscoring that “the adequacy of our choice may be tested a posteriori” (p. 314).

In cases where the data x0 come from sample surveys, or can be viewed as a typical realization of a random sample X:=(X1,X2,…,Xn), i.e. Independent and Identically Distributed (IID) random variables, the ‘population’ metaphor can be helpful in adding some intuitive appeal to the inductive dimension of statistical inference, because one can imagine using a subset of a population (the sample) to draw inferences pertaining to the whole population.

This ‘infinite population’ metaphor, however, is of limited value in most applied disciplines relying on observational data. To see how inept this metaphor is consider the question: what is the hypothetical ‘population’ when modeling the gyrations of stock market prices? More generally, what is observed in such cases is a certain on-going process and not a fixed population from which we can select a representative sample. For that very reason, most economists in the 1930s considered Fisher’s statistical modeling irrelevant for economic data! Continue reading


U-Phil: Deconstructing Dynamic Dutch-Books?

Oh, she takes care of herself, she can wait if she wants,
She’s ahead of her time.
Oh, and she never gives out and she never gives in,
She just changes her mind.

(Billy Joel, “She’s Always a Woman”)

If we agree that we have degrees of belief in any and all propositions, then, it is often argued (by Bayesians), that if your beliefs do not conform to the probability calculus, you are being incoherent, and will lose money for sure (to a clever enough bookie). We can accept the claim that, were we required to take bets on our degrees of belief, then given that we prefer not to lose, we would not accept bets that ensured our losing. But this is a tautology, as others have pointed out, and entails nothing about degree of belief assignments. “That an agent ought not to accept a set of wagers according to which she loses come what may, if she would prefer not to lose, is a matter of deductive logic and not a property of beliefs” (Bacchus, Kyburg, and Thalos 1990: 476).[i] Nor need coerced (or imaginary) betting rates actually measure an agent’s degrees of belief in the truth of scientific hypotheses.

Nowadays, surprisingly, most Bayesian philosophers seem to dismiss as irrelevant the variety of threats of being Dutch-booked. Confronted with counterexamples in which violating Bayes’s rule seems perfectly rational on intuitive grounds, Bayesians contort themselves into a great many knots in order to retain the underlying Bayesian philosophy while sacrificing updating rules, long held to be the very essence of Bayesian reasoning. To face contemporary positions squarely calls for rather imaginative deconstructions. I invite your deconstructions (to by April 23 (see So You Want to Do a Philosophical Analysis). Says Howson:

“It is the entirely rational claim that I may be induced to act irrationally that the dynamic Dutch book argument, absurdly, would condemn as incoherent”. (Howson 1997: 287)[ii] [iii]

It used to be that frequentists and others who sounded the alarm about temporal incoherency were declared irrational. Now, it is the traditional insistence on updating by Bayes’s rule that was irrational all along. Continue reading


That Promissory Note From Lehmann’s Letter; Schmidt to Speak

Juliet Shaffer and Erich Lehmann

Monday, April 16, is Jerzy Neyman’s birthday, but this post is not about Neyman (that comes later, I hope). But in thinking of Neyman, I’m reminded of Erich Lehmann, Neyman’s first student, and a promissory note I gave in a post on September 15, 2011.  I wrote:

“One day (in 1997), I received a bulging, six-page, handwritten letter from him in tiny, extremely neat scrawl (and many more after that).  …. I remember it contained two especially noteworthy pieces of information, one intriguing, the other quite surprising.  The intriguing one (I’ll come back to the surprising one another time, if reminded) was this:  He told me he was sitting in a very large room at an ASA meeting where they were shutting down the conference book display (or maybe they were setting it up), and on a very long, dark table sat just one book, all alone, shiny red.  He said he wondered if it might be of interest to him!  So he walked up to it….  It turned out to be my Error and the Growth of Experimental Knowledge (1996, Chicago), which he reviewed soon after.”

But what about the “surprising one” that I was to come back to “if reminded”? (Yes, one person did remind me last month.) The surprising one is that Lehmann’s letter—his first letter to me—asked me to please read a paper by Frank Schmidt, to appear in his wife Juliet Shaffer’s new (at the time) journal, Psychological Methods, as he wondered if I had any ideas as to what might be done to answer such criticisms of frequentist tests! But, clearly, few people could have been in a better position than Lehmann to “do something about” these arguments…hence my surprise. But I think he was reluctant…. Continue reading


Call for papers: Philosepi?

Dear Reader: Here’s something of interest that was sent to me today (“philosepi”!)

Call for papers: Preventive Medicine special section on philosepi

The epidemiology and public health journal Preventive Medicine is devoting a special section to the Philosophy of Epidemiology, and published the first call for papers in its April 2012 issue. Papers will be published as they are received and reviewed. Deadline for inclusion in the first issue is 30 June 2012. See the Call For Papers for further information or contact Alex Broadbent who is happy to discuss possible topics, etc. All papers will be subject to peer review.

Preventive Medicine invites submissions from epidemiologists, statisticians, philosophers, lawyers, and others with a professional interest in the conceptual and methodological challenges that emerge from the field of epidemiology for a Special Section entitled “Philosophy of Epidemiology”, with Guest Editor Dr Alex Broadbent of the University of Johannesburg. Dr Broadbent also served as the Guest Editor of a related previous Special Section, “Epidemiology, Risk, and Causation”, that appeared in the October–November 2011 issue (Prev Med 53(4–5): 213–259). Continue reading


N. Schachtman: Judge Posner’s Digression on Regression

I am pleased to post Nathan Schachtman’s most recent blog entry on statistics in the law; he has kindly agreed to respond to comments and queries on this blog.*
April 6th, 2012

Cases that deal with linear regression are not particularly exciting except to a small band of “quant” lawyers who see such things “differently.”  Judge Posner, the author of several books, including Economic Analysis of Law (8th ed. 2011), is a judge who sees things differently as well.

In a case decided late last year, Judge Posner took the occasion to chide the district court and the parties’ legal counsel for failing to assess critically a regression analysis offered by an expert witness on the quantum of damages in a contract case.  ATA Airlines Inc. (ATA), a subcontractor of Federal Express Corporation, sued FedEx for breaching an alleged contract to include ATA in a lucrative U.S. military deal.

Remarkably, the contract liability was a non-starter; the panel of the Seventh Circuit reversed the judgment that had been entered in favor of the plaintiff.  There never was a contract, and so the case should never have gone to trial.  ATA Airlines, Inc. v. Federal Exp. Corp., 665 F.3d 882, 888-89 (2011).

End of Story?

In a diversity case, based upon state law, with no liability, you would think that the panel would and perhaps should stop once it reached the conclusion that there was no contract upon which to predicate liability.  Anything more would be, of course, pure obiter dictum, but Judge Posner could not resist the teaching moment, for the trial judge below, the parties, their counsel, and the bar: Continue reading


Going Where the Data Take Us

A reader, Cory J, sent me a question in relation to a talk of mine he once attended:

I have the vague ‘memory’ of an example that was intended to bring out a central difference between broadly Bayesian methodology and broadly classical statistics.  I had thought it involved a case in which a Bayesian would say that the data should be conditionalized on, and supports H, whereas a classical statistician effectively says that the data provides no support to H.  …We know the data, but we also know of the data that only ‘supporting’ data would be given us.  A Bayesian was then supposed to say that we should conditionalize on the data that we have, even if we know that we wouldn’t have been given contrary data had it been available.

That only “supporting” data would be presented need not be problematic in itself; it all depends on how this is interpreted. There might be no negative results to be had (H might be true), and thus none to “be given us”. Your last phrase, however, does describe a pejorative case for a frequentist error statistician, in that, if “we wouldn’t have been given contrary data” to H (in the sense of data in conflict with what H asserts), even “had it been available”, then the procedure had no chance of finding or reporting flaws in H. Thus only data in accordance with H would be presented, even if H is false; so H passes a “test” with minimal stringency or severity. I discuss several examples in the papers below (I think the reader had in mind Mayo and Kruse 2001). Continue reading
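The pejorative case can be made vivid with a toy simulation (everything here is a hypothetical illustration, not from the talk): suppose H says a coin is biased toward heads, and the reporting procedure discards every tail before showing us the data. Then each reported flip “supports” H whether H is true or false, so H passes no matter what, a “test” with zero severity:

```python
import random

def selectively_reported_flips(h_true, n=100, seed=0):
    """Flip a coin n times and report only the heads, i.e. only the
    outcomes that agree with H ('the coin is biased toward heads')."""
    rng = random.Random(seed)
    p_heads = 0.8 if h_true else 0.5
    flips = [rng.random() < p_heads for _ in range(n)]
    return [f for f in flips if f]  # disconfirming flips never reach us

# The reported data "support" H regardless of whether H is true:
print(all(selectively_reported_flips(True)), all(selectively_reported_flips(False)))
```

The probability that H passes is 1 even when H is false, which is exactly what it means for the procedure to have had no chance of finding or reporting flaws in H.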


Fallacy of Rejection and the Fallacy of Nouvelle Cuisine

In February, in London, criminologist Katrin H. and I went to see Jackie Mason do his shtick, a one-man show billed as his swan song to England.  It was like a repertoire of his “Greatest Hits” without a new or updated joke in the mix.  Still, hearing his rants for the nth time was often quite hilarious.

A sample: If you want to eat nothing, eat nouvelle cuisine. Do you know what it means? No food. The smaller the portion the more impressed people are, so long as the food’s got a fancy French name, haute cuisine. An empty plate with sauce!

As one critic wrote, Mason’s jokes “offer a window to a different era,” one whose caricatures and biases one can only hope we’ve moved beyond:

But it’s one thing for Jackie Mason to scowl at a seat in the front row and yell to the shocked audience member in his imagination, “These are jokes! They are just jokes!” and another to reprise statistical howlers, which, to me, are not jokes. This blog found its reason for being partly as a place to expose, understand, and avoid them. Recall the September 26, 2011 post “Whipping Boys and Witch Hunters”: [i]

Fortunately, philosophers of statistics would surely not reprise decades-old howlers and fallacies. After all, it is the philosopher’s job to clarify and expose the conceptual and logical foibles of others; and even if we do not agree, we would never merely disregard and fail to address the criticisms in published work by other philosophers. Oh wait…one of the leading texts repeats the fallacy in its third edition: Continue reading


History and Philosophy of Evidence-Based Health Care (EBHC)

Here is an announcement I received of an unusual short course on History and Philosophy of Evidence-Based Health Care (EBHC):  “Historical anecdotes are often easier to grasp than numbers,” the ad reads, but I hope they’re not going to be recommending the latter be replaced by the former?


The relationship between medicine and philosophy has a distinguished history. Maimonides, Avicenna, Galen, Descartes, and Locke were all philosophers and medical doctors. More recently, Peter Medawar and Archie Cochrane were strongly influenced by Karl Popper. There is an increasing body of evidence that combining History and Philosophy of Science on the one hand, and health care on the other creates synergies for the mutual benefit of all disciplines.

The course will consider:

  • How and why did the idea that comparative studies were necessary to inform health care decisions replace other ‘methods’ such as reasoning from more basic sciences and ‘expertise’?
  • Can average results be applied to individuals?
  • What is the role of values?

We believe that the history and philosophy of science is an integrated discipline, and we will explore these issues with appeal to current and historical examples.

“…it is fair to say that not very much attention was paid by the originators of EBM to the philosophy of science… One hopes that the attention of philosophers will be drawn to these questions” (Haynes, 2002)

A wise man proportions his belief to the evidence – David Hume

History of science without philosophy of science is blind … philosophy of science without history of science is empty – Norwood Russell Hanson


Philosophy of Statistics: Retraction Watch, Vol. 1, No. 1

APRIL FOOL’S DAY POST: This morning I received a paper I have been asked to review (anonymously as is typical). It is to head up a forthcoming issue of a new journal called Philosophy of Statistics: Retraction Watch.  This is the first I’ve heard of the journal, and I plan to recommend they publish the piece, conditional on revisions. I thought I would post the abstract here. It’s that interesting.

“Some Slightly More Realistic Self-Criticism in Recent Work in Philosophy of Statistics,” Philosophy of Statistics: Retraction Watch, Vol. 1, No. 1 (2012), pp. 1-19.

In this paper we delineate some serious blunders that we and others have made in published work on frequentist statistical methods. First, although we have claimed repeatedly that a core thesis of the frequentist testing approach is that a hypothesis may be rejected with increasing confidence as the power of the test increases, we now see that this is completely backwards, and we regret that we have never addressed, or even fully read, the corrections found in Deborah Mayo’s work since at least 1983, and likely even before that.

Second, we have been wrong to claim that Neyman-Pearson (N-P) confidence intervals are inconsistent because in special cases it is possible for a specific 95% confidence interval to be known to be correct. Not only are the examples required to show this absurdly artificial, but the frequentist could simply interpret this “vacuous interval” “as a statement that all parameter values are consistent with the data at a particular level,” which, as Cox and Hinkley note, is an informative statement about the limitations in the data (Cox and Hinkley 1974, 226). Continue reading

