I lost a bet last night with my criminologist colleague Katrin H. It turns out that you can order a drink called “Elbar Grease” in London, in a “secret” comedy club in a distant suburb (see Sept. 30 post).[i] The trouble is that it’s not nearly as sour as the authentic drink (not sour enough, in any case, for those of us who lack that aforementioned gene). But I did get to hear some great comedy, which hasn’t happened since the early days of exile, and it reminded me of my promise to revisit the “comedy hour at the Bayesian retreat” (see Sept. 3 post). Few things have been the butt of more jokes than examples of so-called “trivial intervals”.
Critics construct trivial intervals by artificially constraining a confidence interval estimation procedure, so that a 95% confidence interval is known to be correct: hence, “trivial.” If we know the interval is true, or so the criticism goes, then to report a .95 rather than a 100% confidence level is inconsistent!
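One standard textbook construction of a “known to be correct” interval (a toy case of my own choosing, not one the critics here cite) runs as follows: draw two observations from Uniform(θ − ½, θ + ½). The interval [min(x₁, x₂), max(x₁, x₂)] is a 50% confidence interval, yet whenever the two observations are more than ½ apart they must straddle θ, so the realized interval is certain to contain it. A quick simulation, with illustrative numbers, makes both facts visible:

```python
import random

random.seed(1)
theta = 0.0  # the true parameter (unknown to the experimenter, of course)

n = 100_000
covered = wide = wide_covered = 0
for _ in range(n):
    # two observations from Uniform(theta - 0.5, theta + 0.5)
    x1 = random.uniform(theta - 0.5, theta + 0.5)
    x2 = random.uniform(theta - 0.5, theta + 0.5)
    lo, hi = min(x1, x2), max(x1, x2)
    hit = lo <= theta <= hi
    covered += hit
    if hi - lo > 0.5:
        # spread so large the interval MUST contain theta:
        # each observation lies within 0.5 of theta, so a gap over 0.5
        # forces one observation below theta and one above it
        wide += 1
        wide_covered += hit

print(f"overall coverage: {covered / n:.3f}")                    # ~0.50
print(f"coverage given spread > 0.5: {wide_covered / wide:.3f}")  # 1.000
```

The criticism then runs: for such data, reporting the procedure’s 50% confidence level, when the realized interval is certain to be correct, looks inconsistent to anyone who reads the confidence level as a degree of belief in that particular interval.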
The criticism, like many others, is based on the demand that error-probabilistic measures be reinterpreted (or, more correctly, misinterpreted) to accord with a philosophy of statistics that is incompatible with a frequentist (error-statistical) account. The critic here does not merely posit that probability ought to enter to provide posterior probabilities, degrees of belief or the like—the assumption I called “probabilism”—he assumes, further, that the error statistician also shares this goal. Protests to one side, we keep hearing that, deep down, we know that’s what we really want. So whenever error probabilities, whether p-values or confidence levels, disagree with a favored posterior, it’s alleged to show the unsoundness of our methods! I can point to a dozen or more examples.
I discussed this, as a baby, in a (predoctoral!) note, though it appeared a year later, in 1981, with respect to an example from Teddy Seidenfeld (Mayo 1981). Cox addressed it earlier: “Viewed as a single statement [the trivial interval] is trivially true, but, on the other hand, viewed as a statement that all parameter values are consistent with the data at a particular level is a strong statement about the limitations of the data” (Cox and Hinkley 1974, 226).
But it is still used as a knock-down criticism of frequentist confidence intervals. Never mind that the criticism assumes an erroneous probabilistic instantiation; supposedly we simply can’t help it! We say no, but don’t really mean no. But we can help it, and no means no. In our construal, the trivial interval amounts to saying that no parameter values are ruled out with severity, scarcely a sign of inconsistency. Even then, specific hypotheses within the interval would be associated with different severity values. (Conversely, it can happen that all parameter values are ruled out at a chosen level of severity.) Pointing to such cases, however ad hoc, and however contrived solely for the purpose, is often regarded as ending the discussion of frequentist methods! Or, at the very least, it is thought to show we “should be subject to some re-education,” whether or not we have any desire to convert (Bernardo 2008).
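To see how parameter values inside a single interval come out with different severity assessments, here is a minimal sketch for the familiar one-sided test of a normal mean with known σ (the function name and the numbers are mine, purely illustrative). For the claim μ > μ₁ after observing x̄, the severity is the probability of a result no larger than x̄ were μ equal to μ₁:

```python
from math import erf, sqrt

def Phi(z):
    # standard normal CDF via the error function
    return 0.5 * (1 + erf(z / sqrt(2)))

def severity_mu_gt(mu1, xbar, sigma, n):
    # SEV(mu > mu1) = P(Xbar <= xbar_obs ; mu = mu1)
    # for the one-sided test of a normal mean with known sigma
    se = sigma / sqrt(n)
    return Phi((xbar - mu1) / se)

# illustrative numbers: sigma = 1, n = 25, observed mean xbar = 0.4
for mu1 in (0.0, 0.2, 0.4, 0.6):
    print(f"SEV(mu > {mu1}): {severity_mu_gt(mu1, 0.4, 1.0, 25):.3f}")
```

With these numbers the claim μ > 0 passes with severity about .98, μ > 0.2 with about .84, μ > 0.4 with .50, and μ > 0.6 with only about .16, even though all four values may sit comfortably inside one confidence interval. Reporting a single “level” for the whole interval is not what the severity construal does.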
The critics’ examples of trivial intervals are wildly artificial, but science is no stranger to situations in which none of the possible values for a parameter can be discriminated, and the “trivial interval” is precisely what we would want to infer. The famous red-shift experiments on GTR, for instance, were determined to be incapable of discriminating between different relativistic theories of gravity. It would be decades before this result became obvious, and when it did, it was exceedingly informative, not trivial.
Bernardo, J. 2008. Comment on article by Gelman. Bayesian Analysis 3 (3): 451–454.
Cox, D. R., and D. V. Hinkley. 1974. Theoretical Statistics. London: Chapman & Hall.
Mayo, D. 1981. In defense of the Neyman-Pearson theory of confidence intervals. Philosophy of Science 48 (2): 269–280.