
My Responses (at the P-value debate)


How did I respond to those 7 burning questions at last week’s (“P-Value”) Statistics Debate? Here’s a fairly close transcript of my (a) general answer, and (b) final remark, for each question–without the in-between responses to Jim and David. The exception is question 5 on Bayes factors, which naturally included Jim in my general answer. 

The questions with the most important consequences, I think, are questions 3 and 5. I’ll explain why I say this in the comments. Please share your thoughts.

Question 1. Given the issues surrounding the misuses and abuse of p-values, do you think they should continue to be used or not? Why or why not?

Yes, we should continue to use P-values and statistical significance tests. P-values are one piece in a rich set of tools for assessing and controlling the probabilities of misleading interpretations of data (error probabilities). They’re “the first line of defense against being fooled by randomness” (Yoav Benjamini). If even larger or more extreme effects than the one you observed are frequently brought about by chance variability alone (the P-value is not small), clearly you don’t have evidence of incompatibility with the “mere chance” hypothesis.
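To make this reasoning concrete, here is a minimal sketch in Python with made-up numbers, assuming a simple normal model with known σ (an illustration of the logic, not anything presented at the debate): it asks how often chance variability alone would produce a sample mean at least as large as the one observed.

```python
# Minimal sketch of the P-value reasoning above, with made-up numbers:
# how often would chance variability alone yield a difference at least as
# large as the one observed?
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n, sigma, mu0 = 100, 10.0, 0.0      # sample size, known SD, "mere chance" mean
observed_mean = 2.3                  # hypothetical observed sample mean

# Analytic one-sided P-value for H0: mu = mu0 vs H1: mu > mu0
z = (observed_mean - mu0) / (sigma / np.sqrt(n))
p_value = 1 - stats.norm.cdf(z)

# Simulation: frequency of sample means at least this large under chance alone
sims = rng.normal(mu0, sigma, size=(100_000, n)).mean(axis=1)
p_sim = np.mean(sims >= observed_mean)

print(f"z = {z:.2f}, P-value = {p_value:.4f}, simulated frequency = {p_sim:.4f}")
```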

Even those who criticize P-values will employ them, at least if they care to check the assumptions of their statistical models—this includes Bayesians George Box, Andrew Gelman, and Jim Berger.

Critics of P-values often allege that it’s too easy to obtain small P-values, but notice that the replication crisis is all about how difficult it is to get small P-values with preregistered hypotheses. This shows the problem isn’t P-values but selection effects and data dredging. However, the same data-dredged hypotheses can enter likelihood ratios, Bayes factors, and Bayesian updating, except that we then lose the direct grounds to criticize inferences for flouting error statistical control. The introduction of prior probabilities, which may also be data dependent, offers further researcher flexibility.

Those who reject P-values are saying we should reject a method because it can be used badly. That’s a very bad argument, committing a straw person fallacy.

We should reject misuses and abuses of P-values, but there’s a danger of blithely substituting “alternative tools” that throw out the error control baby with the bad statistics bathwater.

Final remark on P-values

What’s missed in the reject-P-values movement is that a major reason for calling in statistics in science is that it gives tools to inquire whether an observed phenomenon could be a real effect or just noise in the data. P-values have the intrinsic properties for this task, if used properly. To reject them is to jeopardize this important role of statistics. As Fisher emphasized, we seek randomized controlled trials in order to ensure the validity of statistical significance tests. To reject P-values because they don’t give posterior probabilities of hypotheses is illicit. The onus is on those who claim we want such posteriors to show, for any particular way of getting them, why.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Question 2 Should practitioners avoid the use of thresholds (e.g., P-value thresholds) in interpreting data? If so, does this preclude testing?

There’s a lot of confusion about thresholds. What people oppose are dichotomous accept/reject routines. We should move away from them, as well as from unthinking uses of thresholds like 95% confidence levels or other quantities. Attained P-values should be reported (as all the founders of tests recommended). We should not confuse fixing a threshold to use habitually with prespecifying a threshold beyond which there is evidence of inconsistency with a test hypothesis (which I’ll often call “the null” for short).

Some think that banishing thresholds would diminish P-hacking and data dredging. It is just the opposite. In a world without thresholds, it would be harder to criticize those who fail to meet a small P-value because they engaged in data dredging and multiple testing, and at most have given us a nominally small P-value. Yet that is the upshot of declaring that predesignated P-value thresholds should not be used at all in interpreting data. If an account cannot specify, in advance, any outcomes that will not count as evidence for a claim, then there is no test of that claim.

Giving up on tests means forgoing statistical falsification. What’s the point of insisting on replications if at no point can you say that the effect has failed to replicate?

You may favor a philosophy of statistics that rejects statistical falsification, but it will not do to declare by fiat that science should reject the falsification or testing view. (The “no thresholds” view also torpedoes common testing uses of confidence intervals and Bayes Factor standards.)

So my answer is NO and YES: don’t abandon thresholds; to do so is to ban tests.

Final remark on thresholds Q-2

A common fallacy is to suppose that because we have a continuum, we cannot distinguish points at the extremes (the fallacy of the beard). We can distinguish results readily produced by random variability from cases where there is evidence of incompatibility with the chance-variability hypothesis. We use thresholds throughout science, for instance to determine whether someone is pre-diabetic, diabetic, and so on.

When P-values are banned altogether … the eager researcher does not stop at “I’m simply describing the data”; they invariably go on to claim evidence for a substantive psychological theory—but on results that would be blocked had they required a reasonably small P-value threshold.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Question 3 Is there a role for sharp null hypotheses or should we be thinking about interval nulls?

I’d agree with those who regard testing of a point null hypothesis as problematic and often misused. Notice that arguments purporting to show that P-values exaggerate evidence are based on this point null together with a spiked, or lumped, prior on it. By giving a spike prior to the nil, it’s easy to find the nil more likely than the alternative—the Jeffreys-Lindley paradox: the P-value can differ greatly from the posterior probability on the null. But the posterior can also equal the P-value; it can range from p to 1 – p. In other words, the Bayesians differ amongst themselves, because with diffuse priors the P-value can equal the posterior on the null hypothesis.
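To see where the disagreement comes from, here is a minimal sketch of the standard spiked-prior computation, with illustrative numbers only (a normal model with known σ, prior mass 0.5 on the point null, and a N(0, τ²) prior under the alternative; these choices are assumptions made for the illustration, not Jim’s):

```python
# Sketch of the point-null / spiked-prior computation behind the
# Jeffreys-Lindley disagreement (illustrative numbers only).
# Model: xbar ~ N(theta, sigma^2/n); H0: theta = 0 gets prior mass 0.5,
# and under H1, theta ~ N(0, tau^2).
import numpy as np
from scipy import stats

n, sigma, tau = 1000, 1.0, 1.0
pi0 = 0.5                       # the "lump" prior on the point null
se = sigma / np.sqrt(n)

z = 1.96                        # a result just significant at the two-sided .05 level
xbar = z * se
p_value = 2 * (1 - stats.norm.cdf(abs(z)))

# Marginal likelihoods of xbar under H0 and under H1 (theta integrated out)
m0 = stats.norm.pdf(xbar, loc=0.0, scale=se)
m1 = stats.norm.pdf(xbar, loc=0.0, scale=np.sqrt(tau**2 + se**2))

bf01 = m0 / m1                                   # Bayes factor in favor of H0
post_H0 = pi0 * bf01 / (pi0 * bf01 + (1 - pi0))  # posterior probability of H0

print(f"P-value = {p_value:.3f}, BF01 = {bf01:.2f}, P(H0|x) = {post_H0:.2f}")
```

With these numbers the result is just significant at the two-sided 0.05 level, yet the posterior probability of the null comes out around 0.8; it is the spike (and the choice of τ) that drives the divergence.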

My own work reformulates the results of statistical significance tests in terms of discrepancies from the null that are well or poorly tested. A small P-value indicates a discrepancy from a null value because, with high probability (1 – p), the test would have produced a larger P-value (a less impressive difference) in a world adequately described by H0. Since the null hypothesis would very probably have survived if correct, when it doesn’t survive, that indicates inconsistency with it.
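A quick simulation sketch of that claim, again with hypothetical numbers (a one-sided normal test with known σ, so the P-value is continuous): in a world adequately described by H0, the test yields a P-value larger than any observed p with frequency 1 – p.

```python
# Simulation sketch of the claim above: under H0, the test would yield a
# P-value larger than the observed p with probability 1 - p
# (hypothetical normal example, not from the debate).
import numpy as np
from scipy import stats

rng = np.random.default_rng(7)
n, sigma, mu0 = 25, 1.0, 0.0
p_obs = 0.01                      # suppose this small P-value was observed

# Generate many sample means from H0 and record the one-sided P-value each yields
xbars = rng.normal(mu0, sigma / np.sqrt(n), size=200_000)
p_vals = 1 - stats.norm.cdf((xbars - mu0) / (sigma / np.sqrt(n)))

freq_larger = np.mean(p_vals > p_obs)
print(f"Pr(P > {p_obs}; H0) is about {freq_larger:.3f}  (compare 1 - p = {1 - p_obs:.3f})")
```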

Final remark on sharp nulls Q-3

The move to redefine significance, advanced by a megateam including Jim, rests upon the lump of high prior probability on the null, as well as on evaluating P-values using Bayes factors. It’s not equipoise; it’s biased in favor of the null. The redefiners are prepared to say there’s no evidence against, or even evidence for, a null hypothesis, even though that point null is entirely excluded from the corresponding 95% confidence interval. This would often erroneously fail to uncover discrepancies.

Whether to use a lower threshold is one thing; arguing that we should on the basis of Bayes factor standards lacks legitimate grounds.[1][2]

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Question 4 Should we be teaching hypothesis testing anymore, or should we be focusing on point estimation and interval estimation?

Absolutely. The way to understand confidence interval estimation, and to fix its shortcomings, is to understand their duality with tests. The same person who developed confidence intervals developed tests in the 1930s—Jerzy Neyman. The intervals are inversions of tests.

A 95% CI contains the parameter values that are not statistically significantly different from the data at the 5% level.

While I agree that P-values should be accompanied by CIs, my own preferred reconstruction of tests blends intervals and tests. It reports the discrepancies from a reference value that are well or poorly indicated at different levels—not just one level like 0.95. This improves on current confidence interval use. For example, the justification standardly given for inferring a particular confidence interval estimate is that it came from a method which, with high probability, would cover the true parameter value. This is a performance justification. The testing perspective on CIs gives an inferential justification. I would justify inferring evidence that the parameter exceeds the CI lower bound this way: if the parameter were smaller than the lower bound, then with high probability we would have observed a smaller value of the test statistic than we did.
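Here is a minimal sketch of what such a report might look like for a hypothetical normal mean with known σ (my own illustrative numbers): one-sided lower bounds for the parameter at several levels, rather than a single 0.95 interval.

```python
# Sketch of reporting discrepancies from a reference value at several levels,
# rather than at a single 95% level (illustrative normal-mean example).
import numpy as np
from scipy import stats

n, sigma = 100, 10.0
xbar = 2.0                       # hypothetical observed sample mean
se = sigma / np.sqrt(n)

for conf in (0.5, 0.8, 0.9, 0.95, 0.975):
    lower = xbar - stats.norm.ppf(conf) * se
    # Inferential reading: if mu were below `lower`, a sample mean this large
    # or larger would occur with probability at most 1 - conf.
    print(f"mu > {lower:5.2f} is indicated at level {conf:.3f}")
```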

Amazingly, the last president of the ASA, Karen Kafadar, had to appoint a new task force on statistical significance tests to affirm that statistical hypothesis testing is indeed part of good statistical practice. Much credit goes to her for bringing this about.

Final remark on question 4

Understanding the duality between tests and CIs is the key to improving both. …So it makes no sense for advocates of the “new statistics” to shun tests. The testing interpretation of confidence intervals also scotches criticisms of examples where a 95% confidence estimate contains all possible parameter values. Although such an inference is ‘trivially true,’ it is scarcely vacuous in the testing construal. As David Cox remarks, that all parameter values are consistent with the data is an informative statement about the limitations of the data (to detect discrepancies at the particular level).

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Question 5  What are your reasons for or against the use of Bayes Factors?

Jim is a leading advocate of Bayes factors, and also of a non-subjective interpretation of the Bayesian prior probabilities to be used (2006). ‘Eliciting’ subjective priors, Jim has convincingly argued, is too difficult; experts’ prior beliefs almost never even overlap, he says; and scientists are reluctant for subjective beliefs to overshadow data. Default priors (reference or non-subjective priors) are supposed to prevent prior beliefs from influencing the posteriors; they are data dominant in some sense. But there’s a variety of incompatible ways to go about this job.

(A few are maximum entropy, invariance, maximizing the missing information, coverage matching.) As David Cox points out, it’s unclear how we should interpret these default probabilities. Default priors, we are told, are simply formal devices to obtain default posteriors. “The priors are not to be considered expressions of uncertainty, ignorance, or degree of belief. Conventional priors may not even be probabilities…” (Cox and Mayo 2010, 299), being improper.

Prior probabilities are supposed to let us bring in background information, but this pulls in the opposite direction from the goal of the default prior which is to reflect just the data. The goal of representing your beliefs is very different from the goal of finding a prior that allows the data to be dominant. Yet, current uses of Bayesian methods combine both in the same computation—how do you interpret them? I think this needs to be assessed now that they’re being so widely advocated.

Final remark on Q-5  

Bayes factors give a comparative appraisal, not a test. And the appraisal depends on how you assign the priors to the test and alternative hypotheses.

Bayesian testing, Bayesians admit, is a work in progress. We shouldn’t kill a well-worked-out theory of testing for one that is admittedly a work in progress.

It might be noted that even the default Bayesian José Bernardo holds that the disagreement between the P-value and the Bayes factor (the Jeffreys-Lindley paradox, or Fisher-Jeffreys disagreement) is actually an indictment of the Bayes factor, because it finds evidence in favor of a null hypothesis even when an alternative is much more likely.

Other Bayesians dislike the default priors because they can lead to improper posteriors and thus to violations of probability theory. This leads some like Dennis Lindley back to subjective Bayesianism.

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Question 6 With so much examination of if/why the usual nominal type I error .05 is appropriate, should there be similar questions about the usual nominal type II error?

No, there should not be a similar examination of type II error bounds. Rigid bounds for either error probability should be avoided. N-P themselves urged that the specifications be used with discretion and understanding.

It occurs to me that if an examination is wanted, it should be done by the new ASA Task Force on Significance Tests and Replicability. Its members aren’t out to argue for rejecting significance tests but to show that they are part of proper statistical practice.

Power, the complement of the type II error probability, is, I often say, one of the most abused notions (it is only defined in terms of a threshold). Critics of statistical significance tests, I’m afraid to say, often fallaciously take a just statistically significant difference at level α as a better indication of a discrepancy from the null if the test’s power to detect that discrepancy is high rather than low. This is like saying it’s a better indication of a discrepancy of at least 10 than of at least 1 (whatever the parameter is). I call it the Mountains out of Molehills fallacy. It results from trying to use power and α as ingredients for a Bayes factor, and from viewing non-Bayesian methods through a Bayesian lens.

We set a high power to detect population effects of interest, but finding statistical significance doesn’t warrant saying we have evidence for those effects.

(The significance tester doesn’t infer points but inequalities: discrepancies of at least such and such.)
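A small sketch with illustrative numbers (a one-sided normal test with known σ, the severity-style computation from my own work; none of the numbers come from the debate) shows the fallacy numerically: for a result just reaching the α = 0.05 cutoff, the higher the power against a discrepancy μ1, the poorer the indication that μ > μ1.

```python
# Sketch of the "Mountains out of Molehills" point (illustrative numbers):
# for a result just significant at alpha, the higher the power against a
# discrepancy mu1, the *poorer* the indication that mu > mu1.
import numpy as np
from scipy import stats

n, sigma, mu0, alpha = 100, 10.0, 0.0, 0.05
se = sigma / np.sqrt(n)
cutoff = mu0 + stats.norm.ppf(1 - alpha) * se    # just-significant sample mean

for mu1 in (0.5, 1.0, 2.0, 3.0):
    power = 1 - stats.norm.cdf((cutoff - mu1) / se)   # Pr(reject; mu = mu1)
    # Probability the test would have given a *smaller* (less significant)
    # result than the cutoff if mu were really mu1:
    indication_gt_mu1 = stats.norm.cdf((cutoff - mu1) / se)
    print(f"mu1 = {mu1}: power = {power:.2f}, "
          f"indication that mu > mu1 = {indication_gt_mu1:.2f}")
```

When the power against μ1 is around 0.9, the indication that μ exceeds μ1 from a just-significant result is only around 0.1, the opposite of what the fallacious reading suggests.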

Final remark on Q-6, power

A legitimate criticism of P-values is that they don’t give population effect sizes. Neyman developed power analysis for this purpose, in addition to using it to compare tests pre-data. Yet critics of tests typically keep to Fisherian tests, which don’t have explicit alternatives or power. Neyman was keen to avoid misinterpreting non-significant results as evidence for a null hypothesis. He used power analysis post-data (as Jacob Cohen did much later) to set an upper bound on the discrepancy from the null value.

If a test has high power to detect a population discrepancy, but does not do so, it’s evidence the discrepancy is absent (qualified by the level).

My preference is to use the attained power but it’s the same reasoning.

I see people objecting to post-hoc power as “sinister,” but they’re referring to computing power using the observed effect as the parameter value in the computation. That is not power analysis.
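To make the distinctions in this answer concrete, here is a sketch with made-up numbers (a one-sided normal test with known σ) contrasting three computations: ordinary power at a prespecified discrepancy of interest (the Neyman/Cohen-style post-data use), the attained-power variant that uses the actual outcome, and the criticized “observed power” that plugs the observed effect in as the parameter value.

```python
# Sketch contrasting power analysis post-data (Neyman/Cohen style), attained
# power, and the criticized "observed power" computation.
# Illustrative normal-mean numbers, not from the debate.
import numpy as np
from scipy import stats

n, sigma, mu0, alpha = 100, 10.0, 0.0, 0.05
se = sigma / np.sqrt(n)
cutoff = mu0 + stats.norm.ppf(1 - alpha) * se
xbar = 0.5                                        # observed mean, not significant

mu1 = 3.0                                         # prespecified discrepancy of interest
power_mu1 = 1 - stats.norm.cdf((cutoff - mu1) / se)        # high power against mu1
# Neyman/Cohen-style reading: power to detect mu1 was high, yet no rejection,
# so there is an indication that mu <= mu1 (qualified by that power).

# Attained-power variant: use the actual outcome xbar, not just "no rejection":
# probability of a sample mean larger than the one observed, were mu = mu1.
attained = 1 - stats.norm.cdf((xbar - mu1) / se)

# The criticized "post-hoc power": plugging the observed effect in as mu1
observed_power = 1 - stats.norm.cdf((cutoff - xbar) / se)   # merely tracks the P-value

print(f"power at mu1: {power_mu1:.2f}, attained: {attained:.2f}, "
      f"'observed power': {observed_power:.2f}")
```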

^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^

Question 7 What are the problems that lead to the reproducibility crisis, and what are the most important things we should do to address it?

Irreplication is due to many factors, from data generation and modeling to problems of measurement and linking statistics to substantive science. Here I just focus on P-values. The key problem is that in many fields, latitude in collecting and interpreting data makes it too easy to dredge up impressive-looking findings even when they are spurious. The fact that it becomes difficult to replicate effects when features of the tests are tied down shows that the problem isn’t P-values but exploiting researcher flexibility and multiple testing. The same flexibility can occur when the P-hacked hypotheses enter methods being promoted as alternatives to significance tests: likelihood ratios, Bayes factors, or Bayesian updating. But the direct grounds to criticize inferences as flouting error statistical control are lost (at least not without adding non-standard stipulations). Since these methods condition on the actual outcome, they don’t consider outcomes other than the one observed. This is embodied in something called the likelihood principle.

Admittedly, error control, some think, is only of concern to ensure low error rates in some long run. I argue instead that what bothers us about the P-hacker and data dredger is that they have done a poor job in the case at hand. Their method very probably would have found some such effect even if it were merely noise.

Probability here is used to assess how well tested claims are, which is very different from how comparatively believable they are—claims can even be true though poorly tested. Though there’s room for both types of assessment in different contexts, how plausible a claim is and how well tested it is are very different, and this needs to be recognized.

To address replication problems, statistical reforms should be developed together with a philosophy of statistics that properly underwrites them.[3]

Final remark on Q-7

Please see the video here or in this news article.

[1] The following are footnotes 4 and 5 from page 252 of Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars. The relevant section is 4.4 (pp. 246-259).

Casella and Roger (not Jim) Berger (1987b) argue, “We would be surprised if most researchers would place even a 10% prior probability of H0. We hope that the casual reader of Berger and Delampady realizes that the big discrepancies between P-values and P(H0|x) … are due to a large extent to the large value of [the prior of 0.5 to H0] that was used.” The most common uses of a point null, asserting that the difference between means is 0 or that a regression coefficient is 0, merely describe a potentially interesting feature of the population, with no special prior believability. “J. Berger and Delampady admit…, P-values are reasonable measures of evidence when there is no a priori concentration of belief about H0” (ibid., p. 345). Thus, “the very argument that Berger and Delampady use to dismiss P-values can be turned around to argue for P-values” (ibid., p. 346).

Harold Jeffreys developed the spiked priors for a very special case: to give high posterior probabilities to well corroborated theories. This is quite different from the typical use of statistical significance tests to detect indications of an observed effect that is not readily due to noise. (Of course isolated small P-values do not suffice to infer a genuine experimental phenomenon.)

In defending spiked priors, J. Berger and Sellke move away from the importance of effect size. “Precise hypotheses . . . ideally relate to, say, some precise theory being tested. Of primary interest is whether the theory is right or wrong; the amount by which it is wrong may be of interest in developing alternative theories, but the initial question of interest is that modeled by the precise hypothesis test” (1987, p. 136).

[2] As Cox and Hinkley explain, most tests of interest are best considered as running two one-sided tests, insofar as we are interested in the direction of departure. (Cox and Hinkley 1974; Cox 2020).

[3] In the error statistical view, the interest is not in measuring how strong your degree of belief in H is but how well you can show why it ought to be believed or not. How well can you put to rest skeptical challenges? What have you done to put to rest my skepticism of your lump prior on “no effect”?

 

 


Live Exhibit: Bayes Factors & Those 6 ASA P-value Principles


Live Exhibit: So what happens if you replace “p-values” with “Bayes Factors” in the 6 principles from the 2016 American Statistical Association (ASA) Statement on P-values? (Remove “or statistical significance” in question 5.)

Does the one positive assertion hold? Are the 5 “don’ts” true?


September 24: Bayes factors from all sides: who’s worried, who’s not, and why (R. Morey)

Information and directions for joining our forum are here.


