Monthly Archives: July 2017

“A megateam of reproducibility-minded scientists” look to lowering the p-value

Having discussed the “p-values overstate the evidence against the null fallacy” many times over the past few years, I leave it to readers to disinter the issues (pro and con), and appraise the assumptions, in the most recent rehearsal of the well-known Bayesian argument. There’s nothing intrinsically wrong with demanding everyone work with a lowered p-value–if you’re so inclined to embrace a single, dichotomous standard without context-dependent interpretations–especially if larger sample sizes are required to compensate for the loss of power. But lowering the p-value won’t solve the problems that vex people (biasing selection effects), and is very likely to introduce new ones (see my comment). Kelly Servick, a reporter from Science, gives the ingredients of the main argument given by “a megateam of reproducibility-minded scientists” in an article out today:

To explain to a broader audience how weak the .05 statistical threshold really is, Johnson joined with 71 collaborators on the new paper (which partly reprises an argument Johnson made for stricter p-values in a 2013 paper). Among the authors are some big names in the study of scientific reproducibility, including psychologist Brian Nosek of the University of Virginia in Charlottesville, who led a replication effort of high-profile psychology studies through the nonprofit Center for Open Science, and epidemiologist John Ioannidis of Stanford University in Palo Alto, California, known for pointing out systemic flaws in biomedical research.

The authors set up a scenario where the odds are one to 10 that any given hypothesis researchers are testing is inherently true—that a drug really has some benefit, for example, or a psychological intervention really changes behavior. (Johnson says that some recent studies in the social sciences support that idea.) If an experiment reveals an effect with an accompanying p-value of .05, that would actually mean that the null hypothesis—no real effect—is about three times more likely than the hypothesis being tested. In other words, the evidence of a true effect is relatively weak.

But under those same conditions (and assuming studies have 100% power to detect a true effect), requiring a p-value at or below .005 instead of .05 would make for much stronger evidence: It would reduce the rate of false-positive results from 33% to 5%, the paper explains.
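The arithmetic behind those two figures is easy to check. Under the quoted scenario (prior odds of 1:10 that the tested hypothesis is true, 100% power), the false-positive rate among results crossing a threshold α is α·P(H0)/(α·P(H0) + power·P(H1)). A quick sketch of that bookkeeping (my own check, not code from the paper):

```python
# Back-of-envelope check of the quoted figures, assuming the scenario as
# described: prior odds 1:10 that the alternative is true, and 100% power.
def false_positive_rate(alpha, prior_odds_true=1 / 10, power=1.0):
    """Fraction of results with p < alpha that come from true nulls."""
    p_true = prior_odds_true / (1 + prior_odds_true)  # P(H1) = 1/11
    p_null = 1 - p_true                               # P(H0) = 10/11
    false_pos = alpha * p_null   # nulls crossing the threshold
    true_pos = power * p_true    # real effects detected
    return false_pos / (false_pos + true_pos)

print(round(false_positive_rate(0.05), 3))   # 0.333: one in three
print(round(false_positive_rate(0.005), 3))  # 0.048: about 5%
```

Note that lowering α to .005 buys the drop from 33% to ~5% only under these stylized assumptions; with realistic (sub-100%) power the picture is less tidy.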

Her article is here.

From the perspective of the Bayesian argument on which the proposal is based, the p-value appears to exaggerate evidence, but from the error statistical perspective, it’s the Bayesian inference (to the alternative) that exaggerates the inference beyond what frequentists allow. Greenland, Senn, Rothman, Carlin, Poole, Goodman, Altman (2016, p. 342) observe, correctly, that whether “P-values exaggerate the evidence” “depends on one’s philosophy of statistics and the precise meaning given to the terms involved”. [1]

Share your thoughts.

[1] “…it has been argued that P values overstate evidence against test hypotheses, based on directly comparing P values against certain quantities (likelihood ratios and Bayes factors) that play a central role as evidence measures in Bayesian analysis … Nonetheless, many other statisticians do not accept these quantities as gold standards” (Greenland et al., p. 342).

Categories: Error Statistics, highly probable vs highly probed, P-values, reforming the reformers | 13 Comments

3 YEARS AGO (JULY 2014): MEMORY LANE


MONTHLY MEMORY LANE: 3 years ago: July 2014. I mark in red 3-4 posts from each month that seem most apt for general background on key issues in this blog, excluding those reblogged recently[1]. Posts that are part of a “unit” or a group count as one. This month there are three such groups: 7/8 and 7/10; 7/14 and 7/23; 7/26 and 7/31.

July 2014

  • (7/7) Winner of June Palindrome Contest: Lori Wike
  • (7/8) Higgs Discovery 2 years on (1: “Is particle physics bad science?”)
  • (7/10) Higgs Discovery 2 years on (2: Higgs analysis and statistical flukes)
  • (7/14) “P-values overstate the evidence against the null”: legit or fallacious? (revised)
  • (7/23) Continued:”P-values overstate the evidence against the null”: legit or fallacious?
  • (7/26) S. Senn: “Responder despondency: myths of personalized medicine” (Guest Post)
  • (7/31) Roger Berger on Stephen Senn’s “Blood Simple” with a response by Senn (Guest Posts)

[1] Monthly memory lanes began at the blog’s 3-year anniversary in Sept, 2014.


Categories: 3-year memory lane, Higgs, P-values | Leave a comment

On the current state of play in the crisis of replication in psychology: some heresies

The replication crisis has created a “cold war between those who built up modern psychology and those” tearing it down with failed replications–or so I read today [i]. As an outsider (to psychology), the severe tester is free to throw some fuel on the fire on both sides. This is a short update on my post “Some ironies in the replication crisis in social psychology” from 2014.

Following the model from clinical trials, an idea gaining steam is to prespecify a “detailed protocol that includes the study rationale, procedure and a detailed analysis plan” (Nosek et al. 2017). In this new paper, they’re called registered reports (RRs). An excellent start. I say it makes no sense to favor preregistration and yet deny the relevance to evidence of optional stopping and of outcomes other than the one observed. That your appraisal of the evidence is altered when you actually see the history supplied by the RR is equivalent to worrying about biasing selection effects when they’re not written down; your statistical method should pick up on them (as do p-values, confidence levels and many other error probabilities). There’s a tension between the RR requirements and accounts following the Likelihood Principle (no need to name names [ii]).
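To make the optional stopping point concrete, here is a small simulation (my own sketch, not anything from Nosek et al.): testing after every batch of observations under a true null, and stopping at the first p < .05, inflates the overall type I error well above the nominal 5%, even though the likelihood for the data actually observed is unchanged.

```python
# Sketch: why optional stopping matters to an error statistician.
# Simulate a true null (mean 0, sd 1); run a z-test after every batch of 10
# observations up to 100, stopping as soon as p < .05.
import math
import random
from statistics import NormalDist

def z_test_p(xs):
    """Two-sided z-test p-value for mean 0, known sd 1 (kept simple)."""
    z = sum(xs) / math.sqrt(len(xs))
    return 2 * (1 - NormalDist().cdf(abs(z)))

def trial(rng, peeks=10, batch=10):
    xs = []
    for _ in range(peeks):
        xs += [rng.gauss(0, 1) for _ in range(batch)]
        if z_test_p(xs) < 0.05:
            return True  # declared "significant": a false positive here
    return False

rng = random.Random(1)
n_sims = 2000
rate = sum(trial(rng) for _ in range(n_sims)) / n_sims
print(round(rate, 3))  # well above 0.05 (typically around 0.2 here)
```

A method that registers the stopping rule, as an RR would, can adjust for this; a method that conditions only on the observed likelihood cannot register that anything is amiss.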

“By reviewing the hypotheses and analysis plans in advance, RRs should also help neutralize P-hacking and HARKing (hypothesizing after the results are known) by authors, and CARKing (critiquing after the results are known) by reviewers with their own investments in the research outcomes, although empirical evidence will be required to confirm that this is the case” (Nosek et al. 2017)

A novel idea is that papers are to be provisionally accepted before the results are in. To the severe tester, that requires the author to explain how she will pinpoint blame for negative results. How will she use them to learn something (improve or falsify claims or methods)? I see nothing in preregistration, in and of itself, so far, to promote that. Existing replication research doesn’t go there. It would be wrong-headed to condemn CARKing, by the way. Post-data criticism of inquiries must be post-data. How else can you check if assumptions were met by the data in hand? [Note 7/12: Of course, what they must not be are ad hoc saves of the original finding, else they are unwarranted–minimal severity.] It would be interesting to see inquiries into potential hidden biases not often discussed. For example, what did the students (experimental subjects) know and when did they know it (the older the effect the more likely they know it)? What’s the attitude toward the finding conveyed (to experimental subjects) by the person running the study? I’ve little reason to point any fingers, it’s just part of the severe tester’s inclination toward cynicism and error probing. (See my “rewards and flexibility hypothesis” in my earlier discussion.)

It’s too soon to see how RRs will fare, but plenty of credit is due to those sticking their necks out to upend the status quo. Research into changing incentives is a field in its own right. The severe tester may, again, appear awfully jaundiced to raise any qualms, but we shouldn’t automatically assume that research into incentivizing researchers to behave in a fashion correlated with good science (data sharing, preregistration) is itself likely to improve the original field. Not without thinking through what would be needed to link statistics up with the substantive hypotheses or problem of interest. (Let me be clear, I love the idea of badges and other carrots; it’s just that the real scientific problems shouldn’t be lost sight of.) We might be incentivizing researchers to study how to incentivize researchers to behave in a fashion correlated with good science.

Surely there are areas where the effects or measurement instruments (or both) genuinely aren’t genuine. Isn’t it better to falsify them than to keep finding ad hoc ways to save them? Is jumping on the meta-research bandwagon[iii] just another way to succeed in a field that was questionable? Heresies, I know.

To get the severe tester into further hot water, I’ll share with you her view that, in some fields, if they completely ignored statistics and wrote about plausible conjectures about human motivations, prejudices, attitudes etc. they would have been better off. There’s a place for human interest conjectures, backed by interesting field studies rather than experiments on psych students. It’s when researchers try to “test” them using sciency methods that the whole thing becomes pseudosciency.

Please share your thoughts. (I may add to this, calling it (2).)

Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2017, July 8). The Preregistration Revolution (PDF). Open Science Framework. Retrieved from osf.io/2dxu5

[i] This article mentions a failed replication discussed on Gelman’s blog on July 8, on which I left some comments.

[ii] New readers, please search “likelihood principle” on this blog.

[iii] This must be distinguished from the use of “meta” in describing a philosophical scrutiny of methods (meta-methodology). Statistical meta-researchers do not purport to be doing philosophy of science.

Categories: Error Statistics, preregistration, reforming the reformers, replication research | 9 Comments

S. Senn: Fishing for fakes with Fisher (Guest Post)


Stephen Senn
Head of Competence Center
for Methodology and Statistics (CCMS)
Luxembourg Institute of Health
Twitter @stephensenn

Fishing for fakes with Fisher

 Stephen Senn

The essential fact governing our analysis is that the errors due to soil heterogeneity will be divided by a good experiment into two portions. The first, which is to be made as large as possible, will be completely eliminated, by the arrangement of the experiment, from the experimental comparisons, and will be as carefully eliminated in the statistical laboratory from the estimate of error. As to the remainder, which cannot be treated in this way, no attempt will be made to eliminate it in the field, but, on the contrary, it will be carefully randomised so as to provide a valid estimate of the errors to which the experiment is in fact liable. R. A. Fisher, The Design of Experiments, (Fisher 1990) section 28.

Fraudian analysis?

John Carlisle must be a man endowed with exceptional energy and determination. A recent paper of his is entitled ‘Data fabrication and other reasons for non-random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals’ (Carlisle 2017), and has created quite a stir. The journals examined include the Journal of the American Medical Association and the New England Journal of Medicine. What Carlisle did was examine 29,789 variables using 72,261 means to see if they were ‘consistent with random sampling’ (by which, I suppose, he means ‘randomisation’). The papers chosen had to report either standard deviations or standard errors of the mean. P-values as measures of balance, or lack of it, were then calculated using each of three methods, and the method that gave the value closest to 0.5 was chosen. For a given trial, the chosen P-values were then back-converted to z-scores, combined by summing, and re-converted to a single P-value using a method that assumes the summed z-scores to be independent. As Carlisle writes, ‘All p values were one-sided and inverted, such that dissimilar means generated p values near 1’.
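As I read that description, the combination step is a Stouffer-type sum: each one-sided p-value is inverted to a z-score, the z-scores are summed and rescaled as if independent, and the result is converted back to a p-value. A minimal sketch of the idea (my reconstruction, not Carlisle’s actual code):

```python
# Minimal reconstruction of the combination step as described: invert
# one-sided p-values to z-scores, sum, rescale as if independent, and
# convert back to a single p-value (Stouffer's method).
from math import sqrt
from statistics import NormalDist

def combine_pvalues_stouffer(pvalues):
    """Combine one-sided p-values, ASSUMING they are independent."""
    nd = NormalDist()
    zs = [nd.inv_cdf(p) for p in pvalues]
    z_combined = sum(zs) / sqrt(len(zs))
    return nd.cdf(z_combined)

# Unremarkable p-values combine to something unremarkable...
print(round(combine_pvalues_stouffer([0.3, 0.5, 0.7]), 3))
# ...while several p-values near 1 (dissimilar means) push the
# combined value towards 1.
print(round(combine_pvalues_stouffer([0.95, 0.9, 0.97]), 3))
```

The independence assumption in the rescaling step is precisely what Senn’s first objection targets: with correlated baseline variables, dividing by √k is the wrong denominator.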

He then used a QQ plot, which is to say he plotted the empirical distribution of his P-values against the theoretical one. For the latter he assumed that the P-values would have a uniform distribution, which is the distribution that ought to apply for P-values for baseline tests of 1) randomly chosen baseline variates, in 2) randomly chosen RCTs, 3) when analysed as randomised. The third condition is one I shall return to, and the first is one many commentators have picked up. The second, however, I am ashamed to say I overlooked, despite the fact that every statistician should always ask ‘how did I get to see what I see?’; it took a discussion with my daughter to reveal it to me.

Little Ps have lesser Ps etc.

Carlisle finds, from the QQ plot, that the theoretical distribution does not fit the empirical one at all well. There is an excess of P-values near 1, indicating far too frequent poorer-than-expected imbalance, and an excess of P-values near 0, indicating balance that is too good to be true. He then calculates a P-value of P-values and finds that this is 1.2 × 10⁻⁷.

Before going any further, I ought to make clear that I consider that the community of those working on and using the results of randomised clinical trials (RCTs), whether as practitioners or theoreticians, owes Carlisle a debt of gratitude. Even if I don’t agree with all that he has done, the analysis raises disturbing issues, and not necessarily the ones he was interested in. (However, it is also only fair to note that despite a rather provocative title, Carlisle has been much more circumspect in his conclusions than some commentators.) I also wish to make clear that I am dismissing neither error nor fraud as an explanation for some of these findings. The former is a necessary condition of being human and the latter far from incompatible with it. Carlisle disarmingly admits that he may have made errors, and I shall follow him and confess likewise. Now to the three problems.

Three coins in the fountain

First, there is one decision that Carlisle made which almost every statistical commentator has recognised as inappropriate. (See, for example, Nick Brown for a good analysis.) In fact, Carlisle himself raised the difficulty, but I think he probably underestimated the problem. The method he uses for combining P-values only works if the baseline variables are independent. In general, they are not: sex and height, height and baseline forced expiratory volume in one second (FEV1), and baseline FEV1 and age are simple examples from the field of asthma, and similar ones can be found for almost every indication. The figure shows the Z-score inflation that attends combining correlated values as if they were independent. Each line gives the ratio of the falsely calculated Z-score to what it should be, given a common positive correlation between covariates. (This correlation model is implausible but sufficient to illustrate the problem, and it simplifies both theory and exposition (Senn and Bretz 2007).) Given the common correlation coefficient assumption, this ratio depends only on the correlation coefficient itself and the number of variates combined. It can be seen that unless either the correlation is zero or the trivial case of a single covariate is considered, z-inflation occurs, and it can easily be considerable. This phenomenon could be one explanation for the excess of P-values close to 0.
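Under the common-correlation model the inflation has a closed form: the variance of a sum of k standard z-scores with common pairwise correlation ρ is k + k(k−1)ρ, whereas the independence calculation assumes it is k, so the falsely calculated combined z-score exceeds the correct one by a factor of √(1 + (k−1)ρ). A quick illustration (my own sketch of the model Senn describes, not his figure’s code):

```python
# Z-inflation from treating correlated z-scores as independent, under a
# common-correlation model: Var(sum of k scores) = k + k*(k-1)*rho, while
# the independence calculation assumes k. The ratio of the naive to the
# correct combined z-score is therefore sqrt(1 + (k-1)*rho).
from math import sqrt

def z_inflation(k, rho):
    """Ratio of falsely calculated to correct combined z-score."""
    return sqrt(1 + (k - 1) * rho)

for k in (1, 5, 10):
    for rho in (0.0, 0.3, 0.6):
        print(k, rho, round(z_inflation(k, rho), 2))
# e.g. 10 covariates with common correlation 0.3 inflate z by a factor
# of sqrt(3.7), roughly 1.9 -- easily enough to push p-values towards 0 or 1.
```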

I shall leave the second issue until last. The third issue is subtler than the first but is one Fisher was always warning researchers about, and it is reflected in the quotation in the rubric. If you block by a factor in an experiment but don’t eliminate it in the analysis, the consequences for the analysis of variance are as follows. 1) All the contribution of the blocks to variability is removed from the ‘treatment’ sum of squares. 2) That which is removed is added to the ‘error’ sum of squares. 3) The combined effect of 1) and 2) means that the ratio of the two no longer has the assumed distribution under the null hypothesis (Senn 2004). In particular, excessively moderate Z scores may be the result.

Now, it is a fact that very few trials are completely randomised. For example, many pharma-industry trials use randomly permuted blocks, and many trials run by the UK Medical Research Council (MRC) or the European Organisation for Research and Treatment of Cancer (EORTC) use minimisation (Taves 1974). As regards the former, this tends to balance trials by centre. If there are strong differences between centres, this balancing will eliminate the centre effect from the treatment sum of squares but not from the error sum of squares, which, in fact, will increase. Since centre effects are commonly fitted in pharma-industry trials when analysing outcomes, this will not be a problem: in fact, much sharper inferences will result. It is interesting to note that Marvin Zelen, who was very familiar with public-sector trials but less so with pharmaceutical-industry trials, does not appear to have been aware that this was so, and in a paper with Zheng recommended that centre effects ought to be eliminated in future (Zheng and Zelen 2008), unaware that in many cases they already were. Similar problems arise with minimised trials if covariates involved in minimisation are not fitted (Senn, Anisimov, and Fedorov 2010). Even if centre and covariate effects are fitted, if there are any time trends, both of the above methods of allocation will tend to balance by them (since typically the block size is smaller than the centre size, and minimisation forces balance not only by the end of the trial but at any intermediate stage), and if so, this will inflate the error variance unless the time trend is fitted. The problem of time trends is one Carlisle himself alluded to.

Now, tests of baseline balance are nothing if not tests of the randomisation procedure itself (Berger and Exner 1999, Senn 1994). (They are useless for determining what covariates to fit.) Hence, if we are to test the randomisation procedure, we require that the distribution of the test statistic under the null hypothesis has the required form, and Fisher warned us it wouldn’t, except by luck, if we blocked and didn’t eliminate. The result would be to depress the Z-statistic. Thus, this is a plausible explanation of the excess of P-values near zero that Carlisle noted, since he could not adjust for such randomisation approaches.
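The depression of the test statistic is easy to see in a toy simulation (my own illustration, not from the post): with strong block effects, say centres, and treatment balanced within blocks, analysing the trial as if it were completely randomised leaves the block variance in the error term and inflates the standard error.

```python
# Sketch: blocking that is ignored in the analysis inflates the error
# estimate. Each block (think: centre) contributes one treated and one
# control unit; blocks have large effects of their own.
import random
import statistics

rng = random.Random(42)
n_blocks = 200
diffs, treat, ctrl = [], [], []
for _ in range(n_blocks):
    block_effect = rng.gauss(0, 3)      # big between-block variation
    t = block_effect + rng.gauss(0, 1)  # treated unit in this block
    c = block_effect + rng.gauss(0, 1)  # control unit in this block
    treat.append(t); ctrl.append(c); diffs.append(t - c)

# Correct, block-eliminating analysis: within-block differences.
se_paired = statistics.stdev(diffs) / n_blocks ** 0.5
# Incorrect analysis as completely randomised: block variance stays in.
se_unpaired = (statistics.variance(treat) / n_blocks
               + statistics.variance(ctrl) / n_blocks) ** 0.5
print(se_paired < se_unpaired)  # True: ignoring blocks inflates the SE
```

The inflated standard error shrinks every z-statistic computed this way, which is exactly the mechanism by which blocked-but-unadjusted trials could contribute to Carlisle’s excess of too-good-to-be-true balance P-values.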

Now to the second and (given my order of exposition) last of my three issues. I had assumed, like most other commentators, that the distribution of covariates at baseline ought to be random to the degree specified by the randomisation procedure (which is covered by issue three). This is true for each and every trial looking forward. It is not true for published trials looking backward. The question my daughter put to me was, ‘what about publication bias?’, and stupidly I replied, ‘but this is not about outcomes’. However, as I ought to know, the conditional type I error rate of an outcome variable varies with the degree of balance and correlation with a baseline variable. What applies one way applies the other, and since journals have a bias in favour of positive results, often ascribed to the process of submission only (Goldacre 2012) but very probably part of the editorial process also (Senn 2012, 2013), published trials do not provide a representative sample of trials undertaken. Now, although the relationship between balance and the Type I error rate is simple (Senn 1989), the relationship between being published and balance is much more complex, depending as it does on two further difficult-to-study things: 1) the distribution of real treatment effects (if I can be permitted a dig at a distinguished scientist and ‘blogging treasure’, only David Colquhoun thinks this is easy); 2) the extent of publication bias.

However, even without the information we would need to model this precisely, it is clear that one cannot simply expect the baseline distribution of published trials to be random.

Two out of three is bad

Which of Carlisle’s findings turn out to be fraud, which error and which explicable by one of these three (or other) mechanisms, remains to be seen. The first one is easily dealt with. This is just an inappropriate analysis. Things should not be looked at this way. However, pace Meatloaf, two out of three is bad when the two are failures of the system.

As regards issue two, publication bias is a problem and we need to deal with it. Relying on journals to publish trials is hopeless: self-publication by sponsors or trialists is the answer.

However, issue three is a widespread problem: Fisher warned us to analyse as we randomise. If we block or balance by factors that we don’t include in our models, we are simply making trials bigger than they should be and producing standard errors that are larger than necessary. This is sometimes defended on the grounds that it produces conservative inference, but in that respect I can’t see how it is superior to multiplying all standard errors by two. Most of us, I think, would regard it as a grave sin to analyse a matched-pairs design as a completely randomised one. Failure to attract any marks is a common punishment in stat 1 examinations when students make this error. Too many of us, I fear, fail to truly understand why this implies there is a problem with minimised trials as commonly analysed. (See Indefinite Irrelevance for a discussion.)

As ye randomise so shall ye analyse (although ye may add some covariates) we were warned by the master. We ignore him at our peril. MRC & EORTC, please take note.

Acknowledgements

I thank Dr Helen Senn for useful conversations. My research on inference for small populations is carried out in the framework of the IDeAL project http://www.ideal.rwth-aachen.de/ and supported by the European Union’s Seventh Framework Programme for research, technological development and demonstration under Grant Agreement no 602552.

References

Berger, V. W., and D. V. Exner. 1999. “Detecting selection bias in randomized clinical trials.” Controlled Clinical Trials no. 20 (4):319-327.

Carlisle, J. B. 2017. “Data fabrication and other reasons for non-random sampling in 5087 randomised, controlled trials in anaesthetic and general medical journals.” Anaesthesia. doi: 10.1111/anae.13938.

Fisher, Ronald Aylmer, ed. 1990. The Design of Experiments. Edited by J.H. Bennet, Statistical Methods, Experimental Design and Scientific Inference. Oxford: Oxford.

Goldacre, B. 2012. Bad Pharma: How Drug Companies Mislead Doctors and Harm Patients. London: Fourth Estate.

Senn, S., and F. Bretz. 2007. “Power and sample size when multiple endpoints are considered.” Pharm Stat no. 6 (3):161-70.

Senn, S.J. 1989. “Covariate imbalance and random allocation in clinical trials [see comments].” Statistics in Medicine no. 8 (4):467-75.

Senn, S.J. 1994. “Testing for baseline balance in clinical trials.” Statistics in Medicine no. 13 (17):1715-26.

Senn, S.J. 2004. “Added Values: Controversies concerning randomization and additivity in clinical trials.” Statistics in Medicine no. 23 (24):3729-3753.

Senn, S.J., V. V. Anisimov, and V. V. Fedorov. 2010. “Comparisons of minimization and Atkinson’s algorithm.” Statistics in Medicine no. 29 (7-8):721-30.

Senn, Stephen. 2012. “Misunderstanding publication bias: editors are not blameless after all.” F1000Research no. 1.

Senn, Stephen. 2013. “Authors are also reviewers: problems in assigning cause for missing negative studies.” F1000Research. Available from http://f1000research.com/articles/2-17/v1.

Taves, D. R. 1974. “Minimization: a new method of assigning patients to treatment and control groups.” Clinical Pharmacology and Therapeutics no. 15 (5):443-53.

Zheng, L., and M. Zelen. 2008. “Multi-center clinical trials: randomization and ancillary statistics.” Annals of Applied Statistics no. 2 (2):582-600. doi: 10.1214/07-aoas151.

Categories: Fisher, RCTs, Stephen Senn | 5 Comments
