Monthly Archives: January 2021

S. Senn: “Beta testing”: The Pfizer/BioNTech statistical analysis of their Covid-19 vaccine trial (guest post)


Stephen Senn

Consultant Statistician
Edinburgh, Scotland

The usual warning

Although I have researched clinical trial design for many years, prior to the COVID-19 epidemic I had had nothing to do with vaccines. The only object of these amateur musings is to amuse amateurs by raising some issues I have pondered and found interesting.

Coverage matters

In this blog I am going to cover the statistical analysis used by Pfizer/BioNTech (hereafter referred to as P&B) in their big phase III trial of a vaccine for COVID-19. I considered this in some previous posts, in particular Heard Immunity and Infectious Enthusiasm. The first of these posts compared the P&B trial to two others, a trial run by Moderna and another by AstraZeneca and Oxford University (hereafter referred to as AZ&Ox), and the second discussed the results that P&B reported.

Figure 1 Stopping boundaries for three trials. Labels are anticipated numbers of cases at the looks.

All three trials were sequential in nature and, as is only proper, all three protocols gave details of the proposed stopping boundaries. These are given in Figure 1. AZ&Ox proposed to have two looks, Moderna three and P&B five. As things turned out, there was only one interim look in the P&B trial, and so two looks, not five, in total.

Moderna and AZ&Ox specified a frequentist approach in their protocols and P&B a Bayesian one. It is aspects of this Bayesian approach that I propose to consider.

Some symbol stuff

It is common to measure vaccine efficacy in terms of a relative risk reduction expressed as a percentage. The percentage is a nuisance and instead I shall express it as a simple ratio. If ψ, πc, πv are the true vaccine efficacy and the probabilities of being infected in the control and vaccine groups respectively, then

ψ = 1 − πv/πc.    (1)

Note that if we have Yc, Yv cases in the control and vaccine arms respectively and nc, nv subjects, then an intuitively reasonable estimate of ψ is

VE = 1 − (Yv/nv)/(Yc/nc),    (2)

where VE is the observed vaccine efficacy.

If the total number of subjects is N and we have nc = rN, nv = (1 − r)N, with r being the proportion of subjects on the control arm, then we have

VE = 1 − (r Yv)/((1 − r) Yc).    (3)

Note that if r = 1 − r = 1/2, that is to say that there are equal numbers of subjects on both arms, then (3) simply reduces to one minus the ratio of observed cases. VE thus has the curious property that its maximum value is 1 (when there are no cases in the vaccine group) but its minimum value is −∞ (when there are no cases in the control group and at least one in the vaccine group).
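To make this concrete, here is a minimal Python sketch (not part of the original post; the case counts are invented purely for illustration) of the estimate in (2)/(3) and of the asymmetric range just noted.

```python
# Minimal illustrative sketch: observed vaccine efficacy VE = 1 - (Yv/nv)/(Yc/nc),
# equation (2), and its asymmetric range (maximum 1, unbounded below).

def observed_ve(y_v, n_v, y_c, n_c):
    """Observed vaccine efficacy from case counts and group sizes."""
    if y_c == 0:
        # No control cases: VE is -infinity if there are vaccine cases, undefined otherwise
        return float("-inf") if y_v > 0 else float("nan")
    return 1 - (y_v / n_v) / (y_c / n_c)

# With equal allocation (r = 1/2), VE reduces to one minus the ratio of observed cases.
print(observed_ve(y_v=8, n_v=1000, y_c=80, n_c=1000))   # 0.9
print(observed_ve(y_v=80, n_v=1000, y_c=8, n_c=1000))   # -9.0: far below -1, heading towards -inf
```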

A contour plot of vaccine efficacy as a function of the control and vaccine group probabilities of infection is given in Figure 2.

Figure 2 Vaccine efficacy as a function of the probability of infection in the control and vaccine groups.

Scaly beta

P&B specified a prior distribution for their analysis but very sensibly shied away from attempting to specify one for vaccine efficacy directly. Instead, they considered a transformation, or re-scaling, of the parameter, defining

θ = (1 − ψ)/(2 − ψ).    (4)

This looks rather strange but in fact it can be re-expressed as

θ = πv/(πc + πv).    (5)

Figure 3 Contour plot of transformed vaccine efficacy, θ. 

Its contour plot is given in Figure 3. The transformation is the ratio of the probability of infection in the vaccine group to the sum of the probabilities in the two groups. It thus takes on a value between 0 and 1 and this in turn implies that it behaves like a probability. In fact, if we have equal numbers of subjects on both arms and condition on the total number of cases, we can regard it as the probability that a randomly chosen case will be in the vaccine group, and therefore as a basis for estimating the efficacy of the vaccine. The lower this probability, the more effective the vaccine.

In fact, this simple analysis captures much of what the data have to tell, and in estimating vaccine efficacy in previous posts I simply used this ratio as a probability and estimated ‘exact’ confidence intervals using the binomial distribution. Having calculated the intervals on this scale, I back-transformed them to the vaccine efficacy scale.
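A hedged sketch of this approach using scipy: the Clopper-Pearson (‘exact’) interval for θ is back-transformed to the VE scale by inverting (4). The case split used here (8 vaccine, 162 control) is the final-analysis figure widely reported for the trial rather than anything stated in this post, so treat it as an assumption for illustration.

```python
# Sketch: 'exact' (Clopper-Pearson) confidence interval for theta, then back-transform
# to the vaccine efficacy scale. Assumes equal allocation and conditions on total cases.
from scipy.stats import beta as beta_dist

def ve_from_theta(theta):
    # Inverse of equation (4): psi = (1 - 2*theta)/(1 - theta)
    return (1 - 2 * theta) / (1 - theta)

y_v, y_c = 8, 162          # assumed case split (vaccine, control); illustrative only
n = y_v + y_c
alpha = 0.05

# Clopper-Pearson limits for a binomial proportion with y_v 'successes' out of n
theta_lo = beta_dist.ppf(alpha / 2, y_v, n - y_v + 1)
theta_hi = beta_dist.ppf(1 - alpha / 2, y_v + 1, n - y_v)

# Low theta corresponds to high VE, so the limits swap on back-transformation
print("theta interval:", (theta_lo, theta_hi))
print("VE interval:   ", (ve_from_theta(theta_hi), ve_from_theta(theta_lo)))
```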

A prior distribution that is commonly used for modelling data with the binomial distribution is the beta distribution (see pp. 55-61 of Forbes et al[1], 2011), hence the title of this post. This is a two-parameter distribution with parameters (say) ν, ω and mean

μ = ν/(ν + ω)    (6)

and variance

νω/((ν + ω)²(ν + ω + 1)).    (7)

Thus, the relative value of the two parameters governs the mean and, the larger the parameter values, the smaller the variance. A special case of the beta distribution is the uniform distribution, which can be obtained by setting both parameter values to 1. The resulting mean is 1/2 and the variance is 1/12. Parameter values of ½ and ½ give a distribution with the same mean but a larger variance of 1/8. For comparison, the mean and variance of a binomial proportion are p and p(1 − p)/n, and if you set p = 1/2, n = 2 you get the same mean and variance. This gives a feel for how much information is contained in a prior distribution.
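A short sketch of this comparison, using nothing beyond equations (6) and (7) and the binomial variance formula:

```python
# Mean and variance of a beta(nu, omega) prior, equations (6) and (7), compared with
# the mean and variance of a binomial proportion with p = 1/2.

def beta_mean_var(nu, omega):
    mean = nu / (nu + omega)
    var = nu * omega / ((nu + omega) ** 2 * (nu + omega + 1))
    return mean, var

print(beta_mean_var(1, 1))       # (0.5, 0.0833...): uniform prior, variance 1/12
print(beta_mean_var(0.5, 0.5))   # (0.5, 0.125):     same mean, larger variance 1/8

p, n = 0.5, 2
print((p, p * (1 - p) / n))      # (0.5, 0.125): binomial proportion based on n = 2 observations
```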

Of Ps and Qs

P&B chose a beta distribution for θ with ν = 0.700102, ω = 1. The prior distribution is plotted in Figure 4. This has a mean of 0.4118 and a variance of approximately 1/11. I now propose to discuss and speculate on how these values were arrived at. I come to a rather cynical conclusion, and before I give you my reasoning, I want to make two points quite clear:

a) My cynicism does not detract from my admiration for what P&B have done. I think the achievement is magnificent, not only in terms of the basic science but also in terms of trial design, management and delivery.

b) I am not criticising the choice of a Bayesian analysis. In fact, I found that rather interesting.

However, I think it is appropriate to establish what exactly is incorporated in any prior distribution and that is what I propose to do.

First note the extraordinary number of significant figures (six) for the first parameter of the beta distribution, which has a value of 0.700102. The distribution itself (as established by its variance) is not very informative, but at first sight there would seem to be a great deal of information about the prior distribution itself. This is a feature of some analyses that I have drawn attention to before. See Dawid’s Selection Paradox.

Figure 4 Prior distribution for θ

So this is what I think happened. P&B reached for a common default uniform distribution, a beta with parameters 1,1. However, this would yield an expected value of θ = 0.5, which corresponds to a vaccine efficacy of 0. On the other hand, they wished to show that the vaccine efficacy ψ was greater than 0.3. They thus asked the question, what value of θ corresponds to a value of ψ = 0.3? Substituting in (4) the answer is 0.4117647, or 0.4118 to four decimal places. They explained this in the protocol as follows: ‘The prior is centered at θ = 0.4118 (VE = 30%) which may be considered pessimistic’.

Figure 5 Combination of parameters for the prior distribution yielding the required mean. The diagonal light blue line gives the combinations of values that will produce the desired mean. The red diamond gives the parameter combination chosen by P&B. The blue circle gives the parameter combination that would have produced the chosen mean but also the same variance as a beta(1,1). The contour lines show the variance of the distribution as a function of the two parameters.

Note that the choice of the word centered is odd. The mean of the distribution is 0.4118 but the distribution is not really centered there. Be that as it may, they now had an infinite number of possible combinations of values for ν, ω that would yield an expected value of 0.4118. Note that solving (6) for ν in terms of μ and ω yields

ν = μω/(1 − μ),    (8)

and plugging in μ = 0.4118, ω = 1 gives ν = 0.700102. Possible choices of parameter combinations yielding the same mean are given in Figure 5. An alternative to the beta(0.700102,1) they chose might have been beta(0.78516,1.1215). This would have yielded the same mean while also having the same variance as a conventional beta(1,1).
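Here is a rough reconstruction of the arithmetic being attributed to P&B, offered as a sketch (the function names are mine):

```python
# Reconstructing the conjectured derivation: VE = 0.30 mapped to the theta scale via (4),
# rounded to four decimal places, then (6) solved for nu with omega = 1, as in (8).

def theta_from_ve(psi):
    return (1 - psi) / (2 - psi)          # equation (4)

def beta_mean_var(nu, omega):
    mean = nu / (nu + omega)              # equation (6)
    var = nu * omega / ((nu + omega) ** 2 * (nu + omega + 1))  # equation (7)
    return mean, var

mu = round(theta_from_ve(0.30), 4)        # 0.4118
nu = mu * 1 / (1 - mu)                    # equation (8) with omega = 1
print(mu, nu)                             # 0.4118 0.700102...

print(beta_mean_var(0.700102, 1))         # mean ~0.4118, variance ~1/11
print(beta_mean_var(0.78516, 1.1215))     # same mean, variance ~1/12 as for a beta(1,1)
```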

It is also somewhat debatable whether pessimistic is the right word. The distribution is certainly very uninformative. Note also that just because the mean value on the θ scale transforms to a vaccine efficacy of 0.30, it does not follow that this is the mean value of the vaccine efficacy. Only medians can be guaranteed to be invariant under (monotone) transformation. The median of the distribution of θ is 0.3716 and this corresponds to a median vaccine efficacy of 0.4088.
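A quick check of these two medians, as a sketch using scipy (the figures 0.3716 and 0.4088 are those quoted above):

```python
# Median of the beta(0.700102, 1) prior on the theta scale, and the corresponding
# median on the VE scale. Unlike the mean, the median carries over under the
# monotone back-transformation.
from scipy.stats import beta as beta_dist

def ve_from_theta(theta):
    return (1 - 2 * theta) / (1 - theta)  # inverse of equation (4)

median_theta = beta_dist.median(0.700102, 1)
print(median_theta)                       # ~0.3716
print(ve_from_theta(median_theta))        # ~0.4088, not 0.30
```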

Can you beta Bayesian?

Perhaps unfairly, I could ask, ‘what has the Bayesian element added to the analysis?’ A Bayesian might reply, ‘what advantage does subtracting the Bayesian element bring to the analysis?’ Nevertheless, the choice of prior distribution here points to a problem. It clearly does not reflect what anybody believed about the vaccine efficacy before the trial began. Of course, establishing reasonable prior parameters for any statistical analysis is extremely difficult[2].

On the other hand, if a purely conventional prior is required, why not choose beta(1,1) or beta(1/2,1/2), say? I think the 0.3 hypothesised value for vaccine efficacy is a red herring here. What should be of interest to a Bayesian is the posterior probability that the vaccine efficacy is greater than 30%. This does not require that the prior distribution is ‘centred’ on this value.

Of course the point is that provided that the variance of the prior distribution is large enough, the posterior inference is scarcely affected. In any case a Bayesian might reply, ‘if you don’t like the prior distribution choose your own’. To which a diehard frequentist might reply, ‘it is a bit late for choosing prior distributions’.
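The insensitivity claim can be illustrated with a hedged sketch. Conditioning on the total number of cases and assuming equal allocation (and ignoring the surveillance-time adjustment used in the actual P&B analysis), the posterior for θ under a beta(ν, ω) prior is beta(ν + Yv, ω + Yc). The case split (8 vaccine, 162 control) is again the widely reported final figure, used here only for illustration.

```python
# Posterior probability that VE > 30%, i.e. that theta < 0.4118, under several priors.
# Simplified conjugate beta-binomial sketch: not the trial's actual Bayesian model.
from scipy.stats import beta as beta_dist

y_v, y_c = 8, 162            # assumed case split (vaccine, control); illustrative only
theta_star = 0.4118          # theta value corresponding to VE = 30%, from equation (4)

for nu, omega in [(0.700102, 1), (1, 1), (0.5, 0.5)]:
    posterior = beta_dist(nu + y_v, omega + y_c)
    print((nu, omega), posterior.cdf(theta_star))   # effectively 1.0 under each prior
```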

I take two lessons from this, however. First, where Bayesian analyses are being used, we should all try to understand what the prior distribution implies: what we would now ‘believe’ and how data would update such belief[3]. Second, disappointing as this may be to inferential enthusiasts, this sort of thing is not where the action is. The trial was well conceived, designed and conducted and the product was effective. My congratulations to all the scientists involved, including, but not limited to, the statisticians.

References

  1. Forbes, C., et al., Statistical distributions. 2011: John Wiley & Sons.
  2. Senn, S.J., Trying to be precise about vagueness. Statistics in Medicine, 2007. 26: p. 1417-1430.
  3. Senn, S.J., You may believe you are a Bayesian but you are probably wrong. Rationality, Markets and Morals, 2011. 2: p. 48-66.

Categories: covid-19, PhilStat/Med, S. Senn | 12 Comments

Why hasn’t the ASA Board revealed the recommendations of its new task force on statistical significance and replicability?

something’s not revealed

A little over a year ago, the board of the American Statistical Association (ASA) appointed a new Task Force on Statistical Significance and Replicability (under then-president Karen Kafadar), to provide it with recommendations. [Its members are here (i).] You might remember my blogpost at the time, “Les Stats C’est Moi”. The Task Force worked quickly, despite the pandemic, giving its recommendations to the ASA Board early, in time for the Joint Statistical Meetings at the end of July 2020. But the ASA hasn’t revealed the Task Force’s recommendations, and I just learned yesterday that it has no plans to do so*. A panel session I was in at the JSM (P-values and ‘Statistical Significance’: Deconstructing the Arguments) grew out of this episode, and papers from the proceedings are now out. The introduction to my contribution gives you the background to my question, while revealing one of the recommendations (I know of only two).

[i] Linda Young, (Co-Chair), Xuming He, (Co-Chair) Yoav Benjamini, Dick De Veaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xiao-Li Meng, Vijay Nair, Nancy Reid, Stephen Stigler, Stephen Vardeman, Chris Wikle, Tommy Wright, Karen Kafadar, Ex-officio. (Kafadar 2020)

You can access the full paper here.

 

Rejecting Statistical Significance Tests: Defanging the Arguments^

Abstract: I critically analyze three groups of arguments for rejecting statistical significance tests (don’t say ‘significance’, don’t use P-value thresholds), as espoused in the 2019 Editorial of The American Statistician (Wasserstein, Schirm and Lazar 2019). The strongest argument supposes that banning P-value thresholds would diminish P-hacking and data dredging. I argue that it is the opposite. In a world without thresholds, it would be harder to hold accountable those who fail to meet a predesignated threshold by dint of data dredging. Forgoing predesignated thresholds obstructs error control. If an account cannot say about any outcomes that they will not count as evidence for a claim—if all thresholds are abandoned—then there is no test of that claim. Giving up on tests means forgoing statistical falsification. The second group of arguments constitutes a series of strawperson fallacies in which statistical significance tests are too readily identified with classic abuses of tests. The logical principle of charity is violated. The third group rests on implicit arguments. The first in this group presupposes, without argument, a different philosophy of statistics from the one underlying statistical significance tests; the second—appeals to popularity and fear—only exacerbates the ‘perverse’ incentives underlying today’s replication crisis.

1. Introduction and Background 

Today’s crisis of replication gives a new urgency to critically appraising proposed statistical reforms intended to ameliorate the situation. Many are welcome, such as preregistration, testing by replication, and encouraging a move away from cookbook uses of statistical methods. Others are radical and might inadvertently obstruct practices known to improve on replication. The problem is one of evidence policy, that is, it concerns policies regarding evidence and inference. Problems of evidence policy call for a mix of statistical and philosophical considerations, and while I am not a statistician but a philosopher of science, logic, and statistics, I hope to add some useful reflections on the problem that confronts us today. 

In 2016 the American Statistical Association (ASA) issued a statement on P-values, intended to highlight classic misinterpretations and abuses. 

The statistical community has been deeply concerned about issues of reproducibility and replicability of scientific conclusions. …. much confusion and even doubt about the validity of science is arising. (Wasserstein and Lazar 2016, p. 129) 

The statement itself grew out of meetings and discussions with over two dozen others, and was specifically approved by the ASA board. The six principles it offers are largely rehearsals of fallacious interpretations to avoid. In a nutshell: P-values are not direct measures of posterior probabilities, population effect sizes, or substantive importance, and can be invalidated by biasing selection effects (e.g., cherry picking, P-hacking, multiple testing). The one positive principle is the first: “P-values can indicate how incompatible the data are with a specified statistical model” (ibid., p. 131). 

The authors of the editorial that introduces the 2016 ASA Statement, Wasserstein and Lazar, assure us that “Nothing in the ASA statement is new” (p. 130). It is merely a “statement clarifying several widely agreed upon principles underlying the proper use and interpretation of the p-value” (p. 131). Thus, it came as a surprise, at least to this outsider’s ears, to hear the authors of the 2016 Statement, along with a third co-author (Schirm), declare in March 2019 that: “The ASA Statement on P-Values and Statistical Significance stopped just short of recommending that declarations of ‘statistical significance’ be abandoned” (Wasserstein, Schirm and Lazar 2019, p. 2, hereafter, WSL 2019).

The 2019 Editorial announces: “We take that step here….[I]t is time to stop using the term ‘statistically significant’ entirely. …[S]tatistically significant –don’t say it and don’t use it” (WSL 2019, p. 2). Not just outsiders to statistics were surprised. To insiders as well, the 2019 Editorial was sufficiently perplexing for the then ASA President, Karen Kafadar, to call for a New ASA Task Force on Significance Tests and Replicability. 

Many of you have written of instances in which authors and journal editors—and even some ASA members—have mistakenly assumed this editorial represented ASA policy. The mistake is understandable: The editorial was co-authored by an official of the ASA. 

… To address these issues, I hope to establish a working group that will prepare a thoughtful and concise piece … without leaving the impression that p-values and hypothesis tests…have no role in ‘good statistical practice’. (K. Kafadar, President’s Corner, 2019, p. 4) 

This was a key impetus for the JSM panel discussion from which the current paper derives (“P-values and ‘Statistical Significance’: Deconstructing the Arguments”). Kafadar deserves enormous credit for creating the new task force.1 Although the new task force’s report, submitted shortly before the JSM 2020 meeting, has not been disclosed, Kafadar’s presentation noted that one of its recommendations is that there be a “disclaimer on all publications, articles, editorials, … authored by ASA Staff”.2 In this case, a disclaimer would have noted that the 2019 Editorial is not ASA policy. Still, given that its authors include ASA officials, it has a great deal of impact.

We should indeed move away from unthinking and rigid uses of thresholds—not just with significance levels, but also with confidence levels and other quantities. No single statistical quantity from any school, by itself, is an adequate measure of evidence, for any of the many disparate meanings of “evidence” one might adduce. Thus, it is no special indictment of P-values that they fail to supply such a measure. We agree as well that the actual P-value should be reported, as all the founders of tests recommended (see Mayo 2018, Excursion 3 Tour II). But the 2019 Editorial goes much further. In its view: Prespecified P-value thresholds should not be used at all in interpreting results. In other words, the position advanced by the 2019 Editorial, “reject statistical significance”, is not just a word ban but a gatekeeper ban. For example, in order to comply with its recommendations, the FDA would have to end its “long established drug review procedures that involve comparing p-values to significance thresholds for Phase III drug trials” as the authors admit (p. 10). 

Kafadar is right to see the 2019 Editorial as challenging the overall use of hypothesis tests, even though it is not banning P-values. Although P-values can be used as descriptive measures, rather than as tests, when we wish to employ them as tests, we require thresholds. Ideally there are several P-value benchmarks, but even that is foreclosed if we take seriously their view: “[T]he problem is not that of having only two labels. Results should not be trichotomized, or indeed categorized into any number of groups…” (WSL 2019, p. 2). 

The March 2019 Editorial (WSL 2019) also includes a detailed introduction to a special issue of The American Statistician (“Moving to a World beyond p < 0.05”). The position that I will discuss, reject statistical significance, (“don’t say ‘significance’, don’t use P-value thresholds”), is outlined largely in the first two sections of the 2019 Editorial. What are the arguments given for the leap from the reasonable principles of the 2016 ASA Statement to the dramatic “reject statistical significance” position? Do they stand up to principles for good argumentation? 

Continue reading the paper here. Please share your comments.

NOTES:

1 Linda Young, (Co-Chair), Xuming He, (Co-Chair) Yoav Benjamini, Dick De Veaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xiao-Li Meng, Vijay Nair, Nancy Reid, Stephen Stigler, Stephen Vardeman, Chris Wikle, Tommy Wright, Karen Kafadar, Ex-officio. (Kafadar 2020) 

2 Kafadar, K., “P-values: Assumptions, Replicability, ‘Significance’,” slides given in the Contributed Panel: P-Values and “Statistical Significance”: Deconstructing the Arguments at the (virtual) JSM 2020. (August 6, 2020). 

^CITATION: Mayo, D. (2020). Rejecting Statistical Significance Tests: Defanging the Arguments. In JSM Proceedings, Statistical Consulting Section. Alexandria, VA: American Statistical Association. (2020). 236-256.

*Jan 11 update. The ASA executive director, Ron Wasserstein, wants to emphasize that the ASA is leaving it to the members of the Task Force to decide when and how to release the report on their own. I do not know whether the Task Force will do so, or whether all of its members will agree to this shift. Personally, I don’t know why the ASA Board would not wish to reveal the recommendations of the Task Force that it created–even without any presumption that it is thereby understood to be a policy document. There can be a clear disclaimer that it is not. The Task Force carried out the work that was asked of it in a timely manner. You can find a statement of the charge given to the Task Force in my comments.

Categories: 2016 ASA Statement on P-values, JSM 2020, replication crisis, statistical significance tests, straw person fallacy | 7 Comments

Next Phil Stat Forum: January 7: D. Mayo: Putting the Brakes on the Breakthrough (or “How I used simple logic to uncover a flaw in …..statistical foundations”)

The fourth meeting of our New Phil Stat Forum*:

The Statistics Wars
and Their Casualties

January 7, 16:00 – 17:30  (London time)
11 am-12:30 pm (New York, ET)**
**note time modification and date change

Putting the Brakes on the Breakthrough,

or “How I used simple logic to uncover a flaw in a controversial 60-year old ‘theorem’ in statistical foundations” 

Deborah G. Mayo


HOW TO JOIN US: SEE THIS LINK

ABSTRACT: An essential component of inference based on familiar frequentist (error statistical) notions, such as p-values, statistical significance and confidence levels, is the relevant sampling distribution (hence the term sampling theory). This reliance results in violations of a principle known as the strong likelihood principle (SLP), or just the likelihood principle (LP), which says, in effect, that outcomes other than those observed are irrelevant for inferences within a statistical model. Now Allan Birnbaum was a frequentist (error statistician), but he found himself in a predicament: he seemed to have shown that the LP follows from uncontroversial frequentist principles! Bayesians, such as Savage, heralded his result as a “breakthrough in statistics”! But there’s a flaw in the “proof”, and that’s what I aim to show in my presentation by means of 3 simple examples:

  • Example 1: Trying and Trying Again
  • Example 2: Two instruments with different precisions
    (you shouldn’t get credit/blame for something you didn’t do)
  • The Breakthrough: Don’t Birnbaumize that data my friend

As in the last 9 years, I posted an imaginary dialogue (here) with Allan Birnbaum at the stroke of midnight, New Year’s Eve, and this will be relevant for the talk.

The Phil Stat Forum schedule is at the Phil-Stat-Wars.com blog 


Readings:

One of the following 3 papers:

My earliest treatment via counterexample:

A deeper argument can be found in:

For an intermediate Goldilocks version (based on a presentation given at the JSM 2013):

This post from the Error Statistics Philosophy blog will get you oriented. (It has links to other posts on the LP & Birnbaum, as well as background readings/discussions for those who want to dive deeper into the topic.)


Slides and Video Links:

D. Mayo’s slides: “Putting the Brakes on the Breakthrough, or ‘How I used simple logic to uncover a flaw in a controversial 60-year old ‘theorem’ in statistical foundations’”

D. Mayo’s  presentation:

Discussion on Mayo’s presentation:


Mayo’s Memos: Any info or events that arise that seem relevant to share with y’all before the meeting.

You may wish to look at my rejoinder to a number of statisticians: Rejoinder “On the Birnbaum Argument for the Strong Likelihood Principle”. (It is also above in the link to the complete discussion in the 3rd reading option.)

I often find it useful to look at other treatments. So I put together this short supplement to glance through to clarify a few select points.


Please post comments on the Phil Stat Wars blog here.

 

 
 
 
Categories: Birnbaum, Birnbaum Brakes, Likelihood Principle | 5 Comments
