Author Archives: Mayo

The Booster Wars: A prepost


We’re always reading about how the pandemic has created a new emphasis on preprints, so it stands to reason that non-reviewed preposts would now have a place in blogs. Maybe then I’ll “publish” some of the half-baked posts languishing in draft on errorstatistics.com. I’ll update or replace this prepost after reviewing.

The Booster Wars

Like most wars, the recent “booster wars” have (unintended) casualties. I refer, of course, to the disagreement about whether third shots of Covid vaccines are called for, given the evidence of waning protection after 6 or so months, coupled with the more virulent delta variant. Last week’s skirmish, resulting in the FDA advisory committee voting 16 to 2 against approving a third shot of Pfizer’s vaccine (for anyone over 16), seemed to be more of a backlash by some members of the FDA’s Office of Vaccines against being sidelined by the White House, which had already announced last month that a booster shot was forthcoming for all. (Two members, including the director of the Office of Vaccines, are leaving, presumably as a result; at least that’s how it was described in the press.) The FDA advisory committee claimed there was not enough evidence of benefit to recommend boosters for all, given unknown risks such as myocarditis (although the data Pfizer presented included just one case, I believe).

I watched the last 3 hours of the day’s session on Friday, September 17. (It was oddly reassuring that the FDA had at least as many technical glitches with Zoom as the rest of us; but not reassuring to see the seemingly cavalier attitude of some members.) Right after voting the booster plan down, the panel immediately turned around and approved the booster for anyone over 65 or in a severe risk group. Then, 15 minutes later, they broadened that to include anyone “at high risk of occupational exposure”—at any level of exposure—to Covid, such as healthcare workers, teachers and many others.

Do our health experts realize how detrimental their infighting is to the rest of us? Couldn’t they have come to some semblance of agreement—at least as to how to explain their opposed standpoints—before making rival pronouncements and issuing dueling preprints? For at least a whole month now, we’ve been witnessing squabbling agencies. It came as a surprise to hear Biden and Fauci announce in mid-August that boosters were necessary; it was only a matter of time. “Even among government scientists, the idea has been met with skepticism and anger,” we read in the NYT. Fauci said last month it would probably be 8 months, no, make that 6 months (after the last vaccine).[1] Fauci said he was “certain that Americans would need booster shots of the COVID-19 vaccine,” possibly at 5 months! Today he even strengthened that view, asserting that “the third shots should be viewed as a part of the COVID-19 vaccine regimen, just like the first and second shots,… I think that three shots will be the actual correct regimen”.

Does that mean he’s prepared now, despite the analysis of the FDA panel, to have the booster mandated wherever the others currently are? Apparently.

We’d all love to see the plan

So what would the plan be, then? Boosters every 6 months? Israel is preparing for a 4th dose already. What about the development of boosters for delta and other variants? Is that in the works in the U.S.? And if boosters are to be recommended on the basis of declining levels of neutralizing antibodies (correlated, it is thought, with breakthrough cases), why not recommend that people test their levels? I did go out and get my levels tested a couple of weeks ago, but it was anything but routine. Friday’s FDA panel voted to wait until more evidence is in. If our numbers are high, then, is it advisable to wait, even if we fall into the FDA’s (vague) permissible category? The answers we are getting are simplistic, defensive, and, to my knowledge, don’t address these and other fairly obvious conundrums for an anxious public.

The main basis for the FDA panel’s rejection was described in a Lancet article appearing right beforehand: its authors find the available evidence pointing to the need for boosters weak, based as it is, they claim, on observational studies of just a few weeks:

“Randomised trials are relatively easy to interpret reliably, but there are substantial challenges in estimating vaccine efficacy from observational studies undertaken in the context of rapid vaccine roll-out.

Although the benefits of primary COVID-19 vaccination clearly outweigh the risks, there could be risks if boosters are widely introduced too soon, or too frequently, especially with vaccines that can have immune-mediated side-effects (such as myocarditis, which is more common after the second dose of some mRNA vaccines, or Guillain-Barre syndrome, which has been associated with adenovirus-vectored COVID-19 vaccines). If unnecessary boosting causes significant adverse reactions, there could be implications for vaccine acceptance that go beyond COVID-19 vaccines. Thus, widespread boosting should be undertaken only if there is clear evidence that it is appropriate.”[2]

This seems a sensible precautionary stance, unfortunately obscured by the feeling that it reflected agency power dynamics.[3] Maybe the U.S. would have more of its own data if the CDC had not stopped recording breakthrough infections in May 2021 (except for those who are hospitalized or die). In any event, Fauci does not address the panel’s concerns about limited data. But, given those concerns, it does make one wonder why the same panel turned around and recommended approval of the booster for various occupations, rather than recommending waiting for more data. The FDA’s misgivings will doubtless also give grounds for the unvaccinated to point out that even the FDA is worried about the safety of approved vaccines. After all, a third dose, 6 months after the second, does not seem substantially riskier, especially given the lack of caveats in telling those who have had Covid to get fully vaccinated in addition. Actually, it now appears that getting vaxxed after having Covid provides “superhuman” Covid immunity.

Will our immunity (from vaccinations) evolve, or be obstructed?

In a study published online last month, Bieniasz and his colleagues found antibodies in these individuals that can strongly neutralize the six variants of concern tested, including delta and beta, as well as several other viruses related to SARS-CoV-2, including one in bats, two in pangolins and the one that caused the first coronavirus pandemic, SARS-CoV-1. (see link)

In fact, these antibodies were even able to deactivate a virus engineered, on purpose, to be highly resistant to neutralization. This virus contained 20 mutations that are known to prevent SARS-CoV-2 antibodies from binding to it. Antibodies from people who were only vaccinated or who only had prior coronavirus infections were essentially useless against this mutant virus. But antibodies in people with the “hybrid immunity” could neutralize it.

Understandably, many are excited about the possibility that a booster shot will create, in vaccinated people, the kind of “super-human” immune response seen in those who followed Covid infections with vaccines (i.e., those with hybrid immunity). Covid, it is thought, would then become like the common cold. Even though the study included only 14 people, it is impressive that they all showed this response. (There isn’t information on the reverse order: vaccine, then infection.) Throughout the pandemic, I have found that Paul Bieniasz, who led this study, has been doing some of the most interesting and path-breaking work.

But other researchers worry that repeated exposure to one strain can actually impede the development of immunity to novel strains—although you don’t typically hear about this.

In the case of Covid, some scientists are concerned that the immune system’s reaction to the vaccines being deployed now could leave an indelible imprint, and that next-generation products, updated in response to emerging variants of the SARS-CoV-2, won’t confer as much protection. (Stat News)

Immunologists call this ‘original antigenic sin’, and it is apparently a key obstacle to creating immunity to flu variants—although, again, we don’t hear about it in the yearly prodding to get flu shots.

The concern is that even when a variant-specific booster comes along, our immune systems, having repeatedly encountered the early Covid variant, will largely trigger neutralizing antibodies to it rather than to the novel variant. As such, I’ve heard some doctors advise people to try to go as long as they can on the primary shots. To know how long to wait, we’d need to know our (approximate) neutralizing antibody levels. As of now, if you do manage to get a quantitative test (no dichotomania), you have to go through non-standard channels to find an interpretation of the numbers. Not even doctors seem to know. The public is capable of understanding that, at present, there is no clear “correlate of protection,” as it’s called, between neutralizing antibodies and infection; that’s not a reason to obscure or bury the information, especially as policy decisions that affect them rely on precisely these numbers. We should also be conducting studies to test what those numbers mean in terms of infection, disease and transmission (V. Prasad).


It may be argued that future variant boosters are going to be so loaded up that they will force our immune systems to pay attention to the new variant—but are we sure? And do we want to get to that point? Of course, like many of you, I’m just a member of the non-expert lay population whose life is affected by Covid policy decisions made without my input. (I don’t know what P. Bieniasz thinks of this, but he’s convincing on the need for boosters.)

A simple first step

We’re bound to hear, any day now—perhaps even before I put up this prepost—of the FDA’s ruling on Pfizer, based on Friday’s panel. Presumably they will concur with the panel, and a similar approval seems likely for Moderna in a few weeks (although I hear Moderna wants the booster to be a half dose of the original[4]). But these narrow rulings will not address the broad and legitimate questions people have, and without answers to those questions, people cannot wisely decide whether to take up any opportunity to get a booster. This just increases the feeling that agencies and politicians have their agendas, and we have to fend for ourselves. As a simple first step, how about calling all of the point people on Covid vaccines together—being particularly sure to include representatives of rival positions—to address these specific questions and reveal the uncertainties that are the engine behind their policies, uncertainties generally kept under wraps? Not one of those hour-long glitzy “roundtables”, but an extended (and perhaps ongoing) forum, where answers are challenged by others and by data.

Lest people start to have hesitations about this new policy, why not give the public the information they need to critically navigate the pandemic for themselves? It’s fairly clear that our agencies aren’t doing it for us.

What do you think? Please write with your thoughts and corrections. I’d be interested to hear as well, what questions you’d like the vaccine and virology experts to answer. 

 

[1] In mid-August, CDC director Walensky, agreeing with Fauci, gave these reasons for boosting: “First, vaccine-induced protection against SARS-CoV-2 infection begins to decrease over time. Second, vaccine effectiveness against severe disease, hospitalization and death remains relatively high. And third, vaccine effectiveness is generally decreased against the delta variant.” (Washington Post)

[2] An additional shot has already been approved for anyone considered immunocompromised. Several other countries are either contemplating or already giving boosters.

[3] Perhaps the disagreement is over the weight to be given to infections vs. severe disease. Or perhaps it’s about which is worse: that announcing boosters would increase vaccine hesitancy, or that the declining anti-virus potency of vaccines will increase transmission. The panel also felt it would be more beneficial to increase global vaccination, but the committee announced at the start that such considerations would not be deemed relevant.

[4] So are half-doses being manufactured, or will people who want Moderna boosters have to wait until they’re produced?

Categories: the (Covid vaccine) booster wars | 1 Comment

Workshop-New Date!

The Statistics Wars
and Their Casualties

New Date!

4-5 April 2022

London School of Economics (CPNSS)

Yoav Benjamini (Tel Aviv University), Alexander Bird (University of Cambridge), Mark Burgman (Imperial College London),  Daniele Fanelli (London School of Economics and Political Science), Roman Frigg (London School of Economics and Political Science), Stephen Guettinger (London School of Economics and Political Science), David Hand (Imperial College London), Margherita Harris (London School of Economics and Political Science), Christian Hennig (University of Bologna), Katrin Hohl (City University London), Daniël Lakens (Eindhoven University of Technology), Deborah Mayo (Virginia Tech), Richard Morey (Cardiff University), Stephen Senn (Edinburgh, Scotland), Jon Williamson (University of Kent)

Panel Leaders: TBA

While the field of statistics has a long history of passionate foundational controversy, the last decade has, in many ways, been the most dramatic. Misuses of statistics, biasing selection effects, and high-powered methods of Big Data analysis have helped to make it easy to find impressive-looking but spurious results that fail to replicate. As the crisis of replication has spread beyond psychology and the social sciences to biomedicine, genomics and other fields, people are getting serious about reforms. Many are welcome (preregistration, transparency about data, eschewing mechanical uses of statistics); some are quite radical. The experts do not agree on how to restore scientific integrity, and these disagreements reflect philosophical battles, old and new, about the nature of inductive-statistical inference and the roles of probability in statistical inference and modeling. These philosophical issues simmer below the surface in competing views about the causes of problems and potential remedies. If statistical consumers are unaware of the assumptions behind rival evidence-policy reforms, they cannot scrutinize the consequences that affect them (in personalized medicine, psychology, law, and so on). Critically reflecting on proposed reforms and changing standards requires insights from statisticians, philosophers of science, psychologists, journal editors, economists and practitioners across the natural and social sciences. This workshop will bring together these interdisciplinary insights, from speakers as well as attendees.

Organizers: D. Mayo and R. Frigg

Logistician (chief logistics and contact person): Jean Miller 

*We expect one or more additional participants

Categories: Error Statistics | Leave a comment

All She Wrote (so far): Error Statistics Philosophy: 10 years on

Dear Reader: I began this blog 10 years ago (Sept. 3, 2011)! A double celebration is taking place at the Elbar Room (remotely for the first time due to Covid), both for the blog and for the 3-year anniversary of the physical appearance of my book: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars [SIST] (CUP, 2018). A special rush edition made an appearance on Sept. 3, 2018, in time for the RSS meeting in Cardiff, where we had a session deconstructing the arguments against statistical significance tests (with Sir David Cox, Richard Morey and Aris Spanos). Join us between 7 and 8 pm for a drink of Elba Grease.


Many of the discussions in the book were importantly influenced (corrected and improved) by readers’ comments on the blog over the years. I posted several excerpts and mementos from SIST here. I thank readers for their input. Readers might want to look up the topics of SIST on this blog to check out the comments, and see how ideas were developed, corrected and turned into “excursions” in SIST.

I recently invited readers to weigh in on the ASA Task Force on Statistical Significance and Replication (any time through September) to be part of a joint guest post (or posts). All contributors will get a free copy of SIST. Continue reading

Categories: 10 year memory lane, Statistical Inference as Severe Testing | Leave a comment

Should Bayesian Clinical Trialists Wear Error Statistical Hats? (i)

 

I. A principled disagreement

The other day I was in a practice (Zoom) session for a panel I’m on about how different approaches and philosophies (frequentist, Bayesian, machine learning) might explain “why we disagree” when interpreting clinical trial data. The focus is radiation oncology.[1] An important point of disagreement between frequentists (error statisticians) and Bayesians concerns whether, and if so how, to modify inferences in the face of a variety of selection effects, multiple testing, and stopping for interim analysis. Such multiplicities directly alter the capabilities of methods to avoid erroneously interpreting data, so the frequentist error probabilities are altered. By contrast, if an account conditions on the observed data, error probabilities drop out, and we get principles such as the stopping rule principle. My presentation included a quote from Bayarri and J. Berger (2004): Continue reading
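The point about stopping for interim analysis can be made concrete with a toy simulation (my own sketch, not part of the panel discussion): when the null hypothesis is true, testing after every batch of observations and stopping at the first nominally significant z-test yields an overall Type I error rate well above the nominal 5%.

```python
# Toy illustration of optional stopping inflating the Type I error rate.
# Under a true null (data ~ N(0,1)), we "peek" after each batch of
# observations and declare significance the first time |z| > 1.96.
import numpy as np

rng = np.random.default_rng(0)

def rejects_with_peeking(n_looks=10, batch=10):
    """Return True if any interim z-test on the accumulating data
    crosses the nominal 5% boundary, despite H0 being true."""
    data = np.empty(0)
    for _ in range(n_looks):
        data = np.concatenate([data, rng.standard_normal(batch)])
        z = data.mean() * np.sqrt(len(data))  # z-statistic for mean 0, sd 1
        if abs(z) > 1.96:
            return True  # "significant" at some interim look
    return False

trials = 4000
rate = sum(rejects_with_peeking() for _ in range(trials)) / trials
print(f"Type I error with 10 interim looks: {rate:.3f} (nominal: 0.05)")
```

With 10 looks the error rate comes out to roughly 4 times the nominal level. A frequentist controls this by adjusting the interim boundaries (group-sequential designs), whereas an account that conditions on the observed data, obeying the stopping rule principle, treats the stopping rule as irrelevant to the inference.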

Categories: multiple testing, statistical significance tests, strong likelihood principle | 26 Comments

Performance or Probativeness? E.S. Pearson’s Statistical Philosophy: Belated Birthday Wish

E.S. Pearson

This is a belated birthday post for E.S. Pearson (11 August 1895 – 12 June 1980). It’s basically a post from 2012 concerning an issue of interpretation (long-run performance vs. probativeness) that’s badly confused these days. Yes, I know I’ve been neglecting this blog of late, but this topic will appear in a new guise in a post I’m writing now, to appear tomorrow.

HAPPY BELATED BIRTHDAY EGON!

Are methods based on error probabilities of use mainly to supply procedures which will not err too frequently in some long run? (performance). Or is it the other way round: that the control of long-run error properties is of crucial importance for probing the causes of the data at hand? (probativeness). I say no to the former and yes to the latter. This, I think, was also the view of Egon Sharpe (E.S.) Pearson. Continue reading

Categories: E.S. Pearson, Error Statistics | 2 Comments

Fair shares: sexual justice in patient recruitment in clinical trials


Stephen Senn
Consultant Statistician
Edinburgh, Scotland

It is hard to argue against the proposition that approaches to clinical research should treat not only men but also women fairly, and of course this applies also to other ways one might subdivide patients. However, agreeing to such a principle is not the same as acting on it and when one comes to consider what in practice one might do, it is far from clear what the principle ought to be. In other words, the more one thinks about implementing such a principle the less obvious it becomes as to what it is.

Three possible rules

Continue reading

Categories: evidence-based policy, PhilPharma, RCTs, S. Senn | 5 Comments

Invitation to discuss the ASA Task Force on Statistical Significance and Replication


The latest salvo in the statistics wars comes in the form of the publication of the report of The ASA Task Force on Statistical Significance and Replicability, appointed by past ASA president Karen Kafadar in November/December 2019. (In the ‘before times’!) Its members are:

Linda Young, (Co-Chair), Xuming He, (Co-Chair) Yoav Benjamini, Dick De Veaux, Bradley Efron, Scott Evans, Mark Glickman, Barry Graubard, Xiao-Li Meng, Vijay Nair, Nancy Reid, Stephen Stigler, Stephen Vardeman, Chris Wikle, Tommy Wright, Karen Kafadar, Ex-officio. (Kafadar 2020)

The full report of this Task Force is in The Annals of Applied Statistics, and on my blogpost. It begins:

In 2019 the President of the American Statistical Association (ASA) established a task force to address concerns that a 2019 editorial in The American Statistician (an ASA journal) might be mistakenly interpreted as official ASA policy. (The 2019 editorial recommended eliminating the use of “p < 0.05” and “statistically significant” in statistical analysis.) This document is the statement of the task force… (Benjamini et al. 2021)

Continue reading

Categories: 2016 ASA Statement on P-values, ASA Task Force on Significance and Replicability, JSM 2020, National Institute of Statistical Sciences (NISS), statistical significance tests | 2 Comments

Statistics and the Higgs Discovery: 9 yr Memory Lane


I’m reblogging two of my Higgs posts on the 9th anniversary of the 2012 discovery. (The first was in this post.) The following was originally “Higgs Analysis and Statistical Flukes: part 2” (from March 2013).[1]

Some people say to me: “severe testing is fine for ‘sexy science’ like high energy physics (HEP)”–as if its statistical inferences were radically different. But I maintain that this is the mode by which data are used in “uncertain” reasoning across the entire landscape of science and day-to-day learning, at least when we’re trying to find things out.[2] Even with high-level theories, the particular problems of learning from data are tackled piecemeal, in local inferences that afford error control. Granted, this statistical philosophy differs importantly from those that view the task as assigning comparative (or absolute) degrees of support/belief/plausibility to propositions, models, or theories.

The Higgs discussion finds its way into Tour III of Excursion 3 of my Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars (2018, CUP). You can read it (in proof form) here, pp. 202-217, in a section with the provocative title:

3.8 The Probability Our Results Are Statistical Fluctuations: Higgs’ Discovery

Continue reading

Categories: Higgs, highly probable vs highly probed, P-values | Leave a comment

Statisticians Rise Up To Defend (error statistical) Hypothesis Testing


What is the message conveyed when the board of a professional association X appoints a Task Force intended to dispel the supposition that a position advanced by the Executive Director of association X reflects the official views of association X, on a topic that members of X disagree about? What it says to me is that there is a serious breakdown of communication between the leadership and membership of that association. So while I’m extremely glad that the ASA appointed the Task Force on Statistical Significance and Replicability in 2019, I’m very sorry that the main reason it was needed was to address concerns that an editorial put forward by the ASA Executive Director (and 2 others) “might be mistakenly interpreted as official ASA policy”. The 2021 Statement of the Task Force (Benjamini et al. 2021) explains:

In 2019 the President of the American Statistical Association (ASA) established a task force to address concerns that a 2019 editorial in The American Statistician (an ASA journal) might be mistakenly interpreted as official ASA policy. (The 2019 editorial recommended eliminating the use of “p < 0.05” and “statistically significant” in statistical analysis.) This document is the statement of the task force…

Continue reading

Categories: ASA Task Force on Significance and Replicability, Schachtman, significance tests | 9 Comments

June 24: “Have Covid-19 lockdowns led to an increase in domestic violence? Drawing inferences from police administrative data” (Katrin Hohl)

The tenth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

24 June 2021

TIME: 15:00-16:45 (London); 10:00-11:45 (New York, EST)

For information about the Phil Stat Wars forum and how to join, click on this link.


“Have Covid-19 lockdowns led to an increase in domestic violence? Drawing inferences from police administrative data” 

Katrin Hohl Continue reading

Categories: Error Statistics | Leave a comment

At long last! The ASA President’s Task Force Statement on Statistical Significance and Replicability

The ASA President’s Task Force Statement on Statistical Significance and Replicability has finally been published. It found a home in The Annals of Applied Statistics, after everyone else they looked to, including the ASA itself, refused to publish it. For background, see this post. I’ll comment on it in a later post. There is also an Editorial: Statistical Significance, P-Values, and Replicability by Karen Kafadar. Continue reading

Categories: ASA Task Force on Significance and Replicability | 10 Comments

The F.D.A.’s controversial ruling on an Alzheimer’s drug (letter from a reader)(ii)

I was watching Biogen’s stock (BIIB) climb over 100 points yesterday because its Alzheimer’s drug, aducanumab [brand name: Aduhelm], received surprising FDA approval. I hadn’t been following the drug at all (it’s enough to try to track some Covid treatments/vaccines). I knew only that the FDA panel had unanimously recommended against approving it last year, and the general sentiment was that it was heading for FDA rejection yesterday. After I received an email from Geoff Stuart[i] asking what I thought, I found out a bit more. He wrote: Continue reading

Categories: PhilStat/Med, preregistration | 10 Comments

Bayesian philosophers vs Bayesian statisticians: Remarks on Jon Williamson

While I would agree that there are differences between Bayesian statisticians and Bayesian philosophers, those differences don’t line up with the ones drawn by Jon Williamson in his presentation to our Phil Stat Wars Forum (May 20 slides). I hope Bayesians (statisticians, or more generally, practitioners, and philosophers) will weigh in on this. 

Continue reading
Categories: Phil Stat Forum, stat wars and their casualties | 11 Comments

Mayo Casualties of O-Bayesianism and Williamson response


After Jon Williamson’s talk, Objective Bayesianism from a Philosophical Perspective, at the PhilStat forum on May 20, I raised some general “casualties” encountered by objective, non-subjective, or default Bayesian accounts, not necessarily Williamson’s. I am pasting those remarks below, followed by some additional remarks and the video of his responses to my main kvetches. Continue reading

Categories: frequentist/Bayesian, objective Bayesians, Phil Stat Forum | 4 Comments

May 20: “Objective Bayesianism from a Philosophical Perspective” (Jon Williamson)

The ninth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

20 May 2021

TIME: 15:00-16:45 (London); 10:00-11:45 (New York, EST)

For information about the Phil Stat Wars forum and how to join, click on this link.

“Objective Bayesianism from a philosophical perspective” 

Jon Williamson Continue reading

Categories: Error Statistics | Tags: | Leave a comment

Tom Sterkenburg Reviews Mayo’s “Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars” (2018, CUP)

T. Sterkenburg

Tom Sterkenburg, PhD
Postdoctoral Fellow
Munich Center for Mathematical Philosophy
LMU Munich
Munich, Germany

Deborah G. Mayo: Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars

The foundations of statistics are not a land of peace and quiet. “Tribal warfare” is perhaps putting it too strongly, but it is the case that for decades now various camps and subcamps have been exchanging heated arguments about the right statistical methodology. That these skirmishes are not just an academic exercise is clear from the widespread use of statistical methods, and from contemporary challenges that cry out for more secure foundations: the rise of big data, the replication crisis.

Continue reading

Categories: SIST, Statistical Inference as Severe Testing–Review, Tom Sterkenburg | 9 Comments

CUNY zoom talk on Wednesday: Evidence as Passing a Severe Test

If interested, write to me for the zoom link (error@vt.edu).

Categories: Announcement | Leave a comment

April 22 “How an information metric could bring truce to the statistics wars” (Daniele Fanelli)

The eighth meeting of our Phil Stat Forum*:

The Statistics Wars
and Their Casualties

22 April 2021

TIME: 15:00-16:45 (London); 10:00-11:45 (New York, EST)

For information about the Phil Stat Wars forum and how to join, click on this link.

“How an information metric could bring truce to the statistics wars”

Daniele Fanelli Continue reading

Categories: Phil Stat Forum, replication crisis, stat wars and their casualties | Leave a comment

A. Spanos: Jerzy Neyman and his Enduring Legacy (guest post)

I am reblogging a guest post that Aris Spanos wrote for this blog on Neyman’s birthday some years ago.   

A. Spanos

A Statistical Model as a Chance Mechanism
Aris Spanos 

Jerzy Neyman (April 16, 1894 – August 5, 1981) was a Polish-American statistician[i] who spent most of his professional career at the University of California, Berkeley. Neyman is best known in statistics for his pioneering contributions in framing the Neyman-Pearson (N-P) optimal theory of hypothesis testing and his theory of confidence intervals. (This article was first posted here.) Continue reading

Categories: Neyman, Spanos | Leave a comment
