Announcement

Going round and round again: a roundtable on reproducibility & lowering p-values


There will be a roundtable on reproducibility Friday, October 27th (noon Eastern time), hosted by the International Methods Colloquium, on the reproducibility crisis in the social sciences, motivated by the paper “Redefine statistical significance.” Recall that this was the paper written by a megateam of researchers as part of the movement to require p ≤ .005, based on appraising significance tests by a Bayes Factor analysis, with prior probabilities on a point null and a given alternative. It seems to me that if you’re prepared to scrutinize your frequentist (error statistical) method on grounds of Bayes Factors, then you must endorse using Bayes Factors (BFs) for inference to begin with. If you don’t endorse BFs–and, in particular, the BF required to get the disagreement with p-values*–then it doesn’t make sense to appraise your non-Bayesian method on grounds of agreeing or disagreeing with BFs.

For suppose you assess the recommended BFs from the perspective of an error statistical account–that is, one that checks how frequently the method would uncover or avoid the relevant mistaken inference.[i] Then you will find the situation is reversed, and the recommended BF exaggerates the evidence! (In particular, with high probability, it gives an alternative H’ a fairly high posterior probability, or comparatively higher probability, even though H’ is false.) The two accounts are measuring very different things, and it’s illicit to expect an agreement on numbers.[ii] We’ve discussed this quite a lot on this blog (two posts are linked below[iii]).
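To see the kind of disagreement being traded on, here is a toy sketch (my own, not from the paper): a two-sided normal test of a point null H0: μ = 0, with a N(0, 1) prior on μ under the alternative. Holding the p-value fixed at .01, the Bayes Factor in favor of the null grows with the sample size n–the Jeffreys-Lindley effect:

```python
import math

def norm_pdf(x, sd):
    # density of N(0, sd^2) at x
    return math.exp(-0.5 * (x / sd) ** 2) / (sd * math.sqrt(2 * math.pi))

def bf01(xbar, n, tau=1.0):
    # Bayes Factor in favor of H0: mu = 0 against H1: mu != 0,
    # with xbar ~ N(mu, 1/n) and a N(0, tau^2) prior on mu under H1
    m0 = norm_pdf(xbar, math.sqrt(1.0 / n))             # marginal likelihood under H0
    m1 = norm_pdf(xbar, math.sqrt(tau ** 2 + 1.0 / n))  # marginal likelihood under H1
    return m0 / m1

z = 2.5758  # z-score fixing the two-sided p-value at .01
for n in (10, 100, 10_000, 1_000_000):
    xbar = z / math.sqrt(n)
    print(n, round(bf01(xbar, n), 2))
# BF01: 0.16, 0.38, 3.63, 36.25 -- the same p = .01 moves from
# evidence against H0 to strong evidence *for* H0 as n grows
```

With a different prior under H1 the numbers change; the point is only that p-values and BFs answer different questions, so divergence on numbers is unsurprising.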

If the given list of panelists is correct, it looks to be 4 against 1, but I’ve no doubt that Lakens can handle it.

  1. Daniel Benjamin, Associate Research Professor of Economics at the University of Southern California and a primary co-author of “Redefine Statistical Significance”.
  2. Daniel Lakens, Assistant Professor in Applied Cognitive Psychology at Eindhoven University of Technology and a primary co-author of a response to “Redefine statistical significance” (under review).
  3. Blake McShane, Associate Professor of Marketing at Northwestern University and a co-author of the recent paper “Abandon Statistical Significance”.
  4. Jennifer Tackett, Associate Professor of Psychology at Northwestern University and a co-author of the recent paper “Abandon Statistical Significance”.
  5. E.J. Wagenmakers, Professor at the Methodology Unit of the Department of Psychology at the University of Amsterdam and a co-author of the paper “Redefine Statistical Significance”.

To tune in to the presentation and participate in the discussion after the talk, visit this site on the day of the talk. To register for the talk in advance, click here.

The paradox for those wishing to abandon significance tests on the grounds that there’s “a replication crisis”–and I’m not alleging that everyone under the “lower your p-value” umbrella is advancing this–is that lack of replication is effectively uncovered thanks to statistical significance tests. They are also the basis for fraud-busting and for adjustments for multiple testing and selection effects. Unlike Bayes Factors, they:

  • are directly affected by cherry-picking, data dredging and other biasing selection effects
  • are able to test statistical model assumptions, and may have their own assumptions vouchsafed by appropriate experimental design
  • block inferring a genuine effect when the method had little capability of detecting that the effect is spurious, even if it is.

In my view, the result of a significance test should be interpreted in terms of the discrepancies that are well or poorly indicated by the result. So we’d avoid the concern that leads some to recommend a .005 cut-off to begin with. But if this does become the standard for testing the existence of risks, I’d make “there’s an increased risk of at least r” the test hypothesis in a one-sided test, as Neyman recommends. Don’t give a gift to the risk producers. In the most problematic areas of social science, the real problems are (a) the questionable relevance of the “treatment” and “outcome” to what is purported to be measured, (b) cherry-picking, data-dependent endpoints, and a host of biasing selection effects, and (c) violated model assumptions. Lowering the p-value cut-off will do nothing to help with these problems; forgoing statistical tests of significance will do a lot to make them worse.
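As a toy illustration of the severity reading (my own sketch, with hypothetical numbers): in a one-sided normal test of H0: μ ≤ 0, the severity for the claim “μ > μ1” is the probability of a less significant result, were μ equal to μ1:

```python
import math

def phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

def severity(xbar, n, mu1):
    # Severity for "mu > mu1" after observing xbar, with Xbar ~ N(mu, 1/n):
    # the probability of a result less discordant with H0, were mu = mu1
    return phi(math.sqrt(n) * (xbar - mu1))

n, xbar = 100, 0.2  # hypothetical data: z = 2.0, p ~ .023
for mu1 in (0.0, 0.1, 0.2, 0.3):
    print(f"mu > {mu1}: severity {severity(xbar, n, mu1):.3f}")
# 0.977, 0.841, 0.500, 0.159
```

Here the data indicate “μ > 0.1” with high severity but poorly indicate “μ > 0.3”; no single cut-off plays a privileged role.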

 *Added Oct 27. This is worth noting because in other Bayesian assessments–indeed, in assessments deemed more sensible and less biased in favor of the null hypothesis–the p-value scarcely differs from the posterior on H0. This is discussed, for example, in Casella and R. Berger (1987). See the links in [iii]. The two are reconciled with one-sided tests, and insofar as the typical study states a predicted direction, that’s what they should be doing.
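The one-sided reconciliation is easy to check in the simplest normal case (a sketch assuming a flat prior on μ, where the agreement is exact):

```python
import math

def phi(x):
    # standard normal CDF
    return 0.5 * (1.0 + math.erf(x / math.sqrt(2.0)))

# One-sided test of H0: mu <= 0 vs H1: mu > 0, with Xbar ~ N(mu, 1/n)
n, xbar = 100, 0.2

# p-value: probability of a result at least as large, were mu = 0
p_value = 1.0 - phi(math.sqrt(n) * xbar)

# posterior P(mu <= 0 | xbar) under a flat (improper uniform) prior,
# under which mu | xbar ~ N(xbar, 1/n)
post_H0 = phi(math.sqrt(n) * (0.0 - xbar))

print(p_value, post_H0)  # identical: ~0.0228 each
```

Casella and Berger’s analysis covers wide classes of priors, not just this flat-prior case where the two quantities coincide exactly.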

[i] Both “frequentist” and “sampling theory” are unhelpful names. Since the key feature is basing inference on error probabilities of methods, I abbreviate by error statistics. The error probabilities are based on the sampling distribution of the appropriate test statistic. A proper subset of error statistical contexts are those that utilize error probabilities to assess and control the severity by which a particular claim is tested.

[ii] See #4 of my recent talk on statistical skepticism, “7 challenges and how to respond to them”.

[iii] Two related posts: “p-values overstate the evidence against the null fallacy” and “How likelihoodists exaggerate evidence from statistical tests” (search the blog for others).

 

 


Categories: Announcement, P-values, reforming the reformers, selection effects | 5 Comments

New venues for the statistics wars

I was part of something called “a brains blog roundtable” on the business of p-values earlier this week–I’m glad to see philosophers getting involved.

Next week I’ll be in a session that I think is intended to explain what’s right about P-values at an ASA Symposium on Statistical Inference: “A World Beyond p < .05”. Continue reading

Categories: Announcement, Bayesian/frequentist, P-values | 3 Comments

Professor Roberta Millstein, Distinguished Marjorie Grene speaker September 15

 

CANCELED

Virginia Tech Philosophy Department

2017 Distinguished Marjorie Grene Speaker

 

Professor Roberta L. Millstein


University of California, Davis

“Types of Experiments and Causal Process Tracing: What Happened on the Kaibab Plateau in the 1920s?”

September 15, 2017

320 Lavery Hall: 5:10-6:45pm

 


Continue reading

Categories: Announcement | 4 Comments

The Fourth Bayesian, Fiducial and Frequentist Workshop (BFF4): Harvard U

 

May 1-3, 2017
Hilles Event Hall, 59 Shepard St., Cambridge, MA

The Department of Statistics is pleased to announce the 4th Bayesian, Fiducial and Frequentist Workshop (BFF4), to be held on May 1-3, 2017 at Harvard University. The BFF workshop series celebrates foundational thinking in statistics and inference under uncertainty. The three-day event will present talks, discussions and panels that feature statisticians and philosophers whose research interests synergize at the interface of their respective disciplines. Confirmed featured speakers include Sir David Cox and Stephen Stigler.

The program will open with a featured talk by Art Dempster and discussion by Glenn Shafer. The featured banquet speaker will be Stephen Stigler. Confirmed speakers include:

Featured Speakers and Discussants: Arthur Dempster (Harvard); Cynthia Dwork (Harvard); Andrew Gelman (Columbia); Ned Hall (Harvard); Deborah Mayo (Virginia Tech); Nancy Reid (Toronto); Susanna Rinard (Harvard); Christian Robert (Paris-Dauphine/Warwick); Teddy Seidenfeld (CMU); Glenn Shafer (Rutgers); Stephen Senn (LIH); Stephen Stigler (Chicago); Sandy Zabell (Northwestern)

Invited Speakers and Panelists: Jim Berger (Duke); Emery Brown (MIT/MGH); Larry Brown (Wharton); David Cox (Oxford; remote participation); Paul Edlefsen (Hutch); Don Fraser (Toronto); Ruobin Gong (Harvard); Jan Hannig (UNC); Alfred Hero (Michigan); Nils Hjort (Oslo); Pierre Jacob (Harvard); Keli Liu (Stanford); Regina Liu (Rutgers); Antonietta Mira (USI); Ryan Martin (NC State); Vijay Nair (Michigan); James Robins (Harvard); Daniel Roy (Toronto); Donald B. Rubin (Harvard); Peter XK Song (Michigan); Gunnar Taraldsen (NUST); Tyler VanderWeele (HSPH); Vladimir Vovk (London); Nanny Wermuth (Chalmers/Gutenberg); Min-ge Xie (Rutgers)

Continue reading

Categories: Announcement, Bayesian/frequentist | 2 Comments

Announcement: Columbia Workshop on Probability and Learning (April 8)

I’m speaking on “Probing with Severity” at the “Columbia Workshop on Probability and Learning” On April 8:

Meetings of the Formal Philosophy Group at Columbia

April 8, 2017

Department of Philosophy, Columbia University

Room 716
Philosophy Hall, 1150 Amsterdam Avenue
New York 10027
United States

Sponsor(s):

  • The Formal Philosophy Group (Columbia)

Main speakers:

Gordon Belot (University of Michigan, Ann Arbor)

Simon Huttegger (University of California, Irvine)

Deborah Mayo (Virginia Tech)

Teddy Seidenfeld (Carnegie Mellon University)

Organisers:

Michael Nielsen (Columbia University)

Rush Stewart (Columbia University)

Details

Unfortunately, access to Philosophy Hall is by swipe access on the weekends. However, students and faculty will be entering and exiting the building throughout the day (with relatively high frequency, since there is a popular cafe on the main floor).

https://www.facebook.com/events/1869417706613583/

Categories: Announcement | Leave a comment

BOSTON COLLOQUIUM FOR PHILOSOPHY OF SCIENCE: Understanding Reproducibility & Error Correction in Science

BOSTON COLLOQUIUM FOR PHILOSOPHY OF SCIENCE

2016–2017
57th Annual Program

Download the 57th Annual Program

The Alfred I. Taub forum:

UNDERSTANDING REPRODUCIBILITY & ERROR CORRECTION IN SCIENCE

Cosponsored by GMS and BU’s BEST at Boston University.
Friday, March 17, 2017
1:00 p.m. – 5:00 p.m.
The Terrace Lounge, George Sherman Union
775 Commonwealth Avenue

  • Reputation, Variation, & Control: Historical Perspectives
    Jutta Schickore, History and Philosophy of Science & Medicine, Indiana University, Bloomington
  • Crisis in Science: Time for Reform?
    Arturo Casadevall, Molecular Microbiology & Immunology, Johns Hopkins
  • Severe Testing: The Key to Error Correction
    Deborah Mayo, Philosophy, Virginia Tech
  • Replicate That… Maintaining a Healthy Failure Rate in Science
    Stuart Firestein, Biological Sciences, Columbia

 


Categories: Announcement, Statistical fraudbusting, Statistics | Leave a comment

Winners of December Palindrome: Kyle Griffiths & Eileen Flanagan

Winners of the December 2016 Palindrome contest

Since both November and December had the contest word verifies/reverifies, the judges decided to give two prizes this month. Thank you both for participating!

 


Kyle Griffiths

Palindrome: Sleep, raw Elba, ere verified ire; Sir, rise, ride! If I revere able war peels.

The requirement: A palindrome using “verifies” (reverifies) or “verified” (reverified) and Elba, of course.

Statement: Here’s my December submission, hope you like it, it has a kind of revolutionary war theme. I have no particular history of palindrome-writing or contest-entering.  Instead, I found Mayo’s work via the recommendation of Jeremy Fox of Dynamic Ecology.  I am interested in her take on modern statistical practices in ecology, and generally in understanding what makes scientific methods robust and reliable.  I’m an outsider to philosophy and stats (I have an MS in Biology), so I appreciate the less-formal tone of the blog. I’m really looking forward to Mayo’s next book.

Book choice (out of 12 or more):  Principles of Applied Statistics (D. R. Cox and C. A. Donnelly 2011, Cambridge: Cambridge University Press)

Bio: Part-time Biology Instructor, Scientific Aide for California Dept. of Fish & Wildlife. Interested in aquatic ecology, fish population dynamics.

*******************************************************************************************

 


Eileen Flanagan

Palindrome: Elba man, error reels inanities. I verified art I trade, if I revise it in an isle. Error renamable.

The requirement: A palindrome using “verifies” (reverifies) or “verified” (reverified) and Elba, of course.

Bio: Retired civil servant with a philosophy Ph.D; a bit camera shy so used a stand-in for my photo. 🙂

Statement: I found your blog searching for information on fraud in science a few years ago, and now that I am retired, I am enjoying twisting my mind around palindromes and other word games that I find on-line. 🙂

Book choice (out of 12 or more):  For my book, I would like a copy of Error and the Growth of Experimental Knowledge (D. G. Mayo, 1996, Chicago: Chicago University Press).

 

*******************************************************************************************

Some of Mayo’s attempts, posted through Nov-Dec:

Elba felt busy, reverifies use. I fire very subtle fable.

To I: disabled racecar ties. I verified or erode, if I revise it. Race card: Elba’s idiot.

Elba, I rave to men: “I felt busy!” Reverified, I hide, I fire very subtle fine mote variable.

I deified able deities. I verified a rap parade. If I revise, I tied. Elba deified I.

Categories: Announcement, Palindrome | Leave a comment


I’ll be speaking to a biomedical group at Emory University, Nov 3

 

Link to Seminar Flyer pdf.

Categories: Announcement | 1 Comment

Philosophy of Science Association 2016 Symposium

PSA 2016 Symposium:
Philosophy of Statistics in the Age of Big Data and Replication Crises
Friday, November 4th, 9-11:45 am
(includes coffee break 10-10:15)
Location: Piedmont 4 (12th Floor), Westin Peachtree Plaza
Speakers:

  • Deborah Mayo (Professor of Philosophy, Virginia Tech, Blacksburg, Virginia) “Controversy Over the Significance Test Controversy” (Abstract)
  • Gerd Gigerenzer (Director of Max Planck Institute for Human Development, Berlin, Germany) “Surrogate Science: How Fisher, Neyman-Pearson, and Bayes Were Transformed into the Null Ritual” (Abstract)
  • Andrew Gelman (Professor of Statistics & Political Science, Columbia University, New York) “Confirmationist and Falsificationist Paradigms in Statistical Practice” (Abstract)
  • Clark Glymour (Alumni University Professor in Philosophy, Carnegie Mellon University, Pittsburgh, Pennsylvania) “Exploratory Research is More Reliable Than Confirmatory Research” (Abstract)

Key Words: big data, frequentist and Bayesian philosophies, history and philosophy of statistics, meta-research, p-values, replication, significance tests.

Summary:

Science is undergoing a crisis over reliability and reproducibility. High-powered methods are prone to cherry-picking correlations, significance-seeking, and assorted modes of extraordinary rendition of data. The Big Data revolution may encourage a reliance on statistical methods without sufficient scrutiny of whether they are teaching us about causal processes of interest. Mounting failures of replication in the social and biological sciences have resulted in new institutes for meta-research, replication research, and widespread efforts to restore scientific integrity and transparency. Statistical significance test controversies, long raging in the social sciences, have spread to all fields using statistics. At the same time, foundational debates over frequentist and Bayesian methods have shifted in important ways that are often overlooked in the debates. The problems introduce philosophical and methodological questions about probabilistic tools, and science and pseudoscience—intertwined with technical statistics and the philosophy and history of statistics. Our symposium goal is to address foundational issues around which the current crisis in science revolves. We combine the insights of philosophers, psychologists, and statisticians whose work interrelates philosophy and history of statistics, data analysis and modeling. Continue reading

Categories: Announcement | 1 Comment

Formal Epistemology Workshop 2017: call for papers


Formal Epistemology Workshop (FEW) 2017



Call for papers

Submission Deadline: December 1st, 2016
Authors Notified: February 8th, 2017

We invite papers in formal epistemology, broadly construed. FEW is an interdisciplinary conference, and so we welcome submissions from researchers in philosophy, statistics, economics, computer science, psychology, and mathematics.

Submissions should be prepared for blind review. Contributors ought to upload a full paper of no more than 6000 words and an abstract of up to 300 words to the Easychair website. Please submit your full paper in .pdf format. The deadline for submissions is December 1st, 2016. Authors will be notified on February 1st, 2017.

The final selection of the program will be made with an eye towards diversity. We especially encourage submissions from PhD candidates, early career researchers and members of groups that are underrepresented in philosophy. Continue reading

Categories: Announcement | Leave a comment

International Prize in Statistics Awarded to Sir David Cox


International Prize in Statistics Awarded to Sir David Cox for
Survival Analysis Model Applied in Medicine, Science, and Engineering

EMBARGOED until October 19, 2016, at 9 p.m. ET

ALEXANDRIA, VA (October 18, 2016) – Prominent British statistician Sir David Cox has been named the inaugural recipient of the International Prize in Statistics. Like the acclaimed Fields Medal, Abel Prize, Turing Award and Nobel Prize, the International Prize in Statistics is considered the highest honor in its field. It will be bestowed every other year to an individual or team for major achievements using statistics to advance science, technology and human welfare.

Cox is a giant in the field of statistics, but the International Prize in Statistics Foundation is recognizing him specifically for his 1972 paper in which he developed the proportional hazards model that today bears his name. The Cox Model is widely used in the analysis of survival data and enables researchers to more easily identify the risks of specific factors for mortality or other survival outcomes among groups of patients with disparate characteristics. From disease risk assessment and treatment evaluation to product liability, school dropout, reincarceration and AIDS surveillance systems, the Cox Model has been applied essentially in all fields of science, as well as in engineering. Continue reading
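For reference (the formulas are standard and not spelled out in the announcement): the Cox model writes the hazard at time t for a subject with covariates x as an unspecified baseline hazard rescaled by the covariates, with the coefficients estimable without ever specifying that baseline:

```latex
% Cox (1972) proportional hazards model; h_0(t) is an unspecified baseline hazard
h(t \mid x) = h_0(t)\, \exp(\beta^{\top} x)
% beta is estimated by maximizing the partial likelihood over event times t_i,
% where R(t_i) is the set of subjects still at risk at t_i; h_0 cancels out:
L(\beta) = \prod_{i} \frac{\exp(\beta^{\top} x_i)}{\sum_{j \in R(t_i)} \exp(\beta^{\top} x_j)}
```

This cancellation of the baseline hazard is what lets the model compare risks across patients with disparate characteristics, as the release describes.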

Categories: Announcement | 1 Comment

Announcement: Scientific Misconduct and Scientific Expertise

Scientific Misconduct and Scientific Expertise

1st Barcelona HPS workshop

November 11, 2016

Departament de Filosofia & Centre d’Història de la Ciència (CEHIC),  Universitat Autònoma de Barcelona (UAB)

Location: CEHIC, Mòdul de Recerca C, Seminari L3-05, c/ de Can Magrans s/n, Campus de la UAB, 08193 Bellaterra (Barcelona)

Organized by Thomas Sturm & Agustí Nieto-Galan

Current science is full of uncertainties and risks that weaken the authority of experts. Moreover, sometimes scientists themselves act in ways that weaken their standing: they manipulate data, exaggerate research results, do not give credit where it is due, violate the norms for the acquisition of academic titles, or are unduly influenced by commercial and political interests. Such actions, of which there are numerous examples in past and present times, are widely conceived of as violating standards of good scientific practice. At the same time, while codes of scientific conduct have been developed in different fields, institutions, and countries, there is no universally agreed canon of them, nor is it clear that there should be one. The workshop aims to bring together historians and philosophers of science in order to discuss questions such as the following: What exactly is scientific misconduct? Under which circumstances are researchers more or less liable to misconduct? How far do cases of misconduct undermine scientific authority? How have standards or mechanisms to avoid misconduct, and to regain scientific authority, been developed? How should they be developed?

All welcome – but since space is limited, please register in advance. Write to: Thomas.Sturm@uab.cat

09:30 Welcome (Thomas Sturm & Agustí Nieto-Galan) Continue reading

Categories: Announcement, replication research | 7 Comments

Philosophy and History of Science Announcements


2016 UK-EU Foundations of Physics Conference

Start Date:16 July 2016

Categories: Announcement | Leave a comment

“Using PhilStat to Make Progress in the Replication Crisis in Psych” at Society for PhilSci in Practice (SPSP)

I’m giving a joint presentation with Caitlin Parker[1] on Friday (June 17) at the meeting of the Society for Philosophy of Science in Practice (SPSP): “Using Philosophy of Statistics to Make Progress in the Replication Crisis in Psychology” (Rowan University, Glassboro, N.J.)[2] The Society grew out of a felt need to break out of the sterile straitjacket wherein philosophy of science occurs divorced from practice. The topic of the relevance of PhilSci and PhilStat to Sci has often come up on this blog, so people might be interested in the SPSP mission statement below our abstract.

Using Philosophy of Statistics to Make Progress in the Replication Crisis in Psychology

Deborah Mayo Virginia Tech, Department of Philosophy United States
Caitlin Parker Virginia Tech, Department of Philosophy United States

Continue reading

Categories: Announcement, replication research, reproducibility | 8 Comments

My Popper Talk at LSE: The Statistical Replication Crisis: Paradoxes and Scapegoats

I’m giving a Popper talk at the London School of Economics next Tuesday (10 May). If you’re in the neighborhood, I hope you’ll stop by.

Popper talk May 10 location

A somewhat accurate blurb is here. I say “somewhat” because it doesn’t mention that I’ll talk a bit about the replication crisis in psychology, and the issues that crop up (or ought to) in connecting statistical results and the causal claim of interest.


Categories: Announcement | 6 Comments

Philosophy & Physical Computing Graduate Workshop at VT

A Graduate Summer Workshop at Virginia Tech (Poster)

Application deadline: May 8, 2016 

 

Think & Code VT

PHILOSOPHY & PHYSICAL COMPUTING
JULY 11-24, 2016 at Virginia Tech

Who should apply:

  • This workshop is open to graduate students in master’s or PhD programs in philosophy or the sciences, including computer science.

For additional information or to apply online, visit thinkandcode.vtlibraries.org, or contact Dr. Benjamin Jantzen at bjantzen@vt.edu

Categories: Announcement | Leave a comment

I’m speaking at Univ of Minnesota on Friday

I’ll be speaking at U of Minnesota tomorrow. I’m glad to see a group with interest in philosophical foundations of statistics as well as the foundations of experiment and measurement in psychology. I will post my slides afterwards. Come by if you’re in the neighborhood. 

University of Minnesota
“The ASA (2016) Statement on P-values and
How to Stop Refighting the Statistics Wars”


April 8, 2016 at 3:35 p.m.

 


Deborah G. Mayo
Department of Philosophy, Virginia Tech

The CLA Quantitative Methods
Collaboration Committee
&
Minnesota Center for Philosophy of Science

275 Nicholson Hall
216 Pillsbury Drive SE
University of Minnesota
Minneapolis MN

 

This will be a mixture of my current take on the “statistics wars” together with my reflections on the recent ASA document on P-values. I was invited over a year ago by Niels Waller, a co-author of Paul Meehl’s. I’ll never forget when I was there in 1997: Paul Meehl was in the audience, waving my book–EGEK (1996)–in the air and smiling!

Categories: Announcement | 3 Comments

Winner of December Palindrome: Mike Jacovides

Mike Jacovides


Winner of the December 2015 Palindrome contest

Mike Jacovides: Associate Professor of Philosophy at Purdue University

Palindrome: Emo, notable Stacy began a memory by Rome. Manage by cats, Elba to Nome.

The requirement: A palindrome using “memory” or “memories” (and Elba, of course).

Book choice (out of 12 or more): Error and the Growth of Experimental Knowledge (D. Mayo 1996, Chicago)

Bio: Mike Jacovides is an Associate Professor of Philosophy at Purdue University. He’s just finishing a book whose title is constantly changing, but which may end up being called Locke’s Image of the World and the Scientific Revolution.

Statement: My interest in palindromes was sparked by my desire to learn more about the philosophy of statistics. The fact that you can learn about the philosophy of statistics by writing a palindrome seems like evidence that anything can cause anything, but maybe once I read the book, I’ll learn that it isn’t. I am glad that ‘emo, notable Stacy’ worked out, I have to say.

Congratulations Mike! I hope you’ll continue to pursue philosophy of statistics! We need much more of that. Good choice of book prize too. D. Mayo Continue reading

Categories: Announcement, Palindrome | 1 Comment

Preregistration Challenge: My email exchange


David Mellor, from the Center for Open Science, emailed me asking if I’d announce his Preregistration Challenge on my blog, and I’m glad to do so. You win $1,000 if your properly preregistered paper is published. The recent replication effort in psychology showed, despite the common refrain – “it’s too easy to get low P-values” – that in preregistered replication attempts it’s actually very difficult to get small P-values. (I call this the “paradox of replication”[1].) Here’s our e-mail exchange from this morning:

          Dear Deborah Mayod,

I’m reaching out to individuals who I think may be interested in our recently launched competition, the Preregistration Challenge (https://cos.io/prereg). Based on your blogging, I thought it could be of interest to you and to your readers.

In case you are unfamiliar with it, preregistration specifies in advance the precise study protocols and analytical decisions before data collection, in order to separate the hypothesis-generating exploratory work from the hypothesis testing confirmatory work. 

Though required by law in clinical trials, it is virtually unknown within the basic sciences. We are trying to encourage this new behavior by offering 1,000 researchers $1000 prizes for publishing the results of their preregistered work. 

Please let me know if this is something you would consider blogging about or sharing in other ways. I am happy to discuss further. 

Best,

David
David Mellor, PhD

Project Manager, Preregistration Challenge, Center for Open Science

 

Deborah Mayo to David, 10:33 AM:

David: Yes I’m familiar with it, and I hope that it encourages people to avoid data-dependent determinations that bias results. It shows the importance of statistical accounts that can pick up on such biasing selection effects. On the other hand, coupling prereg with some of the flexible inference accounts now in use won’t really help. Moreover, there may, in some fields, be a tendency to research a non-novel, fairly trivial result.

And if they’re going to preregister, why not go blind as well?  Will they?

Best,

Mayo Continue reading

Categories: Announcement, preregistration, Statistical fraudbusting, Statistics | 11 Comments
