Author Archives: Mayo

Announcement: Prof. Stephen Senn to lead LSE grad seminar: 12-12-12

Prof. Stephen Senn, Head of the Competence Center for Methodology and Statistics (CCMS), Luxembourg, will lead our graduate research seminar tomorrow, 12 December (London School of Economics, 10–12, T 2.06; see the (LSE) PH500 page at the top of this blog, and the background paper for the seminar):

“A statistician is one who prefers true doubts to false certainties.” (Senn)

Professor Senn has been the recipient of national and international awards, including the first George C. Challis Award for Biostatistics at the University of Florida and the Bradford Hill Medal of the Royal Statistical Society. He is the author of the monographs

Cross-over Trials in Clinical Research (1993, 2002),

Statistical Issues in Drug Development (1997, 2007)

Dicing with Death (2003)

Prof. Senn is a Fellow of the Royal Society of Edinburgh, an honorary life member of Statisticians in the Pharmaceutical Industry (PSI) and the International Society for Clinical Biostatistics (ISCB), and has an honorary chair in statistics at University College London.

Senn is also a monthly contributor to this blog on matters of philosophical foundations of statistics and methodology.  Here are some examples:

Stephen Senn: Fooling the Patient: an Unethical Use of Placebo? (Phil/Stat/Med)

Stephen Senn: Randomization, ratios and rationality: rescuing the randomized clinical trial from its critics

Stephen Senn: A Paradox of Prior Probabilities

Guest Blogger. STEPHEN SENN: Fisher’s alternative to the alternative

___________________________

Categories: Announcement | Leave a comment

Don’t Birnbaumize that experiment my friend*–updated reblog

Our current topic, the strong likelihood principle (SLP), was recently mentioned by blogger Christian Robert (nice diagram). So, since it’s Saturday night, and given the new law just passed in the state of Washington*, I’m going to reblog a post from Jan. 8, 2012, along with a new UPDATE (following a video we include as an experiment). The new material will be in red (slight differences in notation are explicated within links).

(A)  “It is not uncommon to see statistics texts argue that in frequentist theory one is faced with the following dilemma: either to deny the appropriateness of conditioning on the precision of the tool chosen by the toss of a coin[i], or else to embrace the strong likelihood principle which entails that frequentist sampling distributions are irrelevant to inference once the data are obtained.  This is a false dilemma … The ‘dilemma’ argument is therefore an illusion”. (Cox and Mayo 2010, p. 298)

The “illusion” stems from the sleight of hand I have been explaining in the Birnbaum argument—it starts with Birnbaumization. Continue reading

Categories: Birnbaum Brakes, Likelihood Principle, Statistics | 9 Comments

Rejected post: Nov. Palindrome Winner: Kepler

See Thomas Kepler’s statement and palindrome.

Categories: Announcement | Leave a comment

Announcement: U-Phil Extension: Blogging the Likelihood Principle

U-Phil: Given some requests for more time, I am extending the date for sending me responses to the “U-Phil” call (see the initial call) to Dec. 19, 2012. The details of the specific U-Phil may be found here, but you might also look at the post relating to my 28 Nov. seminar at the LSE, which is directly on the topic: the infamous (strong) likelihood principle (SLP). “U-Phil,” which is short for “you ‘philosophize,’” is really just an opportunity to write something .5–1 notch above an ordinary comment (focused on one or more specific posts/papers, as described in each call): it can be longer (~500–1000 words), and it appears in the regular blog area rather than as a comment. Your remarks can relate to the guest graduate student post by Gregory Gandenberger, and/or my discussion/argument. Graduate student posts (e.g., attendees of my 28 Nov. LSE seminar?) are especially welcome*. Earlier exemplars of U-Phils may be found here, and more by searching this blog.

Thanks to everyone who sent me names of vintage typewriter repair shops in London, after the airline damage: the “x” is fixed, but the “z” key is still misbehaving.

*Another post of possible relevance to graduate students comes up when searching this blog for  “sex”.

Categories: Announcement, Likelihood Principle, U-Phil | Leave a comment

Mayo Commentary on Gelman & Robert

The following is my commentary on a paper by Gelman and Robert, forthcoming (in early 2013) in The American Statistician* (submitted October 3, 2012).

_______________________

Discussion of Gelman and Robert, “‘Not only defended but also applied’: The perceived absurdity of Bayesian inference”
Deborah G. Mayo

1. Introduction

I am grateful for the chance to comment on the paper by Gelman and Robert. I welcome seeing statisticians raise philosophical issues about statistical methods, and I entirely agree that methods not only should be applicable but also capable of being defended at a foundational level. “It is doubtful that even the most rabid anti-Bayesian of 2010 would claim that Bayesian inference cannot apply” (Gelman and Robert 2012, p. 6). This is clearly correct; in fact, it is not far off the mark to say that the majority of statistical applications nowadays are placed under the Bayesian umbrella, even though the goals and interpretations found there are extremely varied. There are a plethora of international societies, journals, post-docs, and prizes with “Bayesian” in their name, and a wealth of impressive new Bayesian textbooks and software is available. Even before the latest technical advances and the rise of “objective” Bayesian methods, leading statisticians were calling for eclecticism (e.g., Cox 1978), and most will claim to use a smattering of Bayesian and non-Bayesian methods, as appropriate. George Casella (to whom their paper is dedicated) and Roger Berger in their superb textbook (2002) exemplify a balanced approach. Continue reading

Categories: frequentist/Bayesian, Statistics | 24 Comments

Statistical Science meets Philosophy of Science

Many of the discussions on this blog have revolved around a cluster of issues under the general question: “Statistical Science and Philosophy of Science: Where Do (Should) They Meet in the Contemporary Landscape?” In tackling these issues, this blog regularly returns to a set of contributions growing out of a conference with the same title (June 2010, London School of Economics, Centre for Philosophy of Natural and Social Science, CPNSS), as well as to conversations initiated soon after. The conference site is here. My most recent reflections in this arena (Sept. 26, 2012) are here. Continue reading

Categories: Statistics, StatSci meets PhilSci | Leave a comment

Normal Deviate’s blog on false discovery rates

There is an interesting guest post by Ryan Tibshirani on the Normal Deviate’s blog comparing the False Discovery Rates (FDR)* associated with different methods of screening for potentially interesting genes (based on p-value assessments). I want to come to this at some point.

*FDR = E(number of null genes called significant/number of genes called significant)
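By way of illustration (my own toy sketch, not from Tibshirani’s post): treating null p-values as uniform on [0, 1], and supposing each truly non-null gene is flagged with some assumed power, the starred definition of FDR can be estimated by Monte Carlo. All parameter values below (fraction of null genes, power, cutoff) are hypothetical.

```python
import random
import statistics

def simulate_fdr(n_genes=1000, frac_null=0.9, alpha=0.05,
                 power=0.8, n_reps=500, seed=0):
    """Monte Carlo estimate of
    FDR = E(number of null genes called significant / number called significant)
    when a gene is 'called significant' whenever p < alpha (no correction).
    Null p-values are uniform, so a null gene is flagged with probability
    alpha; each non-null gene is flagged with the assumed power."""
    rng = random.Random(seed)
    n_null = int(n_genes * frac_null)
    n_alt = n_genes - n_null
    ratios = []
    for _ in range(n_reps):
        false_pos = sum(rng.random() < alpha for _ in range(n_null))
        true_pos = sum(rng.random() < power for _ in range(n_alt))
        called = false_pos + true_pos
        if called > 0:
            ratios.append(false_pos / called)
    return statistics.mean(ratios)

# Analytically, with 90% nulls the ratio should center near
# (900 * 0.05) / (900 * 0.05 + 100 * 0.8), i.e. roughly 0.36.
print(simulate_fdr())
```

The point the toy model makes plain: even with a seemingly strict per-gene cutoff, when most genes are null a large share of the “discoveries” are false, which is why screening methods are compared on FDR rather than raw p-values.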

Categories: Announcement | Leave a comment

Error Statistics (brief overview)

In view of some questions about “behavioristic” vs. “evidential” construals of frequentist statistics (from the last post), and how the error-statistical philosophy tries to improve on Birnbaum’s attempt at providing the latter, I’m reblogging a portion of a post from Nov. 5, 2011, when I also happened to be in London. (The beginning just records a goofy mishap with a skeleton key, so I leave it out in this reblog.) Two papers with much more detail are linked at the end.

Error Statistics

(1) There is a “statistical philosophy” and a philosophy of science. (a) An error-statistical philosophy alludes to the methodological principles and foundations associated with frequentist error-statistical methods. (b) An error-statistical philosophy of science, on the other hand, involves using the error-statistical methods, formally or informally, to deal with problems of philosophy of science: to model scientific inference (actual or rational), to scrutinize principles of inference, and to address philosophical problems about evidence and inference (the problem of induction, underdetermination, warranting evidence, theory testing, etc.). Continue reading

Categories: Error Statistics, Philosophy of Statistics, Statistics | Tags: , , | 10 Comments

Blogging Birnbaum: on Statistical Methods in Scientific Inference

I said I’d make some comments on Birnbaum’s letter (to Nature), (linked in my last post), which is relevant to today’s Seminar session (at the LSE*), as well as to (Normal Deviate‘s) recent discussion of frequentist inference–in terms of constructing procedures with good long-run “coverage”. (Also to the current U-Phil).

NATURE VOL. 225 MARCH 14, 1970 (1033)

LETTERS TO THE EDITOR

Statistical methods in Scientific Inference

It is regrettable that Edwards’s interesting article[1], supporting the likelihood and prior likelihood concepts, did not point out the specific criticisms of likelihood (and Bayesian) concepts that seem to dissuade most theoretical and applied statisticians from adopting them. As one whom Edwards particularly credits with having ‘analysed in depth…some attractive properties’ of the likelihood concept, I must point out that I am not now among the ‘modern exponents’ of the likelihood concept. Further, after suggesting that the notion of prior likelihood was plausible as an extension or analogue of the usual likelihood concept (ref. 2, p. 200)[2], I have pursued the matter through further consideration and rejection of both the likelihood concept and various proposed formalizations of prior information and opinion (including prior likelihood). I regret not having expressed my developing views in any formal publication between 1962 and late 1969 (just after ref. 1 appeared). My present views have now, however, been published in an expository but critical article (ref. 3, see also ref. 4)[3] [4], and so my comments here will be restricted to several specific points that Edwards raised. Continue reading

Categories: Likelihood Principle, Statistics, U-Phil | 5 Comments

Likelihood Links [for 28 Nov. Seminar and Current U-Phil]

Dear Reader: We just arrived in London[i][ii]. Jean Miller has put together some materials for Birnbaum LP aficionados in connection with my 28 November seminar. It is great to have ready links to some of the early comments and replies by Birnbaum, Durbin, Kalbfleisch, and others, possibly of interest to those planning contributions to the current “U-Phil”. I will try to make some remarks on Birnbaum’s 1970 letter to the editor tomorrow.

November 28th reading

Categories: Birnbaum Brakes, Likelihood Principle, U-Phil | Leave a comment

Announcement: 28 November: My Seminar at the LSE (Contemporary PhilStat)

28 November: (10 – 12 noon):
Mayo: “On Birnbaum’s argument for the Likelihood Principle: A 50-year old error and its influence on statistical foundations”
PH500 Seminar, Room: Lak 2.06 (Lakatos building). 
London School of Economics and Political Science (LSE)

Background reading: PAPER

See general announcement here.

Background to the Discussion: Question: How did I get involved in disproving Birnbaum’s result in 2006?

Answer: Appealing to something called the “weak conditionality principle (WCP)” arose in avoiding a classic problem (arising from mixture tests) described by David Cox (1958), as discussed in our joint paper:

Cox D. R. and Mayo. D. (2010). “Objectivity and Conditionality in Frequentist Inference” in Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (D Mayo & A. Spanos eds.), CUP 276-304. Continue reading

Categories: Announcement, Likelihood Principle, Statistics | 12 Comments

Irony and Bad Faith: Deconstructing Bayesians-reblog

The recent post by Normal Deviate, and my comments on it, remind me of why/how I got back into the Bayesian-frequentist debates in 2006, as described in my first “deconstruction” (and “U-Phil”) on this blog (Dec 11, 2011):

Some time in 2006 (shortly after my ERROR06 conference), the trickle of irony and sometime flood of family feuds issuing from Bayesian forums drew me back into the Bayesian-frequentist debates.1 2  Suddenly sparks were flying, mostly kept shrouded within Bayesian walls, but nothing can long be kept secret even there. Spontaneous combustion is looming. The true-blue subjectivists were accusing the increasingly popular “objective” and “reference” Bayesians of practicing in bad faith; the new O-Bayesians (and frequentist-Bayesian unificationists) were taking pains to show they were not subjective; and some were calling the new Bayesian kids on the block “pseudo Bayesian.” Then there were the Bayesians somewhere in the middle (or perhaps out in left field) who, though they still use the Bayesian umbrella, were flatly denying the very idea that Bayesian updating fits anything they actually do in statistics.3 Obeisance to Bayesian reasoning remained, but on some kind of a priori philosophical grounds. Doesn’t the methodology used in practice really need a philosophy of its own? I say it does, and I want to provide this. Continue reading

Categories: Likelihood Principle, objective Bayesians, Statistics | Tags: , , , , | 33 Comments

Comments on Wasserman’s “what is Bayesian/frequentist inference?”

What I like best about Wasserman’s blogpost (Normal Deviate) is his clear denial that merely using conditional probability makes the method Bayesian (even if one chooses to call the conditional probability theorem Bayes’s theorem, and even if one is using ‘Bayes’s’ nets). Else any use of probability theory is Bayesian, which trivializes the whole issue. Thus, the fact that conditional probability is used in an application with possibly good results is not evidence of (yet another) Bayesian success story [i].

But I do have serious concerns that in his understandable desire (1) to be even-handed (hammers and screwdrivers are for different purposes; both are perfectly kosher tools), as well as (2) to give a succinct sum-up of methods, Wasserman may encourage misrepresenting positions. Speaking only for “frequentist” sampling theorists [ii], I would urge moving away from the recommended quick sum-up of “the goal” of frequentist inference: “Construct procedures with frequency guarantees”. If by this Wasserman means that the direct aim is to have tools with “good long run properties”, that rarely err in some long run series of applications, then I think it is misleading. In the context of scientific inference or learning, such a long-run goal, while necessary, is not at all sufficient; moreover, I claim that satisfying this goal is actually just a byproduct of deeper inferential goals (controlling and evaluating how severely given methods are capable of revealing/avoiding erroneous statistical interpretations of data in the case at hand). (So I deny that it is even the main goal to which frequentist methods direct themselves.) Even arch-behaviorist Neyman used power post-data to ascertain how well corroborated various hypotheses were—never mind long-run repeated applications (see one of my Neyman’s Nursery posts). Continue reading

Categories: Error Statistics, Neyman's Nursery, Philosophy of Statistics, Statistics | 21 Comments

New kvetch/PhilStock: Rapiscan Scam

See Rejected Posts.

This is my first aeroblog–has a name been coined by anyone yet for mile-high blogging?

Categories: Rejected Posts | Leave a comment

What is Bayesian/Frequentist Inference? (from the normal deviate)

I see that Larry Wasserman (Normal Deviate) has an intricate blog post of relevance today: What is Bayesian/Frequentist Inference? First, I’m very glad he’s decided not to exile frequentist/Bayesian issues, as he had declared he would when starting his blog. Second, I wish to suggest some revisions of certain points he lists, so it should be a good basis for discussion. I will come back to this when I return to NY from San Diego.

Categories: Uncategorized | Leave a comment

Philosophy of Science Association (PSA) 2012 Program

Here is the program of the Philosophy of Science Association (PSA), currently meeting in San Diego (with the History of Science Society, HSS). The image (from the program cover) comes from an edition of Kuhn’s (1962) Structure of Scientific Revolutions (50-year anniversary*).

THURSDAY NOVEMBER 15

Session 1 (2–3:30)

Contributed Papers: Issues for Practice in Medicine and Anthropology
RM: Spinnaker 1

James Krueger (University of Redlands), “Theoretical Health and Medical Practice”

Cecilia Nardini  (University of Milan), “Bias and Conditioning in Sequential Medical Trials”

Inkeri Koskinen (University of Helsinki), “Critical Subjects: Participatory Research Needs to Make Room for Debate”

Chair: Roger Stanev (University of South Florida)

Contributed Papers: Values in Science and Inductive Risk
RM: Marina 6 Continue reading

Categories: Announcement | Leave a comment

continuing the comments….

I’m sure I’m not alone in finding it tedious and confusing to search down through 40+ comments to follow the thread of a discussion, as in the last post (“Bad news bears“), especially while traveling as I am (to the 2012 meeting of the Philosophy of Science Association in San Diego–more on that later in the week). So I’m taking a portion of the last round between a reader and me, and placing it here, opening up a new space for comments. (For the full statements, please see the comments.)

(Mayo to Corey*) Cyanabear: … Here’s a query for you: 
Suppose you have your dreamt of probabilistic plausibility measure, and think H is a plausible hypothesis and yet a given analysis has done a terrible job in probing H. Maybe they ignore contrary info, use imprecise tools or what have you. How do you use your probabilistic measure to convey you think H is plausible but this evidence is poor grounds for H? Sorry to be dashing…use any example.

*He also goes by Cyan.

(Corey to Mayo): …Ideally, if I “think H is plausible but this evidence is poor grounds for H,” it’s because I have information warranting that belief. The word “convey” is a bit tricky here. If I’m to communicate the brute fact that I think H is plausible, I’d just state my prior probability for H; likewise, to communicate that I think that the evidence is poor grounds for claiming H, I’d say that the likelihood ratio is 1. But if I’m to *convince* someone of my plausibility assessments, I have to communicate the information that warrants them. (Under certain restrictive assumptions that never hold in practice, other Bayesian agents can treat my posterior distribution as direct evidence. This is Aumann’s agreement theorem.)

New: Mayo to Corey: I’m happy to put aside the agent talk as well as the business of trying to convince. I take it that reporting “the likelihood ratio is 1” conveys roughly that the data have supplied no information as regards H, and one of my big points on this blog is that this does not capture being “a bad test” or “poor evidence”. Recall some of the problems that arose in our recent discussions of ESP experiments (e.g., multiple end points, trying and trying again, ignoring or explaining away disagreements with H, confusing statistical and substantive significance, etc.).

Categories: Metablog, poor tests | Tags: | 21 Comments

new rejected post: kvetch (and query)

See Rejected Posts of D. Mayo

Categories: Rejected Posts | Leave a comment

Bad news bears: ‘Bayesian bear’ rejoinder- reblog

To my dismay, I’ve been sent, once again, that silly, snarky, adolescent clip of those naughty “what the p-value” bears (see the Aug 5 post), who cannot seem to get a proper understanding of significance tests into their little bear brains. So apparently some people haven’t seen my rejoinder, which, as I said then, practically wrote itself. So since it’s Saturday night here at the Elbar Room, let’s listen in to a reblog of my rejoinder (replacing p-value bears with hypothetical Bayesian bears)–but you can’t get it without first watching the Aug 5 post, since I’m mimicking them. [My idea for the rejoinder was never polished up for actually making a clip. In fact the original post had 16 comments where several reader improvements were suggested. Maybe someone will want to follow through*.] I just noticed a funny cartoon on Bayesian intervals on Normal Deviate’s post from Nov. 9.

This continues yesterday’s post: I checked out the “xtranormal” website (http://www.xtranormal.com/). It turns out there are other figures aside from the bears that one may hire out, but they pronounce “Bayesian” as an unrecognizable, foreign-sounding word with around five syllables. Anyway, before taking the plunge, here is my first attempt, just off the top of my head. Please send corrections and additions.

Bear #1: Do you have the results of the study?

Bear #2: Yes. The good news is there is a .996 probability of a positive difference in the main comparison.

Bear #1: Great. So I can be well assured that there is just a .004 probability that such positive results would occur if they were merely due to chance.

Bear #2: Not really, that would be an incorrect interpretation. Continue reading

Categories: Comedy, Metablog, significance tests, Statistics | Tags: , , | 42 Comments

Seminars at the London School of Economics: Contemporary Problems in Philosophy of Statistics

As a visitor of the Centre for Philosophy of Natural and Social Science (CPNSS) at the London School of Economics and Political Science, I am leading 3 seminars in the Department of Philosophy, Logic and Scientific Method on Wednesdays from Nov. 28 to Dec. 12 on Contemporary Philosophy of Statistics under the PH500 rubric, Room: Lak 2.06 (Lakatos Building). Interested individuals who have not yet contacted me should write: error@vt.edu.*
The Autumn seminars will also feature discussions with distinguished guest statisticians: Sir David Cox (Oxford); Dr. Stephen Senn (Competence Center for Methodology and Statistics, Luxembourg); and Dr. Christian Hennig (University College London):
  • 28 November: (10 – 12 noon): Mayo: On Birnbaum’s argument for the Likelihood Principle: A 50-year old error and its influence on statistical foundations (See my blog and links within.)

5 December and 12 December: Statistical Science meets philosophy of science: Mayo and guests:

  • 5 Dec: 12 (noon)- 2p.m.: Sir David Cox
  • 12 Dec (10–12): Dr. Stephen Senn;
    Dr. Christian Hennig: TBA

Topics, activities, readings: TBA (Two 2012 Summer Seminars may be found here).

Blurb: Debates over the philosophical foundations of statistical science have a long and fascinating history marked by deep and passionate controversies that intertwine with fundamental notions of the nature of statistical inference and the role of probabilistic concepts in inductive learning. Progress in resolving decades-old controversies which still shake the foundations of statistics, demands both philosophical and technical acumen, but gaining entry into the current state of play requires a roadmap that zeroes in on core themes and current standpoints. While the seminar will attempt to minimize technical details, it will be important to clarify key notions to fully contribute to the debates. Relevance for general philosophical problems will be emphasized. Because the contexts in which statistical methods are most needed are ones that compel us to be most aware of strategies scientists use to cope with threats to reliability, considering the nature of statistical method in the collection, modeling, and analysis of data is an effective way to articulate and warrant general principles of evidence and inference.
Room 2.06 Lakatos Building; Centre for Philosophy of Natural and Social Science
 London School of Economics
 Houghton Street
London WC2A 2AE
Administrator: T. R. Chivers@lse.ac.uk

For updates, details, and associated readings, please check the LSE PH500 page on my blog or write to me.
*It is not necessary to have attended the 2 sessions held during the summer of 2012.

Categories: Announcement, philosophy of science, Statistics | Tags: , | 28 Comments
