Philosophy of Statistics

U-PHIL: Deconstructing Larry Wasserman

Deconstructing [i] Larry Wasserman

The temptation is strong, but I shall refrain from using the whole post to deconstruct Al Franken’s 2003 quip about media bias (from Lies and the Lying Liars Who Tell Them: A Fair and Balanced Look at the Right), with which Larry Wasserman begins his paper “Low Assumptions, High Dimensions” (2011), his contribution to the Rationality, Markets and Morals (RMM) Special Topic: Statistical Science and Philosophy of Science:

Wasserman: There is a joke about media bias from the comedian Al Franken:
‘To make the argument that the media has a left- or right-wing, or a liberal or a conservative bias, is like asking if the problem with Al-Qaeda is: do they use too much oil in their hummus?’

According to Wasserman, “a similar comment could be applied to the usual debates in the foundations of statistical inference.”

Although it’s not altogether clear what Wasserman means by his analogy with comedian (now senator) Franken, it’s clear enough what Franken meant if we follow up the quip with the next sentence in his text (which Wasserman omits): “The problem with al Qaeda is that they’re trying to kill us!” (p. 1). The rest of Franken’s opening chapter is not about al Qaeda but about bias in media. Conservatives, he says, decry what they claim is a liberal bias in mainstream media. Franken rejects their claim.

The mainstream media does not have a liberal bias. And for all their other biases . . . , the mainstream media . . . at least try to be fair. …There is, however, a right-wing media. . . . They are biased. And they have an agenda…The members of the right-wing media are not interested in conveying the truth… . They are an indispensable component of the right-wing machine that has taken over our country… .   We have to be vigilant.  And we have to be more than vigilant.  We have to fight back… . Let’s call them what they are: liars. Lying, lying, liars. (Franken, pp. 3-4)

When I read this in 2004 (when Bush was in office), I couldn’t have agreed more. How things change*. Now, of course, any argument that swerves from the politically correct is by definition unsound, irrelevant, and/or biased. [ii]

But what does this have to do with Bayesian-frequentist foundations? What is Wasserman, deep down, really trying to tell us by way of this analogy (if only subliminally)? Such are my ponderings—and thus this deconstruction.  (I will invite your “U-Phils” at the end.) I will allude to passages from my contribution to  RMM (2011) (in red).

A. What Is the Foundational Issue?

Wasserman: To me, the most pressing foundational question is: how do we reconcile the two most powerful needs in modern statistics: the need to make methods assumption free and the need to make methods work in high dimensions… . The Bayes-Frequentist debate is not irrelevant but it is not as central as it once was. (p. 201)

One may wonder why he calls this a foundational issue, as opposed to, say, a technical one. I will assume he means what he says and attempt to extract his meaning by looking through a foundational lens.

Let us examine the urgency of reconciling the need to make methods assumption-free and that of making them work in complex high dimensions. The problem of assumptions of course arises when they are made about unknowns that can introduce threats of error and/or misuse of methods. Continue reading

Categories: Philosophy of Statistics, Statistics, U-Phil | Tags: , , , | 21 Comments

Clark Glymour: The Theory of Search Is the Economics of Discovery (part 2)

“Some Thoughts Prompted by David Hendry’s Essay* (RMM) Special Topic: Statistical Science and Philosophy of Science,” by Professor Clark Glymour

Part 2 (of 2) (Please begin with part 1)

The first thing one wants to know about a search method is what it is searching for, what would count as getting it right. One might want to estimate a probability distribution, or get correct forecasts of some probabilistic function of the distribution (e.g., out-of-sample means), or a causal structure, or some probabilistic function of the distribution resulting from some class of interventions. Second, one wants to know what decision theorists call a loss function or, less precisely, the comparative importance of various errors of measurement; in other terms, what makes some approximations better than others. Third, one wants a limiting consistency proof: sufficient conditions for the search to reach the goal in the large sample limit. There are various kinds of consistency—pointwise versus uniform, for example—and one wants to know which of those, if any, hold for a search method under what assumptions about the hypothesis space and the sampling distribution. Fourth, one wants to know as much as possible about the behavior of the search method on finite samples. In simple cases of statistical estimation there are analytic results; more often, for search methods, only simulation results are possible, but if so, one wants them to explore the bounds of failure, not just easy cases. And, of course, one wants a rationale for limiting the search space, as well as some sense of how wrong the search can be if those limits are violated in various ways.

There are other important economic features of search procedures. Probability distributions (or likelihood functions) can instantiate any number of constraints—vanishing partial correlations, for example, or inequalities of correlations. Suppose the hypothesis space delimits some big class of probability distributions, and suppose the search proceeds by testing constraints (the points that follow apply as well if the procedure computes posterior probabilities for particular hypotheses and applies a decision rule). There is a natural partial ordering of classes of constraints: B is weaker than A if and only if every distribution that satisfies class A satisfies class B. Other things equal, a weakest class might be preferred because it requires fewer tests. But more important is what the test of a constraint does in efficiently guiding the search. A test that eliminates a particular hypothesis is not much help. A test that eliminates a big class of hypotheses is a lot of help.

Other factors: the power of the requisite tests; the number of tests (or posterior probability assessments) required; the computational requirements of individual tests (or posterior probability assessments). And so on. Finally, search algorithms have varying degrees of generality. For example, there are general algorithms, such as the widely used PC search algorithm for graphical causal models, that are essentially search schemata: stick in whatever decision procedure for conditional independence you like, and PC becomes a search procedure using that conditional-independence oracle. By contrast, some searches are so embedded in a particular hypothesis space that it is difficult to see the generality.
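Glymour’s point that PC is a search schema, parameterized by a conditional-independence oracle, can be sketched concretely. The following is a simplified, hypothetical illustration of just the skeleton-recovery phase, with the oracle supplied as a plain function; the real PC algorithm adds edge orientation and many refinements.

```python
from itertools import combinations

def pc_skeleton(nodes, indep):
    """Skeleton phase of the PC schema: begin with the complete
    undirected graph and delete an edge x-y whenever the supplied
    conditional-independence oracle reports x _||_ y given some set S
    of current neighbors of x."""
    adj = {v: set(nodes) - {v} for v in nodes}
    for depth in range(len(nodes) - 1):          # size of conditioning set
        for x in nodes:
            for y in list(adj[x]):
                others = adj[x] - {y}
                if len(others) < depth:
                    continue
                for S in combinations(sorted(others), depth):
                    if indep(x, y, set(S)):      # consult the oracle
                        adj[x].discard(y)        # one test eliminates a
                        adj[y].discard(x)        # big class of graphs
                        break
    return {frozenset((x, y)) for x in nodes for y in adj[x]}

# Toy oracle encoding the chain A -> B -> C, where A _||_ C given B.
independencies = {(frozenset({"A", "C"}), frozenset({"B"}))}
def oracle(x, y, S):
    return (frozenset({x, y}), frozenset(S)) in independencies

skeleton = pc_skeleton(["A", "B", "C"], oracle)
# skeleton keeps edges A-B and B-C; the A-C edge is eliminated.
```

Any conditional-independence decision procedure (frequentist test, Bayesian assessment, or otherwise) can be plugged in as `indep`; that substitutability is what makes PC a schema rather than a single procedure.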

I am sure I am not qualified to comment on the details of Hendry’s search procedure, and even if I were, for reasons of space his presentation is too compressed for that. Still, I can make some general remarks.  I do not know from his essay the answers to many of the questions pertinent to evaluating a search procedure that I raised above. For example, his success criterion is “congruence” and I have no idea what that is. That is likely my fault, since I have read only one of his books, and that long ago.

David Hendry dismisses “priors”, meaning, I think, Bayesian methods, with an argument from language acquisition: kids don’t need priors to learn a language. I am not sure of Hendry’s logic. Particular grammars within a parametric “universal grammar” could in principle be learned by a Bayesian procedure, although I have no reason to think they are. But one way or the other, that has no import for whether Bayesian procedures are the most advantageous for various search problems by any of the criteria I have noted above. Sometimes they may be, sometimes not; there is no uniform answer, in part because computational requirements vary. I could give examples, but space forbids.

Abstractly, one could think there are two possible ways of searching when the set of relationships to be uncovered may form a complex web: start by positing all possible relationships and eliminate from there, or start by positing no relationships and build up. Hendry dismisses the latter, with what generality I do not know. What I do know is that the relations between “bottom-up” and “top-down” or “forward” and “backward” search can be intricate, and in some cases one may need both for consistency. Sometimes either will do. Graphical models, for example, can be searched starting with the assumption that every variable influences every other and eliminating, or starting with the assumption that no variable influences any other and adding. There are pointwise consistent searches in both directions. The real difference is in complexity.

Continue reading

Categories: Philosophy of Statistics, Statistics, U-Phil | Tags: , , , , | 11 Comments

Clark Glymour: The Theory of Search Is the Economics of Discovery (part 1)

The Theory of Search Is the Economics of Discovery:
Some Thoughts Prompted by Sir David Hendry’s Essay  *
in Rationality, Markets and Morals (RMM) Special Topic:
Statistical Science and Philosophy of Science

Part 1 (of 2)

Professor Clark Glymour

Alumni University Professor
Department of Philosophy[i]
Carnegie Mellon University

Professor Hendry* endorses a distinction between the “context of discovery” and the “context of evaluation”, which he attributes to Herschel and to Popper, and which could as well be attributed to Reichenbach and to most contemporary methodological commentators in the social sciences. The “context” distinction codes two theses.

1. “Discovery” is a mysterious psychological process of generating hypotheses; “evaluation” is about the less mysterious process of warranting them.

2. Of the three possible relations with data that could conceivably warrant a hypothesis—how it was generated, its explanatory connections with the data used to generate it, and its predictions—only the last counts.

Einstein maintained the first but not the second. Popper maintained the first, but held that nothing warrants a hypothesis. Hendry seems to maintain neither: he has a method for discovery in econometrics, a search procedure briefly summarized in the second part of his essay, which is not evaluated by forecasts. Methods may be esoteric but they are not mysterious. And yet Hendry endorses the distinction. Let’s consider it.

As a general principle rather than a series of anecdotes, the distinction between discovery and justification or evaluation has never been clear, and what has been said in favor of its implied theses has not made much sense, ever. Let’s start with the father of one of Hendry’s endorsers, William Herschel. William Herschel discovered Uranus, or something. Actually, the discovery of the planet Uranus was a collective effort that, subject to vicissitudes of error and individual opinion, followed a rational search strategy. On March 13, 1781, in the course of a sky survey for double stars, Herschel reports in his journal the observation of a “nebulous star or perhaps a comet.” The object came to his notice because of how it appeared through the telescope, perhaps the appearance of a disc. Herschel changed the magnification of his telescope, and finding that the brightness of the object changed more than the brightness of fixed stars, concluded he had seen a comet or “nebulous star.” Observations that, on later nights, it had moved eliminated the “nebulous star” alternative, and Herschel concluded that he had seen a comet. Why not a planet? Because lots of comets had been hitherto observed—Edmund Halley computed orbits for half a dozen, including his eponymous comet—but never a planet. A comet was much the more likely on frequency grounds. Further, Herschel had made a large error in his estimate of the distance of the body, based on parallax values using his micrometer. A planet could not be so close.

Continue reading

Categories: philosophy of science, Philosophy of Statistics, Statistics, U-Phil | Tags: , , , | 1 Comment

Deconstructing Larry Wasserman–it starts like this…

In my July 8, 2012 post “Metablog: Up and Coming,” I wrote: “I will attempt a (daring) deconstruction of Professor Wasserman’s paper[i] and at that time will invite your “U-Phils” for posting around a week after (<1000 words).” These could reflect on Wasserman’s paper and/or my deconstruction of it. See an earlier post for the way we are using “deconstructing” here. For some guides, see “so you want to do a philosophical analysis“.

So my Wasserman deconstruction notes have been sitting in the “draft” version of this blog for several days as we focused on other things.  Here’s how it starts…

             Deconstructing Larry Wasserman–it starts like this…

1.Al Franken’s Joke

The temptation is strong, but I shall refrain from using the whole post to deconstruct Al Franken’s 2003 quip about media bias (from Lies and Lying Liars Who Tell Them: A Fair and Balanced Look at the Right), with which Larry Wasserman begins his paper “Low Assumptions, High Dimensions” (2011):

To make the argument that the media has a left- or right-wing, or a liberal or a conservative bias, is like asking if the problem with Al-Qaeda is: do they use too much oil in their hummus?

According to Wasserman, “a similar comment could be applied to the usual debates in the foundations of statistical inference.”

Although it’s not altogether clear what Wasserman means by his analogy with comedian (now senator) Franken, it’s clear enough what Franken means if we follow up the quip with the next sentence in his text (which Wasserman omits): “The problem with al Qaeda is that they’re trying to kill us!” (p. 1) The rest of Franken’s opening chapter is not about al Qaeda but about bias in media.

But what does this have to do with the usual debates in the foundations of statistical inference? What is Wasserman, deep down, perhaps unconsciously, really, really, possibly implicitly, trying to tell us by way of this analogy? Such are the ponderings in my deconstruction of him…

Yet the footnote to my July 8 blog also said that my post assumed “I don’t chicken out.” So I will put it aside until I get a chorus of encouragement to post it…

Categories: Philosophy of Statistics, Statistics, U-Phil | Tags: , , | 5 Comments

Metablog: Up and Coming

Dear Reader: Over the next week, in addition to a regularly scheduled post by Professor Stephen Senn, we will be taking up two papers[i] from the contributions to the special topic: “Statistical Science and Philosophy of Science: Where Do (Should) They Meet in 2011 and Beyond?” in Rationality, Markets and Morals: Studies at the Intersection of Philosophy and Economics.

I will attempt a (daring) deconstruction of Professor Wasserman’s paper[ii] and at that time will invite your “U-Phils” for posting around a week after (<1000 words).  I will be posting comments by Clark Glymour on Sir David Hendry’s paper later in the week. So you may want to study those papers in advance.

The first “deconstruction” (“Irony and Bad Faith, Deconstructing Bayesians 1”) may be found here: https://errorstatistics.com/2012/04/17/3466/; for a selection of both U-Phils and Deconstructions, see https://errorstatistics.com/2012/04/17/3466/

D. Mayo

P.S. Those who had laughed at me for using this old trusty typewriter were asking to borrow it last week when we lost power for 6 days and their computers were down.


[i] L. Wasserman, “Low Assumptions, High Dimensions”. RMM Vol. 2, 2011, 201–209;

D. Hendry, “Empirical Economic Model Discovery and Theory Evaluation”. RMM Vol. 2, 2011, 115–145.

[ii] Assuming I don’t chicken out.

Categories: Metablog, Philosophy of Statistics, U-Phil | Tags: , | Leave a comment

Metablog: May 31, 2012

Dear Reader: I will be traveling a lot in the next few weeks, and may not get to post much; we’ll see. If I do not reply to comments, I’m not ignoring them—they’re a lot more fun than some of the things I must do now to complete my book, but I need to resist, especially while traveling and giving seminars.* The rule we’ve followed is for comments to shut after 10 days, but we wanted to allow them still to appear. The blogpeople on Elba forward comments for 10 days, so beyond that it’s just haphazard if I notice them. It’s impossible otherwise to keep this blog up at all, and I would like to. Feel free to call any to my attention (use the “can we talk” page or error@vt.edu). If there’s a burning issue, interested readers might wish to poke around (or scour) the multiple layers of goodies on the left-hand side of this web page, wherein all manner of foundational/statistical controversies are considered from many years of working in this area. In a recent attempt by Aris Spanos and me to address the age-old criticisms from the perspective of the “error statistical philosophy,” we delineate 13 criticisms. I list them below. Continue reading

Categories: Metablog, Philosophy of Statistics, Statistics | Tags: , , | 10 Comments

Going Where the Data Take Us

A reader, Cory J, sent me a question in relation to a talk of mine he once attended:

I have the vague ‘memory’ of an example that was intended to bring out a central difference between broadly Bayesian methodology and broadly classical statistics.  I had thought it involved a case in which a Bayesian would say that the data should be conditionalized on, and supports H, whereas a classical statistician effectively says that the data provides no support to H.  …We know the data, but we also know of the data that only ‘supporting’ data would be given us.  A Bayesian was then supposed to say that we should conditionalize on the data that we have, even if we know that we wouldn’t have been given contrary data had it been available.

That only “supporting” data would be presented need not be problematic in itself; it all depends on how this is interpreted. There might be no negative results to be had (H might be true), and thus none to “be given us”. Your last phrase, however, does describe a pejorative case for a frequentist error statistician: if “we wouldn’t have been given contrary data” to H (in the sense of data in conflict with what H asserts), even “had it been available”, then the procedure had no chance of finding or reporting flaws in H. Thus only data in accordance with H would be presented, even if H is false; so H passes a “test” with minimal stringency or severity. I discuss several examples in papers below (I think the reader had in mind Mayo and Kruse 2001). Continue reading
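The difference between innocuous and pejorative selective reporting can be made concrete with a small simulation (a hypothetical setup of my own, not one from the papers below): H says a coin is fair, and the reporter forwards only samples whose observed proportion sits near 0.5, suppressing the rest. Every reported sample then “passes,” whatever the truth; the selection shows up only in how much had to be thrown away.

```python
import random

def selective_report(p_true, n=100, trials=2000, tol=0.05, seed=0):
    """Simulate a reporter who forwards only data 'in accordance with'
    H: p = 0.5.  Returns (pass rate among reported samples, fraction
    of samples suppressed).  The pass rate is trivially 1.0 no matter
    what p_true is, so a pass carries almost no severity."""
    rng = random.Random(seed)
    reported = suppressed = 0
    for _ in range(trials):
        phat = sum(rng.random() < p_true for _ in range(n)) / n
        if abs(phat - 0.5) <= tol:   # only 'supporting' data get through
            reported += 1
        else:
            suppressed += 1
    return (1.0 if reported else 0.0), suppressed / trials

fair = selective_report(0.5)     # suppresses roughly a quarter of samples
biased = selective_report(0.6)   # must suppress the great majority
```

Whether the coin is fair or biased, H “passes” with probability one among what we are shown; a procedure with no chance of reporting flaws in H gives its passing results essentially no evidential weight.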

Categories: double-counting, Statistics | Tags: , , | 4 Comments

RMM-7: Commentary and Response on Senn published: Special Volume on Stat Sci Meets Phil Sci

Dear Reader: My commentary, “How Can We Cultivate Senn’s Ability, Comment on Stephen Senn, ‘You May Believe You are a Bayesian But You’re Probably Wrong’”, and Senn’s “Names and Games, A Reply to Deborah G. Mayo” have been published in the Discussion Section of Rationality, Markets and Morals (Special Topic: Statistical Science and Philosophy of Science: Where Do/Should They Meet?).

I encourage you to submit your comments/exchanges on any of the papers in this special volume [this is the first].  (Information may be found on their webpage [no longer active 3/21/2021]. Questions/Ideas: please write to me at error@vt.edu.)

Categories: Philosophy of Statistics, Statistics | Tags: | Leave a comment

Blogologue*

Gelman responds on his blog today: “Gelman on Hennig on Gelman on Bayes”.

http://andrewgelman.com/2012/03/gelman-on-hennig-on-gelman-on-bayes/

I invite comments here….

*An ongoing exchange among a group of blogs that remain distinct (just coined)

Categories: Philosophy of Statistics, Statistics, U-Phil | Tags: , , | Leave a comment

U-PHIL: A Further Comment on Gelman by Christian Hennig (UCL, Statistics)

Comment on Gelman’s “Induction and Deduction in Bayesian Data Analysis” (RMM)

Dr. Christian Hennig (Senior Lecturer, Department of Statistical Science, University College London)

I have read quite a bit of what Andrew Gelman has written in recent years, including some of his blog. One thing that I find particularly refreshing and important about his approach is that he contrasts the Bayesian and frequentist philosophical conceptions honestly with what happens in the practice of data analysis, which often cannot (or does better not to) proceed according to an inflexible dogmatic book of rules.

I also like the emphasis on the fact that all models are wrong. I personally believe that a good philosophy of statistics should consistently take into account that models are tools for thinking rather than devices that “match” reality, and that in the vast majority of cases we know clearly that they are wrong (all continuous models are wrong because all observed data are discrete, for a start).

There is, however, one issue on which I find his approach unsatisfactory (or at least not well enough explained), and on which both frequentism and subjective Bayesianism seem superior to me.

Continue reading

Categories: Philosophy of Statistics, Statistics, U-Phil | Tags: , , , | 5 Comments

Lifting a piece from Spanos’ contribution* will usefully add to the mix

The following two sections from Aris Spanos’ contribution to the RMM volume are relevant to the points raised by Gelman (as regards what I am calling the “two slogans”)**.

 6.1 Objectivity in Inference (From Spanos, RMM 2011, pp. 166-7)

The traditional literature seems to suggest that ‘objectivity’ stems from the mere fact that one assumes a statistical model (a likelihood function), enabling one to accommodate highly complex models. Worse, in Bayesian modeling it is often misleadingly claimed that as long as a prior is determined by the assumed statistical model—the so called reference prior—the resulting inference procedures are objective, or at least as objective as the traditional frequentist procedures:

“Any statistical analysis contains a fair number of subjective elements; these include (among others) the data selected, the model assumptions, and the choice of the quantities of interest. Reference analysis may be argued to provide an ‘objective’ Bayesian solution to statistical inference in just the same sense that conventional statistical methods claim to be ‘objective’: in that the solutions only depend on model assumptions and observed data.” (Bernardo 2010, 117)

This claim brings out the unfathomable gap between the notion of ‘objectivity’ as understood in Bayesian statistics, and the error statistical viewpoint. As argued above, there is nothing ‘subjective’ about the choice of the statistical model Mθ(z) because it is chosen with a view to account for the statistical regularities in data z0, and its validity can be objectively assessed using trenchant M-S testing. Model validation, as understood in error statistics, plays a pivotal role in providing an ‘objective scrutiny’ of the reliability of the ensuing inductive procedures.

Continue reading

Categories: Philosophy of Statistics, Statistics, Testing Assumptions, U-Phil | Tags: , , , , | 43 Comments

Mayo, Senn, and Wasserman on Gelman’s RMM** Contribution

Picking up the pieces…

Continuing with our discussion of contributions to the special topic, Statistical Science and Philosophy of Science in Rationality, Markets and Morals (RMM),* I am pleased to post some comments on Andrew Gelman’s paper “Induction and Deduction in Bayesian Data Analysis”. (More comments to follow—as always, feel free to comment.)

Note: March 9, 2012: Gelman has responded to some of our comments on his blog today: http://andrewgelman.com/2012/03/coming-to-agreement-on-philosophy-of-statistics/

D. Mayo

For now, I will limit my own comments to two. First, a fairly uncontroversial point: while Gelman writes that “Popper has argued (convincingly, in my opinion) that scientific inference is not inductive but deductive,” a main point of my “No-Pain” philosophy series (Parts 1, 2, and 3) was that “deductive” falsification involves inductively inferring a “falsifying hypothesis”.

More importantly, and more challengingly, Gelman claims the view he recommends “corresponds closely to the error-statistics idea of Mayo (1996)”. Now the idea that non-Bayesian ideas might afford a foundation for strands of Bayesianism is not as implausible as it may seem. On the face of it, any inference to a claim, whether to the adequacy of a model (for a given purpose) or even to a posterior probability, can be said to be warranted just to the extent that the claim has withstood a severe test (i.e., a test that would, at least with reasonable probability, have discerned a flaw with the claim, were it false). The question is: how well do Gelman’s methods for inferring statistical models satisfy severity criteria? (I’m not sufficiently familiar with his intended applications to say.)

Continue reading

Categories: Philosophy of Statistics, Statistics, U-Phil | Tags: , , , | 1 Comment

Two New Properties of Mathematical Likelihood

17 February 1890–29 July 1962

Note: I find this an intriguing, if perhaps little-known, discussion, long before the conflicts reflected in the three articles (the “triad”) below. Here Fisher links his tests to the Neyman–Pearson lemma in terms of power. I invite your deconstructions/comments.

by R.A. Fisher, F.R.S.

Proceedings of the Royal Society, Series A, 144: 285-307 (1934)

To Thomas Bayes must be given the credit of broaching the problem of using the concepts of mathematical probability in discussing problems of inductive inference, in which we argue from the particular to the general; or, in statistical phraseology, argue from the sample to the population, from which, ex hypothesi, the sample was drawn. Bayes put forward, with considerable caution, a method by which such problems could be reduced to the form of problems of probability. His method of doing this depended essentially on postulating a priori knowledge, not of the particular population of which our observations form a sample, but of an imaginary population of populations from which this population was regarded as having been drawn at random. Clearly, if we have possession of such a priori knowledge, our problem is not properly an inductive one at all, for the population under discussion is then regarded merely as a particular case of a general type, of which we already possess exact knowledge, and are therefore in a position to draw exact deductive inferences.

Continue reading

Categories: Likelihood Principle, Statistics | Tags: , , , , , | 2 Comments

Distortions in the Court? (PhilStock Feb 8)

Anyone who trades in biotech stocks knows that the slightest piece of news, rumors of successful/unsuccessful drug trials, upcoming FDA panels, anecdotal side effects, and much, much else, can radically alter a stock price in the space of a few hours. Pre-market, for example, websites are busy disseminating bits of information garnered from anywhere and everywhere, helping to pump or dump biotechs. I think just about every small biotech stock I’ve ever traded has been involved in some kind of lawsuit regarding what the company should have told shareholders during earnings. (Most don’t go very far.) If you ever visit the FDA page, you can find every drug/medical device coming up for consideration, recent letters to the company, etc., etc.

Nevertheless, you might be surprised to learn that companies are not required to inform shareholders of news simply because it is likely to be relevant to an investor’s overall cost-benefit analysis in deciding how much the stock is worth and where its price is likely to move. The requirement is more minimal than that: a company is only required to provide information which, if not revealed, would render misleading something the company has already said.

So, for example, suppose a drug company M publicly denied any reports claiming a link between its drug Z and effect E, declaring that drug Z had a clean bill of health as regards this risk concern. Having made that statement, the company would then be in violation of the requirements if it did not also reveal information such as: numerous consumers were suing it, alleging the untoward effect E from having taken drug Z; several letters had been written to the company by the FDA expressing concern about the number of cases where doctors had reported effect E among patients taking drug Z; and still other letters warning company M that it should cease and desist from issuing statements that any alleged links between drug Z and effect E were entirely baseless and unfounded.

Now the information that company M was not revealing did not, and could not, have shown a statistically significant correlation between drug Z and effect E.  But failing to reveal this information rendered company M in violation of FDA and stock rules, because of the statements company M already made about drug Z’s clean bill of health regarding this very effect E (along with bullish price projections).  Not revealing this information, and the related information in their possession, rendered misleading things the company already said when it comes to information shareholders use in deciding on M’s value.

Pretty obvious, right?

Suppose then that company M is found in violation of this rule.  And suppose someone inferred from this that evidence of statistical significance is not required for showing a causal connection between a drug and hazardous side-effects.

Well, to infer that would be like doubly (or perhaps triply) missing the point: the ruling had nothing to do with what’s required to show cause and effect, but only what information a company is required to reveal to its shareholders in order not to mislead them (as regards information that could be of relevance to them in their cost-benefit assessments of the stock’s value and future price).

Secondly, the ruling made it very explicit that it was not making any claim about the actual existence of evidence linking drug Z and effect E: they were only proclaiming that drug company M would be in error, if they claimed they did not violate the rule of disclosure.[i]  (Determining whether there is any link between Z and E was an entirely separate matter.)
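The two questions can be kept apart with a toy calculation. The counts below are invented for illustration only (they are not the Matrixx figures): a handful of adverse-event reports can fall short of statistical significance at the 0.05 level and yet still be exactly the sort of information a company must disclose once it has publicly denied any link.

```python
from math import erf, sqrt

def two_prop_z(x1, n1, x2, n2):
    """Two-sided two-proportion z-test with a pooled variance estimate."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF.
    p_value = 2 * (1 - 0.5 * (1 + erf(abs(z) / sqrt(2))))
    return z, p_value

# Hypothetical: 7 adverse-event reports among 5,000 users of a remedy
# versus 2 among 5,000 non-users.
z, p = two_prop_z(7, 5000, 2, 5000)
# z is about 1.67, p about 0.10: suggestive, but not significant at 0.05.
```

Nothing in the disclosure obligation turns on whether p dips below 0.05; conversely, finding p > 0.05 here settles nothing about causation either way.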

This is precisely the situation as regards the drug company Matrixx, its over-the-counter cold remedy Zicam, and side effect E: anosmia (loss or diminished sense of smell). It was the focus of lawyer and guest blogger Nathan Schachtman yesterday.

“The potentially fraudulent aspect of Matrixx’s conduct was not that it had “hidden” adverse event reports, but rather that it had adverse event reports and a good deal of additional information, none of which it had disclosed to investors, when at the same time, the company chose to give the investment community particularly bullish projections of future sales.” (Schachtman)

Nevertheless, critics of statistical significance testing wasted no time in declaring that this ruling (which for some inexplicable reason made it to the Supreme Court) just goes to show that statistical significance is not and should not be required to show evidence of a causal link[ii]. (See also my Sept. 26 post).  Kadane’s article, which is quite interesting, concludes:

“The fact-based consideration that the Supreme Court endorses is very much in line with the Bayesian decision-theoretic approach that models how to make rational decisions under uncertainty. The presence or absence of statistical significance (in the formal, narrow sense) plays little role in such an analysis.” (Jay Kadane)

I leave it to interested readers to explore the various  ins and outs of the case, which our guest poster has summarized in a much more legally correct fashion.


[i] Company M would certainly be in error if the reason it claimed not to have violated the rule of disclosure is that the withheld information could not have constituted evidence of a statistically significant link between drug Z and effect E!

[ii] There was a session on this at the ASA meeting last summer, including Kadane, Ziliak, and I don’t know who else (I had to leave prior to it).

Categories: Philosophy of Statistics

Senn Again (Gelman)

Senn will be glad to see that we haven’t forgotten him!  (See this blog: Jan. 14, Jan. 15, Jan. 23, and Jan. 24, 2012.)  He’s back on Gelman’s blog today.

http://andrewgelman.com/2012/02/philosophy-of-bayesian-statistics-my-reactions-to-senn/

I hope to hear some reflections this time around on an issue often noted but not discussed: updating and downdating (see this blog, Jan. 26, 2012).

Categories: Philosophy of Statistics, Statistics

U-PHIL (3): Stephen Senn on Stephen Senn!

I am grateful to Deborah Mayo for having highlighted my recent piece. I am not sure that it deserves the attention it is receiving. Deborah has spotted a flaw in my discussion of pragmatic Bayesianism. In praising the use of background knowledge I can be talking neither about automatic Bayesianism nor about subjective Bayesianism. It is clear that background knowledge ought not generally to lead to uninformative priors (whatever they might be), and so is not really what objective Bayesianism is about. On the other hand, all subjective Bayesians care about is coherence, and it is easy to produce examples where Bayesians, quite logically, will react differently to evidence; so what exactly is ‘background knowledge’? Continue reading

Categories: Philosophy of Statistics, Statistics, U-Phil

U-PHIL: Stephen Senn (2): Andrew Gelman

I agree with Senn’s comments on the impossibility of the de Finetti subjective Bayesian approach.  As I wrote in 2008, if you could really construct a subjective prior you believe in, why not just look at the data and write down your subjective posterior?  The immense practical difficulties with any serious system of inference render it absurd to think that it would be possible to just write down a probability distribution to represent uncertainty.  I wish, however, that Senn would recognize “my” Bayesian approach (which is also that of John Carlin, Hal Stern, Don Rubin, and, I believe, others).  De Finetti is no longer around, but we are!
Categories: Philosophy of Statistics, Statistics, U-Phil

U-PHIL: Stephen Senn (1): C. Robert, A. Jaffe, and Mayo (brief remarks)

I very much appreciate C. Robert and A. Jaffe sharing some reflections on Stephen Senn’s article for this blog, especially as I have only met these two statisticians recently, at different conferences. My only wish is that they had taken a bit more seriously my request to “hold (a portion of) the text at ‘arm’s length,’ as it were. Cycle around it, slowly. Give it a generous interpretation, then cycle around it again self-critically” (January 13, 2011). (I conceded it would feel foreign, but I strongly recommend it!)
Since these authors have given blog links, I’ll just note them here and give a few brief responses:
Categories: Philosophy of Statistics, Statistics, U-Phil

RMM-6: Special Volume on Stat Sci Meets Phil Sci

The article “The Renegade Subjectivist: José Bernardo’s Reference Bayesianism” by Jan Sprenger has now been published in our special volume of the on-line journal, Rationality, Markets, and Morals (Special Topic: Statistical Science and Philosophy of Science: Where Do/Should They Meet?)

Abstract: This article motivates and discusses José Bernardo’s attempt to reconcile the subjective Bayesian framework with a need for objective scientific inference, leading to a special kind of objective Bayesianism, namely reference Bayesianism. We elucidate principal ideas and foundational implications of Bernardo’s approach, with particular attention to the classical problem of testing a precise null hypothesis against an unspecified alternative.

Categories: Philosophy of Statistics, Statistics

Mayo Philosophizes on Stephen Senn: "How Can We Cultivate Senn’s-Ability?"

Where’s Mayo?

Although, in one sense, Senn’s remarks echo the passage of Jim Berger’s that we deconstructed a few weeks ago, Senn at the same time seems to reach an opposite conclusion. He points out how, in practice, people who claim to have carried out a (subjective) Bayesian analysis have actually done something very different, yet they then heap credit on the Bayesian ideal. (See also the blog post “Who Is Doing the Work?”) Continue reading

Categories: Philosophy of Statistics, Statistics, U-Phil
