At the JSM: 2013 International Year of Statistics

“2013 is the International Year of Statistics,” the JSM (Joint Statistical Meetings) brochures ring out! What does it mean?  Whatever it is, it’s exciting! Errorstatistics.com never took up this question, but it’s been on some of the blogs in my “blog bagel”. Since I’m at the JSM here in Montreal, I may report on any clues. Please share your comments. I’m not a statistician, but a philosopher of science, and of inductive-statistical inference much more generally. So I have no dog in this fight, as they say. (Or do I?) On the other hand, I have often rued “the decline of late in the lively and long-standing exchange between philosophers of science and statisticians” (see this post).[i] (We did have that one parody on “big data or pig data”.)

I know from Larry Wasserman (normaldeviate) that the “year of” label arose, at least in part, to help prevent Statistical Science from being eclipsed by the fashionable “Big Data” crowd. In one blog post he even spoke of “the end of statistics”. “Aren’t We Data Science?” Marie Davidian, president of the ASA, asks in a recent AmStatNews article.[ii] Davidian worries, correctly I’ve no doubt, that Big Dadaists may be collecting data with “little appreciation for the power of design principles. Statisticians could propel major advances through developments of ‘experimental design for the 21st century’!”  This recalls Stan Young’s recent post:

Until relatively recently, the microarray samples were not sent through assay equipment in random order. Clinical trial statisticians at GSK insisted that the samples go through assay in random order. Rather amazingly, the data became less messy and p-values became more orderly. The story is given here: http://blog.goldenhelix.com/?p=322.
Essentially all the microarray data pre-2010 is unreliable…. So often the problem is not with p-value technology, but with the design and conduct of the study.

So without statistical design principles, they may have wasted a decade!
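Young’s point about run order can be made concrete with a minimal sketch (the sample names and seed below are purely illustrative, not from the GSK study): if all cases are assayed first and all controls afterward, any drift in the equipment over time is confounded with the case/control contrast, and randomizing the run order breaks that link.

```python
import random

# Hypothetical microarray samples: two biological groups (names are made up).
samples = [f"case_{i}" for i in range(8)] + [f"control_{i}" for i in range(8)]

# Naive plan: run all cases first, then all controls. Any time trend in the
# assay equipment then lines up exactly with the group difference.
naive_order = list(samples)

# Design fix: randomize the run order so instrument drift averages out
# across both groups instead of piling onto one of them.
rng = random.Random(42)  # fixed seed only so the sketch is reproducible
randomized_order = list(samples)
rng.shuffle(randomized_order)

print(randomized_order[:4])
```

The shuffle changes nothing about which samples are measured, only the order they pass through the machine, which is exactly why it is a design decision rather than a p-value fix.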

Back to the JSM, I see they’ve even invited pollster Nate Silver to give the ASA presidential address. I thought he was more baseball-stat expert/pundit/pollster than statistician, but some are calling him an “analytics rock star”. Never mind that there’s at least one extremely strange chapter (8) in his popular book, The Signal and the Noise. Here’s an excerpt from Wasserman’s review, which he titles “Nate Silver is a Frequentist: Review of The Signal and the Noise”:

I have one complaint. Silver is a big fan of Bayesian inference, which is fine. Unfortunately, he falls into that category I referred to a few posts ago. He confuses ‘Bayesian inference’ with ‘using Bayes’ theorem.’ His description of frequentist inference is terrible. He seems to equate frequentist inference with Fisherian significance testing, mostly using Normal distributions. Either he learned statistics from a bad book or he hangs out with statisticians with a significant anti-frequentist bias. Have no doubt about it: Nate Silver is a frequentist.[iii] (Wasserman)

I didn’t discuss Silver’s book on this blog, but looking up a few comments I made on other blogs (e.g., on a Gelman blog reviewing Silver), I see I am a bit less generous than Wasserman: “Frequentists, Silver alleges, go around reporting hypotheses like ‘toads predict earthquakes’ and other ‘manifestly ridiculous’ findings that are licensed by significance testing and data-dredged correlations (Silver, 253). But it is the frequentist who prevents such spurious correlations….” (Mayo) So Silver’s criticisms of frequentists are way off base. I was also slightly aghast at his Fisher ridicule, and I poked fun at his All-You-Need-Is-Bayes cheerleading: “The simple use of Bayes’ Theorem solves all problems (he seems not to realize these too require statistical models),” I wrote. It’s hard to tell if he’s just reporting or chiming in with those who advocate that schools stop teaching frequentist methods. Some statistical self-inflicted wounds, perhaps? The other chapters look interesting, though I didn’t get too much further… (The Bayesian examples are all ordinary frequentist updating, it appears.) If I can, I’ll go to Silver’s talk.

[i] In that post I wrote: “Philosophy of statistical science not only deals with the philosophical foundations of statistics but also questions about the nature of and justification for inductive-statistical learning more generally. So it is ironic that just as philosophy of science is striving to immerse itself in and be relevant to scientific practice, that statistical science and philosophy of science—so ahead of their time in combining the work of philosophers and practicing scientists—should see such dialogues become rather rare.  (See special topic here.)” (Mayo)

[ii] Some of the turf battles I hear about appear to reflect less substance than style (i.e., people being galvanized to use the latest meme in funding opportunities). Even in philosophy, the dept. head asked us to try to work it in. In my view, rather than suggesting “Plato and Big Data”, they should be asking to highlight interconnections between statistical evidence, critical thinking, logic, ethics, philosophy of science, and epistemology. That would advance our courses.

[iii] For example, Wasserman says, in his review of Silver:

One of the most important tests of a forecast — I would argue that it is the single most important one — is called calibration. Out of all the times you said there was a 40 percent chance of rain, how often did rain actually occur? If over the long run, it really did rain about 40 percent of the time, that means your forecasts were well calibrated.  (Wasserman)
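The calibration check described in the quote is easy to sketch: collect all the occasions on which a given probability was forecast and compare it to the observed frequency. The forecasts and outcomes below are made-up toy data, not from Silver or Wasserman.

```python
from collections import defaultdict

# Toy calibration check: among all days given a 40% rain forecast,
# did it rain about 40% of the time? (Data invented for illustration.)
forecasts = [0.4, 0.4, 0.4, 0.4, 0.4, 0.8, 0.8, 0.8, 0.8, 0.8]
outcomes  = [1,   0,   0,   1,   0,   1,   1,   1,   0,   1]  # 1 = it rained

# Group outcomes by the probability that was announced for them.
bins = defaultdict(list)
for p, y in zip(forecasts, outcomes):
    bins[p].append(y)

# A well-calibrated forecaster's announced probability matches the
# empirical frequency in each bin.
for p in sorted(bins):
    observed = sum(bins[p]) / len(bins[p])
    print(f"forecast {p:.0%}: rained {observed:.0%} of the time")
```

In practice one would bin forecasts into ranges (e.g., 35–45%) rather than exact values, since real forecast probabilities rarely repeat exactly.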

Categories: Error Statistics


5 thoughts on “At the JSM: 2013 International Year of Statistics”

  1. Nate Silver gave his ASA Presidential talk to a packed audience (with questions tweeted). I will likely write a separate post on this, but just to jot down my thoughts very quickly—based on scribbled notes—Silver gave a list of 10 points that went something like this:
    1. statistics are not just numbers
    2. context is needed to interpret data
    3. correlation is not causation
    4. averages are the most useful tool
    5. human intuitions about numbers tend to be flawed and biased
    6. people misunderstand probability
    7. we should be explicit about our biases and (in this sense) should be Bayesian*
    8. complexity is not the same as not understanding
    9. being in the in crowd gets in the way of objectivity
    10. making predictions improves accountability

    *Just to comment on #7, I don’t know if this is a brand new philosophy of Bayesianism, but his position went like this: Journalists and others are incredibly biased; they view data through their prior conceptions, wishes, goals, and interests, and you cannot expect them to be self-critical enough to be aware of, let alone be willing to expose, their propensity toward spin, prejudice, etc. Silver said the reason he favors the Bayesian philosophy (yes, he used the words “philosophy” and “epistemology”) is that people should be explicit about disclosing their biases. I have three queries: (1) If we concur that people are so inclined to see the world through their tunnel vision, what evidence is there that they are able/willing to be explicit about their biases? (2) If priors are to be understood as the way to be explicit about one’s biases, shouldn’t they be kept separate from the data rather than combined with them? (3) I don’t think this is how Bayesians view Bayesianism or priors—is it? Subjective Bayesians, I thought, view priors as representing prior or background information about the statistical question of interest; but Silver sees them as admissions of prejudice, bias, or what have you. As a confession of bias, I’d be all for it—though I think people may be better at exposing others’ biases than their own. Only thing: I’d need an entirely distinct account of warranted inference from data.

  2. 2. We can use data to interpret context too.
    4. Means can be really mean.
    5. Maybe not so much… https://aleadeum.wordpress.com/2013/06/09/are-human-minds-statistical-machines/
    6. Can we blame them? So do statisticians.
    7. I agree Bayesianism is a philosophical stand, and about (3) I’d say Nate is the Subjective kind for whom everything is a belief, including Objectivist priors.
    9. Interesting comment coming from a Subjectivist… I guess he uses his subjectivity to be objective.

  3. rv

    I think his philosophy is a bit more than subjectivist Bayes. He emphasizes the idea of calibrated Bayesianism in his book quite a lot (which led to Gelman and Wasserman debating whether calibration makes one a frequentist or not).

    To be honest, I wasn’t even familiar with what it meant to be a calibrated Bayesian until I read Nate’s book. The most-cited paper on the subject seems to be this one by Philip Dawid:

    http://amstat.tandfonline.com/doi/abs/10.1080/01621459.1982.10477856

  4. rv: This kind of “calibration” might make sense for events, but I don’t see how to check that, say, 80% of the scientific hypotheses to which one assigns .8 posterior probability turn out to be true--never mind that it’s not clear what one would do with such a number. But the main issue, as I see it, is with what he said in his talk (under point #7). Thanks for the link.

  5. rv

    I agree, it’s an approach that is very specific to the kinds of prediction problems Nate works on. The idea doesn’t provide a general solution for scientific epistemology.

    Maybe it would be applicable if replications were more common in scientific culture. Correct calibration would result in models that were consistent with replication study outcomes. One could perform meta-analyses to show whether researchers were consistently underestimating or overestimating the uncertainty of their findings. This would require a pretty big change to how science is institutionalized though (for one thing, replication experiments would have to be respected and publishable), which probably isn’t realistic.

I welcome constructive comments for 14-21 days. If you wish to have a comment of yours removed during that time, send me an e-mail.
