28 July 1902 – 17 September 1994
Today is Karl Popper’s birthday. I’m linking to a reading from his Conjectures and Refutations[i] along with Popper Self-Test Questions, which include multiple-choice questions, quotes to ponder, an essay, and thumbnail definitions at the end[ii].
Blog readers who wish to send me their answers will have their papers graded (use the comments or firstname.lastname@example.org). An A- or better earns a signed copy of my forthcoming book, Statistical Inference as Severe Testing: How to Get Beyond the Statistics Wars.[iii]
[i] Popper reading from Conjectures and Refutations
[ii] I might note the “No-Pain philosophy” (3 part) Popper posts on this blog: parts 1, 2, and 3.
[iii] I posted this once before, but now I have a better prize.
HAPPY BIRTHDAY POPPER!
Popper, K. (1962). Conjectures and Refutations: The Growth of Scientific Knowledge. New York: Basic Books.
3 years ago…
MONTHLY MEMORY LANE: 3 years ago: July 2015. I mark in red 3-4 posts from each month that seem most apt for general background on key issues in this blog, excluding those reblogged recently, and in green up to 3 others of general relevance to philosophy of statistics. Posts that are part of a “unit” or a group count as one.
- 07/03 Larry Laudan: “When the ‘Not-Guilty’ Falsely Pass for Innocent”, the Frequency of False Acquittals (guest post)
- 07/09 Winner of the June Palindrome contest: Lori Wike
- 07/11 Higgs discovery three years on (Higgs analysis and statistical flukes)-reblogged recently
- 07/14 Spot the power howler: α = β?
- 07/17 “Statistical Significance” According to the U.S. Dept. of Health and Human Services (ii)
- 07/22 3 YEARS AGO (JULY 2012): MEMORY LANE
- 07/24 Stephen Senn: Randomization, ratios and rationality: rescuing the randomized clinical trial from its critics
- 07/29 Telling What’s True About Power, if practicing within the error-statistical tribe
Monthly memory lanes began at the blog’s 3-year anniversary in Sept. 2014.
New Rule, July 30, 2016, March 30, 2017: a very convenient way to allow data-dependent choices (note why it’s legit in selecting blog posts, on severity grounds).
Personal perils: are numbers needed to treat misleading us as to the scope for personalised medicine?
A common misinterpretation of Numbers Needed to Treat is causing confusion about the scope for personalised medicine.
Thirty years ago, Laupacis et al.1 proposed an intuitively appealing way that physicians could decide how to prioritise health care interventions: they could consider how many patients would need to be switched from an inferior treatment to a superior one in order for one to have an improved outcome. They called this the number needed to be treated. It is now more usually referred to as the number needed to treat (NNT).
Within fifteen years, NNTs were so well established that the then editor of the British Medical Journal, Richard Smith, could write: ‘Anybody familiar with the notion of “number needed to treat” (NNT) knows that it’s usually necessary to treat many patients in order for one to benefit’2. Fifteen years further on, bringing us up to date, Wikipedia makes a similar point: ‘The NNT is the average number of patients who need to be treated to prevent one additional bad outcome (e.g. the number of patients that need to be treated for one of them to benefit compared with a control in a clinical trial).’3
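As a quick arithmetic sketch of the definition above (the event rates here are invented for illustration, not taken from any trial discussed in this post), the NNT is simply the reciprocal of the absolute risk reduction between the two arms:

```python
# Hypothetical illustration: NNT as the reciprocal of the absolute
# risk reduction (ARR). The rates below are made up for the example.
def nnt(control_event_rate: float, treatment_event_rate: float) -> float:
    """Number needed to treat = 1 / (control rate - treatment rate)."""
    arr = control_event_rate - treatment_event_rate
    if arr <= 0:
        # No average benefit: the NNT is undefined (or points to harm).
        raise ValueError("no risk reduction, NNT undefined")
    return 1.0 / arr

# If 30% of controls have the bad outcome vs. 20% on treatment,
# the ARR is 0.10, so roughly 10 patients are treated per one
# additional good outcome, on average.
print(nnt(0.30, 0.20))
```

Note that this is an arm-level average: nothing in the arithmetic says that exactly one patient in every ten is a “responder”, which is precisely the misreading taken up below.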
This common interpretation is false, as I have pointed out previously in two blogs on this site: Responder Despondency and Painful Dichotomies. Nevertheless, it seems to me the point is worth making again and the thirty-year anniversary of NNTs provides a good excuse.
I’m reblogging a few of the Higgs posts at the 6th anniversary of the 2012 discovery. (The first was in this post.) The following was originally “Higgs Analysis and Statistical Flukes: Part 2” (from March 2013).
Some people say to me: “This kind of [severe testing] reasoning is fine for a ‘sexy science’ like high energy physics (HEP)”–as if their statistical inferences are radically different. But I maintain that this is the mode by which data are used in “uncertain” reasoning across the entire landscape of science and day-to-day learning (at least, when we’re trying to find things out). Even with high-level theories, the particular problems of learning from data are tackled piecemeal, in local inferences that afford error control. Granted, this statistical philosophy differs importantly from those that view the task as assigning comparative (or absolute) degrees of support/belief/plausibility to propositions, models, or theories.