philosophy of science

Call for papers: Philosepi?

Dear Reader: Here’s something of interest that was sent to me today (“philosepi”!)

Call for papers: Preventive Medicine special section on philosepi

The epidemiology and public health journal Preventive Medicine is devoting a special section to the Philosophy of Epidemiology, and published the first call for papers in its April 2012 issue. Papers will be published as they are received and reviewed. Deadline for inclusion in the first issue is 30 June 2012. See the Call For Papers for further information or contact Alex Broadbent, who is happy to discuss possible topics, etc. All papers will be subject to peer review.

Preventive Medicine invites submissions from epidemiologists, statisticians, philosophers, lawyers, and others with a professional interest in the conceptual and methodological challenges that emerge from the field of epidemiology for a Special Section entitled “Philosophy of Epidemiology” with Guest Editor Dr Alex Broadbent of the University of Johannesburg. Dr Broadbent also served as the Guest Editor of a related previous Special Section, “Epidemiology, Risk, and Causation”, that appeared in the October–November 2011 issue (Prev Med 53(4–5):213–259, http://www.sciencedirect.com/science/journal/00917435/53/4-5). Continue reading

Categories: Announcement, philosophy of science | 1 Comment

Statistical Science Court?

Nathan Schachtman has an interesting blog post on “Scientific illiteracy among the judiciary”:

February 29th, 2012

Ken Feinberg, speaking at a symposium on mass torts, asks what legal challenges mass torts confront in the federal courts. The answer seems obvious.

Pharmaceutical cases that warrant federal court multi-district litigation (MDL) treatment typically involve complex scientific and statistical issues. The public deserves to have MDL cases assigned to judges who have special experience and competence to preside in cases in which these complex issues predominate. There appears to be no procedural device to ensure that the judges selected in the MDL process have the necessary experience and competence, and there is a good deal of evidence to suggest that the MDL judges are not up to the task at hand.

In the aftermath of the Supreme Court’s decision in Daubert, the Federal Judicial Center assumed responsibility for producing science and statistics tutorials to help judges grapple with technical issues in their cases. The Center has produced videotaped lectures as well as the Reference Manual on Scientific Evidence, now in its third edition. Despite the Center’s best efforts, many federal judges have shown themselves to be incorrigible. It is time to revive the discussions and debates about implementing a “science court.”

I am intrigued to hear Schachtman revive the old and controversial idea of a “science court”, although the idea has never really gone away: it has come up for debate every few years for the past 35 or 40 years! In the 80s it was a hot topic in the new “science and values” movement, but I do not think it was ever really put to an adequate experimental test. The controversy relates directly to the whole issue of distinguishing evidential from policy issues (in evidence-based policy)…. Continue reading
Categories: philosophy of science, PhilStatLaw, Statistics | 2 Comments

No-Pain Philosophy (part 3): A more contemporary perspective

See (Part 2)

See (Part 1)

 

7.  How the story turns out (not well)

This conception of testing, which Lakatos called “sophisticated methodological falsificationism,” takes us quite a distance from the more familiar, if hackneyed, conception of Popper as a simple falsificationist.[i]  It calls for warranting a host of different methodological rules for each of the steps along the way, in order either to falsify or to corroborate hypotheses.  But it doesn’t end well.  Continue reading

Categories: No-Pain Philosophy, philosophy of science | 10 Comments

No-Pain Philosophy: Skepticism, Rationality, Popper and All That (part 2): Duhem’s problem & methodological falsification

(See Part 1)

5. Duhemian Problems of Falsification

Any interesting case of hypothesis falsification, or even a severe attempt to falsify, rests on both empirical and inductive hypotheses or claims. Consider the most simplistic form of deductive falsification (an instance of the valid form of modus tollens): “If H entails O, and not-O, then not-H.”  (To infer “not-H” is to infer H is false, or, more often, it involves inferring there is some discrepancy in what H claims regarding the phenomenon in question). Continue reading
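
Schematically, the two inference patterns at issue can be set down as follows (a rough formal gloss; A stands for whatever auxiliary assumptions Duhem’s problem brings into play):

```latex
% Simple ("naive") falsification is just modus tollens:
% if H entails O, and O fails, H is rejected.
\[
(H \rightarrow O),\ \neg O \ \vdash\ \neg H
\]

% Duhem's problem: in practice H alone rarely entails O; auxiliary
% assumptions A are needed, so a failed prediction only hits the conjunction.
\[
\big((H \wedge A) \rightarrow O\big),\ \neg O \ \vdash\ \neg (H \wedge A)
\]
```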

Categories: No-Pain Philosophy, philosophy of science | 4 Comments

Reposting from Jan 29: No-Pain Philosophy: Skepticism, Rationality, Popper, and All That: The First of 3 Parts

I want to shift to the arena of testing the adequacy of statistical models and misspecification testing (leading up to articles by Aris Spanos, Andrew Gelman, and David Hendry). But first, a couple of informal, philosophical mini-posts, if only to clarify terms we will need (each has a mini test at the end).
 1. How do we obtain Knowledge, and how can we get more of it?
     Few people doubt that science is successful and that it makes progress. This remains true for the philosopher of science, despite her tendency to skepticism. By contrast, most of us think we know a lot of things, and that science is one of our best ways of acquiring knowledge. But how do we justify our lack of skepticism? Continue reading
Categories: philosophy of science | 3 Comments

No-Pain Philosophy: Skepticism, Rationality, Popper, and All That: First of 3 Parts

I want to shift to the arena of testing the adequacy of statistical models and misspecification testing (leading up to articles by Aris Spanos, Andrew Gelman, and David Hendry). But first, a couple of informal, philosophical mini-posts, if only to clarify terms we will need (each has a mini test at the end). Continue reading
Categories: No-Pain Philosophy, philosophy of science | 2 Comments

U-PHIL: "So you want to do a philosophical analysis?"

“Philosophy, as I have so far understood and lived it, means living voluntarily among ice and high mountains—seeking out everything strange and questionable in existence”. Nietzsche*
I am about to turn to philosophical analyses/deconstructions of short portions of articles from the special issue “Statistical Science and Philosophy of Science” (RMM 2011),  and I will invite contributed analyses from readers and, of course, the author(s).  The first text, to be posted tomorrow, will be from Professor Stephen Senn. (Full article)
Categories: philosophy of science, U-Phil | 3 Comments

Little Bit of Blog Log-ic

I have a logic license

My “Logic” chariot, crunched from behind before my travels, you might recall (blogpost Nov. 15, “Logic Takes a Bit of a Hit”), has been robustly repaired and beautifully corrected, all in my absence!  So here’s a little bit of blog logic…. Continue reading

Categories: philosophy of science | Leave a comment

Deconstructing and Deep-Drilling* 2

Constructing Thebes Library: 2002

Deconstructing: The deconstructionist idea, initially associated with French philosophers like Derrida and with literary theory, denies that a “text” has a single interpretation intended by the author, holding instead that the reader constructs its meaning, unearthing conscious or unconscious significations. While the general philosophy is linked with relativism, postmodernism, and social constructivism—positions to which I am highly allergic—one needn’t embrace them to accord validity to the activity of disinterring meanings: ironies, deceptions, and unintended assumptions and twists in an author’s writing. The passage I cited from Berger seems to offer an opportunity for creative deconstruction of the statistical kind. I wouldn’t have proposed the exercise if I didn’t suspect we might learn something of relevance to our deep-sea drilling activity…. Please continue to send your ponderings….

* DO stock is nearly at a year low! (I surmise a fairly quick trip back up 10 points)

Categories: Irony and Bad Faith, philosophy of science, U-Phil | Leave a comment

The UN Charter: double-counting and data snooping

John Worrall, 26 Nov. 2011

Last night we went to a 65th birthday party for John Worrall, philosopher of science and guitarist in his band Critique of Pure Rhythm. For the past 20 or more of these years, Worrall and I have been periodically debating one of the most contested principles in philosophy of science: whether evidence in support of a hypothesis or theory should in some sense be “novel.”

A novel fact for a hypothesis H may be: (1) one not already known, (2) one not already predicted (or counter-predicted) by available hypotheses, or (3) one not already used in arriving at or constructing H. The first corresponds to temporal novelty (Popper), the second, to theoretical novelty (Popper, Lakatos), the third, to heuristic or use-novelty. It is the third, use-novelty (UN), best articulated by John Worrall, that seems to be the most promising at capturing a common intuition against the “double use” of evidence:

If data x have been used to construct a hypothesis H(x), then x should not be used again as evidence in support of H(x).

(Note: Writing H(x) in this way emphasizes that, one way or another, the inferred hypothesis was selected or constructed to fit or agree with data x. The particular instantiation can be written as H(x0).)

The UN requirement, or, as Worrall playfully puts it, the “UN Charter,” is this:

Use-novelty requirement (UN Charter): for data x to support hypothesis H (or for x to be a good test of H), H should not only agree with or “fit” the evidence x, but x itself must not have been used in H’s construction.

The requirement has surfaced as a general prohibition against data mining, hunting for significance, tuning on the signal, ad hoc hypotheses, and data peeking, and as a preference for predesignated hypotheses and novel predictions.

The intuition underlying the UN requirement seems straightforward: it is no surprise that data x fit H(x) if H(x) was deliberately constructed to accord with data x, so using x once again to support H(x) looks suspect. To use x both to construct and to support a hypothesis is to invite the accusation of illicit “double-counting.” In order for x to count as genuine evidence for a hypothesis, we need to be able to say that so good a fit between data x and H is practically impossible or extremely improbable (or an extraordinary coincidence, or the like) if in fact it is a mistake to regard x as evidence for H.
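
Put a bit more formally (my rough rendering of the requirement just stated; the semicolon is read as “computed under the supposition that”, not as Bayesian conditioning):

```latex
% For x to count as genuine evidence for H, so good a fit must be very
% improbable were it a mistake to regard x as evidence for H:
\[
\Pr\big(\,\text{so good a fit between } x \text{ and } H \;;\;
        \text{it is a mistake to regard } x \text{ as evidence for } H\,\big)
\ \text{must be very low.}
\]
```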

In short, the epistemological rationale for the UN requirement is essentially the intuition informing the severity demand associated with Popper. The disagreement between Worrall and me has largely turned on whether severity can be satisfied even in cases of UN violation (Worrall 2010).

I deny that UN is necessary (or sufficient) for good tests or warranted inferences—there are severe tests that are non-novel, and novel tests that are not severe. Various types of UN violations do, however, alter severity, by altering the error-probing capacities of tests. Without claiming that it is easy to determine just when this occurs, the severity requirement at least provides a desideratum for discriminating problematic from unproblematic types of double-counting.

The severity account also aims to explain why we often have conflicting intuitions about the novelty requirement. On the one hand, it seems clear that were you to search out several factors and report only those that show (apparently) impressive correlations, there would be a high probability of erroneously inferring a real correlation. But it is equally clear that we can and do reliably use the same data both to arrive at and to warrant hypotheses: in forensics, for example, where DNA is used to identify a criminal; in using statistical data to check whether a model’s own assumptions are satisfied; as well as in common realms such as measurement—inferring, say, my weight gain after three days in London. Here, although any inferences (about the criminal, the model assumptions, my weight) are constructed to fit or account for the data, they are deliberately constrained to reflect what is correct, at least approximately.  We use the data all right, but we go where it takes us (not where we want it to go).
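
The first of these intuitions is easy to exhibit in a toy simulation (my sketch, not from the post; the choice of 20 candidate factors and 30 observations is arbitrary): hunting through many irrelevant factors and reporting only the most impressive correlation makes a nominally impressive fit probable even when nothing real is there.

```python
# Toy sketch (not from the post): hunting through many irrelevant factors and
# reporting only the most impressive correlation inflates the probability of an
# apparently "significant" finding, even though no real correlation exists.
import numpy as np

rng = np.random.default_rng(1)
n_obs, n_factors, n_trials = 30, 20, 5000

def abs_corrs(rng):
    """Absolute correlations of 20 irrelevant factors with a pure-noise outcome."""
    y = rng.normal(size=n_obs)
    X = rng.normal(size=(n_obs, n_factors))
    return np.abs([np.corrcoef(X[:, j], y)[0, 1] for j in range(n_factors)])

prespecified = np.array([abs_corrs(rng)[0] for _ in range(n_trials)])   # one factor fixed in advance
hunted = np.array([abs_corrs(rng).max() for _ in range(n_trials)])      # best of 20, chosen post hoc

cutoff = np.quantile(prespecified, 0.95)  # |r| a single prespecified test exceeds only 5% of the time
print(f"cutoff |r| for one prespecified factor : {cutoff:.2f}")
print(f"P(exceed cutoff), prespecified factor  : {np.mean(prespecified > cutoff):.2f}")  # ~0.05 by construction
print(f"P(exceed cutoff), best of 20 factors   : {np.mean(hunted > cutoff):.2f}")        # roughly 0.6
```

This is just the selection effect the severity requirement is meant to register: the hunted-for “best” correlation clears the single-test cutoff well over half the time even though every factor is irrelevant, so the reported fit is poor evidence unless the search is taken into account.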

What matters is not whether H was deliberately constructed to accommodate data x. What matters is how well the data, together with background information, rule out ways in which an inference to H can be in error. Or so I have argued.[1]

I claim that if we focus on the variety of “use-construction rules” and the associated mistakes that need to be ruled out or controlled in each case, we can zero in on the problematic cases. Even where UN violations do alter the error-probabilistic properties of our tools, recognizing this can lead us to correct overall severity assessments.

Despite some differences, there are intriguing parallels between how this debate has arisen in philosophy and in statistics. Traditionally, philosophers who deny that an appraisal of evidence can or should be altered by UN considerations have adhered to “logical theories of confirmation.” As Alan Musgrave notes:

According to modern logical empiricist orthodoxy, in deciding whether hypothesis h is confirmed by evidence e, and how well it is confirmed, we must consider only the statements h and e, and the logical relations between them. It is quite irrelevant whether e was known first and h proposed to explain it, or whether e resulted from testing predictions drawn from h. (Musgrave 1974, 2)

These logical theories of confirmation have an analogy in formal statistical accounts that obey the likelihood principle:

The likelihood principle implies . . . the irrelevance of predesignation, of whether an hypothesis was thought of beforehand or was introduced to explain known effects. (Rosenkrantz 1977, 122)

A prime example of a UN violation is one in which a hypothesis or theory contains an “adjustable” or free parameter, which is then “tied down” on the basis of data (in order to accord with it).
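
A minimal sketch of the parameter-fixing case (my illustration, using a hypothetical normal-mean example rather than anything from the post):

```python
# Minimal sketch (hypothetical example): a model with a free parameter mu is
# "tied down" by the very data it is then credited with fitting.
import numpy as np

rng = np.random.default_rng(7)
x = rng.normal(loc=1.3, scale=1.0, size=25)   # data from some unknown process

mu_hat = x.mean()          # the free parameter is fixed so as to accord with x
# H(x) is the use-constructed hypothesis "the mean equals mu_hat"
discrepancy = abs(x.mean() - mu_hat)
print(f"H(x): mean = {mu_hat:.2f}; discrepancy between H(x) and x: {discrepancy:.2f}")
# The discrepancy is 0 whatever data arose: the fit is guaranteed by construction.
# Whether the inference to H(x) is nevertheless warranted turns on the
# error-probing properties of the construction rule, which is the point above.
```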

Bayesians looking to justify the preference against such UN violations (without violating the likelihood principle) typically look for it to show up in prior probability assignments. For instance, Jim Berger, in statistics, and Roger Rosenkrantz, in philosophy of science, maintain that a theory that is free of adjustable parameters is “simpler” and therefore enjoys a higher prior probability. There is a long history of this type of move, based on different kinds of simplicity considerations.  By contrast, according to philosopher John Earman (discussing the general theory of relativity, GTR): “On the Bayesian analysis,” the countenancing of parameter fixing that we often see in science “is not surprising, since it is not at all clear that GTR deserves a higher prior than the [use-constructed rivals to GTR]” (Earman 1992, 115). He continues: “Why should the prior likelihood of the evidence depend upon whether it was used in constructing T?” (p. 116).

Given the complexity and competing intuitions, it’s no surprise that Bayesians appear to hold different positions here. Andrew Gelman tells me that Bayesians have criticized his (Bayesian?) techniques for checking models on the grounds that they commit double-counting (and thereby have problems with power?).  I’m unsure what exactly the critical argument involves.  Frequentist model checking techniques are deliberately designed to allow computing error probabilities for the questions about assumptions, distinct from those needed to answer the primary question.  Whether this error statistical distinction can be relevant for Gelman’s “double counting” I cannot say.

Earman, J. 1992. Bayes or Bust? A Critical Examination of Bayesian Confirmation Theory. Cambridge, MA: MIT Press.

Musgrave, A. 1974. Logical versus historical theories of confirmation. British Journal for the Philosophy of Science 25:1-23.

Rosenkrantz, R. 1977. Inference, Method and Decision: Towards a Bayesian Philosophy of Science. Dordrecht, The Netherlands: D. Reidel.

Worrall, J. 2010. Theory, confirmation and novel evidence. In Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability, and the Objectivity and Rationality of Science, edited by D. Mayo and A. Spanos, 125-154. Cambridge: Cambridge University Press.

[1] For my discussions on the novelty and severity business (updated Feb. 24, 2021):

Categories: double counting, philosophy of science | Leave a comment

Skeleton Key and Skeletal Points for (Esteemed) Ghost Guest

Secret Key

Why attend presentations of interesting papers or go to smashing London sites when you can spend better than an hour racing from here to there because the skeleton key to your rented flat won’t turn the lock (after working fine for days)? [Three other neighbors tried, by the way; it wasn’t just me.] And what are the chances of two keys failing, including the porter’s key, and then a third key succeeding–a spare I’d never used but had placed in a hollowed-out volume of Error and Inference, and kept in an office at the London School of Economics?  (Yes, that is what the photo is!  An anonymous e-mailer guessed it right, so they must have spies!)  As I ran back and forth one step ahead of the locksmith, trying to ignore my still-bum knee (I left the knee brace in the flat) and trying not to get run over—not easy, in London, for me—I mulled over the perplexing query from one of my Ghost Guests (who asked for my positive account). Continue reading

Categories: philosophy of science, Statistics | Leave a comment

RMM-1: Special Volume on Stat Sci Meets Phil Sci

Little by little, the articles on Stat Sci Meets Phil Sci are appearing in “Rationality, Markets and Morals” online.

The article “Statistical Science and Philosophy of Science: Where Do/Should They Meet in 2011 (and Beyond)?” has now been published.

Categories: philosophy of science, Philosophy of Statistics, Statistics | 4 Comments

SF conferences & E. Lehmann

I’m jumping off the Island for a bit.  Destination: San Francisco, a conference on “The Experimental Side of Modeling”, http://www.isabellepeschard.org/.  Kuru makes a walk-on appearance in my presentation, “How Experiment Gets a Life of its Own”.  It does not directly discuss statistics, but I will post my slides.

The last time I was in SF was in 2003 with my econometrician colleague, Aris Spanos.  We were on our way to Santa Barbara to engage in an unusual powwow on statistical foundations at NCEAS*, and stopped off in SF to meet with Erich Lehmann and his wife, Julie Shaffer.   We discussed, among other things, this zany idea of mine to put together a session for the Second Lehmann conference in 2004 that would focus on philosophical foundations of statistics. (Our session turned out to include David Freedman and D.R. Cox). Continue reading

Categories: philosophy of science, Statistics | 1 Comment

KURU

I have been reading about a disorder that intrigues me: Kuru (which means “shaking”), widespread among the Fore people of New Guinea in the 1960s. Over around 3–6 months, Kuru victims go from having difficulty walking, to outbursts of laughter, to inability to swallow and death. Kuru and (what we now know to be) related diseases (e.g., Mad Cow, Creutzfeldt–Jakob, scrapie) are “spongiform” diseases, causing brains to appear spongy. (They are also called TSEs: transmissible spongiform encephalopathies.) Kuru clustered in families, in particular among Fore women and their children, or elderly parents. Continue reading

Categories: philosophy of science, Reformers: Prionvac, Statistics | Leave a comment

Drilling Rule #1*

A simple rule before getting started: In presenting their arguments, philosophers sometimes appear to go off into far distant islands entirely, and then act as if they have shown something about the case at hand. The mystery evaporates if one keeps in mind the following rule of argument:

  • If one argument is precisely analogous to another, in all relevant respects, and the second argument is pretty clearly fishy, then so is the first. Likewise, if one argument is precisely analogous to another, in all relevant respects, and the second argument passes swimmingly, then so must the first.

If the argument at hand is murky, while the one in the distant land is crystal clear, then appealing to the latter is a powerful way to make a point.  Because the relevance for the case at hand seems obvious, details may be left unstated.  Of course, you may avoid these conclusions by showing just where the analogies break down.

*Full disclosure:  I own a fair amount of Diamond Offshore (DO), but do not plan to purchase more in the next 72 hours.

Categories: philosophy of science, Statistics | Leave a comment
