Has Philosophical Superficiality Harmed Science?


I have been asked what I thought of some criticisms of the scientific relevance of philosophy of science, as discussed in the following snippet from a recent Scientific American blog. My title elicits the appropriate degree of ambiguity, I think. 

Quantum Gravity Expert Says “Philosophical Superficiality” Has Harmed Physics

By John Horgan | August 21, 2014

“I interviewed Rovelli by phone in the early 1990s when I was writing a story for Scientific American about loop quantum gravity, a quantum-mechanical version of gravity proposed by Rovelli, Lee Smolin and Abhay Ashtekar.[i]

Horgan: What’s your opinion of the recent philosophy-bashing by Stephen Hawking, Lawrence Krauss and Neil deGrasse Tyson?

Rovelli: Seriously: I think they are stupid in this.   I have admiration for them in other things, but here they have gone really wrong.  Look: Einstein, Heisenberg, Newton, Bohr…. and many many others of the greatest scientists of all times, much greater than the names you mention, of course, read philosophy, learned from philosophy, and could have never done the great science they did without the input they got from philosophy, as they claimed repeatedly. You see: the scientists that talk philosophy down are simply superficial: they have a philosophy (usually some ill-digested mixture of Popper and Kuhn) and think that this is the “true” philosophy, and do not realize that this has limitations.

Here is an example: theoretical physics has not done great in the last decades. Why? Well, one of the reasons, I think, is that it got trapped in a wrong philosophy: the idea that you can make progress by guessing new theory and disregarding the qualitative content of previous theories.  This is the physics of the “why not?”  Why not studying this theory, or the other? Why not another dimension, another field, another universe?  Science has never advanced in this manner in the past.  Science does not advance by guessing. It advances by new data or by a deep investigation of the content and the apparent contradictions of previous empirically successful theories.  Quite remarkably, the best piece of physics done by the three people you mention is Hawking’s black-hole radiation, which is exactly this.  But most of current theoretical physics is not of this sort.  Why?  Largely because of the philosophical superficiality of the current bunch of scientists.”

I find it intriguing that Rovelli suggests that “Science does not advance by guessing. It advances by new data or by a deep investigation of the content and the apparent contradictions of previous empirically successful theories.” I think this is an interesting and subtle claim with which I agree.

Would this have been brought to light by being better tuned into current philosophy of science? Unclear. I don’t know Hawking’s criticisms, but I think philosophers of science would admit—most of them, at least if they’ve been in the field for a while—that the promises of 30, 20 and 10 years ago, to be relevant to scientific practice, haven’t really panned out. To be clear, I absolutely think that philosophers of science can and should be at the forefront in any number of methodological, conceptual, and epistemological quagmires across the landscape of the natural and social sciences. I have written about this many times, and have organized forums with likeminded philosophers of science and scientists. With few exceptions, philosophers of science have not been involved in tackling these issues. In philosophy of statistics, philosophers are less of a presence now than when I was in graduate school[ii].

Here was the start of my introduction for a conference in June 2010 at the London School of Economics:

Debates over the philosophical foundations of statistics have a long and fascinating history; the decline of a lively exchange between philosophers of science and statisticians is relatively recent. Is there something special about 2011 (and beyond) that calls for renewed engagement in these fields? I say yes. There are some surprising, pressing, and intriguing new philosophical twists on the long-running controversies that cry out for philosophical analysis, and I hope to galvanize my co-contributors as well as the reader to take up the general cause. It is ironic that statistical science and philosophy of science—so ahead of their time in combining the work of philosophers and practicing scientists—should now find such dialogues rare, especially at a time when philosophy of science has come to see itself as striving to be immersed in, and relevant to, scientific practice. I will have little to say about why this has occurred, although I do not doubt there is a good story there, with strands colored by philosophy, sociology, economics, and trends in other fields. I want instead to take some steps toward answering our question: Where and why should we meet from this point forward?

The on-line volume growing out of that conference, and contributions obtained shortly after, may be found here.

In a week or so, my paper, “On the Birnbaum Argument for the Strong Likelihood Principle” (and my rejoinder to comments) will appear in Statistical Science. The Likelihood Principle and the general topic of inductive-statistical “concepts of evidence” was of keen interest in philosophy when I was starting out, along with philosophy of statistics more generally. Now there may actually be more interest among statisticians than philosophers. We shall see.

[i] A link Horgan gives to read more on Rovelli’s views on physics and philosophy is a 2012 conversation with him on Edge.org.

[ii] There was a post which garnered quite a lot of comment last year: “What should philosophers of science do?” (The comments got kind of off topic.)

Categories: StatSci meets PhilSci, strong likelihood principle


34 thoughts on “Has Philosophical Superficiality Harmed Science?”

  1. David Colquhoun

    I’m afraid that I must admit that I’m guilty in this respect, having written “Why philosophy is largely ignored by science”; see http://www.dcscience.net/?p=4799
    But that was before I found your blog.

    You might approve of the fact that it starts with a picture of R.A. Fisher. It was the basis of “In Praise of Randomisation” and was elicited when I discovered some philosophers who appeared not to understand the importance of randomisation.

    • David. First, thanks for the blog compliment, and for the link to your interesting article.
      As for Fisher vs. Popper: I think Popper could have learned a lot from Fisher, but if we assume one has separate access to the formal statistical methods, then scientists are likely to learn much more from Popper (if he is read broadly) than from Fisher.
      I am sympathetic to your claim that “there is a group of philosophers of science who could, if anyone took any notice of them, do real harm to research” when they reject randomization. The issue of randomization in PoS, well, you see, it’s complicated. It’s very much a result of Bayesianism in PoS, and Howson and Urbach’s effective training in opposition. David Papineau told me, not long ago, that the one huge problem with frequentist statistics was its reliance on randomization, which he thought was unjustifiable*. (Hence, my exile.) Senn has written some things on this blog alluding also to John Worrall (at LSE):
      https://errorstatistics.com/2012/07/09/stephen-senn-randomization-ratios-and-rationality-rescuing-the-randomized-clinical-trial-from-its-critics/
      https://errorstatistics.com/2013/07/14/stephen-senn-indefinite-irrelevance-2/

      One central criticism is what to do when randomization “fails”: how can you call for a do-over if it was the result of a legitimate randomization? (What do you say to this?)
      The biggest issue, naturally, is the justification for the concern with the data generation, once the data are in hand (violation of likelihood principle). Of course, Bayesian statisticians have found it precarious to whole-heartedly justify it as well–especially subjectivists. I mean if you’re going to condition on the outcome observed, then what’s with considering the general procedure as in randomization?

      That said, I do not dismiss concerns underlying a conflict that exists between those who want to rely on statistical modeling vs those who want to go experimental, using RCTs in various areas. One example is Clark Glymour and his causal modeling methods. A second, in a session with Nancy Cartwright in 2012 at the Phil Sci Assoc, picked up on criticisms of certain uses of RCTs in DevEcon (development economics). (They call it RCT for D.) Here they use RCTs to test the effectiveness of programs to reduce poverty (e.g., school uniforms or sex-ed, and completing high school, or avoiding unsafe sex). The problems people raise, however, as I tried to explain, aren’t about randomization, but classic statistical fallacies, e.g., post-data searches for subgroups and a tendency toward ad hoc story-telling when confronted with non-statistically significant results.

      Just a couple of months ago I attended a conference at LSE on personalized medicine. There were lots of good people there. Philosophers gave their “randomization isn’t a gold standard” argument, and it’s true that RCTs aren’t automatically protected from any number of problems. But it struck me as especially strange in a context which has demonstrated the severe problems with ignoring randomization, e.g., in micro array studies. I think I was the only one who spoke up on this…

      *I had written this in an equivocal way earlier.

    • David and others: There’s an important thing that doesn’t come across when we say things like, scientists can learn a lot from Popper. Without either training or a good teacher, it’s not at all obvious that a lot would be understood correctly. But if someone studied Popper with me, say, with a willingness to read, they would get a pretty thorough understanding that would go very far beyond the caricatures of falsification and skepticism. (Likewise for other philosophers if taught by scholars of that philosopher.) My point is simply that we often overlook how incredibly dense and specialized philosophical writing is, even where there are no technical terms. It takes our students a while, regardless of field. The casual reader might think there’s a lot less there than there actually is.

  2. Rovelli also speaks to scientists, including NDT, who argue that philosophy is irrelevant to their work, and that the real scientific questions of interest are orthogonal to philosophical reasoning. One way to put NDT’s point: If you want to figure out what particles are and how they behave, learning particle physics will help, while learning the latest take on theories of inference will be useless. From this perspective, by definition, anyone doing work that contributes to science IS a scientist and not a philosopher (a neat way to win the argument, but there’s a fallacy here somewhere…). Lawrence Krauss made a similar point, that philosophical knowledge is useless to science. Someone asked him, what about Bertrand Russell? He was a philosopher who made contributions to science. Krauss responded: Russell was a mathematician, not a philosopher. So, I imagine, if a philosopher were to contribute to physics today, NDT and Krauss would make a similar move – That person knows physics, ergo that person isn’t a philosopher. Given that most philosophers of physics have advanced degrees in physics and sometimes mathematics, I’m not sure how easily this distinction can be sustained.

    But there is another question in Rovelli, other than whether philosophers can contribute. Rovelli, as I understand him, has it that scientists who deride philosophy often have substantive philosophical commitments, which they use in their work: commitments to, for instance, falsification, empiricism, positivism, or, one might add, Bayesianism or frequentism. Because these scientists don’t think such commitments are important, they ignore the impact of their background beliefs on their scientific practice. This strikes me as an interesting point, but one that needs to be made and urged carefully – I can see some very bad arguments that might be made on its basis, but I can also see a careful analysis that might reveal substantive questions about scientific practice.

    • Philosopherpatton: Thanks for your comment, which I agree with, but I’m wondering about the “very bad arguments” that might come out of the recognition that scientists’ subliminal philosophies influence their work and positions more than they recognize.

  3. vl

    “I mean if you’re going to condition on the outcome observed, then what’s with considering the general procedure as in randomization?”

    The justification comes from the causal effect being estimated. It’s to make sure that the parameter being estimated as a causal effect is due to conditioning on nothing other than the intervention of interest. I find this most clear when thinking in terms of causal graphs (Glymour, Pearl, and others).

    Randomization addresses the issue of what effect is being estimated by a statistical procedure, which is entirely relevant regardless of whether bayesian or frequentist approaches are used.

    Pearl might even argue that this is a question outside the realm of statistics, but I don’t have a strong opinion on that.
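vl’s point, that randomization settles what effect a procedure is estimating, can be illustrated with a toy simulation (my own construction; the setup and numbers are hypothetical): a hidden confounder biases the observational contrast away from the true null effect, while randomized assignment recovers it.

```python
import random

random.seed(0)

def simulate(randomized, n=20000):
    # Hypothetical setup: the true treatment effect is zero, but a hidden
    # confounder u raises both the chance of treatment and the outcome.
    treated, control = [], []
    for _ in range(n):
        u = random.random()                   # hidden confounder
        if randomized:
            t = random.random() < 0.5         # assignment ignores u
        else:
            t = random.random() < u           # self-selection driven by u
        y = u + random.gauss(0, 0.1)          # outcome depends on u only, not t
        (treated if t else control).append(y)
    return sum(treated) / len(treated) - sum(control) / len(control)

# Observational contrast picks up the confounder (~0.33 here);
# the randomized contrast is near the true value of zero.
print(simulate(randomized=False))
print(simulate(randomized=True))
```

Same data-generating world, same estimator; only the assignment mechanism changes, and with it what the group difference estimates.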

    • vl: I take it you know there is a very long history of attempts by Bayesian statisticians to find a justification for randomization within their system. It’s not just something that clearly has a home there, even if roles can be and are found. Randomization has a few distinct functions, one serving as a basis for computing significance levels. That’s the one that fits least into the Bayesian framework. The general Bayesian line, at least in philosophy, is that it is not essential, advocating instead that ‘balance’ be achieved by other means, e.g., expert knowledge in matching, and various ingenious techniques. Some philosophers have argued that it encourages unthinking, mechanistic applications of statistics (Cartwright).
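The role of randomization as a basis for computing significance levels can be sketched with a toy randomization (permutation) test; the two groups and their outcomes below are made up for illustration.

```python
import random

random.seed(42)

# Made-up outcomes from a hypothetical randomized two-group experiment
treated = [5.1, 4.8, 6.0, 5.6, 5.9, 6.2]
control = [4.2, 4.9, 4.4, 5.0, 4.1, 4.6]

observed = sum(treated) / 6 - sum(control) / 6

# Under the null hypothesis the labels are exchangeable, so re-running the
# randomization itself yields the null distribution of the difference.
pooled = treated + control
reps, count = 10000, 0
for _ in range(reps):
    random.shuffle(pooled)
    diff = sum(pooled[:6]) / 6 - sum(pooled[6:]) / 6
    if diff >= observed - 1e-9:
        count += 1

p_value = count / reps   # significance level licensed by the design itself
print(p_value)
```

The physical act of randomizing is what makes the shuffled reassignments a legitimate reference set, which is why this function of randomization sits awkwardly in frameworks that condition only on the observed data.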

      • vl

        Thanks for the reply. I’m a practicing scientist, so what is and isn’t circumscribed by Bayesian philosophy only matters insofar as it gets me less wrong about causal relationships.

        From my readings, concepts such as shrinkage and partial pooling (and generally sharing information) in model-based methods help with estimation (as I said in the other comment, I don’t exclude RCTs from model-based analyses) and lend themselves to Bayesian methods, so I find them useful.

        However, estimation is only useful if what’s being estimated is a causal effect in the first place (hence my earlier point). Pearl argues that the distinction between a causal effect and a confounded correlation is extra-statistical. Perhaps he’s right, in which case the justification falls outside of Bayesianism (and frequentism for that matter). That’s perfectly fine with me.

  4. Also “statistical modeling vs those who want to go experimental, using RCTs in various areas.” As per my above comment, the statistical modeling subsumes RCTs as a special (ideal) case… it’s not a dichotomy.

    Also:

    – I think you mean “microarray”
    – A central problem with personalized medicine is that although treatment is randomized, predictors/classifier-variables on which “personalization” is determined are not randomized. (finally we may have some agreement here)

  5. Oh dear. Weinberg says philosophy of science is at best a “pleasing gloss” on science, and of no use to it. By and large, he’s right that it’s of no use. But it isn’t just that it’s of no use to science; much of it isn’t pleasing either. Philosophy of science is not alone in that–consider meta-ethics. Or better, read philosophyreviewed.blogspot.com

    Clark

    • Hi Clark, (Clark Glymour is a fellow philo-sufferer over many years.) Remember our gig at the PSA, I believe in 2004, calling on philosophers of science to connect up with methodological issues of modeling and science-based policy?
      I wonder where Weinberg got the “pleasing gloss” idea. Maybe from some of its neat and tidy rational reconstructions of theory change or confirmation.
      At least ethics is mainstream, unlike PoS.
      I’ll check out the blogspot.

  6. As is often the case here, I feel like I’ve just walked into a conversation at a party by a group of people whom I don’t know. 🙂 I’m not sure I understand all the concepts here.

    But I must say I was taken by that phrase, “Physics of the why not?”, which neatly describes what I’ve been told is what physics research is largely like these days. I guess it’s somewhat like the educational concept of Creative Spelling. 🙂 My impression (from afar, I have to admit) is that this is at least partly driven by today’s emphasis on the sensational. Something like capriciously adding another dimension is a crowd pleaser, I’m sure. If that is really the case, i.e. investigation of how the world works is no longer so much valued in physics, then yes, they certainly don’t need philosophy, do they?

    • MATLOFF: “I feel like I’ve just walked into a conversation at a party by a group of people whom I don’t know.”
      Well, that isn’t such a bad thing–come on in and have a drink, but no Kool Aid here.
      It’s an interesting idea that physicists are caught up in the drive for the sensational. I’ve always found physicists–and philosophers of physics–to be highly theoretical/speculative. But perhaps there is more of that in physics now, what with string theory and multiverses (not a crowd pleaser to me). On the other hand, hasn’t the Higgs result, and other experimental physics developments, shown just how far they can go with large scale experiments?

      As for “creative spelling”, which I don’t think I’ve heard of officially: is that a movement to drop official spellings and permit iChat slang and emojis (or whatever they’re called)?

  7. Glad to hear you have a Kool Aid-Free Zone at your parties. 🙂

    Higgs was a crowd pleaser too, right. But even then, their calling it the “God Particle” epitomizes what we’re talking about–an emphasis on sensationalism.

    As I understand it, Creative Spelling has the goal of making kids more comfortable with writing, something like that, by taking the pressure off them to spell correctly. Sounds odd to me, in this era of spell-checking software etc.

    Thanks for your comments on my blog, Deborah. It was a subject of controversy on Slashdot today, http://science-beta.slashdot.org/story/14/08/27/1219240/statistics-losing-ground-to-cs-losing-image-among-students And guess what? A lot of the posters dismissed the field of Statistics for its (perceived) failure to embrace Bayesianism! 🙂

    • Hi Matloff: No Kool-Aid, right; we drink Elbar Grease here in exile (it mixes bourbon and lemon liqueur). https://errorstatistics.com/elbar-grease/
      I went to your link on slashdot–not that I’d ever heard of that before. The person I read who remarked about people recognizing priors was completely inconsistent within his post. I didn’t see others touting this, though I may have missed them. But of course you had set the stage by bringing up that Bayesian candidate from machine learning who was clueless about her priors.
      I’ll tell you I know 2 grad students who moved into CS vs stat precisely because they didn’t want to feel pressured to pretend to go along with Bayesian techniques/drum beating. Another simply didn’t want to face ideological battles where she didn’t think she was on the majority (Bayesian) side. Most students want to minimize political factions where their mentors have so much power. Remember your story about the junior faculty member in your dept who started to blurt out that he too was a Bayesian (upon hearing colleagues praise another faculty member for wearing his Bayesian brotherhood stripes, or whatever).
      Spell checkers diminish spelling skills, even though they’re a gem. I was a semifinalist once…

        • Yes, Slashdot is very big in the CS world. Hard to follow the comments-within-comments-within-comments, though, which may have caused you to miss the ones that portrayed CS as enlightened for using Bayesian methods. Here is one:

        The “action” today is in Bayesian statistics. This formulation allows for statistical concepts to be expressed in ways that (I believe) most people can understand. But executing Bayesian statistics mandates that one understand the underlying formulation of models; in general, they are not black-box methods.

        Both of these sentences, in addition to characterizing (just some, the poster grants) statisticians as being hopelessly out of date, are really quite provocative. Needless to say, I disagree on both counts, especially the first. I think most consumers would be up in arms if they were told the truth about the use of subjective priors. (“Hey, wait a damn minute! Are you saying you watered down what the data said by adding a fudge factor reflecting your own biased opinion?! I want my money back!”)
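The “fudge factor” worry can be made concrete with a toy Beta-Binomial calculation (my own made-up numbers): a strong prior centered on 50% pulls an observed 90% success rate most of the way back toward the prior.

```python
# Data: 9 successes in 10 trials (made-up numbers)
heads, tosses = 9, 10

# Posterior mean under a Beta(a, b) prior is (heads + a) / (tosses + a + b)
flat = (heads + 1) / (tosses + 2)            # Beta(1, 1): lets the data speak
opinionated = (heads + 50) / (tosses + 100)  # Beta(50, 50): insists on ~50%

print(round(flat, 3))         # 0.833
print(round(opinionated, 3))  # 0.536
```

Whether that pull is wisdom or bias is exactly what the consumer would want disclosed.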

        But for (unwitting) comic relief, did you see the reply to the person, u38cg, who wrote the remark you cited, “Anyone doing ‘statistics’ who doesn’t understand the concept of a prior is just pretending to do statistics”? The reply said, simply, “Please explain” — thereby amplifying u38cg’s point. 🙂

        • The action today* is by the consumer of statistics who is increasingly refusing to fund “trust me” science wherein we are not allowed to know the method of data dredging, data ransacking, cherry picking, and optional stopping, simply because it violates someone’s favorite statistical philosophy (Bayesian or likelihood). Replication and responsibility–spanking new ideas!– turn on holding the Bayesian’s feet to the fire, and this requires knowing just how often they produce hunky dory models even if they’re wrong. The root of Bayesianism is a “gentleman’s” logic where the untutored masses are in no position to hold the “experts” accountable. It won’t stand.
          *futuristically

          • anon-fan

            I couldn’t agree more. If only all those applied stat papers had used classical methods we wouldn’t have this crisis of reproducibility now. I’m glad someone is holding the Bayesians’ feet to the fire whenever p-values yield wrong conclusions.

            Hopefully now frequentist methods won’t be relegated to specialized topics only seen in late statistics graduate school, as they have been for the past 70 years. Now that Bayesians are finally being held accountable, with any luck classical statistics will become mainstream and finally taught with the serious dedication it deserves. No more amateur/lackadaisical/unserious teaching of frequentism allowed!

            • A non-fan: I don’t think you understood my reply to Matloff, or maybe I wasn’t clear enough, so let me try again. I was making reference to a “today” that is so new that it is futuristic. In that future time, we may hope, a voice will be given to those “most consumers [who] would be up in arms if they were told the truth about the use of subjective priors. (‘Hey, wait a damn minute! Are you saying you watered down what the data said by adding a fudge factor reflecting your own biased opinion?! I want my money back!’)” [from Matloff]. What is so new as to be (almost) futuristic, as well, is ascertaining what elements of self-correction, fraudbusting and severe scrutiny enable humans to make progress in error prone domains. So, you see, I was reacting to Matloff’s remarks, and I fully admit that it hasn’t happened yet; feet are not often being held to the fire, etc.

              I’m not sure what you mean by “classical methods” but if broadly understood error statistical methods and their philosophy were well taught and responsibly used, indeed “we wouldn’t have this crisis of reproducibility now”—if indeed there is “a crisis” (in contrast to sloppy/bad science in some fields).
              Error probabilities associated with methods don’t go away. It’s rather that their logic completely breaks down when they are abused and distorted, e.g., applied to data-dependent hypotheses in hunting, ignoring stopping rules, untested model assumptions, and using p-values as mere “fit” measures without the associated sampling distributions holding. These well-known foibles were clearly exposed back in the 60s by Morrison and Henkel and others. Now some people act as if the methods are to blame for their utter violation of the statistical requirements that allow them to work.
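The point about ignoring stopping rules can be illustrated with a small simulation (a sketch of my own, not from the thread; the numbers are illustrative): testing after every batch and stopping at the first nominally significant result drives the Type I error rate well above the advertised 0.05, even though the null is true.

```python
import random

random.seed(7)

def first_look_rejects(max_looks=20, batch=10):
    # Test after every batch of 10 normal(0, 1) observations and stop at the
    # first |z| > 1.96. The null is true, so any rejection is a false positive.
    data = []
    for _ in range(max_looks):
        data += [random.gauss(0, 1) for _ in range(batch)]
        n = len(data)
        z = (sum(data) / n) * n ** 0.5   # z = mean / (sigma / sqrt(n)), sigma = 1
        if abs(z) > 1.96:
            return True
    return False

trials = 2000
rate = sum(first_look_rejects() for _ in range(trials)) / trials
print(rate)   # far above the nominal 0.05 (roughly 0.2-0.3 with 20 looks)
```

Reporting the final p-value as if the sample size had been fixed in advance is exactly the kind of abuse for which the method itself is not to blame.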

              • anon-fan

                I wish Bayesians would simply define their terms. They never tell you what a prior is. Then their feet could truly be held to the fire. Is there a gentleman’s agreement to not tell anyone? Why the secrecy? I don’t know how to test someone’s undefined opinions!

            • A non-fan: As for your undisguised sarcastic points, I really do think it’s too bad if it’s true that central error statistical results and methods are taught lackadaisically. Worse, if taught with a suggestion that there’s lots better stuff you’re not allowed to teach. I have absolutely no idea, since I’m an outsider and only attend stat classes run by enthusiastic scholars who demonstrate how the methods and models really work. I was completely thrilled with stat theorems I learned from the first day I accidentally attended a stat class, and still feel the same way as I learn new methods. I am surprised at your suggestion that frequentist error statistical methods are so divorced from Bayesian methods, unless you have in mind radically subjective methods. But then you wouldn’t be any kind of a fan of mine, and only being disingenuous (a non fan in fact). In that case, readers might need to reverse or suspend all of your remarks to avoid confusion.

        • E.Berk

          Matloff, it’s time to warn people of what you say:
          “I think most consumers would be up in arms if they were told the truth about the use of subjective priors. (“Hey, wait a damn minute! Are you saying you watered down what the data said by adding a fudge factor reflecting your own biased opinion?! I want my money back!”)”
          Occupy Bayesians!

          • I’m waiting for my Big Chance, E-Berk. Some study will come along that Congress will use as the rationale for some sensitive policy decision, yet upon closer inspection will be found to be based on a subjective prior. I’m hoping for a headline along the lines of “Key Study Revealed to Have Conscious, Deliberate Bias Incorporated by Author.” 🙂

  8. Christian Hennig

    “Philosophy” can mean many things… I tend to use the term quite generally for thinking about issues such as what knowledge is and should be, why we do what we do, what can be known and what cannot, etc. In this sense, I think that the majority of scientists have philosophical thoughts and some interest in the thoughts of others. Sometimes I feel sad about the fact that this often seems so detached from Philosophy or PoS as a discipline, and I’m saying this from a scientist’s perspective. Probably there are problems on both sides, and also this is to some extent inevitable, due to specialization in philosophical and scientific writing, as already mentioned by Mayo.

    I think that an appropriate attitude for scientists toward the discipline of Philosophy/PoS would be one of skeptical curiosity. Unfortunately people seem to just take in caricatures of basic ideas by, for example, Popper and Kuhn, caricatures that can be explained in a single paragraph, embracing or bashing them depending on how they relate to their own intuition. I don’t think that scientists should listen to philosophers for finding out what they should do and how they should think. I think they should listen to the philosophers (and sometimes openly disagree with them) for widening their scope and sharpening their own thoughts, some of which are philosophical. A discussion on the influence of Philosophy/PoS on science would then not center on whether science follows the right or wrong philosophy, as suggested by Rovelli.

    One message for philosophers of science in this, I think, is that they should listen to and address the issues scientists are interested in, and actively take part in scientific discussions. Mayo seems to be quite good at this, and a few others are too (Hasok Chang for example); I’m not really sure to what extent these are exceptions but they may well be.

    • Christian: Well something got you to mosey on into my seminar at the London School of Econ in 2008, and I’m so glad that it did. I/We should hold a forum at some point, once my book is done….

      Crossing the divide is difficult (especially when so many assume there’s no such thing–it’s in Eng. or whatever language they speak, after all) and requires some patient immersion.

  9. Steven McKinney

    Rovelli: “But most of current theoretical physics is not of this sort. Why? Largely because of the philosophical superficiality of the current bunch of scientists.”

    Most scientists are not good philosophers – they are good scientists, and should know when they are out of their bailiwick. Their superficial efforts, from a philosophical point of view, are lacking. I am amazed that Neil deGrasse Tyson, one of my scientist heroes, would say such silly stuff about philosophy. Scientists should seek out decent philosophers. I’d understand if Neil deGrasse Tyson said “I just don’t see any good philosophers of science right now” rather than be dismissive of a pursuit whose central core is trying to understand how we know anything. As String Theory languishes (I’m more into Yarn Theory these days) some good philosophical discourse will do the Physics community a world of good. Neil should read some of Einstein’s writings on the value of philosophy done properly.

    I agree with Mayo: “In philosophy of statistics, philosophers are less of a presence now than when I was in graduate school[ii].” Reference [ii] being to another blog post discussing: “Philosophers of statistics were ahead of their time in the 70s and early 80s, engaging in discussions side by side with statistical practitioners (Godambe and Sprott 1971, Harper and Hooker, 1977 come to mind.)”. That’s when I was first studying statistics, and had the great good fortune to study at the University of Waterloo with Godambe and Sprott. They were indeed heady times. Godambe would grill us in theory classes – and woe betide the student who said nothing. Godambe didn’t care if you blurted out the right answer or the wrong answer the first time – he just wanted you to blurt out something – to think and discuss – because which answer is right isn’t always so obvious. That’s why philosophical discourse is necessary.

    So I am delighted to have recently come across Mayo’s writings and blog site. Statistics indeed needs another round of philosophical introspection.

    One of the great things I saw come out of the philosopher and statistical practitioner discourses of the 1970s and 1980s was an acceptance of Bayesian ideas and methodologies, ideas that had been pushed underground by the majority of upper level statisticians. These things don’t always happen because this idea or that idea is right, but also because powerful personalities with the ability to make and break careers dictated these directions. As attitudes towards Bayesian ideas softened, I took every opportunity to study and discuss Bayesian ideas. On to the University of Washington in the late 1980s, I was surrounded by an active Bayesian research group in the UW department of Statistics. Whether or not all the ideas there generated will stand the test of time is still up for debate – but that debate should involve philosophers, not just powerful statistical personalities sitting on granting and tenure boards lording their attitudes over others.

    So at this point statistics will benefit from another round of philosophical introspection. What have we learned over the past three decades, with Bayesians out of the closet and new ideas hatched? Which will prove useful, and which are snake oil? I’ve seen several “personalities” visit this blog site and say the strangest things, then head off to their own blog sites to say dismissive things about severe testing, including ad hominem attacks against Mayo. Some of those blog sites no longer exist, and some sit dormant – but not this one. I can’t imagine how thick one’s skin must be to hang in there as a philosopher, but I for one am most appreciative of the efforts of competent philosophers, in particular, at present, Mayo.

    Philosophical superficiality doesn’t do anyone much good, at any point in history. Superficial philosophers should step aside, and the rest of us should seek out the talented philosophers who occasionally grace our scientific arenas and pay closer attention. I’m not up on philosophers of physics at present, but after reading many of the posts on this blog, and the papers and books referenced, I am most pleased that serious philosophical discourse is once again reappearing in the statistical sciences.

    Christian Hennig mentions Hasok Chang – I’ll do some homework on that suggestion – and if others are aware of current philosophers relevant to statistics, keep the suggestions coming. Hennig is right – scientists need to listen to philosophers, because thinking clearly requires philosophical talent. Oh, if only it were easier! Thankfully, philosophical discussions are interesting and stimulating when properly done.

    • Steven: Thanks so much for your appreciative comment–rare and encouraging (even if it does make it harder to quit).
      You couldn’t be more correct in noting the role of personality in this arena: “These things don’t always happen because this idea or that idea is right, but also because powerful personalities with the ability to make and break careers dictated these directions.”

  10. Pingback: Friday links: Lego academics vs. admin, ESA vs. other summer conferences, greatest syllabus ever, a taxonomy of papers, and more | Dynamic Ecology

  11. Christian Hennig

    Not sure yet whether this is a recommendation, but I’m currently fighting my way through some Bas van Fraassen reading. Mayo, any comments on him?

    • Christian: Do you mean his “constructive empiricism”? He’s also done work in philosophy of physics. I’m not up on his current work, but I often include his material in classes. What surprises me is that a constructive empiricist (non-realist) like him assumes there is no problem with making inductive inferences about observables, as if the shaky business enters only with unobservables. When I asked once how he justified the inductive inferences to empirical claims, he didn’t seem to think there was a problem. I find inferences to observables and unobservables are on a par and need to be critically evaluated on a case-by-case basis. Perhaps if you ask me something more specific I can be of more use.

  12. Christian Hennig

    Thanks. I’m reading “Scientific Representation” (I thought I should go for his up-to-date views) – obviously I sympathize with something called “constructive empiricism” but I should really know the details better when writing something like this. The issue you raise is also of interest to me. When making inference about observables, we could check, in principle, whether we got it wrong or not. It’s well defined what that means. Not so regarding unobservables. But that’s rather me than van Fraassen, for the moment (and it looks suspiciously de Finettian…).

    • Christian: Observables – and VF is the first to admit there is no hard line, but rather that scientists themselves (or rather scientific theories!) draw the observable/unobservable divide – include empirical generalizations of all stripes. These are not claims you can check, even in principle, to see whether we got them wrong or not.

  13. Then there are those saying “philosophy is a dirty word” because they’re allowing that evolution need not be inconsistent with God.
    http://whyevolutionistrue.wordpress.com/2012/07/03/elliott-sober-argues-again-that-god-might-have-caused-mutations/

  14. Richard Sanchez

    CONJECTURE. TESTED BY OBSERVATIONS AND EXPERIMENTS, NOT DERIVED FROM IT. GOOD HARD TO VARY EXPLANATIONS ARE KEY AS DAVID DEUTSCH NOTES. FREE CREATIVITY AND GUESSING AS FEYNMAN NOTES. CARL IS NOT BRIGHT….
