Our presentations from the PSA: Philosophy in Science (PinS) symposium

Philosophy in Science:
Can Philosophers of Science Contribute to Science?


Below are the presentations from our remote session on “Philosophy in Science” on November 13, 2021 at the Philosophy of Science Association meeting. We are having an extended discussion on Monday, November 22 at 3pm Eastern Standard Time. If you wish to take part, write to me of your interest by email (error) with the subject “PinS”, or use the comments below. (Include your name, affiliation and email.)

Session Abstract: Although the question of what philosophy can bring to science is an old topic, the vast majority of current philosophy of science is a meta-discourse on science, taking science as its object of study, rather than an attempt to intervene on science itself. In this symposium, we discuss a particular interventionist approach, which we call “philosophy in science (PinS)”, i.e., an attempt at using philosophical tools to make a significant scientific contribution. This approach remains rare, but has been very successful in a number of cases, especially in philosophy of biology, medicine, physics, statistics, and the social sciences. Our goal is to provide a description of PinS through both a bibliometric approach and the examination of specific case studies. We also aim to explain how PinS differs from mainstream philosophy of science and partly similar approaches such as “philosophy of science in practice”.

Here are the members and the titles of their talks. (Link to session/abstracts):

  • Thomas Pradeu (CNRS & University Of Bordeaux) & Maël Lemoine (University Of Bordeaux): Philosophy in Science: Definition and Boundaries
  • Deborah Mayo (Virginia Tech): My Philosophical Interventions in Statistics
  • Elliott Sober (University Of Wisconsin – Madison): Philosophical Interventions in Science – a Strategy and a Case Study (Parsimony)
  • Randolph Nesse (Arizona State University) & Paul Griffiths (University of Sydney): How Evolutionary Science and Philosophy Can Collaborate to Redefine Disease


T. Pradeu & M. Lemoine slides: “Philosophy in Science: Definition and Boundaries”:


D. Mayo slides: “Philosophical Interventions in the Statistics Wars”:


E. Sober: “Philosophical Interventions in Science – A Strategy and a Case Study (Parsimony)”


R. Nesse & P. Griffiths: “How Evolutionary Science and Philosophy Can Collaborate to Redefine Disease”:

Categories: PSA 2021


7 thoughts on “Our presentations from the PSA: Philosophy in Science (PinS) symposium”

  1. Hello,
    I would love to take part in the extended discussion.

    Ze-No Centre for Logic and Metaphysics, Indonesia

  2. Stuart Bevan

    I think you (plural) have a really good idea in this approach. There is absolutely no question in my mind that:
    a. the critical thinking skills of Philosophy/Philosophers have a valuable role to play in the Sciences;
    b. based on my own experience in business (infra), those competences lead to real-world results.

    I return to Philosophy after a 37-year career in business; I can say from personal experience that the analytic methods I learnt in Philosophy enabled me to solve, I would guess, most if not all of the most difficult problems I came across in the World of Commerce.
    Those methods certainly enabled me to write twelve (12) patents. Eight (8) of those issued; three (3) of the remainder were filed nationally this past June. The first Patent had no prior art; analytic and writing skills are two key abilities for authoring Patents.
    More Philosophers and more Philosophy are needed in Science and Technology.
    My 2 cents worth.

    • Stuart:
      Thanks for your comment. You’re saying that analytical/philosophical skills were importantly relevant for your patents? Wow. I’d be very curious to know what any of them are.

      • Stuart Bevan

        Deborah, I think background information is needed in addition to the list. Working on a synopsis which will hopefully help the overall discussion.
        Happy Thanksgiving to all in America!

      • Stuart Bevan

        Last week I sent you a detailed summary of the Patents in an email, with some technical background as a PDF attachment. I tried but formatting constraints stopped me from placing the information into this blog.
        The rest of this commentary as before is my 2 cents worth.
        Philosophy In Science has, methinks, two (2) components:
        1. consult to Scientists;
        2. collaborate with Scientists.
        To illustrate, Elliott Sober’s presentation strikes me as an instance of consultation, where the Philosopher used his analytic skills to propose a solution to a scientific problem, question or issue; which, it seems to me, is a perfectly valid approach.
        However, consultants suggest; they do not implement.
        I propose that, in addition to consultation in the Sciences, the Philosophy In Science initiative consider collaboration with Scientists.
        That in turn implies implementation, at least to some extent: seeing if it works, or not, as the case may be.
        It seems to me that if such an approach were to be taken, the actual praxes of the Sciences must be considered. The question thus arises – which Sciences? As an example consider this non-exhaustive list:
        the Engineering (largely if not totally ignored in PS analyses);
        the Social;
        the Life;
        the Physical; or
        the Formal Sciences.
        Each of these Faculties or groupings of sciences, all with numerous departments, has its own unique research methodologies. Consideration of the groups of the Sciences, and the individual disciplines within each group, perforce raises questions about differences in the experimental methodologies employed by the various specialties, along with the consequent dissimilarities in associated experimental designs, not forgetting instrumentation. Experiments in Medicine are not the same as those in Physics. Consider CERN (endnote 1) versus Randomized Controlled Trials (endnote 2). As Allan Franklin has dryly noted, you don’t have to ask an atomic particle for informed consent (my paraphrase).
        The domain specificity of scientific practices then arises from the question of the realities (where realities iff truths) of nature (endnote 3). Given this perspective ab initio, what counts as data must be domain specific; so too must the data acquisition and analysis methods. (I am fully aware that this is a thoroughly Empiricist approach to the PS, which I am happy both to defend and justify. Just not in this context, as we are after all looking at how Philosophy does (can?) work IN science; not debating which Philosophy OF Science.)
        As the explananda vary widely across the domain of the Sciences, so inevitably will the explanantia (endnote 4). It is worthwhile copying the original definitions from the Hempel and Oppenheim paper. Videlicet:

        §3. The basic pattern of scientific explanation.
        From the preceding sample cases let us now abstract some general characteristics of scientific explanation. We divide an explanation into two major constituents, the explanandum and the explanans. By the explanandum, we understand the sentence describing the phenomenon to be explained (not that phenomenon itself); by the explanans, the class of those sentences which are adduced to account for the phenomenon. As was noted before, the explanans falls into two subclasses; one of these contains certain sentences C1, C2, · · ·, Ck which state specific antecedent conditions; the other is a set of sentences L1, L2, · · ·, Lr, which represent general laws.
        If a proposed explanation is to be sound, its constituents have to satisfy certain conditions of adequacy, which may be divided into logical and empirical conditions.
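The schema quoted above can be set out compactly (my own rendering of Hempel and Oppenheim’s §3, using their symbols):

```latex
% Deductive-nomological (D-N) schema, after Hempel & Oppenheim (1948), §3:
% the antecedent conditions and general laws (together, the explanans)
% jointly entail the explanandum sentence E.
\[
\frac{\overbrace{C_1, C_2, \ldots, C_k}^{\text{antecedent conditions}}
      \qquad
      \overbrace{L_1, L_2, \ldots, L_r}^{\text{general laws}}}
     {E \quad (\text{explanandum})}
\]
```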

        “…which may be divided into logical and empirical conditions.” And thereby hangs a tale. For in the Sciences a great divide exists between Theory and Experiment; each must comprehend the other. However, as was explained to me by an Experimental Physical Chemist, Experimentalists, whilst knowing a lot of Theory, will inevitably turn to a Theoretical Chemist for interpretation of the data (the explanandum). In turn the Theorist will, or may, propose a formalism (the explanans) to complete the explanation. The Experimentalist will then investigate the data in light of the explanans and either agree that the proposed explanans fits, or decide that it needs tweaking or perhaps reworking. Thus, a reciprocal mutual interaction of Experiment and Theory. Allan Franklin has written extensively, and IMHO correctly, on the epistemology of experiment, where he shows the mutual interdependence of Experiment and Theory. Franklin uses high-energy Physics experiments to demonstrate and support his arguments. His precise summaries of the Theory-Experiment interconnections give additional support to the comments he makes on the Philosophical implications of his position. But always and everywhere, in Franklin’s analyses as in Science as a whole, Experiment trumps Theory.
        Thus, the iterative interplay between Experiment and Theory, largely if not totally ignored in the Realist/anti-Realist debates. (Except by Franklin; there may be others of the same persuasion, but Franklin’s the one I know of.) An additional complicating factor is the Domain Specificity (supra) of knowledge in the Sciences.
        Which leads directly into the issue underpinning collaboration with the Sciences: which Science? Given the vast amount of knowledge required to attain even a minimal level of understanding in any given area of the Sciences, pursuing the approach of collaboration with Scientists necessitates, I think, “choosing your lane and sticking to it”.

        Another issue which arguably derives from this consideration of the Theory/Experiment interplay is the Domain Specificity of the Sciences (supra). In a word, the problem statement is this: much of the PS ignores the praxes of the Sciences. In so doing, the PS ignores or overlooks (or both) the Domain Specificity of Knowledge (DSK) in the Sciences. This DSK encompasses machines such as the Large Hadron Collider (LHC) at CERN (High Energy Physics) (endnote 1), the comprehensive use in Medicine of blinding (endnote 2 – Randomized Controlled Trials, RCTs), and manually counting colonies with a binocular microscope in a Microbiology Laboratory (endnote 5 – Hacking).

        For discussion and debate – I hope.

        1. The name CERN is derived from the acronym for the French “Conseil Européen pour la Recherche Nucléaire”, or European Council for Nuclear Research, a provisional body founded in 1952 with the mandate of establishing a world-class fundamental physics research organization in Europe.
        “… CERN’s main area of research is particle physics – the study of the fundamental constituents of matter and the forces acting between them. Because of this, the laboratory operated by CERN is often referred to as the European Laboratory for Particle Physics.”
        The Large Hadron Collider (LHC) is the world’s largest and highest-energy particle collider and the largest machine in the world.

        2. Definition/Introduction
        A clinical research study or a clinical trial is an experiment or observation performed on human subjects to generate data on the safety and efficacy of various biomedical and behavioral interventions.
        Blinding or masking refers to the withholding of information regarding treatment allocation from one or more participants in a clinical research study. It is an essential methodological feature of clinical studies that helps maximize the validity of the research results.

        Blinding refers to the act of masking the nature of the treatment that participants in a randomized controlled trial (RCT) receive. …
        RCTs are classified into four types on the basis of their level of blinding: open label, single blind, double blind and triple blind. Open-label RCTs employ no blinding and are thus the most susceptible to measurement bias. Open-label RCTs should only be conducted if blinding is deemed to be impossible, such as in comparisons of medical and surgical interventions: it is of course not ethical to subject patients to sham surgeries. In single-blind RCTs the nature of the treatment is concealed from either the study participants or the research team, whereas in double-blind RCTs it is concealed from both the participants and the researchers, including those who administer the treatment. Triple-blind studies entail concealing the nature of the treatment from participants, researchers and administrators of the treatment, and data analysts. In triple-blind studies, data are analyzed by codes to prevent data analysts from introducing judgment bias because of their knowledge of group assignments.
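The four blinding levels described above amount to a simple mapping from trial design to the roles from whom treatment allocation is concealed. A minimal sketch (the names and helper function here are mine, purely illustrative, not drawn from any trial software):

```python
# Illustrative only: the four RCT blinding levels, mapped to the roles
# from whom treatment allocation is concealed.
BLINDING_LEVELS = {
    "open label":   set(),                         # no blinding at all
    "single blind": {"participants"},              # or the research team, per design
    "double blind": {"participants", "researchers"},
    "triple blind": {"participants", "researchers", "data analysts"},
}

def is_concealed(level: str, role: str) -> bool:
    """True if treatment allocation is concealed from `role` at this blinding level."""
    return role in BLINDING_LEVELS[level]
```

For example, `is_concealed("triple blind", "data analysts")` is true, whereas in a double-blind trial the analysts still know the group assignments.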

        3. “Research on interdisciplinary science has for the most part concentrated on the institutional obstacles that discourage or hamper interdisciplinary work, with the expectation that interdisciplinary interaction can be improved through institutional reform strategies such as through reform of peer review systems. …
        Lessons from cognitive science and anthropological studies of labs in sociology of science suggest that scientific practices may be very domain specific, where domain specificity is an essential aspect of science that enables researchers to solve complex problems in a cognitively manageable way. The limit or extent of domain specificity in scientific practice, and how it constrains interdisciplinary research, is not yet fully understood, which attests to an important role for philosophers of science in the study of interdisciplinary science.”
        McLeod’s (I think well argued) paper is aimed at interdisciplinary research hence his use of “domain specificity” is arguably a stipulative definition.
        Without going into details, I have taken the liberty to extend McLeod’s definition.
        For additional background please see:
        (Gupta, Anil, “Definitions”, The Stanford Encyclopedia of Philosophy (Winter 2021 Edition), Edward N. Zalta (ed.), forthcoming URL = .)
        MacLeod, M. A. J. (2018). What makes interdisciplinarity difficult? Some consequences of domain specificity in interdisciplinary practice. Synthese, 195(2), 697-720. https://doi.org/10.1007/s11229-016-1236-4

        4. Studies in the Logic of Explanation
        Carl G. Hempel and Paul Oppenheim, Philosophy of Science, Vol. 15, No. 2 (Apr., 1948), pp. 135–175.

        5. Do we see through a microscope?
        Ian Hacking Pacific Philosophical Quarterly October 1981 https://doi.org/10.1111/j.1468-0114.1981.tb00070.x
        PDF available at: https://philpapers.org/archive/HACDWS.pdf

  3. Interesting, sorry I missed it all but thankfully some of it was put here.

    My interests may be somewhat different – highlighting the role of abstract representations in the scientific process, and how that needs to be central in statistics, where the abstract representations are probability models. So a general outlook rather than a specific resolution of a problem.

    These probability models and data give rise to likelihood and, in turn, p-values, severity, probation and posterior probabilities. But if the abstract representations reflected in the probability models misrepresent the world in important ways, these can all be vacuous. In Nesse and Griffiths’ talk, the manifest and scientific image being too different.

    In the first example in Sober’s talk, given what I understood, the phylogenetic trees are an abstract representation of the world, and so ideally the likelihood should be based on a probability model that represents the world the same way.

    As for likelihood: ideally it should be thought of as an assessment of compatibility with the data and assumptions, over all parameter values; maximum likelihood is just the one point in the parameter space making it most compatible.
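A tiny sketch may make the point concrete (my own toy example, not from the talks): evaluating a binomial likelihood across a whole parameter grid treats every parameter value as a compatibility assessment, with the maximum-likelihood estimate being merely the top of that curve.

```python
# Toy illustration: likelihood as compatibility over the whole parameter
# space; the MLE is just one point on the curve. (Made-up numbers.)
from math import comb

def likelihood(p: float, k: int = 7, n: int = 10) -> float:
    """Binomial likelihood of success probability p, given k successes in n trials."""
    return comb(n, k) * p**k * (1 - p) ** (n - k)

# Assess compatibility at every grid point, not just the maximizer...
grid = [i / 100 for i in range(1, 100)]
curve = {p: likelihood(p) for p in grid}
# ...the maximum-likelihood estimate is simply the most compatible point.
mle = max(curve, key=curve.get)
```

Here the whole `curve` carries the compatibility information; reporting only `mle` (0.7, i.e. k/n) discards it.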

    Keith O’Rourke

    • Keith: It would have been good if the group had discussed more of the content of our interventions, including some of the points you mention.
      The thing about viewing likelihood as assessing compatibility is that this puts it at odds with measures like p-values. It’s true that p-values can be used as mere “fit” measures, but this is to lose their error-probabilistic aspects.
