O & M conference

“Error statistical modeling and inference: Where methodology meets ontology” A. Spanos and D. Mayo


A new joint paper…

“Error statistical modeling and inference: Where methodology meets ontology”

Aris Spanos · Deborah G. Mayo

Abstract: In empirical modeling, an important desideratum for deeming theoretical entities and processes real is that they be reproducible in a statistical sense. Current-day crises regarding replicability in science intertwine with the question of how statistical methods link data to statistical and substantive theories and models. Different answers to this question have important methodological consequences for inference, which are intertwined with a contrast between the ontological commitments of the two types of models. The key to untangling them is the realization that behind every substantive model there is a statistical model that pertains exclusively to the probabilistic assumptions imposed on the data. It is not that the methodology determines whether to be a realist about entities and processes in a substantive field. It is rather that the substantive and statistical models refer to different entities and processes, and therefore call for different criteria of adequacy.

Keywords: Error statistics · Statistical vs. substantive models · Statistical ontology · Misspecification testing · Replicability of inference · Statistical adequacy

To read the full paper: “Error statistical modeling and inference: Where methodology meets ontology.”

The related conference.

Mayo & Spanos spotlight

Reference: Spanos, A. & Mayo, D. G. (2015). “Error statistical modeling and inference: Where methodology meets ontology.” Synthese (online May 13, 2015), pp. 1-23.

Categories: Error Statistics, misspecification testing, O & M conference, reproducibility, Severity, Spanos

Mayo’s slides from the Onto-Meth conference*

Methodology and Ontology in Statistical Modeling: Some error statistical reflections (Spanos and Mayo) [uncorrected]

 Our presentation falls under the second of the bulleted questions for the conference (conference blog is here):

How do methods of data generation, statistical modeling, and inference influence the construction and appraisal of theories?

Statistical methodology can influence what we think we’re finding out about the world, in the most problematic ways, traced to such facts as:

  • All statistical models are false
  • Statistical significance is not substantive significance
  • Statistical association is not causation
  • No evidence against a statistical null hypothesis is not evidence the null is true
  • If you torture the data enough they will confess.

(or just omit unfavorable data)

These points are ancient (lying with statistics; “lies, damned lies, and statistics”)

People are discussing these problems more than ever (big data), but it’s rarely realized how much certain methodologies are at the root of the current problems.

__________________1__________________

All Statistical Models are False

Take the popular slogan in statistics and elsewhere: “all statistical models are false!”

What the “all models are false” charge boils down to:

(1) the statistical model of the data is at most an idealized and partial representation of the actual data generating source.

(2) a statistical inference is at most an idealized and partial answer to a substantive theory or question.

  • But we already know our models are idealizations: that’s what makes them models.
  • Reasserting these facts is not informative.
  • Yet they are taken to have various (dire) implications about the nature and limits of statistical methodology.
  • Neither of these facts precludes the use of these models to find out true things.
  • On the contrary, it would be impossible to learn about the world if we did not deliberately falsify and simplify.

__________________2__________________

  • Notably, the “all models are false” slogan is followed up by “But some are useful.”
  • Their usefulness, we claim, lies in being capable of adequately capturing an aspect of a phenomenon of interest.
  • Then a hypothesis asserting its adequacy (or inadequacy) is capable of being true!

Note: All methods of statistical inferences rest on statistical models.

What differentiates accounts is how well they step up to the plate in checking adequacy, and in learning despite violations of statistical assumptions (robustness).
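The idea of checking statistical adequacy can be illustrated with a toy sketch (this example is mine, not from the paper): fit a simple regression whose statistical model assumes independent, identically distributed errors, then probe that assumption by applying a runs test to the signs of the residuals. Too few (or too many) runs is evidence of misspecification.

```python
import math
import random
import statistics

random.seed(1)

# Toy data: y depends linearly on x plus noise; the fitted statistical
# model assumes IID errors.
n = 200
x = [i / n for i in range(n)]
y = [2.0 + 3.0 * xi + random.gauss(0, 0.5) for xi in x]

# Ordinary least squares for y = a + b*x (closed form, one regressor).
xbar, ybar = statistics.mean(x), statistics.mean(y)
b = sum((xi - xbar) * (yi - ybar) for xi, yi in zip(x, y)) / \
    sum((xi - xbar) ** 2 for xi in x)
a = ybar - b * xbar
resid = [yi - (a + b * xi) for xi, yi in zip(x, y)]

# Runs test on residual signs: under the IID assumption the number of
# sign runs has a known approximate normal distribution; a large |z|
# indicates the probabilistic assumptions of the model are violated.
signs = [r > 0 for r in resid]
runs = 1 + sum(s1 != s2 for s1, s2 in zip(signs, signs[1:]))
n_pos = sum(signs)
n_neg = len(signs) - n_pos
mu = 1 + 2 * n_pos * n_neg / (n_pos + n_neg)
var = (mu - 1) * (mu - 2) / (n_pos + n_neg - 1)
z = (runs - mu) / math.sqrt(var)
print(f"slope = {b:.2f}, runs-test z = {z:.2f}")
```

With well-behaved simulated data the z-score stays small; refitting with, say, strongly autocorrelated noise would push |z| up, flagging the model as statistically inadequate before any substantive inference is drawn from it.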

__________________3__________________

Statistical significance is not substantive significance

Statistical models (as they arise in the methodology of statistical inference) live somewhere between

  1. Substantive questions, hypotheses, theories: H
  2. Statistical models of phenomena, experiments, data: M
  3. Data: x

What statistical inference has to do is afford adequate link-ups (reporting precision, accuracy, reliability)

__________________4__________________

Categories: O & M conference

Mayo: Meanderings on the Onto-Methodology Conference

Writing a blog like this, a strange and often puzzling exercise[1], does offer a forum for sharing half-baked chicken-scratchings from the back of frayed pages on themes from our Onto-Meth[2] conference from two weeks ago[3]. (The previous post had notes from blogger and attendee, Gandenberger.)

Onto-Meth conference


Several of the talks reflect a push-back against the idea that the determination of “ontology” in science—e.g., the objects and processes of theories, models and hypotheses—is (or should strive to correspond to?) “real” objects in the world and/or what is approximately the case about them. Instead, at least some of the speakers wish to liberate ontology to recognize how “merely” pragmatic goals, needs, and desires are not just second-class citizens, but can and do (and should?) determine the categories of reality. Well, there are a dozen equivocations here, most of which we did not really discuss at the conference.

In my own half of the Spanos-Mayo (D & P presentation[4]) I granted and even promoted the idea of a methodology that was pragmatic while also objective, so I’m not objecting to that part. The measurement of my weight is a product of “discretionary” judgments (e.g., to weigh in pounds with a scale having a given precision), but it is also a product of how much I really weigh (no getting around it). By understanding the properties of methodological tools and measuring systems, it is possible to “subtract out” the influence of the judgments to get at what is actually the case. At least approximately. But that view is different, it seems to me, from someone like Larry Laudan (at least in his later metamorphosis). Even though he considers his “reticulated” view a fairly hard-nosed spin on the Kuhnian idea of scientific paradigms as invariably containing an ontology (e.g., theories), a methodology, and (what he called) an “axiology” or set of aims (OMA), Laudan seems to think standards are so variable that what counts as evidence is constantly fluctuating (aside from maybe retaining the goal of fitting diverse facts). So I wonder if these pragmatic leanings are more like Laudan or more like me (and my view here, I take it, is essentially that of Peirce). I am perfectly sympathetic to the piecemeal “locavoracity” idea in Ruetsche, by the way.

My worry, one of them, is that all kinds of rival entities and processes arise to account for (accord with, predict, and purportedly explain) data and patterns in data, and don’t we need ways to discriminate them? During the open discussion, I mentioned several examples, some of which I can make out all scrunched up in the corners of my coffee-logged program, such as appeals to “cultural theories” of risk and risk perceptions. These theories say appeals to supposedly “real” hazards, e.g., chance of disease, death, catastrophe, and other “objective” risk assessments are wrong. They say it is not only possible but preferable (truer?) to capture attitudes toward risks, e.g., GM foods, nuclear energy, climate change, breast implants, etc., by means of one or another favorite politico-cultural grid-group categories (e.g., marginal-individualists, passive-egalitarians, hierarchical-border people, fatalists, etc.). (Your objections to these vague category schemes are often taken as further evidence that you belong in one of the pigeon-holes!) And the other day I heard a behavioral economist declare that he had found the “mechanism” to explain deciding between options in virtually all walks of life using a regression parameter, he called it beta, and guess what? beta = 1/3! He proved it worked statistically too. He might be right, he had a lot of data. Anyway, in my deliberate attempt to trigger discussion at the conference end, I was wondering if some of the speakers and/or attendees (Danks, Woodward, Glymour? Anyone?) had anything to say about cases that some of us might wish to call reification.

Categories: O & M conference, Statistics

Gandenberger on Ontology and Methodology (May 4) Conference: Virginia Tech


Gregory Gandenberger
Ph.D. student: Dept. of History and Philosophy of Science & Dept. of Statistics
University of Pittsburgh
http://gsganden.tumblr.com/

Onto-Meth conference



Some Thoughts on the O&M 2013 Conference

I was struck by how little speakers at the Ontology and Methodology conference engaged with the realism/antirealism debate. Laura Ruetsche defended a version of Arthur Fine’s Natural Ontological Attitude (NOA) in the first talk of the conference, but none of the speakers after her addressed the debate directly. David Danks and Jim Woodward made it particularly clear that they were deliberately avoiding questions about realism in favor of questions about what kinds of ontologies our theories should have in order to best serve the various purposes for which we develop them.

I am not criticizing the speakers! I am inclined to agree with Clark Glymour that the kinds of questions Danks and Woodward addressed are more interesting and important than questions about “what’s really real.” On the other hand, I worry that we lose something when we focus only on the use of science toward such ends as prediction and control. During the discussion period at the end of the conference, Peter Godfrey-Smith argued that science has some value simply for telling us what really is the case. For instance, science tells us that all living things on earth have a common ancestor, and that fact is a good thing to know regardless of whether or not it helps us predict or control anything.

One feature of the realism/antirealism debate that has long bothered me is that it treats all of “our best sciences” as if they had roughly the same epistemic status. In fact, realism about quantum field theory, for instance, is much harder to defend than realism about evolutionary biology. I am inclined to dismiss the realism debate as ill-formed insofar as it presumes that the question of scientific realism is a single question that spans all of the sciences. I am also suspicious of the debate in its bread-and-butter domain of fundamental physics. It is not clear to me that there is such a thing as fundamental physics; that if there is such a thing as fundamental physics, then it is converging toward a unified ontology; that if it is converging toward a unified ontology, then we can make sense of the question whether or not that ontology is correct; or that if we can make sense of the question whether or not that ontology is correct, then we have the means to give a justified answer to that question.

Nevertheless, as Glymour pointed out during the open discussion period, there are still good and open questions to address about whether and how we are justified in believing that science tells us the truth in other domains (such as evolutionary theory) where the realism question seems relatively well-formed and answerable. We can dismiss questions about “what’s really real” at a “fundamental level” while still thinking that philosophers of science should have a story to tell the 46% of Americans who believe that human beings were created in more or less their current form within the last 10,000 years—not a story about how science serves purposes of prediction and control, but a story about how science can help us find the truth.

Categories: O & M conference
