Writing a blog like this, a strange and often puzzling exercise, does offer a forum for sharing half-baked chicken-scratchings from the backs of frayed pages on themes from our Onto-Meth conference two weeks ago. (The previous post had notes from blogger and attendee Gandenberger.)
Several of the talks reflected a push-back against the idea that the “ontology” determined in science (e.g., the objects and processes of theories, models, and hypotheses) is (or should strive to correspond to?) “real” objects in the world and/or what is approximately the case about them. Instead, at least some of the speakers wished to liberate ontology by recognizing that “merely” pragmatic goals, needs, and desires are not just second-class citizens, but can and do (and should?) determine the categories of reality. Well, there are a dozen equivocations here, most of which we did not really discuss at the conference.
In my own half of the Spanos-Mayo (D & P) presentation I granted and even promoted the idea of a methodology that is pragmatic while also objective, so I’m not objecting to that part. The measurement of my weight is a product of “discretionary” judgments (e.g., to weigh in pounds with a scale having a given precision), but it is also a product of how much I really weigh (no getting around it). By understanding the properties of methodological tools and measuring systems, it is possible to “subtract out” the influence of the judgments to get at what is actually the case. At least approximately. But that view is different, it seems to me, from someone like Larry Laudan’s (at least in his later metamorphosis). Even though he considers his “reticulated” view a fairly hard-nosed spin on the Kuhnian idea of scientific paradigms as invariably containing an ontology (e.g., theories), a methodology, and (what he called) an “axiology” or set of aims (OMA), Laudan seems to think standards are so variable that what counts as evidence is constantly fluctuating (aside from maybe retaining the goal of fitting diverse facts). So I wonder whether these pragmatic leanings are more like Laudan’s or more like mine (and my view here, I take it, is essentially that of Peirce). I am perfectly sympathetic to the piecemeal “locavoracity” idea in Ruetsche, by the way.
My worry, one among several, is that all kinds of rival entities and processes arise to account for (accord with, predict, and purportedly explain) data and patterns in data, and don’t we need ways to discriminate among them? During the open discussion, I mentioned several examples, some of which I can make out all scrunched up in the corners of my coffee-logged program, such as appeals to “cultural theories” of risk and risk perceptions. These theories hold that appeals to supposedly “real” hazards (e.g., chance of disease, death, or catastrophe) and other “objective” risk assessments are wrong. They say it is not only possible but preferable (truer?) to capture attitudes toward risks (e.g., GM foods, nuclear energy, climate change, breast implants, etc.) by means of one or another favorite politico-cultural grid-group category (e.g., marginal-individualists, passive-egalitarians, hierarchical-border people, fatalists, etc.). (Your objections to these vague category schemes are often taken as further evidence that you belong in one of the pigeon-holes!) And the other day I heard a behavioral economist declare that he had found the “mechanism” to explain deciding between options in virtually all walks of life using a regression parameter he called beta, and guess what? beta = 1/3! He proved it worked statistically too. He might be right; he had a lot of data. Anyway, in my deliberate attempt to trigger discussion at the conference’s end, I wondered whether some of the speakers and/or attendees (Danks, Woodward, Glymour? Anyone?) had anything to say about cases that some of us might wish to call reification.
I am tempted to view Woodward’s idea of developing heuristics for choosing variables as well captured by my favorite goal: finding things out via severe tests (be stringent but learn something, promote error-correcting improvements, etc.). One can start almost anywhere, and with adequate error probes speed up finding things out (yet another Peircean theme). Woodward did not say whether the rationale behind his heuristics would be something along these lines. But a rationale is needed, or so I would claim. I was trying (in the discussion) to drive home this felt need to articulate a rationale, without which I suspect one overlooks the creative drive toward satisfying these heuristics; I mean, why prefer these heuristics? That they may be found satisfied in “successful science” (after the fact) would not necessarily mean they identify forward-looking rules or criteria.
Maybe it’s the contrarian in me, but I might like to add a heuristic such as:
- find ways to suspect your variables and model even though all the previous heuristic rules are well-satisfied.
- pursue variables that fail to satisfy your preference for “variables that have unambiguous effects under manipulation” (as of now, given all we know), and discover a novel way to discriminate them anyway.
Or, to get at my contrarian inklings another way: suppose variables have been chosen along the lines of Woodward’s heuristics, and everything seems hunky-dory. What impetus is there to find out how the model may be wrong (despite satisfying all those nice expectations)? Retrospectively, these rules might be satisfied, but prospectively, might they not encourage leaning back (not to allude to the one-year anniversary of Facebook’s IPO)?
There are some other chicken-scratchings I may come back to if I hear from anyone….
Ben Jantzen discovered that abbreviating our conference this way would lead people to methamphetamine websites, so we didn’t use it officially. Thus I use it here.
I’ve been deeply engaged in something I’ll explain later on (not to mention traveling to faraway places), and anyway, for some reason the blog is getting tons and tons of spam. I’m not sure what has changed over at WordPress.
 Dog and pony. I may post my pony slides.