Call for papers
PSX Philosophy of Scientific Experimentation 3 (PSX3)
Friday and Saturday, October 5 and 6, 2012
University of Colorado, Boulder
Keynote Speakers: Professor Eric Cornell, University of Colorado, Nobel Prize (Physics, 2001)
Professor Friedrich Steinle, History of Science, University of Berlin
Experiments play essential roles in science. Philosophers of science have emphasized their role in the testing of theories, but experiments also play other important roles: they are, for example, essential in exploring new phenomenological realms and in discovering new effects and phenomena. Nevertheless, experiments remain an underrepresented topic in mainstream philosophy of science. This conference on the philosophy of scientific experimentation, the third in a series, is intended to give a home to philosophical interests in, and concerns about, experiment. Among the questions to be discussed are the following:

- How is experimental practice organized – around theories or around something else?
- How independent is experimentation from theories? Does it have a life of its own?
- Can experiments undermine the threat posed to the objectivity of science by theory-ladenness, underdetermination, or the Duhem-Quine thesis?
- What are the important similarities and differences between experiments in different sciences?
- What experimental strategies do scientists use to make sure that their experiments work correctly?
- How are phenomena discovered or created in the laboratory?
- Is experimental knowledge epistemically more secure than observational knowledge?
- Can experiments give us good reasons for belief in theoretical entities?
- What role do computer simulations play in the assessment of experimental background? How trustworthy are they? Do they warrant the same kind of inferences as experimental knowledge? Are they theory by other means?
Submissions on any aspect of experiment and simulation are welcome. They should take the form of an extended abstract (1000 words) submitted through EasyChair: https://www.easychair.org/conferences/?conf=psx3
The Nature of the Inferences From Graphical Techniques: What is the epistemic status of what we learn from graphs? On this view, the graphs afford good ideas about the kinds of violations for which it would be useful to probe, much as looking at a forensic clue (e.g., a footprint or tire track) helps to narrow down the search for a given suspect, or a fault tree narrows the search for a given cause. The same discernment can be achieved with a formal analysis (using parametric and nonparametric tests), perhaps one more discriminating than even the most trained eye can accomplish, but the reasoning and the justification are much the same. (The capabilities of these techniques may be checked by simulating data deliberately generated to violate or obey the various assumptions.)
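The parenthetical remark can be made concrete with a small simulation: generate one series whose errors obey the non-autocorrelation assumption and one deliberately built to violate it, then see whether a formal statistic picks up the difference. A minimal sketch in Python (standard library only; the AR(1) coefficient 0.8 and sample size 500 are illustrative choices, not values from the post):

```python
import random
import statistics

def lag1_autocorr(u):
    """Sample lag-1 autocorrelation of a series u."""
    m = statistics.fmean(u)
    num = sum((u[t] - m) * (u[t - 1] - m) for t in range(1, len(u)))
    den = sum((x - m) ** 2 for x in u)
    return num / den

def simulate_errors(n, rho, rng):
    """Generate n errors; rho = 0 obeys non-autocorrelation, rho != 0 violates it."""
    u = [rng.gauss(0, 1)]
    for _ in range(n - 1):
        u.append(rho * u[-1] + rng.gauss(0, 1))
    return u

rng = random.Random(0)
iid = simulate_errors(500, 0.0, rng)   # obeys the assumption
ar = simulate_errors(500, 0.8, rng)    # deliberately violates it
print(f"lag-1 autocorr, IID errors:     {lag1_autocorr(iid):+.2f}")
print(f"lag-1 autocorr, AR(1) rho=0.8:  {lag1_autocorr(ar):+.2f}")
```

Running such simulations many times is how one checks what a given graphical or formal probe can and cannot reliably detect.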
The combined indications from the graphs point to departures from the LRM in the direction of the DLRM, but only, for the moment, as singling out a fruitful model to probe further. We are not licensed to infer that the DLRM is itself a statistically adequate model until its own assumptions are subsequently tested. Even when they are checked and found to hold up – as they do in this case – our inference must still be qualified: inferring that the model is statistically adequate should be understood only as licensing the use of the model as a reliable tool for the primary statistical inferences, not necessarily as representing the substantive phenomenon being modeled.
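The step of testing the respecified model's own assumptions can be sketched in miniature: fit a trend term by ordinary least squares and then probe the residuals for any remaining departure (here, lag-1 autocorrelation). This is only an illustration on simulated data with an assumed slope of 0.5, not the analysis from the post:

```python
import random
import statistics

def fit_trend(y):
    """OLS fit of y_t = a + b*t; returns intercept, slope, and residuals."""
    n = len(y)
    t = list(range(n))
    tbar, ybar = statistics.fmean(t), statistics.fmean(y)
    b = (sum((ti - tbar) * (yi - ybar) for ti, yi in zip(t, y))
         / sum((ti - tbar) ** 2 for ti in t))
    a = ybar - b * tbar
    resid = [yi - (a + b * ti) for ti, yi in zip(t, y)]
    return a, b, resid

def lag1_autocorr(u):
    """Sample lag-1 autocorrelation, used here as a residual probe."""
    m = statistics.fmean(u)
    num = sum((u[i] - m) * (u[i - 1] - m) for i in range(1, len(u)))
    return num / sum((x - m) ** 2 for x in u)

rng = random.Random(2)
y = [1.0 + 0.5 * t + rng.gauss(0, 1) for t in range(200)]  # trending-mean series
a, b, resid = fit_trend(y)
print(f"estimated slope: {b:.2f}; residual lag-1 autocorr: {lag1_autocorr(resid):+.2f}")
```

If the residual probes came back clean, that would license using the trend model as a reliable inference tool in the sense above; it would say nothing, by itself, about whether the trend represents the substantive phenomenon.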
Part 1 is here.
Graphing t-plots (this is my first experiment with blogging data plots; they have been blown up a bit, so hopefully they are now sufficiently readable).
Here are two plots (t-plots) of the observed data, where yt is the population of the USA in millions and xt is our “secret” variable, to be revealed later on; both are plotted over time (1955-1989).
Fig 1: USA Population (y)
Fig. 2: Secret variable (x)
Figure 3: A typical realization of a NIID process.
Pretty clearly, there are glaring departures from IID when we compare a typical realization of a NIID process (fig. 3) with the t-plots of the two series in figures 1-2. In particular, both data series show a mean that increases with time – that is, strong mean-heterogeneity (a trending mean). Our recommended next step would be to continue exploring the probabilistic structure of the data in figures 1 and 2, with a view to thoroughly assessing the validity of the LRM assumptions (table 1). But first let us take a quick look at the traditional approach to testing assumptions, focusing just on the assumption traditionally viewed as error non-autocorrelation: E(ut us) = 0 for t ≠ s, t, s = 1, 2, …, n.
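The visual contrast between figure 3 and figures 1-2 can be mimicked numerically: a NIID realization has roughly the same mean in any subwindow, while a trending-mean series does not. A minimal sketch in Python (standard library only; the slope 0.5 and the noise scale are illustrative, not the actual population data):

```python
import random
import statistics

rng = random.Random(1)
n = 35  # matches the 35 annual observations (1955-1989)

niid = [rng.gauss(0, 1) for _ in range(n)]              # constant mean, as in fig. 3
trend = [0.5 * t + rng.gauss(0, 1) for t in range(n)]   # trending mean, as in figs. 1-2

def half_means(series):
    """Compare the sample mean over the first and second halves of a t-plot."""
    k = len(series) // 2
    return statistics.fmean(series[:k]), statistics.fmean(series[k:])

print("NIID halves:     %.2f vs %.2f" % half_means(niid))
print("Trending halves: %.2f vs %.2f" % half_means(trend))
```

For the NIID series the two half-means agree up to sampling noise; for the trending series the second half-mean is far larger, which is exactly the mean-heterogeneity the t-plots display.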
See (Part 2)
See (Part 1)
7. How the story turns out (not well)
This conception of testing, which Lakatos called “sophisticated methodological falsificationism,” takes us quite a distance from the more familiar, if hackneyed, conception of Popper as a simple falsificationist.[i] It calls for warranting a host of different methodological rules, one for each of the steps along the way, in order either to falsify or to corroborate hypotheses. But it doesn’t end well.
(See Part 1)
5. Duhemian Problems of Falsification
Any interesting case of hypothesis falsification, or even of a severe attempt to falsify, rests on both empirical and inductive hypotheses or claims. Consider the most simplistic form of deductive falsification (an instance of the valid form of modus tollens): “If H entails O, and not-O, then not-H.” (To infer “not-H” is to infer that H is false or, more often, that there is some discrepancy between what H claims and the phenomenon in question.)
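The valid form just cited is purely deductive, which a one-line proof makes vivid. Here is a minimal sketch in Lean 4 (the theorem name and variable names are mine, not from the post):

```lean
-- Modus tollens: if H entails O, and O fails, then H fails.
theorem modus_tollens {H O : Prop} (h : H → O) (notO : ¬O) : ¬H :=
  fun hH => notO (h hH)
```

The interesting philosophical work, as the paragraph above notes, lies not in this step but in the empirical and inductive claims needed to warrant “H entails O” and “not-O” in the first place.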