Some of the recent comments on my May 20 post lead me to point us back to my earlier (April 15) post on dynamic Dutch books, and to continue where Howson left off:
“And where does this conclusion leave the Bayesian theory? ….I claim that nothing valuable is lost by abandoning updating rules. The idea that the only updating policy sanctioned by the Bayesian theory is updating by conditionalization was untenable even on its own terms, since the learning of each conditioning proposition could not itself have been by conditionalization.” (Howson 1997, 289).
So a Bayesian account requires a distinct account of empirical learning in order to account for the learning "of each conditioning proposition" (propositions which may be statistical hypotheses). This was my argument in EGEK (1996, 87)*. And this other account, I would go on to suggest, should ensure that the claims (which I prefer to "propositions") are reliably warranted or severely corroborated.
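For readers new to the debate, the updating rule at issue can be written out explicitly (standard notation, not drawn from the quoted texts). Updating by conditionalization says that on learning evidence e, one's new degrees of belief are the old ones conditional on e:

P_new(H) = P_old(H | e) = P_old(e | H) P_old(H) / P_old(e)

Applying the rule presupposes that e has come to be fully accepted, i.e., P_new(e) = 1. Howson's point is that this acceptance of e is itself a piece of empirical learning, and it cannot in general have occurred by conditionalization on anything, on pain of regress.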
*Error and the Growth of Experimental Knowledge (Mayo 1996): Scroll down to chapter 3.
- Howson, C. (1997). "A Logic of Induction," Philosophy of Science 64(2): 268-290.
- Mayo, D. G. (1996). Error and the Growth of Experimental Knowledge. Chicago: University of Chicago Press.
- Mayo, D. G. (1997). "Duhem's Problem, the Bayesian Way, and Error Statistics, or 'What's Belief Got to Do with It?'" and "Response to Howson and Laudan," Philosophy of Science 64(2): 222-244 and 323-333.
I don’t get it. If I design a machine to perform online Bayes in response to some signal, it just goes to the RAM location of the conditioning propositions, which in this case are the measured signal data. What corresponds to the “distinct account of empirical learning” in this setup?
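The commenter's setup can be made concrete with a minimal sketch (all names and numbers here are hypothetical, not from either author). The "machine" stores a prior over a small set of statistical hypotheses and, as each signal datum arrives, mechanically applies Bayes' rule to the stored probabilities. Note what the loop does not contain: any model of how the datum itself came to be accepted. It is simply read from memory and conditioned on as certain, which is exactly the step Howson's objection targets.

```python
def likelihood(hypothesis_p, datum):
    """Likelihood of a binary signal datum (0 or 1) under a Bernoulli
    hypothesis with success probability hypothesis_p."""
    return hypothesis_p if datum == 1 else 1.0 - hypothesis_p

def conditionalize(prior, datum):
    """One online Bayes step: P_new(H) = P(H) * P(datum | H) / P(datum)."""
    unnormalized = {h: p * likelihood(h, datum) for h, p in prior.items()}
    total = sum(unnormalized.values())
    return {h: u / total for h, u in unnormalized.items()}

# Two rival hypotheses about the signal source's bias, equally weighted.
posterior = {0.2: 0.5, 0.8: 0.5}

# The measured signal data, read straight from memory and treated as given;
# no step in the loop models the learning of these conditioning propositions.
for datum in [1, 1, 0, 1, 1]:
    posterior = conditionalize(posterior, datum)

print(posterior)
```

On this sketch, the "distinct account of empirical learning" would concern what warrants writing each datum into that memory location with probability one in the first place; the conditionalization loop takes that step for granted.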