I’m sure I’m not alone in finding it tedious and confusing to search down through 40+ comments to follow the thread of a discussion, as in the last post (“Bad news bears”), especially while traveling as I am (to the 2012 meeting of the Philosophy of Science Association in San Diego–more on that later in the week). So I’m taking a portion of the last round between a reader and me, and placing it here, opening up a new space for comments. (For the full statements, please see the comments).
(Mayo to Corey*) Cyanabear: … Here’s a query for you: Suppose you have your dreamt-of probabilistic plausibility measure and think H is a plausible hypothesis, and yet a given analysis has done a terrible job in probing H. Maybe it ignores contrary info, uses imprecise tools, or what have you. How do you use your probabilistic measure to convey that you think H is plausible but this evidence is poor grounds for H? Sorry to be dashing…use any example.
*He also goes by Cyan.
(Corey to Mayo): …Ideally, if I “think H is plausible but this evidence is poor grounds for H,” it’s because I have information warranting that belief. The word “convey” is a bit tricky here. If I’m to communicate the brute fact that I think H is plausible, I’d just state my prior probability for H; likewise, to communicate that I think that the evidence is poor grounds for claiming H, I’d say that the likelihood ratio is 1. But if I’m to *convince* someone of my plausibility assessments, I have to communicate the information that warrants them. (Under certain restrictive assumptions that never hold in practice, other Bayesian agents can treat my posterior distribution as direct evidence. This is Aumann’s agreement theorem.)
New: Mayo to Corey: I’m happy to put aside the agent talk as well as the business of trying to convince. I take it that reporting “the likelihood ratio is 1” conveys roughly that the data have supplied no information as regards H, and one of my big points on this blog is that this does not capture being “a bad test” or “poor evidence”. Recall some of the problems that arose in our recent discussions of ESP experiments (e.g., multiple end points, trying and trying again, ignoring or explaining away disagreements with H, confusing statistical and substantive significance, etc.).
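For readers who want to see the arithmetic behind the exchange: a likelihood ratio of 1 leaves a Bayesian's prior untouched, which is why it is read as "the data supplied no information about H." The sketch below (hypothetical numbers, chosen only for illustration) uses the odds form of Bayes' rule, posterior odds = prior odds × likelihood ratio. Note that the computation is the same whether the likelihood ratio is 1 because the test was well conducted but uninformative, or because the probe of H was badly done; the number alone does not distinguish the two cases.

```python
# Odds form of Bayes' rule: posterior odds = prior odds * likelihood ratio.
# All numbers here are hypothetical, for illustration only.

def posterior_prob(prior, likelihood_ratio):
    """Update a prior probability for H using the odds form of Bayes' rule."""
    prior_odds = prior / (1 - prior)
    post_odds = prior_odds * likelihood_ratio
    return post_odds / (1 + post_odds)

prior = 0.7  # "H is plausible" before the data (assumed value)

# Likelihood ratio of 1: the data do not discriminate H from its alternative,
# so the posterior simply equals the prior (approx. 0.7).
print(posterior_prob(prior, 1.0))

# Contrast: a likelihood ratio of 4 in favor of H raises the posterior.
print(posterior_prob(prior, 4.0))
```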