RE:’deemed counterintuitive’ – ironically my position is ‘consistent with’ those who adopt intuitionist logic!

(Though this is really just a result of prioritising a natural mathematical formulation over the ‘logical’ formulation. My biggest peeve with philsci is the use of artificial logical language where it doesn’t fit.)

RE:’consistent’ – as mentioned I wanted to keep this as a side issue, an ‘irrelevant conjunct’, say. I have an explicit formulation in mind, but I think that would distract from the main issue.

“Consistent with” is multiply ambiguous. Merely not contradicting – which is its strict meaning – is no longer to give a theory of evidence or inference (x can be utterly irrelevant to H while consistent with H; x can even support ~H and be consistent with H). Being “consistent with” in the sense used by Cox, say, wrt significance tests, or in ordinary usage, is actually much stronger. More later on in the week.

My main point is that Likelihoodists and Bayesians have had a natural response to this for years, and have used it in practice – they treat it as a problem of nuisance parameters.
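To make the nuisance-parameter idea concrete, here is a minimal sketch (my own toy model, not anything from the thread): the data depend on a ‘relevant’ parameter a, while a tacked-on parameter b never enters the sampling distribution, so profiling over b leaves inference about a untouched.

```python
import numpy as np

# Toy model (illustrative only): y ~ Normal(a, 1), and b is an 'irrelevant'
# tacked-on parameter that does not appear in the density at all.
def log_likelihood(a, b, data):
    return -0.5 * np.sum((data - a) ** 2)  # note: no b anywhere

rng = np.random.default_rng(0)
data = rng.normal(loc=2.0, scale=1.0, size=50)

a_grid = np.linspace(0.0, 4.0, 401)
b_values = [-1.0, 0.0, 1.0]  # arbitrary values of the nuisance parameter

# 'Profile out' b: because b is irrelevant, the likelihood curve in a is
# identical for every choice of b, so tacking b on changes nothing about a.
profiles = np.array([[log_likelihood(a, b, data) for a in a_grid]
                     for b in b_values])
a_hat = a_grid[np.argmax(profiles[0])]  # maximum-likelihood estimate of a
```

The same insensitivity holds under marginalisation for a Bayesian, provided the prior factorises across a and b.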

A couple more philosophical points:

1. Chalmers’ discussion is slightly different in that he doesn’t use the entailment condition and focuses the ‘paradox’ on the fact that the conjunction ‘good theory & irrelevant theory’ can be confirmed.

Your solution is that the conjunction is not confirmed. The e.g. Bayesian/Likelihood solution is that only the conjunction, but not necessarily its ‘parts’, is confirmed.

This leads to the question – can we ‘confirm’ in some way or other a good theory with a few possibly eliminable (irrelevant) parts? Surely we do in fact want this (or something like this) – better a good theory with some irrelevant parts than no theory, right?

Otherwise no theory could ever be confirmed, because it could be argued to depend on details beyond our measurement capacity. For example, whether string theory or some other theory of quantum gravity is ‘correct’ should not affect our evaluation of some macroscopic theory.

This would be another paradox – good theories are so sensitive that they can be destroyed by tacking on irrelevant propositions. But irrelevant propositions are always lurking about.

2. In terms of the argument (in your original post) that *does* use the entailment condition, my post basically argues that it is a ‘type’ error: a theory is (more like) a function from propositions to observations, i.e. AxB->Y, than just propositions, i.e. AxB.

So we cannot in general go from a function AxB->Y to a function A->Y without a B value, though we can go from a proposition on AxB to a new proposition C using the propositional operators and their ‘truth-functionality’.
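The ‘type’ point can be sketched in code (my own illustration, with hypothetical names): a theory typed as a two-argument function has no canonical one-argument restriction, so ‘localising’ to A requires supplying some b value, even when that choice turns out not to matter.

```python
from typing import Callable

# A theory as a function of two arguments (a, b) -> prediction,
# rather than a bare proposition.
Theory = Callable[[float, float], float]

def theory(a: float, b: float) -> float:
    # b happens to be irrelevant to the prediction in this toy case
    return 2.0 * a

# There is no canonical one-argument version A -> Y: to localise we must
# fix *some* b value, even if (as here) the choice makes no difference.
def localised(a: float, b_fixed: float = 0.0) -> float:
    return theory(a, b_fixed)
```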

3. In general I would prefer to use (similarly to Laurie, and many others) evaluations that specify a model as ‘adequate wrt’ or ‘consistent with’ some data, rather than ‘confirmed’. This is not that important for the present arguments though.

So, with 1., the point is that we need to allow overall confirmation even if potentially irrelevant parts exist, as long as there are *some* good parts to the theory.

With 2., we block inconsistent localisation via entailment by recognising that theories are functions, not propositions.

@omaclaren Here's Chalmers (1999, p. 200) giving me credit for my way of solving the hypothetical-deductivist's "tacking paradox" pic.twitter.com/WJ81OOJzAv

— Deborah G. Mayo (@learnfromerror) October 25, 2016

Also, do you know if the usual Bayesian philosophers have made the same point as me, or do they address it differently? For fun I note that a logician who finds constructive logic (or Kripke semantics etc.) compelling would likely come to the same conclusion as me.

I’d be interested in any comments or response, since you (and other philosophers) apparently view this as a real issue for e.g. Bayesian or Likelihood inference, while it seems like a misleading example to me.

Mayo, D. G. and Cox, D. R. (2010). “Frequentist Statistics as a Theory of Inductive Inference” in Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (D. Mayo and A. Spanos eds.), Cambridge: Cambridge University Press: 1-27.

This paper appeared in The Second Erich L. Lehmann Symposium: Optimality, 2006, Lecture Notes-Monograph Series, Volume 49, Institute of Mathematical Statistics, pp. 247-275.

http://www.phil.vt.edu/dmayo/personal_website/Ch%207%20mayo%20&%20cox.pdf

Cox D. R. and Mayo. D. G. (2010). “Objectivity and Conditionality in Frequentist Inference” in Error and Inference: Recent Exchanges on Experimental Reasoning, Reliability and the Objectivity and Rationality of Science (D Mayo and A. Spanos eds.), Cambridge: Cambridge University Press: 276-304.

http://www.phil.vt.edu/dmayo/personal_website/ch%207%20cox%20&%20mayo.pdf

That is the standard statistical approach, which appears different from the philosophical approach in which simple propositions are used. I find it unclear how the philosophers are formulating the question. Is the ‘confirmation’ function of two propositions defined for all T/F combinations, i.e. (T,T), (T,F), (F,T), (F,F)? And then compared over these combinations to see which case is supported? That is the analogue of the statistical approach.
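As a toy illustration of that analogue (my own numbers, purely illustrative): tabulate the support over all four truth-value combinations of A (‘good theory’) and B (‘irrelevant conjunct’), and only then ask which part is doing the work.

```python
# Support of the evidence for each truth-value pair (A, B); B is irrelevant,
# so the B=True and B=False columns are identical by construction.
likelihood = {
    (True, True): 0.9, (True, False): 0.9,   # evidence probable when A true
    (False, True): 0.1, (False, False): 0.1, # improbable when A false
}

# The conjunction with A true fares best over the four combinations...
best = max(likelihood, key=likelihood.get)

# ...but B alone is not 'localised' as supported: at fixed A, flipping B
# changes nothing, which is exactly what makes B irrelevant.
b_irrelevant = likelihood[(True, True)] == likelihood[(True, False)]
```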

A philosopher would perhaps call this a counterfactual approach; I would call it ‘using functions and variables to model things’.

A particular instance of the conjunction ‘A & B’ is then ‘parameter a takes the value a* and parameter b takes the value b*’ say.

We can compare all such pairs (a*,b*). To say something like ‘the value b* of b is supported’ is ill-defined unless an ‘a value’ is also given, for the simple reason that your function takes two arguments.

In the case of orthogonal parameters (i.e. when the product likelihood factorisation above holds) we can ‘localise’ inferences. A Bayesian can likewise assume a product factorisation of the prior for independent parameters.
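A quick sketch of the orthogonal case (again my own construction): when the log-likelihood splits additively across the two parameters, likelihood comparisons for one parameter come out the same at any fixed value of the other, which is exactly the licence to localise.

```python
import numpy as np

# Independent data: x ~ Normal(a, 1) and y ~ Normal(b, 1), so the
# log-likelihood factorises as l(a, b) = l1(a) + l2(b).
def log_lik(a, b, x, y):
    return -0.5 * np.sum((x - a) ** 2) - 0.5 * np.sum((y - b) ** 2)

rng = np.random.default_rng(1)
x = rng.normal(1.0, 1.0, size=30)
y = rng.normal(-1.0, 1.0, size=30)

# The likelihood ratio comparing a = 1 vs a = 0 is the same at any fixed b,
# so inference about a can be carried out 'locally':
lr_at_b0 = log_lik(1.0, 0.0, x, y) - log_lik(0.0, 0.0, x, y)
lr_at_b5 = log_lik(1.0, 5.0, x, y) - log_lik(0.0, 5.0, x, y)
```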

All of this is straightforward if you adopt a ‘functions and variables’ formulation or, perhaps, at least define and compare the confirmation measure (or whatever) for all possible propositional (T/F) combinations.

“if you reject entailment (sometimes called special consequence) then various criticisms aimed at frequentist inference can’t be lodged”

I have no particular desire to criticise frequentist inference and advocate for Bayes/likelihood or vice versa.

I just think this particular argument is a poor criticism of Bayes/likelihood, and think entailment is evidently a bad idea (regardless of whether some philosophical Bayesians argue for it – I don’t think any Bayesian statistician holds it since it contradicts probability theory and their methods for dealing with nuisance parameters).

My point is: no, it is not. It follows from standard statistical practice. The point, presumably, is that B is a ‘neutral’ parameter, neither good nor bad. If B were a false theory this would be strange, but B is not contradicted by the data either.

Compare ‘predictions from a good physical theory and my hat is red’ vs ‘predictions from a bad physical theory and my hat is blue’. The former is better supported than the latter. We can also localise to see that it is the physical theory part doing the work.

http://www.cs.ubc.ca/~murphyk/Teaching/CS532c_Fall04/Papers/schervish-pvalues.pdf
