Here are a few comments on your recent blog post about my ideas on parsimony. Thanks for inviting me to contribute!
You write that in model selection, “‘parsimony fights likelihood,’ while, in adequate evolutionary theory, the two are thought to go hand in hand.” The second part of this statement isn’t correct. There are sufficient conditions (i.e., models of the evolutionary process) that entail that parsimony and maximum likelihood are ordinally equivalent, but there are cases in which they are not. Biologists often have data sets in which maximum parsimony and maximum likelihood disagree about which phylogenetic tree is best.
You also write that “error statisticians view hypothesis testing as between exhaustive hypotheses H and not-H (usually within a model).” I think that the criticism of Bayesianism that focuses on the problem of assessing the likelihoods of “catch-all hypotheses” applies to this description of your error statistical philosophy. The General Theory of Relativity, for example, may tell us how probable a set of observations is, but its negation does not. I note that you have “usually within a model” in parentheses. In many cases, two alternatives will not be exhaustive even within the confines of a model, and of course they won’t be exhaustive if we consider a wider domain.
Under your entry on Falsification, you write that “Sober made a point of saying that his account does not falsify models or hypotheses. We are to start out with all the possible models to be considered (hopefully including one that is true or approximately true), akin to the ‘closed universe’ of standard Bayesian accounts, but do we not get rid of any as falsified, given data? It seems not.” My view is that we rarely start out with all possible models. In addition, I agree that we can get rid of models that deductively entail (perhaps with the help of auxiliary assumptions) observational outcomes that do not happen. But as soon as the relation is nondeductive, is there “falsification”? I do think that there are restricted, special contexts in which Bayesians are right to say that we can discover that a given statement is very probably false (where the probabilities are objective). In that kind of case, there is a kind of falsification. But I have been critical of significance tests and of Neyman-Pearson testing, which of course attempt to describe a kind of non-Bayesian falsification.
On the Law of Likelihood: you correctly point out that it is easy to find hypotheses H1 and H2, and observations O, where the Law of Likelihood says that O favors H1 over H2, and yet we think that H1 is in some sense less satisfactory than H2. Bayesians bring in prior probabilities here. Non-Bayesians of course bring in other considerations. I agree that such situations exist and that epistemological ideas not provided by the Law of Likelihood are needed. But this, by itself, doesn’t show that the Law of Likelihood is false. The LoL doesn’t tell you what to accept or reject, and it doesn’t tell you what is most plausible, all things considered. It simply describes what the evidence at hand says.
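For readers who may not have the principle in front of them, the Law of Likelihood, as standardly formulated (following Hacking and Royall), can be stated as:

```latex
% Law of Likelihood (standard formulation)
\[
  O \text{ favors } H_1 \text{ over } H_2
  \quad\Longleftrightarrow\quad
  \Pr(O \mid H_1) > \Pr(O \mid H_2),
\]
% with the likelihood ratio measuring the degree of favoring:
\[
  \frac{\Pr(O \mid H_1)}{\Pr(O \mid H_2)}.
\]
```

Note that the principle compares only the two likelihoods; it says nothing about prior probabilities, acceptance, or overall plausibility, which is the point at issue above.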
University of Wisconsin – Madison