A comment from Professor Peter Grünwald
Head, Information-theoretic Learning Group, Centrum voor Wiskunde en Informatica (CWI)
Part-time full professor at Leiden University.
This is a follow-up to Vladimir Cherkassky’s comments on Deborah’s blog. First of all, let me thank Vladimir for taking the time to clarify his position. Still, there is one issue on which we disagree and which, I think, needs clarification, so I decided to write this follow-up.
The issue is how central VC (Vapnik–Chervonenkis) theory is to inductive inference.
I agree with Vladimir that VC-theory is one of the most important achievements in the field ever, and indeed, that it fundamentally changed our way of thinking about learning from data. Yet I also think that there are many problems of inductive inference on which it has no direct bearing. Some of these are concerned with hypothesis testing, but even when one is concerned with prediction accuracy – which Vladimir considers the basic goal – there are situations where I do not see how it plays a direct role. One of these is sequential prediction with log-loss or its generalization, Cover’s loss. This loss function plays a fundamental role in (1) language modeling, (2) on-line data compression, (3a) gambling and (3b) sequential investment on the stock market (here we need Cover’s loss). [A superquick intro to log-loss, as well as some references, is given below under [A]; see also my talk at the Ockham workshop (slides 16–26, about weather forecasting!).]
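As a concrete illustration of the compression connection (my own minimal sketch, not part of Grünwald’s comment, with hypothetical names): for binary outcomes, a forecaster who announces a probability p of “rain” before each day incurs log-loss −log₂ p if it rains and −log₂ (1 − p) if it does not. The cumulative log-loss over the sequence is exactly the codelength, in bits, that an arithmetic coder based on the same forecasts would need to compress the outcome sequence, which is why the same loss function governs both weather forecasting and on-line data compression.

```python
import math

def cumulative_log_loss(forecasts, outcomes):
    """Cumulative log-loss (base 2) of a sequential probability forecaster.

    forecasts: probabilities assigned to outcome 1 ("rain") before each day.
    outcomes:  the actual 0/1 outcomes.
    The total equals the codelength, in bits, of an arithmetic code
    built from the same sequential predictions.
    """
    total = 0.0
    for p, y in zip(forecasts, outcomes):
        prob_of_actual = p if y == 1 else 1.0 - p
        total += -math.log2(prob_of_actual)  # bits "paid" on this outcome
    return total

# A forecaster that always says 50% pays exactly 1 bit per day:
loss = cumulative_log_loss([0.5, 0.5, 0.5, 0.5], [1, 0, 1, 1])  # → 4.0 bits
```

A sharper forecaster (say, 90% on days it actually rains) pays fewer bits, i.e. compresses the sequence better; this equivalence is what makes log-loss natural in applications (1)–(3a) above, with Cover’s loss extending it to the portfolio setting of (3b).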
After a month of traveling, I’m soon to return to home port; then it’s just a ferry back to Elba. I promise to post (hopefully by Monday) some philosophical reflections on the past few days at the Ockham’s Razor conference, here at CMU (see post from June 12, 2012), and catch up on your comments/e-mails. I am to present Sunday (tomorrow) at 9 a.m.
Carnegie Mellon University, Center for Formal Epistemology:
Workshop on Foundations for Ockham’s Razor
All are welcome to attend.
June 22-24, 2012
Adamson Wing, Baker Hall 136A, Carnegie Mellon University
Workshop web page and schedule
Contact: Kevin T. Kelly (firstname.lastname@example.org)
Rationale: Scientific theory choice is guided by judgments of simplicity, a bias frequently referred to as “Ockham’s Razor”. But what is simplicity and how, if at all, does it help science find the truth? Should we view simple theories as means for obtaining accurate predictions, as classical statisticians recommend? Or should we believe the theories themselves, as Bayesian methods seem to justify? The aim of this workshop is to re-examine the foundations of Ockham’s razor, with a firm focus on the connections, if any, between simplicity and truth.
- Vladimir Cherkassky (University of Minnesota, Computer and Electrical Engineering),
- Peter Gruenwald (Leiden and Amsterdam, Machine Learning),
- Malcolm Forster (University of Wisconsin, Philosophy),
- Kevin Kelly (Carnegie Mellon, Philosophy),
- Hannes Leeb (University of Vienna, Statistics),
- Deborah Mayo (Virginia Tech, Philosophy),
- Oliver Schulte (Simon Fraser, Computer Science),
- Cosma Shalizi (Carnegie Mellon, Statistics),
- Elliott Sober (University of Wisconsin, Philosophy),
- Vladimir Vapnik (Columbia University, Center for Computational Learning Systems),
- Larry Wasserman (Carnegie Mellon, Statistics and Machine Learning).