The following guest post (link to updated PDF) was written in response to C. Hennig’s presentation at our Phil Stat Wars Forum on 18 February 2021: “Testing With Models That Are Not True”.
Great post, Aris. I’m curious about something, though: the relationship between these results (generally) and those that come from the study of algorithmic randomness (AR).
From the AR perspective, a random sequence cannot be compressed. Thus a random sequence cannot contain any regularities that could be used to formulate a model to predict it, and thereby compress it. A central theorem of the field is (roughly) that there is no effective procedure for determining whether a given sequence is random: Kolmogorov complexity is uncomputable.
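To make the incompressibility intuition concrete (my own illustration, not anything from the post), here is a minimal Python sketch using zlib as a stand-in compressor. Note the asymmetry: successfully compressing a sequence certifies that it is not random, but failing to compress it proves nothing, which is the undecidability point in miniature.

```python
import os
import zlib

def compression_ratio(data: bytes) -> float:
    """Ratio of compressed size to original size; near 1.0 suggests no exploitable regularity."""
    return len(zlib.compress(data, 9)) / len(data)

# A sequence with an obvious regularity compresses dramatically...
patterned = b"0123456789" * 100
# ...while PRNG output typically does not (zlib's header overhead can even push the ratio above 1).
noisy = os.urandom(1000)

print(f"patterned: {compression_ratio(patterned):.3f}")  # well below 1.0
print(f"noisy:     {compression_ratio(noisy):.3f}")      # near 1.0
```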
Model validation in your post, though, seems to proceed by fitting a model and then checking whether the residuals are essentially random. But if the algorithmic-randomness result applies, we cannot determine that.
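For concreteness, the kind of check I have in mind looks something like the following sketch (my own construction, using a Wald-Wolfowitz runs test on residual signs; any standard residual diagnostic would do in its place). The limitation is built in: the test can only detect the one kind of pattern it was designed to see.

```python
import math
import numpy as np

def runs_test_pvalue(resid: np.ndarray) -> float:
    """Two-sided Wald-Wolfowitz runs test on the signs of the residuals."""
    signs = resid > np.median(resid)
    n1 = int(signs.sum())
    n2 = signs.size - n1
    runs = 1 + int((signs[1:] != signs[:-1]).sum())
    mean = 2.0 * n1 * n2 / (n1 + n2) + 1.0
    var = (mean - 1.0) * (mean - 2.0) / (n1 + n2 - 1.0)
    z = (runs - mean) / math.sqrt(var)
    return math.erfc(abs(z) / math.sqrt(2.0))  # equals 2 * (1 - Phi(|z|))

rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 200)
y = 2.0 * x + rng.normal(scale=0.1, size=x.size)   # data that really are linear
resid = y - np.polyval(np.polyfit(x, y, 1), x)     # residuals from a linear fit
print(f"runs-test p-value: {runs_test_pvalue(resid):.3f}")  # large => no run-pattern detected
```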
I may be misrepresenting your work, or failing to understand something important about the relationship between algorithmic and stochastic randomness, but I wonder whether assumptions about the size of the model class from which the data-generating model is drawn are doing the extra work here, and whether those assumptions are typically justified in real-world applications. What do you think?
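To sharpen what I mean by the model class doing the work: once the class is fixed and finite in advance, the search for structure becomes an ordinary computable optimization, and no appeal to algorithmic randomness is needed. A hedged sketch of that (entirely my own toy example, selecting a polynomial degree by BIC):

```python
import numpy as np

def bic(resid: np.ndarray, k: int) -> float:
    """Bayesian information criterion for a Gaussian model with k parameters."""
    n = resid.size
    return n * np.log(np.mean(resid ** 2)) + k * np.log(n)

rng = np.random.default_rng(1)
x = np.linspace(0.0, 1.0, 200)
y = 1.0 + 3.0 * x - 2.0 * x ** 2 + rng.normal(scale=0.1, size=x.size)

# With the class fixed in advance (polynomials of degree 0..5), "finding the
# structure" is a finite, fully decidable search; nothing undecidable remains.
scores = {}
for degree in range(6):
    resid = y - np.polyval(np.polyfit(x, y, degree), x)
    scores[degree] = bic(resid, degree + 1)
print("selected degree:", min(scores, key=scores.get))  # expect 2
```

The question is whether fixing such a class in advance is itself a justified assumption about the data-generating process, or simply where the undecidability gets hidden.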