**I was reviewing blog comments and various links people have sent me. I have noticed that a kind of comment often arises about a type of (subjective?) Bayesian who does not assign probabilities to a general hypothesis H but only to observable events. In this way, it is claimed, one can avoid various criticisms but retain the Bayesian position; label it (A):**

(A) the warrant accorded to an uncertain claim is in terms of probability assignments (to events).

But what happens when H’s predictions are repeatedly and impressively borne out in a variety of experiments? Either one can say nothing about the warrant for H (having assumed (A)), or else one seeks a warrant for H other than a probability assignment to H*.

Take the former. In that case what good is it to have passed many of H’s predictions? We cannot say we have grounds to accept H in some non-probabilistic sense (since that’s been ruled out by (A)). We also cannot say that the impressive successes in the past warrant predicting that future successes are probable because events do not warrant other events. It is only through some general claim or statistical hypothesis that we may deduce predicted probabilities of events.

But cannot someone just claim to hold a principle or rule that whenever many past successes occur, we are allowed to believe or predict that further successes are probable? This is much like the rule called *enumerative induction*; call it R. It’s not a very reliable rule (without qualifications), but suppose the Bayesian described here wanted to hold it. The trouble is that, given (A), the warrant for rule R would have to be a probability assignment, but rule R is a general hypothesis, not a particular prediction.

Nothing changes if rule R is constrained so that not just any observed successes are taken to predict future successes are probable, but only those successes we would regard as passing H severely. Call it R’. Now R’ is a reliable rule, but again a Bayesian who only assigns probabilities to events cannot claim R’ is warranted, because to do so would require assigning R’ a probability. But R’ is a general claim, not an event.

Therefore, it would seem that one needs a warrant for a general statistical hypothesis H other than a probability assignment to H*. Thoughts?

*One example would be an error statistical assessment of the overall test that hypothesis H has passed.

I’ve never seen this before. Can you point to any examples?

Guest: Are you asking where in the comments it comes up? It comes up usually in mentioning de Finetti, as for example in some comments to my reblogged post (on 9/15/12) from last year:

http://errorstatistics.com/2012/09/15/more-on-using-background-info/#comments

De Finetti’s concept of objectivity is closely connected to observability, and I don’t think that he would be much interested in any warrant for H apart from what the current posterior (after observing many outcomes in line with H) says about new observations. He may not like saying “I believe that H is true” at all, but if he did, it could for him be nothing else than a shorthand for having the belief *about new observations* expressed by a posterior that can be written as a distribution with high posterior probability for H.

You write: “We also cannot say that the impressive successes in the past warrant predicting that future successes are probable because events do not warrant other events.” De Finetti could still say that the impressive successes in the past warrant predictions of future successes to the degree that the posterior after updating it with the information of the past successes does that.

Christian: You wrote: “De Finetti could still say that the impressive successes in the past warrant predictions of future successes to the degree that the posterior after updating it with the information of the past successes does that.” The posterior of what? We’ve established it is not the posterior for H (for someone who denies we assign probs to H).

Nor can he go from the past events to a posterior assignment to the future successful event without affirming, accepting, or otherwise warranting a rule that permits this. But such a rule is a general statistical claim, so he cannot assign it a probability. I’m just repeating what I wrote in my post, so please go back to it.

Mayo: The thing is that assigning probs to H is a mathematical device for having a prior (and then later a posterior) for observable events. De Finetti in fact would *write down a prior distribution that has a certain probability for H*, although he wouldn’t claim that this is the probability for H to be true, but rather that this in turn assigns probabilities to observable events, which are the probabilities that can really be interpreted.

If the posterior then has a higher probability for H, he’d say that he is not interested in whether H is really true (which is unobservable), but rather that given past experience the predictive probabilities for new events are about *as if* H was true. Which is as close as he gets to having a warrant for H.
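This “mathematical device” reading can be sketched concretely in a Beta-Binomial setup (the numbers below are my own illustration, not anything from the discussion): a prior over a parameter theta, standing in for H, is used only to produce predictive probabilities for observable events.

```python
# Hedged sketch: the prior over theta is treated purely as a device for
# computing predictive probabilities of observables (illustrative numbers).
def predictive_next_success(a, b, s, n):
    """Posterior predictive P(next trial succeeds) under a Beta(a, b)
    prior, having observed s successes in n Bernoulli trials."""
    return (a + s) / (a + b + n)

# Uniform Beta(1, 1) prior; 9 successes in 10 trials gives Laplace's
# rule of succession: (1 + 9) / (2 + 10) = 5/6.
p_next = predictive_next_success(1, 1, 9, 10)
```

On the “as if” reading, nothing here requires asserting that some theta is true; the output is a betting rate on an observable event.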

So now it turns out we are to assign probabilities to general hypotheses after all. That goes against what I thought was given. I didn’t say anything about truth. We can agree, if you like, that to accept H is to hold that it’s “as if” H is true (or approximately true).

Well, OK, H is a statistical hypothesis assigning probs to outcomes of a kind of trial, and to accept or warrant P(H) will simply mean, as you say, that we will use it to compute probabilities of observables via Bayes’s theorem. Let T be this assignment P(H). So now I take it T is merely accepted, and is not also given a probability? This again goes against (A).

I put aside (for now!) crucial questions of the meaning of the probabilities, and even what evidence would warrant their assignments, except that you agree that it would be relevant to warranting a hypothesis (be it H or T) to consider that its predictions are repeatedly and impressively borne out in a variety of experiments.

I’m very curious because this de Finetti–type position has often come up and I haven’t had time to reflect on this aspect of it.

Perhaps I already gave more away than de Finetti would do by trying to be as close to your use of terminology as I could, which de Finetti himself probably wouldn’t have done… if you really want to understand things precisely, you need to read him and not rely on what I can do on your blog (particularly because I’m not really a de Finettian).

Anyway, it is important to understand what “mathematical device” means. If you assign predictive probabilities to observables in a certain manner, this can be written down as having a distribution over some parameters (which de Finetti regards as fictional) and then the observables given the parameters (cf. de Finetti’s theorem). There is some computational benefit to it, but the aim of doing this is really not to state afterwards that the posterior probability for H (assumed to be a subset or a single element of the parameter set) is so-and-so in the sense of a “strength of belief in H”. H for de Finetti is a mathematical fiction, helpful only for evaluating predictive probabilities for observable events.

T is not a “hypothesis” about which there is uncertainty either, but a subjective probability, and therefore not “accepted” but elicited (at least in the ideal world in which all subjective probabilities can be elicited by betting games). Furthermore, what is elicited is not directly T but predictive probabilities for observable events that determine T.

Christian: Even though my “query” was in connection with some comments that were sometimes linked to de Finetti, I wasn’t aiming to talk about him (but readers might be interested in some of the places he has come up in this blog*). I have read him, and his position is as fuzzy and distant (from science) as others working along similar lines at the time. I thought you and some others were entertaining (if not endorsing) some kind of improved position. I aim to encourage people to think on their own, from scratch, about some of the implications of the views some of the Gurus have handed down.

The prior probability T, you say, “is not a ‘hypothesis’ about which there is uncertainty either, but a subjective probability, and therefore not ‘accepted’ but elicited,” perhaps by betting games.

But the result of the elicitations of the (actual or hypothetical) agent’s set of subjective probabilities is an empirical claim (about the agent’s beliefs in future observables, or the like). It is accepted as the agent’s degree of belief distribution. Is it not? Since T is a fully known (if solipsistic) claim, and there’s no uncertainty about T, the question of using evidence from many and varied tests does not enter.

Well, it’s late….

*http://errorstatistics.com/2012/03/10/a-further-comment-on-gelman-by-c-hennig/

http://errorstatistics.com/2012/01/23/u-phil-stephen-senn-2-andrew-gelman/

(you can use the blog “search” for more).

Mayo: It’s a good point, and actually something with which I agree (I have mentioned this somewhere in a paper): the “true subjective belief” is as idealistic a concept as the “true frequentist distribution”, and therefore subjective Bayes doesn’t really escape what subjective Bayesians may think of as the worst problem with frequentism.

One can legitimately ask, as you do, why not put a probability on subjective probabilities (actually, some work on imprecise probabilities tries to make something of that idea). I think that one can still defend subjective probabilities in the operational sense by saying that if they arise through a proper elicitation procedure, they are actually *constructed/chosen* by the individual rather than (as frequentist parameter estimates are) *estimators* of something “true” but unobservable in the individual’s brain. If you don’t find this convincing, fine. I have my doubts about it, too. The thing that I’d hold up is that it is a quite useful approach for decision making at times, as David pointed out as well. And that it (and de Finetti’s position as a whole) makes sense as long as (or in the situations in which) you believe that the elicited personal probabilities reflect something useful that you can’t handle otherwise.

This may rarely be the case in science, though.

Christian: You wrote: “the “true subjective belief” is as idealistic a concept as the “true frequentist distribution”, and that therefore subjective Bayes doesn’t really escape what subjective Bayesians may think of as the worst problem with frequentism.”

There is a MUCH deeper problem for the subjectivist. Even if one succeeds with a 100% reliable machine to read out an agent’s degrees of belief, one has not thereby identified an entity of relevance to science or learning or warranted knowledge. By contrast, we very fruitfully use (deliberately!) approximate models of patterns of statistical regularity to learn about the world and self-correct testable claims/assumptions. We only advance knowledge by deliberate approximations and simplifications, this is not something to be avoided.

I always find it ironic to hear subjective Bayesians proudly declare that they restrict their interest to pure “observables”, unlike us frequentists. But we do not observe probabilities (not even by introspectively looking within for personal opinions*), whereas we do observe relative frequencies. We also empirically determine when the relative frequencies we observe are so close to the probabilities derived from a given probability model of the experiment that it is more efficient to just derive them from the formal model. We deliberately pose questions so that this kind of formal computation is serviceable. There is no statisticism!

*Never mind the dubious relevance of such personal opinions to finding things out.

As I wade through these discussions, it seems to me that it would be even more productive “to encourage people to think on their own, from scratch” about how best to conduct statistics *without* the historical baggage of the views Gurus handed down.

(thanks for inviting me to comment… I am glad you are interested in this topic and I was actually already writing this post… even though I am on holiday!)

The goal of the operational subjective approach is decision making in the face of uncertainty (or prediction which is formulated as decision making with arbitrary utility functions). So I would reject (A) as a description of the operational subjective position.

The famous “probability does not exist” is sometimes taken to mean “models don’t exist”. From a practical point of view I take it to mean don’t do decision theory on either the hypothesis or parameter space (which is more or less the same thing…).

Frequentist statistics and the Popperian view of science are a search for the true “law” or “model”. I agree that many great scientific breakthroughs can be, and usually are, told in terms of finding the truth. Which is a bit difficult for the operational subjective position, as it seems to replace some of the sexiest results in science with something awkward and subtle.

Your post suggests that you are (understandably) interpreting the operational subjective approach through this lens, as a method for finding the true model — it is actually a rejection of this goal. The appeal of the approach is that it attempts to be very practical. You stick to things that are unquestionably real (observables), consider your decision preferences, and try not to contradict yourself. The goal is not to find the truth, but rather the much more modest goal of simply being articulate about uncertainty.

Your critique of Bayes seems to dismiss subjective Bayes, but I think you underestimate the influence of the operational subjective approach. You mention Berger and Bernardo as objective Bayesians (and fair enough if you must apply a simple label), but both have done considerable work in the operational subjective framework, e.g. Berger under the name robust Bayesian analysis, and much of Bernardo and Smith’s book. In fact the stated purpose of the book is (as I remember) to take off where de Finetti left off.

I think many (perhaps including Berger and Bernardo) would see objective Bayes as a heuristic device for approximating operational subjective analysis (not something to base foundations on). At least this is how I see it…

… and most applied Bayesian work is focused more upon the implications of the computational revolution (improper “objective” priors are rare in this area)… although the use of diffuse proper priors not too carefully chosen is nearly ubiquitous… again this is more or less heuristic not something to base foundations upon, or to be taken as a consensus view of Bayesian foundations.

On another note… I am interested to know if you would concede subjective Bayes to be the right approach to any problem at all. For example, would you reject subjective Bayes as a way to guide personal decision making in a situation with a small number of relevant possible outcomes (not i.i.d.) and a small number of possible decisions?

David: Oy. Can’t even begin except to say that it is false to suppose that “Frequentist statistics and the Popperian view of science are a search for the true ‘law’ or ‘model’.” Neyman was as instrumentalist and behavioristic as could be, and Popper denied empirical claims were true or even justifiable. They shared what might be called Peirce’s “pragmaticism”. (You can find a paper of mine on Peirce and error-correcting.)

By the way, I couldn’t open the link in the comment you sent back in Sept. 2012.

Type I and type II error rates are probabilities of erroneously rejecting or failing to reject models. What I mean to say is that this framework references models as well as observations.

The standard metaphors in philosophy of science are about true laws: relativity vs. Newtonian mechanics, green vs. grue, the colour of swans, etc. Popper’s claim that induction is impossible interprets induction as a search for the truth, e.g. statements such as “all swans are white” can never be shown to be true.

The operational subjective position disagrees that finding the truth is the right (or meaningful) thing to do.

More modest interpretations of induction where observations are used to guide decision making are possible.

I will look at your paper when I get the chance.

Was this the link: http://arxiv.org/pdf/bayes-an/9512001.pdf

I needed to reload a couple of times to get it….

David: No, induction for Popper is enumerative induction, like rule R. A heuristic search for truth is not induction. Ignore wherever you are reading about these “metaphors” in philosophy of science (not that philosophers agree on much). If you want some insight on Popper, perhaps read my “no pain” Popper posts on this blog. Also, insofar as general claims and theories could be assigned (logical) probabilities, Popper gave them probability 0. We seek highly improbable theories in science.

I think the last iteration between you and Christian was particularly useful.

“It is accepted as the agent’s degree of belief distribution. Is it not? Since T is a fully known (if solipsistic) claim, and there’s no uncertainty about T, the question of using evidence from many and varied tests, does not enter.”

I agree testing does not enter…. but T is a joint distribution so observations today can be used to compute conditional probabilities for tomorrow….

In practice T won’t be fully specified only aspects of it i.e. T might be imprecisely specified but T is not uncertain….
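The point that T is a joint distribution over observables, from which today’s observations yield conditional probabilities for tomorrow, can be sketched with a tiny exchangeable joint. The Pólya-urn numbers below are my own illustration, not anything from this thread.

```python
# Hypothetical joint distribution T over two binary observables (x1, x2),
# from a Polya urn starting with 1 red (coded 1) and 1 black (coded 0):
# draw a ball, return it plus one more of the same colour, draw again.
T = {
    (1, 1): (1/2) * (2/3),
    (1, 0): (1/2) * (1/3),
    (0, 1): (1/2) * (1/3),
    (0, 0): (1/2) * (2/3),
}
assert abs(sum(T.values()) - 1.0) < 1e-12  # T is a proper distribution

# Conditioning T on today's observation x1 = 1 gives tomorrow's
# predictive probability, with no parameter anywhere in sight.
p_x1_is_1 = T[(1, 1)] + T[(1, 0)]
p_tomorrow = T[(1, 1)] / p_x1_is_1   # P(x2 = 1 | x1 = 1)
```

The conditional comes straight out of the joint, which is the sense in which T alone (however imprecisely specified in practice) carries all the predictive content.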

de Finetti is pretty difficult to read (at least for me), I would highly recommend operational subjective statistical methods by Frank Lad.

The general idea of T being incompletely specified yet known or accepted or warranted without assigning probabilities to T should have been taken to heart by subjectivists. They might then avoid their faulty slide from incomplete or partial knowledge of something to a presumed degree of probability in it. This all began with philosophers defining knowledge in terms of belief, leading to the supposition that partial knowledge is captured by degree of belief. A couple of steps more, and they wind up with a position where the scientific task has become studying the degrees of belief of agents… Anyway, I’m repeating stuff I’ve said more clearly elsewhere.

David: wrt your mentioning Jim Berger and O-Bayesian foundations, you might be interested in:

http://errorstatistics.com/2011/12/11/irony-and-bad-faith-deconstructing-bayesians-1/

Mayo

I would like to respond more fully… but as I said I am on holidays (before returning home) and the conversation has splintered into so many distinct interesting topics that it is difficult to comment on all of them …. maybe if you are interested I will respond by email in some weeks…

There is plenty of room for misinterpretation in doing this, but here are a few discrete responses to different parts of the conversation.

* If you think that the future frequency of a repeated phenomenon will resemble a past frequency, you have already adopted a slew of subjective opinions to come to that conclusion, including exchangeability and exchangeable extendability…

* yes, I think this is all traditional operational subjectivism which I think has stood the test of time well.

* we are clearly misunderstanding each other in parts. I am sure this is partially my fault, sometimes semantic and sometimes deeper… I think that you see precise distinctions between concepts that I am mingling… e.g. I don’t understand the distinction between a “heuristic search for truth” and induction, and more than that, I suspect it is tangential to the discussion (if an interesting distinction…)

* I think you get much less stylized examples if you have a continuous parameter \theta rather than a number of discrete models H_1, H_2 etc….

* If you have exchangeability you can use the de Finetti representation, i.e. $p(x_1,\dots,x_n) = \int \prod_i g(x_i|\theta)\, h(\theta)\, d\theta$. The left side of the equation is the fundamental bit, not the right… this has real advantages in my opinion, including operationalism (as you point out, the posterior of a parameter is hard to interpret), but it also generalises the framework: you can use it when exchangeability doesn’t apply, or you can use different kinds of exchangeability. Increasingly, a representation theorem by Aldous–Hoover is being used in Bayesian statistics. Bayesian networks are subjective probabilities over a small number of discrete events that are not exchangeable. Hidden Markov models use a kind of partial exchangeability. Finite exchangeable extendability can be handled by allowing $h(\theta)$ to be negative.

* I think one reason we differ is that we think about different kinds of applications. A statistical model of, say, an image or of sound can’t be very simple, and while approximations or simplifications can’t be avoided, a cost (big or small) must result from engineering a system based upon false assumptions.

* In applications that I work on, testing is not very common, because it is not common to have a number of discrete alternatives. This is related to the point, which I seem to have failed to make, that your discussion seems to focus on “sexy” science where there are two or more discrete plausible models, e.g. Newtonian vs. relativistic physics… in my work the frequentist option seems to be point estimation (usually maximum likelihood) in order to produce a plug-in predictive distribution, perhaps using the bootstrap to deal with sampling uncertainty. For me, both the Bayesianism and the frequentism that you discuss are quite far from the problems that I work on. I can make more informed commentary in this context….
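The de Finetti representation mentioned in the list above can be checked numerically in a toy case. Assuming a Bernoulli g(x|theta) and a uniform mixing density h(theta) on [0, 1] (my choices, purely for illustration), the right-hand integral for a sequence with s ones among n has the closed form s!(n−s)!/(n+1)!:

```python
from math import factorial

def mixture_prob(xs, grid=10000):
    """Midpoint-rule approximation of the de Finetti mixture
    integral of prod_i theta^{x_i} (1-theta)^{1-x_i} over [0, 1]
    with uniform h(theta) and Bernoulli g."""
    total = 0.0
    for k in range(grid):
        theta = (k + 0.5) / grid
        p = 1.0
        for x in xs:
            p *= theta if x == 1 else 1.0 - theta
        total += p / grid
    return total

xs = [1, 1, 0, 1]                    # s = 3 ones among n = 4
s, n = sum(xs), len(xs)
exact = factorial(s) * factorial(n - s) / factorial(n + 1)
approx = mixture_prob(xs)
assert abs(approx - exact) < 1e-6    # the two sides agree
```

The left-hand side, the joint over observables, is the operationally meaningful object; the theta-integral on the right is the “mathematical fiction” the discussion keeps returning to.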

David: You wrote: “I think you get much less stylized examples if you have a continuous parameter \theta rather than a number of discrete models H_1, H_2 etc….”

I actually don’t focus on what you call “sexy” science with rival large-scale theories, because I think that actual tests probe very localized questions, not rival large-scale theories. So, let me ask you something, given we’re dealing with a continuous parameter theta, and T the (full) prior distribution for theta. (Theta could be different values for the deflection parameter.)

The agent (idealized perhaps) knows or holds or otherwise accepts T (and does not assign T itself a probability). Now is this to be understood as the agent’s degrees of belief for the different theta values? Or does the agent have full belief that the different theta values occur with various probabilities?

I am trying to put it in terms a subjectivist might use, in order to ask what I think is a key question. Here’s another try: If there is whole belief and no uncertainty in T, that would seem to mean either:

(1) the agent knows he believes in different theta values with different intensities (the intensities given by the probabilities)?

(2) The agent knows he believes different theta values occur with different relative frequencies (the relative frequencies given by the probabilities)?

If (1) (whatever it might mean), then one couldn’t draw out the predictions for relative frequencies of observable events. But even if it is cashed out as (2), we have to wonder how beliefs in the relative frequencies of theta values in different possible situations or worlds are relevant for predictions in the particular context from which the data arose, and which the scientific question concerns.

Have fun on your holiday!

Mayo & David: I’d be very happy for this discussion to go on in public; it’s quite interesting and I think you did a not too bad job explaining the philosophy behind operational subjectivism.

Mayo is right regarding me, too, that the Bayesian position I sympathise most with is the good old de Finetti one, not a fancy “improved” version. As pointed out before, this is under the assumption that there is a clear benefit in the given application from incorporating the prior, and a clear meaning to it, and such applications may indeed not be the ones Mayo is interested in.

Actually, despite the fact that I see clarity and beauty in this position, I have to admit that I hardly have come across an application in my own work where this was the case, so in practice I’m much more of a frequentist.

Regarding Mayo’s last comment, it’s neither (1) nor (2) (as I had hoped to have explained already but apparently not clearly enough), but rather that the agent has a belief (subjective distribution) regarding observable future events that can be *expressed mathematically* (through exchangeability/de Finetti’s theorem) by writing down a two-step model where a distribution for random theta (which is a mathematical device, not something to believe in) is coupled with a distribution for future observations given theta.

Christian: You wrote: “There is a “two-step model where a distribution for random theta (which is a mathematical device, not something to believe in) is coupled with a distribution for future observations given theta.”

And now to interpret the result of the coupling (putting aside where the mathematical device of a prior comes from or means):

(1) The agent knows he believes in different observable events with different intensities (the intensities given by the probabilities)? or

(2) The agent knows he believes different observable events would occur with different relative frequencies (the relative frequencies given by the probabilities)?*

It was precisely your often hinting that the interpretation of the prior in this version of subjective Bayesianism was crystal clear and not muddled that led to my query. Now it appears to rest upon a black box with no interpretation. It is as if the agent has become the statistical hypothesis, assigning intensities to events (1) or beliefs about the frequency of occurrences (2).

Maybe the prior distribution lives within each of our pineal glands (à la Descartes).

I fail to see the “clarity and beauty” you find in what looks to be a mish mash. The scientist wanted to peer into aspects of the actual source of observable patterns, and she is left peering into some agent’s pattern of beliefs stemming from a mathematical construct (the prior) that we are not allowed to interpret or question.

*(or is the result of the coupling the assigning to theta values?)

It’s (1) this time (and de Finetti would probably refer to betting rates, not sure whether he’d like the term “intensity”). (2) may hold as well but doesn’t have to.

Well, the approach is clear enough to me and if it isn’t to you from what I write, I’d rather blame my writing than the approach.

You are allowed to interpret and question the agent’s prior assignments, but you need to accept that these are about observable future events and not about the existence of some true theta (whether or not theta appears if you write it down formally).

I agree though that if you are after an “actual source” that is something true but unobservable for other than purely predictive reasons, this approach won’t deliver you much.

Christian: I can be interested in some aspect of the source of observable patterns for predictive reasons, but also for understanding these patterns, and perhaps intervening to change them.

By contrast, changing the agent doesn’t change the observable effect, or let me understand why the patterns occur.

But never mind let me just clarify one more thing.

You agreed roughly with my construal in (1):

(1) The agent knows he believes in different observable events with different intensities (the intensities given by the probabilities)?

or, if you prefer, he knows he has these betting proclivities regarding outcomes.

One thing: what happened to the posterior beliefs in values of theta? This is just a distribution of probabilities to outcomes, just like a statistical hypothesis H in frequentist statistics.

And by the way, for frequentists, H is regarded as at most an idealized (“picturesque” was Neyman’s word) way of capturing the distribution of relative frequencies of outcomes. It’s “truth” at most means H adequately captures the statistical properties, and this can be made rigorous.

Here’s a possible comparison:

Frequentist: The distribution of outcomes are as if captured by a probability model M.

Hennig subjectivist: The distribution of outcomes are as if they represented an agent who would bet as if the outcomes followed probability model M.

I’m not sure I understand this: “One thing: what happened to the posterior beliefs in values of theta? This is just a distribution of probabilities to outcomes, just like a statistical hypothesis H in frequentist statistics.”

As said before, theta is a mathematical device for assigning probabilities to observable outcomes, so if Bayesians talk about “belief in values of theta”, they have left the operational subjectivist path.

Regarding the observables, your comparison of “Hennig subjectivists” (I should really not claim credit for this!) with frequentists looks fine *if* the posterior turns out to assign a probability of about 1 to M (acknowledging once more that such an assignment still is no more than a mathematical device). The interpretation of the distribution of outcomes is not the same, though (one refers to hypothetical infinite repetition, the other one to belief as formalised in betting rates).

De Finetti would use the same observation to say that frequentism is superfluous even if reality looks as frequentist as it gets. It seems that you try to make the opposite case. (I am on neither side in this respect and think that still there is some use to them both.)

One almost forgets how tied up positions in philosophy of statistics are to holdovers from very old philosophical schools (notably, variants of positivism and verificationism), now long recognized as wrong-headed, distant from actual science, and self-defeating, at least by most philosophers of science. I’m now reminded, having reread de Finetti in Kyburg and Smokler. In one way it is not surprising that foundational issues, as taken up by contemporary statisticians, would still be somewhat locked in those older traditions and presuppositions of 40 and 50 years ago. That’s where the subjectivist gurus began.

Positivistic philosophy looked to “observables” and “measurables” as a way to be “operational” while retaining purity from metaphysics and causes, but that all seems so very dated, from before we began to understand how theories actually relate to actual data. One of the reasons for positivism’s demise was the recognition (e.g., by Kuhn and Popper) that the “data” required to critically appraise claims are not direct sense data (e.g., “I seem to see a red patch now”) but are themselves the result of quite a lot of background generalizing. They are difficult to attain, require many of their own assumptions and inferences, and are scarcely “directly” and unproblematically observed. Moreover, theories are not reducible to observational sentences, despite many attempts to so reduce them.

Finally, the restriction to measurables and observables was thought to avoid the risky gaps between the knower and the thing known, but the project is self-defeating. The whole “in here” vs. “out there” dichotomy is wrong-headed; we are immersed in the world. Worst of all: a restriction to (supposed) “measurables” is at odds with the requirement of being able to subject claims to empirical scrutiny, so the supposed security they bring is an illusion.

It’s too bad to see smart, contemporary methodologists stuck in the old positivistic, even verificationistic, presuppositions, perhaps blended into modern-day constructivisms of various sorts. We (some of us) tend to assume everyone has caught up by now, but saying more would take us too far afield. Although the philosophy of science/statistics I champion does not require a standpoint on metaphysics, it is rooted in having replaced discredited positivistic philosophies with more contemporary standpoints. Still, I don’t care in the least if someone wants to be a solipsist, a social or cognitive or what-have-you constructivist, or wants to see us as invariably “brains in a vat” or prisoners of our paradigms.

Nothing changes.

If you want to add to every claim “as grasped by s as intended in context x”, or even if one wants to add hyphens: “the data-as-s-sensed” or the like, we can still set about our business of learning about our vats, and successfully intervening in vat processes. So I think that ends this little detour.

Well, I personally don’t claim more than “local usefulness and local consistency” for the operational subjectivist approach. I don’t use it very often, as I said before. It is quite possible to say that, for addressing a specific problem, it is assumed for the moment (i.e., “locally”) that the “observables” can be observed and measured without problems, accepting that this generally doesn’t hold and can be discussed later for the specific case.

I don’t find a general rant, which should really be addressed to people who say that this is the one and only way to do statistics, to be an appropriate answer to the discussion above.


I totally agree with your diagnosis of the subjectivist gurus.

“in my work the frequentist option seems to be point estimation (usually maximum likelihood) …”

Would it be at all productive to classify that as a ‘classical’ option? Isn’t it permissible for a frequentist to use, say, a James-Stein estimator in this situation?

All: On my initial “query”, it now seems clear that the position I thought some were taking or trying out is just one of the standard subjective Bayesian views. But let me suggest that rather than deny that subjective probabilists place probabilities on a general claim or hypothesis H, they should simply view “the truth of H” the way some instrumentalists in philosophy do (i.e., what H says about observables is true).

Christian: (can’t reply under yours)

On the “local usefulness”, what’s the form of the output? The agent holds this distribution of outcomes, call it D, understood as his betting rates or anything you like. We can translate later, but is the output something like “agent holds (or believes) distribution D”? Or it could be an interval estimate, but how does one interpret it?

Generally something is “operational” only if we can check it, but the solipsist operates on his own terms.

Yes, the outcome is “agent holds (or believes) distribution D”, and the idea is that this determines how the agent would bet with you in case you offer him/her bets on future events covered by this, which you can check. (I am happy to admit that this rarely happens in reality but Bayesians tell me that it sometimes indeed does.)

Christian:

The de Finetti-style Bayesian comes out with the claim:

Claim C: For agent S, the probability distribution of observable events is D (or agent S would bet on events as if he assigned them distribution D)

The frequentist comes out with claim:

Claim C*: The probability distribution of observable events is D

C* being qualified by the relevant error probabilities associated with the method arriving at C*.

Neither is giving a posterior probability distribution to parameters or C, C*.

However, the de Finetti-style Bayesian knows, accepts, or believes claim C and no qualification is added. It is fully known.

The frequentist does not claim to know C*, but might say claim C* is warranted, qualified by the error probabilities associated with the method arriving at C*.

In particular, she would wish to report whether the method could readily have produced evidence in sync with claim C* even if C* is (specifiably) incorrect. This is more satisfactory than merely declaring C without qualification.

Notice that these error probabilities, associated with the method (enabling the frequentist to qualify the output C*), are the same kind of probabilities (i.e., of observable outcomes) that are supposedly hunky-dory for the de Finetti Bayesian as regards D. The event “outputting claim C*” is equivalent to “the set of observable outcomes that lead to C*”.
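[An aside not in the original exchange: the sense in which error probabilities attach to a method’s observable outputs can be sketched in a few lines. The Bernoulli/confidence-interval setup, the Wald interval, and all parameter values below are my illustrative assumptions, not anything either discussant specified; the point is only that the method’s coverage is itself a long-run frequency of observable outcomes (samples leading to a covering interval).]

```python
import random

def wald_interval(successes, n, z=1.96):
    """Wald (normal-approximation) 95% interval for a Bernoulli parameter."""
    phat = successes / n
    half = z * (phat * (1 - phat) / n) ** 0.5
    return phat - half, phat + half

def coverage(p_true, n, trials=5000, seed=0):
    """Long-run frequency with which the method's output interval covers p_true.

    This frequency is an error probability of the *method*: a probability
    of observable outcomes, namely the samples that lead the method to
    output an interval containing p_true.
    """
    rng = random.Random(seed)
    hits = 0
    for _ in range(trials):
        successes = sum(rng.random() < p_true for _ in range(n))
        lo, hi = wald_interval(successes, n)
        if lo <= p_true <= hi:
            hits += 1
    return hits / trials

# Simulated coverage for a hypothetical p = 0.3, n = 100: near (often a
# little below) the nominal 0.95, since the Wald interval is approximate.
print(round(coverage(0.3, 100), 3))
```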

Mayo: Well yes. And no. “The probability distribution of observable events is D” has a different meaning for the de Finetti-style Bayesian (dFB) and the frequentist. In frequentism, it is a model for how reality “produces” data. In de Finetti-subjectivism it is a model for how the agent should think. (I’m not saying that the agent really should think like that, but that there are situations in which this is a useful model.)

Typically, the dFB ends up with more complex distributions, namely ones that can be written down as continuous mixtures over parameters or even distributional shapes. Of course frequentists could use such distributions, too, but they usually don’t. On the other hand, the dFB could in principle specify as prior a mixture over all the candidate models that the frequentist considers while checking models. This doesn’t give error probabilities but a complex posterior that the dFB can still interpret by saying: “I assign probabilities to observable events as if they were generated by a mixture over the following frequentist models (…) with such-and-such posterior probabilities”. In principle, the dFB can handle as wide a scope of candidate models as the frequentist does when checking models, although this may end up in a computational mess, and they hardly ever do it (I won’t defend them for that).
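[Another aside not in the original thread: the “mixture over candidate models with posterior probabilities” idea is just Bayesian model averaging, and a toy version fits in a few lines. The two candidate Bernoulli models, the equal prior weights, and the data below are all my own illustrative assumptions.]

```python
from math import comb

def binom_lik(k, n, p):
    """Binomial likelihood of k successes in n trials under success rate p."""
    return comb(n, k) * p**k * (1 - p)**(n - k)

def posterior_model_probs(k, n, models=(0.5, 0.8), prior=(0.5, 0.5)):
    """Posterior weights over the candidate models, given k heads in n tosses."""
    joint = [w * binom_lik(k, n, p) for p, w in zip(models, prior)]
    total = sum(joint)
    return [j / total for j in joint]

def predictive_heads(k, n, models=(0.5, 0.8), prior=(0.5, 0.5)):
    """Posterior predictive probability that the next toss lands heads:
    a mixture over the candidate models, weighted by their posterior
    probabilities -- the kind of statement the dFB is described as making."""
    post = posterior_model_probs(k, n, models, prior)
    return sum(w * p for w, p in zip(post, models))

# 15 heads in 20 tosses shifts the posterior weight toward the p = 0.8 model,
# so the predictive probability lands between 0.5 and 0.8, closer to 0.8.
print(round(predictive_heads(15, 20), 3))
```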

Of course eventually the frequentist will have error probabilities, and the dFB will have subjective posterior probabilities. I think that indeed, for the kind of science you are interested in, error probabilities make more sense, for the reasons you state (I certainly don’t buy into the bit of the dFB ideology that claims that science should be based on subjective betting rates because they are “observable”). However, posterior probabilities work nicely in a range of practical decision problems, in which the affected subjects can agree on a prior, or a set of priors to be investigated, and where it is not that clear how to use error probabilities.

They can also be used for a number of other interesting considerations such as investigating the width of the range of justifiable beliefs from the same data given a range of prior beliefs.

I seem to repeat myself, and so do you… I’m the first to agree with you regarding misplaced universalist claims of the Bayesians (subjectivist or not), but I won’t sing “everything the Bayesians can do, the frequentists do better” either. Perhaps I’m just missing a bit of respect on your behalf for what they achieved (all justified criticism granted).

Christian: “‘The probability distribution of observable events is D’ has a different meaning for the de Finetti-style Bayesian (dFB) and the frequentist.”

Of course, but there’s no posterior in the (hypothesized) distribution D. I think people should be very unhappy with the idea of asserting, without any qualification, the knowledge or acceptance of:

Claim C: For agent S, the probability distribution of observable events is D (or agent S would bet on events as if he assigned them distribution D).

“In de Finetti-subjectivism it is a model for how the agent should think.”

No, it was to be the report of how this agent S does think, construed as betting ratios. Where does the “ought” come in?

This is one of the sources of equivocation*.

_____________

You wrote: “I assign probabilities to observable events as if they were generated by a mixture over the following frequentist models (…) with such-and-such posterior probabilities”.

Gee, this is so much less metaphysical/metaphorical than modeling a data generating process (of the sort we can and do actually create or simulate).

________________

But my main point is that (the DF Bayesian you describe) still does not have a posterior probability in a statistical “hypothesis”, (i.e., in a claim about the distribution of outcomes D). (This contrasts with standard Bayesians.)

Moreover, C is accepted or asserted tout court, without qualification of its warrant, whereas we would never assert C* without some indication of how well or poorly warranted it is (as given by relevant error probabilities of the method). Popper would call it the degree of the rationality of accepting C or C*, but to avoid the equivocation (below), he speaks of degree of severity or corroboration. I prefer to speak of how warranted or well-tested a statistical claim is.

*What one really wants (if there is to be a normative element) is something like the degree to which this fellow S is rational in holding D. And that gets you to error probabilities as a way to qualify the warrant in the assignment of probabilities to events.

Despite repetitions, I do think there are some new clarifications of the contrasts, problems…(I’m traveling, I hope this is semi-readable.)

“Of course, but there’s no posterior in the (hypothesized) distribution D. (…) No, it was to be the report of how this agent S does think, construed as betting ratios. Where does the “ought” come in?”

What do you mean by “there is no posterior”? Of course there is a prior, some new observations, and a posterior obtained by conditioning on new observations (assuming that there are some). And this is how the “ought” comes in: the coherence requirement means that you get from the prior to the posterior by Bayesian conditioning.

I’m not sure, but it may be that some misunderstanding comes in because it seems you think of the terms “prior” and “posterior” exclusively as distributions for parameters. But in de Finetti-subjectivism, they are predictive distributions for future data (“future” includes the actually observed data to be analysed in the case of the prior, but not the posterior). Under exchangeability it is possible to write them down using prior/posterior distributions of parameters combined with sampling distributions given the parameters, but they don’t have to look like this in general.
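[A small illustration, not in the original comment: the simplest case of a predictive prior/posterior for observable tosses, written via a parameter mixture as exchangeability permits. The uniform Beta(1, 1) prior and the coin-toss data are my illustrative assumptions.]

```python
# Prior and posterior here are *predictive* distributions for observable
# tosses, in the de Finetti sense; the Beta(a, b) parameter mixture is just
# one way of writing them down under exchangeability.
def predictive_prob_heads(k, n, a=1.0, b=1.0):
    """P(next toss is heads | k heads observed in n tosses), under a
    Beta(a, b) mixture over the Bernoulli parameter. With a = b = 1 this
    is Laplace's rule of succession: (k + 1) / (n + 2)."""
    return (a + k) / (a + b + n)

prior = predictive_prob_heads(0, 0)       # before any data: 0.5
posterior = predictive_prob_heads(7, 10)  # after 7 heads in 10 tosses: 8/12
print(prior, round(posterior, 3))
```

Conditioning on the data is exactly what carries the agent from the prior predictive (0.5) to the posterior predictive (8/12), which is the coherence requirement mentioned above.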

“Gee, this is so much less metaphysical/metaphorical than modeling a data generating process (of the sort we can and do actually create or simulate).”

Well, I haven’t claimed it is, and I don’t think it is. But it is not that much more metaphysical, either.

“Moreover, C is accepted or asserted tout court, without qualification of its warrant, whereas we would never assert C* without some indication of how well or poorly warranted it is.”

If a subjectivist wants to convince others that her results make sense, she should try hard to justify and explain her prior. Part of this process can be using past observations. I’m not going to buy any subjectivist analysis without a well motivated prior, and some subjectivists do me this favour indeed (some others don’t, and they get into trouble if they meet me as a reviewer; as you know there are also frequentists who don’t do much about warranting their model assumptions).

“ And this is how the “ought” comes in: the coherence requirement means that you get from the prior to the posterior by Bayesian conditioning.”

It is at best a prediction about how I expect agent S would bet in a series of bets on all the possibilities (note that even this is a generalization).

Testing would seem to become testing whether a hypothetical sequence of bets (never observed) accords with the predicted bets as stated in the distribution.

By the way, betting coherence is now deemed too strong by Bayesians, we’ve seen.

It seems, given the evidence, the last place one should look for “operationalizing” is elicitation by betting. Recall the discussion of Berger 2006, e.g., http://errorstatistics.com/2012/10/13/mayo-responds-to-u-phils-on-background-information/

“If a subjectivist wants to convince others that her results make sense, she should try hard to justify and explain her prior.”

What would need defending is claim C: Agent S ought to lay bets in accordance with distribution of outcomes D.

But as you’ve said, C is simply known or accepted without qualification. Even if he convinced me that this really, really is what he believes and how he would bet in a series of hypothetical scenarios, I haven’t a clue what I’d want to do with any of this in setting sail to arrive at (and critically evaluate) a warranted distribution (of outcomes) D.

“By the way, betting coherence is now deemed too strong by Bayesians, we’ve seen.”

Most of them aren’t de Finetti-type subjectivists, I guess. However, I do think that even a subjectivist may violate coherence at some point, claiming that for some reason she later realised that she got her initial prior so wrong that incoherence harms her betting chances less than sticking to the prior. (I normally use this to tease subjectivists who still hold coherence to be the most essential requirement, so please don’t use it to tease me… ;-)

“It is at best a prediction about how I expect agent S would bet in a series of bets on all the possibilities (note that even this is a generalization).”

What you expect is up to you but S may commit herself to betting according to her prior (before data) and her posterior (afterwards), and anybody else who trusts S could *decide* to do the same. This is not about what you predict but about what you consciously adopt (or not).

“Testing would seem to become testing whether a hypothetical sequence of bets (never observed) accords with the predicted bets as stated in the distribution.”

I think they’re not too interested in testing in the sense you like to do.

“But as you’ve said, C is simply known or accepted without qualification. Even if he convinced me that this really, really is what he believes and how he would bet in a series of hypothetical scenarios, I haven’t a clue what I’d want to do with any of this in setting sail to arrive at (and critically evaluate) a warranted distribution (of outcomes) D.”

This isn’t what I wanted to get at. If S is an expert of the area the data analysis is about, T may put some trust into an analysis based on S’s prior (not only in the fact that S really believes this) if S can convince T that this prior is reasonably based on proper trustworthy knowledge.

You’re missing what is relevant: trusting the expert refers to what they say being close to what is the case or what will or would occur; for example, that the predicted relative frequencies will be close to the actual relative frequencies of outcomes. Merely trusting S to correctly reflect on his beliefs is not the least bit relevant for prediction. Someone might truly believe they would bet on the unemployment rate going down to 4%. Were you to cash out what can really be meant by relevant trust, you will, by circuitous means, arrive where the frequentist starts out, and without all the metaphorical hoopla of betting.

Sorry, I think I misunderstood this: “Testing would seem to become testing whether a hypothetical sequence of bets (never observed) accords with the predicted bets as stated in the distribution.”

You mean “testing that S really would bet as stated”, don’t you?

Well, it isn’t impossible to do (and hence observe) some real betting. Some Bayesians do this sometimes, although probably not many, not often.

Christian: (My remark just cashes out what you wrote.)

Betting behavior is a terrible way to “operationalize” assessments of warranted belief in outcomes or their frequencies. In no other field I can think of would a purported way to measure something be retained by some people despite continual failure.

I hope you will go back and ponder my remarks in this post about “equivocations”.

Responding to the post before, it is up to you if you want to trust an expert’s prior or not. I’m with you if we are talking about discovering and confirming natural laws.

However, if I have to make a decision with a well constructed loss function for observable future events in a situation in which I have a limited amount of data and I’m not an expert myself but know one who is happy to provide (and motivate) a prior in a way that I feel reassured that she knows what she is doing, I may well decide that using her prior for a Bayesian analysis gives me a better basis for formal decision making than anything else I can think of.

Sorry, but I don’t really understand your “equivocation” remark.

Christian: I assume that by “knows what she’s doing” you do not mean merely that she is very accurately in touch with her beliefs; you must mean you think her beliefs reflect what is, or is likely to be, the case, at least approximately, and not just in the past but in the future. (It is an inductive claim, and a general one as well.)

Why the strange reluctance to say it like it is? Someone who is often wrong on subject X would not be called an expert on X*. Yet the position you describe would accept C with no qualification as to its warrant in relation to getting it right. Error statisticians (and others who share their requirements) have direct ways to characterize and appraise whether acting in accordance with C is warranted (to put it in the pragmatists’ language), or whether distribution D is reliable.

*Unless, of course they are stock analysts.

Mayo: To say “what is likely to be the case” seems to imply that there is an objective truth regarding this, whereas according to operationalist subjectivism there is an objective truth regarding what will be the case *when it actually is the case* but not regarding any probability/likelihood before.

I’d certainly demand some kind of qualification for C, but what kind of qualification this is, depends on the specific situation. Certainly the expert needs to explain properly and convincingly her reasons for C. As the whole thing is subjectivist, it is up to the individual to decide what they find convincing or not, but in a scientific framework one can certainly demand an in-depth discussion of possible arguments and objections.

Positive experience with past predictions of the expert may help, but that’s not the only thing that counts. The expert may know a lot that is relevant in the given situation but may not have made systematic predictions before, so there may be no basis for assessing her long-term prediction quality. Still, her information may be worthwhile to incorporate in formal decision making.

Christian: You wrote: “ according to operationalist subjectivism there is an objective truth regarding what will be the case *when it actually is the case*” but not before or after. (Like Alice, there is only jam tomorrow or yesterday but never jam today). But it just makes no sense for you to say there is an objective truth now about what is the case (independent of your belief). You can’t jump out of the subjective at every instant, you are stuck in it. By the time a sentence is uttered (even about current sense data) “I now appear to see a red blotch” the (alleged) objective truth is gone.

But of course it can never be objectively true if you are consistent, because it is all mere appearance. You can only say something like “I believe it is now the case” for an instant, not that it is the case. In short, your position, or the one you’re describing, just makes language and knowledge meaningless. You say “the expert may know a lot that is relevant in the given situation”, but no such knowledge can be had. What could it mean? Only something like: it appears to me now that what this expert believes to be the case, at the moment that it is the case, is true. Or some such gobbledygook.

“Positive experience with past predictions of the expert may help”. How can past predictions be relevant for things that are not yet the case, unless you accept the kind of regularity R that I first stated in this post? I am back where I started: you need to assume general claims, but without warrant.

Mayo: I accept that strictly speaking I cannot be 100% sure that what seems to be the case really is the case. I would avoid the term “objective”; I cited it from de Finetti.

I do however think that the concept of observability makes sense (although I will not claim that one can make this 100% precise, there will always be borderline cases), and that many people can agree on a statement of the kind that “the result of the next toss, or the next 100 tosses, of my coin is observable, but the true probability (in the frequentist sense) of heads is not.” Actually the whole concept of estimation in frequentism relies on such a distinction.

It is not needed to call observable outcomes “objective”, the observability distinction is enough.

Furthermore I don’t see why the dFB needs general regularities such as R. Life is complex and in any given situation there is some flexibility. She doesn’t have to accept either that past prediction successes *always* make future successes likely, or that they have nothing to do with the future at all. She can well say that this depends on how similar she considers the past circumstances to the future ones, and how much and what kind of other knowledge is available apart from past betting results. She’d use R as a hint, as something to take into account among other things, but not as something that automatically determines her probability assessments or something that can never be overridden.

You apparently either want general rules that can be tested by general tests, or you think nothing can be said at all. I disagree.

Christian: This is now getting more confused. General does not mean universal (though it does mean not merely particular). Inferring future cases based “on how similar she considers the circumstances in the past” is to hold a general inductive rule. It is not merely to stick to individual observables, or instantaneous sense data at time t. I would appraise the warrantedness of applying such a rule in the particular case at hand. I never spoke of “objective truth” (your term) or required it. Dashing off to a plane. Best if I leave off with the request that you reread the

Mayo: Well, I don’t feel confused yet, but what qualifies as a “rule” to you? According to how I use this term, something like “normally past prediction success under similar circumstances will increase my expectation of future prediction success” is not a “rule”, because I haven’t specified how exactly this has to be applied in general. In other words, you can’t predict what I’m going to do from this in any specific case, and I was imprecise on purpose because I want to reserve the right to apply this in whatever way seems reasonable to me (or even ignore it with reasons) in a given situation taking into account the individual circumstances.

“Have a look at past prediction successes and take that into account somehow” is not really a rule, or is it? At best it is a quite weak one.

Let’s not discuss “objectivity” any further. It’s true that I brought it in, but as I said before, this was citing de Finetti, and I’d rather avoid the term in my attempt here to make the best of his framework. (You didn’t use it here but elsewhere, though, connected to testability, if I’m right.)