Next Phil Stat Forum: January 7: D. Mayo: Putting the Brakes on the Breakthrough (or “How I used simple logic to uncover a flaw in …..statistical foundations”)

Categories: Birnbaum, Birnbaum Brakes, Likelihood Principle

5 thoughts on “Next Phil Stat Forum: January 7: D. Mayo: Putting the Brakes on the Breakthrough (or “How I used simple logic to uncover a flaw in …..statistical foundations”)”

1. Yusuke Ono

Thank you very much for your wonderful papers and presentation.

Yesterday, I commented in the seminar that the two likelihoods in your Example 1 aren't proportional.

But, as you replied, example (ii) in Cox (1978, p. 53) is the same as your Example 1, and there it is stated that the two likelihoods are identical.

A Japanese mathematician, Prof. Gen Kuroki, has also pointed out to me on Twitter that the two likelihoods in your Example 1 are identical.

I am now checking whether what I said was wrong, but I am very confused. If anyone has more information or a mathematical derivation, I would appreciate a hint.

Sorry for this confusion.

• Yusuke Ono

Now, I understand I was wrong.

A Japanese mathematician, Prof. Gen Kuroki, explained to me why the two likelihoods in your Example 1 are identical.

In the Q&A at yesterday's session, I gave false information (I said that the two likelihoods aren't proportional).

I am very sorry for my misunderstanding.

If you are interested in why I misunderstood, I would be happy to explain it somewhere.

• Yusuke: A lot of people get this wrong, which is why I showed the equation from Cox and Hinkley (1974) in the “supplement” I prepared for my presentation yesterday. The thing is that the example is given by BOTH sides, so if there were anything wrong with it, it would be a problem for them in their choice of example. There are zillions of LP violations: any use of a confidence level, p-value, standard error, or error probability will do, and one doesn’t even need to name an example to make the points on either side. I used this dramatic example because, amazingly enough, it’s one the pro-LP people are happy about (i.e., they don’t think it should matter if you’re guaranteed to reject a null hypothesis erroneously). Less dramatic examples abound (e.g., binomial vs. negative binomial). It seemed quicker in a presentation to have an example where the LP violation was a difference in p-values, rather than keep saying “an LP violation”.
If you want to explain your point, you are welcome to do so in a comment here. Thank you for your interest.
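[The binomial vs. negative binomial case mentioned above can be sketched numerically. The data (9 heads and 3 tails) and the stopping rule (toss until the 3rd tail) are assumed here for illustration; they are not from the post. The two likelihood functions of θ differ only by a constant factor, so the LP says the inferences should agree, yet the one-sided p-values for testing θ = 1/2 differ:

```python
from math import comb

# Assumed illustrative data: 9 heads, 3 tails; test theta = 1/2 vs. theta > 1/2.
theta0 = 0.5
heads, tails = 9, 3
n = heads + tails  # 12 tosses in total

# Fixed-n binomial experiment: p-value = P(X >= 9 heads in 12 tosses)
binom_p = sum(comb(n, k) for k in range(heads, n + 1)) * theta0**n  # ~0.073

# Negative binomial experiment (toss until the 3rd tail):
# p-value = P(at least 9 heads occur before the 3rd tail)
negbin_p = 1 - sum(
    comb(k + tails - 1, tails - 1) * theta0**k * (1 - theta0)**tails
    for k in range(heads)
)  # ~0.0327 -- a different error probability for the same data

# The two likelihood functions of theta, however, are proportional:
def lik_binom(t):
    return comb(n, heads) * t**heads * (1 - t)**tails          # 220 * t^9 (1-t)^3

def lik_negbin(t):
    return comb(n - 1, tails - 1) * t**heads * (1 - t)**tails  # 55 * t^9 (1-t)^3

# The ratio is the constant 220/55 = 4 for every theta (up to floating point),
# so the LP violation shows up only in the p-values, not in the likelihoods.
ratios = [lik_binom(t) / lik_negbin(t) for t in (0.1, 0.3, 0.5, 0.7, 0.9)]
```

So the same 9-heads-3-tails data yield p ≈ 0.073 under the fixed-n design and p ≈ 0.033 under the stop-at-the-3rd-tail design, while the likelihood functions coincide up to a constant.]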

• Yusuke Ono

Thank you for your reply, and I am sorry again for my confusion.

Let me just explain where I went wrong. I calculated the density function conditioned on n, Pr(X1 = x1, X2 = x2, …, Xn = xn | N = n), also for the Trying and Trying Again experiments. I should not have conditioned on n. The density function I needed to calculate is the unconditional one, Pr(X1 = x1, X2 = x2, …, Xn = xn, N = n). This unconditional likelihood is identical to the one for the fixed-n experiment.
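[The conditional-vs-unconditional distinction can be seen in a minimal sketch. The setup below is assumed for illustration and is much simpler than the Trying and Trying Again experiment: Bernoulli tosses with a "stop at the first success" rule. Since the stopping rule depends only on the data, the joint probability Pr(x1, …, xn, N = n) equals the fixed-n likelihood of the same sequence, whereas conditioning on N = n divides by P(N = n | θ) and (here) wipes out the θ-dependence entirely:

```python
# Assumed toy data: tosses observed under a stop-at-first-success rule.
seq = [0, 0, 0, 1]
n = len(seq)

def joint_lik(theta):
    # Pr(X1..Xn = seq, N = n | theta): the stopping rule contributes no
    # theta-dependence, only an indicator that the sequence obeys the rule.
    return (1 - theta)**(n - 1) * theta

def fixed_n_lik(theta):
    # Likelihood of the same sequence in a fixed-n (n = 4) experiment.
    p = 1.0
    for x in seq:
        p *= theta if x else (1 - theta)
    return p

def conditional_lik(theta):
    # Pr(seq | N = n, theta) = joint / Pr(N = n | theta). With this stopping
    # rule, P(N = n | theta) equals the joint itself, so conditioning on n
    # leaves a constant: the theta-dependence is gone.
    return joint_lik(theta) / ((1 - theta)**(n - 1) * theta)
```

For any θ in (0, 1), `joint_lik(θ)` equals `fixed_n_lik(θ)`, while `conditional_lik(θ)` is constant, which is the mistake described above: conditioning on n changes (and here destroys) the likelihood function.]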

2. This discussion, and any additional questions, will continue at phil-stat-wars.com. I just replied to a comment that had been in spam:
https://phil-stat-wars.com/2020/12/03/january-7-on-the-birnbaum-argument-for-the-strong-likelihood-principle-deborah-mayo/comment-page-1/#comment-166