The very fact that Jerzy Neyman considers she might have been playing a “mischievous joke” on Harold Jeffreys (concerning probability) is enough to intrigue and impress me (with Hosiasson!). I’ve long been curious about what really happened. Eleonore Stump, a leading medieval philosopher and friend (and one-time colleague), and I pledged to travel to Vilnius to research Hosiasson. I first heard her name from Neyman’s dedication of Lectures and Conferences in Mathematical Statistics and Probability: “To the memory of: Janina Hosiasson, murdered by the Gestapo,” along with around nine other “colleagues and friends lost during World War II.” (He doesn’t mention her husband Lindenbaum, shot alongside her*.) Hosiasson is responsible for what is now called Hempel’s Raven Paradox, and I definitely think we should be calling it Hosiasson’s (Raven) Paradox, given how much credit she has lost for her contributions to Carnapian confirmation theory[i].
But what about this mischievous joke she might have pulled off with Harold Jeffreys? Or did Jeffreys misunderstand what she intended to say about this howler, or? Since it’s a weekend and all of the U.S. monuments and parks are shut down, you might read this snippet and share your speculations…. The following is from Neyman 1952:
“Example 6.—The inclusion of the present example is occasioned by certain statements of Harold Jeffreys (1939, 300) which suggest that, in spite of my insistence on the phrase, “probability that an object A will possess the property B,” and in spite of the five foregoing examples, the definition of probability given above may be misunderstood. Jeffreys is an important proponent of the subjective theory of probability designed to measure the “degree of reasonable belief.” His ideas on the subject are quite radical. He claims (1939, 303) that no consistent theory of probability is possible without the basic notion of degrees of reasonable belief. His further contention is that proponents of theories of probabilities alternative to his own forget their definitions “before the ink is dry.” In Jeffreys’ opinion, they use the notion of reasonable belief without ever noticing that they are using it and, by so doing, contradict the principles which they have laid down at the outset.
The necessity of any given axiom in a mathematical theory is something which is subject to proof. … However, Dr. Jeffreys’ contention that the notion of degrees of reasonable belief and his Axiom 1 are necessary for the development of the theory of probability is not backed by any attempt at proof. Instead, he considers definitions of probability alternative to his own and attempts to show by example that, if these definitions are adhered to, the results of their application would be totally unreasonable and unacceptable to anyone. Some of the examples are striking. On page 300, Jeffreys refers to an article of mine in which probability is defined exactly as it is in the present volume. Jeffreys writes:
The first definition is sometimes called the “classical” one, and is stated in much modern work, notably that of J. Neyman.
However, Jeffreys does not quote the definition that I use but chooses to reword it as follows:
If there are n possible alternatives, for m of which p is true, then the probability of p is defined to be m/n.
He goes on to say:
The first definition appears at the beginning of De Moivre’s book (Doctrine of Chances, 1738). It often gives a definite value to a probability; the trouble is that the value is one that its user immediately rejects. Thus suppose that we are considering two boxes, one containing one white and one black ball, and the other one white and two black. A box is to be selected at random and then a ball at random from that box. What is the probability that the ball will be white? There are five balls, two of which are white. Therefore, according to the definition, the probability is 2/5. But most statistical writers, including, I think, most of those that professedly accept the definition, would give (1/2)•(1/2) + (1/2)•(1/3) = 5/12. This follows at once on the present theory, the terms representing two applications of the product rule to give the probability of drawing each of the two white balls. These are then added by the addition rule. But the proposition cannot be expressed as the disjunction of five alternatives out of twelve. My attention was called to this point by Miss J. Hosiasson.
The solution, 2/5, suggested by Jeffreys as the result of an allegedly strict application of my definition of probability is obviously wrong. The mistake seems to be due to Jeffreys’ apparently harmless rewording of the definition. If we adhere to the original wording (p. 4) and, in particular, to the phrase “probability of an object A having the property B,” then, prior to attempting a solution, we would probably ask ourselves the questions: “What are the ‘objects A’ in this particular case?” and “What is the ‘property B,’ the probability of which it is desired to compute?” Once these questions have been asked, the answer to them usually follows and determines the solution.
In the particular example of Dr. Jeffreys, the objects A are obviously not balls, but pairs of random selections, the first of a box and the second of a ball. If we like to state the problem without dangerous abbreviations, the probability sought is that of a pair of selections ending with a white ball. All the conditions of there being two boxes, the first with two balls only and the second with three, etc., must be interpreted as picturesque descriptions of the F.P.S. of pairs of selections. The elements of this set fall into four categories, conveniently described by pairs of symbols (1,w), (1,b), (2,w), (2,b), so that, for example, (2,w) stands for a pair of selections in which the second box was selected in the first instance, and then this was followed by the selection of the white ball. Denote by n_{1,w}, n_{1,b}, n_{2,w}, and n_{2,b} the (unknown) numbers of the elements of the F.P.S. belonging to each of the above categories, and by n their sum. Then the probability sought is…” (Neyman 1952, 10-11).
Then there are the detailed computations from which Neyman gets the right answer (entered 10/9/13):
P{w | pair of selections} = (n_{1,w} + n_{2,w})/n.
The conditions of the problem imply
P{1 | pair of selections} = (n_{1,w} + n_{1,b})/n = 1/2,
P{2 | pair of selections} = (n_{2,w} + n_{2,b})/n = 1/2,
P{w | pair of selections beginning with box No. 1} = n_{1,w}/(n_{1,w} + n_{1,b}) = 1/2,
P{w | pair of selections beginning with box No. 2} = n_{2,w}/(n_{2,w} + n_{2,b}) = 1/3.
It follows that
n_{1,w} = (1/2)(n_{1,w} + n_{1,b}) = n/4,
n_{2,w} = (1/3)(n_{2,w} + n_{2,b}) = n/6,
and hence
P{w | pair of selections} = (n/4 + n/6)/n = 5/12.
The method of computing probability used here is a direct enumeration of elements of the F.P.S. For this reason it is called the “direct method.” As we can see from this particular example, the direct method is occasionally cumbersome and the correct solution is more easily reached through the application of certain theorems basic in the theory of probability. These theorems, the addition theorem and the multiplication theorem, are very easy to apply, with the result that students frequently manage to learn the machinery of application without understanding the theorems. To check whether or not a student does understand the theorems, it is advisable to ask him to solve problems by the direct method. If he cannot, then he does not understand what he is doing.
Checks of this kind were part of the regular program of instruction in Warsaw where Miss Hosiasson was one of my assistants. Miss Hosiasson was a very talented lady who has written several interesting contributions to the theory of probability. One of these papers deals specifically with various misunderstandings which, under the high sounding name of paradoxes, still litter the scientific books and journals. Most of these paradoxes originate from lack of precision in stating the conditions of the problems studied. In these circumstances, it is most unlikely that Miss Hosiasson could fail in the application of the direct method to a simple problem like the one described by Dr. Jeffreys. On the other hand, I can well imagine Miss Hosiasson making a somewhat mischievous joke.
Some of the paradoxes solved by Miss Hosiasson are quite amusing…….” (Neyman 1952, 10-13)
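If you want to check the arithmetic by machine, here is a minimal Python sketch (mine, not Neyman’s; the `boxes` list and all the names in it are purely illustrative) contrasting the naive 2/5 count Jeffreys describes, the product-and-addition-rule calculation, and a direct enumeration of an F.P.S. of equally likely pairs of selections, with a quick Monte Carlo check at the end:

```python
from fractions import Fraction
from math import lcm
import random

# The two boxes in Jeffreys' example: one white + one black, one white + two black.
boxes = [["w", "b"], ["w", "b", "b"]]

# Naive count Jeffreys attributes to the reworded "classical" definition:
# 2 white balls among 5 balls in total.
naive = Fraction(sum(ball == "w" for box in boxes for ball in box),
                 sum(len(box) for box in boxes))

# Product and addition rules: P(w) = (1/2)(1/2) + (1/2)(1/3).
rule_based = sum(Fraction(1, len(boxes)) *
                 Fraction(sum(ball == "w" for ball in box), len(box))
                 for box in boxes)

# "Direct method": enumerate a fundamental probability set of equally likely
# pairs of selections (box, ball). With n = 12 elements, each (box 1, ball)
# pair occurs 3 times and each (box 2, ball) pair occurs 2 times, mirroring
# Neyman's counts n_{1,w} = n/4 and n_{2,w} = n/6.
n = lcm(*(len(boxes) * len(box) for box in boxes))   # n = 12
fps = [(i, ball)
       for i, box in enumerate(boxes, start=1)
       for ball in box
       for _ in range(n // (len(boxes) * len(box)))]
direct = Fraction(sum(ball == "w" for _, ball in fps), len(fps))

print(naive)       # 2/5  -- the "obviously wrong" answer
print(rule_based)  # 5/12
print(direct)      # 5/12

# Monte Carlo sanity check of the two-stage selection.
random.seed(0)
trials = 100_000
hits = sum(random.choice(random.choice(boxes)) == "w" for _ in range(trials))
print(hits / trials)  # close to 5/12, i.e. about 0.4167
```

With the twelve equally likely pairs written out, the enumeration and the product/addition rules agree on 5/12, and only the careless ball-counting yields 2/5.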
What think you? I will offer a first speculation in a comment.
The entire book, Neyman (1952), may be found in plain text here.
*June 2017: I read somewhere today that her husband was killed in 1941, and so before she was, but all of the references I know of are sketchy.
[i]Of course there are many good, recent sources on the philosophy and history of Carnap, some of which mention her, but they obviously do not touch on this matter. I read that Hosiasson was trying to build a Carnapian-style inductive logic by setting out axioms (which, to my knowledge, Carnap never did). That is what some of my own fledgling graduate-school attempts had tried, but the axioms always seemed to admit counterexamples (if non-trivial). So much for the purely syntactic approach. But I wish I’d known of her attempts back then, and especially of her treatment of the paradoxes of confirmation. (I’m sometimes tempted to give a logic for severity, but I fight the temptation.)
REFERENCES
Hosiasson, J. (1931). “Why Do We Prefer Probabilities Relative to Many Data?” Mind 40(157): 23-36.
Hosiasson-Lindenbaum, J. (1940). “On Confirmation.” Journal of Symbolic Logic 5(4): 133-148.
Hosiasson, J. (1941). “Induction et analogie: Comparaison de leur fondement.” Mind 50(200): 351-365.
Hosiasson-Lindenbaum, J. (1948). “Theoretical Aspects of the Advancement of Knowledge.” Synthese 7(4/5): 253-261.
Jeffreys, H. (1939). Theory of Probability (1st ed.). Oxford: The Clarendon Press.
Neyman, J. (1952). Lectures and Conferences in Mathematical Statistics and Probability. Graduate School, U.S. Department of Agriculture.