
**Class, Part 2: A. Spanos:**

Probability/Statistics Lecture Notes 1: Introduction to Probability and Statistical Inference

Day #1 slides are here.





Unauthorized use and/or duplication of this material without express and written permission from this site’s author and/or owner is strictly prohibited.

Excerpts and links may be used, provided that full and clear credit is given to Deborah G. Mayo and Error Statistics Philosophy with appropriate and specific direction to the original content.

Very nice, thanks! There is one thing I'm very curious about: where did Aris get his knowledge of what the "medieval soldiers" knew before Cardano? Are there sources for this?

He may not read this, so can you please pass the question on to him?

Thanks!

Christian: Thanks I will. I know it’s in his book.

Catching up on the readings …

The medieval soldiers bit might just be reasonable speculation. Of course, the soldiers would only need to observe that dicers whose lucky number was 7 tended to end up better off than dicers whose “lucky number” was 6 or 8.

The even-versus-odd sum for rolls of a pair of dice doesn't need the elaborate working out of the 36 possibilities to get a correct answer. If one of the dice has a 50 percent chance of being odd, there is a 50 percent chance that the sum of two dice will be odd. This is true regardless of how many sides the second die has, or what the number pattern is on the second die, or what the probabilities are of rolling each number on the second die.
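This parity claim is easy to check exactly. Here is a minimal Python sketch; the second die's faces and weights are made up for illustration (four sides, arbitrary numbers, unequal probabilities):

```python
from fractions import Fraction

# First die: fair six-sided, so P(odd face) = 1/2
die1 = {face: Fraction(1, 6) for face in range(1, 7)}

# Second die: deliberately weird -- hypothetical faces and weights
die2 = {2: Fraction(1, 2), 7: Fraction(1, 4), 10: Fraction(1, 8), 13: Fraction(1, 8)}

# Exact probability that the sum of the two dice is odd
p_odd_sum = sum(p1 * p2
                for f1, p1 in die1.items()
                for f2, p2 in die2.items()
                if (f1 + f2) % 2 == 1)

print(p_odd_sum)  # 1/2, no matter what die2 looks like
```

Changing die2's faces or weights (as long as they sum to 1) leaves the answer at 1/2, which is the point: conditioning on the second die's value, the sum is odd exactly when the first die's parity differs, and that always has probability 1/2.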

It is interesting that if the first die is our familiar six-sided die with equal chances of digits 1 through 6, then there is a one-third chance that the sum of two dice will be divisible by 3, regardless of the numbers or chances on the second die. This pattern (1/2 chance of number divisible by 2, 1/3 of number divisible by 3) does not extend to numbers divisible by 4 or 5, but holds for numbers divisible by 6.

This can all be seen by conditioning on the number x shown by the second die, without knowing what x is.

I have no idea whether medieval soldiers, or Cardano for that matter, realized this, although it seems to me more graspable by intuition than the more elaborate working out of the 36 possibilities.
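The divisibility pattern can be verified the same way. A sketch, again with a made-up lopsided second die: the fair six-sided first die has residues mod m uniformly distributed for m = 2, 3, and 6 (each residue class among the faces 1–6 is equally likely), but not for m = 4 or 5, which is exactly where the pattern breaks:

```python
from fractions import Fraction

die1 = {face: Fraction(1, 6) for face in range(1, 7)}  # fair six-sided
die2 = {3: Fraction(2, 5), 8: Fraction(2, 5), 11: Fraction(1, 5)}  # hypothetical

def p_sum_divisible(m):
    """Exact probability that the sum of the two dice is divisible by m."""
    return sum(p1 * p2
               for f1, p1 in die1.items()
               for f2, p2 in die2.items()
               if (f1 + f2) % m == 0)

for m in (2, 3, 4, 5, 6):
    print(m, p_sum_divisible(m))
# m = 2, 3, 6 give exactly 1/2, 1/3, 1/6; m = 4 and 5 do not give 1/4 and 1/5
```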

One of the noteworthy issues that arose in our seminar is the difference between the way philosophers tend to talk about probability, namely in terms of statements corresponding to the occurrence of events and the truth functions ("and", "or", "not", …) applied to them. This contrasts with the corresponding set-theoretic operations and terms, and I cannot remember having seen a complete "translation".

In any event, it’s clear that philosophers of probability don’t generally use “random variables”. Terms like (X=x) are considered foreign. But they too could be precisely defined within a formal system of (quantified) logic, with identity. It would be good to demystify random variables. (I’m not saying I’ve succeeded.)

Consider a finite outcome set S = {s1, s2, s3, …, sk}.

A random variable is relative to an event space F: a field associated with the space of events of interest.

A simple random variable X, with respect to the event space F, is a function assigning a real number to each member of S, satisfying certain conditions (so as to "preserve the event structure" of the event space F).

A random variable assigns real numbers to the si. Each si gets mapped to a real number by X.

Say S consists of outcomes of two coin tosses:

S = {(T,T), (T,H), (H,T), (H,H)}

If one is interested in the number of “heads” in the 2 coin tosses, random variable X may assign numbers as follows:

X(T,T) = 0

X(T,H) = 1

X(H,T) = 1

X(H,H) = 2

(X = 1) is a shorthand for the set consisting of the members of S that X maps to 1:

{the si that X maps to 1} = {s: X(s) = 1}

So it’s a shorthand for an event.

In general,

(X = x) is a shorthand for {s: X(s) = x}, that is, “the set of si that X maps to x” (lower case x is the value of X)

The above assignment corresponds to 3 events of interest:

A0 = {s: X(s) = 0} = {(T,T)}

A1 = {s: X(s) = 1} = {(T,H), (H,T)}

A2 = {s: X(s) = 2} = {(H,H)}

Event space corresponding to X:

F = { S, { }, {(T,T)}, {(H,H)}, {(T,H), (H,T)}, {(T,T), (H,H)}, {(T,H), (H,T), (H,H)}, {(T,H), (H,T), (T,T)} }

Suppose on the other hand that:

X(T,T) = 0

X(T,H) = 1

X(H,T) = 3

X(H,H) = 2

(X = 3) = {(H,T)}, but this is not an element of the event space F.
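The construction above can be made concrete in a few lines of Python. This is a sketch: the field is generated by taking all unions of the preimage cells A0, A1, A2, which works here because those preimages partition S:

```python
from itertools import combinations

# Outcome set for two coin tosses
S = [("T", "T"), ("T", "H"), ("H", "T"), ("H", "H")]

def X(s):
    """Number of heads in the outcome s."""
    return sum(1 for toss in s if toss == "H")

# Preimages (X = x): the events of interest A0, A1, A2
cells = [frozenset(s for s in S if X(s) == x) for x in sorted({X(s) for s in S})]

# Event space F generated by the partition {A0, A1, A2}:
# all unions of cells, including the empty union (the empty set)
F = set()
for r in range(len(cells) + 1):
    for combo in combinations(cells, r):
        F.add(frozenset().union(*combo))

print(len(F))  # 8 events, matching the list above

# Second assignment: (X = 3) = {(H,T)} is NOT an event in F,
# since any event in F containing (H,T) also contains (T,H)
print(frozenset([("H", "T")]) in F)  # False
```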

Send thoughts, corrections. For a statistician, I realize, these are minutiae.

For a finite set one would normally use the power set as the event space, so that one doesn't have to worry about sets that may not be members of the event space.

Otherwise it looks alright.

Christian: That was one of the points that arose in trying to define random variables. Once "events of interest" are identified, the corresponding field must reflect them.

Why would one want to restrict the field more than what would be mathematically necessary? (I’m not an expert in quantum physics and rumor has it that there are reasons for doing such things there but I haven’t yet come across a situation in which there were compelling reasons. How can it hurt to be able to handle more probabilities than one is interested in initially?)

Christian: I take the point to be that if one is interested in a subset of events, e.g., the number of heads in two tosses, then the values of the random variable must correspond to those events; otherwise the probabilities don't add up.