3 YEARS AGO (JUNE 2012): MEMORY LANE


MONTHLY MEMORY LANE: 3 years ago: June 2012. I mark in red three posts that seem most apt for general background on key issues in this blog.[1] It was extremely difficult to pick only three this month; please check out others that look interesting to you. This new feature, appearing the last week of each month, began at the blog’s 3-year anniversary in Sept. 2014.

 

June 2012

[1] Excluding those recently reblogged. Posts that are part of a “unit” or a group of “U-Phils” count as one.

Categories: 3-year memory lane | 1 Comment


One thought on “3 YEARS AGO (JUNE 2012): MEMORY LANE”

  1. Huw Llewelyn (@HL327)

    You ask at the end of the last blog a question that leads on to this new blog: “Where are the philosophers?” In my MD thesis, I offered a description of what doctors and medical scientists appear to do in terms of probability and set theory (not what they ought to do). I did this from the point of view of a doctor and medical scientist who does it every day. I will try to kick off the discussion with an aspect of what I described in my MD thesis (and which is repeated in the final chapter of the Oxford Handbook of Clinical Diagnosis). So, what I will suggest again here is ‘descriptive’ philosophy as opposed to ‘normative’ philosophy (e.g. that someone ‘ought’ to be Bayesian or ‘ought’ to be a ‘frequentist’ statistician) and invite criticism if I reason incorrectly.

    In my experience, a diagnosis (e.g. Diabetes Mellitus) or hypothesis (e.g. that some drug benefits more patients with diabetes than it harms) is the title to one or more predictions with varying degrees of certainty about what has happened in the past, what is happening now and what may happen in the future. Some of these predicted phenomena may be about things that cannot be observed (e.g. the mean temperature of the seas millions of years ago) and others about things that can be observed (e.g. mean blood pressure readings over 24 hours). A working diagnosis becomes final and a hypothesis becomes a theory when there is no intention (or no prospect) of trying to test it by observation. A change of intention to make further observations can change a theory back to a hypothesis or working diagnosis.

    A probability statement under the heading of a hypothesis has to be based on some established observation(s) that serve as the conditional evidence. A ‘prior’ probability of an event (e.g. observing a study result of 234/567 (R) or a ‘true’ value of 0.198 (T)) has to be based on ‘universal’ evidence, U (e.g. conditional on all the facts about a study other than the actual study outcomes ‘R’ and ‘T’). The result (R) is an example of an observable prediction, but the ‘true’ result ‘T’, based on an infinite number of observations, is an example of a non-observable phenomenon. Nevertheless, the probabilities of the observed study result R and the true result T are both conditional on the universal evidence U (e.g. the features of the single study being considered).

    This in turn means that p(U/U^T) = 1 and p(U/U^R) = 1. If x and y are such that p(U^T/U) = x and p(U^R/U) = y, then there is a constant ‘k’ such that p(U^T/U^R) = x*k and p(U^R/U^T) = y*k. Rearranging, we get an expression for the inverse probability known as Bayes’ rule (by the usual convention, ‘U’ is omitted from the expression and taken as read):

    p(T/R) = p(T)*p(R/T)/p(R), because x*k = x*(y*k)/y.
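    For concreteness, here is a minimal numeric check of that rearrangement, with invented values for p(T^R/U), p(T/U) and p(R/U); taking k = p(T^R/U)/(x*y) is one way to make both equalities hold.

    ```python
    # Minimal numeric check of the rearrangement above; all values are invented
    # and every probability is implicitly conditional on the universal evidence U.
    p_T_and_R = 0.12   # p(T ^ R / U), purely illustrative
    x = 0.20           # p(T / U)
    y = 0.30           # p(R / U)

    # Taking k = p(T ^ R / U) / (x*y) makes both equalities hold:
    k = p_T_and_R / (x * y)
    p_T_given_R = x * k      # equals p(T ^ R / U) / p(R / U)
    p_R_given_T = y * k      # equals p(T ^ R / U) / p(T / U)

    # Bayes' rule: p(T/R) = p(T)*p(R/T)/p(R), i.e. x*k = x*(y*k)/y
    assert abs(p_T_given_R - x * p_R_given_T / y) < 1e-12
    print(f"{p_T_given_R:.3f} {p_R_given_T:.3f}")   # 0.400 0.600
    ```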

    Now if some fact (F) about the study becomes available after the result (R) has been seen, then this would have to be included in the updated conditional evidence statement for p(T/R^F), provided that F is also a proper subset of U so that p(U/F) = 1. This probability of T, given the combination of the new evidence F with R, may have to be guessed by making a statistical independence assumption, namely that p(R^F/T) = p(R/T)*p(F/T) (which may be wrong, hence the approximate equality ≈):

    p(T/R^F) ≈ p(T)*p(R/T)*p(F/T)/p(R^F)
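    To make the approximate equality concrete, here is a small sketch with an invented joint distribution for T, R and F (all implicitly conditional on U); it compares the exact p(T/R^F) with the value obtained under the independence assumption p(R^F/T) = p(R/T)*p(F/T).

    ```python
    # Hedged sketch of the independence approximation above; the joint probabilities
    # for (T, R, F) are invented purely for illustration, and every probability is
    # implicitly conditional on the universal evidence U.
    joint = {  # keys are (t, r, f): 1 = the event occurs, 0 = it does not
        (1, 1, 1): 0.10, (1, 1, 0): 0.02, (1, 0, 1): 0.03, (1, 0, 0): 0.05,
        (0, 1, 1): 0.05, (0, 1, 0): 0.15, (0, 0, 1): 0.20, (0, 0, 0): 0.40,
    }
    assert abs(sum(joint.values()) - 1.0) < 1e-12

    def prob(pred):
        """Probability of the event picked out by pred(t, r, f)."""
        return sum(v for (t, r, f), v in joint.items() if pred(t, r, f))

    p_T = prob(lambda t, r, f: t == 1)
    p_RF = prob(lambda t, r, f: r == 1 and f == 1)
    p_R_given_T = prob(lambda t, r, f: t == 1 and r == 1) / p_T
    p_F_given_T = prob(lambda t, r, f: t == 1 and f == 1) / p_T

    exact  = prob(lambda t, r, f: t == 1 and r == 1 and f == 1) / p_RF  # exact p(T/R^F)
    approx = p_T * p_R_given_T * p_F_given_T / p_RF                     # independence assumption

    print(f"exact  p(T/R^F) = {exact:.3f}")   # 0.667 with these invented numbers
    print(f"approx p(T/R^F) = {approx:.3f}")  # 0.520: off, because R and F are dependent given T
    ```

    With these invented numbers the approximation undershoots the exact posterior, which is the sense in which the ≈ above can be ‘wrong’.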

    The problem is that the evidence E1^E2 … ^En = U that was the conditional evidence for the prior probability is not usually expressed in a transparent way when a Bayesian prior probability is asserted. However, is there such consternation about including new evidence after the prior has been chosen and the study done because it cannot then be assumed that F is a subset of U (which has already been fixed in an implied, non-transparent way before the study was performed), or because the assumption of statistical independence may be wrong? Or is the consternation because the ‘new evidence’ may be that the posterior probability p(T/R) ‘looks odd’ and that the original prior p(T) may have been based on a poor guess, which Bayesians assume cannot happen because a prior, non-transparent personal belief cannot be ‘incorrect’, as it is simply a personal opinion? Or is it for some other reason that can be expressed in terms of the above reasoning and notation?

I welcome constructive comments for 14-21 days. If you wish to have a comment of yours removed during that time, send me an e-mail.
