S. Stanley Young, PhD
Assistant Director for Bioinformatics
National Institute of Statistical Sciences
Research Triangle Park, NC
Here are Dr. Stanley Young’s slides from our April 25 seminar. They contain several tips for unearthing deception in fraudulent p-value reports. Since it’s Saturday night, you might wish to perform an experiment with three 10-sided dice*, recording the results of 100 rolls (3 at a time) on the form on slide 13. An entry, e.g., (0,1,3), becomes an imaginary p-value of .013 associated with the type of tumor, male vs. female, or old vs. young. You report only hypotheses whose null is rejected at a “p-value” less than .05. Forward your results to me for publication in a peer-reviewed journal.
*Sets of 10-sided dice will be offered as a palindrome prize beginning in May.
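If you lack dice, the experiment is easy to mimic in a few lines of Python. This is only a sketch of the exercise described above: each simulated roll of three 10-sided dice (digits 0–9) is read as an imaginary p-value 0.d1d2d3, and we "report" only the triples that come in under .05 — pure noise, yet roughly 5 of the 100 will look significant.

```python
import random

random.seed(1)  # fix the seed so the "experiment" is reproducible

# Roll three 10-sided dice (faces 0-9), 100 times.
rolls = [(random.randrange(10), random.randrange(10), random.randrange(10))
         for _ in range(100)]

# Read each triple (d1, d2, d3) as an imaginary p-value 0.d1d2d3.
p_values = [(100 * a + 10 * b + c) / 1000 for a, b, c in rolls]

# Report only the hypotheses whose null is "rejected" at p < .05.
significant = [p for p in p_values if p < 0.05]
print(len(significant), sorted(significant))
```

On average about 5 of the 100 imaginary p-values will fall below .05, ready for forwarding to a peer-reviewed journal.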
The key statement is ‘if you report just the significant ones’. The P-value per test is controlled provided that each test is performed correctly. Of course the P-values within a given trial are not plausibly independent, and dependence may cause some difficulties for interpretation. Nevertheless, if the null is true, the expected proportion of P-values less than 0.05 (say) will not be more than 1/20, however many tests you do.
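The 1/20 claim is easy to check numerically. A minimal sketch, assuming each test is performed correctly so that its P-value under a true null is uniform on (0, 1):

```python
import random

random.seed(2)  # reproducible

# Under true nulls, correctly computed P-values are uniform on (0, 1),
# so the expected proportion below 0.05 is 1/20 no matter how many
# tests are run; selective reporting only makes the survivors look special.
n_tests = 100_000
p_values = [random.random() for _ in range(n_tests)]
proportion = sum(p < 0.05 for p in p_values) / n_tests
print(proportion)  # close to 0.05
```

The proportion hovers near 0.05 however large `n_tests` is made; what grows with the number of tests is the absolute count of "significant" noise available for selective reporting.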
The main sin of multiplicity is doing lots of stuff and only reporting what appears interesting. This is actually a problem whether or not you do significance tests.
1. Senn S, Bretz F. Power and sample size when multiple endpoints are considered. Pharmaceutical Statistics 2007; 6: 161–170.