At the end of last year I received a strange email from the editor of the British Medical Journal (BMJ) appealing for ‘evidence’ to persuade the UK parliament of the necessity of making sure that data for clinical trials conducted by the pharmaceutical industry are made readily available to all and sundry. I don’t disagree with this aim. In fact, in an article (1) I published over a dozen years ago I wrote: ‘No sponsor who refuses to provide end-users with trial data deserves to sell drugs.’ (p. 26)
However, the way in which the BMJ is choosing to collect evidence does not set a good example. It is one I hope that all scientists would disown and one of which even journalists should be ashamed.
The letter reads:
“Dear Prof Senn,
We need your help to show the House of Commons Science and Technology Select Committee the true scale of the problem of missing clinical data by collating a list of examples.
The BMJ has documented problems the Cochrane Collaboration had in receiving enough data to fully scrutinise the clinical effectiveness of Tamiflu; but that is just the tip of the iceberg.
We would like to see any report of obstruction, from any researchers or companies, which you are aware of. This can be from your own work, the work of others that you have read, or media reports.
Could I ask you to fill in our quick online form, we will then collate the details and publish in this publically available spreadsheet.
Together we can convince the UK government to take this problem seriously.
Editor in Chief, BMJ”
When you click on the URL you get to a site which includes the following message.
“The BMJ has highlighted a few drugs of concern, most notably Tamiflu. However we are very aware that they are just the tip of the iceberg. To show the true scale of the problem we want to collate an exhaustive list of drugs where data has (sic) been hidden from public scrutiny.”
The irony in all this is that, as the Evidence Based Medicine (EBM) movement keeps reminding us, missing evidence is a problem not just because any loss of evidence is to be deplored but because the sort of evidence that is missing is unlike that which is present. Hence if we naively use the evidence that is available, it is not just that we lose certainty: we make judgements with false certainty.
However, a point the EBM movement sometimes fails to appreciate is that what applies to evaluating trials applies to evaluating evidence generally. What sort of an example is the BMJ setting by collecting evidence in this way? In a recent blog I pointed out that Ben Goldacre had misinterpreted the evidence on editorial bias by overlooking this difficulty. (See also my online articles on the subject(2, 3).)
Furthermore, as Adam Jacobs has pointed out, there is something very strange about the Tamiflu story. It seems that Roche has already provided much of the data, including all the standard summaries. Presumably the BMJ would like them to provide more, but is their attitude fair? Is it the case, for example, that access to all original documents of any sort whatsoever related to a clinical trial is usually required for meta-analyses, such as those provided by the Cochrane Collaboration (CC) or those published in the BMJ? If not, what exactly is going on here?
In fact neither the CC nor the BMJ requires access to original data for a meta-analysis. Here is an example from one published only last year in the BMJ.
“Independent reviewers extracted means (final scores or change score), standard deviations, and sample sizes from studies using a standardised data extraction form. When there was insufficient information in trial reports, we contacted authors or estimated data using methods recommended in the Cochrane Handbook for Systematic Reviews of Interventions. Briefly, when the mean was not reported we used the median; when standard deviations could not be estimated, we adopted the standard deviation from the most similar study”(4) (My emphasis.)
I am not suggesting that this is a bad meta-analysis (it strikes me as above average), but if the editor of the BMJ can publish meta-analyses using summaries and, in some cases, imputed data, how can she justify her stand on Tamiflu?
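To make concrete what the quoted passage describes, here is a minimal sketch (my own illustration, not code from the review) of the two Cochrane Handbook fallbacks it mentions: substituting a reported median when the mean is missing, and borrowing the standard deviation from the most similar study when it cannot be estimated. The study records and field names are hypothetical.

```python
def impute_summary(study, similar_study):
    """Return (mean, sd) for a trial report, imputing missing summaries.

    Illustrative only: mirrors the two fallbacks quoted above from the
    Cochrane Handbook as described in reference (4).
    """
    mean = study.get("mean")
    if mean is None:
        # Fallback 1: when the mean is not reported, use the median
        mean = study["median"]
    sd = study.get("sd")
    if sd is None:
        # Fallback 2: adopt the SD from the most similar study
        sd = similar_study["sd"]
    return mean, sd

# Hypothetical trial reporting only a median; neighbour supplies the SD
trial = {"median": 4.2, "n": 60}
neighbour = {"mean": 4.0, "sd": 1.1, "n": 55}
print(impute_summary(trial, neighbour))  # (4.2, 1.1)
```

The point is not that such imputation is illegitimate (the Handbook sanctions it) but that a meta-analysis built this way plainly does not rest on access to original patient-level data.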
The position the BMJ and others are taking could be parodied like this: ‘The methods employed by journals to check the accuracy of data in studies they publish are cursory, with minimal time allowed for statisticians to check analyses and original data hardly ever being provided to them. On the other hand, regulatory clinical trial reports are checked in detail by the regulator. Our first priority is to check again those trials that have already been checked in detail. We are unconcerned about any we publish that have scarcely been checked at all.’
If anybody doubts how useless the medical journals are at detecting gross data errors, and how slow they are to correct them once found, they should have a look at the very nice paper (5) by Keith Baggerly and Kevin Coombes published in the Annals of Applied Statistics, or simply Google ‘Baggerly Coombes’ for a mass of very interesting stories on the web. It seems to me that the BMJ would be much more effective if it set an example. Could it not ask its readers to report experiences (good and bad) they have had trying to get hold of original data from BMJ articles?

*Head of Competence Center for Methodology and Statistics (CCMS)
These are my personal views and should not be ascribed to any organisations past or present with whom I am or have been associated. My declaration of interest is here.
1. Senn SJ. Statistical quality in analysing clinical trials. Good Clinical Practice Journal. 2000;7(6):22-6.
2. Senn S. Misunderstanding publication bias: editors are not blameless after all. F1000Research. 2012;1.
3. Senn S. Authors are also reviewers: problems in assigning cause for missing negative studies. F1000Research. 2013;2.
4. Pinto RZ, Maher CG, Ferreira ML, Ferreira PH, Hancock M, Oliveira VC, et al. Drugs for relief of pain in patients with sciatica: systematic review and meta-analysis. BMJ. 2012;344:e497. Epub 2012/02/15.
5. Baggerly K, Coombes KR. Deriving chemosensitivity from cell lines: forensic bioinformatics and reproducible research in high-throughput biology. The Annals of Applied Statistics. 2009;3(4):1309-34.