At the end of last year I received a strange email from the editor of the British Medical Journal (BMJ) appealing for ‘evidence’ to persuade the UK parliament of the necessity of making sure that data for clinical trials conducted by the pharmaceutical industry are made readily available to all and sundry. I don’t disagree with this aim. In fact, in an article(1) I published over a dozen years ago, I wrote ‘No sponsor who refuses to provide end-users with trial data deserves to sell drugs.’ (p. 26)
However, the way in which the BMJ is choosing to collect evidence does not set a good example. It is one I hope that all scientists would disown and one of which even journalists should be ashamed.
The letter reads
“Dear Prof Senn,
We need your help to show the House of Commons Science and Technology Select Committee the true scale of the problem of missing clinical data by collating a list of examples.
The BMJ has documented problems the Cochrane Collaboration had in receiving enough data to fully scrutinise the clinical effectiveness of Tamiflu; but that is just the tip of the iceberg.
We would like to see any report of obstruction, from any researchers or companies, which you are aware of. This can be from your own work, the work of others that you have read, or media reports.
Could I ask you to fill in our quick online form, we will then collate the details and publish in this publically available spreadsheet.
Together we can convince the UK government to take this problem seriously.
Yours Sincerely,
Fiona Godlee
Editor in Chief, BMJ “
When you click on the URL you get to a site which includes the following message.
“The BMJ has highlighted a few drugs of concern, most notably Tamiflu. However we are very aware that they are just the tip of the iceberg. To show the true scale of the problem we want to collate an exhaustive list of drugs where data has (sic) been hidden from public scrutiny.”
The irony in all this is that, as the Evidence Based Medicine (EBM) movement keeps reminding us, missing evidence is a problem not just because any loss of evidence is to be deplored but because the sort of evidence that is missing is unlike that which is present. Hence if we naively use the evidence we have, it is not just that we lose certainty but that we make judgements with false certainty.
However, a point the EBM movement sometimes fails to appreciate is that what applies to evaluating trials applies to evaluating evidence generally. What sort of an example is the BMJ setting by collecting evidence in this way? In a recent blog I pointed out that Ben Goldacre had misinterpreted the evidence on editorial bias by overlooking this difficulty. (See also my online articles on the subject(2, 3).)
Furthermore, as Adam Jacobs has pointed out, there is something very strange about the Tamiflu story. It seems that Roche has already provided much of the data, including all the standard summaries. Presumably the BMJ would like them to provide more, but is their attitude fair? Is it the case, for example, that access to all original documents of any sort whatsoever related to a clinical trial is usually required for meta-analyses, such as those provided by the Cochrane Collaboration (CC) or those published in the BMJ? If not, what exactly is going on here?
In fact neither the CC nor the BMJ requires access to original data for a meta-analysis. Here is an example from a meta-analysis published only last year in the BMJ.
“Independent reviewers extracted means (final scores or change score), standard deviations, and sample sizes from studies using a standardised data extraction form. When there was insufficient information in trial reports, we contacted authors or estimated data using methods recommended in the Cochrane Handbook for Systematic Reviews of Interventions. Briefly, when the mean was not reported we used the median; when standard deviations could not be estimated, we adopted the standard deviation from the most similar study”(4) (My emphasis.)
I am not suggesting that this is a bad meta-analysis; it strikes me as being above average. But if the editor of the BMJ can publish meta-analyses using summaries and, in some cases, imputed data, how can she justify her stand on Tamiflu?
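To make concrete the sort of imputation the quoted passage describes, here is a minimal sketch in Python. It is not the Pinto et al. analysis; the trial names and numbers are invented, and taking ‘closest sample size’ as the measure of similarity is purely an assumption for illustration.

```python
# A minimal sketch (not the Pinto et al. analysis) of the kind of imputation the
# quoted passage describes: when a trial reports no mean we fall back on the
# median, and when no standard deviation can be estimated we borrow the SD from
# the most "similar" study. All trial names and numbers are invented.

import math

# Reported summaries for the treatment arm of three hypothetical trials.
trials = [
    {"name": "Trial A", "n": 60, "mean": 12.1, "sd": 4.0},
    {"name": "Trial B", "n": 45, "mean": None, "median": 11.5, "sd": 3.6},  # mean missing
    {"name": "Trial C", "n": 52, "mean": 13.0, "sd": None},                 # SD missing
]

def impute(trial, all_trials):
    """Return (mean, sd) after the crude imputations described in the text."""
    mean = trial["mean"] if trial["mean"] is not None else trial["median"]
    sd = trial["sd"]
    if sd is None:
        # Borrow the SD from the most "similar" study; here similarity is taken,
        # purely for illustration, to mean closest sample size.
        donors = [t for t in all_trials if t["sd"] is not None]
        donor = min(donors, key=lambda t: abs(t["n"] - trial["n"]))
        sd = donor["sd"]
    return mean, sd

# Fixed-effect (inverse-variance) pooling of the arm means, just to show how the
# imputed numbers propagate into the summary estimate.
weights, weighted_means = [], []
for t in trials:
    mean, sd = impute(t, trials)
    var = sd ** 2 / t["n"]          # variance of the arm mean
    weights.append(1 / var)
    weighted_means.append(mean / var)

pooled = sum(weighted_means) / sum(weights)
se = math.sqrt(1 / sum(weights))
print(f"Pooled mean: {pooled:.2f} (SE {se:.2f})")
```

Even on this toy example the point is visible: the pooled estimate rests partly on a median standing in for a mean and on a standard deviation borrowed from a neighbouring trial, which is exactly the sort of thing access to the original data would settle.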
The position the BMJ and others are taking could be parodied like this: ‘The methods employed by journals to check the accuracy of data in studies they publish are cursory, with minimal time allowed for statisticians to check analyses and original data hardly ever being provided to them. On the other hand, regulatory clinical trial reports are checked in detail by the regulator. Our first priority is to check again those trials that have already been checked in detail. We are unconcerned about any we publish that have scarcely been checked at all.’
If anybody doubts how useless the medical journals are at detecting gross data errors and how slow they are to correct them once found, they should have a look at this very nice paper(5) written by Keith Baggerly and Kevin Coombes published in the Annals of Applied Statistics, or simply Google ‘Baggerly Coombes’ for a mass of very interesting stories on the web.
It seems to me that the BMJ would be much more effective if it set an example. Could they not contact their readers and ask them to report experiences (good and bad) they have had trying to get hold of original data from BMJ articles?

*Head of Competence Center for Methodology and Statistics (CCMS)

Declaration
These are my personal views and should not be ascribed to any organisations past or present with whom I am or have been associated. My declaration of interest is here.
1. Senn SJ. Statistical quality in analysing clinical trials. Good Clinical Practice Journal. 2000;7(6):22-6.
2. Senn S. Misunderstanding publication bias: editors are not blameless after all. F1000Research. 2012;1.
3. Senn S. Authors are also reviewers: problems in assigning cause for missing negative studies. F1000Research. 2013;2.
4. Pinto RZ, Maher CG, Ferreira ML, Ferreira PH, Hancock M, Oliveira VC, et al. Drugs for relief of pain in patients with sciatica: systematic review and meta-analysis. BMJ. 2012;344:e497. Epub 2012/02/15.
5. Baggerly K, Coombes KR. Deriving chemosensitivity from cell lines: forensic bioinformatics and reproducible research in high-throughput biology. The Annals of Applied Statistics. 2009;3(4):1309-34.
Stephen: Thanks so much for this. As an outsider, knowing nothing about the BMJ’s practices, or their relationship to the EBM enterprises, I could not immediately discern (from the start of your post) if your main complaint is hypocrisy on the journal’s part, or the fact that they’re encouraging people to name names about obstructionists (which might enable other sins). At first I thought it was the latter, but now I take it that it is (mainly) the former. On the latter, the EBM do-gooders may have a tendency to go too far in their self-righteousness. After all, if everyone is so self-interested in these matters, a decision to report on “obstructionists” could be biased as well. (Maybe you have thousands of shares of stock in GlaxoSmithKline but have shorted Johnson & Johnson, for example). Just an off-the-cuff thought.
Journalists don’t have to be ashamed of this at all. BMJ is, in this case, acting journalistically, and not trying to do an impartial review of the entire situation. Specifically, BMJ just wants to collect a lot of bad examples – of what everyone accepts is a problem – to show that *in absolute terms* a lot of them exist.
Collecting examples which are both good and bad, in the way you suggest, one might (somehow accounting for non-response biases) be able to draw inference on the *relative* proportions of the different types of examples. This would be great, too. But it’s answering a different question to BMJ’s simple count, and the question it answers is not one that serves BMJ’s immediate needs. So, your criticism is not appropriate.
Also inappropriate here is “sic” – over whether “data” is a mass noun or a plural. Both are standard.
General remark:
Just to take my natural contrarian and skeptical view (particularly after looking at Senn’s reference—
http://dianthus.co.uk/the-strange-story-of-the-tamiflu-data)
I wonder if the official enterprises (be they journals or other groups), set up to promote the right uses of EBM, do themselves a disservice by linking so closely to persons who very publicly have to keep up the active book selling and pumped-up performances*. Ironically, the drug companies, especially if small with just a few drugs, have much more to lose from non-disclosures than do the popular “big pharma” debunkers–at least companies in the U.S.
*That said, I watched a couple of Goldacre YouTube videos (after one of Senn’s criticisms), and thought he was pretty entertaining and funny. Nowadays, I suppose it’s common for politicians to be more closely linked to comics than in the past.
Goldacre is a doctor not a comic.
But his popular acts are stand-up routines.
The problem is that the BMJ had pretensions (I assume) of being a scientific publication. I expect that they would reject the suggestion that they were just journalists.
Anybody can use data as singular if they like. It’s just not something I want to be accused of. Hence the sic.
Stephen: I’m curious to know which of the two complaints I note is closest to yours, or neither, or both?
It’s a good question, Deborah. My real complaint was not really explained in my piece and is that the main problem with data in the medical literature is not those that are missing but those that are present that aren’t so. The really big story of recent years, in my opinion, is not the fact that yet more details of the Roche Tamiflu data ought to be in the public domain but that data from Duke University published in several leading journals turned out to be complete nonsense and had to be withdrawn. (Ten retractions so far, I think.) You would have thought that this would have been a wake-up call for the journals but, as far as I am aware, the journals have done nothing to reduce the chance of it happening again.
To put it another way, the main problem is not the regulatory process; it’s the journals.
Stephen:
“To put it another way, the main problem is not the regulatory process; it’s the journals.”
Well this is yet a third potential culprit, but I’m not sure I understand…. Don’t drug companies report to the FDA, say, before publishing? I’m also not familiar with the example, except vaguely back in the days of H1N1, …
So say a bit more or give a link please.
Here’s a link for now
http://bioinformatics.mdanderson.org/Supplements/ReproRsch-All/Modified/StarterSet/index.html
More explanation later.
“I expect that they would reject that they were just journalists.”
Please don’t misrepresent my comments. I did not say they were “just journalists”. I specifically said BMJ was acting journalistically “in this case”.
BMJ acts journalistically in lots of cases. As well as publishing research reports, it has a news section and runs campaigns and investigations. It wins journalism awards, so it’s hardly a secret that it does some journalism. As part of its journalistic work, occasionally it will make sense to elicit particular examples – as they are doing here.
If you don’t like this being part of BMJ, fine, but what you call “casting stones” is “journalists doing their job”.
OG: I take it that Senn’s main point (in his comment) has to do with the source of the problem that BMJ is presumably trying to call attention to, if not ameliorate. It would make sense for them to be self-critical if the interest is in fixing the problem. I’m not familiar with the case, even having scanned some of his links. Are you?
Mayo: yes, I’m familiar with the case. Originally, Senn’s point was that he didn’t like journalists using journalistic methods, such as giving illustrative ‘case-study’ examples (e.g. evidence-withholding on Tamiflu). He doesn’t like journals containing journalism as well as publishing peer-reviewed science.
His subsequent grumble was that journals are terrible at the peer-review bit. And to help tell this story, he’s using a case study of a single bad example (Duke’s microarray Pottigate scandal), chosen entirely ad hoc.
So… it’s okay to use case-studies if you’re Stephen Senn, but not if you’re the BMJ? Senn the scientist might have reservations about what Senn the story-teller is doing, no?
OG: I’m really not getting your beef. Senn is writing on a blog, not sending out letters to professionals to report obstructionists or whatever…
Dear OG,
The problem is that when they present their ‘evidence’ the person to whom they present it may forget that the evidence received should be allocated to the Daily Telegraph or even Daily Mail bin (I am sorry but I don’t know the US equivalent) rather than the scientific journal bin. I think that this dual role is a bad idea. For the Royal Statistical Society there is a clear distinction between Series A, B and C and Significance or for that matter RSSNews. I think that the BMJ is trading on its scientific credentials in doing this. Or to put it another way Dr Godlee as scientific editor should have reservations as to what Dr Godlee as journalist is doing.
To put it yet another way, I don’t approve of what the popular press does with science. At least in the Daily Mail case they also publish horoscopes so I suppose no one could accuse them of being expected to be taken seriously.
Stephen: This is becoming ever more mysterious to me. I watched some of the video delineating the blatant mistakes made on some gene microarray studies that had already, I think, gone into trials. There was one mention of Duke, nothing about Tamiflu or BMJ or the letter soliciting info… Can you tie all the pieces together? (Maybe one has to watch the other videos as well?) I liked the notion of bioinformatics forensics: invented specifically to try to puzzle out what researchers could actually have done to get their results!
My impression is there are various beliefs about what is an appropriate release of data:
1. Just the summary tables, hopefully with enough definitions to make using them valid. This is quite safe to do, and they should be made publicly available as a matter of course.
2. Full clinical trial reports (excluding data listings), so that there is a more thorough knowledge of the trial. The main trouble with making these publicly available is that they may reference subjects. They should be made available to anyone with a valid reason.
3. Full data. These would have the nice property of allowing the effect of various definitions and exclusion criteria to be evaluated, and allowing individual patient meta-analysis. There are problems about whether it is appropriate to re-analyse under a different analysis plan. There are situations where a new and better analysis could be performed, which might be an embarrassment to the drug company.
I left a comment on this blog the other day:
http://honglangwang.wordpress.com/2013/03/03/think-about-statistical-inferences-from-the-ground-up-again/
Wang alludes to a very interesting short article of Brad Efron’s in Significance, which connects to current “large scale” inference in medicine, and thus my previous blogpost (on big data). Efron always has sagacious reflections and has invented many neat methods, but I’m not sure this is really rethinking “inference” rather than expanding on Neyman’s behavioristic account in the context of medical screening. Here the goal really is controlling group error rates and false discovery rates. What is being discovered are potential candidates of interesting genes.
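For readers who want to see what ‘controlling the false discovery rate’ amounts to in such large-scale screening, here is a minimal sketch of the classical Benjamini–Hochberg step-up rule in Python. The p-values are invented, and this is the standard frequentist procedure rather than Efron’s empirical-Bayes local false discovery rate, so treat it as an illustration only.

```python
# A minimal sketch of the Benjamini-Hochberg step-up procedure for controlling
# the false discovery rate in a large-scale screen. The p-values are invented;
# this is the classical BH rule, not Efron's empirical-Bayes local fdr.

def benjamini_hochberg(pvalues, q=0.10):
    """Return the (0-based) indices of hypotheses rejected at FDR level q."""
    m = len(pvalues)
    order = sorted(range(m), key=lambda i: pvalues[i])   # indices by ascending p
    threshold_rank = 0
    for rank, idx in enumerate(order, start=1):
        if pvalues[idx] <= rank * q / m:
            threshold_rank = rank                         # largest rank passing the BH bound
    return sorted(order[:threshold_rank])

# Ten invented p-values, e.g. from screening ten candidate genes.
p = [0.0001, 0.004, 0.019, 0.030, 0.041, 0.28, 0.31, 0.45, 0.62, 0.78]
print("Rejected hypotheses:", benjamini_hochberg(p, q=0.10))
```

The rule rejects the k smallest p-values, where k is the largest rank at which the ordered p-value is at most kq/m; on the invented values above it flags the first five candidates.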
True, I gave an illustrative story (involving ten papers, not one, and a leading American educational institution) but even Goldacre admits that trials run by the Pharma industry are of higher quality. This is what he says about it: ‘…as industry is keen to point out, where people have compared the methods of independently-sponsored trials against industry-sponsored ones, industry trials often come out better. This may well be true, but it is almost irrelevant…’ (Bad Pharma, p. 171)
This is a summary of my position but I don’t know whether this will help.
1. With the exception of certain large collaborative studies the highest quality trials are those run for regulatory purposes by the pharmaceutical industry. They have detailed protocols, pre-specified analyses, good quality control, expert input and are often reviewed in depth by the regulators.
2. The evidence base as a whole would be improved if these trials were made more visible. Thus, I approve of the movement to get the data from these trial made public.
3. I disapprove, however, of the implication that this is anywhere near being a priority. Far more important is to improve the published record of clinical trials whether sponsored by the pharma industry or not.
4. Even when trials have been run by the industry and reviewed by the regulator the published record is not always reliable. This issue would be a blog in itself, but for an interesting case history see: Powers JH, Dixon CA, Goldberger MJ. Voriconazole versus liposomal amphotericin B in patients with neutropenia and persistent fever. N Engl J Med 2002; 346: 289-290.
http://www.ncbi.nlm.nih.gov/pubmed/11807157
5. I think that the leading medical journals should be concentrating on trying to improve the quality of what they publish rather than complaining about getting yet more detail on studies that have already been examined by the regulator. It is this that particularly annoys me about the current BMJ stance. The BMJ has to be given credit for instigating a policy, as of this year, of requiring original data from published clinical trials to be made available. (See
http://www.bmj.com/content/345/bmj.e7888 ) But this policy is not retroactive and so does not apply to the era of Tamiflu.
6. However, in the long run I hope that the medical journals will become irrelevant as a means of publishing clinical trials data. I think that the internet is the place to publish them and the journals should be reserved for commentaries.
Stephen: Thanks for this overview, which I need to study more carefully. This is in sync with my understanding that the pharmaceutical industry has a great incentive to avoid disasters. I have watched bio stocks quite a lot over the years and my impression is that (a) the FDA site posts everything the panel sees, and you can often even watch the panel hearings (I do not claim to know if this is always the case), and (b) that this is before publication. But does this suggest the flawed published articles were not under regulatory scrutiny? Or that when they wrote them up they did a poor job?
I am intrigued by your claim that “the medical journals will become irrelevant as a means of publishing clinical trials data. I think that the internet is the place to publish them and the journals should be reserved for commentaries”. I have to admit that I’ve never understood why the material on the FDA site was not better known—one has to dig around for it, and just in time before the panel meets (at least for stock predictive capacity). But it requires synthesis, there are gobs and gobs of it…
I’m writing this quickly. I hope others have more detailed background information about Senn’s points. I also don’t know the differences between the U.S. and the U.K.
Deborah, I think that the Cochrane Collaboration won’t accept syntheses; they want access to the original data so that they can provide the syntheses.
Stephen: OK, so then the FDA records could go straight to a Cochrane-like outfit and bypass publication?
With the microarray example, I thought they were already starting trials of some sort, but on some mixed-up data? I really don’t know the regulations for things like custom-tailored drugs.
Deborah, to pick up on an earlier point of yours, the reason I think that the journals have to be bypassed is that
1. Journals cannot guarantee publication, let alone publication within the sort of time limit (one year after trial end) now being promoted. Review, rejection, resubmission all contribute uncertainty and delay.
2. In any case they can’t publish the data, so the data must be published separately. If the data are on the internet, what is the point of the journal?
3. The requirement that results are under embargo until they appear in the journal in which they will be published imposes further delay.
4. The peer review process does little to improve quality because the time allowed to statisticians (and others) to review is insufficient and the data are never provided anyway. If the data are provided on the internet the reviews will follow automatically.
I think the whole scene is changing and I sometimes put it like this: we are moving from an era of private data and public analyses to one of public data and private analyses.
Over ten years ago I wrote this:
“I cannot see what argument there can be against requiring that where the regulator has, on behalf of the consumer, come to a judgement that a drug should be given a licence, the consumer should also have access to the evidence. Of course, this raises all sorts of issues, not least the fear that the only person who will ever perform prespecified analyses is the pharmaceutical industry statistician. A host of others will come after and trawl the data at will.”
(See Senn, S. J. (2002). “Authorship of drug industry trials.” Pharmaceutical Statistics 1: 5-7.)
As regards your most recent question, my understanding is that the 2006 Nature Medicine data did not have regulatory review. In fact Keith Baggerly has said, ‘It seems likely that several of the problems identified with the Duke trials would have been caught by an FDA review….’ See http://simplystatistics.org/2012/03/25/some-thoughts-from-keith-baggerly-on-the-recently/
For those who are interested the House of Commons Select Committee on clinical trials has now posted the ‘evidence’ it has received. There are contributions by the BMJ, Ben Goldacre, me and 58 other persons and organisations. (I like to think that my contribution is rather more ‘punchy’ and direct than most of the others.) The evidence is available by clicking on “Written evidence” at the bottom of this page:
http://www.parliament.uk/business/committees/committees-a-z/commons-select/science-and-technology-committee/news/121213-clinical-trials-inquiry-announced/
Stephen: Thanks so much for posting this link, it’s extremely interesting, though I’ve only read parts of it. (Can’t seem to search it.) What’s your overall take on the reactions? Anything surprise you?
Deborah: I haven’t read it all but nothing I have read so far surprised me. It might be worth looking at the statement from Roche (starts page 118) simply to see that it is possible to have another point of view. You should, of course, check out what the BMJ has to say. Their submission starts on P132. Actually our (my and the BMJ’s) recommendations are very similar; the main difference is that the BMJ does not appear to realise it is part of the problem. (This is true for the medical press generally, not just the BMJ.) The funniest thing in the BMJ submission is the very beginning when they refer to a particular piece of evidence as ‘a recent peer reviewed article in the BMJ’ as if this meant something. (It’s not the BMJ bit I find risible but the peer review.) This is yesterday’s discredited model. If the journals can’t trust the extensive review carried out by the regulators, why should the rest of us attach any importance to the cursory reviews carried out for the journals?
Otherwise there is an interesting statement by Iain Chalmers starting on P57 that is worth looking at. Ben Goldacre’s submission starts on p294. This is much better than his book and my opinion on that has been given elsewhere. It would also be worth looking at Elizabeth Wager’s submission. (Starts p54) She gives a good explanation of various mechanisms that can obstruct publication.
My ‘evidence’ starts on p37.
Stephen: I really appreciate this. I had (of course) read yours. Also the one by Roche. Will look for the others you mention. But not being an insider, I’m sure I’m missing a lot of the dynamics, so I’m grateful for the pointers.
Goldacre announces a show (by him) on publication bias:
http://www.badscience.net/2013/03/im-on-the-one-show-talking-about-missing-trials-tonight/#more-2855