Findings of Research Misconduct by the Office of Research Integrity on the Duke U (Potti/Nevins) cancer trial fraud: No one is punished but the patients

Findings of Research Misconduct
A Notice by the Health and Human Services Dept
on 11/09/2015
AGENCY: Office of the Secretary, HHS.
ACTION: Notice.

-----------------------------------------------------------------------

SUMMARY: Notice is hereby given that the Office of Research Integrity 
(ORI) has taken final action in the following case:
    Anil Potti, M.D., Duke University School of Medicine: Based on the 
reports of investigations conducted by Duke University School of 
Medicine (Duke) and additional analysis conducted by ORI in its 
oversight review, ORI found that Dr. Anil Potti, former Associate 
Professor of Medicine, Duke, engaged in research misconduct in research 
supported by National Heart, Lung, and Blood Institute (NHLBI), 
National Institutes of Health (NIH), grant R01 HL072208 and National 
Cancer Institute (NCI), NIH, grants R01 CA136530, R01 CA131049, K12 
CA100639, R01 CA106520, and U54 CA112952.
    ORI found that Respondent engaged in research misconduct by 
including false research data in the following published papers, 
submitted manuscript, grant application, and the research record as 
specified in 1-3 below. Specifically, ORI found that:

    1. Respondent stated in grant application 1 R01 CA136530-01A1 that 6 out of 33 patients responded positively to dasatinib, when only 4 patients were enrolled and none responded, and that the 4 CT scans presented in Figure 14 were from the lung cancer study when they were not.
    2. Respondent altered data sets to improve the accuracy of 
predictors for response to treatments in a submitted paper and in the 
research record by:
     • Reversing the responder status of 24 out of 133 subjects for the adriamycin predictor in a manuscript submitted to Clinical Cancer Research;
     • Switching the cancer recurrence phenotype for 46 out of 89 samples to validate the LMS predictor in a file provided to a colleague in 2008;
     • Changing IC-50 and R-code values for the cisplatin predictor in a data set provided to NCI in 2010.
    3. Respondent reported predictors and/or their validation by 
disregarding accepted scientific methodology so that false data were 
reported in the following:
     • Blood 107:1391-1396, 2006: Describing a predictor for thrombotic phenotypes
     • New England Journal of Medicine 355:570-580, 2006: Describing a predictor of lung cancer relapse
     • Nature Medicine 12:1294-1300, 2006: Describing a predictor for the response to the chemotherapeutic drugs topotecan and docetaxel
     • Journal of Clinical Oncology 25:4350-4357, 2007: Describing a predictor for the response to the chemotherapeutic drug cisplatin
     • Lancet Oncology 8:1071-1078, 2007: Describing a predictor for the response to the combination of the chemotherapeutic drugs fluorouracil, epirubicin, and cyclophosphamide or docetaxel, epirubicin, and docetaxel
     • Journal of the American Medical Association 299:1574-1587, 2008: Describing a predictor for breast cancer relapse
     • Public Library of Science (PLoS) ONE 3:e1908, 2008: Describing a predictor for the response to the chemotherapeutic drugs paclitaxel, 5-fluorouracil, adriamycin, and cyclophosphamide
     • Proceedings of the National Academy of Sciences 105:19432-19437, 2008: Describing a predictor of colon cancer recurrence
     • Clinical Cancer Research 15:7553-7561, 2009: Describing a predictor for the response to the chemotherapeutic drug cisplatin
    As a result of Duke's investigation, the published papers listed 
above were retracted.
    Respondent has entered into a Voluntary Settlement Agreement with 
ORI. Respondent neither admits nor denies ORI's findings of research 
misconduct; the settlement is not an admission of liability on the part 
of the Respondent. The parties entered into the Agreement to conclude 
this matter without further expenditure of time, finances, or other 
resources. Respondent has not applied for or engaged in U.S. Public 
Health Service (PHS)-supported research since 2010. Respondent stated 
that he has no intention of applying for or engaging in PHS-supported 
research or otherwise working with PHS. However, the Respondent 
voluntarily agreed:
    (1) That if the respondent obtains employment in a research 
position in which he receives or applies for PHS support within five 
years of the effective date of the Agreement (September 23, 2015), he 
shall have his research supervised for a period of five years;


    (2) that prior to the submission of an application for PHS support 
for a research project on which the Respondent's participation is 
proposed and prior to Respondent's participation in any capacity on 
PHS-supported research, Respondent shall ensure that a plan for 
supervision of Respondent's duties is submitted to ORI for approval; 
the supervision plan must be designed to ensure the scientific 
integrity of Respondent's research contribution; Respondent agreed that 
he shall not participate in any PHS-supported research until such a 
supervision plan is submitted to and approved by ORI; Respondent agreed 
to maintain responsibility for compliance with the agreed upon 
supervision plan;
    (3) that any institution employing him shall submit, in conjunction 
with each application for PHS funds, or report, manuscript, or abstract 
involving PHS-supported research in which Respondent is involved, a 
certification to ORI that the data provided by Respondent are based on 
actual experiments or are otherwise legitimately derived and that the 
data, procedures, and methodology are accurately reported in the 
application, report, manuscript, or abstract; and
    (4) to exclude himself voluntarily from serving in any advisory 
capacity to PHS including, but not limited to, service on any PHS 
advisory committee, board, and/or peer review committee, or as a 
consultant for a period of five years beginning on September 23, 2015.

FOR FURTHER INFORMATION CONTACT: Acting Director, Division of 
Investigative Oversight, Office of Research Integrity, 1101 Wootton 
Parkway, Suite 750, Rockville, MD 20852, (240) 453-8200.

Donald Wright,
Acting Director, Office of Research Integrity.
[FR Doc. 2015-28437 Filed 11-6-15; 8:45 am]
BILLING CODE 4150-31-P

[Figures: Potti training data]

With all of the blame piled on Anil Potti, Potti is now free to deny any blame as well. Scarcely a deterrent against “disregarding accepted scientific methodology” in the future. I will comment later on. Share your thoughts.

The Federal Register Notice is here. For background and key links to this case, please see on this blog:

“Only those samples which fit the model best in cross validation were included” (whistleblower) “I suspect that we likely disagree with what constitutes validation” (Potti and Nevins)

What have we learned from the Anil Potti training and test data fireworks? Part 1 (draft 2)

https://errorstatistics.com/2015/01/26/trial-on-anil-potti-trial-scandal-postponed-because-lawyers-get-the-sniffles-rejected-post/

Categories: Anil Potti, reproducibility, Statistical fraudbusting, Statistics


12 thoughts on “Findings of Research Misconduct by the Office of Research Integrity on the Duke U (Potti/Nevins) cancer trial fraud: No one is punished but the patients”

  1. James T. Lee, MD, PhD, FACS, FIDSA, FSHEA

    A sad situation. Particularly odd is the antiseptic verbiage of the “official report”, no doubt the work product of highly paid lawyers and minimum wage wordsmiths. This person, who was obviously caught red-handed, should never again darken the door of any research establishment in the world, period. I have a feeling that “the word is on the street now” regarding this whole sorry debacle so future employment in academia or industry is perhaps now very unlikely with P <<<< .00001.

    I am nonetheless perplexed that Duke settled the matter—Huh ??? The language used to rationalize the option of settlement made it sound as if there exists some alternate version of the story that would be exculpatory. I think not. Was Duke afraid they would be sued in retaliation and, if so, what on Earth would be the basis for the feared lawsuit ?

    Very troubling.

    • James: It is troubling. Your P< .00001 would depend on some rationality assumptions that do not hold in this world. Potti is still practicing somewhere.
      I'm beginning to think there's need for some kind of citizen's watch group that channels public outrage when patients are lured into clinical trials for treating serious diseases based on models/methods that haven't been properly vetted. (It's not like Duke’s flu trials, this is cancer!) As I mention in an earlier post, I received phone calls from individuals who spoke to me in a hush hush sort of way about what they knew of the Potti-Nevins-Barry methodology (after my first post on the case). These were essentially stat technicians working there at the time, expecting to testify in a trial that I believe never occurred. (Does anyone know if there was a resumption after the lawyers got over their colds?* ) One quit because of the irregularities, not sure about the other,perhaps fired for asking too many questions. I'm not sure I get your remark about Duke settling the matter, maybe it's akin to the comment by E. Berk on this blog, below.
      *https://errorstatistics.com/2015/01/26/trial-on-anil-potti-trial-scandal-postponed-because-lawyers-get-the-sniffles-rejected-post/

  2. Anoneuoid

    After reading through, it is still not clear to me whether the original analysis was reproducible from the papers. If it wasn’t, the fraud isn’t really relevant since there was no reason to take the papers seriously to begin with. He should have been refused future funding and ignored for that reason (along with everyone else who has a consistent history of producing useless research reports).

    If even the analysis was non-reproducible from the description, this should have easily come up during routine independent replication attempts. These are required before testing something on patients, right? That it required a whistleblower would indicate deeper problems with the medical research community.

    As I said, it is not clear to me whether the original analysis was described well enough in the papers.

    • Anoneuoid:
      “If even the analysis was non-reproducible from the description, this should have easily come up during routine independent replication attempts. These are required before testing something on patients, right?”
      You live in a charmed world, or what most people consider a world with normal expectations for clinical trials. When Baggerly and Coombes could not replicate the results and discovered flagrant data errors, they wrote a letter to Nature to which Potti responded: “If they’d only followed the way I got my results they would have obtained them as well!” His way was basically to throw out anything that didn’t look good and double up on the data points that worked (I am not kidding, read Baggerly and Coombes, and whistleblower Perez). Medical journals wouldn’t publish Baggerly and Coombes’ paper, deeming it altogether too negative! (Who wants to read negative stuff?) Finally Efron published it in the Annals of Applied Statistics. Meanwhile, the trials based on the fraudulent prediction model were proceeding apace. Only after something like 30 statisticians signed a letter in protest were the trials halted. But only temporarily. The internal investigation at Duke (whose members were not given the Baggerly and Coombes critique, nor told of the whistleblower) decided everything was hunky dory, and the patients were called back to resume their “personalized” cancer therapy. Even aside from the deeper problems with the “validation” of the prediction model, the researchers often entered the data backwards, so that patients got the treatment deemed least effective on patients with tumors like theirs. Patients weren’t told of any of the alleged problems. Had Potti not lied on his CV about getting a Rhodes scholarship, the fraud wouldn’t even have been investigated, at least not when it finally was.
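
      To see why that kind of selection guarantees a good-looking “validation,” here is a minimal simulation sketch (mine, with made-up data; not code from the Duke group): when the predictors are pure noise, honest cross-validation hovers near chance, while accuracy computed only on the samples the predictor happened to get right is perfect by construction.

```python
# Illustrative sketch only (hypothetical data): keeping just the samples that
# "fit the model best in cross validation" inflates apparent accuracy.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict

rng = np.random.default_rng(0)
n, p = 133, 50                        # sizes chosen only to echo the 133-subject example above
X = rng.normal(size=(n, p))           # pure-noise "expression signatures"
y = rng.integers(0, 2, size=n)        # responder labels, unrelated to X

# Honest assessment: every sample is scored by a model that never saw it.
pred = cross_val_predict(LogisticRegression(max_iter=1000), X, y, cv=10)
print("honest 10-fold CV accuracy:", round((pred == y).mean(), 2))            # near 0.50

# "Validation" after quietly discarding the samples the predictor got wrong.
keep = pred == y
print("accuracy on the retained samples:", (pred[keep] == y[keep]).mean())    # 1.0 by construction
print("samples silently discarded:", int((~keep).sum()))
```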

  3. A comment from the previous post by McKinney on Efron is relevant to the Potti case. https://errorstatistics.com/2015/11/05/s-mckinney-on-efrons-frequentist-accuracy-of-bayesian-estimates-guest-post/#comment-135241

    The episode is ghastly.

  4. e. berk

    Retraction Watch quotes a Duke official relieved at these findings. A VP for marketing and communications for “Duke Medicine”:
    “We are pleased with the finding of research misconduct by the federal Office of Research Integrity related to work done by Dr. Anil Potti. We trust this will serve to fully absolve the clinicians and researchers who were unwittingly associated with his actions, and bring closure to others who were affected”.

    http://retractionwatch.com/2015/11/07/its-official-anil-potti-faked-data-say-feds/#more-34142

    I don’t get it. Links from your post on whistleblower Perez (the letter published in The Cancer Letter) show that many researchers and administrators at Duke knew about the complaints by Potti’s student:
    “The three-page document was penned by Bradford Perez, then a third-year medical student …
    Instead of rewarding the student’s brilliance with a plaque and a potted plant, Potti’s collaborator and protector, Joseph Nevins—aided by a phalanx of Duke deans—pressured the young man to refrain from making a final complaint and reporting the matter to HHMI”. http://www.cancerletter.com/articles/20150109_1

    The new finding implicates and doesn’t “absolve” them.

  5. Steven McKinney

    Exactly right, e. berk:

    As stated at Retraction Watch:

    “Update, 1:45 p.m. Eastern, 11/8/15: In a statement, Doug Stokke, vice president of marketing and communications for Duke Medicine, tells us:

    We are pleased with the finding of research misconduct by the federal Office of Research Integrity related to work done by Dr. Anil Potti. We trust this will serve to fully absolve the clinicians and researchers who were unwittingly associated with his actions, and bring closure to others who were affected.”

    The ORI report in itself does nothing to absolve anyone else, as you correctly note. The ORI investigation focused on Potti’s work, a single researcher alone, as their investigations typically do. The ORI report mentions nothing about other researchers.

    This is nothing but more shameful marketing from Duke personnel, obfuscating the situation with hand waving and double speak as was done for years as Baggerly and Coombes tried to shed light on this situation.

    The clinical trials patients’ court cases have been adeptly settled out of court with non-disclosure agreements, and now with this narrowly focused ORI investigation report, Duke marketing personnel present the spin that this somehow absolves others who were not under scrutiny by the ORI.

    Do not trust the proclamations of such a spin-meister. Closure was brought about in the case of Joseph Nevins by orchestrating a quiet retirement exit. Any groups at Duke still using the methodology involved in the Potti-Nevins et al. fiasco still require careful scrutiny. If the methodology was so good, why did Potti have to fiddle with so many bits of data (as outlined in the ORI report)?

    • Steven: Totally agree. Anyone who wants to hear about how “unwitting” his associates were should read the whistleblower letter by Perez. He was writing directly to Nevins and shared his worries with various deans. But I’m not sure why the NCI, when already suspicious of Potti, gave him so many, many chances to fiddle with the data, and try and try again to show their model “worked”? Readers unfamiliar with the case might look at the Lisa McShane (from the NCI) statement: http://iom.nationalacademies.org/~/media/Files/Activity%20Files/Research/OmicsBasedTests/PAF%20Document%2021.pdf

      • Steven McKinney

        Mayo:

        The NCI gave the Duke group so many chances because the NCI biostatistics group involved were a talented, fair and honest bunch of researchers. When the Duke group wanted to start clinical trials, they already had publications in some fine glossy journals. The Duke group presented their methods and findings to the NCI with an apparently stellar set of references and used language that made it all seem so good. The NCI had to check into the methods, since human trials were proposed.

        The Duke group provided reams of documentation, so reading all of it, making notes, comparing across documents, etc., would have taken many days of effort. The Duke group provided some, but not all, of the computer programs used to produce the results in the papers they cited in their clinical trials grant proposals. The NCI group had to figure out how to use the suite of under-documented code. Some of the code was provided in a binary format, without source code, so the NCI had to just play around with that code to try to figure out what was going on inside that “black box” (TreeProfiler). The NCI group had to write some of their own code to do analyses that the Duke group could not provide computer code for (e.g., the Bayesian classification tree; see PAF Document 11.pdf).

        As the NCI group started noticing problems, and contacting Duke about them, there was much to-and-fro letter writing, emailing and the like. Thus it took many months for the NCI to amass enough findings to clearly demonstrate the problems involved.

        Because the NCI group gave the Duke group so many, many chances, the NCI group eventually ended up with plenty of evidence to clearly present their case for shutting down the trials, as they did at the IOM hearings. Short of that, the marketing skills of Duke personnel would have cast the NCI in a dark light as just another heavy-handed government agency interfering with such a fine group just trying to save people’s lives with such great new methodology.

  6. Stan Young

    Duke did at least two things wrong.
    1. They messed up the data set building (Potti messed up a sort/merge, so the data were not aligned properly; see the sketch after this list). It was a technical mistake by a person not trained in data processing. There was also a management mistake: no one was checking Potti’s work.
    2. In the face of real evidence that their data processing was wrong, still no one at Duke checked Potti’s work; then they stonewalled, and worse.
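
    A sketch of the first point (hypothetical sample IDs and column names, not the actual Duke files): if one table is re-sorted and the other is not, a positional merge silently pairs each expression score with the wrong patient’s response label, whereas merging on the shared sample identifier keeps the rows aligned.

```python
# Hypothetical illustration of an unaligned sort/merge, not the actual Duke pipeline.
import pandas as pd

# Expression scores and clinical responses arrive in different row orders.
expr = pd.DataFrame({"sample": ["S3", "S1", "S2"],
                     "signature_score": [0.91, 0.12, 0.55]})
resp = pd.DataFrame({"sample": ["S1", "S2", "S3"],
                     "responder": [0, 1, 1]})

# Wrong: glue the columns together by position, assuming the row orders match.
wrong = pd.concat([expr.reset_index(drop=True), resp["responder"]], axis=1)
# Now S3's score (0.91) sits next to S1's response (0), and so on down the table.

# Right: merge on the shared sample identifier so each score keeps its own label.
right = expr.merge(resp, on="sample")

print(wrong)
print(right)
```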

    My understanding is that the research team was effectively disbanded at the time the problems were figured out.

    There was a systems problem. The real problem is that research at universities is more or less a cottage industry. A small team does everything with many single points of possible failure and essentially NO OVERSIGHT. It is like auto manufacture before the assembly line. In contrast, drug discovery within a drug company is a large-scale, highly integrated process with high quality research and effective oversight.

    The Institute of Medicine more or less said that typical university research labs are not organized to do sound, reproducible research. Normally university findings get tested and filtered by drug companies. See Begley and Ellis, Nature, 2012, for a veritable catalogue of poor statistical practice.

    Research in universities is most typically a “cottage industry”. That’s fine for training, but it does not produce reliable results. A major fault was the university system. Yes, Potti failed, but he failed because of poor system design. He was not trained to do the work he was doing and there was no oversight. To my knowledge, the university research system has not been fixed at all. Duke was quick to throw Potti under the bus once it was discovered he fudged his resume. They did not admit they had a systems problem.

    • Stan: Thanks for your comment. You are right, but there will be scant reason to recognize the systemic problem you describe if universities are let off the hook, as in this case. Duke had abundant evidence for years that ghastly violations of data handling and validation were being committed. Several statistical people involved raised red flags and were told to shut up. They expected to reveal all in a legal proceeding that never occurred. (These are people in addition to the official whistleblower Perez, all hidden from the internal investigation.) This is a disincentive to future whistleblowing. Please also see a new editorial in The Cancer Letter https://errorstatistics.com/2015/11/13/what-does-it-say-about-our-national-commitment-to-research-integrity/

