From The BMJ Opinion

An interesting article in the BMJ opinion blog. The ongoing credibility and reproducibility crisis in institutionalized research continues to unfold.

Health research is based on trust. Health professionals and journal editors reading the results of a clinical trial assume that the trial happened and that the results were honestly reported. But about 20% of the time, said Ben Mol, professor of obstetrics and gynaecology at Monash Health, they would be wrong. As I’ve been concerned about research fraud for 40 years, I wasn’t as surprised by this figure as many would be, but it led me to think that the time may have come to stop assuming that research actually happened and was honestly reported, and instead to assume that research is fraudulent until there is some evidence to support its having happened and been honestly reported. The Cochrane Collaboration, which purveys “trusted information,” has now taken a step in that direction.

As he described in a webinar last week, Ian Roberts, professor of epidemiology at the London School of Hygiene & Tropical Medicine, began to have doubts about the honest reporting of trials after a colleague asked if he knew that his systematic review showing that mannitol halved death from head injury was based on trials that had never happened. He didn’t, but he set about investigating the trials and confirmed that they had never happened. They all had a lead author who purported to come from an institution that didn’t exist and who killed himself a few years later. The trials were all published in prestigious neurosurgery journals and had multiple co-authors. None of the co-authors had contributed patients to the trials, and some didn’t know that they were co-authors until after the trials were published. When Roberts contacted one of the journals, the editor responded, “I wouldn’t trust the data.” Why, Roberts wondered, had he published the trial? None of the trials have been retracted.

Later Roberts, who headed one of the Cochrane groups, did a systematic review of colloids versus crystalloids, only to discover again that many of the trials included in the review could not be trusted. He is now sceptical about all systematic reviews, particularly those that are mostly reviews of multiple small trials. He compared the original idea of systematic reviews to searching for diamonds: knowledge that was already available, if only it could be brought together. Now he thinks of systematic reviewing as searching through rubbish. He proposed that small, single-centre trials should be discarded, not combined in systematic reviews.

Mol, like Roberts, has conducted systematic reviews only to realise that most of the trials included either were zombie trials that were fatally flawed or were untrustworthy. What, he asked, is the scale of the problem? Although retractions are increasing, only about 0.04% of biomedical studies have been retracted, suggesting the problem is small. But the anaesthetist John Carlisle analysed 526 trials submitted to Anaesthesia and found that 73 (14%) had false data, and 43 (8%) he categorised as zombie. When he was able to examine individual patient data in 153 studies, 67 (44%) had untrustworthy data and 40 (26%) were zombie trials. Many of the trials came from the same countries (Egypt, China, India, Iran, Japan, South Korea, and Turkey), and when John Ioannidis, a professor at Stanford University, examined individual patient data from trials submitted from those countries to Anaesthesia during a year he found that many were false: 100% (7/7) in Egypt; 75% (3/4) in Iran; 54% (7/13) in India; 46% (22/48) in China; 40% (2/5) in Turkey; 25% (5/20) in South Korea; and 18% (2/11) in Japan. Most of the trials were zombies. Ioannidis concluded that there are hundreds of thousands of zombie trials published from those countries alone.

Others have found similar results, and Mol’s best guess is that about 20% of trials are false. Very few of these papers are retracted.

This is probably one of the harshest points.

Research fraud is often viewed as a problem of “bad apples,” but Barbara K Redman, who spoke at the webinar, insists that it is a problem not of bad apples but of bad barrels, if not, she said, of rotten forests or orchards. In her book Research Misconduct Policy in Biomedicine: Beyond the Bad-Apple Approach she argues that research misconduct is a systems problem: the system provides incentives to publish fraudulent research and does not have adequate regulatory processes.

Read the full opinion piece here.

Here is a complete recording of the webinar where these issues were discussed.

Fraudulent Trials in Systematic Reviews – A Major Public Health Problem

Research seminar hosted by Professor Ian Roberts, Co-Director of the Clinical Trials Unit at the London School of Hygiene & Tropical Medicine

Chair: Emma Sydenham, Co-ordinating Editor, Cochrane Injuries Group, LSHTM

Agenda

Welcome: Chair
Ian Roberts: Fraudulent trials in systematic reviews (15 minutes).
Ian Roberts is professor of epidemiology and co-director of the clinical trials unit at the London School of Hygiene & Tropical Medicine. He is editor of the Cochrane Injuries Group.
Ben Mol: The response of the academic community (15 minutes)
Ben Mol is professor of obstetrics and gynaecology at Monash University, Melbourne, Australia and chair in obstetrics & gynaecology at the Aberdeen Centre for Women’s Health Research, Scotland, UK.
Barbara Redman: Rotten apples or rotten barrels – structural issues in fraud (15 minutes).
Barbara Redman is an internationally respected expert on fraud. She is Associate, Division of Medical Ethics, New York University Langone Medical Center and Adjunct Professor, NYU School of Nursing. She is author of “Research Misconduct Policy in Biomedicine, Beyond the Bad-Apple Approach.”
Discussion (30 minutes)
Closing remarks from Chair

HT/Joe Cool

via Watts Up With That?

https://ift.tt/3zzetrW

July 26, 2021