publication bias

Antidepressant trial neglected suicide attempts

In 2001, the antidepressant paroxetine was reported to be effective and safe for adolescents. That trial has now been re-evaluated following an open call in the British Medical Journal (BMJ), as IFLScience reports. The re-evaluation was made possible with the help of GlaxoSmithKline, which had initiated the original work and made the data accessible. Paroxetine has been disputed in recent years, and the new results justify those doubts in an alarming way.

It turned out not only that paroxetine is not beneficial to adolescents, but also that 11 participants in the 2001 study who took paroxetine attempted suicide or showed self-harming behavior, compared to only one person in the control group. The original researchers had ignored this. They had also ignored that parent and self-ratings of paroxetine did not differ significantly from placebo. This last point touches on an issue that applies to science as a whole.
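As a side note on the numbers: with event counts this small, a simple exact test illustrates how one would check whether such an imbalance is statistically meaningful. The sketch below is only illustrative; the arm sizes (roughly 90 participants per group) are an assumption, since the text above only gives the event counts of 11 versus 1.

```python
# Minimal sketch: is 11 self-harm events vs. 1 a significant imbalance?
# The arm sizes (~90 per group) are assumed for illustration only;
# only the event counts (11 vs. 1) come from the text above.
from scipy.stats import fisher_exact

paroxetine_events, paroxetine_n = 11, 90   # assumed group size
placebo_events, placebo_n = 1, 90          # assumed group size

# 2x2 contingency table: rows = treatment arm, columns = event / no event
table = [
    [paroxetine_events, paroxetine_n - paroxetine_events],
    [placebo_events, placebo_n - placebo_events],
]

odds_ratio, p_value = fisher_exact(table, alternative="two-sided")
print(f"odds ratio = {odds_ratio:.1f}, p = {p_value:.4f}")
```

Under these assumed group sizes, the imbalance is far from something one would expect by chance, which is exactly why overlooking it is so troubling.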

“The investigator assessments always end up looking more favorable to the drug than those from the patients,” Jureidini, one of the authors of the re-analysis, told IFLScience.

Scientists are considered to be neutral and unbiased while they develop theories and prove or disprove them. But of course, this cannot be entirely true, as scientists – along with all other human beings – have expectations and are influenced by their opinions. That a researcher tends to judge results in favor of his or her new theory is not the problem; it is simply human. But it is clearly irresponsible to approve a medicine based on a trial that was never double-checked.

Even though the danger of paroxetine was only revealed 14 years after the initial study, the BMJ call shows how important double-checking of clinical trials is. It should not be too difficult to approve a drug only if its effectiveness and risk potential have been confirmed by two independent studies, or, as in this case, by having the data set analyzed independently.


A publication bias workshop

Two weeks ago, the National Centre for the Replacement, Refinement & Reduction of Animals in Research (NC3Rs) hosted a workshop on publication bias. The workshop aimed to bring together “funders, journals, and scientists from academia and industry to discuss the impact of publication bias in animal research”.

This event prompted three very good blog articles: one from cogsci.nl, one from F1000 Research, and one from BioMed Central. There was also an ongoing Twitter discussion, for which I made a Storify (please feel free to drop me a note if I missed something).
Judging from a distance, the workshop seems to have had a good impact on raising awareness of publication bias and its consequences. Some solutions were also discussed, such as prospective registration of clinical trials with journals and new ways of publishing. To me, prospective registration looks like an interesting solution for other disciplines as well. Something similar happens every day when a researcher applies for funding; in that case, however, the scientist is obliged to report all of his or her results to the funder, not to a journal. I agree that this idea might be complicated to manage, but I really think it is worth the effort.

As for new ways of publishing, PLOS One seems to be a step ahead, having launched a new collection focusing on negative results. As promising as this may sound at first, the collection consists of papers from 2008 to 2014 that were bundled into a new collection two weeks ago, on 25 February 2015. This still reminds me a bit of the various negative-results journals that are only published sporadically. Nonetheless, I think awareness of the issue is rising.

Scientific worth and culture

In their editorial in Disease Models &amp; Mechanisms, Natalie Matosin and coworkers from the University of Wollongong and the Schizophrenia Research Institute in Sydney, Australia, give an excellent overview of the current view on negative results and the related issue of publication bias.

After presenting some famous examples (e.g. the Wakefield publication on vaccination and autism, which was only retracted after twelve years), they also mention the time-consuming attempts of Australian Professor David Vaux to retract his own “News and Views” article in Nature.

Drawing on their own experience, the authors describe the impact of negative findings on their research and the criticism they encountered when reporting those findings at conferences.

A negative result is in response to a positive question. If you rephrased to a negative question, does that mean you have a positive finding?

In my opinion, and also judging from the reactions of the scientific community described there, the authors’ response to these negative findings is rather unusual: I would hypothesize that when scientists encounter a null result, they are very likely to switch topics, so that the “unpublishable” result does indeed remain unpublished (the so-called “file-drawer effect”).

To raise sensitivity to negative outcomes, the authors point to the various journals dedicated to publishing negative research results, while acknowledging how little attention these journals attract.

At the core, it is our duty as scientists to both: (1) publish all data, no matter what the outcome, because a negative finding is still an important finding; and (2) have a hypothesis to explain the finding.

Again, this publication describes a deep underlying problem in scientific culture that needs rethinking.