open data

Trial on antidepressant neglected suicide attempts

The antidepressant paroxetine was reported in 2001 to be effective and safe for adolescents. Now this trial has been re-evaluated following an open call in the British Medical Journal (BMJ), as IFLScience reports. The new study was made possible with the help of GlaxoSmithKline, which initiated the original work and made the data accessible for re-evaluation. Paroxetine has been disputed in recent years, and the new results justify that dispute in an alarming way.

It turned out not only that paroxetine is not beneficial to adolescents, but also that 11 participants taking paroxetine in the 2001 study attempted suicide or showed self-harming behavior, compared to only one person in the control group. The original researchers had ignored this. They had also ignored that parent and self-ratings of paroxetine by the patients did not differ significantly from placebo. Here we have a point that is valid for science as a whole.
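To see why an 11-to-1 imbalance in suicide attempts and self-harm should not be brushed aside, one can run a quick significance check. The sketch below uses Fisher's exact test with hypothetical group sizes of 90 patients per arm (the real trial arms were of roughly this magnitude, but consult the reanalysis for the actual numbers); only the 11-vs-1 event counts come from the text above.

```python
from math import comb

def fisher_one_sided(a, b, c, d):
    """One-sided Fisher's exact test for a 2x2 table:
                 events   no events
        drug       a         b
        placebo    c         d
    Returns P(events in drug arm >= a), with all margins fixed."""
    n1, n2 = a + b, c + d          # group sizes
    m = a + c                      # total number of events
    total = comb(n1 + n2, m)
    # Sum hypergeometric probabilities over all outcomes at least as extreme
    return sum(comb(n1, k) * comb(n2, m - k)
               for k in range(a, min(n1, m) + 1)) / total

# Hypothetical arms of 90 each: 11 events on paroxetine, 1 on placebo.
p = fisher_one_sided(11, 79, 1, 89)
print(f"one-sided p = {p:.4f}")  # well below the conventional 0.05 threshold
```

Even with these placeholder group sizes, the imbalance is far too large to be plausibly attributed to chance, which is exactly why omitting it from the original report matters.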

“The investigator assessments always end up looking more favorable to the drug than those from the patients,” study author Jon Jureidini told IFLScience.

Scientists are considered to be neutral and unbiased while they develop theories and prove or disprove them. But of course, this cannot be entirely true: scientists, like all other human beings, have expectations and are influenced by their opinions. That a researcher tends to judge results in favor of his own new theory is not the problem, because it is simply human. But it is clearly not responsible to approve a medicine based on a trial that was never double-checked.

Even though the danger of paroxetine is now revealed, a full 14 years after the initial study, the BMJ call shows how important the double-checking of clinical trials is. It should not be too difficult to approve a drug only once its effectiveness and its potential dangers have been confirmed by two independent studies, or, as in this case, once the data set has been analyzed separately.

A “TripAdvisor” for chemical probes

When medical researchers want to test a new drug, they can literally choose from hundreds or thousands of reported molecules. But the real problem they face is the high number of ill-suited molecules that are not properly described, e.g., compounds that target enzymes other than the desired one, or that have unwanted side effects. Finding a suitable drug for a biomedical study can thus take endless hours before the study itself has even started.

Chemical biologists have now used crowdfunding to start an internet platform that recommends chemical probes, as reported in Science News. In my opinion, this is a very exciting act of self-empowerment, based on the strong impression that the self-correction mechanisms in scientific publishing are not sufficient. I think one underlying problem might also be the reproducibility crisis science is still facing. Once a new compound is published, reproducing (and cross-checking) it in other labs is no longer feasible, since such work would not be original. Problems with reproduction therefore usually surface only when the compound is to be used in an application.

Maybe the platform even has the potential to provide an alternative metric for science, based on the applicability of drugs and drug-like molecules.

When a medicine is based on a single study, its effect might have occurred by chance.
Credit: BloodyMary /

100% Effective: the unrepeated studies

A few weeks back, Ben Goldacre wrote about the reproducibility crisis that science is suffering from. As a vivid example, he discusses the large-scale use of deworming medication in developing countries, which is based on a single, albeit very extensive, study from 2004.

When Goldacre described the outcome of a 2013 re-evaluation of the data from 2004, he listed all the problems found in that study, ranging from missing data to faulty instructions provided by the analysis software package used back then. It is really no surprise that the new evaluation came to very different conclusions about the effectiveness of deworming medication in schools.

I really appreciate that Goldacre does not take any credit away from the authors of the 2004 study, acknowledging that they did difficult, hard work in good conscience. Instead, he points out how unusual it was that those scientists provided all their raw data for a re-evaluation. And this is indeed astonishing. Goldacre's comparison with the probe passing Pluto is well chosen:

Conducting a trial, and then refusing to let anyone see the data, is like claiming you’ve flown a spaceship to Pluto, but refusing to let anyone see the photos.

As a matter of fact, this happens frequently in science. As a chemist, I sometimes roll my eyes when I see hundreds of numbers in the supplementary information of a paper, describing every atom coordinate obtained from the crystal structure of a molecule. But at least this tells me that I really am getting all the data.



The other, even larger problem is indeed reproducibility. To be sure that a result is real and well-founded, it needs confirmation from different scientists. It is not unusual for scientists to find a published protocol and try to build their work on it. When I go through an interesting paper, I find myself looking for gaps of missing information that might prevent me from reproducing the result in the first place. When I follow a published synthesis and succeed on the first try, I am surprised. A failure, on the other hand, might mean that I am either not skilled enough or that some piece of information is missing from the paper.

Not giving away all the information can be essential for a scientist under the increasing pressure to “publish or perish”. Since it delays others in reproducing the work, it ensures that the scientist keeps an advantage. Authors also have to fear that their manuscripts get rejected by a peer reviewer who then reproduces the work in his own lab and publishes it first.

So hoarding data serves as a kind of insurance for authors, or, let's say, as a “copy protection”. As understandable as this might be, it is disastrous for science, as Goldacre clearly emphasizes. Irreproducible science is basically worthless, and in the worst case harmful. I agree that this does not change the fact that treating children for worms is an urgent and important issue. But it undermines the reliability of science in our society and promotes pseudo-scientific or religious beliefs that claim to be equally justified.

Changes and challenges in scientific publishing: 12th of June 2015, Vienna

On 12 June, the University of Vienna will host a talk by Eva Amsen, the F1000 Community Strategy Manager. She will give a presentation on “Open peer review, open data, negative results: Scientific publishing is changing.”

This talk will look in more detail at the benefits of these aspects of open science, but also discuss some of the challenges, such as lack of time or fear of sharing ongoing research.

It is very interesting to see that these aspects are increasingly being addressed by publishers as well. If it also became more rewarding to publish “the other” results, this would clearly benefit all scientists and their work.