biomedical sciences

Antidepressant trial neglected suicide attempts

The antidepressant paroxetine was reported in 2001 to be effective and safe for adolescents. Now this trial has been re-evaluated following an open call in the British Medical Journal (BMJ), as IFLScience reports. The new study was made possible with the help of GlaxoSmithKline, which initiated the original work and made the data accessible for re-evaluation. Paroxetine has been disputed over the last years, a dispute that the new results frighteningly justify.

It turned out not only that paroxetine is not beneficial to adolescents, but also that 11 participants in the 2001 study taking paroxetine attempted suicide or showed self-harming behavior, compared to only one person in the control group. This had been ignored by the researchers. It had also been ignored that parent- and self-ratings of paroxetine by the patients did not show a significant difference from placebo. Here we have a point that is valid throughout science.

“The investigator assessments always end up looking more favorable to the drug than those from the patients,” Jureidini told IFLScience.

Scientists are considered to be neutral and unbiased while they develop theories and prove or disprove them. But of course, this cannot be entirely true, as scientists – along with all other human beings – have expectations and are influenced by their opinions. That a researcher tends to judge results in favor of his new theory is not the problem, because it is only human. But it is clearly not responsible to approve a medicine based on a trial which was never double-checked.

Even though the danger of paroxetine is only now revealed, 14 years after the initial study, the BMJ call shows how important double-checking of clinical trials is. It should not be too difficult to approve a drug only once its effectiveness and danger potential have been confirmed by two independent studies – or, as in this case, once the data set has been analyzed separately.
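A back-of-the-envelope sketch of why two independent studies would help (my own illustration; the 5% threshold is the conventional significance level, not a number from the article): a single trial of an ineffective drug still "succeeds" by chance at the significance threshold, while two independent trials must both succeed.

```python
# Illustration (hypothetical numbers): chance that an ineffective drug
# looks effective, with one trial vs. two independent confirmations.
alpha = 0.05  # conventional false-positive rate of a single trial

p_single = alpha          # one trial: 5% chance of a spurious "effect"
p_double = alpha * alpha  # two independent trials must both pass

print(f"false positive, single trial:     {p_single:.4f}")  # 0.0500
print(f"false positive, two indep. trials: {p_double:.4f}")  # 0.0025
```

Requiring independent confirmation multiplies the error probabilities, cutting the chance of approving a useless drug by a factor of twenty in this toy calculation.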

A “TripAdvisor” for chemical probes

If medical researchers want to test a new drug, they can literally choose between hundreds or thousands of reported molecules. But the real problem they face is the high number of ill-suited molecules that are not properly described – e.g., compounds that target enzymes other than the desired one, or that have unwanted side effects. Finding a suitable compound for a biomedical study can thus take endless hours before the study itself has even started.

Chemical biologists have now used crowdfunding to start an internet platform that recommends chemical probes, as reported in Science News. This action is, in my opinion, a very exciting act of self-empowerment, born from the strong impression that the self-correction mechanisms in scientific publishing are not sufficient. I think one underlying problem might also be the reproducibility crisis that science is still facing. Once a new compound is published, its reproduction (and cross-checking) by other labs is no longer feasible, since the work would not be original. Problems in reproduction therefore usually surface only when the compound is supposed to be used for an application.

Maybe the platform has the potential to provide an alternative metric for science, based on the applicability of drugs and drug-like molecules.

When medicine is based on a single study, its effect might have occurred by chance. Credit: BloodyMary

100% Effective: the unrepeated studies

A few weeks back, Ben Goldacre wrote about the reproducibility crisis that science is suffering from. As a very illustrative example, he addresses the large-scale use of deworming medication in developing countries, which is based on a single, but very extensive, study from 2004.

When Goldacre described the outcome of a 2013 re-evaluation of the data from 2004, he listed all the problems found in that study – ranging from missing data to wrong instructions provided by the analysis software package that was used back then. It is really no surprise that the new evaluation came to very different conclusions about the effectiveness of deworming medication in schools.

I really appreciate that Goldacre does not take the credit away from the authors of the 2004 study, acknowledging that they did difficult and hard work in all conscience. Instead, he points out how unusual it was that those scientists provided all their raw data for a re-evaluation. And this is indeed astonishing. Goldacre's comparison with the probe passing Pluto is well chosen:

Conducting a trial, and then refusing to let anyone see the data, is like claiming you’ve flown a spaceship to Pluto, but refusing to let anyone see the photos.

As a matter of fact, this happens frequently in science. As a chemist, I sometimes roll my eyes when I see hundreds of numbers in the supplementary information of a paper, describing every atom coordinate obtained from the crystal structure of a molecule. But at least this tells me that I really get all the data.



The other and even larger problem is indeed reproducibility. To be sure that a result is real and well-founded, it needs confirmation from different scientists. It is not unusual that scientists find a published protocol and try to build their work on it. When I go through an interesting paper, I find myself looking for holes – missing information that might prevent me from reproducing the result in the first place. When I follow a published synthesis and succeed on the first try, I am surprised. A failure, on the other hand, might mean that I am either not skilled enough, or that some piece of information is missing from the paper.

Not giving away all the information can be essential for a scientist under the increasing pressure to “publish or perish”. Since it delays others in reproducing the work, it ensures that the scientist keeps an advantage. Authors even have to fear that their manuscripts are rejected because a peer reviewer reproduces the work in his own lab and then publishes it first.

So, hoarding data is used as an insurance for the authors, or let’s say as a “copy protection”. As understandable as this might be, it is disastrous for science, as Goldacre clearly emphasizes. Irreproducible science is basically worthless, and in the worst case harmful. I agree that this has no influence on the fact that treating children against worms is an urgent and important issue. But it undermines the reliability of science in our society and promotes pseudo-scientific or religious beliefs that claim to be equally justified.


A publication bias workshop

Two weeks ago, the National Centre for the Replacement, Refinement & Reduction of Animals in Research (NC3Rs) hosted a workshop about publication bias. The workshop brought together “funders, journals, and scientists from academia and industry to discuss the impact of publication bias in animal research”.

Following this event, three very good blog articles were written, including one from F1000 Research and one from BioMed Central. A Twitter discussion was also ongoing, for which I made a Storify (please feel free to give me a note if I missed something).

Judging from a distance, this workshop seems to have had a good impact on raising awareness of publication bias and its consequences. Some solutions were also discussed, such as prospective registration of clinical trials with journals, and new ways of publishing. To me, prospective registration might be an interesting solution, also for other disciplines. This is essentially what happens every day when a researcher applies for funding. In that case, however, the scientist is responsible for providing all results to the funder, not to a journal. I agree that this idea might be complicated to manage, but I really think it is worth the effort.

Considering new ways of publishing, PLOS ONE seems to be a step ahead, having launched a new collection focusing on negative results. As promising as this might sound at first, the collection consists of papers from 2008 until 2014, bundled into a new collection two weeks ago, on 25 February 2015. This still reminds me a bit of the journals for negative results that are published only sporadically. Nonetheless, I think that awareness of the issue is rising.

Curing AIDS: The first 25 years

As Retraction Watch observed last week, the Dutch scientist Henk Buck has delivered new insights about his publication in Science that was retracted in 1990. In the original paper, the authors claimed nothing less than the successful inhibition of HIV infectivity, allowing for a cure of AIDS.

Four separate investigations turned up faked data, manipulated images, and highly selective reporting designed to obscure the fact that HIV-fighting molecules never existed.

With this in mind, I think that this new publication, 25 years later, might leave many people speechless. When I read the recent publication, the originally retracted paper, and the retraction, I wanted to give this new interpretation a fair chance. At least it is a discussion of the data, so what could be wrong with that? Apparently quite a lot: first, of course, the fact that Buck was found guilty of scientific fraud. Added to that is the questionable reputation of the journal, owned by Scientific Research Publishing (SCIRP), registered in Delaware and located in China.

Now, the two big questions are: Why? And why now?
I am keeping my fingers crossed that Retraction Watch receives an answer from the author, as this might be the beginning (or a very late continuation) of an interesting story.


Yet another retracted Nature publication

As announced in the Nature Blog this week, the RIKEN Centre for Developmental Biology (CDB) in Kobe, Japan, is going to be renamed and reduced in size. This is so far the latest development in what was maybe the science scandal of 2014, in which two publications in Nature about “stress-induced” growth of stem cells [1, 2] were retracted. The reason was a lack of reproducibility. Very tragically, the situation was also accompanied by a suicide.

The number of retracted papers is impressively documented by Retraction Watch, and the problem is not limited to highly prestigious publications like Nature. The reasons for the publication of irreproducible papers are manifold. In my opinion, the most likely cause might be simple mistakes, as in the publication of Doo Ok Jang et al. in the Journal of the American Chemical Society, which was retracted five years after its publication.

These “false positive” results are, in my opinion, among the greatest perils in science, since every scientist is eager to publish anything positive, (almost) no matter what. Once a hypothesis has been confirmed in an experiment, the chance is rather low that it will be double- or triple-checked.


Ben Goldacre’s talk about publication bias and its consequences in medicine

In this very interesting TED talk, Ben Goldacre explains the consequences of publication bias in medical studies.

In fact, there have been so many studies conducted on publication bias now, over a hundred, that they’ve been collected in a systematic review, published in 2010, that took every single study on publication bias that they could find. Publication bias affects every field of medicine. About half of all trials, on average, go missing in action, and we know that positive findings are around twice as likely to be published as negative findings.
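Goldacre's numbers can be made tangible with a toy simulation (my own sketch, not his data): if positive-looking results are twice as likely to be published as negative ones, the published literature overstates the effect of a drug that in truth does nothing.

```python
import random

random.seed(0)

# Toy simulation (hypothetical parameters): a drug with ZERO true effect
# is tested in many small trials; positive results are twice as likely
# to be published as negative ones, echoing the 2:1 ratio quoted above.
n_trials = 10_000
published = []
for _ in range(n_trials):
    effect = random.gauss(0.0, 1.0)          # measured effect; true mean is 0
    p_publish = 0.8 if effect > 0 else 0.4   # positive results favored 2:1
    if random.random() < p_publish:
        published.append(effect)

bias = sum(published) / len(published)
print(f"true effect: 0.0, average published effect: {bias:.2f}")
```

Even though every individual trial is honest, the selective filter alone makes the published average clearly positive – a drug with no effect appears to work.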

If you (like me) prefer to read instead of listening, you can find the transcript here.