
100% Effective: the unrepeated studies

A few weeks back, Ben Goldacre wrote about the reproducibility crisis that science is suffering from. As a very illustrative example, he discusses the large-scale use of deworming medication in developing countries, which is based on a single, but very extensive, study from 2004.

When Goldacre described the outcome of a 2013 re-evaluation of the data from 2004, he listed all the problems found in that study – ranging from missing data to incorrect instructions provided by the analysis software package that was used back then. It is really no surprise that the new evaluation came to very different results about the effectiveness of deworming medication in schools.

I really appreciate that Goldacre does not take credit away from the authors of the 2004 study, acknowledging that they did difficult and laborious work in good conscience. Instead, he points out how unusual it was that those scientists provided all their raw data for re-evaluation. And this is indeed astonishing. Goldacre's comparison with the probe passing Pluto is well chosen:

Conducting a trial, and then refusing to let anyone see the data, is like claiming you’ve flown a spaceship to Pluto, but refusing to let anyone see the photos.

As a matter of fact, this happens frequently in science. As a chemist, I sometimes roll my eyes when I see hundreds of numbers in the supplementary information of a paper, describing every atomic coordinate obtained from the crystal structure of a molecule. But at least this tells me that I really get all the data.

When a medicine is based on a single study, its effect might have occurred by chance.
Credit: BloodyMary / pixelio.de

The other and even larger problem is indeed reproducibility. To be sure that a result is real and well-founded, it actually needs confirmation from different scientists. It is not unusual for scientists to find a published protocol and try to build their work on it. When I go through an interesting paper, I find myself looking for loopholes or missing information that might prevent me from reproducing the result in the first place. When I follow a published synthesis and succeed on the first try, I am surprised. On the other hand, a failure might mean that I am either not skilled enough, or that some piece of information is missing from that paper.

Not giving away all the information can be essential for a scientist under the increasing pressure to “publish or perish”. Since it delays others in reproducing the work, it ensures that the scientist keeps an advantage. Authors also have to fear that their manuscripts are rejected because a peer reviewer reproduces the work in their own lab and then publishes it first.

So, hoarding data serves as insurance for the authors, or let’s say as a “copy protection”. As understandable as this might be, it is disastrous for science, as Goldacre clearly emphasizes. Irreproducible science is basically worthless, and in the worst case harmful. I agree that this has no influence on the fact that treating children for worms is an urgent and important issue. But it undermines the reliability of science in our society and promotes pseudo-scientific or religious beliefs that claim to be equally justified.


Discussion? Unwanted.

How research results are handled within the scientific community seems to be becoming an ongoing topic in the German newspaper Die Zeit. In their online version, a case concerning a psychological study is reported that might be described with words like “concealment”, “withholding” or “suppression”. Here is what happened, as described in the article:

In March 2013, Frieder Lang from the University of Erlangen-Nürnberg reported his research (please mind the paywall), which might be summarized with the insight that pessimistic people live longer than optimistic people. This is discussed with the explanation that pessimistic people are apparently more concerned about their health. This study, however, is questioned in some aspects by the statisticians Björn and Sören Christensen from the University of Applied Sciences and the University of Kiel. Their main concern addresses the assignment of test subjects to the optimistic and pessimistic groups. As they argue, this characteristic of an individual might change over the course of the five-year study. In my opinion, this concern does not contradict the research, but in fact makes a significant contribution to it.

The Zeit article reports that the Christensen brothers sent a note with their analysis in May 2013 to the journal Psychology and Aging, in which the original research was published. Their note was refused for publication because it did not include a theoretical background on the relationship between the test subjects’ perceived and actual health. One might argue here that delivering this background is already covered by the original paper by Lang and coworkers. After this fruitless attempt to publish their concerns, the note was submitted as a paper to the Zeitschrift für Gesundheitspsychologie (Journal of Health Psychology). More than half a year later, they received another rejection, with an astonishing explanation: not a single referee willing to review the paper could be found. After waiting another half year, the manuscript was withdrawn by the authors.

In the public eye, science is often seen as a process of learning and exchange. Theories and conclusions can be discussed, complemented, or even overthrown. Nevertheless, the peer-review publishing system appears to be more static than a fluid research process might require. Tools like PubPeer and the review option in ResearchGate do exist and are of growing importance. But they are nonetheless still viewed with suspicion by the established journals.

As Sören Christensen explained to me, their paper is currently under revision. So the case is not closed yet.

From crisis to crisis

In September this year, David Crotty wrote a blog post about two colliding crises – each in the context of negative results. The first crisis is described as a “reproducibility crisis”, based on the assumption that a majority of published experiments are in fact unreproducible. The second crisis is referred to as a “negative results crisis”, describing that a large number of correct results remain unpublished due to their null-result character. Both crises are described as causing a considerable waste of time for scientists – either by performing published experiments that cannot succeed, or by repeating unsuccessful experiments that were never published.

One attempt to overcome the problem of negative results was suggested by Annie Franco, Neil Malhotra and Gabor Simonovits, namely “creating high-status publication outlets for these studies”. But I have to agree that this is easier said than done.

How willing are researchers to publicly display their failures? How much career credit should be granted for doing experiments that didn’t work?

Even though these problems are clearly not new (I dedicated this blog to negative results for a reason), I was surprised to see them actually described as “crises”. I do think that there is a problem of science losing the public’s trust, caused by the omnipresent publish-or-perish paradigm.

Show me your data sets!

Are authors of a scientific publication really more unwilling to share their raw data when their reported evidence is not too strong? This question was recently addressed in the field of psychology, unsurprisingly published in the open-access journal PLoS ONE. Jelte Wicherts, Marjan Bakker and Dylan Molenaar from the Psychology Department of the University of Amsterdam indeed came to that conclusion. Their study included 1149 results from 49 papers. It is interesting that in 28 of the 49 papers considered, the respective co-authors did not share their research data, even though they had agreed to do so beforehand.

Distribution of reporting errors per paper, for papers from which data were shared and papers from which no data were shared. From DOI 10.1371/journal.pone.0026828

However, one might argue that the authors of this interesting “meta”-study walk on difficult terrain, as they are trying to draw a correlation about the accuracy of other scientists’ correlations. But I think their paper makes it clear enough that they were very much aware of that issue.
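To make the term “reporting error” a bit more concrete: such studies typically recompute the statistics that a paper reports and check whether the numbers are internally consistent. Below is a minimal sketch of that kind of consistency check, not the authors’ actual procedure; the function name, the tolerance, and the example values are made up for illustration, and SciPy is assumed to be available.

```python
# Minimal sketch: recompute a two-sided p-value from a reported t-statistic
# and its degrees of freedom, and flag a "reporting error" if it disagrees
# with the p-value printed in the paper. Hypothetical example, not the
# procedure used by Wicherts et al.
from scipy import stats

def check_reported_t(t_value, df, reported_p, tol=0.01):
    """Return (recomputed_p, is_consistent) for a two-sided t-test."""
    recomputed_p = 2 * stats.t.sf(abs(t_value), df)  # two-tailed p from |t|
    return recomputed_p, abs(recomputed_p - reported_p) <= tol

# Hypothetical report: "t(28) = 2.10, p = .04"
p, ok = check_reported_t(t_value=2.10, df=28, reported_p=0.04)
print(f"recomputed p = {p:.3f}, consistent with report: {ok}")
```

Applied across all results in a paper, a check like this yields the per-paper error counts that the figure above compares between data-sharing and non-sharing authors.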

Is the Nobel Prize a good thing?

It’s Nobel Prize week. And as everyone knows, the Nobel Prize is considered the highest award a scientist can receive in their career. This award is so archetypal that the secret striving to eventually win the Nobel Prize is ascribed to everyone doing science, and is often used in movies and TV shows as a typical cliché.

In terms of “pure” science, scientists’ aim for reputation and acknowledgement might seem somehow disturbing. The first motivation of a scientist should not be to achieve an award or to gain reputation – it should be to solve a distinct problem, and to learn something new about nature. Of course, this image of a selfless scientist who works only in the service of finding the pure truth is as wrong as the assumption that something like “the pure truth” exists at all. Scientists do research because it is their job. They have studied, they have contracts to fulfill, and they want to have a good and comfortable life, as everyone else does. And, of course, scientists also want to be acknowledged for their work, no less than everyone else does.

I think the Nobel Prize is a perfect example of expectations. Getting the Nobel Prize is virtually impossible, and purposefully working towards getting the Nobel Prize cannot be an option. The only thing one can really do is to do the best possible work and hope that, later, people notice that this work really contributed to the progress of our society. And this is what the Nobel Prize is for.

So there are many good reasons to acknowledge the successes of the awarded people and to do the best possible scientific work.

The importance of feeling stupid

I recently read a text about the concept behind a scientific publication, stating that it is somewhat misleading when it comes to describing the scientific process. True enough, most papers are built upon a theory that is supposed to be tested, followed by a corresponding experimental setup designed to prove that theory. Nevertheless, this is indeed not how science usually works. The most important breakthroughs come from sidetracks, unexpected observations, or even from failed experiments.

I have to agree that the process of deduction cannot produce any information that was not there before. Accessing and combining given information is clearly an important factor in science, but I think it is difficult to arrive at previously unknown concepts, or to question established ones, by deduction alone. But this is what the structure of most scientific articles pretends: the team of scientists has an epiphany about a given theory and deduces a meaningful experiment to prove or disprove exactly defined aspects of that theory. The data are then collected and listed without any subjective interpretation at this point. Finally, when all this is done, the scientists look at their new data in the context of the theory for the first time and come to new, ground-breaking conclusions about nature. I would be interested to know how many of those publications originate from an experiment that was supposed to give a completely different result and left the researchers puzzled for a considerable time.


In his essay “The importance of stupidity in scientific research”, Martin Schwartz raises the conflict between the perception of scientists as smart people and the fact that many scientists instead feel stupid in their work. Scientists are indeed addressing problems that not many people have addressed before – which is the reason why they do it. So clearly, there is a lack of certainty, and every step has to be taken carefully. It happens so easily that something gets overlooked, misinterpreted, or overrated. In science, you don’t simply know.

If you realize that you don’t know much about certain things, and these things happen to be your scientific project, you are bound to feel quite stupid. And again, this is why we do science: because we don’t know things.

Is Nature’s current retraction record not a record at all?

The current retraction wave in Nature is still being widely discussed in the scientific community. Indeed, as of September 2014, the number of retractions is 8, which is already higher than in 2013, when there were “only” six retractions. In this discussion, the supposed record of 10 retractions from 2003 is often referred to.

So, what is going on at Nature? Paul Knoepfler addresses this question intensively in his blog, also pointing out that the increased number of retractions might be the result of a lower tolerance by the staff of Nature. Although the numbers of retractions over the last few years look impressive (1–2–6–8 from 2011 to 2014), this high number looks different in comparison to the record from 2003. Nevertheless, I have to contradict @Richvn, as two of the ten listed papers are related to retractions but are not actual retractions themselves.


My contribution to the ongoing discussion about Nature’s wave of retractions is therefore that it is, at least, not unique in the history of Nature. Nevertheless, 2014 is not over yet. And many publications are, of course, retracted at a later point in time.