Month: December 2014

From crisis to crisis

In September this year, David Crotty wrote a blog post about two colliding crises – each in the context of negative results. The first is described as a “reproducibility crisis”, based on the assumption that a majority of published experiments are in fact not reproducible. The second is referred to as a “negative results crisis”: a large number of correct results remain unpublished because of their null-result character. Both crises are described as causing a considerable waste of time for scientists – either by performing published experiments that cannot actually succeed, or by repeating unsuccessful experiments that were never published.

One attempt to overcome the problem of negative results was suggested by Annie Franco, Neil Malhotra and Gabor Simonovits, namely “creating high-status publication outlets for these studies”. But I have to agree that this is easier said than done.

How willing are researchers to publicly display their failures? How much career credit should be granted for doing experiments that didn’t work?

Even though these problems are clearly not new (I dedicated this blog to negative results for a reason), I was surprised to see them actually described as “crises”. I do think that there is a problem of science losing the public’s trust, caused by the omnipresent publish-or-perish paradigm.


Show me your data sets!

Are authors of a scientific publication really less willing to share their raw data when their reported evidence is weak? This question was recently addressed in the field of psychology and, unsurprisingly, published in the open-access journal PLoS ONE. Jelte Wicherts, Marjan Bakker and Dylan Molenaar from the Psychology Department of the University of Amsterdam indeed came to that conclusion. Their study included 1149 results from 49 papers. Interestingly, for 28 of the 49 papers considered, the respective co-authors did not share their research data, even though they had agreed to do so beforehand.

Distribution of reporting errors per paper for papers from which data were shared and from which no data were shared. From DOI 10.1371/journal.pone.0026828

However, one might argue that the authors of this interesting “meta”-study are walking on difficult terrain, as they are trying to draw correlations about the accuracy of other scientists’ correlations. But I think their paper makes it sufficiently clear that they were well aware of that issue.