peer-review

Show me your data sets!

Are authors of a scientific publication really less willing to share their raw data when their reported evidence is weak? This question was recently addressed in the field of psychology, unsurprisingly published in the open-access journal PLoS ONE. Jelte Wicherts, Marjan Bakker and Dylan Molenaar from the Psychology Department of the University of Amsterdam indeed came to that conclusion. Their study included 1149 results from 49 papers. Interestingly, the co-authors of 28 of the 49 papers considered did not share their research data, even though they had agreed to do so beforehand.

Distribution of reporting errors per paper for papers from which data were shared and from which no data were shared. From DOI 10.1371/journal.pone.0026828

However, one might argue that the authors of this interesting “meta”-study are on difficult terrain, as they are, in effect, drawing a correlation about the accuracy of other scientists’ correlations. But I think their paper makes it clear enough that they were well aware of that issue.

The invisible reviewer

This week, on Wednesday, a scientific tweetstorm started about a publication by Zachary W. Culumber et al., who published a paper without removing a draft comment that was never meant to become public. The full story is nicely summarized by Grrlscientist.

While many of the commenters addressed the apparent lack of proper peer review before the manuscript was published, I had another thought. What if…?

Typically, a manuscript is reviewed thoroughly by the authors themselves before it gets submitted. Then, in the subsequent peer-review process, at least two anonymous experts review the text and offer comments, including their recommendation on whether the manuscript should be accepted or rejected for publication. It is unlikely, but of course not impossible, that such a blatant mistake was overlooked. Moreover, peer reviewers are not paid and do this work alongside their actual duties (such as teaching classes, supervising research, or applying for funding). But what if… this was done on purpose?

My theory is that the reviewers, or at least one of them, might be competitors of the authors. It might have happened that this mistake was noticed but deliberately left uncommented. In that case, the anonymous peer-review process would offer a perfidious way to harm competitors – simply by letting them run into the open knife.

In either case, however, the peer review clearly failed. Such a mistake is only human, and apart from a gleefully smiling community, the scientific results themselves were never called into question.

Link of the week: “How anonymous peer review fails to do its job and damages science.”

In his text on the Open Science Collaboration blog, Prof. Jan P. de Ruiter comments on the apparent drawbacks of the anonymous peer-review (AP) process in science publishing. He quotes an indeed very popular phrase about AP:

It may have its flaws, but it’s the ‘least bad’ of all possible systems.

The examples of partially insulting and unfair reviews that result from the current AP system are, in my experience, true in their tendency. I very much appreciate that de Ruiter also offers alternatives, which he applies himself when possible.

Rule a) Reviewers with tenure always sign their reviews.

Rule b) Reviews are stored, and all researchers have the explicit right to look up and cite reviews. If the author of a certain review is anonymous, so be it. Call them “reviewer 3 in submission so-and-so to journal X”, but at least this allows researchers to address and discuss their arguments in the papers. I often notice that reviewers have a very strong influence on papers, by requesting that certain points be addressed before they advise acceptance. This epistemic tug-of-war between reviewers and authors often results in needless meandering and bad rhetorical flow.

The full text can be found here.