This year, the scientific publisher Elsevier has launched New Negatives in Plant Science, an “open access, peer reviewed, online journal that will publish hypothesis-driven, scientifically sound studies that describe unexpected, controversial, dissenting, and/or null (negative) results in basic plant sciences.” The first issue was published in August this year and is currently in progress.
Journals devoted to null-outcome studies usually struggle to attract submissions. That null results are now being acknowledged by a major publisher therefore comes as a very positive surprise. I also think open access is exactly the right model for such a journal, since it will presumably publish curious results that could help many other scientists avoid performing unnecessary experiments.
On Wednesday this week, a scientific tweetstorm started about a publication by Zachary W. Culumber et al., who published a paper without removing a draft comment that was never supposed to become public. The full story is nicely summarized by Grrlscientist.
While many of the commenters addressed the apparent lack of proper peer review before the manuscript was published, I had another thought. What if…?
Typically, a manuscript is reviewed thoroughly by the authors themselves before it is submitted. In the subsequent peer-review process, at least two anonymous experts review the text and offer comments, including a recommendation to accept or reject the manuscript for publication. It is unlikely, though of course not impossible, that such a blatant mistake was simply overlooked. Moreover, peer reviewers are unpaid and do this work on top of their actual duties (such as teaching classes, supervising research, or applying for funding). But what if… this was done on purpose?
My theory is that the reviewers, or at least one of them, might have been a competitor of the authors. The mistake might have been noticed and deliberately left uncommented. In that case, the anonymous peer-review process would offer a perfidious way to harm competitors: simply let them walk straight into the trap.
In either case, however, the peer review clearly failed. Still, such a slip is only human, and apart from some gleeful amusement in the community, the scientific results themselves were never called into question.
When I mentioned in a small discussion that I was attempting to write about negative results on this blog, a friend pointed out that there already is a journal covering “null” results in science. So I would like to introduce the “Journal of Unsolved Questions” (JUnQ).
Since I was unaware of this journal, I was accordingly surprised to find it very much alive, with (as far as I can judge) two issues per year, published by PhD students from the University of Mainz, Germany. The journal features articles, guest contributions, and comments from contributors around the world, covering various scientific topics. The articles are peer-reviewed and judged for acceptance or rejection by independent referees. Fittingly for the journal’s name, most of the articles’ titles are indeed questions, which is refreshing, since scientists are usually expected to offer answers instead. Personally, I took a great interest in the article by Natascha Gaster, Jorge S. Burns and Michael Gaster about the ICMJE recommendations and the problem of co-author overflow and honorary authorships in articles.
Nonetheless, it occurs to me that in JUnQ – although dedicated to “[…] making ‘negative’ and ‘null’-results from all fields of science available to the scientific community” – the authors rephrase the “null” outcomes of their work as open questions. That is fair enough, since negative results do leave the original questions unsolved, or even give rise to new ones.
What I am still wondering is whether there is a similarly serious platform for experimental studies with a “true negative” outcome. JUnQ is clearly contributing to the wealth of unsolved questions in the sciences, but I think a platform for negative experimental results would help scientists avoid running into dead ends that have already been discovered, but never published.
The current retraction wave in Nature is still being discussed widely in the scientific community. Indeed, as of September 2014, the number of retractions stands at eight, already exceeding 2013, which saw “only” six retractions. In this discussion, the record from 2003, supposedly ten retractions, is often cited.
So, what is going on at Nature? Paul Knoepfler addresses this question intensively in his blog, also pointing out that the increased number of retractions might be the result of a lower tolerance on the part of Nature‘s staff. Although the numbers of retractions over the last years look impressive (1–2–6–8, from 2011 to 2014), they look different in comparison to the 2003 record. Nevertheless, I have to contradict @Richvn: two of the ten listed papers are related to retractions, but are not actual retractions.
My contribution to the ongoing discussion about Nature‘s wave of retractions is therefore that this is at least not unique in the journal’s history. Then again, 2014 is not over yet, and publications are, of course, sometimes retracted at a later point in time.
In his text on the Open Science Collaboration blog, Prof. Jan P. de Ruiter comments on the apparent drawbacks of the anonymous peer-review (AP) process in science publishing. He cites an indeed very popular phrase about AP:
It may have its flaws, but it’s the ‘least bad’ of all possible systems.
The examples of partially insulting and unfair reviews that result from the current AP system ring true, in my experience. I very much appreciate that de Ruiter also offers alternatives, which he applies himself whenever possible.
Rule a) Reviewers with tenure always sign their reviews.
Rule b) Reviews are stored, and all researchers have the explicit right to look up and cite reviews. If the author of a certain review is anonymous, so be it. Call them “reviewer 3 in submission so-and-so to journal X”, but at least this allows researchers to address and discuss their arguments in the papers. I often notice that reviewers have a very strong influence on papers, by requesting that certain points be addressed before they advise acceptance. This epistemic tug-of-war between reviewers and authors often results in needless meandering and bad rhetorical flow.
The full text can be found here.
As announced in the Nature Blog this week, the RIKEN Centre for Developmental Biology (CDB) in Kobe, Japan is going to be renamed and reduced in size. This is, so far, the latest development in what is perhaps the science scandal of 2014, in which two publications in Nature about “stress-induced” generation of stem cells [1, 2] were retracted. The reason was a lack of reproducibility. Very tragically, the situation was also accompanied by a suicide.
The number of retracted papers is impressively documented by RetractionWatch, and the problem is not limited to highly prestigious journals like Nature. The reasons for the publication of such irreproducible papers are manifold. In my opinion, the most likely cause is simple mistakes, as in the publication by Doo Ok Jang et al. in the Journal of the American Chemical Society, which was retracted five years after it appeared.
These “false positive” results are, in my opinion, the most dangerous perils in science, since every scientist is eager to publish anything positive, (almost) no matter what. Once a hypothesis has been confirmed in an experiment, the chance is rather low that it will be double- or triple-checked.
Up to now, I have found five journals dedicated to publishing negative results. While only two of them show new content on their websites (i.e., entries from at least 2014), one of these two has published no more than 25 articles in its twelve-year history. The three “silent” ones are:
The one actually active exception I have found so far is the Journal of Negative Results in Biomedicine, hosted by a professional publisher. Also worth mentioning is the 2013 campaign by the biomedical journal F1000 Research, during which manuscripts about negative results were not charged for publishing. Please send me a note if I have missed something.
There is clearly a need to publish negative, unexpected, or contradictory findings, and there are indeed attempts to give them a home. The interesting aspect is that publishing negative results seems to require a strong lobby, such as a publisher. One might expect an open-science solution to work “out of the crowd”. In my opinion, however, the critical question is why scientists would actually want to publish negative findings. There are, of course, many good reasons, but at the end of the day most scientists do not dare to make their “failures” public, as every publication counts in the CV. And no one really likes to tell the world what they did not manage to achieve.
In their publication from 2007, the social scientists David Lehrer, Janine Leschke, Stefan Lhachmini, Ana Vasuliu and Brigitte Weiffen described the impact of negative results in their field. Besides their excellent work in defining and classifying negative results, their publication introduces the Journal of Spurious Correlations, which is dedicated to increasing transparency in research.
The authors define negative results as “[…] findings that are validated outside the research context in which they are generated, but not by standards of the heuristic process that generated them”. So, in my reading, negative results are unexpected ones. They also address the problem of distinguishing such unexpected findings from mistakes.
As a physical scientist, I absolutely agree with their argument for why negative results are of great value to science. Their categorization into four classes (inconclusive results, non-results, confutative results and ersatz results), however, might not transfer directly to other sciences. As the authors clearly state, the social sciences have distinct methods for performing studies and evaluating data. These may be similar to the physical sciences in some respects (e.g., statistical data evaluation), but perhaps not always.
Most interesting is their attempt to overcome this obstacle with the Journal of Spurious Correlations. However, I was not able to find a single published article, and the latest news entry dates from December 2007. This fits with a statement by Douglas McCormick from the same year:
Couple that with most researchers’ reluctance to publicly air what they consider mistakes, and with the difficulty of finding reviewers canny enough to separate the null-result wheat from the ill-executed chaff, and you wind up with some significant doubts about the workability of the project.
In this very interesting TED talk, Ben Goldacre explains the consequences of publication bias in medical studies.
In fact, there have been so many studies conducted on publication bias now, over a hundred, that they’ve been collected in a systematic review, published in 2010, that took every single study on publication bias that they could find. Publication bias affects every field of medicine. About half of all trials, on average, go missing in action, and we know that positive findings are around twice as likely to be published as negative findings.
If you (like me) prefer to read instead of listening, you can find the transcript here.