Is Nature’s current retraction record not a record at all?

Nature’s current wave of retractions is still widely discussed in the scientific community. As of September 2014, the number of retractions stands at eight, already higher than in 2013, when there were “only” six. In this discussion, the supposed record of ten retractions from 2003 is often cited.

So, what is going on at Nature? Paul Knoepfler addresses this question in depth on his blog, pointing out that the increased number of retractions might result from a lower tolerance on the part of Nature’s editorial staff. Although the retraction counts of recent years look impressive (1–2–6–8 from 2011 to 2014), they appear in a different light compared to the 2003 record. However, I have to contradict @Richvn here: two of the ten papers listed for 2003 are related to retractions but are not actual retractions themselves.


My contribution to the ongoing discussion about Nature’s wave of retractions is therefore that this wave is at least not unique in the journal’s history. That said, 2014 is not over yet, and many publications are, of course, retracted only at a later point in time.


Link of the week: “How anonymous peer review fails to do its job and damages science.”

In his post on the Open Science Collaboration blog, Prof. Jan P. de Ruiter comments on the apparent drawbacks of the anonymous peer-review (AP) process in science publishing. He cites an indeed very popular phrase about AP:

It may have its flaws, but it’s the ‘least bad’ of all possible systems.

The examples of partially insulting and unfair reviews produced by the current AP system ring true, in my experience. I very much appreciate that de Ruiter also offers alternatives, which he applies himself where possible.

Rule a) Reviewers with tenure always sign their reviews.

Rule b) Reviews are stored, and all researchers have the explicit right to look up and cite reviews. If the author of a certain review is anonymous, so be it. Call them “reviewer 3 in submission so-and-so to journal X”, but at least this allows researchers to address and discuss their arguments in the papers. I often notice that reviewers have a very strong influence on papers, by requesting that certain points be addressed before they advise acceptance. This epistemic tug-of-war between reviewers and authors often results in needless meandering and bad rhetorical flow.

The full text can be found here.

How to publish null results?

In one of my past entries, I compiled an exemplary and incomplete list of journals dedicated to negative research outcomes. The observation that most of those journals suffer from very low submission numbers is perhaps not surprising, but it is puzzling nonetheless. In my opinion, it remains undisputed that unsuccessful experiments, unexpected observations and contradictory findings are crucial for progress in science.

However, there are plenty of reasons why scientists would not reveal their failures openly, and I would do the same. So the question is: what would a platform look like that helps scientists communicate about obstacles, questions and uncertainties? And why would scientists want to contribute?
An interesting example is the open-access journal PLOS One, which explicitly publishes every article as long as it is scientifically sound. Due to its open-access nature, the authors pay upon publication instead of the reader. There are many similar open-access journals, but to my knowledge PLOS One is the most successful one. I think PLOS One is indeed a shelter for findings that contradict commonly acknowledged theories, or for research areas not considered “sexy” by the scientific community. To my knowledge, it took the journal many difficult years to get established, and even now it is not known to that many scientists.
However, I think that for a difficult project like this, covering a wide spectrum of sciences was very helpful. The combination of quick publishing and a direct connection with the audience is an asset that distinguishes a project like PLOS One from typical journals. Perhaps it is the certainty for authors of being published in a serious venue (PLOS One is established), while the journal is widespread enough to ensure a steady stream of submissions.
Another interesting example is the review function of the scientific social network ResearchGate. On this platform, Dr. Kenneth Lee from the University of Hong Kong and Dr. Mohendra Rao from the NIH published their efforts to reproduce STAP, both coming to the conclusion that the original work is not reproducible. Dr. Lee also tried to submit this review to Nature, where the original STAP work was published; however, the review was rejected for not-so-clear reasons. Nature nonetheless retracted the original STAP publications later on.
Opacity and lack of reproducibility of experiments are an ongoing threat that undermines the reliability of science as a whole. I think that to report seriously on negative experimental outcomes, their reproducibility must be ensured. But in fact, it must equally be ensured for the well-selling positive outcomes. Keeping an eye on transparent and reproducible experimental procedures is thus simply a mark of good scientific quality in general.

Considering this, a platform focusing on negative results should (1) be broad in scope and (2) leave no doubt about the scientific craftsmanship behind the work. Further, I tend more and more to believe that a “classic” medium like a journal might not be the ideal platform for such results. A good publication type might be communications, supporting quick and responsive feedback. Another important criterion is that publishing those results must be rewarded; in the simplest case, it should help improve the author’s h-index. Here, ResearchGate’s approach of inventing a new score might be useful, since its “RG score” is not coupled solely to the sheer number of publications and citations.

Link of the week: “Science is not Neutral”

In the Guardian’s “Political Science” blog, Alice Bell describes a surprisingly close equivalent of Occupy in the British science community of 1970.

They started by just asking questions. But the panel chairman and speakers stifled any attempts of debate, dismissing political discussion as irrelevant. The BA seemed to be built on an inflexible culture and internal structure, too reliant on industrial sponsorship to positively challenge debate on the social implications of science. Frustrated, they occupied a mid-conference teach-in. It was designed to be the anti-thesis of how they saw a BA session, with no set-piece speeches, and no restrictions on what could or could not be asked.

The full text is available here.

Yet another retracted Nature publication

As announced on the Nature blog this week, the RIKEN Centre for Developmental Biology (CDB) in Kobe, Japan, is going to be renamed and reduced in size. This is so far the latest development in what is perhaps the science scandal of 2014, in which two Nature publications on the “stress-induced” generation of stem cells [1, 2] were retracted for lack of reproducibility. Very tragically, the affair was also accompanied by a suicide.

The sheer number of retracted papers is impressively documented by Retraction Watch, and it is not limited to highly prestigious outlets like Nature. The reasons why irreproducible papers get published are manifold. In my opinion, the most likely cause is simple mistakes, as in the paper by Doo Ok Jang et al. in the Journal of the American Chemical Society, which was retracted five years after its publication.

These “false positive” results are, in my opinion, among the most dangerous perils in science, since every scientist is eager to publish anything positive, (almost) no matter what. Once a hypothesis has been confirmed in an experiment, the chance is rather low that it will be double- or triple-checked.
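To illustrate why unchecked positives are so dangerous, here is a minimal back-of-the-envelope calculation in Python. The field parameters (10% of tested hypotheses true, a 5% false-positive rate, 80% power) are assumptions chosen purely for illustration, not figures from any study:

```python
# Back-of-the-envelope: how many published "positive" findings are false?
# All parameters below are assumed for illustration.
prior_true = 0.10   # fraction of tested hypotheses that are actually true
alpha = 0.05        # chance a single experiment "confirms" a false hypothesis
power = 0.80        # chance a single experiment detects a real effect

true_positives = prior_true * power          # real effects, correctly found
false_positives = (1 - prior_true) * alpha   # null effects, wrongly "confirmed"
share_false = false_positives / (true_positives + false_positives)
print(f"share of positive findings that are false: {share_false:.0%}")
```

Even with conventional statistics, over a third of the positive findings in this hypothetical field would be false, and without routine replication they simply stay in the literature.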

The journals of negative results in science

Up to now, I have found five journals dedicated to publishing negative results. While only two of them provide new content on their websites (i.e., entries from 2014 or later), one of these two has published no more than 25 articles in its twelve-year history. The three “silent” ones are:

The one actively running exception I have found so far is the Journal of Negative Results in Biomedicine, hosted by a professional publisher. Also worth mentioning is the 2013 campaign of the biomedical journal F1000 Research, during which manuscripts reporting negative results were not charged for publication. Please send me a note if I missed something.

So far, there is clearly a need to publish negative, unexpected, or contradictory findings, and there are indeed attempts to give them a shelter. The interesting aspect is that publishing negative results seems to require a strong lobby, such as a publisher. One might expect that an open-science solution would work simply “out of the crowd”. However, in my opinion the critical point is the question of why scientists would actually want to publish negative findings. Of course, there are many good reasons, but at the end of the day most scientists do not dare to make their “failures” public, as every publication counts in the CV, and no one really likes to tell the world what they did not manage to achieve.

Spuriously correlated: Negative results in social science

In their 2007 publication, the social scientists David Lehrer, Janine Leschke, Stefan Lhachmini, Ana Vasuliu and Brigitte Weiffen described the impact of negative results in their research area. Besides their excellent work in defining and classifying negative results, their publication aims to introduce the Journal of Spurious Correlations, which is dedicated to increasing transparency in research.

The authors define negative results as “[…] findings that are validated outside the research context in which they are generated, but not by standards of the heuristic process that generated them”. So, in my reading, negative results are unexpected ones. They also address the problem of distinguishing such unexpected findings from mistakes.

Being a physical scientist, I absolutely agree with their argument for why negative results are of great value to science. Their categorization into four classes (inconclusive results, non-results, confutative results and ersatz results) is, in my opinion, something that might not transfer directly to other sciences. As the authors clearly state, the social sciences have their own distinct methods for performing studies and evaluating data. These may resemble those of the physical sciences in several respects (e.g. statistical data evaluation), but not necessarily always.

Most interesting is their attempt to overcome this obstacle with the Journal of Spurious Correlations. However, I was not able to find a single published article, and the latest news entry dates from December 2007. This fits a statement by Douglas McCormick from the same year:

Couple that with most researchers’ reluctance to publicly air what they consider mistakes, and with the difficulty of finding reviewers canny enough to separate the null-result wheat from the ill-executed chaff, and you wind up with some significant doubts about the workability of the project.

Ben Goldacre’s talk about publication bias and its consequences in medicine

In this very interesting TED talk, Ben Goldacre explains the consequences of publication bias in medical studies.

In fact, there have been so many studies conducted on publication bias now, over a hundred, that they’ve been collected in a systematic review, published in 2010, that took every single study on publication bias that they could find. Publication bias affects every field of medicine. About half of all trials, on average, go missing in action, and we know that positive findings are around twice as likely to be published as negative findings.
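The mechanism Goldacre describes can be sketched with a small simulation. The publication probabilities below (0.6 for positive-looking results, 0.3 for negative ones) are my own assumptions, chosen only to roughly match the quoted figures of half of all trials going missing and positive findings being twice as likely to be published:

```python
import random
import statistics

random.seed(42)

def simulate_publication_bias(n_trials=10_000, true_effect=0.0):
    """Simulate many trials of a treatment with no real effect, then apply
    a publication filter that favours positive-looking results."""
    all_results, published = [], []
    for _ in range(n_trials):
        observed = random.gauss(true_effect, 1.0)  # noisy measurement
        all_results.append(observed)
        # assumed filter: positive results are published twice as often
        p_publish = 0.6 if observed > 0 else 0.3
        if random.random() < p_publish:
            published.append(observed)
    return statistics.mean(all_results), statistics.mean(published)

mean_all, mean_published = simulate_publication_bias()
print(f"mean effect over all trials:       {mean_all:+.3f}")
print(f"mean effect over published trials: {mean_published:+.3f}")
```

With a true effect of exactly zero, the published literature in this toy model still shows a clearly positive mean effect, while roughly half of all trials never appear at all, which is the heart of the problem Goldacre describes.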

If you (like me) prefer to read instead of listening, you can find the transcript here.