Month: September 2014

Is Nature’s current retraction record not a record at all?

The current wave of retractions in Nature is still widely discussed in the scientific community. As of September 2014, the number of retractions stands at eight, which is already higher than in 2013, when there were "only" six. In this discussion, the record from 2003, supposedly ten retractions, is frequently cited.

So, what is going on at Nature? Paul Knoepfler addresses this question intensively in his blog, pointing out among other things that the increased number of retractions might be the result of a lower tolerance on the part of Nature's staff. Although the numbers of retractions over the last years look impressive (1, 2, 6 and 8 for 2011 through 2014), this high count looks less dramatic in comparison to the supposed record of 2003. Nevertheless, I have to contradict @Richvn here: two of the ten listed papers are retraction-related notices rather than actual retractions, which brings the 2003 count down to eight and puts it on par with 2014 so far.


My contribution to the ongoing discussion about Nature's wave of retractions is therefore that it is, at least, not unique in the journal's history. Nevertheless, 2014 is not over yet, and quite a few publications are, of course, retracted only at a later point in time.

Link of the week: “How anonymous peer review fails to do its job and damages science.”

In his post on the Open Science Collaboration blog, Prof. Jan P. de Ruiter comments on the apparent drawbacks of the anonymous peer review (AP) process in science publishing. He quotes an indeed very popular phrase about AP:

It may have its flaws, but it’s the ‘least bad’ of all possible systems.

The examples of partly insulting and unfair reviews produced by the current AP system ring true, in my experience. I very much appreciate that de Ruiter also offers alternatives, which he applies himself whenever possible:

Rule a) Reviewers with tenure always sign their reviews.

Rule b) Reviews are stored, and all researchers have the explicit right to look up and cite reviews. If the author of a certain review is anonymous, so be it. Call them “reviewer 3 in submission so-and-so to journal X”, but at least this allows researchers to address and discuss their arguments in the papers. I often notice that reviewers have a very strong influence on papers, by requesting that certain points be addressed before they advise acceptance. This epistemic tug-of-war between reviewers and authors often results in needless meandering and bad rhetorical flow.

The full text can be found here.

How to publish null results?

In one of my past entries I compiled an illustrative, incomplete list of journals dedicated to negative research outcomes. The observation that most of these journals suffer from very low submission numbers is perhaps not surprising, but it remains puzzling. In my opinion, it is still undisputed that unsuccessful experiments, unexpected observations and contradictory findings are crucial for progress in science.

However, there are plenty of reasons why scientists would not reveal their failures openly, and I would act no differently. So the question is: what would a platform look like that helps scientists communicate about obstacles, open questions and uncertainties? And why would scientists want to contribute?
An interesting example is the open-access journal PLOS One, which explicitly publishes every article as long as it is scientifically sound. Because of its open-access model, the authors pay a fee upon publication instead of the readers. There are many similar open-access journals, but to my knowledge, PLOS One is the most successful one. I think PLOS One is indeed a shelter for findings that contradict commonly acknowledged theories, or for research areas that the scientific community does not consider "sexy". To my knowledge, it took the journal many difficult years to get established, and even now it is not familiar to that many scientists.
However, I think that for a difficult project like this, covering a wide spectrum of the sciences was very helpful. Also, the combination of quick publishing and a close connection with the audience is an asset that distinguishes a project like PLOS One from typical journals. Perhaps it is the certainty for authors that they will be published in a serious venue (PLOS One is established), while the journal is widespread enough to attract a sufficient number of submissions.
Another interesting example is the review function of the scientific social network ResearchGate. On this platform, Dr. Kenneth Lee from the University of Hong Kong and Dr. Mohendra Rao from the NIH published their efforts to reproduce STAP, both coming to the conclusion that the original work is not reproducible. Dr. Lee also tried to submit this review to Nature, where the original STAP work had been published; however, the review was rejected for reasons that are not entirely clear. Nature nonetheless retracted the original STAP publications later on.
A lack of transparency and reproducibility in experiments is an ongoing threat that undermines the reliability of science as a whole. I think that in order to report negative experimental outcomes seriously, their reproducibility must be ensured. But in fact, the same holds for the well-selling positive outcomes. So keeping an eye on transparent and reproducible experimental procedures is simply a sign of good scientific practice in general.

Considering this, a platform focusing on negative results should (1) be broad in scope and (2) leave no doubt about the scientific craftsmanship behind the results. Further, I tend more and more to believe that a "classic" medium like a journal might not be the ideal platform for such results. A good publication type might be short communications, supporting quick and responsive feedback. Another important criterion is that publishing these results must be rewarded; in the simplest case, it should help improve the author's h-index. Here, ResearchGate's approach of inventing a new score might be useful, since its "RG score" is not solely coupled to the sheer number of publications and citations.
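For readers unfamiliar with the metric mentioned above, here is a minimal sketch (my own illustration, not taken from any of the cited platforms) of how an h-index is computed from a hypothetical list of per-paper citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank  # this paper still has at least 'rank' citations
        else:
            break
    return h

if __name__ == "__main__":
    # Hypothetical example: five papers with these citation counts give h = 3.
    print(h_index([10, 7, 3, 1, 0]))  # -> 3
```

The sketch also makes the problem visible: a negative result that is rarely cited contributes nothing to such a count, which is exactly why an alternative score could matter here.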

Link of the week: “Science is not Neutral”

In the Guardian's "Political Science" blog, Alice Bell describes a surprisingly close equivalent of the Occupy movement within the British science community in 1970.

They started by just asking questions. But the panel chairman and speakers stifled any attempts of debate, dismissing political discussion as irrelevant. The BA seemed to be built on an inflexible culture and internal structure, too reliant on industrial sponsorship to positively challenge debate on the social implications of science. Frustrated, they occupied a mid-conference teach-in. It was designed to be the anti-thesis of how they saw a BA session, with no set-piece speeches, and no restrictions on what could or could not be asked.

The full text is available here.

Yet another retracted Nature publication

As announced in the Nature blog this week, the RIKEN Centre for Developmental Biology (CDB) in Kobe, Japan is going to be renamed and reduced in size. This is, so far, the latest development in what is perhaps the science scandal of 2014, in which two Nature publications about the "stress-induced" generation of stem cells [1, 2] were retracted. The reason was a lack of reproducibility. Tragically, the situation was also accompanied by a suicide.

The sheer number of retracted papers is impressively documented by RetractionWatch, and it is not limited to highly prestigious journals like Nature. The reasons for the publication of such irreproducible papers are manifold. In my opinion, the most likely cause is simple mistakes, as in the publication by Doo Ok Jang et al. in the Journal of the American Chemical Society, which was retracted five years after its publication.

These "false positive" results are, in my opinion, the most dangerous perils in science, since every scientist is eager to publish anything positive, (almost) no matter what. Once a hypothesis has been confirmed in an experiment, the chance is rather low that it will be double- or triple-checked.