negative results

New negative-journal launched by a major publisher

This year, the scientific publisher Elsevier has launched New Negatives in Plant Science, an “open access, peer reviewed, online journal that will publish hypothesis-driven, scientifically sound studies that describe unexpected, controversial, dissenting, and/or null (negative) results in basic plant sciences.” The first issue was published in August this year and is currently in progress.

Journals devoted to null-outcome studies usually suffer from low attractiveness. The fact that null results are now being acknowledged by a major publisher therefore comes as a very positive surprise. I also think that open access is exactly the right model for this new journal, since a journal of this type presumably publishes curious results that could help many other scientists avoid performing unnecessary experiments.

Changes and challenges in scientific publishing: 12th of June 2015, Vienna

On 12 June, the University of Vienna will host a talk by Eva Amsen, the F1000 Community Strategy Manager. She will give a presentation about “Open peer review, open data, negative results: Scientific publishing is changing.”

This talk will look in more detail at the benefits of these aspects of open science, but also discuss some of the challenges, such as lack of time or fear of sharing ongoing research.

It is very interesting to see that these aspects are increasingly being addressed by publishers as well. If publishing “the other” results also became more rewarding, this would clearly benefit all scientists and their work.

A publication bias workshop

Two weeks ago, the National Centre for the Replacement, Refinement and Reduction of Animals in Research (NC3Rs) hosted a workshop about publication bias. The workshop brought together “funders, journals, and scientists from academia and industry to discuss the impact of publication bias in animal research”.

Following this event, three very good blog articles were written: one from cogsci.nl, one from F1000 Research and one from BioMed Central. A Twitter discussion was also ongoing, for which I made a Storify (please feel free to drop me a note if I missed something).
Judging from a distance, this workshop seems to have had a good impact on raising awareness of publication bias and its consequences. Some solutions were also discussed, such as prospective registration of clinical trials with journals, and new ways of publishing. To me, prospective registration might be an interesting solution for other disciplines as well. It is essentially what happens every day when a researcher applies for funding. In that case, however, the scientist is obliged to provide all results to the funder, not to a journal. I agree that this idea might be complicated to manage, but I really think it is worth the effort.

Regarding new ways of publishing, PLOS One seems to be a step ahead by launching a new collection focusing on negative results. As promising as this might sound at first, the collection consists of papers from 2008 to 2014, republished as a new collection two weeks ago, on 25 February 2015. This still reminds me a bit of all the negative-results journals that appear only sporadically. Nonetheless, I think that awareness of the issue is rising.

Scientific worth and culture

In their editorial in Disease Models &amp; Mechanisms, Natalie Matosin and coworkers from the University of Wollongong and the Schizophrenia Research Institute in Sydney, Australia, give an excellent overview of the current view on negative results and the related issue of publication bias.

After presenting some famous examples (e.g. the Wakefield publication about vaccination and autism, which was retracted only after twelve years), they also mention the time-consuming attempts of the Australian Professor David Vaux to retract his own “News and Views” article in Nature.

Drawing on their own experiences, the authors describe the impact of negative findings in their research and the criticism they encountered when they reported these findings at conferences.

A negative result is in response to a positive question. If you rephrased to a negative question, does that mean you have a positive finding?

In my opinion, and also judging from the described reactions of the scientific community, the authors’ response to those negative findings is rather unusual: I hypothesize that when scientists encounter a null result, they are very likely to switch topics, keeping the “unpublishable” result in fact unpublished (the so-called “file-drawer effect”).

To raise sensitivity to negative outcomes, the authors refer to the various journals that are dedicated to publishing negative research outcomes, even while acknowledging the low attention these journals receive.

At the core, it is our duty as scientists to both: (1) publish all data, no matter what the outcome, because a negative finding is still an important finding; and (2) have a hypothesis to explain the finding.

Again, this publication describes a deep underlying problem in scientific culture that needs rethinking.

From crisis to crisis

In September this year, David Crotty wrote a blog post about two colliding crises, each related to negative results. The first is described as a “reproducibility crisis”, based on the assumption that a majority of published experiments are in fact unreproducible. The second is referred to as a “negative results crisis”: a large amount of correct results remains unpublished because of their null-result character. Both crises cause a considerable waste of time for scientists – either in trying to build on published experiments that cannot be reproduced, or in repeating unsuccessful experiments that were never published.

One attempt to overcome the problem of negative results was suggested by Annie Franco, Neil Malhotra and Gabor Simonovits, namely “creating high-status publication outlets for these studies”. But I have to agree that this is easier said than done.

How willing are researchers to publicly display their failures? How much career credit should be granted for doing experiments that didn’t work?

Even though these problems are clearly not new (I dedicated this blog to negative results for a reason), I was surprised to see them actually described as “crises”. I do think that science is losing the public’s trust, a problem caused by the omnipresent publish-or-perish paradigm.

Where do the unsolved questions go?

When I mentioned in a small discussion that I was attempting to write about negative results in this blog, a friend pointed out that there already is a journal covering “null” results in science. So, I would like to address the “Journal of Unsolved Questions” (JUnQ).

Since I was unaware of this journal, I was accordingly surprised to find it very much alive, with (as far as I can judge) two issues per year, published by PhD students from the University of Mainz, Germany. The journal features articles, guest contributions, and comments from contributors around the world, covering various scientific topics. The articles are peer-reviewed and judged for acceptance or rejection by independent referees. It also seems very consistent with the journal’s name that most of the articles’ titles are indeed questions, which is refreshing, since scientists are usually expected to offer answers instead. Personally, I took a great interest in the article by Natascha Gaster, Jorge S. Burns and Michael Gaster about the ICMJE recommendations and the problem of co-author overflow and honorary authorships in articles.

Nonetheless, it occurs to me that in JUnQ – although dedicated to “[…] making ‘negative’ and ‘null’-results from all fields of science available to the scientific community” – the authors rephrase the “null” outcomes of their work as open questions. That’s fair enough, since negative results do leave the original questions unsolved, or even give rise to new ones.

What I am still wondering is whether there is a similarly serious platform for experimental studies with a “true negative” outcome. JUnQ clearly contributes to the manifold of unsolved questions in science, but I think a platform for negative experimental results would help scientists avoid running into dead ends that have already been discovered but never published.

How to publish null?

In one of my past entries I compiled an exemplary and incomplete list of journals dedicated to negative research outcomes. The observation that most of those journals suffer from a very low number of article submissions is perhaps not surprising, but it remains puzzling. In my opinion, it is still undisputed that unsuccessful experiments, unexpected observations and contradictory findings are crucial for progress in science.

However, there are plenty of reasons why scientists would not openly reveal their failures, and I would do the same. So the question is: what would a platform look like that helps scientists communicate about obstacles, questions and uncertainties? And why would scientists want to contribute?
An interesting example is the open access journal PLOS One, which explicitly publishes every article as long as it is scientifically sound. Because of its open access nature, the authors pay upon publication, instead of the reader. There are many other similar open access journals, but to my knowledge, PLOS One is the most successful one. I think PLOS One is indeed a shelter for findings that contradict commonly acknowledged theories, or for research areas that are not considered “sexy” by the scientific community. To my knowledge, it took the journal many difficult years to get established, and even now it is not known to that many scientists.
However, I think that for difficult projects like this, it was very helpful to cover a wide spectrum of the sciences. Also, the combination of quick publishing and the connection with the audience is an asset that distinguishes a project like PLOS One from typical journals. Perhaps it is the certainty for the authors of being published in a serious way (PLOS One is established), while the journal is widespread enough to ensure that there are enough submissions.
Another interesting example is the review function of the scientific social network ResearchGate. On this platform, Dr. Kenneth Lee from the University of Hong Kong and Dr. Mohendra Rao from the NIH published their efforts to reproduce STAP, both coming to the conclusion that the original work is not reproducible. Dr. Lee also tried to submit this review to Nature, where the original STAP work had been published. However, the review was rejected for not-so-clear reasons. Nature nonetheless retracted the original STAP publications later on.
A lack of transparency and reproducibility of experiments is an ongoing threat that undermines the reliability of science as a whole. I think that to report seriously on negative experimental outcomes, their reproducibility must be ensured. But in fact, it must also be ensured for the better-selling, positive outcomes. Keeping an eye on transparent and reproducible experimental procedures is thus simply a sign of good scientific quality in general.

Considering this, a platform focusing on negative results should (1) be broad in scope, and (2) leave no doubt about the scientific craft. Furthermore, I tend more and more to believe that a “classic” medium like a journal might not be the ideal platform for such results. A good publication type might be communications, supporting quick and responsive feedback. Another important criterion is that publishing those results must be rewarded. In the simplest case, it should help improve the author’s h-index. Here, ResearchGate’s approach of introducing a new score might be useful, since its “RG score” is not coupled solely to the sheer number of publications and citations.