New negative-journal launched by a major publisher

This year, the scientific publisher Elsevier has launched New Negatives in Plant Science, an “open access, peer reviewed, online journal that will publish hypothesis-driven, scientifically sound studies that describe unexpected, controversial, dissenting, and/or null (negative) results in basic plant sciences.” The first issue was published in August this year and is currently in progress.

Journals dedicated to null-outcome studies usually suffer from low attractiveness. Therefore, the fact that null results are being acknowledged by a major publisher comes as a very positive surprise. I also think that open access is exactly the right approach for this new journal, since a journal of this type supposedly publishes curious results that might help many other scientists avoid performing unnecessary experiments.

Changes and challenges in scientific publishing: 12 June 2015, Vienna

On 12 June, the University of Vienna will host a talk by Eva Amsen, the F1000 Community Strategy Manager. She will give a presentation about “Open peer review, open data, negative results: Scientific publishing is changing.”

This talk will look in more detail at the benefits of these aspects of open science, but also discuss some of the challenges, such as lack of time or fear of sharing ongoing research.

It is very interesting to see that these aspects are increasingly being addressed by publishers. If it became more rewarding to publish “the other” results, too, this would clearly benefit all scientists and their work.

A lesson in academic gender bias

Last week, the Times Higher Education reported on a paper rejection due to the fact that the two authors are female. Thankfully, this ignited an outrage against the journal involved, PLOS ONE, which in consequence ousted the anonymous reviewer.

The rejected study focuses on gender bias in academia and concludes that there is indeed such a bias. In this context, the rejection proves the point in a stunning way. But besides joining the outrage, I would like to add my opinion, since gender bias (i.e., patriarchy) is a highly sensitive and complex topic.

As a matter of fact, the majority of scientists are male, and it was not too long ago that women were not accepted as scientists at all. Also, I hardly believe that the publication would have been rejected with the same argument if all authors had been male. The reviewer’s phrasing does not seem to call for a gender-balanced author team. Instead, it seems to aim at the contribution of supposedly missing male opinions, which is a big difference. This can only mean that the reviewer assumes that male researchers are more objective than female ones, and that a female interpretation is more prone to “ideologically biased assumptions” than a male one.

The other problem is that from a man’s view, patronizing is not an issue, since we are not the ones being patronized. But this, too, is a misinterpretation, since we (men) are also affected by a gender bias that expects males to give their work first priority. I think cases of male scientists taking one or two years off to care for their family are rarely seen. And why? Because it would kill our career, which is exactly what is expected of women.

Last but not least, this event also demonstrates the power of social media in science. This discussion started with a tweet, exposing the biased devaluation and insult that many authors have to face from anonymous peer review. However, double-anonymous peer review might not always be an answer – many research areas are so small that it is easy to guess who might be the author, or the reviewer, respectively.

Judging from the impact of that story, it might be worth starting a project similar to the highly acknowledged Retraction Watch. Maybe something like a “Rejection Watch”, where biased and unfair reviewer comments could be discussed openly.


A publication bias workshop

Two weeks ago, the National Centre for the Replacement, Refinement & Reduction of Animals in Research (NC3Rs) hosted a workshop about publication bias. The workshop aimed to bring together “funders, journals, and scientists from academia and industry to discuss the impact of publication bias in animal research”.

Following this event, three very good blog articles were written, from F1000Research and from BioMed Central. A Twitter discussion was also ongoing, for which I made a Storify (please feel free to drop me a note if I missed something).

Judging from a distance, this workshop seems to have had a good impact on raising awareness of publication bias and its consequences. Some solutions were also discussed, such as prospective registration of clinical trials with journals, and new ways of publishing. To me, prospective registration looks like an interesting solution for other disciplines, too. Essentially, this is what happens every day when a researcher applies for funding; in that case, however, the scientist is responsible for providing all results to the funder, not to a journal. I agree that this idea might be complicated to manage, but I really think it is worth the effort.

Considering new ways of publishing, PLOS ONE seems to be a step ahead, having launched a new collection focusing on negative results. As promising as this might sound at first, the collection includes papers from 2008 until 2014, published as a new collection two weeks ago, on 25 February 2015. This still reminds me a bit of all the negative-results journals that only appear sporadically. Nonetheless, I think that awareness of the issue is rising.

Discussion? Unwanted.

The handling of research results within the scientific community seems to be becoming an ongoing topic in the German newspaper Die Zeit. In its online version, a case about a psychological study is reported that might be described with words like “concealment”, “withholding” or “suppression”. Here is what happened, as described in the article:

Frieder Lang from the University of Erlangen-Nürnberg reported his research in March 2013 (please mind the paywall), which might be summarized with the insight that pessimistic people live longer than optimistic people. The suggested explanation is that pessimistic people are apparently more concerned about their health. This study, however, has been questioned in some aspects by the statisticians Björn and Sören Christensen from the University of Applied Sciences and the University of Kiel. Their main concern addresses the classification of test subjects as optimistic or pessimistic individuals. As they argue, this characteristic of an individual might change over the course of the five-year study. In my opinion, this concern does not contradict the research, but in fact makes a significant contribution to it.

The Zeit article reports that the Christensen brothers sent a note with their analysis in May 2013 to the journal Psychology and Aging, in which the original research was published. Their note was refused for publication because it did not include a theoretical background on the relationship between the test subjects’ perceived and actual health. One might argue here that delivering this background is covered by the original paper by Lang and coworkers. After this fruitless attempt to publish their concerns, the note was submitted as a paper to the Zeitschrift für Gesundheitspsychologie (Journal of Health Psychology). More than half a year later, they again received a refusal, with an astonishing explanation: it was not possible to find a single referee willing to review the paper. After waiting for another half year, the authors withdrew the manuscript.

In the public eye, science is often seen as a process of learning and exchange. Theories and conclusions can be discussed, complemented, or even overthrown. Nevertheless, the peer-review publishing system appears to be more static than a fluid research process might require. Tools like PubPeer and the review option in ResearchGate do exist and are of growing importance, but they are still viewed with suspicion by the established journals.

As Sören Christensen explained to me, their paper is currently under revision. So the case is not closed yet.

Scientific worth and culture

In their editorial in Disease Models & Mechanisms, Natalie Matosin and coworkers from the University of Wollongong and the Schizophrenia Research Institute in Sydney, Australia, give an excellent overview of the current view on negative results and the related issue of publication bias.

After presenting some famous examples (e.g. the Wakefield publication about vaccination and autism, which was retracted only after twelve years), they also mention the time-consuming attempts of the Australian Professor David Vaux to retract his own “News and Views” article in Nature.

Drawing on their own experience, the authors describe the impact of negative findings in their research and the criticism they encountered when reporting these findings at conferences.

A negative result is in response to a positive question. If you rephrased to a negative question, does that mean you have a positive finding?

In my opinion, and also judging from the described reactions of the scientific community, the authors’ response to those negative findings is rather unusual: I hypothesize that if scientists encounter a null result, they are very likely to switch their topic, keeping the “unpublishable” result in fact unpublished (the so-called “file-drawer effect”).

To raise sensitivity for negative outcomes, the authors refer to the various journals that are dedicated to publishing negative research outcomes, even while acknowledging the low attractiveness these journals suffer from.

At the core, it is our duty as scientists to both: (1) publish all data, no matter what the outcome, because a negative finding is still an important finding; and (2) have a hypothesis to explain the finding.

Again, this publication describes a deep underlying problem in the scientific culture that needs rethinking.


Show me your data sets!

Are authors of a scientific publication really more unwilling to share their raw data when their reported evidence is not too strong? This question was recently addressed in the field of psychology, unsurprisingly published in the open-access journal PLOS ONE. Jelte Wicherts, Marjan Bakker and Dylan Molenaar from the Psychology Department of the University of Amsterdam indeed came to that conclusion. Their study included 1149 results from 49 papers. Interestingly, for 28 of the 49 papers considered, the respective co-authors did not share their research data, even though they had agreed to do so beforehand.

Distribution of reporting errors per paper for papers from which data were shared and from which no data were shared. From DOI 10.1371/journal.pone.0026828

However, one might argue that the authors of this interesting “meta”-study walk on difficult terrain, as they are trying to draw conclusions about the accuracy of other scientists’ statistical analyses. But I think their paper makes it sufficiently clear that they were very much aware of this issue.