A publication bias workshop

Two weeks ago, the National Centre for the Replacement, Refinement & Reduction of Animals in Research (NC3Rs) hosted a workshop on publication bias. The workshop brought together “funders, journals, and scientists from academia and industry to discuss the impact of publication bias in animal research”.

Around this event, three very good blog articles were written: one from cogsci.nl, one from F1000 Research, and one from BioMed Central. There was also an ongoing Twitter discussion, which I collected in a Storify (please feel free to drop me a note if I missed something).
Judging from a distance, this workshop seems to have had a good impact on raising awareness of publication bias and its consequences. Some solutions were also discussed, such as prospective registration of clinical trials with journals, and new ways of publishing. To me, prospective registration might be an interesting solution for other disciplines as well. It is essentially what happens every day when a researcher applies for funding. In that case, however, the scientist is obliged to report all results to the funder, but not to a journal. I agree that this idea might be complicated to manage, but I really think it is worth the effort.

Considering new ways of publishing, PLOS ONE seems to be a step ahead by launching a new collection focusing on negative results. As promising as this might sound at first, the collection consists of papers from 2008 to 2014, published as a new collection two weeks ago, on 25 February 2015. This still reminds me a bit of the various negative-results journals that only appear sporadically. Nonetheless, I think that awareness of the issue is rising.

Discussion? Unwanted.

How the scientific community deals with research results seems to have become a recurring topic in the German newspaper Die Zeit. Its online edition reports a case about a psychological study that might be described with words like “concealment”, “withholding” or “suppression”. Here is what happened, as described in the article:

In March 2013, Frieder Lang from the University of Erlangen-Nürnberg reported his research (please mind the paywall), which can be summarized with the finding that pessimistic people live longer than optimistic people. This is explained by the observation that pessimistic people apparently take more care of their health. Some aspects of this study, however, are questioned by the statisticians Björn and Sören Christensen from the University of Applied Sciences and the University of Kiel. Their main concern addresses the assignment of test subjects to the optimistic and pessimistic groups. As they argue, this characteristic of an individual might change over the course of the five-year study. In my opinion, this concern does not contradict the research but in fact makes a significant contribution to it.

The Zeit article reports that in May 2013 the Christensen brothers sent a note with their analysis to the journal Psychology and Aging, in which the original research was published. Their note was rejected because it did not include a theoretical background on the relationship between the test subjects’ perceived and actual health. One might argue that providing this background is the job of the original paper by Lang and coworkers. After this fruitless attempt to publish their concerns, the note was submitted as a paper to the Zeitschrift für Gesundheitspsychologie (Journal of Health Psychology). More than half a year later, they again received a rejection, with an astonishing explanation: not a single referee willing to review the paper could be found. After waiting another half year, the authors withdrew the manuscript.

In the public eye, science is often seen as a process of learning and exchange. Theories and conclusions can be discussed, complemented, or even overthrown. Nevertheless, the peer-reviewed publishing system appears to be more static than a fluid research process might require. Tools like PubPeer and the review option in ResearchGate do exist and are of growing importance, but they are still viewed with suspicion by the established journals.

As Sören Christensen explained to me, their paper is currently under review. So the case is not closed yet.

Curing AIDS: The first 25 years

As Retraction Watch reported last week, the Dutch scientist Henk Buck has delivered new insights into his publication in Science that was retracted in 1990. In the original paper, the authors claimed nothing less than the successful inhibition of HIV infectivity, allowing for a cure for AIDS.

Four separate investigations turned up faked data, manipulated images, and highly selective reporting designed to obscure the fact that HIV-fighting molecules never existed.

With this in mind, I think that this new publication 25 years later might leave many people speechless. When I read the recent publication, the originally retracted paper and the retraction notice, I wanted to give this new interpretation a fair chance. At least it is a discussion of the data, so what could be wrong with that? Apparently quite a lot: first, of course, the fact that Buck was found guilty of scientific fraud. Added to that is the questionable reputation of the publishing journal, owned by Scientific Research Publishing (SCIRP), registered in Delaware and located in China.

Now, the two big questions are: Why? And why now?
I am keeping my fingers crossed that Retraction Watch receives an answer from the author, as this might be the beginning (or a very late continuation) of an interesting story.

Scientific worth and culture

In their editorial in Disease Models & Mechanisms, Natalie Matosin and coworkers from the University of Wollongong and the Schizophrenia Research Institute in Sydney, Australia, give an excellent overview of the current view on negative results and the related issue of publication bias.

After presenting some famous examples (e.g. the Wakefield publication on vaccination and autism, which was retracted only after twelve years), they also mention the time-consuming attempts of the Australian professor David Vaux to retract his own “News and Views” article in Nature.

From their own experience, the authors describe the impact of negative findings in their research and the criticism they encountered when they reported these findings at conferences.

A negative result is in response to a positive question. If you rephrased to a negative question, does that mean you have a positive finding?

In my opinion, and also judging from the described reactions of the scientific community, the authors’ response to those negative findings is rather unusual: I hypothesize that when scientists encounter a null result, they are very likely to switch topics, keeping the “unpublishable” result in fact unpublished (the so-called “file-drawer effect”).

To raise awareness of negative outcomes, the authors refer to the various journals that are dedicated to publishing negative research outcomes, even though they acknowledge the low attention these journals attract.

At the core, it is our duty as scientists to both: (1) publish all data, no matter what the outcome, because a negative finding is still an important finding; and (2) have a hypothesis to explain the finding.

Again, this publication describes a deep underlying problem in the scientific culture that needs rethinking.

From crisis to crisis

In September this year, David Crotty wrote a blog post about two colliding crises, both in the context of negative results. The first is described as a “reproducibility crisis”, based on the assumption that a majority of published experiments are in fact not reproducible. The second is referred to as a “negative results crisis”, describing that a large amount of correct results remains unpublished because of their null-result character. Both crises are described as causing a considerable waste of time for scientists – either by performing published experiments that cannot actually succeed, or by repeating unsuccessful experiments that were never published.

One attempt to overcome the problem of negative results was suggested by Annie Franco, Neil Malhotra and Gabor Simonovits, namely “creating high-status publication outlets for these studies”. But I have to agree that this is easier said than done.

How willing are researchers to publicly display their failures? How much career credit should be granted for doing experiments that didn’t work?

Even though these problems are clearly not new (I dedicated this blog to negative results for a reason), I was surprised to see them actually described as “crises”. I do think that science is losing the public’s trust, driven by the omnipresent publish-or-perish paradigm.

Show me your data sets!

Are authors of a scientific publication really more unwilling to share their raw data when their reported evidence is not very strong? This question was recently addressed in the field of psychology and, unsurprisingly, published in the open-access journal PLoS ONE. Jelte Wicherts, Marjan Bakker and Dylan Molenaar from the Psychology Department of the University of Amsterdam indeed came to that conclusion. Their study included 1149 results from 49 papers. Interestingly, the co-authors of 28 of the 49 papers considered did not share their research data, even though they had agreed to do so beforehand.

Distribution of reporting errors per paper for papers from which data were shared and from which no data were shared. From DOI 10.1371/journal.pone.0026828

However, one might argue that the authors of this interesting “meta”-study are walking on difficult terrain, as they are trying to draw conclusions about the accuracy of other scientists’ correlations. But I think their paper makes it clear enough that they were very much aware of that issue.

The invisible reviewer

This Wednesday, a scientific tweetstorm started about a publication from Zachary W. Culumber et al., who published a paper without removing a draft comment that was never supposed to become public. The full story is nicely summarized by Grrlscientist.

While many of the commenters addressed the apparent lack of proper peer review before the manuscript was published, I had another thought. What if…?

Typically, a manuscript is reviewed thoroughly by the authors themselves before it is submitted. Then, in the subsequent peer-review process, at least two anonymous experts review the text and offer comments, including a recommendation on whether the manuscript should be accepted or rejected for publication. It is unlikely, but of course not impossible, that such a blatant mistake was overlooked. And peer reviewers are not paid and do this work on top of their actual duties (like teaching classes, supervising research, or applying for funding). But what if… this was done on purpose?

My theory is that the reviewers, or at least one of them, might be competitors of the authors. It might have happened that this mistake was noticed but deliberately not commented on. In that case, the anonymous peer-review process would offer a perfidious way to harm competitors – simply by letting them walk into the trap.

However, in either case the peer review clearly failed. That is only human, and apart from a gleefully smiling community, the scientific results themselves were never called into question.

Where do the unsolved questions go?

When I mentioned my plan to write about negative results on this blog in a small discussion, a friend pointed out that there already is a journal covering “null” results in science. So I would like to address the “Journal of Unsolved Questions” (JUnQ).

Since I was unaware of this journal, I was accordingly surprised to find that it is very much alive with (as far as I can judge) two issues per year, published by PhD students from the University of Mainz, Germany. The journal features articles, guest contributions, and comments from contributors around the world, covering various scientific topics. The articles are peer-reviewed and accepted or rejected by independent referees. It also seems very consistent with the journal’s name that most of the articles’ titles are indeed questions, which is refreshing, since scientists are usually supposed to offer answers instead. Personally, I took a great interest in the article by Natascha Gaster, Jorge S. Burns and Michael Gaster about the ICMJE recommendations and the problem of co-author overflow and honorary authorships in articles.

Nonetheless, it occurs to me that in JUnQ – although dedicated to “[…] making ‘negative’ and ‘null’-results from all fields of science available to the scientific community” – the authors rephrase the “null” outcomes of their work as open questions. That is fair enough, since negative results do keep the original questions unsolved, or even give rise to new ones.

What I am still wondering is whether there is a similarly serious platform for experimental studies with a “true negative” outcome. JUnQ clearly contributes to a manifold of unsolved questions in science, but I think a platform for negative experimental results would help scientists avoid running into dead ends that have already been discovered but never published.

Is the Nobel Prize a good thing?

It’s Nobel Prize week. And as everyone knows, the Nobel Prize is considered the highest award a scientist can receive in their career. This award is so archetypal that the secret striving to eventually win the Nobel Prize is ascribed to everyone doing science, and it is often used in movies and TV shows as a typical cliché.

In terms of “pure” science, scientists’ striving for reputation and acknowledgement might seem somewhat disturbing. The first motivation of a scientist should not be to achieve an award or to gain reputation – it should be to solve a distinct problem and to learn something new about nature. Of course, this image of a selfless scientist who works only in the service of finding the pure truth is as wrong as the assumption that something like “the pure truth” exists at all. Scientists do research because it is their job. They have studied, they have contracts to fulfil, and they want to have a good and comfortable life, as everyone else does. And, of course, scientists also want to be acknowledged for their work, no less than everyone else.

I think the Nobel Prize is a perfect example of how to handle expectations. Winning the Nobel Prize is virtually impossible, and purposefully working towards it cannot be an option. The only thing one can really do is the best possible work, and hope that people later recognize that this work truly contributed to the progress of our society. And this is what the Nobel Prize is for.

So there are many good reasons to acknowledge the successes of the laureates and to do the best possible scientific work.

The importance of feeling stupid

I recently read a text about the concept behind a scientific publication, stating that it is somewhat misleading when it comes to describing the scientific process. True enough, most papers are built upon a theory that is supposed to be tested, followed by a suitable experimental setup designed to prove that theory. Nevertheless, this is not how science usually works. The most important breakthroughs come from sidetracks, unexpected observations, or even from failed experiments.

I have to agree that the process of deduction cannot produce any information that was not there before. Accessing and combining given information is clearly an important factor in science, but I think it is difficult to arrive at previously unknown concepts, or to question established ones, by deduction alone. This is, however, what the structure of most scientific articles pretends: the team of scientists has a flash of insight about a given theory and deduces a meaningful experiment to prove or disprove exactly defined aspects of that theory. The data are then collected and listed without any subjective interpretation at this point. Finally, when all this is done, the scientists look at their new data for the first time in the context of the theory to be tested and come to new, ground-breaking conclusions about nature. I would be interested in how many of those publications originate from an experiment that was supposed to give a completely different result and left the researchers puzzled for a considerable time.


In his essay “The importance of stupidity in scientific research”, Martin Schwartz raises the conflict between the perception of scientists as smart people and the fact that many scientists themselves feel stupid in their work. Scientists are indeed addressing problems that few people have addressed before – which is the reason why they do it. So clearly, there is a lack of certainty, and every step has to be taken carefully. It happens all too easily that something gets overlooked, misinterpreted, or overrated. In science, you don’t simply know.

If you realize that you don’t know much about certain things, and these things happen to be your scientific project, you are bound to feel quite stupid. And again, this is why we do science: because we don’t know things.