We don’t know (yet)

A few weeks ago, the Royal Society of Chemistry published the results of a survey about the public’s view of chemistry and chemists. It is the first study of its kind on chemistry, and the results were as interesting as they were surprising: most people (84%) consider chemistry to make a valuable contribution to society. On the other hand, only 12% of chemists expected the public to say so. In general, the public sees chemistry as something positive and useful, and considers chemists reliable and trustworthy. Being a chemist, I too would have expected a far more negative view. Another key result is that most people simply don’t understand chemistry and feel emotionally neutral towards it. I think these results might be similar for science in general.

There is a “disbelief” (or mistrust) in science – which, again, puts science on a level with pseudo-sciences or religious beliefs; and I think this is due to the inaccessibility of how research works. One argument that I frequently hear, say, from opponents of vaccination is that “the scientific discourse is not yet finished”. This is true but beside the point, because no scientific discourse is ever finished. Science, by definition, is difficult to understand and often contradicts itself, overthrowing and questioning everything. Studies are published, and later refuted. Phenomena are observed and explained, until something new comes up. And honestly, science is far from the truth – but it is still as close as we can get. Of course, this creates considerable uncertainty and discomfort, for the public as well as for scientists.

A problem that arises from this is that pseudo-scientific and religious explanations start to mix into scientific views, as happens in biology classes where creationism is taught as an alternative to the theory of evolution. But here lies a big misunderstanding: creationism is no theory at all. To qualify as a theory, an idea must be based on observations, must be testable and falsifiable, and should therefore allow predictions. It is quite simple: the hypothesis that all species were created by an omnipotent God who tries to test our faith might be based on the observation of our sheer existence. I agree that the fact that we exist is extremely unlikely and totally astonishing. But the existence of God can be neither proved nor disproved (anyone who wants to object here, please send me your comments and consider contributing to the corresponding Wikipedia article). This is perhaps why it is called “faith”. The fact that there is an ancient collection of edited and translated reports about talking, burning shrubberies is no proof. Also, creationism allows no predictions, except perhaps for the Book of Revelation.

The Platypus. source: wikimedia commons

My point is that the scientific discourse is never finished, while the religious one usually is. There are discussions about how to interpret the holy texts, but the texts themselves remain rather static. We might learn some day that the first spark of life came from an asteroid, or that our planet is indeed just a gigantic supercomputer operated by extraterrestrial mice in order to find the question to the answer “42”. We might also eventually find that one of the religions was correct after all. We do not know yet. Until then, we simply assume that our current theories work fine within their limits – until we get a better idea.

A publication bias workshop

Two weeks ago, the National Centre for the Replacement, Refinement & Reduction of Animals in Research (NC3Rs) hosted a workshop about publication bias. The workshop made an effort to bring together “funders, journals, and scientists from academia and industry to discuss the impact of publication bias in animal research”.

Following this event, three very good blog articles were written, including one from F1000 Research and one from BioMed Central. There was also an ongoing Twitter discussion, which I collected in a Storify (please feel free to drop me a note if I missed something).
Judging from a distance, this workshop seems to have had a good impact on raising awareness of publication bias and its consequences. Some solutions were also discussed, such as prospective registration of clinical trials with journals, and new ways of publishing. To me, prospective registration looks like an interesting solution for other disciplines as well. In essence, this is what happens every day when a researcher applies for funding – except that in that case, the scientist is responsible for providing all results to the funder, not to a journal. I agree that this idea might be complicated to manage, but I really think it is worth the effort.

As for new ways of publishing, PLOS One seems to be a step ahead, having launched a new collection focusing on negative results. As promising as this might sound at first, the collection consists of papers from 2008 to 2014, republished as a collection two weeks ago, on 25 February 2015. This still reminds me a bit of all the negative-results journals that publish only sporadically. Nonetheless, I think awareness of the issue is rising.

From crisis to crisis

In September this year, David Crotty wrote a blog post about two colliding crises – both in the context of negative results. The first is described as a “reproducibility crisis”, based on the assumption that a majority of published experiments are in fact unreproducible. The second is referred to as a “negative results crisis”: a large amount of correct results remains unpublished due to their null-result character. Both crises cause a considerable waste of time for scientists – either by building on published experiments that cannot succeed, or by repeating unsuccessful experiments that were never published.

One attempt to overcome the problem of negative results was suggested by Annie Franco, Neil Malhotra and Gabor Simonovits, namely “creating high-status publication outlets for these studies”. But I have to admit that this is easier said than done.

How willing are researchers to publicly display their failures? How much career credit should be granted for doing experiments that didn’t work?

Even though these problems are clearly not new (I dedicated this blog to negative results for a reason), I was surprised to see them actually described as “crises”. I do think there is a problem of science losing the public’s trust, caused by the omnipresent publish-or-perish paradigm.

Is the Nobel Prize a good thing?

It’s Nobel Prize week. And as everyone knows, the Nobel Prize is considered the highest award a scientist can receive in their career. The award is so archetypal that a secret striving for the Nobel Prize is ascribed to everyone doing science, and it is often used as a cliché in movies and TV shows.

In terms of “pure” science, scientists’ striving for reputation and acknowledgement might seem somewhat disturbing. The first motivation of a scientist should not be to win an award or to gain reputation – it should be to solve a distinct problem, and to learn something new about nature. Of course, this image of a selfless scientist working only in the service of the pure truth is as wrong as the assumption that something like “the pure truth” exists at all. Scientists do research because it is their job. They have studied, they have contracts to fulfill, and they want a good and comfortable life, like everyone else. And, of course, scientists also want to be acknowledged for their work, no less than everyone else.

I think the Nobel Prize is a perfect example of managing expectations. Winning the Nobel Prize is virtually impossible, and deliberately working towards it cannot be an option. The only thing one can really do is the best possible work, and hope that people later recognize that this work truly contributed to the progress of our society. And this is what the Nobel Prize is for.

So there are many good reasons to acknowledge the successes of the laureates – and to keep doing the best possible scientific work.

The importance of feeling stupid

I recently read a text about the concept behind a scientific publication, arguing that it is somewhat misleading as a description of the scientific process. True enough, most papers are built around a theory that is to be tested, followed by an experimental setup designed to prove that theory. However, this is not how science usually works. The most important breakthroughs come from sidetracks, unexpected observations, or even failed experiments.

I have to agree that the process of deduction cannot produce any information that was not there before. Accessing and combining given information is clearly an important factor in science, but I think it is difficult to arrive at previously unknown concepts, or to question established ones, by deduction alone. Yet this is what the structure of most scientific articles pretends: the team of scientists has an epiphany about a given theory and deduces a meaningful experiment to prove or disprove exactly defined aspects of it. The data is then collected and listed without any subjective interpretation at this stage. Finally, when all this is done, the scientists look at their new data in the context of the theory for the first time, and come to new, ground-breaking conclusions about nature. I would be interested to know how many of those publications actually originate from an experiment that was supposed to give a completely different result and left the researchers puzzled for a considerable time.


In his essay “The importance of stupidity in scientific research”, Martin Schwartz raises the conflict between the perception of scientists as smart people and the fact that many scientists actually feel stupid in their work. Scientists address problems that few people have addressed before – which is exactly why they do it. So clearly, there is a lack of certainty, and every step has to be taken carefully. It happens so easily that something gets overlooked, misinterpreted, or overrated. In science, you don’t simply know.

If you realize that you don’t know much about certain things, and these things happen to be your scientific project, you are bound to feel quite stupid. And again, this is why we do science: because we don’t know things.

How to publish null?

In one of my past entries I made an exemplary and incomplete list of journals dedicated to negative outcomes of research. The observation that most of those journals suffer from very low submission numbers is maybe not surprising, but it must look confusing: in my opinion, it is still undisputed that unsuccessful experiments, unexpected observations and contradictory findings are crucial for progress in science.

However, there are plenty of reasons why scientists would not unveil their failures openly, and I would do the same. So the question is: what would a platform look like that helps scientists communicate about obstacles, questions and uncertainties? And why would scientists want to contribute?
An interesting example is the open access journal PLOS One, which explicitly publishes every article as long as it is scientifically sound. Due to its open access nature, the authors pay upon publication instead of the reader. There are many similar open access journals, but to my knowledge, PLOS One is the most successful one. I think PLOS One is indeed a shelter for findings that contradict commonly acknowledged theories, and for research areas not considered “sexy” by the scientific community. To my knowledge, it took the journal many difficult years to get established, and even now it is not known to that many scientists.
However, I think that for difficult projects like this, covering a wide spectrum of the sciences has been very helpful. Also, the combination of quick publishing and a connection with the audience is an asset that distinguishes a project like PLOS One from typical journals. Perhaps it is the certainty for authors of being published in a serious venue (PLOS One is established), combined with a scope broad enough to ensure a sufficient number of submissions.
Another interesting example is the review function of the scientific social network ResearchGate. On this platform, Dr. Kenneth Lee from the University of Hong Kong and Dr. Mohendra Rao from the NIH published their efforts to reproduce STAP, both coming to the conclusion that the original work is not reproducible. Dr. Lee also tried to submit this review to Nature, where the original STAP work was published. However, the review was rejected for not-so-clear reasons. Nonetheless, Nature later retracted the original STAP publications.
A lack of transparency and reproducibility of experiments is an ongoing threat that undermines the reliability of science as a whole. I think that to seriously report negative experimental outcomes, their reproducibility must be ensured. But in fact, it must also be ensured for the well-selling positive outcomes. So keeping an eye on transparent and reproducible experimental procedures is simply a sign of good scientific practice in general.

Considering this, a platform focusing on negative results should (1) be broad in scope, and (2) leave no doubt about the scientific craft. Further, I tend more and more to believe that a “classic” medium like a journal might not be the ideal platform for such results. A good publication type might be communications, supporting quick and responsive feedback. Another important criterion is that publishing those results must be rewarded; in the simplest case, it should help improve the author’s h-index. Here, ResearchGate’s approach of inventing a new score might be useful, since its “RG score” is not solely coupled to the sheer number of publications and citations.
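For context, the h-index mentioned above follows a simple rule: an author has index h if h of their papers have at least h citations each. A minimal sketch of the computation (the function name and the sample citation counts are my own illustration, not part of any real platform’s code):

```python
def h_index(citations):
    """Return the largest h such that at least h papers have >= h citations each."""
    counts = sorted(citations, reverse=True)  # most-cited papers first
    h = 0
    for rank, c in enumerate(counts, start=1):
        if c >= rank:   # the paper at position `rank` still has enough citations
            h = rank
        else:
            break       # sorted descending, so no later paper can qualify
    return h

# Example: five papers with 10, 8, 5, 4 and 3 citations
print(h_index([10, 8, 5, 4, 3]))  # → 4 (four papers have at least 4 citations each)
```

The point of the sketch is that the metric counts only publications and their citations – a negative-results communication that is read but rarely cited adds nothing, which is exactly why an alternative score could matter here.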

Link of the week: “Science is not Neutral”

In the Guardian’s “Political Science” blog, Alice Bell describes a surprisingly well-matching equivalent of Occupy in the British science community, in 1970.

They started by just asking questions. But the panel chairman and speakers stifled any attempts of debate, dismissing political discussion as irrelevant. The BA seemed to be built on an inflexible culture and internal structure, too reliant on industrial sponsorship to positively challenge debate on the social implications of science. Frustrated, they occupied a mid-conference teach-in. It was designed to be the anti-thesis of how they saw a BA session, with no set-piece speeches, and no restrictions on what could or could not be asked.

The full text is available here.