Fixing science

Science is broken. What can we do to fix it?

I find that in science, publication bias is often talked about but rarely acted upon. We have had to suffer through many proposed fixes to this problem that don't really work. I have previously written about a possible framework, Simularcrum, for this very problem too!

The main problem is that negative data does not get published easily. The issue stems from the fact that negative data is, well, negative: it doesn't answer the question we were asking, or at least that is how many people interpret it. Most researchers start with a hypothesis and then run experiments. When the data don't support their claim, they have two options: rerun the experiment with different parameters to understand why it didn't work, or keep modifying the experiment until one of the trials finally yields something that proves their hypothesis. The second option is a major problem. Of course they got a result that proves them right, but it hurts the field at large. Here's how: a result obtained that way is not something that always happens; by its very nature, it has poor reproducibility. Many scientists have analyzed how deep this problem runs, and the results of those meta-analyses, particularly by Ioannidis's group, are scary. A recent report by Freedman et al. shows that a great deal of money is being wasted on irreproducible research, with the cost running into the billions.

Thankfully, many people are recognizing this and trying hard to change the academic structure to resolve the problem:

A likely culprit for this disconnect is an academic reward system that does not sufficiently incentivize open practices (7). In the present reward system, emphasis on innovation may undermine practices that support verification. Too often, publication requirements (whether actual or perceived) fail to encourage transparent, open, and reproducible science (2, 4, 8, 9).

This new paper in Science, which just came out, proposes a set of common standards that can be agreed upon; until large-scale changes happen in academia, evidence-based medicine will likely continue to face such issues. Over time, the problems that emerge have become more and more subtle, demanding increasingly sophisticated analysis. While it is true that the life sciences often focus on narrow topics, the quality of work submitted for publication should be such that reproducibility is an inherent feature built into the research, not an additional qualification the lab has to meet afterwards. Let's hope this paper generates the much-needed debate on the issue, and that we arrive at better policy on how research is done and how grant money from government organizations such as the NIH or NSF is used to promote replicable research.

Papers in consideration:

1. Promoting an open research culture.
B. A. Nosek, G. Alter, G. C. Banks, D. Borsboom, S. D. Bowman, S. J. Breckler, S. Buck, C. D. Chambers, G. Chin, G. Christensen, M. Contestabile, A. Dafoe, E. Eich, J. Freese, R. Glennerster, D. Goroff, D. P. Green, B. Hesse, M. Humphreys, J. Ishiyama, D. Karlan, A. Kraut, A. Lupia, P. Mabry, T. A. Madon, N. Malhotra, E. Mayo-Wilson, M. McNutt, E. Miguel, E. Levy Paluck, U. Simonsohn, C. Soderberg, B. A. Spellman, J. Turitto, G. VandenBos, S. Vazire, E. J. Wagenmakers, R. Wilson, and T. Yarkoni (2015).

Science 348(6242): 1422–1425. DOI: 10.1126/science.aab2374

2. The Economics of Reproducibility in Preclinical Research.
L. P. Freedman, I. M. Cockburn, and T. S. Simcoe (2015).

PLoS Biol 13(6): e1002165. DOI: 10.1371/journal.pbio.1002165