In 1962 Jacob Cohen, a psychologist at New York University, reported an alarming finding. He had analysed 70 articles published in the Journal of Abnormal and Social Psychology and calculated their statistical “power” (a mathematical estimate of the probability that an experiment would detect a real effect). He reckoned most of the studies he looked at would actually have detected the effects their authors were looking for only about 20% of the time—yet, in fact, nearly all reported significant results. Scientists, Cohen surmised, were not reporting their unsuccessful research. No surprise there, perhaps. But his finding also suggested that some of the papers were reporting false positives: in other words, noise that looked like data. He urged researchers to boost the power of their studies by increasing the number of subjects in their experiments.
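What a power of 20% means can be made concrete with a quick simulation. The sketch below, in Python, is not drawn from Cohen’s paper; the effect size, group sizes and significance threshold are illustrative assumptions. It simply counts how often a standard two-sample test detects a real difference, and shows why adding subjects helps.

```python
import random
from scipy.stats import ttest_ind

def estimated_power(effect_size, n_per_group, alpha=0.05, trials=5000):
    # Monte Carlo estimate of power: how often does a two-sample t-test flag
    # a real difference of the given size as significant at threshold alpha?
    hits = 0
    for _ in range(trials):
        control = [random.gauss(0.0, 1.0) for _ in range(n_per_group)]
        treated = [random.gauss(effect_size, 1.0) for _ in range(n_per_group)]
        if ttest_ind(treated, control).pvalue < alpha:
            hits += 1
    return hits / trials

# A "medium" effect studied with 20 subjects per group is detected only about
# a third of the time; quadrupling the sample raises that to nearly 90%.
print(estimated_power(effect_size=0.5, n_per_group=20))
print(estimated_power(effect_size=0.5, n_per_group=80))
```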
Wind the clock forward half a century and little has changed. In a new paper, this time published in Royal Society Open Science, two researchers, Paul Smaldino of the University of California, Merced, and Richard McElreath at the Max Planck Institute for Evolutionary Anthropology, in Leipzig, show that published studies in psychology, neuroscience and medicine are little more powerful than in Cohen’s day.
They also offer an explanation of why scientists continue to publish such poor studies. Dodgy methods that seem to produce results are perpetuated because those who publish prodigiously prosper, something that might easily have been predicted. More worryingly, the process of replication, by which published results are tested anew, is incapable of correcting the situation no matter how rigorously it is pursued.
The preservation of favoured places
First, Dr Smaldino and Dr McElreath calculated that the average power of papers culled from 44 reviews published between 1960 and 2011 was about 24%. This is barely higher than the figure Cohen reported, despite repeated calls in the scientific literature for researchers to do better. The pair then decided to apply the methods of science to the question of why this was the case, by modelling the way scientific institutions and practices reproduce and spread, to see if they could nail down what was going on.
They focused in particular on incentives within science that might lead even honest researchers to produce poor work unintentionally. To this end, they built an evolutionary computer model in which 100 laboratories competed for “pay-offs” representing prestige or funding that result from publications. They used the volume of publications to calculate these pay-offs because the length of a researcher’s CV is a known proxy for professional success. Labs that garnered more pay-offs were more likely to pass on their methods to other, newer labs (their “progeny”).
Some labs were better able to spot new results (and thus garner pay-offs) than others. Yet these labs also tended to produce more false positives—their methods were good at detecting signals in noisy data but also, as Cohen suggested, often mistook noise for a signal. More thorough labs took time to rule these false positives out, but that slowed down the rate at which they could test new hypotheses. This, in turn, meant they published fewer papers.
In each cycle of “reproduction”, all the laboratories in the model performed and published their experiments. Then one—the oldest of a randomly selected subset—“died” and was removed from the model. Next, the lab with the highest pay-off score from another randomly selected group was allowed to reproduce, creating a new lab with a similar aptitude for creating real or bogus science.
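To give a concrete, if crude, picture of how such a simulation can work, here is a minimal sketch in Python. It is not the researchers’ own code: the parameter values, the fixed 80% power, the sizes of the subsets and the way a lab’s rigour trades off against its output are all illustrative assumptions.

```python
import random

N_LABS = 100          # competing laboratories
N_GENERATIONS = 1000  # death-and-birth cycles
BASE_RATE = 0.1       # assumed fraction of tested hypotheses that are actually true
MUTATION = 0.05       # how far a new lab's rigour can drift from its parent's

class Lab:
    def __init__(self, effort):
        # "effort" stands for the rigour spent ruling out false positives (0 to 1).
        self.effort = effort
        self.payoff = 0.0
        self.age = 0

    def do_science(self):
        self.age += 1
        # Less rigorous labs get through more experiments per cycle...
        n_experiments = 1 + int(10 * (1.0 - self.effort))
        for _ in range(n_experiments):
            hypothesis_true = random.random() < BASE_RATE
            power = 0.8                                    # chance of detecting a real effect
            false_pos = 0.05 + 0.45 * (1.0 - self.effort)  # ...but they mistake noise for signal more often
            positive = random.random() < (power if hypothesis_true else false_pos)
            if positive:
                self.payoff += 1.0  # each "publication" earns prestige or funding

def one_generation(labs):
    for lab in labs:
        lab.do_science()
    # The oldest lab in a randomly chosen subset "dies"...
    labs.remove(max(random.sample(labs, 10), key=lambda l: l.age))
    # ...and the highest-scoring lab in another random subset "reproduces",
    # passing on its level of rigour (with a little mutation) to a new lab.
    parent = max(random.sample(labs, 10), key=lambda l: l.payoff)
    child_effort = min(1.0, max(0.0, parent.effort + random.uniform(-MUTATION, MUTATION)))
    labs.append(Lab(child_effort))

labs = [Lab(random.random()) for _ in range(N_LABS)]
for _ in range(N_GENERATIONS):
    one_generation(labs)
print("average rigour after selection:", sum(l.effort for l in labs) / N_LABS)
```

By construction, labs that skimp on rigour run more experiments and report more positives, so they accumulate pay-offs faster and are more often chosen to reproduce; the real model captures the same pressure with considerably more care.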
Sharp-eyed readers will notice that this process is similar to that of natural selection, as described by Charles Darwin in “On the Origin of Species”. And lo! (and unsurprisingly), when Dr Smaldino and Dr McElreath ran their simulation, they found that labs which expended the least effort to eliminate junk science prospered and spread their methods throughout the virtual scientific community.
Their next result, however, was surprising. Though more often preached than practised, the process of replicating the work of other labs is supposed to be one of the things that keeps science on the straight and narrow. But the two researchers’ model suggests it may not do so, even in principle.
Replication has recently become all the rage in psychology. In 2015, for example, over 200 researchers in the field repeated 100 published studies to see whether their results could be reproduced (only 36% could). Dr Smaldino and Dr McElreath therefore modified their model to simulate the effects of replication, by randomly selecting experiments from the “published” literature to be repeated.
A successful replication would boost the reputation of the lab that published the original result. Failure to replicate would result in a penalty. Worryingly, poor methods still won—albeit more slowly. This was true in even the most punitive version of the model, in which labs received a penalty 100 times the value of the original “pay-off” for a result that failed to replicate, and replication rates were high (half of all results were subject to replication efforts).
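In the toy sketch above, this replication step can be bolted on in a few lines. Again, this is an illustration rather than the researchers’ code: the 50% replication rate and the 100-fold penalty come from the harshest scenario just described, while the assumed chances that true and false results replicate are placeholders.

```python
import random

REPLICATION_RATE = 0.5   # half of all published results are re-tested
PENALTY = 100.0          # a failed replication costs 100 times the original pay-off
POWER = 0.8              # assumed chance that a real effect is detected again
FALSE_POS = 0.05         # assumed chance that a spurious effect is "confirmed" anyway

def replication_round(published):
    # "published" is a list of (lab, was_true_effect) pairs recorded when the
    # labs in the sketch above did their science for this cycle.
    for lab, was_true_effect in published:
        if random.random() > REPLICATION_RATE:
            continue                                   # this result is never re-tested
        replicated = random.random() < (POWER if was_true_effect else FALSE_POS)
        if replicated:
            lab.payoff += 1.0                          # the original lab's reputation is boosted
        else:
            lab.payoff -= PENALTY                      # failure to replicate brings a heavy penalty
```

The point of the sketch is only to show where replication enters the loop and who bears the penalty; in the real model the pay-offs, penalties and replication probabilities were handled with far more care.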
The researchers’ conclusion is therefore that when the ability to publish copiously in journals determines a lab’s success, then “top-performing laboratories will always be those who are able to cut corners”—and that is regardless of the supposedly corrective process of replication.
Ultimately, therefore, the way to end the proliferation of bad science is not to nag people to behave better, or even to encourage replication, but for universities and funding agencies to stop rewarding researchers who publish copiously over those who publish fewer, but perhaps higher-quality papers. This, Dr Smaldino concedes, is easier said than done. Yet his model amply demonstrates the consequences for science of not doing so.