Science has been peculiarly resistant to self-examination. During the ‘science wars’ of the 1990s, for instance, scientists disdained sociological studies of their culture. Yet there is now a growing trend for scientists to use the quantitative methods of data analysis and theoretical modelling to try to work out how, and how well, science works — often with depressing conclusions. Why are these kinds of studies being produced, and what is their value?
Take a study published on 10 November1 by psychologists Andrew Higginson of the University of Exeter and Marcus Munafò of the University of Bristol, UK. It considers how scientists can maximize their ‘fitness’, or career success, in a simplified ecosystem that allows them to invest varying amounts of time and effort into exploratory studies. The study finds that in an ecosystem that rewards a constant stream of high-profile claims, researchers will rationally opt for corner-cutting strategies, such as small sample sizes. These save on the effort required for each study, but they raise the danger that new findings will not prove robust or repeatable.
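The incentive the study describes can be illustrated with a toy calculation (my own sketch, not Higginson and Munafò's actual model). Suppose a researcher has a fixed budget of subjects and must choose a sample size per study: small samples mean more studies, but each has lower statistical power, so most "positives" are false. All parameters here (effect size, base rate of true hypotheses, budget) are illustrative assumptions.

```python
import math

def normal_cdf(x):
    # standard normal CDF via the error function
    return 0.5 * (1 + math.erf(x / math.sqrt(2)))

def power(n, d=0.5, alpha=0.05):
    # approximate power of a two-sample test with n subjects per group,
    # true effect size d, two-sided alpha (normal approximation)
    z_crit = 1.96
    return normal_cdf(d * math.sqrt(n / 2) - z_crit)

def expected_positives(n, budget=1000, d=0.5, base_rate=0.1, alpha=0.05):
    # how many studies a fixed subject budget allows at this sample size
    studies = budget // (2 * n)
    # a study reports a positive if a real effect is detected (power)
    # or a null hypothesis yields a false positive (alpha)
    p_positive = base_rate * power(n, d, alpha) + (1 - base_rate) * alpha
    return studies * p_positive

for n in (10, 20, 50, 100):
    print(f"n={n}: expected positives={expected_positives(n):.2f}, "
          f"power={power(n):.2f}")
```

Under these assumptions the small-sample strategy wins: it maximizes the count of publishable positive findings even though each individual study is far less likely to replicate, which is exactly the corner-cutting equilibrium the paper warns about.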
A slightly different perspective — but a similar conclusion — comes from work published on 21 September2 by information scientist Paul Smaldino at the University of California, Merced, and evolutionary ecologist Richard McElreath at the Max Planck Institute for Evolutionary Anthropology in Leipzig, Germany. They take an evolutionary view, imagining that laboratories are in competition for rewards, and that the most successful of them produce more ‘progeny’: new research groups that use the same techniques and strategies. There is generally a trade-off between productivity and rigour: producing more statistically secure, replicated findings takes more time and effort, but generating too many false positives will eventually take its toll on reputations. Under selection for productivity, however, less-rigorous methods spread and false discovery rates increase.
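The selection dynamic can be sketched in a few lines (again a hand-rolled caricature, not Smaldino and McElreath's published model). Each lab has a single "effort" parameter standing in for methodological rigour; rewards count positive results regardless of truth; each generation, the best-rewarded lab spawns a progeny lab with a slightly mutated strategy, displacing the worst-rewarded one. The payoff function and all numbers are illustrative assumptions.

```python
import random

random.seed(1)

def payoff(effort):
    # rigour takes time, so low-effort labs run more studies;
    # rewards count positives whether or not they are true
    studies = 10 / effort
    p_true_positive = 0.1 * effort   # crude stand-in for statistical power
    p_false_positive = 0.05
    return studies * (p_true_positive + p_false_positive)

# a population of labs with varying levels of rigour
labs = [random.uniform(0.2, 1.0) for _ in range(50)]

for generation in range(200):
    # selection for productivity: the most rewarded lab produces a
    # 'progeny' lab with a similar strategy, replacing the least rewarded
    best = max(labs, key=payoff)
    worst = min(labs, key=payoff)
    labs.remove(worst)
    labs.append(min(1.0, max(0.05, best + random.gauss(0, 0.02))))

mean_effort = sum(labs) / len(labs)
print(round(mean_effort, 2))  # drifts toward the low-effort floor
```

Because the reward counts positives without penalizing false ones, low-effort strategies always out-reproduce rigorous ones, and mean rigour collapses over generations — the "natural selection of bad science" in miniature.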