Robust science depends on getting the right incentives at all levels, says Marcus Munafò
Around 1950, the American statistician and management expert W Edwards Deming travelled to Japan to help its industry recover from the second world war. Training Japanese engineers and executives, Deming emphasised quality control. The trick, as one engineer reflecting on Deming’s contribution put it decades later, is to get it right the first time.
His advice is credited with leading to a step-change in the quality of Japanese products, such as cars. Deming saw that focusing on quality control would also improve productivity, as fewer resources would be expended later on fixing cars that had broken down.
Any endeavour that neglects quality control is asking for trouble. You can see the consequences in science, in the evidence and concern around the lack of robustness and reproducibility of much research. Too often, researchers are not getting it right the first time.
Of the potential solutions, open research practices are among the most promising. The argument is that transparency acts as an implicit quality control process. If others are able to scrutinise our work—not just the final published output, but the underlying data, code, and so on—researchers will be incentivised to ensure these are high quality.
So, if we think that research could benefit from improved quality control, and if we think that open research might have a role to play in this, why aren’t we all doing it? In a word: incentives.
At the moment, researchers are incentivised to produce outputs, particularly journal articles, that report a certain kind of work (‘groundbreaking’), are published in a certain kind of journal—it’s taboo to talk about impact factor, but on the ground this is still a major consideration—and so on. This is a problem because, to paraphrase the eminent Cambridge plant scientist Ottoline Leyser, we break ground in order to build something. If all we ever do is break ground, all we’ll be left with is holes in the ground.
But this is what funders and institutions explicitly encourage, constantly telling researchers that their research needs to be novel, innovative and groundbreaking. In other words, we are incentivised to publish a certain kind of work, and to get grants, but not to be right. How can what is good for a scientist’s career be better aligned with what is good for science?
Universities are the ultimate repository of much academic culture. One of the most fundamental expressions of this is their hiring and promotion criteria. This makes such criteria important targets for intervention if we want to change what we value.
That’s why, on the assumption that open research is beneficial to society as a whole, and researchers in particular, the University of Bristol has recently revised its promotion criteria to include open research practices, for use from the 2020-21 promotion cycle. Along with more conventional indicators, such as publication record, the adoption of open research practices, as appropriate to an individual’s research, will be recognised in promotion cases.
Open access publication, including in Bristol’s institutional repository, has been required for some time. But open research is much more than simply open access. The new criteria state that cases will recognise: “Producing open research outputs as appropriate by adopting good practice in, for example, sharing data and code, sharing materials, sharing digital outputs, publishing preprints and pre-registering study protocols.”
Including data sharing in promotion criteria is a requirement of institutions signing the Concordat on Open Research Data. Including open research practices in its promotion criteria allows the University of Bristol to sign the Concordat, which will in turn enhance the environment component of its submission to the Research Excellence Framework. There is a web of incentives.
The change has been rapid and seamless—the benefits to the university, to researchers and to science were clear. Bristol is not the only university to include open research practices in promotion criteria, and hopefully as more do so this will become the norm.
The culture of openness that policymakers and funders have been encouraging is having a gradual but steady impact. Even so, policies such as Bristol’s are only a first step. Ultimately, we need to change the underlying culture of academia.
To do this, we need to understand the research environment as a system, and take a systems approach to change. Incentives are central to this, and operate at all levels. Ensuring incentives are aligned and coordinated will be critical to ensuring the UK leads the world in conducting and promoting rigorous research.
This article also appeared in Research Fortnight and a version also appeared in Research Europe