
Make the REF about processes, not outputs


Change the rules so that preparation adds value for universities, say Marcus Munafò and Neil Jacobs

Universities and researchers often complain about the burden of the Research Excellence Framework, but funders may themselves be frustrated at the lengths institutions go to in preparing their submissions. Such efforts are not surprising, given the money and reputations at stake. 

Nevertheless, much of the REF’s burden is self-imposed, and arguably disproportionate. As institutional choices cascade down to individual researchers, that burden can grow even larger, creating unintended effects. 

Adding value

Could the REF be designed so that preparation adds value for institutions, becoming a useful part of business-as-usual? Adding value is not an explicit aim of the exercise, although it may already have that effect in places. Could the REF be made to do this better?

Some suggest the solution lies in metrics—contrary to the more cautious approach advocated in reports such as Harnessing the Metric Tide. This would certainly reduce burden. However, output metrics such as citations are retrospective, and reliable metrics for the research environment are hard to define. A centralised, metrics-based approach could not easily enhance business-as-usual.

Other approaches that have received less attention may offer ways forward. The REF is, in large part, a mechanism for deciding whether and where to invest resources. All organisations face such choices, and looking beyond the higher education sector might suggest ways to make the REF less work while also improving the daily practice of institutions and researchers. 

In particular, other sectors do not focus on assessing carefully selected outputs. As it stands, we do not know whether, by allowing universities to submit only their best work for evaluation, the REF overestimates the strength of the UK research system. Instead, decision-makers in other sectors probe the day-to-day operations and procedures most critical to the quality and value of the work.

We have previously argued for applying some of the methods of quality control used in manufacturing to research, for example to enhance reproducibility in certain disciplines. At the core of this approach, termed total quality management (TQM), is the proposition that if we take care of the process, the results will take care of themselves. The REF should focus on evaluating the process of research, not a small subset of results.

What would a REF based on TQM look like? One option would be to conduct random spot-checks on eligible research outputs, to verify that the process is running well.

Universities would no longer need to select outputs for submission, eliminating a large part of the REF’s institutional burden. In addition, if any eligible output could be chosen for evaluation, researchers would have a strong incentive to focus on quality over quantity. And review panels would have a strong incentive to genuinely focus on the quality of the research rather than the noteworthiness of its results.

But the research system is not like a factory; it is made up of relatively autonomous actors: universities, researchers, publishers and so on. This creates scope for others, not only the REF's funding bodies, to introduce TQM techniques such as spot-checks.

Some journals already do this, for example by checking the data and code associated with a paper, which reviewers rarely do. And some universities are also considering it as an option, initially in specific disciplines, with the ambition of supporting broader cultural changes such as open research practices, as well as driving up quality.

Of course, approaches will need to be rooted in disciplinary and community norms relating to research integrity. Small-scale, pilot TQM initiatives at individual institutions could help explore these across a range of disciplines, identifying what does and does not work. 

Unlike, say, schools inspection, this would be an internal exercise aimed at driving up quality. Evaluation could be carried out internally, prior to publication, or by an external review panel as at present. One option is to focus on specific components of the research process seen as inherently important, such as data and code.

Spotting the spot-checks

How might institutions and researchers react to spot-checks in, say, REF 2028? Perhaps the answer is to change the question. What if institutions and researchers started piloting this approach in ways that are focused on reducing bureaucracy? Those planning the REF would then have firm evidence, and some constraints, with which to help the evaluation process evolve into a virtuous cycle of continual improvement rather than a vicious circle of compliance and checking.  

Marcus Munafò is professor of biological psychology and associate pro vice-chancellor, research culture, at the University of Bristol. Neil Jacobs is head of the open research programme at the UK Reproducibility Network.

This article also appeared in Research Fortnight