
REF 2021: The metric tide rises again


The REF is ripe for radical change, say Stephen Curry, Elizabeth Gadd and James Wilsdon

As we continue to digest last week’s headlines from the Research Excellence Framework (REF)—and with more to come as detailed panel reports and impact case studies emerge over the next few weeks—attention is already turning to the scope and design of the next assessment cycle.

As in 2008 and 2014, the possibility of a simpler, cheaper process that draws on readily available metrics is being floated as an alternative to an exercise that is widely agreed to have become overly cumbersome.

It is worth remembering that anyone under 60 who works in UK universities is part of a system shaped by successive waves of national research assessment, dating back to the first research selectivity exercise in 1986. Over eight cycles, this has become a highly complex evaluation machine, to use a term coined by the political scientist Peter Dahler-Larsen.

This machinery is simultaneously admired—and seen by some as something to emulate—as a fair and accountable basis on which to determine the annual allocation of around £2 billion of quality-related (QR) funding, and contested as a source of bureaucracy, competition and conformity.

So it is right that the REF’s designers and users remain alert to the potential of new technologies and other innovations to enhance, reboot or streamline its operations. When Tim Berners-Lee invented the World Wide Web, the UK was already completing its second assessment cycle. Since then, advances in ICT, data science, scientometrics and related fields have transformed the possibilities and practices of measurement and management, and research assessment has evolved alongside them.

Many see machine learning and artificial intelligence as the latest general-purpose technologies, with the capacity to boost productivity and transform working practices across many sectors, including research. There have been calls to build these technologies into the REF.

Catch the wave

Over the decades, the culture and management of UK university research have become so deeply fused with the machinery of assessment that reform is difficult. When viewed from afar, unpicking the whole thing can seem straightforward; up close, all you see is a spaghetti of interdependencies and connections.

That said, various factors are now aligning to support a more radical overhaul of the exercise than at any point in recent years.

Public R&D spending is set to grow through to 2025. There is the potential for more strategic integration between QR and other funding streams through the structures of UK Research and Innovation, combined with heightened urgency around research culture, impact, diversity and inclusion. And there is already a strong drive to reduce bureaucracy, through Adam Tickell’s ongoing review and UKRI’s ‘Simpler and Better’ funding initiative.

So the time is right to look in an open and creative way at how we could simplify and improve the REF. The Future Research Assessment Programme (FRAP), which the research funding bodies initiated in 2020, is admirable in its scope and its intent to do just that. Multiple strands of evaluation and analysis are now underway.

As the latest addition to this mix, Research England is announcing today that it has asked the three of us to lead an updated review of the potential role of metrics in the UK research assessment system.

Short and sharp

The Metric Tide Revisited will take a short, sharp, evidence-informed look at current and potential uses of metrics, with four tightly defined objectives:

  • To revisit the conclusions and recommendations of the last review of these questions—The Metric Tide, which two of us co-authored in 2015—and assess progress against these;
  • To consider whether recent developments in the infrastructures, methodologies and uses of research metrics negate or change any of those 2015 conclusions, or suggest additional priorities;
  • To look afresh at the role of metrics in any future REF, and consider whether design changes being considered by the FRAP suggest similar or different conclusions to those reached in 2015;
  • To offer updated advice to UKRI and the higher education funding bodies on the most effective ways of supporting and incentivising responsible research assessment and uses of metrics.

This will be a rapid review, concluding in September 2022. The original Metric Tide was underpinned by extensive evidence gathering and consultation, and there’s no need to repeat all of that from scratch.

We’ve also seen welcome progress on these agendas since 2015, under the umbrella of the Declaration on Research Assessment; through institutions adopting their own policies for responsible metrics and assessment; and with additional guidance at an international level from bodies such as the International Network of Research Management Societies, Science Europe, Unesco and the Global Research Council.

We will hold roundtables in June and July to invite formal inputs from experts and stakeholder groups. These will include researchers across disciplines and career stages; scientometricians; metrics providers; university leaders and research managers; publishers; librarians; learned societies; research funders; and infrastructure providers. We will also work with the Forum for Responsible Research Metrics—itself created in response to a recommendation of The Metric Tide—as a source of informal oversight.

More than anything, we as a team care passionately about improving research cultures and delivering the evidence and answers that the FRAP, and the wider community, need. We know how vital it is to get assessment systems right; how the purposes and priorities of the REF need to be weighed alongside technologies, methods and applications; and how any proposed reforms to the REF must engage with the experiences, insights and expectations of users and stakeholders.

The different strands of FRAP, including ours, will be drawn together in the autumn. It will then be up to ministers to decide how radical they want to be. We are quietly optimistic about the prospects for positive change.

Stephen Curry is professor of structural biology and assistant provost for equality, diversity and inclusion at Imperial College London, and chair of the Declaration on Research Assessment steering committee. He was a co-author of The Metric Tide

Elizabeth Gadd is research policy manager at Loughborough University and chair of the International Network of Research Management Societies Research Evaluation Group

James Wilsdon is Digital Science professor of research policy at the University of Sheffield, and director of the Research on Research Institute. He was chair of The Metric Tide review and is a founding member of the Forum for Responsible Research Metrics

A version of this article also appeared in Research Fortnight and Research Europe