The tide is turning. Revisiting the Metric Tide

The authors of a report on research assessment reflect on changes since a landmark analysis

With temperatures plummeting across the UK, a nip of reform is in the air. Last week, as the Labour Party announced its plans to reform one bloated, out-of-touch institution—the House of Lords—we put the finishing touches to a review of what many in higher education view as another: the Research Excellence Framework (REF).

Harnessing the Metric Tide is published today [12 December] as a modest contribution to the ongoing Future of Research Assessment Programme (Frap). Commissioned by Research England jointly with the equivalent funders in Northern Ireland, Scotland and Wales, the review places debates over REF in the context of bigger changes in the international landscape for research assessment over the past decade. Across 30 pages and 10 recommendations, we revisit the analysis and arguments of the original Metric Tide review, and consider to what extent things have changed, one completed REF cycle later. Is there now greater scope to use quantitative data and indicators to determine the qualities, impacts and overall ‘excellence’ of research in the UK?

We identified visible progress against almost all of the recommendations in The Metric Tide. Across UK research, there is now heightened awareness of the need to use metrics responsibly, accompanied by increased usability and transparency of data infrastructures. As an exercise, REF 2021 benefited from clearer guidelines on the use of quantitative information in its outputs, environment and impact pillars. And two new bodies have been created—the UK Forum for Responsible Research Metrics (in 2016) and the Research on Research Institute (2019)—that have strengthened the evidence base and enriched international debates on assessment cultures and practices. 

Of course, important work remains to be done. A fully functioning data infrastructure—founded on open, community-owned principles—has yet to emerge. And while many have signed up to statements of principles and good practice, there are concerns that some research institutions are yet to demonstrate they have translated principles of responsible metrics into credible forms of action. 

In parallel with fights over the form and fabric of the REF, calls for reform have grown louder and more global since 2015. Two shifts stand out: a demand that reform of research assessment transitions from discussing problems to testing and scaling solutions; and greater urgency in the search for ways to incentivise positive change.

These debates have also benefited from critiques of the fuzzily defined concept of ‘excellence’, and from a growing emphasis that assessments of the various qualities of research outputs must also consider the qualities of the environments and processes through which they are produced.

We propose that the REF realise and reward more of this latent value by placing greater weight on the environment statement (following an evidence-informed narrative structure). This could cover issues such as gender and race equality, team-leadership skills, workload management, and measures to eliminate bullying and harassment. The data needed to support such an innovation would have to be carefully considered, to avoid adding to the assessment burden of the REF.

Overall, despite valuable innovations in recent years (e.g. the Initiative for Open Citations and Overton.io) there is still no magic solution to the challenges of large-scale research assessment. We remain persuaded that a mixed-methods approach will best serve the purposes of the REF. 

If the purposes of the REF are clear, there is an opportunity for more radical surgery, which we suggest takes place over two REF cycles to allow the research community time to consult and co-design. One option worth exploring is to reconsider the scale at which assessment is performed, potentially moving from department-level units of assessment to main-panel or institution level. This would create scope for the use of aggregated data, which may provide a more reliable indication of some dimensions of research quality.

We look forward to discussing the report’s findings and recommendations at our launch event today, and through ongoing dialogue with sector stakeholders over the coming weeks and months. We should be rightly proud of the contribution the UK research community has made to these agendas, due in no small part to the far-flung influence of The Metric Tide. As we recalibrate many of our global alliances after Brexit, it is important that the UK research community remains constructively engaged in these reform efforts.

As Juan Pablo Pardo-Guerra notes in the closing section of his superb book, The Quantified Scholar: “In the end we ought to focus not on resisting numbers and their pretence of authority but on the scaffolding we knowingly, willingly construct and then defer to.” 

To end where we began, with the possibilities for reform: the progress we’ve seen since 2015 reminds us of the need for responsible assessment cultures in which purposes are clear, methodologies are transparent, assessors are competent, and biases are mitigated. Through Frap, an opportunity for change is surfacing and we must seize it while we can.

Stephen Curry is Professor of Structural Biology at Imperial College London. Elizabeth Gadd is Research Policy Manager at Loughborough University. James Wilsdon is founding director of the Research on Research Institute.

A version of this article appeared in Research Europe