
How do you run the REF in a pandemic? We still don’t know


The pre-Covid approach to evaluation won’t necessarily work this time, say Gemma Derrick and Julie Bayley

Assuming the government’s roadmap out of lockdown goes to plan, England’s pubs will reopen on 12 April. That’s 12 days after the deadline for submissions to the 2021 Research Excellence Framework.

The run-up to a REF deadline is no picnic in a normal year. Mid-pandemic, with changing deadlines and increased workloads for the academics and professional services staff preparing submissions, on top of the daily struggles of lockdown, the near-fortnight’s gap between submission and first orders seems cruel.

Worldwide, the academic community is wondering how to compensate for a lost year of productivity and focus. This will require a suite of measures aimed at redressing the inequalities left in the wake of Covid-19. 

Learning how to accommodate this disruption fairly and sensitively is a challenge for research evaluation systems everywhere. On the one hand, with the REF feeling even less welcome than usual, there’s a temptation to just get it over with. On the other, as one of the world’s largest research audit exercises, the REF must take the realities of Covid-19 into account now more than ever. The UK has an early chance to show how to adjust and compensate for the setbacks experienced by researchers globally.

Moving online

REF2021 has already made progress in this direction, extending deadlines and census dates and allowing Covid-mitigation statements alongside regular submissions. What is missing, though, is guidance on how review panels are expected to adjust their evaluation processes so that good submissions are rewarded while those that bear the scars of the crisis are treated reasonably and fairly.

It should be the evaluators’ job to gauge the damage done by Covid-19. But the lack of clear guidance on how panels should treat mitigation statements places this burden on applicants. 

Most likely, REF panels will meet virtually. Meeting online will be safer and cheaper than an indoor, poorly ventilated and probably heated discussion with lots of people, some of whom are based overseas. But it brings complications of its own.

When any group thrashes out its disagreements, face-to-face dynamics and non-verbal cues form a huge part of the process. Virtual deliberations risk being less agile and less consistent with previous exercises, making it harder still to decide how to treat Covid-mitigation statements. 

Online meetings will be especially tricky for panels judging research impact, which accounts for 25 per cent of the total mark. Compared with traditional criteria for research excellence, the definition and measurement of impact is much less fixed, and so more up for grabs in each meeting. 

Unexpected impact

More generally, Covid-19 has affected the relationship between science and society in myriad ways. Events planned to capture impact and add value to a case study have been cancelled or delayed, or were never scheduled; businesses have failed; people who might have provided testimonials are unavailable, or have sadly died.

The past 12 months have also seen UK researchers produce incredible science and impact—sequencing Covid-19 variants, creating a vaccine, and potentially saving hundreds of thousands of lives. Thanks to the extended deadlines, all of these are countable in REF2021. 

No doubt the pandemic has prompted some interesting game-playing in universities around which case studies to submit. Panels might view impacts related to Covid-19 more favourably, simply because evaluators share the gratitude we all feel towards any research aimed at combating the pandemic. The significance and reach of this work is undeniable to anyone who has survived the past 12 months. It will be even more so if, by the time of evaluations, the panel is able to meet face-to-face.

So the question remains as to whether and how REF2021 can find a sensitive and fair way to meet the expectations loaded onto the exercise pre-pandemic while also accommodating and mitigating the past year’s disruption to individual researchers, research topics and universities. If those running REF2021 can pull this off, it will set a benchmark for post-Covid evaluation processes that could have a global impact.

Giving review panels clear guidelines on how to evaluate the effects of the pandemic would be a step in the right direction. Without them, the UK risks embarking on an evaluation process that relies on pre-pandemic tools unfit for our post-Covid new normal.

Gemma Derrick is director of research and a senior lecturer at the Centre for Higher Education Research and Evaluation at Lancaster University. Julie Bayley is director of Research Impact Development at the University of Lincoln

This article also appeared in Research Fortnight