
REF 2021: Don’t go compare

Beware those drawing parallels between this year’s results and REF 2014, says Simon Kerridge

This was my second Research Excellence Framework and my fifth UK assessment exercise. Over that time, I have been a submitter, one of the submitted, a secretary and, this year, a panel adviser.

My role in REF 2021 was to help ensure that the process operated as advertised, as per the panel criteria and working methods. This was mostly straightforward but sometimes difficult—there are always edge cases, and online meetings brought their own challenges. I have newfound admiration for tech support.

But, certainly for my sub-panels in the social sciences, every panel member I worked with was, without exception, extraordinarily hardworking and diligent. They wanted to do the best job possible—and, above all, be fair.

The good

The REF’s major virtue is that it does what it says on the tin. It has shown itself to be a tried and tested method for assessing the research, impact and culture of departments in universities, and a relatively cost-effective way of distributing funding.

Panels care about outputs, not outlets—4* articles were found in many, many places, while those from ‘top’ journals often fell well short of top marks. Double weighting, where a single work (usually a book) counts as two outputs, was used much more often than in 2014, rewarding researchers in the social sciences, arts and humanities by incentivising monographs and other long-form outputs. 
 
Even so, I suspect that it could have been used more—submitters only request double weighting for work if they are confident it will score at least as highly as the reserve output submitted in case the request is turned down. A less cautious approach might have yielded higher marks.

The bad

Remember, the REF does not assess all research, and does not always give an accurate picture of research activity.

For example, case studies of the ongoing impact of research submitted in 2014 only counted as ‘continuing’ if they relied on exactly the same body of work—a single additional piece of evidence means a new case study. This means that the data, if not properly interpreted, probably underestimate long-term impact.

Similarly, the ‘interdisciplinary identifier’ used to trigger special treatment for boundary-spanning work was applied very inconsistently. Any analyses about the relative strength of interdisciplinary research are likely to be less than useless.

In 2014, a high proportion of submissions were just below the number of staff that would require an additional impact case study, creating spikes in submission sizes. This time, in theory, all research-active staff had to be submitted, and a case study was required for every 15 full-time equivalents rather than 10, as in 2014.

This should smooth out those spikes, but I worry that some of the smoothing will come from staff being moved onto teaching-only contracts, which are not eligible for the REF. That would surely be worse than not being submitted.

The ugly

The ugliest thing is sure to be how the results are used. We can expect the usual league tables and “best campus-based university in a city of fewer than 100,000 people, which also has a medical school” marketing spin.

This year’s results are not comparable with last time, although no doubt many will try. The rules for staff and output selection are so different as to make comparisons meaningless.

The same goes for impact: its share of the mark has risen to 25 per cent, based entirely on case studies, in contrast to 20 per cent in 2014, of which four-fifths was based on case studies. In effect, impact case studies are 56.25 per cent more valuable.
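
To spell out that arithmetic (a rough check, assuming case studies carried four-fifths of the 20 per cent impact weighting in 2014 and the whole of the 25 per cent weighting this time):

\[
\frac{25}{\tfrac{4}{5} \times 20} = \frac{25}{16} = 1.5625
\]

so each case study counts for roughly 56.25 per cent more of the overall mark than it did last time.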

This severely disadvantages new and small submissions. The rules mean that a submission of two staff has to produce the same number of case studies as one with 18. How is that fair?

The assessment period is longer than last time, the number of outputs per staff member is lower, and there is more flexibility around output selection. Changes to rules on the portability of outputs mean that much of the work by staff moving institutions can be submitted by both or all of their employers.

These changes will have given large institutions more scope to tailor their submissions, which should result in some pretty stellar output profiles. These are all matters for the ongoing Future Research Assessment Programme.

Finally, in case no one else acknowledges their efforts and grace under extreme pressure (and I don’t just mean working with me), I’d like to give a shout-out to all the REF team, led by Research England’s REF director Kim Hackett, all the panel advisers and secretaries, and in particular to panel secretaries Louise Stanley and Natalie Wall, with whom I worked closely.

Simon Kerridge is the founder of Kerridge Research Consulting and a past chair of the Association of Research Managers and Administrators

A version of this article also appeared in Research Fortnight