Peaks in submission sizes suggest rule changes did not stop institutional gaming, says Simon Kerridge
The distribution of submission sizes in the 2014 Research Excellence Framework was startling. REF 2014 required two impact case studies for submissions smaller than 15 full-time-equivalent staff, and one more for each additional 10 FTE or part thereof. In the event, a quarter of submissions fell less than 1 FTE short of the threshold for an additional impact case study. Submissions in this range were 10 times more common than those a single FTE or less above the threshold.
This was clear evidence that institutions were being strategically selective in their submissions, perhaps through fear of the unknown—impact case studies were new in that assessment. REF 2021, however, required all eligible staff to be submitted. You’d think this would mean no such skewing in the distribution of submission sizes, just as there was none in the REF’s predecessor, the 2008 Research Assessment Exercise. But you’d be wrong.
This time, submissions below 20 FTE needed two impact case studies. Above this, another impact case study was required for each additional 15 FTE up to 110, and then one for each additional 50 FTE (although because few submissions reached this size, I have excluded them from this analysis). If the impact case-study requirement had no effect on submission sizes, the numbers ought to fall in some kind of normal distribution. But, as the graph of submission sizes shows, the case-study cliff edge remains as stark as ever.
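The threshold rules above can be sketched as a small function. This is an illustrative reconstruction from the rules as stated in this article, not the official REF 2021 guidance, and the function name is my own:

```python
import math

def case_studies_required(fte: float) -> int:
    """Impact case studies needed for a REF 2021 submission of `fte`
    full-time-equivalent staff, per the thresholds described above.
    An illustrative sketch, not the official guidance."""
    if fte < 20:
        return 2
    if fte < 110:
        # one more case study for each additional 15 FTE beyond 20
        return 3 + math.floor((fte - 20) / 15)
    # beyond 110 FTE, one more for each additional 50 FTE
    return 9 + math.floor((fte - 110) / 50)
```

The cliff edges driving the spikes in the data fall out directly: a submission of 19.99 FTE needs two case studies, while one of 20 FTE needs three; 34.99 FTE needs three, while 35 FTE needs four.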
Spikes in numbers
Again, submission sizes peak just below case-study thresholds. There are 81 submissions between 19 and 19.99 FTE, compared with 15 in the 20-20.99 FTE range. There are similar spikes just below 35, 50 and 65 FTE, and so on: as each threshold looms, submissions come in just under the wire. The largest proportional difference is between 34 and 35 FTE, with the former being 13 times more common than the latter.
Pooling the data shows that, on average, a submission is more than seven times more likely to be within 1 FTE below the bar for an additional impact case study than 1 FTE above it.
This can be seen as institutions gaming the REF—avoiding having to submit weaker case studies that might dent their quality profiles. It seems that, despite the rule to submit all eligible staff, many submissions adjusted their numbers to be just under the threshold needed for an additional impact case study.
The worst-case scenario is that some staff were no longer employed by the institution on the census date, although it’s also possible that additional staff were employed to bring the cohort to just under the threshold. Another, perhaps more insidious, approach is to change contractual status, removing or adding responsibility for research to arrive at the desired submission size. A third possibility would be a change in FTE: a department of 20 full-time staff would need to persuade only one person to drop to a fractional contract to avoid a third case study.
My fear is that some people will have been persuaded to have research removed from their contracts or cut their hours, perhaps even to zero, in the name of optimising a REF submission.
In 2014, staff submission was a choice, so researchers who were not submitted would ‘only’ face the stigma of not being ‘REFable’. This time they could well have had a change of contract, which is surely worse; I truly hope that no one lost their job as a result.
Data from the Higher Education Statistics Agency ought to give a good handle on which of these various explanations might be most prevalent—I’d encourage you to look at your own institution.
Perhaps there really are 13 times as many departments of 34 FTE as of 35 FTE, but it seems more plausible that the rules on impact case studies shape the size of departments.
Simon Kerridge is an independent research consultant and honorary staff member at the University of Kent. He was a panel adviser to main panel C for REF 2021.
This article also appeared in Research Fortnight