
How thresholds for case studies shaped REF submissions

Simon Kerridge looks at why it probably wasn’t a good idea to be the fifteenth researcher in a department with only two strong impact case studies.

When the University of Kent was preparing its submission to the Research Excellence Framework, there was much concern about how the number of staff we chose to submit would affect the number of impact case studies we would need to submit alongside them. In the end this was not an issue for Kent, but it seems to have been elsewhere.

The number of case studies needed for each unit of assessment depended on the number of full-time-equivalent staff submitted: two case studies for fewer than 15 FTE staff, three for fewer than 25, four for fewer than 35, and one more for each additional 10 staff thereafter.
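For readers who prefer the rule spelt out, my reading of it can be expressed as a simple step function of submitted FTE. This is a sketch of the threshold structure described above, not the official wording of the guidance:

```python
import math

def case_studies_required(fte):
    """Impact case studies required for a REF 2014 submission, as read from
    the rule above: two below 15 FTE, three below 25, four below 35, and
    one more for each further 10 FTE."""
    if fte < 15:
        return 2
    return 3 + math.floor((fte - 15) / 10)

# case_studies_required(14.99) -> 2, (15.0) -> 3, (24.5) -> 3, (25.0) -> 4
```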

The dilemma, then, was whether to include more staff if it meant an additional impact case study would be required. Particularly for smaller units of assessment, a weak case study could have a huge effect on the overall quality profile.

Impact case studies accounted for 16 per cent of the overall profile, or four-fifths of the 20 per cent of the mark assigned to impact, with the remainder down to the impact template. The mean submission size for a unit of assessment in the REF was 19.20 FTE staff—necessitating three case studies, each worth 5.33 per cent of that unit’s overall profile.

Compare this with publications. With each researcher submitting an average of 3.67 outputs, the average submission would have just over 70 outputs. These accounted for 65 per cent of the profile, or 0.92 per cent for each output. In this example, a case study is worth nearly six times as much as a research output. The numbers vary, of course, but this example is representative.
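To make that arithmetic explicit, here is the back-of-the-envelope version, using only the figures quoted above:

```python
# Back-of-the-envelope weights for the average REF 2014 submission,
# using the figures quoted in the article.
mean_fte = 19.20
case_studies = 3                      # required for 15-24.99 FTE
case_study_share = 0.16               # four-fifths of the 20% impact weighting
per_case_study = case_study_share / case_studies    # ~5.33% each

outputs_per_fte = 3.67
outputs = mean_fte * outputs_per_fte  # ~70.5 outputs
outputs_share = 0.65
per_output = outputs_share / outputs  # ~0.92% each

print(per_case_study / per_output)    # ~5.8: nearly six times as much
```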

So what if you have 15 staff with equally good outputs, but no viable third case study? 

If the first two case studies were, say, 3* and the third only 1*, that would reduce your overall 3* share by 5.33 percentage points and your overall grade point average by 0.11, although it would not adversely affect your quality-related funding. But worse, if one of the original case studies were 4*, the diluting effect of the 1* study would reduce QR funding despite the extra volume from the additional staff member.
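That funding point is easier to see with a toy calculation. The sketch below assumes, for illustration only, that mainstream QR funding is proportional to submitted FTE multiplied by a weighted quality profile in which 4* work counts four times as much as 3* while 2* and 1* attract nothing, and that everything outside the case studies scores 3*; the weights and the name qr_index are my own illustrative choices, not the actual funding formula.

```python
# Assumed funding weights, for illustration only: 4* counts four times 3*;
# 2* and 1* attract no funding.
WEIGHTS = {"4*": 4.0, "3*": 1.0, "2*": 0.0, "1*": 0.0}

def qr_index(fte, profile):
    """Toy QR funding index: submitted volume times weighted quality profile."""
    return fte * sum(WEIGHTS[grade] * share for grade, share in profile.items())

CS = 0.16 / 3   # each of three case studies is ~5.33% of the overall profile

# 14 FTE with two 3* case studies, versus 15 FTE with a third, 1*, case study:
# the extra volume outweighs the dilution, so funding is not harmed.
print(qr_index(14, {"3*": 1.0}))                             # ~14.0
print(qr_index(15, {"3*": 1 - CS, "1*": CS}))                # ~14.2

# But if one of the original case studies was 4* (8% of a two-case-study
# profile), diluting that heavily weighted share costs more than the extra
# FTE brings in.
print(qr_index(14, {"4*": 0.08, "3*": 0.92}))                # ~17.4
print(qr_index(15, {"4*": CS, "3*": 1 - 2 * CS, "1*": CS}))  # ~16.6
```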

So, as many commentators have already suggested, were some researchers excluded from the REF so their institution could avoid submitting a weak impact case study? Comparing the submissions to REF 2014 with those to the 2008 Research Assessment Exercise suggests strongly that they were. In particular, there is a startling difference near the thresholds. The RAE data show a general downward trend in submission size: there are many smaller departments (or submissions) and fewer larger ones. 

The REF data, however, show a huge spike in submissions just below each of the thresholds beyond which an additional impact case study would be required. Almost a quarter of all submissions had an FTE count in a range ending 4 to 4.99 (14 to 14.99, 24 to 24.99 and so on), putting them within one person of requiring an additional case study (see figure).

So either the submission sizes were curtailed when it was known that the next case study was weak, or some researchers were judiciously placed in a less relevant unit of assessment that had some headroom before the next boundary.

Another significant change between the 2008 RAE and the REF was increased clarity in the guidance and regulations for submitting fewer than four outputs for staff with particular circumstances, such as a disability or time off for parental leave, an effort to make staff selection more inclusive.

It would be unfortunate if, by incentivising the exclusion of staff, the introduction of impact case studies eroded that endeavour. Analysing institutional codes of practice for staff selection in relation to the impact case study boundaries might show whether this was the case.

Assuming that the next REF will again assess selected staff rather than whole departments, one extreme option to counteract this effect would be to require one case study for each member of staff submitted. But with more than 50,000 researchers and (only) 7,000 case studies submitted to the 2014 REF, this would probably produce an unworkable deluge of case studies.

A compromise would be to require, say, one case study for every five FTE staff, rather than for every 10. There would be twice as many break points, but each would be only half as precipitous.

It seems likely that impact will command more of the total mark in the next REF—perhaps the 25 per cent originally suggested. If this happens but the number of case studies stays the same, each will become an even weightier element of the final profile, exacerbating the issue. I therefore expect the number of impact case studies required to increase.

Simon Kerridge is director of research services at the University of Kent and chairman of the board of directors of the Association of Research Managers and Administrators.

More to say? Email comment@ResearchResearch.com

This article also appeared in Research Fortnight