We know how much of universities’ research impact is local—but not whether that’s the right amount, say Jonathan Grant and Kirstie Hewlett.
The social and economic divides revealed by the 2016 Brexit referendum have prompted a focus on the issue of place in science and innovation. The most tangible example of this is the £236 million Strength in Places fund trailed in the 2017 Industrial Strategy White Paper and incorporated in UK Research and Innovation’s delivery plan, published in June.
There are still doubts, however, over whether universities’ contributions to their regions are valued as they should be. The Civic University Commission’s February report, Truly Civic, noted that in the Research Excellence Framework, “‘local research’ is by definition inferior to international research”, and argued that REF criteria should be amended to reward locally focused research.
However, the commission did not assess the extent of local research impact, or suggest what an acceptable level might be. We decided to address this data gap by analysing the 6,679 non-redacted impact case studies submitted to REF 2014.
We found that 1,795 case studies—27 per cent—mention the city in which the submitting university is based, implying some form of local impact. The 12 institutions with the highest proportion of case studies mentioning their home city are shown in the figure.
Of these, only four belong to the Russell Group of (self-declared) research-intensive universities. This is slightly misleading, as Russell Group universities made relatively large REF submissions, averaging 115 case studies each and accounting for just over 40 per cent of all those submitted. Overall, 36 per cent of submissions from Russell Group institutions named their home city.
To refine this crude estimate, we read a random 10 per cent (179) of the case studies identified as showing local impact. This revealed false positives, such as references to city names that did not describe impact. To look for false negatives, such as local impacts not picked up by the search, we also read 179 case studies that had not been captured in the initial search.
This revealed 67 false positives and 15 false negatives, suggesting that a crude search is largely reliable, but overestimates local impact. Adjusting for search-term errors, a reasonable estimate is that just under one in five case studies submitted to REF 2014 demonstrate local research impact.
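The sample-based correction can be sketched in a few lines. The article does not spell out the exact adjustment used, so the calculation below, a standard correction using the sampled false-positive and false-negative rates, is one plausible reading that illustrates the approach rather than a reproduction of the published figure.

```python
# Illustrative adjustment of the keyword-search estimate of local impact,
# using only the figures reported in the text. The correction method is
# an assumption: scale the matches by sampled precision, then add back
# an estimate of the misses among the non-matches.

TOTAL = 6679     # non-redacted REF 2014 impact case studies
MATCHED = 1795   # case studies mentioning the submitting university's city

SAMPLE = 179     # size of each 10 per cent sample that was read
FALSE_POS = 67   # sampled matches that did not show genuine local impact
FALSE_NEG = 15   # sampled non-matches that did show local impact

precision = (SAMPLE - FALSE_POS) / SAMPLE   # share of matches that are real
miss_rate = FALSE_NEG / SAMPLE              # share of non-matches missed

estimated_local = MATCHED * precision + (TOTAL - MATCHED) * miss_rate
share = estimated_local / TOTAL

print(f"crude estimate:    {MATCHED / TOTAL:.1%}")
print(f"adjusted estimate: {share:.1%}")
```

Under this reading the crude 27 per cent falls once false positives are netted off, with false negatives clawing a little back; the direction of the correction matters more than the exact decimal.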
Judged by REF submissions, university research already has local impact. The question is whether an average of about 20 per cent is too high, too low or about right. This is very difficult to answer, but given the current policy focus on place, perhaps we should expect the proportion of local case studies to increase in REF 2021.
A related issue raised by the Civic University Commission is whether REF criteria should change to support policy. Currently, research that is "recognised nationally" is only rated 1*, and thus does not attract any financial reward. "World-leading" research, on the other hand, is rated 4* and attracts significant resources.
The criteria for assessing impact in REF 2014 were slightly different, based on the “reach” and “significance” of the research. In other words, local impact is not by definition inferior to international impact, but is treated similarly in the assessment and rating of impact sub-profiles.
The priority, then, seems not to be amending the REF evaluation criteria, but having a debate as to what proportion of a university's research impact we, as taxpayers, should expect to occur locally. To be fair, this was the commission's primary call to action.
Jonathan Grant is vice-president and vice-principal for service, and Kirstie Hewlett is a research associate in the Policy Institute at King's College London
This article also appeared in Research Fortnight