REF survey calls out publish-or-perish culture

Image: Selena N.B.H. [CC BY 2.0], via Flickr

Calls for research assessment exercises to embrace diverse research outputs and technological development

Research assessment exercises need to focus more on how they are changing academic behaviour through incentivising only some forms of output, according to a survey of more than 3,700 university staff commissioned by the funder Research England.

Exercises like the REF—which grades the research of UK universities and determines their funding levels—could be encouraging academics to place more importance on outputs such as journal articles, the survey found, as universities use these measures in promotion and hiring decisions, which could stifle the diversity of research outputs.

The survey was carried out by consulting firm Rand Europe to understand how the research landscape, including national research assessments, may change over the next 5-10 years.

“Importantly, the findings are drawn from the experiences and perceptions of a highly engaged research community,” said Steven Hill, director of research at Research England, one of the agencies that oversees the REF. “It is vital that Research England continues to provide space for the research community to contribute to and inform evidence-based policymaking that affects their sector.”

While the academics surveyed did not expect any dramatic changes in how research is carried out over the next five to 10 years, respondents said they expected to see more variety in research outputs.

The dominant forms of output, such as journal publications and books, are unlikely to change, the survey found. In the 2014 REF exercise, more than 80 per cent of output submissions were journal articles; books and chapters in books made up 13 per cent, while all other outputs accounted for just 6 per cent.

“The vast majority of outputs that have been submitted to research assessment exercises in the United Kingdom (and considered as important and hence used within promotion and hiring decisions) represent a small number of output forms,” the researchers wrote.

Many respondents think that might be about to change. The report said that non-confidential research reports—for example, those written for external bodies—and openly published peer review could become more common research outputs.

However, the survey suggested that the REF itself could be stifling moves towards more diverse outputs, such as code and web content, and said that assessments must be able to “shift with the research landscape”.

“If the increased diversity of output forms is considered valuable to the system and needs assessment, then it may be necessary to consider suitable ways to encourage the submission of these forms of output and ensure appropriate capacity to both assess and ensure confidence in the assessment of these outputs,” the report said.

One important development that did not feature in researchers’ views of the future, says Hill, is the rapid advance of artificial intelligence and machine learning and their potential role in research evaluation.

Simple analytical approaches, such as plagiarism detection, are already in widespread use by journals, he says, while many are experimenting with using machines for more sophisticated tasks such as reviewer selection.

“I think we will also need work to understand the potential role that artificial intelligence might play in the future of research assessment,” Hill wrote in a blog post accompanying the report. “This is not just a simple technological assessment, we also need to start a debate, so that researchers themselves are involved in shaping the use of technology in research assessment.”

The report is part of a wider programme of work commissioned by Research England into the research assessment landscape, intended to provide a foundation for a successor to the current REF, which concludes in 2021.