Metrics-based REF could lead to dirty tricks, review told

Using metrics as the basis for research assessment could lead to game playing by academics seeking to increase their citation counts, researchers have warned.

Academics have expressed their concerns about the formation of ‘citation rings’—in which groups of researchers agree to cite each other in order to increase their citation counts—in their responses to an ongoing review of research metrics.

The review, commissioned by the Higher Education Funding Council for England and led by James Wilsdon of the Science Policy Research Unit at the University of Sussex, had invited responses to a consultation that closed on 30 June. HEFCE published the summary of the responses on 5 November.

The majority of respondents said that metrics were often unfair and could be useless for certain disciplines. Fifty-seven per cent of submissions to the HEFCE review could be classified as sceptical about the use of metrics, according to HEFCE. This included responses from 28 higher education institutions and 24 learned societies.

Thirty-five of the 153 responses said that metrics could unfairly disadvantage some disciplines that typically publish fewer papers than the natural sciences, particularly in the arts, humanities and social sciences. Some responses said that metrics were useless for law, English literature, nursing and criminology.

The problem also affects the field of mathematics, according to mathematician Michael Dreher from Heriot-Watt University in Edinburgh. “It is much easier to write journal articles in the sub-discipline numerics than in the sub-disciplines algebra or analysis, because algebra or analytic papers contain proofs, and proofs must be logically complete and correct (otherwise the proof simply does not exist). Therefore numericists have longer publication lists than analysts or algebraists with comparable academic age,” he said.

One response noted that Australia has reflected disciplinary differences by using peer review to measure quality in some disciplines and metrics in others.

Further scepticism was expressed in responses on behalf of individuals who may be unfairly affected by the use of metrics because of personal circumstances, such as early-career researchers and women, who, data show, are less likely to be cited than men and less likely to cite themselves. The University of Bristol called on HEFCE to provide evidence of how metrics would affect women and under-represented groups before implementing any metrics-led research assessment.

Not all respondents were negative. Just under one-fifth of respondents said they would welcome more use of metrics in research assessment. Anglia Ruskin University said that judicious use of journal impact factors could be justified for the next REF or its equivalent. And the University of Southampton argued that metrics could be seen as a fairer and more objective method of assessment because “metrics are arguably more transparent than peer review as the basis for the score/grading can be verified independently”.

Wilsdon’s group is expected to publish its final report in spring 2015.