
Don’t forget the social dimension of research evaluation

Letting technology drive reform will reproduce old problems, say Anestis Amanatidis and Lottie Provost

There are many initiatives aiming to reshape research assessment worldwide. In Europe, the most prominent include the Coalition for Advancing Research Assessment, the European Research Area policy agenda and the European Council conclusions on open science.

The movement for change has grown out of criticism of evaluation that relies heavily on quantitative publication metrics such as the journal impact factor and the h-index. Misuse of such citation-based measures encourages a publish-or-perish culture with perverse incentives, and the measures themselves fail to reflect the increasingly collaborative, open and interdisciplinary nature of research, or the growing variety of practices and outputs.

Policy reforms aim to create cultural change: making assessment fairer, more humane, more in line with the changing nature of science and better able to serve society. Openness—broadly the idea that research should be as collaborative, accessible, shareable and reusable as possible—is central to this.

But despite this cultural framing, the social aspects and effects of how openness in research is evaluated remain under-explored. European institutions have reached broad agreement on what healthy research evaluation should look like, but they have paid little attention to what advancing specific ideas of openness implies for building that culture. Unless this changes, new approaches to assessment will reproduce old problems.

Infrastructure importance

A range of infrastructures are under construction to enable the high-level goals of reform. Many are funded through the EU’s Horizon Europe research and innovation programme, with the aim of enabling “a rewards and recognition system based on a new generation of…metrics and indicators, leading to a culture and system change”.

We are participating in one such project, called GraspOS. This aims to build a federated, open infrastructure hosting tools and data that permit diverse and flexible approaches to evaluation. 

The project was born out of the observation that, too often, the development of evaluation infrastructures is driven by the possibilities offered by commercial data sources, such as Scopus and Web of Science, rather than by the needs of researchers and evaluators. (Web of Science is owned by Clarivate. Research Professional News is an editorially independent part of that company.)

Research information infrastructures shape what research practices are valued. Anthropological research on infrastructures shows that they perform a lot of invisible work through the ideas inscribed in their design—much urban policy and design, for example, works to displace homeless people from public spaces. Which practices infrastructures enable, and which they hinder, depends as much on human organisation as on technical capabilities.

Overlooking this social dimension has left the movement to reform research assessment in Europe overly reliant on developing the technical aspects of open science.

Starting from what the technology can do, rather than what its users need, risks reinforcing narrow conceptions of openness. This is especially so if these infrastructures focus on the dissemination of publications as a scientific virtue. The idea that open-access publishing is synonymous with open science is already alarmingly powerful.

Defining openness

If narrow ideas of openness are moulded into the tools and processes of evaluation, the capacity to measure researchers’ activities may come at the expense of the ability to drive the desired cultural change and to assess the health of the research system, in terms of who is included and excluded, privileged or disadvantaged.

Funders and policymakers aiming to reform research assessment in the light of open science need to give more consideration to notions of openness. A crude or narrow definition will lead to evaluation and monitoring tools that either go unused or are used despite not fitting the objectives of evaluation.

This would have unintended and dangerous consequences. Counting open-access publications as a proxy for research excellence or quality, for example, fails to take into account the sharply rising costs of open-access publishing, reproducing old patterns of systemic inequality.

If reforms are indeed to herald a cultural change that fosters ‘good’ evaluative practices across open research cultures, the technical infrastructures that carry future research assessments need to be malleable enough for appropriation and adaptation across diverse research communities. Only then will these infrastructures be truly open. 

Anestis Amanatidis is in the Centre for Science and Technology Studies at Leiden University in the Netherlands. Lottie Provost is at the Antonio Zampolli Institute of Computational Linguistics at the National Research Council of Italy in Pisa.

This article also appeared in Research Europe