
Rankings could undermine research-evaluation reforms


League tables threaten EU plans to end culture of publish-or-perish, says Silvia Gómez Recio

For European universities and researchers, one of the major policy currents of 2022 looks set to be research-assessment reform. The European Commission highlighted this as a key area in the European Research Area policy agenda, published in November, and it is now moving to tackle the issue together with member states and stakeholders.

Too often, the quality of research is still measured by bogus proxies, such as the number of articles published and the perceived quality of the publishing journal. But there is one metric that tends to be overlooked in this debate: international university rankings. This omission is worrying, given that the current design of rankings risks hindering efforts to improve research evaluation.

In a nutshell, the planned reforms aim to create a more comprehensive research-evaluation system. This means moving away from a simplistic focus on numbers to more qualitative approaches, such as peer review and interviews. It means emphasising aspects such as interdisciplinarity and collaborative research methods, while embracing the diversity of research outputs and assessment methods among disciplines.

Improving research evaluation should also aid the continuing advancement of open science. The goal of making publicly funded research more accessible, reusable and transparent has become familiar in universities and research institutions. But it is not yet mainstream, and faces challenges ranging from a lack of infrastructure and training to profound issues of mindset and research culture, in which assessment criteria and processes play a key part. 

At EU level, the process has begun by assembling a coalition of the willing, in the shape of research institutions, funding organisations and others. These will commit to fostering initiatives and making concrete changes, in dialogue with member states and national agencies involved in research evaluation. Their approaches and experiences will then be shared with the wider community to spur common understanding and wider uptake.

These European-level efforts will add to the gathering momentum for more rigorous and fairer ways to evaluate research. Thanks to initiatives such as the San Francisco Declaration on Research Assessment, more and more institutions are turning away from simplistic indicators such as journal impact factors as a measure of the quality of research.

University rankings are meant to provide a comparison between institutions based on indicators such as citation counts and student-to-staff ratios. 

In reality, they have a tremendous impact on the public perception of the quality of institutions and their research. 

Rankings may not be directly linked to institutional funding and resources, but they affect these indirectly by swaying the choices of students, researchers, staff, institutions’ leaders, companies and global partners. 

Similar to journal impact factors, however, rankings provide an at-best-incomplete picture of a university’s quality. Simple changes in the way their data, mostly quantitative indicators, are compiled, collected and analysed can affect the final results, as shown by institutions’ differing positions in different rankings.

Nuances and caveats are easily lost. To the public and the academic community, what is visible is simply that a given university is “number one”. What this actually means often goes unquestioned; all that matters is where one’s institution stands and how to climb higher.

In this quest for performance, leaders and managers in universities look at the different indicators used and how they might improve them within their institutions. One such indicator is the number of publications in high-impact journals.

This creates a dilemma for researchers and institutions. While some forms of assessment move away from article counts and impact factors, a university’s position in international rankings still depends on abiding by the old publish-or-perish culture.

This gives these rankings a worrying, indirect influence on research-assessment systems. If neither the rankings nor institutions’ approach to them changes, the reform process led by the Commission and the coalition of the willing may be thwarted.

Ideally, we should move towards different types of rankings that, rather than assigning a single absolute position, offer the opportunity for comparison using different parameters. Examples include the Leiden Ranking and the EU-funded U-Multirank. Other international university rankings should take account of the desire for, and the goals of, a new approach to research assessment, and change their methods accordingly.

Silvia Gómez Recio is secretary-general of the Young European Research Universities Network

This article also appeared in Research Europe