Policies to counter nepotism influence evaluation processes and attitudes to reform, says John Whitfield
Last November, two Spanish science-policy researchers, Ismael Rafols and Jordi Molas-Gallart, drew attention to the fact that in Spain, university hiring and promotion decisions are signed off by government agencies at national or regional level.
This system, they wrote, was “introduced in the 2000s to reduce nepotism” but has resulted in the agencies taking a “rigid and standardised” approach based on journal rankings. Universities, they argued, need more autonomy.
Theirs is one of many such calls in Europe for researcher careers to be de-linked from sometimes crude metrics.
At present, the likeliest vehicle for such reform is the Coalition for Advancing Research Assessment (Coara). The coalition's agreement calls on signatories to “move away from inappropriate uses of metrics”, to “broaden recognition of the diverse practices, activities and careers in research”, and to allow for differences between disciplines, cultures and places.
Since it was finalised last year, it has attracted signatures from organisations across Europe. But differences in how nations and institutions approach evaluation are not just an internal issue for research. They depend on society and history more broadly—particularly social norms and the rule of law.
The historical rationale for Spain’s approach makes it all the more interesting that while Spain’s CSIC network of publicly funded laboratories and many universities have signed Coara, its main public funder, the State Research Agency (AEI), has not.
Sources point to differences of opinion at different levels of the Spanish system, with the appetite for change declining the closer one gets to government.
A diversity of views across different bodies might lead to some welcome flexibility, but it might also create inconsistencies that could leave researchers unclear what is required of them.
Asked why it had not signed Coara, the AEI pointed out that it has signed the San Francisco Declaration on Research Assessment, which has similar goals to Coara, and that it no longer uses metrics such as journal impact factors or h-index to evaluate individuals.
“The AEI is firmly committed to fighting against the undesired effects of the publish or perish philosophy,” it said. “However, as the country’s main funding agency, it sees a certain risk in committing to follow procedures that may be too distant from our reality in Spain and making commitments that could be detrimental to our science.”
So far, national funders in 15 EU member states, along with the UK, Switzerland and Norway, have signed Coara. One holdout is Italy, where policymakers have also sought to combat nepotism in academic appointments. The country has given metrics a relatively prominent role in its national evaluation and in the habilitation process needed to become a university professor. Its national funder, the Ministry of Universities and Research, did not respond to a request for comment.
Policymakers’ engagement with research evaluation partly reflects the strength of a nation’s research system. Unsurprisingly, the EU national funders that have signed Coara are mostly from the member states with the highest R&D spending as a proportion of GDP.
But there’s also a strikingly good match with a country’s position on the Corruption Perceptions Index published by Transparency International at the end of January. Most countries where national funders have signed Coara score well on the index. Only two such countries—Slovenia and the Czech Republic—have worse scores than the highest-ranked non-signatory, which is Spain.
Otherwise, nations without national funder signatories sit squarely below those with them on this proxy for corruption. This is not to point to some funders as suspect and others as exemplars. Italy’s network of public research institutes, the CNR, and its research-evaluation agency, Anvur, have both signed Coara.
The AEI’s stance shows that declining to sign does not signal a lack of reflection or engagement, and Coara does not have a monopoly on the issue. But it does hint at potential complications that may loom larger, particularly if Coara becomes more global.
The coalition wants assessment to be based “primarily on qualitative judgment”. But, as others have observed, in places where the qualities needed to get ahead in academia don’t necessarily include being good at research, researchers tend to be more pro-metrics, seeing measurements as preferable to nepotism and patronage.
Coara will need to be alive to such issues. It will also need to deal with policymakers who might be reluctant to let go of the reins of evaluation.
John Whitfield is opinion editor at Research Europe
This article also appeared in Research Europe