Put metrics in context

Increased sophistication doesn’t always improve evaluation. Measures of research need to be more transparent and easier to use, says Ludo Waltman.

Citation metrics, such as the journal impact factor, the h-index and others, are omnipresent in the evaluation of scientific research. Ten years ago, I joined the Centre for Science and Technology Studies (CWTS) at Leiden University, the Netherlands, a research centre that has made influential contributions to the study and use of these measures.

Over the past decade, I have criticised the h-index, which combines the number of publications a researcher has produced with the number of citations those publications have received, for the inconsistent way in which it ranks researchers.
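To make the definition concrete, here is a minimal sketch of the calculation, not any particular database's implementation: a researcher has an h-index of h if h of their publications have each been cited at least h times.

```python
def h_index(citation_counts):
    """Return the largest h such that at least h publications
    have been cited at least h times each."""
    h = 0
    for rank, count in enumerate(sorted(citation_counts, reverse=True), start=1):
        if count >= rank:
            h = rank
        else:
            break
    return h

# Example: five papers cited 10, 8, 5, 2 and 1 times give an h-index of 3,
# because three of them have at least three citations each.
print(h_index([10, 8, 5, 2, 1]))  # -> 3
```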

I have contributed to the development of advanced citation metrics such as field-normalised metrics, used in the Scopus and Dimensions databases and in analytical tools such as InCites and SciVal, and the Source Normalised Impact per Paper (SNIP), a metric for journals used by Scopus.

I have criticised the Relative Citation Ratio (RCR), a metric developed by the US National Institutes of Health, for its inherent bias against interdisciplinary publications, and I have co-authored the Leiden Manifesto for responsible research metrics.

Looking back, the key question is whether the work that other bibliometricians and I have done has really improved how research is evaluated.

To an extent I think it has. Citation metrics have become more accurate, reducing the risk that their users will draw the wrong conclusions. For instance, in the CWTS Leiden Ranking, a university ranking published annually by my centre, we have taken major steps to improve accuracy.

However, something has been lost: the price of increased technical sophistication is a loss of transparency and ease of understanding. Increasingly, research metrics are black boxes that conceal their underlying assumptions and limitations.

As a result, the generally accepted principle that research metrics should support expert judgment is increasingly challenging to implement. The complexity and opacity of research metrics make it difficult for experts to link quantitative information from metrics to their own qualitative judgment.

For instance, the RCR is so complex that an in-depth interpretation of RCR scores is almost impossible. It is very difficult to tell whether a high score reflects exceptional performance or is just an artefact of the methodology.

Black-box research metrics are especially problematic when applied to individual researchers or groups, where evaluation should be based primarily on expert judgment. At these levels, there is an urgent need for a new approach that I call contextualised bibliometrics.

Here, the goal is not to maximise accuracy. Instead, it is to provide transparent and understandable metrics that support responsible evaluation practices based on qualitative and quantitative information from a broad range of sources.

Transparency

Contextualised bibliometrics recognises that experts should be able to put metrics in a broader context, linking information from metrics to their own judgment.

This means that metrics should be easy to understand. The data underlying a metric should be accessible in online tools, for experts to explore in as much detail as they wish, for instance using interactive visualisations.

In contextualised bibliometrics, instead of evaluating individual researchers with complex metrics such as field-normalised citation counts or RCR, more straightforward metrics are used, such as simple citation counts.

Such measures may be less accurate, but their transparency and comprehensibility enable experts to reflect deeply on the information they provide.
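As a rough illustration of this idea, and not of any existing tool, the sketch below reports plain citation counts per researcher while keeping the per-publication data visible, so an expert can see what lies behind each total; the names and numbers are invented.

```python
# Hypothetical per-publication records for two researchers; the point is that
# the underlying data stays visible next to the simple summary metric.
publications = [
    {"researcher": "Researcher A", "title": "Paper 1", "citations": 45},
    {"researcher": "Researcher A", "title": "Paper 2", "citations": 3},
    {"researcher": "Researcher B", "title": "Paper 3", "citations": 16},
    {"researcher": "Researcher B", "title": "Paper 4", "citations": 14},
]

# Simple, transparent summary: total citations per researcher.
totals = {}
for pub in publications:
    totals[pub["researcher"]] = totals.get(pub["researcher"], 0) + pub["citations"]

for researcher, total in totals.items():
    print(f"{researcher}: {total} citations in total")
    for pub in publications:
        if pub["researcher"] == researcher:
            print(f"  - {pub['title']}: {pub['citations']} citations")
```

The totals themselves are easy to interpret, and the listing underneath reveals, for instance, that Researcher A's count is dominated by a single highly cited paper, the kind of nuance an expert can weigh in their own judgment.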

Ideas similar to contextualised bibliometrics are gradually being adopted. Clarivate Analytics and Elsevier, the producers of, respectively, the Web of Science and Scopus databases, have taken some steps to increase the transparency of citation metrics for journals. Recently, Clarivate has proposed moving from metrics to bibliometric profiles, which can be seen as a step towards contextualised bibliometrics.

However, realising the promise of contextualised bibliometrics will require more ambitious steps. Most importantly, it requires open bibliographic metadata, as promoted by the Initiative for Open Citations. This openness is essential to increase the transparency of bibliometric analyses. 

The past 10 years have seen technical improvements made to citation metrics and research metrics more generally. The next decade will hopefully bring a similar level of development, but focused on contextualisation, transparency, and openness rather than further technical sophistication.

Ludo Waltman is professor of quantitative science studies and deputy director at the Centre for Science and Technology Studies at Leiden University, in the Netherlands

This article also appeared in Research Europe