
Ten rules for ranking universities


Rankings should be designed with transparency and used with care, say Ludo Waltman and his colleagues.

Despite being controversial, university rankings can have a considerable influence on institutions’ decision-making. To mark the recent release of the CWTS Leiden Ranking 2017, a bibliometric ranking of major universities worldwide compiled by our centre, we have set out 10 principles for the responsible design, interpretation and use of university rankings. These may not be comprehensive, and feedback would be much appreciated.

1. One size doesn’t fit all

Several rankings claim to offer a single, definitive measure. They do this by combining dimensions of university performance in a rather arbitrary way. Yet whether a university is doing well depends on what interests you. Some may be strong in teaching, others may excel at research. There is no way to weigh a good performance in one dimension against a poorer performance in another.

2. Separate the relative from the absolute

Some indicators reflect a university’s overall output. Others show achievements relative to its size or resources. Combining the two, as some rankings do, makes no sense. Constructing indicators of relative performance is particularly challenging: they require accurate, standardised data on, for example, a university’s research workforce, and such data are very difficult to obtain.
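
To make the distinction concrete, here is a minimal sketch with invented figures for two hypothetical institutions. It contrasts an absolute, size-dependent indicator (the number of highly cited publications) with a relative, size-independent one (the proportion of highly cited publications); the two can order the same institutions in opposite ways.

```python
# Invented figures for two hypothetical institutions, purely illustrative.
universities = {
    "Large University": {"publications": 10_000, "highly_cited": 1_000},
    "Small Institute":  {"publications": 1_000,  "highly_cited": 200},
}

for name, stats in universities.items():
    absolute = stats["highly_cited"]                          # size-dependent indicator
    relative = stats["highly_cited"] / stats["publications"]  # size-independent indicator
    print(f"{name}: {absolute} highly cited papers, {relative:.0%} of total output")

# The large university leads on the absolute indicator (1,000 vs 200),
# while the small institute leads on the relative one (20% vs 10%),
# which is why mixing the two in a single score makes little sense.
```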

3. Be explicit about the definition of a university

A consistent definition of what constitutes a university is a major challenge. There is much worldwide variation, for instance, in how hospitals are associated with universities. Perfect consistency at an international level is not possible, but rankings should explain their definition.

4. Be transparent

Users of rankings require at least a basic understanding of their design. Rankings therefore need to explain their methodology. Ideally, they should also make their underlying data available. Users could then see, for example, not only how many highly cited publications a university has produced but also what they are. Most rankings do not do this, because of the proprietary nature of data and the commercial interests of rankers.

5. Compare and contrast

Universities are unique. A university in the Netherlands is expected to be more internationally oriented than one in the United States. A university focusing on engineering will have stronger ties to industry than one active mainly in the social sciences. Some indicators adjust for differences between disciplines; others do not. Interpreting rankings requires careful consideration of such contexts.

6. Acknowledge uncertainty

Rankings are subject to uncertainty. Indicators are typically proxies—citation statistics are only a partial reflection of scientific impact. Rankings are also influenced by data inaccuracies, and coincidental events may affect a university’s performance. Such uncertainties may be partly quantifiable, but they must largely be assessed intuitively, meaning that small differences and minor fluctuations over time are best ignored.

7. Look at the underlying data

The very term ‘university ranking’ risks emphasising institutions’ relative positions above the underlying indicators. This can be misleading, as small differences in performance can lead to large differences in rank—the university at position 200 in our ranking has only 10 per cent more highly cited publications than that at 300. A university’s position may also fall when the number of institutions included in a ranking increases.
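
As a rough illustration, with synthetic numbers rather than Leiden Ranking data, the sketch below shows how a modest difference in an underlying indicator can translate into a gap of a hundred rank positions when many universities are packed closely together.

```python
import random

random.seed(0)

# Synthetic indicator values for 500 imaginary universities, purely illustrative:
# counts of highly cited publications drawn from a skewed distribution.
scores = sorted((int(random.lognormvariate(5, 0.3)) for _ in range(500)), reverse=True)

rank_200, rank_300 = scores[199], scores[299]
print(f"Rank 200: {rank_200} highly cited publications")
print(f"Rank 300: {rank_300} highly cited publications")
print(f"Relative difference: {(rank_200 - rank_300) / rank_300:.0%}")

# With scores this closely spaced, dropping 100 places in rank can
# correspond to only a small difference in the indicator itself.
```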

8. Not everything that counts can be counted

Rankings focus on things that are relatively easy to quantify. Our Leiden Ranking, for instance, focuses on specific aspects of scientific performance. Others have a broader scope, but none covers all the relevant dimensions of university performance. Teaching and societal impact, for example, are typically not well covered. And while some aspects of scientific performance, such as scientific impact and collaboration, can be captured well, quantifying other dimensions, such as productivity, is more difficult.

9. Know your level

Performance criteria relevant for universities as a whole are not necessarily relevant for research groups within a university. Publications co-authored with industry, for example, will say little about research groups in areas with little potential for commercial applications. Universities may be tempted to mechanically apply performance criteria to lower levels, but they should resist doing so.

10. Handle with care, but don’t discard

Used responsibly, university rankings may provide relevant information to universities, students, funders and governments. They may help with international comparisons and aid decision-making. Their limitations and the caveats in their use, however, should be continuously emphasised.

Ludo Waltman, Paul Wouters and Nees Jan van Eck are at the Centre for Science and Technology Studies (CWTS), Leiden University. This article is based on a post on the CWTS blog.


This article also appeared in Research Europe