One-size rankings don’t fit all

Universities must be assessed in many dimensions if comparisons are to be fair. The EU-funded U-Multirank aims to do just that, says Gero Federkeil.

Ten years ago, Shanghai Jiao Tong University launched the Academic Ranking of World Universities. It was the first attempt to create an international measure of universities; there are now more than a dozen, produced by companies and universities on four continents.

Such rankings both reflect and reinforce the growing international competition between universities. They are exerting an increasing influence on individual and institutional decisions, funding, international collaborations and national higher education policy—the Indian government, for example, has instructed its universities only to form partnerships with the top 500 universities in the Shanghai ranking.

All the international rankings so far are based mainly on research activity, which is usually measured with bibliometric indicators such as publications and citations. In effect, this measures strength in science and particularly biomedical science. Not all fields are well represented in bibliometric databases—even the best departments in the social sciences and humanities, for example, perform poorly by such measures. And by combining differently weighted measures into a single composite score, these rankings lose robustness: small changes in weighting can cause large shifts in league table positions.

Existing rankings, then, fail to measure many important things that universities do, and distort the picture of aspects that they capture. To produce a fairer and more transparent measure of the international higher education system, the European Commission launched the U-Multirank project in 2009. A feasibility study ended in 2011, and the Commission is funding a two-year implementation project.

U-Multirank was officially launched late last month, and a first edition will be published in early 2014, followed by annual updates. The project involves a consortium of 17 partners from nine countries, led by the Centre for Higher Education in Gütersloh, Germany, and the Center for Higher Education Policy Studies at the University of Twente in the Netherlands.

U-Multirank will gauge an institution’s strength as a whole and in individual fields, initially mechanical and electrical engineering, business and physics. The first edition will cover at least 500 higher education institutions in Europe and beyond, with fields and institutions added continuously. Indicators are based on databases, such as bibliometric and patent data, self-reported data from institutions and an international student survey. Unlike other global rankings, U-Multirank will also measure teaching and learning, knowledge transfer, international orientation and regional engagement.

Rather than presenting a single league table based on composite indicators, U-Multirank’s output is driven by the user. A web tool allows users to choose how they want to measure universities. So, a student trying to choose the best place to study physics, a researcher wanting to compare her university with others, or a policymaker interested in the relative performance of institutions will each be able to find the relevant information.

U-Multirank will also allow users to analyse a broader range of institutions than internationally oriented research universities. And it will use a number of indicators to compare only institutions with similar profiles, because it makes little sense to compare a small, regional, undergraduate teaching institution with the University of Oxford, or an art school with MIT.

Finally, instead of producing over-simplified league tables, U-Multirank will create rank groups for each indicator. League tables make for good headlines, but they exaggerate differences in performance between universities and give a false impression of exactness.

Such an approach presents challenges. The feasibility study revealed the limited availability of reliable and internationally comparable data for some indicators, particularly on knowledge transfer and regional engagement. Concepts and definitions must be refined continuously, and data sources improved, in collaboration with institutions. New indicators are needed for disciplines in the humanities and social sciences, particularly to measure research performance.

U-Multirank’s success will depend on its ability to sample enough diverse institutions at both regional and national level. In the long run, the aim is to produce a ranking that provides comprehensive coverage of European higher education and allows comparison with a sample of relevant non-European institutions.

For all their intuitive appeal, university rankings have been more a marketing tool than a true measure of institutional quality. They have also created an arms race to become a ‘world-class university’, which threatens the diversity of higher education as it devalues institutions other than the internationally oriented, comprehensive research university.

If rankings are to be relevant, and support diversity at the same time, they must become multidimensional and encompass a broader range of institutions.

More to say? Email comment@ResearchResearch.com

Gero Federkeil is project manager responsible for international ranking at the CHE Centre for Higher Education in Gütersloh, Germany.