Tempest in the rankings teapot

There are many reasons why the world, especially Africa, would be well served to ignore university rankings, writes Damtew Teferra.

It is that season when ranking entities announce their “findings” on the comparative stature of the world’s universities. Almost certainly, the “premier” universities remain at the top and the rest are relegated to the bottom, African universities in particular. The “rankers” go about their business, some with audacity, but too often without sufficient concern for the veracity, authenticity or integrity of their methodologies or, especially in the case of Africa, for the lack of data.

Facts versus perceptions

For the last three years the University of KwaZulu-Natal in South Africa has stood at the top of the country in academic productivity as measured by the Department of Higher Education and Training. The Department undertakes the task using parameters that meticulously measure research and academic outputs.

Yet according to the newly released QS rankings, which allocate 60 per cent of their weighting to reputation, the university now stands below six other South African universities. This points to a glaring tension between hard data and dubious assessment based on reputation.

Building reputation: unpacking the numbers

The QS ranking is ostensibly a mix of survey responses and data across six indicators, compiled and weighted to formulate a final score. It claims that over 70,000 academics and 30,000 employers contribute to the rankings through the QS global surveys. QS states that it analyses 99 million citations from 10.3 million papers before 950 institutions are ranked.

Times Higher Education states that its methodology is a unique piece of research involving “questionnaires [that] ask over 10,500 scholars from 137 countries about the universities they perceive to be best for teaching and research”. It claims that the Academic Reputation Survey “uses United Nations data as a guide to ensure that the response coverage is as representative of world scholarship as possible”. It goes on to state that where countries were over- or under-represented, the responses were weighted to “more closely reflect the actual geographical distribution of scholars”, casting more uncertainty over the shifting parameters of the rankings.

There appears to be a conflation of “world scholarship” with the “geographical distribution of scholars”, without clearly defining what a “scholar” or “scholarship” is. China, India and Brazil may have the largest numbers of “scholars”, and by that account the most scholarship, yet they barely make it to the top of the rankings.

According to the Times, only 2 per cent of the survey participants were Africans, presumably located on the continent. As about 50 per cent of research in Africa is undertaken in South Africa, one presumes that the share of participants from the rest of the continent tapers off to one per cent. That is, around 100 academics in Africa outside South Africa would have participated in the reputation survey, “evenly spread across academic disciplines”. For the 11 disciplines considered in the Times rankings, that works out to about 10 responses per discipline from the whole of Africa outside South Africa.
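
A rough back-of-envelope check, using only the figures cited above and treating the 10,500 scholars reported by Times Higher Education as the survey base, runs as follows:

2 per cent of 10,500 ≈ 210 African respondents
about half presumed to be in South Africa → roughly 105 in the rest of Africa
105 respondents ÷ 11 disciplines ≈ 10 responses per discipline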

Massive expansion

Indeed, rankings are largely about reputation. According to QS, reputation is a calculation with 40 per cent from academics and 20 per cent from employers. An institution improves its position in the rankings if it scores well on these two perception-based indices.
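
On the article’s own figures, the perception component of an institution’s overall QS score can be sketched as a simple weighted sum:

reputation score = 0.40 × (academic survey result) + 0.20 × (employer survey result)

In other words, 60 per cent of the final score rests on opinion rather than on measured output.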

The reasons why the world, especially Africa, would be well served to ignore these rankings are numerous. Consider the QS ranking, which puts considerable weight on the student-to-faculty ratio.

Without exception, the African higher education sector is expanding massively. This has created very high student-to-staff ratios, confronting African institutions with a difficult choice if improving their standing in the rankings matters: either freeze expansion or increase the number of academics. The latter would require massive investments, creative policies and long-term commitments that few institutions are positioned to contemplate.

Other parameters used in the rankings are the international faculty ratio and the international student ratio. In sub-Saharan Africa, only South Africa, Botswana and, to some extent, Namibia attract international faculty, mostly from elsewhere on the continent. This remains a dream for the rest of Africa.

Likewise, the percentage of international students is another ranking criterion used by QS and others. The number of African countries that attract international students is very small: it includes South Africa, Ghana, Kenya and Uganda. With the exception of South Africa, virtually all of these “international” students come from other African countries. Even when students do enrol from overseas, it is typically only for a semester or two.

The nature of these rankings ensures that the institutions at the top are mostly from the US, year in and year out. A review of the rankings published by Times Higher Education suggests the same holds for those in the “middle” and at the “lower” end of the list, where some may have moved up a notch and others down.

Emphasising reputation-based criteria does not affect the standing of those established at the top. These institutions tend to be immune to strikes, financial strain, internal strife, or other critical challenges faced by institutions in the developing world.

Manipulating the rankings

Some enterprising entities, calling themselves data analysts, are already emerging to “help” African institutions do better in the rankings. One flagship university in East Africa is suspected of pursuing that approach, for which it reportedly paid a hefty service fee.

The aggressive positioning of these entities masquerading as service providers—often at major events where senior institutional administrators meet—is nothing more than a swindle. Institutions should use their limited resources effectively rather than pursue shortcuts to an improved ranking.

The option of withdrawal

Over a year ago I got a phone call from a vice-chancellor at a university in South Africa who suggested coordinating a withdrawal from the rankings by the country’s institutions. The proposal was to encourage all universities in the country to refuse to participate and instead to dedicate all their resources, energy and time to more relevant concerns. Rhodes, one of the premier universities in South Africa, already refuses to participate in the rankings, so a precedent exists.

An international roundtable on rankings, supported by the Peter Wall Institute for Advanced Studies at the University of British Columbia, Canada, took place in Vancouver in May 2017. The roundtable deliberated on the scope and significance of university rankings and proposed concrete actions and interventions for the future.

‘No meaningful purpose’

As Philip G. Altbach, an internationally recognised scholar of international higher education, has observed, rankings are not disappearing anytime soon. As more rankings join the fray, they will generate ever more buzz to ensure their survival and influence.

Numerous ranking entities generate multiple findings related to the reputation of institutions. As the former vice-chancellor of Rhodes University, Saleem Badat, stated, “Rankings, in their current form, serve no meaningful educational or social purpose,” but the tempest in the rankings teapot continues undeterred.

Damtew Teferra is professor of higher education and leader of the Higher Education Training and Development department at the University of KwaZulu-Natal in Durban, South Africa. This article was originally published in Inside Higher Ed.