Research excellence is a highly multidimensional concept in Africa and should be customised to be responsive to local needs, write Robert Tijssen and Erika Kraemer-Mbula.
Research excellence has become a fashionable concept in the world of science funding and assessment. There are many interpretations of excellence and ideas about how it could or should be applied within the African context—often accompanied by passionate pleas for Africa-customised notions. But African research must also try to remain globally competitive.
To unpack the concept of excellence in Africa, we collected data on the perceptions and practices of research excellence in Africa. We carried out two online surveys distributed between October 2016 and February 2017. One survey targeted research-performing institutions, while the other targeted national science granting councils. Respondents represented all four African regions, although North Africa had fewer respondents.
What is excellence?
Excellence is at the heart of many African research funding initiatives, like the African Development Bank and World Bank ‘centres of excellence’ programmes, national initiatives like the South African Research Chairs Initiative, and the recently-established Alliance for Accelerating Excellence in Science in Africa.
Unfortunately, the exact meaning of the word ‘excellence’ is left undefined in most African policy initiatives.
In our surveys, we asked: What criteria would you use to describe an “excellent” researcher?
Respondents placed the highest weight on training and supporting future generations of researchers—a reflection of the severe shortage of research skills on the continent, and one of the main impediments to advancing African scientific performance. Respondents also attached weight to creating new knowledge, producing work with social impact, and being well published. Overall, 18 dimensions of excellence were considered ‘relevant’ or ‘very relevant’, while only three were rated, on average, as merely ‘somewhat relevant’: patenting, continuity of work, and receiving awards.
We then asked: Which performance indicator(s) should the science council in your country apply to assess a research proposal?
In response, respondents qualified ten dimensions as ‘relevant’ or ‘very relevant’. Among these, they emphasised the quality of the proposal in terms of methodology and scientific rigour, followed by its potential for social impact and policy influence. Still valued, but with lower scores, were performance indicators of the researchers themselves (publications and citations), as well as peer-review scores and the credentials of the researchers’ organisation. These results suggest that researchers feel too much weight is given to peer-review scores and to numbers of publications and citations in allocating research funding.
We asked: What performance indicator(s) should the science council in your country apply to assess the quality of research outputs or impacts?
The top three suggested indicators were: creating awareness of societal issues, direct benefits to disadvantaged communities, and new technological developments. This is an indication of the perceived need for a closer connection between research outputs and end users. Respondents also acknowledged publications in top international journals as a relevant indicator of the quality of research outputs and impacts. At the bottom of the list were the direct impacts on the researcher or the research team, such as moving to more prestigious positions nationally and abroad, or winning awards.
Finally, we asked respondents to describe, in their own words, an excellent research output.
The most common answers had to do with its ability to solve a problem, improve the lives of people (particularly those marginalised or disadvantaged), or change policy. When asked which indicators of excellence have been overlooked in mainstream research evaluation, many respondents highlighted economic, social, and policy impacts. Indicators of social impact, in particular, were frequently flagged by the research community as missing.
A muddled picture
The lack of consensus on which performance indicators are most relevant within the African context presents major challenges for developing widely acceptable quantitative indicators for large-scale implementation. At this point, only a very few quantitative indicators seem feasible, and just one is readily applicable for measuring excellence within an African comparative context: highly cited research publications.
Our analysis also suggests that there are still many obstacles to the attainment of research excellence in African science. According to both researchers and science granting council research coordinators, the two largest obstacles are insufficient funding and poor research infrastructure and equipment. Current legal frameworks also constitute a developmental challenge, since they do not explicitly foster the pursuit of research quality through collaboration networks. As a result, a ‘silo mentality’ often prevails in African research, which is seen as a major deterrent to achieving research excellence.
Do we need international quality standards and generally accepted indicators to identify and appreciate research excellence within Africa? Yes, we do. Establishing a broad set of quality dimensions is an essential first step towards appropriate rubrics, associated standardised ratings, and meaningful metrics. But before any process of identifying African research excellence can begin, or appropriate excellence indicators can be selected or designed, one needs a proper understanding of the accountability frameworks in which many African science funding agencies operate.
Any Africa-centric notion of research excellence should go beyond international research publications and scientific impact in the academic community, to embrace the wider impacts of researchers in their local or domestic environments. Truly excellent researchers should also be assessed on their ability to create broader impacts such as science-based teaching and training, fundraising, networking, mobility and cooperation, commercialisation, and innovation. In order to become useful and generally accepted, these indicators need to provide meaningful information, be convincing, and be perceived as fair.
Robert Tijssen holds the chair of science and innovation studies at Leiden University in the Netherlands. He is also an extraordinary professor at Stellenbosch University in South Africa. Erika Kraemer-Mbula is a senior lecturer and research fellow at the Institute for Economic Research at Tshwane University of Technology, South Africa.