A system used by South Africa’s National Research Foundation (NRF) to rate researchers is biased against some disciplines and researchers who co-publish.
This is the message of a paper, ‘The objectivity of National Research Foundation peer review in South Africa assessed against bibliometric indexes’.
The paper, published in the journal Scientometrics on 2 March, says researchers who co-publish are more likely to receive a B-rating than an A-rating. A B-rating denotes considerable international recognition, while an A-rating, the highest category, is reserved for leading scholars.
“The NRF evaluation process appears biased against disciplines in which multi-author publications are the norm, as well as multi-disciplinary work, which is inherently collaborative,” said Johannes Wolfgang Fedderke, an economics professor at the Pennsylvania State University in the US, who wrote the paper.
Researchers in the physical sciences are also more likely to receive an A-rating than those in the medical sciences, business studies or the social sciences, even when they produce the same number of publications.
“It is not clear why [biological and social sciences] have a considerably lower probability of receiving an A-rating,” says the paper, which assessed the results of the review process for 1,932 scholars with NRF ratings.
The problem is exacerbated by the lack of transparency in the review process. Reports prepared by the specialist committees after evaluating applications from researchers who want to be rated are confidential and can’t be assessed for objectivity, says Fedderke, who is also a former board member of the NRF.
“Since both the peer reports and the deliberations of the specialist committees are confidential, the grounds for the ratings reported, the rigour and consistency of assessment cannot be assessed for objectivity and accuracy,” the paper says.
The NRF, however, has criticised the paper, saying that it is itself outdated.
Dorsamy Pillay, deputy chief executive officer for Research and Innovation Support and Advancement at the NRF, says that because Fedderke collected his data in 2009 but did not publish until 2013, the paper does not take into account subsequent changes the NRF has made to its rating system.
“[A] lot of things he refers to are historical. We are now adjusting to take into account the contribution of researchers who work in groups,” he says.
Pillay also says Fedderke, who had been given a B-rating and had unsuccessfully appealed for an A, bases his argument on his own perception.
“This is a voluntary system. How come the number of researchers who ask to be rated has increased from 500 to 2,600 with a third of those coming from social sciences if the system is flawed and biased?” he asks.