
South African scheme shows flaws in superstar funding

Massive awards have not increased productivity. Money should be assigned more carefully and spread more broadly, argue Johannes Fedderke and Marcela Goldschmidt.

The South African Research Chairs Initiative, launched in 2006 by the country’s National Research Foundation, concentrates funding on a small number of researchers judged by peer review to be world class. Chair holders receive between $150,000 (€133,000) and $300,000 a year for five years, renewable for up to 15 years. In contrast, researchers outside the scheme receive, at most, about $10,000 a year from the NRF.

The initiative grew from 32 chairs in 2007 to 150 in 2014. In a 2012 review, the NRF declared the scheme to be “an imaginative and largely successful innovation”. It reported that there was “good evidence that chair holders and their colleagues are contributing to an increase in the flow of publications, including those to prestigious journals”, but did not provide any data on this point. It also noted that more than 90 per cent of the chairs reviewed had been renewed.

The initiative offers an opportunity to measure the effect of funding allocation on research output. We recently compared the productivity of 80 chair holders with that of equivalent researchers without such funding, from 2009 to 2012. We found that, in bibliometric terms, chair holders were scarcely, if at all, more productive.

We used two types of control group in our comparison. First, we used bibliometric measures such as publication counts, citation counts and h-index scores (the h-index calculation is sketched below) to construct groups of researchers whose past performance was comparable to that of the chair holders.

Second, the NRF ranks researchers in a number of categories, based on peer review. This ranking is independent of the selection mechanism for research chairs, although chair holders are also ranked within it. Categories A and B are held to indicate world-class research, making these researchers an obvious control group against which to compare chair holders.
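As an aside on the first of those groupings, here is a minimal sketch of how an h-index can be computed from a researcher's citation counts (the function name and example figures are ours, for illustration only):

    def h_index(citations):
        """Return the largest h such that the researcher has at least
        h papers cited at least h times each."""
        ranked = sorted(citations, reverse=True)
        h = 0
        for rank, cites in enumerate(ranked, start=1):
            if cites >= rank:  # the paper at this rank still has enough citations
                h = rank
            else:
                break
        return h

    # Five papers cited 10, 8, 5, 4 and 3 times give an h-index of 4.
    print(h_index([10, 8, 5, 4, 3]))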

We found that, despite a funding advantage of at least 15:1, chair holders showed no statistically detectable superiority in performance. On average, they authored no more articles and were cited no more often than the researchers in either the A-rated group or the bibliometrically defined groups.
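The ratio follows directly from the figures above: at the lower bound of the chair awards, $150,000 a year against roughly $10,000 a year for researchers outside the scheme gives 150,000/10,000 = 15, and at the upper bound the advantage reaches 30:1.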

The chair holders who most outperformed the control groups were those who had performed best, and been rated most highly, before the funding award. By contrast, chair holders with relatively weak prior records performed worse than those in the control groups.

Strikingly, more than half of the chair holders in our sample were ranked below the A and B categories, indicating a lack of international peer recognition. Conversely, the researchers in our sample with the weakest bibliometric records were more likely to be chair holders. The peer-based selection of research chairs thus appears to have been biased away from its stated goal of rewarding research excellence.

The effect of funding varied across disciplines. Only chair holders in the biological, medical and physical sciences showed a statistically significant improvement in output. There was a weak effect in the chemical sciences and engineering, and none at all in business, economics, the social sciences and the humanities.

This analysis was not designed to capture other possible benefits of the chairs, such as economic and social impact, growth in graduate student numbers or capacity building. But there are immediate policy inferences to be drawn: our results show that selective funding yields greater returns the more closely it tracks prior research performance. Funding needs to go to the strongest researchers.

Even then, the marginal returns from increased funding appear to diminish steeply. In South Africa, even for the most productive recipients, an additional publication by a chair holder costs 22 times as much as one by a comparable researcher outside the scheme. Each additional citation costs 32 times as much.
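In schematic terms, and in our notation rather than the study's, these multiples compare marginal costs; a minimal sketch, assuming the underlying quantities have already been estimated:

    def marginal_cost_ratio(chair_funding, chair_extra_pubs,
                            control_funding, control_extra_pubs):
        """Cost of one additional publication under the chairs scheme,
        relative to the cost outside it. Illustrative only: the study
        estimates the underlying quantities econometrically."""
        chair_cost = chair_funding / chair_extra_pubs        # dollars per extra paper, chairs
        control_cost = control_funding / control_extra_pubs  # dollars per extra paper, controls
        return chair_cost / control_cost

A ratio of 22 means that each additional publication bought through the scheme costs 22 times what the same output would cost through ordinary funding; the same construction, applied to citations, yields the factor of 32.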

If funding is intended to raise the output and impact of an entire research system, a more broad-based, inclusive approach that gives smaller awards to more researchers may carry more promise. The differential rates of return across disciplines also suggest that adjusting funding to reflect these differences could raise aggregate levels of output and impact.

Finally, if funding allocation is to follow revealed productivity, productivity has to be monitored transparently and objectively. An obvious step would be to use the growing number of bibliometric measures alongside peer review in reaching decisions about allocations. All the more so because peer review is itself not immune from bias, as this South African case demonstrates.

Something to add? Email comment@ResearchResearch.com

Johannes Fedderke and Marcela Goldschmidt work in the school of international affairs at Pennsylvania State University in the United States. Their study of the South African Research Chairs Initiative is published in Research Policy vol 44, p467-482 (2015).

This article also appeared in Research Europe