Building BRICs

The Research Assessment Exercise has long been a bête noire for scientists in the UK. So why do countries with fast-growing research capabilities want to create similar evaluation systems?

Most UK researchers have spent their working lives under the yoke of the Research Assessment Exercise, driven by the brutal slave masters of Coldharbour Lane, home to the Higher Education Funding Council for England. Ahead, the electric cattle prod of the Research Excellence Framework waits to guide them to the slaughterhouse of retirement. Does the rest of the world treat its beasts of burden any better?

Inconveniently, perhaps, other countries have recognised a steady improvement in the UK’s research performance over the past 25 years, can see the benefits of the RAE and would like something similar. While the BRIC countries are keen to follow this course, so far they haven’t made much progress. There may be many reasons but one common factor is that an evaluation process that works in the relatively homogeneous UK system may not do so where the research base is more diverse.

Brazil has looked at research evaluation systems and how data might be employed to improve both assessment and resource allocation. There is certainly a need for a system that identifies excellent research, even in São Paulo with its relative abundance of resources. Brazilian research excellence is highly concentrated and the pressure to develop a more comprehensive regional infrastructure means that international benchmarks alone won’t meet national policy goals. The country needs a system that can build greater ‘platform’ capacity rather than focusing only on excellence.

Russia has quite separate problems. Significant tensions exist between the state universities, the Russian Academy of Sciences and government. These are exacerbated by the legacy of a distinguished research history that is difficult to match in today’s environment. The geography of research is changing rapidly but for Russia this means that its share of the world’s published science has more than halved in the past 25 years. Once mighty universities are in decline and the resources required to regenerate the research base aren’t there. Under these circumstances it is not surprising that memories of past glories obscure plans for a transparent process to validate today’s performance.

India has a diverse educational system serving huge numbers of students but don’t be distracted by the numbers. There may be 2,000 business schools but the system recognises only 40 central universities and perhaps 250 state universities. Expanding higher education in India will depend on massive private investment from institutions focusing on areas such as management, architecture and engineering. With limited resources, concentrating research in central and state universities may be essential but will this create internationally competitive research? How many institutions can be regularly examined and in what subjects? India is considering an audit system that sets parameters for those parts of the system requiring regular assessment. That could be associated with selective funding that, as in Brazil, balances the need for regional capacity building against a research base that addresses national priorities.

China complicates any analysis of international research because of its extraordinary growth. Attempts to concentrate Chinese research have created multi-faculty institutions by grouping specialist monotechnics. The development of a transparent, internal research evaluation system is still a work in progress but the efforts of Shanghai Jiao Tong University are driving things in that direction. The university’s academic ranking list, which concentrates on high-end research performance, is being adapted inside China to accommodate a wider range of research indicators, opening the way to faculty-level and possibly subject-level assessments.

Other expanding economies also look to the UK for ideas on research evaluation. Singapore’s research is flourishing and now accounts for around 1 per cent of world output, while Malaysia and Saudi Arabia are also catching up. All three are considering research evaluation systems that draw on the UK model but they have different challenges. The problem in smaller systems is that everyone knows everyone else—so how can any evaluation be objective? International assessors are a useful but expensive option, while indicators can help to provide an independent reference point at much lower cost.

It is likely that comparative institutional research evaluation—in national assessments as well as internal reviews and competitor reports—will become increasingly indicator-orientated. That may seem unpalatable in the UK but in small systems it provides a critical sense-check to defend panel judgments. And for very large systems, and for those situations where peer review would need to coexist with merit review aimed at capacity building, it may be the only cost-effective solution.

More to say? Email comment@ResearchResearch.com

Jonathan Adams is director, research and development, at Thomson Reuters.