Put peer review under review

There is little evidence that peer review is the best way to apportion research money—and even less evidence for the alternatives. Funders should be experimenting with different approaches.

Peer review is the conventional route that funding bodies use to deliver research grants to their chosen scientists. But shouldn’t they experiment with different approaches to see what gets them to their destination most efficiently, asks Steven Wooding, research leader at RAND Europe.

Awarding money for research is not easy. As physicist Niels Bohr once remarked: “Prediction is hard, especially about the future.” Funders are expected to judge which research proposals will turn out best even though how they will turn out is, by definition, unknown.

The traditional way of doing this is peer review. In the UK, more than 95 per cent of grant money for biomedical research is allocated in this way. Across Europe, the majority of research funding is allocated by panels of scientists judging each other’s work.

And yet there is very little evidence that peer review is an effective and efficient way to allocate grant money. The progress of science and technology shows that good research does get funded—but it doesn’t show that peer review is the best way to select it. Throwing two dice will eventually turn up a double six; the question is, how can we load the dice to improve our odds?

Anecdotal reports suggest that peer review can be slow, suffer from bias against new ideas and interdisciplinary work, be prejudiced against early-career researchers and women, undervalue applied research and lack transparency, although the evidence for most of these charges is inconclusive. More certain is that writing, reviewing and judging proposals imposes a burden on researchers and the research system as a whole.

There is even less evidence that alternative methods work any better. This is partly because of peer review’s ubiquity, and partly because of the difficulty in performing comparative experiments when it may take 20 years or more to see the results.

To address this, funders should be experimenting with how they allocate some of their research funding. RAND Europe has compiled a selection of alternatives to peer review, along with less radical tweaks that could be made to the system.

Changes such as involving other viewpoints—for instance, those of patients and venture capitalists—in the review process can help focus research on the areas of greatest need and provide advocates for applied research. Asthma UK, for example, includes lay reviewers alongside experts in a six-month, three-stage scoring process that ranks proposals for funding by priority. To work well, such approaches need to avoid tokenism and give outsiders real weight in decision making.

One way to be more inclusive is by using more transparent scoring when discussing grant proposals. Such methods can also support virtual panels, reducing the need to bring everyone together in one place and expanding the range of available opinions. However, just as some viewpoints are drowned out in committee discussion, those of contributors less at home in the virtual world may end up excluded online.

More radical alternatives dispense with peer review altogether. Prizes have a long history of stimulating innovation, having enabled, for example, the chronometers that allowed accurate measurement of longitude in the eighteenth century, and food preservation by canning in the nineteenth century. In recent years, the prize offered by the US government’s Defense Advanced Research Projects Agency for developing self-driving robot vehicles has been a great technical and public-relations success, and the not-for-profit X Prize Foundation, another US-based organisation, has used prizes to award research funding in a range of areas, including space flight, genomics and automobiles.

Prizes, of course, need researchers ready to take up the challenge. Areas where there is a dearth of research may need a more nurturing approach, as seen in ‘sandpits’. Sandpits bring together researchers across disciplines to develop a research programme, often in a neglected area, through collaboration and competition over a period of discussion and iteration. The UK’s Engineering and Physical Sciences Research Council has used this approach to generate project ideas on nutrition for older people, mobile healthcare delivery and coping with extreme weather events.

The idea that public policy should be based on what works is no longer controversial. But the scientific establishment has been much less willing to experiment with how it judges itself. Institutional inertia has resulted in peer review being treated as the gold standard and in resistance to trying different funding mechanisms.

It seems unlikely that any one decision-making mechanism will fit every scheme in every area of science. This uncertainty should inspire funders to try different approaches and share their findings. That will let everyone improve how they go about one of the most important processes in science: deciding what science to do.

It may be that, like democracy, peer review is the worst form of decision making, except all the others that have been tried. Only in this case, the others haven’t even been tried.

Steven Wooding is a research leader at RAND Europe, part of the non-profit research institute RAND Corporation, and a co-author of Alternatives to Peer Review in Research Project Funding. Email wooding@rand.org or contact @drstevenwooding on Twitter.

Something to add? Email comment@ResearchResearch.com