The (partial) rise of (partial) randomisation

  

Arma 2022: Adam Golberg has a (partial) list of questions for funders, researchers and universities

As with the video assistant referee in football, the margins in research funding decisions often seem too fine to bear the weight of the consequences. Reviewers follow the rules, but decisions to fund one project and reject another still feel arbitrary.

We might go further and say that, given how different the strongest proposals are, there is in fact no reliable way of ranking them. This is especially true for funders who run open calls spanning many different disciplines.

This helps to explain why funders such as the Natural Environment Research Council (Nerc) and the British Academy are introducing partial randomisation to allocate grants. It seems like a formalisation of what already happens among the strongest applications.

Another motive for randomisation is to reduce two kinds of bias. The first is against people—bad, old-fashioned discrimination or unconscious bias based on ethnicity, gender, employing institution, reputation and so on. The second is against ideas, with review processes thought to lean towards safer options at the expense of more radical proposals.

There’s also an argument for efficiency. Randomisation can reduce the administrative burden on funders, referees and review panels, and make applications less work for project teams and supporting institutions.

But even for those convinced by these arguments, there are important questions that need considering.

Thresholds

Where should funders set the threshold for randomisation? At the extreme, every application could go into the randomiser. However, nearly everyone thinks that there ought to be some filtering.

Reviewers could throw out the ineligible, the out-of-scope, the unfeasible and the incomprehensible, and enter the rest into the lottery. Or they could go further and filter out the mediocre, the derivative, the dubious and the underwhelming. Further still, they could filter out the dull-but-worthy or even all but the highest-rated proposals.

The higher the threshold is set, the more the advantages of randomisation fade. There will be smaller efficiency savings, and a risk that biases will just transfer to the filtering process.

There’s no obvious answer, so it’s no surprise that different funders reach different conclusions. The British Academy has indicated it thinks half of the applications it receives are fundable, while Nerc looks to be randomising from the highest-banded projects and then working downwards.
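To make the mechanics concrete, here is a minimal sketch in Python of a filter-then-draw round. The scores, the threshold and the number of awards are invented for illustration; neither funder's actual banding or draw procedure is reproduced here.

```python
import random

def partial_randomisation(proposals, threshold, budget):
    """Return the ids of funded proposals from one toy funding round."""
    # Filter stage: only proposals scored at or above the threshold enter the draw.
    eligible = [pid for pid, score in proposals if score >= threshold]
    # Lottery stage: winners are drawn at random from the eligible pool.
    return random.sample(eligible, k=min(budget, len(eligible)))

# Hypothetical round: six proposals scored out of 10, two awards available.
proposals = [("P1", 9), ("P2", 8), ("P3", 7), ("P4", 6), ("P5", 4), ("P6", 2)]
print(partial_randomisation(proposals, threshold=7, budget=2))
```

Everything interesting happens in the first line of the function: raise the threshold and the lottery shrinks towards a conventional ranked decision; lower it and more of the outcome is left to the draw.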

Resubmissions

Should funders allow proposals that miss out in a random draw to be resubmitted? The easy answer is no, but this is unpopular with researchers, and denying worthy but unlucky proposals a second chance seems inefficient: all that work might go to waste.

But if resubmission is allowed, what happens to these lottery losers? They could be asked to reapply—but they already cleared the threshold. Are they included for the next round by default? If so, for how many rounds?

If every losing proposal rolls over, the pool soon becomes so large that success rates plummet. Should there be a secondary draw to keep some applications and reject the rest? Decisions need to be made.
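To see how quickly that happens, here is a toy calculation; the figures of 100 new above-threshold proposals and 20 awards per round are invented, and it assumes every unlucky proposal rolls over indefinitely.

```python
# Toy illustration of how automatic rollover swells the draw pool.
new_entrants, awards = 100, 20
pool = 0
for round_number in range(1, 6):
    pool += new_entrants  # this round's new proposals join the carried-over losers
    print(f"Round {round_number}: pool={pool}, success rate={awards / pool:.0%}")
    pool -= awards        # winners leave; every unlucky proposal rolls over
```

Within five rounds the nominal success rate falls from 20 per cent to about 5 per cent, even though the flow of new proposals and the number of awards never change.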

Demand

If it’s easier to apply, more applications will be submitted. That’s not necessarily bad, but it could negate potential efficiency gains and reduce success rates further. The lower the quality threshold for the lottery, the greater the temptation to throw in anything that might make it through.

We risk ending up funding lots of just-about-good-enough science. This mustn’t be allowed to happen.

Backlash

I have been surprised that, in principle, nearly everyone seems in favour of an element of randomisation in funding decisions. Maybe that’s because most researchers believe that they or their sub-discipline loses out in the current system, so randomisation will be an improvement. But logically, this can’t be true for everyone.

Some researchers whose projects aren’t chosen are going to bridle at seeing projects funded that they feel shouldn’t even have been in contention. I also suspect that knowing funding is literally a lottery will be much more demoralising than just thinking it’s effectively a lottery.

Universities

Finally, will a university treat two researchers who both cleared the threshold equally in a promotion process, even if one was funded and the other not? How will universities account for random funding decisions when they measure the grant-getting of schools and research groups? Do they keep measuring actual income, even if that is partly down to chance?

None of this constitutes a decisive objection to partial randomisation, and I broadly welcome the experiments. It’ll be fascinating to see what happens next.

Adam Golberg is Research Development Manager (Charities) at the University of Nottingham. He writes in a personal capacity

A version of this article also appeared in Research Fortnight and Research Europe