
Efforts to end bias in funding need better evidence


Without more testing, interventions will remain well-intentioned hypotheses, say Stefanie Schneider and her colleagues

The process of deciding who gets research funding is fraught with challenges. Key among them is the issue of bias in peer review. 

Many researchers find themselves chasing career-defining outcomes while also facing barriers relating to their gender, race, disability and institutional affiliation. Bias doesn’t just affect individuals and their careers; it also skews the direction of research, as ideas and perspectives from excluded communities are overlooked.

In a newly published evidence review, we examine what is known about the obstacles and enablers at different stages of the review process, from crafting funding calls to making final grant decisions. We were surprised to find some sizeable holes in our collective understanding of where biases occur and how they can be redressed.

Our review shows that 44 per cent of the literature on this topic focuses on gender bias, mostly associated with disciplines in science, technology, engineering and maths. Racial inequity and institutional prestige have also been shown to sway funding decisions, but these topics have received much less attention in the literature. 

Research has tended to focus on effects on individuals at single stages of the process. We found the most evidence for bias during peer review: scholars across disciplines have voiced concerns about biases created by the subjective interpretation of scoring criteria (such as what constitutes ‘excellence’); a lack of time for discussion; and a paucity of equality, diversity and inclusion training. 

Research has also found that a panel’s makeup affects how reviewers assess proposals. Such findings have fuelled calls to make review panels more diverse and to implement EDI interventions that make the process more robust.

Networking

Strikingly little is known, however, about the biases that affect which proposals reach review panels in the first place. Based on personal experience, for example, we expected to find evidence that a researcher’s professional networks would influence their decision to apply for funding or their ability to put together a successful bid. 

Academics at more established, research-intensive and prestigious universities almost certainly find it easier to build such networks. These institutions have more resources to sponsor conference attendance, mentor early career researchers and foster internal collaborations. They also have a higher density of colleagues able to share direct experience from successful bids. 

Networks can also be crucial in giving a heads-up about upcoming opportunities and possible collaborators. Our review found that research on equity in the funding process has overlooked such factors.

Interventions

The other surprising gap is in evidence for the effectiveness of EDI interventions. Many major UK funders, for example, mention the Declaration on Research Assessment and its recommendations in their EDI policies and reviewer guidance. Dora emphasises the superiority of a broad range of impact measures over traditional metrics such as journal impact factors.

However, there is an elephant in the room: the effectiveness of these interventions remains largely untested. We also found many articles that discuss hypothetical actions without explaining where their ideas come from. 

One such example is the shift from traditional CVs listing publications to narrative CVs designed to showcase a broader range of skills and experiences. Advocated by UK Research and Innovation, this is intended to boost the chances of early career scholars by allowing them to promote a wider set of qualities. 

But it might also advantage those skilled at academic writing. Without testing these interventions, we simply don’t know who they will benefit. At this stage, they remain well-intentioned hypotheses rather than proven solutions. There is an urgent need for funders to assess the impact of interventions they’ve introduced and those they intend to introduce.

To conclude, the research and innovation sector has much work to do to translate good intentions into effective practices. The good news is that, in conversations with scholars and funders who have recently completed pilot studies testing interventions, we found the results promising.


For example, trials show that both anonymising applications and using modified lotteries to allocate funding (choosing winners at random from among proposals that fulfil desired criteria) result in more underrepresented scholars winning support. This tells us two things: that interventions can make a positive impact, and that they require further systematic testing and compelling evidence. 

Stefanie Schneider, Cat Morgan, Robert MacIntosh and Clayton Magill are members of the Equality, Diversity and Inclusion Caucus, funded by UK Research and Innovation and the British Academy

EDICa is running webinars on peer review bias on 22 April and 23 May

This article also appeared in Research Fortnight and a version appeared in Research Europe