Earma 2019: ‘Big, big problems’ with R&D evaluation

The European Commission has admitted major problems with how proposals to the EU’s R&D programme are evaluated, meaning large amounts of funding are potentially being squandered as projects fail to deliver on their promises.

Isabel Vergara, a specialist in evaluations at the Commission’s R&D directorate general, said in a session at the annual meeting of the European Association of Research Managers and Administrators in Bologna that evaluating the impact of projects was a particular concern.

“Proposals are very well written, they promise a lot of things, but we have found in more and more cases that at the time of implementation, they don’t do what they promised. In many cases they are not even close to what they promised,” she told the 28 March meeting.

With millions of euros up for grabs in EU funding calls, any problems in evaluating applications could potentially waste vast amounts of money.

The Commission knows it has to change how impact is evaluated, and is thinking of “radically changing the concept”, she said. But, she added, “We don’t know what to do. Your ideas are welcome.”

Applicants also increasingly know how to write proposals that evaluate favourably, compounding the problem, Vergara told the meeting. “Some content has become very standard, and for experts it’s very difficult to distinguish a good proposal from a bad proposal.”

Another major problem is finding evaluators, particularly for niche areas and multidisciplinary research. “In some cases, some specific areas, we can’t find the expertise we need,” Vergara said. “In some areas all experts in Europe are presenting proposals, so they can’t act as experts and can’t evaluate.”

The Commission is now considering loosening conflict-of-interest requirements in specific areas to alleviate this.

Other problems Vergara listed include the poor quality of feedback provided to applicants, which she called “a big, big problem”, a lack of industry and social sciences representation among evaluators, and the need for better training of evaluators.

The Commission is looking to trial anonymous evaluation of proposals in 2020 on request from the European Parliament, Vergara said. It has even considered using lotteries to choose among closely evaluated proposals, she revealed, but said that in her view such a step would be too radical for the time being.

Despite all these problems, Vergara said that the Commission sees the Horizon 2020 evaluation process as “strong and solid”. It will form the basis of the evaluation process for the 2021-27 programme Horizon Europe, she said, with the three main criteria of excellence, impact and quality of implementation unchanged, albeit with revised sub-criteria.

A version of this article also appeared in Research Europe