
The battle for research integrity is winnable

RPN Live: Artificial intelligence, paper mills and more are troubling, but a fightback is afoot

The issue of research integrity cuts to the core of scientific endeavour. If scientific results were arrived at by incorrect—or even fraudulent—methods, is our knowledge of the world really increasing?

This is a question that more and more people both within and outside science are wrestling with.

As Research Professional News editor-in-chief Sarah Richardson said at the RPN Live webinar on research integrity on 18 July, there is “growing talk of a so-called reproducibility crisis in research, and growing awareness that systemic issues within research environments—pressure to publish, insecure environments, insufficient institutional oversight, to name just a few—are jeopardising trust in the reliability of research”.

But as the ensuing discussion demonstrated, scientists care deeply about this subject; they want to protect the scientific record and are readying the tools and tactics necessary to do so.

Here are the four main themes that emerged during the webinar.

Research misconduct is a serious problem and (probably) getting worse

The opening presentation by Elisabeth Bik, a science integrity consultant who won the John Maddox Prize in 2021 for her work identifying image manipulation in publications, was a bracing demonstration of how widespread research misconduct is.

In 2014-15, Bik examined more than 20,000 papers for image manipulation, flagging 782 of them to journals as problematic. She estimated that around half of those were deliberate rather than honest errors, which works out at roughly 2 per cent of all publications assessed (about 390 of the 20,000-plus papers).

When asked whether she thought that figure reflected the true prevalence of deliberately falsified results, she said she was only able to catch “very dumb” errors, where little or no effort had been made to cover up the manipulation.

“It would be very hard to detect any data manipulation that is not in a photo,” she added. “So the real percentage of misconduct might be much higher than 2 per cent.”

Bik was also convinced that research misconduct was becoming more prevalent, especially with the rise of generative AI and its likely use by ‘paper mills’—organisations that produce and sell fraudulent manuscripts that resemble genuine research.

This was also a theme developed by Sabina Alam, director of publishing ethics and integrity at academic publisher Taylor & Francis.

“I’ve been doing this since 2008 and there’s been a marked difference, I would say, from 2017 onwards,” Alam said. “The paper mills are one thing, but we can also see people who are part of what we refer to as ‘cartels’… They’re working with each other to boost their publication numbers and H-indexes [a measure of the significance of a researcher’s published work] to levels never seen before.”
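As an aside for readers unfamiliar with the metric: a researcher’s H-index is the largest number h such that h of their papers have at least h citations each. A minimal sketch of the calculation, using hypothetical citation counts purely for illustration, might look like this:

```python
def h_index(citations):
    """Return the H-index for a list of per-paper citation counts."""
    # Sort papers from most to least cited, then find the largest rank r
    # such that the paper at rank r still has at least r citations.
    counts = sorted(citations, reverse=True)
    h = 0
    for rank, cites in enumerate(counts, start=1):
        if cites >= rank:
            h = rank
        else:
            break
    return h

# Hypothetical example: five papers cited 10, 8, 5, 4 and 3 times.
# Four of them have at least 4 citations, so the H-index is 4.
print(h_index([10, 8, 5, 4, 3]))  # prints 4
```

The sketch shows why raising the metric requires many well-cited papers, which is what the ‘cartels’ Alam describes coordinate to manufacture.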

Nonetheless, two panellists—Miles Padgett, a member of the UK Committee on Research Integrity and interim executive chair of the Engineering and Physical Sciences Research Council, and Debra Schaller-Demers, senior director for research integrity and compliance at New York University—were more equivocal on whether research misconduct is a worsening problem.

Honest mistakes must be separated from deliberate misconduct

This was a point agreed upon by all panellists. As Alam noted, careless errors and deliberate misconduct “are being lumped into one box and it can…[deter] researchers who have spotted a problem in their paper from requesting a retraction”—something that itself leaves errors standing in the scientific record.

Alam noted “real global variation” in the underlying reasons for lapses in research integrity, with “misconduct that happens due to suboptimal training in research integrity and ethics” a growing issue in some parts of the world. Targeted educational work could therefore bolster research integrity, she said.

Outcomes for researchers who had made honest mistakes and those who had engaged in deliberate misconduct needed to be different, stressed Marcus Munafò, chair of the UK Reproducibility Network and associate pro-vice-chancellor for research culture at the University of Bristol.

He said: “The [investigation] process needs to protect the rights of the individual accused, but once the process is complete and it’s known that misconduct did take place, then that needs to be made clear, and often it isn’t. Often these people who engage in misconduct quietly leave by the back door, maybe join another institution—it’s very opaque.”

Research misconduct necessitates a sector-wide response

This was another point on which all panellists agreed, although they placed differing emphasis on who should be responsible for doing the work.

For Munafò, it was academic institutions that “need to own” this topic, partially due to “issues around the legalities of misconduct”. But he said there was a challenge to ensure that institutions did not “mark their own homework” on such an important matter, and encouraged “cross-institutional partnerships that would allow for cases to be reviewed independently”.

Nandita Quaderi, editor-in-chief of Web of Science*, agreed on the need for a sector-wide response but added a caveat. “When we talk about shared responsibility, we need to make sure that doesn’t allow people to shirk their responsibility. Each stakeholder has to have a very clear sense of what bit of the process they’re responsible for,” she said.

She stressed that funders had a role to play. “If you are funded by an agency and you are found by an institute to have deliberately deceived, there needs to be some kind of pushback [from the funder].”

The creation of a ‘just culture’ could offer a way forward

Munafò outlined what an integrated system that incentivised researchers towards honest behaviour would look like, or “what in some sectors is called a ‘just culture’”.

One way to create this just culture could be to break the focus on outputs and concentrate instead on the process of research, he said. He gave the example of the ‘registered report’ publishing system, which allows researchers to submit their work at the protocol stage, before any data has been collected.

The journal reviews the protocol in the usual way and, if approved, offers in-principle acceptance. In this way, publication is more or less assured, irrespective of results, so long as the protocol is followed.

Munafò had been involved with a trial linking this technique to a funder, Cancer Research UK. It made sense, he said, because “funders, when they review grants, are making very similar decisions: are these research questions important and is the methodology robust?”

For all the speakers, the move to open science aided the defence of research integrity. For Munafò, increased transparency was the cornerstone of a just culture, “where people don’t feel that they have to hide their honest errors” but will self-declare them, freeing up time to combat “the cases of actual malpractice, which need to be treated very differently”.

Padgett agreed on the need for transparency, saying that the growing practice of publishing links to raw data online was to be welcomed. “I know that doesn’t guarantee the integrity, but it makes it more likely,” he said.

But he sounded a note of caution, reminding attendees that total transparency was not always possible in certain fields of research for confidentiality reasons.

Speaking as a physical scientist, Padgett said: “My data doesn’t involve people, for example. It’s basically collections of numbers taken from instruments. If your data is the output of an interview, that needs to be treated very differently in terms of transparency.”

*Web of Science is a Clarivate product. Research Professional News is an editorially independent part of Clarivate.