AI comes alive

A UKRI scheme aims to kick-start artificial intelligence applications in healthcare

The first UK Research and Innovation-wide scheme on potential uses of artificial intelligence in health research has opened with £13 million available.

The national funder is after AI projects of 18 months’ duration, involving expertise from several disciplines, with a total cost of between £500,000 and £750,000—UKRI will fund successful projects at 80 per cent of the full economic cost. The deadline for intentions to submit is 28 February, with full proposals due by 28 March.

Yan Yip, data science programme manager for UKRI’s Medical Research Council, and Katherine Freeman, a senior portfolio manager for healthcare technologies at its Engineering and Physical Sciences Research Council, discuss the thinking behind the call.

What is the background to this scheme?

Yan Yip: There have been other AI-for-health funding opportunities within and outside UKRI, but this is the first time we’re offering something that spans the remit of all the research councils. There are still significant barriers to using AI in health research and innovation, such as access to, storage of and use of health data; understanding societal acceptance of AI in health; and the shortage of professional skills needed to realise this potential.

But this funding opportunity is not only about applying AI to understand health problems; it is also about how to create a suitable ecosystem that will enable the use of AI to improve health research. For example, how would you create technology that is effective and trustworthy enough to be adopted by clinicians? To do this, AI researchers will have to work closely with end users such as clinicians and patients to develop the technology.

Katherine Freeman: Challenges in different health areas are unique, so solutions need to be tailored to a particular context. Developing AI technologies to meet these challenges will require collaboration among multidisciplinary team members who can devise suitable solutions and build new capability.

How many projects do you expect to fund?

YY: The funding allows for a maximum of 16 to 17 projects, but the final number will depend on the quality of applications.

Are you looking for identification of a health problem amenable to tackling with AI or do you want teams to bring tech that could potentially do that?

KF: The latter, but any project does need to have a clear health need. We’d consider funding riskier, quite early-stage AI technology, but the applicants would have to have the right people in the team and a well-chosen application area in health. I’m not 100 per cent sure how much pilot data is necessary, but there would need to be an idea of what you want to do and some evidence of its usefulness.

YY: I wouldn’t say you need to have something that already works, though. You can develop something.

How does the ‘intention to submit’ work?

YY: To submit a full proposal, applicants need to fill out an intention to submit. It helps us gauge the types and number of applications. This way, the office can get the review panel ready and manage the applications. However, it is not used to evaluate applications. It is really just the name of the principal investigator, keywords, co-investigators and a short abstract.

Teams should work across “the nexus of challenge spaces”—could you elaborate on this?

YY: We want people to propose ambitious projects that address multiple aspects of particular health challenges. For example, if you’re creating a piece of technology, is it responsible, is it ethical and is it effective? For a development and application project, this isn’t so straightforward. Sometimes there are challenges around data, such as how you are going to ensure that the data you’re using is a suitable resource for developing your algorithm.

For this scheme, applicants have to think about all the pieces that fit together to address a specific health challenge. This is why projects must involve multidisciplinary teams that bring different expertise together.

KF: An example of confronting a challenge would be working with users of the technologies, such as patients, to ensure they would accept whatever is developed. Take a decision support tool in which AI gives diagnoses: would patients approve of that?

Should applicants be aware of regulatory issues in this area?

YY: The focus of these projects should be development of innovative technology. This usually means the project would be quite far away from a working prototype and regulatory approval. But we do want applicants to think about what is needed, maybe by involving people who have relevant expertise, to help them consider ethical and regulatory concerns.

There’s a lot of development activity in health AI but many technologies end up failing regulatory assessment. Instead of thinking of the regulatory component when you’ve got a prototype that’s fully formed, we want the applicants to think about it much earlier. 

This is an extract from an article in Research Professional’s Funding Insight service. To subscribe, contact sales@researchresearch.com