
Future positive

Why researchers should not fear the AI revolution

There will be considerable focus this year on the value of artificial intelligence: its potential to revolutionise computing, how it might alter the way we work, and how it could shape culture and life itself. There will be debate on everything from becoming an AI-powered business to AI’s impact on ethical judgment.

AI’s effect on scientific practice is contested and will be the subject of much debate: many fear that AI will reproduce, rather than eliminate, biases in scientific research, and that it will become a useful tool in the service of research malpractice.

Anna Scaife, professor of radio astronomy at the University of Manchester and an inaugural AI fellow of the Alan Turing Institute, the UK’s national institute of data science and artificial intelligence, has a more positive vision in which AI models unlock previously unrealisable potential for researchers—and even counter bias.

She shared her views last month at a workshop on foundational AI in scientific research held for members of the European University Association.

Building the foundation

Foundational AI is a term that refers to ‘foundation models’: systems capable of general tasks such as text synthesis, image manipulation or audio generation. ChatGPT, built on a large language model, is arguably the best-known foundational AI system.

For Scaife, foundational AI has potential for scientific research as it can process and categorise data much faster than any human. In particular, she argued, the use of bespoke models could expedite scientific advancement and multidisciplinary work.

However, she also warned that “one cannot just adopt AI approaches from the computer science literature that are not specialised for scientific purposes”. Instead, specialised foundation models must be built from the bottom up. But that is no easy task.

“Computationally, they are very expensive,” Scaife said. The final training run of GPT-3, for example, is estimated to have cost around $4.6 million. OpenAI, the company behind the model, has estimated that the computing power used in the largest training runs doubles roughly every three and a half months, Scaife added; a doubling rate that fast compounds to roughly a tenfold increase each year.

Cost aside, scientific research offers few readily available datasets with labelled entries for an AI model to learn from, which makes building bespoke foundation models for use in science even trickier.

However, Scaife continued, building AI models is an excellent way of organically developing interdisciplinary and multidisciplinary projects, both between different natural sciences and with computer science.

Widespread application

Once a foundation model has been built, it can be used as a base for developing more specialised downstream applications by fine-tuning it on task-specific data. In contrast to building the foundation model itself, she said, “This is computationally quite cheap.”

The low cost means that once the foundation model for data processing in a field has been developed, it can easily be adapted with smaller datasets by people in specific fields, Scaife continued. There is therefore a democratising effect, as scientists of all stripes gain access.
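To make the contrast concrete, here is a minimal sketch of such an adaptation step in Python, using PyTorch and torchvision (tools chosen here for illustration; the article does not name any). A stock pretrained network stands in for a domain-specific foundation model: its weights are frozen and only a small task-specific head is trained on the smaller dataset.

```python
# Illustrative sketch only: a generic pretrained backbone stands in for a
# bespoke scientific foundation model, and the task is a hypothetical
# five-class labelling problem.
import torch
import torch.nn as nn
from torchvision import models

# Load a pretrained "foundation" backbone.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)

# Freeze the expensively pretrained representation...
for param in model.parameters():
    param.requires_grad = False

# ...and attach a small task-specific head for, say, five classes.
model.fc = nn.Linear(model.fc.in_features, 5)

# Only the head is trained, which is why this step is computationally
# cheap compared with pretraining the backbone.
optimiser = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

def fine_tune(loader, epochs=3):
    """Fine-tune the head on a small, field-specific labelled dataset."""
    model.train()
    for _ in range(epochs):
        for images, labels in loader:
            optimiser.zero_grad()
            loss = loss_fn(model(images), labels)
            loss.backward()
            optimiser.step()
```

Because only the final layer’s parameters are updated, a run like this fits comfortably on a single workstation, whereas pretraining the backbone itself can cost millions of dollars of compute.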

At this stage, even scientists without access to large infrastructure could potentially use AI for scientific analysis just by uploading datasets from their laptop, Scaife said. It would become possible for scientists to undertake exploratory analysis of datasets that would previously have been too large to treat in this way.

In Scaife’s field of astronomy, the volume of observations, and thus of the datasets they produce, has been growing exponentially. It is therefore increasingly difficult for individual astronomers to undertake more conventional types of analysis manually. Building adapted AI models is a way to overcome that obstacle, she said.

Scaife joked about this: “There might be enough graduate students in the world [to perform manual analysis], but there is probably not enough coffee, so AI is a very attractive, and in fact necessary, mechanism for extracting the scientific information from our data in a timely fashion.”

Considering the volumes of data researchers work with now, Scaife said, without the help of AI it may soon become impossible to extract any impactful findings, “certainly within the scope of a PhD and really within the scope of a career”.

Reciprocity

Foundation models built for scientific research also have the potential to contribute to wider AI methodology in a reciprocal way, Scaife concluded, particularly with regard to the contentious issue of bias.

As foundation models learn from datasets, any bias in the original dataset tends to be reinforced. But Scaife could give a first-hand account of seeing that process countered: by incorporating underlying knowledge about observations in astronomy, a project she worked on was able to identify and correct a bias in a standard deep-learning model, so the resulting data catalogue did not inherit it. By making use of prior knowledge in their field, Scaife said, “science also has a role in developing new AI methodologies”.
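The article does not describe the mechanism involved, but one common way of injecting such prior knowledge is data augmentation. The Python sketch below uses a hypothetical example from astronomy: if it is known that a source’s classification should not depend on its orientation on the sky, the training set can be expanded with rotated copies of each image, so that a model cannot learn orientation as a spurious signal.

```python
# Hypothetical illustration of encoding prior knowledge via augmentation;
# this is not a description of the specific project Scaife worked on.
import numpy as np

def rotation_augment(images: np.ndarray, labels: np.ndarray, n_rotations: int = 4):
    """Repeat each image at several rotations, keeping its label.

    images: array of shape (n, height, width)
    labels: array of shape (n,)
    """
    augmented_images, augmented_labels = [], []
    for image, label in zip(images, labels):
        for k in range(n_rotations):
            # Prior knowledge: a 90-degree rotation should not change the class.
            augmented_images.append(np.rot90(image, k))
            augmented_labels.append(label)
    return np.stack(augmented_images), np.array(augmented_labels)
```

A model trained on the augmented set sees every orientation equally often, so the catalogue it produces is far less likely to be biased by how sources happen to be oriented in the original data.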

This is an extract from an article in Research Professional’s Funding Insight service. To subscribe, contact sales@researchresearch.com.