A chance to lead on responsible AI

Summit and new UKRI programme can drive a democratic approach to regulation, says Jack Stilgoe

In 1975, some of the world’s leading scientists met to discuss what was then an exciting and troubling new technology, recombinant DNA—essentially the first genetic engineering. Spooked by talk of rogue superbugs and threats of regulation from United States policymakers, scientists had declared a moratorium on research in 1974 (sound familiar?).

The 1975 meeting at Asilomar, California, that agreed a set of guidelines under which the work could continue has been talked about as a milestone in responsible innovation. Crucially, however, the meeting’s agenda was set by the researchers and innovators who would be governed by the principles it produced.

That year, US senator Ted Kennedy called the scientists’ attempts to take responsibility “commendable, but…inadequate”, adding that “the factors under consideration extend far beyond their technical competence. In fact they were making public policy. And they were making it in private.” Others saw it as more of an attempt to head off regulation than to impose it.

The Asilomar model has been copied many times, including with an Asilomar for artificial intelligence in 2017. That meeting was more inclusive than its biological predecessor, but repeated some of the same mistakes, producing a set of weak and speculative principles.

Since that meeting, excitement over AI has exploded, especially in the past six months. But the sorts of ethical principles agreed in 2017 have done little to shape the technology in more ethical ways.

Brokering regulation

Fast-forward to 2023, and Rishi Sunak rightly calculates that while the UK will never win a tech arms race, it could usefully lead a debate about responsible innovation, standards and regulation. Accordingly, the prime minister used his recent trip to the White House to announce an AI Summit to take place later this year.

This would be a forum not just for scientific exchange, but also for international diplomacy and technology assessment to help ensure that there is a global effort to raise regulatory standards rather than a race to the bottom. For a nation looking for a new global role post-Brexit, potentially as a bridge between European dirigisme and Silicon Valley’s Wild West, AI could be a vital test case.

Until now, the debate about responsible AI and AI ‘safety’ has been led by the powerful (mainly) men who are developing the technology. Geoffrey Hinton, the AI pioneer whose fears prompted his resignation from Google, Sam Altman, chief executive of OpenAI, the company behind ChatGPT, and others did not have to voice their concerns about AI. They have—as scientists have throughout history—rung important alarm bells. But alarm bells do not tell us what to do next.

We urgently need to hear more from the people likely to use AI and the people whose lives and livelihoods are likely to be disrupted. It is notable that, when Silicon Valley types talk about democratisation, they tend to mean something like ‘make it cheap and easy’, rather than ‘let the people decide’.

We now need some real democracy. Rather than fixating on utopian dreams or existential risks, we must understand what people really care about, otherwise governments risk governing an imaginary technology rather than a real one.

Social scientists and computer scientists have tried over the past few years to draw policymakers’ attention to the real harms that AI is already causing by making decisions that are dangerous, biased, unreliable and often unaccountable. But science-fiction scenarios, often reported by hungry media with accompanying pictures of the Terminator, have proven a convenient distraction from confronting present dangers.

The AI Summit should draw on the UK’s broad base of researchers from a range of disciplines, in companies as well as universities, and it should ensure that all perspectives, not just those of scientists and entrepreneurs, are heard.

Building a community

Speaking at London Tech Week on 14 June, Chloe Smith, acting secretary of state for science, technology and innovation, announced a major investment by UK Research and Innovation in responsible AI, for which I am part of the leadership team. We have £31 million and five years to convene and support a community of researchers trying to ensure that AI is directed towards social goals, rather than being driven by scientists’ excitement or tech companies’ search for a new profit centre.

Weighed against the vast amounts of money being invested in the technology, even a large UKRI programme seems small. But the forthcoming summit is an opportunity to use our leverage to change the debate. It must not be wasted. 

Jack Stilgoe is a professor of science and technology studies at University College London

This article also appeared in Research Fortnight and a version appeared in Research Europe