Researchers can help hold politicians to their words on AI safety
The past two weeks have seen a flurry of top-level announcements aimed at encouraging what US president Joe Biden termed “safe, secure and trustworthy artificial intelligence”. There has been an agreement from 28 countries and the EU to deepen scientific cooperation; G7 countries have pushed a code of conduct for developers; and announcements of AI “safety institutes” have come from the EU, the US and the UK.
AI, as US vice-president Kamala Harris put it last week ahead of the first global AI Safety Summit, has “the potential to do profound good [and] the potential to cause profound harm”. Speaking on the eve of the event at Bletchley Park near London, she listed AI-enabled cyberattacks and AI-formulated bioweapons as examples of “existential threats”. Meanwhile, European Commission president Ursula von der Leyen likened the risks inherent in the development of AI to those in the development of nuclear weapons.