The UK’s AI Safety Summit and the Future of AI Bioweapons

Artificial Intelligence (AI) is an epoch-defining technology that will profoundly shape the ways we live and work. 

In the so-called ‘Century of Biology,’ AI-enabled tools promise a massive increase in speed and capacity across certain parts of the bioengineering R&D cycle, including data aggregation, lab automation, and the analysis and output phases. 

But despite the excitement about the innovative potential of AI in the life sciences, conversations on potential safety risks – including biological risks, such as the AI-enabled release of an aerosolised pathogen that could be catastrophic for humanity – have been less cohesive. This is perhaps to be expected, since the emergence of functional AI is a relatively recent phenomenon. It is critical, however, to act now to support international collaboration to safeguard against those risks.

Begin with the results of the UK’s AI Safety Summit at Bletchley Park. Despite the doubters in the lead-up, the summit successfully kick-started a worldwide effort to develop and coordinate mitigations against AI harms. The two-day gathering marked a huge achievement for the UK Government, and specifically for its hastily constructed Frontier AI Taskforce, soon to be established as the world’s first AI Safety Institute, or “AISI”. 

The Summit’s various successes set an important tone on this issue. It saw U.S. Commerce Secretary Gina Raimondo share the stage with Wu Zhaohui, China’s Vice Minister of Science and Technology, which was a major diplomatic coup and signalled to the world that these risks are bigger than the current state of geopolitics. It recognised that AI risks do not respect international boundaries and, if not addressed with coordinated mitigations, could present catastrophic harm to humanity. 

The Bletchley Declaration recognised that a globally inclusive process to better understand, prevent, and collectively respond to emergent risks before they materialise is essential to effective mitigation. Fundamentally, it also recognised that any future governance framework, regulatory or otherwise, must be proportionate and adaptable whilst remaining predictable and minimally intrusive to those it affects.

Looking forward, South Korea announced that it will host the next global AI Safety Summit in six months’ time, with the third gathering to take place in France in a year’s time. 

There is a lot to do before May. Proportionate risk mitigation requires a clear-eyed understanding of the risks coupled with agile tools that respond to technological iterations. 

This is particularly true at the nexus of biological and AI risks – what we call the Bio-AI risk nexus. Essentially, the question is how AI might increase the ease and likelihood of the development and deployment of biological weapons.

To make progress, it is imperative to understand where and how AI may accelerate biological weapons development capabilities – and where it does not. This clarity helps stakeholders develop appropriate and proportionate risk mitigation strategies: understanding the specific risks at the intersection of AI and biological weapons development gives policymakers the data needed to compare them against other significant national security risks, in turn enabling proportionate and effective responses.

The Bio-AI risk discourse has hitherto focused predominantly on how large language models (LLMs) could lower existing tacit knowledge barriers, potentially empowering a broader range of actors to research, develop, and successfully deploy biological weapons for their own purposes. While this is an important component of the risk, it may be more productive to examine how the capabilities of AI-enabled tools, including those with specialised life sciences capabilities, influence the individual steps of the biological weapons research, development, and deployment lifecycle. Breaking down the impact of AI capabilities in this way offers unique insights that can be turned into tractable, targeted solutions to mitigate the risks that AI-enabled tools introduce into the life sciences. 

It’s also important not to lose sight of the wider biosecurity risk landscape. These initial Bio-AI risks are but some of the many risks in the wider picture. Traditional risks will endure and will in many cases be augmented by AI technologies. While more effective AI guardrails can offer significant risk reduction benefits, they will not eliminate all biosecurity risks. Therefore, resilient public health systems and strong pandemic preparedness and response capabilities will remain key safeguards. 

Of course, AI can also provide many opportunities to develop more bio-resilient societies. The U.S. Executive Order on the Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence is a great starting point for maximising these opportunities in the U.S.  

All in all, the UK’s AI Safety Summit made concrete headway toward a global consensus on the need to balance AI innovation, safety, and security. Importantly, it kickstarted a long and diplomatically onerous process toward effective, inclusive, and much-needed global AI governance. The UK should be proud of these achievements. There is, however, a lot more work to do. And when it comes to the AI-bio nexus, that work is critical for the security of humanity.

Christopher East is a Senior Fellow and Program Manager at the Council on Strategic Risks and former Chief of Staff for Biological Security in the UK’s Cabinet Office.
