Biosecurity in the Age of AI: What’s the Risk?
- Author:
- Janet Egan and Eric Rosenbach
- Publication Date:
- 11-2023
- Content Type:
- Policy Brief
- Institution:
- Belfer Center for Science and International Affairs, Harvard University
- Abstract:
- “The biggest issue with AI is actually going to be … its use in biological conflict,” according to former Google CEO Eric Schmidt.[1] And he’s not the only AI expert worried. In his testimony before the Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law, the CEO of Anthropic, Dario Amodei, warned that in just two to three years, AI has the potential to “greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.”[2] OpenAI’s Sam Altman has called for regulation of AI models “that could help create novel biological agents.”[3] President Biden’s recent Executive Order on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence explicitly tasks relevant agencies with assessing the ways in which AI can increase, and potentially help mitigate, biosecurity risks.[4] But what exactly has experts and officials so worried?
- Topic:
- Defense Policy, Science and Technology, Biosecurity, and Artificial Intelligence
- Political Geography:
- Global Focus