Search Results
2. Emerging Technologies and Terrorism: An American Perspective
- Author:
- Susan Sim, Eric Hartunian, and Paul J. Milas
- Publication Date:
- 04-2024
- Content Type:
- Special Report
- Institution:
- The Strategic Studies Institute of the U.S. Army War College
- Abstract:
- In a world where technology is rapidly advancing and available to the masses, companies and policymakers face a daunting reality: non-state actors are using innovation for sinister purposes. While artificial intelligence (AI) and autonomous systems promise enhanced threat detection, terrorist groups are exploiting these tools for recruitment and attacks. The future is concerning as AI becomes more widespread and as autonomous systems and augmented reality redefine society. A groundbreaking report is born from a collaboration between NATO COE-DAT and the US Army War College Strategic Studies Institute. This book unveils a grim forecast that terrorists are poised to exploit advances in artificial intelligence, autonomous systems, augmented reality, biotechnology, and nanotechnology. The line between reality and fiction blurs in this age of rapid technological evolution, urging governments, industries, and academia to unite in crafting ethical frameworks and regulations. As geopolitical tides shift, NATO stresses national responsibility in combating terrorism and advocates collective strength against the looming specter of technology-driven threats. However, questions linger. Can regulatory frameworks keep pace with technological innovation? Will industry prioritize ethical considerations over profit margins?
- Topic:
- Agriculture, Science and Technology, Terrorism, Biosecurity, Artificial Intelligence, Nanotechnology, Biotechnology, and Nanoweaponry
- Political Geography:
- Global Focus
3. AI and the Evolution of Biological National Security Risks: Capabilities, Thresholds, and Interventions
- Author:
- Bill Drexel and Caleb Withers
- Publication Date:
- 08-2024
- Content Type:
- Special Report
- Institution:
- Center for a New American Security (CNAS)
- Abstract:
- Not long after COVID-19 gave the world a glimpse of the catastrophic potential of biological events, experts began warning that rapid advancements in artificial intelligence (AI) could augur a world of bioterrorism, unprecedented superviruses, and novel targeted bioweapons. These dire warnings have risen to the highest levels of industry and government, from the CEOs of the world's leading AI labs raising alarms about new technical capabilities for would-be bioterrorists, to Vice President Kamala Harris’s concern that AI-enabled bioweapons “could endanger the very existence of humanity.”1 If true, such developments would expose the United States to unprecedented catastrophic threats well beyond COVID-19’s scope of destruction. But assessing the degree to which these concerns are warranted—and what to do about them—requires weighing a range of complex factors, including:
  - The history and current state of American biosecurity
  - The diverse ways in which AI could alter existing biosecurity risks
  - Which emerging technical AI capabilities would impact these risks
  - Where interventions today are needed
This report considers these factors to provide policymakers with a broad understanding of the evolving intersection of AI and biotechnology, along with actionable recommendations to curb the worst risks to national security from biological threats.
The sources of catastrophic biological risks are varied. Historically, policymakers have underappreciated the risks posed by the routine activities of well-intentioned scientists, even as the number of high-risk biosecurity labs and the frequency of dangerous incidents—perhaps including COVID-19 itself—continue to grow. State actors have traditionally been a source of considerable biosecurity risk, not least the Soviet Union’s shockingly large bioweapons program. But the unwieldiness and imprecision of bioweapons have meant that states remain unlikely to field large-scale biological attacks in the near term, even though the U.S. State Department expresses concerns about the potential bioweapons capabilities of North Korea, Iran, Russia, and China. On the other hand, nonstate actors—including lone wolves, terrorists, and apocalyptic groups—have an unnerving track record of attempting biological attacks, but with limited success due to the intrinsic complexity of building and wielding such delicate capabilities.
Today, fast-moving advancements in biotechnology—independent of AI developments—are changing many of these risks. A combination of new gene editing techniques, gene sequencing methods, and DNA synthesis tools is opening a new world of possibilities in synthetic biology for greater precision in genetic manipulation and, with it, a new world of risks from the development of powerful bioweapons and biological accidents alike. Cloud labs, which conduct experiments on others’ behalf, could enable nonstate actors by allowing them to outsource some of the experimental expertise that has historically acted as a barrier to dangerous uses. Though most cloud labs screen orders for malicious activity, not all do, and the constellation of existing bioweapons norms, conventions, and safeguards leaves open a range of pathways for bad actors to make significant progress in acquiring viable bioweapons. But experts’ opinions on the overall state of U.S. biosecurity range widely, especially with regard to fears of nonstate actors fielding bioweapons.
Those less concerned contend that even if viable paths to building bioweapons exist, the practicalities of constructing, storing, and disseminating them are far more complex than most realize, with numerous potential points of failure that concerned parties either fail to recognize or underemphasize. They also point to the absence of a major bioattack in recent decades, despite chronic warnings. A more pessimistic camp points to experiments that have demonstrated the seeming ease of successfully constructing powerful viruses using commercially available inputs, and to seemingly diminishing barriers to the knowledge and technical capabilities needed to create bioweapons.
Less controversial is the insufficiency of U.S. biodefenses to adequately address large-scale biological threats, whether naturally occurring, accidental, or deliberate. Despite COVID-19’s demonstration of the U.S. government’s inability to contain the effects of a major outbreak, the nation has made limited progress in mitigating the likelihood and potential harm of another, more dangerous biological catastrophe.
New AI capabilities may reshape the risk landscape for biothreats in several ways. AI is enabling new capabilities that might, in theory, allow advanced actors to optimize bioweapons for more precise effects, such as targeting specific genetic groups or geographies. Though such capabilities remain speculative, if realized they would dramatically alter states’ incentives to use bioweapons for strategic ends. Instead of risking their own militaries’ or populations’ health with unwieldy weapons, states could sabotage other nations’ food security or incapacitate enemies with public health crises from which they would be unlikely to rebound. Relatedly, the same techniques could create superviruses optimized for transmissibility and lethality, which may considerably expand the destructive potential of bioweapons. Tempering these fears, however, are several technical challenges that scientists would need to overcome—if they can be solved at all.
The most pressing concern for biological risks related to AI stems from tools that may soon be able to accelerate the procurement of biological agents by nonstate actors. Recent studies have suggested that foundation models may soon be able to help accelerate bad actors’ ability to acquire weaponizable biological agents, even if the degree to which these AI tools can currently help them remains marginal.2 Of particular concern are AI systems’ budding abilities to help troubleshoot where experiments have gone wrong, speeding the design-build-test-learn feedback loop that is essential to developing working biological agents. If made more effective, emerging AI tools could provide a boon to would-be bioweapons creators by more dynamically providing some of the knowledge needed to produce and use bioweapons, though such actors would still face other significant hurdles to bioweapons development that are often underappreciated.
AI could also impact biological risks in other ways. Technical faults in AI tools could fail to constrain foundation models from relaying hazardous biological information to potential bad actors, or inadvertently encourage researchers to pursue promising medicinal agents with unexpected negative side effects. Using AI to create more advanced automated labs could expose these labs to many of the risks of automation that have historically plagued other complex automated systems, and make it easier for nonspecialists to concoct biological agents (depending upon the safety mechanisms that automated labs institute). Finally, heavy investment in companies and nations seeking to capitalize on AI’s potential for biotechnology could be creating competition dynamics that prioritize speed over safety. These risks are particularly acute in relation to China, where a variety of other factors shaping the country’s biotech ecosystem further escalate the risk of costly accidents.
- Topic:
- Security, National Security, Biosecurity, Artificial Intelligence, and COVID-19
- Political Geography:
- Global Focus
4. Biosecurity in the Age of AI: What’s the Risk?
- Author:
- Janet Egan and Eric Rosenbach
- Publication Date:
- 11-2023
- Content Type:
- Policy Brief
- Institution:
- Belfer Center for Science and International Affairs, Harvard University
- Abstract:
- “The biggest issue with AI is actually going to be … its use in biological conflict,” according to former Google CEO Eric Schmidt.[1] And he’s not the only AI expert worried. In his testimony before the Senate Judiciary Committee Subcommittee on Privacy, Technology, and the Law, the CEO of Anthropic, Dario Amodei, warned that in just two to three years, AI has the potential to “greatly widen the range of actors with the technical capability to conduct a large-scale biological attack.”[2] OpenAI’s Sam Altman has called for regulation on AI models “that could help create novel biological agents.”[3] President Biden’s recent Executive Order on Safe, Secure and Trustworthy Development and Use of Artificial Intelligence explicitly tasks relevant agencies with assessing the ways in which AI can increase, and potentially help mitigate, biosecurity risks.[4] But what exactly has experts and officials so worried?
- Topic:
- Defense Policy, Science and Technology, Biosecurity, and Artificial Intelligence
- Political Geography:
- Global Focus
5. Eight Commonsense Actions on Biosafety and Biosecurity
- Author:
- J. Stephen Morrison and Michaela Simoneau
- Publication Date:
- 11-2023
- Content Type:
- Policy Brief
- Institution:
- Center for Strategic and International Studies (CSIS)
- Abstract:
- Today, there is a shared sense of vulnerability and a shared resolve across political divides to better protect the United States and the world against the accidental release of biological agents (biosafety), deliberate misuse of biological agents (biosecurity), and naturally occurring spillover of dangerous pathogens. Biosafety and biosecurity are critically important areas where individual freedom, ethics, scientific progress, and national security converge. As biological research and discovery accelerate at an unprecedented pace, they generate new risks that must be managed responsibly. The CSIS Bipartisan Alliance for Global Health Security’s Working Group on R&D Innovation advocates select, commonsense actions to strengthen U.S. and global biosafety and biosecurity: clarify U.S. policies and policy leadership within the White House; increase financial and diplomatic investments; update the rules around synthetic products and viral discovery work; and invest in research on basic lab protections and the new risks of artificial intelligence. Together, these timely, affordable steps can attract broad political support and promote a safer, more secure, and more innovative global bioeconomy.
- Topic:
- Security, Defense Policy, Infectious Diseases, Biosecurity, COVID-19, and Global Health
- Political Geography:
- Global Focus
6. Re-thinking Biological Arms Control for the 21st Century
- Author:
- Filippa Lentzos
- Publication Date:
- 07-2019
- Content Type:
- Journal Article
- Journal:
- Fletcher Security Review
- Institution:
- The Fletcher School, Tufts University
- Abstract:
- International treaties prohibit the development and use of biological weapons. Yet concerns about these weapons have endured and are now escalating. It is high time to take a hard look at technical and political developments and consider how the international security policy community should respond.
A major source of the growing concern about future bioweapons threats stems from scientific and technical advances. Innovations in biotechnology are expanding the toolbox to modify genes and organisms at a staggering pace, making it easier to produce increasingly dangerous pathogens. Disease-causing organisms can now be modified to increase their virulence, expand their host range, increase their transmissibility, or enhance their resistance to therapeutic interventions.[1] Scientific advances are also making it theoretically possible to create entirely novel biological weapons,[2] by synthetically creating known or extinct pathogens or entirely new pathogens.[3] Scientists could potentially enlarge the target of bioweapons from the immune system to the nervous system,[4] genome, or microbiome,[5] or they could weaponize ‘gene drives’ that would rapidly and cheaply spread harmful genes through animal and plant populations.[6]
Concurrent developments in other emerging technologies are also impacting potential future biological weapons threats. Developments in artificial intelligence and machine learning could speed up identification of harmful genes or DNA sequences. Artificial intelligence and machine learning could also potentially enable much more targeted biological weapons that would harm specific individuals or groups of individuals based on their genes, prior exposure to vaccines, or known vulnerabilities in their immune system.[7] Big Data and ‘cloud labs’ (completely robotized laboratories for hire) facilitate this process by enabling massively scaled-up experimentation and testing, significantly shortening ‘design-test-build’ timeframes and improving the likelihood of obtaining specificity or producing desired biological functionality.[8]
Other developments provide new or easier ways to deliver pathogens or biological systems. Nanotechnology could potentially create aerosolized nanobots dispersing lethal synthetic microbes or chem-bio hybrids through the air,[9] or in vivo nanobots releasing damaging payloads inside human bodies.[10] Aerosol or spraying devices attached to swarms of small unmanned aerial vehicles, or drones, could be another potential means to disperse biological agents. Additive manufacturing, or 3D printing, could circumvent barriers imposed by national export control systems on controlled laboratory equipment or dispersal devices.
- Topic:
- Security, Health, Science and Technology, Treaties and Agreements, Biosecurity, and Weapons
- Political Geography:
- Global Focus
7. Developing the Blue Economy in Caribbean and Other Small States
- Author:
- Cyrus Rustomjee
- Publication Date:
- 03-2016
- Content Type:
- Policy Brief
- Institution:
- Centre for International Governance Innovation (CIGI)
- Abstract:
- The world’s oceans are crucial to human life. They cover 71 percent of the earth’s surface and contain 97 percent of the earth’s water (Oceanic Institute 2016); provide vital ecosystem services; serve as a growing source of renewable energy; and make crucial contributions to global food production and food security through the provision of food, minerals, and nutrients. Fish provide 4.3 billion people with about 15 percent of their intake of animal protein (UN Food and Agriculture Organization [FAO] 2014b). Over 3.1 billion people live within 100 km of the ocean or sea in about 150 coastal and island nations (FAO 2014a), and global ocean economic activity is estimated at US$3–5 trillion (FAO 2014b). Oceans and seas serve as waterways for global trade, with more than 90 percent of global trade carried by sea (International Maritime Organization 2012). Some 880 million people depend on the fisheries and aquaculture sector for their livelihoods (ibid.).
Recognition of the services and resources provided by oceans has accelerated in recent years, spurred by the opportunities and challenges posed by a rapidly growing global population, increasing global demand for food and energy, advances in technology, and changes in patterns of global trade and human consumption. Developed countries have expanded fisheries, tourism and other oceanic and maritime industries; extended mineral exploration and extraction; and scaled up ocean-related scientific, technological and industrial research. Using increased knowledge of marine biodiversity, they have developed new value chains in pharmaceuticals, health care and aquaculture; and many have established integrated national ocean economy strategies, bringing together the regulatory, environmental, spatial, policy, institutional, industrial and other factors influencing their ability to exploit maritime resources.
- Topic:
- Environment, Political Economy, Maritime Commerce, Biosecurity, and Natural Resources
- Political Geography:
- Caribbean and Global Focus