October 12, 2022

Artificial Intelligence and Arms Control

Introduction

Advances in artificial intelligence (AI) present immense opportunities for militaries around the world. As the potential of AI-enabled military systems grows, some activists are sounding the alarm, calling for restrictions or outright bans on some AI-enabled weapon systems. Conversely, skeptics of AI arms control argue that as a general-purpose technology developed in the civilian context, AI will be exceptionally hard to control. AI is an enabling technology with countless nonmilitary applications; this factor differentiates it from many other military technologies, such as landmines or missiles. Because of its widespread availability, an absolute ban on all military applications of AI is likely infeasible. There is, however, potential for prohibiting or regulating specific use cases.

The international community has, at times, banned or regulated weapons with varying degrees of success. In some cases, such as the ban on permanently blinding lasers, arms control has worked remarkably well to date. In other cases, however, such as attempted limits on unrestricted submarine warfare or aerial bombardment of cities, states failed to achieve lasting restraint in war. States’ motivations for controlling or regulating weapons vary. States may seek to limit the diffusion of a weapon that is particularly disruptive to political or social stability, contributes to excessive civilian casualties, or causes inhumane injury to combatants.

This paper examines the potential for arms control for military applications of AI by exploring historical cases of attempted arms control, analyzing both successes and failures. The first part of the paper explores existing academic literature related to why some arms control measures succeed while others fail. The paper then proposes several criteria that influence the success of arms control. Finally, it analyzes the potential for AI arms control and suggests next steps for policymakers. Detailed historical cases of attempted arms control—from ancient prohibitions to modern agreements—can be found in appendix A in the PDF available for download. For a summary table of historical attempts at arms control, see appendix B.

History teaches us that policymakers, scholars, and members of civil society can take concrete steps today to improve the chances of successful AI arms control in the future. These include taking policy actions to shape the way the technology evolves and increasing dialogue at all levels to better understand how AI applications may be used in warfare. Any AI arms control will be challenging. There may be cases, however, where arms control is possible under the right conditions, and small steps today could help lay the groundwork for future successes.

Understanding Arms Control

“Arms control” is a broad term that can encompass a variety of different actions. Generally, it refers to agreements that states make to control the research, development, production, fielding, or employment of certain weapons, features of weapons, applications of weapons, or weapons delivery systems.

Types of Arms Control

Arms control can occur at many stages in the development and use of a weapon (see figure 1). Nonproliferation regimes, such as the Nuclear Nonproliferation Treaty (NPT), aim to prevent access to the underlying technology behind certain weapons. (See appendix C in the PDF for a list of official treaty names and informal titles and acronyms.) Bans, such as those on land mines and cluster munitions, allow access to the technology but prohibit developing, producing, or stockpiling the weapons. Arms-limitation treaties permit production; they simply limit the quantities of certain weapons that countries can have in peacetime. Other agreements regulate the use of weapons in war, restricting their use in certain ways or prohibiting use entirely.

Figure 1. Arms Control Measures Across the Life Cycle of Weapons Development and Use

Arms control can be implemented through a variety of means, including legally binding treaties, customary international law that arises from state practice over time, or non-legally-binding instruments. Successful arms control can even be carried out through tacit agreements that are never explicitly stated between states but nevertheless result in mutual restraint.

Arms control among states is the exception rather than the rule. Most of the time, states compete in military technologies without either formal or informal mechanisms of arms control to limit their competition. Several factors make arms control challenging. Arms control requires some measure of coordination and trust among states, and the circumstances in which arms control is most needed—intense militarized competition or war—are the ones in which coordination and trust are most difficult. The kind of monitoring and verification that might enable trust is also a challenge, because the same transparency that might allay a competitor’s fears about weapons development might also reveal vulnerabilities in one’s own military forces, making states reluctant to adopt such measures.

Despite these pressures, states have at times succeeded in restraining weapons development or use. Even at the height of total war, states have sought mutual restraint and refrained from using certain weapons, features of weapons, or tactics that would escalate fighting or unnecessarily increase suffering. The key question for this paper is not why arms control is rare, but why it succeeds in some instances and not others.

Factors that Influence the Success or Failure of Arms Control

Sean Watts and Rebecca Crootof analyzed historical cases of arms control to identify which social, legal, and technological factors influence whether arms control succeeds.

Watts identifies six criteria that he argues affect a weapon’s tolerance or resistance to regulation: effectiveness, novelty, deployment, medical compatibility, disruptiveness, and notoriety. An effective weapon that provides “unprecedented access” to enemy targets and has the capacity to ensure dominance is historically resistant to regulation. There is a mixed record for regulating novel weapons or military systems throughout history. Countries have pursued regulation of certain new weapons or weapons delivery systems (e.g., aerial bombardment) while also resisting regulation for other novel military systems (e.g., submarines). Weapons that are widely deployed—“integrated into States’ military operations”—tend to be resistant to arms control. Weapons that cause “wounds compatible with existing medical protocols” in military and field hospitals are historically difficult to ban or regulate. Powerful nations have historically tried to regulate or ban weapons that are “socially and militarily disruptive” out of fear that such weapons could upend existing global or domestic power dynamics. Campaigns by civil society groups or widespread disapproval from the public can increase notoriety, making a weapon potentially more susceptible to arms control.

Crootof’s model overlaps with Watts’s, but her focus is on weapons bans as opposed to arms control more generally. She identifies eight factors that influence the success of a weapons ban. Weapons that cause superfluous injury or unnecessary suffering or that are inherently indiscriminate are more likely to be banned. Countries tend to resist regulating or banning a weapon that has demonstrated military or strategic utility. Weapons that are unique or provide a country with the “only means of accomplishing certain goals” are difficult to regulate or ban. A ban that is narrow and clearly defines what is and is not permitted is more likely to be effective. An existing treaty or regulation on a weapon may make future arms control more successful, unless technological developments increase the weapon’s military utility. Advocacy groups and public opinion may influence countries’ consideration of a weapons ban, although, as Crootof notes, “this factor is far from decisive.” Finally, the success of a weapons ban is influenced by both the total number of countries willing to support the ban and which countries agree to sign on to it.

Watts and Crootof agree that a weapon’s effectiveness is arguably the most important factor that influences the success of arms control. Although their interpretations differ slightly, both argue that a weapon with uniquely valuable capabilities is difficult to regulate. Watts focuses on the social or military disruptiveness of a weapon—the capacity of the weapon to upset the existing balance of power. Although powerful countries may seek to restrain disruptive weapons, he argues that these efforts rarely succeed. Crootof argues that a weapon unique in its “ability to wreak a certain type of devastation” or accomplish certain military objectives is likely to be resistant to arms control.

The next section will build upon their models to present a slightly revised approach toward understanding factors that affect the success or failure of arms control for different technologies.

Desirability and Feasibility of Arms Control

Whether arms control succeeds or fails depends on both its desirability and its feasibility. The desirability of arms control encompasses states’ calculation of a weapon’s perceived military value versus its perceived horribleness (because it is inhumane, indiscriminate, or disruptive to the social or political order). Thus, desirability of arms control is a function of states’ desire to retain a weapon for their own purposes balanced against their desire to restrain its use by their adversaries.

The feasibility of arms control—the sociopolitical factors that influence its success—includes states’ ability to achieve clarity on the degree of restraint that is desired, states’ capacity to comply with an agreement to restrain use, states’ capacity to verify compliance, and the number of states needed to secure cooperation for an agreement to succeed. Arms control has the best chance of success when both desirability and feasibility are high.

Arms control is deemed successful when state behavior is restrained—in weapons development, quantity produced, deployment posture, or use. For the purposes of this paper, arms control agreements that fail to restrain state behavior are not considered successful. In rare instances, restraint occurs by tacit agreement, without any formal treaty or other mechanism. Generally, however, formal agreements are a useful coordination mechanism between states for reaching clarity on what is permitted and what is not. In many cases, success exists on a spectrum. Few arms control agreements are 100 percent successful, with zero violations. Some of the most successful agreements, such as modern bans on chemical and biological weapons or limits on the proliferation of nuclear weapons, have some exceptions and violators. Other agreements are successful only for a period of time, after which technology or the political environment changes in a way that causes them to collapse. Nevertheless, even partially successful agreements can be valuable in reducing harm by improving stability, reducing civilian casualties, or reducing combatant suffering.

Desirability of Arms Control

A weapon that is effective, grants unique access or a capability, or provides a decisive battlefield advantage has high military value. Although relinquishment is not impossible, states will be reluctant to give up a weapon that provides a critical advantage or a unique capability even if the weapon arguably causes other significant harm. A weapon’s military value is the most important factor that influences the desirability of arms control. Above all, states want to ensure their own security.

Weighed against a weapon’s value is its perceived horribleness—meaning the type of injury it causes, its stability risks, its impact on the social or political order, or its indiscriminate nature. Although most successful bans are against weapons that are not particularly effective, it is an oversimplification to suggest that bans are not feasible for any weapon with military value. War is horrible, and states have at times sought to temper its horror through arms control measures that restrain their actions or capabilities.

States have often sought to restrain weapons that increase combatant or civilian suffering in war beyond that required for battlefield effectiveness. States have at times restricted weapons that cause superfluous injury or unnecessary suffering to combatants, for example, if such weapons are not deemed to be uniquely effective. A bullet that leaves glass shards in the body, for example, causes superfluous injury beyond that required to disable combatants and win on the battlefield, because glass shards are not detectable by x-rays, and are therefore more difficult to remove from wounded personnel. (Weapons that leave undetectable fragments in the body are prohibited under the Convention on Certain Conventional Weapons Protocol I.) States have also attempted arms control for weapons or weapons delivery systems that are difficult to use in a discriminate manner to avoid civilian casualties. International humanitarian law already prohibits weapons that cause superfluous injury and indiscriminate attacks, yet states have sometimes coordinated on regulations that identify which specific weapons are worthy of special restraint.

Throughout history, those with political power have sought to control disruptive weapons, such as early firearms or the crossbow, that have threatened the existing political or social order. States have also tried to regulate weapons that could cause undue instability in crisis situations, such as intermediate-range ballistic missiles, anti-ballistic missile systems, or space-based weapons of mass destruction (WMD).

Weapons that are perceived as destabilizing because they could provoke an arms race may also be susceptible to some form of regulation. For instance, a primary motivation for signatories to the 1922 Washington Naval Treaty was a desire to avoid a costly naval arms race.

A key factor in a state’s continued desire for arms control is reciprocity. While there are myriad threats and inducements that compel states to comply with arms control agreements in times of peace, it is not international opprobrium that restrains militaries in the heat of war—it is the fear of enemy reprisal.

Feasibility of Arms Control

Whereas the desirability of arms control encompasses the criteria that make states more or less inclined to attempt some form of control, feasibility includes the factors that determine whether long-term, successful arms control is possible.

An essential ingredient for effective arms control is clarity among states about the degree of restraint that is desired. Lines clearly delineating what is and is not permitted must exist for arms control to succeed; ambiguous agreements run the risk of a slippery slope to widespread use. Simplicity is key. Agreements with a clear focal point, such as the “no gas” prohibition or the ban on permanently blinding lasers, are more effective, because states have a clear understanding of what is expected of their own behavior and that of their adversaries.

A closely related issue is whether states are able to comply with an agreement to restrain use. In the early 20th century, states sought to limit the use of submarines and aerial bombardment, but the practical realities of how submarines and aircraft were employed made it difficult for states to comply with the agreed-upon limits. States initially showed restraint in wartime, but that restraint did not last once the practical difficulties of complying in war became apparent.

Arms control’s feasibility is also affected by states’ ability to verify whether other parties are complying with an agreement. Verification can be accomplished through a formal regime, but it does not have to be. The key to verification is ensuring sufficient transparency. For weapons that can be developed in secret—such as chemical or nuclear weapons—transparency may need to be assured through a verification regime. In other cases, countries may adopt less formal means of verifying other states’ compliance, such as relying on national intelligence collection.

The overall number of countries needed for an agreement to succeed also influences the feasibility of arms control. Feasibility increases when fewer countries are necessary for arms control to succeed. If the polarity of the international system causes military power to be concentrated in a small number of states, getting those states to agree is crucial to success. Despite their mutual hostility, the Soviet Union (USSR) and the United States concluded a number of successful arms control treaties during the Cold War, some of which were bilateral agreements and some of which included many states but were led by the United States and USSR. Alternatively, in some cases, few states may be needed for an agreement to be successful simply because the weapons in question—such as nuclear weapons, long-range ballistic missiles, or space-based weapons—are accessible only to those few states. Diffuse weapons are more difficult to control, and more nations need to reach agreement for arms control on them to be lasting and successful. Which countries support an agreement is also important. As Rebecca Crootof explains, “If a treaty ban is ratified by the vast majority of states in the international community, but not by states that produce or use the weapon in question, it would be difficult to argue that the ban is successful.”

Finally, arms control is often path-dependent, with successful regulations piggybacking on prior successful regulations of similar technologies. Modern bans on chemical and biological weapons build on long-standing ancient prohibitions on poison. The 2008 ban on cluster munitions was likely enabled by the successful 1997 ban on antipersonnel landmines. Cold War–era strategic arms control treaties likely had a snowballing effect, with successful agreements increasing the odds of future success.

The criteria within these two dimensions—desirability and feasibility—capture the most important factors that affect the success or failure of arms control. While not all-encompassing, these factors appear to be the most significant when examining the historical record of attempted arms control. If historical experience turns out to be a useful guide for the future, then these factors are likely to influence the desirability and feasibility of arms control for new and emerging technologies, including military applications of AI.

Why Some Arms Control Measures Succeed and Others Fail

The factors affecting the desirability and feasibility of arms control combine in ways that make arms control successful in some cases and not others. States find arms control more desirable for some weapons than for others because those weapons are seen as more horrible, less useful, or both. In some cases, states have sought arms control that ultimately proved infeasible, and arms control failed.

A state’s calculation of the desirability of arms control is best exemplified by the response to nuclear weapons versus chemical weapons. Nuclear weapons are undeniably more horrible—they cause greater suffering, more civilian casualties, and lasting environmental impact. Nuclear weapons are uniquely effective, however, giving states that wield them a decisive battlefield advantage. It’s the military value of nuclear weapons that has prevented the nonproliferation community from achieving worldwide nuclear disarmament.

The result of this dynamic is that many examples of successful arms control are for weapons that are not particularly effective. There are instances, however, where states have chosen to place restrictions on effective weapons. If the military value of a weapon were the only factor, far more states would use chemical weapons on the battlefield. If nothing else, the threat of chemical weapons in war forces the enemy to fight in protective gear, slows down enemy troops, and reduces their effectiveness. Fighting in a gas mask is hard. It restricts situational awareness, makes it difficult to breathe, and diminishes firing accuracy. This alone is valuable. Despite these advantages, states have, for the most part, successfully controlled the use of chemical weapons in war. For most states, their military advantage is outweighed by the increased suffering they bring and the fear that using them would only cause adversaries to respond in kind.

There are many examples of states banning weapons seen as causing particularly problematic injuries to combatants, especially when these weapons have only marginal military value. For such weapons, the perceived horribleness outweighs their effectiveness, increasing the desirability of arms control. Germany’s sawback bayonet in World War I reportedly caused grievous injuries to combatants because of its serrated edge for sawing wood. Germany unilaterally withdrew the bayonet after reports that British and French troops would torture and kill German soldiers found with the weapon.

A novel mechanism of injury can also increase the perception of a weapon’s horribleness, increasing the desirability of its regulation. In the case of the ban on permanently blinding lasers, the type of injury (permanent blinding) is perceived to cause unnecessary suffering. It is not obvious why being blinded by a laser is worse than being killed, but the prohibition remains. The permanently blinding laser ban also owes its success, however, to the fact that it is narrowly scoped enough that it does not inordinately constrain military effectiveness. The ban permits laser “dazzlers” that temporarily blind an individual but do not cause lasting damage. Desirability for arms control is high in this case because militaries can use lasers to cause a similar battlefield effect, temporarily incapacitating the enemy, with lower levels of suffering and harm to combatants.

The process by which some weapons are deemed inhumane while others are allowable is path-dependent and not always logical. Long-standing prohibitions against poison date back to ancient times and likely influenced the success of modern-day bans against chemical and biological weapons. Ancient prohibitions on fire-tipped weapons also appear to have carried over to modern regulations on inflammable bullets and incendiary weapons. It’s unclear why death by poison or a fire-tipped weapon is worse than many other means of death in war. These prohibitions, however, are enduring and cut across regions and cultures.

Path dependence has often enabled bans on weapons perceived to cause especially problematic injuries, even if those weapons are viewed as legitimate in other settings. Expanding bullets are regularly used for personal defense and by law enforcement, yet many states forswear them because of the 1899 Hague Declaration ban, which itself built on the 1868 ban on exploding bullets. Similarly, riot control agents are permissible for use against rioting civilians but, perversely, are banned for use against combatants because they fall under the Chemical Weapons Convention.

Countries have also regulated weapons that are seen as destabilizing or are difficult to use discriminately, and these efforts are more likely to succeed when additional factors enhance the feasibility of regulation. Arms control measures on destabilizing weapons, such as the Seabed Treaty, Outer Space Treaty, 1972 Anti-Ballistic Missile (ABM) Treaty, and 1987 Intermediate-Range Nuclear Forces (INF) Treaty, have succeeded (at least temporarily), particularly in cases where the overall number of countries needed for cooperation was limited, making arms control more feasible. Prohibitions on expanding warfare into new domains, such as weapons on the moon or in Antarctica, have succeeded only when a clear focal point existed and the military value of deploying the weapons was low, making both the desirability and feasibility of arms control higher. Regulations on less-discriminate weapons—ones that are more difficult to use in a targeted fashion against combatants without also causing civilian harm—have succeeded in the past, but only when a weapon was banned entirely, thereby increasing the feasibility of control.

Clarity and simplicity of the agreement are essential for making arms control feasible. States need agreements with clear focal points to effectively coordinate with one another. Agreements that ban a weapon, such as poisonous gas or blinding lasers, are typically more successful than complex regulations that govern specific uses. Complete bans on weapons such as cluster munitions, antipersonnel land mines, exploding bullets, chemical and biological weapons, and blinding lasers have largely been successful because the bans were clearly defined and the weapons were prohibited entirely, not just in certain circumstances. Conversely, arms control measures on weapons and delivery systems, such as air-delivered weapons and submarine warfare, that permitted their use in some circumstances but not others ultimately failed. In wartime, states expanded their use to prohibited targets.

Notable exceptions to this rule on simplicity are the bans on land mines and cluster munitions. Although the treaties seem simple enough on the surface—“never under any circumstances to use …”—the more complicated rules are concealed in the weapons’ definitions. The way these treaties were crafted suggests that the drafters understood the normative power of a complete prohibition to help stigmatize a weapon. Complex exceptions that were necessary for states to reach agreement were pushed to the fine print.

Not all treaties have simple rules, but successful treaties that have complex regulations often have other factors that favor success. Many of the bilateral arms control agreements between the United States and the Soviet Union/Russia, such as the INF Treaty, ABM Treaty, Strategic Arms Limitation Talks (SALT) I and II, Strategic Offensive Reductions Treaty (SORT), Strategic Arms Reduction Treaty (START), and New START, have complicated rules, but only two parties are needed to reach agreement. Additionally, these treaties apply to the production, stockpiling, or deployment of weapons in peacetime rather than wartime use, when the exigencies of war might increase pressures for defection. Complicated rules may be more viable in peacetime than wartime.

Although states have often codified arms control agreements in treaties, an agreement’s legal status seems to have little to no bearing on its success. Throughout history, countries have violated legally binding treaties, especially in wartime. Violations include the use of chemical weapons in World War I and the aerial bombardment of undefended cities in World War II. States have also complied with informal, non-legally-binding agreements, such as the 1985 Australia Group, which prevents the export of technologies used to produce chemical or biological weapons. There are even a few instances of tacit restraint among states without a formal agreement at all, such as the United States’ and Soviet Union’s decision to refrain from pursuing anti-satellite (ASAT) weapons and neutron bombs.

Integral to a state’s continued adherence to an agreement is not the threat of legal consequences but the fear of reciprocity. Adolf Hitler refrained from ordering the bombing of British cities in the initial stages of World War II not because of legal prohibitions against doing so, but because of the fear that Britain would respond in kind (which it did after German bombers hit central London by mistake at night). Before the 1925 Geneva Gas Protocol was ratified, major powers, including the United Kingdom, France, and the USSR, declared that the protocol would cease to be binding if a nation failed to abide by it. Even if the horribleness of a weapon far outweighs its utility, if the fear of reciprocity does not exist, states may use the weapon regardless. Syrian leader Bashar al-Assad used poisonous gas against his own people without fear of retribution. Germany used poisonous gas extensively in World War II, but never against powers that could retaliate in kind. When mutual restraint prevails, it is because state behavior is held in check either by internal norms of appropriateness or by fear of how an adversary may respond.

When restraint depends upon reciprocity, states need some mechanism to verify that others are complying with an agreement. For some weapons, such as those that can be developed in secret, formal verification regimes may be necessary. Other cases may not require formal verification but do require some form of transparency. The Chemical Weapons Convention and the NPT have inspection measures in place to verify signatories’ compliance. The Outer Space Treaty requires that states allow others to view launches and visit installations on the moon. While the prohibitions on land mines and cluster munitions do not have formal inspection regimes, they do require states to be transparent about their stockpile elimination.

Arms control measures do not require formal or institutional verification to succeed, however. A host of arms control agreements—the 1899 ban on expanding bullets, the 1925 Geneva Gas Protocol, the Convention on Certain Conventional Weapons (CCW), and SORT—have no formal verification regimes in place. In some cases, states verify each other’s compliance through their own observations. For the Environmental Modification Convention, Biological Weapons Convention, and Seabed Treaty, states can turn to the U.N. Security Council if they believe a signatory is violating the agreement. The Strategic Arms Limitation Talks I and II agreements and the ABM Treaty stated that the United States and Soviet Union would use their own means of verifying compliance, such as satellite imagery. The Washington Naval Treaty had no verification provision, perhaps on the assumption that states could observe capital ship construction through their own means. The essential element is the ability of states to observe, through any number of means, whether or not a competitor is in compliance with the terms of the agreement.

The one remaining factor that undergirds all the rest is time. Over time, the desirability and feasibility of arms control are subject to change. Technology advances and evolves, making some weapons or capabilities, such as air power, more valuable. Alternatively, a weapon—for example, chemical weapons—may be stigmatized over time if it is perceived to cause unnecessary suffering or does not provide a decisive battlefield advantage. It is very difficult to predict the developmental pathway of emerging technologies and their countermeasures. The 1899 Hague Declarations crafted regulations around a host of new weapons—balloon-delivered weapons, expanding bullets, and gas-filled projectiles—that were correctly anticipated to be problematic. Yet the regulations states crafted to restrain these technologies were built on assumptions that turned out to be false. For air-delivered weapons, Hague delegates failed to fully anticipate the futility of defending against air attacks. Expanding bullets were banned, even though their use became normalized in personal defense and law enforcement settings. And Hague delegates failed to ban poison gas in canisters, creating a loophole that Germany exploited in early gas use in World War I.

The difficulty in anticipating how technologies will evolve is a challenge for regulating emerging technologies. The fact that a technology is new complicates the desirability of arms control in several ways. Some states may favor preemptively restricting a nascent technology or weapon, particularly if they fear a potential arms race. In other instances, however, states may be reluctant to give up a capability whose military value isn’t fully known. States may also fail to comprehend the horror of a weapon until it is deployed in battle. Countries understood the potential harm of air-delivered weapons in civilian areas, but the horror of poisonous gas and nuclear weapons was not fully realized until their use.

Even if states desire arms control for emerging technologies, attempted regulations may not prove feasible if they misjudge the way the technology evolves. Complicated rules (in the fine print) are possible for bans on weapons that already exist, like cluster munitions and land mines. For preemptive bans on new weapons, however, states are unlikely to successfully predict the details of how the technology will evolve. Preemptive regulations of emerging technologies are more likely to succeed when they focus on the intent of a weapon, such as the ban on lasers intended to cause permanent blinding, rather than technical details that may be subject to change.

Even when factors support the desirability and feasibility of arms control, success is not guaranteed. States may choose not to comply. Mutual restraint may collapse. A weapon may prove too valuable militarily, leading states to forgo arms control to retain a potentially war-winning weapon. These challenges have been faced for centuries, and they have concrete implications for future attempts at regulating emerging technologies, such as AI. Countries must keep them in mind as they reckon with how and when to regulate or restrict certain uses of military AI.

Implications for Artificial Intelligence

AI technology poses challenges for arms control for a variety of reasons. AI technology is diffuse, and many of its applications are dual use. As an emerging technology, its full potential has yet to be realized—which may hinder efforts to control it. Verification of any AI arms control agreement would also be challenging; states would likely need to develop methods of ensuring that other states are in compliance to be comfortable with restraining their own capabilities. These hurdles, though significant, are not insurmountable in all instances. Under certain conditions, arms control may be feasible for some military AI applications. Even while states compete in military AI, they should seek opportunities to reduce its risks, including through arms control measures where feasible.

AI as a General-Purpose Technology

AI is a general-purpose enabling technology akin to electricity or the internal combustion engine, rather than a discrete weapon such as the submarine, the expanding bullet, or the blinding laser. This aspect of the technology poses several challenges from an arms control standpoint.

First, AI technology is dual use, with both civilian and military applications, and thus is likely to be widely available. The diffuse nature of the technology makes arms control challenging in two ways. It makes a nonproliferation regime that would propose to “bottle up” AI and reduce its spread less likely to succeed. Additionally, the widespread availability of AI technology means that many actors would need to comply with an arms control regime for it to be effective. All things being equal, coordination is likely to be more challenging with a larger number of actors.

Second, the general-purpose nature of AI technology could make it more difficult to establish clear focal points for arms control. This is particularly true given that the very definition of AI is fuzzy and open to many interpretations. “No AI” lacks the clarity of “no gas,” because whether a given technology even qualifies as “AI” may be contested. In practice, AI is such a broad field that declaring “no AI” would be analogous to states deciding “no industrialization” in the late 19th century. Although states attempted to regulate or ban many specific technologies that emerged from the industrial revolution (including submarines, aircraft, balloons, poison gas, and exploding or expanding bullets), a pledge by states to simply not use any industrial-era technologies in warfare would have been untenable. Nor, given the dual-use nature of civilian industrial infrastructure, is it at all clear where such lines would or could have been drawn, even if they had been desirable. Could civilian railways, merchant steamships, or civilian trucks have been used to transport troops? Could factories have been used to manufacture weapons? For AI technology today, many military applications are likely to be in non-weapons uses that improve business processes or operational efficiencies, such as predictive maintenance, image processing, or other forms of predictive analytics or data processing that may help streamline military operations. These AI applications could enhance battlefield effectiveness by improving operational readiness levels, accelerating deployment timelines, shortening decision cycles, improving situational awareness, or providing many other advances. Yet where the line should be drawn between acceptable military AI uses and unacceptable uses could be murky, and states would need clarity for any agreement to be effective.

States’ experience with arms control for technologies that emerged during the industrial revolution is a useful historical guide because states did attempt to regulate (and succeeded in some cases) specific applications of general-purpose industrial technologies, including the internal combustion engine (submarines and airplanes) and chemistry (exploding bullets and poison gas). These efforts were not always successful, but not because states were unable to define what a submarine or airplane was, nor even because states could not limit their civilian use (which was not necessary for the bans to succeed). Rather, the reasons for failure had to do with the specific form of how those weapons were used in warfare. Had the offense-defense balance between bombers and air defenses, or submarines and merchant ships, evolved differently, arms control for those weapons might have been more successful. (Alternatively, had states attempted to ban these weapons entirely, rather than regulate their use in war, arms control for aircraft and submarines might have been successful.)

This analysis suggests that although banning all military AI applications may be impractical for many reasons, there is ample historical evidence that states may be able to agree to limit specific military applications of AI. The question then is which specific military AI applications may meet the necessary criteria for desirability and feasibility of arms control. Because AI could be used for many applications, there may be certain specific uses that are seen as particularly dangerous, destabilizing, or otherwise harmful. AI applications relating to nuclear stability, autonomous weapons, and cybersecurity have already been the focus of attention from scholars, and there may be other important AI applications that merit additional consideration. Even within particular domains of interest, the desirability and feasibility of arms control for any specific applications may depend a great deal on the way the technology is applied. Bans or regulations could be crafted narrowly against specific instantiations of AI technology that are seen as particularly problematic, analogous to state restraint on bullets that are designed to explode inside the body, rather than all exploding projectiles.

AI as an Emerging Technology

One of the difficulties in anticipating which specific AI applications may merit further consideration for arms control is that, as is the case with other emerging technologies, it is not yet clear exactly how AI will be used in warfare. This problem is not new. States struggled in the late 19th and early 20th centuries to successfully control new industrial-age technologies precisely because they were continually evolving.

There are ways in which arms control is both easier and harder for emerging technologies. On the one hand, preemptive bans on new technologies can be easier in some respects, because states are not giving up a weapon that is already integrated into their militaries, upon which they depend for security (and for which there may be internal bureaucratic advocates). On the other hand, regulating emerging technologies can sometimes be more challenging. The cost-benefit tradeoff for militaries is unknown, because it may be unclear how militarily effective a weapon is. Similarly, its degree of horribleness may not be known until a weapon is used, as was the case for poison gas and nuclear weapons. States may be highly resistant to restraining development of a weapon that appears to be particularly valuable.

Militaries’ perception of AI as a “game-changing” technology may be a hurdle in achieving state restraint. Militaries around the world are investing in AI and may be reluctant to place some applications off limits. The hype surrounding AI—much of which may not actually match militaries’ investments in practice—may be an obstacle to achieving arms control. Additionally, the perception of AI systems as yielding superhuman capabilities, precision, reliability, or efficacy may reduce perceptions that some AI applications may be destabilizing or dangerous.

Perceptions of AI technology, even if they are unfounded, could have a significant impact on states’ willingness to consider arms control for military AI applications. Over time, these perceptions are likely to become more aligned with reality as states field and use military AI systems. In some cases, though, even if some AI applications are eventually seen as worthy of arms control, it could be difficult to put the genie back into the bottle if they have already been integrated into states’ military forces or used on the battlefield.

Challenges in Verifying Compliance

Even if states can agree on clear focal points for arms control and the cost-benefit tradeoff supports mutual restraint, verifying compliance with any arms control regime is critical to its success. One complication with AI technology is that, as is the case with other forms of software, the cognitive attributes that an AI system possesses are not easily externally observable. A “smart” bomb, missile, or car may look the same as a “dumb” system of the same type. The sensors that an autonomous vehicle uses to perceive its environment, particularly if it is engaged in self-navigation, may be visible, but the particular algorithm used may not be. This is a challenge when considering arms control for AI-enabled military systems. States may not be able to sustain mutual restraint if they cannot verify that others are complying with the agreement.

There are several potential approaches that could be considered in response to this problem: states could adopt intrusive inspections, restrict physical characteristics of AI-enabled systems, regulate observable behavior of AI systems, and restrict compute infrastructure.

Adopt intrusive inspections. States could agree to intrusive inspection regimes that permit third-party observers access to facilities and to specific military systems to verify that their software complies with an AI arms control regime. AI inspection regimes would suffer from the same transparency problem that arises for other weapons: inspections risk exposing vulnerabilities in a weapon system to a competitor nation. Future progress in privacy-preserving software verification might help states overcome this challenge, however, by verifying the behavior of a piece of software without exposing private information. Or states might simply accept that the benefits to verification outweigh the risks of increased transparency; there are precedents for intrusive inspection regimes. One challenge with inspections is that if the difference between the permitted and banned capability lay in software, a state could simply update its software after inspectors left. Software updates could be done relatively quickly and at scale, far more easily than building more missiles or nuclear enrichment facilities. In principle, states might be able to overcome this problem through the development of more advanced technical approaches in the future, such as continuous monitoring of software to detect changes or by embedding functionality into hardware. Unless states can confidently overcome the challenge of fast and scalable post-inspection updates to AI systems, intrusive inspection regimes will remain a weak solution for verifying compliance, even if states were willing to agree to such inspections.
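
To illustrate one narrow piece of the “continuous monitoring” idea mentioned above, the sketch below (a hypothetical Python example, not an actual verification protocol) records cryptographic hashes of a system’s software at inspection time and later flags any file that has been added, removed, or modified. The directory and file names are invented for the example, and the sketch leaves aside the harder problems of tamper resistance and trust in the monitoring process itself.

```python
"""
Illustrative sketch only: detect post-inspection software changes by comparing
cryptographic hashes of deployed files against a manifest recorded while
inspectors were present. All paths are hypothetical.
"""
import hashlib
import json
from pathlib import Path


def hash_file(path: Path) -> str:
    """Return the SHA-256 digest of a file's contents."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def build_manifest(root: Path) -> dict[str, str]:
    """Record a hash for every file under the inspected software directory."""
    return {str(p.relative_to(root)): hash_file(p)
            for p in sorted(root.rglob("*")) if p.is_file()}


def detect_changes(root: Path, baseline: dict[str, str]) -> list[str]:
    """Return files added, removed, or modified since the baseline was taken."""
    current = build_manifest(root)
    changed_or_removed = {f for f in baseline if current.get(f) != baseline[f]}
    added = {f for f in current if f not in baseline}
    return sorted(changed_or_removed | added)


if __name__ == "__main__":
    software_dir = Path("deployed_system_software")  # hypothetical directory
    baseline = build_manifest(software_dir)          # taken at inspection time
    Path("inspection_manifest.json").write_text(json.dumps(baseline, indent=2))
    # A monitoring process would later re-check the same directory:
    diffs = detect_changes(software_dir, baseline)
    print("post-inspection changes:", diffs if diffs else "none detected")
```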

Restrict externally observable physical characteristics of AI-enabled systems. States could focus not on the cognitive abilities of a system but on gross physical characteristics that are both easily observable and difficult to change, such as size, weight, power, endurance, payload, warhead, and so forth. Under this approach, states could adopt whatever cognitive characteristics (sensors, hardware, and software) they wanted for a system. Arms control limitations would apply only to the gross physical characteristics of a vehicle or munition, even if the actual concern were motivated by the military capabilities enabled by AI. For example, if states were concerned about swarms of antipersonnel small drones, rather than permitting only “dumb” small drones (which would be difficult to verify), states could simply prohibit all weaponized small drones, regardless of their cognitive abilities. States have used similar approaches before, regulating the gross physical characteristics of systems (which could be observed), rather than their payloads (which were the states’ actual concern but more difficult to verify). Multiple Cold War–era treaties limited or banned certain classes of ballistic and cruise missiles, rather than only prohibiting arming them with nuclear weapons. An alternative approach, limiting only nuclear-armed missiles, would have permitted certain conventional missiles but would have been harder to verify.

Regulate observable behavior of AI systems. States could choose to center regulations on the observable behavior of an AI system, such as how it operated under certain conditions. This would be analogous to the “no cities” concept of bombing restrictions, which prohibited not bombers but rather the way they were employed. This approach would be most effective when dealing with physical manifestations of AI systems in which the outward behavior of the system is observable by other states. For example, states might establish rules for how autonomous naval surface vessels ought to behave in proximity to other ships. States might even adopt rules for how armed autonomous systems might clearly signal escalation of force to avoid inadvertent escalation in peacetime or crises. The specific algorithm that a state used to program the behavior would be irrelevant; different states could use different approaches. The regulation would govern how the AI system behaved, not its internal logic. For some military AI applications that are not observable, however, this approach would not be effective. (For example, restrictions on the role of AI in nuclear command and control would likely not be observable by an adversary.) Another limitation to this approach is that, as is the case with intrusive inspections, the behavior of a system could potentially be modified quickly through a software update—which could undermine verifiability and trust.
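
As a purely hypothetical illustration of a behavior-based rule, the sketch below checks the observed track of an autonomous surface vessel against an agreed minimum standoff distance from another ship. The check relies only on externally observable positions, not on the vessel’s internal algorithm. The one-nautical-mile threshold, the data format, and the example tracks are all invented for the example.

```python
"""
Hypothetical illustration of a behavior-based rule: verify from observed
position reports that an autonomous surface vessel kept an agreed minimum
standoff distance from another ship. Only outwardly observable behavior is
used; the vessel's software is never examined. Threshold and data are invented.
"""
from dataclasses import dataclass
from math import asin, cos, radians, sin, sqrt

MIN_STANDOFF_NM = 1.0  # invented threshold, in nautical miles


@dataclass
class Fix:
    """A single observed position report: time (seconds), latitude and longitude (degrees)."""
    t: float
    lat: float
    lon: float


def distance_nm(a: Fix, b: Fix) -> float:
    """Great-circle distance between two fixes in nautical miles (haversine formula)."""
    lat1, lon1, lat2, lon2 = map(radians, (a.lat, a.lon, b.lat, b.lon))
    h = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 3440.065 * asin(sqrt(h))  # Earth radius is roughly 3,440 nautical miles


def standoff_violations(own_track: list[Fix], other_track: list[Fix]) -> list[float]:
    """Return the report times at which observed separation fell below the agreed standoff."""
    return [own.t for own, other in zip(own_track, other_track)
            if distance_nm(own, other) < MIN_STANDOFF_NM]


if __name__ == "__main__":
    # Two short, made-up tracks sampled at the same times (t = 0, 60, 120 seconds).
    usv_track = [Fix(0, 36.000, -5.600), Fix(60, 36.005, -5.600), Fix(120, 36.010, -5.600)]
    ship_track = [Fix(0, 36.050, -5.600), Fix(60, 36.020, -5.600), Fix(120, 36.012, -5.600)]
    print("standoff violations at t =", standoff_violations(usv_track, ship_track))
```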

Restrict compute infrastructure. AI systems have physical infrastructure used for computation—chips—and one approach could be to focus restraint on elements of AI hardware that can be observed or controlled. This could potentially be done by restricting specialized AI chips, if these specialized chips could be controlled through a nonproliferation regime (and if these chips were essential for the prohibited military capability). Another approach could conceivably focus on restricting large-scale compute, if compute resources were observable or could be tracked. Leading AI research labs have invested heavily in large-scale compute for machine learning in recent years, although it is unclear whether the value of this research outweighs its significant costs and for how long this trend can continue. There are also countervailing trends in compute efficiency that may, over time, democratize AI capabilities by lowering compute costs for training machine learning systems.
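
To give a rough sense of what tracking “large-scale compute” might involve, the sketch below applies the widely cited rule of thumb that training a dense transformer model requires roughly 6 × (parameter count) × (training tokens) floating-point operations, and compares the estimate against a reporting threshold. The threshold and the example training runs are hypothetical, chosen only to illustrate how a compute-based reporting rule might be evaluated.

```python
"""
Back-of-the-envelope sketch of a hypothetical compute-threshold rule. Uses the
widely cited approximation that dense transformer training costs roughly
6 * (parameter count) * (training tokens) floating-point operations.
The reporting threshold and example models are invented for illustration.
"""

REPORTING_THRESHOLD_FLOP = 1e25  # hypothetical threshold, not drawn from any real regime


def training_flop(parameters: float, tokens: float) -> float:
    """Approximate total training compute in FLOPs for a dense transformer."""
    return 6.0 * parameters * tokens


if __name__ == "__main__":
    hypothetical_runs = {
        "small model (1B params, 100B tokens)": training_flop(1e9, 1e11),
        "large model (500B params, 10T tokens)": training_flop(5e11, 1e13),
    }
    for name, flop in hypothetical_runs.items():
        status = "above" if flop > REPORTING_THRESHOLD_FLOP else "below"
        print(f"{name}: ~{flop:.1e} FLOPs ({status} the hypothetical reporting threshold)")
```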

One important factor enabling arms control focused on AI hardware is the extent to which chip fabrication infrastructure is democratized globally versus concentrated in the hands of a few actors. Current semiconductor supply chains are highly globalized but have key chokepoints. These bottlenecks present opportunities for controlling access to AI hardware. For example, in 2020 the United States successfully cut off the Chinese telecommunications company Huawei from advanced chips needed for 5G wireless communications by restricting the use of U.S.-made equipment for chip manufacturing (even though the chips themselves were made in Taiwan). Similar measures could conceivably be used in the future to control access to AI hardware if production of those chips were similarly limited to a few key actors.

The future evolution of semiconductor supply chains is highly uncertain. Supply chain shocks and geopolitical competition have accelerated state intervention in the global semiconductor market, creating significant uncertainty about how the market will evolve. There are trends pointing toward greater concentration of hardware supply chains and other trends toward greater democratization. The high cost of semiconductor fabrication plants, or fabs, is one factor leading to greater concentration in the industry. On the other hand, geopolitical factors are leading China and the United States to accelerate the expansion of indigenous fab capacity. There are powerful market and nonmarket forces affecting the global semiconductor industry, and the long-term effects of these forces on supply chains are unclear.

The Way Ahead

The closest historical analogy to the current moment with artificial intelligence is the militarization of industrial-age technology around the turn of the 20th century and states’ attempts at the time to control those dangerous new weapons. Following the St. Petersburg Declaration in 1868, states engaged in a flurry of arms control activity, both in the run-up to World War I and in the interwar period before World War II. Leading military powers at the time met to discuss arms control in 1874, 1899, 1907, 1909, 1919, 1921, 1922, 1923, 1925, 1927, 1930, 1932, 1933, 1934, 1935, 1936, and 1938. Not all of these efforts reached agreements, and not all of the treaties that were ratified held in wartime, but the scale of diplomatic activity shows the effort and patience needed to achieve even modest results in arms control.

There are several steps that policymakers, scholars, and members of civil society can take today to explore the potential for AI arms control. These include meetings and dialogue at all levels to better understand the technology, how it may be used in warfare, and potential arms control measures. Academic conferences, Track II academic-to-academic exchanges, bilateral and multilateral dialogues, and discussions in various international forums are all valuable for helping advance dialogue and mutual understanding among international parties. Analysis of potential arms control measures must be tightly linked to the technology itself and the conduct it enables, and these dialogues must include AI scientists and engineers to ensure that policy discussions are grounded in technical realities. Additionally, because AI technology remains fluid and rapidly evolving, those considering arms control must be prepared to be adaptive and to shift the focus of their attention to different aspects of AI technology or the military capabilities it enables as the technology matures. Metrics for tracking AI progress and proliferation will also help illuminate both possibilities for arms control and future challenges.

Policymakers can take steps today that may make the technology more controllable in the long run by shaping its development, particularly in hardware. Enacting export controls on key chokepoints in the global supply chain may help to control the spread of underlying technologies that enable AI, concentrating supply chains and enhancing future controllability. Export controls can have the effect of accelerating indigenization of technology, however, as actors who are cut off from a vital technology redouble their efforts to grow their national capacity. Policymakers should be judicious in applying the various instruments of industrial policy, remaining mindful of the long-term consequences of their actions: whether they are retaining centralized control over a technology, and thus the ability to restrict it in the future, or inadvertently accelerating its diffusion.

At the dawn of the AI revolution, it is unclear how militaries will adopt AI, how it will affect warfare, and what forms of arms control states may find desirable and feasible. Policymakers can take steps today, however, to lay the groundwork for potential arms control measures in the future, including not only shaping the technology’s evolution but also the political climate. The history of arms control shows that it is highly path-dependent—and that arms control measures are often built on prior successful arms control agreements. Small steps now could yield larger successes down the road, and states should seek opportunities for mutual restraint to make war less terrible whenever possible.

  1. “Autonomous Weapons: An Open Letter from AI & Robotics Researchers,” Future of Life Institute, July 28, 2015, https://futureoflife.org/open-letter-autonomous-weapons/; “Lethal Autonomous Weapons Pledge,” Future of Life Institute, https://futureoflife.org/lethal-autonomous-weapons-pledge/; Adam Satariano, “Will There Be a Ban on Killer Robots?” The New York Times, October 19, 2018, https://www.nytimes.com/2018/10/19/technology/artificial-intelligence-weapons.html; “Less Autonomy, More Humanity,” Stop Killer Robots, http://www.stopkillerrobots.org/; Matt McFarland, “Leading AI researchers vow to not develop autonomous weapons,” CNNMoney, July 18, 2018, https://money.cnn.com/2018/07/18/technology/ai-autonomous-weapons/index.html; Tsuya Hisashi, “Can the use of AI weapons be banned?” NHK, April 18, 2019, https://www3.nhk.or.jp/nhkworld/en/news/backstories/441/; and Mary Wareham, “Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control” (Human Rights Watch, August 2020), http://www.hrw.org/sites/default/files/media_2020/08/arms0820_web_0.pdf.
  2. Will Knight, “AI arms control may not be possible, warns Henry Kissinger,” MIT Technology Review, March 1, 2019, http://www.technologyreview.com/f/613059/ai-arms-control-may-not-be-possible-warns-henry-kissinger/; Vincent Boulanin, “Regulating military AI will be difficult. Here’s a way forward,” Bulletin of the Atomic Scientists, March 3, 2021, https://thebulletin.org/2021/03/regulating-military-ai-will-be-difficult-heres-a-way-forward/; Forrest E. Morgan, Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden, Kelly Klima, and Derek Grossman, “Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World” (RAND Corporation, 2020), https://www.rand.org/pubs/research_reports/RR3139-1.html; National Security Commission on Artificial Intelligence, Final Report (March 2021), 96, https://www.nscai.gov/wp-content/uploads/2021/03/Full-Report-Digital-1.pdf; and Evan Ackerman, “We Should Not Ban ‘Killer Robots’ and Here’s Why: What we really need is a way of making autonomous armed robots ethical, because we’re not going to be able to prevent them from existing,” IEEE Spectrum, July 28, 2015, https://spectrum.ieee.org/we-should-not-ban-killer-robots.
  3. Michael C. Horowitz, “Artificial Intelligence, International Competition, and the Balance of Power,” Texas National Security Review, 1 no. 3 (May 2018), https://tnsr.org/2018/05/artificial-intelligence-international-competition-and-the-balance-of-power/.
  4. Sean Watts, “Regulation-Tolerant Weapons, Regulation-Resistant Weapons and the Law of War,” International Law Studies, 91 (August 2015), https://digital-commons.usnwc.edu/cgi/viewcontent.cgi?article=1411&context=ils; Sean Watts, “Autonomous Weapons: Regulation Tolerant or Regulation Resistant?” Temple International and Comparative Law Journal, 30 no. 1 (2016), https://sites.temple.edu/ticlj/files/2017/02/30.1.Watts-TICLJ.pdf; Rebecca Crootof, “The Killer Robots Are Here: Legal and Policy Implications,” Cardozo Law Review, 36 no. 5 (December 2014), http://cardozolawreview.com/wp-content/uploads/2018/08/CROOTOF.36.5.pdf; and Rebecca Crootof, “Why the Prohibition on Permanently Blinding Lasers is Poor Precedent for a Ban on Autonomous Weapon Systems,” Lawfare, November 24, 2015, https://www.lawfareblog.com/why-prohibition-permanently-blinding-lasers-poor-precedent-ban-autonomous-weapon-systems.
  5. Michael C. Horowitz and Paul Scharre, “AI and International Stability: Risks and Confidence-Building Measures” (Center for a New American Security, January 2021), https://www.cnas.org/publications/reports/ai-and-international-stability-risks-and-confidence-building-measures. Some definitions of arms control include post-conflict disarmament imposed by the victors on losing states, such as the Treaty of Versailles. For alternative definitions of arms control, see “Arms control, disarmament and non-proliferation in NATO,” NATO, April 6, 2022, https://www.nato.int/cps/en/natohq/topics_48895.htm; Thomas C. Schelling and Morton H. Halperin, Strategy and Arms Control (Washington, DC: Pergamon-Brassey’s, 1985), 2; Robert R. Bowie, “Basic Requirements of Arms Control,” Daedalus 89 no. 4 (Fall 1960), 708, http://www.jstor.org/stable/20026612; Hedley Bull, “Arms Control and World Order,” International Security 1, no. 1 (Summer 1976), 3, https://www.jstor.org/stable/2538573; Julian Schofield, “Arms Control Failure and the Balance of Power,” Canadian Journal of Political Science / Revue Canadienne de Science Politique, 33 no. 4 (December 2000), 748, http://www.jstor.org/stable/3232662; Coit D. Blacker and Gloria Duffy, International Arms Control: Issues and Agreements (Stanford, CA: Stanford University Press, 1984), 3; Lionel P. Fatton, “The impotence of conventional arms control: why do international regimes fail when they are most needed?” Contemporary Security Policy, 37 no. 2 (June 2016), 201, https://doi.org/10.1080/13523260.2016.1187952; and Henry A. Kissinger, “Arms Control, Inspection and Surprise Attack,” Foreign Affairs, 38 no. 4 (July 1960), 559, https://www.foreignaffairs.com/articles/1960-07-01/arms-control-inspection-and-surprise-attack.
  6. Used with permission. Paul Scharre, “Autonomous weapons and stability” (PhD diss., King’s College London, March 2020), https://kclpure.kcl.ac.uk/portal/files/129451536/2020_Scharre_Paul_1575997_ethesis.pdf.
  7. Andrew J. Coe and Jane Vaynman, “Why Arms Control Is So Rare,” American Political Science Review, 114 no. 2 (May 2020), 342–55, https://www.cambridge.org/core/journals/american-political-science-review/article/abs/why-arms-control-is-so-rare/BAC79354627F72CDDDB102FE82889B8A; John D. Maurer, “The Purposes of Arms Control,” Texas National Security Review 2, no. 1 (November 2018), https://tnsr.org/2018/11/the-purposes-of-arms-control/; Charles H. Anderton and John R. Carter, “Arms Rivalry, Proliferation, and Arms Control,” in Principles of Conflict Economics: A Primer for Social Scientists, eds. Charles H. Anderton and John R. Carter (Cambridge: Cambridge University Press, 2009), 185–221, https://doi.org/10.1017/CBO9780511813474.011; Andrew Webster, “From Versailles to Geneva: The many forms of interwar disarmament,” Journal of Strategic Studies, 29 no. 2 (2006), 225–246, https://www.tandfonline.com/doi/abs/10.1080/01402390600585050; Charles L. Glaser, “When Are Arms Races Dangerous? Rational versus Suboptimal Arming,” International Security, 28 no. 4 (Spring 2004), 44–84, http://www.jstor.org/stable/4137449; Robert Jervis, “Arms Control, Stability, and Causes of War,” Political Science Quarterly, 108 no. 2 (Summer 1993), 239–253, https://doi.org/10.2307/2152010; and Marc Trachtenberg, “The Past and Future of Arms Control,” Daedalus, 120 no. 1 (Winter 1991), 203–216, http://www.jstor.org/stable/20025364.
  8. Lionel P. Fatton, “The impotence of conventional arms control: why do international regimes fail when they are most needed?” Contemporary Security Policy, 37 no. 2 (2016), 200–222, https://doi.org/10.1080/13523260.2016.1187952; Andrew Kydd, “Arms Races and Arms Control: Modeling the Hawk Perspective,” American Journal of Political Science, 44 no. 2 (2000), 228–229, https://doi.org/10.2307/2669307; Colin Gray, House of Cards: Why Arms Control Must Fail (Cornell University Press, 1992), 5, 27; and Stuart Croft, Strategies of arms control (Manchester University Press, 1996), 5.
  9. Coe and Vaynman, “Why Arms Control Is So Rare,” 353; Jane Vaynman, “Enemies in Agreement: Domestic Politics, Uncertainty, and Cooperation Between Adversaries” (PhD diss., Harvard University, 2014), 12–16.
  10. Used with permission. Paul Scharre, “Autonomous weapons and stability.”
  11. Watts, “Regulation-Tolerant Weapons, Regulation-Resistant Weapons, and the Law of War”; Watts, “Autonomous Weapons: Regulation Tolerant or Regulation Resistant?”
  12. Watts, “Regulation-Tolerant Weapons, Regulation-Resistant Weapons, and the Law of War,” 609–618.
  13. Rebecca Crootof defines a “successful” weapons ban as “both enacted and effective at limiting the usage of the banned weapon.” Crootof, “The Killer Robots Are Here,” 1910; Crootof, “Why the Prohibition on Permanently Blinding Lasers is Poor Precedent for a Ban on Autonomous Weapon Systems.”
  14. Crootof, “The Killer Robots Are Here,” 1884–1890.
  15. Crootof, “The Killer Robots Are Here,” 1888.
  16. The use of “means and methods of warfare which are of a nature to cause superfluous injury or unnecessary suffering” is barred under customary international humanitarian law: “Rule 70. Weapons of a Nature to Cause Superfluous Injury or Unnecessary Suffering,” IHL database, International Committee of the Red Cross, https://ihl-databases.icrc.org/customary-ihl/eng/docs/v1_rul_rule70.
  17. “Protocol of Non-Detectable Fragments (Protocol I). Geneva, 10 October 1980,” International Committee of the Red Cross, https://ihl-databases.icrc.org/applic/ihl/ihl.nsf/Article.xsp?action=openDocument&documentId=1AF77FFE8082AE07C12563CD0051EDF5.
  18. Thomas C. Schelling, The Strategy of Conflict (Cambridge, MA: Harvard University Press, 1980), 75.
  19. Crootof, “The Killer Robots Are Here,” 1890.
  20. “The German Sawback Blade Bayonet,” Armourgeddon Blog, January 22, 2015, https://www.armourgeddon.co.uk/the-german-sawback-blade-bayonet.html; and Used with permission. Scharre, “Autonomous weapons and stability.”
  21. Charles J. Dunlap Jr., “Is it Really Better to be Dead than Blind?” Just Security, January 13, 2015, https://www.justsecurity.org/19078/dead-blind/.
  22. Crootof, “Why the Prohibition on Permanently Blinding Lasers is Poor Precedent for a Ban on Autonomous Weapon Systems.”
  23. Used with permission. Scharre, “Autonomous weapons and stability.”
  24. “Each State Party undertakes not to use riot control agents as a method of warfare.” Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on Their Destruction, Article I.5, August 31, 1994, 2, https://www.opcw.org/fileadmin/OPCW/CWC/CWC_en.pdf; Executive Order No. 11850, 3 C.F.R. 980 (1975), https://www.archives.gov/federal-register/codification/executive-order/11850.html; Michael Nguyen, “Senate Struggles with Riot Control Agent Policy,” Arms Control Today, 36 no. 1 (January–February 2006), https://www.armscontrol.org/act/2006_01-02/JANFEB-RiotControl; and Used with permission. Scharre, “Autonomous weapons and stability.”
  25. This dynamic seems to suggest that if lasers were used in future wars for non-blinding purposes and ended up causing incidental blinding, then their use would quickly evolve to include intentional blinding.
  26. “Convention Text,” Convention on Cluster Munitions, https://www.clusterconvention.org/convention-text/; International Campaign to Ban Landmines, “Treaty in Detail,” http://www.icbl.org/en-gb/the-treaty/treaty-in-detail/treaty-text.aspx.
  27. Used with permission. Scharre, “Autonomous weapons and stability.”
  28. Protocol for the Prohibition of the Use in War of Asphyxiating, Poisonous or Other Gases, and of Bacteriological Methods of Warfare (Geneva Protocol), June 17, 1925, https://2009-2017.state.gov/t/isn/4784.htm.
  29. Used with permission. Scharre, “Autonomous weapons and stability.”
  30. “Any State Party to this Convention which has reason to believe that any other State Party is acting in breach of obligations deriving from the provisions of the Convention may lodge a complaint with the Security Council of the United Nations. Such a complaint should include all relevant information as well as all possible evidence supporting its validity.” Convention on the Prohibition of Military or Any Other Hostile Use of Environmental Modification Techniques, October 5, 1978, https://www.state.gov/t/isn/4783.htm#treaty.
  31. Used with permission. Scharre, “Autonomous weapons and stability.”
  32. Used with permission. Scharre, “Autonomous weapons and stability.”
  33. Used with permission. Scharre, “Autonomous weapons and stability.”
  34. Vincent Boulanin, Lora Saalman, Petr Topychkanov, Fei Su, and Moa Peldán Carlsson, “Artificial Intelligence, Strategic Stability and Nuclear Risk” (Stockholm International Peace Research Institute, June 2020), https://www.sipri.org/publications/2020/other-publications/artificial-intelligence-strategic-stability-and-nuclear-risk; Technology for Global Security, “AI and the Military: Forever Altering Strategic Stability” (T4GS, February 13, 2019), https://securityandtechnology.org/wp-content/uploads/2020/07/ai_and_the_military_forever_altering_strategic_stability__IST_research_paper.pdf; Forrest E. Morgan, Benjamin Boudreaux, Andrew J. Lohn, Mark Ashby, Christian Curriden, Kelly Klima, and Derek Grossman, “Military Applications of Artificial Intelligence: Ethical Concerns in an Uncertain World” (RAND Corporation, 2020), https://www.rand.org/pubs/research_reports/RR3139-1.html; Michael C. Horowitz, Paul Scharre, and Alexander Velez-Green, “A Stable Nuclear Future? The Impact of Autonomous Systems and Artificial Intelligence,” 2019, https://arxiv.org/abs/1912.05291; Edward Geist and Andrew J. Lohn, “How Might Artificial Intelligence Affect the Risk of Nuclear War?” (RAND Corporation, 2018), https://www.rand.org/pubs/perspectives/PE296.html; Ben Buchanan, “A National Security Research Agenda for Cybersecurity and Artificial Intelligence,” CSET Issue Brief (Center for Security and Emerging Technology, May 2020), https://cset.georgetown.edu/research/a-national-security-research-agenda-for-cybersecurity-and-artificial-intelligence/; Michael C. Horowitz, Lauren Kahn, Christian Ruhl, Mary Cummings, Erik Lin-Greenberg, Paul Scharre, and Rebecca Slayton, “Policy Roundtable: Artificial Intelligence and International Security,” Texas National Security Review, June 2, 2020, https://tnsr.org/roundtable/policy-roundtable-artificial-intelligence-and-international-security/; Melanie Sisson, Jennifer Spindel, Paul Scharre, and Vadim Kozyulin, “The Militarization of Artificial Intelligence” (Stanley Center for Peace and Security, August 2019), https://stanleycenter.org/publications/militarization-of-artificial-intelligence/; Giacomo Persi Paoli, Kerstin Vignard, David Danks, and Paul Meyer, “Modernizing Arms Control: Exploring responses to the use of AI in military decision-making” (United Nations Institute for Disarmament Research, 2020), https://www.unidir.org/publication/modernizing-arms-control; Andrew Imbrie and Elsa B. Kania, “AI Safety, Security, and Stability Among Great Powers: Options, Challenges, and Lessons Learned for Pragmatic Engagement,” CSET Policy Brief (Center for Security and Emerging Technology, December 2019), https://cset.georgetown.edu/research/ai-safety-security-and-stability-among-great-powers-options-challenges-and-lessons-learned-for-pragmatic-engagement/; Michael C. Horowitz, Lauren Kahn, and Casey Mahoney, “The Future of Military Applications of Artificial Intelligence: A Role for Confidence-Building Measures?” Orbis, 64 no. 4 (Fall 2020), 528–543; and Horowitz and Scharre, “AI and International Stability.”
  35. Rebecca Crootof, “Regulating New Weapons Technology,” in The Impact of Emerging Technologies on the Law of Armed Conflict, Eric Talbot Jensen and Ronald T.P. Alcala, eds. (New York: Oxford University Press, 2019), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3195980; and Rebecca Crootof and BJ Ard, “Structuring Techlaw,” Harvard Journal of Law and Technology, 34 no. 2 (Spring 2021), https://papers.ssrn.com/sol3/papers.cfm?abstract_id=3664124.
  36. For more on privacy-preserving approaches for sharing information and verifying algorithms’ behavior, see Andrew Trask, Emma Bluemke, Ben Garfinkel, Claudia Ghezzou Cuervas-Mons, and Allan Dafoe, “Beyond Privacy Trade-offs with Structured Transparency,” December 15, 2020, https://arxiv.org/pdf/2012.08347.pdf; Joshua A. Kroll, Joanna Huey, Solon Barocas, Edward W. Felten, Joel R. Reidenberg, David G. Robinson, and Harlan Yu, “Accountable Algorithms,” University of Pennsylvania Law Review, 165 no. 3 (2017), https://scholarship.law.upenn.edu/penn_law_review/vol165/iss3/3/; and Matthew Mittelsteadt, “AI Verification: Mechanisms to Ensure AI Arms Control Compliance,” CSET Issue Brief (Center for Security and Emerging Technology, February 2021), https://cset.georgetown.edu/publication/ai-verification/.
  37. One possible solution to the problem of post-inspection software updates could be installing continuous monitoring devices that would alert inspectors to any changes in software (for an illustrative sketch of such a change-detection check, see the example following these notes). Adopting such an approach requires further technological advancements, as well as states’ commitment to continuous intrusive monitoring, rather than periodic inspections. It is also possible that such an approach, if implemented, could have unforeseen destabilizing effects in certain scenarios. For example, a software update to improve functionality on the eve of a conflict could trigger an alert that would lead other states to assume arms control noncompliance. Alternatively, regime-compliant code that should not be altered could be embedded into physical hardware, for example through read-only memory (ROM) or application-specific integrated circuits (ASICs). See Mittelsteadt, “AI Verification,” 18–24.
  38. For an example of how such an approach might be implemented, see Ronald C. Arkin, Leslie Kaelbling, Stuart Russell, Dorsa Sadigh, Paul Scharre, Bart Selman, and Toby Walsh, “A Path Towards Reasonable Autonomous Weapons Regulation: Experts representing a diversity of views on autonomous weapons systems collaborate on a realistic policy roadmap,” IEEE Spectrum, October 21, 2019, https://spectrum.ieee.org/a-path-towards-reasonable-autonomous-weapons-regulation.
  39. For example, see the INF, SALT I, SALT II, START, SORT, and New START Treaties.
  40. Saif M. Khan, “U.S. Semiconductor Exports to China: Current Policies and Trends,” CSET Issue Brief (Center for Security and Emerging Technology, October 2020), https://cset.georgetown.edu/publication/u-s-semiconductor-exports-to-china-current-policies-and-trends/.
  41. “AI and Compute,” OpenAI blog, May 16, 2018, https://openai.com/blog/ai-and-compute/; Jaime Sevilla et al., Compute Trends Across Three Eras of Machine Learning (arXiv.org, March 9, 2022), https://arxiv.org/pdf/2202.05924.pdf; Ryan Carey, “Interpreting AI compute trends,” AI Impacts blog, July 2018, https://aiimpacts.org/interpreting-ai-compute-trends/; and Ben Dickson, “DeepMind’s big losses, and the questions around running an AI lab,” VentureBeat, December 27, 2020, https://venturebeat.com/2020/12/27/deepminds-big-losses-and-the-questions-around-running-an-ai-lab/.
  42. “AI and Efficiency,” OpenAI blog, May 5, 2020, https://openai.com/blog/ai-and-efficiency/.
  43. Rule No. 2020-18213, 85 Fed. Reg. 51596 (August 20, 2020), https://www.federalregister.gov/documents/2020/08/20/2020-18213/addition-of-huawei-non-us-affiliates-to-the-entity-list-the-removal-of-temporary-general-license-and; Jeanne Whalen and Ellen Nakashima, “U.S. tightens restrictions on Huawei yet again, underscoring the difficulty of closing trade routes,” Washington Post, August 17, 2020, https://www.washingtonpost.com/business/2020/08/17/us-cracks-down-huawei-again/; David Shepardson, “U.S. tightening restrictions on Huawei access to technology, chips,” Reuters, August 17, 2020, https://www.reuters.com/article/us-usa-huawei-tech-exclusive/exclusive-u-s-to-tighten-restrictions-on-huawei-access-to-technology-chips-sources-say-idUSKCN25D1CC; “4Q20 Quarterly Management Report,” press release, January 14, 2021, Taiwan Semiconductor Manufacturing Company, https://investor.tsmc.com/english/encrypt/files/encrypt_file/reports/2021-01/4Q20ManagementReport.pdf; “The struggle over chips enters a new phase,” The Economist, January 23, 2021, https://www.economist.com/leaders/2021/01/23/the-struggle-over-chips-enters-a-new-phase.
  44. For more on the potential for multilateral dialogues, see Horowitz and Scharre, “AI and International Stability.”
  45. “The AI Index,” https://aiindex.stanford.edu/; Jess Whittlestone and Jack Clark, “Why and How Governments Should Monitor AI Development,” August 28, 2021, https://arxiv.org/pdf/2108.12427.pdf.
  46. Martijn Rasser, Megan Lamberth, Ainikki Riikonen, Chelsea Guo, Michael Horowitz, and Paul Scharre, “The American AI Century: A Blueprint for Action” (Center for a New American Security, December 17, 2019), https://www.cnas.org/publications/reports/the-american-ai-century-a-blueprint-for-action; and Saif M. Khan, “Securing Semiconductor Supply Chains,” CSET Issue Brief (Center for Security and Emerging Technology, January 2021), https://cset.georgetown.edu/publication/securing-semiconductor-supply-chains/.
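
Note 37 above describes continuous monitoring devices that would alert inspectors to any post-inspection change in a weapon system’s software. The fragment below is a minimal, purely illustrative sketch of one way such a check might work: comparing a cryptographic hash of the deployed software image against a digest recorded at inspection time. It is not drawn from this report or from Mittelsteadt’s proposal, and the file path and baseline digest are hypothetical placeholders.

```python
# Illustrative sketch only (not from the report): detecting a post-inspection
# software change by comparing the deployed image's hash with an attested baseline.
import hashlib
from pathlib import Path


def image_digest(path: Path) -> str:
    """Return the SHA-256 hex digest of a software image on disk."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            h.update(chunk)
    return h.hexdigest()


def software_changed(image_path: Path, attested_digest: str) -> bool:
    """True if the deployed image no longer matches the digest recorded at inspection."""
    return image_digest(image_path) != attested_digest


# Hypothetical usage; the path and digest are placeholders, not real values.
# if software_changed(Path("/opt/system/firmware.bin"), "<digest recorded at inspection>"):
#     ...  # alert inspectors that the software differs from the inspected baseline
```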

Authors

  • Paul Scharre

    Executive Vice President and Director of Studies

    Paul Scharre is the Executive Vice President and Director of Studies at CNAS. He is the award-winning author of Four Battlegrounds: Power in the Age of Artificial Intelligence...

  • Megan Lamberth

    Former Associate Fellow, Technology and National Security Program

    Megan Lamberth is a former Associate Fellow for the Technology and National Security Program at CNAS. Her research focuses on U.S. strategy for emerging technologies and the k...
