Killer Instinct: Lethal Autonomous Weapons in the Modern Battle Landscape

Note - Online Edition - Volume 95

Introduction

In July 2016, a remote-controlled robot ended a shootout in Dallas.[1] Negotiations between the suspected gunman and police had failed, and the police detonated explosives attached to a bomb disposal robot where the suspect was holed up; the explosion killed him.[2] In a press conference, the Dallas police chief explained the decision to use a “killer robot” as one of necessity: “We saw no other option but to use our bomb robot . . . [o]ther options would have exposed our officers to great danger.”[3] Though this bomb disposal robot is more akin to a remote-controlled car than to the Terminator, the framing of the lethal act as an unavoidable decision echoes the global rise of military and law enforcement reliance on weaponized robotics.

Nations are now able to place more and more distance between weapons users and the lethal force they project. Recent armed conflicts have seen an increased use of highly automated technologies; the best known of these are the armed, remotely piloted drones (unmanned aerial vehicles, or UAVs) employed by the United States and other nations. Drones enable those who control lethal force to be physically absent when it is deployed and instead to make a decision to kill from behind computers, out of the line of fire. While controversial in their own right, these weapons still rely on direct human control over all targeting and firing decisions. They are “human on-the-loop” robotics. However, lethal autonomous weapons systems (LAWS) bypass human control in the military decision to use lethal force. LAWS include any system capable of targeting and initiating the use of potentially lethal force without direct human supervision and direct human involvement in lethal decision making.[4] They add a new dimension to this distancing, so that targeting decisions can be made by the weapons themselves.

Autonomous weapons systems have been described as the next major revolution in military affairs; yet deploying LAWS is not comparable to earlier technological transitions in warfare, from swords to gunpowder or even to nuclear weapons. LAWS would revolutionize the identity of those who use the weapons.[5] In deploying LAWS, the distinction between weapons and warriors risks becoming hazy.

The advent of unmanned weapon systems results not only from rapid technological development, but also from the changing nature of twenty-first-century armed conflicts.[6] Targeted nonstate actors are mobile, difficult to identify, and often shielded among civilian populations within urban areas. Advocates of unmanned weapons systems argue that drone warfare—and eventually deploying LAWS—may be the best way to combat the threat terrorists and insurgents pose.[7] The United States and other wealthy nations have a substantial interest in maintaining a technological edge over such adversaries by fielding systems that enable them to deliver lethal force while minimizing the risk to their own forces.[8]

To address these trends, the United Nations has held a series of meetings on autonomous weapons, and within the next few years there will likely be an international treaty limiting or banning autonomous weapons. Some commentators have called for an outright ban or for restrictions based on the Geneva Conventions and their Additional Protocols, which they believe could encompass lethal autonomous weapons.[9] However, international humanitarian law[10] and human rights law fail to address the risks posed by the proliferation of lethal autonomous weapons, due to those treaties’ emphasis on human actors and the history of noncompliance by nation-states. I argue that a future LAWS treaty should instead be based on disarmament.

This Note will lay out the debate surrounding LAWS in Part I. Part II will address how existing international humanitarian law (IHL) and international human rights law (IHRL) would fail to thoroughly regulate LAWS. Part III will discuss the precedent for disarmament treaties and ultimately lay out a framework for regulating autonomous weapons that avoids the pitfalls of IHL and IHRL.

I. The Landscape of Lethal Autonomous Weapons

A. LAWS Defined

Lethal autonomous weapons are “human out-of-the-loop” robotics. In the decision loop for lethal action, humans are currently “in-the-loop,” i.e., they complete or supervise each step of the six-step kill chain (find, fix, track, target, engage, and assess). In “human on-the-loop” robotics, such as unmanned aerial vehicles or Israel’s Iron Dome, a human might supervise one or more systems that automate many of the tasks in this six-step cycle. These are precursors to fully autonomous weapons. LAWS would take humans out of the loop entirely, delegating human decision-making responsibilities to an autonomous system designed to take human lives.
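The distinction can be made concrete with a minimal sketch (here in Python, with entirely hypothetical names and a deliberately simplified, single approval gate before engagement) showing where a human sits relative to the six-step cycle under each model; it illustrates the taxonomy only and does not describe any fielded system.

```python
# A minimal, purely illustrative sketch of the three oversight models; every name
# here is hypothetical and stands in for far more complex real systems.
from enum import Enum, auto

class Oversight(Enum):
    IN_THE_LOOP = auto()      # a human completes or approves each step
    ON_THE_LOOP = auto()      # the system runs; a human supervises and may veto
    OUT_OF_THE_LOOP = auto()  # the system selects and engages on its own (LAWS)

KILL_CHAIN = ["find", "fix", "track", "target", "engage", "assess"]

def run_cycle(oversight: Oversight, human_approval: bool) -> list:
    """Walk the six-step cycle, inserting a human check where the model requires one."""
    completed = []
    for step in KILL_CHAIN:
        if step == "engage":
            if oversight is Oversight.IN_THE_LOOP and not human_approval:
                break  # the human declines to authorize lethal force
            if oversight is Oversight.ON_THE_LOOP and not human_approval:
                break  # the supervising human exercises a veto
            # OUT_OF_THE_LOOP: no human check before force is released
        completed.append(step)
    return completed

print(run_cycle(Oversight.IN_THE_LOOP, human_approval=False))     # stops before "engage"
print(run_cycle(Oversight.OUT_OF_THE_LOOP, human_approval=False)) # runs the full chain
```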

Autonomous systems will be equipped with advanced general, or “strong,” artificial intelligence applications.[11] General artificial intelligence systems exhibit human-like cognitive abilities in response to complex problems and situations, as opposed to more “mechanical” decision making, in which a military system might make a choice in order to complete a specific, discrete, and defined task.[12] Systems equipped with general AI adapt to circumstance and learn using environmental stimuli and machine learning.[13]

Because autonomous weapons systems represent such a radical departure from contemporary weapons and remotely piloted systems, definitions of LAWS are usually vague. The Department of Defense (DoD) defines an autonomous weapon system as:

A weapon system that, once activated, can select and engage targets without further intervention by a human operator. This includes human-supervised autonomous weapon systems that are designed to allow human operators to override operation of the weapon system, but can select and engage targets without further human input after activation.[14]

As some scholars have noted, this definition specifically indicates a fully autonomous system would rarely, if ever, be entirely human-free.[15] Instead, either the system designer or an operator would at least have to program it to function pursuant to specified parameters, and an operator would have to decide to employ it in a particular battlespace;[16] this is still an “out of the loop” system because supervision will likely be so limited.

Yet the DoD’s definition allows for an enormous array of levels of autonomy, situational applications, and conceivable roles for LAWS—including reconnaissance, covert operations, crowd control, hostage rescue, and direct combat—without any attendant breakdown of “acceptable” or legal uses.[17] And there are countless potential combinations and uses of autonomous weapons at stake. Numerous land, sea, space, and submarine systems will likely also be armed, and cyber-attacks can be similarly automated. For example, the Defense Advanced Research Projects Agency (DARPA) is designing an anti-submarine warfare continuous trail unmanned vessel able to stay at sea autonomously while it finds, tracks, and attacks enemy submarines.[18] The DoD and DARPA have also noted the potential for autonomous “swarm” technology in the battlespace to rapidly attack and overwhelm an adversary through large numbers of small, expendable systems working collaboratively.[19] While DARPA and the DoD have often stated the real value of these systems is to “extend and complement human capability by providing potentially unlimited persistent capabilities” (as opposed to replacing humans on the battlefield), the ability of a few human operators to control “swarms” of aerial vehicles and assorted systems of lethal weapons seems implausible.[20] Ultimately, the advantages of autonomy create incentives for states to develop lethal weapons with full autonomy, but the international community must narrow its expansive definition of “lethal autonomous weapons” and make clear under what circumstances (if at all) they can be lawfully deployed.

B. The Ethical and Policy Case Against LAWS

Without any regulation currently in force, legal discussions of lethal autonomous weapons are guided by questions of ethics and policy. While the arguments against “killer robots” may seem straightforward, LAWS could be considered a legitimate military advance along the lines of unmanned aerial weapons—autonomy could, in some respects, help to make armed conflict more humane and save lives on all sides.[21] The President of the International Committee of the Red Cross (ICRC) has suggested an autonomous system might be “programmed to behave more ethically and far more cautiously” in a battlespace than a human being.[22] Mathematical models can be used to describe behavior-based robotic control, in which a range of responses is mapped to perceivable stimuli: it is possible to define a range of active, lethal behaviors within which a subset constitutes ethical, lethal behavior.[23] In other words, even though a weapon can perform autonomously, it could still be subject to programmed ethical constraints in a given mission’s context.
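A minimal sketch of this idea, using invented stimuli and invented ethical predicates, shows how a behavior-based controller might map perceivable stimuli to a range of candidate responses and suppress lethal responses that fall outside a programmed ethical subset; it illustrates the formal structure only, not how such constraints could be specified or verified in practice.

```python
# Illustrative only: a toy behavior-based controller in which stimuli map to
# candidate responses, and the lethal response is permitted only inside a
# programmed "ethical" subset. All fields and predicates are hypothetical.
from dataclasses import dataclass

@dataclass
class Stimulus:
    hostile_fire_confirmed: bool
    civilians_in_blast_radius: int
    target_is_protected_site: bool

def candidate_responses(s: Stimulus) -> list:
    """Behavior mapping: stimuli -> range of active responses (some lethal)."""
    responses = ["observe", "reposition", "warn"]
    if s.hostile_fire_confirmed:
        responses.append("engage")  # lethal behavior enters the candidate set
    return responses

def ethically_permitted(response: str, s: Stimulus) -> bool:
    """The constrained subset: lethal behavior allowed only within mission-specific limits."""
    if response != "engage":
        return True
    return (s.hostile_fire_confirmed
            and s.civilians_in_blast_radius == 0
            and not s.target_is_protected_site)

def select_action(s: Stimulus) -> str:
    allowed = [r for r in candidate_responses(s) if ethically_permitted(r, s)]
    return "engage" if "engage" in allowed else allowed[-1]

print(select_action(Stimulus(True, 2, False)))  # civilians present -> "warn", not "engage"
print(select_action(Stimulus(True, 0, False)))  # constraints satisfied -> "engage"
```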

Critics of LAWS highlight attendant ethical and policy questions despite such programming. Ethically, artificially intelligent systems are limited in some respects compared to humans. Armed conflict and IHL often require human judgment, common sense, appreciation of the larger picture, understanding of the intentions behind human actions, understanding of values, and anticipation of the direction in which events are unfolding.[24] Decisions over life and death in armed conflict may require compassion and intuition. Humans in a kill chain—while fallible—at least might possess these qualities, whereas it might not be possible to incorporate these complex behaviors when designing autonomous agents. A legitimate lethal decision process must also meet requirements that the human decision maker involved in verifying legitimate targets and initiating lethal force against them be allowed sufficient time to be deliberative, be suitably trained and well informed, and be held accountable and responsible.[25]

These same concerns occupy actors in the policy sphere. Introducing powerful new weapons systems could pose a threat to international security by creating serious international division and weakening the ability of international bodies to manage conflicts, particularly when lethal autonomous weapons may be used on the battlefield.[26] Peter Asaro, a founder of the International Committee for Robot Arms Control (ICRAC), argues that artificially intelligent systems will have “only highly limited capabilities” for learning and adaptation, and that it will therefore be “difficult or impossible to design systems capable of dealing with the fog and friction of war.”[27] The environment in which military systems operate is messy and complicated, and the nature of armed conflict now exacerbates the possibility that autonomous systems will face unanticipated situations and act in an unintended fashion. LAWS lack broad contextual intelligence on par with humans—if a lethal system faces situations outside its intended design parameters, for example, and must distinguish between a “fearful civilian and a threatening enemy combatant,” it might not be able to evaluate the human target’s intentions.[28]

Furthermore, because LAWS would remove combatants from areas of conflict and reduce casualties for the actors who possess them, they lower the thresholds for going to war or engaging in conflict. This heightening of already-asymmetric conflict and warfare is particularly problematic because risks would be further redistributed from combatants to civilians.[29]

C. Why Regulate Now?

The United States is not currently fielding any fully autonomous weapon systems, but a change in attitude should be expected. A series of DoD studies, plans, roadmaps, and statements have discussed autonomous weapons technology in anticipation of its fielding.[30] The DoD has stated that it is concerned that adversary nations could empower advanced weapons systems to act on their own. However, Deputy Defense Secretary Robert O. Work noted in March 2016 that the Pentagon has not “fully figured out” the issue of autonomous machines, but continues to examine it.[31] He further stated: “We will not delegate lethal authority to a machine to make a decision . . . . The only time we will . . . delegate a machine authority is in things that go faster than human reaction time, like cyber or electronic warfare.”[32] Despite these assurances, the Pentagon is preparing for threats from China and Russia through what it calls a Third Offset Strategy, in which the American military would seek to counter the military advances of adversaries through the introduction of machine learning.[33]

Operational realities make unmanned systems appealing to policymakers, including (1) decreasing the number of required personnel in a battlespace, (2) expanding the area in which combat can be conducted, (3) extending an individual soldier’s ability to act deeper in the battlespace, and (4) reducing casualties.[34] Autonomy is particularly attractive, and will likely drive the United States to discard its practice of keeping a human in the loop for lethal targeting decisions, because unmanned systems that require a human in or on the loop are prone to slow operation and vulnerable to disruption due to satellite communications jamming and cyber-attacks.[35] The United States is not alone in this line of development: many nations, including China, South Korea, the United Kingdom, and Israel, are developing advanced, autonomous weapons systems.[36] In an arms-race scenario, rapidly progressing technology may mean human operators will be unable to keep up.[37]

That said, some policymakers would rather wait and see. They argue that it is too early to know where the technology will go, and thus that the debate over ethical and legal principles should be deferred until a system is at hand.[38] A second approach calls for prohibiting LAWS outright—the use of such systems, their production, and even efforts at technological development—because they can never be sufficiently “intelligent” to replace human judgment.[39]

Both of these views are shortsighted.[40] Current assessments of the future role of LAWS will affect the level of investment of financial, human, and other resources in the development of this technology over the next several years. The international community should not wait until technology and weapons development hardens along a particular path to integrate LAWS into our ethical and legal understandings of international law. Banning LAWS outright, meanwhile, would be ineffective from both a policy and an ethical perspective. The actors most inclined to misuse autonomous weapons will not comply with a multilateral treaty banning these weapons, and the underlying technological elements will likely proliferate.[41] Furthermore, because the automation of weapons will happen incrementally, it would be nearly impossible to design or enforce such a ban even against compliant actors—once development is farther along, the line between legal weapons and illegal autonomous weapons will not be distinct.[42] Ethically, defenders of LAWS argue that it would be pernicious to prohibit research and development of technology that could reduce the collateral damage of warfare.[43] The international community must therefore propose a framework for evaluating autonomous weapons now, to guide policymakers, system designers, and commanders regarding the intended future use of these systems within the international scheme.

II. Applying International Law

Under Article 36 of Additional Protocol I to the Geneva Conventions, states should evaluate new and modified weapons to ensure they do not violate the provisions of IHL, even at the earliest stages of production design.[44] It appears states are beginning to undertake formal assessments of the legality of LAWS. The DoD’s 2012 directive established policies and guidelines for the development of autonomous functions in weapons[45]—providing an approval chain and describing necessary legal reviews that should be conducted before autonomous weapons systems may be designed—as opposed to explicitly calling for the development of such systems.[46] In addition, the Convention on Certain Conventional Weapons has convened a meeting of experts on LAWS in each of the past three years, with many states sending delegations.[47]

However, as Louis Henkin famously noted: “[A]lmost all nations observe almost all principles of international law and almost all of their obligations almost all of the time.”[48] IHL and IHRL stand out as areas of international law in which countries have little incentive to police noncompliance with treaties or norms. International norms of IHRL are often under-enforced or imperfectly enforced, and sometimes enforced only informally through what U.S. Department of State Legal Adviser Harold Koh calls “transnational legal process.”[49] This process works sporadically, and lethal autonomous weapons would be particularly well suited to capitalize on gaps in enforcement. The DoD directive lays out a process to consider proposals for the development of autonomous weapon systems, but it does not address the fact that LAWS will be complex systems that often combine a multitude of components that work differently in different combinations.[50] An unmanned, unarmed platform is not subject to the directive, but in the future such a platform could be used in conjunction with other, armed platforms as part of a lethal autonomous system. Autonomous technology lends itself to becoming weaponized.

If left unregulated, LAWS will be subject to the IHL, IHRL, and customary international law already in place. This includes jus ad bellum, which is concerned with the inter-state issue of whether a use of military force by one state on another state’s territory is compatible with the latter’s sovereignty and territorial integrity. IHRL and IHL, on the other hand, are concerned with the protection of individuals (and property), and focus on the specific features of a particular strike, such as against whom the strike is carried out.

Where there is a lack of compatibility with one of these legal regimes, the act is unlawful under international law, which can entail both state responsibility and, in certain circumstances, individual criminal responsibility. However, it will be exceedingly difficult to address whether LAWS have complied with these legal regimes, and in whom responsibility should vest.

A. International Human Rights Law (IHRL) Principles

IHRL is a varied legal model governing the use of force outside armed conflict, under a law enforcement paradigm: it is primarily shaped by human rights treaties as well as customary international human rights norms that govern state behavior. The rights to life and to dignity are the two main rights at stake in the LAWS debate.

Whether LAWS can ensure lethal force is properly directed and calibrated—aimed at appropriate targets and not overstepping the boundaries of what is necessary to neutralize an immediate threat during law enforcement—is related to the protection of the right to life of those who are protected by law against the use of force.[51] The right to dignity is another concern in IHRL: some scholars argue that to allow machines to determine when and where to use force against humans is to “reduce them from humans to objects; they are treated as mere targets.”[52]

The human rights approach places a strong emphasis on the need for accountability: where an arbitrary deprivation of life occurs, there must be accountability. As Christof Heyns has pointed out, “[c]ontrol and accountability are two sides of the same coin”: if humans do not have control over force release, they cannot be held accountable, which leaves an “accountability vacuum.”[53] The international community will soon confront a situation in which an autonomous weapon could identify a target and execute a kill chain while that human target defies easy categorization under IHL. The Geneva Conventions govern hostilities directed against non-state actors, but in this scenario neither actor’s rights nor responsibilities would be well accounted for in law.

LAWS also raise questions about broader individual and political accountability: in the event of action that violates IHRL, whom would we try for the crime? The robot, the person who programmed it, or the officer who ordered its use?[54] Imposing criminal liability on any of these actors would not be a useful endeavor in terms of deterrence. For example, because machine learning lacks transparency and no single person will likely understand the complex interactions between the constituent parts of LAWS, military commanders might not have the requisite mens rea to incur traditional command responsibility and criminal liability, and even if they did understand, they might not have been able to prevent the action of the autonomous weapon.[55] Furthermore, state responsibility for acts of autonomous systems is crucial from a policy perspective, because without it the prospect of prevention is similarly diminished.[56] The best way to address this problem would be to assign responsibility in advance, through an arms treaty that clarifies which intents and uses are unlawful when deploying LAWS.

B. International Humanitarian Law (IHL)

If IHRL governs law enforcement, IHL governs the conduct of conflict.[57] In order for IHL to be applicable, a situation of armed conflict must exist.[58] The term “armed conflict” is used to ensure that actors cannot avoid their responsibilities under the Geneva Conventions by denying the existence of a state of war during a conflict.[59] In order for IHL to apply, certain “intensity requirements” applicable to both international and internal armed conflicts must be exceeded—these include the number, duration, and intensity of individual confrontations; the type of weapons and other military equipment used; the number and caliber of munitions; the number of persons participating in the fighting; the number of casualties; and the number of civilians fleeing combat zones.[60]

Isolated drone attacks arguably do not meet the requirement of protracted violence, and it is likely that deploying LAWS would also fail these requirements precisely because they limit the number of persons involved and would probably be used in an isolated manner (IHRL might still be applicable in such situations). The U.S. has acted under a type of mixed model in dealing with terrorists—U.S. Department of State Legal Adviser Harold Koh has said that “whether a particular individual will be targeted in a particular location will depend upon considerations specific to each case, including those related to the imminence of a threat . . . and the willingness and ability of those states to suppress the threat the target poses.”[61]

The assumption and allocation of responsibility is also vital for compliance with IHL. Jus in bello, like IHRL, requires that someone be responsible for a possible war crime. While I discussed above the possibility that responsibility could ultimately vest in the commanding officer who orders the system’s use, holding that officer liable in the event of a violation would be unfair or unjust to both that individual and any resulting casualties, given the officer’s inability to directly control an autonomous robot.[62] Roboticist Ronald Arkin disputes this concern, arguing that the issue could be resolved by directly encoding prescriptive ethical codes within the robot itself, which could govern its actions in a manner consistent with the laws of armed conflict and rules of engagement.[63]

Where a state of armed conflict does exist, the ultimate goal of IHL is to protect those who are not, or are no longer, taking direct part in the hostilities, as well as to restrict recourse to certain means and methods of warfare. Five principles run through the language of the various humanitarian law treaties that the United States acknowledges regarding conduct in armed conflict: (1) a general prohibition on the employment of weapons of a nature to cause superfluous injury or unnecessary suffering; (2) military necessity; (3) proportionality; (4) discrimination; and (5) command responsibility.[64] While critics point to the Martens Clause and military necessity,[65] proportionality and discrimination raise the highest burdens for lethal autonomous weapon system compliance.

1. Distinction.—According to this rule, civilians may not be targeted unless, and for such time as, they directly participate in hostilities. The notion of direct participation in hostilities has been elaborated on by the International Committee of the Red Cross, which makes clear that the key criterion is that the civilian either carries out acts that directly cause harm or engages in an operation that directly causes harm.[66] As long as it is possible to supply the autonomous system with sufficiently reliable and accurate data to ensure it can discriminate, the weapon will comply.

Arkin argues that LAWS will be able to sufficiently integrate the criteria for distinction into their programming, using a method that he believes can produce the same ethical results whether the actor is a human or a computational agent.[67] Others dispute this notion, because LAWS lack qualities such as common sense, appreciation of the larger picture, understanding of civilians’ and combatants’ intentions, and anticipation of the direction in which events are unfolding.[68]

However, the nature of modern warfare makes it much harder to distinguish civilians from combatants. The International Committee of the Red Cross provides that in order for an actor to qualify as a direct participant in hostilities, a specific act must meet the following criteria: (1) threshold of harm, (2) direct causation, and (3) belligerent nexus.[69] More broadly, individual status categories have historically played an important role in the law of war.[70] Protective schemes turn on whether affected persons are properly classified as “combatants,” “noncombatants,” “prisoners of war,” or “civilians.”[71] When potential targets are assigned a status, that status determines the scope and content of their protection. However, in the Global War on Terror, civilians participated in hostilities, and “irregulars”—a residual category of unlawful combatants—did not enjoy protection.[72] In counterinsurgency and unconventional armed conflict, in which combatants are often identifiable only through the interpretation of conduct, LAWS would face a significant obstacle to compliance.[73]
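As a purely illustrative matter, the ICRC’s three criteria could in principle be encoded as a simple predicate. The sketch below uses invented fields and assumes away the hard part, reliably deriving those fields from observed conduct, which is precisely the obstacle this Part identifies.

```python
# Toy encoding of the ICRC's direct-participation criteria; the inputs are
# hypothetical, and populating them from real sensor data is the unsolved problem.
from dataclasses import dataclass

@dataclass
class ObservedAct:
    expected_harm_meets_threshold: bool  # (1) threshold of harm
    harm_directly_caused_by_act: bool    # (2) direct causation
    act_in_support_of_a_party: bool      # (3) belligerent nexus

def is_direct_participant(act: ObservedAct) -> bool:
    """All three ICRC criteria must be satisfied before a civilian loses protection."""
    return (act.expected_harm_meets_threshold
            and act.harm_directly_caused_by_act
            and act.act_in_support_of_a_party)

# An act lacking a belligerent nexus fails the test: the person remains protected.
print(is_direct_participant(ObservedAct(True, True, False)))  # False
```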

2. Proportionality.—This principle requires that the expected harm to civilians be measured, prior to the attack, against the anticipated military advantage to be gained from the operation.[74] The rule presupposes subjective human estimates of value and context-specific judgment under the attendant circumstances. The value of a target is constantly changing and depends on the moment in the conflict, while some circumstances may be so complex that the level of permissible collateral damage will not be obvious. Noel Sharkey has prominently warned of the likelihood of undesired and unexpected behavior by LAWS, based on their inability to “frame” and contextualize an environment.

It remains to be seen whether these concepts can be translated into an ethical architecture—Arkin and other leading roboticists are attempting to create algorithms or artificial intelligence systems for autonomous weapons that can take such fundamental principles into account.[75] A collateral damage estimate methodology (CDEM) can help an autonomous weapon determine the likelihood of collateral damage to objects or persons near a target.[76] However, the analysis would not resolve whether a particular attack complies with the rule of proportionality: it is necessary to consider expected collateral damage in light of the anticipated military advantage which may result. Under CDEM, the greater the likelihood of harm to civilians, the higher the required level of command to authorize the attack.[77]
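The relationship between a collateral damage estimate and the required level of approval can be sketched as follows; the casualty thresholds and authorization tiers are invented for illustration and do not reflect the actual CDEM or any real approval chain, and the proportionality weighing itself is deliberately left to a human authority, consistent with the point above.

```python
# A toy illustration of the idea behind a collateral damage estimate methodology
# (CDEM): higher estimated civilian harm requires a higher level of command to
# approve the strike. All numbers and tier names are hypothetical.
def required_authorization(expected_civilian_casualties: float) -> str:
    """Map an estimated level of civilian harm to the command level that must approve the strike."""
    if expected_civilian_casualties == 0:
        return "tactical commander"
    if expected_civilian_casualties < 5:
        return "operational commander"
    return "national-level authority"

def proportionality_decision(expected_civilian_casualties: float,
                             anticipated_military_advantage: str) -> str:
    # The CDEM output only escalates approval; weighing expected harm against the
    # anticipated military advantage is the separate, context-dependent judgment
    # that this Note argues cannot yet be delegated to a machine.
    level = required_authorization(expected_civilian_casualties)
    return (f"Escalate to {level}; advantage claimed: {anticipated_military_advantage}; "
            f"proportionality judgment reserved for that human authority.")

print(proportionality_decision(3, "disable enemy air defenses"))
```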

Proportionality presupposes that a commander with authority to authorize the attack will make the proportionality determination as part of the attack’s approval process. Beyond that obstacle, the battlespace is complex and fluid: it is unlikely that a lethal autonomous weapon will be able to perform robust assessments of a strike’s likely military advantage without incredible advances in artificial intelligence that enable a system to weigh proportionality with human-like “reasonableness.”

C. Modern Warfare and Human Rights Treaty Noncompliance

Lethal autonomous weapons occupy a nexus in which two concurrent phenomena are destabilizing international law.

First, unorthodox scenarios are now the primary focus for the use of lethal force, and they will remain the United States’ focus should it continue to engage in counterterrorism and counterinsurgency armed conflict. Robert Chesney has described armed conflict in the second decade after 9/11 as characterized by a “‘shadow war,’ taking place on an episodic basis in locations far removed from zones of conventional combat operations.”[78] Accordingly, there is substantial legal friction: the international community simultaneously faces a growing disconnect between the rise of regional non-state actors and the conception of the enemy embedded in domestic legal architecture, and ambiguity concerning when a state of armed conflict exists. Drones, the predecessors to lethal autonomous weapons, occupy a legal gray zone internationally because the more difficult it has become to pursue overt action, the more politically attractive it is to use unmanned weapons that minimize the risk to domestic forces. As Chesney notes: “Technological change has disrupted the calculus of covert action, creating unprecedented opportunities for projecting lethal force without relying on proxies and with relatively little risk to U.S. personnel.”[79]

Second, human rights and humanitarian law treaties will not be able to adequately address the risks inherent in LAWS, because noncompliance with human rights treaties appears to be common among countries that have ratified them.[80] Furthermore, countries with poor human rights ratings are sometimes more likely to have ratified the relevant treaties than are countries with better ratings.[81] International legal rationalists have claimed for years that countries will comply with treaties only when doing so enhances their interests, whether those interests are defined in terms of geopolitical power, reputation, or domestic impact. Normative scholars, meanwhile, have claimed that strict self-interest is less important to understanding international law compliance than the persuasive power of legitimate legal obligations. Oona Hathaway has responded to these arguments, finding that countries comply with treaties “not only because they are committed to or benefit from the treaties, but also because they benefit from what ratification says to others.”[82] Treaties operate on more than one level simultaneously—they play instrumental and expressive roles in the international community. Yet ratification of human rights treaties appears to have no statistically significant effect on actual practices.[83] In other words, membership in a treaty regime may matter only insofar as it publicly announces that the principles outlined in the treaty are consistent with the ratifying government’s commitment to human rights.[84] A state may ratify a human rights treaty governing LAWS only to present a front of compliance while assuming there will be no consequences for noncompliance, and nations like the United States, which are more likely to develop LAWS, have historically avoided signing certain human rights treaties.

III. The Benefits of Disarmament

There are currently no laws or treaties specifically governing or restricting military robots, unmanned vehicles, or unmanned platforms, much less lethal autonomous weapons. Because LAWS will not be well regulated by the patchwork of IHL and IHRL, the international community should adopt a convention that specifically defines the technology and its anticipated uses.

International disarmament treaties are legally binding, multilateral agreements. They could formally regulate oversight of a new weapons category, and in doing so would encourage state compliance far more effectively than IHL historically has. Their provisions often include a ban, or limitations short of a ban, on activities such as acquisition, retention, and stockpiling; research and development; testing; deployment; transfer or proliferation; and use.[85] New international arms control instruments are typically freestanding, but agreements currently in force can provide models upon which to base a treaty restricting lethal autonomous weapons. The United States is not a party to all of the following conventions and is not specifically bound by them (to the extent their requirements do not rise to the level of customary international law).[86] However, the United States takes a great deal of interest in the articulation of standards which regulate conduct generally in the battlespace.[87]

Even the broadest and most aggressively implemented arms control instruments have weaknesses: they apply only to states and typically require state consent. Yet the following treaties have been extremely successful in restricting certain types of weapons:

The Chemical Weapons Convention (CWC) and the Biological Weapons Convention (BWC)[88] limit or prohibit certain targeted categories of weapons. While the CWC effectively bans the use of chemical weapons and military preparations for their use, the BWC prohibits the use of biological and toxin weapons by reaffirming the 1925 Geneva Protocol.[89] These agreements also contain prohibitions and limitations on research and development, which is uncommon in international arms control instruments because restrictions on research and development have deleterious effects on more benign technology and it is difficult to police or verify compliance without highly intrusive inspections.[90]

The Treaty on the Non-Proliferation of Nuclear Weapons (NPT) regulates acquisition by creating two classes of states.[91] Non-nuclear weapon state parties to the NPT are prohibited from receiving, manufacturing, or otherwise acquiring nuclear weapons, whereas the Comprehensive Test Ban Treaty[92] and the Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space, and Under Water[93] (Limited Test Ban Treaty) specifically prohibit nuclear-weapons tests.

Finally, the 1980 Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons which may be Deemed to be Excessively Injurious or to have Indiscriminate Effects contains rules for the protection of military personnel, civilians, and civilian objects from injury or attack. Its provisions restrict use of landmines and booby traps, incendiary weapons, blinding lasers, and explosive remnants of war.[94]

A. Provisions

An arms control treaty drafted with principles of humanitarian law in mind should include (1) a definition of LAWS and (2) a prohibition on specific intents or uses of LAWS, with accountability vesting in certain actors. To reiterate, it may be more difficult to restrict LAWS than other weapons because they are combinations of multiple and often multipurpose technologies, so regulating use will likely be more effective than an outright ban on developing or deploying such systems.

1. Definition of lethal autonomous weapons.—The crux of full autonomy is the capability to identify, target, and attack a person or object without human intervention. Although a human operator may retain the ability to take control of the system, the weapon is capable of operating on its own.

Because lethal autonomous systems have been defined so vaguely, some scholars argue that any treaty should focus on the delegation of the authority to initiate lethal force to an automated process not under direct human supervision or control.[95] The concept of meaningful human control became an important aspect of this debate at the 2016 CCW informal meeting of LAWS experts.[96] Because the human–machine relationship extends throughout the development of a lethal autonomous system and is not limited to a decision to engage a target, the subjective idea of “meaningful” levels of human involvement and judgment will not be useful as a metric. Both the tendency of the pace of battle to increase with technological developments and the costs associated with keeping a human in- or on-the-loop are likely to be greatly exacerbated by the time LAWS are deployed.[97] Furthermore, mandated levels of human oversight will not solve the problem, as human combatants are likely to defer to the system’s recommended course of action and will rely crucially in other ways on decisions made by autonomous weapons.[98]

The definition of LAWS should abandon “meaningful” levels of human involvement as a standard and instead be drawn broadly enough to account for the various platforms that will be capable of autonomous lethal decision-making.

2. Accountability for intents and uses.—A regulatory approach that focuses on technology—namely, keeping a human in- or on-the-loop in the weapons themselves—is misplaced in the case of LAWS. Instead, the focus should be on intent and use. As the ICRC noted in its report, “the crucial question does not seem to be whether new technologies are good or bad in themselves, but instead what are the circumstances of their use.”[99] The mere fact that a human is not in control of a particular engagement does not mean that no human is responsible for the actions of the autonomous weapon system.[100] A human must decide how to program the system and when to launch it.[101] A LAWS disarmament treaty should introduce a military-created standard of operation for autonomous systems. This standard could set out how such robotic systems may be used in accordance with IHL, which would also address accountability concerns. Where conduct falls below this standard of care, liability should be imposed on several discrete categories of actors. First, the treaty should clarify state-level responsibility. International law has held states responsible for conduct that can be attributed to particular nations, including that of private actors and groups over which the state exercises a certain degree of control.[102] The ICJ has clarified that control over specific operations puts a state in a position to ensure that its actions comply with international law. Second, because autonomous military technology lends itself to situations outside a recognized battlespace, where an armed conflict may not exist and IHRL is the applicable legal framework, the disarmament agreement must contain additional provisions governing use by non-state actors.[103]

Finally, responsibility must rest on military commanders. American LAWS will be designed, owned, and operated by the DoD, the individual branches of the armed forces, or DoD contractors.[104] Benjamin Kastan has thoroughly evaluated how existing law, including the Federal Tort Claims Act, the Foreign and Military Claims Acts, and the Alien Tort Statute, provides civil remedies when other weapons systems cause injury, and he argues LAWS change the legal analysis only on the issue of “operational” negligence.[105] Negligence may also be addressed under internal military discipline. Under current law, it is not clear how courts would determine who may be held negligent when LAWS are deployed. Crimes of specific intent cannot apply to autonomous systems themselves (since machines cannot form intent), and the crime of murder would apply only to a human commander who directed the system to execute lethal force with the requisite specific intent.

However, the treaty could specifically create accountability by providing that a human actor may be guilty of involuntary manslaughter or negligent homicide. For example, a commander or contractor who deploys a lethal autonomous weapon with inadequate or incorrect instructions could be charged with involuntary manslaughter. Whether caution was due (typically a requirement for a finding of involuntary manslaughter) would have to be evaluated based on what the actor knew about the system, training standards, and the attendant circumstances.[106] Existing doctrines such as command responsibility may be able to fill any remaining gap: military services, and perhaps states themselves, would be responsible for the conduct of LAWS commanders.

Conclusion

LAWS tap into a deep-seated public fear of delegating killing decisions to machines. The support for the Campaign to Stop Killer Robots[107] and the serious international discussion of prohibiting LAWS outright—when autonomous weapons are still on the drawing board—demonstrate that many feel that just because we can create something does not mean that we should.[108] However, a treaty containing a specific, workable definition of LAWS and emphasizing transparency, accountability, and the rule of law would create sorely needed parameters for future research and deployment of LAWS. The international community could set a bar below which lethal autonomous systems may not be used in fully autonomous modes. International law needs guidance on LAWS beyond the existing patchwork, and if the international community paves the way for a disarmament treaty, it may prove one of humanity’s great success stories.

  1. .Andrea Peterson, In An Apparent First, Dallas Police Used a Robot to Deliver Bomb that Killed Shooting Suspect, Wash. Post (July 8, 2016), https://www.washingtonpost.com/news/the-switch/wp/2016/07/08/dallas-police-used-a-robot-to-deliver-bomb-that-killed-shooting-suspect/ [https://perma.cc/S9VY-B4SR].
  2. .Id.
  3. .Id.
  4. .Christof Heyns, Rep. of the Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions, Hum. Rts. Council, U.N. Doc. A/HRC/23/47, at 5 (Apr. 9, 2013) [hereinafter Heyns Report].
  5. .Id. at 5–6.
  6. .See generally Derek Jinks, The Applicability of the Geneva Conventions to the “Global War on Terrorism”, 46 Va. J. Int’l L. 165 (2005) (applying the Geneva Conventions to international and noninternational armed conflict in the Global War on Terrorism).
  7. .Ronald Arkin, Governing Lethal Behavior in Autonomous Robots 5 (2009); see also Kenneth Anderson, The Case for Drones, Comment. Mag. (June 1, 2013), https://www.commentarymagazine.com/articles/the-case-for-drones/ [https://perma.cc/V372-Y5S6] (arguing that drone warfare is an “honorable attempt” to seek out terrorists and insurgents who hide among civilians, as the “unpredictability and terror” of a sudden attack has a significant impact on planning and effectiveness of terrorist organizations).
  8. .See Anderson, supra note 7 (defending drones on the basis that remote weapons technology allows greater discrimination in time, manner, and targeting in conflict when adversaries use unwitting civilian shields, compared to weapons such as landmines).
  9. .E.g., Arkin, supra note 7; Michael N. Schmitt & Jeffrey S. Thurnher, “Out of the Loop”: Autonomous Weapon Systems and the Law of Armed Conflict, 4 Harv. Nat’l Sec. J. 231, 239 (2013); John Lewis, Comment, The Case for Regulating Fully Autonomous Weapons, 124 Yale L.J. 1309 (2015).
  10. .Throughout this Note I refer to “international humanitarian law” and “laws of armed conflict” interchangeably as the law that regulates the conduct of armed conflicts found in the Geneva Conventions and related protocols, treaties, and customary international law.
  11. .Schmitt & Thurnher, supra note 9, at 239.
  12. .There is currently no consensus as to when general artificial intelligence will be achieved, but many estimates range from five to fifteen years. Compare John Markoff, Artificial Intelligence Is Far From Matching Humans, Panel Says, N.Y. Times (May 25, 2016), http://www.nytimes.com/2016/05/26/technology/artificial-intelligence-is-far-from-matching-humans-panel-says.html [https://perma.cc/Q4CU-D8BT] (quoting a computer scientist at a White House-sponsored conference as saying “The A.I. community keeps climbing one mountain after another, and as it gets to the top of each mountain, it sees ahead still more mountains”), with Noel Sharkey, Automating Warfare: Lessons Learned from the Drones, 21 J.L. Info & Sci. 140, 142–44 (2011) (emphasizing the distinction between “autonomous” weapons systems and “artificial intelligence” in the five to fifteen year estimate range).
  13. .See Ryan Calo, Robotics and the Lessons of Cyberlaw, 103 Calif. L. Rev. 513, 538–39 (2015) (describing the relationship between emergent behavior in robotics and autonomy).
  14. .U.S. Dep’t of Def., Directive 3000.09, Autonomy in Weapon Systems, U.S. Dep’t Def. 13–14 (Nov. 21, 2012), http://www.dtic.mil/whs/directives/corres/pdf/300009p.pdf [https://perma.cc/HX7J-FB85] [hereinafter DoD Directive].
  15. .Schmitt & Thurnher, supra note 9, at 235.
  16. .Id.
  17. .See Arkin, supra note 7, at 52–53 (describing the results of a survey evaluating ethical boundaries for LAWS using acceptable roles, situations, and levels of autonomy as metrics).
  18. .Schmitt & Thurnher, supra note 9, at 240; see also DARPA’s Anti Submarine Warfare Game Goes Live, DARPA (Apr. 4, 2011), http://www.spacewar.com/reports/DARPA_Anti_Submarine_Warfare_Game_Goes_Live_999.html [https://perma.cc/23AG-CPVS].
  19. .Kenneth Anderson & Matthew Waxman, Law and Ethics for Robot Soldiers, Pol’y Rev., Dec. 1, 2012, http://www.hoover.org/research/law-and-ethics-robot-soldiers [https://perma.cc/SE3W-7633] [hereinafter Anderson & Waxman, Law and Ethics].
  20. .Schmitt & Thurnher, supra note 9, at 235.
  21. .See generally Arkin, supra note 7.
  22. .Jakob Kellenberger, International Humanitarian Law and New Weapon Technologies, Keynote Address at the 34th Round Table on Current Issues of International Humanitarian Law (Sept. 8, 2011), https://www.icrc.org/eng/resources/documents/statement/new-weapon-technologies-statement-2011-09-08.htm [https://perma.cc/KF6F-WX8C].
  23. .See Arkin, supra note 7, at 57–67 (describing mathematical formalization as a basis for the development of autonomous systems capable of supporting ethical behavior regarding the application of lethality in war).
  24. .See infra Part II.
  25. .Peter Asaro, On Banning Autonomous Weapon Systems: Human Rights, Automation, and the Dehumanization of Lethal Decision-Making, 94 Int’l Rev. Red Cross 687, 695 (2012).
  26. .Heyns Report, supra note 4, at 6.
  27. .Asaro, supra note 25, at 692.
  28. .Losing Humanity: The Case Against Killer Robots, Hum. Rts. Watch 4 (Nov. 2012), http://www.hrw.org/sites/default/files/reports/arms1112_ForUpload.pdf [https://perma.cc/7GJR-E37J]. But see Arkin, supra note 7, at 29–30 (arguing LAWS can be designed without emotions such as fear and anger, which can cloud the judgment of human soldiers and lead to their engaging in “fearful measures and criminal behavior”); Anderson & Waxman, Law and Ethics, supra note 19 (noting lethal autonomous systems might be more precise because human soldiers have failings exacerbated by fear, vengeance, and other emotions).
  29. .Losing Humanity, supra note 28, at 4.
  30. .E.g., U.S. Dep’t of Def., Office of the Sec’y of Def., Unmanned Systems Roadmap 2007-2032, at 49, 54 (2007), http://www.fas.org/irp/program/collect/usroadmap2007.pdf [https://perma.cc/KP5A-QKTX]; U.S. Dep’t of Def., Office of the Sec’y of Def., Unmanned Systems Roadmap 2005-2030, at 52 (2005), https://fas.org/irp/program/collect/uav_roadmap2005.pdf [https://perma.cc/SD52-4FMN]; U.S. Air Force, Unmanned Aircraft Systems Flight Plan 2009-2047, at 50–51 (2009), https://fas.org/irp/program/collect/uas_2009.pdf [https://perma.cc/Q6G3-VJLW].
  31. .Dan Lamothe, The Killer Robot Threat: Pentagon Examining How Enemy Nations Could Empower Machines, Wash. Post (Mar. 30, 2016), https://www.washingtonpost.com/news/checkpoint/wp/2016/03/30/the-killer-robot-threat-pentagon-examining-how-enemy-nations-could-empower-machines/ [https://perma.cc/F7R4-LUE2].
  32. .Id.
  33. .The concept derives its name from two earlier “offsets”: in the first, the Pentagon developed tactical nuclear weapons during the Cold War; in the second, the military introduced GPS to guide bombs and missiles in the field. Id.; see also John Markoff, Arms Control Groups Urge Human Control of Robot Weaponry, N.Y. Times (Apr. 11, 2016), http://www.nytimes.com/2016/04/12/technology/arms-control-groups-urge-human-control-of-robot-weaponry.html [https://perma.cc/BE2W-B9V5] (noting the Third Offset strategy seeks to exploit technologies to maintain American military superiority).
  34. .Arkin, supra note 7, at xii.
  35. .Schmitt & Thurnher, supra note 9, at 237–38.
  36. .Arkin, supra note 7, at 26; Anderson & Waxman, supra note 19; Markoff, supra note 12.
  37. .Schmitt & Thurnher, supra note 9, at 238.
  38. .Kenneth Anderson & Matthew Waxman, Brave New War, Hoover Institution (Dec. 14, 2012), http://www.hoover.org/research/brave-new-war [https://perma.cc/YJU5-6DJG].
  39. .See The Problem, Campaign to Stop Killer Robots, http://www.stopkillerrobots.org/the-problem [http://perma.cc/BYN8-YMQP] (calling for an international prohibition on the development, deployment and use of armed autonomous unmanned systems because machines should not be allowed to make the decision to kill people); see also Markoff, supra note 12 (noting that researchers at a recent conference on artificial intelligence indicated “A.I. research is still far from matching the flexibility and learning capability of the human mind”); Sharkey, supra note 12 (arguing that the main ethical problem in developing LAWS is that no autonomous robots or artificial intelligence systems are likely to be able to distinguish between combatants and noncombatants).
  40. .There is a growing body of scholarship responding to these views and calling for an approach in between these policymaking poles. See generally Lewis, supra note 9 (advocating for regulations addressing the most dangerous aspects of lethal autonomous weapons). For example, Kenneth Anderson and Matthew Waxman published a series of papers from 2012 to 2013 in which they called for a set of principles that could guide the gradual evolution and adaptation of longstanding law-of-war principles to regulate how the United States develops and tests its lethal autonomous weapons systems. See, e.g., Anderson & Waxman, Law and Ethics, supra note 19; Kenneth Anderson & Matthew Waxman, Killer Robots and the Laws of War, Wall St. J. (Nov. 3, 2013, 6:33 PM), http://www.wsj.com/articles/SB10001424052702304655104579163361884479576 [https://perma.cc/YY9K-YVT2] [hereinafter Anderson & Waxman, Killer Robots]. However, scholars typically attempt to adapt IHL and IHRL to include lethal autonomous weapons, which may be infeasible. See infra Part II.
  41. .Anderson & Waxman, Killer Robots, supra note 40.
  42. .Id.; see, e.g., Markoff, supra note 33 (using the newly developed Long Range Anti-Ship Missile as an example of a weapon which defies the distinction between semiautonomous and fully autonomous weapons, as it is designed to make final targeting decisions after a human operator launches it).
  43. .Anderson & Waxman, Killer Robots, supra note 40.
  44. .Protocol Additional to the Geneva Conventions of 12 August 1949, and Relating to the Protection of Victims of International Armed Conflicts (Protocol I) art. 36, opened for signature June 8, 1977, 1125 U.N.T.S. 3 [hereinafter Additional Protocol].
  45. .DoD Directive, supra note 14, at 1; see also Jeffrey S. Thurnher, The Law that Applies to Autonomous Weapon Systems, 17 Am. Soc’y Int’l L. Insights (Jan. 18, 2013), http://www.asil.org/insights/volume/17/issue/4/law-applies-autonomous-weapon-systems [https://perma.cc/5NWF-5YDW] (noting that the U.S. Department of Defense Directive and the Human Rights Watch report were released several days apart and seemed to reach different conclusions about the lawfulness of such weapons).
  46. .DoD Directive, supra note 14, at 1–2. Some experts contend that Article 36 is customary international law binding on all states, while others argue that it is merely best practice. See Losing Humanity, supra note 28, at 20–21 (noting that many weapons-producing states have accepted the obligation to review, but that the United States is not party to Protocol I).
  47. .2016 Meeting of Experts on LAWS, United Nations Off. at Geneva, http://www.unog.ch/80256EE600585943/(httpPages)/37D51189AC4FB6E1C1257F4D004CAF2 [https://perma.cc/RTN9-Y4VR].
  48. .Louis Henkin, How Nations Behave: Law and Foreign Policy 47 (2d ed. 1979) (emphasis omitted).
  49. .See Harold Hongju Koh, How is International Human Rights Law Enforced?, 74 Ind. L.J. 1397, 1399 (1998) (defining the process as institutional interaction, interpretation of legal norms, and attempts to internalize those norms in domestic legal systems).
  50. .DoD Directive, supra note 14, at 1.
  51. .Christof Heyns, Professor, University of Pretoria, Autonomous Weapon Systems: Human Rights and Ethical Issues, Speech at United Nations Special Rapporteur on Extrajudicial, Summary or Arbitrary Executions 1–2 (Apr. 14, 2016) (transcript available at http://www.unog.ch/80256EDD006B8954/(httpAssets)/205D5C0B0545853BC1257F9B00489FA3/$file/heyns+CCW+2016+talking+points.pdf [https://perma.cc/KV5L-FDEK]).
  52. .See id. at 2 (opining that human targets will be reduced to “zeros and ones in the digital scopes of weapons”).
  53. .Id.
  54. .See Robert Sparrow, Killer Robots, 24 J. Applied Phil. 62, 69–73 (2007) (discussing the problems with holding each of these actors accountable).
  55. .Id.
  56. .Heyns Report, supra note 4, at 14–15.
  57. .For the purposes of this Note, I will distinguish between IHL and IHRL using the “complementary model,” which is based on whether an armed conflict exists. See, e.g., Oona A. Hathaway et al., Which Law Governs During Armed Conflict? The Relationship Between International Humanitarian Law and Human Rights Law, 96 Minn. L. Rev. 1883, 1886 (2012) (outlining three theoretical approaches to the relationship between human rights law and humanitarian law). There is a significant body of scholarship dedicated to the relationship between both schemes, and some scholars argue that one encompasses the other or that the two overlap. E.g., Lesley Wexler, International Humanitarian Law Divergence, 42 Pepp. L. Rev. 549, 556–57 (2015) (noting many states fundamentally disagree on the correct interaction of the bodies of law, but that the United States has recently favored the exclusive or strong application of IHL to situations arising under armed conflict).
  58. .Mark Klamberg, International Law in the Age of Asymmetrical Warfare, Virtual Cockpits and Autonomous Robots, in International Law and Changing Perceptions of Security 152, 158 (Jonas Ebbesson et al. eds., 2014).
  59. .Id. at 161.
  60. .Id. at 162.
  61. .Harold Koh, Legal Adviser Koh’s Speech on the Obama Administration and International Law, March 2010, Council on Foreign Rels. (March 25, 2010), http://www.cfr.org/international-law/legal-adviser-kohs-speech-obama-administration-international-law-march-2010/p22300 [https://perma.cc/VU54-BDCL].
  62. .See generally Sparrow, supra note 54.
  63. .Arkin, supra note 7, at 38–39.
  64. .Gary E. Marchant et al., International Governance of Autonomous Military Robots, 12 Colum. Sci. & Tech. L. Rev. 272, 294–95 (2011).
  65. .The Martens Clause prohibits weapons that run counter to the “dictates of public conscience.” Losing Humanity, supra note 28, at 25. This obligation not to use weapons that have indiscriminate effects underlies the prohibition of certain weapons, and some weapons have been banned or restricted because they cause superfluous injury or unnecessary suffering. Additional Protocol, supra note 44, at 21. However, the clause is customary international law and applies only in the absence of treaty law. See Schmitt & Thurnher, supra note 9, at 275 (“[I]t is a failsafe mechanism meant to address lacunae in the law; it does not act as an overarching principle that must be considered in every case.”).
  66. .Nils Melzer, Int’l Committee of the Red Cross, Interpretive Guidance on the Notion of Direct Participation in Hostilities Under International Humanitarian Law 46 (2009).
  67. .See Arkin, supra note 7, at 95–96 (developing a scheme to integrate IHL into classes of lethal actions for autonomous systems based on absolutely forbidden actions and obligatory actions).
  68. .Klamberg, supra note 58, at 167. But see Arkin, supra note 7, at 122–24 (describing the necessary IHL factors any ethical architecture for LAWS must include).
  69. .Melzer, supra note 66, at 46; see also Klamberg, supra note 58, at 164 (clarifying these criteria in relation to drone attacks in the Middle East).
  70. .Derek Jinks, Protective Parity and the Laws of War, 79 Notre Dame L. Rev. 1493, 1495 (2004).
  71. .Id. Professor Jinks discusses these distinctions at greater length as they relate to prisoners of war. Id. at 1496–98.
  72. .Id. at 1498.
  73. .But see Heyns Report, supra note 4, at 13 (noting that technology can offer increased protection to humans by “lifting the fog of war” for human soldiers using powerful sensors and processing powers).
  74. .Heyns Report, supra note 4, at 13.
  75. .Anderson & Waxman, Law and Ethics, supra note 19.
  76. .Schmitt & Thurnher, supra note 9, at 254.
  77. .Id. at 255.
  78. .Robert M. Chesney, Beyond the Battlefield, Beyond Al Qaeda: The Destabilizing Legal Architecture of Counterterrorism, 112 Mich. L. Rev. 163, 167 (2013).
  79. .Id. at 207.
  80. .See Oona A. Hathaway, Do Human Rights Treaties Make a Difference?, 111 Yale L.J. 1935, 1978 (2002) (finding that although countries that have ratified treaties have better human rights ratings on average, noncompliance seems to be rampant).
  81. .Id.
  82. .Id. at 2002.
  83. .Id.
  84. .Id. at 2006.
  85. .Heyns Report, supra note 4, at 19.
  86. .Marchant et al., supra note 64, at 290.
  87. .Id.
  88. .Convention on the Prohibition of the Development, Production, Stockpiling and Use of Chemical Weapons and on Their Destruction, Jan. 13, 1993, 1974 U.N.T.S. 45; Convention on the Prohibition of the Development, Production and Stockpiling of Bacteriological (Biological) and Toxin Weapons and on Their Destruction, Apr. 10, 1972, 26 U.S.T. 583, 1015 U.N.T.S. 163 [hereinafter BWC].
  89. .BWC, supra note 88.
  90. .Marchant et al., supra note 64, at 302.
  91. .Treaty on the Non-Proliferation of Nuclear Weapons (NPT), opened for signature July 1, 1968, 21 U.S.T. 483, 729 U.N.T.S. 161 (entered into force Mar. 5, 1970) [hereinafter NPT].
  92. .Comprehensive Nuclear Test Ban Treaty, opened for signature Sept. 24, 1996, 35 I.L.M. 1439 (1996).
  93. .Treaty Banning Nuclear Weapon Tests in the Atmosphere, in Outer Space and Under Water, Aug. 5, 1963, 14 U.S.T. 1313, 480 U.N.T.S. 43.
  94. .Convention on Prohibitions or Restrictions on the Use of Certain Conventional Weapons Which May Be Deemed to Be Excessively Injurious or to Have Indiscriminate Effects (CCW), Oct. 10, 1980, 1342 U.N.T.S. 137 (1980). Protocols I–V enumerate these weapons.
  95. .Asaro, supra note 25, at 696.
  96. .See Michael W. Meier, U.S. Delegation Opening Statement at the Convention on Certain Conventional Weapons (CCW) Informal Meeting of Experts on Lethal Autonomous Weapons Systems 2 (Apr. 11, 2016) (transcript available at http://www.unog.ch/80256EDD006B8954/(httpAssets)/EFF7036380934E5EC1257F920057989A/$file/2016_LAWS+MX_GeneralExchange_Statements_United+States.pdf [https://perma.cc/N8D4-MFSN]) (calling for in-depth discussions about the usefulness of the phrase “meaningful human control”).
  97. .Sparrow, supra note 54, at 69.
  98. .Id.
  99. .Int’l Comm. of the Red Cross, International Humanitarian Law and the Challenges of Contemporary Armed Conflicts 40 (Report of the 31st Conference of the Red Cross and Red Crescent, Geneva, Switzerland 31IC/11/5.1.2 2011), https://app.icrc.org/e-briefing/new-tech-modern-battlefield/media/documents/4-international-humanitarian-law-and-the-challenges-of-contemporary-armed-conflicts.pdf [https://perma.cc/BY22-Z4HM].
  100. .Schmitt & Thurnher, supra note 9, at 277.
  101. .See DoD Directive, supra note 14 (stating that “[p]ersons who authorize the use of, direct the use of, or operate autonomous and semi-autonomous weapon systems must do so with appropriate care and in accordance with the law of war, applicable treaties, weapon system safety rules, and applicable rules of engagement”).
  102. .Hum. Rts. Watch & Int’l Hum. Rts. Clinic at Harv. L. Sch., Killer Robots and the Concept of Meaningful Human Control, Hum. Rts. Watch 13 (Apr. 11, 2016), https://www.hrw.org/news/2016/04/11/killer-robots-and-concept-meaningful-human-control [https://perma.cc/4ZXT-83BD].
  103. .For a discussion of legal accountability for automated weapons systems, see generally Benjamin Kastan, Autonomous Weapons Systems: A Coming Legal “Singularity”?, 2013 U. Ill. J.L. Tech. & Pol’y 45 (2013).
  104. .Id. at 69.
  105. .Id. at 70.
  106. .Id. at 80.
  107. .The Problem, Campaign to Stop Killer Robots, http://www.stopkillerrobots.org/the-problem [https://perma.cc/4VTX-44KR].
  108. .Charli Carpenter, Beware the Killer Robots: Inside the Debate over Autonomous Weapons, Foreign Aff. (July 3, 2013), http://www.foreignaffairs.com/articles/139554/charli-carpenter/beware-the-killer-robots [https://perma.cc/LU5D-AH7C].