Thank you, Mr. Chairperson. We would like to take a moment to reflect on what we have heard this week. The international community has come together to discuss a challenging issue: the implications of increasing autonomy for military operations, particularly in the use of force. This is an important issue, and we commend states and members of civil society for engaging with these issues.
They are also difficult issues. Unlike previous weapons that the CCW has dealt with, autonomous weapons generally do not yet exist, and so sometimes it is hard to envision exactly what autonomous weapons are and how they might be used.
It is clear that there is still work to be done in converging on a common understanding of autonomous weapons. We have heard a divergent array of views, with some viewing LAWS as weapons with more advanced targeting systems and a wider search area than missiles today, and others viewing them as thinking or learning machines with human-level cognition. There are significant differences between these viewpoints, with important consequences for understanding the risks and potential benefits of autonomous weapons.
The current lack of a common language for communicating about autonomous weapons makes this discussion challenging. We also lack a common framework for thinking about the potential implications of LAWS. From disagreements about whether autonomous weapons violate the public conscience provision of the Martens Clause to questions about why any state would want to develop or use them in the first place, it is clear that the international community is still in the early stages of discussions on these issues. It is critical to avoid rushing to judgment.
Nevertheless, we have heard some common themes.
We have not heard any members suggest that human control over the use of force is unnecessary. Rather, there seems to be an emerging consensus that human control and judgment are needed. And most seem to agree that there should be a necessary quality to that control, just as there is with the use of weapons today. Whether one uses the term meaningful, adequate, effective, or some other term, there seems to be agreement that human judgment is needed over the use of force.
And indeed, a world where humans are no longer involved in decisions about life and death would be a disturbing future.
Humans have certain obligations under the laws of war and, as many pointed out, there are important moral and ethical considerations outside of IHL as well.
As we begin to understand how autonomy should be incorporated into future military systems, we should not lose sight of these obligations and considerations.
It is humans who are bound by the laws of war, and humans who adhere to them or break them.
Machines are tools, and we should take care not to see them as agents. LAWS may be used by humans in ways that are lawful or in ways that are unlawful, but LAWS themselves are not agents under the laws of war. We should not ask whether LAWS could make decisions about proportionality, distinction, and military necessity in compliance with IHL. Rather, we should ask whether LAWS could be used – by humans – in ways that comply with IHL. These are human judgments. Human judgment is and will remain essential.
At the same time, we should not attempt to make blanket determinations about what can or cannot be done in the future based on the state of technology today. We heard this week from a leading researcher in artificial intelligence that computers are able to recognize over 1000 categories of objects and perform facial recognition better than humans. Is it so hard to imagine that computers might be able to recognize objects in war better than people, to distinguish between a person carrying a rifle and a person carrying a rake?
Therefore, we should not frame the choice before us as one between humans and LAWS. Instead, we should look for ways to use autonomy to ensure that the use of force is more lawful and more discriminate, and that it enhances human accountability and responsibility.