

Lethal autonomous weapons systems (AWS) also raise a fundamental issue of human dignity: allowing machines to 'decide' to kill a human being. They further carry the risk of lowering the threshold for engaging in conflict, by reducing the risk to a country's own troops. This report aims to help states form and express their views on the legal provisions that already do, or should, govern the development and use of AWS, particularly with respect to the required type and degree of human–machine interaction. It maps (a) what limits IHL already places on the development and use of AWS; (b) what IHL demands of users of AWS in order to satisfy IHL obligations, whether those obligations fall on a state, an individual or both; and (c) threshold questions concerning the type and degree of human–machine interaction required for IHL compliance. In its findings and recommendations, the report does not pre-judge the policy response that should regulate AWS. Instead, it aims to provide an analytical framework for states and experts to assess how the normative and operational framework regulating the development and use of AWS may need to be clarified and developed further.

Compliance with international humanitarian law (IHL) is widely recognized as a critical benchmark for assessing the acceptability of autonomous weapon systems (AWS). However, in certain key respects, how and to what extent existing IHL rules limit the development and use of AWS remains either subject to debate or underexplored.
