
Report | Doc. 15683 | 09 January 2023

Emergence of lethal autonomous weapons systems (LAWS) and their necessary apprehension through European human rights law

Committee on Legal Affairs and Human Rights

Rapporteur: Mr Damien COTTIER, Switzerland, ALDE

Origin - Reference to committee: Doc. 14945, Reference 4479 of 27 January 2020. 2023 - First part-session

Summary

The committee is concerned about the risks posed by the emergence of lethal autonomous weapons systems and considers that the search for a fair balance between maintaining military competitiveness and protecting human rights must be strengthened. It proposes an intermediate solution, between an outright ban and no regulation at all, with two components:

  • firstly, universal recognition that fully autonomous lethal weapons systems that select targets and eliminate them without any meaningful human control can never comply with international humanitarian and human rights law and are therefore prohibited by international law as it stands now;
  • secondly, the development of a legal framework for other, partially autonomous lethal weapons systems that sets out rules to ensure compliance with the laws of war that are appropriate to the particular challenges posed by such weapons.

This framework should eventually be laid down in a legally binding international instrument, in the form of an international convention on lethal autonomous weapons systems. In the meantime, at least a non-binding code of conduct should be developed that could serve as a guide for the negotiators of a future convention.

A. Draft resolution 
(1) Draft resolution unanimously adopted by the committee on 14 November 2022.

1. The Parliamentary Assembly notes that rapid technological progress in the field of artificial intelligence is paving the way for the emergence, in the near future, of lethal autonomous weapons systems (LAWS).
2. According to the definition of the International Committee of the Red Cross (ICRC), the term LAWS encompasses “Any weapon system with autonomy in its critical functions. That is, a weapon system that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention.” LAWS, therefore, are neither remote-controlled systems in which a human retains control throughout, nor automatic systems in which a particular process has been programmed in advance so that their action is totally predictable.
3. The emergence of LAWS has prompted concern on the part of numerous States as well as civil society. Fifty-four non-governmental organisations have launched a campaign in favour of a preventive prohibition of research and development of these emerging technologies and, even more so, of the use of what they call “Killer Robots”. This position of principle was endorsed by the European Parliament in a resolution dated 12 September 2018.
4. The “arms race” logic implied in this field prompts some to see LAWS as the third military revolution in the history of international relations, after the invention of gunpowder and that of nuclear weapons. Global military powers which fail to invest in this technology would therefore risk being left behind.
5. LAWS carry the risk of lowering the threshold for engaging in conflict, by reducing the risk of a country's own troop losses. LAWS also raise a fundamental issue of human dignity – allowing machines to “decide” to kill a human being.
6. The conformity of LAWS with international humanitarian law hinges above all on the possibility, or not, of complying with the principles of distinction, proportionality and precautions in attack.
6.1. The principle of distinction between civilian and military targets could be complied with by LAWS that are well designed and programmed to execute surgical strikes aimed solely at military targets.
6.2. Judgement calls as to whether an attack satisfies the principle of proportionality are made on the basis of values and interpretations of the particular situation rather than on numbers or technical indicators. Making such judgements, which reflect ethical considerations, requires uniquely human judgement. It is for this reason that at least a minimum degree of human control is indispensable.
6.3. To comply with the principle of precaution, the course of action taken by LAWS must be predictable. Users must be capable of adjusting or nullifying the effects of the weapons systems if necessary, something that is possible only if they can reasonably foresee how a weapons system will react.
6.4. The conformity of LAWS with international human rights law, and notably with the European Convention on Human Rights (ETS No. 5), depends on clear regulation of their use. Article 2 of the Convention requires that the right to life be protected by law. This means that the State must introduce a legal framework defining the limited circumstances in which the use of these weapons is authorised. The existing case law of the European Court of Human Rights relates to other types of weapons, but the use of LAWS should not be subject to any less strict standards.
7. From the viewpoint of international humanitarian law and human rights, regulation of the development and above all of the use of LAWS is therefore indispensable. The crucial point is human control. Respect for the rules of international humanitarian and human rights law can only be guaranteed by maintaining human control, to degrees that vary according to the stances taken by States and other actors of the international community. Several levels of human control may be envisaged: significant control, effective control or appropriate levels of human judgement. Human control must be maintained over lethal weapons systems at all stages of their life cycle.
7.1. Human control can be exercised at the development stage, including through technical design and programming of the weapon system (ethics by design): decisions taken during the development stage must ensure that the weapon system can be used in the intended or expected circumstances of use, in accordance with international humanitarian law and other applicable international norms, in particular the European Convention on Human Rights.
7.2. Human control may also be exerted at the point of activation, which involves the decision of the commander or operator to use a particular weapon system for a particular purpose. This decision must be based on sufficient knowledge and understanding of the weapon’s functioning in the given circumstances to ensure that it will operate as intended and in accordance with international humanitarian law and other applicable international norms. This knowledge must include adequate situational awareness of the operational environment, especially in relation to the potential risks to civilians and civilian property.
7.3. In order to ensure compliance with international humanitarian law and other applicable international norms, it may be thought necessary to exert additional human control during the operation stage, when the weapon autonomously selects and attacks targets. Human intervention may be necessary in order to comply with the law and remedy shortcomings at the development stage and at the point of activation.
8. Unlike humans, machines do not have feelings and are not moral agents. If a person commits a war crime with an autonomous weapon, it is the human who commits the crime, using the autonomous weapon as the tool. Humans must be not only legally accountable but also morally responsible for the actions of LAWS. Some decisions pertaining to the use of weapons require legal and moral judgements, such as weighing likely civilian casualties against the military advantages of conducting attacks. These judgements must be endorsed by humans, since they are also moral judgements and carry legal consequences.
9. The relevant provisions of international humanitarian law imply that such weapons systems must not be used if they are likely to cause superfluous injury or unnecessary suffering, or if they are inherently indiscriminate, or if they are incapable of being used in accordance with law.
10. On the assumption that future LAWS meet all the legal requirements of the laws of war when they operate normally, malfunctions of the system could cause an erroneous attack and thereby raise accountability issues. It must be possible to establish legal responsibility in the event of a malfunctioning lethal autonomous weapons system by analysing compliance with the requirement of adequate human control. It should be possible to link unlawful actions committed by a lethal autonomous weapons system resulting in violations of international humanitarian law and other international norms alternatively to the individual or groups of individuals at the origin of its design, manufacture, programming or deployment, and ultimately to the user State. In this regard, the user State has a particular responsibility to test and verify in advance the weapons it intends to use, to ensure that they are predictable and reliable and not likely to commit violations of international humanitarian law through error, malfunction or poor design, and to verify the contexts in which they can lawfully be used.
11. The Assembly notes that the questions of the compatibility of LAWS with international humanitarian law (IHL) and human rights are being discussed by the States Parties to the Convention on Certain Conventional Weapons (CCW), which have set up a Group of Governmental Experts (GGE). Working on the basis of the “11 Guiding Principles on LAWS” adopted in 2019 and the Final Declaration of the 6th Review Conference of the States Parties to the CCW in December 2021, that Group continues to seek a consensus on the future regulation of this emerging technology.
12. At its July 2022 session, the GGE adopted a statement to the effect that it had reached agreement that the right of parties to an armed conflict to choose the methods and means of warfare was not unlimited and that international humanitarian law was also applicable to LAWS. Any violation of international law, including a violation involving a lethal autonomous weapons system, incurred the responsibility under international law of the State concerned. The Group further proposed extending its work into 2023.
13. The Assembly notes that a group of European States has proposed a two-tier approach to the GGE:
13.1. Firstly, the States Parties to the CCW should recognise that LAWS which cannot be used in conformity with international law, including international humanitarian law, are de facto banned; and that, consequently, LAWS operating completely outside any human control and a responsible chain of command are unlawful.
13.2. Secondly, agreement should be reached on the international regulation of other weapons systems presenting elements of autonomy in order to guarantee conformity with international humanitarian law by:
13.2.1. ensuring appropriate human control throughout the life cycle of the system in question;
13.2.2. maintaining human responsibility and the obligation of accountability at any time, in all circumstances and throughout the life cycle, as the basis of the responsibility of the State and that of the individual, which may never be transferred to machines;
13.2.3. implementing suitable measures to mitigate the risks and appropriate guarantees regarding security and safety.
14. The Assembly supports this two-tier approach and considers that the emergence of LAWS requires clear regulation of this technology to ensure respect for international humanitarian law and human rights and that the appropriate forum to agree on the future regulation of LAWS is the Conference of States Parties to the CCW and its GGE.
15. As to the legal form of such regulation, the long-term goal should be a binding text in the form of a protocol to the CCW or even a specific international convention.
16. Pending the emergence of the broad consensus needed to draw up such an instrument, a non-binding instrument should be prepared in the form of a code of conduct. This instrument, which might be updated on a regular basis, could codify the guiding principles that are already broadly recognised and highlight the good practices adopted by given States Parties to the CCW.
17. The Assembly therefore calls on Council of Europe member States as well as observer States and States whose parliaments enjoy observer or partner for democracy status with the Assembly to take a constructive role in the discussions in progress within the CCW and its GGE with a view to regulating the emergence of LAWS and to support the two-tier approach mentioned above.
18. Should no consensus emerge within a reasonable period of time for the elaboration of a code of conduct and subsequently for the preparation and negotiation of an international agreement within the meaning of paragraphs 14 and 15, or should such steps appear to have no chance of success, the Assembly invites Council of Europe member States as well as observer States and States whose parliaments enjoy observer or partner for democracy status with the Assembly to consider initiating such work at Council of Europe level.

B. Explanatory memorandum by Mr Damien Cottier, rapporteur


1. Introduction

1. On 4 July 2019, the motion for a resolution entitled “Emergence of lethal autonomous weapons systems (LAWS) and their necessary apprehension through European human rights law” (Doc. 14945) was referred to the Committee on Legal Affairs and Human Rights for report. The committee appointed me as rapporteur on 23 June 2022, following the resignation of the previous rapporteur, Fabien Gouttefarde (France, ALDE).
2. The motion for a resolution calls for an analysis of the ethical and legal issues raised by the potential future use of lethal autonomous weapons systems in armed conflicts, and more specifically their compatibility and conformity with human rights, in particular the European Convention on Human Rights (ETS No. 5, “the Convention”). Attention will also be drawn to the difficulties encountered in developing a legal definition.
3. It should be pointed out that only LAWS will be examined here and that these are not to be confused with automated or remote-controlled weapons such as armed drones. Armed drones (UCAVs, Unmanned Combat Aerial Vehicles) are unmanned aircraft that can be operated automatically or remotely and can carry weapons as a payload. Although they have no on-board pilot, they are remotely controlled by a pilot or can follow independently pre-programmed flight routes or even automatically track a target. They are automated or remote-controlled systems. The choice of target or the decision to use lethal force is always made by a human.
4. By contrast, according to some concepts, LAWS would be systems that make decisions autonomously, namely without any human intervention, concerning target selection or flight path, or the use of lethal force. In the case of drones, this technology has not yet been used to control the missile or operate the payload. LAWS, therefore, are neither remote-controlled systems in which a human retains control throughout, nor automatic systems in which a particular process has been programmed in advance so that their action is totally predictable.
5. The military powers of the international community have significantly different views as to the use of LAWS. Some consider that, at least initially, LAWS will not entirely replace human soldiers but will instead take over specific tasks suited to their particular capabilities. They will most likely be used in some form of collaboration with humans during armed conflict, although they will remain autonomous in their own functions. The existing legal framework therefore needs to be examined in the light of this scenario, along with the scenario in which LAWS would be deployed without any human participation. (2)
(2) Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, Christof Heyns, to the UN General Assembly, 9 April 2013, paragraph 47.
6. Activists from 54 non-governmental organisations have launched a campaign in favour of a preventive prohibition of research and development of this emerging technology and thus, even more so, of any deployment of what they call “Killer Robots”. (3) This position of principle was endorsed by the European Parliament in its resolution dated 12 September 2018 on autonomous weapons systems. (4) Since 2014, the States Parties to the UN Convention on Certain Conventional Weapons (CCW) have been holding regular rounds of discussions on autonomous weapons in order to develop a common definition and the beginnings of some regulation. In 2017, artificial intelligence experts sent an open letter calling on governments and the UN to “prevent an arms race in these weapons” and “to avoid the destabilising effects of these technologies”, which pose a threat to, inter alia, international humanitarian and human rights law.
(3) www.stopkillerrobots.org/.
(4) Resolution of the European Parliament of 12 September 2018 on autonomous weapons systems (2018/2752(RSP)), paragraphs 3 and 4.
7. One reason why this analysis is so urgently needed is that current assessments of the future role of LAWS will affect the level of investment of financial, human and other resources in the development of this technology over the next few years. To some extent, therefore, the current assessments – or lack of them – risk becoming self-fulfilling prophecies. (5) On the other hand, the risk of a capability gap for global military powers which fail to invest in this new technological field – and thus the “arms race” logic at work in this field – prompts some researchers to regard LAWS as the third military revolution in the history of international relations, after the invention of gunpowder and that of nuclear weapons.
(5) Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, op. cit., paragraph 49.
8. This report focuses on the use of such weapons in the context of armed conflict and thus primarily in the context of the application of international humanitarian law (IHL). However, important ethical and legal questions would also arise if such weapons were used by civilian authorities, in particular police forces, outside the context of conflict, for special operations (for example anti-terrorism). This related issue, which does not appear to arise in Council of Europe member States today, would involve a detailed analysis of obligations under the European Convention on Human Rights and other European and international human rights standards. This should be the subject of a report in its own right.

2. Definition of LAWS

9. Given the different aspects of the technology and artificial intelligence, it remains difficult to reach a consensus on the definition of LAWS. Most parties to the discussions agree that the defining characteristics of LAWS are their full autonomy and lethality, although the details of these terms are the subject of much debate.
10. In his report to the United Nations General Assembly in 2013, Christof Heyns referred to “lethal autonomous robotics” (LAR), (6) which he defined in the same way as the United States Department of Defense defines LAWS: “weapon systems that, once activated, can select and engage targets without further intervention by a human operator.”
(6) Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, op. cit., paragraph 28.
11. The International Committee of the Red Cross (ICRC) takes a similar approach with the following more detailed definition of LAWS: “Any weapon system with autonomy in its critical functions. That is, a weapon system that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention.” 
(7) ICRC, “Views of the ICRC on autonomous weapon systems”, paper submitted to the Convention on Certain Conventional Weapons Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS), 11 April 2016, www.icrc.org/en/document/views-icrc-autonomous-weapon-system.
12. What emerges from these definitions is that autonomous systems are able to select and engage targets individually and independently without any human involvement. Thus, crucial military targeting decisions that would otherwise be made by humans will be made by a machine. Human decisions are limited to the preliminary stages such as programming and initial deployment; there is no human control during missions, other than a potential general command capability such as deactivation. 
(8) Matthias Brenneke, “Lethal Autonomous Weapon Systems and their Compatibility with International Humanitarian Law: A Primer on the Debate” (2019), pp. 64 and 65.
13. The ICRC working definition encompasses any weapon system capable of independently selecting and attacking targets and provides a useful basis for legal analysis by delineating the broad scope of the discussion about autonomous weapon systems without the need to immediately identify the systems that raise legal concerns. 
(9) ICRC, “Views of the ICRC on autonomous weapon systems”, op. cit.

2.1. Forms of autonomy in context

14. Weapon systems autonomy can be divided into three categories. (10) The degree of autonomy of these weapons systems, at the current state of technological maturity, depends on the scope of the human operator's intervention in their deployment and use. (11)
(10) Paul Scharre, Michael C. Horowitz, “An Introduction to autonomy in weapon systems”, Center for a New American Security (CNAS), p. 8.
(11) Julien Ancelin, “Les systèmes d’armes létaux autonomes (SALA): Enjeux juridiques de l’émergence d’un moyen de combat déshumanisé”, La Revue des droits de l’homme, Actualités Droits-Libertés, 25 October 2016, p. 3.
a. “Human in the loop”: weapon systems that use autonomy to engage individual targets or specific groups of targets that a human can and must decide to engage, (12) for example guided munitions where the weapon’s technology assists the operator in striking the target. The person launching the weapon knows what specific targets are to be engaged and retains the conscious decision that those targets should be destroyed. (13)
(12) Id.
(13) “An Introduction to autonomy in weapon systems”, op. cit., pp. 8 and 9.
b. “Human on the loop”: weapon systems that use autonomy to select and engage targets, but whose operation human controllers can halt if necessary. (14) At least 30 nations use human-supervised defensive systems with greater autonomy, where humans are “on the loop” for the selection and engagement of specific targets. (15) To date, these have been used in defensive situations where the reaction time required for engagement is so short that it would be physically impossible for humans to remain “in the loop” and take a deliberate action before each engagement while still defending effectively. Human operators supervise; they are aware of the criteria for the selection of specific targets, and the engagement of force follows pre-programmed rules. Human controllers can intervene to deactivate the weapon system but do not make an active decision to engage specific targets. (16)
(14) “Les systèmes d’armes létaux autonomes (SALA): Enjeux juridiques de l’émergence d’un moyen de combat déshumanisé”, op. cit., p. 3.
(15) “An Introduction to autonomy in weapon systems”, op. cit., p. 8.
(16) Ibid., pp. 8 and 12.
c. “Human out of the loop”: weapon systems that use autonomy to select and engage specific targets without any possible intervention by human operators. (17)
(17) Ibid., p. 8.
15. Autonomy is thus a function of the extent of the human operator's intervention in the deployment and use of the weapon system, which can vary considerably depending on the complexity of the technology and the environment in which the weapon is used, ranging from remote control through automation to full autonomy. (18)
(18) “Les systèmes d’armes létaux autonomes (SALA): Enjeux juridiques de l’émergence d’un moyen de combat déshumanisé”, op. cit., p. 3.

2.2. Autonomous or semi-autonomous weapons

16. The most likely near-term candidates for autonomous weapons are not sentient or malevolent humanoid robots but rather something more like wide-area search-and-destroy loitering munitions, such as those depicted in the video “Slaughterbots”. (19) Definitions must therefore distinguish clearly, and in a technically rigorous way, between autonomous weapons and the precision-guided homing munitions, known as semi-autonomous weapons systems (SAWS), that have been in use for over seventy years. (20) Unlike autonomous systems, which select and engage targets autonomously, SAWS are weapon systems which incorporate autonomy into one or more targeting functions and, once activated, are intended to engage only individual targets or specific groups of targets that a human has decided are to be engaged. Falling mid-way between the two are human-supervised autonomous weapon systems, which have the characteristics of LAWS but allow human operators to monitor the weapon system’s performance and intervene to halt its operation, if necessary. (21)
(19) https://autonomousweapons.org/slaughterbots/.
(20) “An Introduction to autonomy in weapon systems”, op. cit., p. 16.
(21) Id.
17. The idea of a human decision is embedded within each of the above definitions. The decision to place an autonomous weapon into operation is very different from the decision to deploy a semi-autonomous one. Even in the case of a fire-and-forget homing missile, which, once launched, moves in total autonomy without any human intervention, the decision about which individual target or specific group of targets is to be engaged by that missile was made by a human operator. By contrast, in the case of an autonomous weapon, the human has decided to launch a weapon to seek out and destroy a general class of targets over a wide area but does not take a decision about which specific targets are to be engaged. Both definitions, however, focus on the decision the human is making or not making (22) and do not apply the word “decision” to anything the weapon itself is doing. This could raise significant difficulties in accounting for the integration of the systems’ artificial intelligence into what could be likened to free will. (23)
(22) “Les systèmes d’armes létaux autonomes (SALA): Enjeux juridiques de l’émergence d’un moyen de combat déshumanisé”, op. cit., p. 4.
(23) “An Introduction to autonomy in weapon systems”, op. cit., p. 16.

2.3. Human control

18. The ICRC definition is not intended to prejudge the level of autonomy in weapon systems; its purpose is to help define an appropriate degree of human control that may, or may not, be considered capable of guaranteeing respect for international humanitarian law. (24) In the legal discussion, the analysis of the closeness of the link between human decision making and the action of the machine is of primary importance. Compliance with IHL can only be assured by maintaining human control, the intensity of which varies according to the positions taken by States and other actors of the international community. (25)
(24) “Les systèmes d’armes létaux autonomes (SALA): Enjeux juridiques de l’émergence d’un moyen de combat déshumanisé”, op. cit., p. 5.
(25) Neil Davison, “A legal perspective: Autonomous Weapon Systems under international humanitarian law”, p. 6.
19. There is general agreement among CCW States Parties that “meaningful” or “effective” human control, or “appropriate levels of human judgement”, (26) must be retained over lethal weapon systems. (27)
(26) “Les systèmes d’armes létaux autonomes (SALA): Enjeux juridiques de l’émergence d’un moyen de combat déshumanisé”, op. cit., p. 5.
(27) Recommendations to the 2016 Review Conference, submitted by the Chairperson of the Informal Meeting of Experts, paragraph 2 (b).
20. ARTICLE 36, an NGO, developed the concept of “meaningful” human control, arguing that other terms, such as “important, appropriate, proper or necessary” (28) human involvement or control, could equally well describe a concept whose importance lies in the development of more precise criteria. The concept has been widely discussed ever since. (29) However, whatever terminology is eventually adopted, the criteria defining human control that achieve consensus cannot render unlawful the use of certain weapons that have long been in use, including ones that drastically reduce the risk of civilian casualties; otherwise the rules of international law would become divorced from the reality of war. For example, the definition of meaningful human control proposed by the International Committee for Robot Arms Control (ICRAC) includes a provision to the effect that, for there to be meaningful human control, “a human commander (or operator) must have full contextual and situational awareness of the target area and be able to perceive and react to any change or unanticipated situations that may have arisen since planning the attack.” (30) The fact is, however, that humans have been using weapons without real-time sight of the target area since at least the invention of the catapult. (31) Such criteria therefore seem unrealistic. In this respect, it is worrying that the definition of LAWS adopted by the group of experts set up by the European Commission in its “Ethics Guidelines for Trustworthy AI” includes weapon types that have been in use for a long time. (32)
(28) Michael C. Horowitz, Paul Scharre, “Meaningful human control in weapon systems: A Primer”, CNAS, p. 10.
(29) Article 36, “Memorandum for delegates at the Convention on Certain Conventional Weapons (CCW) Meeting of Experts on Lethal Autonomous Weapons Systems (LAWS)”.
(30) Frank Sauer, “ICRAC statement on technical issues to the 2014 UN CCW Expert Meeting”, 14 May 2014.
(31) “Meaningful human control in weapon systems: A Primer”, op. cit., p. 9.
(32) “Ethics guidelines for trustworthy AI”, high-level group of independent experts on artificial intelligence set up by the European Commission, 8 April 2019, paragraph 134, p. 45.
21. In its original 2013 document introducing the concept of meaningful human control, ARTICLE 36 argues that there are three necessary requirements for meaningful human control:
a. Information – a human operator, and others responsible for attack planning, need to have adequate contextual information on the target area of an attack, information on why any specific object has been suggested as a target for attack, information on mission objectives, and information on the immediate and longer-term weapon effects that will be created from an attack in that context.
b. Action – initiating the attack should require a deliberate action by a human operator.
c. Accountability – those responsible for assessing the information and executing the attack need to be accountable for the outcomes of the attack. (33)
(33) Article 36, “Killer Robots: UK Government Policy on Fully Autonomous Weapons”, April 2013.
22. Both the ARTICLE 36 and ICRAC statements emphasise the general notion of informed action by a human. While the standard for information required may be unrealistic in these proposals, informed action is central to the concept of meaningful human control. This raises the question of how much information is required for a human operator to make a meaningful decision about the use of force.
23. ARTICLE 36’s approach of “adequate” information might be the most appropriate: in order to make a decision about the lawfulness of their action, the person must have enough information about the target, the weapon, and the context for engagement. This does not mean that each human operator involved in the chain of decision making need have the complete picture. As happens today for soldiers intervening in a building or a pilot dropping a bomb on a pre-planned target, human operators may rely on decisions that have been made by other humans in the chain of command. However, relying on others does not mean blind trust or abdicating one's own moral judgement. A single individual may not be responsible for all aspects of decision making relating to attacking a target, but any given person can be held accountable for his or her own actions related to that attack. (34)
(34) “Meaningful human control in weapon systems: A Primer”, op. cit., pp. 13 and 14.
24. According to the study carried out in 2015 by the Centre for a New American Security (CNAS), human control is meaningful when humans make informed, conscious decisions about the use of the weapon (no one is merely pushing a button when they see a light blink on) and when the information they have to make that decision is sufficient for them to ensure the lawfulness of the action they are taking, given what they know about the target, the weapon, and the context for action. This is important, especially in the context of responsibility for errors. Human operators must have effective control over the use of weapons. This is the case even if some of them are “fire and forget” weapons that cannot be recalled after launch. This is because trained human operators have a clear understanding of how the weapon will function in certain environments as well as its limitations, so they can use it appropriately. 
(35) Ibid., p. 13.
25. Human control can be exercised at the development stage, including through technical design and programming of the weapon system. Decisions taken during the development stage must ensure that the weapon system can be used in accordance with IHL and other applicable international law in the intended or expected circumstances of use. At this stage, the predictability and reliability of the weapon system must be verified through testing in realistic environments. Operational limits must be set so that the weapon is only activated in situations where its effects will be predictable. Also, the operational requirement and technical mechanism for human supervision, as well as the ability to deactivate the weapon, will need to be established.
26. Human control may also be exerted at the point of activation, which involves the decision of the commander or operator to use a particular weapon system for a particular purpose. This decision must be based on sufficient knowledge and understanding of the weapon’s functioning in the given circumstances to ensure that it will operate as intended and in accordance with IHL. This knowledge must include adequate situational awareness of the operational environment, especially in relation to the potential risks to civilians and civilian objects. It will also depend on various operational parameters, most of which will be set at the development stage, and some that will be set or adjusted at the activation stage:
a. The task the weapon system is assigned to,
b. The type of target the weapon system may attack,
c. The type of force and munitions it employs (and associated effects),
d. The environment in which the weapon system is to operate,
e. The mobility of the weapon system in space,
f. The time frame of its operation,
g. The level of human supervision and ability to intervene after activation.
27. In order to ensure compliance with IHL, there may need to be additional human control during the operation stage, when the weapon autonomously selects and attacks targets. Where the technical performance of the weapon and operational parameters set during the development and activation stages are insufficient to ensure compliance with IHL in carrying out an attack, it will be necessary to define the conditions in which the ability for human control and decision making during the operation stage must be retained.

3. Legal perspective

28. Autonomous weapon systems, as defined, are not specifically regulated by international treaties. The way in which they are used, against whom and for what purposes, must nevertheless comply with international humanitarian and human rights law. The International Court of Justice was clear in its 1996 Advisory Opinion that the established principles and rules of humanitarian law applicable in armed conflict apply to “all forms of warfare and to all kinds of weapons, those of the past, those of the present and those of the future”. (36)
(36) International Court of Justice, “Legality of the Threat or Use of Nuclear Weapons”, Advisory Opinion of 8 July 1996, paragraph 86.
29. Given that lethal weapon systems are at stake, it must be considered whether and to what extent LAWS could interfere with the guarantees enshrined in the European Convention on Human Rights, in particular the right to life (Article 2).
30. LAWS are most likely to be used in situations of armed conflict rather than in other situations, so international humanitarian law would apply. Analysing the conformity of LAWS and the criteria for meaningful human control through the lens of human rights is nevertheless also necessary, since human rights apply at all times and in all places, whereas the application of humanitarian law depends on the existence of an armed conflict, in which humanitarian law takes precedence over human rights law as lex specialis. Human rights might form the governing legal framework in many situations: for example, during military operations in situations that cannot be classified as an armed conflict, or in situations of occupation or armed conflict in which humanitarian law and human rights law often overlap in practice. (37)
(37) Amanda Eklund, “Meaningful Human Control of Autonomous Weapon Systems: Definitions and Key Elements in the Light of International Humanitarian Law and International Human Rights Law”, p. 35.
31. The European Court of Human Rights has even pointed out that Article 2 must be interpreted so far as possible in the light of the general principles of international law, including the rules of international humanitarian law, which play an indispensable and universally accepted role in mitigating the savagery and inhumanity of armed conflict. (38) Consequently, even in situations of international armed conflict, the safeguards under the Convention continue to apply, albeit interpreted against the background of the provisions of international humanitarian law. (39)
(38) ECtHR, Varnava and Others v. Turkey, paragraph 185.
(39) ECtHR, Hassan v. the United Kingdom [GC], paragraph 104.

3.1. European human rights law perspective

32. The first major requirement of Article 2 is to clearly regulate the use of autonomous weapon systems. The right to life contains two substantive obligations, one of which is the obligation to protect the right to life by law. That means the State must put in place a legal framework which defines the limited circumstances in which the use of force is allowed. With regard to weapons in general, the Court has emphasised that it is of primary importance that domestic regulations exclude the use of weapons that carry “unwarranted consequences”. (40) These requirements can be connected to the concept of human control examined above, which aims to ensure that humans can make context-based assessments and that the technology will function reliably and predictably. National regulation will most likely be required to ensure that the use of autonomous weapon systems complies with requirements concerning, for example, “unwarranted effects” and “safeguards against avoidable accidents”. (41) Even if the existing case law concerns other kinds of weapons, such as firearms, it seems reasonable to expect that the Court would not apply less strict standards to autonomous weapon systems. (42)
(40) ECtHR, Tagayeva and Others v. Russia, paragraph 595.
(41) Id.
(42) “Meaningful Human Control of Autonomous Weapon Systems: Definitions and Key Elements in the Light of International Humanitarian Law and International Human Rights Law”, op. cit., pp. 38 and 39.
33. The text of Article 2, read as a whole, demonstrates that paragraph 2 does not primarily define instances where it is permitted intentionally to kill an individual, but describes the situations where it is permitted to “use force” which may result, as an unintended outcome, in the deprivation of life. The use of force, however, must not exceed what is “absolutely necessary” (43) to preserve a person’s life or to defend a person from unlawful violence, which is a stricter test of necessity than that applicable to most of the other rights enshrined in the Convention when determining whether State action is “necessary in a democratic society”. (44)
(43) ECtHR, McCann and Others v. the United Kingdom, paragraph 148; ECtHR, Yüksel Erdoğan and Others v. Turkey, paragraph 86; ECtHR, Ramsahai and Others v. the Netherlands [GC], paragraph 286; ECtHR, Giuliani and Gaggio v. Italy [GC], paragraph 17; Guide on Article 2 of the European Convention on Human Rights, Right to life, 30 April 2020.
(44) Doc. 13731, “Drones and targeted killings: the need to uphold human rights and international law”, paragraph 27.
34. The European Court of Human Rights has emphasised that it is acutely conscious of the difficulties faced by modern States in the fight against terrorism and the dangers of hindsight analysis. Consequently, the absolute necessity test formulated in Article 2 is bound to be applied with different degrees of scrutiny, depending on whether and to what extent the authorities were in control of the situation and other relevant constraints inherent in operative decision making in this sensitive sphere. 
(45) ECtHR, Tagayeva and Others v. Russia, paragraph 481.
35. The Court makes a distinction between “routine police operations” and large-scale anti-terrorist operations. In the latter case, often in situations of acute crisis requiring “tailor-made” responses, States should be able to rely on solutions appropriate to the circumstances. That being said, in a lawful security operation aimed, first and foremost, at protecting the lives of people who find themselves in danger of unlawful violence from third parties, the use of lethal force remains governed by the strict rules of “absolute necessity” within the meaning of Article 2 of the Convention. It is thus of primary importance that domestic regulations be guided by the same principle and contain clear indications to that effect, including obligations to reduce the risk of unnecessary harm and to exclude the use of weapons and ammunition that carry unwarranted consequences. (46)
(46) Ibid., paragraph 595.
36. The case of Streletz, Kessler and Krenz v. Germany, concerning the border-policing regime of East Germany which resulted in the killing of East Germans attempting to escape to West Germany, illustrates the need to make necessity assessments in the light of the automated use of force. (47) The weapons used in that case, anti-personnel mines and automatic-fire systems, were not autonomous in the sense of LAWS, but owing to their automatic and indiscriminate effect, together with the categorical nature of the orders given to border guards to annihilate border violators and protect the border at all costs, the Court considered that the automated killing flagrantly infringed the fundamental rights enshrined in the constitution and violated the right to life. (48) The case concerns not the autonomy of the weapon technology itself but the organisation of the operation as such and the absence of a necessity assessment when the killing was automated. This particularly needs to be considered when it comes to LAWS used for defence purposes. Streletz, Kessler and Krenz v. Germany illustrates that there must be control over each individual use of the system – in the sense of making, and complying with, necessity assessments – because otherwise the use of lethal force will probably be considered to have automated and indiscriminate effects which would flagrantly violate the right to life. (49)
(47) “Meaningful Human Control of Autonomous Weapon Systems: Definitions and Key Elements in the Light of International Humanitarian Law and International Human Rights Law”, op. cit., p. 39.
(48) ECtHR, Streletz, Kessler and Krenz v. Germany, paragraph 73; “Meaningful Human Control of Autonomous Weapon Systems: Definitions and Key Elements in the Light of International Humanitarian Law and International Human Rights Law”, op. cit., p. 39.
(49) Id.
37. In the “Gibraltar case”, in which British soldiers shot suspected IRA terrorists, it was not the actions of the soldiers in themselves which gave rise to a violation of the right to life, but the control and organisation of the operation as a whole. (50) The case illustrates that the planning stage of an operation bears on whether the use of force was absolutely necessary. Consequently, the condition of meaningful human control for the compliance of LAWS with European human rights law will have to integrate the criterion of the necessity of the use of force in the planning of the operation. (51) The requirement to plan and exercise “strict control” over operations possibly involving the use of lethal force would probably place even stricter demands on the planning stage before launching an autonomous weapon system capable of self-initiating the use of force than before engaging State agents. (52)
(50) ECtHR, McCann and Others v. the United Kingdom, paragraphs 199-201.
(51) “Meaningful Human Control of Autonomous Weapon Systems: Definitions and Key Elements in the Light of International Humanitarian Law and International Human Rights Law”, op. cit., p. 39.
(52) Ibid., p. 40.
38. This aspect might be even more important in relation to LAWS than in cases such as McCann (the Gibraltar case), which concerned shooting by human agents. (53) The reason why the actions of the soldiers did not, in themselves, give rise to a violation in that case was the soldiers’ “honest belief which [was] perceived, for good reasons, to be valid at the time but subsequently [turned] out to be mistaken”. (54) Justifying an infringement on the basis of a mistaken honest belief will probably not be accepted when an autonomous weapon system kills someone by mistake. The concept of an “honest belief” would be difficult to apply to a machine, unless the Court were to consider whether the human operator or military organisation had an honest belief that the use of force was necessary. Such an argument would most likely not be accepted, since this belief must be subjectively reasonable with regard to the circumstances at the relevant time. (55) That requirement will generally not be met in the case of autonomous weapon systems: the timespan between the human decision to launch the weapon system and the eventual use of force initiated by the system would be too great, unless there are possibilities for human supervision and intervention providing sufficient environmental understanding for an operator to form an honest and genuine belief valid at the relevant time. (56)
(53) Id.
(54) ECtHR, McCann and Others v. the United Kingdom, paragraph 200.
(55) “Meaningful Human Control of Autonomous Weapon Systems: Definitions and Key Elements in the Light of International Humanitarian Law and International Human Rights Law”, op. cit., pp. 40 and 41; ECtHR, McCann and Others v. the United Kingdom, paragraph 149; Giuliani and Gaggio v. Italy, paragraphs 176 and 209.
(56) “Meaningful Human Control of Autonomous Weapon Systems: Definitions and Key Elements in the Light of International Humanitarian Law and International Human Rights Law”, op. cit., pp. 40 and 41.
39. Beyond necessity, another required assessment is that of proportionality (the balance to be struck between, for example, the value of life and military advantage). It is the responsibility of the humans using the weapons to make this assessment, which is another necessary aspect of meaningful human control over LAWS. The Court has emphasised that States which take on a pioneering role in the development of new technologies bear a special responsibility for striking the right balance in their proportionality assessments. (57)
(57) Ibid., p. 41. See ECtHR, S. and Marper v. the United Kingdom [GC], paragraph 112, which concerns not the right to life but respect for private life in relation to the retention of DNA information; the case nevertheless illustrates the Court’s view that States pioneering the development of new technologies have a special responsibility to strike the right balance between the advantages of the new technology and the rights at stake.

3.2. Compliance with international humanitarian law

40. According to Article 36 of the Protocol Additional to the Geneva Conventions of 12 August 1949 relating to the Protection of Victims of International Armed Conflicts (Protocol I), States which develop, supply and use new weapons must ensure their compliance with IHL rules. It is humans, therefore, who are responsible for applying the law and who can be held accountable for violations, not the weapon itself. These legal requirements, notably the rule of distinction, the prohibition of indiscriminate attacks, the rule of proportionality and precautions in attack, must be fulfilled by those persons who plan, decide on and carry out attacks. (58)
(58) Boulanin, Davison, Goussac, Carlsson, “Limits on autonomy in weapon systems”, SIPRI, p. 5.

3.2.1. Rule of distinction

41. Articles 48 and 51 paragraph 4 of Protocol I prohibit indiscriminate attacks, namely attacks in which no distinction is made between civilian and military targets. According to this rule of distinction, the system must have the capacity to distinguish between active combatants and protected persons, and between military and civilian objects, because the attacks must never be directed against protected persons and objects. This prohibition includes the prohibition of attacks which employ inherently indiscriminate means of combat, whose effects cannot be limited and which therefore affect legitimate objectives and civilians without distinction (Article 51 paragraph 4 c) of Protocol I). That includes, for example, biological weapons which, by their nature, cannot distinguish between civilians and combatants. In its advisory opinion on the legality of the threat or use of nuclear weapons, however, the International Court of Justice did not rule out the possibility that even nuclear weapons could be used in such a way as to avoid violating the rule of distinction, for example, by being directed against a military target in a vast desert, so that their effects were confined to the military target alone and affected neither civilians nor civilian objects. 
(59) “Legality of the Threat or Use of Nuclear Weapons”, Advisory Opinion, ICJ, 1996, p. 263.
42. A particular category of protected persons is that of combatants who are hors de combat, namely the wounded and those wishing to surrender. Any LAWS must therefore be able to protect these persons. (60) In this context it is also worth recalling the “Martens Clause”, which is part of customary international law and according to which the “laws of humanity and the dictates of public conscience” must be respected even in the absence of an explicit prohibition. (61)
(60) See GGE, “A ‘compliance-based’ approach to Autonomous Weapon Systems”, Working Paper submitted by Switzerland (2017), paragraph 11, https://docs-library.unoda.org/Convention_on_Certain_Conventional_Weapons_-_Group_of_Governmental_Experts_(2017)/2017_GGEonLAWS_WP9_Switzerland.pdf.
(61) Ibid., paragraph 18.
43. On the assumption that LAWS are specifically designed for targeting and high precision, they would as such be fundamentally capable of complying with the distinction rule. Even though a violation of the distinction rule may occur through the actual use of LAWS, in a specific situation, that does not a priori appear sufficient to render the entire category of weapons unlawful.

3.2.2. Rule of proportionality

44. In addition, there must be compliance with the principle of the law of war that any military action must be necessary and proportionate to the damage caused (see in particular Article 50 of Geneva Convention I and Article 51, paragraph 5.b, of Protocol I).
45. The challenge arises because LAWS operate and act on the basis of technical indicators, namely pre-programmed target profiles. LAWS obtain information about their environment through sensors and computer-generated analysis and apply it to the profiles. Many experts agree that such processes do not in themselves constitute the proportionality assessment and cannot replace the decisions required of persons under the rule of proportionality. 
(62) “Limits on autonomy in weapon systems”, op. cit., p. 5.
46. Qualitative and evaluative judgements as to whether an attack complies with the rule of distinction or is proportionate are made on the basis of values and interpretations of the particular situation rather than of numbers or technical indicators. For example, it is difficult to quantify civilian casualties or military necessity, or to capture the wide variety of possible situations in the numerical terms in which LAWS operate. Making such assessments requires uniquely human judgement. Such judgements, which also reflect ethical considerations, are or must be part of military training. (63)
(63) Id.
47. A human can certainly take advice from a system, however. The algorithm’s assessment can be communicated to the human, who retains control insofar as he or she actually decides whether or not the system attacks. In such a scenario, the judgement as to whether an attack satisfies the IHL rule of proportionality would remain within the realm of human decision making. Mechanisms of this kind, referring the decision to a human, must therefore be put in place to respect the principle of proportionality. In this sense, not only the States using such systems but also those manufacturing and supplying them have a responsibility under Protocol I (see paragraph 39 above).

3.2.3. Principle of precaution

48. In order to comply with the rules of distinction and proportionality, and also the requirement to take precautions in attack, LAWS must be predictable to some extent. The users must be capable of limiting or nullifying the effects of the weapon systems, if necessary, something that is only possible if they can reasonably foresee how a system will react.
49. All LAWS, even so-called “deterministic” systems, raise concerns about unpredictability, because the consequences of any output will vary depending on the circumstances in the environment at the time of the attack. A LAWS will apply force at a specific time and place that were unknown to the user when the system was activated. Moreover, the environment may vary over time, and the status and surroundings of the target may change swiftly or frequently (for example, if civilians have moved into the immediate vicinity). (64)
(64) Ibid., p. 7.

3.3. Legal and moral responsibility

50. The emergence and, a fortiori, the use of LAWS during conflicts raise new legal issues which are not directly and expressly governed by the existing rules of the law of armed conflict and which highlight potential gaps in terms of accountability. On the assumption that future LAWS meet all the legal requirements of the laws of war when they operate normally, malfunctions of the system could cause an erroneous attack and thereby raise accountability issues. In the case of a malfunctioning LAWS, it could be difficult, if not impossible, to establish the responsibility of a human operator. It must be possible to establish where responsibility lies in the case of a malfunctioning LAWS by determining whether there was sufficient control according to the criteria described above. (65) The question of the manufacturer's responsibility will also arise in such a case, and the manufacturer will have to be able to demonstrate that it took sufficient precautions at its level to ensure compliance with IHL.
(65) “Meaningful human control in weapon systems: A Primer”, op. cit., p. 8.
51. Any action aimed at establishing the responsibility of a LAWS would be futile, as the machine is neither designed nor capable, by nature, and despite a high degree of autonomy, to understand the consequences of its actions from the perspective of criminal liability designed for humans. Consequently, unlawful actions committed by a LAWS resulting in violations of IHL should be able to be linked alternatively to the individual or groups of individuals at the origin of its design or programming or its deployment and ultimately to the State of nationality of the armed forces which holds it. 
			(66) “Les systèmes d’armes létaux autonomes (SALA): Enjeux juridiques de l’émergence d’un moyen de combat déshumanisé”, op. cit., p. 6. Thus, under the law of the international responsibility of States, a State could be held responsible for violations of IHL resulting from the use of a lethal autonomous weapons system. Indeed, under the general international law governing State responsibility, States are responsible for internationally wrongful acts, such as violations of IHL committed by their armed forces using an autonomous weapons system. The main question is whether the international legal order has the capacity to extend its material scope to these weapons without needing to adopt new formal guarantees of application. 
			(67) Ibid., p. 7.
52. Some authors believe that, as international law currently stands, the responsibility of those in charge of political and military decisions, of operations, or of industrial design or programming could always be established in the event of a violation of IHL by a lethal autonomous weapons system. 
			(68) Id. A State would also be responsible if it were to use an autonomous weapon system that had not been adequately designed, tested or reviewed prior to deployment. 
			(69) “A legal perspective: Autonomous Weapon Systems under international humanitarian law”, op. cit., p. 17.
53. Unlike humans, machines do not have feelings and are not moral agents. Even if a person committed a war crime with an autonomous weapon, it would be the human who committed the crime, using the autonomous weapon as a tool. For this to remain true, however, humans must remain not only legally accountable but also morally responsible for the actions of autonomous weapons systems. Furthermore, some decisions pertaining to the use of weapons require legal and moral judgements, such as weighing expected civilian casualties against the military advantage anticipated from an attack. Some have argued that, regardless of whether machines could perform these functions in a legally compliant manner, humans ought to validate them, since they are also moral judgements. In this respect, “meaningful” refers to humans retaining moral responsibility for the use of weapons, even weapons that incorporate high degrees of autonomy. 
			(70) “Meaningful human control in weapon systems: A Primer”, op. cit., p. 8. In any event, the choice of political or military leaders who agree to acquire such a weapon, or to use it in a particular context, with knowledge of the machine's decision-making systems and the resulting risks of violation, should engage their responsibility; and the obligation to test and verify the weapon and to determine in which contexts it can be used takes on particular importance within the meaning of Article 36 of Protocol I. 
			(71) “A legal perspective: Autonomous Weapon Systems under international humanitarian law”, op. cit., pp. 9 and 16.
54. Nevertheless, some underline that while it is perfectly possible to hold a military authority responsible for an unlawful act committed by a LAWS, just as it can be held responsible for the same type of act committed by a soldier acting under its orders, 
			(72) Vaurs-Chaumette (A.-L.), “Chapitre 39: Les personnes pénalement responsables”, in Ascensio (H.), Decaux (E.), Pellet (A.) (dir.), Droit international pénal, CEDIN Paris X, Pedone, 2012, 2ème édition révisée, pp. 483-485. there is a high risk that the element of intention required to assign responsibility will be lacking. Indeed, for a military authority to be held responsible, it must either have been aware of the planned wrongful acts and failed to intervene to prevent them, or have failed to sanction the subordinate who committed them. However, it is reasonable to doubt that military leaders will be “able to have a sufficient understanding of the complex programming” 
			(73) Report of the Special Rapporteur on extrajudicial, summary or arbitrary executions, op. cit., paragraph 78, pp. 16-17. of the LAWS which perpetrated the unlawful act. 
			(74) “Les systèmes d’armes létaux autonomes (SALA): Enjeux juridiques de l’émergence d’un moyen de combat déshumanisé”, op. cit., p. 7.
55. By contrast, a programmer who intentionally programs an autonomous weapon to operate in violation of IHL, or without taking IHL sufficiently into account, or a commander who activates a weapon that cannot function in accordance with IHL in a given environment, would certainly be criminally liable for any ensuing violation. Similarly, a commander who knowingly decides to activate an autonomous weapon system whose performance and effects they cannot reasonably foresee in a given situation may be held criminally liable for any resulting violation of IHL, to the extent that their decision to deploy the weapon is considered reckless in the circumstances.
56. In addition, under national product liability laws, manufacturers and programmers could also be held liable for programming errors, for the malfunction of an autonomous weapon system or for the absence of sufficient precautionary measures. It should, however, be emphasised that the liability thereby engaged is civil and domestic in character, not criminal and international as provided for under international humanitarian law or international human rights law. Moreover, it is worth recalling that international law provides only marginal scope for engaging the international liability of companies, so that the companies which design and manufacture LAWS are not formally subject to any obligation to comply with IHL. 
			(75) Id. It is therefore the responsibility of the State that acquires and deploys a LAWS to ensure that its design and programming meet strict criteria, and to test and review its reliability. Otherwise, a design or programming problem, whether intentional or not, could circumvent important IHL norms, such as the principles of distinction or proportionality, without it being possible to hold anyone accountable under IHL.

4. Hearing of 5 November 2021

57. At the meeting on 5 November 2021, I organised a hearing with four leading experts:
  • Raja Chatila, Professor Emeritus, former Director of the Institute of Intelligent Systems and Robotics, Sorbonne University, Paris, France;
  • Noel Sharkey, President of the NGO “International Committee for Robot Arms Control”, computer scientist specialising in robotics, University of Sheffield, United Kingdom;
  • Jean-Gabriel Ganascia, President of the Ethics Committee of the “Centre National de la Recherche Scientifique” (Comets), Paris, France;
  • Jean-Baptiste Jeangène Vilmer, Director of the Institute for Strategic Research at the Military School (IRSEM), Paris, France.
58. Dr Jeangène Vilmer focused on the ethical and diplomatic dimensions of LAWS, tracing the discussions on the topic from the first examination by the UN Human Rights Council in 2013 up to the work of the Group of Governmental Experts (GGE) from 2016 onwards. The group brought together some 90 States, the ICRC and NGOs, and was due to deliver its final report in December 2021. Dr Jeangène Vilmer stressed that most NGOs and some member States were urging a total ban on the use of this type of weaponry. Other States took a different view: some were “obstructionist” (Russia), while others were more constructive (United States, United Kingdom, France, Israel, etc.). In any event, no consensus had yet been reached, and two key questions were still pending before the GGE: how to define LAWS, and on what ethical grounds the use of such weapons might be authorised. Dr Jeangène Vilmer then outlined the arguments for and against the use of LAWS, from the ethical and utilitarian viewpoints. As the law currently stood, some rules already applied to the use of these weapons. Firstly, LAWS could be used neither against non-military targets nor against certain military targets in specific contexts. Secondly, they should be programmed to refrain from striking where there was any doubt, and generally be used only on a subsidiary basis, as part of a human decision-making process. To conclude, the expert strongly opposed describing these weapons as fully “autonomous”: the “human factor” was in fact always necessary. Given the reluctance of major States to agree to a ban on LAWS, he was in favour of drawing up a set of guiding principles or a “code of conduct”. 
			(76) The full text of the expert's statement is available (in French) from the committee secretariat.
59. Mr Chatila discussed the difficulties in defining this type of weaponry. The term “autonomous” must not be understood as absolute, but rather as relating to computational intelligence. He stressed that the autonomy of a machine had to be viewed in relation to the tasks performed and the environments in which an intelligent computer system operates; the term “autonomous” thus covered both operational autonomy and decision-making autonomy. Mr Chatila then described the characteristics of these forms of autonomy and listed a number of issues linked to the use of LAWS, including the lack of contextual decision-making, the impossibility of predicting developments unfolding on the battlefield and the inability of LAWS to adapt to unforeseen circumstances. On top of these factors came the excessive faith placed by humans in the data supplied by information technologies (“automation bias”), the differing moral values of individuals and the general question of whether responsibility for “acts” committed by machines can be delegated at all. Finally, it appeared that LAWS were becoming easier to access, including, potentially, by non-State actors, which would make them even more difficult to control. 
			(77) The full text of the expert's statement is available (in English) from the committee secretariat.
60. Mr Ganascia spoke about the use of LAWS from the sociological and ethical viewpoints. He addressed the unpredictability, lethality, autonomy and automaticity of these weapons and explained that artificial intelligence was not a reliable tool in an armed conflict, as it was incapable of taking decisions on moral grounds. He compared LAWS with other types of banned weapons, such as chemical weapons and other weapons of mass destruction that are incapable of discriminating between combatants and civilians. Analysing LAWS from different points of view, he concluded that more arguments had been put forward against their development than in favour of it. He referred to initiatives in some countries, chiefly in Europe, seeking to impose a moratorium on the development of LAWS. However, in his view, certain major States would never feel bound by an international moratorium and would continue to develop these weapons. This would pose a threat to European values, and the Council of Europe should oppose it. 
			(78) The full text of the expert's statement is available (in French) from the committee secretariat.
61. Mr Sharkey explained the issues posed by the use of LAWS from a somewhat more technical viewpoint. Firstly, there was the meaning of “autonomous system”, which should be taken to mean a machine, robot or information system capable of acting without human intervention. This notion raised numerous legal and ethical questions, and Mr Sharkey pointed out that States were still incapable of reaching a consensus on some of them. While some States were in favour of a total ban on these weapons, others preferred regulation by non-binding legal instruments or rejected any restrictions outright. None of the “autonomous weapons” available today, or in the near future, could fully guarantee compliance with the laws of war, and more specifically with the principles of proportionality, distinction and precaution. Only a human mind was capable of making these assessments, which could not be transposed into a mathematical algorithm. Mr Sharkey highlighted the phenomenon of “algorithmic bias” found in other sectors that use artificial intelligence, such as policing, health and social protection. LAWS could also destabilise global security by triggering a new arms race, geared in particular to developing artificial intelligence for military purposes that would not be subject to any human control. Mr Sharkey agreed with the other experts that ethical decisions on matters of life or death must not be delegated to LAWS. In this regard, the question of respect for human rights had never been properly examined by the UN negotiating group, and LAWS opened up a whole host of possibilities for oppressive regimes to violate human rights with complete impunity. The proliferation of autonomous weapons would make human control increasingly impracticable: these weapons operate at such speed that the human brain cannot keep up. In addition, there was a risk of several autonomous algorithms interacting and shutting humans out of the equation, with disastrous consequences.
62. In reply to questions and comments from committee members, Dr Jeangène Vilmer confirmed that there was a risk of “privatisation” of these weapons, given the growing financial and economic power of private companies, and a further risk that LAWS could be used by terrorists. He reiterated that context was crucial for taking a decision in the light of humanitarian and human rights law, and that machines were still unable to take context into account. It was impossible to give a definitive answer as to whether LAWS should be banned or, failing that, regulated: these weapons had their pros and cons. That said, the mood tended towards drawing up a code of conduct governing their use rather than an outright ban. Mr Chatila confirmed that the risk of these weapons becoming widespread persisted, in view of the enormous military advantage they offered. Consequently, a ban had never been entertained by most States, which had nevertheless agreed to sketch out some rules limiting their use in temporal and spatial terms. Mr Ganascia also agreed that there was a risk of privatisation, especially if such weapons were banned: a straight ban would be declarative in nature and would not stop private enterprises developing them in secret. He stressed that the most important question in this context concerned the establishment of responsibility for using these weapons; the more autonomous the weapons, the less clear-cut the responsibility of humans. Properly regulating the development of these weapons would be more effective than introducing a total ban. If humans had the will and the ability to keep control, these weapons would no longer be regarded as fully “autonomous”. For Mr Sharkey, the key question was indeed that of responsibility for the use of LAWS, a question to which there was still no answer.

5. Current state of discussions within the specialised Group of Governmental Experts (GGE)

63. Within the framework of the 6th Review Conference of the States Parties to the CCW, held on 13-17 December 2021, the States agreed that the work of the GGE on emerging technologies in the area of LAWS should continue in 2022.
64. In the final document of the 6th Conference, 
			(79) CCW/CONF.VI/11, paragraphs 17-22 (https://reachingcriticalwill.org/images/documents/Disarmament-fora/ccw/2021/RevCon/documents/final-document.pdf). the States Parties to the CCW reaffirmed that international humanitarian law was also applicable to LAWS, and that such weapons systems must not be used if they are of a nature to cause superfluous injury or unnecessary suffering, if they are inherently indiscriminate, or if they are otherwise incapable of being used in accordance with international humanitarian law. The Conference further considered that the CCW provided an appropriate framework for dealing with the issue of emerging technologies in this area.
65. The NGO Stop Killer Robots rejects the outcome of this conference, considering that “a minority of states including the US and Russia, already investing heavily in the development of autonomous weapons, are committed to using the consensus rule in the CCW to hold the majority of States hostage and block progress towards the international legal response that is urgently needed. The outcome of the Review Conference falls drastically short, and does not reflect the will of the vast majority of States, civil society, or international public opinion”. 
			(80) www.stopkillerrobots.org/news/historic-opportunity-to-regulate-killer-robots-fails-as-a-handful-of-states-block-the-majority/.
66. The 6th Review Conference of the CCW mandated the GGE to meet for 10 days in 2022 
			(81) In March and July 2022; see https://meetings.unoda.org/meeting/ccw-gge-2022/ (Convention on Certain Conventional Weapons – Group of Governmental Experts on Lethal Autonomous Weapons Systems – UNODA Meetings Place). to consider proposals and elaborate possible measures and other options related to the normative and operational framework on emerging technologies in the area of LAWS, building upon the earlier recommendations and conclusions of the Group (notably the “11 Guiding Principles on LAWS” adopted in 2019) 
			(82) See “11 Principles on Lethal Autonomous Weapons”, Alliance for Multilateralism, https://multilateralism.org/fr/domaines-daction/11-principes-sur-les-systemes-darmes-letaux-autonomes/. and bringing in expertise on legal, military and technological aspects, while maintaining the principle of decision by consensus. 
			(83) CCW/Conf.VI/CRP.3; see https://documents.unoda.org/wp-content/uploads/2022/01/CCW-CONF.VI-11-20220110.docx. After two meetings, in March and July 2022, the Group adopted a report 
			(84) CCW/GGE.1/2022/CRP.1/Rev.1, www.esquerda.net/sites/default/files/imagens/08-2022/crp1-rev1.pdf. containing certain recommendations, including that the Group's work be continued in 2023. In its conclusions, the Group noted that it had discussed a number of options for a future legal framework on LAWS: a legally binding instrument under the framework of the CCW; a non-legally binding instrument; clarification of the implementation of existing obligations under international law, in particular IHL; an option that prohibits and regulates LAWS on the basis of IHL; and the option that no further legal measures are needed. Nevertheless, the Group agreed that the right of the parties to an armed conflict to choose methods or means of warfare was not unlimited and that international humanitarian law was also applicable to LAWS; any violation of international law, including violations involving LAWS, incurred the international responsibility of the State concerned. The Group's “recommendations” went no further, however, than proposing that the work of the GGE be continued in 2023, employing the same working methods (notably the requirement of consensus) and under the same terms of reference that had governed its meetings in 2022.
67. A working paper submitted to the GGE by a group of European countries proposed a two-tier approach aimed at getting discussion moving again. 
			(85) Finland, France, Germany, the Netherlands, Norway, Spain and Sweden; link to the document: https://cd-geneve.delegfrance.org/IMG/pdf/wp-laws_de-es-fi-fr-nl-no-se.pdf?2591/1ab7cb7c2cffed505f6e676da7883fe2bf94f5d9. The document argued that the States Parties to the CCW should recognise that LAWS that cannot comply with international law, including IHL, are de facto prohibited and, consequently, that LAWS operating completely outside human control and a responsible chain of command are unlawful. The second sphere of action entails proposing international regulation of other weapons systems featuring elements of autonomy, in order to ensure compliance with IHL.
68. To operationalise these proposals, the authors of the document invite the States Parties to

(1) commit to not developing, producing, acquiring, deploying or using fully autonomous lethal weapons systems operating completely outside human control and a responsible chain of command (see guiding principles b, c, and d);

(2) commit to only developing, producing, acquiring, modifying, deploying or using LAWS when two conditions are met: firstly, that compliance with international law is ensured when studying, acquiring, adopting or modifying and using lethal weapons systems featuring autonomy and, secondly, that appropriate human control is retained during the whole life cycle of the system in question by ensuring that humans will be in a position to inter alia:

  • at all times: have sufficient assurance that weapons systems, once activated, will act in a foreseeable manner, so as to determine that their actions remain entirely in conformity with applicable national and international law, the rules of engagement and the intentions of their commanders and operators. For this purpose, developers, commanders and operators – depending on their role and level of responsibility – must have a sufficient understanding of the weapons systems’ way of operating, their effects and their likely interaction with their environment. This would enable commanders and operators both to predict (prospectively) and to explain (retrospectively) the behaviour of the weapons systems;
  • during the development phase: evaluate the reliability and predictability of the system, by applying appropriate testing and certification procedures, and assess compliance with IHL through legal reviews;
  • during deployment: define and validate rules of use and of engagement as well as a precise framework for the mission assigned to the system (objective, type of targets, etc.), in particular by setting spatial and temporal limits that may vary according to the situation and context, and monitor the reliability and usability of the system (a minimal illustration of such spatial and temporal limits follows this list);
  • when using: also exercise their judgement with regard to compliance with the rules and principles of IHL, in particular distinction, proportionality and precautions in attack, and thus take the critical decisions over the use of force. This includes human approval for any substantial modification of the mission’s parameters, the maintenance of communication links, and the ability to deactivate the system if and when necessary, unless technically not feasible;

(3) preserve human responsibility and accountability (see guiding principles b and d) at all times, in all circumstances and across the entire life cycle as basis for State and individual responsibilities which can never be transferred to machines. To that end, the following measures and policies should be implemented:

  • where responsibility is concerned: doctrines and procedures for the use of lethal weapons systems featuring autonomy; adequate training for human decision-makers and operators to understand the system’s effect and its likely interaction with its environment; operation of the system within a responsible chain of human command, including human responsibility for decisions to deploy and for the definition and validation of the rules of operation, use and engagement;
  • where accountability is concerned: measures enabling an after-action review of the system to assess compliance of a system with IHL; mechanisms to report violations; investigation by States of credible allegations of IHL violations by their armed forces, their nationals or on their territory; and disciplinary procedures and prosecution of suspected perpetrators of grave breaches of IHL as appropriate.

(4) adopt and implement tailored risk mitigation measures and appropriate safeguards regarding safety and security.
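
As a purely illustrative aid to the deployment-phase point above, the following sketch (all parameters hypothetical, with no relation to any actual system) shows one way the spatial and temporal limits and the deactivation capability called for by the working paper could be expressed: as a validated “mission envelope” outside of which the system may never apply force.

```python
# Minimal illustrative sketch (hypothetical parameters): a spatial and
# temporal "mission envelope" of the kind the working paper asks commanders
# to define and validate at deployment. Force is only permitted inside a
# bounded area, within a bounded time window, and humans can deactivate
# the system at any moment.
from dataclasses import dataclass

@dataclass
class MissionEnvelope:
    min_lat: float
    max_lat: float
    min_lon: float
    max_lon: float
    start_time: float    # mission start, seconds since epoch
    end_time: float      # hard temporal limit
    active: bool = True  # commanders flip this to deactivate the system

    def permits(self, lat: float, lon: float, now: float) -> bool:
        """Force may only be applied inside the validated envelope."""
        return (
            self.active
            and self.start_time <= now <= self.end_time
            and self.min_lat <= lat <= self.max_lat
            and self.min_lon <= lon <= self.max_lon
        )

envelope = MissionEnvelope(
    min_lat=48.10, max_lat=48.20, min_lon=11.50, max_lon=11.60,
    start_time=1_700_000_000, end_time=1_700_007_200,  # a two-hour window
)
print(envelope.permits(48.15, 11.55, 1_700_003_600))  # True: inside envelope
envelope.active = False                               # human deactivation
print(envelope.permits(48.15, 11.55, 1_700_003_600))  # False: stood down
```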

6. Conclusion

69. By way of conclusion, it seems clear that international regulation of LAWS will ultimately need to be developed. International law, as it stands, does not provide sufficient safeguards to deal with the new issues raised by LAWS. It is undeniable that LAWS may give rise to a new paradigm in the governance of warfare, one which could lower the threshold for engaging in armed conflict as States see a drastically reduced threat of losses among their own soldiers. Consequently, more efforts are needed to find the right balance between military advantage and human rights protection. Some of those involved in the debate about LAWS argue that the requirement of meaningful human control over the use of lethal force is already implied in international law, which would mean that weapons lacking meaningful human control are illegal. But it remains to be seen whether that requirement must be made explicit. In my opinion, in the spirit of Article 7 of the European Convention on Human Rights (nulla poena sine lege), this condition should be made explicit, together with a clear and realistic definition of what meaningful human control signifies. 
			(86) “Meaningful human control in weapon systems: A Primer”, op. cit., p. 15.
70. The aforementioned working paper submitted to the GGE in July 2022 (see paragraphs 67-68) advocates a two-tier approach intended to move efforts towards a consensus forward. The first level of action entails clarifying that certain systems operating completely outside human control cannot comply with international humanitarian law, while other systems incorporating elements of autonomy can be governed through positive obligations set out in a regulatory framework to be defined in a second phase.
71. I tend to share this position, which seems to me pragmatic and reasonable as well as mindful of important principles. Between, on one side, the position of NGOs and a number of countries campaigning for an outright ban on the development, deployment and use of LAWS and, on the other, that of certain countries, including Russia and the United States, which refuse to submit to any legal regulation of this emerging technology, we should look for the right middle path. According to the above proposal, States (still within the GGE framework) should commit to:
  • recognising that fully autonomous lethal weapons systems operating completely outside human control and a responsible chain of command are prohibited by current international law;
  • regulating the other lethal weapons systems with autonomous features in order to guarantee compliance with the rules and principles of international humanitarian law, while preserving human responsibility and accountability, ensuring appropriate human control, testing and verifying weapon systems, and implementing measures to mitigate the risks, including by setting up appropriate instruction and training systems for the persons using them.
72. The ongoing work in the context of the CCW is encouraging and the framework for discussion is appropriate. However, the consensus rule that prevails in the CCW may lead to lengthy delays in concluding the process, or one of its phases, or even to the process being blocked, a risk heightened in the current context of high international tension. I therefore recommend that, should this occur, Council of Europe member and observer States consider, as a subsidiary measure, launching a process at the level of the Organisation that could lead to a legal framework open to the participation of other States.
73. It is from this viewpoint that I have drawn up the draft resolution preceding this report.