International Humanitarian Law in the days of high-tech warfare

Volume 9 | No. 95 | Nov. 2022

(Image: Pixabay)

By Lydia Ribeiro Carvalho


Today, as technological developments transform the scenarios in which conflicts unfold, lethal autonomous weapon systems have become able to decide which targets to attack independently of human intervention. Against this background, the present study seeks to understand how technological advances have been threatening compliance with International Humanitarian Law (IHL).

To that end, exploratory qualitative research was carried out through a literature review based on the data collection methods of bibliographical research and document analysis of reports produced by the International Committee of the Red Cross, thus drawing on both academic literature and expert analysis.

Throughout this reflection, the assertion that new legally binding regulations need to be implemented will be discussed with an evaluation of the use of Unmanned Aerial Vehicles (UAVs) and Lethal Autonomous Weapon Systems (LAWS).


International Humanitarian Law (IHL) can be defined as “[…] a set of rules which seeks to limit the effects of armed conflict. It restricts the means and methods of warfare permitted to combatants and protects those who are not, or are no longer, actively taking part in fighting” (ICRC, 2021, p.1). Another understanding in legal doctrine interprets IHL as “[…] ‘the body of international norms of conventional or customary origin specifically intended to be applied to armed conflicts, international or non-international, which limits, for humanitarian reasons, the right of the parties in conflict to freely choose the methods and means used in the war or which protects the persons or property affected, or likely to be affected by the conflict’” (SWINARSKI, 1997). From such explanations, it can be inferred that IHL is an array of norms that governs behavior in armed conflicts, that defines how wars are to be conducted, in order to uphold humanity in troubled times. 

Formerly known as the laws of war or the laws of armed conflict, IHL is one of the most codified branches of international law, although several principles of customary international law – law which derives from custom rather than from formalized rules – also play a relevant role in complementing these conventional rules (SHAW, 2017). A major part of what IHL encompasses can be found in the four 1949 Geneva Conventions, which outline the rules for the conduct of hostilities. A series of conventions regarding the “rights and duties of war” had previously been adopted at certain conferences, but, due to implementation problems, they were replaced by the Geneva Conventions – a set of treaties agreed to by countries worldwide. Regarding the codification of IHL, Malcolm Shaw states that:

“A series of conventions were adopted […] concerning land and naval warfare, […] the rights and duties of neutral states and persons in case of war, and an emphatic prohibition on the employment of ‘arms, projectiles or material calculated to cause unnecessary suffering’. However, there were inadequate means to implement and enforce such rules with the result that much appeared to depend on reciprocal behaviour, public opinion and the exigencies of morale. Such agreements were replaced by the Four Geneva ‘Red Cross’ Conventions of 1949 which dealt respectively with the amelioration of the condition of the wounded and sick in armed forces in the field, the amelioration of the condition of wounded, sick and shipwrecked members of the armed forces at sea, the treatment of prisoners of war and the protection of civilian persons in time of war.” (SHAW, 2017, p.892)

The First Geneva Convention aims, in its 64 articles, to protect wounded and sick soldiers, medical personnel, wounded and sick civilian support personnel accompanying the armed forces, military chaplains and civilians who spontaneously take up arms to repel an invasion. The Second Geneva Convention adapts the First Convention to the conditions faced by armed forces at sea. It has 63 articles that apply to “armed forces members who are wounded, sick or shipwrecked; hospital ships and medical personnel; civilians who accompany the armed forces” (AMERICAN RED CROSS, 2011, p.2). 

The Third Convention protects Prisoners of War (POWs). The Convention’s 143 articles require that POWs be treated humanely and receive sufficient food, clothing and medical care. Its provisions also establish guidelines on labor, discipline and criminal trial. The Fourth Convention affords protection to civilians located in areas of armed conflict and in occupied territories. It was an innovation and a significant attempt to protect civilians who, as a consequence of war, found themselves under the control of a State of which they were not nationals.

Three Additional Protocols, supplementary to the Geneva Conventions, were adopted to expand the Conventions’ safeguards. “They strengthen the protection of victims of international (Protocol I) and non-international (Protocol II) armed conflicts and place limits on the way wars are fought. […] A third Additional Protocol was adopted creating […] the Red Crystal, which has the same international status as the Red Cross and Red Crescent emblems.” (ICRC, 2010).

The norms of IHL convey some core principles, such as the principle of precautions, the principle of distinction and the principle of proportionality. The principle of precautions states that there is an obligation to take constant care in the conduct of military operations to avoid or minimise incidental civilian losses, e.g. doing everything feasible to confirm that targets are indeed military objectives, so that it is possible to cancel or suspend an attack if civilians will be affected, and giving effective warnings in advance of attacks which may affect the civilian population, when possible (ICRC, 2005).

“The principle of precautions in attack was first set out in Article 2(3) of the 1907 Hague Convention (IX), which provides that if for military reasons immediate action against naval or military objectives located within an undefended town or port is necessary, and no delay can be allowed the enemy, the commander of a naval force ‘shall take all due measures in order that the town may suffer as little harm as possible’.1 It is now more clearly codified in Article 57(1) of Additional Protocol I, to which no reservations have been made.” (ICRC, 2005, p.51)

The principle of distinction requires parties to armed conflicts to distinguish at all times between civilians and combatants. It prohibits indiscriminate attacks (those that strike military and civilian targets without distinction) and indiscriminate means of warfare (for example, a weapon whose effects cannot be controlled), as well as direct attacks against the civilian population. As a result, only those directly participating in hostilities can be lawfully targeted (ICRC, 2005). Regarding the codification of the principle:

“The principle of distinction between civilians and combatants was first set forth in the St. Petersburg Declaration, which states that ‘the only legitimate object which States should endeavour to accomplish during war is to weaken the military forces of the enemy’. The Hague Regulations do not as such specify that a distinction must be made between civilians and combatants, but Article 25, which prohibits ‘the attack or bombardment, by whatever means, of towns, villages, dwellings, or buildings which are undefended’, is based on this principle. The principle of distinction is now codified in Articles 48, 51(2) and 52(2) of Additional Protocol I, to which no reservations have been made.” (ICRC, 2005, p.3)

In its turn, the principle of proportionality prohibits attacks against military targets if such attacks may be expected to cause incidental harm to civilian life which would be excessive in relation to the concrete military advantage anticipated. Therefore, a military target may only be attacked if the potential civilian losses are not expected to outweigh the foreseen military advantage (ICRC, 2005). According to the ICRC report:

“The principle of proportionality in attack is codified in Article 51(5)(b) of Additional Protocol I, and repeated in Article 57. […] The principle of proportionality in attack is also contained in Protocol II and Amended Protocol II to the Convention on Certain Conventional Weapons. In addition, under the Statute of the International Criminal Court, ‘intentionally launching an attack in the knowledge that such attack will cause incidental loss of life or injury to civilians or damage to civilian objects . . . which would be clearly excessive in relation to the concrete and direct overall military advantage anticipated’ constitutes a war crime in international armed conflicts” (ICRC, 2005, p.47)

Evidently, IHL is a long-standing, solid body of regulations. However, the use of new scientific and technological developments as means of warfare challenges its existing rules and raises humanitarian, legal and ethical issues.


According to the World Bank, over 50% of the global population currently lives in urban areas, and by 2045 the world’s urban population will increase by 1.5 times. As the world becomes more urban, so does conflict. Present-day armed conflicts increasingly take place in urban settings, and new challenges arise with this shift, since, “according to Article 57 AP I and customary IHL applicable in all types of armed conflict, in the conduct of military operations, constant care must be taken to spare the civilian population, civilians and civilian objects. IHL therefore requires that parties to a conflict take feasible precautions in carrying out attacks” (WEIZMANN, 2013, p. 39).

The combination of civilian and military individuals and structures in cities makes it more difficult to take such precautions. Due to this combination, it is essential that the planning of an operation includes the collection of accurate data on the possible direct and indirect effects that can be expected.

“Unless circumstances do not permit, effective advance warning must be given of attacks that may affect the civilian population. Most attacks in urban areas may well do so. The effectiveness of a warning should be assessed from the perspective of the civilian population that may be affected. It should reach and be understood by as many civilians as possible among those who may be affected by the attack, and it should give them time to leave, find shelter, or take other measures to protect themselves. Advance warnings do not relieve the party carrying out the attack from the obligation to take other precautionary measures, and civilians who remain in the area that will be affected by the attack – whether voluntarily or not – remain protected.” (ICRC, 2019, p.17)

A military operation will, most likely, cause harm to civilians, but as warfare becomes urbanized, it becomes more critical to strive to contain its effects to the military sphere. It becomes even more important that parties to a conflict heed the principle of proportionality, in order to avoid attacks in which civilian damage is excessive in relation to the expected direct military advantage.

“Even when services that are indispensable for sustaining life in urban areas are not directly targeted, they are disrupted as an indirect result of attacks, or become more and more degraded until they are at the point of breakdown. In some cases, services are deliberately denied to specific areas, in order to exert pressure on civilians living there. Inhabitants are left without sufficient food or water, sanitation and electricity, and deprived of health care; such privation is aggravated when cities are besieged. In addition, fighting in urban centers results in widespread displacement. Once fighting stops, unexploded ordnance and/or other forms of weapon contamination, and the lack of essential services, prevent many of the displaced from returning. Many of these consequences are not unique to cities, but they occur on a significantly larger scale in urban warfare […].” (ICRC, 2019, p.16)

In this sense, attention to the principles of distinction, proportionality and precautions is essential for the protection of civilians, as well as for ensuring the legality of an attack. A relevant challenge related to the increasing urbanization of conflicts has been the use of Unmanned Aerial Vehicles (UAVs), also known as Unmanned Combat Air Vehicles (UCAVs) or “drones”, as well as the use of Lethal Autonomous Weapon Systems (LAWS). The utilization of UAVs has been justified with the argument that their “[…] precision-guided munitions and long loitering times allow operators to better distinguish legitimate targets from civilian sites and to minimize collateral damage in ways that manned aircraft or ground operations cannot” (MAYER, 2015, p.766), which would make them better suited to the challenges posed by urban locations, since their precision would allow a “humane” way to fight. However, this is a questionable premise, as there are several documented occasions on which drone strikes have caused unnecessary civilian deaths and the destruction of civilian property. It is also worth noting that the idea that there can be a more or less “humane” method of annihilating lives is a contradiction in terms.

“[…] Three close but not synonymous notions are blithely confused under the term ‘precision’: the accuracy of the firing, the extent of its impact, and the adequacy of the identification of its target. […] This official ‘truth’, that the drone’s increased precision turns it into an ethical weapon because it is better able to discriminate between civilians and combatants, is repeated, with not the least sign of any critical examination, in dozens of press articles and academic publications. However, endlessly drumming it in does not make it logically consistent.” (CHAMAYOU, 2015, p.141-143)

The use of these remote-controlled aircraft may also lead to what is called “the PlayStation phenomenon”. “The PlayStation phenomenon makes it less likely that a person controlling a remote drone will hesitate to use lethal force because physical distance can break the psychological barrier that inhibits one person from killing another human being” (TAKEMURA, 2014, p.526). The operator may not even see the blood of the target hit, and his own life is not at risk. This kind of psychological distance favors the occurrence of atrocities.

When it comes to lawfulness, “international law does not prohibit either the possession or the use of armed drones. In other words, the legal lacunae directly regulating drones exists, and the applicable laws covering their use would be any international law applicable to similar weapons in armed conflicts and law enforcement in times of peace” (TAKEMURA, 2014, p.526). Armed drones are not prohibited under international law, and the weapons normally deployed from them, such as guided bombs, are “conventional weapons” that are likewise not illicit under IHL.

Another source of legal and ethical concern not specifically regulated by IHL is the category of Lethal Autonomous Weapon Systems (LAWS). An autonomous weapon system can be characterized as: “Any weapon system with autonomy in its critical functions. That is, a weapon system that can select (i.e. search for or detect, identify, track, select) and attack (i.e. use force against, neutralize, damage or destroy) targets without human intervention.” (ICRC, 2019, p.5).

Unlike weapons controlled directly or remotely by humans (such as remote-controlled drones), this type of system, once activated or launched by a human operator, takes on functions that are usually performed by humans. It analyzes its surroundings and, using its sensors, its built-in programming and its connected weapons, attacks in response to detected objects or people, according to a pre-established general target profile. The choice of target, as well as the moment and location of the attack, is up to the machine, which introduces a significant degree of unpredictability and doubt regarding compliance with international humanitarian law.

“Existing autonomous weapon systems include: air defense systems – short and long range – with autonomous modes for shooting down incoming missiles, rockets, mortars, aircraft and drones; active protection systems, which function in a similar way to protect tanks or armored vehicles from incoming missiles or other projectiles; and some loitering weapons – a cross between a missile and a drone – which have autonomous modes enabling them to target radars based on a pre-programmed radio-frequency signature.” (ICRC, 2019, p.5)

These systems can take in more information than is humanly possible and analyze it faster than any human could. They can also be deployed in areas that are inaccessible or too risky for human presence. However, they are not reliable when it comes to following the rules of war.

“A certain level of human control or involvement is inherent in the implementation of the IHL rules on the conduct of hostilities. While IHL creates obligations for States and parties to armed conflicts, IHL rules are ultimately implemented by human subjects who are responsible for complying with these rules in carrying out attacks, and must be held accountable for violations. It follows that some degree of human control over the functioning of an autonomous weapon system, translating the intention of the user into the operation of the weapon system, will always be necessary to ensure compliance with IHL, and this may indeed limit the lawful level of autonomy.” (DAVISON, 2018, p.11)

Thus, these weapons need to be modified. A level of human control over the weapons’ functions needs to be present in order to guarantee IHL compliance. An examination of the operational characteristics of autonomous weapon systems can elucidate how human control can be exerted to ensure a proper supervision of the weapon’s activities and an ability to intervene in its actions. Autonomy needs to be limited because predictability and reliability are essential. 

“Predictability is the ability to ‘say or estimate that (a specified thing) will happen in the future or will be a consequence of something’. Applied to an autonomous weapon system, predictability is knowledge of how it will function in any given circumstances of use, and the effects that will result. Reliability is ‘the quality of being trustworthy or performing consistently well’. In this context, reliability is knowledge of how consistently the machine will function as intended—e.g., without failures or unintended effects.” (DAVISON, 2018, p.10)

These properties depend on the interaction of the system with the environment, and “the more complex the environment, the greater the adaptability needed to ensure the autonomous functioning of a robotic system. […] Increasing adaptability in an autonomous system is generally equated with increasingly ‘intelligent’ behaviour – or AI.” (ICRC, 2019, p.14). Artificial Intelligence (AI) allows systems to perform increasingly complex tasks without human involvement. AI systems can also be trained to learn from data in a way that leads to what is called machine learning.  

“Instead of following pre-programmed rules, machine learning systems build their own model (or ‘knowledge’) based on sample data input representing the input or task they are to learn, and then use this model to produce their output, which may consist of carrying out actions, identifying patterns or making predictions” (ICRC, 2019, p.14). Machine learning adds a concerning level of unpredictability, since its functioning is data-driven: the results it produces depend on how the learning process occurs, and what the system learns cannot be fully known or foreseen in advance.
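The data-driven character described above can be made concrete with a deliberately simple, hypothetical sketch (not drawn from the sources cited here): a toy nearest-neighbour "classifier" whose behaviour is entirely determined by the examples it happens to store, so that inputs far outside the training data still receive a confident – but essentially arbitrary – answer. All data, labels and function names below are invented for illustration.

```python
# Toy 1-nearest-neighbour "classifier": a minimal, hypothetical sketch of
# data-driven behaviour. All sample values and labels are invented.

def train(examples):
    """'Training' here simply stores labelled examples; the model IS the data."""
    return list(examples)

def predict(model, x):
    """Label a new input by the closest stored example."""
    closest = min(model, key=lambda ex: abs(ex[0] - x))
    return closest[1]

# Training data covers only a narrow range of inputs.
model = train([(1.0, "civilian"), (1.2, "civilian"),
               (9.0, "military"), (9.5, "military")])

print(predict(model, 1.1))    # inside the training distribution
print(predict(model, 100.0))  # far outside it: the model still answers confidently
```

The point of the sketch is not the specific algorithm but the structural feature it shares with real machine learning systems: nothing in the learned model signals that the second input lies outside anything the system was trained on.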

This problem also includes the presence of bias in algorithms, which – like any product of technology – are not neutral. “[…] technological systems are designed by humans to meet human needs and objectives and thus they reflect human values and decisions. This awareness reminds us of our ability and responsibility to shape the technologies we design and choose to employ” (UNIDIR, 2018, p.1). Bias in AI and machine learning algorithms is another multifaceted issue that needs to be addressed.

“Types of bias include the following:

Training data bias – Perhaps the most common form of bias. Since machine learning algorithms learn using training data to refine their models, limits on the quantity, quality and nature of this data can introduce bias into the functioning of the algorithm.

Algorithmic focus bias – The algorithm gives different, or inappropriate, weighting to different elements of the training data and/or ignores some aspects of the data, leading, for example, to conclusions that are not supported by the data.

Algorithmic processing bias – The algorithm itself introduces bias in the way it processes data. Developers often introduce this type of bias or “regularization” intentionally as a way of counteracting other biases […].

Emergent bias – Emergent bias can cause an algorithm to function in unexpected ways owing to feedback from the environment. It is related to the context in which an algorithm is used, rather than to its technical design or the training data.

Transfer context bias – An algorithm is used outside the context in which it was designed to function, possibly causing it to fail or behave unpredictably.

Interpretation bias – A user (human or machine) misinterprets the output of the algorithm, especially where there is a mismatch between information provided by the system and the information that the user requires to take a particular decision or perform a task.” (ICRC, 2019, p.18)
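The first of these categories, training data bias, can be illustrated with a minimal, hypothetical sketch (not taken from the ICRC report): a trivial "learner" that places a decision threshold at the midpoint between two class means. The learning rule is identical in both runs; only the sampling of the training data changes, yet the resulting decision differs. All numbers are invented for illustration.

```python
# Hypothetical illustration of training data bias: a trivial "learner" that
# sets a decision threshold at the midpoint between the two class means.
# The data below is invented; no real sensor values are implied.

def learn_threshold(positives, negatives):
    """Return the midpoint between the mean of each class of samples."""
    mean_pos = sum(positives) / len(positives)
    mean_neg = sum(negatives) / len(negatives)
    return (mean_pos + mean_neg) / 2

# A balanced sample and a skewed sample of the same underlying phenomenon:
balanced = learn_threshold([8.0, 9.0, 10.0], [1.0, 2.0, 3.0])
skewed = learn_threshold([8.0, 9.0, 10.0], [5.0, 6.0, 7.0])  # negatives over-sampled near the boundary

print(balanced)  # 5.5
print(skewed)    # 7.5
# An input of 6.0 falls on different sides of the two thresholds,
# even though the learning rule itself never changed.
```

In a weapon system, the analogue of that shifted threshold would be a shifted target profile – a change in who or what gets attacked, produced not by a change in the rules but by the composition of the data the system was trained on.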

Beyond the issues outlined above, the use of autonomous weapons – as with remote-controlled robots – contributes to a sense of indifference towards victims. Operators also experience both physical and psychological distance from what happens on the battlefield and from the consequences of their actions. This can easily lead to an excessive use of force, which further increases the threat to human rights.


Since a ban on the use of the aforementioned technologies is not realistic, it becomes clear that, to guarantee the protection of human rights in armed conflicts, considerable effort must be devoted to regulating such machinery. The lack of specific regulations undermines IHL and leaves a gray area as to where, when and how these weapons may be used. To clear up this issue, new legally binding norms need to be adopted by States.

All autonomous weapon systems show a degree of unpredictability, but the applicability of IHL depends on totally unpredictable systems being firmly ruled out. Furthermore, to ensure compliance with IHL, limits should be established for LAWS on their types of target and on the duration and scale of their use, and proper human supervision and control needs to be required at all times. Delegating to machines the decision of who should die and who should survive, and allowing their – often biased – algorithms such extensive decision-making power over battlefield actions, is something that should be urgently reconsidered. Therefore, it is necessary to disseminate this debate.

“Greater debate is urgently needed as lethal AWS are likely to proliferate rapidly, enhance terrorist tactics, empower authoritarian rulers, undermine democratic peace, and are vulnerable to bias, hacking, and malfunction. A proper public debate concerning the ramifications of ‘killer robots’ should start in earnest” (HANER, 2019, p.331).

Launched in 2013, the “Stop Killer Robots” campaign, a coalition that calls for new international law on autonomy in weapons systems, has been working to raise awareness of this matter, but more initiatives like this are necessary to expand the public debate and to generate more pressure for the adoption of the necessary measures.


AMERICAN RED CROSS. Summary of the Geneva Conventions of 1949 and their Additional Protocols. American Red Cross, 2021. Disponível em: Acesso em: 10 jul. 2022.

ARTICLE 36. Regulating Autonomy in Weapons Systems. Article 36, 2021. Disponível em: . Acesso em: 9 jul. 2022.

CHAMAYOU, Grégoire. A theory of the drone. New York: The New Press, 2015.

DAVISON, Neil. A legal perspective: Autonomous weapon systems under international humanitarian law. UNODA Occasional Papers, n.30, pp. 5-18, 2018. 

FOLLY, Maiara. ‘Robôs assassinos’: o perigo das armas autônomas letais. Nexo Jornal, 2021. Disponível em: . Acesso em: 9 jul. 2022. 

HANER, Justin; GARCIA, Denise. The Artificial Intelligence Arms Race: Trends and World Leaders in Autonomous Weapons Development. Global Policy, vol.10, n. 3, pp. 331-337, 2019.

ICRC. Autonomy, artificial intelligence and robotics: Technical aspects of human control. International Committee of the Red Cross, 2019. Disponível em: . Acesso em: 9 jul. 2022.

ICRC. Customary International Humanitarian Law – Volume I: Rules – Jean-Marie Henckaerts; Louise Doswald-Beck. Cambridge: Cambridge University Press, 2005.

ICRC. International Committee of the Red Cross (ICRC) position on autonomous weapon systems: ICRC position and background paper. International Review of the Red Cross, 102 (915), pp. 1335–1349, 2021.

ICRC. International Humanitarian Law and Cyber Operations during Armed Conflicts. International Committee of the Red Cross, 2019. Disponível em: . Acesso em: 9 jul. 2022.

ICRC. International Humanitarian Law and the challenges of contemporary armed conflicts: Recommitting to protection in armed conflict on the 70th anniversary of the Geneva Conventions. International Committee of the Red Cross, 2019. Disponível em: . Acesso em: 10 jul. 2022.

ICRC. Legal review of new weapons. International Committee of the Red Cross, 2021. Disponível em: . Acesso em: 9 jul. 2022.

ICRC. The Geneva Conventions of 1949 and their Additional Protocols. International Committee of the Red Cross, 2020. Disponível em: . Acesso em: 10 jul. 2022.

MAYER, Michael. The new killer drones: understanding the strategic implications of next-generation unmanned combat aerial vehicles. International Affairs, vol. 91, n.4, pp.765-780, 2015.

SHAW, Malcolm N. International Law. 8 ed. Cambridge: Cambridge University Press, 2017.

STAUFFER, Brian. Stopping Killer Robots: Country Positions on Banning Fully Autonomous Weapons and Retaining Human Control. Human Rights Watch, 2020. Disponível em: . Acesso em: 10 jul. 2022.

SWINARSKI, Christophe. O Direito Internacional Humanitário como sistema de proteção internacional da pessoa humana. In: Cadernos do Direito Internacional Humanitário. Macau: Cruz Vermelha de Macau, 1997.

TAKEMURA, Hitomi. Unmanned Aerial Vehicles: Humanization from International Humanitarian Law. Wisconsin International Law Journal, vol. 32, n. 3, pp. 521-545, 2014.

UNIDIR. Algorithmic Bias and the Weaponization of Increasingly Autonomous Technologies. United Nations Institute for Disarmament Research, 2018. Disponível em: . Acesso em: 10 jul. 2022.

WEIZMANN, Nathalie. Remotely Piloted Aircraft and International Law. In: AARONSON, Michael; JOHNSON, Adrian (org.). Hitting the target? – How new capabilities are shaping international intervention. Whitehall: Royal United Services Institute for Defence and Security Studies, 2013, pp. 33-44.

WORLD BANK. Urban Development. World Bank, 2020. Disponível em: . Acesso em: 10 jul. 2022.

Lydia Ribeiro Carvalho is an undergraduate student in International Relations at the Instituto de Relações Internacionais e Defesa of UFRJ and a member of the Inana study group, which addresses Gender and International Relations.

  • Work developed under the supervision of Professor Renata Reynaldo (IRID/UFRJ)

How to cite:

CARVALHO, Lydia Ribeiro. International Humanitarian Law in the days of high-tech warfare. Diálogos Internacionais, vol.9, n.95, nov.2022. Disponível em:

Diálogos Internacionais

Scientific outreach on International Relations, Defense and International Political Economy. ISSN 2596-2353