THE IDF INTRODUCES ARTIFICIAL INTELLIGENCE TO THE BATTLEFIELD – A NEW FRONTIER?

New and emerging technologies significantly impact the ways in which military operations are conducted. Advances have been made in the development and deployment of autonomous weapon systems, in the military use of cyberspace, and more. One emerging field in which significant leaps are currently being made is the military application of Artificial Intelligence (AI).

By Tal Mimran and Lior Weinstein | 1.3.2023 | The Lieber Institute for Law & Warfare at West Point

In recent weeks, several high-ranking Israel Defense Forces (IDF) officers informed the press that Israel is deploying AI tools as part of its military arsenal. According to a thorough interview in Hebrew, the IDF uses AI to assist its offensive decision-making, for example to determine whether a target is military or civilian. In addition, some defensive tools are used to alert forces that they are under threat of a rocket or missile attack, or to help better safeguard border movement, especially since the new IDF AI strategy was launched in 2022.

It is possible that the AI explosion in the public sphere, with the introduction of ChatGPT and Microsoft’s announcement that it would integrate AI into Bing, influenced the IDF’s decision to speak openly about its novel use of AI. Coming into the open in this regard asserts technological supremacy and has value in terms of deterrence. Yet is this the right time, or the right manner, in which to do so?

The reports from the IDF raise several questions. In this post, we focus on the review of new weapons and means and methods of warfare. As we will show, it seems that the IDF’s declarations are premature, and more prudence is required when deploying tools that lead armies into uncharted territories.

Review of Weapons, Means and Methods of Warfare

A basic tenet of international humanitarian law (IHL) is that States are limited in their choice of weapons and means or methods of warfare by norms of international law. In particular, Article 36 of Additional Protocol I to the Geneva Conventions (AP I) obliges its State Parties to determine, “in the study, development, acquisition or adoption of a new weapon or new means or methods of warfare,” whether their employment would be prohibited under international law. The importance, as well as the challenges, of conducting proper legal reviews under Article 36 increases when dealing with new technologies whose impacts on civilians and civilian objects are unclear.

Article 36 invites States to consider new weapons, means, or methods of warfare in light of IHL and any other rule of international law applicable to the High Contracting Party. Given the increased acceptance of the co-application of IHL and international human rights law (IHRL) in armed conflict situations, legal reviews should, in principle, include an assessment of compatibility with both bodies of law.

Although many provisions of IHL only apply during times of armed conflict, legal reviews pursuant to Article 36 may, and often do, take place in peacetime. Article 36 creates a procedural obligation for States that are party to AP I. However, it may be claimed that other States that are not party to AP I, but are nonetheless bound by substantive limits on weapons, means, or methods of warfare, should also resort to a comparable ex ante review of weapons and means, to avoid taking measures that would lead to a violation of their substantive obligations.

For instance, as per IHRL, General Comment 36 of the Human Rights Committee (HRC) takes the approach that ensuring the protection of the right to life under Article 6 of the International Covenant on Civil and Political Rights (ICCPR) invites preventive impact assessment measures, including legal reviews for new weapons. Indeed, in practice, some States, like the United States, have resorted to review procedures without being party to AP I.

Deployment of AI Military Tools by the IDF

Cyberspace has become an important domain for military operations, with cyber-attacks becoming an integral part of the reality of armed conflicts. New cyber weapons and capacities that constitute new means of warfare, or that invite the application of new methods of warfare, unquestionably warrant a legal review under Article 36. Where cyber-attacks facilitate conventional attacks – for example, when a cyber-attack neutralizes air-defense systems – the cyber-attack constitutes a means of warfare supporting the use of kinetic weapons, which would merit an Article 36 legal review.

The tools deployed by the IDF, as described in the interviews, constitute a new means of warfare that requires a legal review under Article 36. While Israel is not a State Party to AP I, it is party to the ICCPR, and as such General Comment 36 is of relevance to it. Should Israel decide to conduct a review of its AI tools with military application, it will be important to verify whether the tools are indiscriminate in nature or cause disproportionate harm to civilians and civilian objects.

This review is of special pertinence given that cyber-attacks can cause significant and widespread damage to objects and infrastructure (e.g., Stuxnet), and that cyber-attacks can precede the deployment of conventional military force or comprise part of a broader attack (as occurred in the context of the Russia-Ukraine War). In the current case, however, it seems that the IDF is engaged in a process of trial and error on the battlefield.

The temptation to lean on AI is obvious: such tools can calculate in a few seconds what humans would need weeks to work out, if they could do so at all. Yet, so long as AI tools are not explainable (as the International Committee of the Red Cross, among others, has pointed out), in the sense that we cannot fully understand why they reached a certain conclusion, how can we justify trusting an AI decision when human lives are at stake? The public statements acknowledged that some of the targets attacked by the IDF are produced by AI tools. If one of the attacks produced by an AI tool leads to significant harm to uninvolved civilians, who should bear responsibility for the decision?

The IDF also acknowledged that it relies on private military companies. This is surprising, because Israel has faced criticism over Israeli companies that sell offensive cyber tools to non-democratic regimes, which then use them to suppress political resistance and monitor journalists. Revelations concerning NSO’s Pegasus spyware, for example, prompted institutional responses. These included the European Parliament’s decision to establish a Committee of Inquiry to investigate the use of Pegasus, leading to a series of critical reports, and the United States’ decision to place NSO on a “blacklist.” Some, like former UN Special Rapporteur David Kaye and Human Rights Watch, went further and called for a complete ban on offensive cyber tools until their use is regulated internationally.

Another important question is against whom, and when, this technology is deployed. Are these tools deployed against a counterpart that is also tech-savvy, say Iran, or are they part of the administration of territories in the West Bank? The context matters, and it shapes the perception that will develop around these tools.

Finally, admitting that Israel used AI on the battlefield invites, and justifies, reciprocal use of such tools against Israel. Other States can also rely on these statements to deploy AI tools in other contexts, for example in the Russia-Ukraine conflict.

Conclusion and a Word on the Human Factor

There is room for prudence when deploying new military capabilities, especially ones, like AI-based tools, that are not yet regulated. There are deep inter-State disagreements over such regulation, rooted in the different perspectives and values of States. Israel, at least in terms of technological supremacy, views itself as well positioned to promote the use of innovative tools that give it a technical edge. Nonetheless, such an approach carries risks.

One encouraging aspect is that the IDF seems to seek tools that complement human decision-making, rather than substitute for the human factor. It is important to maintain a human in the loop in order to promote accountability, especially because we are not fully aware of the capabilities and risks of AI tools. In this regard, Israel is setting a positive example that should be followed.

***

Dr. Tal Mimran is an adjunct lecturer at the Hebrew University of Jerusalem and at the Zefat Academic College. He is the Academic Coordinator of the International Law Forum of the Hebrew University, and the Research Director at the Federmann Cyber Security Research Center in the Law Faculty of the Hebrew University.

Lior Weinstein is a Master’s student of international law (LLM) at the Hebrew University of Jerusalem and a Researcher at the Tachlit Policy Center in the fields of Law and Technology and International Law.

Dr. Tal Mimran is the head of the “Social Contract for the Digital Age” program at the Tachlit Policy Center, and a researcher and lecturer in the fields of international law and cyber.