The use of Artificial Intelligence (AI) raises complex ethical questions, further compounded by the potential for AI developments to be used in battle to take lives. Although weapons’ impacts are often not fully understood until they are actively deployed, it is essential to recognise the potential ethical challenges from the outset.
The historical example of the Manhattan Project participants, who initially believed their work was ethical and beneficial for humanity, highlights the importance of critical reflection. Witnessing the devastation of Hiroshima and Nagasaki and experiencing the Cuban missile crisis forced them to confront the existential crisis resulting from their involvement.
Military AI refers to the deployment of AI and Machine Learning (ML) technologies in military applications. This includes autonomous weapon systems, AI-assisted decision-making, and intelligence gathering. As the Russia-Ukraine conflict rages on, its battlefields are increasingly populated with artificial combatants, giving us an insight into AI’s role in future warfare.
Dr Jorrit Kamminga, Director of RAIN Ethics, and Maj Gen (Retd.) Robin Fontes contend that “Ukraine is a laboratory in which the next form of warfare is being created. It is not a laboratory on the margins, but a centre stage, a relentless and unprecedented effort to fine-tune, adapt, and improve AI-enabled or AI-enhanced systems for immediate deployment. That effort is paving the way for AI warfare in the future.”
AI and ML are used by both sides in weaponry and intelligence. Russia has used a powerful drone that can identify targets using AI, while Ukraine has used controversial facial recognition software during the conflict. At the same time, the United States (US) is exploiting AI capabilities in Ukraine to analyse data related to the conflict and track the movements and activities of different actors, helping the US military better model and anticipate how advanced adversaries, particularly Russia and China, will behave in the real world.
In this article, we explore the impact of emerging and disruptive technologies on the stability of deterrence and the opportunities these technologies could offer for arms control and confidence-building measures. First, we discuss how the development of AI and ML capabilities for military purposes is changing the battlefield landscape.
The rapid introduction of computers into war-fighting tools such as drones or driverless tanks, along with the increasingly compressed decision-making time that comes with this technological advancement, has pushed humankind towards a situation where governments and militaries around the world risk losing control over both lethal and non-lethal arms. We then discuss the need for legal and ethical norms, both international and domestic, that could help prevent humans from being excluded from the command and control chains of AI tools used for war and in deterrence relationships.
How AI is being deployed by the military—an ethical consideration
Setting aside that warfare is inherently unethical, let us analyse the role of AI on the battlefield and its implications beyond it. There are two major warfare paradigms: ‘counterterrorism’, a more targeted approach in which the military deals with mobile targets, people, and vehicles; and ‘classic aero-land warfare’, fought with tanks, aircraft, and more traditional military equipment. The two will have quite different implications as AI on the battlefield brings more autonomy to weapon systems, particularly in the critical functions of target selection, identification, and the decision to use force.
Many of the things AI is currently being applied to in the military are not objectionable: an AI co-pilot designed to help an unconscious pilot land safely is a fairly innocuous use of AI. There is research into using ML and AI for cybersecurity, for example, sophisticated vulnerability scanners that could potentially patch vulnerabilities as well. Other areas of active work include ML-based language translation, part of the JEDI contract, as well as everyday logistics and bomb disposal robots. But then there are more contested applications, such as ‘target selection’ and ‘identification’, which are particularly tricky: tools like ‘Bugsplat’ for collateral damage estimation, and ‘Skynet’, which aims to detect ‘terrorists’ or ‘insurgents’ from mobile phone patterns. Project Maven aims to track people and vehicles in areas under wide-area motion imagery surveillance, feeding that into further layers of the analysis process.
Currently, we get sanitised views from state parties that favour AI, with arguments such as: it will make life safer for civilians because of its precision and accuracy. Ethical guidelines like the US Department of Defense’s (DoD) Five Principles of AI Ethics and China’s position paper on Regulating Military Applications of AI attempt to assure the public that the AI they develop for military use will be designed and deployed ethically.
By adopting these standards, governments worldwide want to convince the public that the continued use of battlefield AI is acceptable. But this narrative does not stand up, because what we are asking of these AI systems is the complicated problem of correctly distinguishing between combatant and non-combatant, a task at which humans hold significant advantages over ML and AI systems: the ability to understand context and think through a situation rather than merely apply statistical patterns.
Technical and pragmatic considerations of AI on the battlefield
Domestic implications: What is downstream, and how do AI’s military applications connect to domestic issues? Many AI tools used by the military at the borders will later be used by police forces and large companies. For example, full-motion video surveillance was used in Afghanistan for ‘attack the network’ operations and was later used by companies like Persistent Surveillance Systems over different cities. The other side of this is that autonomous lethal weapons will also be adapted into autonomous so-called less-lethal weapons. A telling example is the Australian company Cyborg Dynamics, which described in a podcast its autonomous drones fitted with a small grenade launcher. The company said its international market lay in replacing the grenade launcher with tear gas so that police could use the drones on protestors. Later, the idea was developed further, and autonomous drones were used to fire tear gas and rubber-busting grenades in Gaza during the conflict. To paraphrase William Gibson, “the future is here; it’s just not evenly distributed”, and we can look at the borders to get an idea of what is coming domestically.
Automation bias: This is the tendency of humans to follow a computer’s instructions without questioning them. If, for example, the computer-vision algorithm used for targeting says that a strike is good and we simply accept that answer, the result is high-risk and tricky. Another aspect is that war situations are very dynamic; no two conflicts are the same. The geography, the warring parties, the tactics, and the weapons used vary from time to time and place to place, so there is no theoretically valid model of a combatant that stays stable. Hence, there is no computationally proper way to assess the behaviour of combatants and make that determination while fulfilling International Humanitarian Law (IHL). Any system that tries to do this will be inaccurate and lead to many problems, as the illustrative sketch below suggests. Thus, nominal ‘human control’ over systems that already take an active role in this deliberative process ends up undermining the unique human ability to apply critical, context-specific judgement.
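As a purely hypothetical illustration of why a statistical model of a ‘combatant’ does not stay stable, the sketch below trains a toy classifier on synthetic behavioural features from one conflict and evaluates it on a shifted distribution from another. Every feature, number, and name is invented and does not represent any real system or dataset; the point is only that a pattern learned in one setting degrades when geography, tactics, and behaviour change.

```python
# Illustrative only: a toy "combatant classifier" trained on synthetic data from
# one conflict and evaluated on a shifted distribution from another. All features
# and numbers are invented; no real system or dataset is represented.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

def synthetic_conflict(n, movement_mean, comms_mean):
    """Generate fake (movement, communications) features and labels for one 'conflict'."""
    combatants = rng.normal([movement_mean, comms_mean], 1.0, size=(n, 2))
    civilians = rng.normal([0.0, 0.0], 1.0, size=(n, 2))
    X = np.vstack([combatants, civilians])
    y = np.array([1] * n + [0] * n)
    return X, y

# "Conflict A": the behavioural patterns the model is trained on.
X_train, y_train = synthetic_conflict(1000, movement_mean=3.0, comms_mean=3.0)
model = LogisticRegression().fit(X_train, y_train)

# "Conflict B": different geography, tactics, and parties shift the statistics.
X_shifted, y_shifted = synthetic_conflict(1000, movement_mean=0.5, comms_mean=-1.0)

print("Accuracy on the conflict it was trained for:",
      accuracy_score(y_train, model.predict(X_train)))
print("Accuracy after the distribution shifts:",
      accuracy_score(y_shifted, model.predict(X_shifted)))
```

On the synthetic data the classifier looks near-perfect in its training setting and collapses towards chance once the underlying behaviour changes, which is exactly the instability that makes automation bias in targeting so dangerous.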
Can we programme these systems with ethics or IHL to make war more ethical?
The policy processes around the ethics of AI, and specifically military AI, are still evolving. However, we need to be conscious that ethics is not just about following a set of pre-defined rules; the role of human conscience, and the human ability to dissent and debate what norms should be, is vital.
One of the most important bodies of international law is IHL. It is a set of rules governing the conduct of armed conflict, including provisions on the protection of civilians and other non-combatants as well as the prohibition of certain weapons and methods of warfare. However, it is an extensive and fuzzy set of rules, and implementing ethics or IHL in AI systems to make warfare more ethical poses challenges. Ethical considerations are context-dependent and open to interpretation, which makes programming them into software a complex task. The interpretation of ethics and IHL embedded within software can also lack transparency and escape necessary scrutiny. Therefore, it is crucial to critically evaluate the choices made when handling AI systems and ensure they align with ethical and legal norms.
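To make the difficulty concrete, here is a deliberately naive, purely hypothetical sketch of a rule-based ‘distinction’ check. Every field, condition, and function name is invented; the point is precisely that the legal judgements IHL requires, such as distinction, proportionality, and hors de combat status, resist reduction to fixed conditions like these.

```python
# Illustrative only: a deliberately naive, rule-based "distinction" check.
# Every field and rule here is invented; real IHL assessments are
# context-dependent legal judgements, not lookups against fixed conditions.
from dataclasses import dataclass

@dataclass
class Observation:
    carrying_weapon: bool      # sensors can misread farm tools, cameras, etc.
    in_uniform: bool           # many combatants do not wear uniforms
    near_military_site: bool   # civilians live and work near such sites
    has_surrendered: bool      # hors de combat status is often ambiguous

def naive_distinction_check(obs: Observation) -> str:
    """A fixed-rule approximation of the IHL principle of distinction."""
    if obs.has_surrendered:
        return "protected"          # but how is surrender detected reliably?
    if obs.carrying_weapon and obs.in_uniform:
        return "lawful target"      # ignores medics, the wounded, prisoners
    if obs.carrying_weapon and obs.near_military_site:
        return "lawful target"      # ignores hunters, guards, local police
    return "protected"

# The same observation can describe a combatant or a civilian depending on
# context the rules never see: intent, local norms, the phase of the conflict.
print(naive_distinction_check(Observation(True, False, True, False)))
```

However the thresholds are tuned, the rules encode one programmer’s frozen interpretation of the law, which is why the transparency and scrutiny of such design choices matter as much as the rules themselves.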
Way forward
Human military personnel receive knowledge of ethics, morality, and law from society before they go to war, and this knowledge is further expanded through reinforcement of their military’s warrior and service-specific ethos. As militaries worldwide increasingly incorporate AI and autonomy into the battlespace and intelligent machines take on responsibility, why should we approach them differently? Given the inherent complexity and risks of military AI, it is crucial to establish rigorous standards, and international collaboration is essential in shaping ethical norms and guidelines that prioritise human control, transparency, accountability, and the protection of civilian lives. By integrating diverse perspectives and expertise, it is possible to foster responsible development and deployment of AI in warfare.
About the author: Animesh Jain is a Policy Fellow with the Office of Principal Scientific Advisor to the Government of India – Policy Analytics and Insights Unit.
Source: This article was published by the Observer Research Foundation