

An AI Army

June 20, 2017

The inevitable militarisation of AI comes with both costs and benefits

One area in which AI will certainly have a big impact, beyond the private and public sectors, is international law and the rules of engagement. Given all that AI can offer, it will inevitably be used in modern warfare, with militaries seeking the same kinds of advantages it already provides in cybersecurity.

How exactly would the militarisation of AI be advantageous? There are a few ways. Firstly, it could significantly reduce the need for boots on the ground. General Robert Cone, head of the US Army's Training and Doctrine Command, believes that drones and robots could replace up to a quarter of combat troops by 2030. Enhanced technology, in his view, allows for "a smaller, more lethal, deployable and agile force." The US Army has accordingly contemplated cutting the size of brigade combat teams by the thousands. For now, many of the tasks which machines can perform better than humans are specific and narrow. It is possible to envisage, however, that these machines will become smart enough to operate across a broader range of more complex scenarios and environments. Eventually, humans may not be needed to control and monitor these machines even from afar, so human involvement in a growing number of conflicts may no longer be required, at least not to the same degree as today. Far fewer human lives would then have to be put directly in harm's way.

Secondly, AI entities may be capable of making much better decisions in the heat of battle than even a well-trained soldier could. As with cybersecurity, AI entities, unlike humans, are not subject to emotions or irrationality; they are machines limited to doing what they are programmed or instructed to do. Their natural 'calmness', if it can be phrased as such, means their decision-making is not prone to the occasional irrational behaviour or cognitive slips which even the most highly trained soldiers are known to make. Because AI entities can process far more data than humans, they are also capable of making more informed decisions. In the same way that AlphaGo drew on vast amounts of data to work out the best moves in a game of Go, AI entities could use the same capabilities to determine the most effective operations and combat strategies to defeat a given adversary.

This leads on to the third significant advantage: AI entities would be better able to cope with the challenges presented by cyber warfare. This advantage is very reminiscent of the benefits of AI in cybersecurity, namely its ability to deal with the growing number of cyber-attacks occurring at immense pace. Last year, the Defense Advanced Research Projects Agency (DARPA) held its Cyber Grand Challenge, an event in which autonomous servers attacked one another while simultaneously patching their own vulnerabilities. Such capabilities address some of the problems currently presented by cyberspace, especially the difficulty of responding adequately to attacks that are effectively instantaneous. Leaving smart machines to defend, and attack, by themselves could allow a far more robust defence system to be put in place against growing modern threats.

Yet, in the context of cyber warfare, there are still some potential disadvantages to the rise of an AI army. In particular, there are a few reasons to suggest that the militarisation of AI could pose even more severe problems than nuclear bombs did in the 1980s. The first concerns the structure of the internet and its impact on the traditional ways of conducting warfare. In cyber warfare, gaining the upper hand over adversaries is significantly harder than in the traditional domains (land, sea and air). The internet's openness means that activity taking place on it can hardly be constrained effectively, and this allows for a much wider range of adversaries. Whereas only a few countries have ever been capable of developing nuclear weapons, since doing so requires costly raw materials that are hard to obtain, cyberspace makes the necessary information and resources far more ubiquitous and easily accessible.

Thus, not only would military forces have to cope with more adversaries, but all of these adversaries can take advantage of the more level playing field created by cyberspace. The superiority of America's cyber weapons and tools can more easily be outmatched by a hacker in Bulgaria. As long as adversaries have sufficient knowledge of code, a computer and a connection to the internet, they can, in theory, be as capable of developing offensive cyber capabilities as the US Cyber Command. The interconnectivity of the internet also means that an attack on one network can easily affect another, so the damage inflicted by an attack may not be limited to the intended target; the attack surface is thus much larger, expanding the dangers of cyber warfare. As such, AI-powered cyber warfare could prove at least as damaging as nuclear weapons ever truly were, if not more so.

A limitation on both the potential advantages and disadvantages of the militarisation of AI, though, derives from the policy constraints on such weapons. The laws and regulations surrounding the development and use of autonomous weapons may well frustrate the movement towards an AI army (it is worth noting that the terms 'AI weapons' and 'autonomous weapons' are practically synonymous, as both essentially refer to a weapons system that uses information and data to independently select and engage targets). Applying the current legal framework to autonomous weapons highlights two principles of significant concern: distinction and proportionality.

Distinction concerns the ability to distinguish between military and civilian targets. As the International Court of Justice put it in 1996:

The cardinal principles contained in the texts constituting the fabric of humanitarian law are the following. The first is aimed at the protection of the civilian population and civilian objects and establishes the distinction between combatants and non-combatants; States must never make civilians the object of attack and must consequently never use weapons that are incapable of distinguishing between civilian and military targets.

The principle of distinction is one of the foundations of the modern rules of engagement. The problem with autonomous weapons is whether they can be deployed and used in a way which adheres to such rules. Can these weapons avoid carrying out indiscriminate attacks? As AI entities rely on masses of data to work accurately and effectively, the best way for autonomous weapons to adhere to the distinction rule would be to have access to quality data. This is where the difficulty lies. In her paper on autonomous weapons, Rebecca Crootof of Yale University argues that autonomous weapons are not yet capable of distinguishing between civilian and military targets, as "doing so requires a complicated assessment of various factors, and there are many grey zones that bewilder even well-trained human soldiers." While civilians engaging in hostilities are lawful targets, "armed civilians acting as law enforcement" are not, and distinguishing between the two is far from easy. Some predict that the ability of AI entities to differentiate between lawful and unlawful targets will improve in the near future, but for now the legal controversy remains.

Similar legal incompatibilities are observable with respect to the principle of proportionality. According to the First Additional Protocol to the 1949 Geneva Conventions, it is forbidden to launch an "attack which may be expected to cause incidental loss of civilian life, injury to civilians, damage to civilian objects, or a combination thereof, which would be excessive in relation to the concrete and direct military advantage anticipated." When authorising an attack, determining whether it would adhere to the proportionality principle involves a subjective analysis. This is something which autonomous weapons, acting independently, may struggle to perform. Crootof correctly points out that autonomous weapons are not yet "able to qualitatively analyse, let alone weigh, the expected military advantage of a particular attack and the associated potential harm to civilians." This is even more questionable when autonomous weapons are deployed in ever-changing environments, particularly where their programming would have to be altered to adapt. The principle of proportionality is almost invariably centred on "human judgement", so the objective, data-driven analysis a machine would use to determine whether an attack is proportionate is essentially incompatible with this legal principle. Accordingly, autonomous weapons could be used in ways which contradict some of the most important rules and principles of conflict and war in a rather unprecedented fashion.

The military applications of AI are promising in some ways and worrying in others. It may be difficult to say at this point which way the balance tips, but it can certainly be said that the militarisation of AI raises more questions than answers.