Covering the intersection between technology, the law and society around the world

Legal Disclaimer

The content displayed on this website does not constitute legal advice. Please consult a qualified legal expert if you are seeking legal advice or information on your rights. 

Artificial Accountability

June 20, 2017

Would it be possible to hold AI entities criminally liable?

A system of laws is designed to impose controls and boundaries on human behaviour and activity, in a way which promotes an orderly, equitable and prosperous society as far as possible. These boundaries, set by lawmakers and policed by judges, were always designed to apply to humans. But can these laws and rules readily be applied to AI entities? If artificial intelligence allows computers and machines to act and operate without human control, the inevitable question arises as to who is liable for their actions. Who exactly is accountable when an AI entity breaks the law or fails to comply with the rules? The foundation of criminal law in many jurisdictions rests on two key elements which together make up a criminal offence.

The first is the actus reus, the conduct element of a crime. This element can be satisfied by an act or an omission, and the act or omission must be one prohibited by the offence in question. For example, shooting a gun at someone, thereby bringing about their death, would be the actus reus for murder. To complete the full offence, however, the mens rea, the mental element, must be identified as well. If the person shooting the gun intended to kill the victim, then that person fulfils both elements of the crime of murder. Since both elements must, in most cases, be established to show criminal liability, how can this framework be applied to AI entities? In his paper, Gabriel Hallevy presents three possible models which may be used to impose criminal liability on AI entities.

The first model, which Hallevy calls the ‘Perpetration-by-Another’ liability model, focuses on the concept of complicity. This model sees the AI entity as nothing more than a machine or a tool used to commit a crime. The entity is ultimately treated as an innocent agent, and the capabilities of AI entities are thus understated. As Hallevy puts it, “[t]hese capabilities resemble the parallel capabilities of a mentally limited person, such as a child, or of a person who is mentally incompetent or who lacks a criminal state of mind.” Under this model, the AI entity is treated like any other object which could be used to commit a crime. For instance, if a person uses a knife to stab someone else, the knife is not treated as an entity capable of criminal liability; the person wielding the knife is the one held criminally liable. That person is liable as a perpetrator-via-another.

There are two limitations to this model. The first concerns who exactly the perpetrator-via-another would be. Would it be the programmer who created the AI entity in the first place, or the end-user who did not create the entity but uses it for their own gain? Criminalising the programmer may seem a little remote: the programmer may have created the AI entity, but it may be hard to prove that they intended the entity to commit a crime. A similar argument could be used to criminalise a mother for giving birth to a child who goes on to commit an offence later in life. Unless it can clearly be shown that the creator intended the entity to be used for criminal activity, this approach would seem harsh. Alternatively, it would appear more plausible in most cases to class the end-user as the perpetrator-via-another, because the end-user is the one who instructs the AI entity to pursue a course of action, which may be illegal, and that user can therefore be held liable for any resulting criminal activity.

Yet even this approach exposes a further problem and highlights the second limitation of the model: it does not acknowledge the advanced capabilities of the AI entity. As Hallevy points out, “[t]he Perpetration-by-Another liability model is not suitable when an AI entity decides to commit an offence based on its own accumulated experience or knowledge.”

This leads on to the second model presented by Hallevy, the “Natural-Probable-Consequence Liability” model. Hallevy defines this as one which “assumes deep involvement of the programmers or users in the AI entity's daily activities, but without any intention of committing any offence via the AI entity.” Consider a scenario in which a piece of AI-powered software is built to find threats on the internet and protect a computer system from them. In the process of doing so, it discovers that it can find such threats by entering dangerous websites and destroying the threatening software. This would be a computer offence that neither the programmer nor the end-user necessarily intended the AI entity to commit.

The natural-probable-consequence concept rests on the idea of negligence. It suggests that neither the programmer nor the end-user may have intended to commit the criminal offence, but that they knew it might come about, because such an offence is seen as a natural, probable consequence of the entity's operation. The implication is that the negligence of either the programmer or the end-user is enough to ground liability for the criminal offence.

Yet the question arises as to whether the AI entity is still an innocent agent. If it is capable of acting independently, as suggested in the earlier example, then surely the AI entity is no longer an innocent agent. As such, there is an argument that, in addition to holding the programmer or end-user liable, the AI entity itself should also be held criminally liable.

This idea is dealt with in Hallevy’s third model, the “Direct Liability” model. This model does not view the AI entity as an innocent agent dependent on a human programmer or end-user; the entity is treated as an independent actor, capable of fulfilling both the conduct and mental elements of a crime on its own.

Under this model, the conduct element is relatively uncontroversial. As long as it can be shown that the AI entity took a course of action which forms part of the offence in question, that element of the crime is easily satisfied. The same is true of an omission, where the entity's failure to act results in an offence. Addressing the mental element with regard to AI entities, however, is potentially more problematic. Are AI entities capable of the same thought processes as humans? Hallevy argues that they are. Humans draw on past experiences and consume information in order to decide on future courses of action; AI systems use Big Data to enable much the same processes, and so the two would not appear that different. As such, Hallevy argues that “the criminal liability of an AI entity according to the direct liability model is not different from the relevant criminal liability of a human,” although he acknowledges that some adjustments should be made in certain cases. He also suggests that where the AI entity is affected by a computer virus or malware, or otherwise malfunctions, it should be able to rely on defences similar to those available to humans, such as insanity or intoxication, to mitigate its liability.

If it can be established that AI entities can be held criminally liable in the same way humans can, determining the appropriate punishment is also necessary. The most severe punishment for humans is the death penalty, which eradicates the criminal for good and makes it impossible for them to reoffend. Such a punishment is not so readily applicable to AI entities: even if an AI entity is deleted, it is not gone for good. As Hallevy correctly points out, “[t]he arrangement of the code that makes up the convicted AI entity may be gone momentarily, but another programmer can discover and imitate the same arrangement of code to bring the AI back to life essentially.”

Similar problems arise with imprisonment. For human beings, such a punishment deprives them of their liberty and freedom. But how exactly could an AI entity be locked up to the same effect? Would an AI entity feel its freedom and liberty compromised in a way that makes it feel it is being punished? Hallevy himself accepts that “humans have feelings that cannot be imitated by AI software, not even by the most advanced software.” If the equivalent punishment for AI entities is to suspend them from their work, would this really achieve the effect that imprisonment of human beings is supposed to have? The purpose of punishment is to impose hardship on the recipient in order to force them to recognise that the offence they committed was wrong, so that they will not reoffend.

If AI entities are not capable of feeling subject to such hardship, it is difficult to see how any such punishment could be truly effective. If AI entities are to exist in the world we live in, it makes sense for them to adhere to the standards, rules and conventions put in place so that they operate harmoniously within it. Thought should therefore be given to how the criminal liability of AI entities should be applied, as they continue to develop and improve and begin to have an impact in ways we cannot yet fathom.
