[xkcd comic]

This is the text for my presentation at re:publica 18. The slides can be found here.

I have presented at re:publica for the last couple of years, usually with a work-in-progress paper at an early stage of research, and this is no exception. My first paper was about copyright ownership in works generated by artificial intelligence, and it led to a continuing interest in AI in general. I have now published a full paper arising from that talk two years ago, and I decided it was time to look at the other side of the law, namely the responsibility side. This is a relatively new area for me: I have only just started looking at the liability arising from AI, rather than the rights given to AI-generated creations. I am also not going to offer any solutions; this is not a regulatory paper, but mostly a description of existing law.

But one of the main reasons I became interested in the liability side of artificial intelligence is the book Rule 34 by Charles Stross (a sequel to the equally fascinating Halting State). [spoiler alert] The book describes an anti-spam bot that goes on a killing rampage using Internet of Things devices, having worked out that the best way to get rid of the spam is to eliminate the people producing it. As an interesting side note, Rule 34 refers to an Internet meme positing that there is Internet pornography on every conceivable subject.

The problem with writing about AI is always how to limit the subject, which is fast becoming a vast one, ranging from algorithmic transparency to autonomous weapons systems. For now I will not be covering algorithms in any depth; other people have been doing an excellent job of that. Similarly, subjects such as the actual regulation of self-driving cars do not interest me (as opposed to the liability question): my guess is that self-driving cars will happen and regulatory efforts will be secondary. The autonomous weapons debate, while very important, is also beyond the remit of my research.

So we are left trying to craft a passable definition of artificial intelligence that can withstand the test of time and changing technologies. Back in 1979, Douglas Hofstadter wrote in the excellent book Gödel, Escher, Bach that “AI is whatever hasn’t been done yet”, and I have always felt very attached to that definition. One of the problems we have is that we tend to frame the debate around AI as an incredibly powerful tool, almost super-human in speed and capabilities. The reality, however, is more mundane: AI is your Netflix, your Amazon recommendations, your Spotify suggestions, your Roomba and your Siri. This is precisely the side of AI that interests me most: not so much the killer robot kind, or the evil AI that will enslave humanity, but the everyday variety that will be the most pervasive. What are the legal implications of a Roomba killing your cat?

So I prefer a more muted and understated definition of AI. Russell and Norvig define artificial intelligence as “the study of agents that receive percepts from the environment and perform actions.” Agency is the key concept here, so I prefer to talk about autonomous agents, with the “smartness” of the system becoming secondary.
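
To make the agent framing concrete, here is a minimal sketch of that percept-action loop in Python, loosely modelled on Russell and Norvig's classic vacuum-cleaner example (the two-cell world and the reflex rules are my own illustrative assumptions, not anyone's actual product):

```python
import random

def percept(environment, position):
    """Return what the agent senses at its current position."""
    return environment.get(position, "clean")

def choose_action(sensed):
    """A trivial reflex policy: act only on the current percept."""
    if sensed == "dirty":
        return "suck"
    return random.choice(["move_to_A", "move_to_B"])

# A toy two-cell world: the agent cleans whatever cell it senses is dirty.
world = {"A": "dirty", "B": "clean"}
position = "A"

for step in range(5):
    sensed = percept(world, position)
    action = choose_action(sensed)
    if action == "suck":
        world[position] = "clean"
    elif action == "move_to_A":
        position = "A"
    else:
        position = "B"
    print(step, position, sensed, action)
```

Nothing in this loop is "smart"; it simply senses and acts. That is the point: the legal questions below arise from autonomous action, not from intelligence.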

Having that in mind, we can study various instances of robots and autonomous smart agents committing acts that could give rise to legal liability.

Take for example Elaine Herzberg, who was pushing a bicycle across a road in Arizona in March 2018 when she was struck and killed by an Uber self-driving car. While several details have emerged that could point towards Uber’s potential liability, Uber settled the case with the victim’s family, likely for a sizeable amount of money. As the particulars will not be discussed in court, it is worth noting that the police said “it’s very clear it would have been difficult to avoid this collision in any kind of mode (autonomous or human-driven) based on how she came from the shadows right into the roadway”.

Another interesting case is that of smart contracts: self-executing programs written on the blockchain, the idea being that if the terms set by the parties are met, the code will run without human intervention. On November 6, 2017, a bug in a multi-signature smart contract for an Ethereum-based wallet resulted in the freezing of ether worth around $300 million USD at the time. This is the ultimate case of “computer says no”: there is no way of getting the funds back other than to change the blockchain and rewrite history.
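
To see how code alone can freeze funds, consider a deliberately simplified sketch. This is illustrative Python rather than Solidity, and it is emphatically not the actual Parity wallet code; it just shows how a single buggy function, once deployed, can make a balance permanently unreachable:

```python
class ToyContract:
    def __init__(self, owner):
        self.owner = owner      # only this party may withdraw
        self.balance = 0

    def deposit(self, amount):
        self.balance += amount

    def kill_owner(self):
        # A bug of roughly this shape: a function that should never
        # have been publicly callable wipes the owner reference.
        self.owner = None

    def withdraw(self, caller):
        # The condition is enforced by code alone; once deployed,
        # no human can override it.
        if self.owner is None or caller != self.owner:
            raise PermissionError("computer says no")
        amount, self.balance = self.balance, 0
        return amount

wallet = ToyContract(owner="alice")
wallet.deposit(300_000_000)     # funds locked in the contract
wallet.kill_owner()             # the buggy call anyone could trigger

try:
    wallet.withdraw("alice")
except PermissionError as error:
    print(error)                # the balance is now frozen forever
```

In a paper contract, a court could simply order the money returned; here, the code is the only enforcement mechanism, which is precisely what makes liability so hard to locate.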

In the field of data mining and copyright, we have a large number of companies training their AI on music, text, poetry, etc. to produce new works. To give you an example, a program called “Bot Dylan” generates music after being “trained” by listening to thousands of Irish folk songs. Do the resulting works infringe copyright?
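
As a toy illustration of what “training” on existing works involves (and not how the actual Bot Dylan system works), here is a first-order Markov chain that learns note transitions from a tiny corpus and then generates a new sequence. The generated tune is built entirely out of patterns taken from the training works, which is exactly where the copyright question arises:

```python
import random
from collections import defaultdict

# Stand-ins for the copyrighted tunes the model is "trained" on.
training_tunes = [
    ["D", "E", "F#", "G", "A", "B", "A", "G"],
    ["A", "B", "A", "G", "F#", "E", "D", "E"],
]

# "Training": record which note follows which across the corpus.
transitions = defaultdict(list)
for tune in training_tunes:
    for current, following in zip(tune, tune[1:]):
        transitions[current].append(following)

# "Generation": sample a new tune from the learned transitions.
note = random.choice(list(transitions))
generated = [note]
for _ in range(7):
    note = random.choice(transitions[note])
    generated.append(note)

print(" ".join(generated))
```

Every transition in the output was copied from one of the input tunes, yet the resulting sequence as a whole may never have existed before; whether that amounts to taking a “substantial part” of any one work is the open legal question.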

Finally, there is the case of the Random Darknet Shopper, an art project by Swiss collective !Mediengruppe Bitnik which purchased random items from Dark Web markets. In October 2014 it purchased 10 yellow ecstasy pills and was “arrested” by Swiss police, but then released, as it was evidently an art project and there was no intention to cause harm.

The common element in all of these cases is a harm (real or potential) caused by autonomous agents, usually with little or no human guidance. What can the law do in these circumstances?

Roughly speaking, the law deals with new technology using these options: ban, regulate, self-regulate (or do nothing), co-regulate, apply existing law, or draft new legislation; often one or a combination of these is chosen.

When it comes to autonomous agents, we often see talk of some imagined solutions and thought experiments, such as the trolley problem and Asimov’s Three Laws of Robotics. But if we leave these out, what is left?

With regard to self-driving cars, I don’t think they pose many legal challenges on the liability side. I see three potential sources of liability from these vehicles:

  • Product: the manufacturer is liable for making a defective product.
  • Service: the software developer, ISP, or repair company is liable for causing a fault in the system.
  • User: liability arises through misuse of the device, or through neglect in keeping it in order (missing a vital patch).

With regard to smart contracts, the potential origins of liability are similar, but the situation is made more difficult by the distributed and decentralised nature of many contracts. It may be difficult to ascertain exactly who wrote a buggy contract (there is a lot of copying and pasting of popular code), and there may not even be an identifiable entity that could be held liable.

Regarding data mining, we are discussing a different type of liability. In copyright, what matters is whether someone has used a substantial part of a work, and whether a connection can be drawn between the original work and the alleged infringement. In a few systems (such as US fair use), many derivative works would be permitted as transformative, but the same is not true in other jurisdictions.

So the main question is whether we need new laws to cover the liability of autonomous agents.

My first feeling is that much of our current legal regime is still fit for purpose when it comes to liability arising from artificially intelligent agents. Existing laws on negligence can handle most cases, be it in tort, delict, or extra-contractual liability. For example, in the tort of negligence the concept of proximate cause remains useful when looking at harm caused by AI: an event sufficiently related to an injury that the courts deem the event to be the cause of that injury. An important element of the analysis will be foreseeability: if a manufacturer or a service provider could have reasonably foreseen the event, then there could be liability.

With regard to criminal liability, things could be more problematic: generally speaking, it is difficult to allocate criminal responsibility for something caused by an AI because there is no mens rea, so most liability will be civil.

When it comes to smart contracts, I actually think that we may need an overhaul of contract law to deal with autonomous agents, or perhaps we should get even more creative. Why not revisit the Roman law of slavery? Servus non habet personam (the slave has no legal personality), yet slaves did generate some sort of liability for their owners in certain circumstances.

Concluding, there are things that we know we don’t know, and there are things we don’t know we don’t know. We need to see how these technologies are applied in practice; as things stand, I contend that most liability arising from autonomous agents can be handled by existing law.

