The BBC Technology News and the Today programme have been talking today about the great robot debate. Experts, academics and other assorted folk will tonight discuss the potential threats arising from the increasing number of robots, in a debate organised by the Dana Centre and the Science Museum. While at first I thought this was going to be a mirth-inducing exercise, reading more about it has prompted me to explore the issue further.
The debate will discuss a foresight report entitled Robo-rights: Utopian dream or rise of the machines? commissioned and conducted by the DTI’s Chief Scientific Adviser. The report concludes that:
“As computers and robots become increasingly important to humans and over time become more and more sophisticated, calls for certain rights to be extended to robots could be made. If artificial intelligence is developed to a level where it can be deployed widely — a development some argue is likely in the coming years — this debate may intensify.”
Robot Liberation Army, here we come…
Nevertheless, the experts at the debate will be arguing about more immediate problems arising from robotics. The threat will not come from conscious androids demanding the right to vote in the next European election, nor from human-looking automatons. What concerns them is something more mundane: dumb automated robots, highly independent machines making decisions with little or no human oversight. One particular area of concern seems to be the military use of drones and other autonomous, mindless machines, where the potential for things going wrong increases. I have to agree somewhat; hasn't the military ever seen a science fiction film? Everybody knows that military robots always go bad.
Seriously though, I tend to be sceptical about these claims, as they seem to me to stem from scaremongering, technophobic fears. Automated machines have been with us for decades, and they still have not decided to take over the world. Military drones making mistakes are no more dangerous than military personnel making mistakes, and even less so if the appropriate checks are built into the system's architecture. Fears of robots are entrenched in our culture, but they ignore the fact that we have been living with vending machines, VCRs, AIBO and factory assembly arms for years now. When was the last time you saw a Coke machine kill a person? (Actually, people have been killed by vending machines.)
Anyway, a discussion of robots is not complete without Asimov’s Three Laws of Robotics:
1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
2. A robot must obey orders given it by human beings except where such orders would conflict with the First Law.
3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
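The interesting thing about the Laws is that they form a strict priority ordering, not three independent rules: the First always overrides the Second, and the Second the Third. As a toy illustration (all names and flags here are my own invention, not anything from the debate or the report), that ordering can be sketched as a lexicographic preference over candidate actions:

```python
from dataclasses import dataclass

# Hypothetical sketch: each candidate action is tagged with which Law it
# would violate, and the "least objectionable" action wins.

@dataclass(frozen=True)
class Action:
    name: str
    harms_human: bool = False       # First Law: injures a human, or lets harm occur
    disobeys_order: bool = False    # Second Law: conflicts with a human order
    self_destructive: bool = False  # Third Law: endangers the robot itself

def choose(actions):
    """Pick the least objectionable action under the Laws' strict priority."""
    # Python compares tuples left to right, so ranking on
    # (harms_human, disobeys_order, self_destructive) makes any action that
    # harms a human worse than any that merely disobeys an order, and any
    # disobedient action worse than mere self-sacrifice.
    return min(actions, key=lambda a: (a.harms_human,
                                       a.disobeys_order,
                                       a.self_destructive))
```

So given only bad options, this toy robot sacrifices itself before it disobeys, and disobeys before it harms anyone, which is exactly the hierarchy Asimov's stories keep probing for loopholes.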