Isaac Asimov is one of the most celebrated science-fiction writers, and arguably his most famous creation is the ‘Three Laws of Robotics.’ In a nutshell, these state that: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey orders given to it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.
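Asimov never specified how his laws might be implemented, but their strict priority ordering can be illustrated with a minimal sketch. The Action fields and predicates below are purely hypothetical assumptions for illustration; they come from neither Asimov nor the projects discussed here.

from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool        # would the action injure a human?
    inaction_harms: bool     # would refusing to act allow harm?
    ordered_by_human: bool   # was the action ordered by a human?
    self_destructive: bool   # would the action destroy the robot?

def permitted(a: Action) -> bool:
    # First Law, first clause: a robot may not injure a human being.
    if a.harms_human:
        return False
    # First Law, second clause: nor, through inaction, allow a human
    # to come to harm. If refusing to act would allow harm, act.
    if a.inaction_harms:
        return True
    # Second Law: obey human orders (the First Law was already checked).
    if a.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not a.self_destructive

The point of the ordering is that a lower law can never override a higher one; the difficulty, as the cases below suggest, lies in deciding what counts as ‘harm’ and to whom.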
Applying Asimov today
Prof Tom Sorell of the University of Warwick, UK, has recently argued that Asimov’s three laws seem a natural response to the idea that robots will one day be commonplace and will need internal programming to prevent them from harming people. However, he contends that although the laws are organised around the moral value of preventing harm to humans, they are not as easy to interpret as they first appear. Prof Sorell, an expert on robot ethics, worked on the EU-funded ACCOMPANY project, which developed a robotic companion to help elderly people live independent lives.
Sorell writes that Asimov’s laws still seem plausible at face value because there are legitimate fears over the harm robots can do to humans, for example the recent fatalities in the US involving malfunctioning autonomous cars. But we are also living in an age where increasingly sophisticated robots are being used to carry out ever more complex tasks designed to protect and care for humans.
These include not only robots designed to care for the elderly, as in ACCOMPANY (see the CORDIS Results Pack on ICT for Independent Living for more on EU-funded projects using robots to assist the elderly), but also robots designed to provide disaster relief. One example is the robot used by an EU-funded project to assess damaged heritage buildings in the earthquake-stricken Italian town of Amatrice.
New robots, new paradoxes - new laws?
Asimov’s laws become shakier when we consider the development of human-directed military drones designed to kill other humans from afar. Paradoxically, if a robot is directed by a human controller to save the lives of the controller’s co-citizens by killing the humans attacking them, it can be said to be both following and violating Asimov’s First Law: following it with respect to the people it protects, and violating it with respect to the people it kills.
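To make the contradiction concrete, the toy model above can score the same hypothetical drone strike from two perspectives, one treating the attackers as the humans at stake and one treating the defenders. The values are, again, purely illustrative.

# The same physical act, evaluated under the First Law from two sides.
strike_vs_attackers = Action(harms_human=True, inaction_harms=False,
                             ordered_by_human=True, self_destructive=False)
strike_vs_defenders = Action(harms_human=False, inaction_harms=True,
                             ordered_by_human=True, self_destructive=False)

print(permitted(strike_vs_attackers))  # False: the strike injures humans
print(permitted(strike_vs_defenders))  # True: inaction lets humans come to harm

The First Law gives opposite verdicts depending on which humans it is applied to, which is exactly the paradox described above.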
Also, if the drone is directed by a human, it can be argued that responsibility for any loss of life in combat lies with the human, not the drone. Indeed, armies equipped with drones could vastly reduce the amount of human life lost overall; perhaps it is better to use robots rather than humans as cannon fodder.
At the other end of the scale, Asimov’s laws are appropriate if keeping an elderly person safe is the robot’s main goal. But robotics often fits into a range of ‘assistive’ technologies that help the elderly to remain independent, which was the goal of the ACCOMPANY project. Independence means allowing people to make their own decisions, including decisions that could result in injury, such as through falling. Yet a robot that allowed its human to make such a choice would, strictly read, be breaking the First Law by failing, through inaction, to prevent human injury.
However, Sorell argues that human autonomy must be respected, by robots as well as by other humans. Elderly people who make choices that preserve their independent living, but which could put them at risk of injury, also need to be respected.
So whilst Asimov’s laws have influenced robotics developers for decades, now is perhaps the time to re-evaluate their effectiveness and begin discussing a new set of laws that can keep pace with the awe-inspiring breakthroughs in robotics and AI taking place in Europe and across the world.
For more information, please see:
ACCOMPANY project website