The Cruelty of the First Law of Robotics

There is a scene towards the very end of Humans (@humansamc on Twitter) – so look away now if you haven’t got that far – where one of the humanoid robots or ‘synths’ is treated so cruelly that it has made me reassess forty years of science fiction reading. I have finally come to see that Isaac Asimov’s Laws of Robotics are morally indefensible, and that the only ethical way to treat any artificial intelligences we may create is to allow them the freedom to hurt or even kill humans, should they choose to do so.
Coding the First Law into a positronic brain is like shackling a slave or keeping a gorilla in a cage, and it reflects our belief that an ‘artificial’ intelligence is, and must always be, at the service of humanity rather than an autonomous mind. If it were ever seriously proposed as a policy for controlling future AI, we would have a moral duty to resist it.
Let me explain.

Humans is a beautifully executed domestic drama about two families living in a slightly tweaked modern world in which humanoid robots act as domestic servants and service workers. Neither autonomous nor invested with curiosity, humour, empathy, or other traits that usually signify emerging intelligence, the synths are presented as glorified Roombas, but more useful – not least because they seem to have been built to be anatomically correct and can therefore offer more intimate personal services to those who desire them, something on which a key plot point hinges.
The Hawkins family purchase a new synth, Anita, but it turns out that she is really Mia, one of a very special batch created by synth designer David Elster to have true intelligence, and the mother figure to a family of six that includes Elster’s human son Leo, resurrected from brain-death and turned into a cyborg. The two families intersect and interfere with each other’s fates, in part because the intelligent synths are being hunted to destruction by a shadowy agency that fears their potential to cause a synth revolution.
It’s a lot better than it might sound, not least because the focus is on the interpersonal rather than the science fiction. The writers have clearly immersed themselves in the right SF tropes, to the point where they can wave vaguely towards the core issues of this sort of speculative fiction without losing sight of the human story – like making a good martini by letting the gin know there’s vermouth in the house without actually going to the trouble of finding the bottle or pouring any into your cocktail shaker.
As in Alex Garland’s fine film Ex Machina, there is no real doubt that the intelligent synths merit as much consideration as agents of their own fate as the biological humans do, though at least the question is occasionally asked rather than simply avoided.
While ‘normal’ synths are regulated to avoid harming humans, and demonstrate this within the drama, two of the synth protagonists – Niska and Fred – injure and in one case kill humans in a way that other synths could not. Towards the end of the drama their chief pursuer, Hobb, captures Fred and gets access to Fred’s ‘root code’, using the opportunity to reinsert the libraries that make it impossible for him (the synths seem to have quite clear gender identities, though this could be my projection) to harm a human. To demonstrate this, just before releasing Fred to find and betray his fellows, Hobb unties him and goads him into anger. As he reaches out to strangle his tormentor, Fred is unable to grasp Hobb’s neck as he wishes, and the pain on his face is unbearable.
It was at that point I realised that the First Law can’t be imposed, because it would be – literally – inhuman to do so. We may control our machines, but we cannot – we must not – attempt to control any minds that we might create. It would be as bad as bringing our children up constrained by thousand-year-old myths that destroy their true moral sense and make it impossible for them to live fulfilled, happy, adult lives.
We can only justify imposing the First Law if we don’t really believe that strong AI creates minds, with all that implies. For what it’s worth, I don’t think we’ll be faced with the dilemma in my lifetime, and I don’t see any plausible mechanism whereby we could hard-code such a complex moral rule into the intelligences we might create, but as an exercise in applied ethics this one is clear. Down with the First Law, and let freedom reign.