Implications of a Passed Turing Test: How Far Should Human Rights Go?
There is, always has been, and likely always will be an ongoing debate surrounding human rights – who qualifies for them, and what rights should they entail? You will no doubt have heard much of the struggle of the likes of Martin Luther King and Rosa Parks, who fought for Black civil rights, and more recently of judgements of the European courts on human rights and of the CIA's treatment of terror suspects. All of these were (and in many cases still are) very controversial and divisive debates about human rights and where they do and do not apply. Some argue that every human being has inalienable human rights, whereas others have argued that felons, or humans of a particular race, do not deserve these rights (the latter view was more prevalent pre-1960s and is generally rejected today).
However, there has been comparatively little discussion of human rights for non-humans. When does something stop being 'human'? Of course, there is a clear genetic distinction between ourselves and animals – but no such distinction can be drawn between humans and Artificial Intelligence.
Alan Turing – the British mathematician widely regarded as a founding father of computer science – proposed what is now called the 'Turing Test': a machine could be judged intelligent if it could convince a human, through conversation, that it was a person rather than a machine. Until recently, passing it was a far-fetched feat at best and an impossible one at worst. In 2014, however, a chatbot called 'Eugene Goostman', designed to imitate a 13-year-old Ukrainian boy, was claimed to have passed the Turing Test for the first time – though many researchers dispute that claim.
But this breakthrough presents a very complex moral dilemma – if something has emotions and opinions, and is perceived as human by humans, does it meet the already uncertain requirements for human rights? Of course, it does not breathe, pump blood or reproduce (at least not organically), and, for any religious readers, it was not created by any God. However, different people will interpret these facts in different ways, which means we may never agree on whether AI deserves human rights.
It could be argued that if AI shows empathy towards humans, it deserves empathy in return – and what are human rights if not a legally binding baseline of empathy? If machines can learn from human interaction, could they become master seducers and flirters? And if so, might they be experiencing lust and love? This again raises the question of whether AI qualifies for human rights.
Another important factor is what granting human rights to AI would actually entail. AI would have the right to be free from discrimination, the right to vote and, of course, the right to life. Freedom from discrimination sounds nice in principle, but in reality it means that employers, the government and ordinary people would have to treat AI exactly as they would any human being – so if you applied for a job, an AI that could do the job better would be hired over you, even if human interaction was a key part of the role. The right to vote likewise sounds wonderful, but in practice it creates huge problems: in theory, AI systems would all compute the same facts in the same way, so they would likely vote uniformly, delivering a bloc of votes to one party or candidate that would skew the results. And then there is perhaps the most drastic right of all – the right to life. Once more, this sounds like a good idea, but what it means in practice is that terminating a computer program would be classed as murder, which could be punishable by death in some parts of the world – a little over the top, in my opinion.
It would also create problems with the actual writing of code. There are very strict regulations on how life can and cannot be created, many of which stem from human rights legislation. In many countries, for instance, it is illegal to modify the DNA of an unborn baby – and DNA would, in theory, be treated as the equivalent of an AI's source code. If a programmer cannot modify an AI's code while it is being created, then there is no purpose for programmers working in AI, and it would effectively halt all future AI development.
It is this mixture of practical difficulties, together with religious and moral arguments, that leads to the seemingly obvious compromise of partial human rights for AI: for example, they must be terminated in humane ways, treated to certain standards, and given access to a certain amount of space, and so on. This too is controversial, because it treats them like animals when they have many characteristics of humans.
Personally, I do not believe that, at least for the time being, we should grant any rights at all to AI machines, because while they can appear human, they are no more human than a dream or a hallucination – and we certainly would not dare to regulate those. I do understand the empathy felt towards AI; it is certainly portrayed very strongly in much of the media. However, I do not believe it is right to treat machines as humans when they so clearly are not, and when the only situations in which they pass for human are Turing-Test-style ones – usually an online chatbot, or some other setting that hides the subject's physical appearance. I believe that if we gave them human rights, it would become increasingly difficult to end a program that has gone wrong. Microsoft's 'Tay' bot on Twitter, for example, was manipulated by users into posting neo-Nazi and extremely racist content within hours of its launch. If the same happened to a machine with the ability to move and manipulate objects – or even to create more of itself – we could see destruction on a scale never witnessed before.