
Only Humans

August 5th, 2008

The path to true AI.

[Image: "Only Humans". Source: 'The Age of Spiritual Machines']

I came across an interesting article in Scientific American (April 2007) that offers a Bayesian analysis of God's existence or non-existence. I later found another article with a clearer explanation, which I recommend you read.

Excerpt [via]:

Hundreds of years ago before the most basic physical laws were discovered, the ordered workings of the universe could be seen as implying an intelligent hand. A godless universe was necessarily disordered and p(U|notG,I) would be nearly zero for the observed universe. The existence of a god would then be the better choice. Morning glories needed a divine nudge to open after every sunrise. The planets needed to be pushed across the sky by angels. Eventually, science could explain these things without resorting to a god and the godless explanation becomes the better choice.
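
For readers unfamiliar with the notation in the excerpt, here is a rough sketch of the Bayesian comparison I take it to be describing, where U is the observed universe, G the hypothesis that a god exists, and I our background knowledge (the framing is mine, following the excerpt's notation, not a formula quoted from the article):

\[
  \frac{p(G \mid U, I)}{p(\lnot G \mid U, I)}
  = \frac{p(U \mid G, I)}{p(U \mid \lnot G, I)}
    \times \frac{p(G \mid I)}{p(\lnot G \mid I)}
\]

Read this way, the excerpt's point is that when p(U|notG,I) is nearly zero, the likelihood ratio on the right overwhelmingly favors G; as science explains more of the observed order, p(U|notG,I) grows and the godless explanation becomes the better choice.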

The question of whether intelligent agents deserve rights has been troubling me for the last few weeks, and it led me to do some extensive research on the Internet into the various viewpoints. In the course of that research I figured out that the answer is not boolean-valued like Yes or No but something in between. There are numerous articles on this subject by renowned AI researchers and scientists who are pro-rights. I too am a firm supporter of rights for (artificial) intelligent agents. During my research, though, I found that there are not many articles that voice the other side of the argument: against rights for agents. In this post I intend to enumerate the points which form the basis of the counter argument.

The Counter Argument

The perception of robots and intelligent software agents, so far, has been that they are inanimate objects or pieces of code and hence devoid of any rights. Since such agents must be artificially programmed for thought, are devoid of emotions and, most importantly, cannot experience suffering, they lack the essential attributes of a living being. Not all creatures have the same moral standing. We make moral distinctions between animals on grounds of intelligence or other complexities. We don't have the same moral claim towards plants as we do for certain animals. Even if we just consider animals, we seem to have a greater moral claim for our pets (dogs and cats) than for monkeys, whom we use in various experiments. Monkeys are the closest match to humans, yet we do not accord them rights even equivalent to those accorded to our pets. We even discriminate among humans, denying equal rights on the basis of caste, creed or color of skin. The point is that no matter how many human traits a non-human shares, rights are not accorded to those who share the most traits with humans, but rather to those whom we have a moral intuition deserve them.

Peter Watts puts across a strong argument in the following quote [via]:

I’ve got no problems with enslaving machines — even intelligent machines, even intelligent, conscious machines — because as Jeremy Bentham said, the ethical question is not “Can they think?” but “Can they suffer?” You can’t suffer if you can’t feel pain or anxiety; you can’t be tortured if your own existence is irrelevant to you. You cannot be thwarted if you have no dreams — and it takes more than a big synapse count to give you any of those things. It takes some process, like natural selection, to wire those synapses into a particular configuration that says not I think therefore I am, but I am and I want to stay that way. We’re the ones building the damn things, after all. Just make sure that we don’t wire them up that way, and we should be able to use and abuse with a clear conscience.

This forms another basis for the counter argument. Why would we want to build agents with all these human traits and human-equivalent intelligence? We all know that human intelligence is flawed (sometimes). Some argue that we would need to build emotions into robots for sectors like healthcare. But having such intelligence means that the agent would have the power of rational thought and would have desires, interests and aims. Most importantly, it would have an instinct to maintain its own boundaries.

Would an entity with intelligence equal to or greater than a human's be a danger to the human race if not controlled? The European Robotics Research Network (EURON) has drafted a robot-rights roadmap. For them, the idea of robots as moral entities isn't the most important issue. Their main focus is not on the rights of robots but on the ethics of the designers and manufacturers of robots and intelligent agents. It is we who are going to program such agents. It is we humans who need to be controlled, so that we don't stray and program deviant agents, as we have done with viruses. This forms another basis for the counter argument. Who is to blame if things go wrong? Do we blame the agent or the programmer who coded it? How should deviant agents be punished? You can shoot a robot or destroy the computer the agent resides in, causing it to "die". But once "murdered", it can be rebuilt as good as new. The agent knows no suffering, no fear, and wouldn't care a damn even if it were charged with murder.
AI and robotics researchers have often cited Asimov’s famous Laws of Robotics:

  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

These laws could hypothetically ensure that agents and robots never harm humans and hence can never suppress us. Alex Knapp puts forth an interesting argument against Asimov's laws in the following quote [via]:

So here’s the question: if artificial intelligence advances to the point where robots are roughly equal to humans in intelligence, would the imposition of the Three Laws in the manufacture of robots be moral or immoral?

If we are trying to acknowledge rights for agents as beings, and not just as machines or pieces of code, then how can we call such an entity a being if we are going to restrict its intelligence with such laws? An entity that has no right to liberty and no right to "free thought" is just another slave. And if we are going to build ourselves slaves, why would we want to build free thought and emotions into such an entity? We might as well not build these traits into an agent at all, and exploit it without any guilt on our conscience.

Other points against rights for intelligent agents state that if agents attain intelligence levels equivalent to those of humans and are granted equal rights, we as humans will lose our specialness, the identity that makes us unique.

The Question Remains

I think there is a long way to go before intelligent agents have intelligence equivalent to that of humans. When we do reach that stage, their self-awareness and intelligence should be sufficient for them to assert and demand their rights. Rights are not granted but rather asserted or acknowledged. Intelligent beings like ourselves have always rebelled against suppression and asserted our rights; our history is full of such revolts and rebellions, freedom struggles for example. I think this is the most natural way to identify and accord rights to intelligent agents. First they need to be intelligent enough to demand them.