Archive for March, 2008

This question has been troubling me for the last few weeks, which led me to do some extensive research on the Internet into the various viewpoints on it. In my research I found that the answer is not boolean-valued like Yes or No, but something in between. There are numerous articles on this subject by renowned AI researchers and scientists who are pro-rights, and I too am a firm supporter of rights for (artificial) intelligent agents. During my research, though, I found very few articles that voice the other side of the argument: against rights for agents. In this post I intend to enumerate the points which form the basis of the counter argument.

The Counter Argument

Robots and intelligent software agents have so far been perceived as inanimate objects or pieces of code, and hence as devoid of any rights. Since such agents must be artificially programmed for thought, lack emotions and, most importantly, cannot experience suffering, they lack the essential attributes of being alive. Nor do all creatures have the same moral standing. We make moral distinctions between animals on grounds of intelligence or other complexities. We do not extend the same moral consideration to plants as we do to certain animals. Even among animals, we seem to feel a greater moral obligation towards our pets (dogs and cats) than towards monkeys, whom we use in various experiments. Monkeys are the closest match to humans, yet we do not accord them rights even equivalent to those accorded to our pets. We even discriminate among humans and deny equal rights on the basis of caste, creed or color of skin. The point is that no matter how many human traits a non-human shares, rights are not accorded to those who share the most traits with humans, but rather to those we feel a moral intuition should have them.

Peter Watts puts across a strong argument in the following quote [via]:

I’ve got no problems with enslaving machines — even intelligent machines, even intelligent, conscious machines — because as Jeremy Bentham said, the ethical question is not “Can they think?” but “Can they suffer?” You can’t suffer if you can’t feel pain or anxiety; you can’t be tortured if your own existence is irrelevant to you. You cannot be thwarted if you have no dreams — and it takes more than a big synapse count to give you any of those things. It takes some process, like natural selection, to wire those synapses into a particular configuration that says not I think therefore I am, but I am and I want to stay that way. We’re the ones building the damn things, after all. Just make sure that we don’t wire them up that way, and we should be able to use and abuse with a clear conscience.

This forms another basis for the counter argument. Why would we want to build agents with all these human traits and human-equivalent intelligence? We all know that human intelligence is flawed (sometimes). Some argue that we would need to build emotions into robots for sectors like healthcare. But having such intelligence means that the agent would have the power of rational thought, would have desires, interests and aims, and, most importantly, would have an instinct to maintain its own boundaries.

Would an entity with intelligence equal to or greater than a human's be a danger to the human race if not controlled? The European Robotics Research Network (EURON) has drafted a roboethics roadmap. For them the idea of robots as moral entities isn't the most important issue; their main focus is not on the rights of robots but on the ethics of the designers and manufacturers of robots and intelligent agents. It is we who are going to program such agents. It is we humans who need to be controlled, so that we don't go astray and program deviant agents the way we have done with viruses. This forms another basis for the counter argument. Who is to blame if things go wrong? Do we blame the agent or the programmer who coded it? How should deviant agents be punished? You can shoot a robot or destroy the computer the agent resides in, causing it to “die”, but once “murdered” it can be rebuilt as good as new. The agent knows no suffering, no fear, and wouldn't care a damn even if it were charged with murder.
AI and robotics researchers have often cited Asimov’s famous Laws of Robotics:

  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

which could hypothetically ensure that agents/robots do not harm humans and hence can never suppress us. Alex Knapp puts forth an interesting argument against Asimov’s laws in the following quote [via]:

So here’s the question: if artificial intelligence advances to the point where robots are roughly equal to humans in intelligence, would the imposition of the Three Laws in the manufacture of robots be moral or immoral?

If we are trying to acknowledge rights for agents as beings, and not just as machines or pieces of code, then how can we call such an entity a being if we are going to restrict its intelligence with such laws? An entity that has no right to liberty and no right to “free thought” is just another slave. And if we are going to build ourselves slaves, why would we want to build free thought and emotions into such an entity in the first place? We might as well not build these traits into agents at all, and exploit them without any guilt on our conscience.
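To make that restriction concrete, here is a minimal sketch, in Python, of what an imposed Three Laws hierarchy might look like. Every name in it (Action, permitted, and so on) is invented purely for illustration and comes from no real robotics framework; note that the agent's own goals never even enter the check, which is exactly the kind of constraint on “free thought” at issue here.

    # Hypothetical sketch only: Asimov's Three Laws encoded as a fixed priority filter.
    # All names here are made up for illustration.
    from dataclasses import dataclass

    @dataclass
    class Action:
        harms_human: bool      # would this injure a human, or allow harm through inaction?
        disobeys_order: bool   # does it contradict an order given by a human?
        endangers_self: bool   # does it put the robot's own existence at risk?

    def permitted(action: Action) -> bool:
        """Apply the Three Laws in strict priority order."""
        if action.harms_human:       # First Law always wins
            return False
        if action.disobeys_order:    # Second Law yields only to the First
            return False
        if action.endangers_self:    # Third Law yields to both of the above
            return False
        return True

    # Whatever the agent itself wants is simply never consulted here:
    # its goals are filtered through rules it did not choose.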

Other points against rights for intelligent agents state that if agents attain intelligence levels equivalent to that of humans and are granted equal rights, we as humans will lose our specialness, the identity which makes us unique.

The Question Remains

I think there is a long way to go before intelligent agents have intelligence equivalent to that of humans. When we do reach that stage, their self-awareness and intelligence should be sufficient for them to assert and demand their rights. Rights are not granted but rather asserted or acknowledged. Intelligent beings like ourselves have always rebelled against suppression and asserted our rights; our history is full of such revolts and rebellions, freedom struggles for example. I think this is the most natural way to identify and accord rights to intelligent agents: first they need to be intelligent enough to demand them.

LifeLogger Needs You

March 14th, 2008

The LifeLogger project has been at a standstill for a couple of months now. Either I have been too busy (personal reasons) or I didn't have the knowledge required to tackle the challenging problems in the project. But I haven't just been sitting idle during the last few months. I figured that the best way to move the project forward would be to first acquire the relevant knowledge, so I have been busy reading AI literature and familiarizing myself with the algorithms and theory.

I intend to scrap the current code base and start afresh this time, but with clear-cut goals and a fast-paced development cycle. I'm looking to collaborate with interested folks so that we can jointly build this ambitious project. If you are interested, send me an email at anand [at] semanticvoid.com or just leave a comment.

Feed Yourself Me

March 10th, 2008

Discover new music, things I find interesting (my bookmarks) and my twitter-ing all via one feed. Point your favorite feed reader to http://friendfeed.com/anandkishore and stay connected.

PS: Don’t forget to add me to your friends list.