Archive for the Artificial Intelligence Category


Although I started this project as an experimental weekend thingy (to play around with Google App Engine), the project has shaped up well. Before you surf over to another blog, wondering what the hell I’m talking about, let me introduce you to the “Personalized ARTICLE” aggregator (read as PARTICLE). The aim is to personalize a user’s online reading (just like what Findory did). Findory was an excellent service and I’ll be glad if I can achieve even an iota of what Greg created. This project is at a very rudimentary and experimental stage. Rather than tapping into the user’s reading history on the site (monitored by the links clicked), the idea is to study how a user’s *interests*, scattered around various “databases of interest” like del.icio.us, could be used to personalize online reading (news articles, blogs and more). This way the user could merrily browse the world wide web, bookmarking pages and doing his usual stuff, and let PARTICLE worry about making this data useful.

Click here to try PARTICLE

Presently you need to provide PARTICLE with your del.icio.us username, which it uses to analyze your *interests* and present you with recent news stories you may like. It works well if you have a decent number of bookmarks in del.icio.us. As I mentioned, the project is at a very rudimentary stage, so don’t feel disappointed by the results (ah! the unlucky few). I encourage you to play around with the app and recommend it to others. I’ll be making many changes/additions in the coming weeks.
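For the curious, here is a minimal sketch (in Python) of the kind of matching PARTICLE could do. This is purely illustrative and not the actual PARTICLE code: assume the tag counts have already been fetched from the user’s del.icio.us account, and the scoring is a naive bag-of-words overlap between those tags and each article’s text.

```python
from collections import Counter

def score_article(article_text, user_tags):
    """Score an article by how often the user's del.icio.us tags
    appear in its text (a crude bag-of-words overlap)."""
    words = Counter(w.lower() for w in article_text.split())
    # Weight each tag by how often the user has applied it.
    return sum(count * words[tag.lower()]
               for tag, count in user_tags.items())

def rank_articles(articles, user_tags, top_n=10):
    """Return the top_n articles ordered by descending tag overlap."""
    scored = sorted(articles,
                    key=lambda a: score_article(a["text"], user_tags),
                    reverse=True)
    return scored[:top_n]

# Illustrative data; in PARTICLE the tags would come from the user's
# del.icio.us account and the articles from recent news feeds.
user_tags = {"python": 12, "machine-learning": 7, "nlp": 3}
articles = [
    {"title": "Intro to NLP", "text": "hands-on NLP with Python"},
    {"title": "Gardening tips", "text": "roses, soil and sunlight"},
]
for article in rank_articles(articles, user_tags):
    print(article["title"])
```

A real system would obviously need smarter text processing (stemming, related-tag expansion, weighting by recency), but the core idea of matching a bookmark-derived interest profile against candidate articles is just this.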

Test drive PARTICLE at http://particle.semanticvoid.com. Kindly leave your feedback/suggestions in the comments or send me an email at ‘anand at semanticvoid.com’.

[UPDATE] Yahoo! Research has a similar project called Garçon.

The question of whether intelligent agents deserve rights has been troubling me for the last few weeks, which led me to do some extensive research on the Internet into various viewpoints on it. In my research I figured out that the answer is not a boolean Yes or No but something in between. There are numerous articles on this subject by renowned AI researchers and scientists who are pro-rights. I too am a firm supporter of rights for (artificial) intelligent agents. During my research, though, I found that there are not many articles that voice the other side of the argument: against rights for agents. In this post I intend to enumerate the points which form the basis of the counter argument.

The Counter Argument

The perception of robots/intelligent software agents, so far, has been as inanimate objects or pieces of code, and hence devoid of any rights. Since such agents must be artificially programmed for thought, are devoid of emotions and, most importantly, cannot experience suffering, they lack the attributes considered essential to being alive. Not all creatures have the same moral standing. We make moral distinctions between animals on grounds of intelligence or other complexities. We don’t have the same moral claim towards plants as we do for certain animals. Even among animals, we seem to have a greater moral claim for our pets (dogs/cats) than for monkeys (whom we use in various experiments). Monkeys are the closest match to humans, yet we do not accord them rights even equivalent to those accorded to our pets. We even discriminate among humans and do not accord equal rights on the basis of caste, creed or color of skin. The point is that no matter how many human traits are shared by a non-human, rights are not accorded to those who share the most traits with humans, but rather to those whom our moral intuition says deserve them.

Peter Watts puts across a strong argument in the following quote [via]:

I’ve got no problems with enslaving machines — even intelligent machines, even intelligent, conscious machines — because as Jeremy Bentham said, the ethical question is not “Can they think?” but “Can they suffer?” You can’t suffer if you can’t feel pain or anxiety; you can’t be tortured if your own existence is irrelevant to you. You cannot be thwarted if you have no dreams — and it takes more than a big synapse count to give you any of those things. It takes some process, like natural selection, to wire those synapses into a particular configuration that says not I think therefore I am, but I am and I want to stay that way. We’re the ones building the damn things, after all. Just make sure that we don’t wire them up that way, and we should be able to use and abuse with a clear conscience.

This forms another basis for the counter argument. Why would we want to build agents with all such human traits and equivalent human intelligence? We all know that human intelligence is flawed (sometimes). Some argue that we would need to build emotions into robots for sectors like healthcare. Having such intelligence means that the agent should have the power of rational thought, should have desires, interests and aims. Most importantly it should have an instinct to maintain its own boundaries.

Would an entity with equal or greater intelligence than a human be a danger to the human race, if not controlled? The European Robotics Research Network (Euron) has drafted a robot-rights roadmap. For them the idea of robots as moral entities isn’t the most important issue. Their main focus is not on the rights of robots but on the ethics of robot/intelligent-agent designers and manufacturers. It is we who are going to program such agents. It is we humans who need to be controlled, so that we don’t go astray and program deviant agents, as we have done with viruses. This forms another basis for the counter argument. Who is to blame if things go wrong? Do we blame the agent or the programmer who coded it? How should deviant agents be punished? You can shoot a robot or destroy the computer the agent resides in, causing it to “die”. But once “murdered”, it can be rebuilt as good as new. The agent knows no suffering, no fear, and wouldn’t care a damn even if it were charged with murder.
AI and robotics researchers have often cited Asimov’s famous Laws of Robotics:

  • First Law: A robot may not injure a human being, or, through inaction, allow a human being to come to harm.
  • Second Law: A robot must obey orders given it by human beings, except where such orders would conflict with the First Law.
  • Third Law: A robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

which could hypothetically ensure that agents/robots do not harm humans and hence can never suppress us. Alex Knapp puts forth an interesting argument against Asimov’s laws in the following quote [via]:

So here’s the question: if artificial intelligence advances to the point where robots are roughly equal to humans in intelligence, would the imposition of the Three Laws in the manufacture of robots be moral or immoral?

If we are trying to acknowledge rights for agents as beings and not just as machines or pieces of code, then how can we call such an entity a being if we are going to restrict its intelligence by such laws? An entity that does not have a right to liberty and a right to “free thought” is just another slave. If we are going to build ourselves slaves, why would we want to build free thought and emotions into such an entity? We could instead not build these traits into an agent at all, and exploit it without any guilt on our conscience.

Other points against rights for intelligent agents state that if agents attain intelligence levels equivalent to those of humans and are granted equal rights, we as humans will lose our specialness, the identity which makes us unique.

The Question Remains

I think there is a long way to go before intelligent agents have intelligence equivalent to that of humans. When we do reach that stage, their self-awareness and intelligence should be sufficient to enable them to assert and demand their rights. Rights are not granted but rather asserted or acknowledged. Intelligent beings like ourselves have always rebelled against suppression and asserted our rights; our history is full of such revolts and rebellions, freedom struggles for example. I think this is the most natural way to identify and accord rights to intelligent agents: first they need to be intelligent enough to demand them.

Doug Fisher is an associate professor of computer science and computer engineering at Vanderbilt University. I came across one of his interviews in which he discusses Artificial Intelligence, right from its definition to how it seems to be changing the world.

A ‘must listen‘ for all those artificial intelligence and machine learning enthusiasts. Following are a few excerpts from the interview. Alternatively, you can get hold of the audio as well:

His simple definition of AI:

Artificial Intelligence is the study and creation of programs that do what we would regard as intelligent if we saw them in humans and other animals.

When asked how artificial systems actually work, i.e. does a scientist need to program all the possibilities into the computer program, he responded:

Typically, no. The scientist has to think about a number of possibilities and think about… most people are familiar with, from when they took English in school, the idea of a grammar: what it means to be a legal English sentence. We don’t have to teach people all the possible legal English sentences in school, but we have to teach them the grammar that they can use to piece together legal English sentences. And a scientist has to look at enough possibilities so that they can extract something like the idea of a grammar that the program can use to create, assess and simulate situations that it hasn’t explicitly seen.

The above seems to be one of the best explanations of artificial intelligence and machine learning systems that I have ever come across. On one hand it sums up the core working model of such systems, and on the other it is easy enough for a three-year-old to comprehend.
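As a toy illustration of the grammar analogy (my own sketch, not from the interview): a handful of rules can generate sentences nobody enumerated in advance, just as a trained system can handle situations it never explicitly saw.

```python
import random

# A toy context-free grammar: five rules cover dozens of sentences,
# none of which had to be listed explicitly.
GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["the", "N"]],
    "VP": [["V", "NP"]],
    "N":  [["robot"], ["scientist"], ["program"]],
    "V":  [["builds"], ["studies"], ["simulates"]],
}

def generate(symbol="S"):
    """Expand a symbol by picking one of its rules at random.
    Words with no rule of their own (terminals) are returned as-is."""
    if symbol not in GRAMMAR:
        return [symbol]
    expansion = random.choice(GRAMMAR[symbol])
    return [word for part in expansion for word in generate(part)]

print(" ".join(generate()))  # e.g. "the scientist studies the robot"
```

Just as we teach students the grammar rather than every legal sentence, the scientist gives the program the generative rules rather than every possibility.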

One subtle point brought out in the interview was whether such systems could be trusted, with Doug stating that this surely seems to be one of the issues that has not been addressed.

They can be wrong. Who do you hold responsible if they are wrong? That’s one complication of using AI versus a human: if a human does something wrong, you know who is responsible.

In the interview Doug also talks about the various projects where AI is being used. One interesting application, he mentions, is being done at Vanderbilt University where there is a large library of cartoons. The aim of the application is to create novel cartoons by piecing together frames from older cartoons and resequencing them into new cartoons.

Q. How human-like could these systems ever get?
Doug: It might be easy for an AI system to pretend to be sad, pretend to be happy or pretend to be empathetic. Maybe it’s relatively easy for an AI system to sense sadness in you, but it is probably very, very difficult to actually create an AI system that is sad.

CAPTCHA This

June 5th, 2006

Here’s a cool CAPTCHA I came across at an IBM Developerworks blog:

[Image: a math CAPTCHA from the IBM Developerworks blog]

So brush up your mathematics before you plan to comment (-;
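If you wanted to roll a (far simpler) math CAPTCHA of your own, the arithmetic part is only a few lines. A hypothetical sketch, skipping the image rendering and distortion that make the real thing hard for bots:

```python
import random

def make_math_captcha():
    """Generate a simple arithmetic challenge and its expected answer.
    A real CAPTCHA would also render the question as a distorted image;
    this sketch handles only the math."""
    a, b = random.randint(2, 12), random.randint(2, 12)
    op = random.choice(["+", "-", "*"])
    answer = {"+": a + b, "-": a - b, "*": a * b}[op]
    return f"What is {a} {op} {b}?", answer

question, answer = make_math_captcha()
print(question)  # e.g. "What is 7 * 4?"
# Compare the commenter's input against `answer` on the server side.
```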