Archive for the Machine Learning Category

Digital Immortality

| February 19th, 2008

Gordon Bell explains MyLifeBits in this article. A good read for those who still don’t know about the MyLifeBits project.

Gordon Bell and the Sense Cam

MyLifeBits is a memory surrogate. It’s digital immortality. It’s a database or transaction processing system to capture everything in your life, every keystroke, every mouse click. Basically I’m capturing all the minutiae of life.

Now that you know about MyLifeBits, you may also want to explore LifeLogger, my MyLifeBits-inspired project.

[Update] You might also be interested in Momenta.

Logging My Life

| April 21st, 2007

Gordon Bell has been recording every bit of his life for the past seven years. His custom-designed software, “MyLifeBits”, saves everything it can: every email he sends and receives, every document he types, every chat session he engages in, every Web page he surfs. The advantage of such software is obvious: total recall. It gives one the ability to search one’s life for any reference to a person or thing.

Inspired by it, I have decided to start logging my life as well. For now it is restricted to my online life, since I do not have resources like the SenseCam. The data collected in this process could be used in numerous ways: total recall, recommendations, predictions, and so on. As Peter Norvig says, “It’s about the data and not the algorithm.”

Head over to the Life Logger project page, where I am documenting how I have been logging my life, along with tools and algorithms for aggregating and analyzing such data.
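For a sense of what this can look like in practice, here is a minimal, hypothetical Python sketch (the file name, event format, and helper names are my own assumptions, not part of the project) that appends timestamped events to a local log and searches them later for total recall:

```python
import json
import time

LOG_FILE = "lifelog.jsonl"  # hypothetical local store, one JSON object per line


def log_event(kind, detail):
    """Append a timestamped event (a visited URL, a sent email subject, ...) to the log."""
    event = {"time": time.time(), "kind": kind, "detail": detail}
    with open(LOG_FILE, "a") as f:
        f.write(json.dumps(event) + "\n")


def search_log(term):
    """Total recall: return every logged event whose detail mentions the given term."""
    matches = []
    with open(LOG_FILE) as f:
        for line in f:
            event = json.loads(line)
            if term.lower() in str(event["detail"]).lower():
                matches.append(event)
    return matches


if __name__ == "__main__":
    log_event("web", "https://example.com/gordon-bell-interview")
    log_event("chat", "Discussed the SenseCam with a friend")
    print(search_log("sensecam"))
```

Because each line is an independent, timestamped record, the same append-only log can later feed whatever aggregation or analysis tools one builds on top of it.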

Happy logging :-)

Click here for the Life Logger Project homepage

Doug Fisher is an associate professor of computer science and computer engineering at Vanderbilt University. I came across one of his interviews in which he discusses Artificial Intelligence, right from its definition to how it seems to be changing the world.

A ‘must listen’ for all artificial intelligence and machine learning enthusiasts. Following are a few excerpts from the interview; alternatively, you can get hold of the audio as well.

His simple definition of AI:

Artificial Intelligence is the study and creation of programs that do what we would regard as intelligent if we saw them in humans and other animals.

When asked how artificial systems actually work, i.e. whether a scientist needs to program all the possibilities into the computer program, he responded:

Typically, no. The scientist has to think about a number of possibilities and think about… most people are familiar, from when they took English in school, with the idea of a grammar: what it means to be a legal English sentence. We don’t have to teach people all the possible legal English sentences in school, but we have to teach them the grammar that they can use to piece together legal English sentences. And a scientist has to look at enough possibilities so that they can extract something like the idea of a grammar that the program can use to create, assess and simulate situations that it hasn’t explicitly seen.

The above seems to be one of the best explanations of artificial intelligence and machine learning systems that I have ever come across. On one hand it sums up the core working model of such systems, and on the other hand it is easy enough for a three-year-old to comprehend.
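To make the grammar analogy concrete, here is a small, hypothetical sketch of my own (not from the interview) that learns which word may follow which from a handful of example sentences and then pieces together sentences it has never explicitly seen:

```python
import random
from collections import defaultdict

# A handful of training sentences; the program is never shown every legal sentence.
examples = [
    "the dog chased the cat",
    "the cat watched the bird",
    "the bird chased the dog",
]

# Learn which word can follow which: a crude stand-in for extracting a grammar.
follows = defaultdict(list)
for sentence in examples:
    words = sentence.split()
    for current, nxt in zip(words, words[1:]):
        follows[current].append(nxt)


def generate(start="the", length=5):
    """Piece together a new sentence from the learned word-pair rules."""
    words = [start]
    for _ in range(length - 1):
        options = follows.get(words[-1])
        if not options:
            break
        words.append(random.choice(options))
    return " ".join(words)


print(generate())  # may print e.g. "the dog chased the bird", which is not in the training set
```

The point is the same as in the quote: from a limited set of examples the program extracts a reusable structure, rather than being told every possible output in advance.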

One subtle point brought out in the interview was whether such systems can be trusted, with Doug stating that this is one of the issues that has not yet been addressed:

They can be wrong. Who do you hold responsible if they are wrong? That is one complication of using AI versus a human: if a human does something wrong, you know who is responsible.

In the interview Doug also talks about the various projects where AI is being used. One interesting application he mentions is at Vanderbilt University, where there is a large library of cartoons; the aim is to create novel cartoons by piecing together frames from older cartoons and resequencing them into new ones.

Q. How human-like could these systems ever get?
Doug: It might be easy for an AI system to pretend to be sad, pretend to be happy or pretend to be empathetic. Maybe it’s relatively easy for an AI system to sense sadness in you, but it is probably very, very difficult to actually create an AI system that is sad.

I came across this paper, which brings out an interesting point: an ensemble of Bayesian classifiers (called base classifiers) can sometimes predict a hypothesis more accurately than an individual Bayesian classifier. It reminds me of James Surowiecki’s ‘The Wisdom of Crowds’. There seems to be an analogy between the two: both assume that each individual (base classifier) makes its decision independently of the others.

An excerpt from the paper:

A popular method for creating an accurate classifier from a set of training instances is to train several different classifiers, and then to combine their predictions. Previous theoretical and empirical research has shown that an ensemble is often more accurate than any of the single classifiers in the ensemble. To create an ensemble, one generally must focus on two aspects: (1) which classifiers to use as components of the ensemble (generation of the base classifiers); and (2) how to combine their individual predictions into one (the integration procedure).
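As a rough, hypothetical illustration of both aspects (the paper does not use this code or library), the sketch below generates the base classifiers by bagging naive Bayes models on bootstrap samples and integrates their predictions by majority vote, then compares the ensemble against a single classifier:

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import BaggingClassifier
from sklearn.model_selection import cross_val_score
from sklearn.naive_bayes import GaussianNB

X, y = load_iris(return_X_y=True)

# A single Bayesian (naive Bayes) classifier...
single = GaussianNB()

# ...versus an ensemble of naive Bayes base classifiers, each trained on a
# bootstrap sample of the data, with predictions combined by majority vote.
ensemble = BaggingClassifier(GaussianNB(), n_estimators=25, random_state=0)

print("single classifier accuracy:", cross_val_score(single, X, y, cv=5).mean())
print("ensemble accuracy:         ", cross_val_score(ensemble, X, y, cv=5).mean())
```

Whether the ensemble actually wins depends on the data and on how independent the base classifiers really are, which is exactly the ‘wisdom of crowds’ condition mentioned above.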

Get hold of the paper here.