Archive for the Artificial Intelligence Category

It’s about time I shared one of the best articles (one of my favorites) I have come across – one that offers a unique perspective on AI and gets you a step closer to understanding Alan Turing. The article is by Edward A. Feigenbaum (co-recipient of the 1994 Turing Award along with Raj Reddy).

The *What* to *How* spectrum on page 100 is a must-see.

How the “What” Becomes the “How”

The Grammar Of Thought

| September 3rd, 2008

Update: Found this interesting book related to this post – The Language Instinct [link]

I have just started to scratch the surface of Natural Language Processing for my next project (involving NLP and Twitter – details to follow), and I already have a dozen questions bothering me. I shall attempt to put forth a few of those ideas and questions in this post. Let’s talk briefly about the structure of language. Language has several levels of structure:

  1. discourse – a group of sentences
  2. sentences
  3. phrases
  4. words
  5. and so on…

Between the ‘sentences’ and ‘words’ lies the syntactic structure of language. This syntactic structure is built using the parts of speech of the words: nouns, verbs, etc. Words are grouped into phrases whose formation is governed by the grammar rules, for example:

Sentence -> ‘Noun Phrase’ . ‘Verb Phrase’
‘Noun Phrase’ -> Determiner . Adjective . Noun
‘Verb Phrase’ -> Verb . ‘Noun Phrase’

A sentence is grammatically correct if it adheres to the grammar of the language, as described above. With just this knowledge about language (something you might have learnt in the 5th grade), we can see that for a candidate sentence to make sense in some language, it has to be composed of meaningful components, and these components have to appear in some specific order for it to logically make sense.
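As a toy illustration of the three rules above, here is a minimal sketch of a recursive-descent grammaticality check. The lexicon and the single rule per category are made-up assumptions for the example, nothing like a real parser:

```python
# Toy grammar from the post: S -> NP VP, NP -> Det Adj Noun, VP -> Verb NP.
# The lexicon below is an illustrative assumption.
LEXICON = {
    "the": "Det", "a": "Det",
    "quick": "Adj", "lazy": "Adj",
    "fox": "Noun", "dog": "Noun",
    "chased": "Verb",
}

GRAMMAR = {
    "S":  [["NP", "VP"]],
    "NP": [["Det", "Adj", "Noun"]],
    "VP": [["Verb", "NP"]],
}

def parse(symbol, tags, i):
    """Try to expand `symbol` at position i; return the new position or None."""
    if symbol not in GRAMMAR:                    # terminal: match a POS tag
        return i + 1 if i < len(tags) and tags[i] == symbol else None
    for rule in GRAMMAR[symbol]:                 # non-terminal: try each rule
        j = i
        for part in rule:
            j = parse(part, tags, j)
            if j is None:
                break
        else:
            return j
    return None

def is_grammatical(sentence):
    tags = [LEXICON[w] for w in sentence.lower().split()]
    return parse("S", tags, 0) == len(tags)

print(is_grammatical("the quick fox chased the lazy dog"))  # True
print(is_grammatical("chased the quick fox"))               # False
```

A word order that violates the rules fails even though every word is meaningful, which is exactly the point about components needing a specific order.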

Grammar of Thought

This has led me to ponder whether an analogous grammar exists for ‘thought’. Our thoughts can also be broken down into meaningful components, and the components here also have to follow some implicit ordering for the ‘thought’ to make sense. If you think about the way you think, you will notice that as you move from one thought to another there is some logical connection between them, just as between the sentences in a paragraph. If we could somehow get a formal representation of this grammar, wouldn’t it enable machines to think?

Language and Thought

There is plenty of literature out there that links the structure of language with the structure of thought. Benjamin Whorf states in his writings:

the structure of a human being’s language influences the manner in which he understands reality and behaves with respect to it

Thus, human cognition is shaped by the structure of language, which in turn is defined by the grammar of that language. Hence a machine capable of generating a sequence of grammatically correct sentences that also fit together logically (a discourse) should have some capacity for cognition. Even the Turing test uses natural language as a test for some level of cognition. Is this view of Natural Language Processing as a means of providing cognition to a machine correct? Could this be another path to achieving artificial intelligence? I would love to get an answer to this from the NLP experts out there.

Or is it just another one of my posts that don’t make sense because it’s 3 a.m. and I’m half asleep?

Only Humans

| August 5th, 2008

The path to true AI.

Only Humans
source: ‘The Age of Spiritual Machines’

I started working on my second weekend project – guess I’ll do something small every week. This one is an extension to LifeLogger. The aim is to analyze one’s daily and weekly browsing history and extract themes that could aid in recommendations. It is still a work in progress – so far I have been able to generate the following visualizations:

The following visualization depicts the dominant keywords/topics for one day (the terms are stemmed):

I had been reading a couple of Yahoo!-related articles and visualization blogs. This is captured by the visualization above – but there is still a lot of noise that I need to get rid of.

The next visualization depicts the linkages and clusters for the keywords. There exists a link between two terms if they occur in the same document. [may take some time to load - you'll need to zoom in to get a better look - click on 'compute layout' if the clusters don't show]

Both of the above visualizations depict important metrics that could be used to extract dominant themes from browsing history. Dominance should be inferred not just from frequency but also from the prevalence of a term across multiple pages. I still need to work on removing noise and running this on larger datasets, such as the browsing history for a week or so. If you have any ideas or good papers to recommend, that would be nice.
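One simple way to combine the two signals, raw frequency and prevalence across pages, is sketched below; the pages and the scoring formula are illustrative assumptions, not what the project actually uses:

```python
# Rank terms by total frequency weighted by the fraction of pages they appear on.
# The toy "pages" below are made-up examples.
from collections import Counter

pages = [
    "yahoo search yahoo news",
    "visualization blog yahoo",
    "visualization tools graph layout",
]

tokenized = [p.split() for p in pages]
freq = Counter(w for page in tokenized for w in page)          # total frequency
spread = Counter(w for page in tokenized for w in set(page))   # pages containing term

# dominance: a term scores high only if it is frequent AND spread across pages
dominance = {w: freq[w] * spread[w] / len(pages) for w in freq}

for term, score in sorted(dominance.items(), key=lambda kv: -kv[1])[:3]:
    print(term, round(score, 2))
```

A term that appears many times on a single page (pure frequency) scores lower here than one that recurs across several pages, which matches the intuition about prevalence.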