Archive for the Yahoo! Category

WYCIWYS

| September 21st, 2011

Many a time I’ve stared at Explored Flickr Photos and tried to grok their artistic nuances. For lack of artistic sensibility, I sometimes fail to understand the techniques or properties the photographer used or intended to capture. But the Flickr community is brimming with experts who often chime in, in the comments, about what they like and see. My #nlproc hack (for the upcoming Yahoo! Winter Hackday) aims to solve this by summarizing that expert knowledge (the wisdom of the crowd) for a photograph.

What You Comment Is What You See (WYCIWYS) is a Flickr hack that harnesses a photo’s comments to determine the attributes/properties of the photo that people are talking about. It also gives a positive (+ve) sentiment score for each attribute, to help a user gauge what other users find most interesting about a photo. Following are some outputs from WYCIWYS:

[Screenshots: sample WYCIWYS outputs]
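
To give a flavour of the idea, here’s a rough sketch (not the hack’s actual code – the attribute list, sentiment lexicon and context window below are made up): scan a photo’s comments for mentions of photographic attributes and tally a positive-sentiment score for each.

```python
# Illustrative sketch only: attributes, lexicon and window are assumptions.
import re
from collections import defaultdict

ATTRIBUTES = {"composition", "lighting", "colors", "bokeh", "contrast", "tones"}
POSITIVE = {"love", "great", "beautiful", "stunning", "perfect", "wonderful", "nice"}

def wyciwys(comments, window=4):
    """Return a {photo attribute: positive-sentiment score} summary of the comments."""
    scores = defaultdict(int)
    for comment in comments:
        tokens = re.findall(r"[a-z]+", comment.lower())
        for i, tok in enumerate(tokens):
            if tok in ATTRIBUTES:
                # Count positive words appearing within a few tokens of the attribute.
                context = tokens[max(0, i - window): i + window + 1]
                scores[tok] += sum(w in POSITIVE for w in context)
    return dict(scores)

comments = [
    "Love the composition and the warm tones!",
    "The lighting here is just stunning.",
    "Beautiful bokeh, great shot.",
]
print(wyciwys(comments))
# -> {'composition': 1, 'tones': 0, 'lighting': 1, 'bokeh': 2}
```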

what the bleep!

| March 4th, 2011

Profanity is often prevalent in user-generated content (like comments). Websites that don’t want to display profane comments currently deal with it by masking: the profanity in the content is replaced with characters like ####. The masked content, though, still conveys the existence of profanity to the user. Humans have built up a great language model for inferring missing words. Try it yourself – it should be easy for you to guess a bunch of profane words that fit the following sentence:

What the ####!

My hack (Bleep) for the Yahoo! Spring ’11 Hackday is yet another natural language hack; it tries to remove the profanity from a comment without altering the semantics of the content. In brief, simply deleting the profane word from the content leaves behind a less probable parse tree. The algorithm tries to alter this improbable parse tree, searching locally for the most probable parse it can recover.
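
To make that concrete, here’s a toy sketch of the candidate-and-rescore idea (not Bleep’s actual code – Bleep works on parse-tree probabilities, while a tiny smoothed bigram model over a made-up corpus stands in for that here): drop the profane token, optionally drop one neighbouring word, and keep the rewrite the model finds most probable.

```python
# Illustrative sketch only: a bigram LM stands in for Bleep's parse-tree scoring.
import math
from collections import Counter

CORPUS = [
    "what a great photo",
    "what a nice shot",
    "this is a great photo",
    "what the heck is this",
    "what is this place",
]

# Train a tiny add-one-smoothed bigram model on the made-up corpus.
unigrams, bigrams = Counter(), Counter()
for line in CORPUS:
    tokens = ["<s>"] + line.split() + ["</s>"]
    unigrams.update(tokens)
    bigrams.update(zip(tokens, tokens[1:]))
VOCAB = len(unigrams)

def log_prob(tokens):
    """Smoothed log-probability of a token sequence under the bigram model."""
    padded = ["<s>"] + tokens + ["</s>"]
    return sum(
        math.log((bigrams[(a, b)] + 1) / (unigrams[a] + VOCAB))
        for a, b in zip(padded, padded[1:])
    )

def bleep(comment, profane=("bleep", "####")):
    """Drop the profane token, optionally drop one more word, keep the likeliest rewrite."""
    kept = [t for t in comment.lower().split() if t not in profane]
    # Candidates: the profanity-free sentence plus versions with one extra word
    # removed, in case the profanity was load-bearing inside a phrase.
    candidates = [kept] + [kept[:i] + kept[i + 1:] for i in range(len(kept))]
    return " ".join(max(candidates, key=log_prob))

print(bleep("what the bleep is this"))  # -> "what is this" with this toy corpus
```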

Following are some corrections suggested by Bleep:





the evolving spammer

| September 8th, 2010

Though I’ve only recently started tackling spam, what I hear from veterans is that spam is a hard problem. It is hard not because it’s difficult to model (unlike some sub-domains in NLP) but because it is essentially a battle of human versus human. The opponent is a constantly evolving machine: they learn, and they learn fast. This keeps those fighting spam on their toes – you need to react to each new technique spammers pick up for getting past the filters. Most of the work involved is thus reactive. Basically, you keep iterating the following cycle: deploy -> observe -> learn -> model -> deploy

Now let’s consider a sample spam text: “Find sexy girls and guys at xyz.com”. The simplest classifier (let’s assume a Bayesian text classifier) will start to crumble once the spammer changes the text to “fin d sex y girl s an d guy s a t xyz.co m”. So you label this new variant and retrain your classifier to catch the new trick.
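
Here’s a toy illustration of why this happens (made-up training data, not any production system): the split-up text shares almost none of its tokens with the training vocabulary, so a bag-of-words Bayesian classifier loses confidence on it.

```python
# Illustrative sketch only: toy data showing how token splitting starves the features.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.naive_bayes import MultinomialNB
from sklearn.pipeline import make_pipeline

train_texts = [
    "find sexy girls and guys at xyz.com",    # spam
    "sexy girls waiting for you at xyz.com",  # spam
    "meeting moved to 3pm tomorrow",          # ham
    "can you review my pull request",         # ham
]
train_labels = ["spam", "spam", "ham", "ham"]

clf = make_pipeline(CountVectorizer(), MultinomialNB())
clf.fit(train_texts, train_labels)
spam_idx = list(clf.classes_).index("spam")

original = "Find sexy girls and guys at xyz.com"
obfuscated = "fin d sex y girl s an d guy s a t xyz.co m"

for text in (original, obfuscated):
    print(text, "->", round(clf.predict_proba([text])[0][spam_idx], 2))
# The intact message matches the spam vocabulary almost exactly; after splitting,
# barely any known feature survives tokenization, so the spam score drops.
```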

To get out of this vicious reactive cycle, you need to test your model proactively against the techniques a spammer could come up with to get away. This is where YODA (an acronym for Overly Determined Abuser) comes in – a genetic-programming-based model of a spammer that I built (yes, we have 20% time as well) to break our spam detection models. Like any other genetic algorithm framework, it needs implementations of fitness functions and genome functions. The idea is to model the characteristics of a spammer (the variables a spammer can manipulate) as genome functions. Each genome function represents a minimal change that can be made to the text: for instance, changing the case of characters, modifying sentence delimiters, or modifying word delimiters. Genome functions need not be just text-modification functions; they could also represent other attributes of a spammer (like IP address). The fitness function represents the criterion the spammer is trying to optimize, i.e. getting past the filters with minimal distortion to the spam text. This could be the edit distance combined with the score returned by the model/filter.

Once the fitness function and many such genome functions have been defined, you can set these spam bots free to undergo selection, crossover and mutation. In the end (when you decide to stop the evolution), you will end up with bots whose behaviour is far more complex than any of the basic genome functions on their own. The transformations to the original text may go beyond anything you would have thought of testing against.
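
Here’s a rough, self-contained sketch of that kind of setup (illustrative only – the stand-in keyword filter, the operators and the fitness weights are made up, not YODA’s actual internals): a genome is a sequence of basic text transformations, fitness rewards slipping past the filter while staying close to the original text, and a plain loop does truncation selection, one-point crossover and point mutation.

```python
# Illustrative sketch only -- not YODA's real code.
import difflib
import random

SPAM = "Find sexy girls and guys at xyz.com"
BLOCKLIST = {"sexy", "girls", "guys", "xyz.com"}   # stand-in spam filter's keywords

# Genome functions: minimal edits, parameterised by a gene value r in [0, 1).
def flip_case(text, r):
    i = int(r * len(text))
    return text[:i] + text[i].swapcase() + text[i + 1:]

def split_token(text, r):
    i = 1 + int(r * (len(text) - 1))
    return text[:i] + " " + text[i:]

def swap_delimiter(text, r):
    return text.replace(" ", ["_", ".", "-"][int(r * 3)], 1)

GENOME_FUNCS = [flip_case, split_token, swap_delimiter]

def random_gene():
    return (random.choice(GENOME_FUNCS), random.random())

def apply_genome(genome, text=SPAM):
    for func, r in genome:
        text = func(text, r)
    return text

def filter_score(text):
    """Fraction of blocklisted tokens the stand-in filter still finds (spammer wants 0)."""
    tokens = set(text.lower().split())
    return sum(t in tokens for t in BLOCKLIST) / len(BLOCKLIST)

def fitness(genome):
    text = apply_genome(genome)
    similarity = difflib.SequenceMatcher(None, SPAM, text).ratio()
    return (1 - filter_score(text)) + 0.5 * similarity   # evade the filter, distort little

def evolve(pop_size=40, genome_len=6, generations=30):
    population = [[random_gene() for _ in range(genome_len)] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        survivors = population[: pop_size // 2]            # truncation selection
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)
            child = a[:cut] + b[cut:]                      # one-point crossover
            if random.random() < 0.3:                      # point mutation
                child[random.randrange(genome_len)] = random_gene()
            children.append(child)
        population = survivors + children
    return apply_genome(max(population, key=fitness))

random.seed(7)
print(evolve())   # e.g. a lightly mangled variant the keyword filter no longer catches
```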

Following are some results of this model on the same spam text, using the above-mentioned basic genome functions:

- F.ind.s.exy g.irls.a.nd.g.uys.a.t.x.yz.com
- f iñd s exy gi rls ã ñd g úys a t xy z.çom
- FI ND sE Xy Gir Ls anD gu yS AT XyZ. COM
- Find_sexy girls and_guys at_xyz.com

`Fact`orize Your Search

| August 14th, 2009

Dygest and a hackday later, @sudheer_624 and I (@semanticvoid) are back with ‘dfacto’, codename for our latest search hack for Yahoo! Hackday Summer 2009.

I think search is undergoing a paradigm shift – it’s no longer about who presents the best ten blue links, but about presenting the answers upfront. Dfacto (pronounced ‘de facto‘, Latin for ‘by [the] fact‘) is aimed at this shift. A large percentage (nearly 68%) of queries are informational queries – ones where the searcher knows what she’d like to do or find but does not know how this can be achieved. Dfacto is aimed primarily at this class of queries: it presents the searcher with a set of facts associated with the query/topic. It uses natural language algorithms to find the facts that are most “semantically” related to the query; in lay terms, it tries to understand both your query and the results. I’ll save the algorithmic details for another post. The few examples below show how it works:

Disclaimer: This is a work in progress, so you might notice a few ‘facts’ that are irrelevant to the query.

Let’s say the searcher is (losing hair and) looking for the causes of hair loss. Normally he/she would need to click through a bunch of links to get an overview of the causes. This hack, on the other hand, makes life a bit easier by presenting the causes upfront:

[Screenshot: dfacto results for the query 'hair loss cause']

Along with each fact, we also list the source from which it was extracted. Alternatively, the searcher can select the facts he/she finds relevant and refine the search. This in turn yields a new set of ‘web results’ along with new, refined and related ‘facts’.
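
Since I’m saving the algorithmic details for another post, the snippet below is only an illustrative stand-in for the idea, not dfacto’s actual method: collect candidate ‘fact’ sentences from the result pages and rank them by TF-IDF cosine similarity to the query (the candidate sentences and the similarity measure here are made up).

```python
# Illustrative sketch only: a stand-in for ranking candidate "facts" against a query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

query = "hair loss cause"
candidate_facts = [
    "Hair loss can be caused by hormonal changes and certain medications.",
    "Stress and poor nutrition are common causes of temporary hair loss.",
    "The iPhone 3GS was released in June 2009.",
    "Table manners vary widely between cultures.",
]

vectorizer = TfidfVectorizer(stop_words="english")
matrix = vectorizer.fit_transform([query] + candidate_facts)
scores = cosine_similarity(matrix[0:1], matrix[1:]).ravel()

# Rank the candidate facts by similarity to the query and show them with scores.
for score, fact in sorted(zip(scores, candidate_facts), reverse=True):
    print(f"{score:.2f}  {fact}")
```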

Another example (one I particularly like) is a query about ‘table manners’. This precisely lists a set of etiquette rules to follow at the table.

[Screenshot: dfacto results for the query 'table manners']

Alternatively, Dfacto also serves well as a product research tool. A query for ‘iphone 3gs’ yields:

[Screenshot: dfacto results for the query 'iphone 3gs']

On another note, if you have a date in the coming weeks you might be interested in reading the list below (:

[Screenshot: dfacto results for the query 'first date tips']

Happy hacking!