Archive for the Research Category

what the bleep!

| March 4th, 2011

Profanity is often prevalent in user-generated content (like comments). Websites that do not want to display such profane comments/content currently employ masking as a solution to get rid of profanity. Masking replaces the profanity in the content with characters like ####. The masked content, though, still conveys the existence of profanity to the user. Humans have built up a great language model for inferring missing words. Try it yourself – it should be easy for you to guess a bunch of profanity words that fit the following sentence:

What the ####!

My hack (Bleep) for the Yahoo! Spring ’11 Hackday is yet another natural-language hack that tries to remove the profanity from a comment without altering the semantics of the content. In brief, removing a profanity word from the content makes the sentence's parse tree less probable; the algorithm then tries to alter this improbable parse tree to find the best local parse tree.
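
To make the idea concrete, here is a toy sketch in Python that scores candidate replacements with a simple bigram language model instead of the parse-tree scoring Bleep actually uses; the counts, candidate list and function names are made up purely for illustration:

    def bigram_score(tokens, bigram_counts):
        """Un-normalised sentence score: product of bigram counts with crude smoothing."""
        score = 1.0
        for w1, w2 in zip(tokens, tokens[1:]):
            score *= bigram_counts.get((w1, w2), 0.1)
        return score

    def bleep(tokens, profane_index, candidates, bigram_counts):
        """Swap the profane token for the candidate that keeps the sentence most probable."""
        def substituted(word):
            return tokens[:profane_index] + [word] + tokens[profane_index + 1:]
        return max((substituted(w) for w in candidates),
                   key=lambda s: bigram_score(s, bigram_counts))

    # toy counts from an imaginary corpus
    counts = {("what", "the"): 100, ("the", "heck"): 40, ("heck", "!"): 30,
              ("the", "deal"): 10, ("deal", "!"): 2}
    print(bleep(["what", "the", "<profanity>", "!"], 2, ["heck", "deal"], counts))
    # -> ['what', 'the', 'heck', '!']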

Following are some corrections suggested by Bleep:

[Screenshots of Bleep's suggested corrections]

a speed gun for spam

| February 24th, 2011

Apart from the content, there are various metadata features (like IP, etc.) which can help tell a spammer and a regular user apart. Following are the results of some data analysis (done on roughly 8,000+ comments) on another feature which proves to be a good discriminator. Hopefully this will aid others fighting spam/abuse (if they are not already using a similar feature).

The discriminator referred to above is typing speed. The graph above plots the content length of a comment posted by a user against the (approximate) time taken to write it. If a user posts more than one comment within a window of 5–10 minutes, we can consider those comments consecutive posts; any comment falling outside this window is ignored. In the above plot, the time is inferred as [time_of_current_post - time_of_previous_post] for consecutive posts. Typing speed is thus estimated as (content_length)/(time_delta_between_two_posts). This is a non-intrusive way of measuring typing speed from the logs, though for a more accurate number one could always instrument the page with javascript (see example here). The dataset was manually labeled as spam (depicted in red) and ham (depicted in blue).
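
As a rough sketch, the log-based estimate above could be computed along the following lines (the field names user_id, ts and text are assumptions about the log schema, not the actual one):

    WINDOW = 10 * 60  # only pair up posts made within ~10 minutes of each other

    def typing_speeds(comments):
        """comments: iterable of dicts {user_id, ts (unix seconds), text}.
        Returns (content_length, time_delta, chars_per_sec) per consecutive post."""
        last_ts = {}
        speeds = []
        for c in sorted(comments, key=lambda c: c["ts"]):
            prev = last_ts.get(c["user_id"])
            if prev is not None and 0 < c["ts"] - prev <= WINDOW:
                delta = c["ts"] - prev
                speeds.append((len(c["text"]), delta, len(c["text"]) / delta))
            last_ts[c["user_id"]] = c["ts"]
        return speeds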

Wikipedia lists the average typing speed for a regular internet user at around 30 words per minute (wpm). With a conversion factor of 5 characters per word, this amounts to ~2.5 characters per second. The green line in the plot depicts a projection of the content length a user could have typed in the given time at this average typing speed (~2.5 chars per sec). We observe that this line clearly separates most spam from ham. The ham posts that fall above this line are usually trolls (as observed).
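
The green line then reduces to a one-line rule of thumb; a minimal sketch, assuming per-comment deltas like those computed above:

    AVG_CPS = 2.5  # ~30 wpm * 5 chars per word / 60 seconds

    def above_green_line(content_length, seconds_since_last_post):
        """True if the comment is longer than an average typist could have produced."""
        return content_length > AVG_CPS * seconds_since_last_post

    print(above_green_line(400, 60))   # 400 chars in a minute -> True (suspect)
    print(above_green_line(120, 90))   # ~1.3 chars/sec -> False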

This turns out to be a nice feature for telling spammers (bots and non-bots), trolls, and regular users apart. Bots often fail the Turing test and don't try hard enough to be more human-like. Non-bot spammers, on the other hand, have to take the pains of typing their spam comment repeatedly and usually end up pasting it. Try out the example here.

So spammers, fix yourselves, 'cause we have the speed gun to pull you over.

the evolving spammer

| September 8th, 2010

Though I’ve only recently started tackling it (spam), what I hear from veterans is that spam is a hard problem. It is so not because it's difficult to model (unlike some sub-domains in NLP) but because it is essentially a battle of human vs. human. The opponent is a constantly evolving machine: they learn, and they learn fast. This keeps those fighting spam on their toes, since you need to react to the new techniques spammers learn to get past the filters. Most of the work involved is thus reactive. Basically you keep iterating the following cycle: deploy -> observe -> learn -> model -> deploy

Now let's consider a sample spam text: “Find sexy girls and guys at xyz.com”. The simplest classifier (let's assume a Bayesian text classifier) will start to crumble once the spammer changes the text to “fin d sex y girl s an d guy s a t xyz.co m”. So you label and retrain your classifier to catch this new trick.
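
A toy illustration of why the spaced-out trick works against a word-based filter: the obfuscated text shares essentially no tokens with the vocabulary the classifier learned as spammy.

    spam = "find sexy girls and guys at xyz.com"
    obfuscated = "fin d sex y girl s an d guy s a t xyz.co m"

    learned_vocab = set(spam.split())   # tokens the filter has already seen as spam
    new_tokens = set(obfuscated.split())

    print(learned_vocab & new_tokens)   # -> set(): every "word" now looks unseen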

To get out of this vicious reactive cycle, you need to test your model proactively against the possible techniques a spammer could come up with to get away. This is where YODA (acronym for Overly Determined Abuser) comes in: a genetic-programming-based model of a spammer that I built (yes, we have 20% time as well) to break our spam detection models. Like any other genetic algorithm framework, it needs implementations of fitness functions and genome functions. The idea is to model the characteristics of a spammer (the variables a spammer can manipulate) as genome functions. The genome functions represent the minimal changes that can be made to the text: for instance, changing the case of characters, modifying sentence delimiters, modifying word delimiters, etc. The genome functions need not be just text-modification functions; they could also represent other attributes of a spammer (like IP, etc.). The fitness function represents the criteria the spammer is trying to optimize, i.e. getting past the filters with minimal distortion to the spam text. This could be the edit distance combined with the score returned by the model/filter.
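
YODA itself isn't public, but a couple of genome functions and a fitness function along the lines described above might look roughly like this; the names, weights and the filter_score/edit_distance hooks are illustrative assumptions:

    import random

    def mutate_case(text):
        """Genome function: randomly flip the case of a few characters."""
        return "".join(ch.swapcase() if random.random() < 0.1 else ch for ch in text)

    def mutate_word_delims(text):
        """Genome function: occasionally replace a space with a period or underscore."""
        return "".join(random.choice("._") if ch == " " and random.random() < 0.3 else ch
                       for ch in text)

    GENOME_FUNCTIONS = [mutate_case, mutate_word_delims]

    def fitness(original, candidate, filter_score, edit_distance):
        """Reward getting past the filter (low score) with minimal distortion."""
        return (1.0 - filter_score(candidate)) - 0.01 * edit_distance(original, candidate)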

Once the fitness function and many such genome functions have been defined, you can set these spam bots free to undergo selection, crossover, and mutation. In the end (when you decide to stop the evolution), you will end up with bots that are far more complex than just the basic genome functions defined; the transformations to the original text might be beyond what you could have thought of testing against.
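
A bare-bones evolution loop under the same assumptions as the previous sketch, where fitness_fn is the fitness function with the original text and the filter hooks pre-bound (crossover is omitted for brevity):

    import random

    def evolve(seed_text, fitness_fn, genome_functions, pop_size=50, generations=100):
        population = [seed_text] * pop_size
        for _ in range(generations):
            population.sort(key=fitness_fn, reverse=True)          # selection
            survivors = population[: pop_size // 2]
            mutants = [random.choice(genome_functions)(random.choice(survivors))
                       for _ in range(pop_size - len(survivors))]  # mutation
            population = survivors + mutants
        return max(population, key=fitness_fn)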

Following are some results of this model on the same spam text, using the above-mentioned basic genome functions:

- F.ind.s.exy g.irls.a.nd.g.uys.a.t.x.yz.com
- f iñd s exy gi rls ã ñd g úys a t xy z.çom
- FI ND sE Xy Gir Ls anD gu yS AT XyZ. COM
- Find_sexy girls and_guys at_xyz.com

I started working on my second weekend project; I guess I'll do something small every week. This one is an extension to LifeLogger. The aim is to analyze one's daily and weekly browsing history and extract themes which could aid in recommendations. It is still a ‘work in progress’ – currently I have been able to generate the following visualizations:

The following visualization depicts the dominant keywords/topics for one day (the terms are stemmed):

I had been reading a couple of Yahoo!-related articles and visualization blogs. This is captured by the above visualization – but there is still a lot of noise which I need to get rid of.
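
For reference, a rough sketch of how such stemmed keyword counts might be produced from a day's history, assuming one plain-text document per visited page; it uses NLTK's Porter stemmer, and the stopword list is a minimal stand-in:

    import re
    from collections import Counter
    from nltk.stem import PorterStemmer

    STOPWORDS = {"the", "a", "an", "and", "of", "to", "in", "is", "for", "on", "with"}
    stem = PorterStemmer().stem

    def dominant_terms(pages, top_n=20):
        """pages: iterable of plain-text page contents from one day's browsing history."""
        counts = Counter()
        for text in pages:
            tokens = re.findall(r"[a-z]+", text.lower())
            counts.update(stem(t) for t in tokens if t not in STOPWORDS)
        return counts.most_common(top_n)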

The next visualization depicts the linkages and clusters for the keywords. There exists a link between two terms if they occur in the same document. [may take some time to load - you'll need to zoom in to get a better look - click on 'compute layout' if the clusters don't show]
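
A small sketch of how those links might be built, assuming each page has already been reduced to a set of terms; the link weight simply counts the number of shared documents:

    from collections import Counter
    from itertools import combinations

    def cooccurrence_edges(pages_terms):
        """pages_terms: iterable of sets of terms, one set per page/document."""
        edges = Counter()
        for terms in pages_terms:
            for a, b in combinations(sorted(terms), 2):
                edges[(a, b)] += 1   # weight = number of documents both terms appear in
        return edges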

Both the above visualizations depict important metrics that could be used to extract dominant themes from the browsing history. Dominance should not be inferred just from frequency but also from the prevalence of a term across multiple pages. I still need to work on removing noise and on running this over larger datasets, like browsing history for a week or so. If you have any ideas or good papers to recommend, that would be nice.
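
One simple way to capture that notion of dominance is to weight a term's raw frequency by the fraction of pages it appears in; a sketch, not necessarily what I will end up using:

    from collections import Counter

    def dominance_scores(pages_tokens):
        """pages_tokens: list of token lists, one per page."""
        tf = Counter(t for page in pages_tokens for t in page)        # raw frequency
        df = Counter(t for page in pages_tokens for t in set(page))   # pages containing the term
        n = len(pages_tokens)
        return {t: tf[t] * (df[t] / n) for t in tf}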