the evolving spammer

| September 8th, 2010

Though I’ve only recently started tackling spam, what I hear from veterans is that it is a hard problem. It is hard not because it’s difficult to model (unlike some sub-domains in NLP) but because it is essentially a battle of human vs. human. The opponent is a constantly evolving adversary: they learn, and they learn fast. This keeps those fighting spam on their toes; you have to react to every new technique spammers devise to get past the filters. Most of the work is thus reactive. Basically, you keep iterating the following cycle: deploy -> observe -> learn -> model -> deploy

Now let’s consider a sample spam text: “Find sexy girls and guys at xyz.com”. The simplest classifier (let’s assume a Bayesian text classifier) will start to crumble once the spammer changes the text to “fin d sex y girl s an d guy s a t xyz.co m”. So you label and retrain your classifier to catch this new trick.

To get out of this vicious reactive cycle, you need to test your model proactively against the techniques a spammer could come up with to get away. This is where YODA (acronym for Overly Determined Abuser) comes in: a genetic-programming-based model of a spammer I built (yes, we have 20% time as well) to break our spam detection models. Like any other genetic algorithm framework, it needs implementations of fitness functions and genome functions. The idea is to model the characteristics of a spammer (the variables a spammer can manipulate) as genome functions. Each genome function represents a minimal change that can be made to the text: changing the case of characters, modifying sentence delimiters, modifying word delimiters, and so on. The genome functions need not be text modification functions alone; they could also represent other attributes of a spammer (like IP addresses). The fitness function represents the criterion the spammer is trying to optimize, i.e. getting past the filters with minimal distortion to the spam text. This could be the edit distance combined with the score returned by the model/filter.
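To make this concrete, here is a minimal sketch (in Python, with made-up helper names) of what a few genome functions and a fitness function might look like. The `filter_score` stub stands in for whatever spam probability your real model returns; this is illustrative, not YODA’s actual code:

```python
import random
import difflib

# --- Genome functions: minimal, composable text transformations ---
# Each one takes a string and returns a slightly modified string.

def toggle_random_case(text):
    """Flip the case of one randomly chosen character."""
    i = random.randrange(len(text))
    return text[:i] + text[i].swapcase() + text[i + 1:]

def insert_random_space(text):
    """Insert a space at a random position, splitting a word."""
    i = random.randrange(1, len(text))
    return text[:i] + " " + text[i:]

def swap_word_delimiter(text):
    """Replace one space with a '.' or '_'."""
    spaces = [i for i, c in enumerate(text) if c == " "]
    if not spaces:
        return text
    i = random.choice(spaces)
    return text[:i] + random.choice("._") + text[i + 1:]

GENOME_FUNCTIONS = [toggle_random_case, insert_random_space, swap_word_delimiter]

# --- Fitness: get past the filter with minimal distortion ---
# `filter_score` is a stand-in for the spam probability your real
# model returns; here it is just a toy keyword check.

def filter_score(text):
    return 1.0 if "sexy" in text.lower() else 0.0

def fitness(original, mutated):
    # Reward evading the filter, penalize distorting the text too much
    # (difflib's ratio is a cheap stand-in for edit distance).
    closeness = difflib.SequenceMatcher(None, original, mutated).ratio()
    return (1.0 - filter_score(mutated)) + closeness
```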

Once the fitness function and many such genome functions have been defined, you can set these spam bots free to undergo selection, crossover and mutation. In the end (whenever you decide to stop the evolution), you will end up with bots far more complex than the basic genome functions you defined. The transformations applied to the original text might be beyond anything you would have thought of testing against.
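A minimal generational loop might look like the sketch below, reusing `GENOME_FUNCTIONS` and `fitness()` from the sketch above (again, an illustration, not the actual implementation). A bot’s genome is simply a sequence of genome functions applied in order:

```python
import random

def express(genome, text):
    """Apply the genome's transformations to the text, in order."""
    for fn in genome:
        text = fn(text)
    return text

def evolve(original, pop_size=50, genome_len=8, generations=200):
    population = [[random.choice(GENOME_FUNCTIONS) for _ in range(genome_len)]
                  for _ in range(pop_size)]
    for _ in range(generations):
        # Selection: keep the fitter half (fitness is noisy here, since
        # the genome functions are randomized -- fine for a sketch).
        ranked = sorted(population,
                        key=lambda g: fitness(original, express(g, original)),
                        reverse=True)
        survivors = ranked[:pop_size // 2]
        # Crossover + mutation to refill the population.
        children = []
        while len(survivors) + len(children) < pop_size:
            a, b = random.sample(survivors, 2)
            cut = random.randrange(1, genome_len)      # one-point crossover
            child = a[:cut] + b[cut:]
            if random.random() < 0.2:                  # point mutation
                child[random.randrange(genome_len)] = random.choice(GENOME_FUNCTIONS)
            children.append(child)
        population = survivors + children
    best = max(population, key=lambda g: fitness(original, express(g, original)))
    return express(best, original)

print(evolve("Find sexy girls and guys at xyz.com"))
```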

Here are some results of this model on the same spam text, using the basic genome functions mentioned above:

- F.ind.s.exy g.irls.a.nd.g.uys.a.t.x.yz.com
- f iñd s exy gi rls ã ñd g úys a t xy z.çom
- FI ND sE Xy Gir Ls anD gu yS AT XyZ. COM
- Find_sexy girls and_guys at_xyz.com

stop words

| August 24th, 2010

In a recent implementation of a near-duplicate detection task, I relied on stop words as key features for extracting signatures from text. The results turned out to be good, but that’s not what I’m focusing on here. This runs quite contrary to the mindset we have been accustomed to in the IR/NLP domain, where these words are considered meaningless and must be removed before building any model or index. These words, on the other hand, encode a plethora of information: tense, plurality, (un)certainty, subjectivity and more. They bind the semantics of a sentence together and give it context. Yet (at least in the IR sense) we give them a negative connotation (STOP/NN -0.140192 sentiment). I would go a step further and say that we should stop calling them *stop* words and instead accept the inability of some IR systems to make correct use of them. How about *glue* words for a change? Or maybe not.
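For illustration, here is a toy sketch of stop-word-anchored signatures in the spirit of approaches like SpotSigs; the actual implementation I used may differ. Each signature is a stop word plus the next few content words, and documents sharing many signatures are likely near-duplicates:

```python
import re

# Toy sketch only: stop words act as *anchors* for signatures.
STOP_WORDS = {"a", "an", "the", "is", "are", "was", "at", "of", "in", "to", "on"}

def signatures(text, span=2):
    words = re.findall(r"[a-z']+", text.lower())
    sigs = set()
    for i, w in enumerate(words):
        if w in STOP_WORDS:
            # The `span` content words that follow the stop-word anchor.
            chain = [x for x in words[i + 1:] if x not in STOP_WORDS][:span]
            if len(chain) == span:
                sigs.add((w, *chain))
    return sigs

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

doc1 = "The cat sat on the mat in the sunny garden."
doc2 = "A cat sat on the mat in a very sunny garden!"
print(jaccard(signatures(doc1), signatures(doc2)))  # high overlap => near-dupes
```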

PS: In case you are looking for a list of stop words for different languages, here is a good one – http://members.unine.ch/jacques.savoy/clef/

I came across this interesting pattern while trying to visualize some Twitter streaming data. The following charts plot the ‘following’ counts vs. the ‘followers’ counts for ~200K user accounts, representing one hour’s worth of data obtained via the streaming API. User accounts falling around the line y ~= 0 tend to be celebrities (musicians, sportsmen etc.), companies, and news and info bots (like the WSJ, CNN etc.). The general population usually falls around the line y = x (the ‘I follow you, you follow me’ kind). But that’s not what’s interesting here (we all knew that). Looking at the zoomed-in plots (figures 2 and 5), we see a distinct square formed between (0,0) and (2000,2000). This is also observed in another day’s data (figure 5), so it’s not just an anomaly. The plateau formed at y = 2000 is a bit perplexing; I can’t seem to get my head around it. Figure 3 looks at the user accounts with ~2000 ‘following’ – a large number of these turn out to be spam bots, and I suspect most spam accounts (bots) are concentrated around this region. It’s as if the spam bots follow at most around 2000 users so as not to alert the spam controls by mass-following.

Any hypotheses that come to mind?
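If you want to reproduce a plot like this yourself, here is a rough sketch assuming a file of newline-delimited JSON statuses captured from the streaming API (the filename is hypothetical; `friends_count` and `followers_count` are the standard user-object fields):

```python
import json
import matplotlib.pyplot as plt

# 'following' = friends_count, 'followers' = followers_count
# in the Twitter user object.
following, followers = [], []
with open("stream_sample.json") as f:   # hypothetical capture file
    for line in f:
        user = json.loads(line).get("user")
        if user:
            following.append(user["friends_count"])
            followers.append(user["followers_count"])

plt.scatter(followers, following, s=2, alpha=0.3)
plt.xlabel("followers")
plt.ylabel("following")
plt.xlim(0, 10000)   # zoom in: the square shows up near (2000, 2000)
plt.ylim(0, 10000)
plt.show()
```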

Figure 1: plot for day 1

Figure 2: plot for day 1 (zoomed)

Figure 3: plot for day 1 with y ~ 2000

Figure 4: plot for day 2

Figure 5: plot for day 2 (zoomed)

Reading Less Is Reading More

| October 7th, 2009

If information is what drives you to the internet, like me, you might be spending roughly 60-70% of your time online reading blogs, news and feeds (not to forget Twitter). For me at least, reading online has superseded email (and updating social networks) as the most time-consuming activity. And yet everyone is busy generating more content rather than finding a way to consume all this information. This is precisely the problem we are trying to tackle with Dygest. At its core, Dygest is a summarization engine that sifts through all the noise and presents only the *real* content/news contained in any (news) article or text. Recently, we released an experimental version of a feed summarizer that uses the Dygest engine to summarize blog posts/news for any RSS/ATOM feed. This summarized feed can be subscribed to in any feed reader like Bloglines, Google Reader etc.
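Dygest’s engine itself is not public, but to give a flavor of what extractive summarization over a feed looks like, here is a toy frequency-based baseline (the feed URL is a placeholder; this is emphatically not the Dygest algorithm):

```python
import re
from collections import Counter

import feedparser  # pip install feedparser

def summarize(text, n_sentences=2):
    """Score each sentence by average word frequency, keep the top ones."""
    sentences = re.split(r"(?<=[.!?])\s+", text)
    freq = Counter(re.findall(r"[a-z']+", text.lower()))
    def score(s):
        tokens = re.findall(r"[a-z']+", s.lower())
        return sum(freq[t] for t in tokens) / (len(tokens) or 1)
    top = set(sorted(sentences, key=score, reverse=True)[:n_sentences])
    return " ".join(s for s in sentences if s in top)  # keep original order

feed = feedparser.parse("http://example.com/feed.xml")  # any RSS/ATOM feed URL
for entry in feed.entries[:5]:
    plain = re.sub(r"<[^>]+>", " ", entry.summary)      # strip HTML tags
    print(entry.title, "\n", summarize(plain), "\n")
```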

NOTE: A feed that our system has never encountered before should be summarized within a couple of minutes.

Feed Summarizer

On the whole, with Dygest, reading blogs becomes much faster and more concise, and consuming information a great deal easier. Imagine the time saved reading the summarized version compared to the original post (you are also not overwhelmed with useless information). See for yourself below:

Original Post

Summarized Post

While you might have the urge to head over to Dygest and summarize your entire Google Reader subscription list, I would recommend reading a bit further for some really cool stuff we have in store. If you must, though – click here to Dygest.


Summarizing Your Twitter Links

Readtwit is a really cool service, launched recently, that extracts links from your Twitter feed and packages them in a clean RSS format. Combining Readtwit with Dygest yields a summarized Twitter feed delivered to your favorite feed reader.

Steps to get a summarized twitter feed:

(1) Sign in to Readtwit.
(2) Copy the link on the ‘Get me the feed’ button:


(3) Paste this link into the Dygest interface and subscribe to the summarized feed returned in your favorite feed reader.


More To Come

This is just an experimental release of Dygest, so do send in your feedback on the summaries and help us improve. In the coming months we will be improving the algorithms and churning out other great applications of Dygest (there is something really cool in the works). So while we are busy teaching computers to read, Dygest your feeds – because reading less is reading more.

Follow us on twitter – @dygest