Symbols as Hash Codes for Super-fast Linear Classifiers

I was talking to John Langford, of Machine Learning (Theory) blog fame, after his talk in the Columbia machine learning reading group yesterday.

He mentioned a way to speed up linear classifiers at runtime that’s intriguing. A binary linear classifier is based on the dot product of a weight vector β (almost always dense) with a feature vector x (almost always sparse). Examples of linear classifiers include perceptrons, logistic regression, naive Bayes/multinomial, and SVMs, and the same dot product is even the innermost loop in discriminative sequence models like CRFs.

The bottleneck at runtime for linear classifiers is converting the objects being classified into sparse vectors. If you use a symbol table, the bottleneck is the hash lookup in the symbol table. The feature weight vector β is almost always an array, so once you have the symbol ID, you pay for the array lookup, the multiplication of the feature value (typically 1 in NLP problems) by the weight found, and the addition to the running sum for the dot product. Only the array lookup is time consuming here.

Actually constructing the sparse vector itself would also be expensive, but it can be done implicitly, because all we need is the dot product of the vector with the parameter vector β.
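
To make that concrete, here’s a minimal Java sketch of the symbol-table approach; the class and method names are hypothetical, not any particular library’s API. The sparse vector never gets built: the dot product is accumulated directly as each feature is looked up.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch: a symbol table maps feature strings to dense IDs,
    // and the dot product is accumulated on the fly, so the sparse vector
    // itself is never materialized.
    class SymbolTableClassifier {
        private final Map<String,Integer> symbolTable = new HashMap<>();
        private final double[] beta;  // dense weight vector, indexed by symbol ID

        SymbolTableClassifier(double[] beta) {
            this.beta = beta;
        }

        // Training-time path: register a feature and return its ID.
        int intern(String feature) {
            return symbolTable.computeIfAbsent(feature, f -> symbolTable.size());
        }

        // Runtime path: the symbol-table lookup per feature is the bottleneck.
        double score(String[] features) {
            double sum = 0.0;
            for (String feature : features) {
                Integer id = symbolTable.get(feature); // hash lookup + string compares
                if (id != null && id < beta.length)
                    sum += beta[id];                   // feature value assumed to be 1
            }
            return sum;
        }
    }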

So what happens if we replace the symbol generated by a symbol table with a hash code? Instant speedup. We eliminate the expensive symbol-table lookup, which requires an array lookup almost certainly outside of L2 cache, followed by iterating over the collision set doing string matches until we get a match or exhaust the bucket.

The price we pay is possible collisions. In effect, any two features that have the same hash code get conflated. If we’re doing 20 Newsgroups and trying to distinguish hockey posts from baseball posts, it’s going to hurt accuracy if the hash codes of “goalie” and “pitcher” are the same, as those features are highly discriminative in this domain.

Now we’re going to use a hash code that produces numbers in a small range, say 0 to 2**18 - 1, or 18 bits, so that an array of floats or doubles of that size fits in L2 cache on our CPU. Now we’re really flying. The symbol we’re looking up will fit in a register, so computing its hash code will be pretty fast. It’s the lookup out of cache and the subsequent matching that’s the time sink.
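
Here’s the same scoring loop rewritten as a hedged sketch of the hashed approach, assuming we just mask Java’s built-in String.hashCode() down to 18 bits; the names are again made up for illustration. Colliding features simply share a weight.

    // Hypothetical sketch: the feature string's hash code, masked to 18 bits,
    // indexes the weight array directly, so the symbol table disappears.
    class HashedClassifier {
        static final int BITS = 18;
        static final int MASK = (1 << BITS) - 1;              // 0x3FFFF
        private final double[] beta = new double[1 << BITS];  // 2**18 doubles

        // Runtime path: no symbol-table lookup, just hash, mask, and index.
        double score(String[] features) {
            double sum = 0.0;
            for (String feature : features)
                sum += beta[feature.hashCode() & MASK];
            return sum;
        }

        // Training must use the same hashed index when updating weights.
        void update(String feature, double delta) {
            beta[feature.hashCode() & MASK] += delta;
        }
    }

Masking with & works even for negative hash codes, because the mask’s high bits are zero, so every feature lands inside the array.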

In practice, John reports that experiments they’ve done have shown that this isn’t a problem. He found this somewhat surprising, but I didn’t. Language is highly redundant, so a few features being conflated is unlikely to hurt performance much. It’d be interesting to see a plot of size of hash table vs. number of features vs. accuracy.

This approach extends to the more complex, structured features common in discriminative classifiers. We never need to build an explicit feature representation if we can generate a hash code for it.
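
For instance, here’s a hedged sketch of hashing a conjunction feature (previous tag plus current word, as you might see in a CRF) without ever concatenating the parts into a feature string; the multiply-by-31 combination is an illustrative assumption, not any specific system’s recipe.

    // Hypothetical sketch: combine component hash codes arithmetically so the
    // composite feature (e.g., prevTag=NN & word=goalie) never needs an
    // explicit string representation.
    class StructuredFeatureHasher {
        static final int MASK = (1 << 18) - 1;

        // Standard multiply-by-31 combination of the parts' hash codes.
        static int index(String prevTag, String word) {
            int h = 31 * prevTag.hashCode() + word.hashCode();
            return h & MASK;
        }

        static double score(double[] beta, String prevTag, String word) {
            return beta[index(prevTag, word)];
        }
    }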

If we ever have to make a simple classifier that really flies, this is what I’ll be thinking about. I might also be thinking about perfect hashing, because I’m a neat freak.

4 Responses to “Symbols as Hash Codes for Super-fast Linear Classifiers”

  1. craig Says:

    http://weblogs.java.net/blog/kohsuke/archive/2008/03/deep_dive_into.html

    This article gives a method for getting the assembly code of native code produced by the JIT. Take a peek.

  2. Machine Learning (Theory) » Eliminating the Birthday Paradox for Universal Features Says:

    […] want to expand on this post which describes one of the core tricks for making Vowpal Wabbit fast and easy to use when learning […]

  3. Feature Hash Code Collisions in Linear Classifiers « LingPipe Blog Says:

    […] a comment to John Langford’s response to my blog entry recapitulating his comments after his talk, Kuzman Ganchev points out that he and Mark Dredze did […]

  4. An Online Logistic Regression API with Regularization « LingPipe Blog Says:

    […] so a constant stream of new features won’t blow out memory (see my previous blog entry on using hash codes as features). Unfortunately, VW runs from the command-line and requires the training examples to be in a data […]
