Author Archive

Git Rocks!!!

May 25, 2012

We’ve switched the version control system for Stan (my project at Columbia Uni) from Subversion to Git. I was skeptical when everyone told me how great Git was; the move from CVS to Subversion didn’t buy us much.

Git, on the other hand, is worth it. What I’ve liked about Git so far is:

  • Local Repository Copies: Every user gets a full copy of the repository. So you can work on a local version of the entire repository before “pushing” any changes to the main repository. (So what was a commit in Subversion is now a commit followed by a push.) This makes it easy to work on the subway, but it also means you can keep things under version control without polluting the public server.
  • Speed: Uploading the 40MB Boost C++ sources to Subversion took, roughly speaking, forever (tens of minutes). In Git, it’s super fast. (Both hosted by Google, so I don’t think it’s the network or servers.)
  • Branching: What makes local repositories work really well is branching; it’s way easier to branch and merge in Git than in Subversion.
  • Reports: All the commands like “git diff” and “git status” give you more information than Subversion, which is actually very helpful.

If you want to read about Git, I can recommend

  • Chacon, Scott. 2009. Pro Git. Apress.

It’s free online in every format imaginable from the author.

Ryan tells me that GitHub is the bomb, too, and when Ryan recommends something, I listen (he told me the move to Subversion was minor, by the way). It apparently has a great community and a great way to suggest changes to other people’s projects (pull requests). We may move the Columbia project there from Google Code. (We can’t do the same for LingPipe, at least in their free open source area, because of our quirky license.)

Interannotator Agreement for Chunking Tasks Like Named Entities and Phrases

May 18, 2012

From the Emailbox

Krishna writes,

I have a question about using the chunking evaluation class for inter-annotator agreement: how can you use it when the annotators might have missing chunks, i.e., if one of the files contains more chunks than the other?

The answer’s not immediately obvious because the usual application of interannotator agreement statistics is to classification tasks (including things like part-of-speech tagging) that have a fixed number of items being annotated.

Chunker Evaluation

The chunker evaluations built into LingPipe calculate the usual precision and recall measures (see below). These evaluations compare a set of response chunkings to a set of reference chunkings. Usually the reference is drawn from a gold-standard corpus and the response from an automated system built to do chunking.

Precision (aka positive predictive accuracy) measures the proportion of chunks in the response that are also in the reference. Recall (aka sensitivity) measures the proportion of chunks in the reference that are in the response. If we swap the reference and response chunkings, we swap precision and recall.
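In symbols, writing TP for the chunks in both the reference and the response, FP for the chunks only in the response, and FN for the chunks only in the reference (the counts aren’t named explicitly above, but these are the standard ones):

\mbox{precision} = TP / (TP + FP)

\mbox{recall} = TP / (TP + FN)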

True negatives aren’t really being counted here — theoretically there are a huge number of them — any possible span with any possible tag could have been labeled. LingPipe just sets the true negative count to zero, and as a result, specificity (TN/[TN+FP]) doesn’t make sense.

Interannotator Agreement

Suppose you have chunkings from two human annotators. Just treat one as the reference and one as the response and run a chunking evaluation. The precision and recall values will tell you which annotator returned more chunks. For instance, if precision is .95 and recall .75, you know that the annotator assigned as the reference chunking had a whole bunch of chunks the other annotator didn’t think were chunks, but most of the chunks found by the response annotator were also chunks of the reference annotator.

You can use F-measure as an overall single-number score.
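Here’s a minimal sketch of that computation in Java (the Chunk record and class names are made up for illustration; this is not the LingPipe evaluation API):

    import java.util.HashSet;
    import java.util.Set;

    // A chunk is identified by its span and type; two chunks agree only if they match exactly.
    record Chunk(int start, int end, String type) { }

    class ChunkAgreement {
        // Precision, recall, and F-measure treating one annotator's chunks as the
        // reference and the other's as the response.
        // Assumes both annotators produced at least one chunk.
        static double[] evaluate(Set<Chunk> reference, Set<Chunk> response) {
            Set<Chunk> truePositives = new HashSet<>(response);
            truePositives.retainAll(reference);        // chunks both annotators produced
            double precision = (double) truePositives.size() / response.size();
            double recall = (double) truePositives.size() / reference.size();
            double f = 2.0 * precision * recall / (precision + recall);
            return new double[] { precision, recall, f };
        }
    }

Swapping the two arguments swaps precision and recall, just as described above.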

The base metrics are all explained in

and their application to chunking in

Examples of running chunker evaluations can be found in

LingPipe Annotation Tool

If you’re annotating entity data, you might be interested in our learn-a-little, tag-a-little tool.

Now that Mitzi’s brought it up to compatibility with LingPipe 4, we should move citationEntities out of the sandbox and into a tutorial.

Standard Output Ruins Everything!

May 9, 2012

The title is a paraphrase from Dirk Eddelbuettel on the Rcpp mailing list (an interface tool for R and C++), but the lesson also applies to Java.

Don’t Write to Standard Output!

One of the first lessons of writing an API (as opposed to something that only runs from the command line) is that you never ever ever write to standard output.

The reasons are that (1) you never know how someone might configure standard output around you (it’s resettable in Java), and (2) you never know what context your API will run in: it may be running in a servlet or in a Swing GUI, where standard output is invisible to your user (but does clog up the logs and the shell from which the Swing GUI was invoked).

So What do You do Instead?

1. Throw an exception if there’s some kind of error. See my previous post, “When to catch, pass, or throw exceptions?”

Of course you have to be careful here about the context things are running in, too, especially if you try throwing a runtime exception instead of a checked exception. This is why the Google style guide for C++ forbids exceptions!

2. If there’s no error, the common advice is to write to a logger. We didn’t do that in LingPipe because we didn’t want any dependencies on other code built into LingPipe. We also didn’t want every user of LingPipe to have to configure a logger like log4j or Java’s built-in logger. The other issue with loggers is that they have one top-level config, so it gets confusing with multiple packages running if you use high-level config in the properties files (I know you can configure per-package, but people often don’t and get surprised).

Alternatively, you can write messages into something like a string builder. Then they can be sent to whatever output source you want.

The com.aliasi.io.Reporter class may look like a standard logger, but it’s only configurable programmatically and is set by default to just accumulate results. Note how it’s passed into logistic regression fitting, not just there by default in the background.

A second alternative is to pass an OutputStream into the function that might want to write and write to that. In a command-line setting it can be set to the standard output. In an embedded context, it might be set to a byte array output stream wrapped in a PrintStream, which will just accumulate the results until they can be dealt with. For instance, they might be written into a servlet output stream for use in a web app.
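Here’s a minimal sketch of that pattern (the class and method names are made up for illustration; only the standard java.io classes are real):

    import java.io.ByteArrayOutputStream;
    import java.io.OutputStream;
    import java.io.PrintStream;

    class Fitter {
        // The API method reports to whatever stream the caller hands it,
        // never to System.out directly.
        static void fit(double[] data, OutputStream out) {
            PrintStream report = new PrintStream(out);
            report.println("starting fit on " + data.length + " items");
            // ... do the real work, reporting progress to report ...
            report.println("done");
            report.flush();
        }

        public static void main(String[] args) {
            // Command-line setting: standard output is fine.
            fit(new double[100], System.out);

            // Embedded setting: accumulate the report in memory and deal with it
            // later, e.g., by writing it to a servlet output stream.
            ByteArrayOutputStream buffer = new ByteArrayOutputStream();
            fit(new double[100], buffer);
            String accumulatedReport = buffer.toString();
        }
    }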

Mavandadi et al. (2012) Distributed Medical Image Analysis and Diagnosis through Crowd-Sourced Games: A Malaria Case Study

May 5, 2012

I found a link from Slashdot of all places to this forthcoming paper:

The main body of the paper is about the application to malaria diagnosis. But I’m more interested in the statistical techniques they used for crowd sourcing.

None of the nine authors, the reviewer(s) or editor(s) knew that their basic technique for analyzing crowd sourced data has been around for over 30 years. (I’m talking about the statistical technique here, not the application to distributed diagnosis of diseases, which I don’t know anything about.)

Of course, many of us reinvented this particular wheel over the past three decades, and the lack of any coherent terminology for the body of work across computer science, statistics, and epidemiology is part of the problem.

Previous Work

The authors should’ve cited the seminal paper in this field (at least it’s the earliest one I know — if you know earlier refs, please let me know):

  • Dawid, A. P. and A. M. Skene. 1979. Maximum likelihood estimation of observer error rates using the EM algorithm. Applied Statistics 28(1):20–28.

Here’s a 20-year-old paper on analyzing medical image data (dental X-rays) with similar models:

  • Espeland, M. A. and S. L. Handelman. 1989. Using latent class models to characterize and assess relative error in discrete measurements. Biometrics 45:587–599.

Mavandadi et al.

Mavandadi et al. use an approach they call a “binary channel model for gamers”. On page 4 of part II of the supplement to their paper, they define a maximum a posteriori estimate that is the same as Dawid and Skene’s maximum likelihood estimate. It’s the same wheel I reinvented in 2008 (I added hierarchical priors because I was asking Andrew Gelman and Jennifer Hill for advice) and that several groups have subsequently reinvented.

I didn’t understand the section about “error control coding” (starting with whether they meant the same thing as what I know as an “error correcting code”). Why have an annotator annotate an item an odd number of times and then take a majority vote? You can build a probabilistic model for reannotation with any number of votes, one that presumably would take into account the correlation (a fixed effect) from having the same annotator.

Role of Automatic Classifiers

As in Raykar et al.’s 2009 JMLR paper, Mavandadi et al. also include a machine-based system. But it is not as tightly linked as in the work of Raykar et al. It’s just trained from the data a la Padhraic Smyth’s mid-1990s model of crowdsourcing crater location data and then training image analysis models on the resulting crowdsourced data.

Mavandadi et al. instead run their automatic classifier first, then if it’s not confident, hand it over to the crowd. This is, by the way, the standard practice in speech-recognition-based automated call centers.

Mavandadi et al. should check out (Sheng et al. 2010), which analyzes when you need to find another label, also using a Dawid-and-Skene-type model of data annotation. It’s also a rather common topic in the epidemiology literature, because it’s the basis of the decision as to which diagnostic test to administer next, if any, in situations like breast cancer diagnosis (which involves notoriously false-positive-prone image tests and notoriously false-negative-prone tissue tests).

I didn’t see any attempt by Mavandadi et al. to calibrate (or even measure) their system’s confidence assessments. I’d wait for that analysis before trusting their output.

Quick iPad 3 Review: Wow!

April 9, 2012

My iPad 3 arrived Friday afternoon. I’ve been using the iPad 1 for the past year and a half or so for all of my technical reading.

Mirroring My Old iPad

After synching my iPad 1 with my Macbook Air, when I plugged in the iPad 3 for the first time, it gave me the option of just mirroring what I had on the old iPad. Yes, please. It worked like a charm. A guy could get spoiled with this kind of treatment. (On the other hand, I still feel like configuring an iPad is making a deal with the Borg; see my previous post, Resistance is futile — I’ve been assimilated by Apple.)

Pro

The Retina display on the iPad 3 is breathtaking. It’s a qualitatively different experience for reading text.

The iPad 1 was good, but it still looked like reading text on a computer. The iPad 3 feels more like reading a magazine or a journal article. The text is that sharp. Even on tiny subscripts in formulas, there’s no aliasing. The following link is to Apple’s demo, which, if anything, understates the perceivable difference:

The jump in quality from iPad 1 to iPad 3 seems much more noticeable than the jump from standard def video (480 vertical lines) to full 1080p high def (1080 vertical lines). In fact, HD video on the iPad 3 is just stunning (I’m running out of adjectives here). The speaker also sounds surprisingly clear for such a little device.

Cons

It’s heavier and fatter than the iPad 1. It’s just enough heavier that it’s much more uncomfortable to hold with one hand while reading, which is what I’m often trying to do on the subway. The iPad 2 is the thinnest and lightest of the three, but it’s hardly a Kindle.

The iPad 3 runs considerably hotter than the iPad 1. It doesn’t get as hot as my Macbook Air when running statistical simulations. But hot enough to notice. Nothing to worry about, but it adds to the unpleasantness of holding it.

I don’t notice any speed difference in the things I do, which is a bummer. It still takes GoodReader a dog’s age to load an old scanned PDF and flip the pages. It’s just that they’re much sharper when they come up.

It seems to take longer to recharge, but I’m not 100% sure.

Conclusion

For my use case, which is mainly for reading technical papers at home, on the subway, and at work, and secondarily for board games (Carcassonne, Neuroshima Hex, Ticket to Ride) and for video (Vimeo and YouTube HD look awfully nice), it’s a no-brainer. The iPad 3 blows away anything else I’ve ever seen, no contest.

All Aboard for Quasi-Productive Stemming

April 4, 2012

One of the words Becky and I are having annotated for word sense (collecting 25 non-spam Mechanical Turk responses per word) is the nominal (noun) use of “board”.

One of the examples was drawn from a text with a typo where “aboard” was broken into two words, “a board”. I looked at the example, and being a huge fan of nautical fiction, said “board is very productive — we should have the nautical sense”. Then I thought a bit longer and had to admit I didn’t know what “board” meant all by itself. I did know a whole bunch of terms that involved “board” as a stem:

  • inboard
  • outboard
  • aboard, onboard
  • overboard, “by the board”
  • larboard (port)
  • weatherboard (facing the weather [wind])
  • starboard
  • above board (on deck)

And what about “seaboard”? As in the “Eastern seaboard”.

The nautical meaning wasn’t listed in WordNet, but dictionary.com has an entry for board that lists it as one of two nautical senses. Words have a surprising number of meanings if you’re willing to go into low frequency, archaic/obsolete and domain-specific usages.

The nautical sense in play is “side of a ship”. It also lists an obsolete sense meaning edge or side of anything. So the nautical sense is just a specialization of this obsolete sense. That’s one way in which meaning drift occurs.

This is all consistent with “side”, cf., “inside”/”inboard”, “outside”/”outboard”, and “aside”/”aboard”. The “side” in question here seems to have drifted to something like “side of an enclosed structure”.

This is the same problem we had with our morphological annotation project at LingPipe — there were words that seemed to be compounds, but one of the roots didn’t really stand alone in (common, everyday) English.

Natural Language Generation for Spam

March 31, 2012

In a recent comment on an earlier post on licensing, we got this spam comment. I know it’s spam because of the links and the URL.

It makes faculty adage what humans can do with it. We’ve approved to beacon bright of that with LingPipe’s authorization — we artlessly can’t allow the attorneys to adapt our own arbitrary royalty-free license! It was advised to accept some AGPL-like restrictions (though we’d never heard of AGPL). At atomic with the (A)GPL, there are FAQs that I can about understand.

ELIZA, all over Again

What’s cool is how they used ELIZA-like technologies to read a bit of the post and insert it into some boilerplate-type generation. There are so many crazy and disfluent legitimate comments that with a little more work, this would be hard to filter out automatically. Certainly the WordPress spam filter, Akismet, didn’t catch it, despite the embedded links.

Black Hat NLP is Going to Get Worse

It would be really easy to improve on this technology with a little topic modeling and better word spotting (though they seem to do an OK job of that) and better language modeling for generation. Plus better filtering a la modern machine translation systems.

The real nasty applications of such light processing and random regeneration will be in auto-generating reviews and even full social media, etc. It’ll sure complicate sentiment analysis at scale. You can just create blogs full of this stuff, link them all up like a good SEO practitioner, and off you go.

Cross Validation vs. Inter-Annotator Agreement

March 12, 2012

Time, Negation, and Clinical Events

Mitzi’s been annotating clinical notes for time expressions, negations, and a couple other classes of clinically relevant phrases like diagnoses and treatments (I just can’t remember exactly which!). This is part of the project she’s working on with Noemie Elhadad, a professor in the Department of Biomedical Informatics at Columbia.

LingPipe Chunk Annotation GUI

Mitzi’s doing the phrase annotation with a LingPipe tool, which can be found in the LingPipe sandbox project citationEntities.

She even brought it up to date with the current release of LingPipe and generalized the layout for documents with subsections.

Our annotation tool follows the tag-a-little, train-a-little paradigm, in which an automatic system based on the already-annotated data is trained as you go to pre-annotate the data for a user to correct. This approach was pioneered in MITRE’s Alembic Workbench, which was used to create the original MUC-6 named-entity corpus.

The chunker underlying LingPipe’s annotation toolkit is based on LingPipe’s character language-model rescoring chunker, which can be trained online (that is, as the data streams in) and has quite reasonable out-of-the-box performance. It’s LingPipe’s best out-of-the-box chunker, though CRFs can be engineered to outperform it with good feature engineering.

A very nice project would be to build a semi-supervised version of the rescoring chunker. The underlying difficulty is that our LM-based and HMM-based models take count-based sufficient statistics.

It Works!

Mitzi’s getting reasonable system accuracy under cross validation, with over 80% precision and recall (and hence over 80% balanced F-measure).

That’s not Cricket!

According to received wisdom in natural language processing, she’s left out a very important step of the standard operating procedure. She’s supposed to get another annotator to independently label the data and then measure inter-annotator agreement.

So What?

If we can train a system to perform at 80%+ F-measure under cross-validation, who cares if we can’t get another human to match Mitzi’s annotation?

We have something better — we can train a system to match Mitzi’s annotation!

In fact, training such a system is really all that we often care about. It’s much better to be able to train a system than another human to do the annotation.

The other thing we might want a corpus for is to evaluate a range of systems. There, if the systems are highly comparable, the fringes of the corpus matter. But perhaps the small, but still p < 0.05, differences in such systems don't matter so much. What the MT people have found is that even a measure that's roughly correlated with performance can be used to guide system development.

Error Analysis and Fixing Inconsistencies

Mitzi’s been doing the sensible thing of actually looking at the errors the system’s making under cross validation. In some of these cases, she’d clearly made a braino and annotated the data wrong. So she fixes it. And system performance goes up.

What Mitzi’s reporting is what I’ve always found in these tasks. For instance, she inconsistently annotated time plus date sequences, sometimes including the times and sometimes not. So she’s going back to correct them so that all of the time information is consistently included in a phrase (makes sense to me).

After a couple of days of annotation, you get a much stronger feeling for how the annotations should have gone all along. The annotations drifted so much over time in this fashion in the clinical notes annotated for the i2b2 Obesity Challenge that the winning team exploited time of labeling as an informative feature to predict co-morbidities of obesity!

That’s also not Cricket!

The danger with re-annotating is that the system’s response will bias the human annotations. System-label bias is also a danger with single annotation under the tag-a-little, learn-a-little setup. If you gradually change the annotation to match the system’s responses, you’ll eventually get to very good, if not perfect, performance under cross validation.

So some judgment is required in massaging the annotations into a coherent scheme, but one that you care about, not one driven by the learned system’s behavior.

On the other hand, you do want to choose features and chunkings the system can learn. So if you find you’re trying to make distinctions that are impossible for the system to learn, then changing the coding standard to make it more learnable seems OK to me.

Go Forth and Multiply

Mitzi has only spent a few days annotating the data and the system’s already working well end to end. This is just the kind of use case that Breck and I had in mind when we built LingPipe in the first place. It’s so much fun seeing other people use your tools.

When Breck and Linnea and I were annotating named entities with the citationEntities tool, we could crank along at 5K tokens/hour without cracking a sweat. Two eight-hour days will net you 80K tokens of annotated data and a much deeper insight into the problem. In less than a person-week of effort, you’ll have a corpus the size of the MUC 6 entity corpus.

Of course, it’d be nice to roll in some active learning here. But that’s another story. As is measuring whether it’s better to have a bigger or a better corpus. This is the label-another-instance vs. label-a-fresh-instance decision problem that (Sheng et al. 2008) addressed directly.

Settles (2011): Closing the Loop: Fast, Interactive Semi-Supervised Annotation with Queries on Features and Instances

February 23, 2012

Whew, that was a long title. Luckily, the paper’s worth it:

Settles, Burr. 2011. Closing the Loop: Fast, Interactive Semi-Supervised Annotation With Queries on Features and Instances. EMNLP.

It’s a paper that shows you how to use active learning to build a reasonably high-performance classifier with only minutes of user effort. Very cool and right up our alley here at LingPipe.

The Big Picture

The easiest way to see what’s going on is with a screenshot of DUALIST, the system on which the paper is based:

It’s basically a tag-a-little, learn-a-little annotation tool for classifiers. I wrote something along these lines for chunk tagging (named entities, etc.); you can find it in the LingPipe sandbox project citationEntities (called that because I originally used it to zone bibliographies in docs, citations in bibliographies and fields in citations). Mitzi just brought it up to date with the current LingPipe and generalized it for some large multi-part document settings.

In DUALIST, users provide two kinds of input:

  1. category classifications for documents
  2. words associated with categories

The left-hand-side of the interface presents a scrolling list of documents, with buttons for categories. There are then columns for categories with words listed under them. Users can highlight words in the lists that they believe are associated with the category. They may also enter new words that don’t appear on the lists.

Settles points out a difficult choice in the design. If you update the underlying model after every user choice, the GUI items are going to rearrange themselves. Microsoft tried this with Word, etc., arranging menus by frequency of use, and I don’t think anyone liked it. Constancy of where something’s located is very important. So what he did was let the user mark up a bunch of choices of categories and words, then hit the big submit button at the top, which would update the model. I did roughly the same thing with our chunk annotation interface.

There’s always a question in this kind of design whether to pre-populate the answers based on the model’s guesses (as far as I can tell, DUALIST does not pre-populate answers). Pre-populating answers makes the user’s life easier in that if the system is halfway decent, there’s less clicking. But it raises the possibility of bias, with users just going with what the system suggests without thinking too hard.

Naive-Bayes Classifier

The underlying classification model is naive Bayes with a Dirichlet prior. Approximate inference is carried out using a maximum a posteriori (MAP) estimate of parameters. It’s pretty straightforward to implement naive Bayes in a way that’s fast enough to use in this setting. The Dirichlet is conjugate to the multinomial, so the posteriors are analytically tractable and the sufficient statistics are just counts of documents in each category and the count of words in documents of each category.
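A minimal sketch of that estimator (hypothetical names; not Settles’s code and not the LingPipe API), where priorCount plays the role of the count added by the Dirichlet prior:

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;
    import java.util.Set;

    // Point-estimated naive Bayes: category and word probabilities come from
    // adding a constant prior count to the observed counts and normalizing.
    class SmoothedNaiveBayes {
        private final Map<String, Double> logCat = new HashMap<>();
        private final Map<String, Map<String, Double>> logWord = new HashMap<>();

        SmoothedNaiveBayes(Map<String, Integer> docCount,               // docs per category
                           Map<String, Map<String, Integer>> wordCount, // word counts per category
                           Set<String> vocabulary,
                           double priorCount) {
            double totalDocs = docCount.values().stream().mapToInt(Integer::intValue).sum();
            double numCats = docCount.size();
            for (String cat : docCount.keySet()) {
                logCat.put(cat, Math.log((docCount.get(cat) + priorCount)
                                         / (totalDocs + priorCount * numCats)));
                Map<String, Integer> counts = wordCount.get(cat);
                double totalWords = counts.values().stream().mapToInt(Integer::intValue).sum();
                Map<String, Double> lw = new HashMap<>();
                for (String w : vocabulary) {
                    double c = counts.getOrDefault(w, 0);
                    lw.put(w, Math.log((c + priorCount)
                                       / (totalWords + priorCount * vocabulary.size())));
                }
                logWord.put(cat, lw);
            }
        }

        // Most probable category for a tokenized document.
        String classify(List<String> tokens) {
            String best = null;
            double bestScore = Double.NEGATIVE_INFINITY;
            for (String cat : logCat.keySet()) {
                double score = logCat.get(cat);
                for (String tok : tokens)
                    score += logWord.get(cat).getOrDefault(tok, 0.0); // skip unknown words
                if (score > bestScore) { bestScore = score; best = cat; }
            }
            return best;
        }
    }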

The Innovations

The innovation here is twofold.

The first innovation is that Settles uses EM to create a semi-supervised MAP estimate. As we’ve said before, it’s easy to use EM or some kind of posterior sampling like Gibbs sampling over a directed graphical model with any subset of its parameters or labels being unknown. So technically, this is straightforward. But it’s still a really good idea. Although semi-supervised classifiers are very popular, I’ve never seen one used in this kind of active-learning tagging interface. I should probably add this to our chunking tagger.

The second (and in my opinion more important) innovation is in letting users single out some words as being important words in categories. The way this gets pushed through to the model is by setting the component of the Dirichlet prior corresponding to the word/category pair to a larger value. Settles fits this value using held-out data rather than rolling it into the model itself with a prior. The results seem oddly insensitive to it, which surprised me (but see below).

Comments on the Classifier

Gadzooks! Settles seems to be missing the single biggest tuning parameter typically applied to naive Bayes — the document length normalizer. Perhaps he did this because when you document-length normalize, you no longer have a properly generative model that corresponds to the naive Bayes paradigm. But it makes a huge difference.

The LingPipe EM tutorial uses naive Bayes and the same 20 Newsgroups corpus as (Nigam, McCallum and Mitchell 2000) used for evaluation, and I showed that the effect of document length normalization is huge (the Nigam et al. article has been cited nearly 2000 times!). You can do way better than Nigam et al.’s reported results by setting the document length norm to a small value like 5. (What the document length norm is doing is trying to correct for the lack of covariance and overdispersion modeling in the naive multinomial document model; it’s the same kind of shenanigans you see in speech recognition in weighting the acoustic and language models and the same trick I just saw Kevin Knight pull out during a talk last week about decoding encrypted documents and doing machine translation.)
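A sketch of the length normalization trick (my own minimal formulation of the idea, not code from LingPipe or the tutorial): rescale each document’s token counts so every document has the same effective length before computing its naive Bayes score.

    import java.util.HashMap;
    import java.util.List;
    import java.util.Map;

    class LengthNorm {
        // Rescale token counts to a fixed effective length (e.g., 5.0) so that long
        // documents don't produce wildly overconfident naive Bayes posteriors.
        static Map<String, Double> normalizedCounts(List<String> tokens, double lengthNorm) {
            Map<String, Double> counts = new HashMap<>();
            for (String tok : tokens)
                counts.merge(tok, 1.0, Double::sum);
            double scale = lengthNorm / tokens.size();
            counts.replaceAll((tok, count) -> count * scale);
            return counts;
        }
        // The document score is then the sum over words of count(word) * log p(word | category).
    }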

I think one of the reasons that the setting of the prior for important words has so little effect (see the performance figures) is that all of the priors are too high. If Settles really is starting with the Laplace prior (aka add 1), then that’s already too big for naive Bayes in this setting. Even the uniform prior (aka add 0) is too big. We’ve found that we need very small (less than 1) prior parameters for word-in-topic models unless there’s a whole lot of data (and the quantity Settles is using hasn’t gotten there by a long shot — you need to get enough so that the low counts dominate the prior before the effect of the prior washes out, so we’re talking gigabytes of text, at least).

Also, this whole approach is not Bayesian. It uses point estimates. For a discussion of what a properly Bayesian version of naive Bayes would look like, check out my previous blog post, Bayesian Naive Bayes, aka Dirichlet Multinomial Classifiers. For a description of what it means to be Bayesian, see my post What is Bayesian Statistical Inference?.

Confusion with Dirichlet-Multinomial Parameterization and Inference

There’s a confusion in the presentation of the Dirichlet prior and consequent estimation. The problem is that the prior parameter for a Dirichlet is conventionally the prior count (amount you add to the usual frequency counts) plus one. That’s why a prior of 1 is uniform (you add nothing to the frequency counts) and why a prior parameter of 2 corresponds to Laplace’s approach (add one to all frequency counts). The parameter is constrained to be positive, so what does a prior of 0.5 mean? It’s sort of like subtracting 1/2 from all the counts (sound familiar from Kneser-Ney LM smoothing?).

Now the maximum a posteriori estimate is just the estimate you get from adding the prior counts (parameter minus one) to all the empirical counts. It doesn’t even exist if an adjusted count (empirical count plus parameter minus one) comes out negative, which can happen with Dirichlet parameter components that are less than 1. But Settles says he’s looking at the posterior expectation (think conditional expectation of parameters given data, i.e., the mean of the posterior distribution). The posterior mean always exists (it has to, given the bounded support here), but it effectively adds one more to each count than the mode does.

To summarize, the mean of the Dirichlet distribution \mbox{Dir}(\theta|\alpha) is

\bar{\theta} = \alpha/(\sum_k \alpha_k),

whereas the maximum (or mode) is

\theta^{*} = (\alpha - 1) / (\sum_k (\alpha_k - 1)).

where the -1 is read componentwise, so \alpha - 1 = (\alpha_1-1,\ldots,\alpha_K-1). This only exists if all \alpha_k > 1.

That’s why a parameter of 1 corresponds to the uniform distribution and why a parameter of 2 (aka Laplace’s “add-one” prior) is not uniform.

Settles says he’s using Laplace and taking the posterior mean (which, by the way, is the “Bayesian point estimate” (oxymoron warning) minimizing expected square loss). But that’s not right. If he truly adds 1 to the empirical frequency counts, then he’s taking the posterior average with a Laplace prior (which is not uniform). This is equivalent to the posterior mode with a prior parameter of 3. But it’s not equivalent to either the posterior mode or mean with a uniform prior (i.e., prior parameter of 1).
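To keep the conventions straight, here is the arithmetic spelled out for empirical counts n_k with total N over K outcomes (just restating the correspondence described above). The posterior mean is

\hat{\theta}_k = (n_k + \alpha_k) / (N + \sum_{k'} \alpha_{k'}),

and the posterior mode is

\theta^{*}_k = (n_k + \alpha_k - 1) / (N + \sum_{k'} (\alpha_{k'} - 1)).

So with a uniform prior (\alpha_k = 1), the posterior mean is (n_k + 1)/(N + K), which coincides with the posterior mode under the Laplace prior (\alpha_k = 2); the posterior mean under the Laplace prior is (n_k + 2)/(N + 2K), which coincides with the posterior mode under \alpha_k = 3.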

Active Learning

Both documents and words are sorted by entropy-based active learning measures which the paper presents very clearly.

Documents are sorted by the conditional entropy of the category given the words in the model.

Word features are sorted by information gain, which is the reduction in entropy from the prevalence (marginal) category distribution to the expected entropy of the category distribution conditioned on knowing the feature’s value.
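A minimal sketch of the two ranking measures (made-up method names; the probabilities are assumed to come from the current model):

    class ActiveLearningScores {
        // Entropy of a discrete distribution, in nats.
        static double entropy(double[] p) {
            double h = 0.0;
            for (double pk : p)
                if (pk > 0.0)
                    h -= pk * Math.log(pk);
            return h;
        }

        // Documents are ranked by the entropy of p(category | document).
        static double documentUncertainty(double[] categoryPosterior) {
            return entropy(categoryPosterior);
        }

        // Features are ranked by information gain: the entropy of the marginal
        // category distribution minus the expected entropy after conditioning on
        // whether the feature is present.
        static double informationGain(double[] categoryPrior,
                                      double probFeaturePresent,
                                      double[] categoryGivenPresent,
                                      double[] categoryGivenAbsent) {
            double expectedConditionalEntropy =
                probFeaturePresent * entropy(categoryGivenPresent)
                + (1.0 - probFeaturePresent) * entropy(categoryGivenAbsent);
            return entropy(categoryPrior) - expectedConditionalEntropy;
        }
    }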

Rather than sorting docs by highest classification uncertainty, as conditional entropy does, we’ve found it useful to sort docs by the lowest classification uncertainty! That is, we ask humans to label the docs about which the classifier is least uncertain. The motivation for this is that we’re often building high-precision (and relatively low recall) classifiers for customers and thus have a relatively high probability threshold to return a guess. So the higher ranked items are closer to the boundary we’re trying to learn. Also, we find in real world corpora that if we go purely by uncertainty, we get a long stream of outliers.

Settles does bring up the issue of whether using what’s effectively a kind of active learning mechanism trained with one classifier will be useful for other classifiers. We need to get someone like John Langford or Tong Zhang in here to prove some useful bounds. Their other work on active learning with weights is very cool.

GUI comments

I love the big submit button. What with Fitts’s law, and all.

I see a big problem with this interface for situations with more than a handful of categories. What would the full 20 Newsgroups look like? There isn’t enough room for more columns or a big stack of buttons.

Also, buttons seem like the wrong choice for selecting categories. These should probably be radio buttons to express the exclusivity and the fact that they don’t take action themselves. Typically, buttons cause some action.

Discriminative Classifiers

Given the concluding comments, Settles doesn’t seem to know that you can do pretty much exactly the same thing in a “discriminative” classifier setting. For instance, logistic regression can be cast as just another directed graphical model with parameters for each word/category pair. So we could do full Bayes with no problem.

There are also plenty of online estimation procedures for weighted examples; you’ll need the weighting to deal with EM (see, e.g., (Karampatziakis and Langford 2010) for an online weighted training method that adjusts neatly for curvature in the objective; it’s coincidentally the paper I’m covering for the next Columbia Machine Learning Reading Group meeting).

The priors can be carried over to this setting, too, only now they’re priors on regression coefficients. See (Genkin, Lewis and Madigan 2007) for guidance. One difference is that you get a mean and a variance to play with.

Building it in LingPipe

LingPipe’s traditional naive Bayes implementation contains all that you need to build a system like DUALIST. Semi-supervised learning with EM is covered in our EM tutorial with naive Bayes as an example.

To solve some of the speed issues Settles brings up in the discussion section, you can always thread the retraining in the background. That’s what I did in the chunk tagger. With a discriminative “online” method, you can just keep cycling through epochs in the background, which gives you the effect of a hot warmup for subsequent examples. Also, you don’t need to run to convergence — the background models are just being used to select instances for labeling.
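A sketch of the background retraining pattern (not the actual chunk tagger code; everything here is made up for illustration):

    import java.util.concurrent.ExecutorService;
    import java.util.concurrent.Executors;
    import java.util.concurrent.atomic.AtomicReference;
    import java.util.function.Supplier;

    // Run retraining on a single background thread and atomically swap in the new
    // model when it's ready, so the annotation GUI never blocks on training.
    class BackgroundTrainer<M> {
        private final ExecutorService trainerThread = Executors.newSingleThreadExecutor();
        private final AtomicReference<M> currentModel = new AtomicReference<>();

        // retrain might, e.g., run a few EM epochs over the labeled and unlabeled data.
        void requestRetrain(Supplier<M> retrain) {
            trainerThread.submit(() -> currentModel.set(retrain.get()));
        }

        // The latest completed model, used to select the next instances to label.
        M model() {
            return currentModel.get();
        }
    }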

How to Prevent Overflow and Underflow in Logistic Regression

February 16, 2012

Logistic regression is a perilous undertaking from the floating-point arithmetic perspective.

Logistic Regression Model

The basic model of a binary outcome y_n \in \{ 0, 1\} with predictor or feature (row) vector x_n \in \mathbb{R}^K and coefficient (column) vector \beta \in \mathbb{R}^K is

y_n \sim \mbox{\sf Bernoulli}(\mbox{logit}^{-1}(x_n \beta))

where the logistic sigmoid (i.e., the inverse logit function) is defined by

\mbox{logit}^{-1}(\alpha) = 1 / (1 + \exp(-\alpha))

and where the Bernoulli distribution is defined over support y \in \{0, 1\} so that

\mbox{\sf Bernoulli}(y_n|\theta) = \theta \mbox{ if } y_n = 1, and

\mbox{\sf Bernoulli}(y_n|\theta) = (1 - \theta) \mbox{ if } y_n = 0.

(Lack of) Floating-Point Precision

Double-precision floating-point numbers (i.e., 64-bit IEEE) only support a domain for \exp(\alpha) of roughly \alpha \in (-750,750) before underflowing to 0 or overflowing to positive infinity.

Potential Underflow and Overflow

The linear predictor at the heart of the regression,

x_n \beta = \sum_{k = 1}^K x_{n,k} \beta_k

can be anywhere on the real number line. This isn’t usually a problem for LingPipe’s logistic regression, which always initializes the coefficient vector \beta to zero. It could be a problem if we have even a moderately sized coefficient and then see a very large (or small) predictor. Our probability estimate will overflow to 1 (or underflow to 0), and if the outcome is the opposite, we assign zero probability to the data, which is not good predictively.

Log Sum of Exponents to the Rescue

Luckily, there’s a solution. First, we’re almost always working with log probabilities to prevent underflow in the likelihood function for the whole data set y,x,

\log p(y|\beta;x) = \log \prod_{n = 1}^N p(y_n|\beta;x_n) = \sum_{n=1}^N \log p(y_n|\beta;x_n)

Working on the inner log probability term, we have

\log p(y_n|\beta;x_n)

{ } = \log \mbox{\sf Bernoulli}(y_n|\mbox{logit}^{-1}(x_n \beta))

{ } = \log \ \mbox{logit}^{-1}(x_n \beta) \mbox{ if } y_n = 1
{ } = \log (1 - \mbox{logit}^{-1}(x_n \beta)) \mbox{ if } y_n = 0

Recalling that

1 - \mbox{logit}^{-1}(\alpha) = \mbox{logit}^{-1}(-\alpha),

we further simplify to

{ } = \log \ \mbox{logit}^{-1}(x_n \beta) \mbox{ if } y_n = 1
{ } = \log \ \mbox{logit}^{-1}(-x_n \beta) \mbox{ if } y_n = 0

Now we’re in good shape if we can prevent the log of the inverse logit from overflowing or underflowing. This is manageable. If we let \alpha stand in for the linear predictor (or its negation), we have

{ } = \log \ \mbox{logit}^{-1}(\alpha)

{ } = \log (1 / (1 + \exp(-\alpha)))

{ } = - \log (1 + \exp(-\alpha))

{ } = - \mbox{logSumExp}(0,-\alpha)

Log Sum of Exponentials

Recall that the log sum of exponentials function is

\mbox{logSumExp}(a,b) = \log (\exp(a) + \exp(b))

If you’re not familiar with how it prevents underflow and overflow, check out my previous post:

In the logistic regression case, we have an even greater chance for optimization because the argument a is a constant zero.
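Concretely, a stable version of that special case looks like this (a sketch in Java; this is standard numerical folklore, not a particular library’s API):

    class Logit {
        // logSumExp(0, x) = log(1 + exp(x)), computed without overflow or underflow.
        static double log1pExp(double x) {
            if (x > 0.0)
                return x + Math.log1p(Math.exp(-x));  // exp(-x) <= 1, so no overflow
            return Math.log1p(Math.exp(x));           // exp(x) <= 1 here
        }

        // log of the inverse logit: log logit^{-1}(alpha) = -logSumExp(0, -alpha)
        static double logInverseLogit(double alpha) {
            return -log1pExp(-alpha);
        }
    }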

Logit-transformed Bernoulli

Putting it all together, we have the logit-transformed Bernoulli distribution,

\mbox{\sf Bernoulli}(y_n|\mbox{logit}^{-1}(x_n\beta))

{ } = - \mbox{logSumExp}(0,-x_n\beta) \mbox{ if } y_n = 1
{ } = - \mbox{logSumExp}(0,x_n\beta) \mbox{ if } y_n = 0

We can just think of this as an alternatively parameterized Bernoulli distribution,

\mbox{\sf BernoulliLogit}(y|\alpha) = \mbox{\sf Bernoulli}(y|\mbox{logit}^{-1}(\alpha))

with which our model can be expressed as

y_n \sim \mbox{\sf BernoulliLogit}(x_n\beta).

Recoding Outcomes {0,1} as {-1,1}

The notation’s even more convenient if we recode the failure outcome as -1 and thus take the outcome y \in \{ -1, 1 \}, where we have

\mbox{\sf BernoulliLogit}(y|\alpha) = - \mbox{logSumExp}(0,-y \alpha)
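Putting the pieces together as code (a sketch only, using the log1pExp helper from the previous block; the method names are mine, not LingPipe’s):

    class BernoulliLogit {
        // log BernoulliLogit(y | alpha) for y in {-1, +1}
        static double logPmf(int y, double alpha) {
            return -Logit.log1pExp(-y * alpha);       // = -logSumExp(0, -y * alpha)
        }

        // Total log likelihood of the data under coefficient vector beta.
        static double logLikelihood(int[] y, double[][] x, double[] beta) {
            double total = 0.0;
            for (int n = 0; n < y.length; ++n) {
                double alpha = 0.0;                   // linear predictor x_n * beta
                for (int k = 0; k < beta.length; ++k)
                    alpha += x[n][k] * beta[k];
                total += logPmf(y[n], alpha);
            }
            return total;
        }
    }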