## Archive for the ‘Statistics’ Category

May 18, 2012

### From the Emailbox

Krishna writes,

I have a question about using the chunking evaluation class for inter-annotator agreement: how can you use it when the annotators might have missing chunks, i.e., if one of the files contains more chunks than the other?

The answer’s not immediately obvious because the usual application of interannotator agreement statistics is to classification tasks (including things like part-of-speech tagging) that have a fixed number of items being annotated.

### Chunker Evaluation

The chunker evaluations built into LingPipe calculate the usual precision and recall measures (see below). These evaluations compare a set of response chunkings to a set of reference chunkings. Usually the reference is drawn from a gold-standard corpus and the response from an automated system built to do chunking.

Precision (aka positive predictive accuracy) measures the proportion of chunks in the response that are also in the reference. Recall (aka sensitivity) measures the proportion of chunks in the reference that are in the response. If we swap the reference and response chunkings, we swap precision and recall.

True negatives aren’t really being counted here — theoretically there are a huge number of them — any possible span with any possible tag could have been labeled. LingPipe just sets the true negative count to zero, and as a result, specificity (TN/[TN+FP]) doesn’t make sense.

### Interannotator Agreement

Suppose you have chunkings from two human annotators. Just treat one as the reference and one as the response and run a chunking evaluation. The precision and recall values will tell you which annotator returned more chunkings. For instance, if precision is .95 and recall .75, you know that the annotator assigned as the reference chunking had a whole bunch of chunks the other annotator didn’t think were chunks, but most of the chunks found by the response annotator were also chunks of the reference annotator.

You can use F-measure as an overall single-number score.
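For concreteness, here's a minimal sketch of chunk-level precision, recall, and F-measure in plain Python (this is illustrative, not the LingPipe API), treating each chunking as a set of (start, end, type) triples:

```python
# Illustrative sketch (not the LingPipe API): chunk-level precision,
# recall, and balanced F-measure over sets of (start, end, type) triples.

def prf(reference, response):
    """Return (precision, recall, F1) comparing response to reference."""
    tp = len(reference & response)   # chunks found by both
    fp = len(response - reference)   # response-only chunks (false positives)
    fn = len(reference - response)   # reference-only chunks (false negatives)
    p = tp / (tp + fp) if tp + fp else 0.0
    r = tp / (tp + fn) if tp + fn else 0.0
    f = 2 * p * r / (p + r) if p + r else 0.0
    return p, r, f

# one annotator found three chunks, the other only two of them
ref = {(0, 5, "PER"), (10, 17, "ORG"), (20, 25, "LOC")}
rsp = {(0, 5, "PER"), (10, 17, "ORG")}
p1, r1, f1 = prf(ref, rsp)   # precision 1.0, recall 2/3
p2, r2, f2 = prf(rsp, ref)   # swapped: precision 2/3, recall 1.0
```

Note that swapping the arguments swaps precision and recall, while the F-measure (the harmonic mean of the two) is unchanged, which is one reason it makes a reasonable single-number agreement score.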

The base metrics are all explained in

and their application to chunking in

Examples of running chunker evaluations can be found in

### LingPipe Annotation Tool

If you’re annotating entity data, you might be interested in our learn-a-little, tag-a-little tool.

Now that Mitzi’s brought it up to compatibility with LingPipe 4, we should move citationEntities out of the sandbox and into a tutorial.

March 12, 2012

### Time, Negation, and Clinical Events

Mitzi’s been annotating clinical notes for time expressions, negations, and a couple other classes of clinically relevant phrases like diagnoses and treatments (I just can’t remember exactly which!). This is part of the project she’s working on with Noemie Elhadad, a professor in the Department of Biomedical Informatics at Columbia.

### LingPipe Chunk Annotation GUI

Mitzi’s doing the phrase annotation with a LingPipe tool which can be found in

She even brought it up to date with the current release of LingPipe and generalized the layout for documents with subsections.

Our annotation tool follows the tag-a-little, train-a-little paradigm, in which an automatic system based on the already-annotated data is trained as you go to pre-annotate the data for a user to correct. This approach was pioneered in MITRE’s Alembic Workbench, which was used to create the original MUC-6 named-entity corpus.

The chunker underlying LingPipe’s annotation toolkit is based on LingPipe’s character language-model rescoring chunker, which can be trained online (that is, as the data streams in) and has quite reasonable out-of-the-box performance. It’s LingPipe’s best out-of-the-box chunker. In contrast, CRFs can be engineered to outperform the rescoring chunker with good feature engineering.

A very nice project would be to build a semi-supervised version of the rescoring chunker. The underlying difficulty is that our LM-based and HMM-based models take count-based sufficient statistics.

### It Works!

Mitzi’s getting reasonable system accuracy under cross validation, with over 80% precision and recall (and hence over 80% balanced F-measure).

### That’s not Cricket!

According to received wisdom in natural language processing, she’s left out a very important step of the standard operating procedure. She’s supposed to get another annotator to independently label the data and then measure inter-annotator agreement.

### So What?

If we can train a system to perform at 80%+ F-measure under cross-validation, who cares if we can’t get another human to match Mitzi’s annotation?

We have something better — we can train a system to match Mitzi’s annotation!

In fact, training such a system is really all that we often care about. It’s much better to be able to train a system than another human to do the annotation.

The other thing we might want a corpus for is to evaluate a range of systems. There, if the systems are highly comparable, the fringes of the corpus matter. But perhaps the small, but still p < 0.05, differences in such systems don't matter so much. What the MT people have found is that even a measure that's roughly correlated with performance can be used to guide system development.

### Error Analysis and Fixing Inconsistencies

Mitzi’s been doing the sensible thing of actually looking at the errors the system’s making under cross validation. In some of these cases, she’d clearly made a braino and annotated the data wrong. So she fixes it. And system performance goes up.

What Mitzi’s reporting is what I’ve always found in these tasks. For instance, she inconsistently annotated time-plus-date sequences, sometimes including the times and sometimes not. So she’s going back to redo them all consistently, including all of the time information in a phrase (makes sense to me).

After a couple of days of annotation, you get a much stronger feeling for how the annotations should have gone all along. The annotations drifted so much over time in this fashion in the clinical notes annotated for the i2b2 Obesity Challenge that the winning team exploited time of labeling as an informative feature to predict co-morbidities of obesity!

### That’s also not Cricket!

The danger with re-annotating is that the system’s response will bias the human annotations. System-label bias is also a danger with single annotation under the tag-a-little, learn-a-little setup. If you gradually change the annotation to match the system’s responses, you’ll eventually get to very good, if not perfect, performance under cross validation.

So some judgment is required in massaging the annotations into a coherent system, but one that you care about, not one driven by the learned system’s behavior.

On the other hand, you do want to choose features and chunkings the system can learn. So if you find you’re trying to make distinctions that are impossible for the system to learn, changing the coding standard to make it more learnable seems OK to me.

### Go Forth and Multiply

Mitzi has only spent a few days annotating the data and the system’s already working well end to end. This is just the kind of use case that Breck and I had in mind when we built LingPipe in the first place. It’s so much fun seeing other people use your tools!

When Breck and Linnea and I were annotating named entities with the citationEntities tool, we could crank along at 5K tokens/hour without cracking a sweat. Two eight-hour days will net you 80K tokens of annotated data and a much deeper insight into the problem. In less than a person-week of effort, you’ll have a corpus the size of the MUC 6 entity corpus.

Of course, it’d be nice to roll in some active learning here. But that’s another story. As is measuring whether it’s better to have a bigger or a better corpus. This is the label-another-instance vs. label-a-fresh-instance decision problem that (Sheng et al. 2008) addressed directly.

### Settles (2011): Closing the Loop: Fast, Interactive Semi-Supervised Annotation with Queries on Features and Instances

February 23, 2012

Whew, that was a long title. Luckily, the paper’s worth it:

Settles, Burr. 2011. Closing the Loop: Fast, Interactive Semi-Supervised Annotation With Queries on Features and Instances. EMNLP.

It’s a paper that shows you how to use active learning to build a reasonably high-performance classifier with only minutes of user effort. Very cool and right up our alley here at LingPipe.

### The Big Picture

The easiest way to see what’s going on is with a screenshot of DUALIST, the system on which the paper is based:

It’s basically a tag-a-little, learn-a-little annotation tool for classifiers. I wrote something along these lines for chunk tagging (named entities, etc.) — you can find it in the LingPipe sandbox project citationEntities (called that because I originally used it to zone bibliographies in docs, citations in bibliographies, and fields in citations). Mitzi just brought it up to date with the current LingPipe and generalized it for some large multi-part document settings.

In DUALIST, users provide two kinds of input:

1. category classifications for documents
2. words associated with categories

The left-hand-side of the interface presents a scrolling list of documents, with buttons for categories. There are then columns for categories with words listed under them. Users can highlight words in the lists that they believe are associated with the category. They may also enter new words that don’t appear on the lists.

Settles points out a difficult choice in the design. If you update the underlying model after every user choice, the GUI items are going to rearrange themselves. Microsoft tried this with Word, etc., arranging menus by frequency of use, and I don’t think anyone liked it. Constancy of where something’s located is very important. So what he did was let the user mark up a bunch of choices of categories and words, then hit the big submit button at the top, which would update the model. I did roughly the same thing with our chunk annotation interface.

There’s always a question in this kind of design whether to pre-populate the answers based on the model’s guesses (as far as I can tell, DUALIST does not pre-populate answers). Pre-populating answers makes the user’s life easier in that if the system is halfway decent, there’s less clicking. But it raises the possibility of bias, with users just going with what the system suggests without thinking too hard.

### Naive-Bayes Classifier

The underlying classification model is naive Bayes with a Dirichlet prior. Approximate inference is carried out using a maximum a posteriori (MAP) estimate of parameters. It’s pretty straightforward to implement naive Bayes in a way that’s fast enough to use in this setting. The Dirichlet is conjugate to the multinomial so the posteriors are analytically tractable and the sufficient statistics are just counts of documents in each category and the count of words in documents of each category.
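Just to show how cheap those sufficient statistics are, here’s a minimal sketch of a MAP-estimated naive Bayes classifier in plain Python (my own illustrative code, not DUALIST’s or LingPipe’s implementation): all training does is bump two kinds of counts.

```python
# Illustrative MAP naive Bayes (not DUALIST or LingPipe code).
# Sufficient statistics: documents per category, words per category.
import math
from collections import Counter

class MapNaiveBayes:
    def __init__(self, categories, vocab_size, alpha=2.0):
        # alpha = 2 is Laplace "add-one" smoothing under the convention
        # that the MAP estimate adds (alpha - 1) to each count
        self.categories = list(categories)
        self.vocab_size = vocab_size
        self.add = alpha - 1.0
        self.doc_counts = Counter()
        self.word_counts = {c: Counter() for c in self.categories}

    def train(self, words, category):
        self.doc_counts[category] += 1
        self.word_counts[category].update(words)

    def classify(self, words):
        total_docs = sum(self.doc_counts.values())
        log_probs = {}
        for c in self.categories:
            # log prior: smoothed proportion of documents in category c
            lp = math.log((self.doc_counts[c] + self.add)
                          / (total_docs + self.add * len(self.categories)))
            total_words = sum(self.word_counts[c].values())
            denom = total_words + self.add * self.vocab_size
            for w in words:
                # smoothed word-in-category probability
                lp += math.log((self.word_counts[c][w] + self.add) / denom)
            log_probs[c] = lp
        return max(log_probs, key=log_probs.get)

nb = MapNaiveBayes(["pos", "neg"], vocab_size=6)
nb.train(["good", "great"], "pos")
nb.train(["bad", "awful"], "neg")
```

Online training is just more calls to `train()`, which is what makes the learn-a-little loop cheap.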

### The Innovations

The innovation here is twofold.

The first innovation is that Settles uses EM to create a semi-supervised MAP estimate. As we’ve said before, it’s easy to use EM or some kind of posterior sampling like Gibbs sampling over a directed graphical model with any subset of its parameters or labels being unknown. So technically, this is straightforward. But it’s still a really good idea. Although semi-supervised classifiers are very popular, I’ve never seen it used in this kind of active-learning tagging interface. I should probably add this to our chunking tagger.

The second (and in my opinion more important) innovation is in letting users single out some words as being important words in categories. The way this gets pushed through to the model is by setting the component of the Dirichlet prior corresponding to the word/category pair to a larger value. Settles fits this value using held-out data rather than rolling it into the model itself with a prior. The results seem oddly insensitive to it, which surprised me (but see below).

Gadzooks! Settles seems to be missing the single biggest tuning parameter typically applied to naive Bayes — the document length normalizer. Perhaps he did this because when you document-length normalize, you no longer have a properly generative model that corresponds to the naive Bayes paradigm. But it makes a huge difference.

The LingPipe EM tutorial uses naive Bayes and the same 20 Newsgroups corpus as (Nigam, McCallum and Mitchell 2000) used for evaluation, and I showed that the effect of document length normalization is huge (the Nigam et al. article has been cited nearly 2000 times!). You can do way better than Nigam et al.’s reported results by setting the document length norm to a small value like 5. (What the document length norm’s doing is trying to correct for the lack of covariance and overdispersion modeling in the naive multinomial document model — it’s the same kind of shenanigans you see in speech recognition in weighting the acoustic and language models, and the same trick I just saw Kevin Knight pull out during a talk last week about decoding encrypted documents and doing machine translation.)

I think one of the reasons that the setting of the prior for important words has so little effect (see the performance figures) is that all of the priors are too high. If Settles really is starting with the Laplace prior (aka add 1), then that’s already too big for naive Bayes in this setting. Even the uniform prior (aka add 0) is too big. We’ve found that we need very small (less than 1) prior parameters for word-in-topic models unless there’s a whole lot of data (and the quantity Settles is using hasn’t gotten there by a long shot — you need to get enough so that the low counts dominate the prior before the effect of the prior washes out, so we’re talking gigabytes of text, at least).

Also, this whole approach is not Bayesian. It uses point estimates. For a discussion of what a properly Bayesian version of naive Bayes would look like, check out my previous blog post, Bayesian Naive Bayes, aka Dirichlet Multinomial Classifiers. For a description of what it means to be Bayesian, see my post What is Bayesian Statistical Inference?.

### Confusion with Dirichlet-Multinomial Parameterization and Inference

There’s a confusion in the presentation of the Dirichlet prior and consequent estimation. The problem is that the prior parameter for a Dirichlet is conventionally the prior count (amount you add to the usual frequency counts) plus one. That’s why a prior of 1 is uniform (you add nothing to the frequency counts) and why a prior parameter of 2 corresponds to Laplace’s approach (add one to all frequency counts). The parameter is constrained to be positive, so what does a prior of 0.5 mean? It’s sort of like subtracting 1/2 from all the counts (sound familiar from Kneser-Ney LM smoothing?).

Now the maximum a posteriori estimate is just the estimate you get from adding the prior counts (parameter minus one) to all the empirical counts. It doesn’t even exist if any of the resulting counts are negative, which can happen with Dirichlet parameter components that are less than 1. But Settles says he’s looking at the posterior expectation (think conditional expectation of parameters given data — the mean of the posterior distribution). The posterior mean always exists (it has to, given the bounded support here), but it requires you to add another one to all the counts.

To summarize, the mean of the Dirichlet distribution $\mbox{Dir}(\theta|\alpha)$ is

$\bar{\theta} = \alpha/(\sum_k \alpha_k)$,

whereas the maximum (or mode) is

$\theta^{*} = (\alpha - 1) / (\sum_k (\alpha_k - 1))$.

where the $-1$ is read componentwise, so $\alpha - 1 = (\alpha_1-1,\ldots,\alpha_K-1)$. This only exists if all $\alpha_k \geq 1$ (with at least one $\alpha_k > 1$).

That’s why a parameter of 1 corresponds to the uniform distribution and why a parameter of 2 (aka Laplace’s “add-one” prior) is not uniform.

Settles says he’s using Laplace and taking the posterior mean (which, by the way, is the “Bayesian point estimate” (oxymoron warning) minimizing expected square loss). But that’s not right. If he truly uses Laplace prior counts of one (i.e., a prior parameter of 2) and takes the posterior mean, then he’s adding 2 to all the empirical frequency counts. That’s equivalent to the posterior mode with a prior parameter of 3. But it’s not equivalent to either the posterior mode or mean with a uniform prior (i.e., prior parameter of 1).
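A quick numeric check of the bookkeeping, in plain Python with the parameterization above: the posterior mean under a Laplace prior (parameter 2) adds 2 to every count, which lands on the same estimate as the posterior mode with parameter 3.

```python
# Numeric check of the Dirichlet mean/mode relationship above.
def dirichlet_mean(alpha):
    s = sum(alpha)
    return [a / s for a in alpha]

def dirichlet_mode(alpha):
    # defined only when every alpha_k >= 1 (and at least one > 1)
    s = sum(a - 1.0 for a in alpha)
    return [(a - 1.0) / s for a in alpha]

counts = [3.0, 1.0, 0.0]                  # empirical word counts
# posterior under a Laplace prior (parameter 2); its mean adds 2 per count
mean = dirichlet_mean([c + 2.0 for c in counts])
# ... which matches the posterior mode with a prior parameter of 3
mode = dirichlet_mode([c + 3.0 for c in counts])
# both give [0.5, 0.3, 0.2]
```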

### Active Learning

Both documents and words are sorted by entropy-based active learning measures which the paper presents very clearly.

Documents are sorted by the conditional entropy of the category given the words in the model.

Word features are sorted by information gain, which is the reduction in entropy from the marginal (prevalence) category distribution to the expected entropy of the category distribution conditioned on knowing the feature’s value.

Rather than sorting docs by highest classification uncertainty, as conditional entropy does, we’ve found it useful to sort docs by the lowest classification uncertainty! That is, we ask humans to label the docs about which the classifier is least uncertain. The motivation for this is that we’re often building high-precision (and relatively low recall) classifiers for customers and thus have a relatively high probability threshold to return a guess. So the higher ranked items are closer to the boundary we’re trying to learn. Also, we find in real world corpora that if we go purely by uncertainty, we get a long stream of outliers.

Settles does bring up the issue of whether using what’s effectively a kind of active learning mechanism trained with one classifier will be useful for other classifiers. We need to get someone like John Langford or Tong Zhang in here to prove some useful bounds. Their other work on active learning with weights is very cool.

I love the big submit button. What with Fitts’s law, and all.

I see a big problem with this interface for situations with more than a handful of categories. What would the full 20 Newsgroups look like? There isn’t enough room for more columns or a big stack of buttons.

Also, buttons seem like the wrong choice for selecting categories. These should probably be radio buttons to express the exclusivity and the fact that they don’t take an action themselves. Typically, buttons cause some action.

### Discriminative Classifiers

Given the concluding comments, Settles doesn’t seem to know that you can do pretty much exactly the same thing in a “discriminative” classifier setting. For instance, logistic regression can be cast as just another directed graphical model with parameters for each word/category pair. So we could do full Bayes with no problem.

There are also plenty of online estimation procedures for weighted examples; you’ll need the weighting to deal with EM (see, e.g., (Karampatziakis and Langford 2010) for an online weighted training method that adjusts neatly for curvature in the objective; it’s coincidentally the paper I’m covering for the next Columbia Machine Learning Reading Group meeting).

The priors can be carried over to this setting, too, only now they’re priors on regression coefficients. See (Genkin, Lewis and Madigan 2007) for guidance. One difference is that you get a mean and a variance to play with.

### Building it in LingPipe

LingPipe’s traditional naive Bayes implementation contains all that you need to build a system like DUALIST. Semi-supervised learning with EM is covered in our EM tutorial with naive Bayes as an example.

To solve some of the speed issues Settles brings up in the discussion section, you can always thread the retraining in the background. That’s what I did in the chunk tagger. With a discriminative “online” method, you can just keep cycling through epochs in the background, which gives you the effect of a hot warmup for subsequent examples. Also, you don’t need to run to convergence — the background models are just being used to select instances for labeling.

### How to Prevent Overflow and Underflow in Logistic Regression

February 16, 2012

Logistic regression is a perilous undertaking from the floating-point arithmetic perspective.

### Logistic Regression Model

The basic model of a binary outcome $y_n \in \{ 0, 1\}$ with predictor or feature (row) vector $x_n \in \mathbb{R}^K$ and coefficient (column) vector $\beta \in \mathbb{R}^K$ is

$y_n \sim \mbox{\sf Bernoulli}(\mbox{logit}^{-1}(x_n \beta))$

where the logistic sigmoid (i.e., the inverse logit function) is defined by

$\mbox{logit}^{-1}(\alpha) = 1 / (1 + \exp(-\alpha))$

and where the Bernoulli distribution is defined over support $y \in \{0, 1\}$ so that

$\mbox{\sf Bernoulli}(y_n|\theta) = \theta \mbox{ if } y_n = 1$, and

$\mbox{\sf Bernoulli}(y_n|\theta) = (1 - \theta) \mbox{ if } y_n = 0$.

### (Lack of) Floating-Point Precision

Double-precision floating-point numbers (i.e., 64-bit IEEE) only support a domain for $\exp(\alpha)$ of roughly $\alpha \in (-750,750)$ before underflowing to 0 or overflowing to positive infinity.

### Potential Underflow and Overflow

The linear predictor at the heart of the regression,

$x_n \beta = \sum_{k = 1}^K x_{n,k} \beta_k$

can be anywhere on the real number line. This isn’t usually a problem for LingPipe’s logistic regression, which always initializes the coefficient vector $\beta$ to zero. It could be a problem if we have even a moderately sized coefficient and then see a very large (or small) predictor. Our probability estimate will overflow to 1 (or underflow to 0), and if the outcome is the opposite, we assign zero probability to the data, which is not good predictively.

### Log Sum of Exponents to the Rescue

Luckily, there’s a solution. First, we’re almost always working with log probabilities to prevent underflow in the likelihood function for the whole data set $y,x$,

$\log p(y|\beta;x) = \log \prod_{n = 1}^N p(y_n|\beta;x_n) = \sum_{n=1}^N \log p(y_n|\beta;x_n)$

Working on the inner log probability term, we have

$\log p(y_n|\beta;x_n)$

${ } = \log \mbox{\sf Bernoulli}(y_n|\mbox{logit}^{-1}(x_n \beta))$

${ } = \log \ \mbox{logit}^{-1}(x_n \beta) \mbox{ if } y_n = 1$
${ } = \log (1 - \mbox{logit}^{-1}(x_n \beta)) \mbox{ if } y_n = 0$

Recalling that

$1 - \mbox{logit}^{-1}(\alpha) = \mbox{logit}^{-1}(-\alpha)$,

we further simplify to

${ } = \log \ \mbox{logit}^{-1}(x_n \beta) \mbox{ if } y_n = 1$
${ } = \log \ \mbox{logit}^{-1}(-x_n \beta) \mbox{ if } y_n = 0$

Now we’re in good shape if we can prevent the log of the inverse logit from overflowing or underflowing. This is manageable. If we let $\alpha$ stand in for the linear predictor (or its negation), we have

${ } = \log \ \mbox{logit}^{-1}(\alpha)$

${ } = \log (1 / (1 + \exp(-\alpha)))$

${ } = - \log (1 + \exp(-\alpha))$

${ } = - \mbox{logSumExp}(0,-\alpha)$

### Log Sum of Exponentials

Recall that the log sum of exponentials function is

$\mbox{logSumExp}(a,b) = \log (\exp(a) + \exp(b))$

If you’re not familiar with how it prevents underflow and overflow, check out my previous post:

In the logistic regression case, we have an even greater chance for optimization because the argument $a$ is a constant zero.

### Logit-transformed Bernoulli

Putting it all together, we have the log of the logit-transformed Bernoulli distribution,

$\log \mbox{\sf Bernoulli}(y_n|\mbox{logit}^{-1}(x_n\beta))$

${ } = - \mbox{logSumExp}(0,-x_n\beta) \mbox{ if } y_n = 1$
${ } = - \mbox{logSumExp}(0,x_n\beta) \mbox{ if } y_n = 0$

We can just think of this as an alternatively parameterized Bernoulli distribution,

$\mbox{\sf BernoulliLogit}(y|\alpha) = \mbox{\sf Bernoulli}(y|\mbox{logit}^{-1}(\alpha))$

with which our model can be expressed as

$y_n \sim \mbox{\sf BernoulliLogit}(x_n\beta)$.

### Recoding Outcomes {0,1} as {-1,1}

The notation’s even more convenient if we recode the failure outcome as $-1$ and thus take the outcome $y \in \{ -1, 1 \}$, where we have

$\log \mbox{\sf BernoulliLogit}(y|\alpha) = - \mbox{logSumExp}(0,-y \alpha)$
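Here’s how the final identity looks in code — a sketch in plain Python, not LingPipe’s implementation — with outcomes coded as $y \in \{-1,1\}$:

```python
# Stable Bernoulli-logit log likelihood via log-sum-exp;
# outcomes are coded y = +1 (success) and y = -1 (failure).
import math

def log_sum_exp(a, b):
    # subtract the max before exponentiating so neither term overflows
    m = max(a, b)
    return m + math.log(math.exp(a - m) + math.exp(b - m))

def bernoulli_logit_log_pmf(y, alpha):
    # log BernoulliLogit(y | alpha) = -logSumExp(0, -y * alpha)
    return -log_sum_exp(0.0, -y * alpha)

moderate = bernoulli_logit_log_pmf(1, 2.0)     # = -log(1 + exp(-2))
extreme = bernoulli_logit_log_pmf(-1, 1000.0)  # finite: exactly -1000.0
```

Even with a linear predictor of 1000 and the opposite outcome, the log likelihood stays finite instead of underflowing to $\log 0$.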

### Bob’s ML Meetup Talk — Stan: A Bayesian Directed Graphical Model Compiler

January 6, 2012

I (Bob) am going to give a talk at the next NYC Machine Learning Meetup, on 19 January 2012 at 7 PM:

There’s an abstract on the meetup site. The short story is that Stan’s a directed graphical model compiler (like BUGS) that uses adaptive Hamiltonian Monte Carlo sampling to estimate posterior distributions for Bayesian models.

The official version 1 release is coming up soon, but until then, you can check out our work in progress at:

### “Smoothing” Measurement Errors

October 27, 2011

The issue came up in the last blog post, Discontinuities in ROC calculations, about smoothing out Area-under-the-curve (AUC) calculations for ROC and PR curves given observed data points that are very close together.

If you have a bunch of point measurements that you assume were taken with some error, what you can do is model the error itself. Suppose we don’t want to think too hard and just make the error normal, so that a measurement of $a$ is replaced with a distribution $\mbox{\sf Norm}(a,\sigma^2)$, where $\sigma$ is the standard deviation of the error.

Then, rather than computing a function $f(a)$ based on the empirical measurement $a$, instead compute the posterior of $f(X)$ given a random variable $X \sim \mbox{\sf Norm}(a,\sigma^2)$.

To get back to something on the same scale, it’s typical to use a Bayesian point estimate that minimizes expected square error, namely the expectation. Now, instead of a point $f(a)$, such as an AUC statistic, defined precisely by the observation $a$, we have a point defined by the expectation of $f(X)$ under our normal measurement error model.

$\mathbb{E}[f(X)] = \int_X f(x) \ \mbox{\sf Norm}(x|a,\sigma^2) \ dx$.

We can also compute even probabilities, such as comparing two systems with noisy measurements $a$ and $b$, with noise $\sigma$, by

$\mbox{Pr}[f(X) < f(Y)]$

${} = \int \int \mathbb{I}[f(X) < f(Y)] \ \mbox{\sf Norm}(x|a,\sigma^2) \ \mbox{\sf Norm}(y|b,\sigma^2) \ dx \ dy$.

Of course, this leaves the problem of how to estimate $\sigma$. Ideally, we'll have lots of measurements of the same thing and be able to estimate $\sigma$ directly, amounting to what's called an "empirical Bayes" estimate (note that "empirical Bayes" is not a full Bayesian method and is no more empirical than full Bayes, which is also estimated from data). Of course, we could go the full Bayes route and integrate over our posterior $p(\sigma|...)$ for $\sigma$, giving us

$\mathbb{E}[f(X)] = \int_0^{\infty} \int_X f(x) \ \mbox{\sf Norm}(x|a,\sigma^2) \ p(\sigma|...) \ dx \ d\sigma$.

As Mike Ross pointed out, the result is going to be sensitive to $\sigma$. In particular, as Mike pointed out, as measurement error goes to zero, we get our original statistic back,

$\lim_{\sigma \rightarrow 0} \mathbb{E}[f(X)] = f(a)$.

Similarly, as $\sigma$ gets big, the distributions of $f(X)$ and $f(Y)$ converge to each other if $X \sim \mbox{\sf Norm}(a,\sigma^2)$ and $Y \sim \mbox{\sf Norm}(b,\sigma^2)$.

The bottom line is that you'd like a sensible model of measurement error.

Real AUC calculations depend on more than one point. If you have a vector of points $\hat{x} \in [0,1]^N$ with discrete true values $x \in \{0, 1\}^N$, you just chain the integrals together, as in

$\mathbb{E}[\mbox{AUC}(\hat{x},x)]$

${} = \int \cdots \int \ \mbox{AUC}(y,x) \ \prod_{n=1}^N \ \mbox{\sf Norm}(y_n|\hat{x}_n,\sigma^2) \ dy_1 \cdots dy_N$.

With a normal noise model (or really any model you can sample from), it's easy to compute this integral using Monte Carlo methods. Just take a sequence of $M$ samples $y^{(m)}$, where $y^{(m)}_n$ is drawn independently from $\mbox{\sf Norm}(\hat{x}_n,\sigma^2)$, and approximate the integral required to compute the expectation by

$\frac{1}{M} \ \sum_{m=1}^M \mbox{AUC}(y^{(m)},x)$.

As the number of samples $M$ grows, this summation converges to the expectation,

$\lim_{M \rightarrow \infty} \frac{1}{M} \ \sum_{m=1}^M \mbox{AUC}(y^{(m)},x) = \mathbb{E}[\mbox{AUC}(\hat{x},x)]$.
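Here’s a sketch of that Monte Carlo computation in plain Python (the rank-based AUC implementation and the choice of `sigma` are mine, for illustration):

```python
# Monte Carlo estimate of the expected AUC under normal measurement
# error with standard deviation sigma.
import random

def auc(scores, labels):
    # rank-based AUC: fraction of (positive, negative) pairs ranked
    # correctly, counting exact ties as half
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def smoothed_auc(scores, labels, sigma, num_samples=1000, seed=42):
    # average AUC over noisy replicates of the scores
    rng = random.Random(seed)
    total = 0.0
    for _ in range(num_samples):
        noisy = [rng.gauss(s, sigma) for s in scores]
        total += auc(noisy, labels)
    return total / num_samples
```

As $\sigma \rightarrow 0$ this recovers the raw AUC, and for large $\sigma$ it drifts toward chance (0.5), matching the limits discussed above.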

### Discontinuities in ROC Calculations

October 26, 2011

Amaç Herdagdelen brought up some interesting examples on our mailing list that caused LingPipe’s ROC curve (and PR curve) implementation to provide some unexpected (and arguably wrong) results. (We then took the detailed discussion off line, and are now reporting back.)

#### Computing ROC Curves by Sorting

The basic problem is that the computation of an ROC curve involves first sorting the responses by the system’s estimated probability that a response should be positive, then incrementally computing sensitivity versus 1 – specificity operating points by working down the ranked list. You can see a worked example in the class documentation for

#### What if there are Ties?

Mike Ross realized there was a problem when you had two examples with the same score, but different reference values. For instance, a system might be looking at two Tweets and estimate a 0.98 probability that both are positive sentiment. If one was classified as positive in the reference (i.e., the gold standard) and the other negative, then there’s a problem. With one order, you get one curve, and with the other order, you get a different curve. Mike figured out we might as well just leave off the intermediate point, because we get to the same operating point after handling both of them.

#### What if there are Near Ties?

Then Amaç wrote back after noticing he had some cases that were very close, but not quite the same. This comes up with arithmetic precision all the time. In his case, it was different one-versus-all results for a multi-category classifier. He suggested treating two results as the same if they were within some small absolute value of each other. This is the typical way to check that results are the same in unit tests (or code) — it’s built into JUnit for Java and googletest for C++.

#### Discretized ROC Curve Estimates are Discontinuous

That led me (Bob) to realize that the real problem is that these measures are discontinuous in the underlying scores. Suppose I have three items, A, B and C, with true values POS, NEG, and POS. Now if I have assigned scores to them of 0.97, 0.96, and 0.98, I get a curve based on the sequence TP, TP, FP, which is a perfect score. I get the same result as the score for B moves from 0.96 to any value less than 0.97. But at the point it crosses 0.97, I wind up with a sequence TP, FP, TP, which is a whole different story.

The upshot is that area under the ROC curve (or PR curve) is not continuous in the underlying scores.
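The three-item example works out numerically like this (a plain-Python sketch using the standard rank-based AUC):

```python
# AUC for the A/B/C example jumps discontinuously as B's score
# crosses A's score of 0.97.
def auc(scores, labels):
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

labels = [1, 0, 1]                           # A = POS, B = NEG, C = POS
below = auc([0.97, 0.9699, 0.98], labels)    # B just below A: AUC = 1.0
above = auc([0.97, 0.9701, 0.98], labels)    # B just above A: AUC = 0.5
```

An arbitrarily small change in B’s score moves the AUC from 1.0 to 0.5, which is the discontinuity in question.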

#### Help? Should we Probabilistically Smooth?

Does anyone have any idea what we should do here?

I’m thinking moving to some kind of probabilistic version might be able to smooth things out in a well-defined way. For instance, if I add normally-distributed noise around each point and then integrate it out, things are smooth again.

### Domain Adaptation with Hierarchical Logistic Regression

September 29, 2011

Last post, I explained how to build hierarchical naive Bayes models for domain adaptation. That post covered the basic problem setup and motivation for hierarchical models.

### Hierarchical Logistic Regression

Today, we’ll look at the so-called (in NLP) “discriminative” version of the domain adaptation problem. Specifically, using logistic regression. For simplicity, we’ll stick to the binary case, though this could all be generalized to K-way classifiers.

Logistic regression is more flexible than naive Bayes in allowing other features (aka predictors) to be brought in along with the words themselves. We’ll start with just the words, so the basic setup look more like naive Bayes.

#### The Data

We’ll use the same data representation as in the last post, with $D$ being the number of domains and $I_d$ docs in domain $d$. Document $i$ in domain $d$ will have $N[d,i]$ tokens. We’ll assume $V$ is the size of the vocabulary.

Raw document data provides a token $x[d,i,n] \in 1{:}V$ for word $n$ of document $i$ of domain $d$. Assuming a bag-of-words representation like we used in naive Bayes, it’ll be more convenient to convert each document into a word frequency vector $u[d,i]$ of dimensionality $V$. To match naive Bayes, define a word frequency vector $u[d,i]$ with value at word (or intercept) $w$ given by

$u[d,i,w] = \sum_{n = 1}^{N[d,i]} \mathbb{I}(x[d,i,n] = w)$,

where $\mathbb{I}(\phi)$ is the indicator function, taking on value 1 if $\phi$ is true and 0 otherwise. Now that we have continuous data, we could transform it using something like TF/IDF weighting. For convenience, we’ll prefix an intercept predictor $1$ to each vector of predictors $u[d,i]$. Because it’s logistic regression, other information like the source of the document may also be included in the predictor vectors.
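As a sketch, here’s that conversion from a document’s token list to a frequency vector with the intercept prepended; the vocabulary size and token ids are made up:

```python
from collections import Counter

def freq_vector(tokens, V):
    """Bag-of-words count vector u[d,i]: an intercept predictor 1
    followed by the count of each word w in 1:V."""
    counts = Counter(tokens)
    return [1] + [counts.get(w, 0) for w in range(1, V + 1)]

print(freq_vector([3, 1, 3, 5], V=5))  # → [1, 1, 0, 2, 0, 1]
```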

The labeled training data consists of a classification $z[d,i] \in \{ 0, 1 \}$ for each document $i$ in domain $d$.

#### The Model, Lower Level

The main parameter to estimate is a coefficient vector $\beta[d]$ of size $V + 1$ for each domain $d$. Here, the first dimension will correspond to the intercept and the other dimensions each correspond to a word in the vocabulary.

The probability that a given document is of category 1 (rather than category 0) is given by

$\mbox{Pr}[z[d,i] = 1] = \mbox{logit}^{-1}(\beta[d]^T u[d,i])$.

The inner (dot) product is just the sum of the dimensionwise products,

$\beta[d]^T u[d,i] = \sum_{v = 0}^V \beta[d,v] \times u[d,i,v]$.

This value, called the linear prediction, can take values in $(-\infty,\infty)$. The logistic sigmoid function

$\mbox{logit}^{-1}(x) = 1/(1 + \exp(-x))$

maps the unbounded linear prediction to the range $(0,1)$, where it is taken to represent the probability that the category $z[d,i]$ of document $i$ in domain $d$ is 1.

In sampling notation, we define the category as being drawn from a Bernoulli distribution with parameter given by the transformed predictor; in symbols,

$z[d,i] \sim \mbox{\sf Bern}(\mbox{logit}^{-1}(\beta[d]^T u[d,i]))$.

Unlike naive Bayes, the logistic regression model does not model the word data $u$, instead treating it as constant.
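The prediction step is small enough to sketch directly; the coefficient and predictor values here are made up for illustration:

```python
import math

def inv_logit(x):
    """Logistic sigmoid, mapping the unbounded linear prediction to (0, 1)."""
    return 1.0 / (1.0 + math.exp(-x))

def prob_positive(beta_d, u_di):
    """Pr[z[d,i] = 1]: inverse logit of the dot product of domain d's
    coefficient vector with the document's predictors (intercept first)."""
    linear = sum(b * u for b, u in zip(beta_d, u_di))
    return inv_logit(linear)

# intercept plus two word counts; linear prediction is -1 + 2 - 1 = 0
print(prob_positive([-1.0, 2.0, -0.5], [1, 1, 2]))  # → 0.5
```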

#### The Model, Upper Levels

We are treating the coefficient vectors as estimated parameters, so we need a prior for them. We can choose just about any kind of prior we want here. We’ll only consider normal (Gaussian, L2) priors here, but the Laplace (L1) prior is also very popular. Recently, statisticians have argued for using a Cauchy distribution as a prior (very fat tails; see Gelman et al.’s paper) or combining L1 and L2 into a so-called elastic net prior. LingPipe implements all of these priors; see the documentation for regression priors.

Specifically, we are going to pool the prior across domains by sharing the priors for words $v$ across domains. Ignoring any covariance for the time being (a bad idea in general, but it’s hard to scale covariance to NLP-sized coefficient vectors), we draw the component for word $v$ of the coefficients for domain $d$ from a normal distribution,

$\beta[d,v] \sim \mbox{\sf Normal}(\mu[v],\sigma[v]^2)$.

Here $\mu[v]$ represents the mean value of the coefficient for word (or intercept) $v$ across domains and $\sigma[v]^2$ is the variance of the coefficient values across domains. A strong prior will have low variance.

All that’s left is to provide a prior for these location and scale parameters. It’s typical to use a simple centered normal distribution for the locations, specifically

$\mu[v] \sim \mbox{\sf Normal}(0,\tau^2)$,

where $\tau^2$ is a constant variance parameter. Typically, the variance $\tau^2$ is set to a constant to make the entire prior on the means weakly informative. Alternatively, we could put a prior on $\tau$ itself.

Next, we need priors for the $\sigma[v]$ terms. Commonly, these are chosen to be inverse $\chi^2$ (a specific kind of inverse gamma distribution) distributions for convenience because they’re (conditionally) conjugate. Thus the typical model would take the variance $\sigma[v]^2$ to be the parameter, giving it an inverse gamma prior,

$\sigma[v]^2 \sim \mbox{\sf InvGamma}(\alpha,\beta)$.

Gelman argues against inverse gamma priors on variance because they have pathological behavior when the hierarchical variance terms are near zero.
Gelman prefers the half-Cauchy prior, because it tends not to degenerate in hierarchical models like the inverse gamma can. The half-Cauchy is just the Cauchy distribution restricted to non-negative values (thus doubling the usual density on its support). In symbols, we generate the deviation $\sigma[v]$ (not the variance) using a half-Cauchy distribution:

$\sigma[v] \sim \mbox{HalfCauchy}()$.

And that’s about it. Of course, you’ll need some fancy-pants software to fit the whole thing, but it’s not that difficult to write.
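To make the generative story concrete, here’s a forward-sampling sketch of the upper levels; the constants (the hyperprior scale `tau` and the unit half-Cauchy scale) are placeholders, not recommendations:

```python
import math
import random

def half_cauchy(scale=1.0):
    """Sample |Cauchy(0, scale)| via the inverse CDF: scale * tan(U),
    with U uniform on (0, pi/2)."""
    return scale * math.tan(random.uniform(0.0, math.pi / 2))

def sample_coefficients(D, V, tau=5.0):
    """Draw per-word prior locations mu[v] ~ Normal(0, tau^2) and scales
    sigma[v] ~ HalfCauchy, shared across domains, then per-domain
    coefficients beta[d][v] ~ Normal(mu[v], sigma[v]^2).
    Index 0 is the intercept."""
    mu = [random.gauss(0.0, tau) for _ in range(V + 1)]
    sigma = [half_cauchy() for _ in range(V + 1)]
    return [[random.gauss(mu[v], sigma[v]) for v in range(V + 1)]
            for _ in range(D)]

beta = sample_coefficients(D=3, V=4)  # 3 domains, 5 coefficients each
```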

### Properties of the Hierarchical Logistic Regression Model

Like in the naive Bayes case, going hierarchical means we get data pooling (or “adaptation” or “transfer”) across domains.

One very nice feature of the regression model follows from the fact that we are treating the word vectors as unmodeled predictors and thus don’t have to model their probabilities across domains. Instead, the coefficient vector $\beta[d]$ for domain $d$ only needs to model words that discriminate positive (category 1) from negative (category 0) examples. Thus the correct value for $\beta[d,v]$ for a word $v$ is 0 if the word does not discriminate between positive and negative documents. Thus our hierarchical hyperprior has location 0 in order to pull the prior location $\mu[v]$ for each word $v$ to 0.

The intercept is just acting as a bias term toward one category or the other (depending on the value of its coefficient).

### Comparison to SAGE

If one were to approximate the full Bayes solution with maximum a posteriori point estimates and use Laplace (L1) priors on the coefficients,

$\beta[d,v] \sim \mbox{\sf Laplace}(\mu[v],\sigma[v]^2)$

then we recreate the regression counterpart of Eisenstein et al.’s hybrid regression and naive-Bayes style sparse additive generative model of text (aka SAGE).

Eisenstein et al.’s motivation was to represent each domain as a difference from the average of all domains. If you recast the coefficient prior formula slightly and set

$\beta[d,v] = \alpha[d,v] + \mu[v]$

and sample

$\alpha[d,v] \sim \mbox{\sf Laplace}(0,\sigma[v]^2)$

you’ll see that if the variance term $\sigma[v]$ is low enough, many of the $\alpha[d,v]$ values will be zero. Of course, in a full Bayesian approach, you’ll integrate over the uncertainty in $\alpha$, not set it to a point estimate of zero. So sparseness only helps when you’re willing to approximate and treat estimates as more certain than we have evidence for. Of course, researchers do that all the time. It’s what LingPipe’s logistic regression and CRF estimators do.

Presumably, the values of the coefficients will have non-negligible covariance. That is, I expect words like “thrilling” and “scary” to covary. For thriller movies, they’re positive sentiment terms and for appliances, rather negative. The problem with modeling covariance among lexical items is that we usually have tens of thousands of them. Not much software can handle a 10K by 10K matrix.

Another way to add covariance, rather than modeling it directly among the “fixed effects,” is to convert to random effects. For instance, a model like latent Dirichlet allocation (LDA) models covariance of words by grouping them into topics: two words with high probabilities in the same topic will have positive covariance.

I’m a big fan of Gelman and Hill’s multilevel regression book. You definitely want to master that material before moving on to Gelman et al.’s Bayesian Data Analysis. Together, they’ll give you a much deeper understanding of the issues in hierarchical (or more generally, multilevel) modeling.

### Domain Adaptation with Hierarchical Naive Bayes Classifiers

September 22, 2011

This will be the first of two posts exploring hierarchical and multilevel classifiers. In this post, I’ll describe a hierarchical generalization of naive Bayes (what the NLP world calls a “generative” model). The next post will explore hierarchical logistic regression (called a “discriminative” or “log linear” or “max ent” model in NLP land).

Within natural language processing, the term “domain adaptation” has come to mean something like “using data in one domain to improve results in another domain”. Examples that have received attention include positive/negative sentiment for reviews of different kinds of products, with, say, DVDs being one domain and kitchen appliances another (Blitzer et al. 2007). Other examples include classifying sequences of tokens as names across different genres of text, such as newswire and transcribed speech (Daumé III 2007, Finkel and Manning 2009).

The work I’ve seen on adaptation has largely been in the supervised case, but what we’re going to do here can be unsupervised, in which case you get something like soft K-means clustering. It can also be semi-supervised, with some documents having category labels and some not having labels. I’ve talked about semi-supervised models before, too, as recently as the very last blog post, covering Tang and Lease’s semi-supervised annotation model and in our replication of Nigam et al.’s semi-supervised naive Bayes EM estimator in the LingPipe tutorial on semi-supervised naive Bayes classification.

### Hierarchical Models

The standard Bayesian approach to domain adaptation is to build a hierarchical model. The hierarchy in this case is the hierarchy of domains. We’ll just consider the flat case, where there’s a collection $D$ of what we’ll call “domains”. In the next post, we’ll generalize this assumption to richer organizations of domains with cross-cutting features.

### Naive Bayes

I’ve blogged before about properly Bayesian approaches to naive Bayes classifiers, specifically about Bayesian naive Bayes inference and collapsed Gibbs sampling for computing it.

### Pooling, Exchangeability and Hierarchy

What we’re going to do is let information in one domain, such as which words are positive or which phrases are people names, inform estimates for other domains. This is sometimes referred to as “pooling” information across sub-populations in statistics. A simple example from political science that illustrates the general flavor of models we’re working on at Columbia is allowing estimates of the effect of income on voting in one state to inform estimates in other states, while still allowing the states to differ (an example is given in Gelman et al.’s What’s wrong with Connecticut?).

Theoretically, what we’re going to do amounts to assuming that the domains themselves are exchangeable. We use exchangeable models when we don’t have any information to distinguish the domains. We’ll discuss the case where we do have such information in the next post.

In Bayesian models, we treat exchangeable variables as being drawn from a population distribution (aka, a prior). In hierarchical models, we model the population directly. The reason it’s called hierarchical is that in a proper Bayesian model, there must be a prior distribution on the domain model. So we get a prior on the parameters of the domain model, which then acts as a prior on the estimates for each domain. The pooling is achieved by information flow through the dependencies in the joint distribution implied by the graphical model. In practice, this is simpler than it sounds.

### Hierarchical Naive Bayes

Once we’ve got our heads around the Bayesian formulation of naive Bayes, extending it to hierarchical models is straightforward. We’re going to use the language of documents and tokens in describing naive Bayes, but it’s really a general multinomial model, so don’t assume this is only valid for text classifiers.

For simplicity, we’ll assume we have a binary classifier. I’ll say more about the multi-category case later and in the next post.

#### The Data

Let $D$ be the number of domains. Let $I_d$ be the number of documents in domain $d$ and $N_{d,i}$ the number of tokens in document $i \in 1{:}I_d$ of domain $d \in 1{:}D$. Let $V$ be the size of the vocabulary from which the tokens are drawn.

The raw document data we have provides a token $x[d,i,n] \in 1{:}V$ for each token $n \in 1{:}N_{d,i}$ for each document $i \in 1{:}I_d$ for each domain $d \in 1{:}D$.

The labeled training data we have is a classification $z[d,i] \in \{ 0, 1 \}$ for each document $i \in 1{:}I_d$ in each domain $d \in 1{:}D$.

#### The Model, Lower Levels

The hierarchical naive Bayes model is a directed graphical model and, as such, is easy to describe with sampling notation.

We have a parameter $\pi[d]$ for the prevalence of category 1 documents in domain $d \in 1{:}D$. It is sampled from a beta distribution with prior count parameters $\alpha_{\pi}, \beta_{\pi}$ (more about which later),

$\pi[d] \sim \mbox{\sf Beta}(\alpha_{\pi}, \beta_{\pi})$.

The categories assigned to each document are sampled from a Bernoulli distribution parameterized by prevalence in the document’s domain,

$z[d,i] \sim \mbox{\sf Bern}(\pi[d])$.

Each domain $d$ and outcome category $c \in \{ 0, 1 \}$ will have its own naive Bayes model with multinomial $V$-simplex parameter $\phi[d,c] \in [0,1]^V$, such that

$\sum_{v = 1}^V \phi[d,c,v] = 1$,

describing the distribution of vocabulary items $v \in 1{:}V$ in category $c \in \{ 0, 1 \}$ in domain $d \in 1{:}D$.

These category word distributions for each domain are drawn from a Dirichlet prior $\gamma[c]$ appropriate to the outcome category $c \in \{ 0, 1 \}$,

$\phi[d,c] \sim \mbox{\sf Dir}(\gamma[c])$.

We then model the tokens in the standard way for naive Bayes, as being drawn independently (that’s the “naive” part) from the discrete distribution associated with their domain $d$ and the category $z[d,i]$ of document $i$,

$x[d,i,n] \sim \mbox{\sf Disc}(\phi[d, z[d,i]])$.
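A forward-sampling sketch of this lower level; the prevalence and word distributions below are invented for illustration:

```python
import random

def sample_document(pi_d, phi_d, N):
    """Draw a category z[d,i] ~ Bern(pi_d), then N tokens i.i.d. from
    the category's word simplex phi_d[z] over vocabulary items 1:V."""
    z = 1 if random.random() < pi_d else 0
    tokens = random.choices(range(1, len(phi_d[z]) + 1),
                            weights=phi_d[z], k=N)
    return z, tokens

# hypothetical 3-word vocabulary; category 0 favors word 1, category 1 word 3
phi_d = [[0.7, 0.2, 0.1], [0.1, 0.2, 0.7]]
z, tokens = sample_document(0.5, phi_d, N=10)
```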

#### The Model, Upper Levels

So far, that’s only the lower level of the model. As I mentioned earlier, the priors characterize the populations of prevalences and vocabulary distributions for categories across domains. And we’re using hierarchical models on both sides of naive Bayes, for the categories themselves and for the topic distributions. There are many choices for how to model these populations. We’ll go with a very simple model that only characterizes prior means and variance, not prior covariance; as an example of a prior modeling covariance for multinomials, see the prior for topics in Lafferty and Blei’s correlated topic model.

It’s easier to start with the prevalences. The prior model here will model the mean prevalence of outcome 1 (say, the “positive” outcome for sentiment) across domains, as well as its variance. The mean of the density $\mbox{\sf Beta}(\alpha_{\pi},\beta_{\pi})$ is $\alpha_{\pi}/(\alpha_{\pi} + \beta_{\pi})$ and the variance is inversely related to the total prior count $\alpha_{\pi} + \beta_{\pi}$.

For convenience, we’ll reparameterize $\alpha_{\pi}, \beta_{\pi}$ in terms of total prior count $\kappa_{\pi} \in (0,\infty)$ and prior mean $\psi_{\pi} \in [0,1]$, by setting

$\alpha_{\pi} = \psi_{\pi} \kappa_{\pi}, \ \ \beta_{\pi} = (1 - \psi_{\pi}) \kappa_{\pi}$.
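The reparameterization is a two-line function; a quick sketch with made-up numbers, checking that the mean comes back out as $\psi_{\pi}$:

```python
def beta_params(psi, kappa):
    """Recover the standard Beta(alpha, beta) parameters from the
    prior mean psi and total prior count kappa."""
    return psi * kappa, (1.0 - psi) * kappa

alpha, beta = beta_params(0.3, 10.0)
print(alpha, beta)             # → 3.0 7.0
print(alpha / (alpha + beta))  # → 0.3, recovering the prior mean
```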

We’ll put a simple conjugate uniform prior on the prior mean $\psi_{\pi}$, taking

$\psi_{\pi} \sim \mbox{\sf Beta}(1,1)$.

(Note that $\mbox{\sf Beta}(1,1)$ is uniform on its support, $[0,1]$.)

We’ll take a very weakly informative prior favoring high variance (low prior count) on the total prior counts,

$\kappa_{\pi} \sim \mbox{\sf Pareto}(0.5, 1.5)$,

where for $x > 0.5$,

$\mbox{\sf Pareto}(x|0.5,1.5) \propto x^{-1.5}$.

The effect of this decision is that the estimate for prevalence $\pi[d]$ in a single domain $d$ will be somewhere between the overall average $\psi_{\pi}$ and the training data average, depending on the relative strength $\kappa_{\pi}$ estimated for the prior and the number of training examples for domain $d$. (A standard shortcut here would be to use either cross-validation or “empirical Bayes” to set the priors $\alpha_{\pi}, \beta_{\pi}$.)

We do the same thing for the Dirichlet prior, reparameterizing the prior count $V$-vector $\gamma[c]$ for vocabulary items for outcome $c \in \{0,1\}$ as a prior mean $V$-simplex $\theta[c]$ and prior total count $\rho[c]$, setting

$\gamma[c] = \rho[c] \theta[c]$.

We set a uniform prior on the prior vocabulary distribution $\theta[c]$,

$\theta[c] \sim \mbox{\sf Dir}(1)$,

and another weakly informative Pareto prior on the prior counts,

$\rho[c] \sim \mbox{\sf Pareto}(0.5,1.5)$.

As the mathematicians are wont to say, Q.E.D.

### Tell Us What We’ve Won

What we do get is sharing of discriminative terms across domains. For instance, with polarity, positive term weights in each domain affect the prior, and the prior affects the average in each domain. The prior’s effect on estimates goes by the name “smoothing” in NLP. More specifically, the prior pulls the estimate for each domain and category toward the prior mean for that category by an amount inversely related to the prior variance (low prior variance exerting a stronger smoothing effect) and inversely related to the data count (with higher data counts, the prior’s effect is weaker).
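For the beta-Bernoulli side of the model, that pull toward the prior mean is just the posterior mean; a minimal sketch with made-up counts:

```python
def smoothed_estimate(successes, n, prior_mean, prior_count):
    """Beta-binomial posterior mean: the data proportion successes/n
    pulled toward prior_mean, with the prior acting like prior_count
    extra observations."""
    return (successes + prior_mean * prior_count) / (n + prior_count)

print(smoothed_estimate(3, 4, 0.5, 10.0))      # small n: pulled hard toward 0.5
print(smoothed_estimate(300, 400, 0.5, 10.0))  # large n: stays near 0.75
```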

We also get sharing of the non-discriminative terms among category 0 instances across domains and sharing among non-discriminative terms among category 1 instances.

This sharing falls out of the fact that the model has to estimate the whole vocabulary of non-discriminative terms in both the negative and positive vocabulary priors $\theta[0]$ and $\theta[1]$.

For instance, we’d expect “awesome” and “crap” to have a very different distribution in positive and negative sentiment product reviews, whereas “the” and “of” should be less discriminative.

### Turtles All the Way Up

The next level here would be to add a more informative prior for the prior per-category vocabulary distribution $\theta[c]$. A very good place to start would be with a prior mean distribution estimated from a large corpus of text without category labels. That would put an empirical Bayes step into the mix and make the prior over priors much more informative (as is, we’ve set it to be symmetric). We could apply full Bayesian inference by just building the unsupervised data into the model with what the statisticians like to call “missing” labels.

We could do the same thing for the prior mean for prevalence of categories if we have samples of other domains we could use.

One nice feature of hierarchical models is that it’s easy to extend the hierarchy (for an extreme example, see Li and McCallum’s paper on their aptly named technique, pachinko allocation).

It’s trivial to extend this hierarchical naive Bayes model to the multi-category setting. We kept things simple here because it’ll greatly simplify the next post on multilevel modeling with logistic regression. In the naive Bayes setting, just replace the Bernoulli distribution with a discrete distribution and the beta distribution with a Dirichlet.

It’s also trivial to extend to the semi-supervised case. Any number of variables may be “missing” in a directed graphical model without hampering inference (theoretically it’s simple; computationally it requires more bookkeeping and perhaps more samplers). As we mentioned earlier, this is especially helpful in the setting where we’re estimating hyperpriors (here the top-level vocabulary prior over per-category vocabulary priors over per-category/per-domain vocabulary distributions).

### Empirical Evaluations?

This will work. The hierarchical modeling strategy pretty much always works. The reason is that with the right hyper-hyper priors, it’ll reduce to the original model. The real question is: will it help?

We should see big gains in small count data.

I haven’t done any evaluations myself. If anyone else has evaluated hierarchical naive Bayes, please say so in the comments or e-mail me and I’ll write a link into the post itself.

A good data set would be Dredze et al.’s multi-domain sentiment data set. I’d at least like to see the results of this relatively simple and standard naive Bayes model and the one I’ll describe in the next post before delving into “deep” models a la (Glorot et al. 2011).

Wow, that was all only four lines of notes in my notebook. I wasn’t expecting this to run so long! The next post has six lines of notes, so watch out.

### Tang and Lease (2011) Semi-Supervised Consensus Labeling for Crowdsourcing

September 12, 2011

I came across this paper, which, among other things, describes the data collection being used for the 2011 TREC Crowdsourcing Track:

But that’s not why we’re here today. I want to talk about their modeling decisions.

Tang and Lease apply a Dawid-and-Skene-style model to crowdsourced binary relevance judgments for highly-ranked system responses from a previous TREC information retrieval evaluation. The workers judge document/query pairs as highly relevant, relevant, or irrelevant (though highly relevant and relevant are collapsed in the paper).

The Dawid and Skene model was relatively unsupervised, imputing all of the categories for items being classified as well as the response distribution for each annotator for each category of input (thus characterizing both bias and accuracy of each annotator).

### Semi Supervision

Tang and Lease exploit the fact that in a directed graphical model, EM can be used to impute arbitrary patterns of missing data. They use this to simply add some known values for categories (here true relevance values). Usually, EM is being used to remove data, and that’s just how they pitch what they’re doing. They contrast the approach of Crowdflower (nee Dolores Labs) and Snow et al. as fully supervised. They thus provide a natural halfway point between Snow et al. and Dawid and Skene.

### Good Results vs. NIST Gold

The key results are in the plots in figures 4 through 7, which plot performance versus amount of supervision (as well as fully unsupervised and majority vote approaches). They show the supervision helping relative to the fully unsupervised approach and the approach of training on just the labeled data.

Another benefit of adding supervised data (or adding unsupervised data if viewed the other way) is that you’ll get better estimates of annotator responses (accuracies and biases) and of topic prevalences.

### Really Gold?

They get their gold-standard values from NIST, and the notion of relevance is itself rather vague and subjective, so the extra labels are only as golden as the NIST annotators. See below for more on this issue.

### Voting: Quantity vs. Quality

Tang and Lease say that voting can produce good results with high quality annotators. It’ll also produce good results with a high quantity of annotators of low quality. As long as their results are independent enough, at least. This is what everyone else has seen (me with the Snow et al. data and Vikas Raykar et al. very convincingly in their JMLR paper).
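The quantity-versus-quality tradeoff is easy to check under the (strong) assumption of fully independent annotators; this sketch computes the chance a majority vote is correct:

```python
from math import comb

def majority_accuracy(n, p):
    """P(the majority of n independent annotators, each correct with
    probability p, gets the right answer); n is assumed odd so there
    are no ties."""
    return sum(comb(n, k) * p ** k * (1 - p) ** (n - k)
               for k in range(n // 2 + 1, n + 1))

print(majority_accuracy(3, 0.9))    # a few high-quality annotators: 0.972
print(majority_accuracy(101, 0.6))  # many low-quality annotators catch up
```

This is just the Condorcet jury theorem in miniature: as long as each annotator is better than chance and the votes are independent, accuracy climbs toward 1 as the number of annotators grows.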

### Regularized Estimates (vs. MLE vs. Bayesian)

I think it’d help if they regularized rather than took maximum likelihood estimates. Adding a bit of bias from regularization often reduces variance and thus expected error even more. It helps with fitting EM, too.

For my TREC entry, I went whole hog and sampled from the posterior of a Bayesian hierarchical model which simultaneously estimates the regularization parameters (now cast as priors) along with the other parameters.

I also use Bayesian estimates, specifically posterior means, which minimize expected squared error. MLE for the unregularized case and maximum a posterior (MAP) estimates for the regularized case can both be viewed as taking posterior maximums (or modes) rather than means. These can be pretty different for the kinds of small count beta-binomial distributions used in Dawid and Skene-type models.
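Here’s a small sketch of how far apart the posterior mean and MAP mode can be for a beta posterior with small counts; the prior and the counts are made up:

```python
def beta_mean_and_mode(successes, failures, alpha=2.0, beta=2.0):
    """Posterior mean and MAP mode under a Beta(alpha, beta) prior on a
    binomial success probability.  The mode formula requires a, b > 1."""
    a, b = alpha + successes, beta + failures
    mean = a / (a + b)
    mode = (a - 1) / (a + b - 2)
    return mean, mode

# two successes, no failures: mean 4/6 ≈ 0.667 vs. mode 3/4 = 0.75
print(beta_mean_and_mode(2, 0))
```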

How in the world did they get a Mechanical Turk worker with an accuracy of 0 over nearly 100 responses? That’s very, very adversarial. I get higher accuracy estimates using their data for TREC and don’t get very good agreement with the NIST gold standard, so I’m really wondering about this figure and the quality of the NIST judgments.

### Active Learning

Choosing labels for items on the margin of a classifier is not necessarily the best thing to do for active learning. You need to balance uncertainty with representativeness, or you’ll do nothing but label a sequence of outliers. There’s been ongoing work by John Langford and crew on choosing the right balance here.

Vikas Raykar et al. in their really nice JMLR paper add a regression-based classifier to the annotators. I think this is the kind of thing Tang and Lease are suggesting in their future work section. They cite the Raykar et al. paper, but oddly not in this context, which for me, was its major innovation.

### Not Quite Naive Bayes

Tang and Lease refer to the Dawid and Skene approach as “naive Bayes”, which is not accurate. I believe they’re thinking of generating the labels as analogous to generating tokens. But the normalization for that is wrong, being over annotators rather than over annotator/label pairs. If they had a term estimating the probability of an annotator doing an annotation, then it would reduce to naive Bayes if they allow multiple annotations by the same annotator independently (which they actually consider, but then rule out).

So it’s odd to see the Nigam et al. paper on semi-supervised naive Bayes text classification used as an example, as it’s not particularly relevant, so to speak. (I really like Nigam et al.’s paper, by the way — our semi-supervised naive Bayes tutorial replicates their results with some more analysis and some improved results.)

### Two-Way vs. K-Way Independence

Another nitpick is that it’s not enough to assume every pair of workers is independent. The whole set needs to be independent, and these conditions aren’t the same. (I was going to link to the Wikipedia article on independent random variables, but it only considers the pairwise case. So you’ll have to go to a decent probability theory textbook like DeGroot and Schervish or Larsen and Marx, where you’ll find examples of three variables that are not mutually independent even though each pair is independent.)
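The standard three-variable counterexample is two fair coins and their XOR; a sketch verifying pairwise independence without mutual independence by enumeration:

```python
from itertools import product

# X and Y are fair coins, Z = X XOR Y; the four outcomes are equally likely.
outcomes = [(x, y, x ^ y) for x, y in product([0, 1], repeat=2)]

def prob(event):
    """Probability of an event under the uniform distribution on outcomes."""
    return sum(1 for o in outcomes if event(o)) / len(outcomes)

# Each pair is independent, e.g. X and Z:
assert prob(lambda o: o[0] == 1 and o[2] == 1) == \
    prob(lambda o: o[0] == 1) * prob(lambda o: o[2] == 1)

# But the triple is not: Z is determined by X and Y, so
# P(X=1, Y=1, Z=1) = 0, not (1/2)^3.
assert prob(lambda o: o == (1, 1, 1)) == 0.0
```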

### One Last Nitpick

A further nitpick is equation (6), the second line of which has an unbound i in the p[i] term. Instead, i needs to be bound to the true category for instance m.

### Synthetic Data Generation?

I also didn’t understand their synthetic data generation in section 3.1. If they generate accuracies, do they take the sensitivities and specificities to be the same (in their notation, $\pi[k,0,0] = \pi[k,1,1]$)? In my (and others’) experience, there’s usually a significant bias, so that sensitivity is not equal to specificity for most annotators.