## Archive for the ‘Data Annotation’ Category

### Processing Tweets with LingPipe #2: Finding Duplicates with Hashing and Normalization

December 2, 2010

This post is about taking the CSV file format used in post #1 and eliminating the huge number of annoying duplicates or near duplicates that turn up in Twitter search results. The next entry in the series will cover more open-ended near-duplicate detection strategies such as Jaccard Distance and will introduce the very important area of evaluation.

As before, you can download the code with this tarball or get it from subversion with the command

 svn co https://aliasi.devguard.com/svn/teaching/lingpipe4twitter 

We will start with a simple ingest and representation of the CSV data and consider how to find near or exact duplicates without concern for computational efficiency. The research literature in this area is very concerned with scalable performance; I will instead focus on simpler, more ideal solutions that emphasize quality.

### Hashing Driven Deduplication

A simple way to find exact duplicates is to use a HashSet data structure in Java for comparison of tweets. Once we get the tweets into a list, we iterate over them to see whether the exact string is in the HashSet. Below is the recognition phase from src/DedupeSimpleHash.java:

    List<String> texts = parseFile(dataDir);
    HashSet<String> seenBefore = new HashSet<String>();
    for (int i = 0; i < texts.size(); ++i) {
        String tweet = texts.get(i);
        if (seenBefore.contains(tweet)) {
            ++seenCounter;
            System.out.println("Seen before at position " + i + ": " + tweet);
        }
        else {
            seenBefore.add(tweet);
        }
    }


The code iterates over the CSV-format search results, tests whether the HashSet already contains the string, and adds it if not. I could have just added all the strings and the HashSet would have behaved the same, but that is slightly automagical and might prove confusing. Below is the code for writing out the CSV format for presumably unique tweets:

    File output = new File(outDir, outFile);
    System.out.println("Writing to: " + output.toString());
    FileOutputStream stream = new FileOutputStream(output);
    OutputStreamWriter streamWriter
        = new OutputStreamWriter(stream, Strings.UTF8);
    CsvListWriter writer
        = new CsvListWriter(streamWriter, CsvPreference.EXCEL_PREFERENCE);
    ArrayList<String> row = new ArrayList<String>();
    writer.write(row); // header for csv file
    for (String tweet : seenBefore) {
        row.clear();
        row.add(tweet);
        writer.write(row);
    }
    writer.close();
    System.out.println(seenCounter + " seen of " + texts.size());



Note that I have kept the tweet in the same row as the search results. This convention will allow interaction with other components.
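As an aside on the remark above that I could have just added all the strings: HashSet.add returns false when the element was already present, so recognition and insertion collapse into one call. A self-contained sketch with made-up tweets standing in for parseFile:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.List;

public class DedupeSketch {
    public static void main(String[] args) {
        List<String> texts = Arrays.asList("a tweet", "another tweet", "a tweet");
        HashSet<String> seenBefore = new HashSet<String>();
        int seenCounter = 0;
        for (int i = 0; i < texts.size(); ++i) {
            // add() returns false when the set already contained the string
            if (!seenBefore.add(texts.get(i))) {
                ++seenCounter;
                System.out.println("Seen before at position " + i + ": " + texts.get(i));
            }
        }
        System.out.println(seenCounter + " seen of " + texts.size());  // 1 seen of 3
    }
}
```

The behavior is identical to the contains-then-add version; it just leans on the set's return value instead of a separate membership test.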

Running the program on searches/Obama.csv yields the following:

>ant dedupeSimpleHash -Ddata=searches/Obama.csv -Doutdir=runs -Doutfile=Obama.simpleDedupeHash.csv
Buildfile: build.xml

compile:

jar:

dedupeSimpleHash:
[java] 2
[java] Seen before at position 48: RT @rolandsmartin: President Obama: 'I Don't Think About Sarah Palin' - http://bit.ly/gXZfqN http://fb.me/NRKRnQWy
[java] Seen before at position 58: RT @New_federalists: Obama lets Islamic terrorists pour across the Mexican border, but treats US citizens as criminals at airports!!! #RESIST #REBEL !
....[snip]....
[java] Seen before at position 1490: RT @Cubachi: RT @PalinTV: Obama can't even pardon a Turkey without notes
[java] Seen before at position 1499: RT @BreakingNews: President Barack Obama pardons turkeys “Apple” and “Cider”; says feels good to prevent one “shellacking” this year
[java] Writing to: runs/Obama.simpleDedupeHash.csv
[java] 297 seen of 1500



### Duplicate Detection via Normalization

The simple deduplication approach gets rid of 297 exact duplicates, or about 20% of the tweets. Pretty good, but sorting the ‘unique’ tweets by text in a spreadsheet shows some other issues:

Latest Gallup poll: Barack Obama gets another shellacking from the Tea Party: By Nile Gardiner World Last update... http://bit.ly/dHWLod
Latest Gallup poll: Barack Obama gets another shellacking from the Tea Party: By Nile Gardiner World Last update... http://bit.ly/geCufk
Latest Gallup poll: Barack Obama gets another shellacking from the Tea Party: By Nile Gardiner World Last update... http://bit.ly/hp7In9
Latest Gallup poll: Barack Obama gets another shellacking from the Tea Party: By Nile Gardiner World Last update... http://bit.ly/iggM61
Latest Gallup poll: Barack Obama gets another shellacking from the Tea Party: By Nile Gardiner World Last update... http://bit.ly/ijwRNA


Note that your results will vary since we are not running the searches on the same day. These posts differ only in the bit.ly url, which is still pretty much the same tweet as far as I am concerned. Looking further, I see that tweets get re-tweeted with a prefix like ‘RT @user:’ pretty often as well.

RT @mattyglesias: Repealing the whole Affordable Care Act would be a mistake, but Obama should compromise and agree to scrap the death...
Repealing the whole Affordable Care Act would be a mistake, but Obama should compromise and agree to scrap the death...


It appears that removing urls and retweet prefixes will find more duplicates. That suggests normalization to make the tweets more uniform by eliminating "trivial" differences. A look at src/DedupeNormalizedHash.java shows regular expressions that replace the urls and retweet prefixes with the empty string.

    List<String> texts = parseFile(dataDir);
    HashSet<String> seenBefore = new HashSet<String>();
    for (int i = 0; i < texts.size(); ++i) {
        String tweet = texts.get(i);
        System.out.println("Before:" + tweet);
        tweet = tweet.replaceAll("\\s*RT\\s*@\\w+:\\s*", ""); // removes "RT @foobar:"
        tweet = tweet.replaceAll("https?:[^\\s]*", ""); // removes "http://foo" "https://bar"
        System.out.println("After :" + tweet);
        if (seenBefore.contains(tweet)) {
            ++seenCounter;
            System.out.println("Seen before at position " + i + ": " + tweet);
        }
        else {
            seenBefore.add(tweet);
        }
    }



Running the normalized approach, the modifications are evident:

> ant dedupeNormalizedHash -Ddata=searches/Obama.csv -Doutdir=runs -Doutfile=Obama.dedupeNormalizedHash.csv
[snip]
[java] Before:RT @rolandsmartin: President Obama: 'I Don't Think About Sarah Palin' - http://bit.ly/gXZfqN http://fb.me/NRKRnQWy
[java] After :President Obama: 'I Don't Think About Sarah Palin' -
[snip]
[java] Writing to: runs/Obama.dedupeNormalizedHash.csv
[java] 454 seen of 1500



Both patterns apply to the tweet, removing ‘RT @rolandsmartin: ’ and ‘http://bit.ly/gXZfqN http://fb.me/NRKRnQWy’. The normalization allows for identification of another 157 duplicates. Normalization is a standard technique used when working with text data. You see it in search engines, where ‘walks’, ‘walking’ and ‘walked’ are normalized to the root word ‘walk’ with a stemmer. Both documents and queries are normalized the same way, allowing a query ‘Books about walking’ to match documents that don’t mention ‘walking’ but do mention ‘walk’, ‘walks’ or ‘walked.’ Normalization can bring more trouble than it solves and should be used carefully; perhaps a future blog post is called for on our preference for character n-grams in situations where stemming is typically used.

Looking at the output of the normalized hash approach reveals more opportunities to eliminate near duplicates, and there may be more patterns to exploit, but eventually diminishing returns will bring the pattern-driven normalization effort to a close. Let’s move to more open-ended approaches that better address the long-tail side of the issue:

Has Obama written a stern letter to North Korea yet? They will respect that.
Has Obama written a stern letter to North Korea yet? They will respect that. <--- indeed!

Latest Gallup poll: Barack Obama gets another shellacking from the Tea Party
Latest Gallup poll: Barack Obama gets another shellacking from the Tea Party: By Nile Gardiner W...  #tcot #p2 #tlot
Latest Gallup poll: Barack Obama gets another shellacking from the Tea Party: By Nile Gardiner W...  #tcot #tlot #p2
Latest Gallup poll: Barack Obama gets another shellacking from the Tea Party: By Nile Gardiner World Last update...


The first pair differ only in an additional ‘<--- indeed!’, which is not a pattern likely to generalize much beyond this case. The second example shows that the basic tweet has many small variations. In these situations it is useful to decide what algorithm to use to judge that the tweets are (mostly) the same.

In my next post in the series I will cover Jaccard Distance as the method of identifying near duplicates and bring in the importance of evaluation metrics to drive development.

Breck

### Need Another Label(s)!

November 23, 2010

It occurred to me while working through the math for my last post that there are situations when you not only need another label for an existing item, but need more than one to achieve an expected positive payoff.

### Prior and Posterior Odds Ratios

First, let’s summarize the result from adding another annotator. For a given image $i$, suppose the current odds (aka prior) of clean ($c_i = 1$) versus porn ($c_i = 0$) are $V:1$, which corresponds to a probability of being clean of $\mbox{Pr}[c_i =1] = V/(1+V)$. For instance, if the odds are 4:1 that the image is clean, the probability it is clean is 4/5.

Now suppose we get a label $y_{i,j}$ from an annotator $j$ for image $i$. We update the odds to include the annotator’s label, getting new (posterior) odds $V'$ by the following formula:

$V' = V \times \mbox{Pr}[y_{i,j}|c_i=1] \, / \, \mbox{Pr}[y_{i,j}|c_i=0]$.

We just multiply the prior odds $V$ by the likelihood ratio of the annotation $y_{i,j}$ given that $c_i = 1$ or $c_i = 0$. The new probability of a clean image is thus $V'/(1+V')$.

### The Porn Payoff

Suppose the payoff for the porn task is as follows. A porn image classified as clean has a payoff of -100, a porn image classified as porn has a payoff of 0, a clean image classified as clean has a payoff of 1, and a clean image classified as porn has a payoff of -1.

In this setup, when we have an image, we need odds of better than 100:1 (a bit more than 99% probability) that it is clean before we return the decision that it’s clean; otherwise we maximize our payoff by saying it’s porn. Unless we are 100% sure of our decision, we always have an expected loss from returning the decision that an image is clean, because the payoff is 0 for a true negative (classifying porn as porn), whereas the payoff is -1 for rejecting a non-porn image.

Now suppose that the prevalence of clean images is 20 in 21 (of course, we work with an estimate). So we start with odds of 20:1. The decision to classify the item as clean before annotation has an expected payoff of [20*1 + 1*(-100)]/21 or about -4 per decision. The decision to classify an image as porn before an annotation has a payoff of [20*(-1) + 1*0]/21 which is around -1 per decision.
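Those two expected payoffs are quick to verify in code; this sketch just redoes the arithmetic above, with the payoffs and the 20/21 prevalence taken from the example:

```java
public class PriorPayoff {
    public static void main(String[] args) {
        double pClean = 20.0 / 21.0;
        double pPorn = 1.0 / 21.0;
        // payoffs: clean-as-clean +1, clean-as-porn -1, porn-as-clean -100, porn-as-porn 0
        double sayClean = pClean * 1.0 + pPorn * -100.0;  // about -3.81 per decision
        double sayPorn = pClean * -1.0 + pPorn * 0.0;     // about -0.95 per decision
        System.out.println("say clean: " + sayClean);
        System.out.println("say porn:  " + sayPorn);
    }
}
```

As the text says, before any annotation both blanket decisions lose money, and blanket "porn" loses less.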

### Need Multiple Annotators

Clearly we need to annotate an item before we can have a positive expected payoff.

Suppose for the sake of argument (and easy arithmetic) that we have an annotator with sensitivity 0.9 (they correctly classify 90% of the clean images as clean and reject 10% of the clean images as porn) and specificity 0.8 (they correctly classify 80% of the porn images as porn and let 20% through as clean).

In this context, we actually need more than one annotator to annotate an item before we get a positive expected payoff. To see why, we need to work through our expectations properly. First, we start with 20:1 odds (probability 20/21) that an image is clean, so we can expect to see 20 clean images for each porn image. Then we have to look at the annotator’s expected response. If the image is clean, there’s a 90% chance the annotator says it’s clean and a 10% chance they say it’s porn. If the image is porn, there’s an 80% chance they say it’s porn and a 20% chance they say it’s clean. That lets us work out the expectations for true category and response a priori. For instance, the chance the image is clean and the annotator says it’s porn is 20/21 * 0.1 = 2/21.

We then calculate the updated odds under both possible annotator responses and figure out the new odds and weight them by our expectations before seeing the label.

I’ll let you work out the arithmetic, but the upshot is that until you have more than one annotator, you can’t get 100:1 or better odds of an image not being porn. The key step is noting that we only get a positive expected payoff if we return that the image is clean, but the posterior odds after a single clean label are only 20/1 * 0.9/0.2 = 90:1, which falls short of 100:1. And don’t forget to factor in that we only land in the happy situation of getting a clean label around 87% of the time (20/21 * 0.9 + 1/21 * 0.2), so the expected gain in payoff from the annotation is less than the improvement from 20:1 to 90:1 we get in the ideal case.
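To make the key step concrete, here is the odds update with the example’s numbers (prior 20:1, sensitivity 0.9, specificity 0.8); neither possible label gets us past 100:1, which is why a single annotator can’t yield a positive-payoff clean decision:

```java
public class PosteriorOdds {
    public static void main(String[] args) {
        double priorOdds = 20.0;  // 20:1 that the image is clean
        double sens = 0.9;        // Pr[annotator says clean | image is clean]
        double spec = 0.8;        // Pr[annotator says porn  | image is porn]
        // posterior odds = prior odds times the likelihood ratio of the observed label
        double afterCleanLabel = priorOdds * sens / (1.0 - spec);
        double afterPornLabel = priorOdds * (1.0 - sens) / spec;
        System.out.println("after a clean label: " + afterCleanLabel + ":1");  // about 90:1
        System.out.println("after a porn label:  " + afterPornLabel + ":1");   // about 2.5:1
    }
}
```

A second agreeing clean label would multiply the 90:1 odds by another factor of 4.5, pushing well past the 100:1 threshold.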

In reality, you’re likely to have slightly better porn/not-porn annotators than this because it’s a relatively easy decision problem. But you’re also likely to have spammers, as we mentioned last time, so what you actually face is a mixed pool of spammers and cooperators.

### Unknown Annotators

I’ll say again that one of the pleasant properties of the hierarchical model extension of Dawid and Skene is that it allows us to predict the behavior for a new unknown annotator. This is particularly useful in a mechanical Turk setting where we can’t choose our annotators directly (though we can feed items to an annotator if we write our own web app and they volunteer to do more).

We just sum over predictions from each posterior sample of the hyperpriors, then sample an annotator from that, calculate what the outcome would be for them, and average the result. What’s extra cool is that this includes all the extra dispersion in posterior inference that’s warranted due to our uncertainty in the hyperpriors representing the population of annotators.

### Information Gain and Task-Based Costs: Biased versus Spam Annotators

November 22, 2010

Following on from Sheng et al.’s Get Another Label? paper, Panos and crew have a Human Computation Workshop paper,

The motivation for the new paper is to try to separate bias from accuracy. That is, some annotators would try hard, but the bias in their answers would give them an overall low exact accuracy. But their responses can still be useful if we correct for their bias. Luckily, the Dawid-and-Skene-type models for estimating and adjusting for annotators’ accuracies do just that.

### G, PG, R, or X?

As an example, Ipeirotis et al. consider having workers on Amazon Mechanical Turk classify images into G, PG, R, and X ratings following MPAA ratings guidelines.

This is really an ordinal classification problem where the approach of Uebersax and Grove (1993) seems most appropriate, but let’s not worry about that for right now. We can imagine a purely categorical example such as classifying tokens in context based on part-of-speech or classifying documents based on a flat topic list.

### Bias or Inaccuracy?

Uebersax and Grove discuss the bias versus accuracy issue. It’s easy to see in a two-category case that sensitivity and specificity (label response for 1 and 0 category items) may be reparameterized as accuracy and bias. Biased annotators have sensitivities that are lower (0 bias) or higher (1 bias) than their specificities.

What Ipeirotis et al. point out is that you can derive information from a biased annotator if their biases can be estimated. As they show, a model like Dawid and Skene’s (1979) performs just such a calculation in its “weighting” of annotations in a generative probability model. The key is that it uses the information about the likely response of an annotator given the true category of an item to estimate the category of that item.

### Decision-Theoretic Framework

Ipeirotis et al. set up a decision-theoretic framework where there is a loss (aka cost, which may be negative, and thus a gain) for each decision based on the true category of the item and the classification.

One nice thing about the image classification task is that it makes it easy to think about the misclassification costs. For instance, classifying an X-rated image as G and having it land on a children’s site is probably a more costly error than rating a G image as PG and banning it from the same site. In the ordinal setting, there’s a natural scale, where rating an X-rated image as R is closer to the true result than PG or G.

### Getting Another Label

Consider a binary setting where the true category of item $i$ is $c_i$, the prevalence (overall population proportion) of category 1 items is $\pi$, and annotator $j$ has sensitivity (accuracy on 1 items; response to positive items) of $\theta_{1,j}$ and a specificity (accuracy on 0 items; 1 – response to negative items) of $\theta_{0,j}$. If $y_{i,j}$ is the response of annotator $j \in 1{:}J$, then we can calculate probabilities for the category $c_i$ by

$\mbox{Pr}[c_i = 1 | y_i] \propto \pi \times \prod_{j=1}^J \theta_{1,j}^{y_{i,j}} \times (1 - \theta_{1,j})^{1 - y_{i,j}}$ and

$\mbox{Pr}[c_i = 0 | y_i] \propto (1 - \pi) \times \prod_{j=1}^J \theta_{0,j}^{1 - y_{i,j}} \times (1 - \theta_{0,j})^{y_{i,j}}$.

The thing to take away from the above formula is that it reduces to an odds ratio. Before annotation, the odds ratio is $\pi / (1-\pi)$. For each annotator, we multiply the odds ratio by the annotator’s ratio, which is $\theta_{1,j} / (1 - \theta_{0,j})$ if the label is $y_{i,j} = 1$, and is $(1 - \theta_{1,j})/\theta_{0,j}$ if the label is negative.
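In code, the whole update is a running product of likelihood ratios. A sketch (the prevalence and annotator accuracies here are made-up illustration values, not estimates from any data):

```java
public class OddsUpdate {
    // posterior odds of category 1 after labels from J annotators:
    // prior odds times each annotator's likelihood ratio for its label
    static double posteriorOdds(double prevalence, double[] sens, double[] spec, int[] labels) {
        double odds = prevalence / (1.0 - prevalence);
        for (int j = 0; j < labels.length; ++j) {
            odds *= (labels[j] == 1)
                ? sens[j] / (1.0 - spec[j])      // label 1: theta_1 / (1 - theta_0)
                : (1.0 - sens[j]) / spec[j];     // label 0: (1 - theta_1) / theta_0
        }
        return odds;
    }

    public static void main(String[] args) {
        double[] sens = {0.9, 0.9};
        double[] spec = {0.8, 0.8};
        // two agreeing positive labels starting from even (1:1) odds
        System.out.println(posteriorOdds(0.5, sens, spec, new int[]{1, 1}));  // about 20.25
    }
}
```

Each agreeing positive label from these annotators multiplies the odds by 0.9/0.2 = 4.5.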

### Random Annotators Don’t Affect Category Estimates

Spammers in these settings can be characterized by having responses that do not depend on input items. That is, no matter what the true category, the response profile will be identical. For example, an annotator could always return a given label (say 1), or always choose randomly among the possible labels with the same distribution (say 1 with 20% chance and 0 with 80% chance, corresponding, say, to just clicking some random checkboxes on an interface).

In the binary classification setting, if annotations don’t depend on the item being classified, we have specificity = 1 – sensitivity, or in symbols, $\theta_{0,j} = 1 - \theta_{1,j}$ for annotator $j$. That is, there’s always a $\theta_{1,j}$ chance of returning the label 1 no matter what the input.

Plugging this into the odds-ratio formulation above, it’s clear that adding such a spammy annotator has no effect, because $\theta_{1,j}/(1 - \theta_{0,j}) = 1$ and $(1-\theta_{1,j})/\theta_{0,j}=1$.

### Cooperative, Noisy and Adversarial Annotators

In the binary case, as long as the sum of sensitivity and specificity is greater than one, $\theta_{0,j} + \theta_{1,j} > 1$, there is positive information to be gained from an annotator’s response.

If the sum is 1, the annotator’s responses are pure noise and there is no information to be gained by their response.

If the sum is less than 1, the annotator is being adversarial. That is, they know the answers and are intentionally returning the wrong answers. In this case, their annotations will bias the category estimates.

### Information Gain

The expected information gain from having an annotator label an item is easily calculated. We need to calculate the probability of true category and then probability of response and figure out the odds of each and the contribution to our overall estimates.

The random variable in question is $c_i$, the category of item $i$. We will assume a current odds ratio after zero or more annotations of $\phi/(1-\phi)$ and consider the so-called information gain from observing $y_{i,j}$, the label provided for item $i$ by annotator $j$,

$\mbox{H}[c_i] - \mathbb{E}[\mbox{H}[c_i|y_{i,j}]]$,

where the expectation is with respect to our model’s posterior.

Expanding the expectation in the second term gives us

$\mathbb{E}[\mbox{H}[c_i|y_{i,j}]] = \mbox{Pr}[y_{i,j} = 1] \times \mbox{H}[c_i|y_{i,j}=1] + \mbox{Pr}[y_{i,j} = 0] \times \mbox{H}[c_i|y_{i,j}=0]$

The formulas for the terms inside the entropy are given above. As before, we’ll calculate the probabilities of responses using our model posteriors. For instance, carrying this through normalization, the probabilities on which the expectations are based are

$\mbox{Pr}[y_{i,j} = 1] = Q_1/(Q_1+Q_2)$, and

$\mbox{Pr}[y_{i,j} = 0] = Q_2/(Q_1 + Q_2)$, where

$Q_1 = \phi \times \theta_{1,j} \ + \ (1-\phi) \times (1 - \theta_{0,j})$, and

$Q_2 = (1-\phi) \times \theta_{0,j} \ + \ \phi \times (1 - \theta_{1,j})$.

so that the probability the annotation is 1 is proportional to the sum of the probability that the true category is 1 (here $\phi$) and the response was correct ($\theta_{1,j}$), and the probability that the true category is 0 ($1 - \phi$) and the response was incorrect ($1 - \theta_{0,j}$).
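Putting the pieces together, here is a sketch of the expected information gain (entropy in bits; the annotator accuracies below are illustrative numbers of mine, chosen so that the second call is a spammer with $\theta_{0,j} = 1 - \theta_{1,j}$):

```java
public class InfoGain {
    static double entropyBits(double p) {
        if (p <= 0.0 || p >= 1.0) return 0.0;
        return -(p * Math.log(p) + (1.0 - p) * Math.log(1.0 - p)) / Math.log(2.0);
    }

    // expected reduction in entropy of c_i from seeing annotator j's label
    static double infoGain(double phi, double sens, double spec) {
        double q1 = phi * sens + (1.0 - phi) * (1.0 - spec);  // Pr[y = 1]; note q1 + q2 = 1
        double q2 = (1.0 - phi) * spec + phi * (1.0 - sens);  // Pr[y = 0]
        double phiIf1 = phi * sens / q1;                      // Pr[c = 1 | y = 1]
        double phiIf0 = phi * (1.0 - sens) / q2;              // Pr[c = 1 | y = 0]
        return entropyBits(phi) - (q1 * entropyBits(phiIf1) + q2 * entropyBits(phiIf0));
    }

    public static void main(String[] args) {
        System.out.println(infoGain(0.5, 0.9, 0.8));  // informative annotator: positive gain
        System.out.println(infoGain(0.5, 0.7, 0.3));  // spammer: gain is essentially zero
    }
}
```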

It’s easy to see that spam annotators who have $\theta_{0,j} = 1 - \theta_{1,j}$ provide zero information gain: as we showed in the last section, $p(c_i|y_{i,j}) = p(c_i)$ when annotator $j$ provides random responses, so $\mbox{H}[c_i|y_{i,j}] = \mbox{H}[c_i]$.

### Decision-Theoretic Framework

Ipeirotis et al. go one step further and consider a decision-theoretic context in which the penalties for misclassifications may be arbitrary numbers and the goal is minimizing expected loss (equivalently maximizing expected gain).

Rather than pure information gain, the computation would proceed through the calculation of expected true positives, true negatives, false positives, and false negatives, each with a weight.

The core of Bayesian decision theory is that expected rewards are always improved by improving posterior estimates. As long as an annotator isn’t spammy, their contribution is expected to tighten up our posterior estimate and hence improve our decision-making ability.

Suppose we have weights $L_{k,k'}$, which are the losses for classifying an item of category $k$ as being of category $k'$. Returning to the binary case, consider an item whose current estimated chance of being positive is $\phi$. Our expected loss is

$\mathbb{E}[\mbox{loss}] = L_{1,1} \phi^2 + L_{1,0} \phi (1-\phi) + L_{0,1} (1-\phi) \phi + L_{0,0} (1-\phi)^2$.

We are implicitly assuming that the system operates by sampling the category from its estimated distribution. This is why $\phi$ shows up twice after $L_{1,1}$, once for the probability that the category is 1 and once for the probability that it’s the label chosen.

In practice, we often quantize answers to a single category. The site that wants a porn filter on an image presumably doesn’t want a soft decision — it needs to either display an image or not. In this case, the decision criterion is to return the result that minimizes expected loss. For instance, assigning category 1 if the probability the category is 1 is $\phi$ leads to expected loss

$L_{1,1} \phi + L_{0,1} (1-\phi)$

and the loss for assigning category 0 is expected to be

$L_{0,0} (1-\phi) + L_{1,0} \phi$.

The decision is simple: return the result corresponding to the smallest loss.
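As a sketch, the quantized decision rule is just a comparison of the two expected losses above (the loss matrix here is my own rendering of the earlier porn example’s payoffs, negated, with category 1 = clean):

```java
public class MinLossDecision {
    // lossMatrix[k][kPrime]: loss for classifying a true-category-k item as kPrime
    static int decide(double phi, double[][] lossMatrix) {
        double say1 = lossMatrix[1][1] * phi + lossMatrix[0][1] * (1.0 - phi);
        double say0 = lossMatrix[0][0] * (1.0 - phi) + lossMatrix[1][0] * phi;
        return say1 <= say0 ? 1 : 0;
    }

    public static void main(String[] args) {
        // category 1 = clean, 0 = porn; losses are negated payoffs from the porn example
        double[][] lossMatrix = { { 0.0, 100.0 }, { 1.0, -1.0 } };
        System.out.println(decide(0.9, lossMatrix));    // 0: not sure enough, call it porn
        System.out.println(decide(0.999, lossMatrix));  // 1: confident enough to call it clean
    }
}
```

The large loss for letting porn through ($L_{0,1} = 100$) is what pushes the clean-decision threshold so close to 1.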

After the annotation by annotator $j$, the positive and negative probabilities get updated and plugged in to produce a new estimate $\phi'$, which we plug back in.

I’m running out of steam on the $\LaTeX$ derivation front, so I’ll leave the rest as an exercise. It’s a hairy formula, especially when unfolded to the expectation. But it’s absolutely what you want to use as the basis of the decision as to which items to label.

### Bayesian Posteriors and Expectations

In practice, we work with estimates of the prevalence $\pi$, sensitivities $\theta_{1,j}$, and specificities $\theta_{0,j}$. For full Bayesian inference, Gibbs sampling lets us easily compute the integrals required to use our uncertainty in the parameter estimates in calculating our estimate of $\mbox{Pr}[c_i = 1|y_i]$ and its uncertainty.

### Confused by Section 4

I don’t understand why Ipeirotis et al. say, in section 4,

> The main reason for this failure [in adjusting for spam] is the inability of the EM algorithm to identify the “strategic” spammers; these sophisticated spammers identify the class with the highest class prior (in our case the “G” class) and label all the pages as such.

Huh? The model identifies such spammers in my calculations, as shown above, and in my experience with the Snow et al. NLP data and our own NE data.

One problem in practice may be that if a spammer annotates very few items, we don’t have a good estimate of their accuracies, and can’t adjust for their inaccuracy. Otherwise, I don’t see a problem.

### LREC 2010 Tutorial: Modeling Data Annotation

May 17, 2010

Massimo and I are giving our tutorial this afternoon in Malta at LREC 2010. Here’s a link to my slides:

Thanks to Massimo for lots of great suggestions and feedback from when I gave the talk in Rovereto, Italy (U. Trento) last week, from when we talked about it at length between then and now, and on the first few drafts of the slides.

And here’s a link to Massimo’s slides:

Massimo’s part was based on his and Ron’s CL Journal review of kappa-like stats, and the tutorials they’ve given in the past. As such, they’re much more mature. The examples are really tight and to the point.

### Stochastic Gradient Descent for Probabilistic Training Data

March 23, 2010

I’ve been talking for a while about using the Bayesian hierarchical models of data annotation to generate probabilistic corpora. It’s really easy to extend stochastic gradient descent (SGD) optimization algorithms to handle probabilistic corpora.

I’m going to consider maximum likelihood estimation for binary logistic regression, but the same thing can be done for conditional random field (CRF) taggers, support vector machine (SVM) classifiers, or perceptron classifiers. I’ll discuss priors and multinomials after the main algorithm.

### The Training Data

Suppose we have $N$ labeled training instances, each consisting of a $D$-dimensional input vector $x_n$ and probabilistic outcome $y_n \in [0,1]$:

    double[][] x = new double[N][D];  // inputs
    double[] y = new double[N];       // outputs


A non-probabilistic corpus constrains $y_n$ to be either $0$ or $1$.

### The Model

Binary logistic regression for $D$-dimensional inputs is parameterized by a $D$-dimensional vector $\beta$. The probability of a 1 outcome for a $D$-dimensional input $x$ is

$\mbox{Pr}(y_n = 1 | x_n, \beta) = \mbox{logit}^{-1}(\beta^{\top}x_n)$,

where

$\mbox{logit}^{-1}(z) = \exp(z)/(1 + \exp(z)) = 1/(1 + \exp(-z))$.
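In code, it pays to pick between the two algebraically equivalent forms based on the sign of $z$, since $\exp(z)/(1 + \exp(z))$ overflows to NaN for large positive $z$; a small sketch:

```java
public class Logistic {
    // inverse logit; the branch avoids overflow in exp() for large-magnitude arguments
    static double inverseLogit(double z) {
        if (z >= 0.0) {
            return 1.0 / (1.0 + Math.exp(-z));
        }
        double e = Math.exp(z);
        return e / (1.0 + e);
    }

    public static void main(String[] args) {
        System.out.println(inverseLogit(0.0));      // 0.5
        System.out.println(inverseLogit(1000.0));   // 1.0
        System.out.println(inverseLogit(-1000.0));  // 0.0
    }
}
```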

### Maximizing Log Likelihood

Our goal’s to find the parameter $\beta$ that maximizes the training data likelihood

$\mathcal{L}(\beta;x,y) = \sum_{n = 1}^N \log \mbox{Pr}(y=y_n | x=x_n, \beta)$.

The maximum likelihood estimate is thus

$\beta^* = \arg\max_{\beta} \mathcal{L}(\beta;x,y)$.

### Stochastic Gradient Descent

SGD finds the maximum likelihood estimate $\beta^*$ given training data $x,y$. For simplicity, we’ll only consider a fixed learning rate ($\lambda > 0$), though the algorithm obviously generalizes to annealing schedules or other learning rate enhancement schemes. We’ll let $E$ be the number of epochs for which to run training, though you could also measure convergence in the algorithm.

    β = 0
    for (e in 1:E)
        for (n in 1:N)
            yHat[n] = inverseLogit(dotProduct(β, x[n]))
            err = y[n] - yHat[n]
            β += λ * err * x[n]


Easy peasy, as they say in the old country.
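Here is the same loop as runnable Java, with a toy two-point data set to show that fractional labels need no change to the update (the data and learning rate are made up for illustration):

```java
public class ProbSgd {
    static double inverseLogit(double z) {
        return 1.0 / (1.0 + Math.exp(-z));
    }

    // fixed-rate SGD for binary logistic regression; y[n] may be any value in [0,1]
    static double[] fit(double[][] x, double[] y, double lambda, int epochs) {
        double[] beta = new double[x[0].length];
        for (int e = 0; e < epochs; ++e) {
            for (int n = 0; n < x.length; ++n) {
                double z = 0.0;
                for (int d = 0; d < beta.length; ++d)
                    z += beta[d] * x[n][d];
                double err = y[n] - inverseLogit(z);
                for (int d = 0; d < beta.length; ++d)
                    beta[d] += lambda * err * x[n][d];
            }
        }
        return beta;
    }

    public static void main(String[] args) {
        double[][] x = { { 1.0, 0.0 }, { 1.0, 1.0 } };  // first column is an intercept
        double[] y = { 0.2, 0.8 };                      // probabilistic outcomes
        double[] beta = fit(x, y, 0.1, 10000);
        System.out.println(inverseLogit(beta[0]));            // approaches 0.2
        System.out.println(inverseLogit(beta[0] + beta[1]));  // approaches 0.8
    }
}
```

With two free parameters and two data points, the fitted probabilities converge to the probabilistic labels themselves.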

### Priors

Priors don’t actually interact with the probabilistic training. So you just add them in as usual. I’d recommend either blocking them (as I did for CRFs in LingPipe) to update every few hundred training instances, or using lazy updates (as I did for logistic regression in LingPipe).

### Multinomial Classifiers

It’s also easy to generalize to multinomials, where the data consists of a probability for each of K outcomes. You just add an inner loop over the outcomes (K or K-1 of them, depending on whether you use K or K-1 parameter vectors for a K-outcome problem), and the error term is calculated versus the probability of that outcome.
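Here’s a sketch of that generalization, using K parameter vectors for a K-outcome problem (overparameterized but simpler than K-1); the class and method names are mine, not LingPipe’s:

```java
public class MultinomialSgd {
    static double[] softmax(double[] z) {
        double max = Double.NEGATIVE_INFINITY;
        for (double v : z) max = Math.max(max, v);
        double[] p = new double[z.length];
        double sum = 0.0;
        for (int k = 0; k < z.length; ++k) {
            p[k] = Math.exp(z[k] - max);  // subtract max for numerical stability
            sum += p[k];
        }
        for (int k = 0; k < z.length; ++k) p[k] /= sum;
        return p;
    }

    // one SGD step; y[k] is the probability of outcome k, so y sums to 1
    static void update(double[][] beta, double[] x, double[] y, double lambda) {
        double[] z = new double[beta.length];
        for (int k = 0; k < beta.length; ++k)
            for (int d = 0; d < x.length; ++d)
                z[k] += beta[k][d] * x[d];
        double[] p = softmax(z);
        for (int k = 0; k < beta.length; ++k) {
            double err = y[k] - p[k];  // error versus the outcome's probability
            for (int d = 0; d < x.length; ++d)
                beta[k][d] += lambda * err * x[d];
        }
    }

    public static void main(String[] args) {
        double[][] beta = new double[3][2];
        double[] x = { 1.0, 1.0 };
        double[] y = { 0.5, 0.3, 0.2 };  // probabilistic label over three outcomes
        for (int i = 0; i < 5000; ++i) update(beta, x, y, 0.1);
        double[] z = new double[3];
        for (int k = 0; k < 3; ++k) z[k] = beta[k][0] * x[0] + beta[k][1] * x[1];
        double[] p = softmax(z);
        System.out.println(p[0] + " " + p[1] + " " + p[2]);  // approaches 0.5 0.3 0.2
    }
}
```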

### Dekel and Shamir (2009) Vox Populi: Collecting High-Quality Labels from a Crowd

January 22, 2010

Aleks just sent me a pointer to this paper from last year’s COLT:

The paper presents, proves theorems about, and evaluates a very simple idea in the context of crowdsourcing binary classifier data coding. Unlike other approaches I’ve blogged about, Dekel and Shamir use only one coder per item, and often coders only label a handful of items.

They train an SVM on all the labels, then use the SVM to evaluate coders. They then prune out low-quality coders using the SVM as a reference.

This paper reminded me of (Brodley and Friedl 1999), which was also about using multiple trained classifiers to remove bad labels. Brodley and Friedl remove items on an item by item basis rather than on an annotator by annotator basis.

This paper is also somewhat reminiscent of (Raykar et al. 2009), who train a classifier and use it as another annotator.

### Lots of Theory

There’s lots of theory saying why this’ll work. It is a COLT paper after all.

### Evaluation: Web Page Query Relevance

They ran an eval on Amazon’s Mechanical Turk asking people to judge a pair consisting of a web query and web page as to whether the page is relevant to the query.

They used 15 coders per query/page pair in order to define a gold standard by voting. They then evaluated their approach using only one coder per item by subsampling their larger data set.

One thing I noticed is that as they lowered the maximum number of items labeled by an annotator, their average annotator accuracy went up. This is consistent with our findings and those in Snow et al. that the spammy Mechanical Turkers complete a disproportionate number of tasks.

With only one coder per item, Dekel and Shamir can’t evaluate the way everyone else is evaluating crowdsourcing; they know their resulting data will remain noisy because it’s only singly annotated.

Estimates of coder sensitivity and specificity could be made based on their performance relative to the SVM’s best guesses. That’d provide some handle on final corpus quality in terms of false positives and negatives relative to ground truth.

Rather than evaluating trained classifier performance, Dekel and Shamir measure the “noise level” of the resulting data set after pruning. What we really care about in practice is how much pruning bad annotators helps train a more reliable classifier (or help evaluate if that’s the goal). They discuss this kind of end-to-end evaluation under “future work”! It’s really striking how different the evaluation versus theory requirements are for different conferences.

January 4, 2010

### 17 May 2010, Malta

Massimo Poesio and I are going to be giving a half-day tutorial at LREC 2010 in the afternoon of Monday 17 May 2010. The title is Statistical Models of the Annotation Process. We’ll be covering most of the material I’ve discussed in this blog’s data annotation thread and more.

What’s really cool is that it’s in Malta!

I’ll be there for most of the LREC 2010 programme. I’m particularly excited about the 2nd Workshop on Building and Evaluating Resources for Biomedical Text Mining and the New Challenges for NLP Frameworks Workshop (where by “framework”, they mean tools like GATE and LingPipe). I’ve always liked workshops more than conferences.

### 5–6 May 2010, Edinburgh

Before Malta, I’ll be in Edinburgh 6–7 May 2010 for a reunion of the department in which I did my Ph.D., the Centre for Cognitive Science (originally School of Epistemics). I’ll be giving a talk on data annotation there, too.

### Multimodal Priors for Binomial Data

December 23, 2009

Andrey Rzhetsky gave me some great feedback on my talk at U. Chicago. I want to address one of the points he raised in this blog post:

### What if the prior is multimodal?

This is a reasonable question. When you look at the Mechanical Turk annotation data we have, sensitivities and specificities are suggestively multimodal, with the two modes representing the spammers and what I’ve decided to call “hammers” (because the opposite of “spam” is “ham” in common parlance).

Here’s a replay from an earlier post, Filtering Out Bad Annotators:

The center of each circle represents the sensitivity and specificity of an annotator relative to the gold standard from Snow et al.’s paper on Mech Turk for NLP.

### Beta Priors are Unimodal or Amodal

A beta distribution $\mbox{\sf Beta}(\alpha,\beta)$ has a single mode at $(\alpha-1)/(\alpha + \beta - 2)$ when $\alpha, \beta > 1$, but has no mode at all if $\alpha \leq 1$ or $\beta \leq 1$. So no single beta prior can capture the two modes in the annotator data.

### Mixture Priors

One possibility would be to use a mixture of two beta distributions, where for $\alpha_1,\beta_1,\alpha_2,\beta_2 > 0$ and $0 \leq \lambda \leq 1$, we define:

$p(\theta|\alpha_1,\beta_1,\alpha_2,\beta_2,\lambda) = \lambda \ \mbox{\sf Beta}(\theta|\alpha_1,\beta_1) + (1 - \lambda) \ \mbox{\sf Beta}(\theta|\alpha_2,\beta_2)$.
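If you'd like to play with the shape of such a prior, here's a quick Java sketch of the mixture density. The Lanczos log gamma implementation and the example parameters in `main` are my own choices for illustration, not anything fit from our models:

```java
public class BetaMixture {

    // Lanczos approximation to log Gamma(x), accurate to roughly 1e-13 for x > 0
    static double logGamma(double x) {
        double[] g = { 676.5203681218851, -1259.1392167224028,
                       771.32342877765313, -176.61502916214059,
                       12.507343278686905, -0.13857109526572012,
                       9.9843695780195716e-6, 1.5056327351493116e-7 };
        if (x < 0.5)  // reflection formula for small arguments
            return Math.log(Math.PI / Math.sin(Math.PI * x)) - logGamma(1.0 - x);
        x -= 1.0;
        double a = 0.99999999999980993;
        for (int i = 0; i < g.length; ++i)
            a += g[i] / (x + i + 1);
        double t = x + 7.5;
        return 0.5 * Math.log(2.0 * Math.PI)
            + (x + 0.5) * Math.log(t) - t + Math.log(a);
    }

    // density of Beta(alpha, beta) at theta, for theta in (0,1)
    public static double betaPdf(double theta, double alpha, double beta) {
        double logNorm = logGamma(alpha) + logGamma(beta) - logGamma(alpha + beta);
        return Math.exp((alpha - 1.0) * Math.log(theta)
                        + (beta - 1.0) * Math.log(1.0 - theta)
                        - logNorm);
    }

    // two-component beta mixture: lambda * Beta(a1,b1) + (1-lambda) * Beta(a2,b2)
    public static double mixturePdf(double theta,
                                    double a1, double b1,
                                    double a2, double b2,
                                    double lambda) {
        return lambda * betaPdf(theta, a1, b1)
            + (1.0 - lambda) * betaPdf(theta, a2, b2);
    }

    public static void main(String[] args) {
        // hypothetical bimodal prior on sensitivity: a spammer mode near 0.5
        // (Beta(20,20)) and a hammer mode near 0.9 (Beta(18,2))
        for (double theta = 0.1; theta < 1.0; theta += 0.2)
            System.out.println(theta + " : "
                + mixturePdf(theta, 20, 20, 18, 2, 0.3));
    }
}
```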

### Hierarchical Fit

We can fit the mixture component $\lambda$ along with the parameters of the beta mixture components in the usual way. It’s just one more variable to sample in the Gibbs sampler.
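Concretely, with a uniform prior on $\lambda$, its conditional given the mixture component assignments is just a beta distribution with the per-component counts as parameters. Here's a Java sketch of that one Gibbs step; the names `n1`/`n2` for the counts of annotators currently assigned to each component are mine, and the gamma sampler is the standard Marsaglia-Tsang method (valid for shape at least 1, which holds here):

```java
import java.util.Random;

public class LambdaGibbs {

    // Marsaglia-Tsang sampler for Gamma(shape, 1), requires shape >= 1
    static double sampleGamma(Random rng, double shape) {
        double d = shape - 1.0 / 3.0;
        double c = 1.0 / Math.sqrt(9.0 * d);
        while (true) {
            double x, v;
            do {
                x = rng.nextGaussian();
                v = 1.0 + c * x;
            } while (v <= 0.0);
            v = v * v * v;
            double u = rng.nextDouble();
            if (Math.log(u) < 0.5 * x * x + d - d * v + d * Math.log(v))
                return d * v;
        }
    }

    // draw lambda ~ Beta(1 + n1, 1 + n2), the conditional under a uniform
    // prior given n1 annotators assigned to component 1 and n2 to component 2
    public static double sampleLambda(Random rng, int n1, int n2) {
        double g1 = sampleGamma(rng, 1.0 + n1);
        double g2 = sampleGamma(rng, 1.0 + n2);
        return g1 / (g1 + g2);  // ratio of gammas is a beta draw
    }

    public static void main(String[] args) {
        Random rng = new Random(42);
        // e.g. 80 "hammers" and 20 "spammers" assigned in the current sweep
        System.out.println(sampleLambda(rng, 80, 20));
    }
}
```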

### Hierarchical Priors

On the other hand, if you consider baseball batting data, there would be at least two modes in the prior for batting average corresponding to fielding position (i.e., shortstops and second basemen typically don’t bat as well as outfielders, and I imagine there’s more variance in batting ability among infielders). If we didn’t know the fielding position, it’d make sense to use a mixture prior. But if we knew the fielding position, we’d want to create a hierarchical model with a position prior nested inside of a league prior.

### Coding Inference Talk at University of Chicago, Wed 16 Dec 2009

December 13, 2009

Update: It’s still Wednesday, but the right day is 16 December.

This Wednesday, I (Bob) will be giving a talk at the brand-spanking new Knapp Center for Biomedical Discovery at the University of Chicago.

When:
2 PM, Wed 16 Dec 2009

Where:
Knapp Center for Biomedical Discovery (KCBD)
10th Floor, South Conference Room
900 East 57th Street
University of Chicago

### Multilevel Models of Coding and Diagnosis with Multiple Tests and No Gold Standards

Bob Carpenter, Alias-i

I’ll introduce multilevel generalizations of some well known models drawn from the epidemiology literature and evaluate their fit to diagnostic testing and linguistic annotation data. The analogy is that data annotation for machine learning (or, e.g., ICD-9 coding) is the same kind of process as diagnostic testing.

The observed data consists of multiple testers supplying results for multiple tested units. True labels for some units may be known (perhaps with selection bias), and not all units need to undergo every test.

In all models, there are parameters for outcome prevalence, the result for each unit, and some form of test accuracy. I’ll also consider models with parameters for individual test accuracy and bias (equivalently sensitivity and specificity in the binary case) and item difficulty/severity.

I’ll focus on the priors for annotator accuracy/bias and item difficulty, showing how diffuse hyperpriors allow them to be effectively inferred along with the other parameters using Gibbs sampling. The posterior samples may be used for inferences for diagnostic precision, multiple comparisons of test accuracies, population prevalence, unit-level labels, etc.

I’ll show that the resulting multilevel models can be fit using data simulated according to the model. I will then fit the model to a range of clinical and natural language data. I’ll discuss their advantages for inference with epidemiology data, ranging from dentists diagnosing caries based on x-rays and oncologists diagnosing tumors based on slides to infection diagnosis based on exams and serum tests, and with natural language data including name spotting, word stemming, and classification.

I’ll conclude by discussing extensions to further pooling through random effects for different testing facilities, different kinds of annotators (e.g. doctors vs. ICD-9 specialists), different kinds of subjects/units (e.g. genetic predisposition to diseases, or articles drawn from different journals), etc.

All the software (Java, R, BUGS) and data discussed in this talk are freely available from the LingPipe sandbox in the hierAnno project.

You may already be familiar with all this from the data annotation thread on this blog.

### Chang et al. (2009) Reading Tea Leaves: How Humans Interpret Topic Models

November 18, 2009

This forthcoming NIPS paper outlines a neat little idea for evaluating clustering output:

The question they pose is how to evaluate clustering output, specifically topic-word and document-topic coherence, for human interpretability.

### Bag of Words

Everything’s in the bag of words setting, so a topic is modeled as a discrete distribution over words.

### Multiple Topics per Document

They only consider models that allow multiple topics per document. Specifically, the clusterers all model a document as a discrete distribution over topics. The clusterers considered share strong family resemblances: probabilistic latent semantic indexing (pLSI) and two forms of latent Dirichlet allocation (LDA), the usual one and one in which topics for documents are drawn from a logistic prior modeling topic correlations rather than a uniform Dirichlet.

To judge the coherence of a topic, they take the top six words from a topic, delete one of the words and insert a top word from a different topic. They then measure whether subjects can detect the “intruder”.

To judge the coherence of the topics assigned to a document, they do the same thing for document distributions: they take the top topics for a document, delete one and insert a topic not assigned with high probability to the document.
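Here's a rough Java sketch of how a single word-intrusion item might be assembled. Chang et al. describe the task, not this code, and the topics and words below are made up:

```java
import java.util.*;

public class Intruder {

    // Build a word-intrusion item: the top words from one topic, minus one,
    // plus a high-probability "intruder" word from a different topic.
    public static List<String> buildItem(List<String> topicTop,
                                         List<String> otherTop,
                                         Random rng) {
        List<String> item = new ArrayList<>(topicTop);
        item.remove(rng.nextInt(item.size()));  // delete one top word at random
        // the intruder: a top word of the other topic that's not already
        // among this topic's top words
        String intruder = null;
        for (String w : otherTop) {
            if (!topicTop.contains(w)) {
                intruder = w;
                break;
            }
        }
        item.add(intruder);
        Collections.shuffle(item, rng);  // hide the intruder's position
        return item;
    }

    public static void main(String[] args) {
        List<String> sports
            = Arrays.asList("game", "team", "season", "play", "win", "coach");
        List<String> finance
            = Arrays.asList("stock", "market", "fund", "bank", "rate", "share");
        // a coherent topic should make "stock" easy for subjects to spot
        System.out.println(buildItem(sports, finance, new Random(42)));
    }
}
```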

### Analysis

They considered two small corpora of roughly 10K articles, 10K word types and 1M tokens, one from Wikipedia pages and one from NY Times articles. These are relatively long documents compared to tweets, customer support requests, MEDLINE abstracts, etc., but shorter than full-text research articles or corporate 10-K or 10-Q statements.

They only consider 50, 100, and 150 topic models, and restrict parameterizations to add-1 smoothing (aka the Laplace form of the Dirichlet prior) for per-document topic distributions. I didn’t see any mention of what the prior was for the per-topic word distributions. I’ve found both of these parameters to have a huge effect on LDA output, with larger prior counts in both cases leading to more diffuse topic assignments to documents.

They only consider point estimates of the posteriors, which they compute using EM or variational inference. This is not surprising given the non-identifiability of topics in the Gibbs sampler.

### Mechanical Turker Voting

They used 8 mechanical Turkers per task (aka HIT) of 10 judgments (wildly overpaying at US$0.07 to US$0.15 per HIT).

### (Pseudo Expected) Predictive Log Likelihood

They do the usual sample cross-entropy rate evaluations (aka [pseudo expected] predictive log likelihoods). Reporting these to four decimal places is a mistake, because the different estimation methods for the various models have more variance than the differences shown here. Also, there’s a huge effect from the priors. For both points, check out Asuncion et al.’s analysis of LDA estimation, which the first author, Jonathan Chang, blogged about.

### Model Precision

Their evaluation for precision is the percentage of subjects who pick out the intruder. It’d be interesting to see the effect of adjusting for annotator bias and accuracy. This’d be easy to evaluate with any of our annotation models. For instance, it’d be interesting to see if it reduced the variance in their figure 3.
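Their raw model precision is just a per-item vote proportion, which in Java is a one-liner. The picks in `main` are hypothetical:

```java
public class ModelPrecision {

    // fraction of subjects who correctly identified the intruder's position
    public static double precision(int[] picks, int intruderIndex) {
        int correct = 0;
        for (int p : picks)
            if (p == intruderIndex)
                ++correct;
        return correct / (double) picks.length;
    }

    public static void main(String[] args) {
        // hypothetical choices from 8 Turkers; the intruder is at index 2
        int[] picks = { 2, 2, 5, 2, 0, 2, 2, 1 };
        System.out.println(precision(picks, 2));  // 5/8 = 0.625
    }
}
```

Adjusting for annotator bias and accuracy would replace this simple vote proportion with model-based posterior estimates.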

There’s variation among the three models across the different numbers of topics and the different corpora. I’m just not sure how far to trust their model precision estimates.

### Their Take Home Message

The authors drive home the point that traditional measures such as expected predictive log likelihood are negatively correlated with their notion of human evaluated precision. As I said, I’m skeptical about the robustness of this inference given the variation in estimation techniques and the strong effect of priors.

The authors go so far as to suggest using humans in the model selection loop. Or developing an alternative estimation technique. If they’d been statisticians rather than computer scientists, my guess is that they’d be calling for better models, not a new model selection or estimation technique!

### The Real Take Home Message

I think the real contribution here is the evaluation methodology. If you’re using clustering for exploratory data analysis, this might be a way to vet clusters for further consideration.

### What They Could’ve Talked About

Although they mention Griffiths and Steyvers’ work on using LDA for traditional psychometrics, I think a more interesting result is Griffiths and Steyvers’ use of KL-divergence to measure the stability of topics across Gibbs samples (which I describe and recreate in the LingPipe clustering tutorial). Using KL divergence to compare different clusters may give you a Bayesian method to automatically assess Chang et al.’s notion of precision.
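For concreteness, here's a sketch of the (symmetrized) KL divergence computation over topic-word distributions. The three-word vocabulary is just for illustration:

```java
public class TopicKl {

    // KL(p || q) for discrete distributions over the same vocabulary;
    // assumes q[i] > 0 wherever p[i] > 0
    public static double kl(double[] p, double[] q) {
        double d = 0.0;
        for (int i = 0; i < p.length; ++i)
            if (p[i] > 0.0)
                d += p[i] * Math.log(p[i] / q[i]);
        return d;
    }

    // symmetrized version, handy when neither topic is the "reference"
    public static double symmetricKl(double[] p, double[] q) {
        return 0.5 * (kl(p, q) + kl(q, p));
    }

    public static void main(String[] args) {
        double[] topic1 = { 0.7, 0.2, 0.1 };
        double[] topic2 = { 0.1, 0.2, 0.7 };
        System.out.println(symmetricKl(topic1, topic1));  // 0.0: identical topics
        System.out.println(symmetricKl(topic1, topic2));  // > 0: divergent topics
    }
}
```

Two topics that stably recur across samples (or across clusterers) will show low divergence to their closest match; an incoherent topic won't match anything well.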