Author Archive

Deleting Values in Welford’s Algorithm for Online Mean and Variance

July 7, 2009

To take a Gibbs sample, the previously sampled value for a variable is deleted from the sufficient statistics of the estimators that depend on it, the variable is resampled, and the variable is added back into the estimators. What if our variable's normally distributed? Well, it turns out we can generalize Welford's algorithm, which I describe in a comment on John Cook's blog post (see reference below).

Here’s the basic algorithm, cribbed and simplified from John Cook’s site (see reference below), along with an extra method (unHandle(double)) added in for deletes:

private long mN = 0L;     // number of values handled so far
private double mM = 0.0;  // running mean
private double mS = 0.0;  // running sum of squared deviations from the mean

// add x to the running statistics (Welford's update)
public void handle(double x) {
    ++mN;
    double nextM = mM + (x - mM) / mN;
    mS += (x - mM) * (x - nextM);
    mM = nextM;
}

// remove a previously handled x from the running statistics
public void unHandle(double x) {
    if (mN == 0) {
        throw new IllegalStateException();
    } else if (mN == 1) {
        mN = 0; mM = 0.0; mS = 0.0;
    } else {
        // invert the mean update, then the sum-of-squares update
        double mMOld = (mN * mM - x) / (mN - 1);
        mS -= (x - mM) * (x - mMOld);
        mM = mMOld;
        --mN;
    }
}

public double mean() {
    return mM;
}

public double varianceUnbiased() {
    return mN > 1 ? mS / (mN - 1) : 0.0;
}

Works like a charm. I’m sure someone’s done this already, but I didn’t find any references in a quick check.
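
Here's a minimal sketch of how the delete/resample/add pattern plays out inside a Gibbs step, assuming the methods above live in a class I'll call OnlineNormalEstimator (an illustrative name) and with the actual conditional draw replaced by a placeholder:

// One Gibbs update for a normally distributed variable (sketch only).
static double gibbsStep(OnlineNormalEstimator stats, double oldValue,
                        java.util.Random rng) {
    stats.unHandle(oldValue);   // delete the old value from the sufficient statistics
    // placeholder draw; a real sampler would use the full conditional here
    double newValue = stats.mean()
        + Math.sqrt(stats.varianceUnbiased()) * rng.nextGaussian();
    stats.handle(newValue);     // add the resampled value back in
    return newValue;
}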

The same technique can obviously be applied to covariance and correlation calculations.
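
Here's roughly what that looks like for sample covariance, using the same add/delete trick; the class and method names are mine for illustration, not LingPipe API:

public class OnlineCovarianceEstimator {
    private long mN = 0L;
    private double mMeanX = 0.0;
    private double mMeanY = 0.0;
    private double mC = 0.0;   // running sum of (x - meanX) * (y - meanY)

    // add the pair (x,y) to the running statistics
    public void handle(double x, double y) {
        ++mN;
        double dx = x - mMeanX;          // deviation from the old x mean
        mMeanX += dx / mN;
        mMeanY += (y - mMeanY) / mN;
        mC += dx * (y - mMeanY);         // uses the updated y mean
    }

    // remove a previously handled pair (x,y)
    public void unHandle(double x, double y) {
        if (mN == 0) {
            throw new IllegalStateException();
        } else if (mN == 1) {
            mN = 0; mMeanX = 0.0; mMeanY = 0.0; mC = 0.0;
        } else {
            double oldMeanX = (mN * mMeanX - x) / (mN - 1);
            double oldMeanY = (mN * mMeanY - y) / (mN - 1);
            mC -= (x - oldMeanX) * (y - mMeanY);   // invert the update above
            mMeanX = oldMeanX;
            mMeanY = oldMeanY;
            --mN;
        }
    }

    public double covarianceUnbiased() {
        return mN > 1 ? mC / (mN - 1) : 0.0;
    }
}

Correlation just divides the covariance by the product of the two standard deviations, which the univariate estimators above already track.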

This’ll be in the next release of LingPipe.

References

  • Welford, B. P. 1962. Note on a method for calculating corrected sums of squares and products. Technometrics 4(3): 419-420.
  • Cook, John. Accurately computing running variance. Web page.

Collapsed Gibbs Sampler for Hierarchical Annotation Model

July 6, 2009

The R and BUGS combination is fine as far as it goes, but it’s slow, hard to debug, and doesn’t scale well. Because I’m going to need to process some big Turker-derived named entity corpora in the next month (more about that later), I thought I’d work on scaling the sampler.

Mitzi was away over the weekend, so I naturally spent my newly found “free time” coding and reading up on stats. While procrastinating on refactoring feature extraction for CRFs by reading a really neat paper (On Smoothing and Inference for Topic Models) from the Irvine paper mill (I also blogged about another of their papers on fast LDA sampling), it occurred to me that I could create a fast collapsed sampler for the multinomial form of the hierarchical models of annotation I’ve blogged about.

Hierarchical Model of Binomial Annotation

The model’s quite simple, at least in the binomial case. I’ve further simplified here to the case where every annotator labels every item, but the general case is just as easy (modulo indexing):

Variable       Range    Status     Distribution                      Description
I              > 0      input      fixed                             number of items
J              > 0      input      fixed                             number of annotators
π              [0,1]    estimated  Beta(1,1)                         prevalence of category 1
c[i]           {0,1}    estimated  Bern(π)                           category for item i
θ0[j]          [0,1]    estimated  Beta(α0,β0)                       specificity of annotator j
θ1[j]          [0,1]    estimated  Beta(α1,β1)                       sensitivity of annotator j
α0/(α0+β0)     [0,1]    estimated  Beta(1,1)                         prior specificity mean
α0+β0          (0,∞)    estimated  Pareto(1.5)*                      prior specificity scale
α1/(α1+β1)     [0,1]    estimated  Beta(1,1)                         prior sensitivity mean
α1+β1          (0,∞)    estimated  Pareto(1.5)*                      prior sensitivity scale
x[i,j]         {0,1}    input      Bern(c[i]==1 ? θ1[j] : 1-θ0[j])   annotation of item i by annotator j

The Collapsed Sampler

The basic idea is to sample only the category assignments c[i] in each round of Gibbs sampling. With the categories fixed, it’s easy to compute the prevalence and the annotator sensitivities and specificities using their conjugate priors.

The only thing we need to sample is c[i], and we can inspect the graphical model for dependencies: the parent π of the c[i], and the parents θ0 and θ1 of the descendants x[i,j] of c[i]. The formula’s straightforwardly derived with Bayes’ rule:

p(c[i] | x, θ0, θ1) ∝ p(c[i]) * Π_{j in 1:J} p(x[i,j] | c[i], θ0[j], θ1[j])
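
In Java, the per-item update might look something like this sketch (my own illustration, not the sandbox code); theta0[j] and theta1[j] are the current point estimates of annotator j’s specificity and sensitivity, pi is the current prevalence, and x[i][j] holds the annotations:

// Resample c[i] given point estimates pi, theta0[], theta1[] and the
// annotations x[i][j] in {0,1}.  Work on the log scale to avoid underflow.
static int sampleCategory(int i, int[][] x, double pi,
                          double[] theta0, double[] theta1,
                          java.util.Random rng) {
    double logP1 = Math.log(pi);         // case c[i] = 1
    double logP0 = Math.log(1.0 - pi);   // case c[i] = 0
    for (int j = 0; j < x[i].length; ++j) {
        if (x[i][j] == 1) {
            logP1 += Math.log(theta1[j]);         // true positive
            logP0 += Math.log(1.0 - theta0[j]);   // false positive
        } else {
            logP1 += Math.log(1.0 - theta1[j]);   // false negative
            logP0 += Math.log(theta0[j]);         // true negative
        }
    }
    // rescale by the max log term, normalize, and sample
    double max = Math.max(logP0, logP1);
    double p1 = Math.exp(logP1 - max)
        / (Math.exp(logP0 - max) + Math.exp(logP1 - max));
    return rng.nextDouble() < p1 ? 1 : 0;
}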

Moment-Matching Beta Priors

*The only trick is estimating the priors over the sensitivities and specificities, for which I took the usual expedient of using moment matching. Note that this does not take into account the Pareto prior on scales of the prior specificity and sensitivity (hence the asterisk in the table). In particular, given a set of annotator specificities (and there were 200+ annotators for the named-entity data), we find the beta prior with mean matching the empirical mean and variance matching the empirical variance (requires some algebra).
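
In case it saves someone the algebra, here’s the moment-matching step as a sketch: given the empirical mean m and variance v of the fitted specificities (or sensitivities), solve for the beta parameters with matching moments.

// Beta(alpha, beta) with mean m and variance v; requires 0 < v < m * (1 - m).
static double[] betaMomentMatch(double m, double v) {
    double scale = m * (1.0 - m) / v - 1.0;   // alpha + beta
    return new double[] { m * scale, (1.0 - m) * scale };
}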

I’m not too worried about the Pareto scale prior — it’s pretty diffuse. I suppose I could’ve done something with maximum likelihood rather than moment matching (but for all I know, this is the maximum likelihood solution! [update: it’s not the ML estimate; check out Thomas Minka’s paper Estimating a Dirichlet Distribution and references therein.]).

Initialization

The inputs are initial values for annotator specificity, annotator sensitivity, and prevalence. These are used to create the first category sample using the above equation, which lets us define all the other variables for the first sample. Then each epoch just resamples all the categories and recomputes all the other estimates. This could be made more stochastic by updating all of the variables after each category update, but it converges so fast as is that it hardly seemed worth the extra coding effort. I made sure to scale for underflow, and that’s about it.
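
Schematically, an epoch looks something like the following sketch (again my own illustration rather than the sandbox code; it reuses the sampleCategory sketch above and, for brevity, recomputes point estimates with add-one smoothing in place of the moment-matched beta priors):

// One epoch of the collapsed sampler (sketch): resample every category,
// then recompute prevalence and the annotator parameters.  Returns the
// updated prevalence; theta0[] and theta1[] are updated in place.
static double epoch(int[] c, int[][] x, double pi,
                    double[] theta0, double[] theta1,
                    java.util.Random rng) {
    int numItems = x.length;
    for (int i = 0; i < numItems; ++i)
        c[i] = sampleCategory(i, x, pi, theta0, theta1, rng);

    double ones = 0.0;
    for (int i = 0; i < numItems; ++i)
        ones += c[i];
    double newPi = (ones + 1.0) / (numItems + 2.0);   // Beta(1,1) prior on prevalence

    for (int j = 0; j < theta0.length; ++j) {
        double tp = 0, fn = 0, tn = 0, fp = 0;
        for (int i = 0; i < numItems; ++i) {
            if (c[i] == 1) { if (x[i][j] == 1) ++tp; else ++fn; }
            else           { if (x[i][j] == 0) ++tn; else ++fp; }
        }
        theta1[j] = (tp + 1.0) / (tp + fn + 2.0);   // sensitivity of annotator j
        theta0[j] = (tn + 1.0) / (tn + fp + 2.0);   // specificity of annotator j
    }
    return newPi;
}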

It took about an hour to think about (most of which was working out the moment matching algebra, which I later found in Gelman’s BDA book’s appendix), about two hours to code, and about forty-five minutes to write up for this blog entry.

Speed and Convergence

It’s very speedy and converges very very quickly compared to the full Gibbs sampler in BUGS. I spent another hour after everything was built and running writing the online sample handler that’d compute means and deviations for all the estimated variables, just like the R/BUGS interface prints out. Having the online mean and variance calculator was just what I needed (more about it later, too), especially as many of the values were very close to each other and I didn’t want to store all of the samples (part of what’s slowing BUGS down).

Identifiability

I didn’t run into identifiability problems, but in general, something needs to be done to get rid of the dual solutions (you’d get them here, in fact, if the initial sensitivities and specificities were worse than 0.5).

Open Question (for me)

My only remaining question is: why does this work? I don’t understand the theory of collapsed samplers. First, I don’t get nearly the range of possible samples for the priors given that they’re estimated from discrete sets. Second, I don’t apply beta-binomial inference in the main sampling equation — that is, I take the prevalence and annotator sensitivities and specificities as point estimates rather than integrating out their beta-binomial form. Is this kosher?

Downloading from LingPipe Sandbox

You can find the code in the LingPipe Sandbox in the project hierAnno (the original R and BUGS code and the data are also in that project). It’s not documented at all yet, but the one Ant task should run out of the box; I’ll probably figure out how to move the application into LingPipe.

[Update: The evaluation looks fine for named entities; I’m going to censor the data the same way as in the NAACL submission, and then evaluate again; with all the negative noise, it’s a bit worse than voting as is, and the estimates aren’t useful because the specificities so dominate the calculations. For Snow et al.’s RTE data, the collapsed estimator explodes just like EM, with sensitivity scales diverging to infinity; either there’s a bug, the collapsed sampler isn’t stable, or I really do need a prior on those scales!]

[Update 2: The censored NE evaluation with collapsed Gibbs sampling and simple posterior evaluation by category sample worked out to have one fewer error than the full sampling model in BUGS (versus the original, albeit noisy, gold standard): collapsed Gibbs 232 errors, full Gibbs 233 errors, voting 242.5 errors (the half is from flipping coins on ties). Voting after throwing out the two really noisy annotators is probably competitive as is. I still don’t know why the RTE data’s blowing out variance.]

Lacks a Convincing Comparison with Simple Non-Bayesian Approaches

January 23, 2009

That was just one comment from my 2009 NAACL rejection letter (reprinted in full with a link to the paper below). As Andrew says, I’m glad to hear they had so many great submissions.

This was for a paper on gold-standard inference that I’ve been blogging about. Here’s a link to the submission:

The only thing it adds to the white paper and graphs on the blog is another mechanical Turk experiment that recreated the MUC 6 PERSON entity annotations. As with the Snow et al. work, the Turkers did better than the gold standard.

I should’ve paid more attention to Simon Peyton Jones’s most excellent advice. Ironically, I used to give most of that advice to our own students. Always easier medicine to give than to take.

In particular, I needed to make the use cases much clearer to those who weren’t knee deep in Bayesian analysis and inter-annotator agreement statistics. The trap I fell into has a name. It’s called “the curse of knowledge”. The best general study of this phenomenon I know is Elizabeth Newton’s research, which is described in Heath and Heath’s great book Made to Stick (see the tappers and listeners section of the excerpt). Good psych experiments are so compelling they give me goosebumps.

It’s hard to compare the Bayesian posterior intervals with non-Bayesian estimates. The whole point of the Bayesian analysis is to analyze the posteriors. But if you’re not a Bayesian, what do you care?

The standard in NLP is first-best analyses on held-out data sets with a simple accuracy measure. No one ever talks about log loss except in model fitting and language modeling evaluation. So even a probabilistic model that’s not Bayesian can’t get evaluated on its own terms. This goes for simple posterior predictive inferences in Bayesian analyses. I guess the reviewer missed the comparison with simple voting, which is the de facto standard (coupled with censorship of uncertain cases).

What I really should’ve addressed is two sides of the issue of what happens with fewer annotators. The answer is that the Bayesian approach has an even stronger advantage in agreeing with the gold standard annotations than simple voting. I only did that after the analysis after the submission in a “Doh!” moment anticipating the problem reviewers might have with 10 annotators. The second aspect is to push harder on the fools gold standard point that the state of the art produces very poor corpora in terms of consistency.

It is possible to see if the informative priors help with cross-validation. But even without cross-validation, it’s obvious with 20 samples when the annotator made no mistakes — they’re unlikely to have 100% accuracy on the rest of the corpus. You don’t estimate a player’s batting average (in baseball) to be .500 after he goes 10 for 20 in his first 20 at bats. Everyone in NLP knows low count estimates need to be smoothed. So now that we’ve decided we need a prior (call it regularization or smoothing if you like), we’re just haggling about price. So I just optimized that, too. It’s what any good Bayesian would do.

Here’s the full set of comments and ratings (on a 1-5 scale, with 5 being the best).

============================================================================ 
NAACL-HLT 2009 Reviews for Submission #138
============================================================================ 

Title: A Multilevel Bayesian Model of Categorical Data Annotation

Authors: Bob Carpenter
============================================================================
                            REVIEWER #1
============================================================================ 


---------------------------------------------------------------------------
Reviewer's Scores
---------------------------------------------------------------------------

                         Appropriateness: 3
                                 Clarity: 3
            Originality / Innovativeness: 2
                 Soundness / Correctness: 4
                   Meaningful Comparison: 4
                            Thoroughness: 4
              Impact of Ideas or Results: 2
                     Impact of Resources: 3
                          Recommendation: 2
                     Reviewer Confidence: 4
                                Audience: 2
                     Presentation Format: Poster
             Resubmission as short paper: recommended


---------------------------------------------------------------------------
Comments
---------------------------------------------------------------------------

The paper presents a simple Bayesian model of labeling error in multiple
annotations.  The goal is to evaluate and potentially clean-up errors of
sloppy/misguided annotation, for example, obtained via Amazon's Mechanical
Turk.  Standard Gibbs sampling is used to infer model parameters from observed
annotations and produce likely 'true' annotations.

I found section 2 confusing, even though the model is fairly simple.  Overall,
I didn't find the model or method very innovative or the results very
illuminating.
Nevertheless, I think this paper could be good as short one, since there is
some interest in empirical assessment of noisy annotation.

============================================================================
                            REVIEWER #2
============================================================================ 


---------------------------------------------------------------------------
Reviewer's Scores
---------------------------------------------------------------------------

                         Appropriateness: 5
                                 Clarity: 4
            Originality / Innovativeness: 3
                 Soundness / Correctness: 3
                   Meaningful Comparison: 3
                            Thoroughness: 3
              Impact of Ideas or Results: 2
                     Impact of Resources: 1
                          Recommendation: 2
                     Reviewer Confidence: 4
                                Audience: 3
                     Presentation Format: Poster
             Resubmission as short paper: recommended


---------------------------------------------------------------------------
Comments
---------------------------------------------------------------------------

This paper deals with modelling the annotation process using a fully specified
Bayesian model.

The paper itself is nicely written and I like the mixture of synthetic and real
experiments.  The problems I have however are:

--what does this actually buy us?  It would be nice to see exactly what is
gained through this process.  As it stands, it seems to tell us that annotators
can disagree and that some tasks are harder than other ones.  This is not
surprising really.

--it is highly unrealistic to have tens of annotators per item.  A far more
realistic situation is to have only one or possibly two annotators.  What would
this mean for the approach?

I would have liked to have seen some kind of baseline experiment, rather than
assuming that the various priors are actually warranted.  

Why assume binary labels?  Although it is possible to simulate non-binary
labels, this will introduce errors and it is not necessarily natural for
annotators.

============================================================================
                            REVIEWER #3
============================================================================ 


---------------------------------------------------------------------------
Reviewer's Scores
---------------------------------------------------------------------------

                         Appropriateness: 5
                                 Clarity: 3
            Originality / Innovativeness: 4
                 Soundness / Correctness: 3
                   Meaningful Comparison: 3
                            Thoroughness: 4
              Impact of Ideas or Results: 3
                     Impact of Resources: 1
                          Recommendation: 3
                     Reviewer Confidence: 3
                                Audience: 4
                     Presentation Format: Poster
             Resubmission as short paper: recommended


---------------------------------------------------------------------------
Comments
---------------------------------------------------------------------------

Statistical approaches to NLP have been largely depending on human annotated
data. This paper uses a Bayesian approach to address the uncertainty in human
annotation process, which is an important and interesting problem that may
directly affect statistical learning performance. The proposed model is able to
infer true labels given only a set of human annotations. The paper is
in general sound; the experiments on both simulated data and real data are
provided and with detailed analysis. Overall the work seems a contribution to
the field which I recommend for acceptance, but I have a few suggestions for
revision.

My major concern is that the work lacks a convincing comparison with simple
non-Bayesian approaches in demonstrating the utility of the model. The paper
has a excessively sketchy description of a non-hierarchical version of the
model (which actually gives similar result in label inference). Moreover, it is
not very clear how the proposed approach is related to all previous works
listed in Section 9.

The authors should explain more on their evaluation metrics. What points is it
trying to make in Figure 6? Why using residual errors instead of ROC curves?   

Some minor comments:
-- Typo at end of Section 3: "residual error error"
-- Section 4: "39% of all annotators perform no better than chance, as
indicated by the diagonal green line in Figure 6". Maybe I missed something but
I don't see this in Figure 6.
-- Section 7: define "item-level" predictors.

Good Kappa’s not Enough

July 22, 2008

I stumbled across Reidsma and Carletta’s Reliability measurement without limits, which is pending publication as a Computational Linguistics journal squib (no, not a non-magical squib, but a linguistics squib).

The issue they bring up is that if we’re annotating data, a high value for the kappa statistic isn’t enough to guarantee what they call "reliability". The problem is that the disagreements may not be random. They focus on simulating the case where an annotator over-uses a label, which results in kappa badly overestimating reliability compared with performance measured against the truth. The reason is that a statistical model will be able to pick up on the pattern of mistakes and reproduce them, making the task look more reliable than it is.

This discussion is similar to the case we’ve been worrying about here in trying to figure out how we can annotate a named-entity corpus with high recall. If there are hard cases that annotators miss (over-using the no-entity label), random agreement assuming equally hard problems will over-estimate the "true" recall.

Reidsma and Carletta’s simulation shows that there’s a strong effect from the relationship between true labels and features of the instances (as measured by Cramer’s phi).

Review of Cohen’s Kappa

Recall that kappa is a "chance-adjusted measure of agreement", which has been widely used in computational linguistics since Carletta’s 1996 squib on kappa, defined by:

kappa = (agreement - chanceAgreement) / (1 - chanceAgreement)

where agreement is just the empirical percentage of cases on which annotators agree, and chanceAgreement is the percentage of cases on which they’d agree by chance. For Cohen’s kappa, chance agreement is measured by assuming annotators pick categories at random according to their own empirical category distribution (but there are lots of variants, as pointed out in this Artstein and Poesio tech report, a version of which is also in press at The Journal of Kappa Studies, aka Computational Linguistics). Kappa values will range between -1 and 1, with 1 only occurring if they have perfect agreement.
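
For the record, here’s a minimal sketch of the computation for two annotators labeling the same items (my own code, not LingPipe’s implementation):

// Cohen's kappa: observed agreement adjusted by the agreement expected if
// each annotator labeled at random from their own empirical distribution.
static double cohensKappa(int[] labels1, int[] labels2, int numCategories) {
    int n = labels1.length;
    double[] p1 = new double[numCategories];
    double[] p2 = new double[numCategories];
    double agreement = 0.0;
    for (int i = 0; i < n; ++i) {
        p1[labels1[i]] += 1.0 / n;
        p2[labels2[i]] += 1.0 / n;
        if (labels1[i] == labels2[i])
            agreement += 1.0 / n;
    }
    double chanceAgreement = 0.0;
    for (int k = 0; k < numCategories; ++k)
        chanceAgreement += p1[k] * p2[k];   // both independently pick category k
    return (agreement - chanceAgreement) / (1.0 - chanceAgreement);
}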

I (Bob) don’t like kappa, because it’s not estimating a probability (despite being an arithmetic combination of [maximum likelihood] probability estimates). The only reason to adjust for chance is that it allows one, in theory, to compare different tasks. The way this plays out in practice is that an arbitrary cross-task threshold is defined above which a labeling task is considered "reliable".

A final suggestion for those using kappa: confidence intervals from bootstrap resampling would be useful to see how reliable the estimate of kappa itself is.
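
That would look roughly like the following, building on the kappa sketch above (another illustration, not LingPipe API): resample items with replacement, recompute kappa each time, and read off a central interval.

// Nonparametric bootstrap 95% interval for Cohen's kappa (sketch).
static double[] kappaBootstrapInterval(int[] labels1, int[] labels2,
                                       int numCategories, int numReps,
                                       java.util.Random rng) {
    int n = labels1.length;
    double[] kappas = new double[numReps];
    for (int b = 0; b < numReps; ++b) {
        int[] l1 = new int[n];
        int[] l2 = new int[n];
        for (int i = 0; i < n; ++i) {
            int idx = rng.nextInt(n);   // resample an item with replacement
            l1[i] = labels1[idx];
            l2[i] = labels2[idx];
        }
        kappas[b] = cohensKappa(l1, l2, numCategories);
    }
    java.util.Arrays.sort(kappas);
    return new double[] { kappas[(int) (0.025 * numReps)],     // 2.5% quantile
                          kappas[(int) (0.975 * numReps)] };   // 97.5% quantile
}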

