Here’s a two-page write-up of one of the models I’ve been looking at for evaluating data annotation, in particular coding standards and annotator sensitivity and specificity:
- Carpenter, Bob. 2008. Hierarchical Bayesian Models of Categorical Data Analysis.
I’ve submitted it as a poster to the New York Academy of Sciences 3rd Annual Machine Learning Symposium, which will be held on October 10, 2008.
Please let me know what you think (email@example.com). I didn’t have room to squeeze in the more complex model that accounts for “easy” items. Both this model and the easy-items model derive from the epidemiology literature (cited in the paper), where the goal is to estimate disease prevalence from a heterogeneous set of tests. I’ve added some more general Bayesian reasoning and suggested applications to annotation (though Bruce and Wiebe were mostly there in their 1999 paper, which I cite) and to training with probabilistic supervision (I don’t think anyone’s done this yet).
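To make the prevalence/sensitivity/specificity setup concrete, here’s a minimal simulation sketch in Python rather than the BUGS model from the paper; the prevalence and per-annotator accuracy values are made up for illustration. Each item gets a latent binary label drawn at the prevalence rate, and each annotator reports a positive label with probability equal to their sensitivity on true positives and one minus their specificity on true negatives.

```python
import random

random.seed(0)

# Assumed values for illustration only.
pi = 0.3                   # prevalence of positive items
sens = [0.90, 0.75, 0.85]  # per-annotator sensitivity
spec = [0.95, 0.80, 0.90]  # per-annotator specificity
n_items = 20000

# Latent true label for each item, then each annotator's noisy label.
truth = [1 if random.random() < pi else 0 for _ in range(n_items)]
labels = [[(random.random() < sens[j]) if c == 1
           else (random.random() >= spec[j])   # false positive w.p. 1 - spec
           for j in range(len(sens))]
          for c in truth]

# With the latent labels observed, sensitivity and specificity are just
# conditional frequencies; the point of the hierarchical model is to
# estimate them (along with prevalence) when the truth is NOT observed.
for j in range(len(sens)):
    pos = [y[j] for y, c in zip(labels, truth) if c == 1]
    neg = [y[j] for y, c in zip(labels, truth) if c == 0]
    print(f"annotator {j}: sens≈{sum(pos)/len(pos):.3f}, "
          f"spec≈{1 - sum(neg)/len(neg):.3f}")
```

With 20,000 simulated items the conditional frequencies land close to the values they were generated from, which is a useful sanity check before handing the same generative story to a sampler.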
I’m happy to share the R scripts and BUGS models I used to generate the data, fit the models, and display the results. I’d also love to know how to get rid of those useless vertical axes in the posterior histogram plots.