Author Archive

Another Linguistic Corpus Collection Game

November 12, 2012

Johan Bos and his crew at the University of Groningen have a new suite of games aimed at linguistic data collection. You can find them at:

Wordrobe is currently hosting four games. Twins is aimed at part-of-speech tagging, Senses is for word sense annotation, Pointers for coref data, and Names for proper name classification.

One of the neat things about Wordrobe is that they try to elicit some notion of confidence by allowing users to “bet” on their answers.

They also discuss prizes, but I didn’t see any mention of what the prizes were.

The project is aimed at improving the Groningen Meaning Bank. I hope they release the raw user data as well as their best guess at a gold standard. I had some background discussion with Johan about annotation models, but they’re going to go with something relatively simple, which means there’s an opportunity to compare richer statistical models like the ones I’ve cited in the Data Annotation category of this blog.

Other Linguistic Games

The first linguistic game of which I was aware was von Ahn’s reCAPTCHA. Although it captures OCR annotations as a side effect, it is more of a security wall for filtering out bots than a game. Arguably, I’ve been played by it more than the other way around.

A more linguistically relevant game is Poesio et al.’s Phrase Detectives, which is aimed at eliciting coreference annotations. I played through several rounds of every aspect of it. The game interface itself is very nice for a web app. Phrase Detectives occasionally has cash prizes, but it looks like they ran out of prize money, because the last reference to prizes was July 2011.

Are they Really Games?

Phrase Detectives is more like an Amazon Mechanical Turk task with a backstory and leaderboard. I didn’t create a login for Wordrobe to try it, but I’m going out on a limb to guess it’s going to be similar given the descriptions of the games.

Upgrading from Beta-Binomial to Logistic Regression

October 30, 2012

Bernoulli Model

Consider the following very simple model of drawing the components of a binary random N-vector y i.i.d. from a Bernoulli distribution with chance of success theta.

data {
  int<lower=0> N;  // number of items
  int<lower=0,upper=1> y[N];  // binary outcome for item n
}
parameters {
  real<lower=0,upper=1> theta;  // Prob(y[n]=1) = theta
}
model {
  theta ~ beta(2,2); // (very) weakly informative prior
  for (n in 1:N)
    y[n] ~ bernoulli(theta);
}

The beta distribution is used as a prior on theta. This is the Bayesian equivalent to an “add-one” prior. This is the same model Laplace used in the first full Bayesian analysis (or as some would have it, Laplacian inference) back in the Napoleonic era. He used it to model male-vs.-female birth ratio in France, with N being the number of samples and y[n] = 1 if the baby was male and 0 if female.
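To see why beta(2,2) behaves like an add-one prior, here is the standard conjugacy calculation, writing s = \sum_{n=1}^N y_n for the number of successes (the notation s is introduced just for this aside):

p(\theta \,|\, y) \ \propto \ \theta^{2-1}(1-\theta)^{2-1} \times \theta^{s}(1-\theta)^{N-s} \ = \ \theta^{s+1}(1-\theta)^{N-s+1},

which is a \mbox{Beta}(s+2, N-s+2) distribution. Its mode is (s+1)/(N+2), exactly the add-one (rule of succession) estimate: one imaginary success and one imaginary failure added to the observed counts.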

Beta-Binomial Model

You can get the same inferences for theta here by replacing the Bernoulli distribution with a binomial:

model {
  theta ~ beta(2,2);
  sum(y) ~ binomial(N,theta);
}
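To see why the inferences match, compare the two likelihoods, again writing s = sum(y) for the number of successes:

\prod_{n=1}^N \mbox{Bernoulli}(y_n \,|\, \theta) \ = \ \theta^{s}(1-\theta)^{N-s} \ \ \ \mbox{and} \ \ \ \mbox{Binomial}(s \,|\, N, \theta) \ = \ \binom{N}{s} \, \theta^{s}(1-\theta)^{N-s}.

They differ only by the binomial coefficient, which is constant in theta, so it drops out when the posterior is normalized and the posterior for theta is identical.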

But it doesn’t generalize so well. What we really want is to let theta vary with predictors (“features” in NLP speak, covariates to some statisticians) of the items.

Logistic Regression Model

A roughly similar model can be had by moving to the logistic scale and replacing theta with a logistic regression with only an intercept coefficient alpha.

data {
  int<lower=0> N;
  int<lower=0,upper=1> y[N];
}
parameters {
  real alpha;  // inv_logit(alpha) = Prob(y[n]=1)
}
model {
  alpha ~ normal(0,5);  // weakly informative
  for (n in 1:N)
    y[n] ~ bernoulli(inv_logit(alpha));
}

Recall that the logistic sigmoid (inverse of the logit, or log odds function) maps

\mbox{logit}^{-1}:(-\infty,\infty)\rightarrow(0,1)

by taking

\mbox{logit}^{-1}(u) = 1 / (1 + \mbox{exp}(-u)).
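Plugging in a few values gives a feel for the scale (just arithmetic):

\mbox{logit}^{-1}(0) = 0.5, \ \ \ \mbox{logit}^{-1}(2) \approx 0.88, \ \ \ \mbox{logit}^{-1}(5) \approx 0.993, \ \ \ \mbox{logit}^{-1}(-5) \approx 0.007.

So the normal(0,5) prior on alpha is weakly informative in the sense that it lets Prob(y[n]=1) range over essentially all of (0,1).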

The priors aren’t quite the same in the Bernoulli and logistic models, but that’s no big deal. In more flexible models, we’ll move to hierarchical models on the priors.

Adding Features

Now that we have the inverse logit transform in place, we can replace theta with a regression on predictors for y[n]. You can think of the second model as an intercept-only regression. For instance, with a single predictor x[n], we could add a slope coefficient beta and write the following model.

data {
  int<lower=0> N;  // number of items
  int<lower=0,upper=1> y[N];  // binary outcome for item n
  real x[N];  // predictive feature for item n
}
parameters {
  real alpha;  // intercept
  real beta;  // slope (no sampling statement below, so it gets an implicit improper uniform prior)
}
model {
  alpha ~ normal(0,5);  // weakly informative
  for (n in 1:N)
    y[n] ~ bernoulli(inv_logit(alpha + beta * x[n]));
}

Stan

I used Stan’s modeling language — Stan is the full Bayesian inference system I and others have been developing (it runs from the command line or from R). For more info on Stan, including a link to the manual for the modeling language, see:

Stan Home Page:  http://mc-stan.org/

Stan’s not a competitor for LingPipe, by the way. Stan scales well for full Bayesian inference, but doesn’t scale like LingPipe’s SGD-based point estimator for logistic regression. And Stan doesn’t do structured models like HMMs or CRFs and has no language-specific features like tokenizers built in. As I’ve said before, it’s horses for courses.

Mystery Novel with Natural Language Processing

October 24, 2012

For those of you who like mystery novels, Mitzi’s just written one. The added bonus for readers of this blog is that there’s natural language processing involved in the detective work (I don’t want to give too much away, so I can’t tell you how).

Mitzi Morris, Poetic Justice Cover

Poetic Justice is in the cozy mystery sub-genre, where the focus is on the amateur sleuths and their milieu, not on grisly multiple homicides.

The Back-Cover Blurb

Once you’ve made it in Manhattan, why would you be caught dead in Staten Island? That’s what Jay Alfred, editor-in-chief of Ars Longa Press, can’t understand. Jay and his partner Ken live on the best block in Chelsea. They’re an attractive pair of opposites. Jay would never stoop to snoop. Ken exercises his right to know every chance he gets.

When Sheba Miller, literary agent and downtown doyenne, is found dead in a bar on Staten Island, Ken can’t wait to investigate. He hustles Jay onto the Staten Island Ferry and into adventure. Then Sheba’s tell-all memoir surfaces. It’s a catalog of white nights with hot artists and liquid lunches with idiot publishers. Jay’s the idiot-in-chief, but he’s not alone. It’s a good thing that Sheba’s dead, because half of literary New York is ready to kill her.

That’s not the only book in town and Ken’s not the only amateur detective. Sheba’s old friends and lovers and the junior members of Ars Longa are all ready and willing to explore New York City and beyond in search of authors, books, killers, and a killer martini.

Look Inside!

If you follow the Amazon link below, the first six chapters are available free online through Amazon’s “Look Inside!” feature.

Early Reviews

For what it’s worth, at least twenty people have read it and said they enjoyed it. Some (including me) are already clamoring for the second book in the series. Other mystery writers told Mitzi this would happen; luckily for us fans, she already has two follow-ons in the pipeline.

Details

  • Publisher:  Colloquial Media
  • Language:  English
  • Pages:  324
  • ISBN-10: 0-9882087-0-9 (Paperback)
  • ISBN-13: 978-0-9882087-0-4 (Kindle)

Ordering

On Amazon, it’s eligible for the 4-for-3 deal (order 4 books, get the cheapest one free).

It’s also available from Amazon UK.

If you want a review copy, add a comment to this post or send me e-mail at carp@alias-i.com.

Refactoring in the Zone

September 17, 2012

I remember very clearly when I first started to work as a professional programmer. I was tasked with first designing and then integrating a new semantic interpreter for the grammars in SpeechWorks’s speech recognizer.

I didn’t know my ass from my elbow and pretty much couldn’t get off the ground on my own.

Get Adopted by a Great Mentor

Luckily, Sasha Caskey pretty much saved my professional programming life by pair programming with me until I “got it” and on a continuing basis after that so I didn’t forget.

One of Sasha’s memorable early lessons involved refactoring or adding new features. After everything was designed (something I’m still more comfortable with than coding), there was the integration. This involved a C implementation of JavaScript interpreting user programs with high-level dialog control and low-level speech integration. I just couldn’t see how the ends could be made to meet.

Use the Force

Sasha said something along the lines of “use the force”. What he really said is probably more along the lines of “when you’re dealing with good code like this you just have a sense of where everything should go and if you stick to the plan, it usually works”. It sounded awfully reckless to me, but then I was having trouble seeing how version control could work with 20 programmers sharing a code base.

He applied this philosophy on percolating arguments through call chains, propagating return codes, dealing with exceptions (all hacked up with gotos to end-of-function cleanup blocks because we were using straight-up C), and even figuring out what the bounds of a loop should be.

It works. But only once you get to a certain level of expertise where you know what to expect and how things look if they’re “right”. And only if the other people you work with write idiomatic code.

I was reminded of Sasha’s early lessons on two occasions recently.

Sometimes it Works

First, I just added print statements to Stan’s modeling language. I needed to pass a standard output stream, in addition to the standard error stream, to serve as the target for the print output. The error stream was already getting propagated. Even though I didn’t write a lot of the code I’m dealing with, the other people I work with are well-trained C++ coders, so everything just works as expected. It was like the current code was sprinkled with bread crumbs I could follow to do what I needed.

The force worked!

Sometimes it Doesn’t

Second, I was refactoring some student-written Java code and it’s so non-idiomatic I can’t make heads or tails of it. (Think Daily WTF levels of insanity here.) I had no idea what the original programmer intended or how the code was supposed to implement those intentions.

The force completely failed me.

Chess Memory in Experts and Novices

This all reminds me of some of my favorite psychology experiments ever, by de Groot in the 1950s with followups by Simon and Chase in the 1970s. (I heard about them in Herb Simon’s wonderful cog psych class/seminar at CMU.) It also relates to the seminal short-term memory experiments of George Miller in the 1950s.

The main takeaway is that chess experts have great memories for board positions if and only if the boards make sense tactically. They’re no better than amateurs at remembering random board positions. Even the errors they make in reconstructing board positions tend to preserve the tactical arrangement, even when the placement of individual pieces is off. Herb’s takeaway message was that “memory” is very tied up with expectations and the ability to “chunk” information into bundles. Miller’s experiments and followups showed that we can hold about as many bundles (“chunks”) of information in short-term memory as single random pieces of information (the famous “7 +/- 2” figure).

Here’s a nice survey of the psychology of chess that covers the above experiments and more.

Dilbert Meets Big Unstructured Data … and Builds a Framework

September 5, 2012

Best Dilbert ever. Or at least the most relevant to this blog:

Dilbert, 5 September 2012

I’ll give you the setup. Dilbert walks into a bar and strikes up a conversation with a woman who asks him what he does for a living. Dilbert replies, “I’m working on a framework to allow construction of large-scale analytical queries on unstructured data.”

I’ll leave the punchline to the strip.

MacBook Pro 15″ Retina Display Awesomeness

July 14, 2012

I just received my new MacBook Pro 15″ with the Retina display.

First, I have to mention how blown away I was that Apple has a feature (the “Migration Assistant”) that lets you clone your last computer. An hour or two after setting up, the new MacBook Pro had all the software, data, and settings (well, almost all) from my previous computer, a MacBook Air. All done over my home wireless network (though our sysadmin here at Columbia strongly recommended a wired connection, my Air doesn’t have a port and I didn’t have a dongle).

Yes, text is just as beautiful as on the iPad 3. So are photos and images. Everything else I use is looking awfully pixelated in comparison (such as this blog post I’m typing into Safari on my 27″ iMac).

The biggest downside is that it’s big (15″ diagonal screen vs. 13″ on my MacBook Air) and heavy (4.5 lbs vs. 2.9 lbs for the Air). Though big isn’t so bad — the 15″ screen seems luxurious after the Air’s rather cramped confines. Some software’s not up to the display, so the text looks really bad on the new MacBook Pro. Firefox and Thunderbird, for instance, look terrible. Overall, it’s just not as nice to handle as the Air. (Not to mention Columbia slapping the ugliest anti-theft stickers ever on it. Now I look like both a hipster clone and a corporate drone at the same time.) The MagSafe cable has a very strong magnet compared to the Air’s and sticks out a bit more. And to add insult to injury, they’re not interchangeable, so we had to throw more money toward Cupertino.

I’d say the price is a downside (mine came out to about $2700 before Columbia discounts, including AppleCare). Even if I were buying this myself it’d be worth it, because I’ll average at least 20 hours/week use for two or more years.

Additional upsides are 16GB of memory and four cores. With that, it runs the Stan C++ unit tests in under 3 minutes (it takes around 12 minutes on the Air and the Air starts buzzing like an angry fly). The HDMI port saves a dongle, but then the change to Thunderbolt meant buying another one. I don’t know that I’ll get much use out of USB 3.0 (the iPad 3 is only USB 2.0). I also get 256GB of SSD, though I never filled the 128GB I had on the Air. The ethernet port and HDMI port are handy — two fewer dongles compared to the MacBook Air if you need either of these ports.

I haven’t heard the fan. I’ve heard about it — it’s asymmetrical, which according to my signal processing geek friends, reduces the noise tremendously. It’s either super quiet or the machine’s so powerful the Stan unit tests don’t stress it out.

grammar why ! matters

July 11, 2012

For all those of you who might have read things like this, this, or this, I want to explain why the answer is “yes”, spelling and grammar matter.

Language is a Tool

Language is a tool used for many purposes.

If your goal is to entertain, there are different conventions. Singers like Bob Dylan can be highly entertaining while remaining nearly incomprehensible. If your goal is to connect to friends or loved ones, yet other conventions come into play.

Sometimes language is used for multiple things at once.

Language is a Convention

Language is a matter of convention. We simply cannot write or say whatever we want to however we want to and be understood.

If your purpose is communication, it behooves you to make your message clear. There are exceptions to this, too. I might be trying to communicate how worldly I am by using French or Italian food terms or pronunciations instead of English, even knowing the audience won’t understand them.

Communicating means using shared conventions.

Word Order

For instance, consider word order. Take the following: “understood be and to want we however to want we whatever say or write cannot simply we”. You’ve seen that sentence above, only in reverse. In reverse, it’s pretty much impossible to understand.

Even in the CBS piece by Steve Tobak, the author mocks bad grammar with “me want food”. Well, that has a subject, verb, and object in perfect English order, which is why it’s so easy to understand. It even gets the tense of “want” and the number and lack of determiner for “food” right. The only mistake is using the object form “me” where the subject form “I” is required!

Spelling?

Tobak goes on to quote a comment, “I jus read your article; ___. Very interesting!” What’s wrong with bad spelling? It’s unpleasant because it slows us down as readers. If it gets bad enough, it can block understanding. I had no problem detangling the last example, but how about “I js rd y ar — int!!!!!!!”?

Spelling used to be even more chaotic in English. It’s better in some other languages.

Disclaimers

I’m all for telegraphic speech. It works best in shared contexts. It’s a little harder with a bare Tweet. Language is incredibly tied up with context. Enough world knowledge can get you by, too. I might be able to refer to a TV show by “ST:TNG”, but my mom would have no idea what I was talking about.

For some purposes, precision and clarity matter much less. Consider drafting legislation vs. planning to meet at a restaurant vs. saying hello. Telegraphic speech can be very precise. Doctors’ notes to each other are a prime example. You don’t need a verb if everyone knows there’s only one thing to do with a device or a noun if there’s only one device to use.

Saying language is conventional and conventions should be followed is a subtly different stance from traditional linguistic prescriptivism. Languages change. If they didn’t, English wouldn’t even exist. I’m not railing against split infinitives, dangled prepositions, a complete failure to understand “who”/”whom” or even “I”/”me”, abandoning adverbial morphology, using “ain’t”, pronouncing “ask” like “axe”, etc. etc. I think these all have a good chance of achieving “proper” English status one day.

More on the Terminology of Averages, Means, and Expectations

June 21, 2012

I missed Cosma Shalizi’s comment on my first post on averages versus means. Rather than write a blog-length reply, I’m pulling it out into its own little lexicographic excursion. To borrow stylistically from Cosma’s blog, I’ll warn you, the reader, that this post is more linguistics than statistics.

Three Concepts and Terminology

Presumably everyone’s clear on the distinctions among the three concepts,

1. [arithmetic] sample mean,

2. the mean of a distribution, and

3. the expectation of a random variable.

The relations among these concepts are very rich, which is what I conjecture is causing their conflation.

Let’s set aside the discussion of “average”, as it’s less precise terminologically. But even the very precision of the term “average” is debatable! The Random House Dictionary lists the meaning of “average” in statistics as the arithmetic mean. Wikipedia, on the other hand, agrees with Cosma, and lists a number of statistics of centrality (sample mean, sample median, etc.) as being candidates for the meaning of “average”.

Distinguishing Means and Sample Means

Getting to the references Cosma asked for, all of my intro textbooks (Ash, Feller, DeGroot and Schervish, Larsen and Marx) distinguished sense (1) from senses (2) and (3). Even the Wikipedia entry for “mean” leads off with

In statistics, mean has two related meanings:

the arithmetic mean (and is distinguished from the geometric mean or harmonic mean).

the expected value of a random variable, which is also called the population mean.

Unfortunately, the invocation of the population mean here is problematic. Random variables aren’t intrinsically related to populations in any way (at least under the Bayesian conception of what can be modeled as random). Populations can be characterized by a set of (conditionally) independent and identically distributed (i.i.d.) random variables, each corresponding to a measurable quantity of a member of the population. And of course, averages of random variables are themselves random variables.

This reminds me of the common typological mistake of talking about “sampling from a random variable” (follow the link for Google hits for the phrase).

Population Means and Empirical Distributions

The Wikipedia introduces a fourth concept, population mean, which is just the arithmetic mean of a given population. This is related to the point Cosma brought up in his comment that you can think of a sample mean as the mean of a distribution that matches the empirically observed distribution. For instance, if you observe three heads and one tail in four coin flips, you can define a discrete random variable X with p_X(1) = 0.75 and p_X(0) = 0.25; then the average number of heads is equal to the expectation of X, or the mean of p_X.
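Spelling out the arithmetic for that example,

\mathbb{E}[X] \ = \ 1 \times p_X(1) + 0 \times p_X(0) \ = \ 0.75,

which is just the sample mean, three heads out of four flips.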

Conflating Means and Expectations

I was surprised that, like the Wikipedia, almost all the sources I consulted explicitly conflated senses (2) and (3). Feller’s 1950 An Introduction to Probability Theory and Its Applications, Vol. 1 says the following on page 221.

The terms mean, average, and mathematical expectation are synonymous. We also speak of the mean of a distribution instead of referring to a corresponding random variable.

The second sentence is telling. Distributions have means independently of whether we’re talking about a random variable or not. If one forbids talk of distributions as first-class objects with their own existence free of random variables, one might argue that concepts (2) and (3) should always be conflated.

Metonymy and Lexical Semantic Coercion

I think the short story about what’s going on in conflating (2) and (3) is metonymy. For example, I can use “New York” to refer to the New York Yankees or the city government, but no one will understand me if I try to use “New York Yankees” to refer to the city or the government. I’m taking one aspect of the team, namely its location, and using that to refer to the whole.

This can happen implicitly with other kinds of associations. I can talk about the “mean” of a random variable X by implicitly invoking its probability function p_X(x). I can also talk about the expectation of a distribution by implicitly invoking the appropriate random variable. Sometimes authors try to sidestep random variable notation by writing \mathbb{E}_{\pi(x)}[x], which to my type-sensitive mind appears ill-formed; what they really mean to write is \mathbb{E}[X] where p_X(x) = \pi(x).
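To be concrete about the types, the well-formed version is just the usual definition of the expectation of a discrete random variable X with probability mass function p_X,

\mathbb{E}[X] \ = \ \sum_x x \, p_X(x),

so the distribution enters only through p_X, not as a subscript decorating the expectation symbol.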

I found it painfully difficult to learn statistics because of this sloppiness with respect to the types of things, especially among less careful authors. Bayesians, myself now included, often write x for both a random variable and a bound variable; see Gelman et al.’s Bayesian Data Analysis, for a typical example of this style.

Settling down into my linguistic armchair, I’ll conclude by noting that it seems to me more felicitous to say

the expectation of a random variable is the mean of its distribution.

than to say

the mean of a random variable is the expectation of its distribution.

0/1 Loss Meaningless for Predicting Rare Events such as Exploding Manholes

June 14, 2012

[Update: 19 June 2012: Becky just wrote me to clarify which tools they were using for what (quoted with permission, of course — thanks, Becky):

… we aren’t using BART to rank structures, we use an independently learned ranked list to bin the structures before we apply BART. We use BART to do a treatment analysis where the y values represent whether there was an event, then we compute the role that the treatment variable plays in the prediction. Here’s a journal paper that describes our initial ranking method

http://www.springerlink.com/content/3034h0j334211484/

and the pre-publication version

http://www1.ccls.columbia.edu/%7Ebeck/pubs/ConedPaperRevision-v5.pdf

The algorithm for doing the ranking was modified a few years ago, and now Cynthia is taking a new approach that uses survival analysis.]

Rare Events

Let’s suppose you’re building a model to predict rare events, like manhole explosions in the Con-Ed system in New York (this is the real case at hand — see below for more info). For a different example, consider modeling the probability of a driver getting into a traffic accident in the next week. The problem with both of these situations is that even with all the predictors in hand (last maintenance, number of cables, voltages, etc. in the Con-Ed case; driving record, miles driven, etc. in the driving case), the estimated probability for any given manhole exploding (any person getting into an accident next week) is less than 50%.

The Problem with 0/1 Loss

A typical approach in machine learning in general, and particularly in NLP, is to use 0/1 loss. This forces the system to make a simple yea/nay (aka 0/1) prediction for every manhole about whether it will explode in the next year or not. Then we compare those predictions to reality, assigning a loss of 1 if you predict the wrong outcome and 0 if you predict correctly, then summing these losses over all manholes.

The way to minimize expected loss is to predict 1 if the probability estimate of failure is greater than 0.5 and 0 otherwise. If all of the probability estimates are below 0.5, every prediction is 0 (no explosion). Consequently, the total loss is just the number of actual explosions. Unfortunately, this is the best you can do if your loss is 0/1 and you have to make 0/1 predictions.
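Spelled out for a single manhole with explosion probability p, the expected 0/1 losses for the two possible predictions are

\mathbb{E}[\mbox{loss} \,|\, \mbox{predict } 0] = p \ \ \ \mbox{and} \ \ \ \mathbb{E}[\mbox{loss} \,|\, \mbox{predict } 1] = 1 - p,

so whenever p < 0.5, predicting 0 has lower expected loss; if every estimate is below 0.5, the all-zero prediction is the minimizer.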

So we’ve minimized 0/1 loss and in so doing created a useless 0/1 classifier.

A Hacked Threshold?

There’s something fishy about a classifier that returns all 0 predictions. Maybe we can adjust the threshold for predicting explosions below 0.5. Equivalently, for 0/1 classification purposes, we could rescale the probability estimates.

Sure, it gives us some predicted explosions, but the result is a non-optimal 0/1 classifier. The reason it’s non-optimal in 0/1-loss terms is that each prediction of an explosion is likely to be wrong, but in aggregate some of them will be right.

It’s not a 0/1 Classification Problem

The problem with 0/1 classification here arises from converting per-manhole explosion probability estimates that are all below 50% into the 0/1 predictions that minimize expected loss.

Suppose our probability estimates are close to the truth, at least in the sense that for any given manhole there’s only a very small chance it’ll explode, no matter what its features are.

Some manholes do explode and the all-0 predictions are wrong for every exploding manhole.

What Con-Ed really cares about is finding the most at-risk properties in its network and supplying them maintenance (as well as understanding what the risk factors are). This is a very different problem.

A Better Idea

Take the probabilities seriously. If your model predicts a 10% chance of explosion for each of 100 manholes, you expect to see 10 explosions. You just don’t know which of the 100 manholes they’ll be. You can measure these marginal predictions (number of predicted explosions) to gauge how accurate your model’s probability estimates are.
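Writing \hat{p}_n for the model’s estimated probability that manhole n explodes (notation introduced here just for illustration), the marginal check is a sum,

\mathbb{E}[\mbox{number of explosions}] \ = \ \sum_{n=1}^N \hat{p}_n,

which can be compared directly to the observed count.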

We’d really like a general evaluation that measures how good our probability estimates are, not how good our 0/1 predictions are. Log loss does just that. Suppose you have N outcomes y_1,\ldots,y_N with corresponding predictors (aka features), x_1,\ldots,x_N, and your model has parameter \theta. The log loss for the parameter (point) estimate \hat{\theta} is

      {\mathcal L}(\hat{\theta}) \ = \ - \sum_{n=1}^N \, \log \, p(y_n|\hat{\theta};x_n)

That is, it’s the negative log probability (the negative turns gain into loss) of the actual outcomes given your model; the summation is called the log likelihood when viewed as a function of \theta, so log loss is really just the negative log likelihood. This is what you want to optimize if you don’t know anything else. And it’s exactly what most probabilistic estimators optimize for classifiers (e.g., logistic regression, BART [see below]).
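As a quick sense of scale (just arithmetic), a model that assigns probability 0.9 to the outcome that actually occurred contributes

-\log 0.9 \approx 0.105

to the log loss for that item, whereas assigning it probability 0.1 contributes -\log 0.1 \approx 2.3. Log loss rewards calibrated confidence and heavily punishes confident mistakes.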

Decision Theory

The right thing to do for the Con-Ed case is to break out some decision theory. We can assign costs to the various prediction/outcome pairs (true positive, false positive, true negative, false negative) and then choose the actions that minimize expected cost. If there’s a huge penalty for a false negative (saying there won’t be an explosion when there is), then you are best served by acting on low-probability information, such as servicing even low-probability manholes. For example, if there is a $100 cost for a manhole blowing up and it costs $1 to service a manhole so it doesn’t blow up, then even a 1% chance of blowing up is enough to send out the service team.
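Working out that last example explicitly, if servicing costs $1 and prevents an explosion that would otherwise cost $100, the expected costs for a manhole with explosion probability p are

\mbox{service:} \ \$1 \ \ \ \mbox{vs.} \ \ \ \mbox{don't service:} \ \$100 \times p,

so servicing is the better action whenever 100p > 1, that is, whenever p > 0.01, which matches the 1% threshold above.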

We haven’t changed the model’s probability estimates at all, just how we act on them.

In Bayesian decision theory, you choose actions to minimize expected loss conditioned on the data (i.e., optimize expected outcomes based on the posterior predictions of the model).

Ranking-Based Evaluations

Suppose we sort the list of manholes in decreasing order of estimated probability of explosion. We can line this up with the actual outcomes. Good system performance is reflected in having the actual explosions ranked high on the list.

Information retrieval supplies a number of metrics for this kind of ranking. The thing I like to see for this kind of application is a precision-recall curve. I’m not a big fan of single-number evaluations like mean average precision, though precision-at-N makes sense in some cases, such as if Con-Ed had a fixed maintenance budget and wanted to know how many potentially exploding manholes it could service.
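Here’s a minimal sketch of this kind of ranking evaluation in Python. The function name and the numbers are made up for illustration; nothing here is tied to the Con-Ed data or to any particular library. It just computes precision and recall at every cutoff for items sorted by predicted risk.

# Precision and recall at each cutoff for items ranked by
# predicted probability of an event (highest risk first).
def precision_recall_at_n(probs, outcomes):
    # probs: predicted event probabilities
    # outcomes: 1 if the event actually happened, 0 otherwise
    ranked = sorted(zip(probs, outcomes), key=lambda pair: pair[0], reverse=True)
    total_events = sum(outcome for _, outcome in ranked)
    results = []
    hits = 0
    for n, (_, outcome) in enumerate(ranked, start=1):
        hits += outcome
        precision = hits / float(n)
        recall = hits / float(total_events) if total_events > 0 else 0.0
        results.append((n, precision, recall))
    return results

# Toy example: five manholes, two of which actually exploded.
probs = [0.30, 0.05, 0.20, 0.01, 0.10]
outcomes = [1, 0, 0, 0, 1]
for n, precision, recall in precision_recall_at_n(probs, outcomes):
    print("precision@%d = %.2f, recall@%d = %.2f" % (n, precision, n, recall))

Plotting recall against precision over all cutoffs gives the precision-recall curve; precision-at-N is the single point at cutoff N, for example with N set by a fixed maintenance budget.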

There’s a long description of these kinds of evaluations in

Just remember that there’s noise in these curves and that picking an optimal point on them is unlikely to produce equally good behavior on held-out data.

With good probability estimates for the events, you will get good rankings (there’s a ton of theory around this that I’ve never studied).

About the Exploding Manholes Project

I’ve been hanging out at Columbia’s Center for Computational Learning Systems (CCLS) talking to Becky Passonneau, Haimanti Dutta, Ashish Tomar, and crew about their Con-Ed project of predicting certain kinds of events like exploding manholes. They built a non-parametric regression model using Bayesian additive regression trees with a fair amount of data and many features as predictors.

I just wrote a blog post on Andrew Gelman’s blog that’s related to issues they were having with diagnosing convergence:

But the real problem is that all the predictions are below 0.5 for manholes exploding and the like. So simple 0/1 loss just fails. I thought the histograms of residuals looked fishy until it dawned on me that it actually makes sense for all the predictions to be below 0.5 in this situation.

Moral of the Story

0/1 loss is not your real friend. Decision theory is.

The Lottery Paradox

This whole discussion reminds me of the lottery “paradox”. Each ticket holder is very unlikely to win a lottery, but one of them will win. The “paradox” arises from the inconsistency of the conjunction of beliefs that each person will lose and the belief that someone will win.

Oh, no! Henry Kyburg died in 2007. He was a great guy and decades ahead of his time. He was one of my department’s faculty review board members when I was at CMU. I have a paper in a book he edited from the 80s when we were both working on default logics.

Ranks in Academia vs. Nelson’s Navy

June 5, 2012

I’m a huge fan of nautical fiction. And by that, I mean age-of-sail stuff, not WWII submarines (though I loved Das Boot). The literature is much deeper than Hornblower and Aubrey/Maturin (though it doesn’t get better than O’Brian). I’ve read hundreds of these books. If you want to join me, you might find the following helpful.

I think I’ve pretty much read every nautical fiction book published in the last 50 years. I had to go back to sci-fi and even fantasy (thank you, Patrick Rothfuss, for making my life better a book at a time).

Officer Grades

Given that nautical fiction almost always focuses on the officers, I’ve come to realize that the books are really about organizational structure and management. I see a strong relation to the academic pecking order, which I summarize in the following table.

Academia                 Navy
undergrad                nipper
grad student             midshipman
post-doc                 lieutenant
junior faculty           commander
tenured faculty          post captain
department head, dean    admiral

Non-Commissioned and Warrant Officers

What about the rest of us?

Academia                 Navy
research scientist       sailing master
research programmer      boatswain (aka ‘bosun’)
grants officer           Admiralty bureaucrat

Sailing master, because we research scientists know the technical bits of being an officer, namely navigation and how the ship works. Programmers are bosuns because they’re the most technically adept at the low-level functionality of academia. I guess if you weren’t in computer science, the research programmer would be a lab tech.