Annoyingly, they wouldn’t allow the arXiv.org tags for computational linguistics and machine learning, only ones from the math set.

If $\beta$ is unidimensional, we’d have a closed form if we could analytically evaluate

$$\int_{-\infty}^{\infty} \exp\!\left(-\lambda_1 |\beta| \;-\; \lambda_2 \beta^2\right)\, d\beta.$$

Nope, don’t know how to integrate that.
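Even without a closed form, the normalizer can at least be checked numerically to be finite. A minimal sketch, assuming the elastic net penalty kernel $\exp(-\lambda_1|\beta| - \lambda_2\beta^2)$ with illustrative values $\lambda_1 = \lambda_2 = 1$ (not values from the post):

```python
import math

# Elastic net "prior" kernel: exp(-lambda1*|b| - lambda2*b^2).
# lambda1, lambda2 are illustrative choices, not values from the post.
lambda1, lambda2 = 1.0, 1.0

def kernel(b):
    return math.exp(-lambda1 * abs(b) - lambda2 * b * b)

# Simple midpoint-rule quadrature on [-10, 10]; the integrand decays
# at least as fast as a Gaussian, so the truncation error is negligible.
n, lo, hi = 200_000, -10.0, 10.0
h = (hi - lo) / n
total = h * sum(kernel(lo + (i + 0.5) * h) for i in range(n))

print(total)  # finite, so the univariate kernel normalizes
```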

Intuitively it seems like this should exist, and be baby math to figure out, but I haven’t done it. If intuition is wrong, here are some of the possible failure modes:

1. The univariate distribution might exist, but not have a closed form.

2. The univariate distribution might exist, but be improper (doesn’t integrate to 1).

3. The elastic net solution might not correspond to the Bayesian MAP solution under any product of univariate priors on coefficients.

4. The elastic net solution might not correspond to the Bayesian MAP solution under any multivariate prior on coefficients.
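For what it’s worth, failure mode 2 looks easy to rule out. Assuming the usual penalty form $\lambda_1|\beta| + \lambda_2\beta^2$ with $\lambda_1 > 0$ and $\lambda_2 \ge 0$, the kernel is dominated by a Laplace kernel, so it integrates:

$$\int_{-\infty}^{\infty} e^{-\lambda_1|\beta| - \lambda_2\beta^2}\, d\beta \;\le\; \int_{-\infty}^{\infty} e^{-\lambda_1|\beta|}\, d\beta \;=\; \frac{2}{\lambda_1} \;<\; \infty,$$

so the univariate distribution exists and is proper, whatever its normalizer works out to be.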

The lack of a Bayesian MAP interpretation of course wouldn’t change the happy fact that the penalized likelihood with elastic net is nicely behaved: convex, with a single global optimum at a sparse solution (if $\alpha > 0$), etc.
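The sparsity claim is easy to see in one dimension: minimizing $\frac{1}{2}(\beta - z)^2 + \lambda_1|\beta| + \lambda_2\beta^2$ has a closed-form soft-thresholding solution. A sketch of that orthonormal-design shortcut (not glmnet’s actual coordinate descent):

```python
import math

def enet_1d(z, lam1, lam2):
    """Minimizer of 0.5*(b - z)**2 + lam1*|b| + lam2*b**2.

    Soft-threshold z at lam1, then shrink by 1/(1 + 2*lam2).
    """
    return math.copysign(max(abs(z) - lam1, 0.0), z) / (1.0 + 2.0 * lam2)

# Small inputs are thresholded exactly to zero -> sparse solution.
print(enet_1d(0.5, 1.0, 0.5))  # 0.0
# Larger inputs are shrunk but survive.
print(enet_1d(3.0, 1.0, 0.5))  # (3 - 1) / (1 + 1) = 1.0
```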

That’s why I’m becoming a fan of Bayesian methods. I suggest you try the relevance vector machine (RVM), which uses the evidence approximation (also called type-2 maximum likelihood) to select the optimal prior.
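A quick way to try the type-2 ML / evidence idea is scikit-learn’s `ARDRegression` (automatic relevance determination), a close cousin of the RVM’s weight pruning, though not Tipping’s kernelized RVM itself. A sketch on synthetic data (all names and values here are illustrative):

```python
import numpy as np
from sklearn.linear_model import ARDRegression

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 5))
true_w = np.array([2.0, 0.0, 0.0, -1.5, 0.0])  # mostly irrelevant features
y = X @ true_w + 0.05 * rng.normal(size=100)

# Type-2 ML fits per-coefficient prior precisions by maximizing
# the marginal likelihood, pruning irrelevant features toward zero.
model = ARDRegression().fit(X, y)
print(np.round(model.coef_, 2))  # relevant weights recovered, rest near 0
```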

For my problems, it works just fine, though I have not tried it on really large-scale data yet.

I’ve been hoping to get the time to take glmnet out for a spin myself. I’ve been doing more and more of my results analyses in R these days. What I really need to do is provide nice LingPipe outputs that can be read back into R. The only problem is scaling.
