It’s nicely symmetric in 0 and 1, too. You’ll need to do all of your manipulations on the logit (i.e., log-odds) scale. I wonder how the arithmetic works out:

logit_add(a,b) =def= logit(inv_logit(a) + inv_logit(b))

logit_multiply(a,b) =def= logit(inv_logit(a) * inv_logit(b))

I don’t have time to work out the algebra. Obviously we can’t just apply inv_logit or we’re back in hot water with floating point.
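One way the arithmetic can work out without ever materializing p on the probability scale is to route through log probabilities: log inv_logit(a) is stable to compute directly, products become sums, sums become log-sum-exp, and converting back uses logit(p) = log p - log(1 - p). A sketch (function names are mine, not from any library; logit_add is only defined when the probabilities sum to less than one):

```python
import math

def log1mexp(x):
    """log(1 - exp(x)) for x < 0, using the standard two-branch trick."""
    if x > -math.log(2.0):
        return math.log(-math.expm1(x))
    return math.log1p(-math.exp(x))

def log_inv_logit(a):
    """log(inv_logit(a)) = -log(1 + exp(-a)), stable for large |a|."""
    if a >= 0.0:
        return -math.log1p(math.exp(-a))
    return a - math.log1p(math.exp(a))

def logaddexp(x, y):
    """log(exp(x) + exp(y)) without overflow."""
    m = max(x, y)
    return m + math.log(math.exp(x - m) + math.exp(y - m))

def logit_multiply(a, b):
    """logit(inv_logit(a) * inv_logit(b)), staying on the log scale."""
    lp = log_inv_logit(a) + log_inv_logit(b)  # log of the product
    return lp - log1mexp(lp)                  # logit(p) = log p - log(1 - p)

def logit_add(a, b):
    """logit(inv_logit(a) + inv_logit(b)); only defined when the sum < 1."""
    lp = logaddexp(log_inv_logit(a), log_inv_logit(b))
    if lp >= 0.0:
        raise ValueError("inv_logit(a) + inv_logit(b) >= 1")
    return lp - log1mexp(lp)
```

Sanity check: logit_multiply(0, 0) should be logit(0.5 * 0.5) = logit(0.25) = log(1/3).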

Had to look that one up: http://www.mpfr.org

The web page says, “The MPFR library is a C library for multiple-precision floating-point computations with correct rounding.”

This is still a relatively standard floating-point representation, just with more bits. To represent numbers near one, you’re going to need a whole lot of bits.
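To make that concrete: in IEEE double precision, anything within about 1e-16 of one rounds to exactly one, so a probability like 1 - 1e-17 isn’t representable at all, while its logit is an unremarkable number around 39:

```python
import math

# IEEE double precision: half an ulp at 1.0 is about 1.1e-16,
# so 1 - 1e-17 rounds to exactly 1.0
p = 1.0 - 1e-17
print(p == 1.0)        # True: the probability has collapsed to certainty

# On the log-odds scale the same value is easy to represent:
# logit(1 - 1e-17) = log((1 - 1e-17) / 1e-17) ≈ log(1e17) ≈ 39.1
print(math.log(1e17))
```

More bits in an MPFR-style mantissa push the cliff out, but you still pay roughly -log2(1 - p) bits per value; the log-odds scale sidesteps that entirely.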

The Church folks are working on probabilistic programming languages under the new DARPA PPAML program. But I haven’t heard any of them talk about new data structures for continuous values between 0 and 1.
