Limits of Floating Point and the Asymmetry of 0 and 1

If we are representing probabilities, we are interested in numbers between 0 and 1. It turns out the two ends of that range have very different properties in floating-point arithmetic. And the problem is not as easy to solve as just working on the log scale.

Smallest Gaps between Floating Point Numbers of Different Magnitudes

The difference between 0 and the next largest number representable in double-precision floating point (Java or C++ double) is tiny, on the order of 1e-308 (and smaller still, around 1e-324, if you count subnormal numbers). In contrast, the difference between 1 and the next smallest representable number is around 1e-16. The reason is that a floating-point number splits its bits between a mantissa and an exponent. Near 0 the exponent can go very negative, but near 1 the exponent is pinned at roughly 0, so to exploit the maximum number of significant digits all the precision has to come from the mantissa, which looks like 0.9999999999999999 or 1.0000000000000002.
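You can see the asymmetry directly by asking the math library for the neighboring representable values. Here's a minimal sketch in C; the printed values are approximate:

    #include <stdio.h>
    #include <math.h>
    #include <float.h>

    int main(void) {
      /* smallest positive double (subnormal): about 4.9e-324 */
      printf("%g\n", nextafter(0.0, 1.0));
      /* smallest positive normalized double: about 2.2e-308 */
      printf("%g\n", DBL_MIN);
      /* gap between 1 and the next double below it: about 1.1e-16 */
      printf("%g\n", 1.0 - nextafter(1.0, 0.0));
      /* gap between 1 and the next double above it (machine epsilon): about 2.2e-16 */
      printf("%g\n", DBL_EPSILON);
      return 0;
    }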

CDFs and Complementary CDFs

This is why math libraries supply two differently translated error functions, erf() and erfc(), which are rescaled cumulative and complementary cumulative distribution functions. That is, you can use erf() to calculate the cumulative unit normal distribution function (cdf), written Phi(x), and erfc() to calculate the complementary cumulative unit normal distribution function (ccdf), written 1 – Phi(x).
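For instance, here's a minimal sketch in C (whose math.h supplies erf() and erfc()); the helper names unit_normal_cdf and unit_normal_ccdf are just for illustration. Out at x = 10, Phi(x) rounds to exactly 1, so subtracting it from 1 gives 0, while erfc() hangs on to the roughly 7.6e-24 of tail probability:

    #include <stdio.h>
    #include <math.h>

    /* unit normal cdf: Phi(x) = 0.5 * erfc(-x / sqrt(2)) */
    double unit_normal_cdf(double x) {
      return 0.5 * erfc(-x / sqrt(2.0));
    }

    /* unit normal ccdf: 1 - Phi(x) = 0.5 * erfc(x / sqrt(2)) */
    double unit_normal_ccdf(double x) {
      return 0.5 * erfc(x / sqrt(2.0));
    }

    int main(void) {
      double x = 10.0;
      /* the subtraction loses everything: prints 0 */
      printf("1 - cdf(10) = %g\n", 1.0 - unit_normal_cdf(x));
      /* erfc() keeps the tail: prints about 7.6e-24 */
      printf("ccdf(10)    = %g\n", unit_normal_ccdf(x));
      return 0;
    }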

Log Scale no Panacea

Switching to the log scale doesn't solve the problem. If you have a number very close to 1, you need to be careful in taking its log. If you write log(1 - x), you run into the problem that x can only be represented so close to 1, so the difference 1 - x can never get smaller than about 1e-16. That's why standard math libraries provide a log1p() function, defined by log1p(x) = log(1 + x): by handing it the small quantity directly, you get back the precision you lose by subtracting two numbers close to each other (what's called "catastrophic cancellation" in the numerical computing literature).
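Here's a minimal sketch in C of the difference log1p() makes; eps stands in for a tiny complement that your computation produces directly:

    #include <stdio.h>
    #include <math.h>

    int main(void) {
      /* suppose the probability is p = 1 - eps and we have eps in hand */
      double eps = 1e-20;
      /* forming 1 - eps first rounds to exactly 1.0, so the log is 0 */
      printf("log(1 - eps) = %g\n", log(1.0 - eps));
      /* log1p() takes the small difference directly: about -1e-20 */
      printf("log1p(-eps)  = %g\n", log1p(-eps));
      return 0;
    }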

A Little Quiz

I’ll end with a little quiz to get you thinking about floating point a little more closely. Suppose you set the bits in a floating point number randomly, say by flipping a coin for each bit.

Junior Varsity:  What’s the approximate probability that the random floating point number has an absolute value less than 1?

Varsity:  What is the event probability of drawing a number in the range (L, U)? Or, equivalently, what's the density function to which the draws are proportional?
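If you want to check your answers empirically, here's a minimal sketch in C that fills 64 random bits, reinterprets them as a double, and counts how often the result has absolute value less than 1 (NaN bit patterns are skipped):

    #include <stdio.h>
    #include <stdlib.h>
    #include <string.h>
    #include <stdint.h>
    #include <math.h>

    int main(void) {
      srand(12345);
      int n = 1000000, count = 0;
      for (int i = 0; i < n; ++i) {
        uint64_t bits = 0;
        for (int b = 0; b < 64; ++b)
          bits |= (uint64_t)(rand() & 1) << b;   /* one coin flip per bit */
        double x;
        memcpy(&x, &bits, sizeof x);             /* reinterpret the bit pattern */
        if (!isnan(x) && fabs(x) < 1.0)
          ++count;
      }
      printf("estimated P(|x| < 1) = %g\n", (double) count / n);
      return 0;
    }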

7 Responses to “Limits of Floating Point and the Asymmetry of 0 and 1”

  1. David D. Lewis Says:

    Time for a programming language with probability as a built-in data type?

  2. Bob Carpenter Says:

    You’d need some kind of new continuous number representation other than mantissa/exponent as in floating point. It’s not so much the operations as the way we represent numbers. And unless we also increase the precision, no way we could represent likelihoods without underflow, at least on the linear scale.

    The Church folks are working on probabilistic programming languages under the new DARPA PPAML program. But I haven’t heard any of them talk about new data structures for continuous values between 0 and 1.

  3. David D. Lewis Says:

    That’s what I’m intrigued by. If the representation was decoupled from the need to make the bit patterns convenient for doing operations in hardware, what would it look like?

  4. salviati Says:

    GMP/MPFR?

    • Bob Carpenter Says:

      Had to look that one up: http://www.mpfr.org

      The web page says, “The MPFR library is a C library for multiple-precision floating-point computations with correct rounding.”

      This is still a relatively standard floating-point representation, just with more bits. In order to represent numbers near one, you’re going to need a whole lot of bits.

  5. kenahoo Says:

    If you need to represent numbers both very close to 0 and very close to 1, it’s convenient to transform using logit and invlogit and then deal with numbers on the whole real line. R has `plogis()` and `qlogis()` built in. I often do this just so I don’t have to restrict the range of my parameters too.

    • Bob Carpenter Says:

      It’s nicely symmetric in 0 and 1, too. You’ll need to do all of your manipulations on the logit (i.e., log-odds) scale. I wonder how the arithmetic works out:

      logit_add(a,b) =def= logit(inv_logit(a) + inv_logit(b))

      logit_multiply(a,b) =def= logit(inv_logit(a) * inv_logit(b))

      I don’t have time to work out the algebra. Obviously we can’t just apply inv_logit or we’re back in hot water with floating point.
