Computing Sample Mean and Variance Online in One Pass

Update, 30 April 2009:

OK, the method mentioned in the links below, and in my final comment below, is now implemented in LingPipe. Here’s the Javadoc and code:

Update, 6 April 2009:

Just check out:

In particular, Welford’s algorithm, which is both online and fairly numerically stable. There’s a nice evaluation and description in:

It shows the problem with computing variances for a set of large values that are close together.
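
For the record, here are the recurrences behind Welford’s algorithm (they’re what the OnlineNormalEstimator code in the comments below implements). Starting from M_1 = x_1 and S_1 = 0, each new value x_k updates a running mean M_k and a running sum of squared deviations S_k:

M_k = M_{k-1} + \frac{x_k - M_{k-1}}{k}

S_k = S_{k-1} + (x_k - M_{k-1})(x_k - M_k)

After n values, the mean estimate is M_n and the unbiased variance estimate is S_n / (n - 1).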

A two-pass algorithm that first estimates the mean is even more robust; it combines Welford’s algorithm with Andrew’s suggestion in the comments about subtracting out the mean.
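
To see the failure mode concretely, here’s a toy demo of my own (not LingPipe code) comparing the naive sum-of-squares formula from the original post below with Welford’s update on large, close-together values:

public class VarianceDemo {
    public static void main(String[] args) {
        double[] xs = new double[1000];
        for (int i = 0; i < xs.length; ++i)
            xs[i] = 1e9 + i;  // true variance is (1000*1000 - 1)/12 = 83333.25

        // naive one-pass: running sum and sum of squares
        double sum = 0.0, sumSq = 0.0;
        for (double x : xs) {
            sum += x;
            sumSq += x * x;  // squares are ~1e18, so they lose precision
        }
        double mean = sum / xs.length;
        double naiveVar = sumSq / xs.length - mean * mean;  // catastrophic cancellation

        // Welford's one-pass update
        long n = 0;
        double m = 0.0, s = 0.0;
        for (double x : xs) {
            ++n;
            double nextM = m + (x - m) / n;
            s += (x - m) * (x - nextM);
            m = nextM;
        }
        double welfordVar = s / n;

        System.out.println("naive:   " + naiveVar);   // way off, possibly negative
        System.out.println("welford: " + welfordVar); // close to 83333.25
    }
}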

Original Post:

Algebra week continues here at Alias-i with this piece of advice on how to compute sample means and variances in one pass through the data. Here’s the standard textbook definition of the sample mean of an array x of length n:

\hat{\mu} = \frac{1}{n} \sum_{i=1}^n x_i

Here’s the formula for sample variance:

\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n (\hat{\mu} - x_i)^2

From the way they’re written, it looks like you need to make two passes through the array x, first to compute the mean and then again to compute the variance. But it turns out with a little algebra, it’s easy to see we only need a single pass. The trick is to keep a running total of the sums of the x values:

\sum_{i=1}^n x_i

and the sum of the squares of the x values:

\sum_{i=1}^n x_i^2

Of course, you need to keep track of n, the number of examples seen.

The mean is easy to compute: just divide the running sum by n, as in the first equation above. With the mean in hand, the variance is easy to compute, too. We just expand out the square:

\sum_{i=1}^n (\hat{\mu} - x_i)^2 = \sum_{i=1}^n (\hat{\mu}^2 - 2x_i\hat{\mu} + x_i^2)

and then rearrange:

= n\hat{\mu}^2 - 2\hat{\mu} \sum_{i=1}^n x_i + \sum_{i=1}^n x_i^2

and then divide by n, using \sum_{i=1}^n x_i = n\hat{\mu}, to derive:

\hat{\sigma}^2 =  \frac{1}{n} ( n\hat{\mu}^2 -2\hat{\mu} \sum_{i=1}^n x_i + \sum_{i=1}^n x_i^2) = \hat{\mu}^2 - 2\hat{\mu}^2 + \frac{1}{n} \sum_{i=1}^n x_i^2

and the final formulas for mean (repeated from above) and variance:

\hat{\mu} = \frac{1}{n} \sum_{i=1}^n x_i
\hat{\sigma}^2 = \frac{1}{n} \sum_{i=1}^n x_i^2 - \hat{\mu}^2

The sample mean and sample variance are maximum likelihood estimates of the true mean and variance. But the sample variance is biased. The unbiased variance estimate is larger by a factor of n/(n-1):

\bar{\sigma}^2 = \frac{n}{n-1} \hat{\sigma}^2
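
To make the bookkeeping concrete, here’s a minimal sketch of the accumulator these formulas describe (the class name is mine, and this is not the LingPipe API; as the comments below point out, it’s vulnerable to catastrophic cancellation):

public class NaiveMoments {
    private long mN = 0L;        // number of values seen
    private double mSum = 0.0;   // running sum of x
    private double mSumSq = 0.0; // running sum of x^2

    public void handle(double x) {
        ++mN;
        mSum += x;
        mSumSq += x * x;
    }
    public double mean() {
        return mSum / mN;
    }
    public double variance() {  // biased (maximum likelihood) estimate
        double m = mean();
        return mSumSq / mN - m * m;
    }
    public double varianceUnbiased() {  // n/(n-1) correction
        return mN > 1 ? variance() * mN / (mN - 1.0) : 0.0;
    }
}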

In conclusion, the sum of values, sum of squares, and number of samples are collectively sufficient statistics for estimating a normal distribution. The other nice thing is that this works online: just update the sum of values and the sum of squares as each value arrives. Wikipedia has a nice discussion of unbiased estimators covering sample mean and variance.

7 Responses to “Computing Sample Mean and Variance Online in One Pass”

  1. santi Says:

    This is not completely safe, as it can suffer numerical problems. In fact, it’s a recurring textbook example of catastrophic cancellation.

  2. Daniel Tunkelang Says:

    I’ve used this one-pass formulation without consequence, but santi’s right that it’s a danger.

    http://en.wikipedia.org/wiki/Variance#Computational_formula_for_variance

  3. lingpipe Says:

    Ouch. That’s particularly worrisome because near-zero variance estimates are problematic for downstream inference in general.

    The place I want(ed?) to use this is in scaling feature values to z-scores to improve the numerical stability of stochastic gradient descent:

    z(x) = (x - mean) / sqrt(variance)

    As variance approaches zero, the z-scores diverge. We’re just tossing out zero-variance dimensions (no way to estimate coefficients in that case anyway).
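
    Concretely, something like this hypothetical helper is what I have in mind, with a guard for the zero-variance case:

    // hypothetical sketch, not LingPipe code: map a value to its z-score,
    // treating zero-variance dimensions as carrying no signal
    static double zScore(double x, double mean, double variance) {
        if (variance <= 0.0)
            return 0.0;  // tossed-out dimension
        return (x - mean) / Math.sqrt(variance);
    }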

    That involves walking over the corpus of vectors once, collecting the sums of x and x**2, or twice, collecting first the sum of x and then the sum of (x - mean)**2.

    Don’t we get the same problem with the terms in the usual definition that uses (x - mean)**2? In fact, the only way we’d seem to get catastrophic cancellation (that is, the difference between two close numbers losing all or most of its arithmetic precision) is when these terms are all near zero.

    Is there a better way to compute this? How do packages like R compute this kind of thing (probably lots of ways, but I just meant the built-in)?

    More generally, is there some kind of textbook presentation of this kind of thing I can read?

  4. Andrew Gelman Says:

    Hi, Bob. This isn’t something I’ve ever looked at in detail, but my quick thought is that instead of keeping running totals of x and x^2, you can keep totals of z and z^2, where z is a linear transformation of x: z = (x-A)/B, and A and B are estimates of the mean and sd of x (as computed, for example, based on the first 100 numbers in your series, maybe with some small value such as .01 added to B to deal with potential zero variance estimates when your first data points are identical).

    I actually seem to recall that there are simple recursive formulas you can use for the mean and variance, but the above approach should work ok, I think.
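
    In code, the idea is something like this rough sketch (the warm-up size of 100 and the .01 floor are the arbitrary choices mentioned above):

    // accumulate sums of z and z^2, where z = (x - A)/B and A, B are
    // estimated from a warm-up prefix of the data
    static double[] meanAndVariance(double[] xs) {
        int w = Math.min(100, xs.length);
        double A = 0.0;
        for (int i = 0; i < w; ++i)
            A += xs[i];
        A /= w;
        double ss = 0.0;
        for (int i = 0; i < w; ++i)
            ss += (xs[i] - A) * (xs[i] - A);
        double B = Math.sqrt(ss / w) + 0.01;  // floor in case the warm-up is constant

        double sumZ = 0.0, sumZZ = 0.0;
        for (double x : xs) {
            double z = (x - A) / B;
            sumZ += z;
            sumZZ += z * z;
        }
        double meanZ = sumZ / xs.length;
        double varZ = sumZZ / xs.length - meanZ * meanZ;
        // map back to the original scale: x = A + B*z
        return new double[] { A + B * meanZ, B * B * varZ };
    }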

  5. Brendan O'Connor Says:

    I did a C++ implementation of running variance based on someone else’s C# implementation of those recursive formulas. I didn’t look at the near-zero issues, but for what I was doing at the time it was fast and accurate.

    There’s also a link in the post to John Cook’s blog, which talks about the recursive formulas some more.
    http://anyall.org/blog/2008/11/calculating-running-variance-in-python-and-c/

  6. lingpipe Says:

    Thanks for the link — I’d found it tracing the Wikipedia article linked by Dan.

    I just implemented a class OnlineNormalEstimator for the next version of LingPipe. Here’s the Java, which uses fewer member variables than John D. Cook’s implementation:

    public class OnlineNormalEstimator {
        private long mN = 0L;    // number of values seen
        private double mM = 0.0; // running mean
        private double mS = 0.0; // running sum of squared deviations

        public void handle(double x) {
            ++mN;
            double nextM = mM + (x - mM) / mN;
            mS += (x - mM) * (x - nextM);
            mM = nextM;
        }
        public double mean() {
            return mM;
        }
        public double varianceUnbiased() {
            return mN > 1 ? mS / (mN - 1) : 0.0;
        }
    }

  7. Welford’s Algorithm: Computing Variance - 堂子柯 Says:

    [...] [2] http://lingpipe-blog.com/2009/03/19/computing-sample-mean-variance-online-one-pass/ [...]
