Central Limit Theorem

The Central Limit Theorem

Gaussian (normal) distributions are so important because they describe the probability distribution of most measurements made in the natural world. This phenomenon can be attributed to the central limit theorem.

Definition

Let $\{x_i\}$, $i = 1, \ldots, N$, be a set of independent random variables drawn from some distribution that need not be Gaussian, with each $x_i$ distributed around a mean value $\mu$ with a finite variance $\sigma^2$. Then the variable

$$z \equiv \frac{\frac{1}{N}\sum_{i=1}^{N} x_i - \mu}{\sigma / \sqrt{N}},$$

in the limit of $N \to \infty$, will have a Gaussian distribution with zero mean ($\langle z \rangle = 0$) and unit variance ($\sigma_z^2 = 1$).
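As a quick illustration of the statement above, here is a minimal numerical sketch (assuming NumPy is available; the exponential distribution, sample size, and trial count are arbitrary choices for illustration, not from the lecture). It draws samples from a strongly non-Gaussian distribution, forms the standardized variable $z$, and checks that over many trials $z$ has roughly zero mean and unit variance.

```python
# Minimal numerical sketch of the CLT statement above (assumes NumPy).
# The exponential distribution with scale 1 has mean mu = 1 and std sigma = 1,
# and is strongly non-Gaussian (skewed, one-sided).
import numpy as np

rng = np.random.default_rng(0)

mu, sigma = 1.0, 1.0     # mean and standard deviation of the parent distribution
N = 10_000               # samples averaged per trial
trials = 5_000           # independent trials, each yielding one value of z

x = rng.exponential(scale=1.0, size=(trials, N))
z = (x.mean(axis=1) - mu) / (sigma / np.sqrt(N))

print(f"mean of z     = {z.mean():+.3f}   (CLT predicts ~0)")
print(f"variance of z = {z.var():.3f}    (CLT predicts ~1)")
```

A histogram of the resulting $z$ values would also look Gaussian, even though the parent distribution is nothing like one.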

It turns out that the Central Limit Theorem can also hold for $N$ random variables, each drawn from a different, not-necessarily-Gaussian distribution, but only under the additional (not unreasonable) condition that at least one higher-order moment of the distribution of the ensemble converges to zero.

Implications

There are two big implications of the Central Limit Theorem:

  1. Ensembles of many random processes/variables converge to Gaussian distributions. That’s why normal distributions are everywhere.
  2. When adding together random numbers, the variance of the sum is the sum of the variances of those numbers (see the sketch just after this list).
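The following is a small sketch of statement 2 (assuming NumPy; the three distributions and their parameters are arbitrary illustrations). It adds independent random variables drawn from different distributions and compares the empirical variance of the sum to the sum of the individual variances.

```python
# Sketch of statement 2: for independent random variables, the variance of the
# sum equals the sum of the variances (assumes NumPy; distributions are arbitrary).
import numpy as np

rng = np.random.default_rng(1)
n = 1_000_000   # number of realizations of each variable

a = rng.normal(0.0, 2.0, n)        # variance 2^2 = 4
b = rng.uniform(-1.0, 1.0, n)      # variance 2^2 / 12 = 1/3
c = rng.exponential(3.0, n)        # variance 3^2 = 9

total = a + b + c
print(f"variance of the sum  = {total.var():.3f}")
print(f"sum of the variances = {a.var() + b.var() + c.var():.3f}")
print(f"theoretical value    = {4 + 1/3 + 9:.3f}")
```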

Statement 2 is important. It means that, if you are averaging a bunch of samples $x_i$, $i = 1, \ldots, N$, drawn from the same distribution (e.g. they are all measurements with the same random error $\sigma$):

$$\bar{x} \equiv \frac{1}{N}\sum_{i=1}^{N} x_i,$$

then the standard deviation of $\bar{x}$ (which you'd see if you computed this average with new random samples over and over again) decreases as the square root of the number of samples averaged:

$$\sigma_{\bar{x}} = \frac{\sigma}{\sqrt{N}}.$$

That’s why you get a better estimate of a quantity by making lots of (independent) measurements. But the estimate only improves as the square root of the number of measurements.
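The $\sigma/\sqrt{N}$ scaling is easy to verify numerically. The sketch below (assuming NumPy; the per-measurement error, trial count, and values of $N$ are arbitrary) redoes the average of $N$ measurements many times and compares the scatter of those averages to $\sigma/\sqrt{N}$.

```python
# Sketch of the sqrt(N) improvement of an averaged measurement (assumes NumPy).
import numpy as np

rng = np.random.default_rng(2)
sigma = 5.0        # random error of a single measurement
trials = 2_000     # how many times the N-sample average is repeated

for N in (10, 100, 1_000, 10_000):
    samples = rng.normal(0.0, sigma, size=(trials, N))   # trials x N measurements
    means = samples.mean(axis=1)                         # one average per trial
    print(f"N = {N:6d}:  std of average = {means.std():.4f}, "
          f"sigma/sqrt(N) = {sigma / np.sqrt(N):.4f}")
```

Each tenfold increase in $N$ shrinks the scatter of the average by only a factor of $\sqrt{10} \approx 3.2$, which is the square-root penalty described above.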