# Radiometer Equation

### From AstroBaki

### Reference Material

- Synthesis Imaging in Radio Astronomy II, ed. Taylor, Carilli, Perley, Ch. 9 (Wrobel & Walker)
- The NRAO Course on Radiometers


## The Radiometer Equation

The radiometer equation, at its heart, is a relatively straightforward application of the Central Limit Theorem. It describes how the uncertainty in measuring a noise temperature decreases as the square root of the number of samples averaged together:

$$\sigma = \frac{T_{\rm sys}}{\sqrt{Bt}},$$

where σ is the residual (root-mean-square) uncertainty in a noise temperature measurement, *T*_{sys} is the noise temperature of a circuit (or “system”), *B* is the bandwidth over which a single measurement is made (i.e. the integrated bandwidth), and *t* is the time over which a measurement is averaged (i.e. the integration time).
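As a quick sanity check on the scaling, here is a back-of-the-envelope evaluation of the equation (the numerical values below are illustrative, not taken from the text):

```python
import numpy as np

# Illustrative (hypothetical) numbers:
T_sys = 100.0   # system temperature in K
B = 1e6         # bandwidth in Hz (1 MHz)
t = 1.0         # integration time in s

# sigma = T_sys / sqrt(B * t): the rms uncertainty after averaging
sigma = T_sys / np.sqrt(B * t)
print(sigma)  # -> 0.1 K
```

Even one second of averaging over a 1 MHz band beats the per-measurement noise down by a factor of a thousand.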

Although the above equation is correct, to fully understand it, it would be better to write it as follows:

$$\sigma = \frac{\sqrt{2}\,T_{\rm sys}}{\sqrt{2Bt}}.$$

The denominator, 2*Bt*, is simply the number of independent samples (i.e. *N* in the Central Limit Theorem) that were averaged together into a single measurement. The accounting goes as follows: according to the Nyquist Theorem of sampling, it takes two samples per period to uniquely characterize a sine wave. Said another way, a signal with bandwidth *B*, expressed in Hz, contains 2*B* independent pieces of information each second. Thus, for a measurement made over *t* seconds, we have averaged 2*Bt* independent samples.
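The sample counting above amounts to simple arithmetic; a minimal sketch (with hypothetical values of *B* and *t*):

```python
# Nyquist sampling of a band of width B Hz yields 2B independent samples
# per second, so t seconds of data contain N = 2*B*t samples.
# (B and t are hypothetical values chosen for illustration.)
B = 1e6    # bandwidth in Hz
t = 10.0   # integration time in s

N = 2 * B * t
print(int(N))      # -> 20000000 independent samples
print(1 / N**0.5)  # fractional precision 1/sqrt(N) from the Central Limit Theorem
```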

The numerator of the (re-expressed) radiometer equation, √2 *T*_{sys}, has one bit of trickiness in it, and to understand it, we first need to understand what exactly we are measuring, and why there is uncertainty in our measurements. *T*_{sys} is a *noise* temperature, which means that it characterizes the variance of a noise signal with zero mean. When you go to measure a noise temperature, you are really trying to measure the variance of a random noisy signal.

For any limited number of samples generated by a random process, there is an inherent uncertainty in the variance, σ^{2}, you compute for that sample, just as there would be an inherent uncertainty in the mean, ⟨*x*⟩. However, whereas σ characterizes the per-sample uncertainty for the purpose of calculating ⟨*x*⟩, if you are trying to measure σ^{2}, the uncertainty is actually √2 σ^{2} for each sample. This is because, to compute the variance, you have to square each sample (e.g. *x*_{i}^{2}), and then average. The math can get messy, but if you just take your computer and calculate the standard deviation of *x*^{2}, for *x* drawn from a Gaussian distribution with σ = 1, you’ll find that it is √2.
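The numerical experiment suggested above can be run in a few lines with numpy:

```python
import numpy as np

# Draw many samples from a Gaussian with sigma = 1 and compute the
# standard deviation of x**2; it comes out close to sqrt(2) ~ 1.414.
rng = np.random.default_rng(0)
x = rng.standard_normal(10_000_000)

print(np.std(x**2))
```

(For a unit Gaussian, Var(*x*^{2}) = E[*x*^{4}] − E[*x*^{2}]^{2} = 3 − 1 = 2, which is why the answer is √2.)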

This is all to say that if you are trying to measure the noise temperature, *T*_{sys}, which relates to the variance of a distribution, then the uncertainty of each variance measurement is √2 *T*_{sys}. That is the measurement error that we are beating down by √*N* according to the Central Limit Theorem (where *N* = 2*Bt*), and so that is what goes in the numerator of the Radiometer equation. It’s just a confusing accident that the √2 for the measurement error associated with measuring the variance cancels out the factor of 2 in *N* associated with Nyquist sampling.