The Fourier Transform

Overview

The Fourier Transform is important for two key reasons:

  1. Sine waves are easy to work with mathematically, and
  2. Sine waves form a basis over the space of functions. That is, just as you can express any point in a 2D plane as a sum of an \hat x component and a \hat y component, with an appropriate coefficient multiplying each unit vector, you can express any function as a sum of a bunch (perhaps an infinite number) of sine waves. Moreover, just as \hat x and \hat y are orthogonal to one another (which is nice, because then the coefficients you derive for \hat x and \hat y have nothing to do with one another), sine waves are orthogonal to one another, as judged by integrating their product over the interval in question.

Fourier analysis is a whole field; we’ll only have time to touch on some of the important practical aspects.

Definition

First, we should begin by clarifying that when we say "sine wave", we really mean both sine and cosine waves. Both are necessary to be able to express arbitrary functions (for starters, you need a cosine wave in order for f(0)\ne 0). And when we say "function", we mean complex functions that satisfy various mathy requirements. Since we are interested in complex functions (which includes real-valued functions), and we need both sine and cosine waves, we’ll make use of a handy identity:

  e^{i\omega t}=\cos (\omega t) + i \sin (\omega t). \,\!

So instead of carrying sine and cosine waves around separately, we’ll carry them together as a single complex exponential at frequency \omega , and our coefficients will be complex.
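
As a quick sanity check, the identity is easy to verify numerically. This is a minimal sketch using Python/numpy (which we’ll use for all the numerical examples below); the values of w and t are arbitrary:

  import numpy as np

  # Spot-check Euler's identity: e^{i w t} = cos(w t) + i sin(w t).
  w, t = 2.0, 0.3
  print(np.exp(1j * w * t))                  # (0.8253...+0.5646...j)
  print(np.cos(w * t) + 1j * np.sin(w * t))  # the same complex number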

Here are the definitions we will use for the forward (\mathcal{F}) and inverse (\mathcal{F}^{-1}) Fourier transforms:

\begin{align}  \hat f(\omega )& = \mathcal{F}(f(t)) = \int {f(t)e^{-i\omega t}dt}\\ f(t)& = \mathcal{F}^{-1}(\hat f(\omega )) = \frac1{2\pi }\int {\hat f(\omega )e^{i\omega t}d\omega }  \end{align}\,\!

where \omega \equiv 2\pi \nu is the angular frequency coordinate that is the Fourier complement of time t, a hat is generally used to denote Fourier-domain quantities, and integrals run from -\infty to \infty unless otherwise noted. There are other Fourier transform conventions out there, usually differing in the sign of the exponent and in the normalization placed in front of the forward and inverse transforms.
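
To make the definition concrete, here is a minimal numerical sketch (assuming numpy; the grid limits and step are arbitrary choices) that approximates the forward-transform integral for a unit Gaussian and compares it against the analytic answer in this convention, \sqrt{2\pi }e^{-\omega ^2/2}:

  import numpy as np

  # Approximate f_hat(omega) = integral f(t) e^{-i omega t} dt with a
  # Riemann sum on a wide, finely sampled time grid.
  t = np.linspace(-50, 50, 20001)
  dt = t[1] - t[0]
  f = np.exp(-t**2 / 2)  # unit Gaussian

  for omega in [0.0, 0.5, 1.0, 2.0]:
      f_hat = np.sum(f * np.exp(-1j * omega * t)) * dt
      analytic = np.sqrt(2 * np.pi) * np.exp(-omega**2 / 2)
      print(omega, f_hat.real, analytic)  # the two columns agree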

Orthogonality

As mentioned above, sine waves at different frequencies are orthogonal. This means that

  \int {e^{i\omega _1 t}e^{-i\omega _2 t}dt}=2\pi \delta (\omega _1-\omega _2), \,\!

which vanishes for \omega _1\ne \omega _2. Here \delta is the Dirac delta function, which has the properties that it is infinite when its argument is 0, it is equal to zero everywhere else, and its integral is unity. (The factor of 2\pi comes from the normalization convention chosen above.)
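
We can see this orthogonality in miniature on a finite interval (a sketch assuming numpy; with integer frequencies on [0, 2\pi ], the cross terms integrate to exactly zero rather than to a delta function):

  import numpy as np

  # Integrate e^{i w1 t} e^{-i w2 t} over one period, [0, 2*pi].
  # For integer w1, w2 this is 0 when w1 != w2 and 2*pi when w1 == w2.
  t = np.linspace(0, 2 * np.pi, 100001)
  dt = t[1] - t[0]

  def overlap(w1, w2):
      return np.sum(np.exp(1j * w1 * t) * np.exp(-1j * w2 * t)) * dt

  print(abs(overlap(3, 5)))  # ~0: different frequencies are orthogonal
  print(abs(overlap(4, 4)))  # ~2*pi: |e^{i w t}|^2 integrates to the interval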

Positive and Negative Frequencies

The concept of a “negative” frequency can sometimes trip people up at first. We think of frequencies as a number of oscillations per second. How can there be a negative number of oscillations?

In fact, negative frequencies oscillate just the same way (and the same number of times) as positive frequencies. The only difference is that

\begin{align}  \sin (-\omega t)& =-\sin (\omega t),\\ \cos (-\omega t)& =\cos (\omega t).\\ \end{align}\,\!

So from a practical standpoint, a negative frequency is just the same as a positive frequency, except that the sine component flips sign: instead of trailing the cosine by \pi /2, it leads by \pi /2. It turns out that for real-valued functions, positive and negative frequencies contain essentially the same information (the coefficient for e^{-i\omega t} is the complex conjugate of the coefficient for e^{i\omega t}). For complex-valued functions, positive and negative frequencies contain independent information.
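
This conjugate symmetry is easy to see with a discrete transform (a sketch using numpy’s FFT, which stores the negative-frequency coefficients in the second half of its output array):

  import numpy as np

  # For a real-valued signal, the coefficient at -omega is the complex
  # conjugate of the coefficient at +omega; with numpy's ordering,
  # X[-k] corresponds to frequency -k and should equal conj(X[k]).
  rng = np.random.default_rng(0)
  x = rng.standard_normal(64)   # a real-valued signal
  X = np.fft.fft(x)

  k = 5
  print(X[-k])                  # coefficient at frequency -k
  print(np.conj(X[k]))          # the same number
  print(np.allclose(X[1:][::-1], np.conj(X[1:])))  # True: full symmetry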

Parseval’s Theorem

Parseval’s Theorem establishes a very important property of Fourier transforms: they preserve power. More specifically, the average variance of a signal in one domain is equal to the average variance of that signal in its Fourier complement, up to a normalization factor set by the transform convention:

  \langle \sigma _t^2\rangle = \langle \sigma _\omega ^2\rangle , \,\!

or alternately (in the convention adopted above):

  \int {|f(t)|^2~ dt} = \frac1{2\pi }\int {|\hat f(\omega )|^2~ d\omega } \,\!
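
In discrete form this is easy to check (a sketch assuming numpy, whose unnormalized FFT makes a factor of 1/N play the role of the 1/2\pi above):

  import numpy as np

  # Discrete Parseval: sum |x|^2 = (1/N) sum |X|^2 for X = fft(x).
  rng = np.random.default_rng(1)
  x = rng.standard_normal(1024)
  X = np.fft.fft(x)

  print(np.sum(np.abs(x)**2))           # time-domain power
  print(np.sum(np.abs(X)**2) / len(x))  # the same number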

Implementation

One other reason why the Fourier transform is so powerful is that there are very efficient algorithms for computing it for finite intervals with discretely sampled data points. These algorithms are generally called Fast Fourier Transforms (FFTs), and scale as \mathcal{O}(n \log n) with the number of data points, n. This is significantly better than the naive approach of integrating n different sine/cosine waves against n different time-domain samples, which would be \mathcal{O}(n^2).
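
As an illustration, here is the naive \mathcal{O}(n^2) approach checked against numpy’s FFT (a sketch; naive_dft is a helper written just for this comparison):

  import numpy as np

  # The naive O(n^2) discrete transform: integrate the signal against
  # each complex exponential explicitly, one output frequency per row.
  def naive_dft(x):
      n = len(x)
      k = np.arange(n)
      W = np.exp(-2j * np.pi * np.outer(k, k) / n)  # n x n matrix
      return W @ x

  x = np.random.default_rng(2).standard_normal(256)
  print(np.allclose(naive_dft(x), np.fft.fft(x)))  # True, but the FFT
  # computes the same thing in O(n log n) rather than O(n^2) time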

Doing Fourier Transforms in Your Head

There are a few Fourier transforms that are good to just know.

  1. Gaussians — The Fourier transform of a Gaussian (e^{-\frac{x^2}2}) is a Gaussian. The width of the Fourier-transformed Gaussian is the inverse of the width of the original Gaussian.
  2. Dirac Delta Functions — As we kind of saw above, the Fourier transform of an infinite sine wave is a Dirac delta function (and, of course, the Fourier transform of a Dirac delta function is an infinite sine wave).
  3. Top Hat — A top-hat function (which is zero everywhere except over an interval, where it is one) Fourier transforms into \sin (x)/x, which is otherwise known as a “sinc” function. As you might guess, the width of the sinc function is inversely proportional to the width of the top-hat.

That’s about it. A good general principle is that the sharper the features in the function being Fourier transformed, the more power there will be at high Fourier modes. Hence, narrow functions have wide Fourier transforms, and vice versa (the sketch below checks two of these pairs numerically).
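
Here is a minimal numerical check of the first and third pairs (assuming numpy; the grid size, spacing, and the width-2 top hat are arbitrary choices, and the ft helper is written just for this sketch):

  import numpy as np

  # Approximate the continuous transform with an FFT. ifftshift moves
  # the t = 0 sample to index 0; fftshift reorders the output so the
  # frequency axis increases monotonically.
  n, dt = 4096, 0.01
  t = (np.arange(n) - n // 2) * dt
  omega = 2 * np.pi * np.fft.fftshift(np.fft.fftfreq(n, dt))

  def ft(f):
      return np.fft.fftshift(np.fft.fft(np.fft.ifftshift(f))) * dt

  # 1. Gaussian -> Gaussian (with sqrt(2 pi) from our convention):
  gauss = np.exp(-t**2 / 2)
  print(np.allclose(ft(gauss).real,
                    np.sqrt(2 * np.pi) * np.exp(-omega**2 / 2)))  # True

  # 3. Top hat of width 2 -> sinc, 2 sin(omega)/omega:
  tophat = np.where(np.abs(t) <= 1, 1.0, 0.0)
  sinc = 2 * np.sinc(omega / np.pi)  # np.sinc(x) = sin(pi x)/(pi x)
  print(np.max(np.abs(ft(tophat).real - sinc)))  # small vs the peak of 2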
