The practicing instrumentation engineer's guide to the DFT - Part 1: Understand DFT and FFT implementations

June 19, 2012

Understand DFT and FFT Implementations
Most people use the term Fast Fourier Transform (FFT) to describe the mathematical process of converting a time domain waveform into a frequency domain waveform - like the familiar spectrum analyzer display. What we are really doing is applying a Discrete Fourier Transform (DFT) to the process [6].

The DFT is the generalized case: it works with any number of input sample points and produces a complex representation of the spectrum as its output. With the arrival of computers, people soon discovered clever and computationally fast implementations of the DFT that depend on the number of sample points being constrained to, for example, powers of two or prime numbers. These are called Fast Fourier Transforms - and we as engineers like the term so much that it has become the universal descriptor no matter what transform method is actually used.

Why use one over the other? A generalized DFT requires a lot of multiplies, and multiplies take a lot of computer processing time. FFTs, on the other hand, cleverly reduce the number of multiplies by a significant amount; the tradeoff is that they generally require the number of input samples to be a power of two.
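To make the multiply count concrete, here is a minimal sketch of the direct DFT sum - N complex multiplies per output bin, O(N²) overall - checked against a library FFT. NumPy is used here purely as a stand-in for illustration; the article's own examples use ALGLIB.

```python
import numpy as np

def naive_dft(x):
    """Direct DFT: N complex multiplies per output bin, O(N^2) overall."""
    N = len(x)
    n = np.arange(N)
    # N x N matrix of twiddle factors e^{-j 2*pi*k*n / N}
    W = np.exp(-2j * np.pi * np.outer(n, n) / N)
    return W @ x

rng = np.random.default_rng(0)
x = rng.standard_normal(256)

# The direct O(N^2) sum and the library's O(N log N) FFT agree
# to floating-point precision -- same transform, different cost.
assert np.allclose(naive_dft(x), np.fft.fft(x))
```

For N = 8192 the direct sum does roughly 67 million complex multiplies, while a radix-2 FFT needs on the order of N·log2(N)/2 ≈ 53 thousand - which is where the speed advantage comes from.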

Do you have to pick? Most of the DFT packages I have seen implement a general DFT, check whether the input length is an even power of two (or sometimes a prime number), and then automatically use the proper computer code to calculate the DFT. ALGLIB's Fourier Transforms are implemented this way, so it doesn't matter for the code implementations we will be looking at. If speed of computation is an issue, we can constrain our sample points to powers of two and get a large speed advantage.

Generally I find that going from a decently sized FFT of 8192 points to a DFT of 8193 points takes longer computationally than going to a full 16,384-point FFT.

So the speed saving can be significant - but the absolute time is still very small: on a 2 GHz processor running Windows, the time for these FFTs is still on the order of 100 milliseconds.
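A quick timing sketch along these lines can be run with NumPy (an assumed stand-in; the article uses ALGLIB). Note that modern libraries handle awkward lengths better than older radix-2-only code, so the 8193-point penalty may be smaller on your machine than the article's figures suggest - the absolute times depend entirely on your processor and library.

```python
import timeit
import numpy as np

def time_fft(n, repeats=20):
    """Best-of-N wall-clock time for one FFT of length n, in milliseconds."""
    x = np.random.default_rng(1).standard_normal(n)
    times = timeit.repeat(lambda: np.fft.fft(x), number=1, repeat=repeats)
    return 1e3 * min(times)

# Compare a power-of-two FFT, one extra sample, and the next power of two.
for n in (8192, 8193, 16384):
    print(f"{n:6d} points: {time_fft(n):.3f} ms")
```

The pattern to look for is that 8193 points (whose factors are awkward) costs disproportionately more than either power-of-two length.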

So this brings up an interesting question: what costs money and is hard to get - the input data or the computing time? Sometimes input data is quite expensive and perhaps slow to acquire. In that case you want to optimize your processing around that constraint.

But if input data is always streaming by you in large quantities you can then easily grab power of two quantities of it and process it very quickly.

Another trick used by the DSP people: if you have some hard-to-get data whose length is not an even power of two, but you need the fast processing of an FFT, you can append zeros to the data to push the input length up to the next power of two. This is called "Zero Padding" - we will explore this technique later.
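The padding step itself is a one-liner. Here is a minimal sketch (the helper name `pad_to_pow2` is my own, not from any library):

```python
import numpy as np

def pad_to_pow2(x):
    """Zero-pad a 1-D array up to the next power-of-two length."""
    n = len(x)
    n2 = 1 << (n - 1).bit_length()   # smallest power of two >= n
    return np.concatenate([x, np.zeros(n2 - n)])

x = np.arange(6000, dtype=float)     # awkward, non-power-of-two length
xp = pad_to_pow2(x)
print(len(xp))                       # 8192
```

Many FFT routines will do this for you: NumPy's `np.fft.fft(x, n=8192)`, for instance, zero-pads (or truncates) the input to length `n` before transforming.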

Data - Real or Complex
In the instrumentation process domain we often have only scalar input data - that is, there is no phase information in the input data.

The output of a DFT, on the other hand, consists of complex data, usually in rectangular format - that is, it has a real and an imaginary part.

An interesting feature of the DFT algorithms is that if you DFT some data and then apply the inverse DFT, you should get the original data back - and this round trip is how the software writers actually test their algorithms. It requires the DFT to take complex data as both input and output. Most software packages accept time-domain input in either complex or scalar format - it is simply a matter of calling the right routine. If you find a DFT routine that only takes complex input and you have only scalar data, you can simply set the imaginary part of your input array to zero. In ALGLIB there are separate calls for scalar and complex input data.
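The round-trip test, the zero-imaginary-part trick, and the scalar-versus-complex entry points can all be demonstrated in a few lines. This sketch uses NumPy's routines as an assumed stand-in for the ALGLIB calls the article discusses:

```python
import numpy as np

rng = np.random.default_rng(2)
x = rng.standard_normal(1024)        # scalar (real-valued) input data

# Round trip: forward transform, then inverse -- original samples return.
X = np.fft.fft(x)
assert np.allclose(np.fft.ifft(X).real, x)

# Feeding a general complex-input routine real data: setting the
# imaginary part to zero gives an identical result.
assert np.allclose(np.fft.fft(x + 0j), X)

# Many libraries also offer a real-input variant that returns only the
# non-redundant half of the spectrum (here, 513 of 1024 bins).
assert np.allclose(np.fft.rfft(x), X[:513])
```

The real-input variant exists because a real signal's spectrum is conjugate-symmetric, so half the output bins carry no new information.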

In the rest of the discussion we will treat both input and output as scalar magnitudes, but if your input data carries phase information you can use the techniques here as a starting point for a fully complex-input design.
