Programming Note #12. Ravi Kochhar and Bill Rhode, Dept. of Physiology, UW-Madison. Feb. 1976. Rev. Level 1.009 (1/2/2003)

- Definitions
- Convolution Theorem
- Minimum Phase Networks
- Autocorrelation and Cross-Correlation
- Fourier Analysis
- Discrete Fourier Series
- The Convolution Integral
- The Fast Fourier Transform
- Applications of the FFT
- References

Some important definitions, theorems and results are stated below without proof. A more complete discussion can be found in the references.

Given a function f(t) of the real variable t, we can form the integral

F(jw) = \int_{-\infty}^{\infty} f(t) e^{-jwt} dt

If this integral exists for every real value of the parameter w, it defines a function F(jw) known as
the **Fourier Transform** of f(t).

The function F(jw) is in general complex:

F(jw) = A(w) e^{j\phi(w)}

A(w) is called the **Fourier Spectrum** of f(t),
A^{2}(w) its **Energy Spectrum**, and \phi(w) its **Phase Angle**.

The following basic equation, known as the inversion formula,
permits the representation of f(t) in terms of its Fourier transform F(jw):

f(t) = (1/2\pi) \int_{-\infty}^{\infty} F(jw) e^{jwt} dw

This is sometimes known as the **Inverse Fourier Transform**.

Given two functions f_{1}(x) and f_{2}(x) we can form
the integral

f(x) = \int_{-\infty}^{\infty} f_{1}(y) f_{2}(x-y) dy ................(1)

This integral defines a function f(x) known as the **Convolution** of
f_{1}(x) and f_{2}(x).

The Fourier transform F(jw) of the convolution f(t) of two functions f1(t) and f2(t) equals the product of the Fourier transforms F1(jw) and F2(jw) of these two functions.

It follows from the symmetry property that the Fourier transform F(jw) of
the product f_{1}(t)f_{2}(t) of two functions equals the convolution of
their respective transforms divided by 2\pi.
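The convolution theorem is easy to check numerically. A minimal NumPy sketch (note that for sampled data, the discrete transform pairs with *circular* convolution):

```python
import numpy as np

rng = np.random.default_rng(0)
a = rng.standard_normal(8)
b = rng.standard_normal(8)

# Circular convolution computed directly from its definition
n = len(a)
conv = np.array([sum(a[m] * b[(k - m) % n] for m in range(n)) for k in range(n)])

# Convolution theorem: the DFT of the convolution equals the
# product of the individual DFTs
lhs = np.fft.fft(conv)
rhs = np.fft.fft(a) * np.fft.fft(b)
assert np.allclose(lhs, rhs)
```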

The Fourier transform is an essential tool in the analysis of linear
time-invariant systems.
The principal reason is the fact that, if the input f(t) to such a
system is an exponential

f(t) = e^{jwt}

then the response g(t) is proportional to the input:

g(t) = H(jw) e^{jwt}

where the proportionality constant H(jw) is the **System (or Transfer)
Function**.
It then follows from the linearity of the system and the definition of the Fourier
transform that, with F(jw) the Fourier transform of the input,
its response is given by

g(t) = (1/2\pi) \int_{-\infty}^{\infty} F(jw) H(jw) e^{jwt} dw

In fact H(jw), known as the transfer function, is the Fourier transform of the
impulse response h(t). Thus,

H(jw) = \int_{-\infty}^{\infty} h(t) e^{-jwt} dt

A system is said to have **constant parameters** if, with g(t)
its response to f(t), the response to f(t-t_{1})
equals g(t-t_{1}), where t_{1} is an arbitrary constant.
(This is a way of saying that the parameters of the system are independent
of time; e.g., if the defining law is a differential equation, then
its coefficients are constant.)

Denoting by h(t) the response of such a system to an impulse \delta(t), we see
that its response to \delta(t-t_{1}) is given by h(t-t_{1}).
The output to an arbitrary input f(t) can be expressed in terms of h(t)
alone (4):

g(t) = \int_{-\infty}^{\infty} f(\tau) h(t-\tau) d\tau

Thus a linear system with constant parameters is uniquely
determined from the knowledge of a single function h(t).
With X(jw) and Y(jw) the Fourier transforms of the input x(t) and output
y(t), we obtain from the above expression for g(t) the important formula:

Y(jw) = X(jw) H(jw)

Hence y(t) can be written in the form:

y(t) = (1/2\pi) \int_{-\infty}^{\infty} X(jw) H(jw) e^{jwt} dw
One type of problem where this result may be applied usefully is that of determining the response of a given system (an electrical circuit, for example) to various inputs.

Consider the formal procedure for solving problems of this type by Fourier transforms. The steps are as follows:

1. Find the Fourier transform of the input excitation function:

X(jw) = F[x(t)]

2. Determine the transfer function H(jw) of the given system.

3. Form the product

Y(jw) = X(jw) H(jw)

which is the spectrum function of the desired response at the output.

4. Find the response function y(t) by taking the inverse Fourier
transform of the output spectrum function Y(jw):

y(t) = F^{-1}[Y(jw)]

Fourier transforms enable us to discuss system response to transient excitations in terms of the steady-state response to sinusoidal excitations.
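As a concrete (hypothetical) instance of these four steps, the sketch below pushes a rectangular pulse through an assumed first-order low-pass filter, H(jw) = 1/(1 + jw\tau), entirely in the frequency domain; the names N, dt and tau are our choices, not the note's:

```python
import numpy as np

N, dt, tau = 1024, 0.01, 0.2
t = np.arange(N) * dt
x = np.where(t < 1.0, 1.0, 0.0)           # input: rectangular pulse

X = np.fft.fft(x)                          # step 1: transform the input
w = 2 * np.pi * np.fft.fftfreq(N, dt)      # angular frequency grid
H = 1.0 / (1.0 + 1j * w * tau)             # step 2: transfer function
Y = X * H                                  # step 3: output spectrum
y = np.fft.ifft(Y).real                    # step 4: inverse transform

# The output rises toward 1 with time constant tau, so near the end of
# the 1 s pulse (five time constants) it should be close to 1
assert abs(y[t.searchsorted(1.0) - 1] - 1.0) < 0.05
```

Because the DFT is circular, the record must be long enough for the response to decay before it wraps around; here the record is about 50 time constants, so wraparound is negligible.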

The distinction between minimum- and nonminimum-phase functions applies only to transfer functions, i.e., functions which represent the ratio of an output to an input (or of an input to an output).

Consider the transfer function:

H(jw) = A(w) e^{j\phi(w)}

It is the transform of the impulse response h(t). In general \phi(w) is not uniquely determined from A(w) if no additional assumptions are made about H(jw).

For example, with a, b > 0, the two transforms

H_{1}(jw) = (a+jw)/(b+jw)   and   H_{2}(jw) = (a-jw)/(b+jw)

have the same amplitudes:

A_{1}(w) = A_{2}(w) = [(a^{2}+w^{2})/(b^{2}+w^{2})]^{1/2}

but their phases are quite different from each other.

The question arises whether it is possible, by imposing certain conditions on H(jw), to obtain a class of causal functions (the impulse responses) in which \phi(w) is uniquely determined from A(w).

Let \alpha(w) and \phi(w) be the real
and imaginary parts of ln[H(jw)]:

ln[H(jw)] = \alpha(w) + j\phi(w),   where \alpha(w) = ln A(w)

\phi(w) can be uniquely determined from \alpha(w)
if ln[H(p)] is an analytic function in
the right-hand plane (4),
but such is not the case in general: ln[H(p)] is not, in general,
analytic for Re(p) > 0, since
H(p) might have zeros with positive real parts, and these zeros are
singularity points of ln[H(p)].
However, if we assume that H(p) is not only analytic but also has
no zeros for Re(p) > 0, then
ln[H(p)] will also be analytic in the right-hand plane,
and \phi(w) will be uniquely determined from
\alpha(w).
The class of functions with this property is called **minimum-phase-shift**.
Of the two transfer functions discussed above,
only H_{1}(jw) is minimum-phase-shift.

For such networks it can be shown (6) that if the angle of the network function is defined as the angle by which the output lags the input, and if the network function is so defined that this angle passes through zero at zero frequency (except when there is a zero at the origin), then of all functions having the same magnitude function, the minimum-phase function has at every frequency an angle smaller than that of any nonminimum-phase function, and also the smallest possible change in phase as frequency goes from zero to infinity.

Consider signals with finite energy.
Suppose that f(t) is real and its Fourier transform exists and is given by:

F(jw) = \int_{-\infty}^{\infty} f(t) e^{-jwt} dt

The inverse transform of the energy spectrum E(w) = |F(jw)|^{2}:

\rho(t) = (1/2\pi) \int_{-\infty}^{\infty} |F(jw)|^{2} e^{jwt} dw

is known as the **autocorrelation**, and is denoted by \rho(t),

where

\rho(t) = \int_{-\infty}^{\infty} f(\tau) f(\tau+t) d\tau

Consider the real functions f_{1}(t) and f_{2}(t),
their Fourier transforms F_{1}(jw) and F_{2}(jw),
and their cross-energy spectrum

E_{12}(w) = \overline{F_{1}(jw)} F_{2}(jw)

The inverse transform of E_{12}(w) is denoted by \rho_{12}(t)
and is called the **cross-correlation** between f_{1}(t)
and f_{2}(t). Thus:

\rho_{12}(t) = (1/2\pi) \int_{-\infty}^{\infty} E_{12}(w) e^{jwt} dw = \int_{-\infty}^{\infty} f_{1}(\tau) f_{2}(\tau+t) d\tau

Since f_{1}(t) and f_{2}(t) are real, \rho_{12}(t) is real but in general not even.

As an illustration, consider the real functions f_{1}(t)
and f_{2}(t);
if we interpret f_{1}(t) as the voltage of a source and
f_{2}(t) as the current that it supplies to some load, then the
integral

W = \int_{-\infty}^{\infty} f_{1}(t) f_{2}(t) dt

equals the energy delivered by this source. With

W = (1/2\pi) \int_{-\infty}^{\infty} E_{12}(w) dw

the energy equals the area of E_{12}(w)/2\pi.
For this reason the quantity E_{12}(w) is often called the
cross-energy spectrum of f_{1}(t) and f_{2}(t).
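The discrete, circular analogue of this relationship is easy to verify: the inverse DFT of conj(F_{1})F_{2} reproduces the cross-correlation lag sums directly (a sketch, assuming the conjugate convention used above):

```python
import numpy as np

rng = np.random.default_rng(1)
f1 = rng.standard_normal(16)
f2 = rng.standard_normal(16)

# Circular analogue of rho_12(t) = integral of f1(tau) f2(tau + t) d tau
n = len(f1)
direct = np.array([sum(f1[m] * f2[(m + k) % n] for m in range(n)) for k in range(n)])

# Cross-energy spectrum E_12 = conj(F1) * F2; its inverse transform
# is the cross-correlation (here in the discrete, circular sense)
E12 = np.conj(np.fft.fft(f1)) * np.fft.fft(f2)
via_fft = np.fft.ifft(E12).real
assert np.allclose(direct, via_fft)
```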

Fourier discovered that periodic functions can be represented by an infinite sum of properly weighted sine and cosine functions of the proper frequencies. That is:

x(t) = a_{0}/2 + \sum_{k=1}^{\infty} [ a_{k} \cos(2\pi kt/T) + b_{k} \sin(2\pi kt/T) ]

where T is the period of x(t), i.e. x(t) = x(t+T).

An equivalent representation expresses the components of x(t) in
magnitude-phase form, i.e.

x(t) = c_{0} + \sum_{k=1}^{\infty} c_{k} \cos(2\pi kt/T + \theta_{k}),   where c_{k} = (a_{k}^{2}+b_{k}^{2})^{1/2} and \theta_{k} = -\tan^{-1}(b_{k}/a_{k})
There are some conditions that the periodic function must satisfy:

1) it must have at most a finite number of discontinuities in one period;

2) the integral \int_{0}^{T} |x(t)| dt must be finite.

The coefficients of the Fourier series are calculated by:

a_{k} = (2/T) \int_{0}^{T} x(t) \cos(2\pi kt/T) dt,   b_{k} = (2/T) \int_{0}^{T} x(t) \sin(2\pi kt/T) dt

These relations are easily derived by multiplying each side of the series expansion above
by \cos(2\pi kt/T) and integrating both sides.
A similar manipulation, multiplication by \sin(2\pi kt/T), will
verify the relation for b_{k}.

The Fourier series expansion of the periodic rectangular (square) wave, taken here with amplitude A, period T, and odd symmetry, is:

f(t) = (4A/\pi) [ \sin(2\pi t/T) + (1/3)\sin(6\pi t/T) + (1/5)\sin(10\pi t/T) + ... ]

Consider next a periodic triangular wave of peak amplitude A and period T.

Since f(t) = -f(-t), only sine terms will exist in the Fourier series for f(t).

We only have to determine the b_{n}, i.e.:

b_{n} = (2/T) \int_{0}^{T} f(t) \sin(2\pi nt/T) dt

Note that the symmetry of the wave about the quarter period implies that only odd harmonics will be
present and that we only have to integrate over a quarter period; or:

b_{n} = (8/T) \int_{0}^{T/4} f(t) \sin(2\pi nt/T) dt,   n odd

Therefore:

f(t) = (8A/\pi^{2}) [ \sin(2\pi t/T) - (1/9)\sin(6\pi t/T) + (1/25)\sin(10\pi t/T) - ... ]
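The triangular-wave sine coefficients can be checked numerically; the sketch below assumes a unit-amplitude odd triangular wave and compares a direct Riemann-sum evaluation of the coefficient integral with the closed form 8/(\pi^{2} n^{2}) with alternating sign:

```python
import numpy as np

T = 1.0
N = 100000
t = np.arange(N) * T / N
dt = T / N

# Unit-amplitude odd triangular wave: 0 at t=0, +1 at T/4, 0 at T/2, -1 at 3T/4
f = 4.0 / T * (np.abs(((t - T / 4) % T) - T / 2) - T / 4)

for n in (1, 3, 5):
    # b_n = (2/T) * integral over one period of f(t) sin(2 pi n t / T)
    b_n = 2.0 / T * np.sum(f * np.sin(2 * np.pi * n * t / T)) * dt
    expected = 8.0 / (np.pi ** 2 * n ** 2) * (-1) ** ((n - 1) // 2)
    assert abs(b_n - expected) < 1e-4
```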

If the function to be transformed is not periodic, then the Fourier transform
will be a continuous function of frequency.
For example, a rectangular pulse of amplitude A and width \tau, centered at t = 0, has the transform

H(f) = A\tau \sin(\pi f\tau)/(\pi f\tau)

In general the Fourier transform is a complex quantity:

H(f) = \int_{-\infty}^{\infty} h(t) e^{-j2\pi ft} dt = R(f) + jX(f)

The inverse of H(f) is specified by the relation

h(t) = \int_{-\infty}^{\infty} H(f) e^{j2\pi ft} df

An example is to invert the Fourier transform of the pulse above. That is:

h(t) = \int_{-\infty}^{\infty} A\tau [\sin(\pi f\tau)/(\pi f\tau)] (\cos 2\pi ft + j\sin 2\pi ft) df

Since the term containing \sin(2\pi ft) is an odd function of f,
its contribution is zero.

Then using the relation:

\int_{0}^{\infty} [\sin x \cos(ax)]/x dx = \pi/2,   |a| < 1

one can show that

h(t) = A for |t| < \tau/2,   h(t) = 0 for |t| > \tau/2

which is the pulse that we started with.

This illustrates the symmetry property of Fourier transforms: if h(t) has the transform H(f), then H(t) has the transform h(-f).

The Fourier transform is useful for simplifying theoretical analysis and in many data operations. In data analysis it is the discrete Fourier transform which is used. We must work with sampled series since digital computers are used for the computations.

The continuous transform is

X(f) = \int_{-\infty}^{\infty} x(t) e^{-j2\pi ft} dt

The equivalent discrete transform is

X(f_{k}) = \Delta t \sum_{n=0}^{N-1} x(t_{n}) e^{-j2\pi f_{k} t_{n}}

noting that t_{n} = n\Delta t

and letting f_{k} = k/T = k/(N\Delta t)

where T = N\Delta t is the length of the record to be analyzed:

X(f_{k}) = \Delta t \sum_{n=0}^{N-1} x_{n} e^{-j2\pi kn/N},   for k = 0,1,2,......,N/2

The number of computations required for the discrete transform is proportional
to N^{2}.
For N = 1000 there will be 1,000,000 operations.

If each operation is 100 microsecs then 100 seconds would be required for the
computation of X(f_{k}).
If N = 10000 then 167 minutes would be required for X(f_{k}).
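A direct implementation makes the N^{2} cost visible; the naive DFT below performs N complex multiply-adds for each of the N coefficients (checked here against np.fft.fft):

```python
import numpy as np

def dft(x):
    """Direct DFT: N coefficients, each a sum over N samples -> ~N^2 work."""
    N = len(x)
    n = np.arange(N)
    return np.array([np.sum(x * np.exp(-2j * np.pi * k * n / N)) for k in range(N)])

x = np.random.default_rng(2).standard_normal(64)
assert np.allclose(dft(x), np.fft.fft(x))
```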

There are recursive methods for calculating the sine and cosine functions which are usually much faster than the 'library' routines. They do, however, accumulate error, and must be restarted after an appropriate number of iterations.
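The note does not name a particular recursion; one classic choice (an assumption on our part) is the three-term recurrence s_{n+1} = 2\cos(\theta) s_{n} - s_{n-1}, which generates \sin(n\theta) from two seed values using one multiply and one subtract per sample:

```python
import math

theta = 2 * math.pi / 1024
c = 2 * math.cos(theta)

# Seeds: sin(0) and sin(theta); each step costs one multiply, one subtract
s_prev, s_cur = 0.0, math.sin(theta)
table = [s_prev, s_cur]
for n in range(2, 1024):
    s_prev, s_cur = s_cur, c * s_cur - s_prev
    table.append(s_cur)

# Roundoff accumulates with iteration count, which is why such
# generators are periodically restarted; over one period it stays tiny
err = max(abs(table[n] - math.sin(n * theta)) for n in range(1024))
assert err < 1e-8
```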

The complex form of the transform is

X(f_{k}) = \Delta t \sum_{n=0}^{N-1} x_{n} e^{-j2\pi kn/N}

Let W = e^{-j2\pi k/N}; then

X(f_{k}) = \Delta t ( x_{0} + x_{1}W + x_{2}W^{2} + ... + x_{N-1}W^{N-1} )

which is a polynomial in W
and can be evaluated by nesting the terms of the polynomial.
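Nesting the terms is Horner's rule; a sketch for a single coefficient, checked against the defining sum:

```python
import cmath

def dft_coeff(x, k):
    """Evaluate one DFT coefficient as the nested polynomial
    x[0] + W*(x[1] + W*(x[2] + ... )) with W = exp(-j*2*pi*k/N)."""
    N = len(x)
    W = cmath.exp(-2j * cmath.pi * k / N)
    acc = 0.0 + 0.0j
    for xn in reversed(x):
        acc = acc * W + xn
    return acc

x = [1.0, 2.0, 0.0, -1.0]
N = len(x)
for k in range(N):
    direct = sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / N) for n in range(N))
    assert abs(dft_coeff(x, k) - direct) < 1e-12
```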

The convolution integral, as expressed in Eqn.(1), holds for all cases
as long as the system is linear and time-invariant.
In addition, if the system is causal (i.e., physically
realizable), then h(t) = 0 for all t < 0 and there is no contribution
to the integration in Eqn.(1) for \tau > t:

g(t) = \int_{-\infty}^{t} f(\tau) h(t-\tau) d\tau ..............(2)

Thus we see, from Eqn.(2), that the output of a causal system is the result of all the past history of the input f(t), weighted by the impulse response of the system in proportion to how far back in the past it occurred. In general, a system weights recent values of the input more heavily than values which arrived far in the past. No future values of the input contribute to the output of a causal system.

Often it turns out that the input, f(t), is also causal, in
which case f(t) = 0 for t < 0, and Eqn.(2) further simplifies to:

g(t) = \int_{0}^{t} f(\tau) h(t-\tau) d\tau ......(3)

Both Eqns.(2) and (3) are special cases of the convolution integral.
It is usually more convenient to remember the general form,
Eqn.(1), from which the special cases follow readily.
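A discrete sketch of Eqn.(3): for causal f and h, the output at step t involves only the inputs from 0 to t, and a unit impulse in reproduces the impulse response out:

```python
def causal_conv(f, h):
    """Discrete form of Eqn.(3): g[t] = sum over m=0..t of f[m] * h[t-m]."""
    n = len(f)
    return [sum(f[m] * h[t - m] for m in range(t + 1)) for t in range(n)]

f = [1.0, 0.0, 0.0, 0.0]            # unit impulse as input
h = [0.5, 0.25, 0.125, 0.0625]      # decaying impulse response
assert causal_conv(f, h) == h       # impulse in -> impulse response out
```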
It is also very convenient to use a shorthand notation for convolution:

g(t) = f(t) * h(t)

This symbolic notation suggests that convolution is a special kind of multiplication. Carrying this idea further, it is possible to write down the laws of a convolution algebra with similarities to those for ordinary multiplication. Let us examine one of these.

**Commutative Law**:

f(t) * h(t) = h(t) * f(t)

This can be demonstrated by writing out the integral definition:

f(t) * h(t) = \int_{-\infty}^{\infty} f(\tau) h(t-\tau) d\tau

and then changing the variable of integration, \tau,
to (t-x):

f(t) * h(t) = \int_{-\infty}^{\infty} f(t-x) h(x) dx = h(t) * f(t)
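The commutative law holds for discrete convolution as well; a one-line numerical check:

```python
import numpy as np

# f * h and h * f give the same sequence
f = np.array([1.0, 2.0, 3.0])
h = np.array([0.5, -1.0, 0.25, 4.0])
assert np.allclose(np.convolve(f, h), np.convolve(h, f))
```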

Convolution as a Superposition of Impulse Responses:

An innovation that has made digital analysis more practical is the introduction of the "Fast Fourier Transform" algorithm. This algorithm is a highly efficient method of calculating the discrete Fourier transform. It was introduced in 1965 by J.W. Cooley and J.W. Tukey. Earlier methods of computing the discrete Fourier transform (DFT) for N samples consisted of multiplying an N-vector by an (N x N) complex matrix. The fast Fourier transform (FFT) method of computing the Fourier coefficients requires much less time than the matrix method. For 1024 data points, the FFT algorithm requires about 1/100 as much time as the earlier method (Ref. 5).

The FFT algorithm is based on a series of iterative steps that eliminate redundant operations. Each iterative step combines the data into progressively larger weighted sums of data. This iteration may be used to reduce the computation whenever N is a composite number. The greatest advantage is gained by selecting N as a power of a small number; in theory three is the optimum radix, although two is the most commonly used radix.

When two is a factor of the number of data points, the data points may be summed and differenced, and the differences multiplied by the appropriate complex trigonometric functions. The Fourier coefficients may then be found by using two DFT's of N/2 points instead of one N-point DFT. Since an N-point DFT requires N^{2} multiplications and additions, the computation is reduced to two DFT's requiring N^{2}/4 operations each plus N operations to combine the data, a total of (N^{2}/2)+N operations. The figure below shows how the data may be combined algebraically to allow the use of two 4-point DFT's instead of one 8-point DFT (from (5)).

The first 4-point DFT is formed by simply adding the first half of the data
to the second half of the data.
The second 4-point DFT is obtained by differencing the two halves and multiplying
by the complex exponential W^{n} = e^{-j2\pi n/N}.
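The sum/difference step can be checked directly with NumPy (radix-2, decimation in frequency; the 8-point case described above):

```python
import numpy as np

x = np.random.default_rng(3).standard_normal(8)
N = len(x)
a, b = x[:N // 2], x[N // 2:]
n = np.arange(N // 2)

# Even coefficients: 4-point DFT of the sum of the halves
even = np.fft.fft(a + b)
# Odd coefficients: 4-point DFT of the weighted difference
odd = np.fft.fft((a - b) * np.exp(-2j * np.pi * n / N))

X = np.fft.fft(x)
assert np.allclose(even, X[0::2])
assert np.allclose(odd, X[1::2])
```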

When N is a power of two, N = 2^{m} (say), this iterative step
may be used m times to yield the Fourier coefficients without using matrix
multiplication.

The fast Fourier transform requires only N log_{2}N operations
instead of N^{2} operations.
By making a few changes the basic FFT algorithm may also be used to compute the
inverse FFT.
Since fewer computations are required, the amount of round-off error is
also reduced.
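Applying the splitting idea recursively gives a complete FFT; the minimal sketch below uses the decimation-in-time variant rather than the sum/difference form above (a sketch for N a power of two, not the implementation of Ref. 5):

```python
import cmath

def fft(x):
    """Recursive radix-2 FFT (decimation in time); requires len(x) = 2^m."""
    N = len(x)
    if N == 1:
        return list(x)
    even = fft(x[0::2])
    odd = fft(x[1::2])
    out = [0j] * N
    for k in range(N // 2):
        t = cmath.exp(-2j * cmath.pi * k / N) * odd[k]
        out[k] = even[k] + t
        out[k + N // 2] = even[k] - t
    return out

x = [complex(n % 3, 0) for n in range(8)]
direct = [sum(x[n] * cmath.exp(-2j * cmath.pi * k * n / 8) for n in range(8))
          for k in range(8)]
assert all(abs(a - b) < 1e-9 for a, b in zip(fft(x), direct))
```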

The speed of the FFT algorithm makes it practical to use it in various types of digital analysis including:

- Autocorrelation (and autocovariance) function estimates.
- Cross-correlation (and cross-covariance) function estimates.
- Power spectra estimates.
- Cross spectra estimates.
- Moving average digital filter.
- Convolution integrals
- Harmonic Analysis and Synthesis.
- Numerical solution of differential equations.
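As an example of the first item on the list, the autocorrelation can be formed by inverse-transforming the power spectrum instead of summing lag products (circular, as always with the DFT):

```python
import numpy as np

x = np.random.default_rng(4).standard_normal(256)
X = np.fft.fft(x)

# Autocorrelation via the FFT: inverse transform of the power spectrum
acf_fft = np.fft.ifft(np.abs(X) ** 2).real

# Direct circular lag sums for a few lags, for comparison
for lag in (0, 1, 5):
    direct = float(np.sum(x * np.roll(x, -lag)))
    assert abs(acf_fft[lag] - direct) < 1e-8
```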

(1) Van Valkenburg, M.E., "Network Analysis", 2nd ed., Prentice-Hall, 1964.

(2) Cheng, D.K., "Analysis of Linear Systems", Addison-Wesley, 1959.

(4) Papoulis, A., "The Fourier Integral and its Applications", McGraw-Hill, 1962.

(5) Ewald, D. and Bollinger, J.G., "Theory and Application of Fast Fourier Transforms for Data Analysis", Design Engr. Laboratories, Univ. of Wisconsin, Madison.

(6) Balabanian, N. and LePage, W.R., "What is a Minimum Phase Network?"


(7) Bergland, G.D., "A guided Tour of the Fast Fourier Transform", IEEE Spectrum, Vol. 6, No. 7, July 1969.

(8) Bergland, G.D., "A Radix-Eight Fast Fourier Transform Subroutine for Real-Valued Series", IEEE Trans. Aud. Electroacoustics, Vol. AU-17, No. 2, June 1969.

If you have questions about, or suggestions for, this document, please send e-mail to kochhar@physiology.wisc.edu


*This page last modified on : Jan. 2, 2003*