Spectral analysis: Foundations
Orthogonal functions
Fourier Series
Discrete Fourier Series
Fourier Transform: properties
Chebyshev polynomials
Convolution
DFT and FFT
Scope: Understanding where the Fourier Transform comes from. Moving from the continuous to the discrete world.
(Almost) everything we need to understand for filtering.
Fourier Series: one way to derive them
The Problem
We are trying to approximate a function f(x) by another function g_N(x), which consists of a sum over N orthogonal functions \varphi_i(x) weighted by coefficients a_i:

f(x) \approx g_N(x) = \sum_{i=0}^{N} a_i\, \varphi_i(x)
... and we are looking for optimal functions in a least squares (l2) sense ...
... a good choice for the basis functions \varphi_i(x) are orthogonal functions. What are orthogonal functions? Two functions f and g are said to be orthogonal in the interval [a, b] if

\int_a^b f(x)\, g(x)\, dx = 0
How is this related to the more conceivable concept of orthogonal vectors? Let us look at the original definition of integrals:
The Problem
\left\| f(x) - g_N(x) \right\|_2 = \left[ \int_a^b \left( f(x) - g_N(x) \right)^2 dx \right]^{1/2} \longrightarrow \text{Min!}
Orthogonal Functions
... where x_0 = a and x_N = b, and x_i - x_{i-1} = \Delta x ...

\int_a^b f(x)\, g(x)\, dx = \lim_{N \to \infty} \sum_{i=1}^{N} f(x_i)\, g(x_i)\, \Delta x

If we interpret f(x_i) and g(x_i) as the i-th components of an N-component vector, then this sum corresponds directly to a scalar product of vectors:

\mathbf{f} \cdot \mathbf{g} = \sum_i f_i\, g_i = 0

The vanishing of the scalar product is the condition for orthogonality of vectors (or functions).
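This correspondence is easy to check numerically; a minimal sketch (the choice f(x) = sin(x), g(x) = cos(x) on [0, 2π] is ours, not from the slides):

```python
import numpy as np

# Approximate the inner-product integral of f(x) = sin(x) and g(x) = cos(x)
# on [a, b] = [0, 2*pi] by the finite sum  sum_i f(x_i) g(x_i) dx.
a, b, N = 0.0, 2 * np.pi, 10000
x = np.linspace(a, b, N, endpoint=False)
dx = (b - a) / N

f = np.sin(x)
g = np.cos(x)

# The discrete sum is literally a dot product of the two sample vectors:
inner = np.dot(f, g) * dx
print(inner)  # close to 0: sin and cos are orthogonal on [0, 2*pi]
```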
Periodic functions
Let us assume we have a piecewise continuous function of the form
f(x) = f(x + 2\pi)

... we want to approximate this function with a linear combination of 2\pi-periodic functions:

1,\ \cos(x),\ \sin(x),\ \cos(2x),\ \sin(2x),\ \ldots,\ \cos(nx),\ \sin(nx)

f(x) \approx g_N(x) = \frac{1}{2} a_0 + \sum_{k=1}^{N} \left[ a_k \cos(kx) + b_k \sin(kx) \right]
Orthogonality
... are these functions orthogonal ?
\int_0^{2\pi} \cos(jx)\cos(kx)\, dx = \begin{cases} 2\pi & \text{for } j = k = 0 \\ \pi & \text{for } j = k \neq 0 \\ 0 & \text{for } j \neq k \end{cases}

\int_0^{2\pi} \sin(jx)\sin(kx)\, dx = \begin{cases} \pi & \text{for } j = k \neq 0 \\ 0 & \text{for } j \neq k \text{ or } j = k = 0 \end{cases}

\int_0^{2\pi} \cos(jx)\sin(kx)\, dx = 0 \quad \text{for all } j, k

... YES, and these relations are valid for any interval of length 2\pi.
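These relations can also be confirmed numerically; a quick sketch with Riemann sums on a fine periodic grid (the modes j = 2, k = 3 are an arbitrary choice):

```python
import numpy as np

# Riemann sums on a fine periodic grid stand in for the integrals over [0, 2*pi].
N = 100000
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dx = 2 * np.pi / N

cc = np.sum(np.cos(2 * x) * np.cos(3 * x)) * dx  # j != k       -> 0
cc_eq = np.sum(np.cos(3 * x) ** 2) * dx          # j == k != 0  -> pi
ss_eq = np.sum(np.sin(3 * x) ** 2) * dx          # j == k != 0  -> pi
cs = np.sum(np.cos(2 * x) * np.sin(2 * x)) * dx  # any j, k     -> 0

print(cc, cc_eq, ss_eq, cs)
```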
Now we know that this is an orthogonal basis, but how can we obtain the coefficients for the basis functions?
from minimising f(x)-g(x)
Fourier coefficients
optimal functions g_N(x) are given if

\int_0^{2\pi} \left[ g_N(x) - f(x) \right]^2 dx \longrightarrow \text{Min!} \quad \Longrightarrow \quad \frac{\partial}{\partial a_k} \int_0^{2\pi} \left[ g_N(x) - f(x) \right]^2 dx = 0

... with the definition of g_N(x) we get ...

\frac{\partial}{\partial a_k} \int_0^{2\pi} \left[ \frac{1}{2} a_0 + \sum_{k'=1}^{N} \left( a_{k'} \cos(k'x) + b_{k'} \sin(k'x) \right) - f(x) \right]^2 dx = 0

leading to

a_k = \frac{1}{\pi} \int_0^{2\pi} f(x) \cos(kx)\, dx, \quad k = 0, 1, \ldots, N

b_k = \frac{1}{\pi} \int_0^{2\pi} f(x) \sin(kx)\, dx, \quad k = 1, 2, \ldots, N

with

g_N(x) = \frac{1}{2} a_0 + \sum_{k=1}^{N} \left[ a_k \cos(kx) + b_k \sin(kx) \right]
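As a sketch, the coefficient integrals can be evaluated by numerical quadrature; here for f(x) = x² on [0, 2π] (the example of a later slide), where the closed forms are a_0 = 8π²/3, a_k = 4/k², b_k = -4π/k:

```python
import numpy as np

# Numerical quadrature of a_k = (1/pi) * int f(x) cos(kx) dx and
# b_k = (1/pi) * int f(x) sin(kx) dx over [0, 2*pi].
N = 200000
x = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
dx = 2 * np.pi / N
f = x ** 2

def a_coeff(k):
    return np.sum(f * np.cos(k * x)) * dx / np.pi

def b_coeff(k):
    return np.sum(f * np.sin(k * x)) * dx / np.pi

print(a_coeff(0))  # ~ 8*pi**2/3
print(a_coeff(3))  # ~ 4/9
print(b_coeff(3))  # ~ -4*pi/3
```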
Fourier approximation of |x|
... Example ...
f(x) = |x|, \quad -\pi \le x \le \pi

leads to the Fourier series

g(x) = \frac{\pi}{2} - \frac{4}{\pi} \left[ \frac{\cos(x)}{1^2} + \frac{\cos(3x)}{3^2} + \frac{\cos(5x)}{5^2} + \ldots \right]

... and for n < 4, g(x) looks like ...
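A sketch of the partial sums of this series (the truncation at 50 odd terms is an arbitrary choice):

```python
import numpy as np

# Partial sums of g(x) = pi/2 - (4/pi) * [cos(x)/1**2 + cos(3x)/3**2 + ...]
def g(x, n_terms):
    s = np.full_like(x, np.pi / 2)
    for k in range(1, 2 * n_terms, 2):      # k = 1, 3, 5, ...
        s -= (4 / np.pi) * np.cos(k * x) / k ** 2
    return s

x = np.linspace(-np.pi, np.pi, 1001)
err = np.max(np.abs(g(x, 50) - np.abs(x)))
print(err)  # small: |x| is continuous, so the series converges uniformly
```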
Fourier approximation of x2
... another Example ...
f(x) = x^2, \quad 0 \le x \le 2\pi

leads to the Fourier series

g_N(x) = \frac{4\pi^2}{3} + \sum_{k=1}^{N} \left[ \frac{4}{k^2} \cos(kx) - \frac{4\pi}{k} \sin(kx) \right]

... and for N < 11, g(x) looks like ...
Fourier - discrete functions
... what happens if we know our function f(x) only at the points

x_i = \frac{2\pi i}{N}

It turns out that in this particular case the coefficients are given by

a_k^* = \frac{2}{N} \sum_{j=1}^{N} f(x_j) \cos(k x_j), \quad k = 0, 1, 2, \ldots

b_k^* = \frac{2}{N} \sum_{j=1}^{N} f(x_j) \sin(k x_j), \quad k = 1, 2, 3, \ldots

The so-defined Fourier polynomial

g_m^*(x) = \frac{1}{2} a_0^* + \sum_{k=1}^{m-1} \left[ a_k^* \cos(kx) + b_k^* \sin(kx) \right] + \frac{1}{2} a_m^* \cos(mx)

is the unique interpolating function to the function f(x_j) with N = 2m.
Fourier - collocation points
... with the important property that ...

g_m^*(x_i) = f(x_i)

... in our previous examples ...
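This interpolation property can be checked directly; a sketch (the test function exp(sin x) is an arbitrary smooth 2π-periodic choice):

```python
import numpy as np

# Discrete coefficients a_k*, b_k* from N = 2m samples at x_j = 2*pi*j/N,
# and the trigonometric interpolant g_m*(x).
N, m = 16, 8
j = np.arange(1, N + 1)
xj = 2 * np.pi * j / N
f = np.exp(np.sin(xj))

a = np.array([2 / N * np.sum(f * np.cos(k * xj)) for k in range(m + 1)])
b = np.array([2 / N * np.sum(f * np.sin(k * xj)) for k in range(m + 1)])

def g_star(x):
    s = a[0] / 2 + a[m] / 2 * np.cos(m * x)
    for k in range(1, m):
        s = s + a[k] * np.cos(k * x) + b[k] * np.sin(k * x)
    return s

# interpolation property: g_m*(x_i) = f(x_i) at the collocation points
print(np.max(np.abs(g_star(xj) - f)))  # ~ 0 (machine precision)
```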
f(x)=|x| => f(x) - blue ; g(x) - red; xi - ‘+’
Fourier series - convergence
f(x)=x2 => f(x) - blue ; g(x) - red; xi - ‘+’
Fourier series - convergence
f(x)=x2 => f(x) - blue ; g(x) - red; xi - ‘+’
Gibbs phenomenon
f(x)=x2 => f(x) - blue ; g(x) - red; xi - ‘+’
(panels for N = 16, 32, 64, 128, 256)
The overshoot for equi-spaced Fourier interpolations is 14% of the step height.
Chebyshev polynomials
We have seen that Fourier series are excellent for interpolating (and differentiating) periodic functions defined on a regularly spaced grid. In many circumstances, however, physical phenomena are not periodic (in space) and occur in a limited area. This quest leads to the use of Chebyshev polynomials.
We depart by observing that cos(n\varphi) can be expressed by a polynomial in \cos(\varphi):

\cos(2\varphi) = 2\cos^2\varphi - 1
\cos(3\varphi) = 4\cos^3\varphi - 3\cos\varphi
\cos(4\varphi) = 8\cos^4\varphi - 8\cos^2\varphi + 1
... which leads us to the definition:
Chebyshev polynomials - definition
\cos(n\varphi) = T_n(\cos(\varphi)) = T_n(x), \qquad x = \cos(\varphi),\ x \in [-1, 1],\ n \in \mathbb{N}_0

... for the Chebyshev polynomials T_n(x). Note that because of x = \cos(\varphi) they are defined in the interval [-1, 1] (which can, however, be extended to \pm\infty). The first polynomials are

T_0(x) = 1
T_1(x) = x
T_2(x) = 2x^2 - 1
T_3(x) = 4x^3 - 3x
T_4(x) = 8x^4 - 8x^2 + 1

where

|T_n(x)| \le 1 \quad \text{for } x \in [-1, 1] \text{ and } n \in \mathbb{N}_0
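These polynomials can be generated with the standard three-term recurrence T_{n+1}(x) = 2x T_n(x) - T_{n-1}(x) (not stated on the slide, but equivalent to the definition); a sketch that checks it against cos(n arccos x):

```python
import numpy as np

def cheb(n, x):
    """Evaluate T_n(x) via the recurrence T_{n+1} = 2x T_n - T_{n-1}."""
    t_prev, t = np.ones_like(x), x
    if n == 0:
        return t_prev
    for _ in range(n - 1):
        t_prev, t = t, 2 * x * t - t_prev
    return t

x = np.linspace(-1.0, 1.0, 101)
for n in range(6):
    assert np.allclose(cheb(n, x), np.cos(n * np.arccos(x)))
print("T_n(x) = cos(n*arccos(x)) confirmed for n = 0..5")
```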
Chebyshev polynomials - Graphical
The first ten polynomials look like this in the interval [-1, 1].

The n-th polynomial has extrema with values 1 or -1 at

x_k^{(ext)} = \cos\left( \frac{k\pi}{n} \right), \quad k = 0, 1, 2, 3, \ldots, n
Chebyshev collocation points
These extrema are not equidistant (unlike the Fourier extrema):

x_k^{(ext)} = \cos\left( \frac{k\pi}{n} \right), \quad k = 0, 1, 2, 3, \ldots, n
Chebyshev polynomials - orthogonality
... are the Chebyshev polynomials orthogonal?
Chebyshev polynomials are an orthogonal set of functions in the interval [-1, 1] with respect to the weight function 1 / \sqrt{1 - x^2}, such that

\int_{-1}^{1} T_k(x)\, T_j(x)\, \frac{dx}{\sqrt{1 - x^2}} = \begin{cases} 0 & \text{for } k \neq j \\ \pi/2 & \text{for } k = j \neq 0 \\ \pi & \text{for } k = j = 0 \end{cases}
... this can be easily verified noting that

x = \cos(\varphi), \qquad dx = -\sin(\varphi)\, d\varphi, \qquad T_k(x) = \cos(k\varphi), \qquad T_j(x) = \cos(j\varphi)
Chebyshev polynomials - interpolation
... we are now faced with the same problem as for the Fourier series. We want to approximate a function f(x), this time not a periodic function but a function defined on [-1, 1].

We are looking for g_n(x) with

f(x) \approx g_n(x) = \frac{1}{2} c_0 T_0(x) + \sum_{k=1}^{n} c_k T_k(x)
... and we are faced with the problem of how to determine the coefficients c_k. Again we obtain them by finding the extremum (minimum):

\frac{\partial}{\partial c_k} \int_{-1}^{1} \left[ f(x) - g_n(x) \right]^2 \frac{dx}{\sqrt{1 - x^2}} = 0
Chebyshev polynomials - interpolation
... to obtain ...

c_k = \frac{2}{\pi} \int_{-1}^{1} f(x)\, T_k(x)\, \frac{dx}{\sqrt{1 - x^2}}, \quad k = 0, 1, 2, \ldots, n

... surprisingly these coefficients can be calculated with FFT techniques, noting that

c_k = \frac{2}{\pi} \int_0^{\pi} f(\cos\varphi) \cos(k\varphi)\, d\varphi, \quad k = 0, 1, 2, \ldots, n

... and the fact that f(\cos\varphi) is a 2\pi-periodic function ...

c_k = \frac{1}{\pi} \int_0^{2\pi} f(\cos\varphi) \cos(k\varphi)\, d\varphi, \quad k = 0, 1, 2, \ldots, n

... which means that the coefficients c_k are the Fourier coefficients a_k of the periodic function F(\varphi) = f(\cos\varphi)!
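A sketch of the coefficient formula c_k = (2/π) ∫_0^π f(cos φ) cos(kφ) dφ for f(x) = x⁴ (our choice of test function); since x⁴ = (3 T_0 + 4 T_2 + T_4)/8, the expansion f = c_0/2 + Σ c_k T_k should give c_0 = 3/4, c_2 = 1/2, c_4 = 1/8 and zero otherwise:

```python
import numpy as np

# Midpoint-rule quadrature of
# c_k = (2/pi) * integral_0^pi f(cos(phi)) cos(k*phi) dphi  with f(x) = x**4.
M = 200000
phi = (np.arange(M) + 0.5) * np.pi / M   # midpoints of [0, pi]
dphi = np.pi / M

def c_coeff(k):
    return 2 / np.pi * np.sum(np.cos(phi) ** 4 * np.cos(k * phi)) * dphi

print(c_coeff(0), c_coeff(2), c_coeff(4))  # ~ 0.75, 0.5, 0.125
```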
Chebyshev - discrete functions
... what happens if we know our function f(x) only at the points

x_i = \cos\left( \frac{\pi i}{N} \right)

In this particular case the coefficients are given by

c_k^* = \frac{2}{N} \sum_{j=1}^{N} f\!\left( \cos\frac{\pi j}{N} \right) \cos\left( \frac{\pi k j}{N} \right), \quad k = 0, 1, 2, \ldots, N

... leading to the polynomial ...

g_m^*(x) = \frac{1}{2} c_0^* + \sum_{k=1}^{m} c_k^* T_k(x)

... with the property

g_m^*(x_j) = f(x_j) \quad \text{at } x_j = \cos(\pi j / N),\ j = 0, 1, 2, \ldots, N
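A sketch of this discrete scheme at the collocation points x_j = cos(πj/N); the half-weights at the endpoints j = 0, N and on the terms k = 0, N are an assumption consistent with the FFT/DCT route, since the exact endpoint weighting is not legible on the slide:

```python
import numpy as np

N = 16
j = np.arange(N + 1)
xj = np.cos(np.pi * j / N)               # Chebyshev collocation points
f = np.exp(xj)                           # arbitrary smooth test function

w = np.ones(N + 1)
w[0] = w[-1] = 0.5                       # trapezoid weights at the ends
c = np.array([2 / N * np.sum(w * f * np.cos(np.pi * k * j / N))
              for k in range(N + 1)])

def g_star(x):
    s = c[0] / 2 + c[N] / 2 * np.cos(N * np.arccos(x))
    for k in range(1, N):
        s = s + c[k] * np.cos(k * np.arccos(x))
    return s

# interpolation property g*(x_j) = f(x_j)
print(np.max(np.abs(g_star(xj) - f)))  # ~ 0 (machine precision)
```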
Chebyshev - collocation points - |x|
f(x)=|x| => f(x) - blue ; gn(x) - red; xi - ‘+’
8 points
16 points
Chebyshev - collocation points - |x|
f(x)=|x| => f(x) - blue ; gn(x) - red; xi - ‘+’
32 points
128 points
Chebyshev - collocation points - x2
f(x)=x2 => f(x) - blue ; gn(x) - red; xi - ‘+’
8 points
64 points
The interpolating function gn(x) was shifted by a small amount to be visible at all!
Chebyshev vs. Fourier - numerical
f(x)=x2 => f(x) - blue ; gN(x) - red; xi - ‘+’
N = 16
Chebyshev Fourier
Chebyshev vs. Fourier - Gibbs
f(x)=sign(x-\pi) => f(x) - blue ; gN(x) - red; xi - ‘+’
Gibbs phenomenon with Chebyshev? YES!
Chebyshev Fourier
N = 16
Chebyshev vs. Fourier - Gibbs
f(x)=sign(x-\pi) => f(x) - blue ; gN(x) - red; xi - ‘+’
Chebyshev Fourier
N = 64
Fourier vs. Chebyshev
collocation points
  Fourier:   x_i = 2\pi i / N
  Chebyshev: x_i = \cos(\pi i / N)

domain
  Fourier:   periodic functions
  Chebyshev: limited area [-1, 1]

basis functions
  Fourier:   \cos(nx),\ \sin(nx)
  Chebyshev: T_n(x) = \cos(n\varphi), \quad x = \cos(\varphi)

interpolating function
  Fourier:   g_m^*(x) = \frac{1}{2} a_0^* + \sum_{k=1}^{m-1} \left[ a_k^* \cos(kx) + b_k^* \sin(kx) \right] + \frac{1}{2} a_m^* \cos(mx)
  Chebyshev: g_m^*(x) = \frac{1}{2} c_0^* + \sum_{k=1}^{m} c_k^* T_k(x)
Fourier vs. Chebyshev (cont’d)
coefficients
  Fourier:   a_k^* = \frac{2}{N} \sum_{j=1}^{N} f(x_j) \cos(k x_j), \qquad b_k^* = \frac{2}{N} \sum_{j=1}^{N} f(x_j) \sin(k x_j)
  Chebyshev: c_k^* = \frac{2}{N} \sum_{j=1}^{N} f\!\left( \cos\frac{\pi j}{N} \right) \cos\left( \frac{\pi k j}{N} \right)

some properties
  Fourier:
  • Gibbs phenomenon for discontinuous functions
  • efficient calculation via FFT
  • infinite domain through periodicity
  Chebyshev:
  • limited area calculations
  • grid densification at boundaries
  • coefficients via FFT
  • excellent convergence at boundaries
  • Gibbs phenomenon
The Fourier Transform Pair
Forward transform:

F(\omega) = \int_{-\infty}^{\infty} f(t)\, e^{-i\omega t}\, dt

Inverse transform:

f(t) = \frac{1}{2\pi} \int_{-\infty}^{\infty} F(\omega)\, e^{i\omega t}\, d\omega

Note the conventions concerning the sign of the exponents and the factor 1/(2\pi).
The Fourier Transform Pair
F(\omega) = R(\omega) + i\, I(\omega) = A(\omega)\, e^{i\Phi(\omega)}

A(\omega) = \left[ R(\omega)^2 + I(\omega)^2 \right]^{1/2} \qquad \text{Amplitude spectrum}

\Phi(\omega) = \arctan\!\left( \frac{I(\omega)}{R(\omega)} \right) \qquad \text{Phase spectrum}

In most applications it is the amplitude (or the power) spectrum that is of interest.
The Fourier Transform: when does it work?
Conditions that the integral transforms work:

f(t) has a finite number of jumps and the limits exist from both sides

f(t) is absolutely integrable, i.e.

\int_{-\infty}^{\infty} |f(t)|\, dt = G < \infty
Properties of the Fourier transform for special functions:
Function f(t)        Fourier transform F(\omega)
even even
odd odd
real hermitian
imaginary antihermitian
hermitian real
Some properties of the Fourier Transform
Defining the Fourier transform pair as f(t) \leftrightarrow F(\omega):

Linearity:
a f_1(t) + b f_2(t) \leftrightarrow a F_1(\omega) + b F_2(\omega)

Symmetry:
F(t) \leftrightarrow 2\pi\, f(-\omega)

Time shifting:
f(t - t_0) \leftrightarrow F(\omega)\, e^{-i\omega t_0}

Time differentiation (differentiation theorem):
\frac{\partial^n}{\partial t^n} f(t) \leftrightarrow (i\omega)^n F(\omega)
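The differentiation theorem is the basis of spectral derivatives; a sketch using the DFT as a discrete stand-in for the transform pair (f(t) = exp(sin t) is an arbitrary smooth periodic choice):

```python
import numpy as np

N = 64
t = np.linspace(0.0, 2 * np.pi, N, endpoint=False)
f = np.exp(np.sin(t))

# multiply the spectrum by i*omega, then transform back
omega = 2 * np.pi * np.fft.fftfreq(N, d=2 * np.pi / N)  # integer angular frequencies
df_spectral = np.real(np.fft.ifft(1j * omega * np.fft.fft(f)))
df_exact = np.cos(t) * np.exp(np.sin(t))

print(np.max(np.abs(df_spectral - df_exact)))  # tiny: spectral accuracy
```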
Convolution
Definition:

(f * g)(t) = \int_{-\infty}^{\infty} f(t')\, g(t - t')\, dt' = \int_{-\infty}^{\infty} f(t - t')\, g(t')\, dt'

The convolution operation is at the heart of linear systems.

Properties:

f(t) * g(t) = g(t) * f(t)

f(t) * \delta(t) = f(t)

f(t) * H(t) = \int_{-\infty}^{t} f(t')\, dt'

where H(t) is the Heaviside function.
The convolution theorem
A convolution in the time domain corresponds to a multiplication in the frequency domain.
… and vice versa …
a convolution in the frequency domain corresponds to a multiplication in the time domain
f(t) * g(t) \leftrightarrow F(\omega)\, G(\omega)

f(t)\, g(t) \leftrightarrow \frac{1}{2\pi}\, F(\omega) * G(\omega)
The first relation is of tremendous practical implication!
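A discrete sketch of the first relation, using circular convolution (the DFT's native form of convolution) and random test vectors:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(32)
g = rng.standard_normal(32)

# circular convolution computed directly from the definition ...
conv = np.array([sum(f[n] * g[(k - n) % 32] for n in range(32))
                 for k in range(32)])
# ... and as a multiplication in the frequency domain
conv_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

print(np.max(np.abs(conv - conv_fft)))  # ~ 0: identical up to round-off
```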
The convolution theorem
From Bracewell (Fourier transforms)
Discrete Convolution
Convolution is the mathematical description of the change of waveform shape after passage through a filter (system).
There is a special mathematical symbol for convolution (*):

y(t) = f(t) * g(t)

Here the impulse response function g is convolved with the input signal f; g is also named the "Green's function". In the discrete case:

y_k = \sum_{i=0}^{m} g_i\, f_{k-i}, \quad k = 0, 1, 2, \ldots, m + n

with

g_i,\ i = 0, 1, 2, \ldots, m \qquad f_j,\ j = 0, 1, 2, \ldots, n
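A direct sketch of this discrete sum, checked against np.convolve and reproducing the Matlab example that follows:

```python
import numpy as np

def convolve(f, g):
    """y_k = sum_i g_i * f_{k-i}, k = 0, ..., m + n."""
    n, m = len(f) - 1, len(g) - 1
    y = np.zeros(n + m + 1)
    for k in range(n + m + 1):
        for i in range(m + 1):
            if 0 <= k - i <= n:
                y[k] += g[i] * f[k - i]
    return y

f = np.array([0.0, 0.0, 1.0, 0.0])   # input signal
g = np.array([1.0, 2.0, 1.0])        # impulse response ("Green's function")
print(convolve(f, g))                 # [0. 0. 1. 2. 1. 0.]
```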
Convolution Example (Matlab)

>> x
x =
     0     0     1     0
>> y
y =
     1     2     1
>> conv(x,y)
ans =
     0     0     1     2     1     0

x: system input, y: impulse response, conv(x,y): system output.
Convolution Example (pictorial)
x „Faltung“ y ("Faltung" = folding: one sequence is reversed and slid past the other; at each shift the overlapping samples are multiplied and summed):

x = 0 1 0 0 (reversed), y = 1 2 1

shift:   0  1  2  3  4  5
sum:     0  0  1  2  1  0

x * y = 0 0 1 2 1 0
The digital world
g_s(t) = g(t) \sum_{j} \delta(t - j\, dt)

g_s is the digitized version of g, and the sum is called the comb function.

Defining the Nyquist frequency f_{Ny} as

f_{Ny} = \frac{1}{2\, dt}

after a few operations the spectrum can be written as

G_s(f) = \frac{1}{dt} \left[ G(f) + \sum_{n=1}^{\infty} \left( G(f + 2 n f_{Ny}) + G(f - 2 n f_{Ny}) \right) \right]
… with very important consequences …
The sampling theorem
f_{Ny} = \frac{1}{2\, dt}

The implications are that for the calculation of the spectrum at frequency f there are also contributions from frequencies f \pm 2 n f_{Ny}, n = 1, 2, 3, \ldots

That means dt has to be chosen such that f_{Ny} is the largest frequency contained in the signal.
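A sketch of the aliasing this implies: with dt = 0.1 s (so f_Ny = 5 Hz), an 8 Hz sine takes exactly the same sample values as its alias at 8 - 2 f_Ny = -2 Hz (the specific numbers are our choice):

```python
import numpy as np

dt = 0.1
t = np.arange(0.0, 2.0, dt)
f_ny = 1 / (2 * dt)                                  # 5 Hz

s_true = np.sin(2 * np.pi * 8.0 * t)                 # 8 Hz > f_Ny
s_alias = np.sin(2 * np.pi * (8.0 - 2 * f_ny) * t)   # -2 Hz alias

print(np.max(np.abs(s_true - s_alias)))  # ~ 0: the samples coincide
```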
The Fast Fourier Transform FFT
... spectral analysis became interesting for computing with the introduction of the Fast Fourier Transform (FFT). What's so fast about it?

The FFT originates from a paper by Cooley and Tukey (1965, Math. Comp., vol. 19, 297-301), which revolutionised all fields where Fourier transforms were essential to progress.
The discrete Fourier Transform can be written as
F_k = \sum_{j=0}^{N-1} f_j\, e^{-2\pi i\, k j / N}, \quad k = 0, 1, \ldots, N-1

f_j = \frac{1}{N} \sum_{k=0}^{N-1} F_k\, e^{2\pi i\, k j / N}, \quad j = 0, 1, \ldots, N-1
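A naive O(N²) evaluation of this sum, checked against np.fft.fft (which uses the O(N log N) Cooley-Tukey algorithm):

```python
import numpy as np

def dft(f):
    """F_k = sum_j f_j exp(-2*pi*i*k*j/N), evaluated directly as a matrix sum."""
    N = len(f)
    j = np.arange(N)
    k = j.reshape(-1, 1)
    return np.sum(f * np.exp(-2j * np.pi * k * j / N), axis=1)

f = np.random.default_rng(1).standard_normal(128)
print(np.max(np.abs(dft(f) - np.fft.fft(f))))  # ~ 1e-12: same transform
```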