
5 Image Processing Excursus: The Gaussian Pyramid for Illumination Correction

5.1 Illumination Correction Framework

For more than two decades, pyramid methods have been used for image processing tasks such as image enhancement, compression, and interpolation and extrapolation of missing image data, among numerous others [O���� et al., ����]. Given a gray-valued input image with intensity values G(x, y) at discrete locations x ∈ [0, m−1] and y ∈ [0, n−1], the Gaussian pyramid is an efficient data structure for spectral decomposition: for each level i, the image G_i(x, y) is low-pass filtered and downsampled by a factor of two to produce the image G_{i+1}(x, y). This is commonly referred to as the reduce operation, i.e.

G_{i+1} = reduce(G_i) = (↓2)(h ∗ G_i)  with  (↓2)f(x, y) = f(2x, 2y),

where the filter operator ∗ is defined as

(f ∗ g)(x, y) = Σ_{i=−∞}^{+∞} Σ_{j=−∞}^{+∞} g(i, j) f(x−i, y−j).    (5.1)
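As an illustration (not the original implementation), the reduce operation can be sketched in NumPy with a separable 5-by-5 binomial kernel; the helper names are our own, and a plain zero boundary is used here for brevity (boundary handling is discussed below):

```python
import numpy as np

# 1-D binomial weights (1, 4, 6, 4, 1)/16; their outer product gives the
# 5-by-5 generating kernel h approximating a Gaussian.
w = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
h = np.outer(w, w)

def filter5(img, kernel):
    """Same-size filtering as in (5.1); zero padding for brevity.

    The kernel is symmetric, so correlation and convolution coincide.
    """
    m, n = img.shape
    p = np.pad(img, 2, mode="constant")
    out = np.zeros((m, n))
    for i in range(5):
        for j in range(5):
            out += kernel[i, j] * p[i:i + m, j:j + n]
    return out

def reduce_(G):
    """G_{i+1} = (down 2)(h * G_i): low-pass filter, keep every second pixel."""
    return filter5(G, h)[::2, ::2]
```

On a constant image the interior of the reduced image keeps the constant value, since the kernel weights sum to one.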

The reduce operation is repeated up to a certain level l. Then, the expand operation is applied successively to obtain an image with the same size as G_0, but containing only the low frequency components of G_l:

Ĝ_{i−1} = expand(Ĝ_i) = 4 · (h ∗ ((↑2)Ĝ_i)),

where (↑2) doubles the image size by inserting zero-valued rows and columns, and the factor 4 compensates for the inserted zero samples.
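Again purely as an illustrative sketch (names ours, zero boundary for brevity): expand inserts zeros on the fine grid, low-pass filters, and rescales by 4 because three quarters of the samples are zero after upsampling:

```python
import numpy as np

w = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
h = np.outer(w, w)  # 5-by-5 binomial generating kernel

def filter5(img, kernel):
    """Same-size filtering as in (5.1); zero padding for brevity."""
    m, n = img.shape
    p = np.pad(img, 2, mode="constant")
    out = np.zeros((m, n))
    for i in range(5):
        for j in range(5):
            out += kernel[i, j] * p[i:i + m, j:j + n]
    return out

def expand(G):
    """G_{i-1} = 4 * (h * ((up 2) G_i)): zero-insert, filter, rescale."""
    m, n = G.shape
    up = np.zeros((2 * m, 2 * n))
    up[::2, ::2] = G  # (up 2): samples on the even grid, zeros elsewhere
    return 4.0 * filter5(up, h)
```

Expanding a constant image reproduces the constant in the interior, which confirms the factor 4.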

The weighting function h, the generating kernel, is commonly chosen to be a 5-by-5 binomial filter to approximate the Gaussian. This method for computing the low frequency components of the input image is more efficient than direct convolution with a large filter, but also more efficient than using the standard fast Fourier transform (FFT) [A������ et al., ����]. In practice, as

images are of finite size, the filtering operation (5.1) would be undefined for boundary pixels, so the image needs to be extended by two pixels (either virtually or explicitly) in each direction when using 5-by-5 filters. Commonly, the pixels outside the image are set to a constant value, set to the value of the nearest boundary pixel (replicate boundary), or computed by assuming a periodic image (circular boundary). The following general framework allows arbitrary boundary conditions to be defined: given the input image f(x, y) with x ∈ [0, m−1] and y ∈ [0, n−1], we define the extended image f̃(x, y) for x ∈ [−2, m+1] and y ∈ [−2, n+1]. The set of pixels

P_{x,y} = {(p_x^i, p_y^i) | i = 1, …, |P_{x,y}|}

contains all those pixels that influence the intensity at (x, y) in the extended area of the image.

The function g_{P_{x,y}}(x, y) defines how to calculate the intensity at (x, y) from the intensities of the set of pixels P_{x,y}. Thus,

f̃(x, y) = { f(x, y)             if x ∈ [0, m−1] and y ∈ [0, n−1]
           { g_{P_{x,y}}(x, y)   otherwise.

For convenience, we use g(x, y) instead of g_{P_{x,y}}(x, y). Assuming a replicate boundary, the intensity values of pixels outside of the image equal those of the nearest boundary pixel:

P_{x,y}^rep = {(p_x, p_y) | p_x = min(max(x, 0), m−1), p_y = min(max(y, 0), n−1)}.

Thus, |P_{x,y}^rep| = 1 for all x and y, and g^rep(x, y) = f(p_x, p_y). Analogously, a circular boundary condition corresponds to

P_{x,y}^circ = {(p_x, p_y) | p_x = x mod m, p_y = y mod n}.
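For illustration, the two index mappings can be written directly (a sketch with our own function names):

```python
def replicate_index(x, y, m, n):
    """Nearest boundary pixel: clamp coordinates to [0, m-1] x [0, n-1]."""
    return min(max(x, 0), m - 1), min(max(y, 0), n - 1)

def circular_index(x, y, m, n):
    """Periodic image: wrap coordinates (Python's % is non-negative here)."""
    return x % m, y % n
```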

Again, |P_{x,y}^circ| = 1 for all x and y, and g^circ(x, y) = f(p_x, p_y). Both assumptions, replicate and circular, are inappropriate when modelling illumination gradients: when compensating for vignetting, they overestimate the intensity gradient at the boundary. We propose to use extrapolation methods that model smoothly continuing intensity gradients beyond the image boundary. Within the above framework, linear extrapolation is achieved by:

P_{x,y}^lin = {(p_x^1, p_y^1), (p_x^2, p_y^2)} with

p_x^1 = min(max(x, 0), m−1),  p_y^1 = min(max(y, 0), n−1),
p_x^2 = min(max(x, 1), m−2),  p_y^2 = min(max(y, 1), n−2),

and g^lin(x, y) = 2 f(p_x^1, p_y^1) − f(p_x^2, p_y^2).
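A minimal sketch of this linear extrapolation, assuming the extrapolated value is obtained as 2 f(p¹) − f(p²) from the two clamped index pairs (the exact combination rule is our reading of the construction):

```python
def linear_extrapolate(f, x, y, m, n):
    """Linearly continue the intensity gradient beyond the image boundary.

    f is indexable as f[px][py]; (x, y) may lie outside [0, m-1] x [0, n-1].
    Sketch: assumes g_lin(x, y) = 2 f(p1) - f(p2).
    """
    p1 = (min(max(x, 0), m - 1), min(max(y, 0), n - 1))
    p2 = (min(max(x, 1), m - 2), min(max(y, 1), n - 2))
    return 2 * f[p1[0]][p1[1]] - f[p2[0]][p2[1]]
```

On a linear intensity ramp this continues the gradient exactly for pixels one step outside the boundary.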

A more stable extrapolation involves least-squares regression. Assume P_{x,y} = {(p_x^i, p_y^i)} to contain the 3-by-3 block of pixels within the image bounds that is closest to the point (x, y). By

minimising ‖Aβ − b‖² we obtain the coefficients of a two-dimensional second-order polynomial modelling the intensity in the region P_{x,y} with minimum error; here each row of A contains the monomials (p_x², p_y², p_x p_y, p_x, p_y, 1) for one pixel of P_{x,y}, and b contains the corresponding intensities. Thus, the intensity at (x, y) is approximated by

g(x, y) = (x², y², x y, x, y, 1) · β  with  β = (AᵀA)⁻¹ Aᵀ b.
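A sketch of this regression-based extrapolation in NumPy (function and variable names are ours; `block` defaults to the 3-by-3 neighbourhood):

```python
import numpy as np

def quadratic_extrapolate(f, x, y, block=3):
    """Fit a 2-D second-order polynomial to the in-bounds pixel block
    nearest (x, y) and evaluate it at (x, y).  Illustrative sketch."""
    m, n = f.shape
    # top-left corner of the closest fully in-bounds block
    x0 = min(max(x - block // 2, 0), m - block)
    y0 = min(max(y - block // 2, 0), n - block)
    A, b = [], []
    for px in range(x0, x0 + block):
        for py in range(y0, y0 + block):
            A.append([px * px, py * py, px * py, px, py, 1.0])
            b.append(f[px, py])
    # beta = (A^T A)^{-1} A^T b, computed via a stable least-squares solve
    beta, *_ = np.linalg.lstsq(np.array(A), np.array(b), rcond=None)
    return np.array([x * x, y * y, x * y, x, y, 1.0]) @ beta
```

For intensities that are exactly quadratic in x and y, the fit is exact and the extrapolated value matches the polynomial.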

Alternative regression approaches may be applied. However, the complexity of a regression model, i.e. the number of parameters to be estimated, should be low, as illumination gradients are by definition smooth functions. The choice of the neighbourhood size is a trade-off between stability and runtime, but, as our experiments show, 3-by-3 blocks are sufficient to remove almost all visible boundary artefacts. The computational complexity of the regression method is not a crucial factor, as extrapolation is done only for a small number of boundary pixels; other operations, such as filtering of the whole image, dominate the complexity of the illumination correction method. In the overall algorithm (see Figure 5.1), extrapolation of boundary pixels is used in steps (4) and (13).

In the downsampling loop (2–8), the image is first extended by two rows and columns in each direction, and then the intensity values of these pixels are determined by extrapolation.

After filtering, the image is cropped to the original size. In the upsampling loop (10–15), the image is first filtered and then the intensity values of the first and last two rows and columns are recalculated from the inner image by extrapolation. Finally, the low-frequency image is subtracted from the original image and the mean intensity of the input image is added to obtain the final image with the proper intensity level.

The removal of low-frequency image components with the aforementioned method may alter the image histogram, especially if the input image contains saturated pixels. Thus, further postprocessing steps, such as histogram matching [R������ et al., ����], may be required to improve the visual impression.

Input : Intensity image f(x, y), pyramid depth l
Output: Corrected image f̂(x, y)

1   G_0 ← f;
2   for i ← 0 to l−1 do
3       Extend G_i;
4       Extrapolate image G_i;
5       Filter image, i.e. G_i ← h ∗ G_i;
6       Crop image G_i to original size;
7       Downsample, i.e. G_{i+1} ← (↓2)(G_i);
8   end
9   Ĝ_l ← G_l;
10  for i ← l down to 1 do
11      Upsample, i.e. Ĝ_{i−1} ← (↑2)Ĝ_i;
12      Filter image, i.e. Ĝ_{i−1} ← h ∗ Ĝ_{i−1};
13      Extrapolate image Ĝ_{i−1};
14      Scale image, i.e. Ĝ_{i−1} ← 4 · Ĝ_{i−1};
15  end
16  Calculate mean intensity, i.e. f̄ = (Σ_{x,y} f(x, y)) / (nm);
17  f̂ ← f − Ĝ_0 + f̄;

Figure 5.1: Overall illumination correction algorithm.
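As a rough NumPy translation of the overall algorithm (illustrative only; for brevity, replicate padding stands in for the polynomial boundary extrapolation steps, so boundary quality is lower than in the described method):

```python
import numpy as np

w = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
h = np.outer(w, w)  # 5-by-5 binomial generating kernel

def filter5(img):
    """Extend by two pixels, filter with h, crop to original size.

    Replicate padding is used here in place of the extrapolation steps.
    """
    m, n = img.shape
    p = np.pad(img, 2, mode="edge")
    out = np.zeros((m, n))
    for i in range(5):
        for j in range(5):
            out += h[i, j] * p[i:i + m, j:j + n]
    return out

def correct_illumination(f, l):
    """Remove low-frequency illumination from f using an l-level pyramid."""
    G = [np.asarray(f, dtype=float)]
    for _ in range(l):                      # downsampling loop
        G.append(filter5(G[-1])[::2, ::2])  # reduce
    low = G[l]
    for i in range(l, 0, -1):               # upsampling loop
        m, n = G[i - 1].shape
        up = np.zeros((m, n))
        up[::2, ::2] = low                  # (up 2): zero-insertion
        low = 4.0 * filter5(up)             # filter and rescale
    # subtract the low-frequency image, restore the mean intensity
    return G[0] - low + G[0].mean()
```

On a constant image the interior of the result reproduces the input, since the reconstructed low-frequency image equals the constant there.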

With a minor change, the above framework applies to arbitrary non-rectangular images, where the input image f(x, y) is defined for a set of pixels I ⊆ [0, m−1] × [0, n−1]. We define the extended image by:

f̃(x, y) = { f(x, y)             if (x, y) ∈ I
           { g_{P_{x,y}}(x, y)   otherwise.      (5.2)

Here, P_{x,y} contains those pixels with minimum Euclidean distance to the query point.

The above method can analogously be applied to 3-dimensional volumetric data: f(x, y) becomes f(x, y, z), the generating kernel is now a 5-by-5-by-5 binomial filter, and so on. Thus it can be used for volumetric medical imaging data, such as fMRI data, to remove illumination gradients or for downsampling.
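For instance, the separable 3-D generating kernel can be built from the same 1-D binomial weights (a sketch):

```python
import numpy as np

w = np.array([1, 4, 6, 4, 1], dtype=float) / 16.0
# 5-by-5-by-5 binomial generating kernel as the outer product of 1-D weights
h3 = np.einsum("i,j,k->ijk", w, w, w)
```

Because the 1-D weights sum to one, the 3-D kernel also sums to one, so filtering preserves the mean intensity of the volume.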