
The quality measures:

The following quality measures are considered for determining the goodness of the denoising result:

1. Fraction of non-zero coefficients (equivalent to the physical density):

$$D = \frac{\#\{u_i \neq 0 \mid u_i \in C_L \cup \mathcal{D}\}}{n}\,, \qquad (5.8)$$

in the wavelet coefficient matrix (i.e. the low-pass coefficients $C_L$ and all high-pass coefficients $\mathcal{D}$) of the denoised image is a measure of the reduction of noise, pointing to images which fulfill Conditions 1 and 4. Better denoised images will have a smaller density. The drawback of this measure is that it does not distinguish between cut noise and cut signal components, and is thereby unable to detect a violation of Condition 2.
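For illustration, the density measure just described could be computed as in the following minimal NumPy sketch; the flat-list coefficient layout and the helper name `density` are assumptions of the example, not part of the method:

```python
import numpy as np

def density(coeff_arrays):
    """Fraction of non-zero wavelet coefficients (Eq. 5.8).

    coeff_arrays: iterable of NumPy arrays holding the low-pass block C_L
    and all high-pass blocks D of one denoised image (layout assumed).
    """
    nonzero = sum(int(np.count_nonzero(c)) for c in coeff_arrays)
    total = sum(c.size for c in coeff_arrays)
    return nonzero / total  # smaller density = stronger coefficient suppression
```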

2. Risk: The “SURE/Hybrid Shrink” method determines the cut-off threshold of the coefficients by minimizing the “SURE” risk function (Donoho and Johnstone, 1995) (Eq. 5.5), which depends on the statistics of the wavelet coefficients. Therefore the risk function is used to evaluate noise suppression.

A small risk means that only small coefficients, which are likely to represent noise, are set to zero, such that Condition 2 is probably fulfilled. But the risk can also be small when the wavelet decomposition was not sparse and consequently only few coefficients were cut off, such that almost no denoising has taken place, in violation of Condition 1.

The risk taken by the shrinkage of the coefficients is given by the sum of SURE-Risks taken for the shrinkage of each subspace or level with given thresholds:

$$R = \sum_r \frac{1}{n} \left( n - 2\,\#\left\{ d_k \in \tilde{D}_r \;\Big|\; \frac{|d_k|}{\breve{\sigma}} \le \breve{t}_r \right\} + \sum_{d_k \in \tilde{D}_r} \min^2\!\left( \frac{|d_k|}{\breve{\sigma}},\, \breve{t}_r \right) \right), \qquad (5.9)$$

where $\tilde{D}_r$ can be either all high-pass subspaces from all decomposition levels together ($\tilde{D}_r = \{D_j^q\}_{j=1,\dots,L,\; q=1,\dots,7}$), or the subspaces of a single resolution level $j$ ($\tilde{D}_r = \{D_j^q\}_{q=1,\dots,7}$), or one single subspace $q$ at a specific resolution level $j$ ($\tilde{D}_r = D_j^q$), such that $\breve{t}_r$ is the "SURE" threshold determined for $\tilde{D}_r$.
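As a hedged illustration of Eq. 5.9, the following NumPy sketch evaluates the per-subspace SURE term and sums it over a list of subspaces; the function names and the per-subspace normalization by n are assumptions made for the example:

```python
import numpy as np

def sure_risk(d, sigma, t):
    """SURE risk of soft-thresholding one subspace D~_r at threshold t
    (the bracketed term of Eq. 5.9, divided by n); sigma is the noise
    level estimate used to normalize the coefficients."""
    x = np.abs(np.ravel(d)) / sigma
    n = x.size
    return (n - 2 * np.count_nonzero(x <= t) + np.sum(np.minimum(x, t) ** 2)) / n

def total_risk(subspaces, thresholds, sigma):
    """Total risk R of Eq. 5.9: sum of the per-subspace SURE risks."""
    return sum(sure_risk(d, sigma, t) for d, t in zip(subspaces, thresholds))
```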

3. Entropy: Good wavelet decompositions have sparse representations (Condition 4), i.e. they contain few nonzero elements of high amplitude, which then generate images of high contrast, fulfilling Condition 3. Good decompositions in this sense are characterized by a small entropy (Wickerhauser, 1994b). But some of the sparse denoising results arose from also cutting away coefficients representing neuronal structures (violating Condition 2), which leads to bad reconstructions.

The following cost functional (Wickerhauser, 1994b) is small when the entropy is also small:

$$E(d) = \sum_{d_k \in C_L \cup \mathcal{D}} |d_k^2| \, \log\!\left(|d_k^2|\right). \qquad (5.10)$$
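A minimal NumPy sketch of this cost functional could read as follows; the small `eps` guard against log(0) for exactly-zero coefficients is an implementation choice of the example, not part of Eq. 5.10:

```python
import numpy as np

def entropy_cost(coeff_arrays, eps=1e-30):
    """Cost functional of Eq. 5.10 over all coefficients in C_L and D."""
    cost = 0.0
    for c in coeff_arrays:
        d2 = np.square(np.ravel(c).astype(float))
        cost += float(np.sum(d2 * np.log(d2 + eps)))  # eps guards log(0)
    return cost
```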

4. Correlation: Since the neuronal branches are linear structures, which have, in at least one direction, a lower frequency than the noise, it can be assumed that cell branches will be represented by corresponding wavelet coefficients at more than one resolution level.

The local correlation between the coefficient $b^q_{j,l,m,p}$ at location $(l,m,p)$ in subspace $q$ at resolution level $j$ and the corresponding coefficients $b^q_{j-1}$ at the next higher resolution level $j-1$ has the following form:

$$\mathcal{L}^q_{j,l,m,p}(B) = b^q_{j,l,m,p} \left\{ b^q_{j-1,2l,2m,2p} + b^q_{j-1,2l+1,2m,2p} + b^q_{j-1,2l,2m+1,2p} + b^q_{j-1,2l+1,2m+1,2p} + b^q_{j-1,2l,2m,2p+1} + b^q_{j-1,2l+1,2m,2p+1} + b^q_{j-1,2l,2m+1,2p+1} + b^q_{j-1,2l+1,2m+1,2p+1} \right\}. \qquad (5.11)$$

As such, it will be higher for coefficients representing linear structures than for coefficients representing noise.

A high total correlation (i.e. the sum of all local correlations, see Eq. 5.13) will then point to images which have more undisturbed neuronal branches, fulfilling Condition 2. This measure discriminates between noise and structure by penalizing the removal of structure more strongly. In order to calculate the correlation, three processing steps have to be taken:

Algorithm 5.2 (Across-Scales Correlation Computation)

(a) Dark background in all images: Some wavelets lead to high contrast images after denoising but add a constant gray level to the background. Since only the contrast contents of the images are of interest, the DC level is subtracted in all filtered versions of one image before the transform is applied. This operation has complexity O(n).

(b) Necessity of the Haar Wavelet Transform: The computation of correlation between spatially correspondent coefficients of different resolution levels requires the knowledge of their correspondence relation.

Wickerhauser computes the shift introduced by the convolution of a signal with a filter function in (Wickerhauser, 1994c) and finds that the shift introduced by the filter is constant for all frequencies only if the filter has a linear phase, whereas non-symmetric filters might shift different signals by different amounts, depending on the frequency spectra of both the filter and the signal (see Appendix D.1 for details). Unfortunately, among all quadrature mirror filters (QMFs) only the Haar wavelet has linear phase. The calculation of shifts between the corresponding coefficients of two neighboring resolution levels would become computationally intractable for all other orthogonal wavelets. Consequently, correlations cannot be computed directly on the denoised wavelet coefficients. Instead, the Haar wavelet has to be applied to the denoised and reconstructed images.

The above two steps are described in Eq. 5.12: Suppose $A$ is the filtered 3D image, where $A = \{a_{x,y,z}\}_{x,y,z=1,\dots,n}$ are the voxel gray values, $n = 2^N$ is the number of voxels along an image edge, and $(x,y,z) \in \mathbb{Z}^3$ are spatial indices. Then the corrected image is:

$$B = \text{3D DWT}_{\text{Haar}}\!\left( A - \frac{1}{n^3} \sum_{x,y,z=1}^{n} a_{x,y,z} \right), \qquad (5.12)$$

where 3D DWT is the 3D discrete wavelet transform and $B = \{B_j^q\}_{j=1,\dots,L,\; q=0,\dots,7}$, with $B_L^0 = C_L$ and $B_j^q = D_j^q$ for $q \neq 0$. This step has a complexity of O(n).

(c) Correlation: The quality of the filtered image is then determined by the sum over all resolution levels of the local correlations:

$$C(B) = \frac{1}{8\,\|B\|^2} \sum_{j=2}^{L} \sum_{q=1}^{7} \sum_{l,m,p=1}^{2^{N-j}} \mathcal{L}^q_{j,l,m,p}(B). \qquad (5.13)$$

This operation has a complexity of O(n).

Consequently, the total computational cost required by this quality measure is of complexity O(n).
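The following Python sketch puts steps (a) to (c) of Algorithm 5.2 together, as reconstructed above; it assumes a cubic volume with power-of-two edge length, uses PyWavelets' `wavedecn` for the 3D Haar transform (whose seven per-level detail keys play the role of the subspaces q), and the function name `correlation_measure` is an assumption of the example:

```python
import numpy as np
import pywt

def correlation_measure(volume, levels):
    """Across-scales correlation C(B) of Eq. 5.13 (Algorithm 5.2)."""
    # step (a): subtract the DC level so only contrast content remains
    a = np.asarray(volume, dtype=float)
    a -= a.mean()
    # step (b): 3D Haar transform (Eq. 5.12); in pywt's layout each detail
    # entry is a dict whose 7 keys ('aad', ..., 'ddd') are the subspaces q
    coeffs = pywt.wavedecn(a, 'haar', level=levels)
    details = coeffs[1:]                 # coarsest level first, finest last
    norm2 = float(np.sum(np.square(coeffs[0])))
    for d in details:
        norm2 += sum(float(np.sum(np.square(d[k]))) for k in d)
    # step (c): correlate every coefficient with the sum of its 8 children
    # on the next finer level (Eq. 5.11), for every orientation q
    corr = 0.0
    for coarse, fine in zip(details[:-1], details[1:]):
        for q in coarse:
            child_sum = sum(fine[q][i::2, j::2, k::2]
                            for i in (0, 1) for j in (0, 1) for k in (0, 1))
            corr += float(np.sum(coarse[q] * child_sum))
    return corr / (8.0 * norm2)

# usage, e.g. for a 64x64x64 volume: correlation_measure(img, levels=4)
```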

5. Mean Squared Error: When using the "SURE/Hybrid Shrink" (Donoho and Johnstone, 1995) denoising method, Donoho's theory guarantees that the Mean Squared Error (MSE) between the denoised data and the original data is minimal. Using the MSE between denoised and original data would then be a natural choice for a quality measure. However, neuronal structures occupy only small parts of the confocal microscope images, the rest being background pixels. Background pixels will have a large influence on the MSE, such that its value may become meaningless. Therefore it will not be considered further.

Figure 5.1 shows the sorted measure values of all denoised variants of six analyzed images (the data used and the experiments are presented in Section 5.4). Since no single measure fulfills all quality criteria imposed for a good denoising, functions of several of the measures given above are proposed in the next section.

5.3.2 Composed Quality Measures

It is expected that a combination of the outputs of two or more of the quality measures presented above would give more relevant answers.

However, since the violation of Condition 2 generates low density values (i.e. optimal values), D will not be considered further as a candidate component for a good quality measure, because it could influence the composed outcome too strongly.

Therefore, since the MSE is not considered either (see above), the minima of the entropy and risk measures and the maxima of the correlation measure are of further interest. The measures should have steep slopes in these zones of interest, so that values can be clearly differentiated.

First each measure has to be normalized appropriately:

If the quality measure shows a significant slope in its zone of interest, then a linear mapping of the measure values for all denoising results to the interval (0,1] is suitable, or

if the values in the zone of interest of the quality measure are too similar, a nonlinear transformation has to be applied, e.g. the log or the exp function, in order to stretch the gaps between values (see Eqs. 5.14-5.17).

Because of the shapes of the individual quality measures, the logarithmic normalization is taken for the risk and entropy measures (Fig. 5.1.b and c), and either the linearly normed correlation measure or its normed exponent is taken for the correlation (Fig. 5.1.d).
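A possible sketch of the linear norm mapping onto (0,1] is given below; the small offset used to keep the lowest value above zero is an assumption of the example, since the text only fixes the target interval:

```python
import numpy as np

def norm_to_unit(values, eps=1e-12):
    """Linear mapping of a set of measure values onto (0, 1] (the `norm`
    operator of Eqs. 5.14-5.17); eps keeps the smallest value above zero."""
    v = np.asarray(values, dtype=float)
    span = v.max() - v.min()
    if span == 0.0:                      # degenerate case: all values equal
        return np.ones_like(v)
    return (v - v.min()) / span * (1.0 - eps) + eps
```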

After normalizing, the operation has to be chosen by which the three measures are to be combined. Addition is equivalent to a union operation and multiplication is equivalent to an intersection operation. For the addition, the correlation measure is reversed by a sign change, such that all measures have their zone of interest at the minima, whereas for multiplication the reciprocal values of the risk and entropy functions are taken.

The four alternative methods which arise from these considerations would then be:

the additive combination:

with normed correlation:

$$C_k + M_k = \mathrm{norm}\left(-C(B_k)\right) + \mathrm{norm}\left(\log\left(\mathrm{norm}(M_k)\right)\right), \quad \text{or} \qquad (5.14)$$

with normed exponent of correlation:

$$\exp(C_k) + M_k = \mathrm{norm}\left(-e^{C(B_k)}\right) + \mathrm{norm}\left(\log\left(\mathrm{norm}(M_k)\right)\right). \qquad (5.15)$$


Figure 5.1: The Shapes of the Component Quality Measures (a: Density D, b: Risk R, c: Entropy E, d: Correlation C). The zones of interest are the minima of the risk and the entropy and the maxima of the correlation. In order to enlarge the distances between values in these zones, logarithmic transforms are applied to the risk and the entropy, and exponential transforms are applied to the correlation.

the multiplicative combination:

with normed correlation:

$$C_k \cdot M_k = \mathrm{norm}\left(C(B_k)\right) \cdot \mathrm{norm}\left(\frac{1}{\log\left(M_k - \min_k(M_k) + \delta\right) + \epsilon}\right), \quad \text{or} \qquad (5.16)$$

with normed exponent of correlation:

$$\exp(C_k) \cdot M_k = \mathrm{norm}\left(e^{C(B_k)}\right) \cdot \mathrm{norm}\left(\frac{1}{\log\left(M_k - \min_k(M_k) + \delta\right) + \epsilon}\right), \qquad (5.17)$$

$\forall k \in S(W, DM)$, where $S(W, DM)$ is the set of wavelets $W$, each combined with all possible denoising methods $DM$; $\delta, \epsilon \in \mathbb{R}$, with $\delta, \epsilon > 0$; $\mathrm{norm}: \mathbb{R} \to (0,1]$ is a linear mapping transformation; and $M$ is either $R$ or $E$.
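Putting the pieces together, a hedged sketch of the additive combination of Eq. 5.14 might look as follows; `norm_unit` re-implements the linear (0,1] mapping sketched earlier, and the input lists are assumed to be indexed by the candidates k in S(W, DM):

```python
import numpy as np

def norm_unit(v, eps=1e-12):
    """Linear map onto (0, 1], as sketched above."""
    v = np.asarray(v, dtype=float)
    span = v.max() - v.min()
    return np.ones_like(v) if span == 0.0 else (v - v.min()) / span * (1.0 - eps) + eps

def additive_score(correlations, measure_values):
    """Additive combination of Eq. 5.14; the smallest score marks the
    best wavelet/denoising-method pair k in S(W, DM)."""
    c_term = norm_unit(-np.asarray(correlations, dtype=float))
    m_term = norm_unit(np.log(norm_unit(measure_values)))
    return c_term + m_term

# best candidate: best_k = int(np.argmin(additive_score(C_values, R_values)))
```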