
When reconstructing an undersampled radial data set by optimizing Eq. (5.2), the obtained image still exhibits streaking artifacts. This is not surprising, as the procedure measures the accuracy of the estimate only at the acquired spokes and nowhere else in k-space. To illustrate the point, Figure 5.2 shows two reconstructions of a rectangle from just two spokes. While Figure 5.2a represents the true object, the image in Figure 5.2b is degraded by streaking artifacts. Equation (5.2) cannot tell which solution is better because, in k-space, both solutions yield an identical pattern at the positions of the spokes, and differences occur only in between the spokes. In other words, because the estimate's FT is projected onto a limited set of spokes, which is an idempotent operation, it is impossible to recover the missing information with a "plain" deconvolution approach like Eq. (5.2).
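The idempotence of this projection can be made concrete with a toy NumPy example. Here a random Cartesian sampling mask stands in for the radial spoke positions (a simplification; the mask and array sizes are illustrative, not taken from the text):

```python
import numpy as np

# Hypothetical binary mask marking the acquired k-space positions
# (a Cartesian stand-in for the radial spoke pattern).
rng = np.random.default_rng(0)
mask = rng.random((8, 8)) < 0.3

def project(k):
    """Keep a k-space estimate only at the acquired sample positions."""
    return np.where(mask, k, 0.0)

# Applying the projection twice changes nothing: P(P(k)) = P(k),
# so a data-consistency term alone cannot restore the unmeasured gaps.
k = rng.standard_normal((8, 8))
twice = project(project(k))
```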

In order to obtain a better estimate than the regridding solution, it is, therefore, necessary to extend the cost function (5.2) by some kind of complementary plausibility

Figure 5.3: Schematic diagram of the proposed iterative reconstruction technique. The procedure has been formulated as an inverse problem. The solution employs a non-linear conjugate gradient method to obtain an image estimate that complies with the measured data as well as prior knowledge.

rating. This can be achieved by adding penalty functions to the cost function, which is known as regularization. Penalty functions assign a high value to solutions that are inconsistent with a priori knowledge about the true object and, thus, drive the algorithm to find a solution complying with both the measured data and certain prior assumptions, as illustrated in the schematic flow chart shown in Figure 5.3. However, the challenge in selecting such criteria is that they must not be too specific about the object while keeping the problem optimizable. This requires the penalties to be convex functions, which can be handled with global optimization techniques. Accordingly, the regularized functional takes the form

\Phi(x) = \frac{1}{2} \, \| A x - y \|_2^2 + \sum_i \lambda_i \cdot R_i(x) , \qquad (5.5)

where R_i(x) are the penalty functions. The coefficients λ_i represent tuning factors that allow for shifting the preference from matching the measured data to satisfying the a priori knowledge. In fact, because the measured data is always contaminated by noise to some degree, the search for a perfect match to the measured data is usually not a good strategy, as it drives the image estimate to reproduce the experimental noise. To compensate for this effect, it is necessary to adjust the coefficients λ_i in accordance with the signal-to-noise ratio of the acquired data.
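Evaluating the functional in Eq. (5.5) can be sketched in a few lines of NumPy. This is a minimal illustration only: the dense matrix A stands in for the actual gridding/Fourier operator, and the penalty functions and weights are placeholders supplied by the caller:

```python
import numpy as np

def cost(x, A, y, penalties, lambdas):
    """Regularized cost Phi(x) = 0.5 * ||A x - y||_2^2 + sum_i lambda_i * R_i(x).

    A      : system matrix (dense here for illustration; in practice a
             gridding/NUFFT operator mapping the image to the spokes)
    y      : measured k-space samples
    penalties, lambdas : lists of penalty functions R_i and weights lambda_i
    """
    residual = A @ x - y
    data_term = 0.5 * np.vdot(residual, residual).real  # 0.5 * ||Ax - y||^2
    return data_term + sum(lam * R(x) for lam, R in zip(lambdas, penalties))

# Example with an identity "system" and an l1-type penalty (both hypothetical):
A = np.eye(2)
x = np.array([1.0, 2.0])
phi = cost(x, A, x.copy(), [lambda z: np.sum(np.abs(z))], [0.1])
```

Raising a λ_i shifts the minimizer away from strict data consistency toward the corresponding prior, which is exactly the noise trade-off described above.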

For radial acquisitions, there are several reasonable choices for restricting the solution space of the image estimation process. If knowledge about the size of the object is available, it is possible to penalize image intensity outside of the assumed object extent. In particular, due to the rotational symmetry of radial sampling, all image intensity outside a circular FOV can usually be considered artifactual. A corresponding penalty function can be formulated that assigns a cost to any intensity found outside this region, where cFOV denotes the circular FOV.
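A minimal NumPy sketch of such a penalty follows. The exact functional form is not reproduced in the text above, so the squared magnitude used here is an assumption; the circle inscribed in the square matrix serves as the cFOV:

```python
import numpy as np

def cfov_penalty(x):
    """Penalize image intensity outside the circular FOV.

    x : 2D image (n x n). Pixels farther than n/2 from the image center
    lie outside the circle inscribed in the square FOV and are assumed
    artifactual. Squared magnitude is one plausible (assumed) choice.
    """
    n = x.shape[0]
    u, v = np.meshgrid(np.arange(n), np.arange(n), indexing="ij")
    c = (n - 1) / 2.0                       # image center
    outside = (u - c) ** 2 + (v - c) ** 2 > (n / 2.0) ** 2
    return np.sum(np.abs(x[outside]) ** 2)
```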

Another penalty, which has turned out to be very effective in general image restoration, is the restriction of the parameter space to positive values. It can be achieved with a penalty function that assigns a cost to negative values.

Suppression of negative values prevents the algorithm from inserting negative fill values into the image, which the unconstrained algorithm tends to do in order to better match the measured data. This, however, leads to inaccurate image estimates.
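One smooth, convex way to express such a positivity constraint is a quadratic cost on the negative part; this particular form is an assumption for illustration, as the text above does not spell out the penalty:

```python
import numpy as np

def positivity_penalty(x):
    """Zero cost for non-negative values, quadratic cost for negative ones.

    Operates on the real part, since for complex-valued MRI data only a
    real-part restriction could even be contemplated (see below). The
    quadratic form is an assumed, convex choice, not taken from the text.
    """
    neg = np.minimum(x.real, 0.0)   # keep only the negative part
    return np.sum(neg ** 2)
```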

At first glance, the exclusion of negative values seems to apply ideally to the MRI situation, where the measured physical quantity, that is, the proton density modulated by some relaxation process, is a positive quantity. Unfortunately, however, the use of coil arrays with complex-valued sensitivity profiles as well as the occurrence of phase variations within the object prevent a direct application of this criterion. In fact, in most imaging situations neither the real nor the imaginary part of the image can be restricted to positive values.

A third penalty, which has been successfully employed in image restoration, is the restriction of the total variation (TV), initially presented by Rudin et al. in 1992 for denoising applications [111]. The basic assumption of a TV penalty is that the true object consists of areas with constant (or only mildly varying) intensity, which applies quite well to a large class of medical tomographic images. Thus, if the true object is piecewise constant, then the best of all image estimates that match the measured samples should be the one with the lowest derivatives at all pixel positions, that is, the one minimizing the total variation

R_{TV}(x) = \sum_i |D_x x_i| + |D_y x_i| , \qquad (5.10)

where D_x and D_y denote the derivatives in the x and y direction, respectively. The first-order derivative at the pixel position (u, v) \equiv x_{u + v \cdot n} can be calculated from the finite difference between neighboring pixels

D_x^{(1)}(u, v) = x(u, v) - x(u-1, v)
D_y^{(1)}(u, v) = x(u, v) - x(u, v-1) . \qquad (5.11)

It is very important to note that the total variation in Eq. (5.10) depends on the modulus of the derivatives. This dependency, well known as the \ell_1 norm, ensures edge preservation in the image and especially penalizes oscillations, which helps to suppress noise patterns as well as Gibbs ringing artifacts (refer to Chapter 8). Replacing the modulus by a square dependency leads to an image with global smoothness, because intensity changes between neighboring pixels become very strongly penalized.
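Eqs. (5.10) and (5.11) translate directly into array slicing. The sketch below computes the anisotropic first-order TV; boundary handling (here: differences are simply omitted at the first row/column) is an implementation choice not fixed by the text:

```python
import numpy as np

def tv_first_order(x):
    """Anisotropic first-order total variation, Eq. (5.10):
    sum over pixels of |D_x x| + |D_y x|, with the finite differences
    of Eq. (5.11). Boundary differences are omitted (assumed choice)."""
    dx = np.abs(x[1:, :] - x[:-1, :])   # D_x^(1): x(u,v) - x(u-1,v)
    dy = np.abs(x[:, 1:] - x[:, :-1])   # D_y^(1): x(u,v) - x(u,v-1)
    return dx.sum() + dy.sum()
```

A piecewise-constant image with a single vertical edge keeps a finite TV no matter how sharp the edge is, which is exactly the edge-preserving property of the ℓ1 dependence; squaring the differences instead would penalize the sharp edge far more than a smooth ramp.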

The simple use of first-order derivatives for the TV constraint (5.10) tends to create images with a patchy appearance, because in this case the TV value is minimized for regions of constant intensity. Such images are sometimes criticized as cartoon-like and unnatural looking. In contrast, when employing second-order derivatives, the TV value becomes small for image regions with constant intensity gradients

D_x^{(2)}(u, v) = x(u-1, v) - 2 \cdot x(u, v) + x(u+1, v)
D_y^{(2)}(u, v) = x(u, v-1) - 2 \cdot x(u, v) + x(u, v+1)
D_{xy}^{(2)}(u, v) = x(u, v) - x(u-1, v) - x(u, v-1) + x(u-1, v-1) . \qquad (5.12)

Therefore, it is often advantageous to use a combination of first-order and second-order derivatives

R_{TV2}(x) = \sum_i \sigma \cdot \left( |D_x^{(1)} x_i| + |D_y^{(1)} x_i| \right) + (1 - \sigma) \cdot \left( |D_x^{(2)} x_i| + |D_y^{(2)} x_i| + |D_{xy}^{(2)} x_i| \right) , \qquad (5.13)

where σ ∈ [0, 1] is a weighting factor that allows for slightly tuning the smoothness of the image appearance. It was set to σ = 0.77 according to the work presented by Geman et al. [112].
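The combined penalty of Eq. (5.13) can be sketched analogously, using the second-order differences of Eq. (5.12) on the interior pixels (boundary handling again an assumed choice):

```python
import numpy as np

def tv_combined(x, sigma=0.77):
    """Weighted first- plus second-order TV, Eq. (5.13).

    sigma balances the edge-preserving first-order term against the
    second-order term, which is small wherever the intensity gradient
    is constant. Differences are taken on interior pixels only.
    """
    d1x = np.abs(x[1:, :] - x[:-1, :])                       # D_x^(1)
    d1y = np.abs(x[:, 1:] - x[:, :-1])                       # D_y^(1)
    d2x = np.abs(x[:-2, :] - 2 * x[1:-1, :] + x[2:, :])      # D_x^(2)
    d2y = np.abs(x[:, :-2] - 2 * x[:, 1:-1] + x[:, 2:])      # D_y^(2)
    dxy = np.abs(x[1:, 1:] - x[:-1, 1:] - x[1:, :-1]
                 + x[:-1, :-1])                              # D_xy^(2)
    return sigma * (d1x.sum() + d1y.sum()) + (1 - sigma) * (
        d2x.sum() + d2y.sum() + dxy.sum())
```

For a linear intensity ramp, every second-order term vanishes, so only the σ-weighted first-order part contributes; a pure first-order TV would penalize such a ramp as heavily as it penalizes noise, which is the origin of the patchy look described above.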

The upper row of Figure 5.4 shows reconstructions of the numerical Shepp-Logan phantom from 24 spokes, obtained either by regridding or the proposed inverse formulation

Figure 5.4: Radial image reconstructions (Shepp-Logan phantom, 256 × 256 matrix) using simulated data from 24 spokes (256 samples). (Top left) Regridding and (top right) the proposed iterative technique with prior knowledge. (Bottom) Corresponding Fourier transforms. The iterative technique reconstructs the image of the object without streaking artifacts. Accordingly, its Fourier transform recovers the unmeasured gaps in k-space in between the spokes.

with penalties as presented in this section. A comparison of the images clearly demonstrates the superior performance of the iterative method in reducing streaking artifacts, which are effectively removed for the simulated data. The lower row of Figure 5.4 depicts the corresponding Fourier transforms. It turns out that the incorporation of a priori information by appropriate penalty functions leads to a proper recovery of k-space information in between the spokes.