
2.4 Boundary Conditions

2.4.3 Neumann Boundary Conditions

For the Neumann boundary condition, we assume that the data outside f are a reflection of the data inside f. More precisely, we set





f_0 = f_1, f_{-1} = f_2, ..., f_{-m+1} = f_m

and

f_{n+1} = f_n, f_{n+2} = f_{n-1}, ..., f_{n+m} = f_{n-m+1}.   (2.37)


Figure 2.12: Homogeneous Neumann boundary condition. (a) An original MRI head image. (b), (c) The homogeneous Neumann boundary condition can be implemented by mirroring several boundary pixels in the four directions; Eq. 2.37 corresponds to the standard case of mirroring one boundary pixel in each of the four directions.

Thus the original equation becomes

Af = [(0 | T_l)J + T + (T_r | 0)J] f = g   (2.38)

where J is the n-by-n reversal matrix. We remark that the coefficient matrix A in Eq. 2.38 is neither Toeplitz nor circulant; it is a Toeplitz-plus-Hankel matrix. Although these matrices have a more complicated structure, the matrix A can always be diagonalized by the discrete cosine transform matrix provided that the blurring function h is symmetric, i.e., h_j = h_{-j} for all j. It follows that Eq. 2.38 can be solved using three fast cosine transforms (FCTs) in O(n log n) operations. This approach is computationally attractive because the FCT requires only real operations and is about twice as fast as the FFT [160]. Thus solving a problem with the Neumann boundary condition is twice as fast as solving one with the periodic boundary condition. Ng et al. [175] established similar results for the two-dimensional deblurring case, where the blurring matrices are block Toeplitz-plus-Hankel matrices with Toeplitz-plus-Hankel blocks (BTHTHB).
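For concreteness, the following one-dimensional sketch (our own illustration in Python/SciPy, not code from [175]) assumes a symmetric kernel h of length 2m+1: SciPy's 'reflect' extension reproduces the reflection of Eq. 2.37, and since the resulting Toeplitz-plus-Hankel matrix is diagonalized by the orthonormal DCT-II, the solve reduces to cosine transforms.

```python
import numpy as np
from scipy.fft import dct, idct
from scipy.ndimage import convolve1d


def neumann_blur(x, h):
    """Apply the Neumann (reflexive) blurring matrix A of Eq. 2.38 to x.
    SciPy's 'reflect' mode realizes f_0 = f_1, ..., f_{n+1} = f_n (Eq. 2.37)."""
    return convolve1d(x, h, mode='reflect')


def neumann_deblur(g, h):
    """Solve A f = g with fast cosine transforms, assuming h_j = h_{-j}.
    Since A = C^T diag(lam) C with C the orthonormal DCT-II matrix,
    the eigenvalues are lam = (C A e_1) / (C e_1) entrywise."""
    n = g.size
    e1 = np.zeros(n)
    e1[0] = 1.0
    lam = dct(neumann_blur(e1, h), norm='ortho') / dct(e1, norm='ortho')
    # In practice lam may contain near-zero entries; a regularized solve
    # would replace the plain division below.
    return idct(dct(g, norm='ortho') / lam, norm='ortho')


# Example: blur and recover a random signal with a symmetric kernel (m = 1).
h = np.array([0.25, 0.5, 0.25])
f = np.random.rand(64)
g = neumann_blur(f, h)
print(np.allclose(neumann_deblur(g, h), f))
```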

In the PDE setting, the Neumann boundary condition takes the form ∂f/∂n(x, t) = 0 on ∂R. As discussed in [7], it is a natural choice in image diffusion: it corresponds to reflecting the image across the boundary and has the advantage of neither imposing any value on the boundary nor creating "edges" on it, as shown in Fig. 2.12. Neumann boundary conditions work well because diffusion-based image processing methods formulated as PDEs then guarantee conservation properties as well as a continuous extension of the image at its boundary. If we assume that the boundary of the image is an arbitrary cutoff of a larger scene in view, the Neumann boundary condition is therefore a natural method.

There is also a boundary condition that is not often used. Reflexive boundary conditions imply that the scene outside the image boundaries is a mirror image of the scene inside the image boundaries; in this case A is the sum of a block Toeplitz matrix with Toeplitz blocks (BTTB) and a block Hankel matrix with Hankel blocks (BHHB). This case is similar to zero boundary conditions, except that the values padded around the outside of the image are obtained by reflecting the pixel values from inside the image boundaries. However, using reflexive boundary conditions for image deconvolution with space-invariant kernels is bound to fail if the kernel is not symmetric with respect to the image boundary directions: the reflected parts of the image would be blurred with a reflected kernel, violating the model assumptions.
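For illustration, the three paddings discussed in this section can be compared directly with NumPy (the short array f is just an example):

```python
import numpy as np

f = np.array([1.0, 2.0, 3.0, 4.0])

print(np.pad(f, 2, mode='constant'))   # zero BC:      [0. 0. 1. 2. 3. 4. 0. 0.]
print(np.pad(f, 2, mode='wrap'))       # periodic BC:  [3. 4. 1. 2. 3. 4. 1. 2.]
print(np.pad(f, 2, mode='symmetric'))  # reflexive BC: [2. 1. 1. 2. 3. 4. 4. 3.]
```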

Based on variational regularization, we model unsupervised blur identification, image restoration, and segmentation as a convex problem, because convex problems can be solved reliably and efficiently. Several basic convex components help to keep the optimization convex, so that the results can still be achieved in a reliable and robust manner. When many feasible solutions are consistent with both the known prior information and the measured blurred image, a continuous, strictly convex functional is defined that assigns a cost to each feasible solution, and the restored image is the one that minimizes this cost. Two factors affect the complexity of the algorithms: the cost functional itself and the set of constraints on the restored image. Different strategies and choices may influence the restoration results to different degrees. In addition, the prior information about the image and noise must be expressed in the form of equality or inequality constraints.
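As an illustration only (not the specific model developed later), the following CVXPY sketch shows these ingredients: a convex cost functional combining a quadratic data term with a total-variation-like penalty, and an inequality constraint encoding prior information (here nonnegativity); the blur matrix A, data g, and weight lam are placeholders.

```python
import numpy as np
import cvxpy as cp


def restore(A, g, lam=0.1):
    """Sketch of a convex restoration problem: quadratic data fidelity plus a
    total-variation-like penalty, with nonnegativity as an inequality constraint."""
    n = A.shape[1]
    f = cp.Variable(n)
    D = np.eye(n, k=1)[:-1] - np.eye(n)[:-1]        # first-order difference matrix
    cost = cp.sum_squares(A @ f - g) + lam * cp.norm1(D @ f)
    problem = cp.Problem(cp.Minimize(cost), [f >= 0])
    problem.solve()
    return f.value
```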


3 Bayesian Model Selection and Nonparametric Blur Identification

“A knowledge of statistics is like a knowledge of foreign languages or of algebra; it may prove of use at any time under any circumstances.” – A. L. Bowley

In the previous chapter, we recalled and discussed regularization for ill-posed inverse problems, PDE-based image diffusion and energy functionals, and convex optimization for the construction of an image deblurring and denoising framework. In this chapter, we will first introduce the important concepts of information theory and related model selection methods as a necessary preparation for the further discussion. Most algorithms in statistical learning, including feature extraction and classification, are essentially processes of information collection, transmission, and utilization. The introduction of information theory into statistical learning opens a new perspective for exploring the nature of these statistical learning topics. Secondly, we will introduce a nonparametric, model-selection-based method for blur identification and analyze the experimental results for large image or video sequences. The integration of nonparametric model selection techniques and locally parametric optimization techniques for blur identification is presented in the next chapter.

3.1 Introduction

In pattern classification, three main estimation techniques are intensively investigated, namely parametric estimation techniques, nonparametric estimation techniques, and semi-parametric estimation techniques.

Parametric estimation techniques are based on the assumption that the data set follows a predefined distribution [63], [128], [188], which describes the data set in a compact way. For example, parametric density estimation assumes the data are drawn from a density in a parametric class, e.g., the class of Gaussian densities. The estimation problem thus reduces to finding the parameters of the Gaussian that best fits the data set. However, in most pattern recognition and model selection problems, this assumption is suspect: the common parametric forms rarely fit the densities actually encountered in practice.
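For instance, fitting a single Gaussian to a sample reduces to two closed-form maximum-likelihood estimates; a minimal sketch (the sample array X is arbitrary):

```python
import numpy as np


def fit_gaussian(X):
    """Maximum-likelihood parameters of a Gaussian fitted to samples X (shape n x d)."""
    mu = X.mean(axis=0)
    cov = np.cov(X, rowvar=False, bias=True)   # bias=True gives the 1/n MLE estimate
    return mu, cov
```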

The Gaussian mixture modeling technique, with isotropic or anisotropic covariance, is known as a semi-parametric density estimation technique. It lies between the two extremes of parametric and nonparametric density estimation. To apply mixture models, we first need to address some basic questions: how many components to use and which class of component densities should be used. These questions therefore constitute a model selection problem.
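A minimal sketch of this selection question using scikit-learn (the synthetic data and the candidate ranges are illustrative only): each candidate pair of component count and covariance class is scored by an information criterion, here the BIC, and the lowest-scoring model is kept.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Two synthetic clusters, for illustration only.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0.0, 1.0, size=(200, 2)),
               rng.normal(4.0, 0.5, size=(200, 2))])

# Score every (number of components, covariance class) candidate by BIC:
# this is the model selection problem in miniature.
candidates = [(k, cov) for k in range(1, 6) for cov in ('spherical', 'full')]
best = min(candidates,
           key=lambda kc: GaussianMixture(n_components=kc[0],
                                          covariance_type=kc[1],
                                          random_state=0).fit(X).bic(X))
print('selected (components, covariance):', best)
```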

Nonparametric estimation techniques can be used with arbitrary distributions and without any assumptions, i.e., the forms of the underlying densities are unknown. There are several types of nonparametric methods of interest. One consists of procedures for estimating the density functions from sample patterns. If these estimates are satisfactory, they can be substituted for the true densities when designing the classifier. Another consists of procedures for directly estimating the a posteriori probabilities. This is closely related to nonparametric design procedures such as the nearest-neighbor rule, which bypasses probability estimation and goes directly to decision functions. In this chapter, we focus on some nonparametric techniques to classify blur kernels and blurred images in large image or video sequences.
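As a minimal sketch of the nearest-neighbor rule mentioned above (the function name and arrays are illustrative), the decision is made directly, without estimating any densities:

```python
import numpy as np


def nearest_neighbor_classify(X_train, y_train, x):
    """1-NN rule: assign x the label of its closest training sample,
    going directly to a decision function without density estimation."""
    distances = np.linalg.norm(X_train - x, axis=1)
    return y_train[np.argmin(distances)]
```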