

6.1.3 Our Approach: Perceptual Image Segmentation and Restoration

[Figure 6.3 diagram: Data is mapped to a Function by induction, the Function is applied to Data by deduction, and transduction links Data to Data directly.]

Figure 6.3: The underlying relationship among induction, deduction and transduction.

Statistical methods incorporate additional information for image segmentation. Swendsen-Wang Cuts [20] uses Bayesian-based Markov chain Monte Carlo sampling to split and merge sub-regions. The Berkeley natural boundary detector [159] was recently applied successfully to object recognition. Region cues are computed as the similarity in brightness, color, and texture between image patches, and boundary cues are incorporated by looking for the presence of an “intervening contour”. The self-tuning clustering method [286] shows that local adaptation of the scaling parameter improves image segmentation results.
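As a hedged illustration, the local scaling idea of [286] replaces a single kernel width with a per-point scale, e.g. the distance to the k-th nearest neighbour; the sketch below (the function name and the default k=7 are our choices, not from the source) builds such an affinity matrix:

```python
import numpy as np

def self_tuning_affinity(points, k=7):
    """Affinity matrix with locally adapted scales (in the spirit of [286]).

    Each point i gets its own scale sigma_i, the distance to its k-th
    nearest neighbour, so A_ij = exp(-d_ij^2 / (sigma_i * sigma_j)).
    """
    d = np.linalg.norm(points[:, None, :] - points[None, :, :], axis=-1)
    # k-th nearest neighbour distance per point (column 0 is the point itself)
    sigma = np.sort(d, axis=1)[:, min(k, len(points) - 1)]
    A = np.exp(-d ** 2 / (sigma[:, None] * sigma[None, :] + 1e-12))
    np.fill_diagonal(A, 0.0)
    return A
```

With a tight and a distant cluster, within-cluster affinities stay high while cross-cluster affinities vanish, without any global scale tuning.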

Spectral Clustering for Segmentation Given Partial Constraints

Researchers have tried to supply interactive constraints to guide the segmentation. Yu et al. [282] enforce grouping smoothness and fairness on labeled data points so that sparse partial grouping information can be effectively propagated to the unlabeled data. The partial grouping prior used as constraints can often be derived from a crude spatial attentional map that marks common salient features and focuses on expected object locations. By generalizing the Rayleigh-Ritz theorem to projection matrices, the global optimum in the relaxed continuous domain is obtained by eigen-decomposition, from which a near-global optimum to the discrete labeling problem can be recovered effectively.
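A minimal sketch of the relax-then-round strategy behind such methods (without the partial-grouping constraints of [282], which require the projector generalization): solve the continuous relaxation by eigen-decomposition of the normalized affinity matrix and round the second eigenvector back to discrete labels. The function name and the median threshold are illustrative assumptions:

```python
import numpy as np

def spectral_bipartition(W):
    """Two-way spectral partition: relax the discrete labeling to the
    continuous domain, solve it exactly by eigen-decomposition of the
    normalized affinity, then round the second eigenvector to labels."""
    d = W.sum(axis=1)
    D_isqrt = np.diag(1.0 / np.sqrt(d + 1e-12))
    M = D_isqrt @ W @ D_isqrt                  # normalized affinity
    vals, vecs = np.linalg.eigh(M)             # global optimum of the relaxation
    v = D_isqrt @ vecs[:, -2]                  # second-largest eigenvector
    return (v > np.median(v)).astype(int)      # simple rounding to 0/1 labels
```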

Since the publication of Karmarkar’s famous paper [122] in 1984, the area of interior-point polynomial-time methods for convex programming has been intensively developed by many researchers, focusing on linear and quadratic programming. Problems of special interest covered by this approach are those with positive semidefinite matrices as variables. These problems include numerous applications in modern control theory, combinatorial optimization, graph theory and computer science. Keuchel et al. [130] apply semidefinite programming relaxations to the combinatorial problem of minimizing quadratic functions in binary decision variables subject to linear constraints. They combine an interior-point method (convex programming) with random hyperplane rounding to achieve parameter-free, high-quality combinatorial solutions based on spectral graph theory. Recently, the random walker algorithm [96] has used a similar affinity function for the segmentation problem, but the affinity value is computed after applying a linear transformation to the distance measure through a human interactive interface.
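The random-hyperplane step can be sketched as follows for a graph-cut objective; this is a generic Goemans-Williamson-style rounding under our own naming, not the exact procedure of [130]. Here `V` holds (as rows) the unit vectors of a Gram factorization of the SDP solution:

```python
import numpy as np

def random_hyperplane_rounding(V, W, trials=200, seed=0):
    """Round an SDP relaxation to +/-1 labels: each random hyperplane
    through the origin splits the relaxation vectors V (rows) into two
    groups; keep the split with the largest cut weight under W."""
    rng = np.random.default_rng(seed)
    best_x, best_cut = None, -np.inf
    for _ in range(trials):
        r = rng.standard_normal(V.shape[1])
        x = np.where(V @ r >= 0, 1.0, -1.0)       # side of the hyperplane
        cut = 0.25 * (W.sum() - x @ W @ x)        # total weight of cut edges
        if cut > best_cut:
            best_x, best_cut = x, cut
    return best_x, best_cut
```

For vectors that the relaxation has already driven to nearly antipodal positions, almost every hyperplane recovers the optimal binary cut.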

6 Nonuniform Blurred Image Identification, Segmentation and Restoration

perceptually identify and segment blurred regions or objects. The second step is to identify blur kernels and perceptually restore these identified and segmented regions or objects without influencing unblurred regions or objects.

Regularized Spectral Graph Clustering

We present a novel approach to perceptual image segmentation in this chapter; here we review some closely related theory and work. Any supervised learning algorithm can be applied to an inference problem, e.g., by training a classifier on a certain data set and then using the trained classifier to predict the labels of the unlabeled objects. Following this approach, one will have estimated a classification function defined on the whole domain of the data set before predicting the labels of the unlabeled objects. According to Vapnik [248] and Zhou and Schölkopf [300], [301] (see also pages 221-232), estimating a classification function defined on the whole domain is more complex than the original problem, which only requires predicting the labels of the given unlabeled objects; a better approach is to predict those labels directly. Therefore, we consider estimating a discrete classification function which is defined on the given objects only. Such an estimation problem is called transductive inference [248], [300]. In psychology, transductive reasoning means linking particulars to particulars with no consideration of general principles; it is generally used by young children. In contrast, deductive reasoning, which is used by adults and older children, is the ability to come to a specific conclusion based on a general premise. The relationship is illustrated in Fig. 6.3. It is well known that many meaningful inductive methods such as support vector machines (SVMs) can be derived from a regularization framework based on an empirical cost and a regularization term. Inspired by this work [248], [300], we construct an approach that integrates regularization theory and spectral graph theory. Much existing work, including spectral clustering, transductive inference and dimensionality reduction, can be understood within this framework.
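The regularization framework referred to above can be written, in the discrete graph form of [300], as an empirical cost plus a graph smoothness term; the following is a sketch of that functional in our notation ($f$ the classification function on the vertices, $y$ the given labels, $w_{ij}$ edge weights, $d_i$ vertex degrees, $\lambda$ the trade-off parameter):

```latex
f^{*} = \arg\min_{f} \; \sum_{i} (f_i - y_i)^2
  \;+\; \lambda \sum_{i,j} w_{ij}
  \left( \frac{f_i}{\sqrt{d_i}} - \frac{f_j}{\sqrt{d_j}} \right)^{2}
```

The first term keeps $f$ close to the labels given on part of the data; the second penalizes functions that vary strongly across heavily weighted edges, which is how sparse label information propagates to the unlabeled objects.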

We formulate the problem of partially-blurred image restoration as including identification, partition and restoration of blurred regions or objects. To motivate the algorithm, the differing characteristic properties [80], [153] (gradient, frequency, entropy, etc.) [67], [191] between blurred and unblurred regions or objects, endowed with pairwise relationships, can be naturally represented as a graph. We treat blind image restoration (BIR) of partially-blurred images as a combinatorial optimization problem [130], [85], [80] based on regularization theory [241], spectral clustering theory on discrete graph spaces [51] and its related algebraic graph transformation [65], [80], [226], [282]. Connections between some of these interpretations are also observed in [300], [282], [142], [51] based on transductive inference and differential geometry. More importantly, this integration brings crucial insights into these theories, their underlying relationships and their potential roles.
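To make the pairwise construction concrete, here is a hedged sketch of two of the cited cue types (gradient energy and intensity entropy) computed per image patch; the patch size, bin count and function name are illustrative assumptions, not the thesis’s exact measures:

```python
import numpy as np

def patch_features(img, size=8):
    """Per-patch gradient energy and intensity entropy, two cues of the
    kind cited ([80], [153]) for separating blurred from unblurred regions.
    Expects a grayscale image with values in [0, 1]."""
    feats = []
    for r in range(0, img.shape[0] - size + 1, size):
        for c in range(0, img.shape[1] - size + 1, size):
            p = img[r:r + size, c:c + size].astype(float)
            gy, gx = np.gradient(p)
            grad_energy = np.mean(gx ** 2 + gy ** 2)   # blur suppresses this
            hist, _ = np.histogram(p, bins=16, range=(0, 1))
            prob = hist / hist.sum()
            prob = prob[prob > 0]
            entropy = -np.sum(prob * np.log2(prob))    # blur narrows the histogram
            feats.append((grad_energy, entropy))
    return np.array(feats)
```

Pairwise distances between such feature vectors would then supply the edge weights of the graph on which the combinatorial optimization runs.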

As we know, segmentation is only a computing process, not a final target: a meaningful segmentation needs to be integrated with a specific task. Discrete regularization can achieve meaningful segmentation despite the intrinsic ambiguities of a given image because this approach induces and stores high-level knowledge (top-down: identify and segment partially-blurred regions or objects) to control low-level image processing (bottom-up: pairwise measures between blurred and unblurred pixels and regions) via regularization. For example, the penalty term in regularization becomes a carrier of learned priors with certain smoothing weights and scales. Also, non-negative physical constraints are well matched to and easily integrated into the discrete regularization; for example, blur kernels and images are non-negative. The resulting simplicity of this approach therefore differs in an interesting way from algorithms generated without the non-negativity constraint and without descriptive priors.

A more fundamental problem that arises in ill-posed inverse problems is the scale problem: which scale is the right resolution to operate on? Scale-space theory [191], [267], [67] considers the behavior of the result across a continuum of scales. On the other hand, scale-space theory is an asymptotic formulation of Tikhonov regularization [241]; within regularization theory, the concept of scale relates quite directly to the regularization parameter.

The discrete regularization can obtain an optimal regularization parameter as the optimal scale for the associated instance. The globally optimal solution is guaranteed to relate directly to the energy function rather than to a numerical problem during the minimization. Therefore, the consistency of multiple levels of local distributions and global convergence can be achieved in a reliable and robust manner.
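The Tikhonov connection can be made explicit. For a blurred observation $g = K f + n$ with blur operator $K$ and noise $n$, a sketch of the regularized reconstruction, including the non-negativity constraint discussed above, is

```latex
f_{\lambda} = \arg\min_{f \ge 0} \; \| K f - g \|^{2} + \lambda \, \| \nabla f \|^{2}
```

where the regularization parameter $\lambda$ plays the role of scale: a larger $\lambda$ yields a smoother, coarser solution, which is why selecting $\lambda$ per instance amounts to selecting the operating scale.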

In summary, the main objective of standard regularization techniques is to obtain a reasonable reconstruction which is resistant to noise in inverse problems. Building on these inherent advantages, discrete regularization converts high-level targets (human demands) into guidance for low-level image processing and learns the optimal scale for achieving global convergence over multiple levels of local distributions. Conceptually, the discrete regularization paradigm also reveals the roles of some well-known optimization algorithms. Algorithms such as graph cuts [132] and variational regularization [184], [173], [267] can be viewed as discrete regularization [24] with energies in binary discrete spaces or in continuous bounded-variation spaces, respectively. Compared to Markov random field based stochastic optimization approaches [85], [80], this paradigm in the discrete graph space is optimized in a deterministic way.

Natural Image Statistics and Variational Bayesian Learning based Image Restoration

In the second step, the identified and segmented blurred regions or objects are mostly non-stationarily blurred, which means that we cannot directly represent the real blur kernels with simple parametric blur kernels. Therefore, building on previous work, we reformulate and extend our previous double regularized Bayesian estimation approach into a more tractable variational Bayesian learning approach based on natural image statistics.

Our work relates to statistical approximate inference [181], variational free energy [183], variational Bayesian learning [14], ensemble learning [114], [27], [166], [167], natural image statistics based image restoration [227], [209], [73], [109] and variational methods in graphical models [121]. In Bayesian estimation we may, in general, consider three approaches to determining the posterior distribution of the weights. The first is to find the maximum of the posterior distribution and then fit a Gaussian function centered on this maximum. The second is to express the posterior distribution in terms of a sample of representative vectors generated using Monte Carlo techniques. The third, Bayesian ensemble learning, was first introduced by Hinton [114], [27] and further developed by Miskin and MacKay [166], [167]. Although Bayesian estimation provides a structured way to include prior knowledge about the quantities to be estimated, it is often intractable to perform inference using the true posterior density over the unknown variables, especially for ill-posed inverse problems. Ensemble learning allows the true posterior to be approximated by a simpler distribution for which the required inferences are tractable.
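In ensemble learning the approximating distribution $q(\theta)$ is fitted by maximizing a variational free energy, equivalently minimizing the Kullback-Leibler divergence to the true posterior; a standard sketch of the decomposition, with $D$ the observed data and $\theta$ all unknowns, is

```latex
\ln p(D) = \underbrace{\int q(\theta) \ln \frac{p(D,\theta)}{q(\theta)} \, d\theta}_{\mathcal{F}(q)}
 \;+\; \mathrm{KL}\!\left( q(\theta) \,\|\, p(\theta \mid D) \right)
```

Since $\ln p(D)$ is fixed and the KL term is non-negative, maximizing $\mathcal{F}(q)$ over a tractable family of distributions drives $q(\theta)$ toward the true posterior while keeping every required expectation computable.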

Moreover, a reasonable and effective prior


probability is important in Bayesian learning. Natural image learning can help us find a translation- and scale-invariant spatial prior distribution. In particular, the approach makes effective use of natural image statistics throughout the whole variational learning scheme. Our experiments show that the results of the algorithm are superior for this type of blurred image. The scheme can be further extended to the restoration of other types of blurred images in real environments.