
To perform the closest-vector search efficiently when building larger codebooks, we adopt the fast codeword search algorithm developed by Ra et al. [200], which is about 15 times faster than the full-search algorithm for each codebook size, e.g., size = (128, 256, 512, ...). This equal-average nearest neighbor search (ENNS) algorithm uses the mean value of an input vector to reject impossible codewords. Compared with other fast search algorithms, it also reduces the off-line computation time considerably while requiring only N additional memory locations. In the proposed algorithm, the band-passed pixel value in the codebook is treated as the label, and VQ is subsequently applied to all vectors with the same label based on the LBG algorithm [145]. The VQ can be generated in a hierarchical way. The ENNS algorithm, adapted as the kernel for VQ encoding in the proposed algorithm, is briefly described as follows:

1. Let X = (x1, x2, ..., xk) be a k-dimensional vector, and define the component sum of X as SX = x1 + x2 + · · · + xk.

2. Assume the current minimum distortion is Dmin. The main spirit of ENNS can be stated as: if (SX − SCj)² ≥ k · Dmin, then D(X, Cj) ≥ Dmin. This means Cj cannot be the nearest neighbor to X if (SX − SCj)² ≥ k · Dmin is satisfied.

3. The sum of each codeword is calculated, and these values are sorted in ascending order. The squared Euclidean distortion Dmin between the input vector and the tentative matching codeword is calculated. Then the codewords Cj for which SX ≥ SCj + (k · Dmin)^(1/2) or SX ≤ SCj − (k · Dmin)^(1/2) are eliminated.

4. The search is performed iteratively in the up and down directions along the sorted list until the nearest codeword is found.
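The four steps above can be sketched in NumPy as follows. This is a minimal illustration of the rejection rule, not the thesis implementation; the function and variable names are our own, and the tentative match is simply the codeword whose component sum is closest to that of the input.

```python
import numpy as np

def enns_search(x, codebook, sums, order):
    """ENNS sketch: find the nearest codeword to x using the
    sum-based rejection rule (SX - SCj)^2 >= k * Dmin.

    codebook : (N, k) array of codewords
    sums     : component sums of the codewords
    order    : indices sorting `sums` in ascending order
    """
    k = x.shape[0]
    sx = x.sum()
    # Tentative match: codeword whose sum is closest to SX.
    j = min(int(np.searchsorted(sums[order], sx)), len(order) - 1)
    best = order[j]
    d_min = float(np.sum((x - codebook[best]) ** 2))
    # Search up and down the sum-sorted list, rejecting codewords
    # whose sums violate the bound (they cannot be nearer).
    lo, hi = j - 1, j + 1
    while lo >= 0 or hi < len(order):
        for idx in (lo, hi):
            if 0 <= idx < len(order):
                c = order[idx]
                if (sx - sums[c]) ** 2 < k * d_min:
                    d = float(np.sum((x - codebook[c]) ** 2))
                    if d < d_min:
                        d_min, best = d, c
        # Stop once both directions fail the rejection bound.
        lo_ok = lo >= 0 and (sx - sums[order[lo]]) ** 2 < k * d_min
        hi_ok = hi < len(order) and (sx - sums[order[hi]]) ** 2 < k * d_min
        if not lo_ok and not hi_ok:
            break
        lo, hi = lo - 1, hi + 1
    return best, d_min
```

Because the sums are sorted, once the bound fails in a direction it fails for all remaining codewords in that direction, which is what permits early termination.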

To apply the VQ method to blur identification, blurred images are vector quantized so as to enhance the blur representation. Many potential features could be used to represent the blur in an image. We use local non-flat region features to train the codebook, so that the large redundancy of homogeneous image regions is avoided. Fig. 3.2 shows the results for a blurred frame with its representative vectors.
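Selecting non-flat regions can be done, for instance, with a local-variance test: only blocks whose pixel variance exceeds a threshold contribute training vectors. The sketch below is illustrative; the block size and the variance threshold `var_thresh` are assumptions, not values from the thesis.

```python
import numpy as np

def nonflat_blocks(image, block=8, var_thresh=25.0):
    """Collect non-flat training vectors from an image (sketch).

    Homogeneous (flat) blocks are discarded before codebook
    training; `var_thresh` is an assumed variance threshold.
    """
    h, w = image.shape
    vectors = []
    for i in range(0, h - block + 1, block):
        for j in range(0, w - block + 1, block):
            patch = image[i:i + block, j:j + block].astype(np.float64)
            if patch.var() > var_thresh:  # keep textured blocks only
                vectors.append(patch.ravel())
    return np.array(vectors)
```

Each retained block becomes one k-dimensional training vector (k = block² here), matching the vector form used by the ENNS search.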

3 Bayesian Model Selection and Nonparametric Blur Identification

Figure 3.3: Diagram of blur identification and finding blurred images in large video data (off-line training: training frames → VQ → multiple codebooks, starting from an initial codebook; on-line testing: video sequences → VQ encoding of test data → distortion measurement → identified blurred frames).

1. Off-line training. Blur is identified from a few dominant candidate blur functions in a set of training images. Each training set, with its related blur function, is used to train a codebook based on the LBG algorithm [87], [145], [187]. These trained codebooks can then be used to measure the similarity of other blurred images.


2. On-line testing (measuring). After the off-line training period, on-line blur identification can proceed. The fast VQ encoding method speeds up on-line blur identification in video sequences. Each frame is checked against a trained codebook via the VQ encoding approach. The distortion between the trained codebook and the testing frames is measured by the mean square error (MSE). The resulting distortion values are used to classify the video frames into different blur clusters. VQ encoding of different frames yields different mean square error distortions, based on a similarity measurement of the statistical intensity values. The testing frame with the minimum distortion has the same blur as the frame that generated the codebook.
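The on-line measurement amounts to encoding each test frame's vectors against every trained codebook and keeping the codebook with minimum encoding distortion. A minimal sketch (illustrative names; full search shown instead of ENNS for brevity):

```python
import numpy as np

def frame_distortion(frame_vectors, codebook):
    """VQ encoding distortion of a frame against one codebook:
    mean, over the frame's vectors, of the squared distance to
    the nearest codeword. Small distortion means the frame's
    blur matches the codebook's training frame."""
    d = ((frame_vectors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    return float(d.min(axis=1).mean())

def classify_frame(frame_vectors, codebooks):
    """Assign the frame to the codebook with minimum encoding MSE."""
    mses = [frame_distortion(frame_vectors, cb) for cb in codebooks]
    return int(np.argmin(mses)), mses
```

In practice the inner full search would be replaced by the ENNS kernel described above to achieve the reported speed-up.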

Experimental Results

In the first experiment, we tested simulated images to demonstrate the accuracy of VQ-based blur identification and the classification of blur-degraded images. In Fig. 3.4, three groups of images with motion blur, Gaussian blur, and mixed blur types are tested at three different signal-to-noise ratios (SNR). The testing image with the minimum VQ encoding distortion (MSE) is identified with the trained codebook. The upper-right diagram shows motion blur identification, where the codebook has a blur angle of 20 degrees. The second curve diagram shows Gaussian blur identification, where the codebook has a variance of 1.5. The third curve diagram shows blur identification for mixed blur types, with Gaussian variance = {1.5, 2.5, 3.5, 4.5, 5.5} and motion blur with different blur angles. The codebook is generated from the image with index 3. The experiment also demonstrates that the approach is robust with respect to correlated noise.

The second experiment has been performed on real-life video sequences (Fig. 3.5).

Figure 3.4: (a) Three images with 10 dB, 20 dB, and ∞ dB SNR, respectively. (b) An unblurred image with five blurred images. Right diagrams: the minimum MSE identifies the blur.

Figure 3.5: Blur identification of frames in a dendrogram (taken by a “ptgrey” video camera, 15 f/s). The abscissa is an index over 9 frames (index 012-020 of 201 frames); the ordinate denotes the encoding distortion values.

First, one blurred frame is blur-identified based on Bayesian MAP estimation in the off-line period. The VQ-based codebook of this blur-identified frame is used to check other unknown frames in the on-line period. We present the checking results as a clustering tree to demonstrate the method's efficiency. The results are visualized by a dendrogram clustering method based on the VQ encoding distortion.

From the dendrogram, we can easily see that frames with different blur status are classified into different sub-trees. The blurred frames fall into two main classes. The first main class, the images with indices {1, 2, 4}, is relatively strongly blurred. The second class, the images with indices {3, 5, 6, 7, 8, 9}, is weakly blurred or not blurred at all.
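Such a dendrogram can be produced by hierarchical clustering of the per-frame encoding distortions. The sketch below uses SciPy with hypothetical distortion values (not the thesis measurements), chosen only so that frames 1, 2, and 4 carry large distortion, matching the classification described above.

```python
import numpy as np
from scipy.cluster.hierarchy import linkage, fcluster

# Hypothetical VQ encoding distortions for 9 frames (illustrative
# numbers only): frames 1, 2, 4 are strongly blurred.
distortions = np.array([5.0, 4.8, 1.2, 5.1, 1.0, 1.1, 0.4, 0.9, 0.5])

# Hierarchical clustering of the 1-D distortion values; cutting
# the tree into two clusters separates strong from weak blur.
Z = linkage(distortions.reshape(-1, 1), method="average")
labels = fcluster(Z, t=2, criterion="maxclust")
strong = {i + 1 for i, l in enumerate(labels) if l == labels[0]}
```

Calling `scipy.cluster.hierarchy.dendrogram(Z)` on the same linkage matrix would render the clustering tree shown in Fig. 3.5.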

The images with indices {2, 4}, {5, 6}, and {7, 9} have the most similar blur status. The PSF of the images with indices 2 and 4 can easily be predicted within the cluster {1, 2, 4}, because the image with index 1 is trained as a codebook. To identify further blurred frames precisely, we can continue the on-line process and add more codebooks. The Bayesian MAP estimation for further video frames uses the prior knowledge from the blur-identified codebook and the classified datasets. Higher-accuracy PSF estimation follows the direction towards a child node of the sub-tree.

Figure 3.6: (a) PSNR-MSE distribution for different codebook sizes. (b) The dendrogram of 18 frames (index: 012-029).

In Fig. 3.6(a), the influence on blur identification is also evaluated by varying the codebook size. In this case, codebooks with 256 blocks and 64 dimensions per block yield encoding distortions over a large range. Large encoding distortions lead to a distinct classification, and the codebook size is selected based on this criterion. The PSNR-MSE diagram is drawn by measuring the relationship between the image degradation and the VQ encoding MSE. The image degradation is quantified by the peak signal-to-noise ratio (PSNR):

PSNR = 10 log10(255^2 / MSE) (dB)    (3.25)
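Eq. (3.25) translates directly into code; the sketch below assumes 8-bit images (peak value 255), as the formula does.

```python
import math

def psnr(mse, peak=255.0):
    """PSNR in dB from a mean square error, per Eq. (3.25).

    `peak` is the maximum pixel value (255 for 8-bit images).
    """
    return 10.0 * math.log10(peak ** 2 / mse)
```

For example, an MSE of 650.25 on 8-bit data corresponds to 255²/650.25 = 100, i.e., a PSNR of 20 dB.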

In Fig. 3.6(b), we apply the algorithm to more images: 18 frames with consecutive indices are classified. The dendrogram in Fig. 3.6(b) has a sub-tree structure similar to that of the dendrogram in Fig. 3.5. The blurred frames are classified and added to each sub-tree.

Compared to existing methods, the approach can efficiently find blurred images in different groups within a given video sequence. The combination of an off-line and an on-line phase makes the on-line performance real-time. The approach is thus more practical in different video acquisition environments.