
Edgels

In the document Error Propagation (pages 71-75)

The work described in this thesis is completely edge-based. All features described later on can ultimately be reduced to edgels (edge elements). We can distinguish two kinds of edges in 3D. The first one corresponds to a change in luminance, hue, saturation, or all three within one surface. These changes, which we will call surface markings, are always detectable using an appropriate setup. The second kind of edge corresponds to a surface discontinuity. This is not necessarily associated with any apparent change in visual properties, and the detectability of these kinds of edges within a certain image depends on the object's orientation towards the camera, lighting, and other external conditions. Figure 4.1 shows examples of surface markings as well as of visible and invisible surface discontinuities.

These edgels are located, often with subpixel accuracy, in the image using an edge detector [24]. The next two sections describe possible sources of error in the location of the edgels and how to model this error, as well as a possible parameterisation of edgel location and its probability distribution.

4.2.1 Error Sources

So what are the particular types of errors encountered in the imaging (and reconstruction) process, and which of those will I address in this thesis? Aberrations of the lens, which have been well documented in many publications [16, 18, 86, 144], include chromatic aberrations (axial and lateral) as well as monochromatic aberrations (also called Seidel aberrations after an 1857 paper by Ludwig von Seidel): spherical aberration, coma, astigmatism, field curvature, and curvilinear (barrel and pincushion) distortions.

In practical applications we also see vignetting, flares, and diffraction, and of course simple defocus. Additional errors are introduced by the CCD chip, foremost of course the discretisation itself, but we are also dealing with (thermal) pixel noise and with differences in the sensitivity of neighbouring (or further away) sensors, which create a bias. In most of today's 1-chip colour cameras we get additional errors due to the interpolation of colour information from the mosaicing (usually Bayer) filter and possibly also from lossy compression (usually JPEG), both of which are particularly pronounced near edges. And finally, as a last source of error, we also have the effect of the edge detector itself.

The error sources given above can be grouped into two categories: reversible and non-reversible effects. Curvilinear distortions and bias in the individual pixel values are easily removed by a simple calibration of the camera; as such they are really systematic errors and will be ignored in the following. The same is true for some of the errors introduced by the edge detector [106].

Most of the other lens effects, however, although quite systematic in their formation, are not easily reversible. In their sum total they serve to make the image less sharp, and as such act as a low-pass filter blurring the image; approximating their influence by a convolution with a Gaussian is not uncommon [86]. If we then proceed to apply an edge filter like the well-known Canny filter to the image, this additional, non-uniform blurring will have a negative effect on the positional accuracy of the edgels found [24], at least in the neighbourhood of texture or additional edges. Lossy compression and demosaicing, on the other hand, tend to introduce random artefacts near edges, as will the pixel noise of the CCD chip, and these will directly influence the positional accuracy, since they violate the continuity assumptions which are the foundation of subpixel approximation.
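The influence of blur and of nearby edges on subpixel localisation can be illustrated with a small sketch. This is a hypothetical 1D setup of my own, not the detector of [24]: a step edge is blurred with a Gaussian, and its position is then estimated by fitting a parabola to the gradient magnitude around its maximum. An isolated edge is recovered almost exactly, while a second edge close by biases the estimate.

```python
import numpy as np

def gaussian_blur_1d(profile, sigma):
    """Convolve with a truncated Gaussian kernel; this stands in for the
    combined low-pass effect of the lens defects discussed above."""
    radius = int(3 * sigma) + 1
    x = np.arange(-radius, radius + 1)
    kernel = np.exp(-x**2 / (2 * sigma**2))
    kernel /= kernel.sum()
    return np.convolve(profile, kernel, mode="same")

def subpixel_edge(profile):
    """Estimate an edge position with subpixel accuracy by fitting a
    parabola to the gradient magnitude around its maximum."""
    g = np.abs(np.gradient(profile))
    i = int(np.argmax(g))
    denom = g[i - 1] - 2 * g[i] + g[i + 1]
    offset = 0.5 * (g[i - 1] - g[i + 1]) / denom if denom != 0 else 0.0
    return i + offset

# An isolated step edge between pixels 19 and 20 (true position 19.5)
# survives moderate blur with essentially no positional error:
step = np.where(np.arange(40) < 20, 0.0, 1.0)
print(subpixel_edge(gaussian_blur_1d(step, sigma=1.5)))  # ~19.5

# A second edge close by (a bar covering pixels 20 to 23) violates the
# continuity assumption and biases the estimate away from 19.5:
bar = ((np.arange(40) >= 20) & (np.arange(40) < 24)).astype(float)
print(subpixel_edge(gaussian_blur_1d(bar, sigma=1.5)))  # shifted left of 19.5
```

The bias in the second case arises because the gradient lobes of the two edges overlap after blurring, skewing the parabola fit, which is exactly the neighbourhood effect described above.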

Accurately modelling all these different error sources and their effect on the positional accuracy of edgels is well beyond the scope of this thesis. However, Figure 4.2 shows the histogram of the positional errors for typical edgels along a typical (but perfectly straight) line for images taken with two different cameras, as well as the equivalent Gaussian distribution (i. e. one with the same standard deviation). And although the measured distributions are clearly not Gaussian (note in particular the high number of nearly accurate edgels, about 5 % to 10 % for this particular test case), they are nonetheless reasonably well approximated by a Gaussian; this approximation is in fact the one I will use for the remainder of this thesis.
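The notion of an "equivalent Gaussian" can be sketched numerically. The snippet below uses made-up numbers, not the measured data of Figure 4.2: a two-component mixture reproduces the over-pronounced peak of nearly accurate edgels, and the measured fraction of such edgels is compared with what a Gaussian of the same standard deviation would predict.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical edgel deviations (in pxl) from a fitted line: a narrow
# component models the surplus of nearly accurate edgels, a wide one the
# body of the distribution.  All numbers are illustrative only.
deviations = np.concatenate([
    rng.normal(0.0, 0.02, 200),
    rng.normal(0.0, 0.11, 1800),
])

# The "equivalent Gaussian" has zero mean and the same standard
# deviation as the measured deviations.
sigma = deviations.std()

# Fraction of nearly accurate edgels (|error| < 0.02 pxl): measured vs.
# the Gaussian prediction P(|X| < a) = erf(a / (sigma * sqrt(2))).
measured = float(np.mean(np.abs(deviations) < 0.02))
predicted = math.erf(0.02 / (sigma * math.sqrt(2)))
print(f"sigma = {sigma:.3f} pxl, measured = {measured:.3f}, "
      f"Gaussian predicts {predicted:.3f}")
```

The mixture yields a noticeably larger measured fraction than the equivalent Gaussian predicts, mirroring the excess of nearly accurate edgels visible in the histograms.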

It might be worth pointing out that both histograms in Figure 4.2 have approximately the same standard deviation (around σ ≈ 0.1 pxl). This demonstrates nicely that the locational error for a perfect line is mostly a function of the sensor type used (a Bayer-type mosaicing filter in both cases). However, in reality most lines aren't quite perfect¹, and might suffer additional distortion depending on the lens used.

So in addition to the one hand-selected (perfect) line above I also calculated the histograms over all lines; those are given in Figure 4.3.

¹For this test I used lines printed on paper; since the paper wasn't perfectly flat when the images were taken, not all lines are perfectly straight.



Figure 4.2: Typical distributions of positional error for a perfect line. Plotted are the deviations of edgels from the true line for a Canon 6 Mpxl camera (left) and a Sony 800 kpxl camera (right). Overlaid are the equivalent Gaussian distributions.

Figure 4.3: Typical distributions of positional errors (below) for all edges within an image (to the left). Plotted are the deviations of edgels from the true edge for a Canon 6 Mpxl camera (left) and a Sony 800 kpxl camera (right). Overlaid are the equivalent Gaussian distributions.




Figure 4.4: Typical distribution of positional errors for all edges within a real-world image (Figure 5.5), σ ≈ 0.22 pxl, and the equivalent Gaussian distribution.

Here the two variances are clearly different for the two cameras, with σ ≈ 0.27 pxl for the Canon but only σ ≈ 0.15 pxl for the Sony: the higher-resolution camera (the Canon) registers the higher standard deviation, since the same positional error in 3D results in a bigger error (measured in pixels) in the image. The better lens of the Canon, which suffers from fewer of the above-mentioned defects than the lens of the Sony, by comparison does not have as much of an effect for the particular test image chosen.

Figure 4.4 finally shows the distribution of positional errors for a real-world image (a street scene also used throughout most of Chapter 5, e. g. in Figure 5.5); here too we see the typical, Gauss-like distribution with its over-pronounced peak for very small errors.

In this section we have seen that a Gaussian distribution is not an altogether unrealistic approximation for the particular distribution of location errors when fitting edgels. In the following I will describe how to represent the edgel coordinates and their distribution.

4.2.2 Geometric Representation

Edgels can be represented by their Euclidean coordinates within the image plane, (x, y)^T. Other possible representations include (pseudo) homogeneous coordinates,

x = (x, y, 1)^T,    (4.1)

which will be used throughout this chapter unless stated otherwise. Kanatani [69] and others suggested a parametrisation (x, y, f)^T with x² + y² + f² = 1, where f is of the same order of magnitude as x and y, often the focal length the image was taken with (if known, compare Section 2.9), the length of the image diagonal, or some other image dimension.
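The relationship between these parametrisations can be sketched as follows; the coordinates and helper names below are my own toy example, not taken from the text.

```python
import numpy as np

def pseudo_homogeneous(x, y):
    """Eq. (4.1): (x, y, 1)^T."""
    return np.array([x, y, 1.0])

def kanatani(x, y, f):
    """Kanatani-style parametrisation: (x, y, f)^T scaled to unit length,
    so that the components satisfy x'^2 + y'^2 + f'^2 = 1."""
    v = np.array([x, y, f])
    return v / np.linalg.norm(v)

def dehomogenise(v):
    """Divide by the third component.  For Eq. (4.1) this recovers the
    pixel coordinates directly; for the Kanatani form it yields the
    coordinates divided by f, so they must be rescaled by f afterwards."""
    return v[:2] / v[2]

f = 800.0                   # e.g. a focal length in pixels (if known)
p = pseudo_homogeneous(120.0, -45.0)
k = kanatani(120.0, -45.0, f)

print(dehomogenise(p))      # recovers (120, -45)
print(f * dehomogenise(k))  # recovers (120, -45) as well
print(np.linalg.norm(k))    # unit length (up to rounding)
```

Both representations carry the same two degrees of freedom; the Kanatani form merely keeps all three components numerically on the same scale, which improves conditioning in later computations.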

All coordinates computed in the image, independent of the parametrisation used, will contain errors due to the measurement process. These can be characterised by the edgels' covariance matrix, which for the pseudo homogeneous representation in Equation (4.1) would be structured as follows:

Σ_x = ( σ_x²  σ_xy  0
        σ_xy  σ_y²  0
        0     0     0 ).    (4.2)

If the error can be modelled by a Gaussian distribution, as is the case for many practical applications [100], this covariance matrix is sufficient to completely characterise the edgel's distribution. Using Equation (3.53) it is possible to calculate the covariance matrix for all other parametrisations from the covariance matrix in Equation (4.2). Note that Σ_x is of course singular, since an edgel has only two degrees of freedom, independent of the parametrisation used. It is therefore not possible to directly compute its inverse. Instead the Moore-Penrose generalised or pseudo inverse should be used, or the problem should be reduced to the equivalent problem in fewer dimensions, again using Equations (3.52) and (3.53). However, in the context of projective geometry the latter is usually not desirable. Section 4.3.2.2 shows that (4.2) can be approximated by a diagonal matrix for many practical applications — the main thrust of the argument being that the covariance along the edge is usually of no consequence.
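A small numerical sketch (with made-up variances) illustrates both points: the matrix of Equation (4.2) is rank deficient, so only its Moore-Penrose pseudo-inverse exists, and a first-order change of parametrisation propagates the covariance as J Σ Jᵀ for a Jacobian J, which is the general pattern behind propagation rules such as Equation (3.53); the toy map below is my own choice.

```python
import numpy as np

# Covariance of an edgel in pseudo homogeneous coordinates, Eq. (4.2),
# with illustrative values in pxl^2; the third row and column are zero.
sx2, sy2, sxy = 0.010, 0.012, 0.003
Sigma_x = np.array([[sx2, sxy, 0.0],
                    [sxy, sy2, 0.0],
                    [0.0, 0.0, 0.0]])

# Rank 2, hence singular: a direct inverse does not exist, but the
# Moore-Penrose pseudo-inverse does, and it satisfies the defining
# property Sigma @ pinv(Sigma) @ Sigma == Sigma.
print(np.linalg.matrix_rank(Sigma_x))                        # 2
Sigma_pinv = np.linalg.pinv(Sigma_x)
print(np.allclose(Sigma_x @ Sigma_pinv @ Sigma_x, Sigma_x))  # True

# First-order covariance propagation: for a locally linear map with
# Jacobian J, the covariance transforms as J Sigma J^T.  As a toy map,
# rotate the image plane by 30 degrees and scale it by 2:
c, s = np.cos(np.radians(30.0)), np.sin(np.radians(30.0))
J = np.array([[2 * c, -2 * s, 0.0],
              [2 * s,  2 * c, 0.0],
              [0.0,    0.0,   1.0]])
Sigma_v = J @ Sigma_x @ J.T
print(Sigma_v[2, 2])   # still 0: the edgel keeps two degrees of freedom
```

Note how the zero third row and column survive the transformation: no parametrisation change can add a degree of freedom, which is exactly why the pseudo-inverse (or a reduction to fewer dimensions) is needed.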
