
3.5 Directional Statistics


Statistics commonly deals with distributions in $\mathbb{R}^n$, most commonly with distributions on a line ($n = 1$) or plane ($n = 2$). Many measurements, however, are concerned with quantities of a cyclic nature, in computer vision usually angles.

Indeed, when Gauss developed the theory of errors he did so primarily to analyse certain directional measurements in astronomy. It is one of the ironies of statistics that the measurements under consideration were sufficiently accurate to allow him to develop the theory in relation to an infinite linear continuum rather than the actual topology, a sphere [95]. The subject of directional statistics received increased interest only after the 1953 landmark paper by R. A. Fisher [47] and is thus a comparatively new branch of statistics.

The aim of this section is to introduce some very basic concepts of directional statistics needed later on; the reader is referred to [95, 156] for good introductory texts on the subject.

3.5.1 Directions and Orientations

Directions in computer vision are usually associated with contours fitted to discontinuities in luminance, where one side will be lighter than the other. We can then differentiate between a contour's orientation, which can take values in the interval $\alpha_o \in [-\pi/2, \pi/2)$, and its direction, taking values in the interval $\alpha_d \in [-\pi, \pi)$.

The orientation describes a geometric entity; turning this entity by 180° does not change its representation, and the possible range of orientations is therefore limited to a semi-circle (semi-hypersphere): the directions $\alpha_o$ and $\alpha_o + 180°$ represent the same orientation $\alpha_o$. Data like this is called axial data. The direction, on the other hand, is uniquely defined by a normal vector $n = (\sin(\alpha_d), -\cos(\alpha_d))^T$ pointing from the dark to the light side of a grey-level discontinuity, as shown in Figure 3.2, making each direction unique within one complete turn of a circle (hypersphere).
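For illustration, a minimal Python sketch of the two representations (the function names are arbitrary and serve only this example): it computes the dark-to-light normal vector $n = (\sin(\alpha_d), -\cos(\alpha_d))^T$ of a direction and folds a direction onto its orientation in $[-\pi/2, \pi/2)$.

```python
import math

def direction_to_normal(alpha_d):
    """Unit normal n = (sin(a_d), -cos(a_d))^T, pointing from dark to light."""
    return (math.sin(alpha_d), -math.cos(alpha_d))

def direction_to_orientation(alpha_d):
    """Fold a direction in [-pi, pi) onto the axial range [-pi/2, pi/2)."""
    alpha_o = math.fmod(alpha_d + math.pi / 2.0, math.pi)
    if alpha_o < 0.0:
        alpha_o += math.pi
    return alpha_o - math.pi / 2.0

a = math.radians(30.0)
n = direction_to_normal(a)                       # (0.5, -0.866...)
# A direction and its 180-degree rotation describe the same orientation:
assert math.isclose(direction_to_orientation(a),
                    direction_to_orientation(a - math.pi))
```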

3.5.2 Mean and Variance

Directional data requires a notion of mean and variance different from usual statistics. Assume that two directions $\alpha_1 = 1°$ and $\alpha_2 = 359°$ are given. Naively applying Equation (3.19) to calculate the mean would give a value of $\bar{\alpha} = 180°$, while intuition tells us that $\bar{\alpha} = 0°$. If, however, directions are instead understood as points on the unit circle⁷ $x_i = (\cos(\alpha_i), \sin(\alpha_i))^T$ (or, alternatively, vectors of unit length), we can calculate⁸:

$$
\begin{pmatrix} C \\ S \end{pmatrix} = \sum_{i=1}^{N} \begin{pmatrix} \cos(\alpha_i) \\ \sin(\alpha_i) \end{pmatrix}
\tag{3.58}
$$

$$
\bar{\alpha} = \arctan\!\left(\frac{S}{C}\right)
\tag{3.59}
$$

It is natural to also calculate the length $R = \sqrt{C^2 + S^2}$ of the resulting vector, the mean resultant length. It is easy to see that $R$ will be close to its maximum value $R = N$ if the $\alpha_i$ are very concentrated, while it will be close to its minimum value $R = 0$ if the $\alpha_i$ are very dispersed. Thus $N - R$ is a sensible measure of the dispersion of the whole sample about its estimated centre (analogous to the variance on a straight line), and indeed

$$
S_0 = \frac{N - R}{N - 1}
\tag{3.60}
$$

is called the (sample) spherical variance. Note that $0 \le S_0 \le 1$, while of course for the variance on a line $0 \le \sigma^2 \le \infty$. A value which is more similar in magnitude to the usual variance on the line is given by

$$
s_0^2 = -2\ln(1 - S_0).
\tag{3.61}
$$

⁷ Or hypersphere in the general case $x_i \in \mathbb{R}^n$.

⁸ Many programming languages provide a function atan2 for this purpose; atan2(S, C) returns the angle of the vector $(C, S)^T$ in the correct quadrant.
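The following minimal Python sketch follows Equations (3.58)–(3.61) directly (the function and variable names are arbitrary and only serve this illustration); for the introductory example $\alpha_1 = 1°$ and $\alpha_2 = 359°$ it yields a mean of essentially 0° instead of the naive 180°.

```python
import math

def circular_mean_and_variance(alphas):
    """Mean direction, spherical variance S0 (3.60) and s0^2 (3.61); angles in radians."""
    n = len(alphas)
    c = sum(math.cos(a) for a in alphas)      # C, Eq. (3.58)
    s = sum(math.sin(a) for a in alphas)      # S, Eq. (3.58)
    mean = math.atan2(s, c)                   # quadrant-correct arctan(S/C), Eq. (3.59)
    r = math.hypot(c, s)                      # mean resultant length R
    s0 = (n - r) / (n - 1)                    # sample spherical variance, Eq. (3.60)
    s0_sq = -2.0 * math.log(1.0 - s0)         # line-like analogue, Eq. (3.61)
    return mean, s0, s0_sq

mean, s0, s0_sq = circular_mean_and_variance([math.radians(1.0), math.radians(359.0)])
print(math.degrees(mean))                     # approximately 0, not 180
```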


For axial data (i.e. $-\pi/2 \le \alpha_i < \pi/2$) the corresponding equations are [95]:

$$
\begin{pmatrix} C \\ S \end{pmatrix} = \sum_{i=1}^{N} \begin{pmatrix} \cos(2\alpha_i) \\ \sin(2\alpha_i) \end{pmatrix}
\tag{3.62}
$$

$$
\bar{\alpha} = \frac{1}{2}\arctan\!\left(\frac{S}{C}\right)
\tag{3.63}
$$

$$
S_0 = 1 - (1 - S_0')^{1/4},
\tag{3.64}
$$

where $S_0'$ denotes the spherical variance (3.60) computed from the doubled angles $2\alpha_i$.

There is no known distribution on the circle which has all the properties of the normal distribution. It is most closely approximated by the von Mises distribution or the wrapped normal distribution; see e.g. [95, 156] for details. However, as was the case for Gauss' original use of his distribution on data that is, strictly speaking, cyclic, I too found that for the applications discussed in this thesis the Gaussian distribution is sufficient.
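For axial data the same computation runs on the doubled angles. The following minimal Python sketch reflects my reading of Equations (3.62)–(3.64): the circular formulas are applied to the doubled angles, and the resulting variance is then corrected back to the original scale (names again arbitrary, only for illustration).

```python
import math

def axial_mean_and_variance(alphas):
    """Mean orientation and spherical variance for axial data, Eqs. (3.62)-(3.64)."""
    n = len(alphas)
    c = sum(math.cos(2.0 * a) for a in alphas)    # Eq. (3.62): use doubled angles
    s = sum(math.sin(2.0 * a) for a in alphas)
    mean = 0.5 * math.atan2(s, c)                 # Eq. (3.63)
    r = math.hypot(c, s)
    s0_doubled = (n - r) / (n - 1)                # Eq. (3.60) on the doubled angles
    s0 = 1.0 - (1.0 - s0_doubled) ** 0.25         # Eq. (3.64): undo the doubling
    return mean, s0

# Axes at 85 deg and -75 deg straddle the +/-90 deg wrap-around; their mean
# orientation is -85 deg (equivalently 95 deg), not the naive 5 deg.
mean, s0 = axial_mean_and_variance([math.radians(85.0), math.radians(-75.0)])
print(math.degrees(mean))
```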


Chapter 4

Combining Projective Geometry and Error Propagation

. . . fügte ich rittlings zusammen, was zusammengehörte.

. . . astraddle I joined together what belonged together

Felix Salten, Josefine Mutzenbacher, 1869–1945


4.1 Introduction

This chapter, which lies at the heart of this thesis, combines the projective geometry constructs described in Chapter 2 with the statistical principles of Chapter 3, and in particular error propagation.

Starting from first principles, with an error model for single edgels in Section 4.2, I revisit the line-fitting problem in Section 4.3. There the covariance of a line is computed, based on the covariances of the single edgels; for the case of independently, identically, and isotropically distributed (iiid) edgels (which is the usual assumption when fitting a line to edgels) I also present, in Section 4.3.2, an excellent but previously unpublished approximation to that covariance based mainly on line-length; and in Section 4.3.3 I introduce a new stopping criterion for incremental fits which is based on a $\chi^2$-test. Section 4.4 compares several algorithms for vanishing-point calculation, clearly demonstrating that algorithms based on Euclidean distance, which are unfortunately still all too common in the literature, are inadequate for intersections far away from the image. In Section 4.5 I introduce a new algorithm for the calculation of the cross-ratio of four lines, which performs nearly as well as the best possible algorithms, but without knowledge about the lines' intersection, which makes the algorithm about an order of magnitude faster than other algorithms with comparable performance. Extensive Monte Carlo simulations are used throughout this section to evaluate and compare the relative performance of several competing algorithms with regard to both accuracy and speed. Section 4.6 finally demonstrates how to compare stochastic projective entities, and how to account for additional uncertainty in the model, e.g. due to an imperfect world.

The use of error propagation, while a staple for photogrammetrists, geodesists, and many other scientists, has always been somewhat neglected in computer vision.

Most notable is probably the influence of Kanatani [71–75, 77], who can be said to have pioneered this particular field. The main difference which sets this work apart from Kanatani's is its focus on applicability: whereas Kanatani concentrates on the correct solution, I mostly concentrate on the most adequate solution, weighing computational cost and implementational complexity against the gain in accuracy.

Also related to the work described here is the work by Brillault-O'Mahony [20, 21], who used statistical considerations for vanishing-point detection, and grouping and recognition of high-level 3D structures (see Section 6). More recent work includes [11, 52, 66, 115–117, 130, 141], of which the work by Pennec [116, 117] is closest to the work presented here. A very recent addition is Förstner's [49] contribution to the "Handbook of Computational Geometry", which collects a number of simple-to-use tools for uncertain geometric reasoning, and in particular gives a number of explicitly calculated Jacobians which retain the elegance of projective algebra. It can serve as a nice and concise introduction to my work; however, as was the case with Kanatani's work, Förstner's focus is on elegant rather than computationally efficient solutions, which are at the heart of this thesis; his work also differs in the use of a less rigorous approach to testing uncertain geometric relations: he directly tests observed entities against each other, rather than against the estimated true value, as I will recommend in e.g. Section 4.6.2. Finally, I should point out that I have already published some of the results presented here in [6].

[Figure 4.1: Surface discontinuities do not always lead to visible edges.]
