
3.5 Directional Statistics

3.5.2 Mean and Variance

Directional data requires a notion of mean and variance different from usual statistics. Assume that two directions α₁ = 1° and α₂ = 359° are given. Naively applying Equation (3.19) to calculate the mean would give a value of ᾱ = 180°, while intuition tells us that ᾱ = 0°. If, however, directions are instead understood as points on the unit circle⁷, xᵢ = (cos(αᵢ), sin(αᵢ))ᵀ (or, alternatively, as vectors of unit length), we can calculate⁸:

$$\begin{pmatrix} C \\ S \end{pmatrix} = \sum_{i=1}^{N} \begin{pmatrix} \cos(\alpha_i) \\ \sin(\alpha_i) \end{pmatrix} \qquad (3.58)$$

$$\bar{\alpha} = \operatorname{atan2}(S, C) \qquad (3.59)$$

It is natural to also calculate the resulting vector's length R = √(C² + S²), the mean resultant length. It is easy to see that R will be close to its maximum value R = N if the αᵢ are very concentrated, while it will be close to its minimum value R = 0 if the αᵢ are very dispersed. Thus N − R is a sensible measure of the dispersion

⁷ Or hypersphere in the general case xᵢ ∈ IRⁿ.

⁸ Many programming languages provide a function atan2(y, x) for this purpose.

of the whole sample about its estimated centre, analogous to the variance on a straight line, and indeed

$$S_0 = \frac{N - R}{N} \qquad (3.60)$$

is called the (sample) spherical variance. Note that 0 ≤ S₀ ≤ 1, while of course for the variance on a line 0 ≤ σ² ≤ ∞. A value which is more similar in magnitude to the usual variance on the line is given by

$$s_0^2 = -2 \ln(1 - S_0). \qquad (3.61)$$
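To make the computation concrete, the following minimal Python sketch (my illustration, not code from the original text; the function name is an assumption) implements Equations (3.58) to (3.61) and reproduces the introductory example of 1° and 359°:

```python
import math

def circular_mean_and_variance(angles):
    """Illustrative helper, not from the thesis: mean direction, resultant
    length, and spherical variance of directions given in radians,
    cf. Equations (3.58)-(3.61)."""
    N = len(angles)
    C = sum(math.cos(a) for a in angles)  # (3.58): sum of unit vectors
    S = sum(math.sin(a) for a in angles)
    mean = math.atan2(S, C)               # (3.59): mean direction
    R = math.hypot(C, S)                  # resultant length, 0 <= R <= N
    S0 = (N - R) / N                      # (3.60): spherical variance
    s0_sq = -2.0 * math.log(1.0 - S0)     # (3.61): line-like variance
    return mean, R, S0, s0_sq

# The example from the text: directions of 1 and 359 degrees.
mean, R, S0, s0_sq = circular_mean_and_variance(
    [math.radians(1.0), math.radians(359.0)])
print(math.degrees(mean))  # approximately 0, not the naive 180
```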

For axial data (i.e. −π/2 ≤ αᵢ < π/2) the corresponding equations are [95]:

$$\begin{pmatrix} C' \\ S' \end{pmatrix} = \sum_{i=1}^{N} \begin{pmatrix} \cos(2\alpha_i) \\ \sin(2\alpha_i) \end{pmatrix} \qquad (3.62)$$

$$\bar{\alpha} = \frac{1}{2}\,\operatorname{atan2}(S', C') \qquad (3.63)$$

$$S_0 = 1 - (1 - S_0')^{1/4} \qquad (3.64)$$

where S₀′ is the spherical variance of Equation (3.60) computed from the doubled angles 2αᵢ.

There is no known distribution on the circle which has all the properties of the normal distribution. It is most closely approximated by the von Mises distribution or the wrapped normal distribution; see e.g. [95, 156] for details. However, as was the case for Gauss' original use of his distribution on data which is, strictly speaking, cyclic, I too found that for the applications discussed in this thesis the Gaussian distribution is sufficient.
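Returning to the axial-data equations (3.62) to (3.64) above, the same machinery applies to the doubled angles; a sketch under the same assumptions as before (illustrative function name, angles in radians):

```python
import math

def axial_mean_and_variance(angles):
    """Illustrative helper, not from the thesis: mean axis and spherical
    variance for axial data in [-pi/2, pi/2), cf. Equations (3.62)-(3.64):
    work on the doubled angles, then undo the doubling."""
    N = len(angles)
    C2 = sum(math.cos(2.0 * a) for a in angles)  # (3.62)
    S2 = sum(math.sin(2.0 * a) for a in angles)
    mean_axis = 0.5 * math.atan2(S2, C2)         # (3.63)
    R2 = math.hypot(C2, S2)
    S0_doubled = (N - R2) / N                    # (3.60) on doubled angles
    S0 = 1.0 - (1.0 - S0_doubled) ** 0.25        # (3.64)
    return mean_axis, S0
```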


Chapter 4

Combining Projective Geometry and Error Propagation

. . . fügte ich rittlings zusammen, was zusammengehörte.

. . . astraddle I joined together what belonged together

Felix Salten, Josefine Mutzenbacher, 1869–1945


4.1 Introduction

This chapter, which lies at the heart of this thesis, combines the projective geometry constructs described in Chapter 2 with the statistical principles of Chapter 3, and in particular error propagation.
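Since everything that follows rests on it, it may help to recall the basic mechanism: first-order (Gaussian) error propagation maps an input covariance Σₓ through a differentiable function f via its Jacobian J, giving Σ_y ≈ J Σₓ Jᵀ. The sketch below is my own illustration of this pattern, not code from the thesis; the finite-difference Jacobian, the function names, and the example values are all assumptions, chosen only to show the chapter's theme of joining a projective construction (here the line through two points, via the cross product) with error propagation:

```python
import numpy as np

def propagate_covariance(f, x, cov_x, eps=1e-6):
    """Illustrative helper, not from the thesis: first-order error
    propagation cov_y ~= J cov_x J^T, with the Jacobian J of f at x
    estimated by central finite differences."""
    x = np.asarray(x, dtype=float)
    m = np.asarray(f(x), dtype=float).size
    J = np.empty((m, x.size))
    for j in range(x.size):
        dx = np.zeros_like(x)
        dx[j] = eps
        J[:, j] = (np.asarray(f(x + dx), dtype=float)
                   - np.asarray(f(x - dx), dtype=float)) / (2.0 * eps)
    return J @ cov_x @ J.T

# Example: covariance of the homogeneous line l = p x q joining two
# uncertain image points p = (x1, y1, 1) and q = (x2, y2, 1).
join = lambda v: np.cross([v[0], v[1], 1.0], [v[2], v[3], 1.0])
cov_line = propagate_covariance(join, [0.0, 0.0, 4.0, 1.0],
                                np.eye(4) * 0.01)  # isotropic point noise
```

An implementation concerned with speed would of course use analytic Jacobians instead; finite differences merely keep the sketch self-contained.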

Starting from first principles, with an error model for single edgels in Section 4.2, I revisit the line-fitting problem in Section 4.3. There the covariance of a line is computed, based on the covariances of the single edgels; for the case of independently, identically, and isotropically distributed (iiid) edgels (which is the usual assumption when fitting a line to edgels) I also present, in Section 4.3.2, an excellent but previously unpublished approximation to that covariance based mainly on line-length;

and in Section 4.3.3 I introduce a new stopping criterion for incremental fits which is based on a χ²-test. Section 4.4 compares several algorithms for vanishing-point calculation, clearly demonstrating that algorithms based on Euclidean distance, which are unfortunately still all too common in the literature, are inadequate for intersections far away from the image. In Section 4.5 I introduce a new algorithm for the calculation of the cross-ratio of 4 lines, which performs nearly as well as the best possible algorithms, but without knowledge of the lines' intersection, which makes the algorithm about an order of magnitude faster than other algorithms with comparable performance. Extensive Monte Carlo simulations are used throughout this section to evaluate and compare the relative performance of several competing algorithms, with regard to accuracy as well as speed. Section 4.6 finally demonstrates how to compare stochastic projective entities, and how to account for additional uncertainty in the model, e.g. due to an imperfect world.

The use of error propagation, while a staple of photogrammetrists, geodesists, and many other scientists, has always been somewhat neglected in computer vision. Most notable is probably the influence of Kanatani [71–75, 77], who can be said to have pioneered this particular field. The main difference which sets this work apart from Kanatani's is its focus on applicability: whereas Kanatani concentrates on the correct solution, I mostly concentrate on the most adequate solution, weighing computational cost and implementational complexity against the gain in accuracy. Also related to the work described here is the work by Brillault-O'Mahony [20, 21], who used statistical considerations for vanishing-point detection, and grouping and recognition of high-level 3D structures (see Section 6). More recent work includes [11, 52, 66, 115–117, 130, 141], of which the work by Pennec [116, 117] is closest to the work presented here. A very recent addition is Förstner's [49] contribution to the "Handbook of Computational Geometry", which collects a number of simple-to-use tools for uncertain geometric reasoning, and in particular gives a number of explicitly calculated Jacobians which retain the elegance of projective algebra. It can serve as a nice and concise introduction to my work; however, as was the case with Kanatani's work, Förstner's focus is on elegant rather than computationally efficient solutions, which are at the heart of this thesis; his work also differs in the use of a less rigorous approach to testing uncertain geometric relations: he directly tests observed entities against each other, rather than against the estimated true value, as I will recommend in e.g. Section 4.6.2. Finally I should point out that I have already published some of the results presented here in [6].

Figure 4.1: Surface discontinuities do not always lead to visible edges.
