
5 Texturing with Thermal Data

5.6 Quality Assessment of Extracted Textures

5.6.1 Geometric Quality Measures

The geometric quality measures are derived from the acquisition geometry, including the camera and object poses as well as the interior orientation parameters of the camera. Therefore, these measures can be used to assess the expected quality of the textures in the planning stage or to assess the achieved textures after the flight.

Resolution

The texture’s level of detail depends on its resolution. The resolution of 3D objects seen on the image plane is usually not unique along their surfaces; a unique resolution is possible only for planar objects that are parallel to the image plane. In nadir-view photogrammetry, the ground resolution of the images is usually expressed using the ground sample distance (GSD), which is the distance between the pixel centers on the ground. It is calculated using the intercept theorem:

\frac{c_k}{s'} = \frac{H}{s}, \quad (5.1)

where s is a distance on the ground, s' is its image in the sensor, c_k is the camera constant, and H is the flight height (see Fig. 5.10a). If s' = 1 pix, then s is the ground sampling distance. Here, it is assumed that the ground is parallel to the sensor; therefore, all pixels have the same ground resolution. In an oblique view, the GSD varies significantly within the image; it is smaller in the foreground and bigger in the background (Fig. 5.10b). The GSD does not give any information about the resolution of 3D objects, such as façades or roofs, which is the most interesting aspect for texture mapping. Therefore, in this thesis, a local resolution is defined for every object as the length of a line segment placed on this object that is depicted within one pixel. This line segment is parallel to one of the axes of the image coordinate system. Accordingly, two resolutions can be calculated for one pixel: in the x- and in the y-direction of the camera coordinate system.
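As a quick numerical check of (5.1), the GSD can be computed for a hypothetical camera configuration; the camera constant, pixel pitch, and flight height below are illustrative values, not taken from this thesis:

```python
# Ground sampling distance from the intercept theorem (Eq. 5.1):
#   c_k / s' = H / s  =>  s = H * s' / c_k
# Illustrative values: 10.5 mm camera constant, 17 um detector pitch
# (a typical order of magnitude for a thermal camera), 50 m flight height.
c_k = 10.5e-3        # camera constant [m]
pixel_pitch = 17e-6  # s' = 1 pix on the sensor [m]
H = 50.0             # flight height above ground [m]

gsd = H * pixel_pitch / c_k  # ground sampling distance [m]
print(f"GSD = {gsd * 100:.1f} cm")  # -> GSD = 8.1 cm
```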

An oblique view is equivalent to a nadir view of a sloped surface, as shown in Fig. 5.10: Fig. 5.10b is equivalent to Fig. 5.10c.

Figure 5.10: Geometry of nadir and oblique view: a) nadir view; b) oblique view; c) nadir view of a sloped surface, which is equivalent to an oblique view of a flat surface.

This representation is suitable not only for ground surfaces but also for other surfaces, e.g. façades or roofs. In this representation, a planar surface can be defined for each pixel. This surface is parallel to the sensor and intersects the photographed surface at the intersection point of the ray from the middle of the pixel with the photographed surface (Fig. 5.11a).

Figure 5.11: Detailed geometry for calculations of the resolution

If the distance D_i, which is the distance from the projection center to the photographed surface, is known, the resolution of this parallel surface can be easily calculated using (5.1) by replacing H with D_i, which results in

\frac{c_k}{s'} = \frac{D_i}{s_i} \implies s_i = \frac{D_i s'}{c_k}. \quad (5.2)
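Equation (5.2) can be sketched in the same way; the camera constant and pixel pitch are again illustrative values:

```python
# Resolution s_i of the auxiliary surface parallel to the sensor (Eq. 5.2):
#   s_i = D_i * s' / c_k
# D_i is the distance from the projection center to the photographed
# surface along the pixel ray; camera values are illustrative.
c_k = 10.5e-3        # camera constant [m]
pixel_pitch = 17e-6  # s' = 1 pix [m]

def parallel_resolution(D_i: float) -> float:
    """Resolution of the surface parallel to the sensor at distance D_i."""
    return D_i * pixel_pitch / c_k

for D_i in (40.0, 60.0, 80.0):  # distances in metres
    print(f"D_i = {D_i:5.1f} m  ->  s_i = {parallel_resolution(D_i) * 100:.1f} cm")
```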

Here, the index i denotes the pixel. In many cases, however, the photographed object is rotated by an angle

\gamma_i = \arccos\frac{-\vec{z} \circ \vec{n}}{\|\vec{z}\|\,\|\vec{n}\|}, \quad (5.3)

where \vec{n} is the normal vector of the photographed surface and \vec{z} = [0, 0, 1]. For every \gamma_i > 0, the length of the line segment on the photographed object is l_i > s_i. The ray from the middle of the pixel does not intersect the line segment on the photographed object in the middle of this segment, but instead divides it into two line segments with the lengths l_{i-1} and l_{i-2}, respectively (Fig. 5.11b). To calculate l_i, the triangles \Delta A_1B_1P and \Delta A_2B_2P should be solved.

Using the Law of Sines, l_{i-1} is calculated from \Delta A_1B_1P:

l_{i-1} = \frac{s_i \sin(\alpha_{i-1})}{2 \sin(\beta_{i-1})}, \quad (5.4)

where \alpha_{i-1} = 180^\circ - (90^\circ - \varphi_i) - \delta_{i-1} = 90^\circ + \varphi_i - \delta_{i-1} and \beta_{i-1} = 180^\circ - \gamma_i - \alpha_{i-1} = 90^\circ - \varphi_i + \delta_{i-1} - \gamma_i. Similarly, l_{i-2} is calculated from \Delta A_2B_2P:

l_{i-2} = \frac{s_i \sin(\alpha_{i-2})}{2 \sin(\beta_{i-2})}, \quad (5.5)

where \alpha_{i-2} = 90^\circ + \varphi_i + \delta_{i-2} and \beta_{i-2} = 90^\circ - \varphi_i - \delta_{i-2} - \gamma_i. Here, \delta_{i-1} = \varphi_i - \varphi_{i-1} and \delta_{i-2} = \varphi_{i-2} - \varphi_i. The length l_i is calculated as the sum of l_{i-1} and l_{i-2}:

l_i = l_{i-1} + l_{i-2} = \frac{s_i}{2} \left( \frac{\sin\alpha_{i-1}}{\sin\beta_{i-1}} + \frac{\sin\alpha_{i-2}}{\sin\beta_{i-2}} \right). \quad (5.6)

\varphi_i is calculated by solving the triangle \Delta OO'P' as follows:

\tan\varphi_i = \frac{r_i}{c_k} \implies \varphi_i = \arctan\frac{r_i}{c_k}. \quad (5.7)

Analogously,

\varphi_{i-1} = \arctan\frac{r_{i-1}}{c_k} \quad (5.8)

and

\varphi_{i-2} = \arctan\frac{r_{i-2}}{c_k}. \quad (5.9)

If s' = 1 pix, then \delta_{i-1} and \delta_{i-2} are very small angles. If we assume that \delta_{i-1} \approx \delta_{i-2} \approx 0, it follows that \alpha_{i-1} \approx \alpha_{i-2} \approx 90^\circ + \varphi_i = \alpha_i and \beta_{i-1} \approx \beta_{i-2} \approx 90^\circ - \varphi_i - \gamma_i = \beta_i. Then l_i can be simplified to

l_i = \frac{s_i \sin\alpha_i}{\sin\beta_i} = \frac{D_i s' \sin\alpha_i}{c_k \sin\beta_i}. \quad (5.10)

Another simplification is presented in Fig. 5.12. Here, l_i is the length of the line segment that has to be orthogonally projected onto the surface parallel to the sensor to fill one pixel:

l_i = \frac{s_i}{\cos\gamma_i} = \frac{D_i s'}{c_k \cos\gamma_i}. \quad (5.11)

Figure 5.12: Simplified geometry for calculations of the resolution
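The chain (5.2)–(5.11) can be sketched numerically as follows. The camera constant, pixel pitch, and the assumption that the two bordering rays of a pixel pass at radial distances r_i ∓ s'/2 are illustrative, not taken from this thesis:

```python
import math

c_k = 10.5e-3  # camera constant [m] (illustrative)
s_p = 17e-6    # s' = 1 pix, sensor pixel pitch [m] (illustrative)

def tilt_angle(n):
    """Eq. (5.3): angle between the viewing direction -z and surface normal n."""
    nx, ny, nz = n
    return math.acos(-nz / math.sqrt(nx * nx + ny * ny + nz * nz))

def resolution_exact(D_i, r_i, gamma_i):
    """Eq. (5.6), with phi from Eqs. (5.7)-(5.9).
    Assumption: the pixel-border rays pass at r_i -/+ s'/2."""
    s_i = D_i * s_p / c_k                        # Eq. (5.2)
    phi_i = math.atan(r_i / c_k)                 # Eq. (5.7)
    phi_i1 = math.atan((r_i - s_p / 2) / c_k)    # Eq. (5.8)
    phi_i2 = math.atan((r_i + s_p / 2) / c_k)    # Eq. (5.9)
    d1, d2 = phi_i - phi_i1, phi_i2 - phi_i      # delta_{i-1}, delta_{i-2}
    a1, b1 = math.pi/2 + phi_i - d1, math.pi/2 - phi_i + d1 - gamma_i
    a2, b2 = math.pi/2 + phi_i + d2, math.pi/2 - phi_i - d2 - gamma_i
    return s_i / 2 * (math.sin(a1) / math.sin(b1) + math.sin(a2) / math.sin(b2))

def resolution_simplified(D_i, r_i, gamma_i):
    """Eq. (5.10): the small angles delta are neglected."""
    s_i = D_i * s_p / c_k
    phi_i = math.atan(r_i / c_k)
    return s_i * math.sin(math.pi/2 + phi_i) / math.sin(math.pi/2 - phi_i - gamma_i)

def resolution_cos(D_i, gamma_i):
    """Eq. (5.11): orthogonal projection onto the parallel surface."""
    return D_i * s_p / (c_k * math.cos(gamma_i))

# Surface tilted by 30 degrees, 60 m away, 3 mm off the principal point:
gamma = tilt_angle((0.0, 0.5, -math.sqrt(3) / 2))
print(resolution_exact(60.0, 3e-3, gamma))       # l_i [m], Eq. (5.6)
print(resolution_simplified(60.0, 3e-3, gamma))  # l_i [m], Eq. (5.10)
print(resolution_cos(60.0, gamma))               # l_i [m], Eq. (5.11)
```

For s' = 1 pix the delta angles are below a milliradian here, so (5.10) agrees with the exact value (5.6) to well under one percent, which is why the simplification is justified.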

Occlusion

The occlusion of a texture is a quality measure calculated from the acquisition geometry when considering self-occlusion, or from additional data when considering extrinsic occlusion. This quality indicates which percentage of the texture can be seen in a frame.

Knowing the depth image of the scene, the occlusion factor o_{ij} is defined as

o_{ij} = \frac{n_{vis}}{N}, \quad (5.12)

where n_{vis} is the number of visible pixels of face j in frame i and N is the number of pixels occupied by face j. The quality o_{ij} \in [0, 1] takes the value o_{ij} = 1 for fully visible textures.
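A minimal sketch of the occlusion factor (5.12), assuming a boolean mask of the pixels covered by face j, the depth of that face rendered into frame i, and the depth image of the whole scene; all names and the depth-test tolerance are illustrative:

```python
import numpy as np

def occlusion_factor(face_mask, face_depth, scene_depth, tol=0.01):
    """Eq. (5.12): o_ij = n_vis / N via a per-pixel depth test.
    A face pixel is visible if its depth matches the scene depth
    within a tolerance (0.01 m here, an illustrative value)."""
    n_total = int(face_mask.sum())                # N: pixels occupied by face j
    if n_total == 0:
        return 0.0
    visible = face_mask & (face_depth <= scene_depth + tol)  # n_vis pixels
    return float(visible.sum()) / n_total         # o_ij in [0, 1]

# Toy 4x4 frame: the face covers the left half; its lower half is
# hidden behind a nearer object, so half of its pixels are occluded.
face_mask = np.zeros((4, 4), dtype=bool)
face_mask[:, :2] = True
face_depth = np.full((4, 4), 10.0)
scene_depth = np.full((4, 4), 10.0)
scene_depth[2:, :2] = 5.0                         # occluder in front
print(occlusion_factor(face_mask, face_depth, scene_depth))  # -> 0.5
```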

General Geometric Quality

For the best-texture selection described in Section 5.3, one significant quality measure is needed. Calculating the local resolution for every pixel is computationally expensive; therefore, a simplified quality that takes resolution and occlusion into account is needed.

The more pixels of a texture are occluded, the lower its quality. However, a strongly occluded texture may have a significantly higher resolution than a completely visible one. On the one hand, if we want to extract a texture with the highest resolution, we should always select the parts of the texture with the highest resolution and combine them into one texture. On the other hand, we should keep in mind that every combination can cause small errors along the seam lines. Accordingly, an optimal balance between occlusion and resolution should be found using a quality measure

q_{ij} = \frac{a_1 o_{ij} + a_2 d_{ij} + a_3 \cos\gamma_{x_{ij}} \cos\gamma_{y_{ij}}}{a_1 + a_2 + a_3}, \quad (5.13)

where a_1 + a_2 + a_3 \neq 0. q_{ij} is computed for every face j in every frame i. \gamma_x and \gamma_y denote the angles between the normal of a model polygon and the viewing direction of the camera, a_1, a_2, a_3 are coefficients, o_{ij} is the occlusion factor, and d_{ij} denotes the distance factor calculated by

d_{ij} = \frac{D_{max} - D_{ij}}{D_{max} - D_{min}}. \quad (5.14)

Here, D_{max} denotes the maximum possible distance from the projection center to the model points, D_{min} the minimum possible distance, and D_{ij} the distance from the projection center to the center of a model polygon. For each face, the texture with the best quality q_{ij} is selected for texture mapping. If a partially occluded face is selected for texturing, its missing part is searched for in other frames, again considering their quality.
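Equations (5.13) and (5.14) can be sketched as follows; the weights and all distances are illustrative values:

```python
import math

def distance_factor(D_ij, D_min, D_max):
    """Eq. (5.14): 1 for a face at the minimum distance, 0 at the maximum."""
    return (D_max - D_ij) / (D_max - D_min)

def quality(o_ij, d_ij, gamma_x, gamma_y, a1=1.0, a2=1.0, a3=1.0):
    """Eq. (5.13): weighted combination of occlusion, distance, and
    viewing-angle terms; requires a1 + a2 + a3 != 0.
    Equal weights a1 = a2 = a3 = 1 are an illustrative choice."""
    assert a1 + a2 + a3 != 0
    angle_term = math.cos(gamma_x) * math.cos(gamma_y)
    return (a1 * o_ij + a2 * d_ij + a3 * angle_term) / (a1 + a2 + a3)

# A fully visible, frontal face at the minimum distance scores 1.0:
print(quality(1.0, distance_factor(30.0, 30.0, 120.0), 0.0, 0.0))  # -> 1.0
```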