
4.4 Radargrammetric Pre-Processing

Pre-processing aims at bringing both radargrammetric acquisitions into the same radiometric range and geometry in order to facilitate the later matching. Both pre-processing steps are explained in the following.


4.4.1 Calibration

Changes of incidence angle or of orbit orientation produce radiometric differences between images that should be minimized for better comparison. The aim of image calibration is to reduce or eliminate, in each image, all radiometric contributions that are not directly due to the target characteristics.

Two different calibration coefficients can be computed: beta-naught β0 and sigma-naught σ0. Beta-naught corresponds to the RADAR brightness and is expressed as:

β0 = ks · |A|², whereby A = √(Ip² + Qp²)    (4.1)

A is the amplitude of the signal at pixel position p. The calibration factor ks is given in the product files, and is sensor and product dependent (Fritz & Werninghaus 2007). The RADAR brightness β0 only takes into account radiometric differences due to changes of the sensor or of the mode of acquisition (e.g., StripMap, Spotlight). It does not consider the incidence angle, which is mandatory for radargrammetric analysis. The second coefficient sigma-naught σ0 takes into account the orientation and distance of each resolution cell towards the sensor. The influence of the local incidence angle θloc is therefore considered:

σ0 = (ks · |A|² − NEBN) · sin θloc    (4.2)

NEBN is the Noise Equivalent Beta Naught, and represents the influence of several noise contributions to the signal. It is computed as a summation of polynomials of different degrees, weighted by the calibration factor ks; subtracting it yields a more precise estimation of β0. The polynomials' coefficients represent the influence of different noise factors, such as transmitted power and antenna noise, and are given in the product file.

In this work, the second coefficient sigma-naught σ0 is used, in order to obtain images of similar radiometry, independently of the incidence angle of the acquisition. Master and slave images are both calibrated, producing σ0m and σ0s, respectively. These resulting images are used for further processing.
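Under the definitions above, the calibration of Eqs. 4.1 and 4.2 is a per-pixel operation. The following sketch illustrates it with NumPy; the function name and array conventions are my own, not part of the thesis or any product toolbox:

```python
import numpy as np

def calibrate_sigma_naught(i, q, k_s, nebn, theta_loc):
    """Radiometric calibration sketch following Eqs. 4.1 and 4.2.

    i, q      : in-phase and quadrature components per pixel
    k_s       : sensor/product-dependent calibration factor
    nebn      : Noise Equivalent Beta Naught per pixel
    theta_loc : local incidence angle per pixel, in radians
    """
    amplitude = np.sqrt(i**2 + q**2)             # A = sqrt(Ip^2 + Qp^2)
    beta0 = k_s * amplitude**2                   # RADAR brightness (Eq. 4.1)
    sigma0 = (beta0 - nebn) * np.sin(theta_loc)  # noise- and geometry-corrected (Eq. 4.2)
    return beta0, sigma0
```

Applied to the full I/Q arrays of master and slave acquisitions, this yields the calibrated images σ0m and σ0s used in the subsequent coregistration.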

4.4.2 Coregistration

Both calibrated images σ0m and σ0s are then coregistered. In this work, the slave image is coregistered on the master image, so that points situated on the ground are at the same position. This ensures that the double-bounce lines of the buildings have the same position in both images after coregistration.

Coregistration may be performed in three ways. The first consists in georeferencing both images, using a specific reference system. This usually leads to distortion of image features, as the reference system may not be a plane. In urban areas, double-bounce reflections at building locations are characterized by lines, and layover areas have parallelogram shapes. Distortion of such linear features complicates feature recognition and extraction for the later building parameter estimation.

Besides, relief-induced image distortion would hinder the matching. The second possibility is to project both images into ground geometry, using a reference surface. An ellipsoid would yield the same drawbacks as georeferencing, but a common plane surface allows the preservation of the image geometric shapes. Yet, both images still undergo a transformation, implying interpolation errors and changes in their radiometry. The third possibility for image coregistration implies the slave image to be reprojected into the slant geometry of the master image. Here, only one image is interpolated and modified. Besides, the projection surface is still a plane, preserving the shape of geometric features. This last option is chosen in this work, as conservation of geometric features is mandatory for disparity calculation and later fusion.

Coregistration of radargrammetric SAR images often combines the different ways presented above. The goal is to preserve the geometric and radiometric information of both images while facilitating the matching, respecting the epipolar constraint (Méric et al. 2009). In (Simonetto 2002), an approach similar to optical stereoscopy is used, whereby points of the master image are projected to several ground heights and then projected into the slave image, creating epipolar curves in the slave image, and vice-versa. Homologous points are searched along the direction of the epipolar lines. This approach applies multiple transformations, as each image point is projected into several heights. Similar to this approach, (Nascetti 2013) proposes to project both master and slave images into ground planes of several heights. Each image provides an image stack, or voxel grid, and matches are first searched in the height direction between both stacks. This approach, as the previous one, combines the two steps of image coregistration and matching. In (Perko et al. 2011), a regular grid of points is first defined in the master image and projected onto a coarse DSM. This DSM grid is then projected into the slave image geometry, permitting the definition of an affine transformation between master and slave image, in a least-squares sense. The slave image is then registered to the master image using this affine transformation.

As a result, the direction of the disparity is aligned with one image dimension. Considering parallel flight tracks, across-track geometry of both acquisitions, and constant altitude of the sensor, as for spaceborne imagery, the search for matches can then be reduced to the range direction.

This quasi-epipolar geometry is also used by (Méric et al. 2009), whereby the search in azimuth direction is extended to a small stripe (3 pixels) in order to balance the estimation error of the coregistration. All those approaches necessitate a coarse DSM, or an approximate terrain height of the area of interest. Besides, they suffer from interpolation errors, as at least two projections have to be performed.

An original approach is proposed in (Schubert 2004), where the slave image is simply resampled into the geometry of the master image by regularly eliminating complete columns of the slave image in range direction. Yet, it involves information loss.

In this work, a new approach is used, not necessitating any external information and requiring only one image reprojection. It is based on the SAR-SIFT algorithm presented in (Dellinger et al. 2015). The main change compared to the original SIFT algorithm (Lowe 1999) is the introduction of a gradient computation called gradient by ratio (GR). It is based on the logarithm of the ratio of exponentially weighted averages (ROEWA), defined in (Fjortoft et al. 1998). During key-point detection, a multiscale representation of the original image is used. Contrary to the original SIFT scale space, which is defined using the Laplacian of Gaussian (LoG), SAR-SIFT uses a multiscale SAR-Harris function based on the GR. The scale parameter is replaced by the smoothing parameter of the GR calculation. It applies adaptive smoothing of the image without changing its scale; namely, both SAR images keep the same scale. A comparison with key-points extracted with the original SIFT detector shows that the extracted SAR key-points are situated on distinct features, such as corners and edges, and that their extraction suffers less from inherent multiplicative speckle. Figure 4.3a shows key-points extracted with the new method.

Figure 4.3: SAR-SIFT: (a) extracted key-points using gradient by ratio; (b) master (red) and slave (blue) images before SAR-SIFT coregistration; (c) master (red) and slave (blue) images after SAR-SIFT coregistration
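As a rough illustration of the gradient-by-ratio idea, the sketch below computes one gradient component as the logarithm of the ratio of one-sided exponentially weighted averages, in the spirit of the ROEWA operator (Fjortoft et al. 1998). The kernel truncation and boundary handling are my own simplifications, not the exact SAR-SIFT implementation:

```python
import numpy as np

def gradient_by_ratio(img, alpha, axis=0):
    """One gradient component via the log-ratio of one-sided exponentially
    weighted averages. alpha plays the role of the scale (smoothing) parameter."""
    n = int(4 * alpha) + 1                 # truncate the exponential kernel
    t = np.arange(1, n + 1)
    kernel = np.exp(-t / alpha)
    kernel /= kernel.sum()
    img = np.moveaxis(np.asarray(img, dtype=float), axis, 0)
    h = img.shape[0]
    m_plus = np.zeros_like(img)            # weighted mean on one side of the pixel
    m_minus = np.zeros_like(img)           # weighted mean on the other side
    for k, w in zip(t, kernel):
        m_plus += w * img[np.minimum(np.arange(h) + k, h - 1)]
        m_minus += w * img[np.maximum(np.arange(h) - k, 0)]
    # Ratio instead of difference: robust to multiplicative speckle;
    # the log makes the response symmetric around zero.
    return np.moveaxis(np.log(m_plus / m_minus), axis, 0)
```

On a homogeneous region the two side means coincide and the response is zero, regardless of the mean backscatter level, which is why the ratio is preferred over a difference under multiplicative speckle.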

During orientation assignment and descriptor extraction, the GR is used again for computing the histograms of gradient orientations. In (Dellinger et al. 2015), a circular neighborhood, separated into polar sectors, is used for computing the histograms of gradient orientations. In this work, a square neighborhood as in the original SIFT is used. Indeed, in urban areas, key-points are situated on double-bounce lines and linear edges. Square windows allow preserving the rectilinear geometry of urban structures. Key-points of both master and slave images are then matched by computing a nearest-neighbor analysis on both sets of descriptors. For coregistration of master and slave images, a transformation has to be fitted to the corresponding points of both images.

In (Dellinger et al. 2015), an affine transformation is defined. However, both images are in their respective slant geometry. Considering spaceborne acquisitions on parallel flight-tracks in across-track geometry, this is a plane-to-plane transformation, corresponding to a homography.

Thus, in this work, instead of an affine transformation, a homography is defined between the two sets of points. This transformation is defined by eight parameters, requiring at least four pairs of corresponding points between both images. In order to robustly estimate the transformation parameters, outlier correspondences are filtered by employing a RANSAC (RANdom SAmple Consensus (Bolles & Fischler 1981)) scheme. Figure 4.3(b,c) shows in false-color representation both images before and after SAR-SIFT coregistration. It is observable that the double-bounce lines of all buildings are well aligned after coregistration. Parameter settings of this approach, as well as subsequent accuracy tests, are presented in Section 6.3.1, together with a comparison to a standard coregistration method.
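The robust homography estimation can be sketched as follows. A plain direct linear transform (DLT) is used here as the minimal solver; the function names, iteration count, and inlier threshold are illustrative choices, not the settings of the thesis:

```python
import numpy as np

def fit_homography(src, dst):
    """DLT: fit a 3x3 homography H (8 degrees of freedom) such that
    dst ~ H @ src, from >= 4 point correspondences. src, dst: (n, 2) arrays."""
    a = []
    for (x, y), (u, v) in zip(src, dst):
        a.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        a.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(a, dtype=float))
    h = vt[-1].reshape(3, 3)               # null-space vector = homography
    return h / h[2, 2]

def ransac_homography(src, dst, n_iter=500, tol=2.0, seed=0):
    """RANSAC: repeatedly fit H on random minimal samples of 4 correspondences,
    keep the model with most inliers, then refit on all inliers."""
    rng = np.random.default_rng(seed)
    best_inliers = None
    ones = np.ones(len(src))
    for _ in range(n_iter):
        idx = rng.choice(len(src), 4, replace=False)
        h = fit_homography(src[idx], dst[idx])
        proj = np.c_[src, ones] @ h.T
        proj = proj[:, :2] / proj[:, 2:3]
        err = np.linalg.norm(proj - dst, axis=1)   # reprojection error in pixels
        inliers = err < tol
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers = inliers
    return fit_homography(src[best_inliers], dst[best_inliers]), best_inliers
```

Fed with the matched SAR-SIFT key-point coordinates, this discards mismatched pairs and returns the homography used to warp the slave image.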

Using SAR-SIFT allows to directly project the slave image into the slant geometry of the master image, without using external information. Compared to the methods presented previously, only one transformation occurs, reducing interpolation errors, and only one image (the slave image) is modified. During transformation, resampling ensures that the coregistered slave image has the same sampling as the master image. Calibrated amplitude values of the coregistered slave image are determined using bilinear interpolation. Furthermore, the final coregistered images, as represented in Figure 4.3c, are in epipolar geometry, whereby epipolar lines correspond to the range direction. The search for homologous points for disparity calculation can thus be performed along the horizontal direction for each pixel. Yet, due to slight differences of heading angles between both SAR acquisitions, a slightly wider search window has to be defined. This is explained in Section 4.5.3, considering layover areas.
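The resampling step can be sketched as follows: each master-grid pixel is mapped through the estimated homography into the slave image, and the calibrated amplitude is interpolated bilinearly. The function name and the zero-filling outside the slave image are my own simplifications:

```python
import numpy as np

def warp_bilinear(slave, h_ms, out_shape):
    """Resample the slave image onto the master grid.
    h_ms maps master pixel coordinates (x, y, 1) into slave coordinates.
    Master pixels falling outside the slave image are set to 0 (a sketch)."""
    rows, cols = out_shape
    yy, xx = np.mgrid[0:rows, 0:cols]
    pts = np.stack([xx.ravel(), yy.ravel(), np.ones(rows * cols)])
    q = h_ms @ pts
    xs, ys = q[0] / q[2], q[1] / q[2]          # slave coordinates per master pixel
    x0 = np.floor(xs).astype(int)
    y0 = np.floor(ys).astype(int)
    fx, fy = xs - x0, ys - y0                  # fractional offsets
    valid = (x0 >= 0) & (y0 >= 0) & (x0 < slave.shape[1] - 1) & (y0 < slave.shape[0] - 1)
    out = np.zeros(rows * cols)
    x0, y0, fx, fy = x0[valid], y0[valid], fx[valid], fy[valid]
    out[valid] = ((1 - fx) * (1 - fy) * slave[y0, x0]
                  + fx * (1 - fy) * slave[y0, x0 + 1]
                  + (1 - fx) * fy * slave[y0 + 1, x0]
                  + fx * fy * slave[y0 + 1, x0 + 1])
    return out.reshape(rows, cols)
```

Because only this single interpolation touches the slave image, the master image and the shapes of linear features (double-bounce lines, layover parallelograms) remain unaltered.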

Finally, SAR-SIFT is a very attractive method for the coregistration of two images taken from different incidence angles. It allows finding enough matches between both images for defining a projective transform. Having a closer look at the extracted key-points, it is obvious that they are mainly situated on the double-bounce lines, corresponding to ground level, where the disparity between both coregistered images is zero. Modifying the parameters in order to obtain key-points corresponding to other building parts (e.g., layover) is possible. This would allow calculating disparities between both images. However, only a sparse disparity calculation would be possible with such an approach. Besides, the difference of incidence angles induces a very different appearance of the layover areas, as mentioned in Section 4.1. Finding corresponding key-points in layover areas may prove nearly impossible. Hence, in this work, a different approach for disparity calculation is used, as described in Section 4.5.