
2.2 Synthetic Aperture RADAR (SAR)

The principle of Synthetic Aperture RADAR is to create an artificial, very long antenna. Practically, instead of one single sensor position, the whole scene is illuminated at several consecutive sensor locations, creating a very long antenna. In the following, a straight flight path is assumed.

The derived formulas can be found in more detail in (Klausing & Holpp 2000).

2.2.1 SAR Azimuth Resolution

The angular spread βA of the antenna at each sensor position is the same. An object in the scene is illuminated by each beam from a different position. This corresponds to different ranges to the sensor, as shown in Figure 2.2. Therefore, a Doppler effect occurs, with increasing signal frequency when the sensor comes closer to the object and decreasing signal frequency when the sensor moves away. The signal frequency f0 at the object is expressed as:

f0 = (1 + v/c) fs, (2.4)

whereby v is the component of the sensor velocity in range direction and fs the frequency emitted by the sensor. The frequency shift between sensor and object is f0 − fs. Furthermore, the returned echo is shifted by the same amount, leading to a total Doppler frequency shift fd at the object of:

fd = 2(f0 − fs) = (2v/c) fs. (2.5)



Figure 2.2: SAR Principle


Figure 2.3: Geometry of a Synthetic Aperture RADAR - SAR

This expression can be rewritten as a function of the velocity va of the sensor along its flight path:

fd = (2 va sin βd / c) · (c/λ) = 2 va xd / (λR), (2.6)

whereby βd is the angular difference between the object and the range direction and xd is the azimuth coordinate of the object in a coordinate system with the sensor as origin, as represented in Figure 2.3.

The shift between the emitted and the returned frequency thus yields the correct object location, by extracting xd.
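Equation (2.6) can be illustrated numerically. The following sketch uses assumed, TerraSAR-X-like values (wavelength, sensor velocity, range and object position are illustrative, not taken from the text) and shows that the azimuth coordinate xd can be recovered from the measured Doppler shift:

```python
# Illustrative, assumed values (X-band, spaceborne-like geometry)
wavelength = 0.031      # lambda [m]
va = 7600.0             # sensor velocity along the flight path [m/s]
R = 600e3               # slant range to the object [m]
xd = 250.0              # azimuth coordinate of the object [m]

# Doppler frequency shift from Equation (2.6): fd = 2*va*xd / (lambda*R)
fd = 2 * va * xd / (wavelength * R)

# Inverting the relation recovers the azimuth position from the measured shift
xd_recovered = fd * wavelength * R / (2 * va)

print(f"fd = {fd:.1f} Hz, recovered xd = {xd_recovered:.1f} m")
```

With these values the shift is on the order of a few hundred hertz; the inversion is exact, which is precisely what allows azimuth focusing from the Doppler history.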

Considering these observations, two objects can be distinguished if they have different Doppler frequency shifts. Thus, the azimuth resolution depends on the resolution ρfd of the Doppler frequency shift:

ρASAR = (λR / (2va)) · ρfd. (2.7)

Assuming that the Doppler frequency shift is constant during the illumination time ∆t of the object, ρfd can be expressed as:

ρfd = 1/∆t, whereby ∆t = Rβ/va = Rλ/(LA va). (2.8)

Rβ is the length between the first and the last position of the sensor from which the object is observed (cf. Figure 2.2). It corresponds to the length of the synthetic aperture LSA = Rβ = Rλ/LA. From Figure 2.2, it becomes clear that the angular aperture β at the object is equal to the angular spread of the signal βA of a single antenna position (β = βA). Combining Equations (2.7) and (2.8) leads to the azimuth resolution of SAR systems:

ρASAR = LA/2. (2.9)

Contrary to RAR systems, the azimuth resolution of SAR systems is independent of the range of the sensor platform. This result should be interpreted cautiously, as the assumption that the Doppler frequency shift stays constant during the illumination of the object only holds if the antenna is steadily perpendicular to the flight path and has constant velocity during the acquisition (cf. StripMap mode, Section 2.2.5). In cases where the antenna direction does not stay perpendicular to the flight path during the acquisition, or rotates around the flight direction, this assumption is not valid (e.g., ScanSAR or Spotlight data, Section 2.2.5). In general, it should be retained that by increasing the range R between sensor and object, an increasing synthetic aperture LSA has to be used in order to maintain the azimuth resolution.
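The range independence of Equation (2.9), together with the linear growth of the required synthetic aperture LSA = Rλ/LA, can be checked numerically. The antenna length and wavelength below are assumed, illustrative values:

```python
# Assumed, illustrative values
wavelength = 0.031   # lambda [m], X-band
LA = 4.8             # physical antenna length [m]

# Equation (2.9): azimuth resolution, independent of range
rho_az = LA / 2
print(f"azimuth resolution: {rho_az} m")

# LSA = R * lambda / LA: the synthetic aperture grows linearly with range
for R in (300e3, 600e3, 900e3):
    LSA = R * wavelength / LA
    print(f"R = {R/1e3:.0f} km -> LSA = {LSA/1e3:.2f} km")
```

Doubling the range doubles the synthetic aperture that has to be synthesized, while the achievable azimuth resolution stays at LA/2.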

2.2.2 Pulse Compression

This previous formula corresponds to the azimuth resolution of compressed data, i.e. the multiple echoes due to the multiple antenna positions are compressed in order to recover the original object position. This step is called 'pulse compression' and is explained in the following. To this goal, it is necessary to recall that RADAR systems in general, and SAR systems in particular, are coherent. The phase of the transmitted signal is well defined and can be retrieved with high accuracy. Only under this condition is it possible to analyze the phase of the returned echo and deduce the path of the signal.

As for RAR systems, yet at each sensor position, the antenna emits a pulsed signal. Mostly, transmitted pulses are modulated, so that the signal can be transmitted and returned back to the sensor without loss of the phase information. The frequency modulated emitted signal, or chirp, was already mentioned in Section 2.1.1. For every sensor position, the signal received by the sensor is first demodulated, converted into a digital signal, and mapped into a 2D matrix. At this point, it is necessary to notice the difference between resolution and pixel sampling (Raney & Wessels 1988). In order to preserve all signal information during the analog to digital conversion, the Nyquist criterion has to be respected, i.e. the sampling frequency fns must be greater than twice the signal bandwidth B. Only under this condition can a perfect, unambiguous signal be reconstructed, without aliasing. This oversampling of the signal consequently leads to a pixel spacing in the resulting 2D matrix smaller than the resolution.

Rows of the matrix correspond to the different sensor positions in azimuth direction, and columns to the range of the illuminated objects. The raw matrix has to be processed in order to acquire the final image data. Indeed, the representation of point targets is spread in both azimuth and range directions, respectively due to the Doppler effect and the pulse dispersion. As a scene contains more than one object of interest, spread signals of several objects mostly overlap, and the raw data is hardly directly interpretable (cf. Figure 2.4).

The processing happens in two main steps: range compression on each RADAR echo, followed by azimuth compression on each column of the range compressed data. For both of them, so-called 2D matched filters are employed (Chan & Koo 2008). Such filters consist of a reference function corresponding to the transmitted chirp for the range compression and to the Doppler effect for the azimuth compression. By convolving these reference functions with the 2D signal, peaks corresponding to the responses of objects are extracted, and superimposed signals of different objects are therefore separated. Usually, those convolutions in the spatial domain are performed as multiplications in the frequency domain, using the Fourier transform (Moreira et al. 2013). Near the main peak (i.e. main lobe) of the filter response, side-lobes are present that blur the response of the object. For an object presenting a strong reflection (i.e. a strong backscatterer), its side-lobes could interfere with the main lobes of other objects presenting lower backscattering properties.

Specific weighting functions are applied in order to smooth those side-lobes (Harris 1978).
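The range-compression step can be sketched in one dimension. The toy parameters below (sampling rate, chirp duration, bandwidth, target positions) are assumed for illustration only; the matched filter is implemented, as described above, as a frequency-domain multiplication with the conjugate spectrum of the reference chirp:

```python
import numpy as np

# Assumed toy chirp parameters (not those of a real sensor)
fs = 100e6                     # sampling frequency [Hz]
T = 10e-6                      # chirp duration [s]
B = 30e6                       # chirp bandwidth [Hz]
t = np.arange(0, T, 1 / fs)
k = B / T                      # chirp rate
chirp = np.exp(1j * np.pi * k * (t - T / 2) ** 2)   # baseband linear FM chirp

# Received range line: two point targets whose dispersed echoes overlap
n = 4096
rx = np.zeros(n, dtype=complex)
for pos, amp in ((1000, 1.0), (1100, 0.6)):
    rx[pos:pos + len(t)] += amp * chirp

# Matched filter: correlation with the reference chirp, computed in the
# frequency domain as a multiplication with the conjugate spectrum
H = np.conj(np.fft.fft(chirp, n))
compressed = np.fft.ifft(np.fft.fft(rx) * H)

# Strongest peak, then the second target away from the first main lobe
peak1 = int(np.argmax(np.abs(compressed)))
mag = np.abs(compressed).copy()
mag[peak1 - 10:peak1 + 10] = 0
peak2 = int(np.argmax(mag))
print(peak1, peak2)
```

After compression, the two previously overlapping echoes collapse into two sharp peaks at the target positions; the remaining side-lobes around each peak are what the weighting functions mentioned above (Harris 1978) are designed to suppress.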


Figure 2.4: Pulse compression (source: Moreira 2000)

2.2.3 SAR Image Data

The resulting image data is a complex matrix, containing in-phase Ip and quadrature Qp components (i.e. respectively real and imaginary part). The complex data can also be expressed as amplitude and phase information, whereby the amplitude A is the backscattering coefficient of the signal, and the phase φ is the phase of the returned signal. The relations between complex (Ip, Qp) and polar (A, φ) coordinates are recalled in the following equations:

Ip = A cos φ, Qp = A sin φ (2.10)

A = √(Ip² + Qp²), I = A² (2.11)

C = Ip + i Qp = A exp(iφ) (2.12)

I is the intensity of the signal. C is the complex image.
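Relations (2.10)-(2.12) can be verified on a single toy pixel (the amplitude and phase values below are assumed for illustration):

```python
import numpy as np

# Assumed toy pixel: amplitude and phase
A, phi = 3.0, np.pi / 6

Ip = A * np.cos(phi)              # in-phase component      (2.10)
Qp = A * np.sin(phi)              # quadrature component    (2.10)

C = Ip + 1j * Qp                  # complex pixel           (2.12)

A_back = np.sqrt(Ip**2 + Qp**2)   # amplitude recovered     (2.11)
I = A_back**2                     # intensity               (2.11)
phi_back = np.angle(C)            # phase recovered

print(A_back, I, phi_back)
```

The round trip is lossless: amplitude, intensity and phase are recovered exactly from the (Ip, Qp) pair, which is why SAR products can be stored and processed interchangeably in either representation.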

The amplitude image has a grainy appearance, even on homogeneous areas. This 'salt-and-pepper' pattern is called speckle and is due to the superimposition of the backscattering of different targets within the same resolution cell (Goodman 1976). Due to the sensor resolution, single scatterers cannot be resolved individually. Within every resolution cell, echoes of several scatterers interfere. Their coherent summation is either constructive or destructive, depending on their relative phase. This results in brighter or darker pixels, producing the 'salt-and-pepper' appearance (Moreira et al. 2013).

Figure 2.5: Example of standard speckle filtering: (a) original intensity image; (b) multilook 5x5; (c) Frost filter 5x5, damping = 2; (d) Gamma MAP 5x5, ENL = 1; (e) Lee 5x5; (f) Refined Lee 5x5, edge threshold = 0.5

Especially for land-use classification or object detection, speckle makes SAR image interpretation more difficult and has to be reduced. Several methods exist for reducing, or filtering, the speckle. They mostly assume a multiplicative speckle model, i.e. the speckle is directly proportional to the intensity, or grey level, of the area. It implies that its variance increases with the intensity. The probability density function (pdf) of the intensity of the received signal under fully developed speckle follows the Rayleigh speckle model (Lee et al. 1994):

pdf(I) = (1/σ²) exp(−I/σ²) (2.13)

where σ² is the mean RADAR cross section (RCS) of the considered area: µ(I) = σ².
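The fully developed speckle model can be reproduced by a short Monte-Carlo sketch: many elementary scatterers with random phases interfere within each resolution cell, and the resulting intensity follows an exponential law whose standard deviation equals its mean, consistent with Equation (2.13). The number of cells and scatterers below are assumed toy parameters:

```python
import numpy as np

# Assumed toy parameters
rng = np.random.default_rng(0)
n_cells, n_scatterers = 200_000, 50

# Each resolution cell: coherent sum of unit phasors with random phases,
# normalized so that the expected intensity is 1
phases = rng.uniform(0, 2 * np.pi, (n_cells, n_scatterers))
cells = np.exp(1j * phases).sum(axis=1) / np.sqrt(n_scatterers)

I = np.abs(cells) ** 2   # intensity per resolution cell

# For exponential intensity statistics, std(I) ~ mean(I)
print(np.mean(I), np.std(I))
```

Both values come out close to 1, i.e. the intensity fluctuations are as large as the mean itself, which is exactly why a homogeneous area looks so grainy without filtering.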

Speckle filters can be separated into two main categories: local and non-local filters. The most common local filter is the multilook, practically an averaging of the intensity image within a specified neighborhood. This spatial averaging induces a degradation of the image resolution. Other local filter methods permit reducing the speckle while preserving the spatial resolution. Adaptive local filters, such as the Frost (Frost et al. 1982), Kuan (Kuan et al. 1985), Lee (Lee 1980) and Gamma MAP filters (Lopes et al. 1993), take the local image statistics into account for weighting the filter coefficients. Such filters are nowadays state-of-the-art and implemented in most SAR software packages. Particularly interesting are the refined Lee filter (Lee 1981) and the enhanced Lee filter (Lopes et al. 1990). The first uses the local gradient information, considering eight mask directions. Fluctuations along the edges are removed but edges are preserved. The second separates the image into three different regions: homogeneous, heterogeneous with structures, and point targets. The weighting coefficients are different for these three regions, allowing the preservation of structure information without blurring it. Based on this principle, non-local filters have been developed lately, as in (Deledalle et al. 2010). They permit completing the local information with information from similar areas situated at other locations in the image. A similarity criterion has to be defined for finding analogous areas. Such filters are very efficient for preserving structural information, such as roads, in the image, while guaranteeing a very good smoothing of homogeneous areas. However, due to their non-local approach, the results of such filters are still noisy at the boundaries between two different areas, i.e. around edges. Figure 2.5 shows the results of some speckle filtering methods for the same window size.
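Two of the local filters above can be sketched in a few lines: a plain boxcar multilook and a simplified Lee-type weighting (the window size, the scene, and the weighting formula are simplified illustrations, not the published filter implementations):

```python
import numpy as np

# Assumed toy scene: homogeneous unit-intensity area with multiplicative,
# exponentially distributed speckle
rng = np.random.default_rng(1)
scene = np.ones((64, 64))
img = scene * rng.exponential(1.0, scene.shape)

def boxcar(x, w=5):
    """Local mean over a w x w window (multilook)."""
    pad = w // 2
    xp = np.pad(x, pad, mode='edge')
    out = np.zeros_like(x)
    for i in range(x.shape[0]):
        for j in range(x.shape[1]):
            out[i, j] = xp[i:i + w, j:j + w].mean()
    return out

def lee(x, w=5, enl=1.0):
    """Simplified Lee-type filter: blend pixel and local mean according to
    how much the local variance exceeds the expected speckle variance."""
    mu = boxcar(x, w)
    var = boxcar(x**2, w) - mu**2
    cu2 = 1.0 / enl                           # speckle variation coefficient^2
    k = np.clip(1 - cu2 * mu**2 / np.maximum(var, 1e-12), 0, 1)
    return mu + k * (x - mu)                  # k ~ 0 on homogeneous areas

print(img.std(), boxcar(img).std(), lee(img).std())
```

On this homogeneous patch both filters strongly reduce the intensity standard deviation; on a real scene, the adaptive weight k keeps edges and point targets (k close to 1) while smoothing flat areas (k close to 0).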

2.2.4 Geometrical Image Distortion

Due to the side-looking configuration of RADAR acquisitions, not only simple reflections, but also typical geometric distortions appear in the SAR image, caused by the existing relief. These phenomena are imaged schematically in Figure 2.6 and recalled briefly here. The different grey levels represent the intensity of the corresponding effect in the SAR image, bright grey representing high and dark grey very low intensity.



Figure 2.6: Schematic representation of side-look induced geometrical image distortions

Foreshortening (f)

This phenomenon appears principally in mountainous areas, when the illuminated terrain shows a slope smaller than the incidence angle θ. Considering the points A and B in Figure 2.6, situated at the foot and on top of the hill, respectively, their representation in slant range direction appears compressed (green lines). This terrain compression in the slant range representation yields brighter areas. Maximum foreshortening occurs when the slope increases until it becomes equal to the incidence angle. Indeed, the slope is then perpendicular to the RADAR beam, so that the whole slope has about the same range to the sensor. Its representation is then compressed into a few range pixels, leading to a very bright line (yellow point E).

Layover (l)

When the slope continues increasing and becomes greater than the incidence angle, layover occurs. There, the RADAR beam reaches the top of the feature before it reaches its foot. In the resulting image, the foot of the hill (D) is mapped at a farther range than the top (C). Layover therefore induces the summation of several contributions within one range cell. Slope points are situated at the same distance to the sensor as ground points (red lines) and therefore represented at the same position, leading to brighter areas.

Double-bounce (db)

This phenomenon appears principally on man-made objects, as it occurs mostly on vertical structures. There, part of the signal impinging on the vertical wall undergoes a specular reflection that redirects it to the ground. At the ground, a specular reflection occurs again, redirecting the signal towards the sensor (Franceschetti et al. 2002, Dong et al. 1997). This phenomenon occurs also in the opposite direction, i.e. from ground to wall. This is represented on the right of Figure 2.6, whereby the RADAR beam is reduced to a parallel illumination front. The path of the signal that undergoes a so-called double-bounce reflection has the same length for every point on the vertical structure. It corresponds to the range between sensor and foot of the vertical feature. Therefore, every scatterer of the ground and of the wall contributing to the double-bounce is represented at the same range, corresponding to the distance to the foot of the structure (yellow point F in Figure 2.6). This leads to very bright lines in the resulting image.

Such a phenomenon is not rare on man-made structures, and under the same principle, triple bounces or even higher-order reflections can be observed (Auer & Gernhardt 2014, Auer et al. 2011a).

Shadow (s)

The last phenomenon observable in SAR images are shadow areas. They appear behind steep slopes and tall features: hillsides facing away from the sensor are not illuminated (area marked grey under the blue line in Figure 2.6). Therefore, no information returns to the sensor. Those areas appear very dark in the SAR image.
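The slope/incidence-angle conditions described above can be condensed into a small rule-of-thumb classifier. This is a simplified sketch (the function names and thresholds are illustrative, and the shadow condition assumes a flat-earth, single-slope geometry):

```python
def distortion_regime(slope_deg, incidence_deg):
    """Classify the geometric effect for a slope facing the sensor:
    foreshortening below the incidence angle, layover above it."""
    if slope_deg < incidence_deg:
        return "foreshortening"
    if slope_deg == incidence_deg:
        return "maximum foreshortening"
    return "layover"

def back_slope_shadowed(slope_deg, incidence_deg):
    """A slope facing away from the sensor is shadowed when it is steeper
    than 90 degrees minus the incidence angle (simplified geometry)."""
    return slope_deg > 90 - incidence_deg

print(distortion_regime(20, 35), distortion_regime(50, 35),
      back_slope_shadowed(60, 35))
```

For a typical incidence angle of 35 degrees, a 20-degree front slope is foreshortened, a 50-degree front slope folds into layover, and a 60-degree back slope falls into shadow.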

2.2.5 SAR Acquisition Modes

Apart from the different utilizable wavelengths, SAR systems allow several imaging modes, which also influence the spatial resolution. The choice of the best suited acquisition mode depends on the application. SAR systems destined for land-use applications do not need the same spatial resolution as SAR systems designed for finer object analysis. In the following, focus is laid on the modes enabled by the German satellite TerraSAR-X (Fritz et al. 2008), launched in June 2007, whose data are exploited in this work. Other SAR systems offer at least one of those imaging modes, with similar specifications.

StripMap Mode

The StripMap mode is the most basic imaging mode, following the SAR principle described previously. It consists in illuminating the scene with the same incidence angle and fixed range direction for an arbitrary time. This results in a single stripe of arbitrary azimuth length.

The standard TerraSAR-X products have a coverage of 30 km in range direction and 50 km in azimuth direction. Depending on the incidence angle, the ground range resolution goes from 1.7 m up to 3.5 m. The obtained azimuth resolution is 3.3 m. StripMap data are mostly used for mapping and monitoring of land use, but can also be employed for object recognition such as ship detection.

ScanSAR Mode

The ScanSAR mode enables a very large range coverage, by switching slightly the incidence angle of the antenna. This results in multiple stripes along the range direction. During the change of incidence angle, transmitter and receiver are off. Therefore, each stripe is illuminated for a shorter time than for the StripMap mode, leading to a degradation of the azimuth resolution. For TerraSAR-X, the usual coverage with ScanSAR is 100 km in range direction and 150 km in azimuth direction. The ground range resolution varies between 1.7 m and 3.5 m, depending on the incidence angle. The azimuth resolution is about 17 m. A new wide ScanSAR mode is available since 2013 that enables a coverage of 270 km x 200 km with a resolution of 40 m. The ScanSAR mode finds application for large-area monitoring and mapping of wide disasters, such as oil slick, sea ice or glacier monitoring, or mapping of areas devastated by a forest fire. Using interferometry (cf. Section 2.3), topography mapping and deformation monitoring are also possible.


Spotlight Mode

In the Spotlight mode, the incidence angle of the antenna stays fixed, but the antenna is steered in azimuth direction, permitting a longer illumination of the scene. The rotation center of the antenna steering is situated behind the scene at far range. The azimuth resolution is highly enhanced compared to the previous modes, at the detriment of the coverage. For high resolution Spotlight products, the scene coverage is 10 km in range direction and 5 km in azimuth direction.

The ground range resolution is about the same as for the previous modes, varying from 1.5 m to 3.5 m depending on the looking angle. However, the achieved resolution in azimuth direction is 1.1 m. Due to their high resolution in both directions, it is possible to distinguish small objects.

Therefore, Spotlight data are used for the detection and recognition of man-made structures and objects, such as roads, buildings and even vehicles.

Staring Spotlight Mode

The Staring Spotlight mode is available since 2013 on the TerraSAR-X platform (Mittermayer et al. 2014). It is similar to the Spotlight mode, with the exception that the rotation center for the antenna steering is situated at a nearer range position, within the illuminated scene. For a comparable acquisition length, the normal Spotlight mode has a smaller antenna steering. The achieved coverage for the Staring Spotlight mode is about 5 km in range direction and 2.4 - 3.3 km in azimuth direction, for an azimuth resolution up to 0.25 m. With such a resolution, not only can single objects be detected, but also their structure and shape can be analyzed.

Further enhancements of the resolution are planned for the future, in particular within the TerraSAR-X Next Generation mission, that should continue to provide and enhance very high resolution modes up to 0.25 m beyond 2025 (Janoth et al. 2014).

Due to the modes available at the beginning of this work, but also due to the necessary and sufficient resolution for building analysis, the following work focuses on the processing and analysis of Very High Resolution Spotlight data.

2.2.6 Special SAR Techniques

Due to the coherent imaging capability of SAR systems, and considering the polarization of the transmitted electromagnetic wave, several techniques have been developed in order to analyze terrain topography and enhance object detection. This subsection briefly presents the most common SAR techniques. A good overview is given in (Moreira et al. 2013) and (Toutin & Gray 2000). Sections 2.3 and 2.4 explain two of these techniques in more detail, as they are used in this work.

Polarimetry

The technique of polarimetry relies on the analysis of the polarization of the signal. The emitted electromagnetic signal can be described by two components: one in horizontal and one in vertical direction. The targeted emission or reception of those wave planes allows the analysis of the polarimetric properties of specific targets. Depending on the roughness and backscattering properties of their surface, illuminated objects appear brighter or darker in the analyzed polarization state. This allows their distinction and enables the creation of physical object models. Polarimetry is therefore a very convenient method for classification. Current research on polarimetry includes land-use and soil moisture classification (Gosselin et al. 2014), as well as the detection of man-made objects (Bhattacharya & Touzi 2012).

Interferometry (InSAR)

The technique of SAR interferometry analyzes the phase difference between two SAR images taken from sensor positions characterized by slightly different incidence angles. The sensor-object range differs between both sensor positions, leading to different phases of the returned signal. The phase difference is directly related to the range difference. Thus, the surface topography can be deduced with high accuracy for the whole scene. The wide research field of SAR interferometry is yet not limited to the creation of Digital Elevation Models (DEM). Considering different baselines, i.e. different distances between both sensor positions, increasing the time between both acquisitions, or changing the configuration of the sensors with respect to the flight direction, provides many more fields of application. Among them, surface displacement (Hu et al. 2012) and glacier flow analysis (Li et al. 2014), atmosphere estimation (Gong et al. 2011), as well as monitoring of land subsidence (Chaussard et al. 2014) or detection of moving targets (Budillon et al. 2013) are just a few examples. InSAR is explained in more detail in Section 2.3.

Radargrammetry

The first definition of radargrammetry was given by (Leberl 1990): 'Radargrammetry is the technology of extracting geometric information from RADAR images'. It contains all techniques permitting to retrieve geometric information based on measurements on SAR images. Shape-from-shadow analysis from a single SAR image as in (Bolter & Leberl 2000b) is an example of radargrammetric analysis. Lately, the term 'radargrammetry' has evolved to describe the RADAR equivalent of optical stereoscopy. It is under this definition that it is used in this work. It requires at least two images of the same scene. Compared to InSAR, two main differences exist. First, the incidence angles of both radargrammetric acquisitions are very different. Considering the provided incidence angles from TerraSAR-X, the difference of incidence angles reaches from about 10° up to more than 20°. Second, instead of phase differences, disparities between amplitude images are calculated. Indeed, such a large difference between both incidence angles yields important distortions between both represented scenes. Resolution cells of both images contain different contributing scatterers, making phase difference analysis useless. Radargrammetry is therefore an incoherent method, whereas InSAR uses the coherence of the signal between both acquisitions. Radargrammetry is mostly used for the creation of DEMs (Toutin et al. 2013), and is also employed in combination with InSAR for resolving problems in terms of phase ambiguities (Crosetto & Pérez Aragues 2000, Lachaise et al. 2014). This technique is presented in more detail in Section 2.4.

Tomography

The technique of SAR tomography is a relatively new technique, whose applications in forested and urban areas have been booming for about five years. Instead of using a one-dimensional synthetic aperture, tomography exploits a bi-dimensional aperture. Additionally to the synthetic aperture