

7.4. Data reduction

compensate for deviations from a uniform field illumination induced by the optical train and to correct for sensitivity variations among individual detector pixels. In SINFONI, slitlets 1, 16 and 17 in particular suffer from partial vignetting due to the design of the image slicer, which needs to be corrected for.

• By combining the flat and dark frames and thresholding the result, a final bad pixel mask is derived to identify hot or cold pixels, i.e., defective pixels that permanently give either a maximum signal or no signal (a minimal sketch of this step follows the list below).

• Using an existing distortion map and arc lamp data, an initial wavelength solution is generated. The wavelength solution is essentially a wavelength map for the detector x-y image coordinates. It is needed for rectifying the spectral curvature and for correcting the spectral alignment of the individual slitlets, which are not mutually aligned in wavelength space when projected onto the detector due to the construction of the IFU image slicing mechanism itself.

• A 2D-to-3D lookup table is calculated which gives the correspondence between 2D spatial detector pixel coordinates and 3D x-y-λ coordinates in the final 3D data cube.
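To make the bad-pixel step above concrete, the following minimal numpy sketch derives a mask by thresholding a master flat and a master dark. The function name, input frames, and threshold values are illustrative assumptions, not the actual pipeline parameters:

```python
import numpy as np

def bad_pixel_mask(master_flat, master_dark, low=0.5, high=1.5, dark_sigma=5.0):
    """Flag hot/cold pixels by thresholding master flat and dark frames.

    Pixels whose normalized flat response falls outside [low, high], or
    whose dark level deviates by more than dark_sigma from the median,
    are marked as bad (True). Threshold values are illustrative only.
    """
    norm_flat = master_flat / np.median(master_flat)
    cold = norm_flat < low                # dead or heavily vignetted pixels
    hot_flat = norm_flat > high           # abnormally responsive pixels
    dark_dev = np.abs(master_dark - np.median(master_dark))
    hot_dark = dark_dev > dark_sigma * np.std(master_dark)  # hot pixels in the dark
    return cold | hot_flat | hot_dark
```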

Basic reduction steps

Next, the wavelength solution is refined using sky emission lines to correct for small shifts and instrument flexure. After this, removal of cosmic rays follows. This step is carried out on the raw science frames using the L.A.Cosmic removal algorithm developed and described by van Dokkum (2001). It uses a Laplacian edge detection algorithm to identify cosmic rays of arbitrary shapes and sizes by the sharpness of their edges. This method is better suited than algorithms involving median filtering, since those tend to fail when the PSF is smaller than the filter or when cosmic rays affect more than half the filter area (van Dokkum 2001). The calibration files created in the steps above are then used to process the raw science frames and extract the data cubes.
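The L.A.Cosmic algorithm is available in Python through, e.g., the astroscrappy package; a minimal sketch of applying it to a raw 2D frame could look as follows. The gain and read-noise values are placeholders, not SINFONI's actual detector parameters:

```python
import astroscrappy  # Python/Cython port of van Dokkum's L.A.Cosmic

# Laplacian edge detection on a raw 2D science frame (assumed already
# loaded into `raw_frame` as a numpy array).
crmask, cleaned = astroscrappy.detect_cosmics(
    raw_frame,
    gain=2.4,        # e-/ADU (placeholder value)
    readnoise=7.0,   # e- RMS (placeholder value)
    sigclip=4.5,     # Laplacian-to-noise detection threshold
    objlim=5.0,      # contrast limit to avoid flagging real point sources
)
# crmask marks cosmic-ray pixels; `cleaned` has them replaced by interpolation.
```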

Intermediate 3D data cube

Since the subsequent steps and scripts operate on 3D data cubes, the raw 2D science frames are converted into 3D cubes using the wavelength calibration files and lookup tables. The result is a raw science cube that is wavelength calibrated, dark subtracted, divided by the flat field, and corrected for atmospheric dispersion, but still contains all OH lines and the thermal background. Pairs of cubes taken successively on sky as A-B pairs are then used in the OH line removal process (see section 7.4.1 for a detailed description of the OH line treatment and removal), which creates a sky-subtracted cube from each raw cube. The sky-subtracted cubes are then fed into a refined background subtraction routine. The background to be subtracted is derived for each slitlet individually using a linear fit along the spatial direction to remove residual features.
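A minimal sketch of the slitlet-wise residual background fit might look like the following, assuming a cube of shape (nlambda, ny, nx) and a mapping of detector columns to slitlets; all names and shapes are assumptions for illustration:

```python
import numpy as np

def subtract_residual_background(cube, slitlet_of_column):
    """Fit and subtract a linear residual background per slitlet.

    cube: (nlambda, ny, nx) sky-subtracted science cube.
    slitlet_of_column: length-nx array mapping each x column to a slitlet.
    A straight line is fit along the spatial (y) direction for each slitlet
    and wavelength channel, then subtracted.
    """
    out = cube.copy()
    y = np.arange(cube.shape[1], dtype=float)
    for s in np.unique(slitlet_of_column):
        cols = np.flatnonzero(slitlet_of_column == s)
        for k in range(cube.shape[0]):
            profile = np.nanmedian(cube[k][:, cols], axis=1)  # spatial profile
            good = np.isfinite(profile)
            if good.sum() < 2:
                continue
            slope, offset = np.polyfit(y[good], profile[good], 1)
            out[k][:, cols] -= (slope * y + offset)[:, None]
    return out
```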

After all frames have been processed, the science cubes of one observing block are usually combined for a first evaluation of the data quality. This is done because the S/N for our faint high-z sources in individual 600 s exposure cubes is relatively low, and the sources often cannot be detected in individual exposures, except for the brightest ones or when the continuum level is high. Spatial alignment of the single-exposure cubes is done according to the dither offset sequence used for the observations, because the targets are too faint to determine their position in individual exposures from centroid fitting or cross-correlation. The small offsets applied within an OB for dithering or offsets-to-sky are very accurate at the VLT, as they are performed relative to the telescope active optics guide star.

To combine all OB cubes, the relative offsets between them (often taken on different nights) are determined in one of three ways: based on the measured position of the acquisition star observed for each OB and the known offsets that were applied to go on target; by measuring the centroid position of the sources in the individual combined OB cubes, provided the source is sufficiently bright and compact; or from the relative offsets between OBs if they are taken successively without reacquisition in between. In this work, all targets had their OB cubes registered by measuring the centroid position of the source, as this was found to yield the best registration.
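As an illustration of the centroid-based registration, a sketch using a simple flux-weighted centroid and sub-pixel shifts could be as follows (assuming bright, compact sources; the helper names are hypothetical):

```python
import numpy as np
from scipy.ndimage import shift as nd_shift

def register_ob_cubes(cubes):
    """Shift each OB cube so the source centroid matches the first cube.
    cubes: list of (nlambda, ny, nx) arrays. A sketch, not pipeline code."""
    def centroid(img):
        img = np.nan_to_num(img - np.nanmedian(img)).clip(min=0)
        yy, xx = np.indices(img.shape)
        tot = img.sum()
        return (yy * img).sum() / tot, (xx * img).sum() / tot

    ref = centroid(np.nanmedian(cubes[0], axis=0))   # collapse in wavelength
    registered = [cubes[0]]
    for cube in cubes[1:]:
        cy, cx = centroid(np.nanmedian(cube, axis=0))
        # Sub-pixel shift in the spatial plane only (no spectral shift)
        registered.append(nd_shift(cube, (0.0, ref[0] - cy, ref[1] - cx), order=1))
    return registered
```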

At this point, a sky-subtracted master science cube has been produced. This cube is not yet flux-calibrated; that is done in the next step.

Flux calibration and atmospheric correction

Flux calibration and correction for atmospheric transmission are done on the individual single-exposure science cubes on a night-by-night basis. The data of the telluric standard stars and the acquisition stars are reduced in a similar way as the science data.

The standard star spectrum is extracted from the data cube and further processed as follows.

First, an atmospheric template is generated from the standard star spectrum by removing the intrinsic stellar absorption lines and a calculated blackbody spectrum. The star spectrum is divided by a model atmosphere spectrum generated with the atran software package (Lord et al. 1992) for the spectral resolution, airmass, and atmospheric water vapor content corresponding to the telluric star's data. After absorption-line correction, the star spectrum is again multiplied by the model atmosphere. These tasks are done in IRAF. For late O and B stars, the only significant features that need to be removed are the H lines of the Brackett series (e.g. Brγ at 2.166 µm in the K band). For G stars, a solar template matched to the filter and pixel scale of the observation is used, as many more lines need to be removed. The spectrum is then divided by a blackbody spectrum corresponding to the star's temperature, and normalized to a peak transmission of unity.
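A sketch of this final normalization step, assuming the absorption lines have already been removed, can use the Planck function directly (function and variable names are illustrative):

```python
import numpy as np

# Physical constants (SI)
H = 6.62607015e-34   # Planck constant [J s]
C = 2.99792458e8     # speed of light [m/s]
KB = 1.380649e-23    # Boltzmann constant [J/K]

def blackbody(wavelength_um, t_eff):
    """Planck spectrum B_lambda (arbitrary scaling) for temperature t_eff [K]."""
    lam = wavelength_um * 1e-6
    return (2 * H * C**2 / lam**5) / np.expm1(H * C / (lam * KB * t_eff))

def atmospheric_template(wave_um, star_spectrum, t_eff):
    """Divide the line-corrected standard star spectrum by a blackbody of
    the star's temperature and normalize to unit peak transmission."""
    template = star_spectrum / blackbody(wave_um, t_eff)
    return template / np.nanmax(template)
```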

The next step is the flux calibration: the standard star is used to calculate the flux in physical units per ADU (i.e., detector counts, 'analog-to-digital units'). The synthetic broad-band magnitude is calculated from the telluric standard star spectrum in ADU. The photometric zero point is then derived by comparing with the star's magnitude in physical units.
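In other words, the conversion factor follows from the synthetic ADU signal and the catalogued magnitude. A minimal sketch, with all variable names illustrative:

```python
import numpy as np

def flux_conversion_factor(star_counts_adu, star_mag, f_ref):
    """Flux calibration factor from the telluric standard (a sketch).

    star_counts_adu: synthetic broad-band signal of the standard star in ADU,
                     integrated over the filter from its extracted spectrum.
    star_mag:        the star's catalogued magnitude in the same band.
    f_ref:           reference (zero-magnitude) flux density of the band.
    Returns the physical flux density per ADU; multiplying a science cube
    by this factor calibrates it.
    """
    star_flux = f_ref * 10.0 ** (-0.4 * star_mag)   # catalogue flux density
    return star_flux / star_counts_adu

# Equivalent photometric zero point: zp = star_mag + 2.5*log10(star_counts_adu)
```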

Final 3D data cube

In the next step, the individual sky-subtracted cubes are first corrected for atmospheric transmission using the atmospheric template created as described above. These new cubes are then flux-calibrated using the nightly conversion factor derived from the analysis of the standard star flux.
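Schematically, with atm_template the 1D transmission curve and conv the nightly ADU-to-flux factor (both names illustrative), this amounts to:

```python
# Divide each wavelength channel by the atmospheric transmission, then
# scale to physical units; cube has shape (nlambda, ny, nx).
calibrated = sky_subtracted_cube / atm_template[:, None, None] * conv
```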

The final step is the combination of all flux- and transmission-calibrated science cubes of a given target by averaging with sigma-clipping (i.e., iteratively removing data points deviating from the mean, typically clipping at the 2.5σ level). This step also generates a 'sigma cube', which contains the standard deviation of the values of a given pixel in the 3D data cube across all combined cubes. To ensure that the statistics (e.g. the sigma cube) are calculated correctly, the individual calibrated frames from all OBs are combined, rather than combining each OB into an OB cube and then combining the OB-averaged cubes.
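A minimal numpy sketch of the sigma-clipped combination, including the derived sigma cube (the clip level mirrors the text; the iteration count and names are assumptions):

```python
import numpy as np

def combine_cubes(cubes, clip=2.5, n_iter=3):
    """Sigma-clipped average of calibrated cubes plus a 'sigma cube'.

    cubes: array-like of shape (ncube, nlambda, ny, nx). Values deviating
    by more than `clip` standard deviations from the mean are iteratively
    masked before averaging. Returns (mean cube, sigma cube)."""
    data = np.ma.masked_invalid(np.asarray(cubes))
    for _ in range(n_iter):
        mean = data.mean(axis=0)
        std = data.std(axis=0)
        data = np.ma.masked_where(np.abs(data - mean) > clip * std, data)
    mean = data.mean(axis=0).filled(np.nan)
    sigma = data.std(axis=0).filled(np.nan)  # per-pixel scatter across cubes
    return mean, sigma
```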

PSF determination

Determining the PSF is necessary in order to establish the spatial resolution of cubes in different bands if they are to be combined for a spatial analysis. The effective PSF of a science cube is determined from the acquisition stars observed immediately before or after the science object. For this, a broad-band image of the acquisition star is created by averaging all wavelength channels of the reduced cube of the PSF star, using sigma-clipping to exclude the strongest residuals from night sky lines. When a science data cube is a combination of several OBs, the PSF for the combined cube is measured from the shifted and co-averaged PSF cubes of the individual OBs. For the purpose of characterizing the angular resolution of the data, the effective PSF shape is well approximated by a Gaussian for both seeing-limited and AO-assisted SINFONI data (e.g. Förster Schreiber et al. 2009).
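For illustration, the FWHM of such a Gaussian PSF could be measured from the collapsed PSF-star image as follows; the circular-Gaussian assumption and the helper names are simplifications, not the pipeline's actual fitting code:

```python
import numpy as np
from scipy.optimize import curve_fit

def fit_psf_fwhm(psf_image, pixel_scale):
    """Fit a circular 2D Gaussian to the collapsed PSF-star image and
    return the FWHM in arcsec (pixel_scale in arcsec/pixel)."""
    yy, xx = np.indices(psf_image.shape)

    def gauss2d(coords, amp, x0, y0, sigma, bg):
        x, y = coords
        return (amp * np.exp(-((x - x0)**2 + (y - y0)**2) / (2 * sigma**2))
                + bg).ravel()

    p0 = (np.nanmax(psf_image), psf_image.shape[1] / 2,
          psf_image.shape[0] / 2, 2.0, 0.0)
    popt, _ = curve_fit(gauss2d, (xx, yy),
                        np.nan_to_num(psf_image).ravel(), p0=p0)
    sigma = abs(popt[3])
    return 2.0 * np.sqrt(2.0 * np.log(2.0)) * sigma * pixel_scale
```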

OH line removal

As mentioned above, the dominant source of background radiation in the near-IR is the OH airglow emission. Appendix A gives a more detailed description of the telluric OH lines. Here we describe the reduction steps that allow us to minimize the impact of bright OH lines on the data by reducing the residuals left after a simple 'on-off' image subtraction.

In general, if a faint object's emission line falls directly on a bright OH line, no recovery is possible (broad emission lines can be an exception in this respect).

For all other cases, the standard technique for OH-line removal is the subtraction of a 'sky' spectrum, taken from a blank piece of sky, from the 'object' spectrum. For longslit and even multi-object spectroscopy there is always a patch of blank sky in the slit that can serve as the sky spectrum, so this can be done simultaneously. For integral field spectroscopy this is also possible if the object is sufficiently small compared to the FOV. Otherwise, separate sky and object frames with the same exposure time need to be taken in an object-sky-object pattern.

For the subtraction of sky and object frames, one property of the OH emission becomes particularly important: its absolute and relative intensities are temporally variable on timescales of the order of a few minutes. Since typical exposure times for the high-redshift sources in this work are around 10 minutes, the standard object-minus-sky technique is generally not sufficient to eliminate all OH contamination. In addition, as an effect of instrument flexure while tracking, the wavelength calibration of the actual science data frame and the wavelength calibration frame (taken with an arc lamp, usually at daytime) can differ by up to 1/2 pixel for SINFONI, leading to asymmetric, 'P-Cygni'-type residual profiles.

To tackle these effects, the wavelength calibration and sky subtraction used in this work were optimized using the method developed by Davies (2007a), which has been integrated into our data reduction pipeline. The science frames are first interpolated to a common wavelength grid with an accuracy better than 1/30 of a pixel; the wavelength information needed for this step is derived from the night sky lines in the raw science frames. The sky subtraction is then improved by separately scaling the flux of each transition group of OH lines. This reduces the residuals around the emission lines under study.
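The following sketch conveys the idea of group-wise sky scaling; it is a simplified least-squares version, not the published implementation of Davies (2007a):

```python
import numpy as np

def scale_sky_by_oh_groups(obj_spec, sky_spec, group_masks):
    """Scale the sky spectrum separately for each OH transition group
    before subtraction. `group_masks` is a list of boolean wavelength
    masks, one per OH vibrational transition group (1D spectra assumed)."""
    scaled_sky = sky_spec.copy()
    for mask in group_masks:
        # Least-squares scale factor minimizing residuals in this group
        denom = np.nansum(sky_spec[mask] ** 2)
        if denom > 0:
            factor = np.nansum(obj_spec[mask] * sky_spec[mask]) / denom
            scaled_sky[mask] = factor * sky_spec[mask]
    return obj_spec - scaled_sky
```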

From the analysis point of view, even if an OH line is very close to or overlaps with the blue or red wing of an astronomical emission line, the information can be recovered by fitting the emission line using the remaining uncontaminated portion of the line profile. This, however, requires the resolution of the spectrograph to be sufficiently high, i.e. R ≳ 4000, and the residuals to have been minimized as described above. Especially in such cases, integral field spectroscopy often makes it possible to still analyze a part of the object, namely those regions that are blue- or redshifted out of the contaminating OH line.

Despite all sophisticated OH line removal schemes and algorithms, good OH line avoidance was already exercised when selecting the objects, such that all emission lines of interest simultaneously fall in the free spectral ranges between bright OH lines. For all SINS galaxies, the Hα line had been observed previously. The spectroscopic redshift from Hα, z_sp = λ_Hα[nm]/656.46 − 1, was used to calculate the expected wavelengths of the [OIII] and Hβ lines of interest, in order to optimize our target selection for OH line avoidance. However, despite the stringent selection criteria, for some objects one or two emission lines were contaminated by weaker OH lines on their blue or red side, hampering a full spatial analysis.
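For illustration, this selection check can be written in a few lines of Python; the rest wavelengths are approximate vacuum values, and the OH-proximity margin and the example Hα position are arbitrary assumptions:

```python
import numpy as np

# Approximate vacuum rest wavelengths in nm
REST_NM = {"Halpha": 656.46, "Hbeta": 486.27, "OIII_5007": 500.82}

def predicted_lines(z_sp):
    """Expected observed wavelengths of the lines of interest at z_sp."""
    return {name: lam * (1.0 + z_sp) for name, lam in REST_NM.items()}

def clear_of_oh(lam_nm, oh_lines_nm, margin_nm=0.5):
    """True if lam_nm stays at least margin_nm away from every bright OH
    line in oh_lines_nm (the margin is an arbitrary assumption)."""
    return bool(np.all(np.abs(np.asarray(oh_lines_nm) - lam_nm) > margin_nm))

# Example: redshift from a previously measured (hypothetical) Halpha position
z_sp = 1512.3 / 656.46 - 1.0    # z_sp = lambda_obs(Halpha)[nm]/656.46 - 1
lines = predicted_lines(z_sp)   # expected [OIII] and Hbeta positions
```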