

2.4. DATA REDUCTION

and some commissioning data (for training purposes) were kindly provided by F. Eisenhauer. SPRED was solely used for the data reduction of the 2005 data. After the commissioning of SINFONI, ESO started to develop a reduction pipeline for common use with the front-ends Gasgano and EsoRex, in the same way as the pipelines of other ESO instruments. This pipeline is based on SPRED, thus the basic structure is the same, with just a few small differences. Both pipelines are actively supported and regularly improved. In addition to SPRED, several IDL programs have been developed by R. Davies that remedy shortcomings encountered with the SINFONI data, e.g. by delivering a better sky subtraction (Davies, 2007) or bad pixel correction. The improved sky subtraction program has recently been implemented in the ESO pipeline. As it is possible to use SPRED and the ESO pipeline in parallel, for each step the pipeline that yielded better results was chosen to reduce the data from the 2006+ runs. Initially the majority of steps were done with SPRED; later, mainly the ESO pipeline was used. A detailed description of the data reduction steps and the recipes used by the ESO pipeline can be found in the ESO SINFONI pipeline user manual (Modigliani, 2009).

QFitsView is a program for displaying and manipulating datacubes. It was used throughout the data reduction procedure and beyond for the inspection of intermediate results and of the final object datacubes. The most common features can be used interactively. Being based on DPUSER (http://www.mpe.mpg.de/~ott/dpuser/), it also provides a large number of commands and the ability to execute scripts.

In the remainder of this chapter, the basic data reduction steps performed on the SINFONI data are described.

Bad line removal

The bias level of each line of the detector is estimated from the unilluminated pixels at the rim of the detector, four at each end of the line. If one or more bad pixels are present among these eight pixels per line, the bias level for this line is overestimated.

The measured bias is subtracted from each line during the data processing at detector level, resulting in raw frames with some dark stripes. The effect increases with exposure time and has to be corrected before starting the data reduction. A DPUSER script kindly provided by S. Gillessen, and later an IDL script provided by ESO, were used for this purpose.
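The correction amounts to re-estimating each line's residual offset from the good reference pixels alone. The following is a minimal numpy sketch of this idea, not the actual DPUSER or IDL script; the function name and interface are hypothetical:

```python
import numpy as np

def correct_bad_lines(frame, ref_bad_mask, n_ref=4):
    """Remove residual per-line bias stripes from a raw frame.

    Assumes the bias was already subtracted on the detector, so in an
    affected line the good reference pixels sit below zero by the same
    offset as the dark stripe. `ref_bad_mask` flags bad reference
    pixels (True = bad). All names are hypothetical.
    """
    corrected = frame.astype(float)  # astype returns a copy
    for y in range(frame.shape[0]):
        # Reference pixels: n_ref at each end of the line.
        ref = np.concatenate([frame[y, :n_ref], frame[y, -n_ref:]])
        bad = np.concatenate([ref_bad_mask[y, :n_ref], ref_bad_mask[y, -n_ref:]])
        good = ref[~bad]
        if good.size:
            # The median of the good reference pixels estimates the residual
            # offset of this line; subtracting it removes the stripe.
            corrected[y, :] -= np.median(good)
    return corrected
```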



Dark frames

Dark frames are necessary to identify bad pixels, and they need to be subtracted from the science exposures whenever the corresponding sky frame is not subtracted.

For each object exposure time, three dark exposures with the same integration time were taken during the daytime calibrations by ESO. These three exposures were combined into a single frame, thereby eliminating possible cosmic rays.
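The cosmic-ray rejection relies on the fact that a hit is very unlikely to strike the same pixel in more than one exposure, so a pixel-wise median suppresses it. A minimal sketch, assuming a plain median combination (the actual pipeline recipe may use a different rejection scheme):

```python
import numpy as np

def combine_darks(darks):
    """Combine dark exposures of equal integration time into a master dark.

    `darks` is a list of 2D arrays; the pixel-wise median of the three
    exposures rejects cosmic-ray hits present in only one frame.
    """
    return np.median(np.stack(darks, axis=0), axis=0)
```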

Flatfields

For each combination of grating and platescale used for the science observations, a set of five exposures of a halogen lamp and five lamp-off exposures was taken during the daytime calibrations. The lamp-off frames were subtracted from the lamp-on frames and the resulting frames were combined.
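In code, the subtract-then-combine step could look as follows. This is a minimal sketch; the final normalisation to unit median is an assumption and not stated above:

```python
import numpy as np

def make_master_flat(lamp_on, lamp_off):
    """Build a master flatfield from paired lamp-on/lamp-off exposures."""
    # Subtract each lamp-off frame from its lamp-on counterpart to remove
    # the thermal and bias background, then median-combine the differences.
    diffs = [on - off for on, off in zip(lamp_on, lamp_off)]
    flat = np.median(np.stack(diffs, axis=0), axis=0)
    # Normalisation to unit median response (an assumption, see text).
    return flat / np.median(flat)
```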

Bad pixel masks

The dark frames and the flatfield were used to generate bad pixel maps, which were then combined into a master bad pixel map. Generating bad pixel maps is not trivial, in particular for the 25 mas scale, where the slitlet edges are often marked as bad. This can perturb the wavelength calibration. The number of pixels flagged as bad was adjusted by changing certain parameters like the detection threshold. The final master bad pixel maps had around 1×10⁵ bad pixels out of ∼4.2×10⁶ pixels in total, i.e. ∼2.4%. This seems a lot, but the majority of bad pixels consisted of the unilluminated outermost columns and rows, one very large cluster and a few small clusters of bad pixels, and a few bad columns; apart from these regions the fraction of bad pixels was only about 1%.
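The underlying thresholding idea can be sketched as follows; the three threshold parameters stand in for the tunable detection parameters mentioned above and are purely illustrative:

```python
import numpy as np

def flag_bad_pixels(master_dark, master_flat,
                    dark_sigma=5.0, flat_lo=0.5, flat_hi=1.5):
    """Flag hot pixels in the dark and deviant pixels in the flat.

    A minimal sketch of the thresholding idea; the pipeline recipes use
    more elaborate criteria, and the thresholds here are illustrative.
    """
    # Hot (or cold) pixels deviate strongly from the dark level.
    med, std = np.median(master_dark), np.std(master_dark)
    hot = np.abs(master_dark - med) > dark_sigma * std
    # Dead or non-linear pixels have a deviant relative response.
    response = master_flat / np.median(master_flat)
    dead = (response < flat_lo) | (response > flat_hi)
    return hot | dead  # master bad pixel map (True = bad)
```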

Detector distortion

The curvature of the detector was determined by recording a series of spectra from a continuous light fibre, which was moved perpendicular to the slitlets. This was also done during the daytime calibrations for all gratings used during the night.

The resulting ∼75 spectra were co-added and the distortion was computed by tracing the fibre spectra and fitting a 2D polynomial which transforms the coordinates of the distorted frame to undistorted coordinates. The relative distance (in pixels) between the slitlets was determined simultaneously.
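Such a transformation can be obtained by ordinary least squares once the traced fibre positions provide pairs of distorted and undistorted coordinates. A minimal sketch, with polynomial degree and interface chosen for illustration only:

```python
import numpy as np

def fit_distortion(x_dist, y_dist, x_undist, degree=3):
    """Fit a 2D polynomial mapping distorted to undistorted x coordinates.

    `x_dist`, `y_dist` are the traced (distorted) fibre positions and
    `x_undist` the corresponding undistorted coordinates, all 1D arrays.
    """
    # Design matrix of all terms x^i * y^j with i + j <= degree.
    terms = [x_dist**i * y_dist**j
             for i in range(degree + 1)
             for j in range(degree + 1 - i)]
    design = np.stack(terms, axis=-1)
    coeffs, *_ = np.linalg.lstsq(design, x_undist, rcond=None)
    return coeffs  # evaluate with the same term ordering to undistort
```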


Wavelength calibration

The wavelength calibration files are spectra of neon and argon arc lamps, taken during daytime for each combination of grating and platescale used during the night. The spectrum was background subtracted, flatfielded, and corrected for bad pixels and distortion. The Ne and Ar lines were identified by cross-correlating the spectrum with a reference spectrum constructed from a line list and convolved with a Gaussian. A polynomial was then fitted along each column to determine the dispersion coefficients, which were smoothed and used to generate a wavelength calibration map. At the same time, the edges of the slitlets were determined from the positions of the brightest arc emission lines.
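The line identification step can be sketched as follows: a synthetic spectrum is built from the catalogue wavelengths as a sum of Gaussians, and the peak of the cross-correlation with the observed arc spectrum gives the zero-point offset. This is a minimal sketch under an assumed first-guess wavelength grid; all names are illustrative:

```python
import numpy as np

def arc_zero_point(arc_spectrum, line_list, wave_guess, sigma_pix=1.5):
    """Pixel offset between an arc spectrum and a reference line list.

    `wave_guess` is the first-guess wavelength of each pixel (1D,
    increasing); `line_list` holds the catalogue Ne/Ar wavelengths.
    """
    pix = np.arange(arc_spectrum.size)
    # Synthetic reference: a unit Gaussian at each catalogue line.
    reference = np.zeros(arc_spectrum.size)
    for w in line_list:
        p = np.interp(w, wave_guess, pix)  # expected pixel position
        reference += np.exp(-0.5 * ((pix - p) / sigma_pix) ** 2)
    cc = np.correlate(arc_spectrum - arc_spectrum.mean(),
                      reference - reference.mean(), mode="full")
    # Position of the peak relative to zero lag gives the pixel offset.
    return cc.argmax() - (arc_spectrum.size - 1)
```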

Telluric correction

A telluric standard star was always observed directly before, after, or in between the galaxy observations. The telluric frames were sky subtracted, flatfielded, corrected for distortion and bad pixels, and wavelength calibrated. Due to the short exposure time, a simple subtraction of the sky frame worked well without leaving sky residuals. The datacube was reconstructed and a 1D standard star spectrum was extracted by averaging all spectra within a certain aperture. As the spectra of mid- to late-type B stars are featureless except for the Brγ absorption line, this line was removed in all telluric spectra using the task splot in IRAF, by fitting a Voigt profile to the line and subtracting the fit. Active or star-forming galaxies often show Brγ emission, but this line is usually redshifted to wavelengths which are not affected by the Brγ absorption of the telluric star. In order to remove the continuum shape of the star, the spectrum of the telluric star was divided by a blackbody spectrum of a temperature that corresponds to the spectral type of the star. The final result, after continuum normalisation, is the transmission of the atmosphere as a function of wavelength (see Fig. 2.5).

Figure 2.5: Atmospheric transmission as a function of wavelength, created from a B9V star spectrum.
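The continuum-removal step amounts to a division by the Planck function for the effective temperature of the star (of order 10,000 K for a late-type B dwarf). A minimal sketch; the final normalisation here is a crude stand-in for the proper continuum normalisation mentioned above:

```python
import numpy as np

def telluric_transmission(star_spectrum, wave_um, t_eff):
    """Divide a telluric star spectrum by a Planck function of its T_eff.

    `wave_um` is the wavelength in microns; the Brackett-gamma line is
    assumed to have been removed already. The result still requires a
    proper continuum normalisation.
    """
    h, c, k = 6.626e-34, 2.998e8, 1.381e-23  # SI constants
    lam = wave_um * 1e-6                      # wavelength in metres
    planck = (2.0 * h * c**2 / lam**5) / np.expm1(h * c / (lam * k * t_eff))
    transmission = star_spectrum / planck
    return transmission / np.median(transmission)  # crude normalisation
```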

Spatial resolution and Strehl ratio

The PSF star exposures were calibrated in the same way as the telluric star. The Strehl ratio was computed as a function of wavelength by comparing the PSF star image with a theoretical PSF for a telescope with the primary and secondary mirror diameters of the VLT. The combined cube was collapsed along the wavelength direction (i.e., at each spatial position the intensity was integrated within a certain wavelength range). The spatial resolution, i.e. the FWHM of the image of the star, was measured by fitting a 2D Gaussian to the image (these values are given in Table 2.2). For the dynamical modelling, however, the normalised 2D image of the PSF star was used instead of a 2D fit.
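With both images normalised to the same total flux, the Strehl ratio is the ratio of the observed to the diffraction-limited peak intensity. A minimal sketch, assuming the theoretical PSF has already been computed and sampled like the observed image:

```python
import numpy as np

def strehl_ratio(psf_image, ideal_psf):
    """Strehl ratio from an observed and a diffraction-limited PSF image.

    Both images must share the same pixel scale; `ideal_psf` is the
    theoretical PSF for the VLT primary/secondary mirror geometry.
    """
    observed = psf_image / psf_image.sum()  # normalise to unit total flux
    ideal = ideal_psf / ideal_psf.sum()
    return observed.max() / ideal.max()     # peak ratio = Strehl ratio
```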

Reconstruction of the object datacubes

The galaxy exposures were, like the PSF star exposures, sky subtracted, flatfielded, corrected for distortion and bad pixels, and wavelength calibrated. The bad pixels were corrected using a 3D Bezier interpolation. Then the datacubes were reconstructed using the relative slitlet positions. The telluric absorption lines were removed by dividing each spectrum in the cube by the normalised transmission spectrum of the atmosphere. The final step was the combination of all object datacubes. Fig. 2.6 shows the final 3D datacube of NGC 4486a (cf. Chapter 4).

Figure 2.6: 3D illustration of the final datacube of NGC 4486a. A 1D spectrum is overlaid along the λ direction for better orientation.
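Once cube and transmission spectrum share the same wavelength grid, the telluric removal is a single broadcast division, e.g. (a minimal sketch, assuming the wavelength axis comes first):

```python
import numpy as np

def remove_telluric(cube, transmission):
    """Divide every spectrum of a datacube by the atmospheric transmission.

    `cube` has shape (n_wave, ny, nx); `transmission` has length n_wave
    on the same wavelength grid (see telluric_transmission above).
    """
    return cube / transmission[:, np.newaxis, np.newaxis]
```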

Improvement of the sky subtraction and the wavelength calibration

The individual datacubes were examined for residual sky lines using QFitsView. In many cases significant residual sky lines or even “P-Cygni”-shaped residuals were present in the spectra. “P-Cygni” residuals are the result of instrumental flexure due to the movement of the instrument with time, which causes a small shift of the wavelength scale. Thus, when a sky spectrum with such a shifted wavelength scale is subtracted from an unshifted object spectrum, each sky line is insufficiently subtracted on one side and overcorrected on the other. A significant improvement of the sky subtraction could be achieved using the method of Davies (2007).

With this technique the sky emission lines were scaled as a function of wavelength such that the sky background was optimally matched. For this purpose separate object and sky datacubes were created, which were calibrated in the usual way, but instead of subtracting the sky background from the object, only darks were subtracted from both object and sky. A blackbody function was fitted to the thermal background of the sky and subtracted from both the object and the sky cube. The positions of the strongest OH lines were calculated in both the object and the sky cube, from which the shift in wavelength was determined and used to correct the wavelength scale of the sky spectrum. This step was omitted when no P-Cygni residuals were present.
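The core of the procedure, heavily simplified, is to shift the sky onto the object wavelength scale and fit a scale factor before subtracting. The sketch below is not Davies' actual algorithm, which scales groups of OH lines separately as a function of wavelength and works on the line regions; it only illustrates the shift-and-scale idea with one global shift and one global scale:

```python
import numpy as np

def subtract_scaled_sky(obj_spec, sky_spec):
    """Shift and scale a sky spectrum before subtraction (cf. Davies 2007).

    A strongly simplified sketch: one global shift from the
    cross-correlation peak and one global scale factor, instead of
    wavelength-dependent scaling of individual OH line groups.
    """
    o = obj_spec - np.median(obj_spec)
    s = sky_spec - np.median(sky_spec)
    cc = np.correlate(o, s, mode="full")
    i = cc.argmax()
    # Parabolic refinement of the peak for a sub-pixel shift estimate.
    if 0 < i < cc.size - 1:
        denom = cc[i - 1] - 2.0 * cc[i] + cc[i + 1]
        frac = 0.5 * (cc[i - 1] - cc[i + 1]) / denom if denom != 0 else 0.0
    else:
        frac = 0.0
    shift = (i + frac) - (obj_spec.size - 1)
    # Resample the sky spectrum onto the shifted pixel grid.
    pix = np.arange(sky_spec.size)
    sky_shifted = np.interp(pix - shift, pix, sky_spec)
    # Least-squares scale factor minimising the subtraction residuals.
    scale = np.dot(obj_spec, sky_shifted) / np.dot(sky_shifted, sky_shifted)
    return obj_spec - scale * sky_shifted
```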

Nevertheless, the shifts between the object cubes, in particular cubes from different
