
4. Event and Object Reconstruction


4.1.10. Missing Transverse Energy (ETmiss)

Due to the neutrino from the single-lepton decay of the tt̄ pair, a significant amount of missing transverse energy (ETmiss) is expected. The ETmiss uses the energy of all cells in the calorimeter and the reconstructed muon energy in order to check for an imbalance of energy in the transverse plane of the detector. The initial energy in the transverse direction is assumed to be zero, as the protons collide along the longitudinal direction. The energies of all reconstructed objects are summed vectorially into one quantity, and the negative of this sum gives the size and transverse direction of the missing energy. The energy in cells which do not contribute to any reconstructed object is also added to the ETmiss.

The ETmiss variable used for this analysis is a rescaled energy using the same jet energy calibration, for consistency. The tight object definition is required for all objects. A good understanding of the detector performance is essential for understanding the ETmiss, as all cells in the working detector enter its calculation. The performance of the ETmiss algorithm is demonstrated in W decays, owing to the large ETmiss component from the escaping neutrino. Using the W decay selection in both the µ and e channels, the MC to data comparison is shown in Figure 4.10, which was performed on √s = 7 TeV collision data [123].

Figure 4.10.: (Left): W → µν and (right): W → eν ETmiss distributions. Both channels show very good agreement between MC and data. The figures are taken from √s = 7 TeV collision data [123].

The ETmiss is calculated in the (x, y)-plane. It is constructed using all of the physics objects (muons, electrons, and jets) as well as the reconstructed jets with pT less than 20 GeV (soft jets) and clusters which are not associated to any object. The formula is written as:

$E_{x,y}^{\mathrm{miss}} = -\left(E_{x,y}^{\mathrm{muons}} + E_{x,y}^{\mathrm{electrons}} + E_{x,y}^{\mathrm{jets}} + E_{x,y}^{\mathrm{soft\ jet}} + E_{x,y}^{\mathrm{cell\ out}}\right)$, (4.6)

where the total ETmiss is then obtained from the sum in quadrature of the (x, y) components:

$E_T^{\mathrm{miss}} = \sqrt{(E_x^{\mathrm{miss}})^2 + (E_y^{\mathrm{miss}})^2}$. (4.7)
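As a concrete sketch of Eqs. (4.6) and (4.7), the negative vector sum and the quadrature sum can be written as follows. The object values below are hypothetical illustration inputs, not analysis data:

```python
import math

def missing_et(objects):
    """Sketch of the ETmiss calculation (Eqs. 4.6 and 4.7):
    the negative vector sum of all object (Ex, Ey) components,
    followed by the quadrature sum of the x and y results.

    `objects` is a list of (Ex, Ey) tuples in GeV -- muons, electrons,
    jets, soft jets and unassociated ("cell out") energy alike.
    """
    ex_miss = -sum(ex for ex, _ in objects)   # Eq. 4.6, x component
    ey_miss = -sum(ey for _, ey in objects)   # Eq. 4.6, y component
    met = math.hypot(ex_miss, ey_miss)        # Eq. 4.7
    return ex_miss, ey_miss, met

# Hypothetical event: one muon, one electron, two jets (GeV)
ex, ey, met = missing_et([(30.0, 10.0), (-20.0, 5.0),
                          (15.0, -40.0), (-10.0, 5.0)])
```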

4.2. Event Selection

The following event selection is used to improve the signal over background (S/B) ratio in the lepton + jets channel. The event selection is designed to suppress the largest portion of the background contributions while keeping a relatively large amount of signal. The event cuts can be broken into several sections:

Rejection of Collision Background

To reject pile-up and other background events such as cosmic events, a primary vertex with at least four tracks associated to it is required in each event.

Lepton Requirements

Exactly one signal lepton (either an electron or a muon) is required, and it must have fired the trigger. In the e + jets channel, this trigger requirement is applied in both data and MC, and a trigger matching algorithm is used to match the selected lepton with the lepton that fired the trigger. In the µ + jets channel, only the data are required to fire the trigger; a trigger matching algorithm is likewise applied in data, while in MC the trigger and trigger-matching efficiencies are applied directly to match the expected performance.

Furthermore, the selected electrons and muons are required not to overlap in the inner detector.

ETmiss Requirements

Since the tt̄ pair in the single lepton channel decays via one W boson into a charged lepton and a neutrino, a significant amount of ETmiss is expected. Therefore a basic requirement of ETmiss > 20 GeV is applied in every event. Since there is still a large amount of background from QCD multijet production, an additional selection is applied to each channel separately. In the µ + jets channel, a triangular cut of ETmiss + mTW > 60 GeV is required, where mTW is the W transverse mass. In the e + jets channel, a higher ETmiss threshold of ETmiss > 35 GeV is required, along with mTW > 25 GeV. The mTW is defined as:

$m_T^W = \sqrt{2\, p_T^{l}\, p_T^{\nu}\left[1 - \cos(\phi_l - \phi_\nu)\right]}$. (4.8)
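The transverse mass of Eq. (4.8) and the channel-dependent cuts described above can be sketched as follows (the channel labels and thresholds are taken from the text; the function names are illustrative):

```python
import math

def w_transverse_mass(pt_l, phi_l, pt_nu, phi_nu):
    """W transverse mass, Eq. 4.8: sqrt(2 pT^l pT^nu [1 - cos(dphi)])."""
    return math.sqrt(2.0 * pt_l * pt_nu * (1.0 - math.cos(phi_l - phi_nu)))

def passes_met_cuts(channel, met, mtw):
    """Channel-dependent QCD-multijet suppression, as described above.
    Both channels first require ETmiss > 20 GeV."""
    if met <= 20.0:
        return False
    if channel == "mu+jets":
        return met + mtw > 60.0          # triangular cut
    if channel == "e+jets":
        return met > 35.0 and mtw > 25.0
    raise ValueError(f"unknown channel: {channel}")

# Back-to-back lepton and neutrino, each with pT = 40 GeV
mtw = w_transverse_mass(40.0, 0.0, 40.0, math.pi)
```

For a back-to-back lepton-neutrino pair with equal pT, Eq. (4.8) reduces to 2 pT, as the example illustrates.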

Jet Requirements

At least four jets are required per event, as expected from the tree-level tt̄ decay in the single lepton channel, which produces four jets in the final state. To reject a large amount of QCD multijet background, at least one of these jets must be b-tagged using the JetFitterCombNN algorithm, with a b-tagging weight > 0.35. This cut corresponds to the 70 % efficiency working point of the tagger, at which the light-quark rejection factor is 99.
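A minimal sketch of these two jet requirements, assuming each jet is represented by a dict whose (hypothetical) key "w" holds the JetFitterCombNN weight:

```python
def passes_jet_selection(jets, tag_cut=0.35, min_jets=4):
    """Jet requirements from the text: at least four jets, and at least
    one b-tagged jet above the 70% working-point weight cut (0.35).
    The dict layout and key "w" are illustrative assumptions."""
    if len(jets) < min_jets:
        return False
    return any(jet["w"] > tag_cut for jet in jets)

# Hypothetical events
too_few = [{"w": 0.1}, {"w": 0.2}, {"w": 0.1}]
untagged = [{"w": 0.1}, {"w": 0.2}, {"w": 0.1}, {"w": 0.3}]
tagged = [{"w": 0.1}, {"w": 0.2}, {"w": 0.1}, {"w": 0.9}]
```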

Bad Jets

Jets with pT > 20 GeV that fail the jet quality cuts are defined as bad and are not used. Moreover, events which contain at least one such jet are discarded entirely.

Sources of bad jets include noise in certain calorimeter sections, as well as energy spikes in the hadronic end-cap.



LAr Dead FEBs

At the event level, any LAr noise bursts are removed in data. Also in data, some objects were lost due to dead LAr front-end boards (FEBs) present during part of the data taking (periods E-H), corresponding to about 80 % of the data-taking period. Therefore, an appropriate fraction of MC events is treated as lost in this region to match the hole present during data-taking. This hole affects the calorimeter performance and can be seen in the electron and jet reconstruction.
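The scaling of MC to the affected data fraction could be sketched as a random assignment of events, so that the dead-FEB treatment is applied to the same ~80 % fraction as in data. This is a simplified illustration of the idea, not the actual ATLAS procedure:

```python
import random

def split_for_lar_hole(events, lost_fraction=0.8, seed=42):
    """Hedged sketch: randomly flag the same fraction of MC events as
    falling in the dead-FEB data-taking periods (E-H, ~80% of the data),
    so the LAr-hole object veto can be applied to them.

    `events` is any list of event records; returns (affected, unaffected).
    """
    rng = random.Random(seed)  # fixed seed for reproducibility
    affected, unaffected = [], []
    for event in events:
        if rng.random() < lost_fraction:
            affected.append(event)
        else:
            unaffected.append(event)
    return affected, unaffected

affected, unaffected = split_for_lar_hole(list(range(10000)))
```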

Good Runs List

To ensure the detector is performing properly, a Good Runs List (GRL) is used. When the full detector is functioning as it should, the GRL records the selected luminosity blocks (LBs) which are used for physics analyses. The GRL is defined for the entire physics group at ATLAS.
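Conceptually, applying a GRL amounts to checking each event's (run number, luminosity block) pair against the list of good LB ranges. A minimal sketch, with a made-up run number and a simplified dict in place of the actual GRL XML format:

```python
def in_grl(grl, run, lb):
    """Check whether (run, luminosity block) lies in a Good Runs List,
    modelled here as {run: [(lb_first, lb_last), ...]} -- a simplified
    stand-in for the real GRL format."""
    return any(lo <= lb <= hi for lo, hi in grl.get(run, []))

# Hypothetical GRL: one run with two good LB ranges
grl = {180164: [(1, 120), (130, 400)]}
```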

Pileup Re-weighting

In the 2011 data-taking, pileup effects due to out-of-time pileup from bunch trains constitute a large difference in the MC-to-data modelling. As a result, this cannot be ignored and needs to be taken into account. The MC used to measure the top quark mass was produced before data taking in 2011 began, so the MC primary vertex distribution is only a best-guess scenario. For the first part of the data-taking period, the bunch separation was 50 ns. With such a small separation time, overlapping signals can cause problems when reconstructing objects in the detector. In-time pileup is the result of many reconstructed vertices in the same event due to multiple interactions at high instantaneous luminosities. To deal with both out-of-time and in-time pileup, a re-weighting is performed on the MC to match the data periods which make up the 1.04 fb−1.
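The core of such a re-weighting is a per-bin ratio of the normalised data and MC pileup distributions (e.g. in the number of reconstructed vertices); each MC event then carries the weight of its bin. A minimal sketch under that assumption:

```python
def pileup_weights(data_hist, mc_hist):
    """Per-bin pileup re-weighting sketch: w_i = (data_i / N_data) /
    (mc_i / N_mc), so that weighting each MC event by the w of its bin
    reshapes the MC pileup profile to match the one observed in data.
    Bins with no MC entries get weight 0 (they cannot be re-weighted)."""
    n_data, n_mc = sum(data_hist), sum(mc_hist)
    return [(d / n_data) / (m / n_mc) if m > 0 else 0.0
            for d, m in zip(data_hist, mc_hist)]

# Hypothetical two-bin vertex-multiplicity histograms
weights = pileup_weights(data_hist=[2.0, 2.0], mc_hist=[1.0, 3.0])
```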

5. Modelling of Signal and Background