Academic year: 2022
Bachelorarbeit

Deutsches Elektronen-Synchrotron, Hamburg

Sensitivity Study for the B+ → K*(892)+µ+µ− Decay at the Belle II Experiment

Jasper Riebesehl

Gutachter: Prof. Dr. Caren Hagner, Universität Hamburg & Dr. Alexander Glazov, Deutsches Elektronen-Synchrotron

Abstract

With B meson decays it is possible to probe the Standard Model of particle physics without the necessity to investigate high energy scales. In previous analyses, the quark transition b → s ℓ+ℓ− showed signs of New Physics in kinematic observables that deviated from the Standard Model predictions. Among others, the decay B+ → K*(892)+µ+µ− is particularly suited for this search since it is highly suppressed in the Standard Model. New Physics effects could therefore have amplitudes similar to Standard Model effects, which makes them detectable.

In this thesis the sensitivity of the Belle II experiment to this particular decay is determined.

Using simulated particle decays, the number of signal and background candidates is estimated for data sets with various integrated luminosities, which were selected to be relevant for the Belle II data taking period. A new B meson reconstruction is presented which uses the Belle II Analysis Framework and a boosted decision tree algorithm to classify individual events by their likelihood of being a signal event. This boosted decision tree is trained using 29 observables which are derived from the particle decay and show separation between the shapes of signal and background events.

The presented analysis outperforms the predecessor Belle analysis: the reconstruction efficiency is doubled and the expected amount of background is reduced by a factor of eight.


Zusammenfassung

Durch die Untersuchung von B-Mesonen-Zerfällen ist es möglich, die Voraussagen des Standardmodells der Teilchenphysik zu überprüfen, ohne auf hohe Energieskalen ausweichen zu müssen. In vorherigen Analysen von Zerfällen mit dem Quark-Übergang b → s ℓ+ℓ− wurden bereits Anzeichen für Neue Physik in mehreren kinematischen Observablen gefunden. Neben anderen bietet sich der Zerfall B+ → K*(892)+µ+µ− für die Suche nach Neuer Physik an, da er im Standardmodell stark unterdrückt ist. Effekte jenseits des Standardmodells könnten daher ähnliche Amplituden haben, was sie detektierbar macht.

In dieser Arbeit wird die Sensitivität des Belle II Experiments gegenüber diesem Zerfall ermittelt. Mit simulierten Teilchenzerfällen wird die Anzahl von Signal- und Hintergrundkandidaten für Datensätze mit verschiedenen integrierten Luminositäten ermittelt, welche relevant für den Belle II Datennahme-Zeitplan gewählt sind. Eine neue B-Mesonen-Rekonstruktion, die das Belle II Analysis Framework und einen Boosted Decision Tree Algorithmus verwendet, wird vorgestellt. Dieser Boosted Decision Tree wird mit 29 Variablen trainiert, welche aus dem Zerfall berechnet werden und eine Auftrennung zwischen den Signal- und Hintergrundkandidaten zeigen.

Die präsentierte Analyse übertrifft die Vorgängeranalyse von Belle insofern, als die Rekonstruktionseffizienz verdoppelt und die Anzahl der Hintergrundkandidaten um einen Faktor acht verringert werden konnte.


Contents

1. Introduction 1

2. Theory Overview 3
2.1. Flavor Structure in the Standard Model . . . 3
2.2. New Physics Sensitivity of B+ → K*(892)+µ+µ− . . . 4

3. Experimental Setup 7
3.1. Introduction . . . 7
3.2. SuperKEKB . . . 7
3.3. The Belle II Detector . . . 8
3.3.1. Vertex Detectors . . . 8
3.3.2. Central Drift Chamber . . . 9
3.3.3. Particle Identification . . . 9
3.3.4. Electromagnetic Calorimeter . . . 10
3.3.5. KL and Muon Detection . . . 10
3.3.6. Detector Solenoid . . . 10
3.4. Data Taking Period . . . 10
3.5. Belle II Analysis Framework . . . 11
3.5.1. Python Steering . . . 11
3.5.2. Monte Carlo Simulation . . . 12

4. Analysis 14
4.1. B Meson Reconstruction . . . 14
4.1.1. Selection Criteria . . . 14
4.2. Machine Learning . . . 21
4.2.1. Multivariate Analysis Features . . . 22
4.2.2. Best Candidate Selection . . . 24
4.3. Efficiency Estimation . . . 26
4.3.1. Receiver Operator Characteristics . . . 26
4.3.2. Figure of Merit . . . 28
4.4. Results . . . 29

5. Conclusion 32

A. Appendix 35
A.1. Features . . . 35

1. Introduction

On a fundamental level, the Standard Model of particle physics is one of the most successful and most tested theories in physics. Since the second half of the last century it has been built up to the theory it is today. It predicted the existence of the gluon, the massive W± and Z0 bosons, and the charm, bottom, and top quarks with great accuracy before their experimental discovery.

It is, however, not complete, in that it leaves some phenomena unexplained. The most fundamental one, apart from not including gravity, is probably the imbalance of matter and antimatter in the universe, which cannot be explained by the allowed sources of CP violation in the Standard Model alone. Other sources of CP violation must lie beyond the Standard Model, showing that the theory needs to be expanded.

To come closer to the grand goal of particle physics, a theory of everything that explains all phenomena, two complementary experimental approaches exist. To provide confirmation of the existence of new particles, they are produced directly at high energy scales of up to 14 TeV at the energy frontier. One example of this is the discovery of the Higgs boson at the LHC in 2012. This example illustrates how theory, which predicted the existence of the particle as early as the 1960s, and experiment worked together to gain knowledge about the nature of the universe.

On the intensity or precision frontier, hints of new particles are gathered indirectly. Electroweak processes that contain flavor changing neutral currents are highly susceptible to influences of particles that arise in theoretical New Physics models. By detecting deviations from Standard Model predictions it is possible to learn about those particles without creating them directly. This method can be sensitive to new particles with masses up to order O(100 TeV).

The Belle II experiment at SuperKEKB pursues the intensity approach. It is a B factory, creating ϒ(4S) mesons that decay into BB̄ meson pairs without other particles that contaminate the event, providing a clean and controlled analysis environment.

The focus of this thesis lies on a specific B meson decay. The rare B+ → K*(892)+µ+µ− decay has a branching ratio of only B(B+ → K*(892)+µ+µ−) = 9.6 · 10⁻⁷, less than one in a million, which creates the necessity to analyze a large amount of data to be statistically relevant. This decay proves interesting for multiple reasons: it features a forward-backward asymmetry, several kinematic variables that indicate New Physics and, together with its sister decay B+ → K*(892)+e+e−, the possibility to measure whether lepton universality is broken.
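The scale of data needed can be made concrete with a back-of-the-envelope rate estimate. The branching ratio is the one quoted above; the ϒ(4S) production cross section of about 1.1 nb and the assumption that roughly half of the ϒ(4S) decays yield charged B pairs are illustrative inputs, not numbers from this thesis:

```python
# Rough, illustrative estimate of produced signal decays (assumed inputs marked).
BR = 9.6e-7            # B(B+ -> K*(892)+ mu+ mu-), as quoted in the text
sigma_bb_nb = 1.1      # assumed e+e- -> Upsilon(4S) cross section in nb
lumi_invab = 50.0      # target integrated luminosity in ab^-1

# 1 ab^-1 = 1e9 nb^-1, so the number of Upsilon(4S) -> BBbar events is:
n_bb_pairs = sigma_bb_nb * lumi_invab * 1e9

# Assume ~half the pairs are B+B-; each such pair offers two B mesons
# (B+ or the charge-conjugate B-) that can undergo the signal decay:
n_signal = 2 * 0.5 * n_bb_pairs * BR

print(f"BBbar pairs: {n_bb_pairs:.2e}, signal decays produced: {n_signal:.0f}")
```

Even with the full 50 ab⁻¹ data set, only tens of thousands of signal decays are produced before any reconstruction losses, which motivates an efficient selection.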

The goal of this thesis is to create a B meson reconstruction method that provides both good efficiency and purity. With it, the reconstruction efficiency for different integrated luminosities that are meaningful for the Belle II data taking period is estimated.


Outline In section 2, a brief overview of the Standard Model is given, followed by an explanation of the flavor structure in the Standard Model and of the specific decay studied in this thesis. Section 3 contains information about the experimental setup, in particular how the Belle II detector is structured and how the analysis of data is done in the Belle II software framework. In the main chapter, section 4, the data analysis is described; every step is detailed from start to finish to illustrate the process. In section 5 a conclusion is given.


2. Theory Overview

This chapter introduces the decay featured in this thesis and gives an overview of the Standard Model interactions relevant for the B+ → K*(892)+µ+µ− decay. Furthermore, different anomalies of B+ → K*(892)+µ+µ− are presented and discussed.

2.1. Flavor Structure in the Standard Model

The Standard Model of particle physics is a quantum field theory that describes the interaction of elementary particles with three of the four known fundamental forces in the universe. It successfully predicts experimental observations concerning the electromagnetic, strong and weak force. The electromagnetic and the weak force are described in the GWS model, a combined theory named after Glashow, Weinberg and Salam [1, 2].

Figure 2.1.: Elementary particles in the Standard Model1.

Fig. 2.1 is a schematic representation of elementary particles in the Standard Model.

While the strong interaction is not relevant for this decay, its suppression can be explained with phenomena of the electroweak force. Two types of bosons are responsible for the weak interaction: the neutral Z0 and the charged W±. While the Z0 can interact with every particle in the Standard Model, the W± can only interact with charged ones. Only W± bosons are capable of changing quark flavor; the neutral Z0 is not. Because of charge conservation, processes like b → s are not possible at tree level, since b and s have the same electric charge.

1 Figure by MissMJ (Own Work), via Wikimedia Commons.

This is known as the GIM (Glashow, Iliopoulos and Maiani) mechanism, which forbids these Flavor Changing Neutral Currents at tree level. That is to say, the neutral gauge bosons (γ, g, Z0) cannot have flavor-changing interactions with fermions. Such processes can only occur, highly suppressed, in higher order loop diagrams.

Originally it was thought that a change of flavor could only occur within one generation, for example in the process u → d + W+. However, the mass eigenstates of quarks are not the same as the weak interaction eigenstates, which introduces the mixing of quark flavors in the mass eigenstates of the down-type quarks. This translates into a probability for an inter-generation transition like s → u + W− to happen.

This mixing is described by the CKM matrix V_CKM, named after Cabibbo, Kobayashi and Maskawa. It is complex-valued, unitary and has four free parameters. Three of these are mixing angles between different quark generations and one is a CP-violating phase.

$$\begin{pmatrix} d \\ s \\ b \end{pmatrix}_{\text{weak}} = V_{\text{CKM}} \begin{pmatrix} d \\ s \\ b \end{pmatrix}_{\text{mass}} = \begin{pmatrix} V_{ud} & V_{us} & V_{ub} \\ V_{cd} & V_{cs} & V_{cb} \\ V_{td} & V_{ts} & V_{tb} \end{pmatrix} \begin{pmatrix} d \\ s \\ b \end{pmatrix}_{\text{mass}} \quad (2.1)$$

$$|V_{\text{CKM}}| \approx \begin{pmatrix} 0.974 & 0.225 & 0.004 \\ 0.225 & 0.973 & 0.041 \\ 0.009 & 0.040 & 0.999 \end{pmatrix} \quad (2.2)$$

The probability of a quark transition is proportional to the magnitude of the corresponding matrix element squared. Eq. 2.2 is taken from [3], where each element is determined and averaged from different processes.

The diagonal elements correspond to transitions within a single generation (like u → d + W+). Their magnitude is close to one, making these kinds of transitions the most likely. The transition u → s + W+, for example, is less likely since V_us is only 0.225.
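The proportionality between |V_ij|² and the transition probability can be made explicit with the rounded magnitudes from eq. 2.2; the dictionary layout and function name below are purely illustrative:

```python
# Transition probabilities are proportional to |V_ij|^2 (magnitudes from eq. 2.2).
V = {
    ("u", "d"): 0.974, ("u", "s"): 0.225, ("u", "b"): 0.004,
    ("c", "d"): 0.225, ("c", "s"): 0.973, ("c", "b"): 0.041,
    ("t", "d"): 0.009, ("t", "s"): 0.040, ("t", "b"): 0.999,
}

def relative_probability(t1, t2):
    """Ratio of transition probabilities |V(t1)|^2 / |V(t2)|^2."""
    return (V[t1] / V[t2]) ** 2

# u -> s is strongly disfavored compared to u -> d:
ratio = relative_probability(("u", "s"), ("u", "d"))
print(f"P(u->s)/P(u->d) = {ratio:.3f}")

# Rows of a unitary matrix have unit norm; check that this approximately
# holds for the rounded magnitudes:
row_u = sum(V[("u", q)] ** 2 for q in "dsb")
print(f"|Vud|^2 + |Vus|^2 + |Vub|^2 = {row_u:.3f}")
```

The u → s transition comes out roughly 20 times less likely than u → d, and the first row sums to one within rounding, as unitarity demands.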

2.2. New Physics Sensitivity of B+ → K*(892)+µ+µ−

The decay B+ → K*(892)+µ+µ− featured in this thesis contains the flavor changing neutral current process b → s and is therefore forbidden in the Standard Model at tree level. While no tree-level Feynman diagram can be found, it is possible to find diagrams that include a loop. Feynman diagrams with loops are of higher order and usually suppressed compared to tree-level diagrams. The suppression is caused by the high mass of the W± boson and the flavor change between generations. Fig. 2.2 shows three different diagrams, two of which are possible in the Standard Model.

Both Standard Model diagrams contain at least one W± with two interaction vertices where a change of flavor occurs. Fig. 2.2 (a) displays a penguin diagram in which the b̄ quark decays into an up-type anti-quark and a W−. The W− emits a γ or Z0 that decays into the two muons, and afterwards interacts with the up-type anti-quark to form an s̄ quark. In fig. 2.2 (b), two separate W± interact with the b̄ quark and the up-type anti-quark. The W+ decays into a µ+ and a muon neutrino νµ, which decays into the other µ− and the W−. Fig. 2.2 (c) displays a diagram that is possible in the Minimal Supersymmetric Standard Model [4], a New Physics model. It is very similar to (b), only the two W± are exchanged for two charged Higgs bosons.

Since the Standard Model suppression is large and the branching ratio is small, contributions of potential New Physics effects could have an order of magnitude comparable to Standard Model effects.

Figure 2.2.: Feynman diagrams for B+ → K*(892)+µ+µ−: (a) penguin diagram, (b) W+W− box diagram, (c) Minimal Supersymmetric Standard Model diagram. Credit for templates: Simon Wehle, [5]

One of the earliest indications that B+ → K*(892)+µ+µ− is sensitive to New Physics effects was the measurement of the forward-backward asymmetry AFB. It expresses that a different number of decay products have a momentum in the forward direction with respect to the K*+ than in the backward direction. This is caused by two interfering penguin diagrams where only the intermediate boson, a γ or a Z0, is different. For certain regions of the invariant mass of the lepton pair, q², AFB agrees both with the Standard Model prediction and with predictions where certain Wilson coefficients were adjusted. Wilson coefficients are used to theoretically describe the kinematics of the decay in an ansatz called the effective Hamiltonian. The adjustment simulates how New Physics effects could influence the decay [6, 7].

The theoretical uncertainties of AFB are fairly high because of hadronic form factor uncertainties; therefore the decay is described differently. Kinematically, four variables are needed to form a basis that describes the decay. A common choice is q² together with three different angles between the decay products. With several transformations, a basis can be found that is mostly independent of hadronic uncertainties, so a higher accuracy of the theory prediction can be achieved. In recent measurements, most of these observables were in agreement with Standard Model predictions, but the observable P′₅ showed a 3.4σ deviation from the Standard Model [8].

Together with this decay's sister decay B+ → K*(892)+e+e−, it is also possible to test a fundamental principle of the Standard Model. Lepton flavor universality suggests that it should not matter whether the lepton pair in the decay, apart from mass differences, is an electron or a muon pair. Therefore, the ratio

$$R_{K^{*+}} = \frac{\mathcal{B}(B^+ \to K^*(892)^+ \mu^+\mu^-)}{\mathcal{B}(B^+ \to K^*(892)^+ e^+e^-)} \quad (2.3)$$

should be around a value of one. At this point in time, R_{K*+} has not been measured. However, R_{K*0} and R_K, the latter corresponding to B+ → K+ℓ+ℓ−, were measured by the LHCb collaboration and found to deviate from the Standard Model predictions. Both observables were found to be around 2.6σ below the Standard Model prediction [9, 10].

The data that Belle II will collect in the future will make these measurements more precise, and the 5σ significance that is usually required in particle physics to claim a discovery might be reached.


3. Experimental Setup

3.1. Introduction

The Belle II experiment is located at the SuperKEKB particle accelerator in Tsukuba, Japan and will be the direct and upgraded successor of the Belle experiment. The SuperKEKB accelerator is designed to produce B meson pairs for the detector in a clean experimental environment without perturbing amounts of other particles. With a comparatively low main collision energy this experiment pursues the precision approach of particle physics. It is a second generation B factory after Belle at KEKB and BaBar at the PEP-II accelerator.

3.2. SuperKEKB

The accelerator uses the tunnels of its predecessor KEKB and has the same basic principle. It consists of two beam pipes, one low energy ring (LER) for 4.0 GeV positrons and one high energy ring (HER) for 7.0 GeV electrons. The accelerator is depicted schematically in fig. 3.1.

At the interaction point, an electron annihilates with a positron at a center-of-mass energy of √s = √(4 · E_LER · E_HER) ≈ 10.58 GeV ≈ m_ϒ(4S). Since this energy sits on the ϒ(4S) resonance, just above the BB̄ production threshold, the ϒ(4S) is frequently created. At its creation it has almost no momentum and is therefore almost at rest. The ϒ(4S) decays with more than 96 % probability [3] into a BB̄ pair, and in many cases the e+e− collision will exclusively produce a ϒ(4S) without any other particles.

The other feature is the asymmetric beam energy. If a BB̄ pair is created, its rest frame is boosted, giving the B mesons a longer lifetime in the laboratory frame. This allows for a more precise detection of decay vertices, since the B mesons traverse slightly further in the detector.
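The center-of-mass energy and the boost follow directly from the two beam energies given above. A short sketch, neglecting the electron mass (a standard approximation at these energies):

```python
import math

# Center-of-mass energy and boost of the Upsilon(4S) for asymmetric beams.
E_her, E_ler = 7.0, 4.0   # electron / positron beam energies in GeV

sqrt_s = math.sqrt(4 * E_her * E_ler)     # collinear beams, masses neglected

pz = E_her - E_ler                        # net longitudinal momentum (GeV)
E_tot = E_her + E_ler                     # total energy (GeV)
m_inv = math.sqrt(E_tot**2 - pz**2)       # invariant mass of the system
beta_gamma = pz / m_inv                   # boost of the Upsilon(4S) rest frame

print(f"sqrt(s) = {sqrt_s:.2f} GeV, boost beta*gamma = {beta_gamma:.3f}")
```

The invariant mass reproduces the √s ≈ 10.58 GeV from the text, and the resulting boost of βγ ≈ 0.28 is what stretches the B meson flight length in the laboratory.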

The main reason for the upgrade of KEKB to SuperKEKB was to increase the luminosity of the accelerator. By doubling the current in the beam pipes and shrinking the cross section of the interaction region (IR) by a factor of 20, SuperKEKB's instantaneous luminosity of L = 8 · 10³⁵ cm⁻² s⁻¹ will be 40 times larger than KEKB's.

By immensely scaling down the vertical size of the beam, which results in a smaller IR, the problem of beam-induced background arises. The main source is the Touschek effect, where two particles in a bunch scatter and their energies are changed from the nominal bunch energy. It is estimated that this effect will be about 20 times stronger than at KEKB. Together with some other sources, the beam background is expected to be much higher, but as of now it is not known how much impact this will have on the detection efficiency [11, 12].

Figure 3.1.: Schematic view of the SuperKEKB accelerator. Credit: KEK

3.3. The Belle II Detector

The Belle II detector consists of several sub-detectors, each with a specific scope. All of them are arranged in multiple layers going outwards from the IR. The innermost component surrounding the IR is a two-layer silicon pixel detector (PXD). Combined with the silicon vertex detector (SVD), it forms a unit to measure vertex positions. The central drift chamber (CDC) follows around these inner layers, enclosed by the particle identification (PID) system, which consists of an array of time-of-propagation (TOP) counters in the radial direction and the Aerogel Ring-Imaging Cherenkov detector (ARICH) in the forward direction. Finally, an electromagnetic calorimeter (ECL) and the KL and muon detector (KLM) conclude this summary of the sub-detectors. The full detector is depicted schematically in fig. 3.2.

3.3.1. Vertex Detectors

The innermost detector system of Belle II is the pixel detector (PXD). It consists of two cylindrical layers of several planes, arranged in an overlapping fashion. With radii of 14 mm and 22 mm, they are very close to the IR. Each plane is a two-dimensional array of 50 µm thin DEPFET sensors with a size of 50 µm × 50 µm. This small size allows a good vertex resolution despite the expected high beam background. The silicon vertex detector (SVD) is composed of four cylindrical layers of double-sided silicon strip detectors. Just like the PXD's layers, each layer is made of overlapping planes of detectors. The radius of the inner layer is 38 mm and the outer radius is 140 mm, an upgrade from Belle's SVD outer radius of 88 mm. Because of the higher beam background at radii up to about 10 cm, which the CDC is not expected to be able to handle, the larger coverage with silicon detectors is necessary. The purpose of these systems is the measurement of the vertices of decaying particles, mainly B mesons.

Figure 3.2.: Schematic view of the cross section of the Belle II detector, from the top. Credit: [13]

3.3.2. Central Drift Chamber

The central drift chamber (CDC) is a detector for ionizing radiation. It is a cylindrical chamber filled with a 50-50 gas mixture of helium and ethane. Thin charged wires run across it to attract the electrons of gas particles that were ionized by a highly energetic particle flying through. With the time information and the positioning of the wires, a charged track can be reconstructed, including its momentum.

The CDC can also be used to identify particles based on their energy loss while traversing the gas. This is done by comparing the energy loss dE/dx of each individual particle, together with its momentum, to the theoretical energy loss for each type of charged particle in the material, which helps with an overall estimation of the particle's identity.

3.3.3. Particle Identification

As the name suggests, the particle identification system's purpose is the differentiation between distinct particles, in particular between kaons and pions. It consists of two subsystems, the Time-Of-Propagation (TOP) counter and the Aerogel Ring-Imaging Cherenkov detector (ARICH). Both share the same basic idea: measuring the velocity of a particle via Cherenkov radiation. Whenever a highly energetic particle passes through a radiator material, in this case quartz and aerogel, it emits Cherenkov photons in a cone with a specific opening angle. The angle depends on the velocity, so measuring that angle yields the velocity. Together with the momentum information gained by the vertex detectors and the CDC, a mass hypothesis can be calculated. The hypothesis is then compared to the nominal masses from the PDG [3]. The particle associated with the nominal mass closest to the mass hypothesis is the most likely.
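The mass hypothesis follows from the Cherenkov relation cos θ_c = 1/(nβ) combined with m = p·√(1/β² − 1). A minimal sketch; the quartz refractive index of 1.47 is an assumed illustrative value, not a detector specification from this thesis:

```python
import math

# Mass hypothesis from the Cherenkov angle: cos(theta_c) = 1 / (n * beta),
# so beta = 1 / (n * cos(theta_c)) and m = p * sqrt(1/beta^2 - 1).
N_QUARTZ = 1.47  # assumed refractive index, for illustration only

def mass_from_cherenkov(p, theta_c, n=N_QUARTZ):
    """Mass hypothesis (GeV) from momentum p (GeV) and Cherenkov angle (rad)."""
    beta = 1.0 / (n * math.cos(theta_c))
    return p * math.sqrt(1.0 / beta**2 - 1.0)

# Round trip: a kaon (m = 0.4937 GeV) with p = 1 GeV emits Cherenkov light
# at a specific angle, from which its mass can be recovered.
m_K, p = 0.4937, 1.0
beta = p / math.sqrt(p**2 + m_K**2)
theta_c = math.acos(1.0 / (N_QUARTZ * beta))
print(f"theta_c = {math.degrees(theta_c):.1f} deg, "
      f"recovered mass = {mass_from_cherenkov(p, theta_c):.4f} GeV")
```

The recovered mass would then be compared against the PDG kaon and pion masses, and the closer hypothesis wins.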

3.3.4. Electromagnetic Calorimeter

The electromagnetic calorimeter (ECL) is an accumulation of CsI(Tl) crystals attached around the CDC. CsI(Tl) has a short radiation length and a high light output, making it a good scintillation material. When a photon or an electron hits one of the crystals, an electromagnetic shower due to pair production and bremsstrahlung occurs. The intensity of the light of the shower is then measured by photo diodes, resulting in a value for the energy deposition. Because of the large angular coverage of the ECL, photons can be detected with high efficiency and electrons can be identified. Their energy corresponds to the deposited energy.

3.3.5. KL and Muon Detection

The KL and muon detector is the outermost part of the Belle II detector and is made of alternating iron plates and resistive plate chambers (RPCs). RPCs function similarly to the CDC in that a charged particle ionizes molecules along its path and the secondary electrons are amplified and measured. Besides serving as a flux return for the magnetic field, the main purpose of the KLM is to detect KL's and muons. The thick iron plates provide several interaction lengths to make the KL's shower hadronically, and the shower can be detected by the RPCs.

3.3.6. Detector Solenoid

Between the ECL and the KLM, a superconducting solenoid magnet is located. The superconductor used is a niobium-titanium compound embedded in copper (NbTi/Cu). It creates an approximately homogeneous magnetic field of 1.5 T. The tracks of charged particles traversing the field are forced onto a circular path, which allows the measurement of the particle's momentum.
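The momentum measurement rests on the standard relation between curvature radius and transverse momentum, p_T [GeV] ≈ 0.3 · B [T] · R [m] for a unit-charge particle; the example radius is made up:

```python
# Transverse momentum from the curvature of a track in the 1.5 T solenoid field:
# p_T [GeV] = 0.3 * B [T] * R [m] for a particle of unit charge.
B_FIELD = 1.5  # Tesla, as stated in the text

def pt_from_radius(radius_m, b_tesla=B_FIELD):
    """Transverse momentum in GeV for a track of the given curvature radius."""
    return 0.3 * b_tesla * radius_m

# A (hypothetical) track curving with a radius of 1 m corresponds to:
print(f"p_T = {pt_from_radius(1.0):.2f} GeV")
```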

All information in this overview is taken from [12], which offers highly detailed technical descriptions of each component.

3.4. Data Taking Period

At this point in time, the construction of the upgrade from Belle to Belle II and from KEKB to SuperKEKB is done. The data taking period is visualized in fig. 3.3. After thorough testing, the planned start of data taking with all detector components in place is scheduled for late 2018. Over time, the instantaneous luminosity is raised until it reaches its peak around mid 2022. Plateaus in the integrated luminosity are caused by maintenance breaks.

An integrated luminosity of 1 ab⁻¹ will be reached after one year of operation, which is already comparable to what the Belle experiment was able to collect in more than ten years of operation. The goal of Belle II is to collect 50 ab⁻¹ of data by 2025 [11].

Figure 3.3.: Overview of the planned Belle II data taking period. Credit: [11]

3.5. Belle II Analysis Framework

Because of the high luminosity, a reliable and stable software framework is required for the experiment. The Belle II Analysis Framework (basf2) contains almost every piece of software necessary to analyze a B meson decay from start to finish. Running an operation in basf2 requires the user to chain a set of modules along a linear path. The path is executed by running one module after another. Each task, for example reading a data file and importing its contents, has its own module. The modules have read and write access to shared memory called the DataStore. The general schema is depicted in fig. 3.4.

Figure 3.4.: Schematic display of a basf2 path. Credit: [14]

3.5.1. Python Steering

The framework itself is mostly written in object-oriented C++. It is modeled after the Belle Analysis Framework (basf) and utilizes concepts and libraries from other particle physics collaborations, like ROOT¹ from CERN. To simplify the physics analysis, the framework has an interface allowing it to be used with Python. Just as described above, all modules can be loaded in a path that is defined by a simple Python script. In addition to loading modules, this steering file can also contain any Python functionality, making it quite versatile while the syntax remains simple.
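The module/path/DataStore pattern described above can be illustrated with a self-contained toy. This is deliberately not the real basf2 API; all class and function names here are invented for the sketch:

```python
# Toy sketch of the basf2 module/path/DataStore idea (NOT the real basf2 API):
# modules are chained on a path and communicate only via a shared DataStore.

class Module:
    def event(self, datastore):
        raise NotImplementedError

class InputModule(Module):
    """Pretends to read one event per call from a file into the DataStore."""
    def __init__(self, events):
        self.events = iter(events)
    def event(self, datastore):
        datastore["raw"] = next(self.events)

class SelectionModule(Module):
    """Keeps only tracks above a (hypothetical) momentum threshold in GeV."""
    def __init__(self, p_min):
        self.p_min = p_min
    def event(self, datastore):
        datastore["selected"] = [p for p in datastore["raw"] if p > self.p_min]

def process(path, n_events):
    """Run every module on the path once per event, in order."""
    results = []
    for _ in range(n_events):
        datastore = {}           # fresh shared memory for each event
        for module in path:
            module.event(datastore)
        results.append(datastore["selected"])
    return results

path = [InputModule([[0.3, 1.2, 2.5], [0.1, 0.9]]), SelectionModule(p_min=0.5)]
print(process(path, 2))   # [[1.2, 2.5], [0.9]]
```

The key design point mirrored here is that modules never call each other directly; the path fixes the execution order and the DataStore carries all event data between them.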

3.5.2. Monte Carlo Simulation

To get an estimate on signal yields for the decay, a toy study with simulated data is conducted.

In basf2, simulations of particle collisions, also called Monte Carlo (MC) events, are generated using a variety of external packages. Among others, EvtGen [15] and PYTHIA [16] are used to generate a chosen particle production and decay, based on random number generators. Next, the detector response to the generated decay is simulated using the GEANT4 [17] software. The results should mimic the detector response to a real decay as closely as possible. Finally, the particle decay gets reconstructed from the detector response. Tracks from charged particles get fitted, ECL hits get grouped together and several other operations are performed. At this point, the Monte Carlo data is ready for use in a physics analysis.

All these steps can be done locally. Large samples of different kinds of Monte Carlo data are produced in central Monte Carlo campaigns. With each new release version of basf2, a new data set is generated. This data is accessible through the GRID, a global network of clusters for distributed computing.

The analysis in this thesis requires two different types of MC data sets.

Signal MC Only signal events are contained in this set. These are events where a B+B− pair is created and one of the mesons decays via B+ → K*(892)+µ+µ−. This also includes the charge-conjugated decay B− → K*(892)−µ+µ−. The other meson decays generically according to its branching ratios. To have enough training data for the analysis, 1,000,000 events are generated locally.

Table 3.1.: Composition of generic MC

Type          | Content                     | Percentage
------------- | --------------------------- | ----------
Continuum MC  | e+e− → qq̄, q = u, d, c, s   | 69 %
Mixed MC      | ϒ(4S) → B0B̄0                | 16 %
Charged MC    | ϒ(4S) → B+B−                | 15 %

Generic MC This set contains events that are associated with background. On the one hand, these are events where a ϒ(4S) is created but neither meson of the BB̄ pair decays via B+ → K*(892)+µ+µ− (which is not even possible if it is a B0B̄0 pair). On the other hand, events where not a ϒ(4S) but a quark pair is created are also included in the generic MC. These processes follow e+e− → qq̄, q = u, d, c, s, and are called continuum events.

The generic MC is composed of these different backgrounds such that the percentage of each background component mimics the probability of a real collision being of that kind of background. The percentages are listed in tab. 3.1. 1 ab⁻¹ of generic MC events was obtained through the GRID.

¹ https://root.cern.ch/

4. Analysis

The main part of this thesis is a sensitivity study to determine how many signal candidates of the rare decay B+ → K*(892)+µ+µ− can be expected in data sets with various luminosities at the Belle II experiment. The luminosities are chosen to be 0.711, 1, 5, 10, 25 and 50 ab⁻¹.
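Extrapolating candidate counts from the MC sample to these data-set sizes is a simple luminosity scaling. A sketch with made-up example counts (the function name and the 800-candidate figure are illustrative, not results from this thesis):

```python
# Scaling candidate counts observed in an MC sample of luminosity lumi_mc
# to the data-set luminosities considered in this study.
def scale_to_luminosity(n_mc, lumi_mc, lumi_target):
    """Expected candidates at lumi_target, given n_mc candidates in lumi_mc."""
    return n_mc * lumi_target / lumi_mc

luminosities = [0.711, 1, 5, 10, 25, 50]   # ab^-1, as listed in the text
n_bkg_mc, lumi_mc = 800, 1.0               # hypothetical: 800 candidates in 1 ab^-1

expected = {l: scale_to_luminosity(n_bkg_mc, lumi_mc, l) for l in luminosities}
print(expected)
```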

4.1. B Meson Reconstruction

The studies are done using the MC data described in sec. 3.5.2, which is evaluated using basf2. First, the raw data undergoes several reconstruction steps to build up a full B meson decay chain. Every event, both from background MC and signal MC, is processed as follows.

1. The charged muon, kaon and pion tracks are selected.

2. The K*+ is reconstructed via its two main decay channels:

a) K*+ → K+π0, where the π0 is reconstructed from two photons. Each photon is reconstructed by an algorithm that groups ECL hits close to each other and creates possible photon candidates.

b) K*+ → K0S π+, where the K0S is reconstructed by an algorithm that looks for two oppositely charged particle tracks which have the same spatial origin.

3. The B meson is reconstructed from B+ → K*+µ+µ−.

Every step is also done for the equivalent charge-conjugated variant. In further steps, all variables described in section 4.2.1 are calculated and written into a file for analysis use.
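The combination step at the heart of this reconstruction can be sketched as adding four-vectors and computing the invariant mass of the candidate; the four-vectors below are made-up numbers, not values from the MC:

```python
import math

# Toy version of the combination step: build a B candidate from a K*+ and
# two muon four-vectors (E, px, py, pz) and compute the invariant mass.
def add4(*vectors):
    """Component-wise sum of four-vectors."""
    return tuple(sum(c) for c in zip(*vectors))

def inv_mass(v):
    """Invariant mass sqrt(E^2 - |p|^2), clamped at zero for fp safety."""
    e, px, py, pz = v
    return math.sqrt(max(e**2 - px**2 - py**2 - pz**2, 0.0))

# Hypothetical lab-frame four-vectors in GeV:
kstar = (1.60, 0.2, 0.1, 0.4)
mu_plus = (1.90, -0.1, 0.3, 0.3)
mu_minus = (1.87, -0.1, -0.4, 0.3)

b_candidate = add4(kstar, mu_plus, mu_minus)
print(f"candidate invariant mass: {inv_mass(b_candidate):.3f} GeV")
```

In the real analysis, candidates whose invariant mass lies far from the B+ mass would be the combinatorial background that the later selection steps suppress.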

4.1.1. Selection Criteria

Since most events have various combinatorial possibilities to reconstruct each intermediate particle, a set of selection constraints has to be applied. This limits the amount of computational work and saves time. In this first selection step, the amount of combinatorial background is drastically decreased.

PID A particle identification (PID) can be performed with the detector response. For each reconstructed particle, six different probabilities are calculated, each corresponding to one of the »stable« charged particles. These are electrons, muons, kaons, pions, deuterons and protons. Their lifetimes, and therefore their mean free paths, are long enough for them not to decay inside the detector, which makes them stable in this experiment.

To calculate a PID value, the interaction of each particle with the different sub-detectors is taken into consideration individually. For each sub-detector, six likelihoods for the stable-particle hypotheses are determined.

$$\Delta\ln(\mathcal{L}_\alpha) = \ln(\mathcal{L}_{\text{hyp}}) - \ln(\mathcal{L}_\alpha) \quad (4.1)$$

With eq. 4.1, a logarithmic difference in the combined likelihood from all sub-detectors for a specific particle hypothesis is calculated. L_α is the sum of the likelihoods of the different sub-detectors for particle hypothesis α, while L_hyp is the summed likelihood for the hypothesis of the particle itself, which is arbitrarily chosen. The PID value can be extracted by normalizing Δln(L) to a scale from zero to one. This results in a powerful discriminator between opposing particle hypotheses [11].
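One common way to map the log-likelihood difference of eq. 4.1 onto [0, 1] is the logistic function, which equals L_hyp / (L_hyp + L_α). This is an assumed normalization for illustration, not necessarily basf2's exact convention:

```python
import math

# Normalize the log-likelihood difference of eq. 4.1 to a [0, 1] PID value
# via the logistic function (assumed convention, for illustration):
# 1 / (1 + exp(-delta)) == L_hyp / (L_hyp + L_alpha).
def pid_value(log_l_hyp, log_l_alpha):
    delta = log_l_hyp - log_l_alpha          # eq. 4.1
    return 1.0 / (1.0 + math.exp(-delta))

# Equal likelihoods give 0.5; a strong preference for the hypothesis gives ~1:
print(pid_value(-10.0, -10.0))               # 0.5
print(round(pid_value(-5.0, -12.0), 4))
```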

To find the best PID constraint that preserves enough efficiency, the PID information for every reconstructed muon, kaon and pion in a sample of 100,000 signal MC events is analyzed. Only the probability for a muon candidate to be a muon, and similarly for kaons and pions, is considered. Other variants, such as the probability of e.g. a muon candidate being a kaon, are not taken into account.

Since the data is generated, every particle along the decay chain is known. Only those particles that satisfy the following conditions are considered true candidates:

• For K and π:

– The particle candidate is a generated K/π

– The particle candidate's mother particle is a K*(892)+

– The particle candidate's grandmother particle is a B+

• For µ:

– The particle candidate is a generated µ

– The particle candidate's mother particle is a B+

All other particles are neglected because they do not matter for the main decay and are considered background.
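The truth-matching conditions above can be expressed as a simple predicate over MC-truth records. The record layout below is purely illustrative; accessing generator information in basf2 looks different.

```python
# Hypothetical MC-truth records; the field names are illustrative only.
candidates = [
    {"pdg": "K+",  "mother": "K*(892)+", "grandmother": "B+"},
    {"pdg": "K+",  "mother": "D0",       "grandmother": "B+"},
    {"pdg": "mu+", "mother": "B+",       "grandmother": None},
    {"pdg": "mu+", "mother": "J/psi",    "grandmother": "B+"},
]

def is_true_candidate(c):
    """Truth-matching: K/pi must come from K*(892)+ <- B+, mu directly from B+."""
    if c["pdg"] in ("K+", "pi+"):
        return c["mother"] == "K*(892)+" and c["grandmother"] == "B+"
    if c["pdg"] == "mu+":
        return c["mother"] == "B+"
    return False

true_candidates = [c for c in candidates if is_true_candidate(c)]
```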

The efficiency and purity for each constraint are calculated as

efficiency = N(true|selected) / N(true)    (4.2)

purity = N(true|selected) / (N(true|selected) + N(false|selected)).    (4.3)

The index 'selected' indicates that only candidates that fulfill the PID constraint are included, whereas a lack of this index includes all candidates.
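A minimal sketch of how eqs. 4.2 and 4.3 can be evaluated for a single PID cut value, using toy PID values and truth flags rather than the actual MC sample:

```python
def efficiency_purity(pid_values, is_true, cut):
    """Evaluate eq. 4.2 (efficiency) and eq. 4.3 (purity) for one PID cut."""
    selected = [t for p, t in zip(pid_values, is_true) if p > cut]
    n_true_selected = sum(1 for t in selected if t)
    n_true = sum(is_true)
    eff = n_true_selected / n_true if n_true else 0.0
    pur = n_true_selected / len(selected) if selected else 0.0
    return eff, pur

# Toy values: true candidates tend to sit at higher PID.
pid = [0.95, 0.8, 0.3, 0.99, 0.1, 0.7]
truth = [True, True, False, True, False, False]
eff, pur = efficiency_purity(pid, truth, 0.9)
```

Scanning `cut` over the PID range reproduces the kind of curves shown in fig. 4.1.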


CHAPTER 4. ANALYSIS

The result is illustrated in fig. 4.1. The PID constraints for the analysis are chosen to be PIDµ > 0.9, PIDK > 0.6 and PIDπ > 0.6. This choice offers good efficiency and purity, especially for µ candidates.

Figure 4.1.: Visualization of efficiency and purity for different particles along the PID range.

Both the kaon and the pion purity curve have very low values across the range, even at high PID constraints. This is due to what is defined as a true candidate: while there are many true pions and kaons at high PID values, most of them are considered background since they were not produced in the decay B+ → K*(892)+µ+µ−. The purity is therefore significantly lower than it would be in a study not specific to a decay.

Origin and Quality of the Track Each of the primary particles has to fulfill three additional constraints. The track that is fitted to resemble the particle's path has to have a minimal value for the quality of the fit. This is expressed in the variable χprob, which has to satisfy χprob > 0.001. This is not a strong constraint and only ensures that the track is at least remotely close to the actual path.

Secondly, the origin of each primary particle has to be near the interaction point. This is ensured by -1 cm < dr < 1 cm and -5 cm < dz < 5 cm, where dr is the transverse and dz the z distance to the IP. More freedom is left for the z values because the resolution in z-direction is worse than the transverse resolution.

Mass Constraints on Intermediate Particles To further increase the purity of the data set, all intermediate particle candidates are required to have an invariant mass close to the nominal invariant mass. A reconstructed particle is only kept if its invariant mass fulfills the constraint.

Both the reconstruction of K0S and π0 use the invariant mass constraints recommended for the software version that was used. These are

0.45 GeV/c2 < M_K0S < 0.55 GeV/c2    (4.4)

and

0.10 GeV/c2 < M_π0 < 0.16 GeV/c2.    (4.5)

Their invariant mass distribution is displayed in fig. 4.2.

Figure 4.2.: Distribution of the invariant masses of π0 (a) and K0S (b), separated by signal and different background types.

The mass constraint on M_K*(892)+ is examined more closely. With all previously discussed constraints in place, the B meson reconstruction efficiency is calculated as a function of M_K*(892)+. This is displayed in fig. 4.3 (a), while 4.3 (b) displays the invariant mass distribution.

Since a lower and an upper constraint are needed, both constraints are treated independently. To retain a high reconstruction efficiency, the invariant mass constraint is chosen to be

0.74 GeV/c2 < M_K*(892)+ < 1.2 GeV/c2.    (4.6)

Final Requirements on the B Meson Reconstruction At the final reconstruction step, two more constraints are required.



Figure 4.3.: Constraints on the invariant mass of K*(892)+. (a) B meson reconstruction efficiency as a function of M_K*(892)+ for the low and the high constraint; (b) distribution of M_K*(892)+. The region between the dotted lines indicates the range of the invariant mass constraint.

A beam constrained mass variable is defined as

Mbc = √(Ebeam² − |p⃗B|²)    (4.7)

where p⃗B is the reconstructed momentum of the B meson candidate and Ebeam is half the center-of-mass (CM) frame energy of the beam.

Also, the energy difference

∆E = EB − Ebeam    (4.8)

is defined, with EB as the reconstructed energy of the candidate.

A real B meson is always produced with a partner, so each real B meson carries half of the beam energy. Therefore, a real B meson should have a beam constrained mass around the invariant mass of a B meson and an energy difference around zero. These two variables separate signal and background very well and are useful for this analysis. The efficiency as a function of each variable is evaluated, similar to the procedure for M_K*(892)+.
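Eqs. 4.7 and 4.8 can be computed directly from a candidate's CM-frame energy and momentum. The numbers below are toy values chosen to land near the nominal B mass, not actual Belle II beam settings.

```python
import math

def mbc_delta_e(p_b, e_b, e_beam):
    """Beam constrained mass (eq. 4.7) and energy difference (eq. 4.8).

    p_b:    reconstructed B momentum 3-vector in the CM frame [GeV/c]
    e_b:    reconstructed B candidate energy in the CM frame [GeV]
    e_beam: half the CM beam energy [GeV]
    """
    p2 = sum(p * p for p in p_b)          # |p_B|^2
    mbc = math.sqrt(e_beam ** 2 - p2)     # eq. 4.7
    return mbc, e_b - e_beam              # eq. 4.8

# Toy candidate with small CM momentum, as expected for a real B meson.
mbc, de = mbc_delta_e([0.2, 0.1, 0.25], 5.30, 5.2895)
```

For a correctly reconstructed candidate, mbc sits near the nominal B mass and de near zero, which is exactly the behavior the two constraints exploit.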

Again with all previous constraints applied, the requirement that retains 99% efficiency, derived from fig. 4.4, would be Mbc > 5.27 GeV/c2. However, to estimate the final B meson reconstruction efficiency it is necessary to keep background events that do not meet this constraint. Instead, Mbc > 5.22 GeV/c2 is chosen.

The final constraint to be looked at is the one constraining ∆E, following the same procedure as before. The constraint is applied on the absolute value of ∆E. This is displayed in fig. 4.5. Again, to retain efficiency, the constraint is set to |∆E| < 0.5 GeV. Tab. 4.1 summarizes all hard constraints.



Figure 4.4.: Constraint on Mbc. (a) B meson reconstruction efficiency as a function of Mbc; (b) distribution of Mbc. All candidates with values lower than the value at the dotted line are discarded.

Figure 4.5.: Constraint on |∆E|. (a) B meson reconstruction efficiency as a function of |∆E|; (b) distribution of |∆E|. All candidates with values higher than the value at the dotted line are discarded.

Unavoidable Background Veto If the invariant mass of the two muons, q = Mµµ, in the decay is calculated, two distinct peaks are observable in the spectrum. Fig. 4.6 displays these for a background sample that consists of charged B pairs that decay generically.

These are charmonium resonances and originate from the decays B+ → K*(892)+ J/Ψ and B+ → K*(892)+ Ψ(2S), where the charmonium decays into two leptons. The peaks correspond to the masses of the J/Ψ and Ψ(2S), with nominal values of M_J/Ψ = 3096 MeV/c2 and M_Ψ(2S) = 3686 MeV/c2 [3].

These decays have the same signature as the B+ → K*(892)+µ+µ− decays in the sense that they also have beam constrained mass values around the nominal B mass MB = 5279.32 ± 0.14 MeV/c2 [3]. This makes them indistinguishable from real signal events; they therefore need to be filtered



Table 4.1.: Summary of the hard constraints used in the B meson reconstruction.

Description            Value
Particle ID Muon       PIDµ > 0.9
Particle ID Kaon       PIDK > 0.6
Particle ID Pion       PIDπ > 0.6
Track Fit Quality      χprob > 0.001
Track Origin           -1 cm < dr < 1 cm, -5 cm < dz < 5 cm
K0S Mass               0.45 GeV/c2 < M_K0S < 0.55 GeV/c2
π0 Mass                0.10 GeV/c2 < M_π0 < 0.16 GeV/c2
K*(892)+ Mass          0.74 GeV/c2 < M_K*(892)+ < 1.2 GeV/c2
Beam Constrained Mass  Mbc > 5.22 GeV/c2
Energy Difference      |∆E| < 0.5 GeV

Figure 4.6.: The distribution of q2 for a background sample; the veto regions for the unavoidable charmonium background are highlighted in red.

out by applying a constraint on the lepton invariant mass.

In practice, the constraints are applied on the squared invariant mass q2. They are chosen according to tab. 4.2. They reach slightly further than the visible peaks because of potential tails of the charmonium mass distributions.



Table 4.2.: Charmonium Veto

J/Ψ resonance     9.0 GeV2/c4 < q2 < 10.0 GeV2/c4
Ψ(2S) resonance   13.0 GeV2/c4 < q2 < 14.0 GeV2/c4
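Applied as an event filter, the veto of tab. 4.2 amounts to two simple window cuts on q2. A minimal sketch with toy q2 values:

```python
def passes_charmonium_veto(q2):
    """Return True if the candidate survives the charmonium veto (tab. 4.2).

    q2 is the squared dimuon invariant mass in GeV^2/c^4.
    """
    in_jpsi_window = 9.0 < q2 < 10.0    # J/psi resonance region
    in_psi2s_window = 13.0 < q2 < 14.0  # psi(2S) resonance region
    return not (in_jpsi_window or in_psi2s_window)

# Toy q2 values: the ones inside the veto windows are rejected.
kept = [q2 for q2 in [1.5, 9.5, 12.0, 13.5, 18.0] if passes_charmonium_veto(q2)]
```

The windows enclose the squared nominal masses (M_J/Ψ² ≈ 9.6 GeV2/c4, M_Ψ(2S)² ≈ 13.6 GeV2/c4) with margin for the resonance tails.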

4.2. Machine Learning

To further reduce background and isolate the signal events, machine learning methods are applied. A learning algorithm extracts patterns from already classified data and applies them to similar, unclassified data. In this case, a classifier tries to decide for each event whether it is the desired decay.

Boosted Decision Trees The most natural way of dividing a data set into subsets is to look at a certain condition and ask whether each individual element in the set fulfills it. If an element fulfills the condition, it becomes part of subset A, otherwise part of subset B. The condition uses a feature of the data set; in particle physics it may, for example, be whether a certain particle's energy is above a certain threshold.

Figure 4.7.: Schematic display of a decision tree with depth two. The first row in the three upper boxes displays a condition. Depending on the outcome of the evaluation of the condition, either the »true« or »false« path is taken.

In a basic decision tree, multiple of these conditions are chained together into a tree-like shape, hence the name. The features used for the conditions are selected by a greedy algorithm. At each level of the tree, it selects the feature for the condition that produces the best split of the data set, as determined by the separation power the split delivers. The procedure is then repeated for each subset to obtain a tree that divides the original set into multiple subsets. This process is called training. An example of a trained decision tree is depicted in fig. 4.7.

In this case, a binary classification is desired. With each step along the tree, the subsets are refined in such a way that one subset contains more true candidates than the other. At the lowest layer, a good decision tree would yield subsets containing either only true or only false candidates. Generally this is not the case, and a single tree is not a very strong model.

To create a stronger model, multiple decision trees are combined in a process called boosting. To start, a decision tree limited to a depth of a few layers is trained. Also referred to as a weak learner, this model will predict some of the data correctly and some of it incorrectly. This imperfect learner is then passed to the boosting algorithm, which tries to find another weak learner that compensates the shortcomings of the first. Both are combined to form another imperfect learner which is slightly better than its predecessor. This step can theoretically be repeated indefinitely [18].

The combination of many weak learners, also referred to as estimators, results in a model with a lot of separation power, referred to as a strong learner. After the training, the model can be applied to test data that is similar to the training data and has the same features. All data points in the test set are run through the trees, and a probability of being a signal is calculated for each one. In case of a good model, this divides the test set into two groups, with few data points having a probability around 50 %, which would indicate that the model cannot classify these points well.
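The boosting loop described above can be sketched with decision stumps as weak learners. This is a toy AdaBoost-style illustration of the reweighting idea, not the gradient boosting algorithm XGBoost actually implements.

```python
import math

def stump(threshold, x):
    """Weak learner: a one-level decision tree on a single feature."""
    return 1 if x >= threshold else -1

def boost(xs, ys, thresholds, rounds):
    """Each round picks the stump with the lowest weighted error,
    then up-weights the points it misclassified."""
    n = len(xs)
    w = [1.0 / n] * n
    ensemble = []
    for _ in range(rounds):
        def weighted_error(t):
            return sum(wi for wi, x, y in zip(w, xs, ys) if stump(t, x) != y)
        t_best = min(thresholds, key=weighted_error)
        err = max(weighted_error(t_best), 1e-12)  # avoid log(0)
        alpha = 0.5 * math.log((1.0 - err) / err)  # weight of this weak learner
        ensemble.append((alpha, t_best))
        # Reweight: misclassified points gain weight, correct ones lose it.
        w = [wi * math.exp(-alpha * y * stump(t_best, x))
             for wi, x, y in zip(w, xs, ys)]
        s = sum(w)
        w = [wi / s for wi in w]
    return ensemble

def predict(ensemble, x):
    """Strong learner: weighted vote of all weak learners."""
    return 1 if sum(a * stump(t, x) for a, t in ensemble) >= 0 else -1

xs, ys = [0.0, 1.0, 2.0, 3.0], [-1, -1, 1, 1]
model = boost(xs, ys, thresholds=[0.5, 1.5, 2.5], rounds=3)
```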

4.2.1. Multivariate Analysis Features

The data set that has to be classified is the generic background mixed with the signal MC, both of which are discussed in section 3.5.2. The set is split into two subsets of equal size, where the signal MC is marked as signal for the classifier. One of the sets is used to train the classifier while the other one is used as a test set to later obtain the results. This split prevents the classifier from memorizing the training set instead of generalizing to similar events.

To train the boosted decision tree, features from the data set have to be selected. There is no general way to always find the optimal features; they are selected through trial and error. Tab. 4.3 contains every feature used to train the boosted decision tree.

The histograms for each feature, comparing the signal shape to the different background shapes that form the generic background, are displayed in the appendix in fig. A.1.

Continuum Suppression To identify and suppress continuum events (e+e− → qq̄, q = u, d, s, c), several sets of variables exist. The idea is to use the topological differences between continuum decays and real BB̄ events. If a ϒ(4S) is created, it has only a small momentum and is approximately at rest; it therefore decays almost isotropically into a BB̄ pair. If instead something other than a ϒ(4S) is created, the daughter particles have enough momentum that the decay forms two opposite jets.

For particles in the event with momenta p⃗i (i = 1, ..., N), the thrust axis T̂ is defined as the unit vector along which their total projection is maximal. The magnitude of the thrust axis

Tmax = ( Σ_{i=1}^N |T̂ · p⃗i| ) / ( Σ_{i=1}^N |p⃗i| )    (4.9)



Table 4.3.: Features used for classifier training

Feature                        Description
∆E                             ∆E = EB − Ebeam
M_K*(892)+                     Mass of the reconstructed K*(892)+
σ_M_K*(892)+                   Significance of M_K*(892)+
Vertex P Value                 Measure for the quality of the decay vertex fit. The vertex is calculated using the charged tracks of the decay. A low value indicates that the charged tracks originate close to each other.
Thrust B                       Magnitude of the B thrust axis
cos(θ_B,ROE)                   Cosine of the angle between the thrust axis of the B meson and the thrust axis of the Rest of Event (ROE)
Cleo Cones                     See below
Modified Fox-Wolfram Moments   See below
R2                             Reduced Fox-Wolfram moment
M²miss                         Squared missing mass
ET                             Transverse energy
E_ROE                          Energy of unused tracks and clusters in the ROE
M_ROE                          Invariant mass of unused tracks and clusters in the ROE
E_extra,ROE                    Extra energy from ECL clusters in the calorimeter that is not associated to the given particle
nE_extra,ROE                   Extra energy from neutral ECL clusters in the calorimeter that is not associated to the given particle

can be defined as a derived quantity [19]. It serves as a measure for the general direction of multiple particles. The B thrust axis is therefore the thrust axis given by all daughter particles of the B meson.
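For a fixed axis, eq. 4.9 reduces to a simple projection sum. The sketch below evaluates it for a given trial axis rather than performing the full maximization over all unit vectors:

```python
import math

def thrust_magnitude(momenta, axis):
    """Eq. 4.9 for a fixed unit axis T-hat.

    The true thrust is the maximum of this quantity over all axes;
    here the axis is supplied by the caller.
    """
    num = sum(abs(sum(a * p for a, p in zip(axis, mom))) for mom in momenta)
    den = sum(math.sqrt(sum(p * p for p in mom)) for mom in momenta)
    return num / den

# Two back-to-back "jets": along the jet axis the thrust is maximal (1.0).
jetlike = [(0.0, 0.0, 1.0), (0.0, 0.0, 2.0), (0.0, 0.0, -1.5)]
t_jet = thrust_magnitude(jetlike, (0.0, 0.0, 1.0))
```

A jet-like (continuum) event yields a value near one along its jet axis, while an isotropic BB̄-like event gives a markedly smaller value, which is exactly the separation the continuum suppression variables exploit.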

To put this concept into use, the CLEO collaboration developed the Cleo Cones in 1996 as variables for charmless B decays. Nine cones around the thrust axis of the B candidate are defined, with apex angles each 10° larger than the one before. The momentum flow xi (i = 1, ..., 9) through each cone is defined as the scalar sum of the momenta of all tracks going through that cone. Each event is folded such that the forward and backward directions are combined. The momentum flow through each cone for an isotropic decay should look different from the flow of a jet-like decay [20].

Another way to quantify the shape differences in the decays is the definition of Fox-Wolfram moments. They are obtained by

Hl = Σ_{m,n} |p⃗m||p⃗n| Pl(cos θmn) / Evis²    (4.10)

where p⃗i is the momentum of the i-th particle, Pl the l-th order Legendre polynomial, θmn the angle between particles m and n, and Evis the total visible energy in the event [21]. The ratio Rk = Hk/H0 is often used as a feature; especially R2 shows a lot of separation power between continuum and BB̄ events.

To further refine these moments, each calculation can be done using not all but only specific particles. The particles are separated into groups depending on whether they are a daughter of the signal B meson (s) or a member of the rest of event (o). The modified Fox-Wolfram moments Hl^ss, Hl^so and Hl^oo, also known as KSFW moments, can be calculated using only the specified particle groups. Several of these are used in the training of the classifier.
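Eq. 4.10 can be evaluated directly for low orders; the sketch below handles l ≤ 2 with explicit Legendre polynomials and toy events:

```python
import math

def legendre(l, x):
    """Legendre polynomials P_0, P_1, P_2."""
    return {0: 1.0, 1: x, 2: 0.5 * (3.0 * x * x - 1.0)}[l]

def fox_wolfram(l, momenta, e_vis):
    """H_l per eq. 4.10, summing over all particle pairs (m, n)."""
    h = 0.0
    for pm in momenta:
        for pn in momenta:
            am = math.sqrt(sum(c * c for c in pm))
            an = math.sqrt(sum(c * c for c in pn))
            cos_theta = sum(a * b for a, b in zip(pm, pn)) / (am * an)
            h += am * an * legendre(l, cos_theta)
    return h / e_vis ** 2

# Jet-like toy event (two back-to-back particles): R2 = H2/H0 is close to 1,
# while an isotropic event gives a much smaller R2.
jetlike = [(0.0, 0.0, 1.0), (0.0, 0.0, -1.0)]
r2_jet = fox_wolfram(2, jetlike, 2.0) / fox_wolfram(0, jetlike, 2.0)
```

This reproduces the qualitative behavior described above: continuum (jet-like) events cluster at large R2, BB̄-like (isotropic) events at small R2.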

Correlation It is important that none of the features of the signal MC data set correlate too strongly with the beam constrained mass Mbc. This variable is used to obtain the number of signal and background candidates for the efficiency estimate; it therefore must not carry any bias introduced by the classifier, which could be caused by correlated features. A biased classifier would leave more background candidates tagged as signal candidates with an Mbc value close to the nominal B mass, resembling a signal candidate.

Linear correlation can be quantified using the Pearson correlation coefficient. It is defined as

ρxy ≡ Σ_{i=1}^n (xi − x̄)(yi − ȳ) / √( Σ_{i=1}^n (xi − x̄)² · Σ_{i=1}^n (yi − ȳ)² )    (4.11)

where x̄ is the mean value of data set x.
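Eq. 4.11 translates directly into code; a minimal sketch:

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient per eq. 4.11."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sum((a - mx) ** 2 for a in x)
    sy = sum((b - my) ** 2 for b in y)
    return cov / math.sqrt(sx * sy)

# Perfectly linearly correlated toy data gives rho = 1.
rho = pearson([1.0, 2.0, 3.0, 4.0], [2.0, 4.0, 6.0, 8.0])
```

The value ranges from −1 (perfect anti-correlation) through 0 (no linear correlation) to +1 (perfect correlation), which is the scale used in the heat map of fig. 4.8.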

Fig. 4.8 is a heat map showing all pairwise correlations between the classifier features. It is symmetric since the arguments of the Pearson coefficient are commutative, and each feature has a maximum correlation of one with itself. The bottommost row displays the correlation of Mbc with every other feature. The strongest correlation observable here is the one with ∆E, being ρ_∆E,Mbc = 0.22. This compromise has to be made because ∆E has by far the greatest separation power of all features. All other features do not have any significant linear correlation with Mbc.

Two features that do not correlate linearly may still correlate non-linearly; this case is not covered by this simple evaluation. It should nevertheless indicate how correlation affects the final classification result.

4.2.2. Best Candidate Selection

Figure 4.8.: Linear correlation between individual features.

For most events there are multiple candidates that fit the discussed constraints, although there can be at most one true candidate per event. To drastically reduce this combinatorial background, displayed in fig. 4.9, a good criterion has to be found.

Figure 4.9.: Number of candidates per event for signal and generic background.

One possibility is to select only the candidate of a given event whose ∆E value is closest to zero. Since correctly reconstructed B mesons should have ∆E ≈ 0, this is likely to select the one true candidate out of the several false ones. This can only be done because ∆E and Mbc are not too strongly correlated. Alternatively, one can select only the candidate with the highest probability of being a signal candidate, as output by the boosted decision tree. Both options were tested against each other, leaving the ∆E candidate selection as the superior variant.
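The ∆E-based best candidate selection can be sketched as a per-event minimization of |∆E|; the (event_id, ∆E) layout below is illustrative:

```python
def best_candidates(candidates):
    """Keep, per event, the candidate whose Delta-E is closest to zero.

    candidates: iterable of (event_id, delta_e) pairs (illustrative layout).
    Returns a dict mapping event_id -> selected delta_e.
    """
    best = {}
    for event, de in candidates:
        if event not in best or abs(de) < abs(best[event]):
            best[event] = de
    return best

# Toy candidates: event 0 has three candidates, event 1 has two.
cands = [(0, 0.30), (0, -0.05), (0, 0.12), (1, -0.20), (1, 0.25)]
picked = best_candidates(cands)
```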

4.3. Efficiency Estimation

As mentioned in sec. 4.2.1, a training and a test data set of equal size are created. The classifier is trained on the corresponding set and afterwards applied to the test set. Each candidate in every event now has a calculated probability of being a signal candidate. To further reduce background candidates, the best candidate selection described above is applied.

The package XGBoost [22], which implements a boosted decision tree method, is used in this thesis since it runs fast and reliably. To confirm the good performance of XGBoost, it is compared to different classifiers.

4.3.1. Receiver Operating Characteristics

To quantify the performance of the classifier against other classifier models, or against the same model with different parameters, a Receiver Operating Characteristic (ROC) curve is defined. For a classifier, the true positive rate (TPR) is plotted against the false positive rate (FPR) at different threshold settings. A perfect classifier's ROC curve would include the point (0, 1), since it symbolizes no false positives while the TPR is one. The area under the curve, evaluated on a test data set, is a measure for the quality of the classifier.

A variation of this is a purity versus efficiency curve where both quantities are plotted against each other. Here a perfect classifier retains 100% efficiency while the purity is also 100%.
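Such a purity versus efficiency curve can be built by scanning thresholds on the classifier output; a minimal sketch with toy scores and labels:

```python
def purity_vs_efficiency(scores, labels, thresholds):
    """Scan classifier-output thresholds; labels: 1 = signal, 0 = background.

    Returns a list of (efficiency, purity) points, one per threshold.
    """
    n_sig = sum(labels)
    curve = []
    for t in thresholds:
        selected = [l for s, l in zip(scores, labels) if s >= t]
        tp = sum(selected)                          # true positives
        eff = tp / n_sig                            # signal efficiency
        pur = tp / len(selected) if selected else 1.0
        curve.append((eff, pur))
    return curve

# Toy classifier output: signal events tend to score higher.
scores = [0.1, 0.4, 0.35, 0.8, 0.65, 0.9]
labels = [0, 0, 1, 1, 0, 1]
curve = purity_vs_efficiency(scores, labels, [0.0, 0.5, 0.85])
```

Tightening the threshold trades efficiency for purity, which is exactly the trade-off displayed in fig. 4.10.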

To justify the choice of classifier model in this thesis, different models are compared via the purity versus efficiency curves displayed in fig. 4.10. All classifiers were trained on the same data and features. XGBoost visibly performs best.

To approximate the best parameters of XGBoost for this particular problem, multiple XGBoost classifiers are tested against each other. The parameters that were varied are the maximum depth a single tree/weak learner can have and the number of training iterations (estimators). The result is displayed in fig. 4.11.

Beyond a certain number of estimators and a certain maximum depth, the scores are very similar. With a score of 0.974, the classifier with a maximum depth of four and 800 estimators came out on top. This configuration, however, produces an output that appears to be correlated with Mbc. Therefore, a slightly less powerful but also less correlated configuration was chosen: a classifier with maximum depth three and 300 estimators.



Figure 4.10.: Purity versus efficiency curves obtained from different classifier models (XGBoost, AdaBoost, neural net, extra trees, random forest).

Figure 4.11.: Heat map displaying the scores for different parameter settings (number of estimators and maximum depth) of XGBoost.

Separation Power To visualize the separation power the classifier achieves, the classifier output is plotted in a histogram for both classes, signal and background.

Fig. 4.12 displays this after the best candidate selection is applied. A separation between the two classes is visible, and for high XGBoost outputs true signal events dominate over background events.

With an optimized classifier for this problem found, the ideal constraint on the classifier output for estimating the B meson efficiency is calculated.
