
Report Bachelor Project Physics and Astronomy

Size: 15 EC

To b or anti-b

Tagging Calibration for Bs0 → Ds∓K±

Author:

Emma Schepers 10735267

Supervisors: Niels Tuning, Marcel Merk

Conducted between April 1st and July 1st


Abstract

A necessary ingredient within the Standard Model for the current matter dominance is CP violation. In the decay Bs0 → Ds∓K±, CP violation in the interference between decay with and without mixing can be observed, because both the Bs and the B̄s can decay into the same final state and can oscillate between the two flavours. To measure CP violation in this decay, the production flavour of the B meson has to be reconstructed by Flavour Tagging algorithms. For an accurate result these algorithms should be calibrated on a flavour-specific channel, in this case Bs → Dsπ. In this thesis the first calibration for run 2 data of 2015 and 2016 with the new tool EspressoPerformanceMonitor is presented. The two parameters of the linear calibration function have been calculated for 1 SS and 5 OS taggers and for possible combinations. From this it can be seen that the total tagging power (the effective tagging efficiency) has improved from 5.8% in run 1 to 7.2% in run 2. This means a relative statistical sensitivity improvement of 20% for all time-dependent CP violation measurements at LHCb. The next step is to determine how to implement this in the Bs0 → Ds∓K± analysis and to choose which kind of combination should be used.

Popular science summary

In a proton-proton collision in the particle accelerator at CERN, among other particles, Bs and anti-Bs particles can be produced. These mesons subsequently decay into lighter charged particles that can be detected. In the case of Bs0 → Ds∓K±, the Bs particle can decay directly, or first turn into an anti-Bs and then decay. In the interference of these two reactions we can observe violation of CP symmetry. CP symmetry states that an antiparticle (with opposite charge) in mirrored coordinates behaves the same as a particle in normal coordinates.

To measure this we need to know whether the initial particle was a Bs or an anti-Bs. This is done with Flavour Tagging algorithms. These 'taggers' study the collision information and reconstruct from it the 'tag' and the probability that this decision is wrong.

To determine the flavour as precisely as possible we have to calibrate our algorithms. We do this by looking at another decay in which the Bs does not decay into the same particles as the anti-Bs, in this case Bs → Dsπ. By comparing the outcome of the algorithms with the true flavour we can construct a calibration function. In this work the first calibration has been performed for the run 2 data (2015 and 2016) of the Bs → Dsπ channel. The parameters found can be used in the search for CP violation with Bs → Ds∓K±. In addition, it was calculated that the percentage of particles with a correct tag has risen from 5.8% in run 1 to 7.2% in run 2. This means that measurements of CP violation become relatively 20% more sensitive!


Contents

1 Introduction
  1.1 Outline
2 Theoretical Background
  2.1 Matter dominance and CP violation
  2.2 Bs → Ds±K∓ and the angle γ
  2.3 Flavour Tagging
    2.3.1 Used Taggers and particle selection
    2.3.2 Calibration
  2.4 Combination of Taggers
3 LHC and LHCb
  3.1 The LHCb Experiment
    3.1.1 Tracking System
    3.1.2 Particle Identification System
4 Approach
  4.1 sWeights
  4.2 EPM
    4.2.1 B meson oscillation and Decay Time resolution
    4.2.2 Goodness Of Fit parameters
  4.3 Combination of Taggers in EPM
  4.4 Run 1 vs Run 2 optimization
  4.5 Compare Performances on run 1 data: validation of EPM
5 Results
  5.1 Individual Taggers
  5.2 Combination of Taggers
  5.3 Asymmetry parameters
  5.4 Uncertainties on lifetime resolution parameters
  5.5 Calibration plots
6 Discussion
7 Conclusion
References
8 Appendix
  8.1 Calibration parameters and performances 2015 vs 2016


Acknowledgements

I would like to thank a couple of people for making this project a very informative and interesting experience. First of all, my supervisor Niels Tuning for his enthusiasm, useful suggestions and feedback, and the title of this thesis. My second examiner Marcel Merk for his never ending energy and support during this project. I would also like to thank my daily supervisor Sevda Esen for all the help, from technical issues to interpretation of the results, and Lenneart Bel, who wasn't even assigned as my supervisor but was nevertheless always there to help me. Michele Veronesi and Agnieszka Dziurda for providing the NTuples and the lifetime resolution parameters. Furthermore I would like to thank the Flavour Tagging group for their help with the technical details of this project, but also for their very valuable feedback on the results during the flavour tagging meetings. Lastly, an informal thank you to my office mates, who made this project a social success as well.


Chapter 1

Introduction

The Standard Model of elementary particles was developed in the 1960s as a unified theory to explain the origin of matter and the interactions between particles. Over the years this model has been expanded, tested and verified with great success by experiments at the Large Hadron Collider at CERN.

In the SM all elementary particles and their interactions are described in terms of relativistic quantum field theory, following the principles of local gauge invariance and symmetries. The particles are divided into fermions (spin 1/2) and bosons (integer spin). The class of fermions consists of three families of doublets, as presented in figure 1.1. Each doublet consists of an up-type quark (u, c, t) and a down-type quark (d, s, b), and a charged lepton (e−, µ−, τ−) with its neutral counterpart (νe, νµ, ντ). The bosons, on the other hand, are the force-mediating particles of the three fundamental forces that describe the particle interactions. Photons are the carriers of the electromagnetic force, gluons mediate strong interactions in Quantum Chromodynamics, and the weak force couples left-handed doublets via W± and neutral Z bosons. The last fundamental boson is the recently discovered scalar Higgs, whose mechanism is responsible for the masses of the W±, Z and the fermions [1]. In figure 1.1 an overview of the fundamental particles is given.

The Standard Model is based on several symmetries, including the symmetry of Parity (P), the discrete operation of space inversion; Charge (C), the inversion of charge; and Time (T), the reversal of time. Both the strong force and the electromagnetic force are symmetric under P, C and T individually. The weak force, however, is not. Furthermore, experiments show that the weak interaction is not symmetric under the combined CP symmetry either [3]. The origin of CP violation in the Standard Model will be described within the CKM framework in the Theoretical Background of this thesis.

CP violation is important because it is a necessary ingredient in the creation of the matter-antimatter imbalance in our Universe. The LHCb experiment is designed and optimized to measure CP violation in the weak decays of the B system, which exhibits the largest effects.

Although CP violation is predicted in the Standard Model, the predicted amount is not enough to explain the observed matter-antimatter asymmetry in the current Universe. The main question is therefore whether the predicted CP violation from the electroweak phase transition is the only source.

For this thesis we will focus on time-dependent CP violation measurements in neutral Bs0 meson decays. These neutral mesons can oscillate between the particle and antiparticle states in a process called mixing. Furthermore, both Bs0 and B̄s0 can decay into the same final state. Therefore the decay can proceed with and without mixing, causing the two decay amplitudes to interfere and lead to CP violation. The tree-level decays Bs0 → Ds±K∓ are particularly interesting because of the large interference between the two amplitudes.


Figure 1.1: Overview of the elementary particles in the Standard Model [2].

To be able to study CP violation, the flavour of the Bs meson at production should be known. This is the topic of this thesis. In the LHCb detector the relatively heavy mesons only travel a few cm before decaying. Unlike the final state particles, the initial Bs cannot be detected and should therefore be reconstructed from the final state particles. Determining whether the original meson contained a b or b̄ quark is the process of Flavour Tagging.

Over the years different Flavour Tagging algorithms have been developed at LHCb that use the event information from the detector to deduce the tag. Furthermore, a mistag probability, the probability for the decision to be wrong, is assigned to each decision. This probability is computed using Boosted Decision Trees and Neural Networks. Before using these tools for analysis they should be trained on decays in which the initial flavour is known. Besides training on Monte Carlo simulated data, flavour-specific decays can also be used. Flavour specific means that the Bs0 and B̄s0 decay in different ways, such that the initial flavour can be obtained directly from the final states. These flavour-specific decays can also be used to "calibrate" the final tagging results. Since the mistag probability directly affects the asymmetry measurements, the calibration is essential to compute it as accurately as possible. For the calibration of Bs0 → Ds±K∓ the flavour-specific decay Bs0 → Ds+π− is used, because this decay is very similar in terms of kinematics.

The main subject of this thesis is to find a calibration for the Bs0 → Ds±K∓ analysis using Bs0 → Ds+π− data from run 2 (2015 and 2016) of the LHCb experiment. The final outcome will be a set of calibration parameters and combined tagging performances that can be used in the Bs0 → Ds±K∓ analysis.


1.1 Outline

First the Theoretical Background of CP violation and Flavour Tagging will be explained in chapter 2. Next the LHC and LHCb experiment will be discussed in a little more detail to understand where the data comes from. After that the used tools and technical details of the calibration will be introduced in the Approach section. Finally, the results will be presented and discussed and recommendations for future analysis will be made.


Chapter 2

Theoretical Background

For a more detailed and mathematical description of the Standard Model the readers are directed to [1]. Since the concept of CP violation is the main motivation of this thesis, this topic will be discussed in more detail below. After that the principles of Flavour Tagging and Calibration as well as the algorithms used in this analysis will be introduced.

2.1 Matter dominance and CP violation

In the Standard Model each elementary particle has its antimatter counterpart. An antiparticle is essentially the same as a particle, but with all additive quantum numbers inverted. (Anti)particles combine into several states: two-quark states are called mesons and three-quark states baryons. All these states also have antimatter partners. In the early Universe the amounts of matter and antimatter were equal. To explain how the present matter dominance has developed, three conditions formulated by Sakharov must have been present [4], namely: baryon number violation, C and CP violation, and departure from thermal equilibrium. For this thesis we study the second condition. The concept of CP or Charge-Parity symmetry states that the laws of physics for an antiparticle in 'mirror' coordinates should be the same as for a particle in the original spatial coordinates. If this symmetry were conserved, then for every reaction that results in a net number of baryons there should also be a conjugate reaction resulting in a net number of antibaryons, so that particle-antiparticle asymmetries could never occur. Therefore, CP violation is a crucial concept to explain matter dominance within the Standard Model.

In the SM CP violation can only occur in the weak interactions of quarks and leptons. For quarks the mass eigenstates differ from the weak eigenstates, resulting in a different coupling. The strength of the flavour-changing interactions between quarks was originally explained by the Cabibbo hypothesis, before the charm quark was discovered. This hypothesis can be naturally extended to the three-generation quark model, parametrized by the Cabibbo-Kobayashi-Maskawa (CKM) matrix:

\begin{pmatrix} d' \\ s' \\ b' \end{pmatrix} = \begin{pmatrix} V_{ud} & V_{us} & V_{ub} \\ V_{cd} & V_{cs} & V_{cb} \\ V_{td} & V_{ts} & V_{tb} \end{pmatrix} \begin{pmatrix} d \\ s \\ b \end{pmatrix} \equiv V_{\mathrm{CKM}} \begin{pmatrix} d \\ s \\ b \end{pmatrix}

In the commonly used Wolfenstein parametrization [5] this matrix becomes:

V_{\mathrm{CKM}} = \begin{pmatrix} 1 - \lambda^2/2 & \lambda & A\lambda^3(\rho - i\eta) \\ -\lambda & 1 - \lambda^2/2 & A\lambda^2 \\ A\lambda^3(1 - \rho - i\eta) & -A\lambda^2 & 1 \end{pmatrix}


To observe CP violation the matrix should have a complex component. In Wolfenstein’s parametrization this means that η should be non-zero.

Quarks do not propagate as free particles; they always hadronize in different combinations. Therefore the final states of the weak quark interactions are mesons or baryons. The corresponding observable quantities are the mass eigenstates of the quarks, which makes it possible to measure all the CKM elements separately. The CKM matrix is defined to be unitary, which implies that Σ_i V_ij V_ik* = δ_jk and Σ_j V_ij V_kj* = δ_ik. These conditions can be represented as triangles in the complex plane. The area of the triangle corresponds to the amount of CP violation in the system [6]. Because all the properties of the triangle (sides, angles and the apex (ρ, η)) can be calculated, the triangle is overdetermined. This allows for very precise measurements and cross-checks [7]. For these measurements kaon and B-meson systems are used.

The CKM matrix also allows for flavour-changing transitions between neutral mesons via the exchange of pairs of W bosons. This means that neutral mesons like Bs0 = |b̄s⟩ can oscillate between the particle and antiparticle state. This is called mixing [8]. It adds the decay time dimension to the analysis and allows for different types of CP violation.

In general three types of CP violation are distinguished.

• Direct CP violation, or CP violation in decay:

|A_f| ≠ |Ā_f̄|  or  |A_f̄| ≠ |Ā_f|

There is a difference in amplitude for the decay of particles and antiparticles into the possible final states. For charged B mesons this is the only possible type of CP violation.

• Indirect CP violation, or CP violation in mixing:

|q/p| ≠ 1

There is a difference in the transition rate of B oscillating to B̄ and the other way around. This asymmetry can be written as:

a_{sl} = \frac{|p/q|^2 - |q/p|^2}{|p/q|^2 + |q/p|^2}    (2.1)

• CP violation in the interference of decay (P0 → f) and mixing (P0 → P̄0 → f). In this case there are two ways to decay into the final state, so the different possible amplitudes interfere, resulting in CP violation. This occurs when Im λ_f ≠ 0, where:

\lambda_f = \eta_f \, \frac{q}{p} \, \frac{\bar{A}_{\bar{f}}}{A_f}    (2.2)

This last category is the one observable in Bs → Ds±K∓ and is therefore of the most interest for this thesis. Because both the Bs and the B̄s can decay into both final states, this decay is also sensitive to the CKM angle γ, as discussed in the next section.


2.2 Bs0 → Ds±K∓ and the angle γ

The earlier proposed Wolfenstein parametrization can also be described in the following way that is convenient to localise weak phase differences in Feynman diagrams:

V_{\mathrm{CKM,Wolfenstein}} = \begin{pmatrix} |V_{ud}| & |V_{us}| & |V_{ub}|\,e^{-i\gamma} \\ -|V_{cd}| & |V_{cs}| & |V_{cb}| \\ |V_{td}|\,e^{-i\beta} & -|V_{ts}|\,e^{i\beta_s} & |V_{tb}| \end{pmatrix}    (2.3)

The phase γ is one of the angles of the triangle and can be determined from Bs → Ds±K∓ decays (see figure 2.1). Although the Ds±K∓ final state is not a CP eigenstate, CP violation in the interference between a decay with and without mixing (the third type) can still occur, since both Bs0 and B̄s0 decay into the same final state. Including the mixing of Bs mesons there are two contributing amplitudes which differ not only in magnitude but also in relative phase γ; see figures 2.2 and 2.3 for the interfering Feynman diagrams. Using the amplitudes of both Bs0 → Ds−K+ and Bs0 → B̄s0 → Ds−K+, the decays into the CP conjugate state Bs0 → Ds+K− and Bs0 → B̄s0 → Ds+K−, together with the same four possibilities for the B̄s, the angle γ can be extracted:

Bs0 → Ds−K+ and Bs0 → B̄s0 → Ds−K+;  B̄s0 → Ds−K+ and B̄s0 → Bs0 → Ds−K+    (2.4)

Bs0 → Ds+K− and Bs0 → B̄s0 → Ds+K−;  B̄s0 → Ds+K− and B̄s0 → Bs0 → Ds+K−    (2.5)

The relative phase between the interfering amplitudes in (2.4) is γ + δ and in (2.5) it is γ − δ, where δ is the strong phase difference. Adding these two phases gives 2γ, from which γ follows.

Figure 2.1: The Unitary triangle appropriate for Bs→ D±sK∓ decays [9].

As discussed in the introduction, measurements like this depend on the initial quark content of the B meson reconstructed with Flavour Tagging. How the Flavour Tagging procedure works will be explained in the next section.


Figure 2.2: The two interfering diagrams for Bs→ Ds−K + [10].

Figure 2.3: The two interfering diagrams for Bs→ Ds+K− [10].

2.3 Flavour Tagging

Flavour Tagging is the process that determines the initial flavour of the B meson produced in the proton-proton collision at LHCb. Since the B meson itself will not be detected, the initial flavour has to be reconstructed in an indirect way. Over the years several algorithms have been developed, using different final particles and states and different computing techniques, such as Boosted Decision Trees (BDTs) and Neural Networks (NNs). Two main cases can be distinguished: same side (SS) tagging and opposite side (OS) tagging, see figure 2.4.

In the proton collision a bb̄ pair can be produced. One of these b quarks might hadronize into a Bs0 meson while the other b quark hadronizes into another b hadron. For the purpose of Flavour Tagging the information from both b quarks can be used: Same Side (SS) tagging looks for particles that accompany the signal Bs0, and Opposite Side (OS) tagging uses particles originating from the non-signal b decay, see figure 2.4.

2.3.1 Used Taggers and particle selection

The OS kaon, electron and muon single track taggers all have the same general design. First, a set of selections is applied to identify the best tagging particle. The particles should originate from b → Xµ− or b → Xe− decays, or from the b → c → s chain in the case of the OS kaon. By identifying the charge of the tagging particle, the charge of the B meson can be determined. The particles are selected and identified based on different subsets of event information, including (transverse) momentum, the minimum angular distance between the tagging particle and all other tracks, and many more.


Figure 2.4: Overview of the available SS and OS taggers.

The full set of selection criteria can be found in figure 2.5. In case more than one particle is selected, the one with the highest transverse momentum is used [11].

In the case of Bs → Ds±K∓ the OS Vertex Charge tagger is used in addition to the single track taggers. For the Vertex Charge tagger the secondary vertex of the decay of the tagging B is reconstructed from two tracks with a minimal transverse momentum of 0.15 GeV/c. For each candidate the probability that it comes from the b decay is estimated from the quality of the fit, and the one with the highest probability is used as the tagging candidate. From this candidate the charge of the corresponding B is calculated as the sum of all track charges, weighted by their transverse momentum to the power κ, where the value of κ is optimized for the highest tagging power [7].
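As an illustration of this weighted vertex charge, a minimal sketch is given below; the exact track selection, the treatment of the fit probability and the optimized value of κ used at LHCb are not taken from this thesis, so the function and its default value are assumptions for illustration only.

```python
import numpy as np

def vertex_charge(track_charges, track_pt, kappa=0.4):
    """p_T-weighted charge of the reconstructed tagging-B vertex.

    track_charges: array of track charges (+1 or -1)
    track_pt:      array of transverse momenta (GeV/c)
    kappa:         weighting exponent; in the real tagger it is tuned
                   for maximal tagging power (0.4 is only a placeholder)
    """
    q = np.asarray(track_charges, dtype=float)
    pt = np.asarray(track_pt, dtype=float)
    weights = pt ** kappa
    return np.sum(q * weights) / np.sum(weights)

# two tracks from the secondary vertex: the sign of the result is used
# to infer the charge (and hence the flavour) of the tagging B
print(vertex_charge([+1, -1], [1.2, 0.3]))
```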

During run 1 the Charm tagging algorithm was developed as the fifth OS tagger. The Charm tagger uses charm hadron candidates from a number of different decay modes, including fully and partially reconstructed hadronic modes with a single kaon or an unobserved neutral pion in the final state, and partially reconstructed semileptonic modes. The flavour of the signal B is determined in a similar manner as for the OS Kaon tagger, but now the kaons are selected based on the reconstruction of c hadrons. Charm decays are formed from kaon, pion and proton candidates that pass the selection criteria. Particles are required to have momentum p > 1000 MeV/c and pT > 100 MeV/c and a significant displacement from any PV. Additional requirements can be found in [12].

Because of the large correlations with primarily the OS Kaon and Vertex Charge taggers, the additional tagging power of the OS Charm tagger will be less than its individual tagging power [12].

The OS taggers are all trained on B+ → J/ψK+ decays, but can also be calibrated using Bs → Ds−π+ decays.

Besides OS taggers, the SS nnetKaon tagger is used in this analysis. This tagger uses the kaon that accompanies the hadronization of the signal b quark. The flavour is determined by the use of two Neural Networks. Neural Networks are artificial networks consisting of connected nodes that can be trained to convert input variables, like event information, into a single output value [13]. In this case the first Neural Network is used to select one or more tracks based on selection criteria similar to those shown before.


Figure 2.5: Selection criteria used for the selection of particles and determination of the mistag fraction (if marked with a dot) [7]

The second NN is then used to determine the flavour [14]. Each tagging algorithm provides a decision (+1 for Bs, −1 for B̄s and 0 for no decision) and a predicted mistag rate (η). The potential mistag can have many causes, from picking up wrong tracks to B-meson oscillations. This mistag probability is estimated using Neural Networks or Boosted Decision Trees. The input for these networks consists of event information such as transverse momentum, the number of selected tracks, the number of pile-up vertices and other kinematic properties of the tagging particles or tracks associated with the vertex.

For the OS Charm tagger there are also multiple sources that contribute to an additional, irreducible mistag probability, with the dominant impact coming from B0B̄0 oscillations and wrong-sign charm hadrons produced in the b → cc̄q chain. The irreducible mistag probability is decay-mode dependent and estimated to be between 6% and 23% [15].

The mistag probability (ω) goes directly into the dilution:

D = 1 − 2ω (2.6)

The tagging algorithms have been optimised to reach the highest tagging power:

ε_eff = ε_tag D²    (2.7)
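As a small illustration of equations 2.6 and 2.7, the sketch below computes the tagging efficiency, an effective mistag and the tagging power from per-event decisions and mistag probabilities; the event-averaged dilution used here and the toy numbers are assumptions for illustration, not the procedure or results of this thesis.

```python
import numpy as np

def tagging_performance(decisions, mistags):
    """Return (tagging efficiency, effective mistag, tagging power).

    decisions: per-event tag, +1 / -1 (tagged) or 0 (untagged)
    mistags:   per-event predicted mistag probability eta
    """
    d = np.asarray(decisions)
    eta = np.asarray(mistags, dtype=float)
    tagged = d != 0
    eps_tag = tagged.mean()                    # tagging rate
    D = 1.0 - 2.0 * eta[tagged]                # dilution, eq. 2.6
    D2 = np.mean(D ** 2)
    omega_eff = 0.5 * (1.0 - np.sqrt(D2))      # effective mistag
    eps_eff = eps_tag * D2                     # tagging power, eq. 2.7
    return eps_tag, omega_eff, eps_eff

# toy example with made-up numbers
rng = np.random.default_rng(1)
dec = rng.choice([-1, 0, +1], size=10_000, p=[0.2, 0.6, 0.2])
eta = rng.uniform(0.25, 0.5, size=10_000)
print(tagging_performance(dec, eta))
```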


2.3.2 Calibration

The mistag probability will be used in CP violation measurements as a weight for the studied events, such that events with a low mistag probability are assigned a higher weight. It is therefore crucial to determine this mistag accurately. To achieve this, flavour-specific decays, in this case Bs → Ds−π+, are used to calibrate the tagging results. Since these decays are flavour specific, the mistag fraction can easily be determined by comparing the tagging decision with the reconstructed flavour from the final states. To calibrate the results a linear calibration function is used, with the estimated mistag (η) as the variable:

ω(η) = p0 + p1(η − <η>)    (2.8)

For a perfectly calibrated tagger the parameters should satisfy p0 − <η> = 0 and p1 = 1. In the tool used here, the Espresso Performance Monitor (EPM, see section 4.2), the values are defined slightly differently; the EPM parameters will be referred to as p0* = p0 − <η> and p1* = p1 − 1, which are both close to 0 for a perfectly calibrated tagger.

In this analysis the calibration is done using the tool Espresso Performance Monitor for which the technical details will be explained in section 4.2.
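For illustration, a minimal sketch of applying the linear calibration of equation 2.8 to a set of predicted mistags; the parameter values below are placeholders and the clipping of the calibrated mistag to [0, 0.5] is a simplification, not the EPM behaviour (which ignores events with η > 0.5, see section 4.5).

```python
import numpy as np

def calibrate_mistag(eta, p0, p1, eta_mean):
    """Linear mistag calibration omega(eta) = p0 + p1*(eta - <eta>), eq. 2.8."""
    omega = p0 + p1 * (np.asarray(eta, dtype=float) - eta_mean)
    return np.clip(omega, 0.0, 0.5)   # simple choice; EPM instead drops eta > 0.5

# placeholder parameters, roughly in the range of the values in chapter 5
print(calibrate_mistag([0.30, 0.38, 0.45], p0=0.37, p1=1.2, eta_mean=0.37))
```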

Since the Bs → Ds±K∓ and Bs → Ds−π+ decays are very similar in terms of kinematics, the calibration parameters obtained from the flavour-specific Bs → Ds−π+ channel can be directly applied to the flavour tagging in Bs → Ds±K∓, which reduces the systematic uncertainties related to the portability between the channels [16].

In the case of neutral B mesons, the analysis has a time dependent component since there is mixing involved. Therefore the decay time distribution and decay time error are used in the calculation of the parameters. Technical details will be explained in section 4.2.1.

Asymmetry parameters

The calibration parameters p0 and p1 also depend on the initial Bs flavour, due to the different interactions of K+ and K− with matter. To include this difference two extra (asymmetry) parameters Δp0 and Δp1 are introduced. They are determined by looking at the differences between the parameters obtained with Ds− and Ds+ decays. The complete calibration equations then become:

ω(η) = (p0 + Δp0/2) + (p1 + Δp1/2)(η − <η>)    (2.9)
ω̄(η) = (p0 − Δp0/2) + (p1 − Δp1/2)(η − <η>)    (2.10)

The third asymmetry parameter is the difference in tagging efficiency for the two flavours:

Δε_tag = ε_tag(Ds−) − ε_tag(Ds+) = ε_tag(Bs0) − ε_tag(B̄s0)    (2.11)
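A sketch of the flavour-dependent calibration of equations 2.9 and 2.10, extending the previous helper; which production flavour receives the +Δp/2 terms is a convention, and the parameter values are again placeholders.

```python
import numpy as np

def calibrate_mistag_flavour(eta, prod_flavour, p0, p1, dp0, dp1, eta_mean):
    """Flavour-dependent linear calibration, eqs. 2.9 and 2.10.

    prod_flavour: +1 for one production flavour, -1 for the other;
                  +1 is taken to correspond to eq. 2.9 (an assumption).
    """
    eta = np.asarray(eta, dtype=float)
    f = np.asarray(prod_flavour, dtype=float)
    p0_eff = p0 + f * dp0 / 2.0
    p1_eff = p1 + f * dp1 / 2.0
    return np.clip(p0_eff + p1_eff * (eta - eta_mean), 0.0, 0.5)

print(calibrate_mistag_flavour([0.35, 0.42], [+1, -1],
                               p0=0.37, p1=1.1, dp0=0.02, dp1=-0.05,
                               eta_mean=0.37))
```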


2.4 Combination of Taggers

To reach the highest tagging power possible different taggers are combined into one tagging decision and corresponding probability. In terms of mathematics this is done by combining the tagging decisions and predicted mistags of the different taggers into a combined probability for the B-meson to contain a b quark [11]:

P(b) = \frac{p(b)}{p(b) + p(\bar{b})}    (2.12)

where p(b) = ∏_i p_i(b), and p_i(b) is the probability that the meson contains a b quark according to tagger i. This probability can be calculated from the tag decision (d_i) and the probability of a correct tag (p_i = 1 − η_i) as follows:

p(b) = ∏_i [(1 + d_i)/2 − d_i p_i],    p(b̄) = ∏_i [(1 − d_i)/2 + d_i p_i]    (2.13)

The final combined decision is then:
d = −1 if P(b) > P(b̄), with a mistag fraction ω = 1 − P(b);
d = +1 if P(b̄) > P(b), with a mistag fraction ω = 1 − P(b̄).

This combined mistag fraction is usually slightly underestimated because the correlation between taggers is not taken into account. The Vertex Charge will have the biggest correlation with all the other OS taggers since the same particles can be used for the individual taggers and the reconstruction of the secondary vertex. To account for this underestimation the combination should be calibrated on data just like the individual taggers. The correlation between OS and SS tagger however is negligible, therefore the calibrated OS combination can be directly combined with the (calibrated) SS taggers.
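A minimal sketch of the combination of equations 2.12 and 2.13 for several (already calibrated) taggers; the subsequent recalibration of the combination on data, which corrects for the correlations discussed above, is not included here.

```python
def combine_taggers(decisions, mistags):
    """Combine per-tagger decisions and mistags into one decision (eqs. 2.12, 2.13).

    decisions: iterable of d_i in {+1, -1, 0} (+1 = Bs, -1 = anti-Bs, 0 = no tag)
    mistags:   iterable of predicted mistag probabilities eta_i
    Returns (d, omega): the combined decision and combined mistag fraction.
    """
    p_b, p_bbar = 1.0, 1.0
    for d, eta in zip(decisions, mistags):
        if d == 0:
            continue                       # this tagger gave no decision
        p_corr = 1.0 - eta                 # probability that this tag is right
        p_b    *= (1.0 + d) / 2.0 - d * p_corr
        p_bbar *= (1.0 - d) / 2.0 + d * p_corr
    if p_b == p_bbar:                      # no information at all: untagged
        return 0, 0.5
    P_b = p_b / (p_b + p_bbar)             # eq. 2.12
    if P_b > 0.5:
        return -1, 1.0 - P_b               # meson likely contains a b quark
    return +1, P_b                         # omega = 1 - P(bbar) = P(b)

# example: one tagger votes Bs (eta = 0.30), one votes anti-Bs (eta = 0.42),
# a third gives no decision
print(combine_taggers([+1, -1, 0], [0.30, 0.42, 0.35]))
```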


Chapter 3

LHC and LHCb

The Large Hadron Collider (LHC) is a proton-proton collider located 100 m underground close to Geneva, Switzerland. The four main experiments, ATLAS, CMS, LHCb and ALICE, are located at the four interaction points along the 27 km circumference. Before entering the LHC itself the particles go through several pre-accelerators. The LHC consists of two beampipes in which protons are accelerated in opposite directions. More than 1600 superconducting magnets, each 15 m long, generate a magnetic field of approximately 8.4 T to bend the trajectories into the required curvature. Since the first collisions in 2010 the centre-of-mass energy has been increased from √s = 7 TeV and √s = 8 TeV in 2011 and 2012 (run 1) to √s = 13 TeV in 2015 and 2016 (run 2) [17].

3.1 The LHCb Experiment

The main fields of study for the LHCb experiment are rare decays and CP violation in B and D meson decays. In 2011 approximately one b¯b pair was produced in every 230 collisions. Because of the high energies at LHCb the b mesons are usually produced along one of the two beam directions. Therefore the LHCb spectrometer is built as a single-arm forward spectrometer. Because of the high interaction rate, not all produced data can be stored. To select the interesting events the hardware based trigger (Level-0 Trigger) and two software based triggers are used.

In figure 3.1 a schematic view of the LHCb spectrometer is shown. The different components will now be discussed briefly.

3.1.1 Tracking System

The tracking system consists of the Vertex Locator (VELO) and Tracker Turicensis (TT) in front of the magnet and the Inner and Outer Tracker behind the magnet. The system output is used for the reconstruction of tracks and identification of particles. The VELO is a silicon strip detector determining both the radial coordinate r and the azimuthal angle φ to reconstruct the primary and secondary vertices. The primary or production vertex is found by looking for a large number of reconstructed tracks originating from a common point on the LHC beam axis. The secondary vertices are identified using the mass and decay direction hypotheses of specific B-candidates.

Located just upstream of the magnet, the silicon-based Tracker Turicensis gives a three-dimensional coordinate measurement. The TT has several purposes: it is used to reconstruct tracks that do not leave sufficient hits in the VELO (downstream tracks) and low-momentum particles; it is also used in the online reconstruction of long tracks and improves their momentum resolution; and finally it rejects wrong combinations of VELO track seeds and downstream track seeds.


Figure 3.1: Schematic overview of the LHCb spectrometer [18].

After the first two trackers the dipole magnet is installed. The magnet is essential to be able to measure the momentum of charged particles using their deviation from the original track due to the magnetic field. To allow for cross-checks of systematic effects of the detector, the magnetic field is reversed frequently.

The silicon Inner Tracker is surrounded by the Outer Tracker, a gas detector that uses the ionization of the gases which occurs when charged particles travel through. Located just behind the magnet, the Inner Tracker sees the highest particle density due to the forward production of the B mesons.

All the track hits of all the detectors are combined to reconstruct the final particle tracks. The curvature of the tracks in the magnetic field is used to deduce the charge of a particle. A typical B event consists of about 100 reconstructed tracks.

3.1.2 Particle Identification System

Next in the LHCb are four detectors used for particle identification: two Ring Imaging Cherenkov detectors (RICH1 and RICH2), the Calorimeter system and the Muon system.

The first two detectors are based on the Cherenkov effect: a particle travelling through a dielectric medium emits radiation in a cone if it travels faster than the speed of light in that medium. The angle at which it radiates depends on the particle's velocity and can therefore, combined with the momentum measurement, be used for the identification of K± and π±. The two detectors use different gases to cover different momentum ranges.

The Calorimeters measure the energy deposition inside the detector to identify electrons, photons and hadrons. Electrons that enter the calorimeter ionize the material via electromagnetic processes (ECAL). This leads to a cascade of secondary particles which are absorbed by the detector. In between the absorbing layers the produced scintillation light is detected; the number of detected photons corresponds to the energy deposition in the material.


Hadrons interact with the material via nuclear interactions and are identified in the HCAL part.

The last five detectors form the Muon System, performing the muon identification that is essential for many B meson decay final states. There is one muon station in front of the calorimeters and four behind, separated by thick layers of iron to absorb all hadrons. The first station has the highest density of muons and is therefore made of the more robust triple gas electron multiplier detectors. The remaining stations are constructed with multi-wire proportional chambers, all with different granularity to cope with the non-uniform particle flux [9].

The output of the PID detectors is transformed into a particle ID with the use of several algorithms that match track information with the expected patterns under a certain particle hypothesis. Photons, for example, do not have any charge or mass and are therefore identified by looking for clusters without an associated track.


Chapter 4

Approach

To calibrate the tagging of Bs → Ds±K∓ decays, the self-tagging channel Bs → Ds−π+ is used, as explained before. The tagging decisions and mistag probabilities can be analyzed with the Espresso Performance Monitor. Since this is a relatively new tool, run 1 data is used first to compare its performance with known calibrations from the literature. In this chapter the EPM and its use will be introduced and its performance on run 1 data will be discussed.

4.1 sWeights

Before the data can be fed to the flavour tagging algorithms, the signal events should be distinguished from background events. The background subtraction is done via a multi-dimensional fit (MDFit) to the (Ds∓h±) and (h−h+h±) mass and PIDK distributions. Since the Bs0 → Ds±K∓ and Bs0 → Ds+π− decays are kinematically very similar, information from both channels can be used in the fits to both samples. The mass windows considered are [5300, 5800] MeV/c² for (Ds∓h±) and [1930, 2015] MeV/c² for (h−h+h±) [16].

The signal shape is modelled with a Crystal Ball function. The shape of the combinatorial background (consisting of candidates where a true Ds is reconstructed and combined with a random pion track, or candidates where all the final charged tracks are selected randomly [19]) is determined from the sidebands and set to a single exponential plus a constant function. The dominant background sources are fully reconstructed background decays such as B0 → Ds−π+ and Λb0 → Λc+π−. The shapes for these background events are taken from Monte Carlo [16].

The sWeights obtained from the MDFit are used by the tagging algorithms to separate signal from background.

4.2 EPM

EPM is a tool to monitor, combine and calibrate tagging decisions [20]. The calibration is done by binomial regression, which requires no binning or full lifetime fits and is a flexible and well studied technique. EPM implements generalized linear models (GLMs) as a particular type of binomial regression. In that case the calibrated probability ω̄ = 1 − ω is related to the prediction η̄ = 1 − η by [21]:

g(ω̄(θ)) = Σ_{i=0}^{M} θ_i P_i(η̄)    (4.1)

where the P_i(η̄) are a set of basis functions, the θ_i the calibration parameters and g a so-called link function.


The available basis functions include polynomial and n-spline models. For the analysis of Bs0 → Ds±K∓ and Bs0 → Ds+π− a linear polynomial is used. With the use of a likelihood maximization the parameters (p0 and p1 for the linear polynomial) can be determined. The preferred method is solving via Minuit [22], but using a Newton-Raphson type algorithm is also possible.
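As a rough illustration of what such a likelihood-based calibration fit does, the sketch below fits p0 and p1 for the identity ('Mistag') link by maximizing a weighted binomial likelihood; the neglect of the oscillation dilution and the use of scipy instead of Minuit are simplifications, so this is not the EPM implementation.

```python
import numpy as np
from scipy.optimize import minimize

def fit_linear_calibration(eta, tag_correct, weights=None):
    """Fit omega(eta) = p0 + p1*(eta - <eta>) by maximum likelihood.

    eta:         predicted mistag per event
    tag_correct: True if the tag matched the true (production) flavour
    weights:     optional per-event weights, e.g. sWeights
    """
    eta = np.asarray(eta, dtype=float)
    correct = np.asarray(tag_correct, dtype=bool)
    w = np.ones_like(eta) if weights is None else np.asarray(weights, dtype=float)
    eta_mean = np.average(eta, weights=w)

    def nll(params):
        p0, p1 = params
        omega = np.clip(p0 + p1 * (eta - eta_mean), 1e-6, 1.0 - 1e-6)
        prob = np.where(correct, 1.0 - omega, omega)  # binomial likelihood terms
        return -np.sum(w * np.log(prob))

    res = minimize(nll, x0=[eta_mean, 1.0], method="Nelder-Mead")
    return res.x, eta_mean

# toy data with a known true calibration omega = 0.02 + 1.1*eta (illustrative)
rng = np.random.default_rng(0)
eta = rng.uniform(0.2, 0.45, 20_000)
correct = rng.random(20_000) > (0.02 + 1.1 * eta)
(p0, p1), eta_mean = fit_linear_calibration(eta, correct)
print(p0, p1, eta_mean)
```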

The EPM runs on option files specifying the source file (a ROOT TTree) and the branches corresponding to the tagging decision, predicted mistag, charge of the bachelor particle and sWeights. The calibration mode (Bs, Bd, Bu) and several plotting options can be set. For neutral decay modes, lifetime and lifetime-error branches should be provided, as well as the preferred lifetime resolution model [20].

The output consists of the uncalibrated tagging performances (tagging rate, raw mistag, effective mistag and tagging power) and correlation matrices, which are of interest if more than one tagger is used. If a calibration is applied, the calibration parameters for the different taggers are printed next, along with goodness-of-fit parameters before and after calibration. After the calibration the final output consists of the calibrated performances and several plots. Although EPM does not use binning for the fit, the results are shown in equal-statistics bins in the plots; the number of bins can be set in the option file.

4.2.1 B meson oscillation and Decay Time resolution

For the calibration of the neutral decay modes Bs0 → Ds±K∓ and Bs0 → Ds+π−, the flavour oscillations have to be taken into account. In EPM this is done by implementing a fixed probability ω̄c that the flavour of the Bs meson at decay time t is the same as the flavour at production [21]. First, the theoretical oscillation probability and corresponding dilution can be calculated following the evolution of the light and heavy B meson states. Additionally, lifetime resolution and production asymmetries lead to an extra dilution. For the Bs0 the finite resolution of the detector is an important effect, since it is of the same order as the oscillation period [21].

For the analysis of Bs → DsK decays the resolution is modelled by a Gaussian function with a different width for each event, corresponding to the per-event decay time error. The errors are calibrated by calculating the decay time resolution in bins of the per-event error. In an ideal situation the per-event error would directly correspond to the time resolution; in reality a scale factor is needed to match the real decay time resolution [16]. Due to this calibration the lifetime resolution parameters also have an uncertainty. Since it is not possible to Gaussian-constrain these parameters in the used tool, or to automatically propagate these uncertainties into the calculations, the effect should be studied separately. This can be done by using the uncorrelated version of the parameters and varying them independently while calculating the calibration parameters. For run 1 the two parameters have been established; for run 2 this is still work in progress at the time of this thesis. Therefore the presented calibration parameters will change with improved lifetime resolution modelling.

4.2.2 Goodness Of Fit parameters

The EPM reports the quality of the fit in terms of goodness-of-fit parameters. The G² deviance, Akaike information criterion (AIC) and Bayesian information criterion (BIC) can be used to compare different calibration models. These criteria compare the maximum likelihood that can be achieved with a certain model while taking into account the number of parameters in the model [23]. Since in this analysis only a linear calibration model is used, these tests are of less interest.

To measure the quality of the used model, three methods that study the distribution of residuals are used: Pearson-like residuals, deviance residuals and simple unweighted residuals. These all use the probability of the tag to be correct (including B oscillations and calibration parameters). In the EPM the residuals are converted into test statistics by applying unbinned tests based on the sum in quadrature. The output consists of a G², an X², an S test known as the Cessie-van Houwelingen-Copas-Hosmer statistic [24] and an additional Cressie-Read test as a compromise between G² and X² [25]. For a well calibrated tagger these test scores should follow a normal distribution and should therefore all be close to zero. If sWeights are applied, these are used in all statistical calculations.
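As a rough sketch of the idea behind these residual-based tests, the snippet below computes a generic Pearson-like statistic (a sum in quadrature of standardized residuals); the exact definitions and normalizations of the S and Cressie-Read tests in EPM differ, so this is only meant to convey the principle.

```python
import numpy as np

def pearson_like_statistic(tag_correct, p_correct, weights=None):
    """Sum in quadrature of Pearson-like residuals.

    tag_correct: 1 if the tag matched the true flavour, 0 otherwise
    p_correct:   predicted probability for the tag to be correct
                 (after calibration, including any oscillation dilution)
    weights:     optional per-event weights, e.g. sWeights
    """
    y = np.asarray(tag_correct, dtype=float)
    p = np.clip(np.asarray(p_correct, dtype=float), 1e-6, 1.0 - 1e-6)
    w = np.ones_like(y) if weights is None else np.asarray(weights, dtype=float)
    residuals = (y - p) / np.sqrt(p * (1.0 - p))
    return np.sum(w * residuals ** 2)

# EPM quotes standardized versions of such statistics, which should be
# close to zero for a well calibrated tagger.
```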

4.3 Combination of Taggers in EPM

In the Bs → DsK analysis of run 1 data the combination of taggers was done as follows. For the OS combination the uncalibrated tagging results are first combined into one effective OS tagger, which is then calibrated. For this analysis only the SS nnetKaon tagger is used for the SS tagging decision. This tagger is first calibrated individually before entering the total combination. The events are then divided into three categories: events with a tag from only SS, only OS, or a tag from both. For the first two categories the calibration parameters as calculated for the SS nnetKaon and the OS combination are used. For the last category the calibrated OS combination and SS nnetKaon results are combined and then calibrated again [16].

However, the way of combining taggers preferred by the Flavour Tagging group is to first calibrate all taggers individually and then combine them into one decision. After this the total combination is calibrated again to correct for correlations between the taggers.

In EPM it is possible to combine different taggers into one final decision and corresponding mistag probability. To combine calibrated taggers, EPM should be run twice on disjoint datasets. In this case the two samples are selected on an even/odd number of tracks (nTracks in the NTuple). This type of selection is justified by the fact that both samples have a compatible mistag distribution. It should however be noted that the samples do not give exactly the same results for tagging power and calibration parameters. A different selection method is therefore recommended for future analysis.
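A minimal sketch of such a disjoint split on nTracks; reading the NTuple with uproot, and the file and tree names, are assumptions made only for illustration.

```python
import uproot  # assumption: the NTuple is read with uproot

# placeholder file and tree names
events = uproot.open("BsDsPi_run2.root")["DecayTree"].arrays(library="np")

odd = events["nTracks"] % 2 == 1
sample_calibrate = {k: v[odd] for k, v in events.items()}    # derive parameters here
sample_apply     = {k: v[~odd] for k, v in events.items()}   # apply and combine here

print(odd.sum(), (~odd).sum())
```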

First, we follow the run 1 strategy to combine the taggers. In the first run the calibration parameters for the OS combination and the SS nnetKaon are obtained. In the second run these parameters are applied to the remaining data, such that the calibrated data can be combined into one tagging result and then calibrated again. Another option is to save the calibrated branches or combined tagging decisions into a new TTree, add these branches to the original NTuple and use these for further analysis. The calibration parameters obtained for the total combination can then be used in CP violation measurements. The combined mistag probability is used as a weight variable to favour decays with a high probability of being rightly tagged [21]. In the results section the different combinations and the corresponding tagging power are presented. Calibrating in multiple steps will increase the tagging power, but also the corresponding uncertainty. In addition, the uncertainties of the calibration of individual taggers or OS combinations are not propagated into the combination. Therefore the uncertainties on the combinations and corresponding parameters could be underestimated. The taggers with low statistics, such as OS Electron and OS Muon, have large uncertainties which should be taken into account. However, due to their low tagging rate they will only be a small part of the combination. To study the effect of uncertainties of individual tagger calibrations a toy study should be performed, because the equations behind the combination are not simply linear sums (see section 2.4). This study is beyond the scope of this thesis, but is recommended for future analysis.

4.4 Run 1 vs Run 2 optimization

With the increased centre-of-mass energy in run 2, the kinematic properties used in the flavour tagging algorithms have changed. This leads to a different performance of the taggers on run 1 and run 2 data.


It has been shown that the tagging rate usually increases for run 2, but the quality of the tagging particles decreases, resulting in an overall lower tagging power [7]. Therefore the flavour tagging algorithms have been re-optimized for run 2. For comparison, the performances of the run 1 optimized taggers on run 2 data will be reported as well. It should be noted that for these results a different sample was used, which makes a direct comparison of the calibration parameters impossible. However, the tagging power and tagging rate can be compared to the run 2 optimization to see if they improved with the re-optimization.

Another note is that there was a bug in the single track OS taggers (OS Muon, OS Electron and OS Kaon), resulting in a lower tagging rate, which could also have lowered the tagging power. The data used in this thesis still contains this bug.

4.5 Compare Performances on run 1 data: validation of EPM

To validate the performance of the EPM, its output is compared to the calibration parameters found in the 3 fb⁻¹ Bs → Ds±K∓ analysis note [16]. There the calibration parameters are extracted from a fit to the decay time distribution. The lifetime resolution published in that note is also used in the EPM analysis, and the same sample (obtained by applying sWeights) is used. The lifetime resolution is modelled following the Bs → DsK analysis [16]:

σ(σ_t)_data = s_0 + s_1 σ_t = (10.262 ± 1.523) + (1.280 ± 0.042) σ_t    (4.2)

The 'Mistag' (identity) link function and a linear polynomial are used for the calibration. In the following tables the results are summarized. The tagging rate represents the fraction of events with a b or anti-b tag assigned. The average mistag probability before calibration is given by <η>, whereas P0 is the mistag probability after calibration for events with a mistag probability equal to <η>. The parameter P1 quantifies the correction factor to the initial mistag probability.

Table 4.1: SS nnetKaon. Results of the previous Bs0 → Ds∓K± analysis (denoted 'ANA') compared with the calibration obtained with EPM.

Source | Tagging rate (%) | Tagging Power (%) | P0 | P1 | <η>
EPM | 63.90 ± 0.17 | 2.083 ± 0.019 (stat) ± 0.2005 (cal) | 0.442 ± 0.005 | 1.086 ± 0.069 | 0.438
ANA | 63.90 ± 0.17 | 2.08 ± 0.21 | 0.441 ± 0.005 | 1.084 ± 0.068 | 0.437

Table 4.2: OS Combination. Results of the previous Bs0 → Ds∓K± analysis (denoted 'ANA') compared with the calibration obtained with EPM.

Source | Tagging rate (%) | Tagging Power (%) | P0 | P1 | <η>
EPM | 37.15 ± 0.16 | 3.918 ± 0.033 (stat) ± 0.287 (cal) | 0.374 ± 0.006 | 1.099 ± 0.064 | 0.370
ANA | 37.15 ± 0.17 | 3.89 ± 0.29 | 0.374 ± 0.006 | 1.094 ± 0.063 | 0.370

As can be seen from tables 4.1 and 4.2, the results of the ANA note can be reproduced almost perfectly using EPM. After calibration, all events with η > 0.5 are ignored by EPM. In the ANA note these events are preserved by flipping the tag and using 1 − η as the new mistag. Since events with a high mistag probability hardly contribute to the fit, ignoring these events is the safest strategy for calibration purposes. From this comparison we can conclude that EPM is well suited for calibrating these channels.

The calibration plots are shown in figure 4.1. Here the expected mistag (η) is plotted against the 'real' mistag (ω), with the performed calibration overlaid on the data. The green and yellow bands correspond to the 1 and 2 sigma confidence intervals, respectively.

Figure 4.1: Calibration obtained by EPM for the run 1 data sample for the SS nnetKaon (left) and the OS combination (right). Both are plotted in 10 equal-statistics bins; the binning is only used for plotting, not in the calibration.


Chapter 5

Results

In this chapter the calibration parameters and corresponding tagging power for the individual taggers and different combinations of taggers will be presented. These results will be compared with the run 1 optimization of the taggers and additionally the effect of lifetime resolution uncertainties will be studied.

The parameters are calculated with the combined 2015 and 2016 sample of Bs → Dsπ events, corrected for background using sWeights. The selection for this data is the same as used for the run 1 BsDsK analysis [16]. Compared to 2016, the 2015 sample is relatively small (26495 vs 158954 events). For the single track OS taggers (Muon, Electron and Kaon) the tagging rate is too low to calibrate with 2015 only. Therefore only the combined 2015 and 2016 sample is used for the calibration. The results are compatible with those from 2016 only. The results for the two years separately, with corresponding plots, can be found in the appendix.

For run 2 the lifetime resolution is set to:

σ(σ_t)_data = s_0 + s_1 σ_t = (14.34 ± 3.1) + (1.05 ± 0.07) σ_t    (5.1)

Note that the uncertainties on the lifetime resolution are not propagated into EPM and should therefore be added as a systematic uncertainty (see section 5.4). A linear polynomial with the 'Mistag' (identity) link function is used for the calibration. Minimization is done with Minuit instead of the faster, but more unstable, Newton-Raphson method.

5.1 Individual Taggers

In table 5.1 the calibration parameters for the individual taggers and the raw OS and OS+SS combinations are shown. "Raw" means in this case that the output of the taggers is first combined and then only the combination as a whole is calibrated. This is done with and without the OS Charm tagger included, for comparison with run 1. The calibration parameters in this table are computed using the whole Bs → Dsπ sample from 2015 and 2016. Because the Bs0 → Ds±K∓ and Bs0 → Ds+π− candidates are kinematically compatible, these parameters can be directly applied to the individual tagging results in the Bs → DsK analysis.

In table 5.3 the performances of the run 1 optimization of the taggers on run 2 data are shown as well for comparison. These are not obtained from the same sample and could therefore give misleading results; a direct comparison of the calibration parameters is not useful. However, the resulting tagging power of both samples can give an idea of the improvements due to the re-optimization.


Table 5.1: Run 2 calibration parameters for the individual taggers, run 2 optimization.
Calibration parameters following the linear calibration of equation 2.8. P0* is the parameter as defined in EPM (P0 − <η>), showing the effective deviation from a perfectly calibrated tagger (P0* = 0); P1 = 1 for a perfectly calibrated tagger.

Tagger | P0 | P0* | P1 | <η>
OS Muon | 0.290 ± 0.011 | -0.065 ± 0.011 | 1.303 ± 0.182 | 0.356
OS Electron | 0.278 ± 0.019 | -0.081 ± 0.019 | 1.76 ± 0.35 | 0.359
OS Kaon | 0.385 ± 0.008 | -0.025 ± 0.008 | 1.72 ± 0.19 | 0.410
VtxCharge | 0.376 ± 0.007 | -0.014 ± 0.007 | 1.39 ± 0.12 | 0.390
OS Charm | 0.348 ± 0.015 | -0.018 ± 0.015 | 1.48 ± 0.32 | 0.365
SS Kaon | 0.427 ± 0.004 | 0.011 ± 0.004 | 0.83 ± 0.05 | 0.416
OS Combi Raw | 0.369 ± 0.005 | -0.001 ± 0.005 | 1.21 ± 0.06 | 0.370
SS OS Raw | 0.390 ± 0.004 | 0.005 ± 0.004 | 0.98 ± 0.04 | 0.385
OS Combi Charm Raw | 0.366 ± 0.005 | 0.003 ± 0.005 | 1.13 ± 0.06 | 0.363
SS OS Charm Raw | 0.388 ± 0.004 | 0.007 ± 0.004 | 0.965 ± 0.035 | 0.381

Table 5.2: Calibrated performances of the single "raw" input taggers, using run 2 optimized taggers.
Calibrated tagging rate (events with η > 0.5 excluded) and tagging power (eq. 2.7).

Tagger | Tag rate | Tagging Power ± (stat) ± (cal)
OS Muon | 9.33 ± 0.09 % | 1.874 ± 0.021 ± 0.181 %
OS Electron | 2.96 ± 0.05 % | 0.697 ± 0.015 ± 0.107 %
OS Kaon | 17.80 ± 0.12 % | 1.323 ± 0.012 ± 0.15 %
VtxCharge | 19.15 ± 0.12 % | 1.953 ± 0.017 ± 0.188 %
OS Charm | 4.89 ± 0.07 % | 0.558 ± 0.009 ± 0.101 %
SS Kaon | 68.85 ± 0.14 % | 2.849 ± 0.017 ± 0.213 %
OS Combi Raw | 36.21 ± 0.15 % | 4.342 ± 0.032 ± 0.267 %
OS SS Raw | 78.233 ± 0.13 % | 6.856 ± 0.033 ± 0.330 %
OS Combi Charm Raw | 37.43 ± 0.15 % | 4.509 ± 0.032 ± 0.272 %
OS SS Charm Raw | 78.61 ± 0.12 % | 7.089 ± 0.034 ± 0.337 %

Table 5.3: Calibrated performances, run 2 data, run 1 optimization of the taggers.
Calibrated performances of the individual taggers and raw combinations for the run 1 tuning of the taggers applied to run 2 data.

Tagger | Tag rate | Tagging Power ± (stat) ± (cal)
OS Muon | 9.22 ± 0.10 % | 1.841 ± 0.025 ± 0.215 %
OS Electron | 1.91 ± 0.05 % | 0.316 ± 0.009 ± 0.082 %
OS Kaon | 15.77 ± 0.13 % | 1.645 ± 0.02 ± 0.198 %
VtxCharge | 19.42 ± 0.14 % | 2.028 ± 0.021 ± 0.222 %
OS Charm | 4.20 ± 0.07 % | 0.650 ± 0.013 ± 0.127 %
SS nnetKaon | 54.36 ± 0.17 % | 2.348 ± 0.02 ± 0.223 %
OS Combi Raw | 32.56 ± 0.16 % | 4.454 ± 0.038 ± 0.323 %
SS OS Raw | 66.67 ± 0.17 % | 6.701 ± 0.044 ± 0.385 %
OS Combi Charm Raw | 32.72 ± 0.16 % | 4.722 ± 0.04 ± 0.332 %
SS OS Charm Raw | 65.27 ± 0.17 % | 6.847 ± 0.045 ± 0.323 %


Table 5.4: Run 2 combination of taggers, calibrated performances.
Combination of taggers using the calibrated version of the nTracks = even sample. The 'Raw' combinations are here calibrated with this selected sample instead of the whole sample.

Tagger | Tag rate | Tagging Power
OS Combi Raw | 35.13 ± 0.20 % | 4.410 ± 0.047 ± 0.375 %
OS SS Combi Raw | 78.70 ± 0.18 % | 6.612 ± 0.045 ± 0.458 %
OS Combi Charm Raw | 36.42 ± 0.21 % | 4.672 ± 0.048 ± 0.388 %
OS SS Combi Raw Charm | 78.37 ± 0.18 % | 6.942 ± 0.047 ± 0.470 %
OS Combi Calib | 33.96 ± 0.20 % | 4.546 ± 0.046 ± 0.390 %
OS SS Combi Calib | 74.12 ± 0.19 % | 6.936 ± 0.048 ± 0.476 %
OS Combi Charm Calib | 34.75 ± 0.20 % | 4.817 ± 0.047 ± 0.402 %
OS SS Combi Charm Calib | 74.34 ± 0.19 % | 7.206 ± 0.049 ± 0.486 %
SS OS Combi OSCombCalib | 74.35 ± 0.19 % | 6.809 ± 0.048 ± 0.466 %
SS OS Combi OSCombCalib Charm | 74.78 ± 0.19 % | 7.052 ± 0.049 ± 0.474 %

5.2 Combination of Taggers

There are different possibilities for calibrating the combination of the different taggers following the equations presented in section 2.4. The calibration parameters are calculated on the events with an odd number of tracks (nTracks); they are then applied to the events with an even number of tracks, and those results are combined. The raw combinations are shown again in this table, but are now calculated using the even-nTracks sample, so the values can deviate from the full-sample results in tables 5.1 and 5.2.

For the run 1 Bs → DsK analysis [16] the OS taggers are combined into a single tagging decision and then calibrated. This method will be referred to as OS Combi Raw and will mainly be used for comparison.

Another approach is to first calibrate the single taggers, combine the calibrated outcomes and then calibrate the combination to correct for the correlation between taggers [11]. This method will be called OS Combi Calib. This last method is preferred by the flavour tagging group [11] and will therefore be treated as the central result.

During run 1 the additional Charm tagger was developed. To be able to compare with run 1, the results will be presented with and without the Charm tagger included; this will be denoted with OS Combi Charm. The naming of the OS plus SS combination follows the same logic.

A last addition is to use the calibrated outcome of the OS Combi Raw and the calibrated SS Kaon and combine those. This resembles the method of the run 1 DsK analysis best. However, the events are not divided into the three categories (OS tagged, SS tagged, and a tag from both) as is done in the run 1 analysis, so all events are recalibrated in the total combination. To do this the OS combination and calibrated SS Kaon results are saved in a TTree using EPM's 'WriteCalibratedMistagBranches'. These branches are added to the original NTuple and can then be used for further analysis. This results in a final tagging power of 6.81%, compared to 5.8% for run 1. With the addition of the OS Charm tagger the tagging power can be raised to 7.05%.

Using calibrated taggers for the combination leads to a total tagging power of 7.2% if the charm tagger is included. See table 5.4 for all tagging performances.

The combination of taggers should be calibrated on data to account for correlations between taggers; the calibration parameters are therefore an indication of the correlations, see table 5.5. In the case of the OS taggers it is expected that the single track taggers have a relatively large correlation with the vertex charge tagger, because one track can be used by a single track tagger and the vertex charge tagger at the same time.


Table 5.5: Run 2 calibration parameters for all latest combinations of taggers.

Tagger | P0 | P0* | P1 | <η>
OS Combi Raw | 0.372 ± 0.008 | 0.002 ± 0.008 | 1.284 ± 0.083 | 0.370
OS SS Raw | 0.392 ± 0.005 | 0.007 ± 0.005 | 0.958 ± 0.051 | 0.385
OS Combi Charm Raw | 0.367 ± 0.008 | 0.005 ± 0.008 | 1.200 ± 0.078 | 0.362
SS OS Charm Raw | 0.390 ± 0.005 | 0.009 ± 0.005 | 0.959 ± 0.05 | 0.381
OS Combi Calib | 0.362 ± 0.008 | 0.035 ± 0.008 | 1.01 ± 0.07 | 0.327
OS SS Calib | 0.383 ± 0.006 | 0.014 ± 0.006 | 0.88 ± 0.05 | 0.369
OS Combi Charm Calib | 0.359 ± 0.008 | 0.038 ± 0.008 | 0.99 ± 0.07 | 0.321
SS OS Charm Calib | 0.382 ± 0.006 | 0.016 ± 0.006 | 0.88 ± 0.05 | 0.365
SS OS OSCombCalib | 0.383 ± 0.006 | -0.0003 ± 0.0055 | 0.950 ± 0.052 | 0.383
SS OS OSCombCalib Charm | 0.383 ± 0.005 | 0.000003 ± 0.005487 | 0.981 ± 0.052 | 0.383

OS and SS taggers are assumed to be approximately uncorrelated. The correction on the OS and on the SS + OS combination should therefore be approximately the same.


5.3 Asymmetry parameters

As introduced in section 2.3.2, the calibration parameters are not the same for both flavours. To correct for this effect the asymmetry parameters are calculated in the same run as the calibration parameters themselves. The results are shown in tables 5.6 and 5.7.

Table 5.6: Asymmetry parameters for the individual taggers and raw combinations, calculated with the whole sample.

Tagger | ΔP0 | ΔP1
OS Muon | 0.006 ± 0.010 | 0.143 ± 0.159
OS Electron | 0.010 ± 0.017 | -0.098 ± 0.316
OS Kaon | 0.018 ± 0.007 | -0.068 ± 0.174
VtxCharge | 0.017 ± 0.007 | -0.020 ± 0.104
OS Charm | 0.009 ± 0.014 | -0.350 ± 0.282
SS Kaon | -0.017 ± 0.004 | -0.048 ± 0.042
OS Combi Raw | 0.018 ± 0.005 | 0.034 ± 0.051
OS SS Raw | -0.0036 ± 0.0034 | -0.051 ± 0.034
OS Combi Charm Raw | 0.017 ± 0.005 | 0.020 ± 0.051
OS SS Charm Raw | -0.0039 ± 0.0034 | -0.059 ± 0.032

Table 5.7: Asymmetry parameters for the different combination options, calculated with the selected sample as is done for the regular calibration parameters.

Tagger | ΔP0 | ΔP1
OS Combi Calib | 0.023 ± 0.007 | -0.038 ± 0.064
OS SS Calib | 0.004 ± 0.005 | -0.095 ± 0.044
OS Combi Charm Calib | 0.025 ± 0.007 | -0.027 ± 0.061
OS SS Charm Calib | 0.004 ± 0.005 | -0.104 ± 0.042
OS SS OSCalib | 0.001 ± 0.005 | -0.120 ± 0.048
OS SS OSCalib Charm | 0.001 ± 0.005 | -0.130 ± 0.048

5.4 Uncertainties on lifetime resolution parameters

The uncertainties on the lifetime resolution parameters are not propagated into the calibration calculations. Since the lifetime resolution model also determines the CP observables in Bs → DsK, this is a source of correlated systematic uncertainty. In the Bs → DsK run 1 analysis the effect of the lifetime resolution has only been studied in this correlated way, not for the flavour tagging alone. To get a general idea of the effect of the lifetime resolution model on the calibration, the uncorrelated resolution parameters are used:

σ(σt)_data = s0 + s1 (σt − 40) = (56.48 ± 0.69) + (1.05 ± 0.07) (σt − 40)    (5.2)

The parameters s0 and s1 are varied independently by adding or subtracting their uncertainties. The calibration is performed for all four variations, and the difference between the highest and lowest value is taken for the calibration parameters and the tagging power. In tables 5.8 and 5.9 the tagging power and calibration parameters are listed with the uncertainty due to the variation of the resolution parameters. It can be seen that the uncertainties due to the lifetime resolution are of the same order as the uncertainties due to the calibration. The effects are studied for the individual taggers and for the raw and calibrated combinations (all including the OS Charm tagger).
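As an illustration of this bookkeeping (not a result of this thesis), the four variations and the quoted spread could be computed as in the sketch below; the tagging-power values in the sketch are made-up placeholders, and in practice each one comes from a full EPM calibration run with the shifted resolution parameters:

# Central values and uncertainties of the resolution parameters from eq. (5.2).
s0, ds0 = 56.48, 0.69
s1, ds1 = 1.05, 0.07

variations = {
    "s0 up":   (s0 + ds0, s1),
    "s0 down": (s0 - ds0, s1),
    "s1 up":   (s0, s1 + ds1),
    "s1 down": (s0, s1 - ds1),
}

# Tagging power (in %) returned by the four calibration runs; the numbers below are
# made-up placeholders only, each value would come from a full EPM run.
tagging_power = {"s0 up": 7.10, "s0 down": 7.18, "s1 up": 6.95, "s1 down": 7.29}

for label, (a, b) in variations.items():
    print(f"{label}: s0 = {a:.2f}, s1 = {b:.2f} -> tagging power {tagging_power[label]:.2f} %")

spread = max(tagging_power.values()) - min(tagging_power.values())
print(f"lifetime-resolution uncertainty on the tagging power: {spread:.2f} %")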


Table 5.8: Tagging power and uncertainty due to lifetime resolution

Tagger Tagging power Uncertainty (lifetime)

OS Muon        1.882 %    0.151 %
OS Electron    0.700 %    0.052 %
OS Kaon        1.329 %    0.095 %
VtxCharge      1.961 %    0.152 %
OS Charm       0.561 %    0.045 %
SS Kaon        2.861 %    0.202 %
OS Combi Raw   4.361 %    0.329 %
SS OS Raw      6.885 %    0.493 %

OS Combi Charm Raw 4.529 % 0.345 %

SS OS Charm Raw 7.119 % 0.513 %

OS Combi Charm Calib 4.308 % 0.460 %

SS OS Charm Calib 7.142 % 0.345 %

Table 5.9: Calibration parameters with uncertainty due to lifetime resolution

Tagger P0 Uncertainty P1 Uncertainty < η >

OS Muon        0.290    0.008    1.306    0.051    0.356
OS Electron    0.277    0.008    1.762    0.057    0.359
OS Kaon        0.385    0.004    1.726    0.052    0.410
VtxCharge      0.375    0.005    1.394    0.053    0.390
OS Charm       0.347    0.006    1.479    0.057    0.365
SS Kaon        0.426    0.003    0.831    0.027    0.416
OS Combi Raw   0.368    0.005    1.217    0.044    0.370
SS OS Raw      0.390    0.004    0.986    0.033    0.385

OS Combi Charm Raw 0.366 0.005 1.128 0.042 0.363

SS OS Charm Raw 0.388 0.004 0.967 0.033 0.381

OS Combi Calib Charm 0.360 0.006 0.984 0.032 0.381

SS OS Calib Charm 0.383 0.005 0.872 0.046 0.381

5.5 Calibration plots

The following plots show the linear calibration applied to the individual taggers and the combinations. The estimated mistag (η) is plotted against the 'real' mistag (ω), and the fitted calibration is overlaid on the data. The Pearson χ2 test is not applicable to the identity link used here, therefore only the Cressie-Read test and the le Cessie-van Houwelingen-Copas-Hosmer S test (lCvHCH) are used (see section 4.2.2). If the model is suited for the data these test statistics should follow a normal distribution and therefore be close to 0. The test scores are reported with each plot.
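For reference, the binning behind such a plot can be reproduced as in the sketch below (the array names are hypothetical; in this analysis the EPM produces the plots internally):

import numpy as np

def binned_calibration_points(eta, dec, true_id, n_bins=10):
    """Mean predicted mistag and measured mistag omega (fraction of wrongly
    tagged events) in n_bins bins containing equal numbers of events."""
    order = np.argsort(eta)
    eta_sorted = eta[order]
    wrong = dec[order] != true_id[order]
    bins = np.array_split(np.arange(len(eta_sorted)), n_bins)
    eta_mean  = np.array([eta_sorted[b].mean() for b in bins])
    omega     = np.array([wrong[b].mean() for b in bins])
    n_per_bin = np.array([len(b) for b in bins])
    omega_err = np.sqrt(omega * (1 - omega) / n_per_bin)   # binomial uncertainty per bin
    return eta_mean, omega, omega_err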


Figure 5.1: Calibration plots for the individual taggers. The predicted mistag is plotted against the real mistag and the applied calibration is overlaid on the data; all plots use 10 equal-statistics bins. Goodness-of-fit statistics per panel:
(a) OS Muon: Cressie-Read = 0.68725, lCvHCH S = 0.60819
(b) OS Electron: Cressie-Read = 0.33272, lCvHCH S = 0.39097
(c) OS Kaon: Cressie-Read = -1.583, lCvHCH S = -1.5496
(d) OS Charm: Cressie-Read = 0.089082, lCvHCH S = 0.02676


Figure 5.2: Calibration plots for the individual taggers and the raw combinations. The predicted mistag is plotted against the real mistag and the applied calibration is overlaid on the data; all plots use 10 equal-statistics bins. Goodness-of-fit statistics per panel:
(a) VtxCharge: Cressie-Read = 0.0074086, lCvHCH S = -0.044721
(b) SS Kaon: Cressie-Read = 1.0165, lCvHCH S = 1.0862
(c) OS Combi Raw: Cressie-Read = -2.4169, lCvHCH S = -2.4901
(d) SS OS Raw: Cressie-Read = 0.30975, lCvHCH S = 0.33327


Figure 5.3: Calibration plots for the raw combinations including the Charm tagger. The predicted mistag is plotted against the real mistag and the applied calibration is overlaid on the data; both plots use 10 equal-statistics bins. Goodness-of-fit statistics per panel:
(a) OS Combi Charm Raw: Cressie-Read = -2.5258, lCvHCH S = -2.6426
(b) SS OS Charm Raw: Cressie-Read = 0.2898, lCvHCH S = 0.32914


Figure 5.4: Calibration plots for the combinations of calibrated taggers, with and without the Charm tagger. The predicted mistag is plotted against the real mistag and the applied calibration is overlaid on the data; all plots use 10 equal-statistics bins. Goodness-of-fit statistics per panel:
(a) OS Combi Calib: Cressie-Read = -0.0416, lCvHCH S = 0.0025
(b) SS OS Calib: Cressie-Read = 1.1612, lCvHCH S = 1.2715
(c) OS Comb Calib Charm: Cressie-Read = 0.2310, lCvHCH S = 0.2366
(d) SS OS Calib Charm: Cressie-Read = 1.2021, lCvHCH S = 1.2858


Figure 5.5: Calibration plots for the combination of the calibrated OS combination with the calibrated SS Kaon tagger. The predicted mistag is plotted against the real mistag and the applied calibration is overlaid on the data; both plots use 10 equal-statistics bins. Goodness-of-fit statistics per panel:
(a) SS OS OSCombCalib: Cressie-Read = 0.21319, lCvHCH S = 0.38782
(b) SS OS OSCombCalib Charm: Cressie-Read = 0.44571, lCvHCH S = 0.54846


Chapter 6

Discussion

In this thesis we have calibrated the flavour tagging at LHCb for the run 2 data from 2015 and 2016. First we have shown that the tool ('EPM') reproduces the published run 1 results. Then we calibrated the flavour tagging for run 2.

Individual taggers

From the calibration plots and the goodness-of-fit parameters reported with them we can conclude that a linear calibration is appropriate for this dataset. The resulting calibration parameters in table 5.1 show small P0* values, as expected. The P1 values are quite high, resulting in a steeper calibration curve; the only exception is the SS Kaon tagger, which has a low P1 value. This could be due to material interactions. The goodness-of-fit parameters indicate a good fit quality for the individual taggers, so the high P1 values are not a concern. The raw OS combinations have the worst fit quality judging from the statistical tests; a link function other than the default 'Mistag' link could result in a better fit.

The calibrated performances in table 5.2 show a very low tagging rate for OS Electron and OS Charm; the uncertainties on their calibration parameters and tagging power are therefore larger. The largest tagging power is obtained with the SS Kaon tagger, as was also the case in the run 1 analysis.

The run 2 performances can be compared with the run 1 versions of the taggers applied to run 2 data. The biggest improvement is achieved for the SS Kaon tagger. The single track OS taggers did not reach the significantly higher tagging power that was expected from the reoptimization, although for most taggers the tagging rate did improve. The bug in the single track OS taggers leads to a lower tagging rate and therefore also affects the tagging power; with this bug fixed, the performance of these taggers could improve further.

Combination of Taggers

To be able to compare the total tagging power to the run 1 performance the same combination strategy is used: first the OS combination and the SS Kaon tagger are calibrated, then they are combined into a single tagging decision and calibrated again. The total tagging power is improved from 5.8% to 7.05% with this method. The OS Charm tagger is included in the total combination for run 2; without this additional tagger the tagging power is 6.8% for the run 2 optimized taggers. Since no significant correlations are expected between the OS and SS taggers, the effect of the last calibration step should be small. This is supported by the calibration parameters found for SS OS OSCombCalib (Charm) in table 5.5.

The combining strategy preferred by the Flavour Tagging group is a combination of calibrated taggers which is then calibrated again. With this combination the total tagging power is 7.2% including the Charm tagger. Compared to the 5.8% in run 1 this means a 20% relative improvement in the statistical sensitivity of all time-dependent CP violation measurements at LHCb, which is compatible with the expectations for the re-optimization. In this method the calibration parameters are first calculated on a subset of the sample and then applied to the rest of the sample for the combination. In this analysis the subset was selected on the number of tracks being even or odd, which is justified by comparing the mistag distributions before calibration between the two samples. Although the distributions are very similar, the number of tracks is correlated with the tagging decision, so this is not a preferred selection mechanism. Future NTuples should contain the event numbers so that an unbiased subsample of the data can be created.
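Once event numbers are available, such an unbiased split could look like the sketch below (file, tree and branch names are hypothetical placeholders):

import uproot

with uproot.open("DsPi_ntuple.root") as f:
    event_number = f["DecayTree"].arrays(["eventNumber"], library="np")["eventNumber"]

calibration_sample = (event_number % 2) == 0   # even event numbers: derive the calibration
combination_sample = ~calibration_sample       # odd event numbers: apply it in the combination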

When performing a calibration before the combination in the EPM, the uncertainties of the individual calibration parameters are not propagated into the combination. Although the combination itself is calibrated again, the uncertainties could be underestimated. Since the combination is not a simple linear sum, the effect of these uncertainties should be studied with a toy study. This is beyond the scope of this thesis but is recommended for future analyses.
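A rough outline of such a toy study is sketched below for a single calibrated tagger; the mistag distribution, calibration values and tag rate are illustrative assumptions only, and a full study would propagate the individual-tagger uncertainties through the actual combination rather than through a single tagger:

import numpy as np

rng = np.random.default_rng(1)
eta = rng.triangular(0.20, 0.43, 0.50, size=100_000)   # toy per-event mistag distribution
eta_mean = eta.mean()

def tagging_power(omega, tag_rate=0.7):
    """Effective tagging power eps_tag * <(1 - 2*omega)^2> (illustrative tag rate)."""
    return tag_rate * np.mean((1.0 - 2.0 * omega) ** 2)

powers = []
for _ in range(500):
    # Smear the calibration parameters within (illustrative) uncertainties.
    p0 = rng.normal(0.383, 0.005)
    p1 = rng.normal(0.98, 0.05)
    omega = np.clip(p0 + p1 * (eta - eta_mean), 0.0, 0.5)
    powers.append(tagging_power(omega))

print(f"relative spread of the tagging power over the toys: {np.std(powers) / np.mean(powers):.2%}")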

The asymmetry parameters for both the individual taggers and the different combinations are small but non-negligible, as expected (see tables 5.6 and 5.7).

Because Bs → DsK and Bs → Dsπ decays are kinematically similar, the calibration parameters (including the asymmetry parameters) can be directly applied in the Bs → DsK analysis.

Lifetime resolution

The EPM relies on an exact model for the lifetime resolution, since parameters can only be fixed and not constrained. This also means that uncertainties on the lifetime resolution parameters (Gaussian width and scale factor) are not propagated into the results. Usually the effect of varying the resolution parameters is only studied in combination with the calculation of the CP observables for Bs → DsK, since they are strongly correlated with the effects on tagging. For this analysis the effects on the calibration have been studied independently to get an idea of their size. Tables 5.8 and 5.9 show that the uncertainties due to the variation of the lifetime parameters are of the same order as the uncertainties of the calibration itself. The variation of the scale factor had the biggest effect on the tagging power; for the calibration parameters this was less clear. Since the effect is of the same order as the other uncertainties, the resolution uncertainties are important and should be studied more thoroughly. This also emphasizes the importance of an accurate lifetime resolution model. The resolution parameters used here are still under development and could therefore change, in which case the calibration parameters should be recalculated with the new values for the proper time resolution.


Chapter 7

Conclusion

Comparison with the previous analysis shows that the EPM is an appropriate tool to monitor and calibrate tagging results.

From the calibration plots and the corresponding goodness-of-fit parameters it can be concluded that a linear function is appropriate for calibrating Bs0 → Ds+π−. Because of the similarities between the two channels, the calibration parameters from table 5.1 can be directly used in CP violation measurements with Bs0 → Ds±K∓ decays.

The preferred method for combining tagging decisions from different taggers is to first calibrate the individual taggers and then calibrate their combination again, to account for correlations between the taggers. Using this method the total tagging power is raised from 5.8% in run 1 to 7.2% in run 2, including the OS Charm tagger which was not yet available in run 1. The OS Charm tagger adds a valuable 0.2% to the total combination and is therefore recommended. The biggest improvement comes from the higher tagging rate and tagging power of the SS Kaon tagger. With the recent fix of the bug in the single track OS taggers the tagging power may improve even further.

Variation of the lifetime resolution parameters gives an uncertainty of the same order as the uncertainty due to the calibration, for both the calibration parameters and the tagging power. The effects of the lifetime resolution should also be studied in combination with the effects on the CP observables for Bs → DsK to properly propagate these uncertainties. Furthermore, when new values for the proper decay time resolution become available the calibration parameters should be recalculated.

The uncertainties of the calibration parameters of the individual taggers are not propagated into the combination. To study the effect of these uncertainties a toy study is proposed, in which toys are generated following a given mistag distribution; perturbations of that distribution show the effect on the combination.

The relative improvement of the tagging power in run 2 corresponds to a 20% relative improvement of statistical sensitivity for all time-dependent CP violation measurements at LHCb.
