
M.Sc. Thesis

Detecting Muons at LHCb Upgrade Conditions

Author : H.S. Kuindersma

First examiner: Dr. Ir. C.J.G. Onderwater
Second examiner: Dr. Ir. J.P.M. Beijers

August 10, 2018

University of Groningen

Faculty of Science and Engineering

Van Swinderen Institute for Particle Physics and Gravity


Contents

Introduction

1 Standard model and beyond
1.1 Standard model
1.1.1 Fundamental particles
1.1.2 Fields and forces
1.2 Unexplained phenomena
1.2.1 Flavor sector
1.3 New physics models
1.3.1 Unification
1.3.2 Supersymmetry
1.3.3 Strings

2 B0s → µ+µ− and the search for new physics
2.1 Physics of B0s → µ+µ−
2.1.1 Quark flavor changing currents
2.1.2 Branching fraction and effective lifetime
2.1.3 Testing new physics models
2.2 Hints of new physics
2.2.1 RK(∗) and RD(∗) results

3 LHCb at the LHC
3.1 Large hadron collider
3.1.1 Proton-proton collisions
3.2 Detector lay-out
3.2.1 Tracking
3.2.2 Particle identification
3.3 Upgrade during long shutdown 2

4 Identifying and triggering muons
4.1 Muon identification
4.1.1 Efficiency determination
4.2 Trigger
4.2.1 Level-0
4.2.2 High level trigger
4.2.3 Trigger after the upgrade

5 Muon detector dead time induced inefficiencies
5.1 Intermediate board removal
5.2 Muon chamber replacement

6 Muon selections at upgrade conditions
6.1 The variables
6.1.1 χ²corr
6.1.2 Boosted decision trees
6.2 Training of a multilayer perceptron
6.2.1 Artificial neural network
6.2.2 Results
6.3 Variable performances in the HLT
6.3.1 Data samples and χ²corr distributions
6.3.2 Rates and signal efficiencies
6.3.3 Rate reduction at upgrade conditions
6.3.4 χ²corr in the HLT2

Summary and outlook

Acknowledgment

References


Introduction

The Standard Model (SM) of particle physics is the most complete and accurate theory of the fundamental building blocks of our universe. For the past few decades it has proven to be a very successful theory, describing nature with very high precision. Nevertheless, there are clear indications that the limits of its range of applicability are being reached.

There are certain phenomena which are not described by the SM, making it apparent that the SM is not a complete theory. A few striking examples of such phenomena are the absence of a description of gravity as the fourth force, the lack of an explanation for dark matter and dark energy in our universe, and the imbalance between matter and anti-matter in our visible universe.

To give a theoretical explanation for the shortcomings of the SM, new physics models can be developed. Physicists can test the validity of these models in two ways: directly, e.g. by looking for new particles associated with them, or indirectly, by testing the SM predictions with high accuracy with the goal of finding places where the experimental observations deviate from those predictions. In this way, the results can point in the direction of the new models in which such deviations are supported.

The LHCb experiment is an experiment in which the latter strategy is put into practice.

Its detector is located at the Large Hadron Collider (LHC), the largest particle accelerator in the world, run by the European Organization for Nuclear Research (CERN). Protons, travelling at a speed almost equal to that of light, are made to collide at a center of mass energy of 13 TeV. Every collision produces a vast number of new particles; such a collision, together with its products, is known as an event and can be detected by LHCb.

Because of the record high energy of the LHC and the unprecedented production of B mesons in proton-proton collisions, the LHCb experiment can test the flavor sector of the SM with high accuracy. An example of where new physics may be found is the B0s → µ+µ− decay. Its branching fraction can be measured and compared to the Standard Model prediction. The decay proceeds through flavor changing neutral currents, which are highly suppressed within the SM, so any deviation from the predicted value would immediately hint toward new physics.

Since the LHC became operational, LHCb has performed exceptionally well and is expected to have collected more than 8 fb−1 of data by the end of run II (2018). Nevertheless, in order to fully exploit the flavor physics potential of the LHC, the luminosity can be increased. This increase requires an upgrade of the detector, currently foreseen during the second long shutdown of the LHC (2018-2020), after which the LHCb detector is expected to collect 5 fb−1 of data per year.

The increase in luminosity means an increase in the number of particles traversing the detector. This has several consequences. First of all, not all of the current detector hardware can cope with the increased particle fluxes, and those components will have to be replaced. Furthermore, the experiment cannot profit from the increase in luminosity if the current trigger strategy remains unchanged, because the readout rate of the electronics is limited and has to be increased. Therefore, the trigger strategy will be revised in view of the upgrade.


Muons appear in the final state of many interesting decays, such as B0s → µ+µ−. High muon detection and identification efficiencies are crucial to be able to perform these types of analyses at LHCb. This brings us to the main topic of this thesis: the consequences of the upgrade for the detection of muons. First of all, despite the planned upgrade of the detector and the muon system, the increase in particle flux will increase the dead time of the electronics, which in turn causes inefficiencies in detecting muons. Secondly, the increase in particles does not only mean more muons; it also means more misidentification due to an increase in combinatorial background. Studies were performed to see whether these detection and identification inefficiencies can be reduced.

Outline

In Chapter 1, an introduction is given to the theory of fundamental particles and forces.

It serves as a basis for the rest of the thesis. In Chapter 2, the search for new physics is discussed, using B0s → µ+µ− to explain the importance of these searches.

The LHCb detector, with which these searches are performed, will be discussed in Chapter 3. The upgrade of the detector and the consequences on the hardware will also be discussed.

Chapter 4 focuses on two important parts of the LHCb detector: muon identification and the trigger. The upgraded trigger will also be discussed.

Results on how the dead time induced inefficiencies, caused by the increased particle flux, can be reduced are given in Chapter 5. In Chapter 6, the results are given of a study on how the newly developed muon identification algorithms can be used in the upgraded trigger to remove combinatorial background more effectively and improve the signal (muon) to background ratio.


1 Standard model and beyond

In this chapter a general overview of the theory is given to set a basis for the rest of the thesis, based on [1–3]. The discussion starts with a brief explanation of the Standard Model (SM) of particle physics in Sec. 1.1. The SM was designed with the purpose of explaining the experimental observations of elementary particles. So far, it is the most complete and successful theory of particles at the subatomic scale. Despite its many successes, which include the prediction and discovery of the Higgs boson [4, 5], the SM is not a complete theory. Some of the phenomena that are not explained by the SM, and that therefore make apparent that it needs extensions, are discussed in Sec. 1.2. These phenomena can be experimental observations as well as theoretical problems. In Sec. 1.3, the last section of this chapter, certain new physics models are discussed as possible solutions to the shortcomings of the SM.

1.1 Standard model

The SM is a theory that attempts to describe the building blocks of our universe and their interactions. These building blocks are the fundamental particles. The interactions of the particles are called forces. Four forces can be distinguished based on their strength, range and the particles they affect: the strong, weak and electromagnetic forces, and gravity. Gravity, however, is not described within the SM.

1.1.1 Fundamental particles

Currently, 12 different fundamental particles have been identified from experimental observations. These are called the fermions and have half-integer spin. They can also be referred to as the 12 fermion “flavors”. The fundamental forces act differently on the fermions, and they can therefore be divided into six quarks and six leptons. The quarks can then be divided into up-types (up, charm, top) with +2/3e charge and down-types (down, strange, bottom) with −1/3e charge.

    ( u )   ( c )   ( t )      +2/3e
    ( d )   ( s )   ( b )      −1/3e

The different leptons are the electron (e), muon (µ) and tauon (τ), all with −1e charge, and their corresponding neutrinos, which carry no charge. The strong force only acts on the quarks, the weak force on all fermions and the electromagnetic force on all charged particles. Based on the strength with which the strong and weak forces act on the fermions, they can be grouped into three generations, each containing a quark pair and a lepton pair.


    ( e  )   ( µ  )   ( τ  )      −1e
    ( νe )   ( νµ )   ( ντ )       0e

Finally, for every fermion there is an anti-fermion with the same mass but opposite charge, arriving at 24 fundamental particles in our visible universe.

1.1.2 Fields and forces

The underlying theory of the SM is Quantum Field Theory (QFT) [6], a combination of special relativity and quantum mechanics. Within this framework, every particle can be seen as an excitation of the corresponding quantum field.

The way forces arise within the SM is by requiring that the theory be invariant under local phase transformations of the gauge group

SU(3)C × SU(2)L × U(1)Y.

Because of this symmetry requirement, gauge fields have to be introduced to the model in order to keep the invariance. The excitations of the gauge fields are called bosons, which have integer spin. In the SM, a force is the exchange of a boson between the fermions: the gluons for the strong force, the W± and Z bosons for the weak force and the photon for the electromagnetic force.

The last fundamental particle within the SM (Fig. 1) is the scalar Higgs boson, responsible for the mass of the weak force carriers by means of the Higgs mechanism. The gluons and photons are massless.

The theory that describes the interactions of quarks and gluons is known as Quantum Chromodynamics (QCD). In this theory, quarks carry color charge (red, green or blue) and anti-quarks carry the opposite color charge. One of the postulates of QCD is that only colorless (color-neutral) states can be observed in nature. Therefore, quarks never appear in isolation.

This is known as color confinement. Quarks can, however, form a hadron: a color-neutral state of multiple quarks bound together by gluons. This process is called hadronisation.

A hadron consisting of a quark and an anti-quark is called a meson, e.g. the B0s (b̄s). A baryon is a hadron consisting of three quarks, e.g. the proton (uud).

Recent observations have led to the discovery of more exotic hadrons, such as tetra- and pentaquarks [7, 8].

The electromagnetic and weak interactions appear very different in current experimental observations. However, above the electro-weak unification energy they can be described by a single theory: the electro-weak theory. Together with QCD it forms the SM. Attempts have been made to unify QCD and the electro-weak theory to obtain a grand unified theory, a theory in which there is only one, larger, gauge symmetry (Sec. 1.3).


Figure 1: All fundamental particles and forces within the SM, including the generations and the Higgs boson. The sizes represent the relative mass differences.

1.2 Unexplained phenomena

Despite the many successes and extremely accurate predictive power of the SM, it is not a complete theory of nature. There are several issues, coming from theory and observation, which the SM cannot explain. Some examples which reveal the shortcomings of the SM are briefly discussed below.

Gravity: The most striking example of incompleteness of the SM, already mentioned above, is the absence of a description of gravity as the fourth force. Currently, the best theory describing gravity is general relativity. This theory is based on classical physics while the three forces in the SM are based on quantum mechanics. Many attempts have been made to produce a theory of quantum gravity and combine it with particle physics in order to obtain a so called Theory of Everything (Sec. 1.3).

Dark matter and dark energy: The SM can give a very accurate description of how the fundamental building blocks of our universe, the fermions and bosons, interact.

However, these elementary particles only make up what is called our visible universe. According to astronomical observations [9], only about 5% of the energy content of the universe is made of ordinary (atomic) matter. The rest is divided into dark matter (26%) and dark energy (69%). Currently, there is no known solution to this problem and the origin of dark matter and dark energy remains unknown.


Hierarchy problem: The Planck scale is the energy scale at which quantum gravitational effects become apparent [10]. It seems obvious that the SM needs to be extended in order to explain physics at these energy levels. The SM provides no answer to why there is such a huge gap between the electro-weak energy scale (10²–10³ GeV) and the Planck scale (10¹⁹ GeV). This is known as the hierarchy problem.

1.2.1 Flavor sector

Next to the more general physics problems discussed above, there are also several open questions related directly to the flavor sector of the SM:

Baryogenesis: In the SM, a particle and its corresponding anti-particle have the same mass but opposite quantum numbers such as charge. When a particle and an anti-particle interact, they annihilate, producing photons. Given these principles, it would make sense to have similar amounts of matter and anti-matter in the universe, yet we appear to be living in a matter-dominated universe [11]. One proposed solution to this problem is C and CP violation larger than the SM prediction [12].

Neutrino mass: Within the SM, neutrinos are massless. Recent observations have shown that neutrinos can oscillate [13], changing flavor during propagation. This proves that neutrinos do have mass. This can be solved by making a small extension to the SM:

letting the mass eigenstates of the neutrinos be different from the flavor eigenstates.

There are several other open questions in the flavor sector, such as why there are exactly three generations, what fundamental mechanism is responsible for the quark mass hierarchy, and what sets the characteristic strength of the weak interaction for the different quark flavors.

1.3 New physics models

Attempts to answer the types of questions listed above, or to solve other unexplained phenomena, come in the form of new physics models, or physics beyond the SM. As mentioned earlier, the electromagnetic and weak forces unify into a single force at some energy scale.

Attempts have been made to include the strong force in this unification as well. There are even more complete unifying schemes, e.g. theories in which gravity is included. A general discussion of unifying models, as well as some other new physics models such as supersymmetry and string theories, will be given next. The general discussion relies on [14, 15].

1.3.1 Unification

The relative strengths, or coupling constants, of the three different forces within the SM have different values at different energy scales. If they are plotted, it can be seen that they appear to converge toward a single point at an energy of about 10¹⁵ GeV. One explanation is that at


that energy scale there is a larger symmetry group, which breaks, in a way similar to the Higgs mechanism, into some intermediate group or directly into the SM group. Theories that try to unify the electro-weak and strong interactions and explain nature by a single gauge symmetry group are known as Grand Unified Theories (GUTs). Examples of such symmetry groups, in which the SM is embedded, are SO(10) and E6.

1.3.2 Supersymmetry

In fact, the coupling constants tend to match more precisely if a supersymmetric extension of the SM is used. The point where they are equal increases somewhat, to about 10¹⁶ GeV. The Minimal Supersymmetric Standard Model (MSSM) is a model in which every fermion in the SM gets a bosonic supersymmetric partner and vice versa; the partners of the fermions are named by putting an “s” in front of the SM name. So, for example, the supersymmetric partner of the electron is called the selectron. Furthermore, the Higgs boson receives two supersymmetric partners, called Higgsinos. Supersymmetry cannot be an exact symmetry: if it were, the supersymmetric partners would have the same masses as their SM counterparts, and no such particles have been observed. Nevertheless, it can be an approximate symmetry of nature in which the masses of the supersymmetric particles are higher than those of their SM partners, possibly within reach of current high energy experiments.

1.3.3 Strings

Next to GUTs and supersymmetry, physicists have attempted to unify gravity with the SM to obtain a so-called Theory of Everything (ToE). A possible ToE candidate is string theory. In string theory, elementary particles are no longer excitations of quantum fields but vibration modes of 1-dimensional strings in 10 (sometimes 11) dimensions. Since we appear to be living in a four-dimensional world, these extra dimensions would have to be very small or compactified. It was originally hoped that the SM would emerge from string theory as a unique four-dimensional limit. However, string theorists have determined that there is an enormous set (> 10⁵⁰⁰) of possible low-energy theories that could emerge, each with its own particle interactions and parameters. This limits the current predictive power of string theory.

A general overview of the SM, its shortcomings and possible new physics models was given above. The role of experiments is to test the validity of these models, to see in which direction the particle physics community will move next. This can be done in two ways:

testing the theoretical predictions directly, for example by searching for supersymmetric particles, or by indirect searches, testing the SM predictions with high accuracy. One experiment where the indirect approach is put into practice is the LHCb experiment, where the flavor sector of the SM is put to the test. How exactly LHCb attempts to do this is discussed in detail in the next chapter.


2 B0s → µ+µ− and the search for new physics

As we have seen from the previous discussion, despite the SM being such a successful and precise theory, signs of incompleteness are increasingly apparent. New physics models are under constant development to explain its shortcomings. The LHCb experiment is designed to explore the validity of the SM through precision measurements.

Most of the key physics measurements done by the LHCb experiment, listed in [16], are precision measurements with the goal of testing the flavor sector of the SM. So-called Flavor Changing Neutral Currents (FCNC), highly suppressed within the SM, can receive additional contributions from particles associated with new physics models. The magnitudes of these processes can be measured with high precision at LHCb and compared to the SM prediction.

One of the problems addressed in the previous chapter is the baryon asymmetry of the universe. One of the conditions needed to produce such an asymmetry is CP violation.

However, there is not enough CP-violation in the SM to explain the asymmetry. FCNC can also alter the SM magnitude of CP-violation. This is why CP-violating decays are actively searched for at LHCb.

Another interesting probe for LHCb in the exploration of the validity of the SM are so-called rare decays. These are decays with very small branching fractions, which can be significantly altered by new physics models. B0s → µ+µ− is such a rare decay, and specifically its branching fraction is of great interest. The physics behind this decay and an explanation of why it is so interesting are given in Sec. 2.1.

Next to testing the SM by studying CP-violating and rare decays, another important physics measurement done at LHCb is the testing of lepton flavor universality. Within the SM, the bosons related to the weak force, W± and Z0, couple identically to the three lepton flavors. Any deviation of the relative strengths would hint toward new physics. In Sec. 2.2, some of the latest results of the LHCb experiment involving lepton flavor universality will be discussed, particularly those results which are currently not in agreement with the SM and can thus possibly hint toward physics beyond the SM. The LHCb detector, with which all these types of searches are performed, is described in detail in Ch. 3.

2.1 Physics of B0s → µ+µ−

The decay B0s → µ+µ− has long been identified as a powerful probe for new physics (Fig. 2). Many upper limits on its branching fraction have been set throughout the years, getting ever closer to the SM prediction. The di-muon signature of the decay, together with the unprecedented B production at the Large Hadron Collider (Ch. 3), finally made it possible to observe the decay for the first time. In fact, its observation was recently announced by a combined search of LHCb and CMS [17]. Some of the physics behind this decay is discussed below.


Figure 2: History of the search for B0s → µ+µ−. It is only in recent years that the SM prediction can be tested with high precision. Figure taken from [17].

2.1.1 Quark flavor changing currents

In the framework of the SM, the weak force is able to convert a certain up-type quark flavor (u, c, t) into a down-type quark flavor (d, s, b) through the exchange of a W±, and vice versa. A well-known process where this occurs is in semileptonic decays, processes containing both hadrons and leptons, e.g. the decay of a neutron (Fig. 3).

The strength of these types of charged-current weak interactions is characterized by the Cabibbo-Kobayashi-Maskawa (CKM) matrix [18]. For example, the object that couples to the u quark is in fact a superposition of all down type quarks:

    d′ = Vud d + Vus s + Vub b.    (1)

Therefore, all possible combinations of mixing can then be given in terms of the CKM matrix:

    ( d′ )   ( Vud  Vus  Vub ) ( d )
    ( s′ ) = ( Vcd  Vcs  Vcb ) ( s ).    (2)
    ( b′ )   ( Vtd  Vts  Vtb ) ( b )

The probability for an up-type quark i to convert into a down-type quark j is proportional to |Vij|².
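To make the hierarchy of these couplings concrete, the short Python sketch below tabulates rough magnitudes of the CKM elements (approximate, illustration-only values; the precise fitted numbers are in [19]) and prints the corresponding transition probabilities |Vij|².

    # Approximate magnitudes of the CKM matrix elements, quoted here only to
    # illustrate the hierarchy; see [19] for the precise experimental values.
    V = {
        ("u", "d"): 0.974, ("u", "s"): 0.225, ("u", "b"): 0.004,
        ("c", "d"): 0.225, ("c", "s"): 0.973, ("c", "b"): 0.041,
        ("t", "d"): 0.009, ("t", "s"): 0.040, ("t", "b"): 0.999,
    }

    # The i -> j transition probability is proportional to |V_ij|^2, so
    # same-generation transitions dominate strongly.
    for (up, down), v in V.items():
        print(f"{up} -> {down}: |V|^2 ~ {v**2:.1e}")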


Figure 3: The conversion of a d quark into a u quark through the exchange of a W boson in free neutron decay.

The values of the CKM matrix elements are obtained experimentally; for the current best values, see [19].

The W± boson also couples to leptons, changing a charged lepton into a neutrino of the same generation. Due to lepton universality, these couplings are the same for the different flavors, and therefore there is no analogous CKM matrix for the mixing of charged leptons.

However, because neutrinos have been observed to oscillate, there does exist an analogous matrix for neutrino mixing, called the Pontecorvo-Maki-Nakagawa-Sakata (PMNS) matrix [20].

Within the SM, flavor changing neutral currents (FCNC) are not allowed at tree level (Fig. 4a). They can, however, occur through loop contributions (Fig. 4b). The occurrence of these transitions is rare due to several suppressing mechanisms [21]. First of all, they involve multiple weak interaction vertices as well as a loop suppression factor. Secondly, the unitarity of the CKM matrix would impose a complete cancellation between the loop contributions (the GIM mechanism [22]); this cancellation is broken only because the quark masses differ. Therefore, B0s → µ+µ− is still allowed to occur within the SM (Fig. 5), although at a very suppressed rate. How often it occurs is expressed in terms of the branching fraction.

2.1.2 Branching fraction and effective lifetime

The experimentally determined branching fraction is the fraction of decays to a specific final state over all decays of the mother particle. So for B0s → µ+µ−:

    B(B0s → µ+µ−) ≡ [ N(B0s → µ+µ−) + N(B̄0s → µ+µ−) ] / [ N(B0s) + N(B̄0s) ].    (3)

In this definition of the branching fraction, no distinction is made between the B0s and the B̄0s. The experimentally determined branching fraction can be compared to the theoretical prediction, which is based on B0s flavor eigenstates. The time dependent decay rate to a certain final state X is given by


Figure 4: Flavor changing neutral currents. (a) FCNC at tree level, forbidden within the SM. (b) FCNC at loop level, allowed within the SM.

    Γ(B0s(t) → X) = [ dN(B0s(t) → X)/dt ] / N(B0s),    (4)

and the theoretical branching fraction is then given by

    B(B0s → µ+µ−) ≡ (1/2) ∫_0^∞ ⟨Γ(B0s(t) → µ+µ−)⟩ dt,    (5)

where

    ⟨Γ(B0s(t) → µ+µ−)⟩ = Γ(B0s(t) → µ+µ−) + Γ(B̄0s(t) → µ+µ−).    (6)

However, because the B0s and its anti-partner can mix, the physical B0s meson that propagates through space is a mixture of the two. Therefore, the lifetime and decay width cannot be determined for the flavor eigenstates but have to be determined for the mass eigenstates B0s,H and B0s,L, where H and L stand for heavy and light, referring to the mass hierarchy of the physical particles. The decay width is then given by

    Γs = (Γs,H + Γs,L) / 2,    (7)

arriving at the theoretical prediction of the branching fraction

    Bth(B0s → µ+µ−) = (1/2) [ Γ(B0s(t) → µ+µ−)|t=0 / ((Γs,H + Γs,L)/2) + Γ(B̄0s(t) → µ+µ−)|t=0 / ((Γs,H + Γs,L)/2) ].    (8)

See [21] for an exact theoretical computation. The most recent SM prediction is

    BSM(B0s → µ+µ−) = (3.66 ± 0.23) × 10⁻⁹.    (9)

Because of this extremely low value, these types of decays are often referred to as “rare”

decays. The experimentally determined branching fraction can also be expressed in terms


Figure 5: Main contributions to B0s → µ+µ− within the SM: (a) “Z penguin” diagram, (b) “W box” diagram.

of mass eigenstates. Because of a lifetime asymmetry between the mass eigenstates [23], the theoretically and experimentally determined branching fractions can be related by

    Bexp(B0s → µ+µ−) = [ (1 + AΔΓ ys) / (1 − ys²) ] × Bth(B0s → µ+µ−),    (10)

where

    ys ≡ (Γs,L − Γs,H) / (Γs,L + Γs,H)    (11)

is the relative decay width difference between the mass eigenstates and

    AΔΓ = [ Γ(B0s,H → µ+µ−) − Γ(B0s,L → µ+µ−) ] / [ Γ(B0s,H → µ+µ−) + Γ(B0s,L → µ+µ−) ]    (12)

is the mass eigenstate rate asymmetry. The value of AΔΓ can vary depending on the physics model describing it. In the SM, Γ(B0s,L → µ+µ−) = 0, so AΔΓ = 1. By measuring the effective lifetime of the B0s, the value of AΔΓ can be constrained [24]. Therefore, complementary to the determination of the branching fraction, measuring the effective lifetime of B0s → µ+µ− is another interesting probe for new physics.

2.1.3 Testing new physics models

The accuracy of the prediction of the branching fraction B(B0s → µ+µ−) can be improved by canceling out theoretical uncertainties [17]. This is done by taking the ratio

    R = B(B0 → µ+µ−) / B(B0s → µ+µ−) = 0.0295 +0.0028 −0.0025.    (13)

New physics models often come with the introduction of new particles. These new particles can interact with the SM particles, extending the SM diagrams for the B0s → µ+µ−


Figure 6: Hypothetical contributions from new physics models introducing new particles X0 and X+ which can alter the decay rate: (a) extension of the “Z penguin” diagram beyond the SM, (b) extension of the “W box” diagram beyond the SM.

decay with new processes (Fig. 6). This can alter the SM branching fraction prediction.

By measuring the value of the branching fraction, certain new physics models, such as different types of MSSM models, can be constrained (Fig. 7). The currently measured value of B(B0s → µ+µ−) is still in agreement with the SM prediction, but the situation can be improved by including AΔΓ in the measurement and by increasing the amount of data used for the analysis. The discussion will now turn to some recent anomalies observed at LHCb, which can hint toward new physics.

2.2 Hints of new physics

Next to CP-violating and rare decay searches, Lepton Flavor Universality (LFU) tests are also performed at LHCb. LFU is the assumption that the interactions of the electron and its neutrino are equal to those of the other leptons and their neutrinos once the mass differences are accounted for. This principle can be applied to B meson decays with leptons in the final state as well. Recent measurements of this type at LHCb challenge the SM prediction.

2.2.1 RK(∗) and RD(∗) results

Semileptonic processes can be used to test LFU in decays involving b → s ℓ ℓ (Fig. 8). The rate of such processes should be equal for electrons and muons once mass corrections are taken into account. This can be tested by measuring the ratio

    RK = Γ(B+ → K+µ+µ−) / Γ(B+ → K+e+e−),    (14)

which should be equal to unity according to the SM.


Figure 7: The possible values of the branching fractions of B0(s) → µ+µ− in different new physics models. It shows how the observed quantities can exclude parameter space of the new physics models. The grey area in the top figure is excluded by experiments preceding the LHCb results, and the grey area in the bottom figure by the LHCb results with 2011 data. The blue ellipse shows the region from the latest results [17]. See [25] for an explanation of the different new physics models. The figure is taken from [21].


Figure 8: Example of a b → s ℓ ℓ diagram within the SM, which should occur equally for ℓ = µ, e.

In fact, the strategy used by LHCb is to determine [26]

    RK = [ B(B+ → K+µ+µ−) / B(B+ → K+J/ψ(→ µ+µ−)) ] / [ B(B+ → K+e+e−) / B(B+ → K+J/ψ(→ e+e−)) ]    (15)

to cancel out any experimental uncertainties related to detecting either electrons or muons. The value of RK was found to be [27]

    RK = 0.745 +0.090 −0.074 (stat) ± 0.036 (syst),    (16)

which is in agreement with the SM prediction to within 2.6σ.
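The double-ratio strategy of Eq. (15) can be summarized in a few lines; the yields below are made-up placeholders (in a real analysis they would be efficiency-corrected yields from mass fits), used only to show how normalising each rare mode to its resonant J/ψ mode cancels electron–muon detection differences.

    # Double ratio of Eq. (15): rare mode over resonant J/psi mode, for muons and
    # electrons separately. All yields here are invented placeholder numbers.
    def r_k(n_kmumu, n_jpsi_mumu, n_kee, n_jpsi_ee):
        return (n_kmumu / n_jpsi_mumu) / (n_kee / n_jpsi_ee)

    print(r_k(n_kmumu=1200, n_jpsi_mumu=6.0e5, n_kee=800, n_jpsi_ee=3.0e5))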

A similar analysis was done for B0 → K∗0ℓ+ℓ− decays to obtain RK∗. This also used the double ratio to remove experimental uncertainties. The obtained values for two different bins of dilepton invariant mass squared, q², were

    RK∗ = 0.66 +0.11 −0.07 (stat) ± 0.03 (syst)  and  0.69 +0.11 −0.07 (stat) ± 0.05 (syst),    (17)

which are in agreement with the SM prediction to within 2.1 and 2.4σ, respectively.

A different strategy is to study decays involving b → c ℓ ν, such as B̄0 → D∗+ℓ−ν̄ℓ, where the ratio of ℓ = τ over ℓ = µ is taken. The τ can be measured indirectly through τ− → µ−ν̄µντ or τ− → π−π+π−ντ, and the measured observable RD∗ is compatible with the SM prediction to within 2.1 or 1.0σ, respectively [26]. By now, a factor of three more data has been acquired by the LHCb experiment than was used for these analyses [26], which could give interesting updates on the status of LFU.


As we have seen in the previous discussions, the SM is not a complete theory, as several physics phenomena occurring in nature are not described by it. New physics models are actively under investigation as solutions to these types of problems. Because the prediction of B(B0s → µ+µ−) is both extremely small and precise, extending the SM with new theories and interactions can significantly alter this value. That is why the experimentally determined value of B(B0s → µ+µ−) is such an interesting probe in the search for new physics.

Furthermore, hints of new physics have been found in LFU tests at LHCb which require further investigation. The LHCb detector, used for these types of searches in the flavor sector of physics, will be discussed in the next chapter.


3 LHCb at the LHC

The Large Hadron Collider (LHC) is the largest particle accelerator in the world, with a 27 km circumference. It is operated by the European Organization for Nuclear Research (CERN). CERN hosts a large number of experiments, including four major experiments involving proton-proton collisions in the LHC, which are discussed briefly in Sec. 3.1.

One of the four detectors involved with experiments at the LHC is LHCb. The LHCb detector will be discussed in more detail in Sec. 3.2, including its subdetectors used for tracking and particle identification. At the end of 2018, when the LHC will shut down for two years, the LHCb detector will undergo a series of upgrades with the goal of increasing the amount of data gathered from the proton-proton collisions. A brief overview of the detector upgrade and its consequences will be given in Sec. 3.3.

3.1 Large hadron collider

The LHC design is described in detail in [28]. It is located partially in France and Switzerland (Fig. 9), 50 to 175 m below the surface. The protons get their peak energy through several steps of acceleration. They start out simply as hydrogen gas. The protons and electrons in the gas are separated using an electric field. The protons are then sent to LINAC 2, the only linear accelerator in the sequence, where they obtain an energy of 50 MeV. The Proton Synchrotron Booster (PSB) then accelerates the protons to 1.4 GeV. They are subsequently accelerated to 25 GeV and 450 GeV by the Proton Synchrotron (PS) and Super Proton Synchrotron (SPS), respectively, before entering the LHC. During 2011, the first run of data taking, the proton beams were accelerated to around 3.5 TeV per beam. This was gradually increased, and in 2018 a center of mass energy of √s = 13 TeV was reached. This makes the protons highly relativistic. To keep the protons on their circular trajectory, 1232 dipole magnets, each 14 m long, are used. The proton beams are made to collide at four LHC collision points. At each of these points a detector is located. The experiments involved are ALICE, ATLAS, CMS and LHCb.

3.1.1 Proton-proton collisions

Each of the proton beams is divided into 3564 slots and every slot can contain a bunch of about 10¹¹ protons. The separation of the bunches is therefore 27·10³ m / 3564 ≈ 7.6 m. Because the velocity of the protons is almost equal to the speed of light, the time between bunch crossings is 7.6 m / (3·10⁸ m s⁻¹) ≈ 25 ns. This means the maximum bunch crossing rate is 40 MHz. In practice, however, not all slots are filled and the effective bunch crossing rate, νeff, is lower.
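The bunch-spacing arithmetic above can be checked with a few lines of Python (all numbers are the approximate ones quoted in the text):

    # Back-of-the-envelope check of the bunch spacing and maximum crossing rate.
    circumference = 27e3     # LHC circumference in metres (approximate)
    n_slots = 3564           # bunch slots per beam
    c = 3e8                  # speed of light in m/s (the protons are ultra-relativistic)

    spacing = circumference / n_slots    # ~7.6 m between neighbouring slots
    crossing_time = spacing / c          # ~25 ns between bunch crossings
    max_rate = 1.0 / crossing_time       # ~40 MHz maximum bunch crossing rate

    print(f"{spacing:.1f} m, {crossing_time * 1e9:.1f} ns, {max_rate / 1e6:.0f} MHz")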

An important parameter describing the proton-proton (pp) collisions is the instantaneous luminosity L. Together with the pp collision cross section σpp, which scales with the center of mass energy [29], it determines the rate of collisions as

    dNpp/dt = L · σpp.    (18)


Figure 9: A schematic overview of the LHC and its interaction points. The LHCb detector is located at point 8, close to the French-Swiss border. The other experiments are ATLAS, ALICE and CMS.

For a colliding bunch pair it can be written as [30]

    L = N1 N2 νeff Ω,    (19)

where N1 and N2 are the populations of the corresponding bunches and Ω is the beam overlap integral, which is determined by the passage of the two bunches and their particle density distributions.

Another quantity often used for pp collisions is the luminosity integrated over the time in which the collisions take place:

    Lint = ∫ L dt.    (20)

As this quantity has units of cm⁻², it is often expressed in units of inverse femtobarn, where fb⁻¹ = 10³⁹ cm⁻². Furthermore, it is directly related to the total number of pp collisions, which is why it is widely used as an expression of the total amount of data acquired.

The instantaneous luminosity at LHCb is kept lower (L = 4·10³² cm⁻²s⁻¹) than the LHC design luminosity of L = 10³⁴ cm⁻²s⁻¹. This is done to reduce the number of pp interactions per bunch crossing, which has several advantages, including less deterioration of the electronics in the detector and better reconstruction of particles. The number of inelastic pp collisions per bunch crossing in the LHCb detector can be determined as


    µ = (L · σpp) / νeff,    (21)

where µ is the mean of a Poisson distribution.
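As a rough illustration of Eq. (21), the sketch below computes µ and the Poisson probability of at least one inelastic interaction per crossing. The cross section and effective crossing rate are assumed, order-of-magnitude inputs, not the values used elsewhere in this thesis.

    import math

    # Illustrative inputs only.
    L_inst   = 4e32    # instantaneous luminosity at LHCb, cm^-2 s^-1
    sigma_pp = 8e-26   # assumed inelastic pp cross section (~80 mb), cm^2
    nu_eff   = 3e7     # assumed effective bunch crossing rate, ~30 MHz

    mu = L_inst * sigma_pp / nu_eff       # mean pp interactions per crossing, Eq. (21)
    p_at_least_one = 1.0 - math.exp(-mu)  # Poisson probability of a visible crossing
    print(f"mu = {mu:.2f}, P(>=1 interaction) = {p_at_least_one:.2f}")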

A pp bunch crossing is usually referred to as an “event”. An event in which there was an inelastic pp collision can contain up to hundreds of newly produced particles. The LHCb detector is used to determine where exactly the particles traversed the detector, which is known as tracking. Furthermore, several subdetectors exploit the different “signatures” left by different particles in an attempt to identify them. An overview of the detector lay-out, as well as of the different subdetectors used for tracking and particle identification, is given below.

3.2 Detector lay-out

The LHCb detector [31, 32] (Fig. 10) is a spectrometer covering the forward region. It covers a pseudorapidity range of 2 < η < 5, where the pseudorapidity describes the direction of a particle originating from a pp collision and is defined as

    η ≡ −ln(tan(θ/2)),    (22)

where θ is the angle with respect to the beam pipe. The detector is designed to study decays of hadrons containing b or c quarks (Fig. 10). These hadrons are predominantly produced at small angles to the beam, i.e. in the high pseudorapidity region.
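A small Python helper makes Eq. (22) concrete and shows that the acceptance 2 < η < 5 corresponds to polar angles from roughly 15 degrees down to below one degree:

    import math

    def pseudorapidity(theta):
        """Pseudorapidity, Eq. (22), for a polar angle theta (radians) w.r.t. the beam pipe."""
        return -math.log(math.tan(theta / 2.0))

    # Invert the definition to see which polar angles bound the LHCb acceptance.
    for eta in (2.0, 5.0):
        theta = 2.0 * math.atan(math.exp(-eta))
        print(f"eta = {eta}: theta = {math.degrees(theta):.1f} deg")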

The different subdetectors of LHCb serve different purposes. They can be grouped into the tracking system, which is responsible for the reconstruction of the particle trajectories, and the particle identification system. These are discussed in the next sections.

3.2.1 Tracking

The subdetectors used for tracking are the Vertex Locator (VELO), the dipole magnet and the tracking stations before and after the magnet (TT and T1-T3, respectively). The tracking system is used to reconstruct the particle trajectories, known as tracks. Its components are briefly discussed below.

The Vertex Locator (VELO) is positioned around the pp interaction zone. When the protons collide, a range of particles is produced, all originating from the so-called primary vertex (PV) (Fig. 11). This point can be different for every event and is reconstructed using the tracks in the VELO originating from the same point. The decay vertex of a B meson produced in this collision is known as the secondary vertex. The impact parameter (IP) of a track is defined as its closest distance to the PV. Because a B meson is long-lived, the secondary vertex is displaced from the PV and therefore the IP of the decay products is usually larger than that of particles originating from the PV. This can be used to remove background originating from the pp interaction zone.
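Geometrically, the impact parameter is the distance of closest approach of a (locally straight) track to the PV. The sketch below only illustrates that idea, with made-up coordinates; real VELO tracks are fitted objects with uncertainties.

    import math

    def impact_parameter(track_point, track_dir, pv):
        """Distance of closest approach of a straight line (point + unit direction) to the PV."""
        d = [track_point[i] - pv[i] for i in range(3)]         # vector from PV to the track point
        proj = sum(d[i] * track_dir[i] for i in range(3))      # component along the track
        perp = [d[i] - proj * track_dir[i] for i in range(3)]  # perpendicular component
        return math.sqrt(sum(x * x for x in perp))

    # Example: a track parallel to the beam axis, displaced by ~1.1 mm from the PV.
    print(impact_parameter((1.0, 0.5, 10.0), (0.0, 0.0, 1.0), (0.0, 0.0, 0.0)))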

The Tracker Turicensis (TT) is located between the VELO and the dipole magnet.

The tracking stations (T1-T3) are located downstream of the dipole magnet. The VELO,


Figure 10: A schematic overview of the different subdetectors of the LHCb detector. In the middle of the detector, along the z-axis, runs the LHC beam pipe. The protons collide on the left side, in the vertex locator.

TT and the innermost part (IT) of T1-T3 consist of silicon micro-strip detectors. When charged particles traverse the micro-strip detectors, small ionization currents are produced and registered in the form of a hit. As the occupancy in the outer part (OT) of the tracking stations is much lower than in the rest of the tracking system, it consists of gas straw tubes, which are less costly and less accurate than silicon trackers. The working principle is similar: when a charged particle traverses a tube, the gas inside becomes ionized. Because of a potential difference inside the tube, the electrons and ions drift in opposite directions, which induces a current that can be measured as a hit.

The magnetic field created by the dipole magnet is perpendicular to the beam line. It bends the trajectories of charged particles, which can be used to determine the momentum, p, of a track. Furthermore, because the bending is opposite for positively and negatively charged particles, it can also be used to determine the charge corresponding to a track. The resulting momentum resolution of the complete tracking system is δp/p ≈ 0.5% at p = 20 GeV and δp/p ≈ 0.8% at p = 100 GeV.



Figure 11: The primary vertex (PV) and secondary vertex (SV) in a B0s → µ+µ− decay.

The PV is where the protons collide in the VELO. The impact parameter (IP) of the muons is the closest distance to the PV.

3.2.2 Particle identification

The subdetectors used for particle identification (PID) are the Cherenkov detectors (RICH1 and RICH2), the Calorimeter system (CALO) and the muon stations (M1-M5). The PID detectors exploit the different signatures of muons, hadrons and electromagnetic particles to distinguish between them.

When particles move through a medium faster than the speed of light in that medium, they radiate in the form of Cherenkov radiation. This is analogous to the emission of a sonic boom when moving faster than the speed of sound through a medium. The Cherenkov radiation is emitted in a cone of light. The opening angle θ of the cone is determined by

    cos(θ) = 1 / (nβ),    (23)

where n is the refractive index of the medium and β = v/c. Two Ring Imaging Cherenkov (RICH) detectors are used to measure θ and determine the velocity of the particle. The first is located immediately downstream of the VELO and covers the low momentum range up to 60 GeV; the second is located downstream of the T1-T3 stations and covers a higher momentum range, up to 100 GeV. Together with the information from the tracking system on the momentum of the particle, the mass can be determined. The RICH systems can distinguish between muons, pions, kaons and protons.
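As an illustration of Eq. (23), the sketch below computes the Cherenkov angle for a pion and a kaon of the same momentum in a radiator with an assumed refractive index n = 1.0014 (roughly that of a gas radiator). The difference between the two angles, and the existence of a momentum threshold, is what the RICH detectors exploit.

    import math

    def cherenkov_angle(p, m, n):
        """Opening angle (rad) for momentum p and mass m (GeV) in a medium of index n."""
        beta = p / math.sqrt(p**2 + m**2)   # v/c from relativistic kinematics
        cos_theta = 1.0 / (n * beta)
        if cos_theta > 1.0:
            return None                     # below threshold: no Cherenkov light emitted
        return math.acos(cos_theta)

    for name, mass in (("pion", 0.1396), ("kaon", 0.4937)):
        theta = cherenkov_angle(10.0, mass, 1.0014)
        print(name, "below threshold" if theta is None else f"{theta * 1e3:.1f} mrad")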

The Calorimeter system (CALO) is located downstream of RICH2 and the first muon station M1. It consists of a Scintillator Pad Detector (SPD), a Pre-Shower (PS), an electromagnetic calorimeter (ECAL) and a hadronic calorimeter (HCAL). When a particle traverses the CALO, it will interact with the medium, creating a particle shower. The difference in energy deposits (Fig. 12) of the showers is used to distinguish between electromagnetic particles and hadrons. Because of the penetrating power of the muons, they


Figure 12: An illustration of the difference in energy deposits between different particles.

The figure is taken from [21].

traverse the CALO completely. The SPD measures the occupancy of an event, which can be used in the trigger (Ch. 4) to remove highly occupied events. The PS is used to determine the point where the particles start an electromagnetic or hadronic shower.

Muon station one (M1) is located between RICH2 and the CALO. Muon stations two to five (M2-M5) are located at the very end of the detector, with 80 cm thick iron absorbers between the stations (Fig. 13a), to select only highly penetrating muons. Each station consists of multi-wire proportional chambers, or simply muon chambers. Only the innermost part of M1 consists of triple-GEM detectors, because the rate of particles is highest there.

Since the particle flux is highest close to the beam pipe and lower at the edges of the muon stations, the stations are divided into four regions. For each subsequent region, the dimensions and logical pad size scale by a factor of two with respect to the previous one (Fig. 13b). The hits in the muon stations can be used for a fast transverse momentum, pT, estimation in the first stage of the trigger.

3.3 Upgrade during long shutdown 2

In run I, an integrated luminosity of 3 fb−1 was recorded, and the total is expected to exceed 8 fb−1 after run II. In order to fully exploit the potential of the LHC, and therefore of the LHCb flavor physics program, the instantaneous luminosity in the LHCb detector can be increased to L = 2·10³³ cm⁻²s⁻¹, with which an integrated luminosity of 50 fb−1 can possibly be reached by 2028. As the number of visible pp interactions scales with the luminosity, the occupancies are expected to increase at upgrade conditions. This increase has several consequences for the detector (Fig. 14), and therefore changes will be made to the detector hardware and electronics. Some of the major changes are briefly discussed below. For a more detailed description see [33, 34].

The current average number of interactions per proton bunch crossing is around 1.5. After the upgrade this will increase to 3.6–7.2 [35]. The current VELO cannot cope with this


Figure 13: Muon system. (a) Side view; the muon chambers are placed in two rows in each station. (b) Geometry of the different regions; the boxes represent the sizes of the muon chambers in each region.

increase in particle flux. Therefore, the VELO will be upgraded and the silicon strip sensors will be replaced by hybrid pixel sensors. The lay-out will remain unchanged. See [36] for a more detailed description of the VELO upgrade.

Both the inner and outer tracker parts of the tracking stations will not be able to handle the increase in particle flux. They will be completely replaced by a single system, the Scintillating Fibre (SciFi) Tracker [37]. Furthermore, the TT will be replaced by the Upstream Tracker (UT) [38].

No major changes are needed for the RICH systems. Only the optical lay-out of RICH1 will be modified to halve the occupancy in the central region of the detector. This is done by increasing the focal length of the spherical mirrors inside the detector. The spherical curvature will also have to be increased to restore the focusing.

The SPD and PS will be removed since their main use will become redundant. The ECAL and HCAL remain mostly unchanged. These changes are due to the fact that the first level of the trigger will be removed. The current and new trigger strategies will be discussed in detail in Ch. 4.

Since the muon system is located at the very end of the particle stream, it is well shielded. Therefore, all stations can tolerate the increased luminosity of 2·10³³ cm⁻²s⁻¹, with the exception of M1 and the innermost part of M2. The M1 station will be removed completely and the particle rate in M2 will be reduced considerably by installing extra shielding around the pp beam pipe before M2.

Despite these changes, the muon system is still expected to cause inefficiencies in detecting muons due to increased dead time in the readout electronics. The dead time can be


Figure 14: A schematic overview of the upgraded LHCb detector.

reduced by adjusting the granularity of the muon station readout. One strategy is to remove the intermediate boards (IB), which combine smaller physical readout channels through a logical OR; removing them changes the readout granularity by the size of that OR. A study was performed to see what the effect of several IB removal scenarios is on the inefficiency to detect muons, and therefore certain decays with muons in the final state. The results are given in Ch. 5.

Next to hardware, the upgrade will also have consequences for muon identification and the way events are triggered. The current identification procedure and trigger will be discussed in Ch. 4, including the expected changes in the trigger.

At upgrade conditions, an increase in particle flux does not only mean an increase in muons; it also means an increase in misidentification, mainly due to combinatorial background. Since high-quality muon identification is crucial, new algorithms have been developed to remove these types of background more effectively. These algorithms are introduced in Ch. 6, together with the results of a study of their performance in the upgraded trigger in terms of background removal.


4 Identifying and triggering muons

Every LHCb analysis starts from the stored event data acquired with the tracking and PID systems. As many of the analyses involve CP-violating and (very) rare decays, the overwhelming majority of the events are not interesting for the analyses and can be discarded.

The decision whether or not to store an event is made right after the pp collision by a mechanism called the trigger.

Many of the decays important for the physics program of LHCb have muons in the final state, such as B0s → µ+µ−. That is why a high muon identification efficiency is of great importance. Furthermore, muon identification can be used in the trigger to obtain a high signal efficiency for events containing muons.

The complete current muon identification procedure will be discussed in Sec. 4.1.

In view of the upgrade, the current trigger procedure will be revised. The current and upgraded trigger strategies will be discussed in Sec. 4.2.

4.1 Muon identification

The muon identification procedure [39] can be divided into several steps, each step improving the signal to background ratio. The first step of the identification procedure is to assign tracks a binary value based on a simple selection algorithm called IsMuon (Tab. 1), reducing the probability of misidentifying hadrons as muons to the order of 1%. This algorithm is primarily based on the high penetration power of muons. It requires hits in the muon stations around the track extrapolation points, within some field of interest (FOI). The size of the FOI depends on the momentum of the particle and the multiple scattering expected at that momentum.

Table 1: Muon stations required to trigger the IsMuon decision as a function of momentum range.

Momentum range              Muon stations
3 GeV/c < p < 6 GeV/c       M2 and M3
6 GeV/c < p < 10 GeV/c      M2 and M3 and (M4 or M5)
p > 10 GeV/c                M2 and M3 and M4 and M5
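A hedged Python sketch of the station logic of Table 1 (the real IsMuon algorithm works on hits inside the momentum-dependent FOI; here the hit pattern is simply given as a set of station names):

    def is_muon(p, hits):
        """p: momentum in GeV/c; hits: stations with a hit inside the FOI, e.g. {"M2", "M3", "M5"}."""
        if p < 3:
            return False
        if p < 6:
            return {"M2", "M3"}.issubset(hits)
        if p < 10:
            return {"M2", "M3"}.issubset(hits) and ("M4" in hits or "M5" in hits)
        return {"M2", "M3", "M4", "M5"}.issubset(hits)

    print(is_muon(8.0, {"M2", "M3", "M5"}))   # True: M2 and M3 and (M4 or M5)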

Once an IsMuon selection has been made, the signal (muons) to background (protons, kaons, pions) ratio can be further improved. This is done by introducing the discriminating average squared distance variable, D², defined as

    D² = (1/N) Σ_{i=1}^{N} [ ((x_i^closest − x_i^track) / pad_i^x)² + ((y_i^closest − y_i^track) / pad_i^y)² ],    (24)

where (x_i^closest, y_i^closest) are the coordinates of the hit in muon station i closest to the track extrapolation point (x_i^track, y_i^track), N is the total number of stations containing hits within their FOI and pad_i^x, pad_i^y are the pad sizes of the subdetector. The D² distributions are determined for muons and non-muons. True muons tend to have a narrower D² distribution, closer to 0, than particles wrongly labeled IsMuon. The likelihood of the muon or non-muon hypothesis for each track can now be determined as the integral of the probability density function of D², integrated from 0 to the track value D²_track. Finally, for each track a muDLL value can be computed as the logarithm of the ratio between the muon and non-muon likelihoods.
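The average squared distance of Eq. (24) is straightforward to evaluate once the closest hit, the track extrapolation point and the pad sizes are known for each station; the sketch below uses invented numbers purely to illustrate the definition.

    def d_squared(stations):
        """Eq. (24): stations is a list of dicts, one per muon station with a hit in its FOI."""
        terms = []
        for s in stations:
            dx = (s["x_closest"] - s["x_track"]) / s["pad_x"]
            dy = (s["y_closest"] - s["y_track"]) / s["pad_y"]
            terms.append(dx**2 + dy**2)
        return sum(terms) / len(terms)

    example = [{"x_closest": 101.0, "x_track": 100.2, "pad_x": 2.0,
                "y_closest": 49.5, "y_track": 50.1, "pad_y": 2.5}]
    print(d_squared(example))   # small values are more muon-like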

4.1.1 Efficiency determination

Each track is thus assigned a muDLL value. A cut can be made on this value to separate muons from background (classification). Knowing the efficiency of the muon identification method after applying such a cut is crucial. It is determined on real data using the so-called “tag and probe” method on J/ψ → µ+µ− decays.

Overall kinematic selection requirements are applied and a maximum likelihood fit of the mass distribution is performed to be certain that a J/ψ → µ+µ− decay is selected. If one of the two tracks is identified as a muon under stringent conditions, it is labeled as the “tag”.

The efficiency is then determined as the fraction of the “probe” muons identified as such after a cut on the muon identification variable is made. The events used in such a sample are not triggered on the probe muon, to remove any bias possibly originating from the trigger.
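Schematically, the tag-and-probe efficiency is a simple pass fraction with a binomial uncertainty; the counts below are invented for illustration.

    import math

    def tag_and_probe_efficiency(n_probe_pass, n_probe_total):
        eff = n_probe_pass / n_probe_total
        err = math.sqrt(eff * (1.0 - eff) / n_probe_total)   # simple binomial uncertainty
        return eff, err

    print(tag_and_probe_efficiency(9700, 10000))   # ~0.97 +- 0.002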

4.2 Trigger

The LHCb trigger is optimized to keep interesting events involving a B meson decay and to remove uninteresting events, referred to as background. It is divided into two steps: a robust level-0 (L0) hardware trigger, discussed in Sec. 4.2.1, and a more sophisticated software-based High Level Trigger (HLT, Sec. 4.2.2). In view of the upgrade, the trigger strategy will be revised; this is discussed in Sec. 4.2.3. The general discussion follows that of [21, 31]. See [40] for an overview of the trigger performance during run II.

4.2.1 Level-0

The first step of the trigger process (L0) is designed to quickly analyze the event information and make a decision. Because of timing constraints, only information from the quickest subdetectors (calorimeters, muon system) is used. For each of these subdetector systems a separate L0 decision is made. If at least one of the decisions is positive, the complete event information is passed to the HLT.

The L0-Calorimeter looks for candidates with high transverse energy, ET, and its decision is based on information from all elements of the CALO system. Because different particles have different signatures in terms of penetrating power (Sec. 3.2.2), the L0-Calorimeter can make a decision distinguishing between electromagnetic and hadronic showers (L0Hadron, L0Photon or L0Electron). The ET deposited can be measured for the different showers and compared to a threshold given for each of the L0-Calorimeter decisions. If at least


Figure 15: Schematic view of the selection of muon candidates in one of the quadrants of the muon system. The path is curved because of the magnetic field.

one candidate meets the criteria, the L0-Calorimeter decision is positive and therefore the complete event is passed to the HLT.

The L0-Muon decision is based on the hits in the muon stations. The muon system can be divided into four quadrants. There is a processor for each quadrant, looking for the two tracks with the highest transverse momentum pT in that quadrant (Fig. 15). This enables a quick pT determination with a resolution of 20%, which is currently sufficient for the purpose of the L0. The eight candidates, two for each quadrant, are compared to the L0Muon and L0DiMuon thresholds. The L0Muon decision is positive if the highest-pT candidate passes the threshold, and the L0DiMuon decision is positive if the product of the two highest transverse momenta, pT1 × pT2, passes its threshold. If either is positive, the event is passed to the HLT.

A final veto can also be applied on the L0 decision to reject the event if the number of hits in the SPD exceeds some fixed threshold, regardless of the L0-Calorimeter or L0-Muon decisions. This is referred to as a Global Event Cut (GEC) and removes events with high multiplicity, which would take too much processing time in the HLT. The L0 reduces the event rate from 40 MHz down to 1 MHz, at which the whole detector can be read out and the events sent to the HLT.

4.2.2 High level trigger

Once an event has been triggered by the L0, the full event information arrives in the High Level Trigger (HLT) to be analyzed further. In the HLT, the tracks in the events are reconstructed. This is done by software running on 52,000 logical CPU cores [40]. Because events enter the HLT at a rate of 1 MHz, not all of them can be reconstructed completely. Therefore, the HLT is divided into two steps, HLT1 and HLT2.

HLT1 fully reconstructs the events in the VELO. It then determines the PVs by selecting points from which at least five tracks originate. VELO tracks with a high probability of originating from a signal decay are selected. Momentum, mass and IP can be determined by fitting the tracks from the VELO using hits in the TT. Tracks in events which have been triggered by either L0Muon or L0DiMuon are run through the IsMuon algorithm. This improves the signal over background ratio in the muon sample.

The complete event is passed to the HLT2 if at least one track passes a set of thresholds known as a “trigger line”. Trigger lines contain thresholds for parameters such as mass, pT and IP. There are several general HLT1 trigger lines, such as HLT1TrackAllL0, and specific muon trigger lines, such as HLT1TrackMuon (Tab. 2).

HLT1TrackAllL0 is executed on all selected tracks passed by the L0, and the muon trigger lines only on tracks from events which were triggered by L0Muon or L0DiMuon. Trigger lines such as HLT1TrackMuon therefore have the extra requirement that the tracks pass the IsMuon algorithm.

Table 2: Hlt1TrackMuon trigger line (2017).

Variable            Threshold
IsMuon              True
Track p             > 6 GeV
Track pT            > 1.1 GeV
GhostProb           < 0.2
Track χ²/DOF        < 3
IP χ²               > 35
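A minimal sketch of applying the Table 2 thresholds to a single reconstructed track; the track is represented here as a plain dictionary, whereas the actual HLT uses its own event model.

    def passes_hlt1_track_muon(track):
        """Apply the Hlt1TrackMuon (2017) thresholds of Table 2."""
        return (track["is_muon"]
                and track["p"] > 6.0           # GeV
                and track["pt"] > 1.1          # GeV
                and track["ghost_prob"] < 0.2
                and track["chi2_per_dof"] < 3.0
                and track["ip_chi2"] > 35.0)

    candidate = {"is_muon": True, "p": 12.0, "pt": 1.8,
                 "ghost_prob": 0.05, "chi2_per_dof": 1.4, "ip_chi2": 60.0}
    print(passes_hlt1_track_muon(candidate))   # True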

Once events have been triggered in the HLT1, they are buffered onto disk. They are sent to the HLT2 after the online detector alignment and calibration, which is performed at regular intervals to ensure a high quality of the reconstruction in the HLT2 [41].

The rate arriving at the HLT2, O(100 kHz), is low enough to allow complete event reconstruction. The reconstructed events are then compared to the 300+ HLT2 trigger lines, which can be grouped into exclusive and inclusive lines. Exclusive trigger lines are designed to target decays with a specified final state and require a full reconstruction of all final state particles. Inclusive trigger lines target the decay of a b hadron without specifying the final state particles, but they do require at least two charged particles and a displaced decay vertex. The output of the HLT2 is 12.5 kHz, which can be grouped into three categories:

Generic beauty: mostly inclusive (topological) lines, triggering primarily on b hadron decays which are at least partially reconstructed with two or more charged tracks. Examples include B0 → D+π− and B0(s) → h+h−.

Muonic: The complete muon identification algorithm is run on all tracks in the HLT2. Events are selected if at least one track identified as a muon passes certain criteria in terms of pT or displacement from the PV, or if the invariant mass of a dimuon candidate is consistent with that of the mother particle. An example is B+ → J/ψ(→ µ+µ−)K+.


Charm: Requires a full final state reconstruction. Triggers mainly on hadronic two- and three-body decays, such as D0 → K−π+ and D+ → K−π+π+, if criteria in terms of track and vertex quality are met.

4.2.3 Trigger after the upgrade

The description of the trigger in the previous section refers to the current, run II, situation.

After LS2, when the detector is upgraded to cope with the increase in particle flux (Sec. 3.3), the trigger strategy will also be completely revised [42]. The two main new features of the upgraded scenario are a trigger-less readout and a completely software-based trigger.

The largest bottleneck of the current trigger system is the maximum readout rate of 1 MHz. The bunch crossing rate of 40 MHz is reduced to this readout rate by the L0 hardware trigger. This is also where the largest trigger inefficiencies arise, since the decision has to be made within a limited time frame. Therefore, the L0 will be removed and a completely software-based trigger will be able to process events at the 30 MHz inelastic collision rate expected after the upgrade (Fig. 16).

The sequence of the upgraded trigger will resemble that of the current trigger, with an optional Low Level Trigger (LLT) mimicking the L0 to make a first reduction of the event stream. The remaining events are passed to the HLT, where a full event reconstruction and selection is performed. The complete software trigger will run on the Event Filter Farm (EFF).

One major consequence of the new trigger sequence is the rate in the HLT, since the full 30 MHz of events has to be reconstructed. Newly developed muon identification algorithms can be used in the trigger to remove combinatorial background and reduce the rate to a workable level. These variables are introduced in Ch. 6, together with the results of a study of their performance under upgrade conditions. In Ch. 5, the results of a study on dead time induced inefficiencies, expected at upgrade luminosity, are given.



Figure 16: Trigger schemes during (a) run II and (b) run III. The offline-like event reconstruction refers to the run I scenario, when events were reconstructed in the HLT2 and again offline with improved accuracy. During LS1, when the detector remained unchanged, the trigger was slightly modified [41]. This allowed for an online event reconstruction with accuracy similar to the offline reconstruction of the run I scenario, saving significant computing resources.
