Accounting for jet response induced MET backgrounds to new physics at the ATLAS experiment at CERN’s LHC


by

Lorraine Courneyea

B.Sc., University of British Columbia, 2004

A Dissertation Submitted in Partial Fulfillment of the Requirements for the Degree of

DOCTOR OF PHILOSOPHY

in the Department of Physics and Astronomy

© Lorraine Courneyea, 2011
University of Victoria

All rights reserved. This dissertation may not be reproduced in whole or in part, by photocopying or other means, without the permission of the author.


Accounting for jet response induced /ET backgrounds to new physics at the ATLAS experiment at CERN’s LHC

by

Lorraine Courneyea

B.Sc., University of British Columbia, 2004

Supervisory Committee

Dr. R. Keeler, Co-Supervisor

(Department of Physics and Astronomy)

Dr. R. McPherson, Co-Supervisor

(Department of Physics and Astronomy)

Dr. R. Kowalewski, Departmental Member (Department of Physics and Astronomy)

Dr. P. Wan, Outside Member (Department of Chemistry)


Supervisory Committee

Dr. R. Keeler, Co-Supervisor

(Department of Physics and Astronomy)

Dr. R. McPherson, Co-Supervisor

(Department of Physics and Astronomy)

Dr. R. Kowalewski, Departmental Member (Department of Physics and Astronomy)

Dr. P. Wan, Outside Member (Department of Chemistry)

ABSTRACT

Detector mis-modelling or hardware problems may cause an excess of Missing Transverse Energy ( /ET) to be inferred in physics events recorded in ATLAS, leading to higher than expected backgrounds for new physics. In particular, non-Gaussian tails in the /ET distribution are unlikely to be well modelled in Monte Carlo simulations. To account for this, a technique has been established to improve the background predictions derived from Monte Carlo simulations, using a correction derived from a comparison of control samples in data and Monte Carlo simulation. Two different control samples are used to create the correction, giving a range of predictions for the shape of the /ET tail and aiding the calculation of systematic errors. This technique is then applied to several samples that are potential backgrounds to new physics with detector signatures that include /ET.


Contents

Supervisory Committee ii

Abstract iii

Table of Contents iv

List of Tables viii

List of Figures ix

Acknowledgements xi

Dedication xii

1 Introduction 1

2 Theory 4

2.1 The Standard Model . . . 4

2.1.1 Quantum Electrodynamics . . . 6

2.1.2 Quantum Chromodynamics . . . 6

2.1.3 Weak Interaction . . . 8

2.1.4 Factorization . . . 8

2.1.5 Particle Showers . . . 12

2.1.6 Jets and Dijets . . . 14

2.1.7 Direct Photons . . . 15

2.2 Proton–Proton Collisions . . . 17

2.2.1 Two Body Kinematics . . . 18

2.2.2 Missing Transverse Energy . . . 19

2.2.3 Transverse Mass. . . 20


2.3 Beyond the Standard Model . . . 21

2.3.1 Supersymmetry . . . 22

2.4 Estimating MET Using Jet Response Measurements . . . 27

2.4.1 Kolmogorov–Smirnov Tests . . . 27

2.4.2 Chi-Square Tests . . . 27

2.4.3 Method . . . 29

3 The Experiment 33

3.1 The Large Hadron Collider . . . 33

3.2 ATLAS . . . 34

3.2.1 Inner Detector . . . 34

3.2.2 Calorimetry . . . 39

3.2.3 Muon Spectrometer . . . 42

3.3 Data . . . 43

3.3.1 Data Acquisition/Trigger . . . 44

3.3.2 ATLAS Computing Model . . . 46

3.3.3 Data Reconstruction . . . 46

3.3.4 Data Quality . . . 47

3.3.5 Data Used in the Analysis . . . 49

3.4 Monte Carlo Simulations . . . 50

3.4.1 Monte Carlo Used in the Analysis . . . 50

3.5 Definition of Physics Objects. . . 51

3.6 Event Selection . . . 55

3.6.1 Dijet Events . . . 55

3.6.2 Direct Photon Events. . . 56

3.6.3 W + Jet Events . . . 56

3.7 Normalization of Events . . . 57

4 Liquid Argon Data Quality 59

4.1 Liquid Argon Calibration Procedure . . . 59

4.2 Problematic Channels . . . 60

4.2.1 Treatment of Problematic Channels . . . 61

4.2.2 Effect on Reconstructed Data . . . 62

5 Method for Creating Smearing Functions 63

5.1 Correlation of Jets and MET . . . 63


5.2 Direct Photon Methods. . . 64

5.3 Dijet Method . . . 67

5.4 Statistics in Early Data. . . 69

6 Creation and Tests of the Smearing Functions 71

6.1 Direct Photon Results . . . 71

6.1.1 Creation of the Smearing Functions . . . 71

6.1.2 Closure Tests . . . 71

6.2 Dijet Results . . . 74

6.2.1 Creation of the Smearing Functions . . . 74

6.2.2 Closure Tests . . . 77

6.3 Additional Tests on Events with No Real MET . . . 79

6.4 Tests Using Events with Real MET . . . 82

7 Application to SUSY Analysis 94

7.1 Results for W + Jet Events With Muon Removal . . . 94

7.2 Results for QCD Events . . . 95

7.3 Events in SUSY Signal Regions . . . 99

7.3.1 W + Jet Events . . . 99

7.3.2 QCD Events . . . 99

8 Discussion 110

8.1 Smearing Functions . . . 110

8.2 Smearing Function Tests . . . 111

8.3 Application to SUSY Analysis . . . 113

9 Conclusion 115

A LAr Electronic Calibration 117

A.1 Energy Reconstruction in a LAr Calorimeter . . . 117

A.2 LAr Calibration Runs . . . 118

A.2.1 Pedestal runs . . . 119

A.2.2 Delay runs . . . 120

A.2.3 Ramp runs . . . 122

A.2.4 Prediction of Physics Pulses . . . 122

A.3 Automatic Processing. . . 124


A.4 Validation of LAr Calibration . . . 125

A.4.1 Parameters Used to Validate LAr Calibration Runs . . . 126

A.4.2 Validation Strategies . . . 127

A.5 Types of Problematic Channels in the LAr Calorimeters . . . 130

Bibliography 132


List of Tables

Table 2.1 SUSY particle chart. . . 23

Table 2.2 SUSY selection criteria. . . 26

Table 3.1 2010 datasets with corresponding luminosity. . . 49

Table 3.2 The Monte Carlo samples used to create smearing functions. . . 51

Table 3.3 The W + jet and tt̄ Monte Carlo samples used. . . 52

Table 6.1 Binning used in the direct photon analyses. . . 72

Table 6.2 Kolmogorov–Smirnov probabilities and chi–square values for the direct photon closure tests. . . 74

Table 6.3 Binning used in the dijet analysis. . . 75

Table 6.4 Kolmogorov–Smirnov probabilities and chi–square values for the dijet closure test. . . 78

Table 6.5 Kolmogorov–Smirnov and chi–square test results for the dijet and direct photon /ET distributions. . . 80

Table 7.1 The chi–square test results for the W + jet samples with muon removed. . . 95

Table 7.2 The chi–square test results for the dijet samples. . . 99

Table 7.3 W + jet events passing SUSY selection criteria. . . 102

Table 7.4 W + jet cut flow for SUSY signal regions A and B. . . 104

Table 7.5 W + jet cut flow for SUSY signal regions C and D. . . 105

Table 7.6 QCD events passing SUSY selection criteria. . . 106

Table 7.7 QCD cut flow for SUSY signal regions A and B. . . 108


List of Figures

Figure 2.1 The particles which comprise the Standard Model. . . 5

Figure 2.2 The QED interaction vertex. . . 6

Figure 2.3 The running of the strong coupling constant. . . 7

Figure 2.4 Example of a flavour changing weak vertex in a top quark decay. . . 8

Figure 2.5 Factorization of hadron-hadron collisions. . . 9

Figure 2.6 Direct photon creation at next–to–leading order. . . 11

Figure 2.7 Creation of a quark–anti–quark pair due to the large separation of the initial partons.. . . 14

Figure 2.8 Direct photon creation at leading order. . . 15

Figure 2.9 Next–to–leading order parton distribution functions. . . 16

Figure 2.10 Unification of the coupling constants in the MSSM model. . . 24

Figure 2.11 Kolmogorov–Smirnov test probabilities for binned distributions. . . 28

Figure 3.1 The LHC injection chain . . . 35

Figure 3.2 The ATLAS detector. . . 36

Figure 3.3 The ATLAS inner detector . . . 37

Figure 3.4 The ATLAS calorimetry. . . 40

Figure 3.5 The integrated luminosity collected by ATLAS in 2010. . . 43

Figure 4.1 Sample eta–phi distribution for jets in 2010 data. . . 62

Figure 5.1 Correlation of the /ET and jet direction. . . 65

Figure 5.2 Correlation of the /ET and photon direction. . . 66

Figure 5.3 The pT reach of direct photons compared to dijets. . . 70

Figure 6.1 Sample jet response measurement determined by the direct photon method. . . 72

Figure 6.2 Sample smearing functions determined by the two direct photon methods. . . 73


Figure 6.4 Sample jet response measurement determined by the dijet method. 75

Figure 6.5 Sample smearing function determined by the dijet method. . . 76

Figure 6.6 Closure test for the dijet smearing function. . . 77

Figure 6.7 Analytic direct photon smearing function applied to dijets. . . . 79

Figure 6.8 Iterative deconvolution direct photon smearing function applied to dijets. . . 80

Figure 6.9 Dijet smearing function applied to direct photons. . . 81

Figure 6.10 /ET distributions for W + jet events. . . 83

Figure 6.11 Transverse mass distributions for W + jet events. . . 84

Figure 6.12 Effective mass distributions for W + jet events. . . 86

Figure 6.13 Stransverse mass distributions for W + jet events. . . 87

Figure 6.14 Variable distributions for W + jet vs tt̄ events. . . 88

Figure 6.15 /ET distributions for W + jet and tt̄ events. . . 90

Figure 6.16 Transverse mass distributions for W + jet and tt̄ events. . . 91

Figure 6.17 Effective mass distributions for W + jet and tt̄ events. . . 92

Figure 6.18 Stransverse mass distributions for W + jet and tt̄ events. . . 93

Figure 7.1 /ET distributions for W + jet and tt̄ events after muon removal. . . 96

Figure 7.2 Effective mass distributions for W + jet and tt̄ events after muon removal. . . 97

Figure 7.3 Stransverse mass distributions for W + jet and tt̄ events after muon removal. . . 98

Figure 7.4 Effective mass distributions for QCD events. . . 100

Figure 7.5 Stransverse mass distributions for QCD events. . . 101

Figure 7.6 Cross–section of W + jet events in each SUSY signal region. . . 103

Figure 7.7 Cross–section of QCD events in each SUSY signal region. . . . 107

Figure A.1 Distribution of typical noise values in the EM Barrel. . . 119

Figure A.2 Sample autocorrelation functions. . . 121

Figure A.3 Comparison of calibration and physics pulses. . . 123

Figure A.4 An example calibration pulse. . . 127


ACKNOWLEDGEMENTS

There are many people who helped me along this path whom I would like to thank. Particular thanks goes to my parents, Anne and Paul Courneyea, for always encouraging my unique life choices; my sister and brother–in–law, Mary Anne and Michael Coules, for always having their door open to me; my sister, Joanne Courneyea, for being willing to be the only non–physicist at my parties; my uncle, Matthew Rogoyski, for his support and advice throughout my studies; Jean–Raphael Lessard, for discovering the good life with me in Geneva; and all my friends at UVic and CERN who made things all the more fun!

As well, I would like to thank NSERC and the P.E.O. for funding my studies. This made a big difference by allowing me to focus on my thesis instead of my finances.

Of course I could not have completed this work without the advice and encouragement I have received from my supervisors Richard Keeler and Rob McPherson. Thank you for everything you have done for me. It has been a true pleasure working with you both.

Research is the process of going up alleys to see if they are blind.


DEDICATION

I dedicate this work to my grandmother, Hannah Rogoyski, who always knew I would be a doctor someday.

Chapter 1

Introduction

For the last half century, the field of particle physics has been dominated by the Standard Model (SM) [1] (Chapter 2.1), a theory of the fundamental constituents that make up all matter and the forces which determine their dynamics. This theory has proven remarkably successful in explaining a wide variety of experimental results and in predicting the existence of certain particles before their observation.

Though the SM has gone a long way in helping us understand our universe, there are still unresolved problems within the field of particle physics. One of the outstanding unknowns is why the universe is made up of mostly matter, when matter and antimatter would have been created in equal amounts in the early universe. The SM provides an explanation for this through a breaking of the product of the charge conjugation (C) and parity (P) symmetries; however, this is not enough to explain the matter–antimatter discrepancy observed. Another problem in the SM, known as the hierarchy problem, deals with the question of why the anticipated Higgs boson mass is so much lighter than the Planck mass, something that can only be reconciled in the SM through unreasonably precise adjustments of the model's parameters.

Another open issue in the SM is the nature of dark matter, a currently unidentified type of matter which makes up the majority of matter in the universe. Dark matter has not been observed directly, but its existence has been inferred from cosmological observations such as its gravitational effects [2]. Based on current observations, dark matter is postulated to consist of weakly interacting neutral particles. The SM does not provide a candidate for dark matter; thus an extension to this model is necessary to explain the matter density of the universe.

There exist many theoretical extensions to the SM that try to resolve its shortcomings. One such theory is Supersymmetry (SUSY) [3], a model that gives every SM particle a ‘superpartner’ whose spin differs by 1/2 but which otherwise has the same quantum numbers. SUSY is postulated to be a broken symmetry, with the superpartners being heavier than the SM particles, with masses of the order of several hundred GeV. SUSY is a particularly attractive theory because it provides a dark matter candidate particle as well as resolving the hierarchy problem. Alternative theories exist, but it should be noted that in order to resolve the problems with the SM, many of these theories predict new physics accessible at a centre of mass energy of approximately a TeV.

A new era in particle physics research is starting with the advent of the Large Hadron Collider (LHC) [4, 5, 6], a particle accelerator with record breaking interaction energies. The LHC is currently operating with a centre of mass energy of 7 TeV, several times higher than the previous record of 1.96 TeV achieved at the Tevatron [7]. With this new energy regime, coupled with the anticipation of new TeV scale physics, there is an exciting possibility of observing new phenomena such as SUSY.

In order to use the LHC data to search for new physics, one needs to understand the types of signatures that may be seen. In the case of a dark matter candidate, regardless of the model, such a particle is expected to interact only weakly. Because of this, it would pass through a particle detector without leaving a signal behind, and it is this very lack of signal that must be searched for. In a hadron–hadron collider, conservation of momentum dictates that the final state system of particles must have momentum transverse to the beam pipe that sums to zero. A deviation from this is known as missing transverse energy ( /ET), and implies one or more final state particles were not well measured, or not detected at all. Due to the dark matter characteristics discussed above, /ET is a very interesting quantity for physics analyses looking to validate any relevant SM extension.
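The momentum-balance argument above can be sketched in a few lines. This is a toy calculation with made-up transverse momenta, not the full ATLAS /ET reconstruction (which sums calibrated calorimeter and muon terms):

```python
import math

def missing_et(particles):
    """Toy /ET: magnitude of the negative vector sum of the visible
    transverse momenta. `particles` is a list of (px, py) pairs in GeV."""
    sum_px = sum(px for px, _ in particles)
    sum_py = sum(py for _, py in particles)
    # The missing transverse momentum vector balances the visible sum.
    return math.hypot(-sum_px, -sum_py)

# A dijet event in which one jet's energy is mis-measured by 5 GeV:
# the imbalance shows up directly as fake /ET.
print(missing_et([(100.0, 5.0), (-95.0, -5.0)]))  # -> 5.0
```

In a perfectly measured event with no invisible particles the sum vanishes and /ET is zero; any mis-measurement appears directly in the tail of this distribution.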

Weakly interacting particles exist in the SM and also manifest themselves as /ET. These can be important backgrounds in any dark matter search as they may be created at a higher rate than the dark matter candidates. Along with these backgrounds, there are additional sources of /ET from instrumental effects. In particular, any detector imperfection can result in lost or mis–measured energy signals. /ET from such sources is referred to as ‘fake /ET,’ and can lead to higher SM backgrounds, or result in new backgrounds from processes in which little /ET is expected. Thus, it is important to ensure that any background prediction takes fake /ET into account.

This work presents a method of creating a background estimation which includes the contributions from fake /ET and applies it to an analysis which searches for SUSY in final states with no leptons. Using this, a more accurate background determination for SUSY events with such final states is obtained.


Chapter 2

Theory

2.1 The Standard Model

The Standard Model (SM) of particle physics is a theory which describes the currently known sub-atomic particles and the forces between them. In the SM, matter is comprised of twelve spin 1/2 particles known as fermions. These fermions are grouped into three generations, each progressively more massive. In each generation, the four fermions include two leptons and two quarks. The leptons are particles with integer Electromagnetic (EM) charge, and in each generation there is one particle with charge −1 and a neutral partner known as a neutrino. Meanwhile, the quarks have fractional charge, with each generation having one quark with charge 2/3 and one with charge −1/3. For each particle described, there also exists a particle with the same mass, but opposite quantum numbers, referred to as its antiparticle. Quarks combine with other quarks and anti–quarks to create composite particles of integer EM charge. These are referred to as hadrons, and further subdivided into mesons, which are comprised of a quark and anti–quark and have integer spin, and baryons, which are made of three quarks and have half integer spin.

Along with the fermions described above, the SM also includes four types of gauge bosons which carry the forces governing particle interactions. There are three forces described in the SM (EM, strong, and weak), mediated by photons, gluons, and the weak bosons (W± and Z0), respectively. A more detailed description of each force along with the force carriers follows. The particle makeup of the SM can be seen in Figure 2.1.


Figure 2.1: The particles which comprise the Standard Model (Courtesy Fermilab Visual Media Services).


Figure 2.2: The QED interaction vertex, coupling a photon (γ) to a fermion–anti–fermion pair (f, f̄) with strength i√α.

2.1.1 Quantum Electrodynamics

Quantum Electrodynamics (QED) is the relativistic quantum field theory which describes the interaction of particles with electric charge through the exchange of photons. The zero mass of the photon implies that QED interactions have infinite range, with the strength of the interaction falling proportionately to r−2, as can be explained via geometrical considerations. The strength of the electrodynamic coupling constant (α) is approximately 1/137 at low energies. Photons are neutral particles with spin 1; thus QED interactions cannot alter a particle's charge, only its spin and 4–momentum. The basic interaction vertex of QED can be seen in Figure 2.2, with f / f̄ representing a fermion/anti–fermion pair. Processes described by QED include the emission/absorption of a photon from a charged particle, as well as fermion–anti–fermion pair creation/annihilation. The strength of the interaction is characterized by the coupling constant associated with the vertex.

2.1.2 Quantum Chromodynamics

Quantum Chromodynamics (QCD) is the theory of the strong force, which is mediated through the exchange of gluons. The strong force governs the interaction between quarks and gluons (collectively referred to as partons). Analogous to the electric charge of electrodynamics, partons carry colour charge, which has three values along with the anti–colour equivalents. Each quark (anti–quark) has a colour (anti–colour), and each gluon effectively has both colour and anti–colour. There are a total of eight different gluons, corresponding to different combinations of colour and anti–colour. Like photons, gluons are massless; however, the dynamics of QCD differs from that of QED by virtue of the self–interaction properties of gluons.

As the name suggests, the coupling constant of the strong force (αs) is large compared with the other two forces. However, this is only true for certain momentum transfers between the partons, as αs varies in magnitude as a function of momentum transfer (or equivalently, length scale). In fact, for high momentum transfer, or equivalently small length scales, αs is sufficiently small that the partons can be treated as free particles. This approximation is known as asymptotic freedom. Conversely, at large distances or small momentum transfer the coupling is very strong. This feature of QCD, known as confinement, is why a free quark is never seen [8]. Figure 2.3 shows how the coupling constant varies with respect to momentum transfer.

Figure 2.3: The running of the strong coupling αs with respect to the momentum transfer Q between two partons, measured in heavy quarkonia, e+e− annihilation, and deep inelastic scattering (July 2009); αs(MZ) = 0.1184 ± 0.0007 refers to the value of the strong coupling at the scale of the rest energy of the Z boson [9].

In QCD the local gauge group is SU(3). To preserve local gauge invariance, eight massless vector fields are introduced, the quanta of which are the eight gluons.


Figure 2.4: Example of a flavour changing weak vertex (t → W b) in a top quark decay.

2.1.3 Weak Interaction

The weak interaction is mediated through the exchange of W and Z bosons. This interaction differs from those of QED and QCD in several ways. First of all, the force carriers are massive (the W boson mass is 80.399 ± 0.023 GeV, while the Z boson mass is 91.1876 ± 0.0021 GeV [10]), making the weak interaction short range and weak. In addition, the weak interaction violates parity symmetry: the W boson acts only on particles with a left handed chiral state and antiparticles with a right handed chiral state (which is maximal violation), and the Z boson has unequal couplings to particles with left and right handed chiral states.

The W and Z bosons are spin 1, with the W boson carrying an electric charge of ±1 while the Z boson is neutral. Exchange of these bosons can thus change the charge and spin of particles by these amounts. A unique feature of weak interactions is that quark flavour is not conserved in interactions mediated by the W bosons. A sample interaction demonstrating this can be seen in Figure 2.4.

2.1.4 Factorization

The models of the forces described above can be used to make predictions for cross sections and rates in hadron-hadron scattering. However, these predictions are not trivial to form since not all parts of the interaction are calculable. Fortunately, it has been shown that by using the impulse approximation, the problem can be factorized into calculable and non-calculable portions (see Figure 2.5) [8]. The non-calculable, low momentum transfer parts of the problem can then be measured experimentally using various reference processes. This leads to the predictive capability of the theory for high momentum parton scattering.

The impulse approximation is used to treat the hard scatter of two partons separately from the rest of the problem. At the TeV energy scale, the hard scatter can be calculated using perturbation theory due to the effect of asymptotic freedom present for large momentum transfer (Q2). As is typical in perturbation theory, the calculation of the hard scatter can be expanded to various orders of the coupling constants. A Leading Order (LO) calculation corresponds to 2 → 2 scattering, while a Next–to–Leading Order (NLO) calculation has another parton in the final state. This can be visualized pictorially through the use of Feynman diagrams. The kinematics of the LO scattering are discussed in Section 2.2.1.

Figure 2.5: An illustration of how a hadron-hadron collision can be factorized into measurable and calculable parts. The initial hadrons (A & B) have partons with momentum fractions given by the measured PDFs (Ga/A & Gb/B). These partons then interact with a calculable cross section (dσ/dt), and the resultant partons fragment according to the measured fragmentation functions (Dh1/c & Dh2/d). The t in dσ/dt is a variable related to momentum transfer and is defined in Section 2.2.1. [8]

There are two non-calculable parts of high momentum hadron-hadron scattering. The first is the probability that a parton carries a given fraction (x) of the proton's momentum. The measured distributions that provide this information are the PDFs. The second non-calculable part of the interaction is the probability that a final state parton will produce a hadron of a given species and momentum. The measured distributions which govern this are known as fragmentation functions. Fragmentation encompasses the QCD evolution which brings the partons from high energies to the GeV scale, and the formation of hadrons from these GeV scale partons. In practice, this may be separated into two steps.

In order to make a prediction of the cross section of a given process, an incoherent sum of all the possible hard scatters weighted by the measured PDFs and fragmentation functions is required. However, there is some freedom of choice in which aspects of the problem are included in the measured distributions versus the perturbative calculation. In the naive parton model, PDFs and fragmentation functions are assumed to have no dependence on Q2. This is known as scale invariance. However, some higher order terms from the perturbative expansion of the hard scatter introduce divergent quantities into the calculation. For example, there are divergences due to the partons being treated as massless particles. These mass singularities can be factorized out of the problem and absorbed into the measured distributions. Also, there are ultraviolet singularities introduced by loop diagrams such as the ones seen in the last two rows of Figure 2.6. The manner in which the mass singularities and the ultraviolet singularities are dealt with is known as the renormalization scheme. An additional divergence in the problem is the infrared singularities which arise due to the gluon being massless, but these are cancelled when the loop diagrams are added to the diagrams which do not contain loops.
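The incoherent sum described above can be illustrated with a toy numerical integration over the momentum fractions. The PDF shape and partonic cross section below are invented placeholders, not the measured distributions used in a real prediction:

```python
import math

def toy_pdf(x):
    """Invented valence-like shape ~ sqrt(x) * (1 - x)^3 (placeholder only)."""
    return math.sqrt(x) * (1.0 - x) ** 3

def toy_parton_xsec(s_hat):
    """Placeholder partonic cross section, falling with the partonic s_hat."""
    return 1.0 / s_hat

def factorized_xsec(s, steps=100):
    """Riemann-sum estimate of
    sigma = int dxa dxb G(xa) G(xb) sigma_hat(xa * xb * s),
    here with a single toy parton species on each side and no fragmentation."""
    total, dx = 0.0, 1.0 / steps
    for i in range(1, steps):
        for j in range(1, steps):
            xa, xb = i * dx, j * dx
            total += toy_pdf(xa) * toy_pdf(xb) * toy_parton_xsec(xa * xb * s) * dx * dx
    return total

# With sigma_hat ~ 1/s_hat, the toy cross section falls as s rises.
print(factorized_xsec(s=7000.0 ** 2))
```

A real calculation sums over all parton flavours on both sides and convolves the result with fragmentation functions, but the structure of the weighted double integral is the same.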

The effect of absorbing these singularities into the measured distributions is that the PDFs and fragmentation functions are no longer scale invariant. Keeping the leading terms for the scale dependence is known as the leading logarithm approximation. Note that evolution from the scale where the PDFs are determined to another scale can be calculated using the DGLAP equation [10]. A related effect is the introduction of the running of αs with respect to Q2. As seen in Equation 2.1, αs can be calculated as a function of Q2 and the number of quark flavours (f) with masses below the energy at which αs is calculated, by defining a QCD scale parameter Λ. It is through a comparison with Λ, which is of order several hundred MeV, that a process can be said to be of sufficiently high momentum to be considered a hard scatter [8].

Figure 2.6: The NLO processes that create direct photons in proton–proton collisions. The processes in the first five rows lead to three body final states. Note that in these diagrams curly lines represent gluons, wavy lines represent photons, straight lines represent quarks, and time goes to the right.

αs(Q2) = 12π / [(33 − 2f) ln(Q2/Λ2)]    (2.1)
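Equation 2.1 can be evaluated directly; a small sketch, where the values Λ = 0.2 GeV and f = 5 flavours are illustrative assumptions only:

```python
import math

def alpha_s(q2, n_flavours=5, lambda_qcd=0.2):
    """Leading-order running coupling from Eq. 2.1:
    alpha_s(Q^2) = 12*pi / ((33 - 2f) * ln(Q^2 / Lambda^2)).
    q2 and lambda_qcd^2 are in GeV^2."""
    return 12.0 * math.pi / ((33.0 - 2.0 * n_flavours) * math.log(q2 / lambda_qcd ** 2))

# Asymptotic freedom: the coupling falls as the momentum transfer grows.
for q in (10.0, 91.2, 1000.0):  # Q in GeV
    print(f"alpha_s(Q = {q} GeV) = {alpha_s(q ** 2):.3f}")
```

The formula is only meaningful for Q2 well above Λ2, where the logarithm is large and the coupling is small, consistent with the hard-scatter criterion described above.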

To summarize, LO calculations have the following characteristics. The interacting partons are assumed to have no initial pT. Three scales must be set: Q2 to determine αs, Md2 the scale for the PDFs, and Mf2 the fragmentation scale. It can be shown [8] that if one works in the leading log approximation all large scales are equivalent, thus a convenient choice (such as the pT2 of a final state particle) may be used. However, this is a mathematical formality, and in practice the actual value derived can depend on the scales selected, because the leading log approximation is only valid for interactions with αs(Q2) << 1 and where the centre of mass energy squared of the two parton system is of the same order as the momentum transfer from each of the initial partons to one of the final state partons. This is what motivates a higher order perturbative expansion of the hard scatter. Calculation of higher order terms reduces what is absorbed into the measured distributions and hence their scale dependence. Additionally, calculating higher order terms helps to determine if the perturbation series is converging.

2.1.5 Particle Showers

A particle shower is a cascade of particles produced when a high energy particle interacts with matter. The energy of the original particle is split amongst the particles in the cascade, which continue to interact until there are numerous low energy particles which are then stopped by the material. Energy loss occurs as the original particle penetrates deeper into the material and the resulting low energy particles in the cascade are stopped; the rate of this loss is typically denoted dE/dx. There are two types of particle showers: EM and hadronic.

EM showers are produced by high energy particles which interact primarily through the EM force, i.e. photons and electrons. The first process involved in this is pair production, where a photon produces e+e− pairs¹. Momentum conservation in the process is achieved through the photon interacting with particles in the material. The electrons and positrons meanwhile radiate photons when they are slowed through interaction with the particles in the material, a process known as Bremsstrahlung, or braking radiation. These two effects alternate until the resultant particles have a low enough energy that other processes such as ionization (for electrons) and the photoelectric effect (for photons) dominate. Note that for muons, which are also EM particles but have a larger mass, ionization dominates the energy loss at all energies.

The resulting showers have a characteristic shape, which is used to identify EM particles in the calorimeter. They also have a characteristic length scale, the radiation length (X0), measured in g cm−2. This length scale describes both the average distance an EM particle traverses before it loses all but 1/e of its energy, and 7/9 of the mean free path of a high energy photon prior to pair production. The radiation length depends on the properties of the material, with a higher electron density leading to a smaller value of X0, reflecting a better stopping power.
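The meaning of X0 can be checked numerically, assuming the simple exponential energy loss implied above, E(x) = E0 e^(−x/X0). This is a leading-order description of the average longitudinal loss, not the full shower profile:

```python
import math

def remaining_energy(e0, depth, x0):
    """Mean electron energy left after traversing `depth` of material with
    radiation length x0 (both in the same units, e.g. g/cm^2)."""
    return e0 * math.exp(-depth / x0)

# After one radiation length, on average 1/e (~37%) of the energy remains;
# the 22.0 here is an arbitrary illustrative X0 value.
fraction = remaining_energy(100.0, 22.0, 22.0) / 100.0
print(f"fraction remaining after one X0: {fraction:.3f}")
```

The same expression shows why a dense absorber (small X0) contains a shower in less physical depth, which drives the design of compact EM calorimeters.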

Hadronic showers are produced by high energy hadrons which interact primarily through the strong force. However, these particles may also have an electric charge and lose energy through EM showers. Typical processes in a hadronic shower involve a hadron interacting with the nuclei in the material and creating several hadrons which split the initial momentum. As in EM showers, this process continues until the resultant particles have low enough energy that they are stopped by the material through other processes such as multiple scattering and ionization.

The complexity of a hadronic shower leads to limitations on the energy resolution for a hadronic particle. There are two main factors that affect this. First of all, a hadronic shower will produce a varying fraction of secondary particles which interact electromagnetically. In an extreme case, such as a neutral pion decaying to two photons, the resulting shower may have all the characteristics of an EM shower. Secondly, energy is lost through processes such as nuclear excitation and breakup which do not yield a detectable signal. This causes a systematic underestimation of the particle energy unless a detector is designed to correct for it, a type of detector known as a compensating calorimeter. ATLAS does not have compensating calorimeters; therefore there is a difference in its response to hadronic particles versus EM particles.

¹ Pair production of other EM particles is possible but has a much lower cross–section due to the larger masses of those particles.



Figure 2.7: Creation of a quark–anti–quark pair due to the large separation of the initial partons.

As with an EM shower, a hadronic shower may be parametrized as a function of the interaction length, λ, of the material it traverses and the hadron's incident energy. At high energies, the depth at which a shower is 95% contained by the calorimeter is l_95% ∼ l_max + 4E^0.15 λ, where l_max refers to the depth of the shower maximum, l_max ∼ [0.6 log(E) − 0.2]λ. For both formulas, E is in GeV and the units for depth are determined by the value of λ used.

2.1.6 Jets and Dijets

Some of the features that are seen in parton scattering final states can be explained using QCD (Section 2.1.2). If the final state particles in an interaction have colour charge, as they move apart the confinement aspect of QCD becomes relevant. This is often visualized as a string between the two particles which has increasing tension as the particles move apart. Eventually, as the separation between the partons grows, it becomes more energetically favourable to create a quark–anti–quark pair than to have the colour charged particles at this separation (Figure 2.7). Because of this, a single quark is never seen in the detector. Instead, one sees a collimated group of particles, known as a jet. If a hard scatter has two final state particles with colour charge, the result is referred to as a dijet event. Dijet events have jets back–to–back in the transverse plane, with their transverse momenta balancing within the energy resolution.

A jet can be used to determine the properties of the initial quark or gluon; however, relating the jet back to a parton is not trivial. The composition of a jet in terms of particle species and the momentum fraction in each particle varies according to the fragmentation function. Thus the fraction of the jet's momentum which is carried by EM particles (e.g. from π⁰ → γγ) versus hadronic particles (e.g. π⁺ or π⁻) varies. For a jet, the energy is systematically underestimated by the calorimeter response due to the hadronic portion of the jet. As well, the varying EM fraction of the jet leads to an additional intrinsic uncertainty in the jet energy as compared to a single hadron. This results in the reconstructed energy of a jet having a significantly larger



Figure 2.8: The LO processes that create direct photons in proton-proton collisions. The plots on the left are Compton scattering processes, while the ones on the right are q–¯q annihilation. In these plots, time goes to the right.

error as compared with the error for a purely EM particle such as an electron or a photon.

In addition to these factors, there are numerous algorithms used to group energy deposits in the detector. This leads to a variation in what is included in a given jet. This will be covered in Section 3.5.

2.1.7 Direct Photons

A direct photon is a photon that is produced by the hard scatter. In a proton–proton collider there are two types of LO processes that will create direct photons. These are quark–anti–quark annihilation², which results in a gluon jet and a photon in the final state, and quark–gluon scattering (i.e. Compton scattering), which results in a quark jet and a photon in the final state. The Feynman diagrams for these processes can be seen in Figure 2.8. At the LHC, because of the high centre of mass energy, it is possible to probe the small x range of the PDFs. As is shown in Figure 2.9, the gluon PDF dominates at low x, thus it is expected that the dominant production mechanism for direct photons at the LHC will be quark–gluon scattering. Predictions put the fraction of direct photon events from quark–gluon scattering at 75-90% [11].

² While the term annihilation may typically indicate a process such as q q̄ → γ, in the case of



Figure 2.9: The MSTW2008 NLO PDFs for the valence quarks (the quarks present in the limit Q² → 0), sea quarks (virtual quark–anti–quark pairs) and gluons for Q² = 10 GeV² (left) and Q² = 10⁴ GeV² (right) [12].


As both a gluon and a quark in the final state will result in a jet, for both LO processes the final state will be a photon and a jet. If the approximation is made that there is zero transverse momentum for either incident parton, the jet and photon will be back–to–back in the plane transverse to the beam line. This expectation will be used in selecting these events.

NLO processes which create direct photons can be seen in Figure 2.6. In these higher order processes, the photon does not have momentum balance with a single jet, because Σ p⃗_T = 0 is true only if all final state particles are included. Thus, if such events meet the selection criteria for direct photon events, a photon–jet momentum imbalance independent of detector effects will be introduced. This motivates direct photon selection criteria that use the event topology along with isolation requirements to reduce contributions from these higher order processes.

Another way in which momentum imbalance between the photon and jet is introduced is through the initial partons having some transverse momentum due to soft gluon emission. This would alter the pT balance between the photon and the jet without changing the topology and isolation variables in the event. It is predicted that the size of this effect falls logarithmically with the centre of mass energy (√s), thus it will become negligible at high √s [13]. However, exact calculation of this effect is difficult, as perturbative QCD corrections are not enough to properly model it. For final states without colour charged particles an exact calculation is possible; however, for final states which include quarks or gluons, this effect is approximated through convolving the NLO cross–section with a Gaussian smearing function [13]. Therefore it is not certain that this effect is adequately described in the Monte Carlo for direct photon and dijet events.

2.2 Proton–Proton Collisions

A detector at a collider experiment is designed to reconstruct interactions with high momentum transfer (hard scatters) between two particles. At LHC energies it is the constituent quarks and gluons of the proton, collectively known as partons, that participate in the hard scatter, not the proton as a whole. To understand why this is the case, note that a higher energy implies a smaller length scale, and at these energies the length scale is less than the size of a proton. Using the Heisenberg uncertainty principle, one can estimate this length scale to be of order 10⁻⁴ fm, or 10⁻⁴ times the size of a proton, for a momentum transfer of 1 TeV.


An important thing to understand about the hard scattering of partons is that the centre of mass energy of the two parton system is not known in advance. This is because each parton carries a fraction (x) of the proton's momentum which is distributed according to the PDFs, seen in Figure 2.9. Hence, at the LHC a hard scatter between two partons could in principle have a centre of mass energy ranging from 0 to 7 TeV. The probability of a given centre of mass energy can be determined using the PDFs; however, data is not available for all ranges of x.

2.2.1 Two Body Kinematics

The two body scattering process a + b → c + d can be written using the Mandelstam variables³:

ŝ = (p_a + p_b)²,
t̂ = (p_a − p_c)²,      (2.2)
û = (p_b − p_c)²,

where p_a,b,c,d are the 4–vectors of the particles. These variables have physical meaning (ŝ is the centre of mass energy squared, and t̂ and û are the squares of the four momentum transfers from particles a and b to particle c) and are used in the calculation of the hard scatter. As well, for massless two body scattering, it is convenient to note that ŝ + t̂ + û = 0.

Another useful set of variables can be found by using the projections of the momentum transverse and longitudinal to the beam (p_T and p_l) and defining the quantity rapidity as y = ½ ln[(E + p_l)/(E − p_l)]. For a massless particle, y may also be written as:

y = −ln(tan(θ/2)) ≡ η.      (2.3)

For a massive particle these two equations are not equal; however, the variable given by Equation 2.3 is still used, with the calculated quantity referred to as pseudorapidity and denoted η. Rapidity is a useful variable because it is invariant under a boost Bo, except for a constant term ½ ln[(1 − Bo)/(1 + Bo)]. This implies that the quantity ∆y is relativistically invariant. Additionally, it is found that for minimum bias events⁴ the number of particles per unit of rapidity is approximately constant.

³ The caret symbol over the Mandelstam variables refers to the variables for scattering partons. If the caret is not present, the variables in question refer to the hadrons as a whole.
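The equality of y and η for a massless particle can be checked numerically. A short sketch (function names are illustrative), using E = |p| for the massless case:

```python
import math

def rapidity(E, pz):
    """y = (1/2) ln((E + p_l)/(E - p_l))."""
    return 0.5 * math.log((E + pz) / (E - pz))

def pseudorapidity(theta):
    """eta = -ln(tan(theta/2)), Equation 2.3."""
    return -math.log(math.tan(theta / 2.0))

# For a massless particle (E = |p|) the two quantities agree:
p, theta = 10.0, 0.5
y = rapidity(p, p * math.cos(theta))
eta = pseudorapidity(theta)
```

For a massive particle (E > |p|), `rapidity` returns a smaller value than `pseudorapidity` at the same polar angle.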

Rapidity can be used to calculate some of the kinematics in the a + b → c + d scattering problem. If the momentum fractions of the proton carried by partons a and b are x_a and x_b respectively, and the outgoing partons c and d have rapidities y_a and y_b, the particles' four vectors (E, p_x, p_y, p_z) and the Mandelstam variables can be written:

p_a = (x_a √s / 2)(1, 0, 0, 1),
p_b = (x_b √s / 2)(1, 0, 0, −1),      (2.4)
p_c = p_T (cosh(y_a), 1, 0, sinh(y_a)),
p_d = p_T (cosh(y_b), −1, 0, sinh(y_b)),

and

ŝ = x_a x_b s,
t̂ = −x_a p_T √s e^(−y_a) = −x_b p_T √s e^(y_b),      (2.5)
û = −x_a p_T √s e^(y_a) = −x_b p_T √s e^(−y_b).

2.2.2 Missing Transverse Energy

As shown in Section 2.2.1, the 4–momenta of the final state particles can be used to measure the 4–momenta of the partons involved in the hard scatter. However, since there are particles such as neutrinos that interact weakly (or not at all) in the detector, some energy in the final state may not be measured. Additionally, even if such a particle is not present in the final state, energy can be lost in un–instrumented regions of the detector. Since ˆs is unknown a priori, without use of better constrained variables one would not know that this energy was missed. Such a variable is the transverse momentum (pT). Since the initial partons can be approximated as having

⁴ Minimum bias events are the events which would be collected with a totally inclusive trigger. The composition of a minimum bias sample is determined by the experimental conditions, but is typically inelastic non-single-diffractive events.


p_T = 0, the final state should also have Σ p⃗_T = 0. Deviations from this are labelled Missing Transverse Energy ( /ET or MET), with large amounts of /ET implying the possibility of a particle that was undetected. Note that 'large amounts of /ET' means /ET that is significant compared to the /ET resolution.

While /ET can signal an unobserved particle, it will also result from mis–measurement of final state particles. One reason this occurs is that there is a finite resolution that can be achieved when measuring a particle's energy. Thus, there is a limit on how accurately /ET can be determined, and there will always be small amounts of /ET observed.

Additional /ET will be measured due to problems in the detector. This may take many forms including malfunctioning power supplies or readouts and noisy detector channels. While this can also fake or mask a new physics signal, unlike the intrinsic detector resolution this is a reducible source of /ET. Missing Transverse Energy from such reducible sources is denoted ‘fake /ET.’

2.2.3 Transverse Mass

If the four vector of a particle created by the hard scatter is known, its mass may be determined by m² = E² − p². However, at a proton–proton collider, where ŝ is unknown for a given event, if not all of the particle's decay products are visible, the z component of the four vector of the original particle cannot be determined from the four vectors of the decay products and the /ET vector. In this case a boost–independent variable known as the transverse mass, m_T, is used, the distribution of which has a maximum at m_T = m, where m is the mass of the particle⁵.

The transverse mass is defined as:

m_T² ≡ [E_T(1) + E_T(2)]² − [p⃗_T(1) + p⃗_T(2)]²
     = m1² + m2² + 2[E_T(1)E_T(2) − p⃗_T(1) · p⃗_T(2)]      (2.6)

where p⃗_T(1) is equated with the /ET vector. If the approximation m1 = m2 = 0 can be made, this can also be written as:

m_T² = 2|p⃗_T(1)||p⃗_T(2)|(1 − cos(∆φ))      (2.7)

where ∆φ is the angle between the two particles in the transverse plane.

⁵ This maximum corresponds to the case where the decay products have all their momentum in the transverse plane, therefore E_T = E, p_T = p and the equation for m_T is equivalent to
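Equation 2.7 is simple to evaluate directly. A small sketch for massless particles (helper name illustrative):

```python
import math

def transverse_mass(pt1, pt2, dphi):
    """m_T from Equation 2.7, valid when m1 = m2 = 0."""
    return math.sqrt(2.0 * pt1 * pt2 * (1.0 - math.cos(dphi)))

# Back-to-back particles (dphi = pi) give the maximal m_T for fixed pTs:
mt = transverse_mass(40.0, 40.0, math.pi)  # -> 80.0
```

As the comment notes, for fixed transverse momenta the transverse mass is maximal when the two objects are back–to–back.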

2.2.4 Coordinate System

In a high energy hadron collider, the initial centre of mass system of the colliding partons typically has momentum, referred to as a boost, along the beam line. This is because it is highly unlikely that the two partons participating in the hard scatter carry the same fraction (x) of the proton’s momentum. Due to this, quantities that are invariant under a boost are needed to make sense of an event. This is the reason that transverse momentum and missing transverse energy are such important variables. As well, this motivates the use of pseudorapidity, η, instead of the normal polar angle. An important use of η is to define the separation of points on the surface of the calorimeter. A commonly used measure of distance in ATLAS is ∆R, which is a distance measure in η − φ space. This variable is defined in Equation 2.8 for the distance between two points (η1, φ1) and (η2, φ2).

∆R = √((η1 − η2)² + (φ1 − φ2)²)      (2.8)

Throughout this thesis, η and variables calculated using η are used. To assist the reader with visualizing this variable (the formula for which is in Equation 2.3), note that η = 0 corresponds to a vector in the x–y plane, η = ∞ corresponds to a vector along the beam axis, and at higher values of η an equivalent η range corresponds to a smaller θ range.
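Equation 2.8 can be computed as below. Note that as written the formula takes a raw φ difference, whereas in practice the difference is wrapped into [−π, π] since φ is periodic (helper name illustrative):

```python
import math

def delta_r(eta1, phi1, eta2, phi2):
    """Delta R of Equation 2.8, with the phi difference wrapped to [-pi, pi]."""
    deta = eta1 - eta2
    dphi = math.remainder(phi1 - phi2, 2.0 * math.pi)  # wrap the periodic angle
    return math.hypot(deta, dphi)
```

Without the wrapping, two objects at φ = 3.0 and φ = −3.0, which are only about 0.28 radians apart, would incorrectly be assigned a ∆φ of 6.0.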

2.3 Beyond the Standard Model

As discussed in Chapter 1, the SM has been successful in describing the constituents of matter and their interactions to high precision. However, several fundamental questions and problems remain unanswered by the model, including the hierarchy problem, the nature of dark matter, and the unification problem.

The hierarchy problem is a concern in the SM because the Higgs mass is subject to loop corrections proportional to the square of the masses of each existing particle. This is particularly worrisome as quantum gravity effects could contribute around the Planck mass (∼ 10¹⁹ GeV). Thus, unless there are precise cancellations between the various corrections and the bare mass, one would expect m_Higgs to be large, instead of near the weak scale, as it is constrained to be through precision electroweak measurements [10].

Regarding dark matter, cosmological observations suggest that 80% of the matter in the universe consists of dark matter, a non–baryonic form of matter. As previously mentioned, there are no dark matter candidates in the usual particles of the SM, necessitating an extension to the model.

Finally, extrapolation of the running coupling constants of the strong, EM, and weak interactions to the Planck scale does not result in a convergence to a single value. This is problematic in the context of creating a theory which unifies the forces.

2.3.1 Supersymmetry

Numerous theories have been proposed to address the problems discussed above, one of which is Supersymmetry (SUSY). SUSY is a theory which proposes a symmetry which associates a fermionic partner to each SM boson, and a bosonic partner to each SM fermion (Table 2.1). The SUSY particles (sparticles) have otherwise identical quantum numbers to their SM counterparts.

Tables and plots shown in this section are derived using one SUSY model, the Minimal Supersymmetric Standard Model (MSSM), which is the supersymmetric extension of the SM which introduces the fewest particles while still being viable [3]. The MSSM builds upon an extension to the SM which has two Higgs doublets instead of a single Higgs boson, by adding a superpartner to every particle contained in that model.

The hierarchy problem is nicely solved by SUSY due to the fact that fermions and bosons give corrections of opposite sign to the Higgs mass. Thus the introduction of sparticle partners to the normal particle content of the SM provides an exact cancellation of the divergent terms if SUSY is an unbroken symmetry. In reality, the solution is not this simple: since we have not seen SUSY particles with the same masses as their SM counterparts, SUSY is a broken symmetry if it exists. However, if the models are designed such that the symmetry breaking terms result only in logarithmic divergences in the Higgs mass, the breaking is termed 'soft,' and the effect becomes small at large energy scales, maintaining this as a solution [3].

Another interesting result of the SUSY model is an alteration of the dependence of the coupling constants on energy scale. The introduction of new particles changes the slope, leading to a unification of the couplings at high energy. This can


Fermions                                       Bosons
Name         Symbols                    Spin   Name        Symbols                    Spin
leptons      e, νe; µ, νµ; τ, ντ        1/2    sleptons    ˜e, ˜νe; ˜µ, ˜νµ; ˜τ, ˜ντ  0
quarks       u, d; c, s; t, b           1/2    squarks     ˜u, ˜d; ˜c, ˜s; ˜t, ˜b     0
gluinos      ˜g                         1/2    gluons      g                          1
charginos    ˜χ±1, ˜χ±2                 1/2    EW bosons   γ, Z0, W±                  1
neutralinos  ˜χ01, ˜χ02, ˜χ03, ˜χ04     1/2    Higgs       h0, H0, A0, H±             0

Table 2.1: The particles proposed by the MSSM. A tilde above a character denotes a superpartner of a Standard Model particle.



Figure 2.10: The unification of the forces is shown for the Minimal Supersymmetric Standard Model (solid lines) as compared to the Standard Model (dashed lines) [3].

be seen in Figure 2.10 for the MSSM.

Finally, SUSY has the attractive feature that it provides a dark matter candidate. This may be seen in the context of the MSSM. In the MSSM, in order to avoid fast proton decay, which is unobserved by experiments, a multiplicative symmetry called R–parity is introduced. Each SM particle has R–parity of +1 while each SUSY particle has R–parity of −1. Postulating that R–parity is a conserved multiplicative quantum number has the consequence of sparticles being created in even numbers at any given interaction vertex. In the same vein, a sparticle may only decay into an odd number of sparticles. Consequently, the Lightest Supersymmetric Particle (LSP) is stable. In some MSSM models, the LSP is a neutralino: a weakly interacting, neutral, massive particle. As these attributes are required of dark matter particles by cosmological observations, this makes the LSP a very promising dark matter candidate.


SUSY Signatures

As previously discussed, conservation of the multiplicative R–parity in some SUSY models has observable consequences. These are that SUSY particles must be created in pairs, and that each one of these particles must decay, possibly through a several step decay chain, to a final state containing the LSP, which is stable. In addition, the LSP is neutral and weakly interacting, so it is expected to traverse a detector without leaving a signal.

Typical signatures from SUSY creation therefore involve /ET in the presence of multiple particles which arise from the SUSY decay chain. Depending on the model parameters, the particle content of the final state will vary. Possibilities include jets with no leptons (0–lepton SUSY), a combination of leptons and jets, or other states such as those containing photons.

Variables which are typically used in SUSY analyses take into account this general phenomenology as well as the environment in a hadron–hadron collider. Two such variables are the effective mass, m_eff, and the stransverse mass, m_T2 [14, 15].

In general, the effective mass takes the form of a scalar sum of the pT of objects in the final state, including the /ET. However, the choice of objects included in this sum may vary depending on the physics model of interest. For the 0–lepton SUSY analysis which will be discussed in Chapter 7, two different definitions for m_eff are used. One definition sums the magnitude of the /ET with the pT of the two highest pT jets, while the other sums the magnitude of the /ET with the pT of the three highest pT jets.
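As a sketch, the two definitions differ only in the number of leading jets included in the sum (function and argument names are illustrative):

```python
def effective_mass(met, jet_pts, n_jets):
    """Scalar sum of the MET magnitude and the n leading jet pTs (GeV)."""
    leading = sorted(jet_pts, reverse=True)[:n_jets]
    return met + sum(leading)

# Hypothetical event: MET = 100 GeV, jets of 120, 50 and 30 GeV
meff_2j = effective_mass(100.0, [50.0, 120.0, 30.0], 2)  # 100 + 120 + 50 = 270
meff_3j = effective_mass(100.0, [50.0, 120.0, 30.0], 3)  # 100 + 120 + 50 + 30 = 300
```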

The stransverse mass is an event variable designed for events where pair produced particles each decay into a massless undetected particle plus visible particles, exactly as would be the case in R–parity conserving SUSY models. In this scenario, m_T2 places a lower bound on the mass of the pair produced particle under the assumption that the undetected particles are the unique source of /ET in the event. For a final state with two visible objects, m_T2 is defined as:

m_T2(p_T^1, p_T^2, /p_T) ≡ min_{ /q_T^1 + /q_T^2 = /E_T } max( m_T(p_T^1, /q_T^1), m_T(p_T^2, /q_T^2) )      (2.9)

where the transverse mass mT is:

m_T²(p_T^i, /q_T^i) ≡ 2( |p_T^i||/q_T^i| − p_T^i · /q_T^i )      (2.10)

and /q_T^i denotes the transverse momentum of the i-th undetected particle. The minimization in Equation 2.9 is done over all possible values of /q_T^1 and /q_T^2, which is constrained by the /ET in the event.

Signal region           A        B        C        D
Leading jet pT [GeV]    > 120    > 120    > 120    > 120
Second jet pT [GeV]     > 40     > 40     > 40     > 40
Third jet pT [GeV]      > 40     > 40     -        -
/ET [GeV]               > 100    > 100    > 100    > 100
∆φ(jet, /ET)            > 0.4    > 0.4    > 0.4    > 0.4
/ET/m_eff               > 0.25   > 0.25   > 0.3    -
m_eff [GeV]             > 500    > 1000   > 500    -
m_T2 [GeV]              -        -        -        > 300

Table 2.2: The four signal regions in the analysis searching for SUSY with no leptons in the final state.
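The minimization in Equation 2.9 generally has no closed form. A coarse numerical sketch (a grid scan over the MET splittings; the function names, grid size, and scan range are all illustrative choices) using the massless transverse mass of Equation 2.10:

```python
import math
import numpy as np

def mt_sq(p, q):
    """Squared transverse mass of Equation 2.10 for massless particles."""
    return 2.0 * (np.linalg.norm(p) * np.linalg.norm(q) - np.dot(p, q))

def mt2(p1, p2, met, span=100.0, grid=41):
    """Approximate mT2 by scanning splittings q1 + q2 = MET on a 2D grid."""
    best = float("inf")
    for qx in np.linspace(-span, span, grid):
        for qy in np.linspace(-span, span, grid):
            q1 = np.array([qx, qy])
            q2 = np.array(met) - q1
            best = min(best, max(mt_sq(p1, q1), mt_sq(p2, q2)))
    return math.sqrt(max(best, 0.0))

# Two back-to-back visible objects with no MET: the splitting q1 = q2 = 0
# makes both transverse masses vanish, so mT2 = 0.
mt2_val = mt2(np.array([50.0, 0.0]), np.array([-50.0, 0.0]), [0.0, 0.0])
```

Dedicated analytic or bisection-based mT2 calculators are far faster than this brute-force scan; the grid version is only meant to make the min-max structure of the definition concrete.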

SUSY Signal Regions

In this analysis, the SUSY signal regions used in an ATLAS 0–lepton SUSY search [16] were used. These signal regions were defined using four different sets of selection criteria corresponding to different SUSY particle masses. These selection criteria are listed in Table 2.2. In this table, the first five requirements are used for background reduction, while the last three requirements are optimized for a given SUSY scenario. The m_T2 variable is calculated via Equation 2.9, with the two highest pT jets in the event used as the visible objects. Finally, m_eff is calculated via the scalar sum of the magnitude of the /ET and the transverse momenta of the leading three (two) jets for signal regions A&B (C&D).

Two of the signal regions defined in Table 2.2 look for 2 or more jets, while the other two search for 3 or more jets. The signal regions which require two or more jets in the final state are optimized to find squark–anti–squark production, where each squark decays to a jet and a neutralino. These neutralinos are undetected, leading to /ET being reconstructed in the event. For signal region D, the m_T2 requirement serves to increase sensitivity to SUSY models with a squark mass of more than 300 GeV, while for signal region C the requirements give coverage of a wider range of masses.

For the signal regions which require three or more jets, the SUSY scenario which is being targeted is squark–gluino production, with the two sets of selection criteria giving sensitivity to different mass ranges.
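As an illustration, the region A selection from Table 2.2 could be coded as below. This is a simplified sketch with hypothetical names, not the analysis implementation, and it assumes the jet pTs are already sorted in descending order:

```python
def passes_region_a(jet_pts, met, min_dphi_jet_met, meff):
    """Apply the signal region A cuts of Table 2.2 (all momenta in GeV)."""
    return (len(jet_pts) >= 3
            and jet_pts[0] > 120.0       # leading jet pT
            and jet_pts[1] > 40.0        # second jet pT
            and jet_pts[2] > 40.0        # third jet pT
            and met > 100.0              # missing transverse energy
            and min_dphi_jet_met > 0.4   # jet-MET separation
            and met / meff > 0.25
            and meff > 500.0)

# Hypothetical event passing all region A cuts:
ok = passes_region_a([150.0, 80.0, 50.0], met=200.0, min_dphi_jet_met=0.8, meff=700.0)
```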


2.4 Estimating MET Using Jet Response Measurements

2.4.1 Kolmogorov–Smirnov Tests

The Kolmogorov–Smirnov test [17] is a statistical test to determine if two sets of data follow the same distribution; this is taken as the null hypothesis. The test returns a distance measure between the two datasets. The probability of obtaining this result or greater is then calculated assuming the null hypothesis. A probability value close to zero thus implies the datasets follow different distributions.

For unbinned data, the probability values returned are distributed uniformly between zero and one for similar distributions. However, for binned data this is no longer the case, and the mean of the distribution is shifted higher than 0.5. This effect is small if the width of the bin is small with respect to the experimental resolution and if the number of bins is large compared to the number of events. This is not the case for the experimental results which will be used in this thesis, so this must be taken into account when using results from this test.

To understand the distributions for binned data, the distribution of the Kolmogorov–Smirnov probability for two sample functional forms has been plotted. These functions have been chosen as they have distributions similar to many of the variables of interest in this thesis.

The first distribution checked was the Gaussian distribution f(x) = (1/√(0.02π)) e^(−x²/0.02), defined in the region −1 to 1. The second distribution was the exponential distribution f(x) = 10 e^(−10x), defined in the region 0 to 1. For each of these functions, 100000 pairs of histograms were randomly generated according to the function, the Kolmogorov–Smirnov probability was calculated, and the results plotted in a histogram with 100 bins. These results can be seen in Figure 2.11.
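The upward shift of the binned Kolmogorov–Smirnov probability can be reproduced with a quick toy study. The sketch below uses NumPy/SciPy (not the tool used to produce the thesis plots), applying the asymptotic Kolmogorov probability to binned cumulative distributions; the binning and sample sizes are illustrative:

```python
import numpy as np
from scipy.special import kolmogorov

def binned_ks_prob(h1, h2):
    """Asymptotic KS probability for two count histograms with identical binning."""
    n1, n2 = h1.sum(), h2.sum()
    cdf1 = np.cumsum(h1) / n1
    cdf2 = np.cumsum(h2) / n2
    d = np.max(np.abs(cdf1 - cdf2))        # KS distance between binned CDFs
    neff = np.sqrt(n1 * n2 / (n1 + n2))    # effective sample size
    return kolmogorov(d * neff)            # survival fn of the KS distribution

rng = np.random.default_rng(1)
edges = np.linspace(-1.0, 1.0, 41)         # 40 bins over the Gaussian's range
pvals = []
for _ in range(500):
    h1, _ = np.histogram(rng.normal(0.0, 0.1, 1000), edges)
    h2, _ = np.histogram(rng.normal(0.0, 0.1, 1000), edges)
    pvals.append(binned_ks_prob(h1, h2))
mean_p = np.mean(pvals)  # shifted above the 0.5 expected for unbinned data
```

Binning can only reduce the maximum CDF difference relative to the unbinned case, which lowers the KS distance and therefore inflates the probability, in line with the shift described above.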

2.4.2 Chi-Square Tests

As an additional statistical measure, chi–square tests will be used to determine the agreement between various distributions. To test between a histogram with data results (Hdata) and the corresponding histogram from simulation (Hsim), the chi–

Figure 2.11: The Kolmogorov–Smirnov test probabilities for a binned Gaussian distribution (left) and exponential distribution (right).

square test is defined as:

χ² = Σ_{i=1}^{dof} (H_data(i) − H_sim(i))² / (σ²_{H_data(i)} + σ²_{H_sim(i)})      (2.11)

where the sum is done over all non–empty bins of H_data, the number of which is equal to the number of degrees of freedom (dof). In addition, for bins where the data histogram has an entry but the histogram with simulated results does not, the content of the simulated bin is set to 0 ± 1 for the purpose of computing χ².
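Equation 2.11, with Poisson (√N) bin errors and the 0 ± 1 convention for an empty simulation bin, might be sketched as follows (function name illustrative):

```python
import numpy as np

def chi2_hist(h_data, h_sim):
    """Chi-square of Equation 2.11 over non-empty data bins, Poisson errors."""
    h_data = np.asarray(h_data, dtype=float)
    h_sim = np.asarray(h_sim, dtype=float)
    mask = h_data > 0                        # sum runs over non-empty data bins
    data, sim = h_data[mask], h_sim[mask]
    var_sim = np.where(sim == 0, 1.0, sim)   # empty sim bin counted as 0 +/- 1
    chi2 = np.sum((data - sim) ** 2 / (data + var_sim))
    return chi2, int(mask.sum())             # (chi-square, degrees of freedom)

c2, dof = chi2_hist([5, 0, 9, 2], [4, 1, 9, 0])
```

In this toy example the second bin is skipped (data is empty) and the last bin uses the 0 ± 1 convention, giving χ² = 1/9 + 0 + 4/3 over 3 degrees of freedom.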

Due to the nature of the tests, it is expected that the χ² test will be strongly influenced by the normalization of the two distributions, while the Kolmogorov–Smirnov test places more emphasis on the shape.

2.4.3 Method

In situations where no real /ET is expected from physics processes, /ET will still be reconstructed in the event due to limitations in how accurately the final state objects can be measured. Jets in particular have a large intrinsic uncertainty in their energy measurements, as discussed in Section 2.1.6. In addition to the intrinsic energy resolution, mis–modelled or malfunctioning detector components may affect the reconstruction of the final state objects, leading to /ET being reconstructed in the event. In both of these cases the /ET will be correlated with the jets in the event.

To determine the amount of /ET expected in such events, well understood physics processes with no inherent /ET may be used to quantify the effects of the jet response and detector imperfections. Dijets and direct photons are examples of such processes. As EM objects are well measured as compared to jets, an assumption will be made that photons have their energy reconstructed perfectly. This allows their use as an energy reference in the event.

To illustrate the method, a simplified example using direct photons will be used. In this example, the incoming partons have no transverse momentum; therefore, the jet and photon are perfectly back–to–back in φ. To simplify the scenario, a further assumption is made that the response function of the jet is the same for all locations in the detector and all jet energies.

The response of the jet may then be measured over many events, where the response, z′, is defined by:

z′ = (p_T^jet − p_T^γ) / p_T^γ = p_T^jet / p_T^γ − 1.      (2.12)

The results of these measurements may be used to fill a histogram, which will give an estimate of the shape of the jet response, with more measurements yielding a more accurate estimate.
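Filling the response histogram from Equation 2.12 is then a straightforward per-event calculation (a sketch with illustrative names):

```python
import numpy as np

def response_histogram(pt_jet, pt_photon, n_bins=100):
    """Histogram of z' = pT(jet)/pT(photon) - 1 over many events (Eq. 2.12)."""
    z = np.asarray(pt_jet, dtype=float) / np.asarray(pt_photon, dtype=float) - 1.0
    counts, edges = np.histogram(z, bins=n_bins, range=(-1.0, 1.0))
    return counts, edges

# Three hypothetical photon+jet events with a 50 GeV photon in each:
counts, edges = response_histogram([40.0, 60.0, 55.0], [50.0, 50.0, 50.0])
```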

Next, to determine if the response observed in data is well modelled, the same measurement may be done using simulations of direct photon events. The resulting histograms of these two measurements may then be compared to see if there is a statistically significant difference between them. To accomplish this, a Kolmogorov–Smirnov test is used, with a Kolmogorov–Smirnov probability of 0.9 or higher taken to indicate good agreement. If the two distributions agree, the jet response in the simulation is determined to be well modelled, and the resulting /ET distribution is therefore correct. If the two distributions do not agree, a correction may be derived which, when applied to the jets in the simulation, will result in a /ET distribution compatible with the one observed in data. This correction function will be referred to as a 'smearing function,' as it will be used to alter (or 'smear') the jet energy in the simulation by convolving it with the simulated jet response. In the case that the data and Monte Carlo agree, a Dirac–delta function is the logical choice for a smearing function.

There are two approaches used to form a smearing function. One approach uses an analytical derivation, whereas the other relies on a deconvolution technique. However, the idea behind both methods is to create a function which has the property r_simulation ∗ r_smear = r_data, where r represents the jet response, ∗ denotes convolution, and r_smear is the smearing function.

From this, it is clear how a deconvolution technique would proceed. The result of deconvolving r_simulation from r_data gives the correct function for r_smear by definition. For this method, Gold Deconvolution [18], an iterative deconvolution method, is used. Note that in practice noise is introduced into this process since r_simulation and r_data are not known, but must be estimated from the available statistics using Equation 2.12.
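A minimal sketch of the multiplicative Gold iteration, assuming the response can be written as a matrix A acting on a binned true distribution (this is a simplified stand-in for the cited implementation; the names and the toy smearing kernel are illustrative):

```python
import numpy as np

def gold_deconvolve(A, y, n_iter=500):
    """Iterative Gold deconvolution: multiplicative update keeping x non-negative."""
    A = np.asarray(A, dtype=float)
    x = np.ones(A.shape[1])
    AtA = A.T @ A
    Aty = A.T @ np.asarray(y, dtype=float)
    for _ in range(n_iter):
        x = x * Aty / np.maximum(AtA @ x, 1e-12)
    return x

# Smear a peaked 'true' spectrum with a known kernel, then deconvolve it back:
n = 20
kernel = [0.25, 0.5, 0.25]
A = sum(np.eye(n, k=k - 1) * w for k, w in enumerate(kernel))
x_true = np.zeros(n)
x_true[8] = 10.0
y = A @ x_true          # the 'measured' (smeared) spectrum
x_rec = gold_deconvolve(A, y)
```

The multiplicative update cannot change the sign of any bin, which is why Gold deconvolution naturally enforces non-negative histogram contents.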

The analytical method for determining a smearing function is more complex and relies on assumptions about the functional form of the jet response. As discussed earlier, two main factors are relevant in determining the jet response: the intrinsic resolution and detector imperfections. For this method, the intrinsic resolution is modelled using a Gaussian distribution. To this is added the possibility of non–Gaussian tails in the distribution from more extreme mis–measurements caused by detector imperfections and mis–modelling.

To proceed from these assumptions, the first step is to normalize the measured response function in data and simulation to integrate to 1 in the region −1 to 1. This region is selected to reflect the range of response from where the jet is entirely lost (−1) to where the jet energy is overestimated by 100% (+1). Theoretically, it is possible for the jet energy to be overestimated by more than this; however, it is exceedingly unlikely in practice.

Next, the Gaussian portion of the distribution is fit in data and simulation through an iterative method⁶. This can then be used to determine the Gaussian portion of the smearing function by taking advantage of the properties of Gaussian convolution. For two Gaussian distributions G1 and G2:

G1(z) = (1/(σ1√(2π))) e^(−(z−µ1)²/2σ1²),
G2(z) = (1/(σ2√(2π))) e^(−(z−µ2)²/2σ2²),      (2.13)

the convolution of the two Gaussian distributions is another Gaussian distribution:

G1(z) ∗ G2(z) = G3(z) = (1/(σ3√(2π))) e^(−(z−µ3)²/2σ3²)      (2.14)

where σ3² = σ1² + σ2² and µ3 = µ1 + µ2.

Now, equating G1 with the Gaussian part of the response in simulation, G2 with the smearing function, and G3 with the Gaussian part of the response in data, it is clear

⁶ The iterative method starts with the mean (µ) and RMS (σ) of the distribution and fits the region µ − 2σ to µ + 2σ. This procedure is then repeated 10 times, using the mean and RMS of the previous fit as inputs.

(44)

that the Gaussian part of the smearing function can be defined as:

GSmear(z) = 1

σSmear√2πe

−(z−µSmear)2/2σ2Smear (2.15)

where:

σSmear2 = σData2 − σ2M onteCarlo, µSmear = µData− µM onteCarlo
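The iterative Gaussian-core fit (restrict to µ − 2σ .. µ + 2σ, refit, repeat) can be sketched as follows. The toy sample, binning, and the log-parabola fit standing in for a full Gaussian likelihood fit are all illustrative assumptions.

```python
import numpy as np

def iterative_gauss_fit(z, bins=200, fit_range=(-1.0, 1.0), n_iter=10):
    """Iteratively fit the Gaussian core of a response distribution.
    Each pass restricts to mu - 2*sigma .. mu + 2*sigma and refits;
    fitting a parabola to log(counts) is equivalent to a Gaussian fit."""
    counts, edges = np.histogram(z, bins=bins, range=fit_range)
    centres = 0.5 * (edges[:-1] + edges[1:])
    mu, sigma = z.mean(), z.std()
    for _ in range(n_iter):
        sel = (centres > mu - 2 * sigma) & (centres < mu + 2 * sigma) & (counts > 0)
        # log N = a z^2 + b z + c  ->  mu = -b / 2a, sigma = sqrt(-1 / 2a)
        a, b, _ = np.polyfit(centres[sel], np.log(counts[sel]), 2)
        mu, sigma = -b / (2 * a), np.sqrt(-1.0 / (2 * a))
    return mu, sigma

# Toy response: a Gaussian core plus a 5% flat tail (illustrative numbers).
rng = np.random.default_rng(2)
z = np.concatenate([rng.normal(0.0, 0.1, 190_000), rng.uniform(-1, 1, 10_000)])
mu_fit, sigma_fit = iterative_gauss_fit(z)
```

Because the window shrinks toward the core, the fitted width is driven by the Gaussian part and is largely insensitive to the flat tail.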

Note that if σ_MonteCarlo > σ_Data, the value obtained for σ_Smear is imaginary. However, overestimation of the /ET is less of a concern for this analysis than underestimation of the /ET. Therefore, in this case, G_Smear is set to be a Dirac delta function. To account for the tails, which follow an arbitrary distribution T_Data(z) normalized to 1 in the region -1 to 1, the fraction of events (α) in data that are not contained in the Gaussian portion of the response is established. Then, making the approximation that the resolution of the Gaussian part of the simulation response is minor compared to the mis-measurement present in the non-Gaussian tail, the final smearing function is:
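The Gaussian-subtraction step, including the fallback when the simulation is already wider than data, can be sketched as below. Whether the delta function retains the mean offset is not specified here, so this sketch drops it (jets are left unsmeared in that case).

```python
import math

def gauss_smear_params(mu_data, sigma_data, mu_mc, sigma_mc):
    """Width and offset of the Gaussian smearing term (Eq. 2.15).
    If sigma_MC exceeds sigma_Data the subtraction would be imaginary,
    so fall back to a Dirac delta: no Gaussian smearing at all."""
    var = sigma_data ** 2 - sigma_mc ** 2
    if var <= 0.0:
        return 0.0, 0.0          # delta function: leave jets unsmeared
    return mu_data - mu_mc, math.sqrt(var)
```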

\[
P_{\mathrm{Smear}}(z) = (1-\alpha)\, G_{\mathrm{Smear}}(z) + \alpha\, T_{\mathrm{Data}}(z). \tag{2.16}
\]

Once the smearing function is created by either method, it can be applied to all the jets in the simulation so that the data is better modelled. This is done by smearing the p_T of every jet in each event according to Equation 2.17, where z is a random number generated using the r_smear distribution, p_T,MC is the reconstructed p_T of the jet in Monte Carlo, and p_T,true is the p_T of the jet as it was generated by the simulation.

\[
p_{T,\mathrm{Smear}} = p_{T,\mathrm{MC}} + z \times p_{T,\mathrm{true}} \tag{2.17}
\]

After this, event variables including /ET are recalculated using the new jet pT values. The distributions for /ET, jet pT, and jet resolution in the altered simulation should now match the data, and any excess may then be attributed to sources other than detector effects. Note the smearing functions derived from this method may be used on any sample, including ones that have /ET from undetected particles.
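The per-jet application of Equations 2.16 and 2.17 can be sketched as follows; the function name, argument names, and toy values are hypothetical.

```python
import numpy as np

rng = np.random.default_rng(7)

def smear_jets(pt_mc, pt_true, alpha, mu_smear, sigma_smear, tail_template):
    """Smear reconstructed jet pT values (Eq. 2.17) with z drawn from
    the two-component smearing function of Eq. 2.16: with probability
    alpha use the data tail template T(z), otherwise the Gaussian core."""
    pt_mc = np.asarray(pt_mc, dtype=float)
    pt_true = np.asarray(pt_true, dtype=float)
    use_tail = rng.random(len(pt_mc)) < alpha
    z = rng.normal(mu_smear, sigma_smear, len(pt_mc))
    if tail_template is not None and use_tail.any():
        z[use_tail] = rng.choice(tail_template, size=use_tail.sum())
    return pt_mc + z * pt_true          # Eq. 2.17

# With alpha = 0 and a zero-width core, each jet shifts by mu * pt_true.
smeared = smear_jets([50.0, 30.0], [48.0, 29.0], 0.0, 0.1, 0.0, None)
```

Recomputing /ET afterwards takes the negative vector sum over the adjusted objects; that step needs the jet directions and is omitted from this sketch.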

While this simplified version illustrates the idea behind the smearing function, the scenario in data is more complicated. Additional considerations along with detailed information on the creation of the smearing functions are explored in Chapter 5.


Chapter 3

The Experiment

3.1 The Large Hadron Collider

The Large Hadron Collider (LHC) at the European Organization for Nuclear Research (CERN) [4, 5, 6] is a 27 km circumference accelerator with a design centre of mass energy of 14 TeV for proton–proton collisions. This is almost an order of magnitude increase over the previous energy reach of 1.96 TeV from p¯p collisions produced at the Tevatron at Fermilab [7]. The LHC began operation with proton–proton collisions in 2010 at a centre of mass energy of 7 TeV, after initial tests at lower energies, and will continue a physics program at 7 TeV through 2012.

The LHC is a synchrotron particle accelerator which contains two adjacent beam pipes. Four interaction points exist around the ring at which particles may collide. In order to create the strong magnetic fields necessary to steer the highly energetic particles around the ring, superconducting magnets, cooled to 1.9 K using liquid helium, are used. Two different types of magnets are used in the LHC: 1232 dipole magnets bend the beams into their circular trajectory, and 392 quadrupole magnets focus the beams.

Protons go through several successive accelerations before being injected into the LHC ring, using accelerators which were already present at CERN prior to the construction of the LHC. First, protons are accelerated to 50 MeV by LINAC 2, a linear accelerator, and injected into the first of a series of synchrotron accelerators, the Proton Synchrotron Booster (PS Booster). The PS Booster accelerates the protons to 1.4 GeV for injection into the Proton Synchrotron, which further accelerates the protons to 26 GeV for injection into the Super Proton Synchrotron (SPS). The SPS
