
Determining the Drift Time dependence of S2 Width for the XENON100 dark matter experiment

Stephen Skocpol

10165177

August 17, 2015

Report of a Bachelor's project in Physics and Astronomy, 15 EC, carried out between 30-03-2015 and 17-08-2015

Nikhef FNWI

Supervisor: Andrea Tiseni

Second assessor: Auke-Pieter Colijn

Abstract

Using montecarlo and real data, an analysis has been performed quantifying the drift time dependence of S2 signal widths for the XENON100 dark matter detector. The new 'Processor for Analyzing Xenon' (PAX), which will be the data analysis software package for XENON1T, XENON100's successor, has had its montecarlo simulator tested and compared to real data. Various new width measures in PAX have been compared, in order to find which is the most accurate predictor of an event's drift time. The S2 Hit Time Standard Deviation width measure has been found to predict event depth most accurately. A correction to the width at low S2 energies has also been determined. With the mean and spread of this relationship, it is now possible to apply Z-cuts to S2-only data, allowing S2-only analysis. The research will also improve general quality cuts on all XENON100 data, improving its total sensitivity.


Contents

1 Introduction
2 Theory
2.1 Dark Matter
2.2 Interaction Rate
2.3 The XENON100 Experiment
2.4 Position Reconstruction
2.5 S2-only Analysis
3 Method
3.1 Simulating Data with PAX
3.2 Width Measures
3.3 Slicing Data
3.4 Real Data
3.5 Parameterization
4 Results
4.1 Comparing the width measures
4.2 Parameterizations
4.3 Simulator Versus Real Data
5 Conclusion and Discussion
5.1 Comparing The Width Measures
5.2 Parameterization
5.3 Simulator Versus Real Data


List of Figures

1 Energy composition of the universe
2 WIMP mass / cross section exclusion limits
3 XENON100 Detector Shielding
4 The XENON100 Time Projection Chamber
5 Result of 225 days of XENON100 data
6 XENON10 S2-only exclusion limit
7 Comparison of FWLM and S2 Range 90p Area
8 2d histogram with highlighted slice
9 Exponentially Modified Gaussian Function
10 Result of slicing data and gaussian fitting
11 Fitting a function to data points
12 Results of montecarlo analysis
13 Comparison of width measures with montecarlo data
14 Results of real data analysis
15 Comparison of width measures with real data
16 Montecarlo Drift Time Parameterization
17 Montecarlo S2 Area Parameterization
18 Real Data Drift Time Parametrization
19 Real Data S2 Area Parametrization
20 Comparing real data to simulated data, drift time
21 Comparing real data to simulated data, s2 area
22 Comparison of real data and montecarlo drift time parametrizations


1 Introduction

It has long been known that there is matter in and between galaxies which cannot be seen, but must be present due to its gravitational effects. As early as 1939, the astronomer Horace W. Babcock found evidence for large amounts of 'missing mass' from galactic rotation curves. The first scientists to show that dark matter was more than just invisible ordinary matter were Rubin and Ford (1970), based on observations of stars and gas emission from the Andromeda galaxy. It is of course entirely possible that there are particles in the universe which we do not understand yet. What makes this unknown matter so fascinating is that it vastly outweighs the amount of ordinary matter.

In 2015, the Planck Collaboration released new results that give extremely accurate estimates of these amounts. The results show that less than a fifth of all the matter in our universe is of a kind we currently understand. Beyond this ordinary, baryonic matter, we have no understanding of the remaining ∼85% of the mass in the universe. This matter is called Dark Matter.

Many theories on the nature of this Dark Matter have been postulated, and many have already been rejected. Currently, a number of the most promising theories are being tested with the help of the liquid xenon detector XENON100 (Orrigo, 2015). These theories assume that dark matter consists of Weakly Interacting Massive Particles (WIMPs), which may occasionally bounce off heavy atomic nuclei. By shielding the detector from background noise and observing the xenon with sensitive equipment, researchers hope to detect some of these events and to be able to distinguish them from other events.

In order to distinguish dark matter from other sources, it is necessary to have a good understanding of what happens inside the detector. The more you know about an event's parameters, the better. In this research, the relationship between an event's 'drift time' and 'S2 width' will be determined and quantified. This should improve the sensitivity of dark matter experiments, both by improving quality cuts for all analyses and by making it possible to study events whose energy would normally be too low to be included.

With these improvements, the search for the dark matter particle will be significantly strengthened, even allowing XENON100 to compete with experiments whose low-mass WIMP sensitivity is far better than XENON100 was originally designed to achieve. This research will improve exclusion limits for WIMP masses and cross sections, and will bring us one step closer to finding dark matter particles.


2 Theory

2.1 Dark Matter

The latest results from the Planck Collaboration (2015) show that ordinary, baryonic matter makes up only 4.9 percent of the universe. The rest of the universe’s energy content is made up of dark matter and dark energy, which we do not yet understand. In this thesis we will focus on the nature of dark matter, since that is what the XENON100 experiment was built to detect.

Figure 1: Pie chart showing the energy composition of the universe, according to the most recent Planck data. (Planck Collaboration, 2015)

At the moment, there are many different theories on the nature of dark matter. Though there are theories that assume otherwise, XENON100 was built to search for dark matter particles, generally named Weakly Interacting Massive Particles (WIMPs).

2.1.1 Current WIMP Candidates

One of the leading candidates for dark matter particles is currently predicted by the Constrained Minimal Supersymmetric Standard Model (cMSSM), which predicts WIMPs with a mass between 100 GeV and several TeV per particle (Feng, 2010). With the discovery of the Higgs boson, which has a mass of 126 GeV, the cMSSM lower limit for the WIMP mass has been adjusted to 160 GeV (Beskidt et al., 2014).

Besides cMSSM, which predicts relatively high mass WIMPs, there are also low mass dark matter particle candidates. For example, the Next-to-Minimal Supersymmetric Standard Model (NMSSM) predicts WIMPs with masses as low as 2 GeV.


Figure 2: Overview of exclusion limits for given WIMP masses and interaction cross sections. You can see which parts of the parameter space have already been excluded and which experiments have done so. The thick orange dashed line indicates the limit at which neutrino interference will make WIMP detection impossible unless the exact neutrino background is known. There are two regions that remain mostly unexplored: the high mass, low interaction cross section region, which may reveal WIMPs predicted by cMSSM, while NMSSM predicts WIMPs at masses as low as 2 GeV; in the low mass area, higher cross sections have not yet been excluded. Image source: Billard et al. (2014)


2.1.2 WIMP exclusion limits

This wide range of possible WIMP masses is shown in figure 2, where WIMP masses are set out against WIMP interaction cross section. The figure shows which parts of this parameter space have already been excluded by current experiments, and which regions are left to explore.

The thick orange dashed line in the bottom of the figure shows the limit where certain types of neutrinos will start interfering with direct-detection measurements such as XENON100 (Billard et al., 2014). If we wish to push our sensitivity past this limit, a better understanding of the neutrino background will be necessary.

For example, neutrinos created in the sun by the decay ⁸B → ⁸Be* + e⁺ + ν_e would create an event flux that is indistinguishable from the interactions of a ∼ 6 GeV WIMP with an interaction cross section of ∼ 5 × 10⁻⁴⁵ cm². In order to identify WIMP interactions in this range, a precise knowledge of the neutrino background is necessary.

Currently, future experiments are focusing on the two regions where neutrino scattering is not an issue. The XENON1T experiment should significantly lower the detection threshold in the high mass WIMP range, while experiments such as SuperCDMS are probing the low mass regions (Cerdeño et al., 2014).

2.2 Interaction Rate

Before going into how XENON100 detects dark matter interactions, it is instructive to estimate what kind of event rate can be expected.

R = Φ · σ · N_A / A_Xe,    (1)

where Φ is the WIMP flux through the detector, σ is the WIMP-nucleus interaction cross section, and N_A/A_Xe is Avogadro's number divided by the atomic mass of a xenon atom, giving the number of xenon atoms per unit mass.

The flux can be calculated with

Φ = (ρ / m_χ) · v_0,    (2)

which gives the number of WIMPs passing through a unit area per unit time. Multiplying with σ gives the interaction rate per nucleus, and adding the N_A/A_Xe factor gives

R = (ρ · v_0 / m_χ) · σ · N_A / A_Xe,    (3)


with dimensions of events per unit time per unit mass.

In order to do this calculation, we will need approximate values of the local dark matter density and the mean velocity of the dark matter particles relative to our solar system (Saab, 2013):

ρ_0 = 0.35 GeV/cm³,
v_0 = 220 km/s.

Here, we assume that the WIMP velocities follow a Maxwell distribution, with a mean velocity relative to our solar system of 220 km/s, which is simply the sun's rotational velocity around the galactic center. For this calculation we will neglect the earth's instantaneous velocity around the sun, since it averages out when taken over several years.

The dark matter density in this part of the galaxy is ∼ 0.35 GeV/cm³. Since there may be more than one type of particle contributing to this density, this is actually an upper limit. If there is more than one kind of dark matter particle, we expect to measure the most easily detectable one first.

Next, we should choose a value for the WIMP-nucleus interaction cross section σ and the WIMP mass m_χ:

σ = 10⁻³⁹ cm² = 1 fb,
m_χ = 100 GeV.

These values are chosen to reflect the parameter space which XENON100 is sensitive to.

One should note that the cross section here is the WIMP-nucleus interaction cross section, whereas all exclusion limit plots show the WIMP-nucleon interaction cross section. For xenon, the WIMP-nucleus cross section is several orders of magnitude larger than the WIMP-nucleon interaction cross section.

Putting this all together, you get an expected interaction rate of

R ∼ 0.01 events / (kg · year).    (4)

When comparing the chosen σ_nucleus and m_χ to the exclusion limits in figure 2, you see that these values have already been excluded by current experiments. In order to get a measurable event rate, it will be necessary to increase the amount of detector material. The XENON1T experiment, currently under construction, will do exactly this, increasing the active xenon mass to ∼ 2.2 tons (Aprile, 2012).
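As a sanity check of equation (3), the rate estimate can be reproduced in a few lines of Python. The numbers below are the illustrative values quoted above; the script is a sketch, not part of the thesis analysis chain.

```python
# Back-of-the-envelope WIMP interaction rate, following equation (3).
AVOGADRO = 6.022e23        # atoms per mol
A_XE = 131.3               # g/mol, average atomic mass of xenon

rho = 0.35                 # GeV/cm^3, local dark matter density (upper limit)
v0 = 220e5                 # cm/s (220 km/s)
m_chi = 100.0              # GeV, assumed WIMP mass
sigma = 1e-39              # cm^2, assumed WIMP-nucleus cross section (1 fb)

flux = (rho / m_chi) * v0                   # WIMPs per cm^2 per second
nuclei_per_kg = AVOGADRO / A_XE * 1000.0    # xenon nuclei per kg
rate_per_second = flux * sigma * nuclei_per_kg
rate_per_year = rate_per_second * 3.154e7   # seconds in a year

print(f"Expected rate: {rate_per_year:.3f} events / (kg * year)")
# prints roughly 0.01, in agreement with equation (4)
```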


2.2.1 Using interaction rates

The DAMA experiment is an experiment that only measures the event rate in a passively shielded detector. It has tried to observe the annual modulation in event rate, caused by the earth's orbit around the sun, and found a measurable annual modulation of 2%. If the modulation were caused exclusively by dark matter, it would correspond to a mass of ∼ 10 GeV and a WIMP-nucleon interaction cross section of ∼ 10⁻⁴⁰ cm² (Davis, 2014). However, these values have already been excluded by other direct detection experiments. Current research is exploring other possible causes for this modulation.

2.3 The XENON100 Experiment

The XENON100 experiment is located underground at the Laboratori Nazionali del Gran Sasso (LNGS), near the Gran Sasso mountain in Italy. LNGS has extensive underground facilities, which can be used for particle- and astroparticle-physics experiments that require a low radiation background.

2.3.1 Shielding

Besides the mountain's natural shielding, XENON100 employs several layers of passive shielding to further decrease background noise. In figure 3, you see that the detector is surrounded by water tanks, a lead shield with an inner layer of radio-poor ancient lead, polyethylene and copper.

Using this construction, it was possible to reduce the background by two orders of magnitude with respect to the XENON10 experiment (Aprile et al., 2012b). Combined with a factor 10 increase in active volume, XENON100 has been able to greatly increase excluded parameter space for WIMP masses and cross-sections.

2.3.2 The Time Projection Chamber

XENON100 employs a two-phase, position-sensitive Time Projection Chamber (TPC), which allows per-event analysis of the collected data. Figure 4 shows the principle by which particle interactions are detected.

When a particle collides with a xenon atom, an S1 light signal is produced by direct scintillation. Ionization electrons are also produced, which drift to the top of the TPC under the electric field applied between the gate grid and the cathode. A stronger electric field is present between the gate grid and the anode, which causes an S2 signal by proportional scintillation as the electrons are accelerated towards the anode. These signals are recorded by photomultiplier tubes (PMTs) in the top and bottom of the TPC (Aprile et al., 2012b).


Figure 3: The radiation shielding of the XENON100 detector. The XENON100 detector is surrounded by water tanks, lead, ancient lead, polyethylene, and copper, in order to reduce the electromagnetic background as much as possible.

For proper S2 generation, it is very important that the liquid/gas interface is kept stable between the gate grid and the anode. In order to achieve this, the TPC has a diving bell construction, which allows accurate control of the liquid level in the TPC.

2.3.3 Quality Cuts

Since it is impossible to remove all of the background in the detector, it is very important to be able to distinguish between potential WIMP interactions and other events. This is what makes the per-event analysis of XENON100 data so powerful, since it allows us to exclude events from the dataset that can be explained as something else. These 'quality cuts' allow removal of many kinds of noise, improving the sensitivity of XENON100.


Figure 4: Schematic overview of XENON100's TPC. The top and bottom of the TPC are covered in PMTs, in order to detect the light from the S1 and S2 of a particle recoil. An electric field between the gate grid and the cathode causes the S2 electrons to drift upwards, towards the liquid/gas interface, where the S2 proportional scintillation is produced. On the right you see an example of a nuclear and an electronic recoil. The difference in the S2/S1 ratio is used to distinguish between the two. Image source: Orrigo (2015)

Fiducial Volume Cut

One of the most important quality cuts is the fiducial volume cut. The copper surrounding the active volume contains impurities that cause many events in the xenon near the copper. These signals are absorbed by the xenon quickly and do not penetrate deeply into the detector. Because of this, it is possible to cut out most of this noise by excluding all the events that occur near the edge of the detector. The remaining volume is called the fiducial volume. More information on position reconstruction can be found in section 2.4.

The importance of fiducial volume cuts can be clearly seen in a publication by Aprile et al. (2012a). Figure 5 shows all the events observed in 225 days of observation. The gray dots are events which have not passed the quality cuts. The remaining black dots are potential WIMP interactions. The overwhelming majority of all events occur very near the top, bottom or sides of the detector. When applying the fiducial volume cut, all the events whose position is outside the red dashed line are thrown away. This slightly reduces the effective active volume, but vastly increases the signal to noise ratio.

Figure 5: The result of 225 days of XENON100 observation. Black dots are events that have passed all quality cuts, while gray events can be excluded. The red dashed line encloses all the events that are within the fiducial volume. Unfortunately, the number of remaining events is consistent with the expected background. Image source: Aprile et al. (2012a)

Electronic Recoil Cut

Another important cut that can be made here is the electronic recoil cut. Since WIMPs don’t interact via the electromagnetic force, any events that can be identified as being electromagnetic in nature can be excluded from the data. In xenon data analysis, the ratio between the S2 and S1 peak energies, S2/S1 is used for this. Since this ’event discrimination’ cut requires an S1 and an S2, it cannot be used for S2-only analysis (see section 2.5).

Veto Cut

In addition to the passive shielding that protects XENON100 from outside radiation sources, XENON100 also has an active veto surrounding the TPC. The diving bell construction of the TPC allows it to be submerged in an additional volume of liquid xenon, while maintaining the level of the liquid/gas interface. PMTs have been placed around the TPC which detect recoil events in the active veto. If an event in the TPC coincides with an event in the active veto, it can be identified as coming from some external source and may be excluded from the data.

Multiple Scatter Cut

According to the current exclusion limits, dark matter already has an extremely low interaction cross section. It is so low that it is valid to exclude all events where more than one recoil has been detected. If the event contains multiple recoils from one particle, it may be excluded on the basis that it is extremely unlikely for a dark matter particle to scatter twice in a row. The event rate in XENON100 is sufficiently low that we may also neglect cases where two different particles may have recoiled in the same event window.

2.4 Position Reconstruction

In the previous section, an important quality cut that was discussed was the fiducial volume cut. In order to determine whether or not an interaction occurred inside the fiducial volume, accurate position reconstruction of the event is necessary.

2.4.1 X-Y position reconstruction

The X-Y position of an interaction is determined using the S2 hit pattern in the top PMT array. While the S1 can originate from anywhere in the detector, the S2 is always produced at the liquid/gas interface in the top of the detector. That means that, unlike the S1, the S2's light is collected mostly by the PMTs closest to the event. This allows accurate X-Y position reconstruction using the S2 hit pattern.
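As a purely illustrative sketch (not the reconstruction algorithm actually used by XENON100), the idea can be captured by an area-weighted centroid of the top-array hit pattern:

```python
import numpy as np

def centroid_xy(pmt_positions, s2_per_pmt):
    """Crude X-Y estimate from the top-array S2 hit pattern.

    pmt_positions : (N, 2) array with the (x, y) position of each top PMT [cm]
    s2_per_pmt    : (N,) array with the S2 area seen by each PMT [pe]
    """
    w = np.asarray(s2_per_pmt, dtype=float)
    xy = np.asarray(pmt_positions, dtype=float)
    return (w[:, None] * xy).sum(axis=0) / w.sum()
```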

2.4.2 Z position reconstruction

Since only the top and bottom surfaces of the detector contain PMTs, a different approach is necessary to determine the Z co-ordinate of an event. In xenon data analysis, the Drift Time is used to determine this.

Using the finite drift speed of the electrons, one can deduce the depth at which an interaction occurred in the detector from the time the electron cloud needed to drift up to the liquid/gas interface. The depth and drift time are extremely well correlated; the following equation can be considered to be exact:


Z = v_e · t_drift    (5)

The electron drift velocity v_e depends mostly on the voltage which is applied between the gate grid and the cathode. In XENON100 the cathode is set to −16 kV, which gives a drift field of 0.53 kV/cm.
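In code, equation (5) is a one-liner. The drift velocity below is an approximate literature value for liquid xenon at this field strength; it is not quoted in this thesis and only serves to make the sketch concrete.

```python
V_DRIFT_MM_PER_US = 1.7   # mm/us, assumed drift velocity at ~0.53 kV/cm

def depth_cm(drift_time_us):
    """Interaction depth below the liquid/gas interface (equation 5)."""
    return V_DRIFT_MM_PER_US * drift_time_us / 10.0

print(depth_cm(90.0))  # ~15 cm, roughly the centre of the ~30 cm drift region
```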

2.5 S2-only Analysis

When looking for lower energy events, it is necessary to let go of the requirement that each event should have both an S1 and an S2, since detection of the S1 is no longer always possible. This is the case for S2-only analysis. Without the S1, and therefore without the drift time, the usual approach to determining the depth can no longer be applied, and it is necessary to resort to other methods of determining the depth.

Another drawback to relinquishing the S1 requirement is that it is no longer possible to use the S2/S1 ratio to discriminate between nuclear and electronic recoils. This leads to an increase in noise, which can reduce the sensitivity of the analysis. However, the sensitivity gained for low-energy events makes S2-only analysis a viable way to further lower the exclusion limit for WIMPs at low masses.

One way of determining the interaction depth without a drift time is by using the S2 signal width. Sorensen et al. (2010) describe using this method for S2-only analysis on XENON10 data. Figure 6 shows how this has pushed XENON10's exclusion limits beyond those of XENON100 at low WIMP masses. If the same thing could be done for XENON100 data, it could push these limits down even further.


Figure 6: Exclusion limits for low mass WIMPs. The dashed lines show the range of WIMPs excluded by XENON100 and LUX. The red solid line shows exclusion limits from S2-only analysis of XENON10 data. Despite the loss of accurate Z-reconstruction and Log(S2/S1) event discrimination, and having only a tenth of the active volume, it allows XENON10 to overtake its successor, XENON100, in sensitivity at low WIMP masses.

3 Method

As mentioned in section 2.5, the main goal of this research is to determine the drift time dependence of the S2 signal width. A montecarlo dataset was first created and analysed, before switching to real data from XENON100 run 10. This is a dataset where the detector was exposed to AmBe, a neutron source. The calibration source for this run was inserted in the calibration pipe shown in figure 3.


3.1 Simulating Data with PAX

Before analyzing the real calibration data of the AmBe run, a montecarlo analysis was performed, using data created with the new Processor for Analyzing Xenon (PAX). PAX is currently in development and will be used as the new data processor for XENON1T (Aalbers, 2018). Currently, XENON100 data is used to test PAX.

A powerful feature of PAX is its event simulation package. PAX uses physical models to simulate events according to a number of parameters, given by the user. One can instruct PAX to simulate events with custom S1 and S2 energies, depths, electron lifetimes, etc.

For this analysis, two simulations were run in PAX. The first was a series of depths, run at a fixed energy of 50 electrons (∼ 1000 pe), with depths ranging between 0 and 30 cm. The other was a simulation run at a fixed depth of 15 cm, with energies between 1 and 100 electrons (∼ 20 to ∼ 2000 pe).
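Written out as plain parameter grids, the two scans look as follows. This is only an illustration of the scan layout; it does not use PAX's actual configuration interface.

```python
import numpy as np

FIXED_ENERGY_ELECTRONS = 50          # ~1000 pe
depth_scan = [{"depth_cm": float(z), "s2_electrons": FIXED_ENERGY_ELECTRONS}
              for z in np.linspace(0.0, 30.0, 31)]

FIXED_DEPTH_CM = 15.0                # centre of the TPC
energy_scan = [{"depth_cm": FIXED_DEPTH_CM, "s2_electrons": n}
               for n in range(1, 101)]   # ~20 to ~2000 pe
```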

3.2 Width Measures

One of the goals of this research is to find the best width measure for estimating event depths. In the analysis, four width measures were considered: s2_range_90p_area, s2_range_50p_area, s2_full_range, and s2_hit_time_std.

The current data processor for XENON100, XeRawDP, uses an old width measure, known as the Full Width at Low Maximum (FWLM). This width measure is determined by summing the signal from all the PMTs and creating a summed waveform. XeRawDP then finds peaks in the summed signal and finds their widths at 10% of the peaks' heights.

PAX does not use this width measure, since it does not use a summed waveform except for plotting purposes (Aalbers, 2018). Instead, PAX records the individual PMT hits of an event. Peaks are found using a clustering algorithm. PAX determines the center of a peak using the mean hit time and defines a width which encloses the hits directly before and after the mean hit time. Then, PAX increases this width around the center until 90% of the peak's energy is enclosed. This width is known as the s2_range_90p_area width. Though these width measures are not exactly the same, they can be considered to be equivalent. Figure 7 illustrates the difference between the width measures.

The s2_range_50p_area width measure is equivalent to the well-known Full Width at Half Maximum (FWHM) width. s2_full_range simply takes the time difference between the first and the last hit of a peak (it could also be named the s2_range_100p_area width).

s2_hit_time_std is a new kind of width measure, which exploits the discrete nature of hits by determining the standard deviation of the hit times.


Figure 7: Comparison of XeRawDP's Full Width at Low Maximum and PAX's S2 Range 90p Area width measure. While the two width measures can be considered to be equivalent, they are not exactly the same, as the widths are determined in different ways. XeRawDP is the current data processor for XENON100, while PAX is the new processor that will be used on XENON1T.

Because it uses the standard deviation of all hits, s2_hit_time_std should be less vulnerable to unusual peak shapes.
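The sketch below shows simplified versions of these width measures, computed from the hit times and hit areas of a single S2 peak. The exact definitions used inside PAX may differ in details (for example in how the 50%/90% window is grown around the centre).

```python
import numpy as np

def s2_hit_time_std(times, areas):
    """Area-weighted standard deviation of the hit times."""
    t, w = np.asarray(times, float), np.asarray(areas, float)
    mean = np.average(t, weights=w)
    return np.sqrt(np.average((t - mean) ** 2, weights=w))

def s2_full_range(times):
    """Time between the first and the last hit of the peak."""
    return np.max(times) - np.min(times)

def s2_range_fraction(times, areas, fraction=0.9):
    """Symmetric window around the mean hit time, grown until it encloses
    the requested fraction of the peak area (0.9 -> 90p, 0.5 -> 50p)."""
    t, w = np.asarray(times, float), np.asarray(areas, float)
    centre = np.average(t, weights=w)
    order = np.argsort(np.abs(t - centre))          # closest hits first
    enclosed = np.cumsum(w[order]) / w.sum()
    idx = order[np.searchsorted(enclosed, fraction)]
    return 2.0 * abs(t[idx] - centre)
```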

3.3 Slicing Data

Before parameterizing the drift time dependence of the S2 signal width, it is necessary to slice the data. This means that the 2d-histogram will be sliced up into a number of 1d-histograms (see figure 8), of which the mean and sigma can be determined by fitting an exponentially modified gaussian distribution to each (see figure 9). The motivation for using this function instead of a regular gaussian is explained in section 3.4.


Figure 8: The top figure shows a 2d-histogram of some data. The red box indicates one slice of this dataset. This slice can be represented by a 1d-histogram, which can be seen in the lower figure. In order to determine the correct mean and standard deviation of this slice, an exponentially modified gaussian function is fit to the histogram. The green box shows the fit parameters and the corresponding errors which will be stored, to be used for the parameterization in the next step of the analysis.


3.4 Real Data

After running the analysis on simulated data, the same steps were carried out on real data. Real data is not as clean as simulated data; not all events in the dataset are 'good' events. Because of this, the width distribution at a given depth is no longer normally distributed, but also contains some (and in some cases many) events that have a width that is too large for the given depth. This can be seen in the lower part of figure 8, where a tail of events can be seen to the right of the main peak. Because of this, using a regular gaussian distribution is no longer sufficient to find the best mean and standard deviation. Instead, an exponentially modified gaussian is used.

Figure 9: Example of the exponentially modified gaussian function for various values of lambda. All exponentially modified gaussian curves have a mean of 0 and a standard deviation of 1.

This tail may be due to the fact that only basic quality cuts have been applied to this data. There are a number of quality cuts that can still be applied using PAX. Another part of this tail may be caused by double peaks which have been incorrectly clustered together as one peak. These 'bad' events could be removed by using the depth-width parameterization from this research to find and cut events that are too wide for their depth.
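A minimal sketch of this slicing-and-fitting step is given below. It assumes per-event arrays of drift times and widths, and uses scipy.stats.exponnorm as the exponentially modified gaussian; the binning, starting values and minimum slice population are illustrative choices, not the thesis' actual settings.

```python
import numpy as np
from scipy.stats import exponnorm
from scipy.optimize import curve_fit

def emg(x, amplitude, mu, sigma, tau):
    """Exponentially modified gaussian with a tail of decay constant tau."""
    return amplitude * exponnorm.pdf(x, tau / sigma, loc=mu, scale=sigma)

def fit_slices(drift_time, width, slice_edges, width_bins=100):
    """Slice the (drift time, width) data and fit an EMG to each slice."""
    results = []
    for lo, hi in zip(slice_edges[:-1], slice_edges[1:]):
        sel = (drift_time >= lo) & (drift_time < hi)
        if sel.sum() < 50:                        # skip poorly populated slices
            continue
        counts, edges = np.histogram(width[sel], bins=width_bins)
        centres = 0.5 * (edges[:-1] + edges[1:])
        p0 = [counts.sum() * (edges[1] - edges[0]),   # normalization
              np.median(width[sel]),                  # mu
              np.std(width[sel]),                     # sigma
              np.std(width[sel])]                     # tau
        popt, pcov = curve_fit(emg, centres, counts, p0=p0, maxfev=10000)
        results.append({"slice": (lo, hi), "mu": popt[1], "sigma": popt[2],
                        "errors": np.sqrt(np.diag(pcov))})
    return results
```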


3.5 Parameterization

3.5.1 Drift Time

The information that has been acquired once all slices have had their gaussian fit and their mean and spread determined can be seen in figure 10. The following step is to fit a curve through the µ and σ points, in order to find a function that best describes the depth-width relationship.

(Plot: summary of the data slicing for s2_hit_time_std, showing the S2 width (std) [ns] as a function of drift time [µs].)

Figure 10: The result of slicing the data and fitting an (exponentially modified) gaussian to each slice. The blue points show the mean width µ for every slice and the green points give the spread of the width σ for every slice. The vertical error bars indicate the errors on the points and the horizontal error bars simply indicate the size of the slice.

There are many possible fit functions to be tried, but Sorensen (2011) expects the following function to fit this relationship:

σ_e = √(2 D_L t / v_d² + σ_0²),    (6)

where σ_e is the S2 signal width, D_L is the diffusion coefficient of liquid xenon, v_d is the drift velocity of the electrons in liquid xenon and σ_0 is a free parameter. The latter likely depends on the electron trapping time at the gas/liquid interface, but possibly also on other things.
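A sketch of this fit, using SciPy's curve_fit, is shown below. For simplicity the combination 2·D_L/v_d² is fitted as a single parameter; the toy data exist only to make the snippet runnable and carry no physical meaning.

```python
import numpy as np
from scipy.optimize import curve_fit

def diffusion_width(t_us, two_dl_over_vd2, sigma0):
    """Equation (6): sigma_e = sqrt(2*D_L*t/v_d^2 + sigma_0^2), with
    2*D_L/v_d^2 treated as one free parameter (ns^2 per us)."""
    return np.sqrt(two_dl_over_vd2 * t_us + sigma0 ** 2)

# Toy data, purely to exercise the fit; real input would be the mu points
# (and their errors) coming out of the slicing step.
rng = np.random.default_rng(0)
t = np.linspace(5.0, 160.0, 20)
mu_toy = diffusion_width(t, 900.0, 60.0) + rng.normal(0.0, 5.0, t.size)

popt, pcov = curve_fit(diffusion_width, t, mu_toy, p0=[1e3, 50.0])
print(popt, np.sqrt(np.diag(pcov)))
```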


3.5.2 S2 Area

Due to the discrete nature of electrons, events with a small S2 area (and thus few electrons in the S2) are expected to have a significantly smaller width than the above dependence predicts. Therefore, an S2 area correction must also be determined.

This S2 area correction to the width may depend on the drift time as well as on the S2 area. For this research, however, we have assumed this correction to be drift time independent. By doing so, it is possible to write the S2 signal width and its spread as a product of a function of drift time and a function of S2 area, allowing easier inversion of the relationship later on:

s2 width(drift time, s2 area) = f(drift time) · g(s2 area),    (7)

spread of s2 width(drift time, s2 area) = h(drift time) · i(s2 area).    (8)

Whenever a simple function such as f(x) = a·x + b, f(x) = √(a·x + b), or f(x) = log(a·x + b) did not properly fit the data, an empirical function was used:

f(x) = √(a·x + b) + log(c·x + d) + e.    (9)
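The candidate functions listed above, including the empirical fallback of equation (9), can be written directly as Python functions for use with curve_fit:

```python
import numpy as np

def linear(x, a, b):
    return a * x + b

def square_root(x, a, b):
    return np.sqrt(a * x + b)

def logarithmic(x, a, b):
    return np.log(a * x + b)

def empirical(x, a, b, c, d, e):
    """Equation (9): used whenever none of the simple forms fit well."""
    return np.sqrt(a * x + b) + np.log(c * x + d) + e
```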

3.5.3 Fit Quality

Figure 11 shows the result of fitting the curve in equation 6 through the µ points of a sliced data set. Fitting was done using the Levenberg-Marquardt algorithm through SciPy's curve_fit function. In the plot, you can see two measures for fit quality: the reduced χ² and the R² measure. The reduced χ² is defined as

χ²_red = (1/ndf) Σ_i ((f(x_i) − µ_i) / σ_i)²,    (10)

where µ_i is the i'th data point, f(x_i) is the fit function's value at that point and σ_i is that point's corresponding error. ndf is the number of degrees of freedom, which is equal to the number of points minus the number of free parameters of the fit function.

χ² is a measure that takes into account the errors of the points to which the curve has been fit. When χ²_red = 1, the curve captures the data within the quoted errors, while a much larger value indicates that the curve does not perfectly fit the data. When the errors on the points are extremely small, this may make χ² very large, even though the curve fits the data quite well. In order to recognize this situation, the R² has also been computed, which simply compares the squared residuals to the spread of the points around their mean:

R² = 1 − Σ_i (µ_i − f(x_i))² / Σ_j (µ_j − µ̄)².    (11)

Figure 11: Square root function fit to the µ points from the data slicing. The fit function and parameters are shown in the top left of the image and the fit quality in the bottom left. Two different measures for fit quality have been computed.
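The two fit-quality measures of equations (10) and (11) are straightforward to compute; a minimal sketch:

```python
import numpy as np

def reduced_chi2(y, y_err, y_fit, n_free_params):
    """Equation (10): chi^2 per degree of freedom."""
    ndf = len(y) - n_free_params
    return np.sum(((y_fit - y) / y_err) ** 2) / ndf

def r_squared(y, y_fit):
    """Equation (11): 1 - (residual sum of squares / total sum of squares)."""
    y = np.asarray(y, float)
    ss_res = np.sum((y - y_fit) ** 2)
    ss_tot = np.sum((y - y.mean()) ** 2)
    return 1.0 - ss_res / ss_tot
```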


4 Results

The following pages show the results of the analysis. The plots contain all the information relevant to the aims of this research.

For the montecarlo analysis, a dataset was simulated at a constant energy of 50 electrons (∼ 1000 pe) and a range of depths, and a dataset was simulated at a constant depth of 15 cm and a range of energies.

For the real data analysis, a part of XENON100's run 10 was used. For determining the drift time dependence, all drift times greater than 5 µs and energies between 100 and 5000 pe were included in the analysis.

For determining the S2 area dependence, all energies greater than 100 pe, and drift times between 80 µs and 120 µs (∼the center of the detector) were included in the analysis.
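Expressed as boolean masks on an event table, these two selections look as follows; the column names are hypothetical and simply mirror the quantities above.

```python
import pandas as pd

def drift_time_selection(events: pd.DataFrame) -> pd.DataFrame:
    """Events used for the drift time parameterization."""
    mask = (events["drift_time_us"] > 5) & \
           events["s2_area_pe"].between(100, 5000)
    return events[mask]

def s2_area_selection(events: pd.DataFrame) -> pd.DataFrame:
    """Events used for the S2 area correction (centre of the detector)."""
    mask = (events["s2_area_pe"] > 100) & \
           events["drift_time_us"].between(80, 120)
    return events[mask]
```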

Figures 12 to 15 show the results of the data slicing for the montecarlo and real dataset for the various width measures. Figures 12 and 14 show a histogram of the data and the resulting parameterization for every width measure, while figures 13 and 15 compare the spread around each point σ, the error on each point's mean µ and the error on σ, for every width measure.

Figures 16 to 19 show the best parameterizations for the means µ and the spreads σ of the drift time and S2 area dependence for montecarlo and real data. All figures show which function best fits the data, the fit quality, and the fit parameters.

Figures 20 and 21 show the end results of the parameterizations, giving a plot of µ ± σ for the montecarlo data and real data, in order to compare the results and see the differences.


Montecarlo Results


Figure 12: Results of montecarlo analysis. The left column shows the parameterization that has been made from the data seen in the 2d-histogram in the right column. Every row shows a different width measure.


Montecarlo Errors


Figure 13: Comparison of montecarlo spread and errors for all width measures. The top figure shows the spread (σ) of the width, given as a fraction of its corre-sponding µ point. The middle figure gives the errors on the µ data and the bottom figure gives the errors on the σ points, both as fractions of their µ or σ points.


Real Data Results


Figure 14: Results of real data analysis. The left column shows the parameterization that has been made from the data seen in the 2d-histogram in the right column. Every row shows a different width measure.


Real Data Errors


Figure 15: Comparison of real-data spread and errors for all width measures. The top figure shows the spread (σ) of the width, given as a fraction of its corresponding µ point. The middle figure gives the errors on the µ data and the bottom figure gives the errors on the σ points, both as fractions of their µ or σ points.


4.2 Parameterizations

Montecarlo Drift Time Parameterization

Figure 16: Montecarlo parametrization of the drift time dependence. As predicted by Sorensen (2011), the means µ of the width depend on the drift time according to a square-root function. The best fit for the spreads of the width, σ, also is a square-root function. This dataset was simulated at 50 electrons (∼ 1000 pe).

Montecarlo S2 Area Parameterization

Figure 17: Montecarlo parametrization of the S2 area dependence. The best fits for the means µ and the spreads σ are both logarithmic functions. The σ and µ are normalized to be 1.0 at the center of the last slice, i.e. at 1182.5 pe. This dataset was simulated at a constant depth of 15 cm (the center of the detector, ∼ 90 µs).


Real Data Drift Time Parameterization

Figure 18: Real Data Parametrization of the drift time dependence. Again, as predicted by Sorensen (2011), the means µ of the width depend on the drift time according to a square-root function. However, the best fit for the spreads σ is a linear function. In this analysis, all drift times greater than 5 µs, and energies between 100 and 5000 pe were included.

Real Data S2 Area Parameterization

Figure 19: Real data parametrization of the S2 area correction. The best fit function for the means µ of the width was the empirical function mentioned in section 3.5. For the spread σ, the best fit function was a logarithmic function. The σ and µ are normalized to be 1.0 at 1182.5 pe, as a correction to the drift time dependence. All energies greater than 100 pe, and drift times between 80 µs and 120 µs (∼ the center of the detector) were included.


4.3 Simulator Versus Real Data


Figure 20: Comparing drift time parameterizations of real and simulated data. You can see that at low depths, the simulated data and the real data are in agreement. However, at greater depths, there is a significant difference between the parametrization of real data and of the montecarlo data. This difference can be more clearly seen in figure 22.


Figure 21: Comparing S2 area parameterizations of real and simulated data. You can see that at low S2 areas, the simulated data and the real data are in agreement. However, at larger S2 areas, there is a significant difference between the parametrization of the real data and of the montecarlo data.



Figure 22: Comparison of the real data parametrization and the montecarlo parametrization of the drift time dependence. The red line indicates the real data µ and the green dashed lines indicate the real data µ ± σ. The magenta line indicates the montecarlo µ and the dashed cyan lines indicate the montecarlo µ ± σ. A significant difference between the two is clearly visible.


5 Conclusion and Discussion

5.1 Comparing The Width Measures

In the montecarlo data, all four width measures can be used as depth estimators. The s2_range_90p_area and s2_hit_time_std width measures are consistently more accurate than the other two, but the differences are small.

When analyzing real data, however, it immediately becomes clear that s2_range_50p_area is not a suitable estimator for the interaction depth. s2_range_50p_area seems to have two entirely distinct populations of widths: the one population, at low depth, has higher widths than the population at greater depths. Further research into the cause of this is suggested.

The s2_full_range width measure is not entirely unsuitable as a depth estimator. However, both s2_range_90p_area and s2_hit_time_std have smaller errors and spreads.

It is hard to say whether s2_range_90p_area or s2_hit_time_std is the best depth estimator. However, since s2_hit_time_std consistently has a slightly smaller spread and error than s2_range_90p_area, the s2_hit_time_std width measure should be chosen as the most accurate depth estimator.

5.2 Parameterization

The montecarlo analysis of the drift time and S2 area correction gave very clean results, as can be seen in figure 13. The montecarlo data could also be parameterized in a very consistent, analytical way. Both the µ and σ followed the same type of function, giving two square-root functions for the drift time dependence and two logarithmic functions for the S2 area dependence.

For the real data, these dependences were not as clean. The σ of the drift time dependence did not fit a square root, but corresponded best to a linear function instead. Since the real data used for parameterizing the drift time dependence include events at energies between 100 and 5000 pe instead of a fixed energy of 1000 pe, the S2 area dependence of the width may have influenced this relationship.

It is valid to assume that the width is approximately S2 area independent when parameterizing the drift time dependence of the width. However, if one wants to improve the accuracy of this parameterization, it is suggested to perform the parameterization on a dataset with a narrower energy range, preferably around the energy value at which the S2 area correction parameterization is normalized. In order to maintain a good amount of statistics, a larger dataset will be necessary, requiring more RAM for the analysis than is usual on most computers.

A good compromise would be to cut out only the low-energy events, since their width is affected most by their energy. At energies higher than ∼ 500 pe, we see that the S2 area dependence flattens out and can most likely be neglected.

When parameterizing the S2 area correction itself, it is not possible to assume that it is drift time independent. If one does assume so, the effect of the dependence is much smaller than its spread, making the parameterization nearly useless. The S2 area correction has therefore been determined at a relatively narrow range of drift times, picked around the center of the detector. This has been done in order to compare the real data to the montecarlo parameterization, which was also simulated in the center of the detector.

Since the S2 area correction is drift time dependent, but has only been determined at a narrow range of drift times, it is necessary to consider a 10% systematic error on this correction when applying it to the µ and σ of the drift time dependence. This value has been chosen, since the correction becomes completely flat (there is no correction) at low depths, and increases by ∼ 10% when nearing the bottom of the detector.

The S2 area correction has been normalized at 1182.5 pe, meaning that there should be no correction at this value. Since the S2 area curve flattens off after ∼ 500 pe, the exact point of normalization should not make much difference on the correction. However, if one wants to improve the accuracy of this correction, it is possible to redo the parametrization of the drift time dependence using only a narrow range of energies, and normalizing the S2 area correction at a value within this range.

Though figure 21 may suggest that the width has an intrinsic S2 area dependence, it is important to remember that the S2 area correction should be seen as a correction to the µ and σ of the drift time dependence and not as an independent predictor of the width.

5.3 Simulator Versus Real Data

When PAX simulates the events, it tries to incorporate as much physics into the simulation as possible. The drift time dependence and S2 area dependence emerge as a result of this modeling and are very similar to the real data, which shows that the PAX simulator works very well.

Though the drift time dependence of the montecarlo and real data agree quite well at low depths, figures 20 and 22 show a significant difference between montecarlo and real data at large depths. It is unclear what causes this, and the nature of this difference merits further research.


Another interesting difference between the montecarlo and real data parameterizations can be seen in the S2 area dependence. While the real data flattens out after ∼ 500 pe, the montecarlo data keeps rising at higher energies, albeit logarithmically. This may indicate some necessary physics that PAX has not included in its models, or may indicate that the effect of some other physics is being incorrectly included in the simulation.

However, it is important to apply more rigorous quality cuts on the real data, in order to check if the differences are not simply due to unwanted noise in the real data.

5.4 Conclusion

The main aim of this research was to predict event depths (or drift times) based on their S2 signal widths, and to determine the accuracy of this dependence through the spread σ. However, throughout this research we have been working the other way around, predicting the width from the drift time and S2 area.

Fortunately, the drift time dependence of the width is parameterized by a simple analytical function, allowing us to give the inverse dependence simply by inverting the function. In section 3.5, a general form of the width dependence has been given:

s2 width(drift time, s2 area) = f(drift time) · g(s2 area)    (12)

Let us now fill in these equations, using σ_e for the S2 width, t_d for the drift time and S2 for the S2 area:

σ_e = √(A · t_d + B) · f(S2),    (13)

where

f(S2) = √(C · S2 + D) + log(E · S2 + F) + G.    (14)

This can be rewritten to

t_d = (σ_e² − B) / (f(S2)² · A).    (15)


spread of s2 width(drift time, s2 area) = h(drift time) · i(s2 area)    (16)

becomes

spread(σ_e) = (A · t_d + B) · C · log(S2 + D) + E.    (17)

For determining acceptance rates with width-based quality cuts, it is not necessary to explicitly invert this equation.

This is convenient, because it is not trivial to determine the spread of t_d from this equation. Since the equations for µ ± σ are not easily inverted analytically, it is most convenient to determine this spread numerically, should the need arise.
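One way to do this numerically is to invert the µ and µ ± σ curves with a root finder. The sketch below assumes that mu_of_t and spread_of_t are the fitted parameterizations evaluated at a fixed S2 area, and that each curve is monotonic and crosses the observed width within the bracketing interval.

```python
from scipy.optimize import brentq

def drift_time_band(width_obs, mu_of_t, spread_of_t, t_min=1.0, t_max=180.0):
    """Drift times at which mu(t), mu(t)+sigma(t) and mu(t)-sigma(t)
    reach an observed S2 width."""
    central = brentq(lambda t: mu_of_t(t) - width_obs, t_min, t_max)
    lower = brentq(lambda t: mu_of_t(t) + spread_of_t(t) - width_obs, t_min, t_max)
    upper = brentq(lambda t: mu_of_t(t) - spread_of_t(t) - width_obs, t_min, t_max)
    return lower, central, upper
```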

Putting this all together gives the final result of this research:

Drift Time Mean

σ_e = √(A · t_d + B) · (√(C · S2 + D) + log(E · S2 + F) + G)

t_d = (σ_e² − B) / ((√(C · S2 + D) + log(E · S2 + F) + G)² · A)

Param   Value      Error
A       835.6      2.2
B       39691      115
C       0.134      0.036
D       -134.4     25.4
E       -0.0108    0.0026
F       -167.0     7.8
G       0.41394    0.17

Drift Time Spread

spread(σ_e) = (A · t_d + B) · C · log(S2 + D) + E    (18)

Param   Value     Error
A       0.2500    0.0033
B       6.81      0.25
C       -0.199    0.024
D       -145      19
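For completeness, the final parameterization can be written out as code. The functional forms follow the equations above; the parameter values and their units should be taken from the tables (drift time in µs, S2 area in pe, width in ns). Note that the spread table quotes only A–D, so the additive constant E is given a default of zero here.

```python
import numpy as np

def g_s2(s2, C, D, E, F, G):
    """S2 area correction factor, equation (14)."""
    return np.sqrt(C * s2 + D) + np.log(E * s2 + F) + G

def width_mean(t_drift, s2, A, B, C, D, E, F, G):
    """Mean S2 width as a function of drift time and S2 area (equation 13)."""
    return np.sqrt(A * t_drift + B) * g_s2(s2, C, D, E, F, G)

def drift_time_from_width(width, s2, A, B, C, D, E, F, G):
    """Inversion of the mean width, following equation (15)."""
    return (width ** 2 - B) / (g_s2(s2, C, D, E, F, G) ** 2 * A)

def width_spread(t_drift, s2, A, B, C, D, E=0.0):
    """Spread of the width, equation (18); E defaults to zero since the
    thesis' parameter table lists only A-D."""
    return (A * t_drift + B) * C * np.log(s2 + D) + E
```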


Bibliography

J. Aalbers. Pax: Processor for analysing xenon. Xenon Wiki, 2018.

E. Aprile. The XENON1T Dark Matter Search Experiment. ArXiv e-prints, June 2012.

E. Aprile et al. Dark Matter Results from 225 Live Days of XENON100 Data. Physical Review Letters, 109(18):181301, November 2012a. doi: 10.1103/PhysRevLett.109.181301.

E. Aprile et al. The XENON100 dark matter experiment. Astroparticle Physics, 35:573–590, April 2012b. doi: 10.1016/j.astropartphys.2012.01.003.

C. Beskidt, W. de Boer, and D. I. Kazakov. The impact of a 126 GeV Higgs on the neutralino mass. Physics Letters B, 738:505–511, November 2014. doi: 10.1016/j.physletb.2014.08.011.

J. Billard et al. Implication of neutrino backgrounds on the reach of next generation dark matter direct detection experiments. Physical Review D, 89(2):023524, January 2014. doi: 10.1103/PhysRevD.89.023524.

D. G. Cerdeño, M. Peiró, and S. Robles. Low-mass right-handed sneutrino dark matter: SuperCDMS and LUX constraints and the Galactic Centre gamma-ray excess. Journal of Cosmology and Astroparticle Physics, 8:005, August 2014. doi: 10.1088/1475-7516/2014/08/005.

J. H. Davis. Fitting the Annual Modulation in DAMA with Neutrons from Muons and Neutrinos. Physical Review Letters, 113(8):081302, August 2014. doi: 10.1103/PhysRevLett.113.081302.

J. L. Feng. Dark Matter Candidates from Particle Physics and Methods of Detection. Annual Review of Astronomy and Astrophysics, 48:495–545, September 2010. doi: 10.1146/annurev-astro-082708-101659.

S. E. A. Orrigo. Direct Dark Matter Search with XENON100. ArXiv e-prints, January 2015.

The Planck Collaboration. Planck 2015 results. I. Overview of products and scientific results. Submitted to A&A, 2015.

V. C. Rubin and W. K. Ford, Jr. Rotation of the Andromeda Nebula from a Spectroscopic Survey of Emission Regions. The Astrophysical Journal, 159:379, February 1970. doi: 10.1086/150317.

T. Saab. An Introduction to Dark Matter Direct Detection Searches and Techniques. In K. Matchev et al., editors, The Dark Secrets of the Terascale (TASI 2011), pages 711–738. World Scientific Publishing Co., 2013. ISBN 9789814390163.

P. Sorensen. Anisotropic diffusion of electrons in liquid xenon with application to improving the sensitivity of direct dark matter searches. Nuclear Instruments and Methods in Physics Research A, 635:41–43, April 2011. doi: 10.1016/j.nima.2011.01.089.

P. Sorensen et al. Lowering the low-energy threshold of xenon-based detectors. In Identification of Dark Matter 2010, page 17, 2010.
