
Master’s Astronomy and Astrophysics

track GRAPPA

Master Thesis

Distortion of CMB spectrum due to dark matter

annihilation

by

Omer Tzuk

10390626

June 2014

60 ECTS

September 2013 - June 2014

Supervisor:

Dr. Shinichiro Ando

Dr. Maria E. Cabrera

Examiner:

Dr. Jacco Vink


The monopole signal of the cosmic microwave background (CMB) is a probe of the early Universe. The energy spectrum of the CMB holds the information of energy-release episodes from a redshift of about z ∼ 2 × 10⁶ until the epoch of recombination, and an analysis of the energy spectrum allows us to test various hypotheses about physical processes during this time span. One of the scenarios that can be put to the test is the presence of dark matter particles in the early Universe. If dark matter is made of weakly interacting massive particles (WIMPs), we can expect them to inject energy into the primordial medium by self-annihilation. This energy is expected to cause a distortion of the CMB spectrum. In this scenario, the expected distortion is characterized by a non-zero chemical potential (µ) in the photon energy distribution function (a Bose-Einstein spectrum), and so this distortion is termed µ-type distortion. We provide here a forecast of µ-type distortion for WIMPs that arise from supersymmetric models. In addition, an analysis of µ-type distortion in the presence of Ultracompact Primordial Dark Matter Minihalos (UCMHs) is presented, and the possibility of constraining cosmological parameters through µ-type distortion observations is explained.


1 Introduction 2

2 Theoretical Background 4

2.1 Spectral Distortions of the Cosmic Microwave Background . . . 4

2.1.1 Big Bang Cosmology . . . 4

2.1.2 Spectral Distortions . . . 12

2.1.3 Thermalization of the CMB spectrum . . . 17

2.1.4 Analytic Solutions for the Spectral Distortions of the CMB spectrum . . . 21

2.2 Supersymmetry . . . 27

3 Method and Results 32

3.1 Bayesian Inference and Markov Chain Monte Carlo methods . . . . 32

3.1.1 Bayesian Inference . . . 32

3.1.2 The Markov Chain Monte Carlo Method . . . 34

3.1.3 Bayesian approach in MSSM . . . 35

3.2 Implementation . . . 39

3.2.1 Bayesian analysis data used . . . 39

3.2.2 Codes used in this Thesis . . . 39

3.3 Results . . . 43

3.3.1 Forecast of µ-type distortion for NUGHM supersymmetric scenario . . . 43

3.3.2 Constraining Ultracompact Primordial Dark Matter Minihalos through Spectral Distortions . . . 48

4 Discussion 60

4.0.3 Observational Issues . . . 61

4.0.4 Conclusions . . . 62


Introduction

Half a century has passed since the discovery of the Cosmic Microwave Background (CMB) by the astronomers Arno Penzias and Robert Wilson. Analysis of the CMB has since then been the main way of learning about the physical conditions during the early phases of the expansion of the Universe. The standard hot Big Bang model of the Universe predicts that the blackbody spectrum of the CMB emerged at z ≥ 2 × 10⁶.

In the epoch preceding z ∼ 2 × 10⁶, the conditions of high temperature and density established a complete thermal equilibrium between the photons and the baryonic matter. In an idealized model of the universe, in which the universe is taken to be homogeneous and isotropic, the blackbody profile of the CMB spectrum should be maintained during the expansion. Therefore, any departure of the CMB spectrum from the Planck law can be related to some kind of irregularity that took place between the time of the establishment of the CMB blackbody spectrum and the present time. Such departures of the CMB frequency spectrum from a perfect blackbody spectrum are referred to as spectral distortions.

The spectral distortions can typically be distinguished into two main types: µ-type and y-type distortions. In the redshift range 2 × 10⁶ ≳ z ≳ 10⁵ the photon-production mechanisms become too inefficient to restore the blackbody spectrum, and the scattering processes bring the spectrum to a kinetic equilibrium, which is characterized by a Bose-Einstein spectrum. This range of redshifts defines the era in which a µ-type distortion can be imprinted in the CMB spectrum due to energy injection into the plasma. At redshift z ≲ 1.5 × 10⁴ the scattering processes become minimal, and the type of distortion that arises due to energy injection is the y-type distortion. In the intermediate redshift range 10⁵ ≳ z ≳ 1.5 × 10⁴ the spectral distortion takes an intermediate form between the µ-type and y-type distortions, and is termed i-type distortion. The type and magnitude of the spectral distortions are therefore dependent on the energy-release history of the Universe.
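The redshift windows described above amount to a simple classification rule; the following is a minimal sketch, where the boundary values are the approximate, order-of-magnitude ones quoted in the text:

```python
def distortion_type(z):
    """Classify the CMB spectral distortion produced by energy injection
    at redshift z, using the approximate boundaries quoted in the text."""
    if z > 2e6:
        return "none"      # above the blackbody surface: fully thermalized
    if z > 1e5:
        return "mu-type"   # kinetic equilibrium, Bose-Einstein spectrum
    if z > 1.5e4:
        return "i-type"    # intermediate between mu-type and y-type
    return "y-type"        # scattering too inefficient to redistribute energy

for z in (1e7, 5e5, 5e4, 1e3):
    print(f"z = {z:.0e}: {distortion_type(z)}")
```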

In this thesis we will focus on µ-type distortion from dark matter annihilation


in the early Universe. Starting from 1933, when the astronomer Fritz Zwicky estimated the overall mass of the Coma cluster from the velocity dispersion of the galaxies in it, there have been ample observations suggesting that a large portion of the matter component of the Universe is not visible and does not radiate. From measurements of galaxy rotation curves to gravitational lensing, it is now well established that around galaxies and in clusters of galaxies there are halos of dark matter. Nevertheless, the nature of dark matter remains unsettled. The currently plausible model for the nature of dark matter is that it is made of massive particles that interact only on the weak scale. As such particles will tend to have non-relativistic velocities in standard cosmological scenarios, they are classified as Cold Dark Matter (CDM). One of the plausible theoretical frameworks that provides a candidate for the particle dark matter is made of is supersymmetry (SUSY). The calculations of µ-type distortion in this thesis were based on data from a Bayesian analysis of supersymmetric models. If dark matter is composed of the lightest supersymmetric particle (LSP) of supersymmetric theories, it is possible to predict the amount of energy released into the plasma in the epoch in which µ-type distortions can be imprinted in the CMB spectrum. This analysis can give us an estimate of the µ-type distortion that we expect to observe in the CMB in the case that dark matter is indeed composed mainly of LSPs.

This thesis is organized as follows:

In section 2 we start by describing the physical mechanisms that cause spectral distortions to arise in the CMB. In addition, the subject of supersymmetry and dark matter candidates from supersymmetric theories is briefly reviewed. In section 3 the methods and results of the analysis are presented. The approach taken in part of the thesis is Bayesian inference; therefore, the topics of Bayesian analysis and Markov Chain Monte Carlo (MCMC) methods are reviewed at the beginning of section 3. In the implementation part we describe the computational methods used for the calculations. The results of the forecast of µ-type distortion for a supersymmetric scenario based on Bayesian analysis are presented in this section. Additionally, the subject of Ultracompact Minihalos (UCMHs) is presented, and a description of how to constrain dark matter properties under the hypothesis of the existence of UCMHs is given. The results regarding the analysis of spectral distortions in the presence of ultracompact minihalos in the early Universe are presented at the end of section 3.

In section 4 we draw conclusions from the analysis and the results of this thesis, and we discuss observational aspects of spectral distortions.


Theoretical Background

2.1 Spectral Distortions of the Cosmic Microwave Background

2.1.1 Big Bang Cosmology

Standard Cosmology

Modern cosmology rests on two major concepts that were established at the beginning of the 20th century. The first was the theory of general relativity, introduced by Albert Einstein in 1916. The second key concept was the birth of extragalactic astronomy and the discovery of the expansion of the Universe, which found its first observational evidence in the observations of redshifted galaxies by Edwin Hubble in the late 1920s. At the beginning of the 20th century there was growing evidence for the equivalence principle, which asserts the equality of gravitational mass and inertial mass. This principle led Einstein to develop the theory of general relativity. The theory puts time and space on the same footing, and connects those two entities into one structure of space-time. Another important discovery that came along with the theory of general relativity is that mass and energy are equivalent.

Unlike in the Newtonian framework, time is not an absolute entity that flows in the same way in every part of the Universe; it is tightly connected to the observer. While in Newtonian mechanics mass tells gravity how to exert a force and force tells mass how to accelerate, in Einstein's theory of relativity it is mass-energy that tells space-time how to curve and the curvature of space-time that tells mass-energy how to move¹. Einstein's equations supply us with a tool to

1This summary is based on a famous quote by physicist John Wheeler [Ryden (2002)].


calculate the geometry of spacetime given the mass-energy content:

Gµν + Λgµν = (8πG/c⁴) Tµν    (2.1)

where Gµν ≡ Rµν − ½Rgµν is the Einstein tensor, and Rµν and R are the Ricci tensor and scalar, respectively. The left-hand side of (2.1) describes the geometry of spacetime, while the right-hand side, the energy-momentum tensor Tµν, gives the matter-energy content.

Mass-energy moves along geodesics in a curved space-time. In order to calculate the geodesics, the mathematical structure of a metric is needed. While Newtonian mechanics did not take into consideration the fact that space itself can be curved, and so is embedded in Euclidean geometry, in Einstein's theory there is a need to calculate the geodesics of spacetime under a certain distribution of matter-energy. In order to solve eq. (2.1) the form of the metric g should be given. The Robertson-Walker metric was devised to describe a space-time that is assumed to be homogeneous and isotropic at all times. Although there were no observations that could support such homogeneity and isotropy of the Universe in the 1920s, it was taken as an axiom, termed the Cosmological Principle. It was Alexander Friedmann who derived the Friedmann equations in 1922, based on Einstein's field equation (eq. (2.1)) and the Robertson-Walker metric. These equations describe the dynamics of the Universe on large scales, on which the Cosmological Principle can be taken to apply. The Robertson-Walker metric, for a flat Universe (k = 0), can be written in the form:

ds² = −c²dt² + a(t)² (dx² + x²dΩ²)    (2.2)

Using the above metric, the 00 component of eq. (2.1) gives us the first of Friedmann's equations:

ȧ²/a² = (8πGρ + Λc²)/3    (2.3)

and the second equation is derived from the first together with the trace of eq. (2.1):

ä/a = −(4πG/3)(ρ + 3p/c²) + Λc²/3    (2.4)
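As a quick consistency check on eq. (2.3), one can integrate it numerically for a toy flat, matter-only universe (Λ = 0) and recover the well-known a(t) ∝ t^(2/3) behaviour. A minimal sketch in units where H0 = Ωm = 1, using the matter scaling ρ ∝ a⁻³ discussed later in this section:

```python
# Integrate the first Friedmann equation (2.3) for a flat, matter-only
# toy universe (Lambda = 0, rho ∝ a^-3) in units where H0 = Omega_m = 1,
# so that da/dt = a^(-1/2). Forward Euler is enough for a sketch.
def integrate_scale_factor(a0=1e-4, t_end=1.0, dt=1e-5):
    a, t, history = a0, 0.0, []
    while t < t_end:
        a += dt * a ** -0.5
        t += dt
        history.append((t, a))
    return history

hist = integrate_scale_factor()
a_half = min(hist, key=lambda p: abs(p[0] - 0.5))[1]
a_full = hist[-1][1]
# Matter domination predicts a ∝ t^(2/3), so a(1.0)/a(0.5) ≈ 2^(2/3) ≈ 1.587
print(a_full / a_half)
```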

The second milestone in the development of modern physical cosmology was, as mentioned, the discovery of the expansion of the Universe. In the late 1920s Edwin Hubble collected observations of galaxies. By this time it was established that the Universe does not comprise only the Milky Way, and Cepheids had been established as a useful standard candle to measure distances to extragalactic objects such as galaxies. Hubble measured distances together with the redshifts of known spectral lines in the spectra of those galaxies. He observed that most of the galaxies show known spectral lines which are redshifted with respect to the lines measured in laboratories. This was an indication that the galaxies, in general, have velocities directed away from our location. Subsequent observations established what is today termed Hubble's law: the velocity at which a galaxy recedes from Earth's location is approximately proportional to its distance:

cz = H0D    (2.5)

where z is the measured redshift of the observed galaxy, H0 is the Hubble constant, and D is the distance to the galaxy. The expansion of the Universe at any time in its evolution can be calculated from the rate of change of the scale factor a of the Universe, ȧ. The Hubble constant is indeed the rate of expansion normalized by the scale factor itself:

H(t) ≡ ȧ(t)/a(t)    (2.6)

and so H0 = ȧ(t0) (as the scale factor is taken to be 1 today). The Hubble constant is usually written as

H0 = 100 h km s⁻¹ Mpc⁻¹    (2.7)

where h is the dimensionless Hubble constant. The latest observation, published by the Planck mission in March 2013, has given h = 0.6780 ± 0.0077. In order to simplify the use of the Friedmann equation, it is customary to introduce the critical density

ρc(t) = 3H(t)²/(8πG)    (2.8)

If the total energy content of the Universe is equal to this critical density, the curvature of space-time, parametrized by k, will be zero; that is, the Universe will be flat. A further parametrization is introduced through the density parameter Ω ≡ ρ/ρc, which gives the ratio of the actual energy density of the Universe to the critical density. In order to solve the Friedmann equation we need to supply equations of state that connect the pressure and the energy density of each of the components of the Universe. These equations take the form Pi = ωρi, in which i denotes one of the three main known components of the Universe: matter, radiation and dark energy. The pressure that matter exerts is negligible, therefore for matter we take ω = 0. For radiation, or any component of the Universe which travels at relativistic velocities, we take


ω = 1/3. Regarding dark energy, although its nature is not understood, in order to obtain correspondence between the dark-energy component and the cosmological constant Λ, ω for dark energy should be −1. Combining the first and second Friedmann equations we can get the relation that connects the density of a given component of the Universe to ω and a:

ρ ∝ a^(−3(1+ω))    (2.9)

Therefore for the matter component we get ρm ∝ a⁻³, for the radiation component ρr ∝ a⁻⁴, and for the dark-energy component ρΛ ∝ a⁰ = constant. This is consistent with the Λ in eq. (2.1), which plays the role of the cosmological constant, if we set Λ = 8πGρΛ.
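These scalings can be made concrete with a few lines of code: evaluating H0 and the critical density of eq. (2.8) in SI units, then using ρm ∝ a⁻³ and ρr ∝ a⁻⁴ to locate the redshift at which the two densities cross. A sketch, assuming the Planck value h = 0.678 quoted above; the density parameters Ωm and Ωr below are illustrative round numbers, not values taken from this thesis.

```python
import math

# Evaluate H0 in SI units and the critical density of eq. (2.8), then
# use the scalings of eq. (2.9) to estimate the redshift of
# matter-radiation equality. h is the Planck 2013 value quoted above;
# Omega_m and Omega_r are illustrative round numbers.
G = 6.674e-11            # gravitational constant, m^3 kg^-1 s^-2
Mpc = 3.0857e22          # one megaparsec in metres
h = 0.678

H0 = 100.0 * h * 1000.0 / Mpc               # H0 = 100 h km/s/Mpc -> s^-1
rho_c = 3.0 * H0 ** 2 / (8.0 * math.pi * G)  # eq. (2.8), kg m^-3
print(f"H0 = {H0:.3e} s^-1, rho_c = {rho_c:.3e} kg m^-3")

# rho_m ∝ a^-3 and rho_r ∝ a^-4 cross when 1 + z_eq = Omega_m / Omega_r
Omega_m, Omega_r = 0.31, 9.2e-5
z_eq = Omega_m / Omega_r - 1.0
print(f"z_eq ≈ {z_eq:.0f}")
```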

Thermodynamics in an expanding Universe

The discovery of the expansion of the Universe led to the assumption of an initial state, the Big Bang, from which the Universe started its expansion. As both matter and radiation have higher density for a smaller scale factor, we expect the early Universe to have had higher densities and temperatures than today. As the Universe expands and the scale factor increases, it also cools down and the average energy per particle decreases. The density of the radiation component scales as a⁻⁴, while the matter component scales as a⁻³. Therefore, whatever the quantities of matter and radiation in the Universe, at a period early enough in its evolution the scale factor was such that the radiation density was higher than the matter density. As the Universe expands, the radiation density falls off at a higher rate than the matter density, and eventually the matter density becomes higher. With this simple picture we can divide the history of the evolution of the Universe into three epochs. The earliest epoch is the one in which the radiation density was the highest, and so the Friedmann equation can be treated as that of a single-component Universe containing only radiation. After a certain redshift, at which the scale factor is such that the matter density equals the radiation density, we can regard the matter component as the dominant one. In order to find this redshift we should find when ρm(z) = ρr(z), that is ρm,0/a³ = ρr,0/a⁴, and from a = 1/(1+z) we get

1 + zeq = 2.4 × 10⁴ Ωm,0h²    (2.10)

that gives us the redshift at which the matter component became the dominant component in the expansion of the Universe. Current observations show that the expansion of the Universe is accelerating, and this establishes the belief that the Universe is now entering the epoch in which the expansion is dominated by dark energy. As dark energy density stays constant throughout the expansion


history of the Universe, a time at which it becomes the dominant factor in the expansion of the Universe is certain, as the radiation and matter densities will be diluted enough to allow the dark energy to become the most dominant factor in Friedmann's equation. Temperature scales as T ∝ 1/a, and as temperature represents the average energy per particle, the expansion history of the Universe is closely connected to the average energy per relativistic particle during the expansion, and therefore to the possible interactions occurring between particles.

Radiation Dominated Epoch

Under the assumption of weak interactions between gas particles, we can derive several macroscopic properties of the gas from the phase-space distribution function f(p):

n = g/(2π)³ ∫ d³p f(p)    (2.11)

ρ = g/(2π)³ ∫ d³p E(p) f(p)    (2.12)

In thermal equilibrium the phase-space distribution function takes the form

f(p) = 1 / (exp[(E − µ)/T] ± 1)    (2.13)

where the + sign gives the Fermi-Dirac distribution for fermions and the − sign the Bose-Einstein distribution for bosons; µ is the chemical potential. Taking the phase-space distribution function to be Bose-Einstein, under the assumptions T ≫ m and µ = 0, we get the following forms for the number and energy densities of bosons:

n = (ζ(3)/π²) g T³    (2.14)

ρ = (π²/30) g T⁴    (2.15)

where ζ(n) is the Riemann zeta function. In the case of non-relativistic particles, where T ≪ m, taking the phase-space distribution function as either Fermi-Dirac or Bose-Einstein, we get that the distribution converges to a Maxwell-Boltzmann distribution:

n = g (mT/2π)^(3/2) exp[−(m − µ)/T]    (2.16)
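Eq. (2.14) can be checked directly by integrating the Bose-Einstein distribution of eq. (2.13) numerically (natural units, T ≫ m, µ = 0). A sketch using simple rectangle quadrature:

```python
import math

# Check eq. (2.14): integrating the Bose-Einstein phase-space density
# n = g/(2 pi^2) * ∫ p^2 dp / (e^(p/T) - 1)  (T >> m, mu = 0, natural units)
# should reproduce n = (zeta(3)/pi^2) g T^3.
def boson_number_density(T, g=2, xmax=50.0, steps=200000):
    dx = xmax / steps
    total = 0.0
    for i in range(1, steps + 1):   # skip x = 0, where the integrand vanishes
        x = i * dx
        total += x * x / math.expm1(x) * dx
    return g / (2.0 * math.pi ** 2) * T ** 3 * total

zeta3 = 1.2020569
numeric = boson_number_density(T=1.0)
analytic = (zeta3 / math.pi ** 2) * 2 * 1.0 ** 3
print(numeric, analytic)   # the two should agree closely
```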


At a sufficiently early period of the Universe, the temperature was high enough that all of the different particle species were in thermodynamical equilibrium. Under these conditions, as the average energy per particle was very high, processes of pair annihilation and production were possible in both directions:

p1 + p2 ⇌ p3 + p4    (2.17)

In order to quantify the rate at which such processes occur we need to calculate the reaction rate per particle:

Γ = n ⟨σv⟩    (2.18)

where n is the number density of the particle in question and ⟨σv⟩ is the thermally averaged cross-section times relative velocity. As long as the expansion rate of the Universe is lower than the reaction rate for the specific particle in question, this particle will be in thermodynamical equilibrium. In the early Universe, when the average energy per particle was high enough, very massive particles were created through mechanisms like pair production or other particle interactions. As the Universe expands and cools down, two effects influence the equilibrium of a specific particle species:

1. When the expansion rate, given by the Hubble parameter H(t), becomes larger than Γ, on average two particles will no longer be able to interact on time scales small enough for the two particles to be considered in thermodynamical equilibrium.

2. The average energy per particle drops as the Universe expands and cools down. As the average energy per particle becomes lower, the particles that can be produced by interactions are of lower mass.

From the point at which the density of a specific particle species becomes too low for sufficient interactions to maintain equilibrium, the particles are said to “freeze out”: the species decouples from the thermodynamical equilibrium and evolves independently.

Neutrino background and e⁺e⁻ annihilation

The reaction rate for neutrinos (ν), which have a cross-section on the weak scale, σw, is given by

Γν = nν c σw    (2.19)

as the velocity of ν is very close to the speed of light. Comparing the reaction rate with the Hubble parameter, for a Universe which is dominated by radiation,


will give us the temperature (time) at which the neutrinos decoupled from matter. This temperature is found to be T ∼ 1 MeV. At a slightly lower temperature, right after the decoupling of ν from equilibrium with the rest of the constituents of the Universe, the average energy per particle becomes lower than the electron-positron rest mass, and therefore the process of pair production of electrons and positrons stops and only electron-positron annihilation can continue to occur. This marks the period of e⁺e⁻ annihilation, which injects energy into the primordial plasma at redshifts in the range 10⁹ > z > 10⁸.
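The decoupling temperature can be recovered with the standard order-of-magnitude estimate Γ ∼ G_F²T⁵ (from σw ∼ G_F²T², nν ∼ T³ and v ≈ c) set against the radiation-dominated expansion rate H ∼ 1.66 √g* T²/M_Pl. A sketch in natural units; this is the textbook estimate, not the thesis's own calculation:

```python
# Order-of-magnitude estimate of the neutrino decoupling temperature:
# Gamma ~ G_F^2 T^5 (sigma_w ~ G_F^2 T^2, n ~ T^3, v ~ c) compared with
# H ~ 1.66 sqrt(g*) T^2 / M_Pl in a radiation-dominated universe.
# Natural units (GeV throughout).
G_F = 1.166e-5      # Fermi constant, GeV^-2
M_Pl = 1.221e19     # Planck mass, GeV
g_star = 10.75      # relativistic degrees of freedom at T ~ MeV

# Gamma = H  =>  G_F^2 T^5 = 1.66 sqrt(g*) T^2 / M_Pl  =>  solve for T
T_dec = (1.66 * g_star ** 0.5 / (G_F ** 2 * M_Pl)) ** (1.0 / 3.0)
print(f"T_dec ≈ {T_dec * 1e3:.2f} MeV")   # expect a value of order 1 MeV
```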

Primordial Nucleosynthesis

As long as the average kinetic energy per particle in the Universe was higher than the binding energy of nuclei (∼ 1 MeV), no nucleus heavier than that of atomic hydrogen could survive. Any nucleus that was synthesized during this period was immediately disrupted by high-energy photons. As the photon-to-baryon ratio η⁻¹ is very high, the high-energy tail of the photon energy distribution function was effective enough that the effective temperature at which protons started to bind together to form nuclei is of the order of ∼ 0.1 MeV. Primordial nucleosynthesis spans a range of temperatures in which the ongoing decrease of the temperature allowed the binding of nuclei with lower and lower binding energies. By the end of the primordial nucleosynthesis epoch the abundances of the different light elements in the Universe had stabilized. There is good agreement between the abundances of ⁴He, D, ³He and ⁷Li deduced from observations and the values calculated in primordial nucleosynthesis [Coc (2013)]. Primordial nucleosynthesis can be considered to end by redshift z ∼ 3 × 10⁸.

Blackbody Surface

As long as photons are tightly coupled to the matter particles in the early Universe, their spectrum will be a blackbody spectrum. The dominant physical mechanisms involved in the coupling of photons to matter particles are bremsstrahlung, Compton scattering and double Compton scattering. Once the blackbody spectrum is created, its form will be maintained by the adiabatic expansion of the Universe. Nevertheless, small perturbations from a perfect blackbody spectrum are possible once the physical mechanisms required to maintain thermodynamical equilibrium between photons and matter particles become less efficient. This transition can be assessed by comparing the Hubble expansion rate to the typical time scale of each of the mechanisms mentioned. At z ≳ 2 × 10⁶ the coupling of photons to matter particles by the above-mentioned mechanisms was strong enough that any energy injection into the primordial plasma would be immediately thermalized, and thermodynamical equilibrium would be restored on time scales much smaller than the expansion time scale of the Universe. Therefore z ∼ 2 × 10⁶ marks the blackbody surface of our Universe.

Relic Density

In the search for the right model to describe the nature of dark matter, plausible hypothetical particles are the Weakly Interacting Massive Particles (WIMPs). These particles are expected to interact through the weak force and gravitation only. The assumption that those particles have a cross section on the weak scale means that if they indeed exist, they would have been in thermal equilibrium at an early enough period of the Universe. In thermal equilibrium, those particles are constantly created and annihilated through self-interactions and interactions with other particles. The number density of a particle in thermal equilibrium is found by solving the relevant Boltzmann equation:

L[f(p, x)] = C[f(p, x)]    (2.20)

where f(p, x) is the phase-space distribution function, L is the Liouville operator and C is the collision operator. This equation takes the following form for a particle in thermal equilibrium:

dn/dt = −3Hn − ⟨σv⟩ (n² − n²eq)    (2.21)

As the Universe expands and the temperature decreases, reaction rates decrease as well. At the point where the reaction rates become lower than the Hubble expansion rate, the particle species in question is no longer able to maintain thermodynamical equilibrium with the rest of the constituents of the Universe and is said to “freeze out”. From this point unstable particles decay and gradually disappear from the Universe. Stable particles approach a non-vanishing constant comoving density, and this density is termed their thermal relic density.
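The freeze-out behaviour encoded in eq. (2.21) can be illustrated by integrating its comoving form, dY/dx = −(λ/x²)(Y² − Y²eq) with x = m/T and Y = n/s. A toy sketch: λ lumps the prefactors into one dimensionless constant whose value here is illustrative, not taken from any supersymmetric scenario, and a semi-implicit step is used because the equation is stiff while the species tracks equilibrium.

```python
import math

# Toy integration of the comoving form of eq. (2.21):
#   dY/dx = -(lam / x^2) * (Y^2 - Yeq^2),  x = m/T,  Y = n/s,
# with Yeq ∝ x^(3/2) e^(-x) for a non-relativistic species. lam is an
# illustrative dimensionless constant lumping s(m)<sigma v>/H(m).
def yeq(x):
    return 0.145 * x ** 1.5 * math.exp(-x)

def relic_abundance(lam=1e9, x_start=1.0, x_end=100.0, steps=200000):
    dx = (x_end - x_start) / steps
    x, Y = x_start, yeq(x_start)   # start on the equilibrium curve
    for _ in range(steps):
        k = lam / (x * x)
        # semi-implicit update: treat one factor of Y in the quadratic
        # damping term implicitly, which keeps the stiff phase stable
        Y = (Y + dx * k * yeq(x) ** 2) / (1.0 + dx * k * Y)
        x += dx
    return Y

Y_inf = relic_abundance()
print(Y_inf)   # small but far above yeq(100): the species has frozen out
```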

If we take the annihilation cross section to be on the scale of weak interactions, ⟨σv⟩ ∼ 1/M²weak, it is possible to show that in the standard cosmological scenario we get a thermal relic density that corresponds to the dark matter density of today, Ωh² ∼ 0.1 [Feng (2005)]. This correspondence of the expected thermal relic density, calculated by taking the typical weak-scale annihilation cross section, with the observed dark matter density of today is what is referred to as the WIMP miracle.

Recombination and Last Scattering Surface

The coupling between photons and matter particles is mediated by Thomson scattering of photons off electrons. As long as the energy per photon is high enough to keep the medium completely ionized, the photons and electrons will have approximately the same temperature, and through the Coulomb interactions between protons and electrons, the protons will have the same temperature as well. Using the Saha equation it is possible to show that in a short interval of redshift around z ∼ 1100, the ionization level of the plasma dropped considerably. Under this condition there were no more free electrons to maintain the coupling between photons and matter particles, and this marks the point of decoupling of photons. From this point onward the probability of interaction of photons with matter is exceedingly small. Therefore, the surface of last scattering of photons is at a redshift very close to the redshift of decoupling.
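The sharp drop in ionization around z ∼ 1100 can be reproduced with the Saha equation itself. A sketch, assuming round present-day values for the CMB temperature and baryon density (not parameters taken from this thesis):

```python
import math

# Saha equation for hydrogen recombination:
#   x_e^2 / (1 - x_e) = (1/n_b) (m_e k T / (2 pi hbar^2))^(3/2) e^(-B/kT)
# with T = T0 (1+z) and n_b = n_b0 (1+z)^3 (SI units).
k_B  = 1.381e-23          # Boltzmann constant, J/K
hbar = 1.055e-34          # reduced Planck constant, J s
m_e  = 9.109e-31          # electron mass, kg
B    = 13.6 * 1.602e-19   # hydrogen binding energy, J
T0   = 2.725              # CMB temperature today, K
n_b0 = 0.25               # baryons per m^3 today (approximate round value)

def ionization_fraction(z):
    T = T0 * (1.0 + z)
    n_b = n_b0 * (1.0 + z) ** 3
    S = (m_e * k_B * T / (2.0 * math.pi * hbar ** 2)) ** 1.5 \
        * math.exp(-B / (k_B * T)) / n_b
    # solve x^2 / (1 - x) = S  =>  x = (-S + sqrt(S^2 + 4S)) / 2
    return 0.5 * (-S + math.sqrt(S * S + 4.0 * S))

for z in (1800, 1400, 1100):
    print(z, ionization_fraction(z))
```

The ionization fraction falls from nearly 1 to well below a percent over a narrow redshift interval, which is the "short interval of redshift" referred to above.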

2.1.2 Spectral Distortions

In the standard ΛCDM cosmological scenario, the blackbody spectrum of the CMB is established at z > 2 × 10⁶; z ∼ 2 × 10⁶ is therefore defined as the blackbody surface. At this redshift, photon-production mechanisms such as bremsstrahlung and double Compton scattering, together with energy-redistribution mechanisms such as Compton scattering, are efficient enough that every energy injection is thermalized at a rate much faster than the expansion rate of the Universe. After z ≃ 2 × 10⁶, the photon-production mechanisms, and later the Comptonization mechanism, become less efficient, and energy or photon injection into the medium will result in a spectral distortion of the CMB. At early times, when Compton scattering is still efficient enough to redistribute the energies of the photons over the spectrum, the resulting distortion will be a µ-type distortion. When Compton scattering too is no longer efficient enough to redistribute the energy over the spectrum, an energy injection will develop into a y-type distortion.

[Figure 2.1: Timeline of the thermal history relevant for spectral distortions, from the Big Bang through the blackbody surface at z = 2 × 10⁶ (the blackbody photosphere, including e⁺/e⁻ annihilation at 10⁹ > z > 10⁸ and BBN at z ∼ 3 × 10⁸), the µ-distortion era down to z ∼ 10⁵, the intermediate-distortion era, the y-type distortion era, and the last-scattering surface at z ∼ 10³.]


At redshifts 1.5 × 10⁴ < z < 2 × 10⁵ the resulting distortion will be in between a y-type and a µ-type distortion, and is also termed an intermediate (i-type) distortion. This sort of distortion can give information both on the amount of energy injected and on the time of injection, so the energy-injection rate can be inferred from the shape of the i-type distortion, and from it the plausible mechanism of energy injection that caused the distortion.

Figure 2.2: Shape of the spectral distortions: (a) µ-type distortion; (b) y-type distortion [Danese and de Zotti (1977)].

Currently available observations from COBE/FIRAS place upper limits on the µ and y types of distortion. However, the plausible scenarios of energy injection before the time of recombination produce spectral distortions several orders of magnitude lower than the constraints given by COBE/FIRAS. The proposed PIXIE experiment would be able to measure spectral distortions with much higher accuracy, probing the levels of spectral distortion that might arise from adiabatic cooling of matter, WIMP dark matter annihilation and Silk damping.

Different Energy Injection Mechanisms

The energy-injection rate into the CMB depends on the dominant physical mechanism that produces the energy release. Each of the following processes has a different epoch in which it is effective, and a different evolution in time. The aim is to calculate the rate of energy injection, ε̇, as a function of redshift (time).


Electron-positron annihilation [Khatri and Sunyaev (2012a)]

In standard cosmological scenarios, electron-positron annihilation takes place in the redshift range 10⁹ > z > 10⁸. At redshift z ≲ 10⁸ the positron number density has dropped considerably below the number density of electrons, due to the ∼ 10⁻⁹ asymmetry between matter and antimatter. Nevertheless, due to the high number density of electrons it is still possible to have an electron-positron annihilation rate much faster than the expansion rate of the Universe, so the energy injection into the CMB can still be significant. However, this period is still early with respect to the blackbody surface boundary around z ∼ 2 × 10⁶, therefore the spectral distortion should be suppressed significantly. Analytical estimates of the µ-type distortion from electron-positron annihilation show that the CMB blackbody spectrum is maintained at a precision of 10⁻¹⁷⁸.

Big Bang Nucleosynthesis (BBN) [Khatri and Sunyaev (2012a)]

Big bang nucleosynthesis (BBN), occurring at z ∼ 3 × 10⁸, releases the binding-energy difference between hydrogen and the different nuclei that are produced. The dominant energy-injection mechanism of nucleosynthesis is the production of helium-4. Most of the binding energy from the fusion of hydrogen into helium-4 is lost to neutrinos (and therefore is not injected into the CMB). Nevertheless, the amount of energy released in the form of electromagnetically interacting particles is the highest in comparison with the other stages of BBN. As the optical depth during this stage of BBN is of the order of 10⁵, the spectral distortions are suppressed significantly and the resulting spectral distortion is negligible. The BBN stage that causes the highest spectral distortion is tritium decay. Although much smaller than the energy released during helium production, the energy released by tritium decay occurs at stages when the optical depth is much smaller, and so it causes a µ-type distortion of µ = 2 × 10⁻¹⁵.

Decaying relic particles (dark matter particles) [Chluba and Sunyaev (2012)]

The decay of unstable particles in the early Universe can also inject energy in the form of electromagnetically interacting particles, which will interact with the CMB photons. Depending on the lifetime of the specific particle, its influence on the blackbody spectrum can be either a µ-type or a y-type distortion.


Annihilating dark matter particles [Chluba and Sunyaev (2012)] [Khatri and Sunyaev (2012a)]

The ΛCDM model of the Universe takes into account the existence of cold dark matter. A plausible candidate for dark matter is a weakly interacting massive particle (WIMP). As a WIMP candidate will have an interaction cross section on the scale of weak interactions, we can calculate what the abundance of WIMPs would be if they were thermally produced in the early Universe. Such calculations give an amount of dark matter which matches the amount observed today in cosmological observations. This correspondence is the reason WIMPs are a favorable candidate for dark matter, and this coincidence is referred to as the WIMP miracle.

When the interaction rate between dark matter particles becomes smaller than the expansion rate of the universe, dark matter particles are no longer in thermal equilibrium with the other constituents of the Universe. After this point dark matter interactions drop significantly, and the number density of dark matter particles is conserved. Nevertheless, there are still residual annihilations between dark matter particles throughout the history of the universe. The rate of these interactions is proportional to the cross section and to the square of the number density of the dark matter particles:

-dε/dz ∝ m_WIMP × ⟨σv⟩ × n_dm^2  (2.22)

As the energy release is proportional to the square of the number density, most of the energy release from WIMPs takes place in the early stages in which spectral distortions can be imprinted on the CMB; therefore we should expect a µ-type distortion to arise in the WIMP annihilation scenario. For a 10 GeV WIMP with a cross section of the order of magnitude of weak-scale interactions, the total µ distortion should be µ ≈ 3 × 10^-9.

Cooling by adiabatically expanding ordinary matter [Chluba and Sunyaev (2012)]

After redshift z ∼ 10^8 the baryonic component of the Universe is non-relativistic. As a result of the expansion of the universe, the baryonic component expands and cools adiabatically with an adiabatic index of 5/3. This results in an electron temperature scaling of T_e ∝ (1+z)^2. As the photons cool with an adiabatic index of 4/3, their cooling is slower: T_γ ∝ (1+z).

Until the time of photon decoupling, the energy transfer from photons to electrons is very efficient, and so energy is transferred efficiently from the photons to the cooler electrons by comptonization. Therefore photons are continuously cooled by the baryonic part of the Universe, and photons from high


frequencies are down-scattered to lower frequencies. Photons that reach the low frequencies where the Bremsstrahlung and Double Compton processes are efficient are destroyed by those processes. The distortions that result from this sort of process will be similar in shape to a µ-type or y-type distortion, but with opposite sign.

Evaporation of primordial black holes [Tashiro and Sugiyama (2008)] [Pani and Loeb (2013)]

There are several scenarios in which black holes are formed in the very early universe. One of the plausible mechanisms for the creation of such primordial black holes (PBHs) is sufficiently large density fluctuations in the early Universe. As primordial black holes emit Hawking radiation, there is a certain threshold mass under which a black hole will have evaporated completely before the current time. Calculations show that black holes of mass greater than 10^15 g will survive until the present epoch. While PBHs evaporate they emit many kinds of particles. Except for neutrinos, all of the emitted particles may interact with the photons of the CMB. Photons that are emitted from PBHs will directly induce a µ-type distortion in the redshift range 10^6 > z > 10^5. Emitted photons can also interact with electrons through Compton scattering, and those scattered electrons can in turn interact with the CMB photons to create a y-type distortion.

Superconducting strings [Ostriker and Thompson (1987)][Tashiro et al. (2012)]

Phase transitions of the vacuum in the early Universe may result in very thin threads of the previous phase of the vacuum; these threads are termed cosmic strings. If a string is superconducting it is possible that it will radiate electromagnetic energy as it decays during the expansion of the Universe. This phenomenon results in additional energy injection into the medium, and would have kept the Universe ionized also after the assumed period of recombination. This energy will then influence the CMB spectrum, and the resulting distortion will be in the form of a y-type distortion. Depending on the mass scale of the strings, the magnitude of the y distortion can be in the range (1-5) × 10^-5.

Dissipation of primordial acoustic modes [Chluba and Sunyaev (2012)][Khatri and Sunyaev (2013)]

Primordial perturbations created acoustic waves in the baryon-electron-photon plasma before the recombination period. The dissipation of the primordial acoustic waves occurs at redshifts 1.5 × 10^4 < z < 2 × 10^6, and transfers energy from


comoving wavenumbers of 8 < k < 10^4 into the CMB monopole by diffusion damping. This damping of primordial perturbations was first calculated by Silk (1968). While about 2/3 of the energy from the dissipation of the perturbations causes an increase of the average temperature of the CMB, 1/3 of the energy results in spectral distortion. The amount of energy injected into the CMB monopole from Silk damping depends on the primordial perturbation power spectrum at the very small scales of these primordial acoustic waves, and in the parameter space allowed by the standard cosmological model the resulting µ-type distortion can be in the range 10^-8 to 10^-10.

Cosmological recombination

Cosmological recombination refers to the epoch in which there was a rapid change in the fraction of neutral hydrogen in the Universe. As the Universe expands, the average energy per photon decreases. Using Saha's equation it can be shown that around redshift z ∼ 1400 there was a rapid change in the fractional ionization of hydrogen: as fewer and fewer photons are able to ionize hydrogen, more electrons are captured by protons to form neutral hydrogen atoms. Each recombination of an electron and a proton releases the binding energy of 13.6 eV in the form of photons. As there is a sharp decrease in the number of free electrons, photon decoupling from the electrons follows closely after the recombination of hydrogen. Thereafter the thermalization process and any electron scattering are very slow and inefficient.

After recombination

There are several spectral distortions that can arise in the CMB spectrum from energy injection mechanisms occurring after photon decoupling. Those distortions are characterized as being anisotropic, with sizes that depend on the spatial extent of the specific mechanism. The spectral distortions in this context are referred to as the Sunyaev-Zeldovich effect, and the origin of the energy injection can be hot electrons in the intracluster medium. Other mechanisms of energy injection occurring after the period of recombination are:

• Signatures due to first supernovae and their remnants
• Shock waves arising due to large scale structure formation
• Effects of reionization
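As an illustration of the Saha-equation estimate of the recombination epoch mentioned above, the following sketch solves the Saha equation for the free-electron fraction. The present-day baryon density and CMB temperature used here are assumed fiducial values (not taken from the text), and helium is ignored:

```python
import math

# Physical constants (SI units)
k_B = 1.380649e-23        # Boltzmann constant [J/K]
m_e = 9.109e-31           # electron mass [kg]
hbar = 1.0546e-34         # reduced Planck constant [J s]
E_ion = 13.6 * 1.602e-19  # hydrogen ionization energy [J]

# Assumed fiducial values
T0 = 2.725                # CMB temperature today [K]
n_b0 = 0.25               # baryon number density today [1/m^3]

def x_e(z):
    """Free-electron fraction from the Saha equation,
    x_e^2 / (1 - x_e) = S, ignoring helium."""
    T = T0 * (1.0 + z)
    n_b = n_b0 * (1.0 + z)**3
    S = (m_e * k_B * T / (2 * math.pi * hbar**2))**1.5 \
        * math.exp(-E_ion / (k_B * T)) / n_b
    # positive root of x^2 + S x - S = 0
    return 0.5 * (-S + math.sqrt(S * S + 4 * S))

print(x_e(1800), x_e(1400), x_e(1200))  # ionization drops steeply around z ~ 1400
```

Evaluating this around z ≈ 1400 shows the ionization fraction falling from near unity to a few percent over a narrow redshift interval, consistent with the rapid recombination described above.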

2.1.3 Thermalization of the CMB spectrum

As stated, we expect that the CMB came out of an era in which, due to the high density of electrons and positrons, the free-free emission and absorption processes


were efficient enough to establish full thermal equilibrium. Up until the recombination of electrons and protons (z ∼ 1000), the time scales which characterize the interactions between the different parts of the plasma were much smaller than the time scale of the expansion of the universe (as calculated from the Hubble parameter). This thermal contact between matter and radiation was ensured by the short time scale of the Compton scattering between electrons and photons, and by the Coulomb interaction between electrons and protons. The recombination of electrons and protons caused a drop in the free-electron density and therefore reduced the photon-electron interaction rate. This stage marks photon decoupling. Up until photon decoupling we can consider the various components of the plasma as having approximately the same temperature (T_r ≈ T_e ≈ T_p). Therefore the CMB spectrum is predicted to be Planckian with high accuracy.

Injection of energy into the plasma prior to the epoch of recombination would perturb the system from thermal equilibrium. In order to study the evolution of the CMB spectrum after an arbitrary energy injection we need to deal with a problem of nonequilibrium statistical mechanics. The kinetic equation of the photon gas (the Boltzmann equation) describes the evolution of the photon occupation number, n_γ(ν), when the system is not in thermodynamic equilibrium. The Boltzmann equation for photons can be written in the form:

dn_ν/dt = ∂n_ν/∂t + (∂n_ν/∂x^i)(∂x^i/∂t) + (∂n_ν/∂p)(∂p/∂t) + (∂n_ν/∂p̂^i)(∂p̂^i/∂t) = C[n]  (2.23)

In a homogeneous and isotropic expanding universe the Liouville term (the middle part of the above equation) can be expressed in terms of the Hubble parameter as dn_ν/dt = ∂n_ν/∂t − Hν ∂n_ν/∂ν.

In order to solve the kinetic equation we need to understand the form of the collision term C[n]. The three most dominant physical processes involved in the thermalization of the plasma are Compton scattering, double Compton scattering and bremsstrahlung. The collision term (the right-hand side of eq. (2.23)) in the presence of those three interactions takes the form

C[n] = dn_ν/dt|_C + dn_ν/dt|_BR + dn_ν/dt|_DC  (2.24)

Compton scattering is the effective interaction that can redistribute the photons over the entire spectrum once energy is injected into the medium. This process on its own can bring the photons to a Bose-Einstein spectrum as long as the rate of comptonization (the characteristic rate by which photons are redistributed over the entire spectrum by Compton scattering) is greater than the expansion rate (the Hubble parameter). However, Compton scattering by itself cannot bring the photons to a new thermodynamic equilibrium state once energy is injected into


the medium. This is because Compton scattering conserves the photon number, and in order to reach a full thermodynamic equilibrium for a system of photons and electrons with higher energy, the photon number should increase as well. The two processes that can emit and absorb photons are Bremsstrahlung and Double Compton scattering. Before redshift z ∼ 2 × 10^6 those two processes are fast enough such that Compton scattering (which is also much faster than the expansion rate of the universe at this stage) is able to redistribute the photons over the entire spectrum and a Planck spectrum is reestablished. After z ∼ 2 × 10^6, energy injection gives rise to a spectral distortion from the Planck spectrum. In the redshift range 2 × 10^6 ≳ z ≳ 10^5, energy injection gives rise to a µ-type distortion, and below redshift z ≲ 10^4 energy injection gives rise to a y-type distortion. In order to

study the evolution of the spectrum, we need to calculate the three components of the collision term of the kinetic equation. Analytically this is done by finding out what is the formula for the rates of each process, and what is the equation that describes the influence of the specific process over the photon distribution.

The Compton scattering γ + e ⇔ γ + e

Compton scattering is the inelastic scattering of a photon by a charged particle. In our context the inverse Compton scattering, the upscattering of a photon by a charged particle, is more relevant. Since the electron cross section is much larger than that of the proton, it is sufficient to consider only the electrons as the scatterers of photons. The rate of Compton scattering is proportional to the electron density and the electron temperature:

K_C = n_e σ_T c (k_B T_e/(m_e c^2))  (2.25)
    = 2.045 × 10^-30 (1+z)^4 (Ω_b h_0^2/0.0226) ((1 − Y_He/2)/0.88) s^-1  (2.26)

where Ω_b is the baryon density parameter, h_0 is the Hubble parameter and Y_He is the primordial helium mass fraction.

The collision term that describes the Compton scattering process in eq. (2.24), dn_ν/dt|_C, can be described by the Kompaneets equation. The Kompaneets equation describes the evolution of the photon phase space density n(ν) due to repeated scattering off nonrelativistic electrons. It can be written in the form:

dn(x)/dt|_C = K_C (1/x^2) ∂/∂x { x^4 [ ∂n(x)/∂x + n + n^2 ] }  (2.27)

where x ≡ hν/(k_B T_e) is the dimensionless frequency corresponding to frequency ν.

(23)

The three terms in the brackets describe photon diffusion in frequency due to the Doppler effect, electron recoil and induced recoil effects, respectively.
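As a quick numerical check of the statement that Compton scattering alone drives the photons toward a Bose-Einstein spectrum, the bracketed flux term ∂n/∂x + n + n^2 of eq. (2.27) can be evaluated directly: it vanishes identically for a Bose-Einstein occupation number n(x) = 1/(e^(x+µ) − 1) with any chemical potential µ, but not for other spectra. A minimal sketch (the function names are illustrative, not from the text):

```python
import math

def flux_term(n, x, h=1e-6):
    """The bracketed term of the Kompaneets equation,
    dn/dx + n + n^2, with a central finite difference for dn/dx."""
    dndx = (n(x + h) - n(x - h)) / (2 * h)
    return dndx + n(x) + n(x)**2

def bose_einstein(mu):
    return lambda x: 1.0 / (math.exp(x + mu) - 1.0)

def wien(x):
    return math.exp(-x)  # a non-equilibrium (Wien) spectrum

# The Compton collision term vanishes for any Bose-Einstein spectrum...
for mu in (0.0, 0.1, 1.0):
    for x in (0.5, 1.0, 3.0):
        assert abs(flux_term(bose_einstein(mu), x)) < 1e-6
# ...but not for a Wien spectrum
assert abs(flux_term(wien, 1.0)) > 1e-3
```

This is exactly why repeated Compton scattering, which conserves photon number, relaxes the spectrum to a Bose-Einstein form with some µ rather than to a Planck spectrum.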

Double Compton scattering γ + e → γ + γ + e

Double Compton scattering is the first-order radiative correction to Compton scattering. This scattering does not conserve the number of photons, and it is the main source of low frequency photons at high redshifts, z > 2 × 10^5. The scattering rate of Double Compton scattering is proportional to the electron density, but to the square of the electron temperature:

K_dC = n_e σ_T c (4α_fs/(3π)) (k_B T_e/(m_e c^2))^2 g_dC(x_e) I_dC
     = 7.561 × 10^-41 (1+z)^5 g_dC(x_e) (Ω_b h_0^2/0.0226) ((1 − Y_He/2)/0.88) s^-1  (2.28)

where g_dC is the Gaunt factor for double Compton scattering. The evolution of the photon phase space density under the influence of Double Compton scattering alone takes the form:

dn(x)/dt|_dC = K_dC (e^-x/x^3) [1 − n(x)(e^x − 1)]  (2.29)

The Bremsstrahlung process p + e ⇔ p + e + γ

Bremsstrahlung is the emission or absorption of a photon due to the acceleration (or deceleration) of a charged particle in the Coulomb field of another charge. It can be considered the first-order radiative correction to Coulomb scattering. At low redshifts, z ≲ 10^5, Bremsstrahlung becomes the main source of low energy photons. The characteristic rate of the Bremsstrahlung process is proportional to both the electron density and the baryon density (representing the scattering charged particles):

K_br = n_e σ_T c (α_fs n_B/(24π^3)^(1/2)) (k_B T_e/(m_e c^2))^(-7/2) (h/(m_e c))^3 g_br(x_e, T_e)
     = 2.074 × 10^-27 (1+z)^(5/2) g_br(x_e, T_e) (Ω_b h_0^2/0.0226)^2 ((1 − Y_He/2)/0.88) s^-1  (2.30)

where g_br is the average Gaunt factor for bremsstrahlung. The evolution of the photon phase space density under the influence of the bremsstrahlung process alone takes the form:

dn(x)/dt|_br = K_br (e^-x/x^3) [1 − n(x)(e^x − 1)]  (2.31)
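The scalings above can be compared directly with the expansion rate, reproducing the qualitative behavior shown in Figure 2.3. The sketch below uses the numerical prefactors of eqs. (2.26), (2.28) and (2.30) with the fiducial parameter values (Ω_b h_0^2 = 0.0226, Y_He = 0.24), sets the Gaunt factors to unity for simplicity, and approximates the radiation-era Hubble rate as H(z) ≈ 2.1 × 10^-20 (1+z)^2 s^-1 — an assumed fiducial value, not from the text:

```python
# Thermalization rates vs. the Hubble rate (Gaunt factors set to 1).
def K_C(z):    # Compton scattering rate, eq. (2.26) [1/s]
    return 2.045e-30 * (1 + z)**4

def K_dC(z):   # double Compton rate, eq. (2.28) [1/s]
    return 7.561e-41 * (1 + z)**5

def K_br(z):   # bremsstrahlung rate, eq. (2.30) [1/s]
    return 2.074e-27 * (1 + z)**2.5

def H(z):      # radiation-era Hubble rate [1/s] (assumed fiducial value)
    return 2.1e-20 * (1 + z)**2

for z in (1e4, 1e5, 1e6, 2e6):
    print(f"z={z:.0e}  K_C/H={K_C(z)/H(z):.2e}  "
          f"K_dC/H={K_dC(z)/H(z):.2e}  K_br/H={K_br(z)/H(z):.2e}")
```

Comptonization stays far faster than the expansion throughout this range (K_C/H grows as (1+z)^2), while the photon-producing processes are much slower; the effective thermalization redshift z ∼ 2 × 10^6 emerges from the combination [(K_dC + K_br) K_C]^(1/2) entering the blackbody optical depth of eq. (2.53) below rather than from K_dC > H alone.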


Figure 2.3: Rates of the three dominant physical processes (Comptonization; Bremsstrahlung and Double Compton, both at x = 0.01; and the electron-temperature equilibration rate) with respect to the Hubble expansion rate [Khatri and Sunyaev (2012a)].

2.1.4 Analytic Solutions for the Spectral Distortions of the CMB spectrum

The distinction between the epoch in which an energy injection causes a µ-type distortion and the epoch in which it causes a y-type distortion is made by assessing the efficiency of the comptonization process. The Compton y parameter is used to evaluate the effectiveness of the redistribution of the energy of photons throughout the spectrum by repeated scattering:

y(z) = ∫_0^z dz' (k_B T_γ/(m_e c^2)) σ_T n_e c / (H(1+z'))  (2.32)

The µ-type distortion occurs while the comptonization is still very efficient, and is therefore also termed saturated comptonization [Khatri and Sunyaev (2012b)]. Accordingly, a µ-type distortion is created in the redshift range for which eq. (2.32) gives y > 1. On the other hand, a y-type distortion, also termed minimal comptonization, is created in the redshift range for which eq. (2.32) gives y < 0.01 << 1. In the redshift range for which 0.01 < y < 1, energy injection causes an intermediate-type distortion. Calculation shows that a good estimate for the redshift dividing µ-type from y-type distortion is z ∼ 5 × 10^4 [Khatri and Sunyaev (2012a)].
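A rough numerical evaluation of eq. (2.32) illustrates the regimes quoted above. The sketch below assumes a fully ionized medium, a fiducial present-day electron density, and a radiation-era Hubble rate (all assumed values, not taken from the text):

```python
import math

# Assumed fiducial values (SI units)
k_B, m_e, c = 1.380649e-23, 9.109e-31, 2.998e8
sigma_T = 6.652e-29   # Thomson cross section [m^2]
T0 = 2.725            # CMB temperature today [K]
n_e0 = 0.22           # free-electron density today, fully ionized [1/m^3]
H_rad = 2.1e-20       # H(z) ~ H_rad*(1+z)^2 in the radiation era [1/s]

def y(z, n=5000):
    """Compton y parameter of eq. (2.32), trapezoid rule in z."""
    total, dz = 0.0, z / n
    for i in range(n + 1):
        zp = i * dz
        Tg = T0 * (1 + zp)
        ne = n_e0 * (1 + zp)**3
        H = H_rad * (1 + zp)**2
        f = (k_B * Tg / (m_e * c**2)) * sigma_T * ne * c / (H * (1 + zp))
        total += f * dz * (0.5 if i in (0, n) else 1.0)
    return total

print(y(1e4), y(2e6))  # y < 0.01 at z ~ 1e4; y >> 1 near the blackbody surface
```

With these rough inputs y crosses unity at z of order 10^5, in order-of-magnitude agreement with the dividing redshift z ∼ 5 × 10^4 quoted above, which comes from a more careful treatment.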

In the limit of small spectral distortions, it is convenient to write the total spectrum as n(x) = n_pl(x) + ∆n(x), where x is the dimensionless frequency x = hν/(k_B T). Here T is the CMB reference temperature: the temperature that would give the same total number density of photons if the spectrum were a perfect blackbody spectrum. ∆n is then the distortion from the reference blackbody spectrum.

Calculation of y-type distortion

For the calculation of the y-type distortion we work in the approximation that the collision term consists only of the Compton term [Zeldovich and Sunyaev (1969)], i.e. the Kompaneets equation:

dn(x)/dt|_C = K_C (1/x^2) ∂/∂x { x^4 [ ∂n(x)/∂x + n + n^2 ] }  (2.33)

In the limit that the spectrum is very close to a blackbody spectrum we can take

n(x) ≈ 1/(e^x − 1)  (2.34)

From this we can get to ∆n/n_pl by multiplying the Kompaneets equation by t and dividing by n_pl, to obtain an equation of the form ∆n ≈ y Y(x), where

y = ∫ (k_B[T_e − T_γ]/(m_e c^2)) σ_T N_e c dt  (2.35)

is the magnitude of the y-type distortion, and

Y(x) = (x e^x/(e^x − 1)^2) [ x (e^x + 1)/(e^x − 1) − 4 ]  (2.36)

is the spectrum of the y-type distortion.
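As a sanity check of eq. (2.36), the y-distortion spectrum can be evaluated numerically: it is negative at low frequencies and positive at high frequencies, with a null where x coth(x/2) = 4, i.e. at x ≈ 3.83. A minimal sketch:

```python
import math

def Y(x):
    """Spectral shape of the y-type distortion, eq. (2.36)."""
    ex = math.exp(x)
    return x * ex / (ex - 1)**2 * (x * (ex + 1) / (ex - 1) - 4.0)

# Locate the null of Y(x) by bisection on [3, 5]
lo, hi = 3.0, 5.0
for _ in range(60):
    mid = 0.5 * (lo + hi)
    if Y(lo) * Y(mid) <= 0:
        hi = mid
    else:
        lo = mid

print(Y(2.0) < 0, Y(5.0) > 0, round(0.5 * (lo + hi), 3))  # null near x = 3.83
```

The null at x ≈ 3.83 corresponds to ν ≈ 217 GHz, the frequency at which the thermal Sunyaev-Zeldovich signal of galaxy clusters (a y-type distortion, discussed later in this section) vanishes.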

Calculation of µ-type distortion

The µ-type distortion is calculated with the assumption that the spectrum has a Bose-Einstein distribution

n(x) = 1/(e^(x+µ) − 1)  (2.37)

In the limit µ << 1 we can Taylor expand the Bose-Einstein distribution to get

n(x) ≈ n_pl(x) − µ e^x/(e^x − 1)^2  (2.38)


where the distortion from the blackbody spectrum ∆n is

∆n(x) = −µ e^x/(e^x − 1)^2  (2.39)

The total number density and energy density are given by

n = (g/(2π)^3) ∫ d^3p f(p)  (2.40)
ρ = (g/(2π)^3) ∫ d^3p E(p) f(p)  (2.41)

Taking the phase space distribution function f(p) to be given by eq. (2.38), we arrive at the following forms for the total number density and energy density:

N = (b_R T_e^3/I_2) ∫ dx_e x_e^2 n(x_e) ≈ b_R T_e^3 (1 − µ π^2/(3 I_2))  (2.42)
E = (a_R T_e^4/I_3) ∫ dx_e x_e^3 n(x_e) ≈ a_R T_e^4 (1 − 6µζ(3)/I_3)  (2.43)

where a_R = 8π^5 k_B^4/(15 c^3 h^3) is the radiation constant, b_R = 16π k_B^3 ζ(3)/(c^3 h^3), ζ is the Riemann zeta function, I_n ≡ ∫ x^n n_pl(x) dx and n_pl(x) = (e^x − 1)^-1. The reference temperature is taken to be the electron temperature T_e, which is taken to be equal to the temperature of the reference blackbody T_γ, and so x_e ≡ hν/(k_B T_e) (the dimensionless frequency of photons with respect to T_e). Next we define normalized versions of the above densities, E = E/E_γ and N = N/N_γ, where E_γ and N_γ are the energy density and number density of a blackbody spectrum with temperature T_γ = T_CMB(1+z). Normalizing the energy and number densities to those of the blackbody spectrum cancels the effect of expansion, as both E and E_γ have the same dependence on redshift. We are interested in the rates of energy injection Ė and photon injection Ṅ into the plasma from the various possible energy injection mechanisms, in order to account for the µ-type distortion of the specific mechanism. Here

Ė/E ≡ (d/dt) ln(E/E_γ)  (2.44)
Ṅ/N ≡ (d/dt) ln(N/N_γ)  (2.45)


For small distortions we have E ≈ E_γ and N ≈ N_γ, therefore Ė/E = Ė and Ṅ/N = Ṅ, and we get

Ė = 4 (d/dt) ln(T_e/T_γ) − (6ζ(3)/I_3) dµ/dt  (2.46)
Ṅ = 3 (d/dt) ln(T_e/T_γ) − (π^2/(3 I_2)) dµ/dt  (2.47)
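The numerical coefficients appearing in eqs. (2.42)-(2.43) and carried through to eqs. (2.46)-(2.47) follow from standard Bose-Einstein integrals: ∫ x^2 e^x/(e^x − 1)^2 dx = π^2/3, ∫ x^3 e^x/(e^x − 1)^2 dx = 6ζ(3), I_2 = 2ζ(3) and I_3 = π^4/15. A quick check by trapezoidal quadrature (a sketch; cutoffs and step counts are arbitrary choices):

```python
import math

def integrate(f, a=1e-6, b=60.0, n=100_000):
    """Simple trapezoidal quadrature of f on [a, b]."""
    h = (b - a) / n
    s = 0.5 * (f(a) + f(b))
    for i in range(1, n):
        s += f(a + i * h)
    return s * h

# |d n_BE / d mu| at mu = 0, the kernel of the linearized expansion (2.38)
dn_dmu = lambda x: math.exp(x) / (math.exp(x) - 1)**2
zeta3 = 1.2020569031595943  # Riemann zeta(3)

print(integrate(lambda x: x**2 * dn_dmu(x)), math.pi**2 / 3)          # ~3.2899
print(integrate(lambda x: x**3 * dn_dmu(x)), 6 * zeta3)               # ~7.2123
print(integrate(lambda x: x**2 / (math.exp(x) - 1)), 2 * zeta3)       # I_2
print(integrate(lambda x: x**3 / (math.exp(x) - 1)), math.pi**4 / 15) # I_3
```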

An analytical solution of the kinetic equation can be reached by assuming that the instantaneous spectrum is stationary. Under this assumption all of the time derivatives are zero, and the kinetic equation takes the simple form

∂n(x, t)/∂t = 0  (2.48)

and therefore

0 = −K_C (1/x^2) (d/dx)(x^2 dµ/dx) + (K_dC + K_br) µ/x^4  (2.49)

Under this assumption we arrive at the following form for µ at z = 0:

µ(0) = µ(z_i) e^(−T(z_i)) + C B ∫_{z_min}^{z_i} dz/((1+z)H) [ Ė/E − (4/3) (Ṅ/N)|_Extra ] e^(−T(z))  (2.50)

where (Ṅ/N)|_Extra counts the extra photons injected by processes other than those created by bremsstrahlung and double Compton scattering, C = 0.7768, B = 1.803, and T(z) is the effective blackbody optical depth, which is a function of the rates of bremsstrahlung, Compton scattering and double Compton scattering. In the typical case of our study, pair annihilation of dark matter particles, the term (Ṅ/N)|_Extra is negligible. Furthermore, we are not interested in any initial µ-type distortion, as we take z_i, the initial redshift from which the energy injection into the plasma starts, to be the redshift of the blackbody surface, z_i = 2 × 10^6. We take z_min ≈ 5 × 10^4, since the analytical solution can be used down to this redshift; after z ∼ 10^5 (i.e. at lower redshifts) the spectral distortion gradually becomes of y-type. Furthermore, as the energy injection from particle annihilation depends on the average of the density squared, the energy injection from this source was higher at earlier stages of the expansion of the Universe. Therefore the spectral distortion arising from dark matter annihilation will be a µ-type distortion. In the standard cosmological scenario we expect the dark matter particles to reach a stable level of density, termed the relic density, after they come out of thermal equilibrium with the other constituents of the Universe. After the interactions with the other constituents of the Universe freeze


out, the dark matter number is conserved to a high degree, but there is still a residual annihilation, given by

Ė_dm(z) = f_γ m_dm c^2 n_dm^2(z) ⟨σv⟩ / (a_R T^4(z))  (2.51)

where m_dm is the mass of the dark matter particle, f_γ is the fraction of energy that is injected into the medium in the form of electromagnetically interacting particles, and a_R is the radiation constant. As mentioned before, Ė ≈ (d/dt) ln(E/E_γ); therefore the term a_R T^4(z) in eq. (2.51) is the normalization of the energy injection to the energy density of the blackbody spectrum at the specific redshift.

Figure 2.4: Energy injection from dark matter annihilation, (1+z) dε/dz versus redshift, for a 10 GeV particle with ⟨σv⟩ = 3 × 10^-27/(Ω_dm h_0^2) cm^3 s^-1 and f_γ = 1.

As we do not consider any initial µ-type distortion and omit the (Ṅ/N)|_Extra term, we obtain for the µ-type distortion from dark matter annihilation

µ_dm(0) = C B ∫_{z_min}^{z_i} dz/((1+z)H(z)) × Ė_dm(z) × e^(−T(z))  (2.52)

where the effective blackbody optical depth T(z) is given by [Khatri and Sunyaev (2012a)]

T(z) = ∫_0^z dz' { 1.007 C [(K_dC + K_br) K_C]^(1/2) + 2.958 C (K_dC + K_br) } / ((1+z')H)
     ≈ 1.007 [ ((1+z)/(1+z_dC))^5 + ((1+z)/(1+z_br))^(5/2) ]^(1/2)
       + 1.007 ε ln[ ((1+z)/(1+z_ε))^(5/4) + (1 + ((1+z)/(1+z_ε))^(5/2))^(1/2) ]
       + [ ((1+z)/(1+z_dC'))^3 + ((1+z)/(1+z_br'))^(1/2) ]  (2.53)

where the numerical values of the parameters are

z_dC = 1.96 × 10^6, z_br = 1.05 × 10^7, z_ε = 3.67 × 10^5, ε = 0.0151, z_dC' = 7.11 × 10^6, z_br' = 5.41 × 10^11
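Equations (2.51)-(2.53) can be combined into a direct numerical estimate of µ_dm(0). The sketch below assumes fiducial present-day dark matter and photon energy densities and a radiation-era Hubble rate (all assumed values, not taken from the text), and lands at the order of magnitude µ ∼ 3 × 10^-9 quoted earlier for a 10 GeV WIMP:

```python
import math

# Assumed fiducial values
H_rad = 2.1e-20        # H(z) ~ H_rad*(1+z)^2 [1/s], radiation era
rho_dm0 = 1.26e-6      # dark matter energy density today [GeV/cm^3]
rho_g0 = 2.6e-10       # photon energy density today [GeV/cm^3]
C, B = 0.7768, 1.803   # constants of eq. (2.50)

# Parameters of the effective blackbody optical depth, eq. (2.53)
z_dC, z_br, z_eps, eps = 1.96e6, 1.05e7, 3.67e5, 0.0151
z_dC2, z_br2 = 7.11e6, 5.41e11

def tau(z):
    """Approximate blackbody optical depth T(z), eq. (2.53)."""
    r = lambda zc: (1.0 + z) / (1.0 + zc)
    return (1.007 * math.sqrt(r(z_dC)**5 + r(z_br)**2.5)
            + 1.007 * eps * math.log(r(z_eps)**1.25
                                     + math.sqrt(1 + r(z_eps)**2.5))
            + r(z_dC2)**3 + r(z_br2)**0.5)

def Edot_dm(z, m_dm=10.0, sigv=3e-26, f_gamma=1.0):
    """Normalized energy injection rate of eq. (2.51) [1/s];
    m_dm in GeV, <sigma v> in cm^3/s."""
    n_dm = (rho_dm0 / m_dm) * (1.0 + z)**3   # number density [1/cm^3]
    return f_gamma * m_dm * n_dm**2 * sigv / (rho_g0 * (1.0 + z)**4)

def mu_dm(m_dm=10.0, sigv=3e-26, z_min=5e4, z_max=2e6, n=4000):
    """Eq. (2.52) by trapezoidal integration in ln z."""
    dlnz = math.log(z_max / z_min) / n
    total = 0.0
    for i in range(n + 1):
        z = z_min * math.exp(i * dlnz)
        f = (z * Edot_dm(z, m_dm, sigv) * math.exp(-tau(z))
             / ((1 + z) * H_rad * (1 + z)**2))   # dz = z dlnz
        total += f * dlnz * (0.5 if i in (0, n) else 1.0)
    return C * B * total

print(mu_dm())  # a few times 1e-9 for a 10 GeV WIMP with <sigma v> = 3e-26 cm^3/s
```

Scanning m_dm or ⟨σv⟩ reproduces the µ ∝ ⟨σv⟩/m_dm scaling implied by eq. (2.51), since n_dm ∝ 1/m_dm at fixed relic density.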


2.2 Supersymmetry

There is ample, unambiguous, quantitative evidence for new particle physics. Cosmological measurements supply us with the amounts of baryons, matter and dark energy in the Universe. The non-baryonic dark matter component, in units of the critical density, should be in the range 0.094 < Ω_DM h^2 < 0.129 (95% CL) [Tegmark et al. (2004)]. In contrast to the well-established constraints on the phenomenology of the dark matter component in the astrophysical context, at the microscopic level the properties of dark matter and dark energy are unconstrained. Ongoing research in theoretical particle physics aims to come up with valid candidates for dark matter. An important requirement for a candidate dark matter particle is that its resulting density, after the various processes in the early Universe, lies in the range of the dark matter density calculated from cosmological measurements.

Evidence for WIMP as a plausible candidate for dark matter

The model of the Universe which comprises Cold Dark Matter (CDM) and a cosmological constant (Λ) is termed the ΛCDM model. As cosmological observations have pinned down the ratio of baryons to photons in the Universe, using the knowledge from the Standard Model of particle physics it was shown that the baryonic component should be on the order of 4.5% [Planck Collaboration et al. (2013)] of the total content of the Universe. Therefore, the search for plausible dark matter particles is focusing on theories beyond the Standard Model of particle physics.

In the realm of particle physics there is both observational and theoretical evidence of the need to find extensions of the Standard Model. Oscillations of atmospheric neutrinos cannot be explained in the theoretical framework of the Standard Model, since the Standard Model does not predict any mass for neutrinos. Alongside this evidence, there are some compelling arguments in favor of supersymmetry, among them:

• Running coupling constants - Supersymmetry can account for the energy dependence of the three running coupling constants such that they converge at the GUT scale, making grand unification theories possible.

• Hierarchy problem - The quantum (radiative) corrections to the mass of the Higgs boson require a fine-tuned cancellation between the classical Higgs mass (at the electroweak scale) and the Planck energy scale (which appears in the radiative correction term). This unnatural fine-tuning requirement is what is termed the Hierarchy problem. By introducing supersymmetric partners to the Standard Model particles the Hierarchy problem can be overcome. Since loop corrections by bosons and fermions are of opposite signs, pairing particles with their supersymmetric partners causes an exact and automatic cancellation of the quantum corrections.

Supersymmetry and MSSM

Supersymmetry consists of introducing a generator that changes fermions into bosons and bosons into fermions:

Q|fermion⟩ = |boson⟩ ;  Q|boson⟩ = |fermion⟩  (2.54)

The implication of such a symmetry is that every fermion has a bosonic partner and every boson has a fermionic partner. If supersymmetry were unbroken, the masses of the superpartners of the Standard Model particles would be in a range detectable by current experiments, as they would have the same mass scales as their Standard Model partners. Therefore supersymmetry is presumed to be broken, and so the masses of the superpartners of the Standard Model particles are unconstrained. Supersymmetry with the additional construct of R-parity conservation produces a stable lightest supersymmetric particle (LSP) that cannot decay to Standard Model particles. If this particle is also neutral it is a plausible candidate to form a significant part of the present energy density of the Universe. One supersymmetric model that is built with the addition of R-parity is the Minimal Supersymmetric Standard Model (MSSM).

The MSSM adds the smallest possible field content to the fields of the Standard Model, and imposes R-parity in order to account for the stability of protons. In this model all gauge fields have fermionic superpartners associated with them, and all fermions are associated with scalar superpartners. Additionally, an extra Higgs field is introduced, associating one spin-1/2 Higgsino with each Higgs boson. R-parity is a quantum number which distinguishes between the Standard Model particles, which have R = 1, and their superpartners, which have R = −1. The consequence of R-parity conservation is that superpartner particles can only decay to an odd number of superpartner particles (with optional additional Standard Model particles). Therefore, the Lightest Superpartner Particle (LSP) is stable, as it cannot decay to lighter superpartner particles, and cannot decay to only Standard Model particles because of the conservation of R-parity. Though conservation of R-parity was first introduced in order to account for the suppression of the proton decay rate, this additional quantum number can also give rise to a candidate for a dark matter particle. A stable, massive, neutral superparticle can be the WIMP that forms the dark matter.

The MSSM involves at least 124 independent parameters in order to specify a model. Each specific model will produce a different LSP, and therefore a WIMP


candidate with different characteristics. The specification of the parameters determines how the supersymmetry is broken, by fixing masses and mixing angles. A favorable WIMP candidate that arises in MSSM models is the lightest neutralino. This superparticle is a mixture of the superpartners of the B and W^3 gauge bosons and of the neutral Higgs bosons H_1^0 and H_2^0, which are called binos, winos and higgsinos. The mixture of these states produces four Majorana fermionic mass eigenstates called neutralinos. The lightest neutralino, labelled χ̃_1^0, is therefore one of the plausible candidates for dark matter in MSSM models.

Neutralino Cosmology

The relevant characteristics of the neutralino as a dark matter candidate are its colorlessness, naturalness, stability, mass and cross-section. The cross-section of the neutralino is relevant in the calculation of its thermodynamical properties. If dark matter is made mostly of neutralinos, their thermal relic density should be in the range 0.094 < Ω_χ h^2 < 0.129 (95% CL), in order to correspond to the dark matter density observed today. As neutralinos are expected to become non-relativistic early enough in the evolution of the Universe, their annihilation cross section can be approximated by an expansion in powers of v^2:

⟨σv⟩ = a + b⟨v^2⟩ + O(⟨v^4⟩)  (2.55)

Relic density

An important feature required of a viable MSSM model is that the characteristics of the LSP are such that it gives the correct amount of dark matter observed today if it is thermally produced in the early Universe. At a very early stage of the Universe all particles are in thermal equilibrium, but as the Universe expands and cools down, interactions between pairs of particles gradually come out of equilibrium. An interaction that stops due to falling out of thermal equilibrium is considered frozen, and so particles that go out of equilibrium freeze out. The calculation of the thermal relic density is done by solving the Boltzmann equation (2.20). In an expanding Universe, and after some simplifications, the Boltzmann equation takes the following form:

dn/dt = −3Hn − ⟨σv⟩ (n^2 − n_eq^2)  (2.56)

where n is the number density of the particle in question, ⟨σv⟩ is the thermally averaged annihilation cross section, and n_eq is the number density of the particle in

thermal equilibrium. We can introduce the entropy normalized density variables

Y ≡ n

s , Yeq≡

neq

(33)

and the variable x ≡ m/T to get the following form for the Boltzmann equation:

dY/dx = −(⟨σv⟩ s/(H x)) (Y^2 − Y_eq^2)  (2.58)

Using this equation it is possible to show that the thermal relic density for a specific particle with cross section ⟨σv⟩ will be

Ω_X h^2 ≈ 3 × 10^-27 cm^3 s^-1 / ⟨σv⟩  (2.59)

Using eq. (2.59) it is possible to estimate the relic density for a dark matter candidate that arises as the LSP in a specific supersymmetric model. A precise calculation should include resonance enhancements and coannihilations. Numerical codes for predictions of dark matter candidates in a specific supersymmetric model, like MicrOMEGAs and DarkSusy, now include coannihilations in their calculations of the thermal relic density.
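A minimal numerical illustration of eqs. (2.58)-(2.59): the sketch below integrates the freeze-out equation with an implicit Euler step (the equation is stiff near equilibrium), using assumed fiducial values for the effective degrees of freedom and the standard entropy-to-critical-density conversion. It is a rough sketch, not a substitute for MicrOMEGAs or DarkSusy:

```python
import math

M_PL = 1.22e19            # Planck mass [GeV] (assumed value)
GSTAR = 80.0              # assumed effective degrees of freedom at freeze-out
G_CHI = 2.0               # internal degrees of freedom of the WIMP
CM3S_PER_GEV2 = 1.17e-17  # 1 GeV^-2 expressed as a cross section x velocity [cm^3/s]

def relic_density(m_dm=10.0, sigv_cm3s=3e-26):
    """Solve dY/dx = -(lam/x^2)(Y^2 - Yeq^2), eq. (2.58), with an
    implicit Euler step on a log grid in x = m/T; returns Omega h^2."""
    sigv = sigv_cm3s / CM3S_PER_GEV2           # [GeV^-2]
    # constant lam such that <sigma v> s/(H x) = lam/x^2 (radiation era)
    lam = (sigv * (2 * math.pi**2 / 45) * GSTAR * m_dm * M_PL
           / (1.66 * math.sqrt(GSTAR)))
    yeq = lambda x: 0.145 * (G_CHI / GSTAR) * x**1.5 * math.exp(-x)
    x, n = 1.0, 20000
    dlnx = math.log(1000.0) / n
    Y = yeq(x)
    for _ in range(n):
        x *= math.exp(dlnx)
        a = lam * dlnx / x   # = lam*h/x^2 with step h = x*dlnx
        # implicit Euler: a*Y^2 + Y - (Y_old + a*Yeq^2) = 0, positive root
        b = Y + a * yeq(x)**2
        Y = (-1.0 + math.sqrt(1.0 + 4.0 * a * b)) / (2.0 * a)
    return 2.75e8 * m_dm * Y   # standard conversion Omega h^2 = s0 m Y h^2 / rho_c

print(relic_density())  # compare with the eq. (2.59) estimate: 3e-27/3e-26 = 0.1
```

With these inputs the ODE solution agrees with the rule-of-thumb of eq. (2.59) at the tens-of-percent level, and reproduces the Ω h^2 ∝ 1/⟨σv⟩ scaling: a larger cross section keeps the particle in equilibrium longer and freezes Y out at a lower value.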


Figure 2.5: The evolution of the comoving number density Y versus x ≡ m/T. A thermal relic starts in thermodynamic equilibrium at high temperature (T >> M_dm). For an increasing cross section the comoving number density freezes out at a lower value.

Method and Results

3.1 Bayesian Inference and Markov Chain Monte Carlo methods

The calculation of the µ-type distortion in this thesis was done for WIMP candidates that arise from the MSSM as its corresponding LSP. The MSSM data is based on a Bayesian analysis that gives the relative probability of different regions in the parameter space of the MSSM, based on inputs from experimental and theoretical constraints. The analysis in this thesis was based on a Bayesian inference approach for forecasts on the MSSM with respect to the observational constraints from µ-type distortion detection in the CMB. In this section we will explain the Bayesian approach in general and how it has been applied to this research.

3.1.1 Bayesian Inference

While the frequentist school of thought defines probability in terms of the ratio of occurrence of events in the limit of an infinite series of equiprobable repetitions, Bayesian statistics bases the definition of probability on a subjective approach: probability is a "degree of belief". Both approaches are subject to criticism and have flaws. Nevertheless, in approaching the problem of inference of parameters, like the problem we wish to address in studying the parameter space of a supersymmetric theory, the Bayesian approach supplies us with a consistent framework. It can be shown that Bayesian probability theory is a unique generalization of Boolean logic into a formal system, based on the Cox axioms [Trotta (2008)]. Therefore, Bayesian inference is a natural approach for data analysis, and for assigning uncertainties to physical measurements.

The statistical knowledge of a continuous physical quantity is described by a probability density function (PDF), which gives the relative likelihood of the quantity taking a given value. This function obeys the usual rules of probability [D'Agostini (1995)]. The likelihood that a physical quantity has a certain value must be considered within a specific theoretical framework, so this probability is conditional. The probability that a parameter θi has a certain value, given the accumulated data d, is written

p(θi | d)    (3.1)

This PDF is called the posterior; it gives our degree of belief in θi given d. The Bayesian approach is based on Bayes' theorem,

p(θi | d) = p(d | θi) p(θi) / p(d)    (3.2)

where the different factors are:

• p(d | θi) is the sampling distribution of the data, assuming that the hypothesis on the value of θi is true. When the data are held fixed (at the values actually measured), this factor is also denoted the likelihood, L(θi) ≡ p(d | θi).

• p(θi) is the prior probability of the hypothesis, or in short the prior, which represents our state of knowledge before seeing the data.

• p(d) is the marginal likelihood, a normalization factor representing the probability of the data; it is calculated by integrating over all possible values of θi,

p(d) ≡ ∫ dθi p(d | θi) p(θi)    (3.3)

The posterior PDF expresses our state of knowledge about the value of θi. It is proportional to the product of the likelihood and the prior. A guiding principle of Bayesian probability theory is that there can be no inference without assumptions; the prior represents our assumptions about the model, and its presence in the theory is subject to criticism. Nevertheless, the likelihood is more informative than the prior: given two different priors, if we apply Bayesian inference to the same set of measurements, the posterior PDFs will tend to converge to the same distribution.
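The update rule above can be sketched numerically on a discretized parameter grid: posterior = likelihood × prior / evidence, with the evidence computed as in eq. (3.3). All numbers here (a Gaussian prior of width 3, a single measurement d = 1.2 with error 0.5) are purely illustrative:

```python
import numpy as np

# Bayes' theorem (eq. 3.2) evaluated on a grid of parameter values.
theta = np.linspace(-5.0, 5.0, 1001)           # discretized parameter axis
dtheta = theta[1] - theta[0]

prior = np.exp(-0.5 * (theta / 3.0) ** 2)      # broad Gaussian prior
prior /= prior.sum() * dtheta                  # normalize to a PDF

d, sigma = 1.2, 0.5                            # one "measurement" and its error
likelihood = np.exp(-0.5 * ((d - theta) / sigma) ** 2)

evidence = (likelihood * prior).sum() * dtheta  # p(d), eq. (3.3)
posterior = likelihood * prior / evidence       # p(theta | d), eq. (3.2)

print(posterior.sum() * dtheta)                 # normalized to 1
print(theta[np.argmax(posterior)])              # peak slightly below d = 1.2
```

The posterior peak sits just below the measured value, pulled slightly toward the prior mean at zero; with a tighter measurement error the pull would shrink further.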

The inference process consists of evaluating the posterior given a prior and the likelihood. If we are interested in a single parameter θi, we need to marginalize over all the other parameters of the theory, collectively denoted θ̃; the marginalized posterior PDF is

p(θi | d) = ∫ dθ̃ p(d | θi, θ̃) p(θi, θ̃) / [ ∫ dθi ∫ dθ̃ p(d | θi, θ̃) p(θi, θ̃) ]    (3.4)

Marginalization is important when we deal with a multidimensional parameter space, as in supersymmetric theory, because it reduces the information to subspaces that we can handle and present, typically of one or two dimensions. We can then study the one- or two-dimensional PDF of a single parameter or a pair of parameters. The technical problem that arises frequently in Bayesian analysis is the evaluation of the multidimensional integral in the marginalization. The common approach to this kind of integration is the Markov Chain Monte Carlo (MCMC) method, described in the next section. Marginalization with MCMC methods therefore allows us to present the results of the inference as probability regions, in a one- or two-dimensional subspace of the initial multidimensional parameter space. Typically, regions of 68% and 95% probability are marked.
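On a grid, marginalization is just a numerical integral over the unwanted directions. A minimal sketch, using a correlated two-dimensional Gaussian as a stand-in for the joint posterior (the correlation value is purely illustrative):

```python
import numpy as np

# Marginalization (eq. 3.4): given a joint posterior p(theta1, theta2 | d)
# sampled on a grid, integrate out theta2 to obtain the one-dimensional
# marginal posterior for theta1.

t1 = np.linspace(-4.0, 4.0, 401)
t2 = np.linspace(-4.0, 4.0, 401)
d1, d2 = t1[1] - t1[0], t2[1] - t2[0]
T1, T2 = np.meshgrid(t1, t2, indexing="ij")

rho = 0.8                                       # correlation between parameters
joint = np.exp(-(T1**2 - 2 * rho * T1 * T2 + T2**2) / (2 * (1 - rho**2)))
joint /= joint.sum() * d1 * d2                  # normalize the joint PDF

marginal_t1 = joint.sum(axis=1) * d2            # integrate out theta2

print(marginal_t1.sum() * d1)                   # still a normalized PDF
```

This brute-force grid integration scales exponentially with the number of parameters, which is exactly why MCMC sampling is preferred for a space as large as that of the MSSM.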

The likelihood function L contains all the experimental information to be used in the inference; independent measurements contribute multiplicatively to the global L. The objectivity of Bayesian inference is ensured by the fact that, even for different priors, if repeated measurements contribute to the same likelihood, the multiplicative contribution of L will make the posterior converge to the same PDF, and the contribution of the prior becomes negligible after repeated measurements.
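This prior insensitivity can be demonstrated directly: two very different priors, updated with the same sequence of repeated Gaussian measurements, give nearly identical posteriors once the data dominate. The data here are synthetic and the prior choices illustrative:

```python
import numpy as np

# Two priors (one centered far from the truth, one flat) updated with the
# same 50 Gaussian measurements; the posteriors converge because the
# log-likelihoods of independent measurements simply add.

rng = np.random.default_rng(0)
theta = np.linspace(-5.0, 5.0, 2001)
dtheta = theta[1] - theta[0]

log_prior_a = -0.5 * ((theta + 2.0) / 0.8) ** 2   # prior centered at -2
log_prior_b = np.zeros_like(theta)                # flat prior

sigma = 0.5
data = rng.normal(1.0, sigma, size=50)            # 50 measurements of theta

def posterior(log_prior):
    log_post = log_prior.copy()
    for d in data:                                # independent data: log-L's add
        log_post += -0.5 * ((d - theta) / sigma) ** 2
    post = np.exp(log_post - log_post.max())      # subtract max to avoid underflow
    return post / (post.sum() * dtheta)

pa, pb = posterior(log_prior_a), posterior(log_prior_b)
mean_a = (theta * pa).sum() * dtheta
mean_b = (theta * pb).sum() * dtheta

# The two posterior means agree to much better than the posterior width
# sigma / sqrt(N) ~ 0.07.
print(abs(mean_a - mean_b))
```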

A Gaussian likelihood is often used:

L = [1 / (√(2π) σQ)] exp[ −(Q(θi) − µQ)² / (2σQ²) ]    (3.5)

where Q is the measured quantity that depends on the parameter θi, µQ is the central value, and σQ is the standard deviation. This likelihood is convenient because, when different independent measurements are added to the global likelihood, their exponents simply add. In this case it is also customary to use the χ² function, defined as

χ² ≡ (Q(θi) − µQ)² / σQ²    (3.6)

so that χ² = −2 ln L + constant.
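The relation between eqs. (3.5) and (3.6) can be checked numerically: the difference between −2 ln L and χ² is the same Q-independent constant at every point. The values of µQ, σQ, and the test points are arbitrary illustrations:

```python
import numpy as np

# Check that chi^2 = -2 ln L + constant for the Gaussian likelihood of
# eq. (3.5), with chi^2 defined as in eq. (3.6).

mu_Q, sigma_Q = 10.0, 2.0

def log_L(Q):
    """Log of the Gaussian likelihood, eq. (3.5)."""
    return -0.5 * np.log(2 * np.pi * sigma_Q**2) - (Q - mu_Q)**2 / (2 * sigma_Q**2)

def chi2(Q):
    """Chi-squared, eq. (3.6)."""
    return (Q - mu_Q)**2 / sigma_Q**2

# -2 ln L differs from chi^2 by the same constant, ln(2 pi sigma_Q^2),
# regardless of where the likelihood is evaluated.
Qs = np.array([7.0, 10.0, 13.5])
consts = -2 * log_L(Qs) - chi2(Qs)
print(consts)
```

Because the constant does not depend on θi, minimizing χ² and maximizing L pick out the same parameter values, which is why the two are used interchangeably in fits.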

3.1.2 The Markov Chain Monte Carlo Method

As mentioned, when we are dealing with a multidimensional parameter space model, numerical evaluation of the integrals required to obtain a marginalized
