
University of Amsterdam

Dark matter self-annihilation signals from Galactic dwarf spheroidals; intermediate mass black holes and mini-spikes

Mark Wanders

Supervisor: Gianfranco Bertone

GRAPPA Institute, University of Amsterdam

June 17, 2014

Abstract. The formation of an intermediate mass black hole inevitably affects the distribution of dark and baryonic matter in its vicinity. Under the assumption that the black hole grows adiabatically, large overdensities in the halo's dark matter density, called spikes, can be achieved, resulting in significant amplification of the gamma-ray signal originating from self-annihilation of WIMP dark matter. Galactic dwarf spheroidals in particular are expected to contain and maintain mini-spikes, and we use five years of Fermi-LAT gamma-ray data to derive upper limits on the mass of an intermediate mass black hole in the center of such a subhalo. Our limits are more constraining than those previously calculated using other methods.


Contents

1 Dark matter

1.1 Evidence

1.2 Candidates

1.3 Origin of WIMP dark matter and relic density

1.4 Density profile

1.5 Gamma-rays from dark matter annihilation

2 Mini-spikes

2.1 Dwarf spheroidals

2.2 Intermediate mass black holes

2.2.1 Motivation

2.2.2 Formation scenarios

2.2.3 Dark matter mini-spikes

3 Analysis

3.1 Fermi Large Area Telescope

3.2 Data acquisition and preparation

3.3 Maximum likelihood analysis

3.4 Error propagation

4 Results

4.1 Flux and cross-section constraints

4.2 Black hole mass upper limit

4.3 Age of the black hole

4.4 Other dwarf spheroidals

5 Discussion and conclusion

5.1 Persistence of the mini-spike

5.2 Other limits on black hole mass


1 Dark matter

Current astrophysical evidence suggests non-baryonic dark matter accounts for roughly 27% of the energy content of the present-day universe [1]. The nature of dark matter (DM) beyond its gravitational influence is largely a mystery, but a weakly interacting massive particle (WIMP) is a popular candidate [2]. A principal example of such a particle is the lightest neutralino, a stable mixture of supersymmetric partners of the Z boson, photon and Higgs boson, as predicted in the Minimal Supersymmetric Standard Model (MSSM). Since constraining the properties of such a weakly-interacting particle through direct detection (i.e., looking for nuclear recoils from scattering of dark matter particles off nucleons) appears difficult, indirect detection (i.e. looking for gamma-rays resulting from dark matter self-annihilation in astrophysical objects) could prove to be a useful alternative.

Nowadays, evidence for the existence of stellar mass black holes and supermassive black holes (SMBHs, M ∼ 10⁶–10⁸ M⊙) is compelling. Recently, evidence has emerged suggesting the existence of intermediate mass black holes (IMBHs), with masses from 10² M⊙ to 10⁶ M⊙, bridging the mass gap between stellar mass black holes and SMBHs. The formation of a massive black hole inevitably affects the distribution of (both baryonic and non-baryonic) matter around it, and can lead to large overdensities called spikes. As the DM annihilation rate scales with density squared, this could improve prospects for indirect detection. However, mergers, off-center formation of the seed BH and gravitational scattering off of stars are likely to disrupt and reduce such spikes [3]. In the case of an IMBH, these overdensities could persist in a milder manifestation, called a mini-spike, as IMBHs are unlikely to undergo the processes that disrupt the overdensity. This is especially true for an IMBH in the center of a dwarf spheroidal (dSph): objects orbiting the Milky Way with low central stellar densities and a relatively high DM content.

In this work, we examine the consequences of mini-spikes in the dark matter distribution, caused by the adiabatic growth of IMBHs, for indirect detection of WIMP dark matter using data from the Fermi-LAT space telescope. This work is organized as follows: in section 1 we give an introduction to particle dark matter, covering the motivation and evidence (1.1), possible candidates (1.2), the WIMP relic density (1.3) and its distribution (1.4), before moving on to the theory of mini-spikes, including dwarf spheroidals (2.1) and intermediate mass black holes (2.2), evidence for their existence (2.2.1), possible formation scenarios (2.2.2) and mini-spikes themselves (2.2.3), finishing with a description of our data analysis (3.1–3.3). In section 4 we cover our results, before moving on to the discussion (5) and ending with our conclusions (5.3).

1.1 Evidence

Galactic rotational velocity curves


A classic probe of dark matter is to measure the velocities of stars and gas in galaxies and compare their observed velocities to those expected from motion in the gravitational potential of the visible, luminous matter. Specifically, the rotation curve of a galaxy is usually obtained through observations of the 21 cm line, i.e. radio emission from neutral atomic hydrogen (HI) in the interstellar gas, and shows the circular velocity of stars and gas as a function of their distance to the center of the galaxy. If we consider simple Newtonian dynamics, we would expect the circular velocity at a radius r to be

v(r) = √(GM(r)/r),  (1.1)

where the enclosed mass is M(r) = ∫ 4πr′² ρ(r′) dr′ and ρ(r) is the density profile. From observed stellar densities we would then naively expect v(r) ∝ 1/√r, i.e. Keplerian behavior. However, observations revealed that the velocity curve remains flat out to large radii (see e.g. [4]). From equation 1.1 we see that a flat rotation curve requires M(r) ∝ r, or ρ ∝ r⁻². To explain the flat behavior of the rotational velocity curve out to large radii, the concept of the dark matter halo was introduced: a large, roughly spherical halo of non-luminous matter extending far beyond the visible size of the galactic disk.

Gravitational lensing

Another way to probe the amount of dark matter in a galaxy, given to us by Einstein's theory of General Relativity, is through gravitational lensing. General Relativity postulates that space-time is bent and stretched by massive objects, which in turn affects the motion of bodies, thus causing what we perceive as gravity. This space-time curvature not only affects massive bodies such as stars, which now follow geodesics instead of simple straight lines; the path of light traveling through curved space-time is bent as well, so massive objects effectively act as a lens. In the case where a very massive object (such as a cluster of galaxies) is directly in front of a very bright object (such as a galaxy), one would see an "Einstein ring": the closer, central object surrounded by a distorted ring-shaped image of the more distant object. However, such a perfect alignment is very rare, and so we generally observe only partial Einstein rings, called arclets. The Einstein radius θE, the radius of such an arclet in radians, gives us a way

to estimate the mass within a lensing cluster [5]:

θE = √( (4GM/c²) · dLS/(dL dS) ),  (1.2)

where M is the mass of the lens and dLS, dL and dS are the (angular-diameter) distance between the lens and the source, the distance to the lens and the distance to the source, respectively. Deriving the mass of the cluster this way and comparing it to the luminous mass again confirms that a large portion of the mass must be non-luminous. For example, an analysis of the cluster Abell 370 resulted in a M/L ratio of 10²–10³ M⊙/L⊙ [6].
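As an illustration of equation 1.2, the sketch below inverts it to estimate the mass enclosed within the Einstein radius from the arclet size and the three distances. The angular size and distances used are hypothetical round numbers of roughly the right order for a cluster lens, not values taken from [6]:

```python
import math

# Invert eq. 1.2: M = theta_E^2 c^2 d_L d_S / (4 G d_LS).
# All numerical inputs below are illustrative placeholders.
G = 6.674e-11        # m^3 kg^-1 s^-2
C = 2.998e8          # m/s
MSUN = 1.989e30      # kg
MPC = 3.086e22       # m

def lens_mass(theta_e_arcsec, d_l_mpc, d_s_mpc, d_ls_mpc):
    """Mass (in Msun) inside the Einstein radius of a gravitational lens."""
    theta = math.radians(theta_e_arcsec / 3600.0)   # arcsec -> radians
    d_l, d_s, d_ls = d_l_mpc * MPC, d_s_mpc * MPC, d_ls_mpc * MPC
    return theta**2 * C**2 * d_l * d_s / (4.0 * G * d_ls) / MSUN

# A ~30-arcsecond arclet with the lens at ~1 Gpc gives a cluster-scale mass
m = lens_mass(30.0, 1000.0, 2000.0, 1200.0)   # ~10^14 Msun
```

The cluster-scale result (∼10¹⁴ M⊙) is consistent with the high M/L ratios quoted above once compared to cluster luminosities.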

Gravitational lensing has another use in the search for dark matter, through microlensing, i.e. the change in brightness of a distant object due to the lensing effect of a massive object passing in front of it. The first attempt to explain the missing mass in galaxies was to turn to massive objects already known to exist in galaxies and comprised of normal, baryonic matter. These objects would need to be "dark",


and thus possible candidates are brown dwarfs, neutron stars, black holes and unassociated planets. Together, these objects are referred to as Massive Compact Halo Objects (MACHOs).

Searches for these microlensing events have been conducted by several collaborations, e.g. the MACHO Collaboration and the EROS-2 Survey, and their results are conclusive. The MACHO Collaboration analyzed 5.7 years of photometry on 11.9 million stars in the Large Magellanic Cloud, finding only 13–17 possible microlensing events from MACHOs in the Milky Way halo [7]. Other collaborations found similar or even smaller numbers, leading to the conclusion that MACHOs cannot account for a significant part of the mass in DM.

Cosmic microwave background anisotropies

Strong evidence that DM exists instead in the form of particles comes to us from cosmology. During the first seconds to minutes after the Big Bang, free neutrons and protons fused together to form light elements such as deuterium, helium and trace amounts of lithium and several others. As it turns out, this Big Bang Nucleosynthesis (BBN) is in fact the primary source of deuterium in the universe, as deuterium produced by stars is almost immediately lost in fusion to produce ⁴He. By treating the current amount of deuterium in the universe as a lower limit on the amount produced during BBN, it is possible to estimate the primordial deuterium-to-hydrogen ratio by determining the D/H abundance of regions with low levels of elements heavier than lithium. A major success of the BBN model is that theoretical abundances match up very well with observational ranges, and one of the results of BBN modeling is that the D/H ratio depends strongly on the baryon density. The baryon density is usually written as Ωb h², where Ωb is the baryon density relative to the critical density ρc and h = H/(100 km s⁻¹ Mpc⁻¹) is the reduced Hubble constant. Depending on which deuterium measurement is used, calculations result in two numbers for the baryon density [8]: Ωb h² = 0.0229 ± 0.00013 or Ωb h² = 0.0216 +0.0020/−0.0021. For an alternative method of determining the baryon density, we now turn to the cosmic microwave background (CMB).

The CMB is a (nearly) isotropic background of photons with a temperature of about 2.73 K, first discovered in the ’60s by Penzias and Wilson. The early universe was filled with a hot, dense plasma of charged particles and photons that cooled as the universe expanded. After 380,000 years or so, the universe had cooled sufficiently to allow for recombination; protons and electrons formed the first neutral atoms. Photons, which previously had been locked in endless scattering with protons and electrons, now traveled freely through the newly transparent universe. The light from this last scattering persists and permeates the entire universe in the form of the CMB, now significantly redshifted to a very low temperature due to the expansion of the universe.

Several space-based missions have been launched to map out the full-sky CMB, starting with COBE in 1989, WMAP in 2001 and most recently ESA's Planck, launched in May 2009. As it turns out, the CMB (after subtraction of foregrounds and the dipole caused by the movement of the telescope with respect to the CMB frame) is not entirely uniform; its temperature fluctuates on the order of a few parts in 100,000. These tiny anisotropies can be attributed to two primary causes. Firstly, large scale fluctuations are caused by the Sachs-Wolfe effect; photons lose energy traveling out of a


gravity well, and so we observe more low energy photons from parts of the universe that were more dense at last scattering. Secondly and for our argument, more importantly, small scale fluctuations are attributed to something called acoustic oscillations. These oscillations emerge as the photon-baryon fluid filling the early universe goes through repeated contractions and expansions, as it compresses under gravity and expands due to radiation pressure. This creates acoustic waves, whose propagation is abruptly cut off when recombination severely decreases the speed of sound in the universe.

Clearly, these fluctuations depend on both the baryonic mass (as it is the interaction between baryons and photons that causes the expansion) and the total mass (the gravity of which causes the contraction). One of the most interesting results of studies of the CMB is that these numbers are not the same. In fact, they differ significantly [1]:

Ωm h² = 0.1430 ± 0.0029,  Ωb h² = 0.02207 ± 0.00033,  (1.3)

leading to the conclusion that over 85% of the matter in the universe interacts gravitationally but not electromagnetically; particle dark matter.

1.2 Candidates

Standard model and sterile neutrinos

It is prudent to begin our search for candidates of particle dark matter amongst those particles we already know to exist, in other words, the Standard Model (SM). The Standard Model of particle physics is the quantum field theory that describes three out of the four fundamental forces (electromagnetism, the weak force and the strong force) and their interactions with the constituents of matter, fermions, through force carriers called bosons. The model contains 17 confirmed particles, of which 8 were predicted by the model before being confirmed experimentally, the final one being the Higgs, discovered in July 2012, which completed the model. The fermions are divided into 6 quarks: up (with symbol u), down (d), top (t), bottom (b), charm (c) and strange (s); and 6 leptons: the electron (e), muon (µ) and tau lepton (τ) and their corresponding neutrinos (νl). Unlike leptons, quarks are not found individually in nature; instead they are the building blocks of baryons, which each contain 3 quarks (like the proton: uud), and mesons, each containing a quark and an anti-quark (e.g. the pion: ud̄). Together, baryons and mesons are referred to as hadrons.

The five types of bosons in the SM are the photon (γ), responsible for electromagnetism; the gluons (g), force carriers of the strong force; the W± and Z bosons, mediating the weak force; and the Higgs (H), which gives mass to the other SM particles through the Higgs field. Particles in the SM are defined by their quantum numbers (i.e. charge, spin etc.); for instance, all fermions have half-integer spin, whereas bosons have integer spin. Additionally, each particle has an anti-particle, with the same mass but opposite quantum numbers, usually denoted with a bar (e.g. a quark q and an anti-quark q̄). When a particle collides with its anti-particle they annihilate, converting their mass-energy into photons (favored in the low-energy limit) or other particles (favored in the high-energy limit). Some particles, such as the photon and the Z boson, are their own anti-particle, and can thus self-annihilate. Interactions of SM particles obey the standard laws of conservation of energy and momentum as well as internal gauge symmetries, i.e. conservation of the respective quantum numbers.


Fermions (spin 1/2)          Charge      Bosons                Charge
Quarks:   u, c, t            +2/3        g (gluon, spin 1)      0
          d, s, b            −1/3        γ (photon, spin 1)     0
Leptons:  e, µ, τ            −1          Z (spin 1)             0
          νe, νµ, ντ          0          W (spin 1)            ±1
                                         H (Higgs, spin 0)      0

Table 1: Particles of the standard model.

The only particle in the SM that, at first glance, fulfills all the criteria for a DM particle (electrically neutral, stable, massive and weakly-interacting) is the neutrino. However, neutrinos are a form of "hot" dark matter, meaning their velocities are relativistic, causing them to erase density fluctuations below the scale of their free-streaming length, ∼ 40 Mpc. This would imply that structure formation in the early universe followed a top-down process, whereby large-scale structures form before small-scale ones. This seems excluded by observations of galaxies (small-scale structures in the context of the universe) having already formed at high redshift z.

More convincing, perhaps, are the constraints on neutrinos as DM coming from their expected relic abundance. Considering that there are three neutrino eigenstates, their total relic density will be [2]:

Ων h² = Σᵢ₌₁³ mᵢ / 93 eV.  (1.4)

Given the upper limits on total neutrino mass arising from cosmology [1], we find an upper limit on the neutrino relic density of

Ων h² ≤ 0.002,  (1.5)

showing us that, while neutrinos constitute a small part of the DM in the universe, they cannot comprise the majority of it.
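Equation 1.4 turns a limit on the summed neutrino masses directly into a limit on Ων h². A minimal sketch, where the total-mass bound of 0.2 eV is an illustrative stand-in for the cosmological limit of [1]:

```python
# Eq. 1.4: Omega_nu h^2 = sum_i m_i / (93 eV), masses given in eV.
def omega_nu_h2(masses_ev):
    return sum(masses_ev) / 93.0

# Illustrative cosmological bound: total neutrino mass <~ 0.2 eV
bound = omega_nu_h2([0.2])   # of order 0.002, cf. eq. 1.5
```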

As the Standard Model does not contain a proper candidate, we will now examine extensions of the SM. The sterile neutrino is a hypothetical, right-chiral counterpart of the left-chiral SM neutrino and receives its name from the fact that it does not interact through the electromagnetic, the strong or the weak interaction. The neutrino sector is a good candidate for physics beyond the SM: in the SM the neutrino is massless, which conflicts with observations that neutrinos in fact do have mass, and the introduction of a sterile neutrino would provide an explanation for neutrino flavor oscillation anomalies. The sterile neutrino was first considered as a DM candidate in 1993 [9] and, depending on its mass, can function as either cold or warm dark matter. However, current cosmological searches have as of yet found no evidence of sterile neutrinos [1].

Supersymmetry and WIMPs

Another possible extension of the SM is supersymmetry (SUSY). SUSY, in essence, proposes an additional symmetry beyond the Lorentz invariance and gauge symmetries of


the SM, namely a symmetry between fermions and bosons. In the SM, such a symmetry is forbidden by the Coleman-Mandula theorem, which states that space-time and internal symmetries cannot be combined in a non-trivial way [10]. This explicitly forbids particles from changing spin, i.e. fermions turning into bosons or vice-versa. The allowed symmetries in the SM have generators belonging to Lie algebras, and SUSY circumvents the restriction by introducing graded Lie algebras, whose operators anti-commute. This new symmetry means that every boson in the SM now has a fermionic superpartner and every fermion has a bosonic superpartner, effectively doubling the number of particles in the model. This might seem like an unnecessary complication, but there are several theoretical motivations for introducing SUSY, such as its role in solving the problem of the low mass of the Higgs boson [2].

Specifically, we will focus on one incarnation of SUSY, the Minimal Supersymmetric Standard Model (MSSM), containing the smallest possible field content necessary to reproduce the SM fields. This is done by introducing fermionic superpartners for all gauge fields, called gauginos, introducing squarks and sleptons as bosonic superpartners to the fermion sector, and adding a second Higgs field, so that each of the now 5 Higgs bosons has a higgsino partner. Additionally, the MSSM assumes that R-parity (where SM particles have +1 R-parity and SUSY partners have −1 R-parity) is conserved, one consequence of which is that the lightest supersymmetric particle cannot decay and is therefore stable.

The MSSM in principle contains three candidates for WIMP dark matter: the sneutrino (ν̃, superpartner of the neutrino), the gravitino (G̃, superpartner of the theoretical graviton) and the neutralino (χ, strictly the lightest of four mass eigenstates of a mix of the bino, wino and two neutral higgsinos) [5]. However, sneutrinos annihilate so rapidly in the early universe that their relic density is far too low for them to be the WIMP [11], and gravitinos act like hot dark matter [12], something already ruled out (as discussed earlier). We are then left with one WIMP candidate from the MSSM: the lightest neutralino.

1.3 Origin of WIMP dark matter and relic density

The neutralino, and WIMPs in general, are expected to have masses at the weak scale, ∼ 100 GeV–1 TeV. The Λ-Cold Dark Matter (ΛCDM) model predicts that, in the very early universe, the temperature was high enough that such very massive particles were created and, through processes like pair-production and collisions, existed in thermal equilibrium with SM particles:

χ + χ⇋ SM + SM. (1.6)

However, the expansion and subsequent cooling of the universe ended this equilibrium in two ways: firstly, as the temperature decreased, lighter particles no longer had the kinetic energy necessary to produce massive particles through interactions; and secondly, the expansion of the universe itself diluted the number density of the particles, thus decreasing the interaction rate. When the interaction rate Γ drops to the expansion rate, i.e. the Hubble rate H, Γ ≡ n⟨σv⟩ = H ≡ ȧ/a, the massive particles 'freeze out' and their comoving number density remains constant. This freeze-out density is called the relic density.


We can use the Boltzmann equation to derive an approximation to ⟨σv⟩, the thermal average of the annihilation cross section σ times the relative velocity v of the particles, which in the following we will simply call the cross section. For supersymmetric particles, the Boltzmann equation has four terms, due to: 1) the expansion of the universe, 2) coannihilation of two SUSY particles into standard model particles, 3) particle decay and 4) scattering off the thermal background. We can thus write the Boltzmann equation for N SUSY particles as:

dni/dt = −3Hni − Σⱼ₌₁ᴺ ⟨σij vij⟩ (ni nj − ni^eq nj^eq)
         − Σⱼ≠ᵢ [ Γij (ni − ni^eq) − Γji (nj − nj^eq) ]
         − Σⱼ≠ᵢ [ ⟨σ′Xij vij⟩ (ni nX − ni^eq nX^eq) − ⟨σ′Xji vji⟩ (nj nX − nj^eq nX^eq) ].  (1.7)

We can simplify this equation by assuming R-parity (already an inherent feature of the MSSM) and by assuming the decay rate of SUSY particles is very high compared to the age of the universe, meaning that all SUSY particles present in the early universe have since decayed into the lightest stable particle, the neutralino (i.e. n = n₁ + … + nN). This means the third and fourth terms in equation 1.7 cancel. If we now additionally simplify the second term by excluding coannihilations between SUSY particles, we are left with

dn/dt = −3Hn − ⟨σv⟩ (n² − n²eq).  (1.8)

In the non-relativistic limit of the Maxwell-Boltzmann distribution, we can express the number density of particles in thermal equilibrium as:

neq = g (mT/2π)^(3/2) e^(−m/T),  (1.9)

where T is the temperature and m the particle mass. If we now introduce the new variable

Y ≡ n/s,  (1.10)

where

s = (2π²/45) g∗ T³  (1.11)

is the entropy density of the universe (and g∗ denotes the number of relativistic degrees of freedom), we can, under the assumption of conservation of entropy per comoving volume, write

s dY/dt = dn/dt − (n/s) ds/dt = dn/dt + 3Hn.  (1.12)

Substituting this into equation 1.8, we get

s dY/dt = −s² ⟨σv⟩ (Y² − Y²eq),  (1.13)


We will now define a new variable

x ≡ m/T,  (1.14)

and, since T ∝ a⁻¹, we can write dx/dt = xH. Using this, we can rewrite equation 1.13 as:

dY/dx = −(s⟨σv⟩ / Hx) (Y² − Y²eq).  (1.15)
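The qualitative behavior of equation 1.15 (Y tracking Yeq until the interactions can no longer keep up, then freezing out) can be seen by integrating it numerically. The sketch below takes a constant cross section and lumps s⟨σv⟩/H into a single illustrative constant λ, so that dY/dx = −(λ/x²)(Y² − Y²eq); both λ and the O(1) prefactor of Yeq are assumptions for illustration, not fitted to any real WIMP model:

```python
import math

# Toy freeze-out: dY/dx = -(LAM/x^2) * (Y^2 - Yeq^2), cf. eq. 1.15 with a
# constant cross section and s*<sigma v>/H lumped into the illustrative LAM.
LAM = 1e9

def y_eq(x):
    """Non-relativistic equilibrium yield, up to an O(1) prefactor."""
    return 0.145 * x**1.5 * math.exp(-x)

def freeze_out(x0=1.0, x1=400.0, h=0.01):
    """Backward-Euler integration; each implicit step is a quadratic in Y,
    which keeps the stiff equilibrium-tracking phase (Y ~ Yeq) stable."""
    x, y = x0, y_eq(x0)
    while x < x1:
        x += h
        a = h * LAM / (x * x)
        c = y + a * y_eq(x)**2
        # positive root of a*Y^2 + Y - c = 0
        y = (math.sqrt(1.0 + 4.0 * a * c) - 1.0) / (2.0 * a)
    return y

y_inf = freeze_out()   # Y departs from Yeq around x ~ 20 and flattens out
```

The final yield is many orders of magnitude above the (exponentially vanishing) equilibrium value, which is exactly the relic abundance mechanism described above.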

For heavy states, ⟨σv⟩ can be approximated with the non-relativistic expansion in v² [2]:

⟨σv⟩ = a + b⟨v²⟩ + O(⟨v⁴⟩) ≈ a + 6b/x.  (1.16)

Introducing a final new variable ∆ ≡ Y − Yeq leads to [2]

d∆/dx = −dYeq/dx − (a + 6b/x) (s/Hx) ∆ (∆ + 2Yeq)
       = −dYeq/dx − (a + 6b/x) √(πg∗/45) m MPl x⁻² ∆ (∆ + 2Yeq),  (1.17)

where MPl is the Planck mass.

We can solve this equation analytically in the regime long after freeze-out, x ≫ xF (i.e. the situation today), since in that case Y ≫ Yeq, thus also ∆ = Y − Yeq ≈ Y ≫ Yeq, and equation 1.17 can be approximated as

d∆/dx = −(a + 6b/x) √(πg∗/45) m MPl x⁻² ∆².

If we integrate this last equation between xF and ∞ and simplify using ∆∞ ≪ ∆(xF), we arrive at:

Y∞⁻¹ = √(πg∗/45) (m MPl / xF) (a + 3b/xF),  (1.18)

and, since the present density of a relic χ is given by ρχ = mχ nχ = mχ s₀ Y∞, we can write for the relic density [2]:

Ωχ h² = ρχ/ρc ≈ (1.07 × 10⁹ GeV⁻¹ / MPl) · (xF / √g∗) · 1/(a + 3b/xF) ≈ (3 × 10⁻²⁷ cm³ s⁻¹) / ⟨σv⟩,  (1.19)

where in the last step we employed an order-of-magnitude approximation [13].

Equation 1.19, with the current value for the relic density Ωχ h² ≈ 0.1 [1], means that ⟨σv⟩ ≈ 3 × 10⁻²⁶ cm³ s⁻¹. This result is a strong argument in favor of the WIMP as a DM candidate. For a new particle with a weak-scale interaction, we can estimate its annihilation cross-section to be [13] ⟨σv⟩ ∼ α² (100 GeV)⁻² ∼ 10⁻²⁵ cm³ s⁻¹, where the fine-structure constant α ∼ 10⁻². This is remarkably close to the cross-section derived from the relic abundance of DM if it is comprised of neutralinos.
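The weak-scale estimate quoted above is a one-line unit conversion: α²/(100 GeV)² in natural units, multiplied by (ħc)²·c to obtain cm³ s⁻¹. A quick check:

```python
# <sigma v> ~ alpha^2 / (100 GeV)^2, converted from GeV^-2 to cm^3/s
HBARC = 1.973e-14          # hbar*c in GeV cm
C_CMS = 2.998e10           # c in cm/s
alpha = 1e-2               # fine-structure constant, as in the text
sigma_v = (alpha**2 / 100.0**2) * HBARC**2 * C_CMS   # ~ 10^-25 cm^3/s
```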

It should be noted that in equation 1.8 we have ignored the effects of coannihilations between neutralinos and heavier SUSY particles. Including such reactions yields instead

dn/dt = −3Hn − Σᵢ,ⱼ₌₁ᴺ ⟨σij vij⟩ (ni nj − ni^eq nj^eq),  (1.20)

which, demanding that the current relic density of neutralinos be the same as in the case without coannihilations, requires a much smaller ⟨σv⟩ [5].


1.4 Density profile

Whereas the shape of the density profile of a DM halo at large distances is known, falling approximately proportional to r⁻² in order to reproduce the observed constant circular velocity at large radii, the exact form of the matter distribution at the center has not yet been pinned down. The discrepancy is largely between the cored models, preferred by observations, and the cuspy models, preferred by simulations.

The Navarro-Frenk-White (NFW) profile [14] is one of the most widely used density profiles [2], and is motivated by N-body simulations of dark matter halos:

ρNFW(r) = ρ₀ r₀³ / [r (r + r₀)²],  (1.21)

where r₀ and ρ₀ are the scale radius and scale density, respectively; the NFW profile is an example of a cuspy model. Another cuspy profile is the Einasto profile:

ρEin(r) ∝ exp(−A r^α).  (1.22)

The Einasto profile was originally introduced as a density profile for spherical stellar systems and is identical in mathematical form to Sersic’s law, which relates a galaxy’s surface brightness to its radius, and seems to be favored by more recent simulations [15,16].

An alternative to cuspy profiles like the NFW and Einasto profiles are the cored profiles, such as the isothermal sphere [17]

ρiso(r) = ρ₀ / (1 + (r/r₀)²),  (1.23)

or the Burkert profile, an adaptation of the isothermal sphere fitted to observations [18]:

ρBur(r) = ρ₀ / [(1 + r/r₀)(1 + (r/r₀)²)].  (1.24)

Profiles such as the NFW, Burkert and isothermal profile can be generalized in the αβγ formalism [19]:

ρ(r) = ρ₀ / [ (r/r₀)^γ (1 + (r/r₀)^α)^((β−γ)/α) ],  (1.25)

see also table 2. For a comparison of the different possible density profiles, see figure 1. In our treatment of Draco, we follow the literature and assume the DM density follows an NFW profile, and for the scaling parameters we take the values r₀ = 2.09 kpc and ρ₀ = 10^7.41 M⊙/kpc³ ≈ 0.976 GeV/cm³ [21].
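The NFW profile is the (α, β, γ) = (1, 3, 1) case of equation 1.25, and the two quoted values of ρ₀ for Draco are related by a pure unit conversion. A small sketch checking both:

```python
# NFW as the (1, 3, 1) case of the generalized alpha-beta-gamma profile
# (eq. 1.25), evaluated with the Draco parameters quoted in the text, plus
# a check of the unit conversion 10^7.41 Msun/kpc^3 ~= 0.976 GeV/cm^3.
MSUN_GEV = 1.116e57   # solar mass in GeV
KPC_CM   = 3.086e21   # cm per kpc

def rho_abg(r, r0, rho0, a, b, g):
    """Generalized profile of eq. 1.25; r and r0 in the same units."""
    x = r / r0
    return rho0 / (x**g * (1.0 + x**a)**((b - g) / a))

def rho_nfw(r, r0, rho0):
    """Direct NFW form of eq. 1.21."""
    return rho0 * r0**3 / (r * (r + r0)**2)

r0, rho0 = 2.09, 10**7.41                  # kpc, Msun/kpc^3 (Draco)
rho0_gev = rho0 * MSUN_GEV / KPC_CM**3     # -> ~0.976 GeV/cm^3
```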

1.5 Gamma-rays from dark matter annihilation

There are two ways for a Majorana DM particle (such as the WIMP) to produce gamma-rays through self-annihilation; either directly, through:

χχ → γγ,  (1.26)

Profile      α    β    γ
NFW          1    3    1
Isothermal   2    2    0
Burkert      1    3    2
Moore [20]   3/2  3    3/2

Table 2: Some of the most widely used power-law profiles.

Figure 1: A comparison of the different density profiles. For this plot, the scale factors r₀, ρ₀, A and α were set to 1.

or indirectly, where gamma-rays are produced as the end result of a particle jet originating from DM annihilation into a particle and anti-particle pair, for example a quark and an anti-quark:

χχ → qq̄.  (1.27)

The direct annihilation of DM into gamma-rays of 1.26 has the advantage that the energy of the produced gamma-rays is proportional to the mass of the WIMP, i.e. the process produces a mono-energetic gamma-ray line, which, if observed, would provide a clear indicator for DM annihilation. In the case of the neutralino, however, we expect the gamma-ray flux from such processes to be quite small, as no tree-level Feynman diagrams contribute to this process [2]. Thus, we shall instead focus on the annihilation of DM into other particles, specifically the channels

χχ → bb̄, χχ → tt̄, χχ → W⁺W⁻, χχ → e⁺e⁻, χχ → µ⁺µ⁻,  (1.28)

covering annihilation into quarks, bosons and leptons.


If the neutralino is lighter than the W± (MW = 80.4 GeV), the bb̄ channel typically dominates, with some contribution from annihilation into massive mesons (we note here that, for mesons, differences in the final gamma-ray spectrum between mass generations are small, see also figure 7). In the case of a massive neutralino, the bb̄ channel is heavily suppressed and, depending on the specific SUSY parameters, the tt̄ channel dominates [22]. The different SUSY models allow for enough freedom in the choice of parameters that the mass of the neutralino can take on widely different values, but it is generally expected to lie in the range 30 GeV ≲ mχ ≲ 120 TeV [23]. Here, the lower limit results from null detector searches for neutralinos and the upper limit comes from the so-called unitarity bound. This bound arises from demanding that unitarity is not violated in the scattering process, which leads to a maximum possible cross-section. This, together with a value for the relic density Ωχ h², results in a maximum value for the WIMP mass [24].

For a self-annihilating dark matter particle, the gamma-ray flux from annihilation scales with ρ², specifically:

Φ ∝ (⟨σv⟩ / mχ²) J,  (1.29)

where mχ is the mass of the WIMP, ⟨σv⟩ is its cross-section and the J-factor is defined as the line-of-sight integral of the dark matter density squared:

J = ∫_l.o.s. ρ² dl.  (1.30)

In this case, however, we are interested in the integrated flux over a region ∆Ω corresponding to the resolution of the telescope. For a cone of aperture θmax, we can write [25]:

J̄(∆Ω) = (2π/∆Ω) ∫₀^θmax dθ sin θ J(θ),  (1.31)

where ∆Ω = 2π ∫₀^θmax dθ sin θ. We can now rewrite equation 1.21:

ρ(s, θ) = ρ₀ r₀³ / [ r(s, θ) (r(s, θ) + r₀)² ],  (1.32)

where

r(s, θ) = √(D² + s² − 2Ds cos θ),  (1.33)

and D is the distance to the object, s is the line-of-sight distance and θ the line-of-sight angle. For an opening angle of ∆Ω = 2.4 × 10⁻⁴ sr, which is approximately the size of the point-spread function of the Fermi-LAT in our energy range [26], we get θmax = 0.5°.
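Putting equations 1.30–1.33 together for Draco's NFW parameters reproduces the order of magnitude of the J-factors quoted in table 3. The sketch below computes the J-factor integrated over the cone, 2π ∫ dθ sin θ ∫ ρ² ds, with crude midpoint sums; the grid sizes and the ±15 kpc line-of-sight window are arbitrary choices, not values from the analysis:

```python
import math

# Draco NFW parameters and distance, as quoted in the text
RHO0 = 0.976       # GeV/cm^3
R0   = 2.09        # kpc
D    = 76.0        # kpc
KPC_CM = 3.086e21  # cm per kpc

def rho(r_kpc):
    """NFW density in GeV/cm^3, radius in kpc (eq. 1.21)."""
    return RHO0 * R0**3 / (r_kpc * (r_kpc + R0)**2)

def j_factor(theta_max_deg=0.5, n_theta=100, n_s=2000, s_half=15.0):
    """J integrated over the cone, 2*pi * int dtheta sin(theta) int rho^2 ds,
    in GeV^2 cm^-5; midpoint sums with arbitrary grid choices."""
    theta_max = math.radians(theta_max_deg)
    dtheta = theta_max / n_theta
    ds = 2.0 * s_half / n_s                    # kpc
    total = 0.0
    for k in range(n_theta):
        theta = (k + 0.5) * dtheta
        cth = math.cos(theta)
        los = 0.0
        for i in range(n_s):
            s = D - s_half + (i + 0.5) * ds    # line-of-sight distance, kpc
            r = math.sqrt(D*D + s*s - 2.0*D*s*cth)   # eq. 1.33
            los += rho(r)**2 * ds
        total += math.sin(theta) * dtheta * los
    return 2.0 * math.pi * total * KPC_CM      # convert ds from kpc to cm

j_draco = j_factor()   # of order 10^19 GeV^2 cm^-5, cf. table 3
```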

The gamma-ray flux is then given by

dΦ/dE (∆Ω) = (1/2) (dNγ/dE) (⟨σv⟩/mχ²) J̄(∆Ω),  (1.34)

where dNγ/dE is the photon spectrum and we have assumed a branching ratio of 1,


Figure 2: Photon spectra dNγ/dE as a function of energy E for several WIMP masses (mχ = 6, 18, 68, 260 and 1000 GeV), for annihilation into (a) bb̄ and (b) e⁺e⁻.

generally calculated using Monte Carlo simulations of parton showers in packages such as pythia [27] or herwig [28]. A detailed comparison of these codes shows that, for our purposes, pythia produces better results, and so for our analysis we utilize the dNγ/dE spectra provided in the form of Mathematica interpolating functions by [25], which are calculated using pythia and include the necessary electroweak corrections [29]. We show some examples of these spectra in figure 2.


2 Mini-spikes

2.1 Dwarf spheroidals

Dwarf spheroidal galaxies (dSph) are low luminosity (L = 10³–10⁸ L⊙, compared to L ∼ 2 × 10¹⁰ L⊙ for a normal galaxy) objects orbiting the Milky Way with masses on the order of 10⁵–10⁷ M⊙ [30]. They are the largest Galactic substructures predicted in the CDM model and ideal laboratories for DM indirect detection, for various reasons. They are largely DM dominated systems, as shown by their high mass-to-light ratios (M/L on the order of 100–1000 M⊙/L⊙ [21]). This allows for the use of stars as tracer particles of the DM gravitational potential, so that constraints on the DM distribution can be derived from stellar kinematics. Moreover, dSphs contain no detected neutral or ionized gas and show little to no star formation activity [31–33], which would simplify interpretation of the detection of a gamma-ray excess in the direction of a dSph.

For a long time, there were only 9 known dSph satellites to the Milky Way, whereas the CDM framework predicted a population several orders of magnitude larger. This discrepancy, called the “missing satellite problem”, was later solved by the discovery of 11 new dSphs by the Sloan Digital Sky Survey and new considerations of the suppression of galaxy formation by reionization [34].

For our analysis we chose the Draco dSph as our target, because initially it had the highest expected J-factor (the DM density squared integrated along the line of sight, see also section 1.5) of all the Local Group's dSph satellites, as calculated by the Fermi-LAT collaboration (see also table 3), though this has since been adjusted downwards slightly [35]. Additionally, its relatively high galactic latitude (b = 57.9154°) makes contamination by other gamma-ray sources in the galactic disk unlikely. Finally, deep photometric studies of Draco indicate it is featureless [36], which both validates our assumption of a simple power-law density profile and rules out the possible disruption of a mini-spike by Galactic tidal forces.

Table 3: NFW parameters and J-factors for Milky Way dSphs.

dSph           | D (kpc) | r0 (kpc) | ρ0 (10⁸ M⊙/kpc³) | J_NFW (10¹⁹ GeV²/cm⁵)
Bootes I       | 62 ± 3  | 0.27     | 2.04             | 0.16 +0.35 −0.13
Coma Berenices | 44 ± 4  | 0.16     | 2.57             | 0.16 +0.22 −0.08
Draco          | 76 ± 5  | 2.09     | 0.26             | 1.20 +0.31 −0.25
Fornax         | 138 ± 8 | 0.58     | 0.66             | 0.06 +0.03 −0.03
Sculptor       | 79 ± 4  | 0.95     | 0.37             | 0.24 +0.06 −0.06
Sextans        | 86 ± 4  | 0.37     | 0.85             | 0.06 +0.03 −0.02
Ursa Major II  | 30 ± 5  | 0.65     | 0.98             | 0.58 +0.91 −0.35
Ursa Minor     | 66 ± 3  | 0.17     | 3.47             | 0.64 +0.25 −0.18


2.2 Intermediate mass black holes

2.2.1 Motivation

Observational

So far, the most convincing observational evidence for the existence of intermediate mass black holes exists in the form of the so-called ultra-luminous X-ray sources (ULXs). These are extragalactic, non-nuclear sources of X-ray radiation with isotropic luminosities in the 0.3–10 keV band of LX > 10³⁹ erg s⁻¹, exceeding those of stellar mass black holes accreting at the Eddington rate [37]. An example of these objects is ESO 243-49 HLX-1, whose minimum mass has been determined to be approximately 500 M⊙, assuming accretion onto its black hole does not exceed the Eddington limit by more than a factor of ten [38]. However, the exact nature and mechanisms of ULXs are still a topic of debate, and recent research seems to rule out IMBHs as the sources for most ULXs [39], instead inferring from their association with young clusters that these objects are ejected, massive binaries. Alternative scenarios for achieving these high X-ray luminosities have also been proposed, e.g. efficient accretion from the captured stellar wind of a Wolf-Rayet companion (the case of M101 ULX-1 [40]) or supercritical accretion disks [41].

Future prospects for the conclusive detection of IMBHs include the observation of gravitational waves originating from an IMBH in an elliptical binary with another compact object, or from the merger of two IMBHs [42], e.g. by eLISA.

Theoretical

Several theoretical motivations for the existence of IMBHs have also been raised. Strong empirical correlations between the mass of SMBHs and the properties of their host galaxies suggest an inherent connection between the growth of the SMBH and the formation and evolution of galaxies. Extending these relations down to low-mass galaxies predicts the presence of IMBHs in their centers. Current research [43] indicates that the relation between the mass of the central black hole M• and the stellar velocity dispersion σ follows:

M• = 10^8.32 M⊙ (σ / 200 km s⁻¹)^5.64.  (2.1)

For example, using velocity dispersion data of globular clusters in M31 from [44], we expect these objects to contain BHs with masses up to ∼ 3.5 × 10³ M⊙. Assuming dwarf spheroidals are essentially scaled-down globular clusters and follow the scaling relation outlined in equation 2.1, and given a velocity dispersion for Draco of σ = 10.5 km s⁻¹ [45], we would expect it to contain a BH with a mass on the order of M• ∼ 10 M⊙. However, this relation is based on observations of SMBHs only, and without any observations of IMBHs at the centers of galaxy-like objects, the behavior of the relation at low M• (or indeed, whether it even applies at these low BH masses) is uncertain. If we consider the uncertainties in both the parameters of the M•–σ relation and the velocity dispersion measurement, the 95% confidence level upper limit on M• for Draco is ∼ 300 M⊙.
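Equation 2.1 is straightforward to evaluate; the short sketch below (the function name is ours) reproduces the order-of-magnitude estimate for Draco quoted above.

```python
def m_bh(sigma_kms):
    """Central BH mass (in M_sun) from the M-sigma relation of eq. (2.1):
    M = 10^8.32 M_sun * (sigma / 200 km s^-1)^5.64."""
    return 10**8.32 * (sigma_kms / 200.0)**5.64

m_draco = m_bh(10.5)  # ~10 M_sun for Draco's dispersion, as in the text
```

The very steep power (5.64) is what makes the extrapolation to dSph-scale dispersions so uncertain: a modest change in σ moves the predicted mass by orders of magnitude.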

The case of IMBHs in the center of a dSph in particular has also been examined through N-body simulations, for instance in the case of the Ursa Minor dwarf [46].

(17)

Demanding that the gravitational influence of the BH does not destroy the observed stellar substructures in Ursa Minor within 10 Gyr puts an upper limit on the mass of a central BH of M• = (2–3) × 10⁴ M⊙.

Additionally, IMBHs have been proposed as the initial seeds of SMBHs [47]. Observations of high-redshift luminous quasars (e.g. ULAS J112001.48+064124.3 [48]) suggest the first MBHs must have formed in the very early universe. Their high luminosities, L > 10⁴⁷ erg s⁻¹, indicate the existence of BHs with masses ∼ 10⁹ M⊙ at z ≈ 7, i.e. when the universe was only about 0.8 Gyr old. If we assume a MBH grows at the Eddington rate, defined as the accretion rate at which the luminosity equals the Eddington limit, i.e. when outward radiation pressure equals the inward pull of gravity, its mass will increase in time as [47]:

M(t) = M(0) exp[(1 − ϵ)/ϵ · t/t_Edd],  (2.2)

where the standard radiative efficiency is ϵ ≈ 0.1 and the Eddington time is t_Edd = 0.45 Gyr. A 10⁹ M⊙ SMBH, for example, would then require a seed BH with a mass of 10²–10⁵ M⊙, well within the mass range of IMBHs.
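As a quick numerical check of equation 2.2 (a sketch using the ϵ and t_Edd values quoted above; the helper names are ours):

```python
import math

EPS, T_EDD = 0.1, 0.45  # radiative efficiency; Eddington time in Gyr

def mass_at(t_gyr, m_seed):
    """Eddington-limited growth, eq. (2.2): M(t) = M(0) exp[(1-eps)/eps * t/t_Edd]."""
    return m_seed * math.exp((1.0 - EPS) / EPS * t_gyr / T_EDD)

def time_to_grow(m_seed, m_final):
    """Invert eq. (2.2) for the growth time (Gyr)."""
    return T_EDD * EPS / (1.0 - EPS) * math.log(m_final / m_seed)

t = time_to_grow(1e4, 1e9)  # ~0.58 Gyr: a 10^4 M_sun seed reaches 10^9 M_sun
                            # before the ~0.8 Gyr available at z ~ 7
```

The e-folding time ϵ t_Edd/(1 − ϵ) = 50 Myr means even a 10² M⊙ seed can in principle grow to 10⁹ M⊙ within the first Gyr, provided accretion is continuously Eddington-limited.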

2.2.2 Formation scenarios

We will now review two proposed formation scenarios for IMBHs [49]. In the first, scenario A, black holes form in rare, overdense regions at high redshift (z ∼ 20), after the collapse of massive Population III stars. The second scenario, scenario B, features black holes originating from massive objects formed directly during the collapse of cold gas in early-forming halos. In both scenarios, the initial black holes grow adiabatically, due to the collisionless nature of particle dark matter [50]. This causes the surrounding dark matter halo to contract, resulting in a spike in the density profile, as we will discuss in detail in section 2.2.3.

Scenario A

The Jeans mass MJ, defined as the maximum mass a gas cloud can have while remaining stable against gravitational collapse, scales with temperature as MJ ∝ T^(3/2). Primordial gas clouds, from which the first stars were formed, were metal-free, meaning cooling could only proceed through trace amounts of molecular hydrogen. These early gas clouds were therefore much hotter, and thus the masses of Pop III stars are expected to be higher than those of later generations of stars, with M > 100 M⊙ [51].

The evolution and eventual fate of these massive, zero-metallicity stars differ significantly from those of their metal-enriched equivalents [52]. Pop I main sequence stars with masses over 100 M⊙ are vibrationally unstable to radial pulsations, leading to substantial mass loss, an effect which is strongly suppressed in massive stars with low metallicity. This, combined with the fact that, at zero metallicity, mass loss through stellar winds is also negligible, implies that massive Pop III stars could reach the end of their lives as black holes with a mass close to their initial mass.

However, low-metallicity stars with masses in the range 140 ≤ M/M⊙ ≤ 260 will succumb to the pair instability: photons from the core are so energetic that a significant fraction of them are lost to the production of electron-positron pairs, depriving the star of the radiation pressure it needs to support itself against its own gravity. The star thus collapses, before exploding as a pair-instability supernova, leaving behind no remnant [53].

It turns out that, if one assumes these BHs populate halos representing ∼ 3σ peaks in the smoothed baryonic density field, the fraction of baryons that end up in these objects is comparable to the mass fraction in SMBHs [52]. As many of the relatively small (sub)halos that contain these IMBHs do not merge with the central galaxy, this scenario predicts a population of “wandering” IMBHs in the halos of Milky Way-sized galaxies. We stress that IMBHs formed by this scenario do not necessarily form at the centers of their host DM halos.

Scenario B

In the early universe, as the first halos virialize and collapse, gas cools and collapses as well. The gas forms pressure-supported disks at the centers of those halos massive enough to contain sufficient amounts of molecular hydrogen. If molecular hydrogen cooling is efficient and no major mergers occur over a dynamical time, a protogalactic disk can form. Gravitational instabilities in the disk produce an effective viscosity that transfers angular momentum outwards and mass inwards. This process is halted after 1–30 Myr by the heating of the disk by supernovae of Pop III stars, or when the halo experiences a major merger [54]. The central object may be temporarily pressure supported, but will inevitably collapse into a black hole.

From the requirement that angular momentum loss via viscosity is effective, the mass scale of the initial halos is fixed to 10⁷–10⁸ M⊙, and the characteristic mass scale of the final IMBH turns out to be approximately 10⁵ M⊙. Specifically, scenario B's IMBH mass distribution is a log-normal Gaussian with σ = 0.9 and mean value [49]:

M• = 3.8 × 10⁴ M⊙ (κ/0.5) (f/0.03)^(3/2) (M_vir/10⁷ M⊙) [(1 + z_f)/18] (t/10 Myr).  (2.3)

Simulations [49] of both scenarios have provided rough estimates of the number of these objects in a Milky Way-like halo. For scenario A, populating all halos that constitute 3σ density peaks at z = 18 with an IMBH of M• = 100 M⊙ and following their evolution down to redshift zero results in N_A = 1027 ± 84 unmerged IMBHs, where a merger is assumed to occur when two BHs come within 1 kpc of each other. A similar approach for scenario B, but instead populating only those halos with a mass larger than the threshold for the onset of effective viscosity (M_halo ≥ 10⁷–10⁸ M⊙) and using equation 2.3 for the IMBH mass distribution, results in N_B = 101 ± 22 unmerged IMBHs.

While both of these scenarios are plausible, their details remain uncertain. Specifically, both scenarios stop producing IMBHs at redshifts when significant small-scale fragmentation of baryonic disks commences, and IMBH formation through scenario B is cut off when the universe becomes sufficiently reionized. Both the general cut-off redshift due to fragmentation, z_c, and the reionization redshift, z_re, can take on a range of reasonable values. Considering varying values for z_c and z_re showcases this strong dependence; however, a Milky Way-like halo is still expected to contain a significant population of IMBHs [55] (see also table 4).


Scenario | z  | N
A        | 18 | 319 ± 49
A        | 15 | 54 ± 13
B        | 7  | 278 ± 37
B        | 12 | 62 ± 14
B        | 17 | 8.4 ± 3.7

Table 4: Number of unmerged IMBHs N for a given z_c (for scenario A) or z_re (for scenario B). Taken from [55].

2.2.3 Dark matter mini-spikes

We will first consider the radius of gravitational influence of the black hole, r_h, which is the radius at which the enclosed mass equals twice the black hole mass [49]:

M(< r_h) ≡ ∫₀^r_h ρ(r) 4πr² dr = 2M•.  (2.4)

For an arbitrary power law ρ(r) = ρ0 (r/r0)^(−γ), it follows from equation 2.4 that

r_h = r0 [(3 − γ) M• / (2π ρ0 r0³)]^(1/(3−γ)).  (2.5)
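The influence radius can be evaluated directly by solving equation 2.4 for a pure power law; the sketch below (DM only, neglecting the small stellar contribution) uses the Draco NFW parameters from table 3 and reproduces the radii quoted in the text (≈ 0.8 pc and ≈ 25 pc).

```python
import math

RHO0, R0 = 0.26e8, 2.09  # Draco NFW parameters: M_sun/kpc^3 and kpc (table 3)

def r_h(m_bh, gamma=1.0, rho0=RHO0, r0=R0):
    """Influence radius from eq. (2.4), DM only: solves
    4*pi*rho0*r0^gamma * r_h^(3-gamma) / (3-gamma) = 2*M_bh.
    Returns r_h in kpc for m_bh in M_sun."""
    return r0 * ((3.0 - gamma) * m_bh
                 / (2.0 * math.pi * rho0 * r0**3))**(1.0 / (3.0 - gamma))

r_h(1e2)  # ~8e-4 kpc ~ 0.8 pc
r_h(1e5)  # ~2.4e-2 kpc ~ 25 pc
```

For γ = 1, r_h scales as M•^(1/2), so the three-decade range of IMBH masses maps onto only ~1.5 decades in influence radius.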

For Draco, we will consider both the enclosed DM mass from the NFW profile and the enclosed stellar mass, and so equation 2.4 becomes

M(< r_h) ≡ ∫₀^r_h [ρ∗(r) + ρ_DM(r)] 4πr² dr = 2M•,  (2.6)

where ρ_DM is the density of dark matter and ρ∗ is the density of stellar matter, which for Draco follows [56]:

ρ∗(r) = ρ∗,0 [1 + (r/b)²]^(−α/2),  (2.7)

with α = 7, b = 0.394 kpc and ρ∗,0 = 1.08 × 10⁷ M⊙/kpc³. Using this, we find for Draco r_h ∼ 0.8 pc and r_h ∼ 25 pc for M• = 10² M⊙ and M• = 10⁵ M⊙, respectively. For Draco (and dSphs in general) the low stellar density near the center compared to the DM density (see also figure 4) means that the difference in r_h calculated with equation 2.5

as opposed to 2.6 is marginal (0.6% at most).

Adiabatic contraction

As mentioned in section 2.2.2, we assume the black hole grows adiabatically. In practice, this means the time needed for a dark matter particle to cross the central part of the halo, the crossing time t_cr, is much smaller than the black hole growth timescale t_growth. Another consequence of the assumption of adiabaticity is the conservation of both the angular momentum and the radial action (the integral of the radial velocity over one orbit). Knowing


this, we can determine the final density profile from the final phase-space distribution [50]:

ρsp(r) = ∫_{E′m}^{0} dE′ ∫_{L′c}^{L′m} dL′ (4πL′ / r²v_r) f′(E′, L′),  (2.8)

where

v_r = [2(E′ + GM•/r − L′²/2r²)]^(1/2),  (2.9)
E′m = −(GM•/r)(1 − 4R_S/r),  (2.10)
L′c = 2cR_S,  (2.11)
L′m = [2r²(E′ + GM•/r)]^(1/2).  (2.12)

We only consider bound orbits (i.e. we require E′ < 0) and exclude particles captured by the black hole (i.e. we require L′ > 2cR_S and E′ > −(GM•/r)(1 − 4R_S/r)). Under the assumption of adiabatic conditions, we can relate the initial and final phase-space distributions as f′(E′, L′) = f(E, L), with L′ = L and I′(E′, L′) = I(E, L), the last two equations signifying conservation of the angular momentum L and of the radial action I(E, L), respectively.

The slope of the spiked density profile depends strongly on the behavior of the initial phase-space density f(E, L) in the limit E → ϕ(0). In the case of models with a finite core, i.e. where f(E, L) approaches a constant, the spike slope is γsp = 3/2. Models with an inner cusp, in which f(E, L) diverges as [E − ϕ(0)]^(−β), will have γsp > 3/2.

The phase-space distribution function of a single power-law profile ρ(r) = ρ0 (r/r0)^(−γ) (i.e. equation 1.25 with β = γ and 0 < γ < 2) is given by [50]:

f(E, L) = [ρ0 / (2πϕ0)^(3/2)] [Γ(β) / Γ(β − 3/2)] (E/ϕ0)^(β−3/2),  (2.13)

with β = (6 − γ)/[2(2 − γ)] and ϕ0 = 4πGr0²ρ0/[(3 − γ)(2 − γ)].

To find f′(E′, L′), we must solve I′(E′, L′) = I(E, L) for E as a function of E′. In the case of a point mass, we can write I′(E′, L′) = 2π[−L′ + GM•/√(−2E′)]. Since the action integral in the field of a power-law profile cannot be performed exactly, we use an approximation accurate to within 8% for the range 0 < γ < 2 [50]:

I(E, L) = b[−L/λ + √(2r0²ϕ0) (E/ϕ0)^((4−γ)/(2(2−γ)))],  (2.14)

where λ = [2/(4 − γ)]^(1/(2−γ)) [(2 − γ)/(4 − γ)]^(1/2) and b = π(2 − γ)/B(1/(2 − γ), 3/2). We can now write E as a function of E′ and integrate equation 2.8 to find:

ρsp(r) = g_γ(r) ρ_R (r/rsp)^(−γsp),  (2.15)


where ρ_R ≡ ρ(rsp) = ρ0 (rsp/r0)^(−γ), rsp = α_γ r0 (M•/ρ0r0³)^(1/(3−γ)) and γsp = (9 − 2γ)/(4 − γ). Here, g_γ(r) is a factor accounting for the capture of particles as they cross the event horizon. In all the cases we will consider, however, this factor is equal to 1, as self-annihilation places a stricter limit on the density than the Schwarzschild radius. The normalization factor α_γ needs to be obtained numerically, and equals 0.120, 0.140, 0.142, 0.135, 0.122, 0.103, 0.0818, 0.0177 for γ = 0.05, 0.2, 0.4, . . . , 1.4, 2 [50]. Together with equation 2.5, this means the radius of the spike for a γ = 1 profile (e.g. NFW) is rsp ≈ 0.2 r_h [57], and for a black hole mass of 10⁵ M⊙, rsp ≈ 5 pc.

We can also determine γsp in a less convoluted way, through a scaling argument [2]. If we again consider adiabatic accretion under the assumption of circular orbits, and assume the initial distribution follows ρ_i ∝ r^(−γ) and the final distribution follows ρ_f ∝ r^(−γ_f), we can express conservation of mass and conservation of angular momentum, respectively, as:

ρ_i r_i² dr_i = ρ_f r_f² dr_f  (2.16)

and

r_i M_i(r) = r_f M_f(r) ≈ r_f M•.  (2.17)

From equation 2.16 we see

r_i^(3−γ) ∝ r_f^(3−γ_f) → r_i ∝ r_f^((3−γ_f)/(3−γ))  (2.18)

and from equation 2.17

r_i^(4−γ) ∝ r_f → r_i ∝ r_f^(1/(4−γ)).  (2.19)

Combining equations 2.18 and 2.19, we find

1/(4 − γ) = (3 − γ_f)/(3 − γ) → 3 − γ_f = (3 − γ)/(4 − γ) → γsp = γ_f = (9 − 2γ)/(4 − γ),  (2.20)

and so the spiked density profile for an initial NFW (i.e. γ = 1) profile is [49]:

ρsp(r) = ρ(rsp) (r/rsp)^(−7/3).  (2.21)

Maximum density

As we showed in section 1.5, the DM annihilation flux scales with ρ², and for the profile in equation 2.21 this would diverge for r → 0, which is problematic. However, for self-annihilating WIMP models, an upper limit on the DM halo density arises naturally from comparing the age of the black hole to the annihilation rate. If we consider the case where the dark matter distribution is only affected by annihilation, the DM density evolution equation is [49]:

ṅχ(r, t) = −⟨σv⟩ nχ²(r, t),  (2.22)

which has the solution

nχ(r, t) = nχ(r, t_f) / [1 + nχ(r, t_f) ⟨σv⟩ t•],  (2.23)

where t• ≡ t − t_f is the age of the black hole, which for Draco we conservatively estimate to be 10⁹ years. Using the fact that ρχ = mχ nχ, we can rewrite 2.23 as

ρχ(r, t•) = ρsp(r) / [1 + ρsp(r) ⟨σv⟩ t• / mχ].  (2.24)

This shows that annihilations place an upper limit on the DM density of the order mχ/(⟨σv⟩ t•), and we define r_lim as the radius where

ρsp(r_lim) = mχ / (⟨σv⟩ t•) ≡ ρ_lim,  (2.25)

leading to a cut-off radius, which we define as rcut = Max[R_Schw, r_lim], where R_Schw is the Schwarzschild radius of the black hole. However, for common values of the WIMP mass and cross section, r_lim > R_Schw, and so we can ignore the effect of particles being captured by the black hole and set rcut = r_lim. Treating the maximum density as a hard cut-off is an approximation; in reality the self-annihilation of DM will cause the spike density to approach ρ_lim in a more gradual manner, as can be seen in figure 4.

For mχ = 100 GeV and ⟨σv⟩ = 3 × 10⁻²⁶ cm³ s⁻¹ we find ρ_lim = 1.1 × 10¹¹ GeV/cm³ = 2.8 × 10¹⁸ M⊙/kpc³ (see also figure 4), and a black hole mass of 10⁵ M⊙ then gives rcut = 6.3 × 10⁻⁴ pc.

Flux

Finally, we can express the flux of gamma-rays from a mini-spike around an IMBH as [49]:

dΦspike/dE = (1/2) (⟨σv⟩/mχ²) (1/D²) (dNγ/dE) ∫_{rcut}^{rsp} ρsp²(r) r² dr
           = (3/10) (dNγ/dE) (⟨σv⟩/mχ²) [ρ²(rsp)/D²] rsp^(14/3) rcut^(−5/3),  (2.26)

where in the last step we assumed rsp ≫ rcut. Note that here we perform a volume integral over the density, instead of a line-of-sight integral as in equation 1.30. This is a valid approximation due to the small size of the spike radius rsp compared to the distance D.

To determine the exact dependence of Φsp on M•, we approximate r_h using equation 2.5 with γ = 1. In doing so, we neglect the contribution of stars and overestimate the contribution of DM beyond the cusp of the NFW profile. However, these contribute little to the density at a radius of the order of r_h ∼ 10⁻³ kpc (see also figure 4). Using rsp = 0.2 r_h = 0.2 (M•/πρ0r0)^(1/2) and rcut = [rsp⁴ (⟨σv⟩ρ0r0t•/mχ)³]^(1/7), and substituting these into equation 2.26, we find

Φspike ∝ ⟨σv⟩^(2/7) mχ^(−9/7) M•^(6/7) r0^(3/7) ρ0^(3/7) t•^(−5/7).  (2.27)


The total flux originating from a spiked profile is the sum of the flux of the mini-spike and that of the NFW profile:

dΦtot/dE = (1/2) (⟨σv⟩/mχ²) (dNγ/dE) [(1/D²) ∫_{rcut}^{rsp} ρsp²(r) r² dr + J̄(∆Ω)].  (2.28)

Figure 4: The different density profiles related to the problem as a function of radius for the Draco dSph: dark matter (NFW), stars, dark matter (spike), the cut-off density, and dark matter (spike, after annihilation). For this plot, values of M• = 10⁵ M⊙, t• = 10⁹ years, mχ = 100 GeV and ⟨σv⟩ = 3 × 10⁻²⁶ cm³ s⁻¹ were chosen.

We are interested in DM mini-spikes because of the relative insensitivity of the flux to the DM particle parameters ⟨σv⟩ and mχ. From equation 1.34 we see that ΦNFW ∝ ⟨σv⟩/mχ², whereas from equations 2.26 and 2.25:

Φspike ∝ (⟨σv⟩/mχ²) ∫_{rcut}^{rsp} ρsp²(r) r² dr ∝ (⟨σv⟩/mχ²) rcut^(−5/3) ∝ (⟨σv⟩/mχ²) (⟨σv⟩/mχ)^(−5/7) ∝ ⟨σv⟩^(2/7)/mχ^(9/7),  (2.29)


3 Analysis

3.1 Fermi Large Area Telescope

The Large Area Telescope (LAT) is the primary instrument on the Fermi mission (previously known as GLAST, the Gamma-ray Large Area Space Telescope), designed to study the high-energy gamma-ray sky, and has been taking data since early August 2008 [26]. The instrument is an electron-positron pair conversion telescope, sensitive to photon energies in the range 20 MeV < E < 300 GeV [58]. To take advantage of the instrument's large field of view (2.4 sr), in its primary observing mode the telescope scans the entire sky every 2 orbits, or roughly every 3 hours, resulting in fairly uniform coverage.

3.2 Data acquisition and preparation

For our analysis we used data from the Fermi-LAT gamma-ray space telescope, which is freely accessible online¹. The data used was obtained between 04-08-2008 and 26-09-2013 (or between 239557417 and 401855656 seconds in Mission Elapsed Time, MET), spans an energy range of 100 MeV to 100 GeV, and encompasses a 10° radius circle centered on the coordinates of Draco: l = 260.052°, b = 57.9154°. We take such a large region of interest (ROI) to account for nearby sources whose point-spread functions overlap with Draco. We thus obtain 9 .fits files: 8 event files containing data and one spacecraft file containing the position and orientation of the satellite during our chosen time interval.

Using the Fermi Science Tools², we first apply several cuts to the data. With the gtselect tool, we select only those events with a high probability of being photons (evclass=2). As recommended by the Fermi Science Support Center (FSSC), we set the maximum zenith angle to 100° to protect against elevated background levels caused by intersection of the ROI with the Earth's limb. Next, using the gtmktime tool, we also applied cuts to the time intervals. We excluded periods when a spacecraft event could have affected the data quality (DATA_QUAL>0) or the LAT instrument was not in normal science data-taking mode (LAT_CONFIG==1). Additionally, we exclude time intervals where the zenith angle exceeds the maximum value set with gtselect, to correct the exposure for our zenith cut. Finally, our cut on the rocking angle (< 52°) serves the same purpose of minimizing contamination from the Earth's limb. This results in a final events file, which we shall use for the next stages of the analysis.

To check whether the data looks as we expect, we next produce a counts map of the ROI, summed over our energy range. We do this using the cmap option of the gtbin tool, which takes as input our final event file. Additionally, we must set the image resolution (0.2°/px, as recommended), its dimensions (100 px by 100 px, so as to include our entire 20°-diameter ROI) and the projection (stereographic). The resulting counts map is shown in figure 5.

¹ http://fermi.gsfc.nasa.gov/ssc/data/access/
² Version v9r32p5, freely available from http://fermi.gsfc.nasa.gov/ssc/data/analysis/


Figure 5: Counts map of the 10° radius ROI, centered on Draco, integrated over our energy range. The amplitude is shown on a square-root scale. The data looks as we would expect, without a central source of gamma-rays.

3.3 Maximum likelihood analysis

In this section, we outline the steps we took to perform our binned likelihood analysis. Here, “binned” means we do not fit each count (i.e. each photon) individually: the prolonged exposure time produces such a large number of counts that an unbinned analysis, though the most accurate, is unfeasible due to the computation time it would require. Instead, for our analysis we bin each count according to its energy.

We call the probability of obtaining the gathered data for a given model the likelihood L, where we assume the mapping from data to model (i.e. the response function of the LAT) is sufficiently accurate. Specifically, for our binned analysis, L is the product of the probabilities of observing the detected counts in their corresponding bins. The number of counts observed in each bin is governed by a Poisson distribution which, since we are performing a binned analysis over a large number of spatial and energy bins, implying a small number of counts per bin, cannot be approximated by a normal distribution. The Poisson distribution function gives the probability that a discrete random variable X with an expected value of λ takes a value k: f(k; λ) = λᵏ e^(−λ)/k!. If we call

the number of counts expected (from the model) in the i-th bin m_i, then the probability of detecting n_i counts in bin i is given by

P_i = m_i^(n_i) e^(−m_i) / n_i!,  (3.1)

and the likelihood is

L = ∏_i m_i^(n_i) e^(−m_i) / n_i!.  (3.2)

Note that the product of exp[−m_i] over all i is just exp[−Σ_i m_i] ≡ exp[−N_exp], where N_exp is the total number of counts we should have detected according to the source model.

To determine the best-fit parameters, we will need to vary the model parameters and calculate L every time in order to find the values for which L is maximized (and thus, since χ² = −2 log(L), for which χ² is minimized). The uncertainty in the parameter


values is then derived from the variation of L in the vicinity of its maximum. Thus, to perform the analysis, we will need to construct our models, compare them to the data and repeat this process several times to find the best-fit parameters. To speed up this process, we will also precompute certain quantities, in particular the livetime cube and the exposure map.

Generating the counts cube

To prepare our data for the binned likelihood analysis, we must first produce what is known as a counts cube. Whereas the counts map integrates over energy, the counts cube retains the energy information as a third dimension. This cube is also created with the gtbin tool, using the ccube option, and likewise takes our final events file as input. The square region of the counts cube must fit within our circular ROI, and so for its size we take S = √2 R = √2 × 10° ≈ 14.14°. The recommended scale is 0.2°/pixel, so we take as dimensions for the square 14.14°/(0.2°/px) ≈ 70 px by 70 px. We use 30 energy bins and space them logarithmically: ten bins for each factor of ten in our energy range of 100 MeV to 100 GeV.
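The logarithmic energy binning described above can be generated as follows (a sketch; the actual binning is performed internally by gtbin):

```python
import math

def log_bin_edges(e_min, e_max, n_bins):
    """Logarithmically spaced bin edges between e_min and e_max (MeV)."""
    step = (math.log10(e_max) - math.log10(e_min)) / n_bins
    return [10**(math.log10(e_min) + i * step) for i in range(n_bins + 1)]

edges = log_bin_edges(100.0, 100000.0, 30)  # 100 MeV - 100 GeV, 30 bins
# Ten bins per decade: each edge is 10^(1/10) ~ 1.26 times the previous one.
```

Equal logarithmic widths match the roughly power-law spectra involved, so each bin carries comparable spectral information.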

Making the source models

The next step is to create the source models to fit to our counts cube. The source models used in the likelihood analysis are contained in XML files, which we produce using the “make2FGLxml.py” Python script provided by the FSSC. This script automatically includes sources within 5° of our ROI (to include sources outside of our region which might still contribute photons) from the 2nd Fermi Gamma-ray LAT catalog (2FGL; in this case we used the latest point source catalog, “gll_psc_v08.fit”, provided by the FSSC).

Each source in the model has both a spatial model (e.g. point source) and a spectral model (e.g. power law). Each parameter of a model (for instance, the power-law index or prefactor of a spectral model) can be either free or fixed: free meaning the analysis will attempt to find its maximum likelihood value, fixed meaning it is treated as a given.

We add Draco to the model file manually. For the spatial model we make the assumption that Draco is a point source, a choice supported by the literature [21, 59]. We generate model spectra for twenty different WIMP masses logarithmically spaced between 6 GeV and 1000 GeV³, for the annihilation channels bb̄, tt̄, W⁺W⁻, e⁺e⁻ and μ⁺μ⁻. To calculate the dNγ/dE distribution for the different masses and annihilation channels, we used [25]. This results in twenty different model XML files (one for each WIMP mass) for each of the five annihilation channels.

Finally, our source models also include background components: the galactic diffuse foreground emission (“gal_2yearp7v6_v0.fits” from the FSSC), caused by interactions between cosmic rays and the interstellar medium, and the extragalactic isotropic diffuse emission (“iso_p7v6source.txt”), caused by unresolved extragalactic sources such as AGN and by truly isotropic processes, e.g. cosmic rays scattering off relic photons [60].

³ Specifically: 6, 8, 10, 13, 18, 23, 30, 40, 52, 68, 89, 116, 152, 199, 260, 341, 446, 584, 764 and 1000 GeV.


Computing the livetime cube

To speed up the exposure calculations during the later likelihood analysis, we now use the gtltcube tool to create a livetime cube. The response function of the LAT instrument depends on the inclination θ, i.e. the angle between the instrument's z-axis and the direction of the source. The total number of counts received from a source depends on the inclination and on the livetime, the accumulated time during which the instrument is actively gathering data. The livetime cube contains the livetime for the entire sky as a function of inclination and position on the sky. The gtltcube tool takes as input the spacecraft file and the final events file; we set the step size in cos(θ) to 0.025 and the spatial grid size to 1°, the recommended values.

Producing the exposure map

Next, we use our livetime cube along with the extragalactic isotropic diffuse emission model to produce our binned exposure map, with the tool gtexpcube2. The exposure map contains an accounting of the exposure (i.e. time · area) at each position in the sky as a function of energy. We elect to generate an exposure map for the entire sky, as opposed to one limited to our ROI, and so we set the dimensions of the map to 360° by 180°, i.e. 1800 pixels by 900 pixels for a pixel binning of 0.2°/px. We use the same energy binning settings as we did for the counts cube and use the Aitoff projection.

Creating a source map

Now we have everything to create our source map, i.e. the map containing the expected number of counts from the sources in our models, which we will be comparing to our data in the likelihood analysis. We do this with the gtsrcmaps tool, which requires the counts map, exposure map, livetime cube and the extragalactic emission model.

Binned likelihood analysis

We use the gtlike tool to construct our maximum likelihood model. The tool takes as input our counts map, exposure map, livetime cube, a source model and an extragalactic emission model. There are several optimizers to choose from, and we pick newminuit, the most accurate one. The output is in the form of an XML model file, with values and corresponding uncertainties for the parameters we had the analysis fit to the data. We perform this analysis for the twenty different WIMP masses for each of the five annihilation channels.

Upper limit calculation

As expected, the uncertainties on our fitted flux normalization prefactors for Draco exceed their fitted values, i.e., we do not detect a significant gamma-ray signal. We shall therefore instead calculate the 95% confidence level upper limits on the flux. For this, we use the UpperLimits Python module included with the Fermi Science Tools. It requires the same input as gtlike; we again chose newminuit as our optimizer, set the tolerance to 10⁻¹⁰ and employ an energy range of 100 MeV < E < 100 GeV.

3.4 Error propagation

We would like to take into account the uncertainties on the dSph parameters when calculating our upper limits on M•. From [21], we know D = 76 ± 5 kpc, ⟨R⟩ ≡
