
A Robust Technique for Estimating the Energy Released by AGN Outbursts in Cluster Feedback Systems

Author: Martijn de Vries

Supervisor: Dr. Michael Wise

Anton Pannekoek Institute

University of Amsterdam


Abstract

We present a new, robust method to estimate the energy released by AGN outbursts in clusters of galaxies. The energy released in such systems can have a huge impact on the surrounding environment and precise estimates for the amount of energy released are crucial to understanding how AGN feedback shapes the properties of galaxies and clusters of galaxies. Typically these energies are estimated by measuring the enthalpy contained in cavities observed in the X-ray and believed to be created by the outburst. A variety of techniques have been used in the literature to measure cavity energies, but these techniques have often relied on subjective determinations of the cavity volumes or lacked any consistent way of estimating uncertainties. Our method aims to address these limitations by developing an objective technique for cavity identification including full propagation of the uncertainties in the resulting energy estimates.

We demonstrate this technique using archival Chandra data for the well-known feedback system 2A0335+096. This system exhibits a complex morphology in the X-ray presumably due to multiple AGN outbursts and has been studied previously using traditional techniques. Unlike these techniques, our method detects the cavities and the corresponding pressure on a pixel-by-pixel basis. We determine the pressure at each pixel using a radial pressure profile for the cluster derived from the X-ray data. We then fit an elliptical surface brightness beta model to the data. The fitting procedure employs a Markov Chain Monte Carlo (MCMC) algorithm to generate a distribution of residual images by subtracting the MCMC-sampled surface brightness models from the data. A contour algorithm is then used on the residuals to find the cavity area. The contours are projected to calculate the cavity volume.

The cavity volume and pressure are combined to calculate the total cavity energy, (5.0 ± 0.9) × 10^59 erg. We find that the value derived using our technique agrees with an earlier estimate made by Sanders et al. (2009). In addition, with our method we are able to quote error bars of 20% on the measured value. Large parts of the method are automated. With further improvements, this technique could be adapted to run on large cluster samples, thereby allowing robust, statistical studies of the energetics of AGN feedback.


Contents

1 Introduction
1.1 The cooling flow problem
1.2 The AGN feedback cycle
1.3 The importance of AGN feedback
1.4 Estimating the Feedback energies
1.5 Motivation and outline

2 Target selection

3 Data reduction
3.1 The Chandra X-ray Observatory
3.2 Data
3.3 Data processing

4 Determining the Pressure Profile
4.1 Binning
4.2 Spectral fitting
4.3 Temperature profile
4.4 Density profile
4.5 Pressure profile

5 Cavity Identification and Volume
5.1 Unsharp masking
5.2 Surface brightness profile fit
5.3 Elliptical fit
5.4 MCMC sampling
5.5 Residuals, Cavity Areas and Volumes
5.5.1 Residuals overview
5.5.2 Cavity Identification
5.5.3 Choosing a contour limit
5.5.4 Conversion to Volume

6 Results
6.1 Derived Cavity Energies
6.2 Error analysis
6.3 Comparison with literature

7 Technical Discussion
7.1 Assessment of the method
7.2 Improvement of the method in future work
7.3 Running the method on samples

8 Scientific Implications


1 Introduction

1.1 The cooling flow problem

In the 1980s and 1990s, X-ray astronomers struggled to understand the fate of cooling gas in the cores of massive clusters. Observations of these cores led to the prediction of cooling flows. The intra-cluster medium (ICM) in the clusters cools through collisional excitation and subsequent photo-recombination. The density dependence of collisional excitation means that the dense cluster core will cool faster than the surrounding atmosphere. The decreased thermal pressure in the core will result in a matter infall towards the centre. This matter infall is referred to as a cooling flow (Fabian, 1994). In massive clusters, the cooling time of the core is short enough to allow the formation of such a cooling flow.

Assuming that the ICM cools homogeneously implies a steep density profile, with a lot of material deposited at the core of the cluster. However, the observed density profiles are not as steep as predicted (Johnstone et al., 1992), possibly indicating that the ICM condenses locally into smaller clouds over a larger volume. The X-ray spectrum of the condensed clouds can be predicted, and is dependent on the mass deposition rate (Peterson et al., 2003).

Analysis of X-ray spectra of cool-core clusters showed a large deficit in the spectrum towards lower temperatures (e.g. Peterson et al., 2001, Kaastra et al., 2001). This lack of cool gas suggested that the cooling flows are not as large as originally predicted. The lack of observational evidence for large cooling flows was known as the cooling flow problem. One possible solution to the cooling flow problem is to find an additional source of heating. Given a sufficiently strong source of heating, the amount of cooling material can be reduced such that it agrees with observations.

1.2 The AGN feedback cycle

In recent years, the improved angular resolution of new X-ray observatories such as the Chandra X-ray Observatory has revealed a wealth of complicated structures in the cores of galaxy clusters, such as bright, cold clumps and cold fronts. One particularly interesting feature that has been observed in many clusters is X-ray cavities: surface brightness depressions close to the brightest cluster galaxy (BCG) where the hot, X-ray emitting intracluster medium (ICM) has been pushed away (e.g. Fabian et al., 2000; McNamara et al., 2005). These X-ray cavities are often correlated with synchrotron radio emission, indicating the presence of relativistic electrons (Blanton et al., 2001; Sanders et al., 2009). The cavities are excavated by jets flowing from an active galactic nucleus (AGN), an accreting supermassive black hole. Figure 1 shows a multi-wavelength image of a cluster in the middle of an AGN outburst (McNamara et al., 2009).

From Chandra observations, we know that the AGN, the jets and the X-ray cavities play an important role in the life cycle of a cluster, through the AGN feedback cycle


Figure 1: Multi-wavelength image of the well-known cluster of galaxies MS0735.6+7421 (McNamara et al., 2012). The blue color depicts the X-ray emitting ICM, the white color shows the optical, and red indicates the synchrotron-emitting radio lobes, which correspond with the location of the X-ray cavities.


(described by e.g. Ciotti & Ostriker, 2001; Pizzolato & Soker, 2005). The AGN accretes infalling matter, and converts some of this matter into energy to create jets. These jets carry energy away from the AGN, displacing the ICM and creating X-ray cavities. The jets, expansion of the gas, and the work done by the rising bubbles on the ICM contribute to heating of the ICM (McNamara & Nulsen, 2007). At some point, the AGN runs out of fuel needed to power the outburst. This decrease in AGN output allows the flow of the cooling material towards the center to increase. The increased flow of cooling material supplies the AGN with more fuel, allowing the cycle to continue.

AGN heating seems to provide a solution to the cooling flow problem: heating the ICM through feedback could suppress the cooling flow enough to match observations (Birzan et al., 2004). To estimate the contribution of AGN heating, assumptions about the size of the black hole and the efficiency with which it can convert infalling matter into energy are required (Fabian et al., 2002; McNamara et al., 2009). Inflating the bubbles will create sound and pressure waves that propagate through the ICM and slowly dissipate (Fabian et al., 2006). These waves are able to distribute the heating throughout the ICM in a continuous fashion, which is another compelling argument for AGN heating (Voigt & Fabian, 2004). Processes other than AGN heating might also contribute, such as the mixing of hot gas from the outskirts of the core with cool core gas. This process is referred to as sloshing (ZuHone et al., 2010).

1.3 The importance of AGN feedback

To better understand the process of AGN feedback it is important to consider the energy budget of the feedback cycle. Sanders et al. (2009) and McNamara et al. (2009), among others, have tried to constrain the energy output of the black hole by studying the X-ray cavities. The cavities are thought to be excavated by jets flowing directly from the AGN. Therefore, the energy content of the cavities serves as a minimum value for the energy output of the AGN. Constraining the AGN energy output can increase our understanding of how the AGN is able to power the outburst (McNamara et al., 2009). The cavities inject their energy back into the system, so estimating the energy content of the cavities can also give an estimate for how long an outburst can heat the ICM, although it is as yet unclear what the efficiency of this energy transport is. Understanding these processes will give a clearer picture of the way the outburst is powered, how long the outburst is turned on, how much and how often the ICM is heated, and how much AGN feedback contributes to the overall heating of the system.

The AGN feedback loop seems to dominate the heating and the cooling of the ICM in the cluster core (Blanton et al., 2001; McNamara et al., 2001; McNamara et al., 2005). Because of the cooling flow problem we would expect this feedback loop to take place in every cooling flow cluster. Yet X-ray cavities are not always detected. Birzan et al. (2012) investigate the duty cycle of AGN outbursts in cooling flow clusters, and find that at least 63% of all cooling flow clusters contain X-ray cavities. It is likely that some X-ray cavities are too weak to detect, because the detectability of a cavity depends on its location and orientation (Enßlin & Heinz, 2002); this could therefore be a very conservative estimate, and the duty cycle might approach 100%.

Recently, more insight has been gained into the influence of AGN feedback on both its local environment and large-scale structure formation. AGN heating directly impacts star formation rates (e.g. Cavagnolo et al., 2008; Li et al., 2015). Cavagnolo et al. demonstrate that this heating can have an impact on the form of the galaxy luminosity function, and therefore needs to be taken into account when modelling structure formation in the high-redshift universe. Modelling by Bower et al. (2006) and Croton et al. (2006) suggests that adding AGN feedback to a hierarchical CDM cosmology model provides a good match to recent observations of high-redshift galaxies.

1.4 Estimating the Feedback energies

The energy contained in the X-ray cavities is the sum of their internal energy and the work done by their expansion. Assuming a slow expansion rate, the energy is given by equation 1:

E_{cav} = \frac{1}{\gamma - 1} PV + PV = \frac{\gamma}{\gamma - 1} PV    (1)

Based on the correlation between the cavities and radio lobes, we can assume that the cavities are filled by a relativistic gas, for which γ = 4/3. This yields E_cav = 4PV.

The formula E_cav = 4PV allows us to determine the energy contained in the cavities by determining two quantities: the thermal pressure P, and the cavity volume V. Previous papers have estimated the cavity energy by taking a single pressure for each cavity (e.g. Sanders et al., 2009). Such an estimate assumes that the pressure does not change significantly between the inner and the outer radius of the cavity. The cavity volume has been estimated by assuming different kinds of symmetry, such as spherical, or prolate/oblate ellipsoidal (e.g. Birzan et al., 2004; Dunn & Fabian, 2006; Rafferty et al., 2006; Sanders et al., 2009; McNamara et al., 2012).

There are several uncertainties in these estimates. The pressure is not constant throughout the cluster, but decreases radially outwards. The pressure at the inner radius of the cavity will therefore be different from the pressure at the outer radius of the cavity. Because the pressure decreases faster at larger radii, the pressure difference within one cavity might become significant for cavities further from the center.

The circular cavity area on the image is determined by eye, which makes it hard to quantify the error. The assumption of spherical symmetry is also problematic. Cavities can have complex morphologies and might not be spherical at all, meaning that the uncertainty on the derived energy is difficult to quantify. The uncertainty will largely depend on how much the cavities differ from their assumed shapes.


1.5 Motivation and outline

As outlined in the previous section, the methods used in previous papers to estimate the cavity energy have some limitations. In this thesis we develop a new method to improve the cavity energy estimate. We set several demands for our method. We want it to be precise, so that it gives values with well-defined error bars. We want it to be robust, so that it will be applicable to other clusters irrespective of the complexity of their cavities. We want it to be objective, so that it is completely quantifiable and not dependent on any estimates by eye. And we want it to be automated, with every action scripted. This automation will allow our method to ultimately be run on samples and determine cavity energies of many clusters in one go.

We aim to meet these demands by calculating precise values for the pressure and the volume. Instead of trying to calculate some cavity radius and assuming a spherical symmetry, we will run a pixel-by-pixel cavity identification, and determine the pressure at each pixel. We can then determine the total energy content while making fewer assumptions about the shape and size of the cavities. Moreover, we will also be able to better quantify the uncertainties in our calculations, something that is harder to do with the simple estimate of a sphere or ellipsoid. We will compare our results with estimates in the literature, to assess both the reliability of this technique and the potential differences with other methods. We will apply our method to one specific target, while keeping in mind that the method should be applicable to any cluster.

In Chapter 2 we describe the target we have selected and the reasons why we have selected it. In Chapter 3 we describe the Chandra telescope, and give an overview of the available data on the target and the processing that we have applied to it. In Chapter 4, we derive a smooth pressure profile based on our processed data. In Chapter 5 we discuss different ways to identify the cavities, and calculate a cavity volume. In Chapter 6 we combine the pressure profile and the cavity volume to calculate the cavity energies. In Chapter 7 we discuss the flaws and strengths of our method. In Chapter 8 we show the scientific implications of the results that we have obtained. We conclude in Chapter 9.


2 Target selection

The target that we will study in this thesis is the cluster 2A0335+096: a cool core cluster at a redshift of 0.0349, first detected in 1978 (Cooke et al., 1978). 2A0335+096 is an interesting target because of the large amount of structure that Chandra has detected in its core. It is a prime example of a cool core cluster with a complex morphology, with different generations of cavities created through multiple outbursts. These cavities could therefore have shapes that deviate greatly from a sphere. With our method we will try to map the complex cavity morphology of the system. Testing the method on a complex cluster will also ensure that the method is more robust, and more likely to work on more regular, less complex clusters.

2A0335+096 is among the 25 brightest clusters in the X-ray sky (Edge et al., 1990). Its brightness means our data will be of better quality, which ultimately enables us to better constrain the final derived energies. At the same time, the target is not so bright that our method will only be useful on this particular system. The fact that it is bright but not exceptionally so gives us a balance between the quality of the results and applicability to other clusters.

The system has not only been observed in X-rays, but in other wavelengths as well. Of particular interest is radio emission, because the cavities are filled with a relativistic plasma emitting synchrotron radiation. The X-ray cavities and radio lobes should therefore be correlated, as has been studied by Sanders et al. (2009). Including radio lobes in the analysis might help to further constrain the cavity edges. While this thesis focuses exclusively on X-ray observations, it is something that could be considered for future work.

The structure of the system has been studied in the literature (e.g. Mazzotta et al., 2003; Sanders et al., 2009; Farage et al., 2012). Sanders et al. identify several interesting features of 2A0335+096. There are several cool, bright X-ray clumps, consisting of multiple temperature components between 0.5 and 2 keV. There is a low-temperature spiral in the cluster, a feature that has been seen in other clusters as well, for example in Perseus (Fabian et al., 2000) and Abell 2029 (Clarke et al., 2004). Low-temperature spirals are often associated with the presence of cold fronts, because both are caused by the interaction between the cluster and a small subhalo (Ascasibar & Markevitch, 2006). The cold front is observed to the south-east of the cluster.

Mazzotta et al. (2003) detected at least two X-ray cavities, while Sanders et al. (2009) later detected five (figure 2). Some of the cavities are anti-correlated with 1.5 GHz radio emission, although this correlation is not as strong for every cavity. Because of spectral aging, some cavities might have no or very weak radio emission associated with them. Sanders et al. find evidence for one such 'ghost cavity'. The fact that the cavities have significantly different ages supports the idea that the cavities have been created through multiple generations of AGN outbursts.


Figure 2: Residual image of 2A0335+096, from Sanders et al. (2009). The markings A, B, C, D indicate 4 X-ray cavities (a fifth cavity is detected but not visible on this image). The cold front can also be seen as a surface brightness depression, south of cavity B.

The energy content of these five cavities has been estimated by Sanders et al. (2009) by assuming a spherical volume and a single pressure for each cavity. With this information, they calculate that the total 4PV enthalpy of the cavities is ∼ 5 × 10^59 erg. Birzan et al. (2004) show that the total luminosity within the core is 2.9 × 10^44 erg/s. By dividing the cavity energy by the luminosity within the core radius, we find that the bubbles carry enough energy to heat the cluster for ∼ 5 × 10^7 years.
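The heating timescale quoted above is simply the ratio of these two numbers; a quick check using only the values cited in the text:

```python
# Heating timescale t = E_cav / L_cool, using the values quoted above.
E_cav = 5e59            # total cavity enthalpy (erg), Sanders et al. (2009)
L_cool = 2.9e44         # core luminosity (erg/s), Birzan et al. (2004)
SEC_PER_YEAR = 3.156e7  # seconds per year

t_years = E_cav / L_cool / SEC_PER_YEAR
print(f"t = {t_years:.1e} yr")  # ~5e7 yr, matching the estimate in the text
```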

3 Data reduction

3.1 The Chandra X-ray Observatory

The data was obtained with NASA's Chandra X-ray Observatory, an X-ray satellite in orbit, operational since 1999. Chandra is one of NASA's Great Observatories, and has contributed greatly to the field of X-ray astronomy. An artist's impression of the spacecraft is shown in figure 3. The observatory has 4 mirror pairs and features two observing instruments. The High Resolution Camera (HRC) is able to precisely determine the position of an incoming X-ray photon. With the position information, it can construct detailed images with a resolution of up to 0.5 arcsecond. It is primarily used for imaging hot matter in the remnants of exploded stars (Chandra: Science Instruments, 2013). The Advanced CCD Imaging Spectrometer (ACIS) is an array of CCD cameras (Garmire et al., 2003). ACIS can construct an X-ray image while measuring the energy of each incoming photon. This capability makes it possible to perform X-ray spectroscopy across extended structures, and to study temperature or chemical variations.


Figure 3: Artist's impression of the Chandra spacecraft (Weiss, 2014)

Because we would like to study temperature variations across the cluster, we will be using ACIS observations of our target. We will therefore focus on the ACIS instrument in some more detail.

The ACIS instrument is described by Garmire et al. (2003). It consists of an array of 10 CCDs of 1024x1024 pixels, each with a field of view of 8.3 by 8.3 arcminutes. For any given observation, any combination of up to 6 CCDs can be selected. Figure 4 shows the layout of the CCDs. The chips shaded grey in the upper image are the ACIS-I chips. This is an array of 4 front-illuminated CCDs, placed tangent to the focal surface, that can be used for wide-field imaging. The front-illuminated CCDs have a lower background, which can be advantageous when observing diffuse objects. The ACIS-S chips are placed linearly in a 1x6 array tangent to the spherical grating. These chips can be used for both imaging and spectroscopy. The S3 and S1 chips are back-illuminated, which results in a better response at low energies. This is why cool-core clusters are often observed on the back-illuminated chips. With the front- and back-illuminated chips, ACIS can detect X-ray photons between 0.4 and 11 keV.

ACIS can operate in two different modes: Timed Exposure (TE) mode and Continuous Clocking (CC) mode. In TE mode, a frame time between 0.2 and 10 seconds can be chosen, with an optimal time of 3.2 seconds. The frame time can be lowered, but this will result in time in which no actual observing is done, because the readout time is 3.2 seconds. There is a chance that two photons will hit the same pixel within the readout time. This will cause the two events to register as only one event, with the energy of the two photons combined. This effect is called photon pile-up. Because we are observing a cluster, which is an extended and diffuse object, the count-rate per pixel is low enough that pile-up is not a problem.


Figure 4: Layout of the ACIS CCD chips. The x indicates the aimpoint for each of the two arrays (Chandra: Science Instruments, 2013).

For observations that require a high time resolution, CC mode can be used. CC mode gives a very fast readout time of only 3 msec, but this comes at the expense of one spatial dimension. The obtained image will therefore be 1x1024 pixels.

3.2 Data

The Chandra Data Archive (CDA) stores all previously made observations, which are freely available to download. A search through the archive shows the observations of 2A0335+096 that have been made so far. They are summarized in table 1. All observations have been made with the ACIS-S chips, with the target on the back-illuminated S3 chip.

Table 1: Summary of the observations of 2A0335+096 that have been made so far. The exposure time here is the unfiltered exposure time, before any processing.

Obs ID   Instrument   Exposure time (ks)   PI name    Public release date
919      ACIS-S       19.72                Fukazawa   15/09/2001
7939     ACIS-S       49.54                Sanders    23/12/2008
9792     ACIS-S       33.74                Sanders    23/12/2008


3.3 Data processing

Chandra data can be reduced, filtered, and analysed with a software system called Chandra Interactive Analysis of Observations (CIAO). CIAO can automate certain steps in the analysis, such as reprocessing, or filtering data within a good time interval or good energy range. It also contains several analytical tools such as plotting software and a fitting and modelling package called SHERPA. For our analysis, we have used CIAO 4.6. The instrumental effects of the telescope, and the data processing steps we take to counter these effects, are all described on the CIAO website (CIAO, 2015).

The data that is downloaded from the CDA contains different files. The most important data files are listed below.

The level 2 event file: The most important file. The level 2 event file is a large 1-dimensional array recording the position and energy of every photon hitting the detector. The file is usually binned by position to show a counts image, which shows the number of photons that hit each pixel on the CCD. The level 2 event file is created by filtering the level 1 event file on Good Time Intervals (GTIs).

Bad pixel file: A file that lists the bad pixels during the observation. Feeding this file to a CIAO command will make sure the bad pixels are not included in the calculation.

Aspect solution file: To avoid radiation damage to the CCD, the Chandra pointing position traces a regular pattern on the sky over the course of a typical observation. The aspect solution file records the pointing position of the telescope on the sky as a function of time. This file is used to correct for this motion during processing to produce an unblurred image.

Mask file: The mask file records which parts of the detector were used for the observation. This information is needed to produce a spectrum of the data.

To make sure that the newest calibration is applied to all the data, we run CIAO's automated reprocessing script. The reprocessing script automates some data processing steps, and applies the newest calibrations to the data. The output of the reprocessing script is a re-calibrated level 2 event file. Below, we explain the steps that are encompassed by the reprocessing command.

Due to a readout error on the ACIS-S4 Chip, bright streaks fall upon rows of pixels for every observation. The location of these streaks varies between readouts, and they can therefore not be removed through standard bad pixel filtering. Therefore, the data first needs to be destreaked. After that, a new bad pixel file can be made. The bad pixel file is generated by identifying bad pixels from the calibration database, and identifying hot pixels caused by cosmic rays.

The next step is to generate a new level 1 event file, which has the latest calibration applied. Early in the Chandra mission, the ACIS CCD was damaged by too much proton exposure.


Figure 5: Plot of sigma clipping for Obs ID 919. The green points are not filtered, since they are close enough to the mean count rate. The white data points are filtered out of the dataset.

This proton damage causes a loss of charge during the CCD readout, an effect called Charge Transfer Inefficiency (CTI). The CTI is slowly increasing over time, so the recalibration corrects for the effect of the CTI as a function of temperature and time. The CTI also has an effect on the effective gain of the telescope, so a time-dependent gain adjustment is also applied.

With the level 1 event file, a level 2 event file can be generated. This is done by filtering the level 1 event file on GTIs, to take out any possible flares. CIAO includes a procedure called sigma clipping that filters flares by taking out any data points that are significantly above the mean count rate of the exposure. The result of such a sigma clipping is shown in figure 5. The filtered exposure times of the data are shown in table 2. We can see from figure 5 and table 2 that none of the datasets had significant flares. With the creation of a new level 2 event file, the reprocessing is done. Because the level 1 event file has no further relevance, we will simply refer to the level 2 event file as 'the event file' in the rest of this thesis.
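For readers unfamiliar with the procedure, the sketch below shows a generic iterative sigma clip of a light curve. It is an illustration only, with a toy light curve; CIAO's own deflare tooling performs a more careful version of this.

```python
import numpy as np

def sigma_clip(rates, nsigma=3.0, max_iter=10):
    """Iteratively flag light-curve bins deviating > nsigma from the mean.

    Generic illustration of sigma clipping; CIAO's deflare tool is the
    authoritative implementation. Returns a boolean mask of good bins.
    """
    good = np.ones(rates.size, dtype=bool)
    for _ in range(max_iter):
        mean, std = rates[good].mean(), rates[good].std()
        new_good = np.abs(rates - mean) < nsigma * std
        if np.array_equal(new_good, good):   # converged
            break
        good = new_good
    return good

# Toy light curve: quiescent count rates with one flare bin.
rates = np.array([1.02, 0.98, 1.05, 0.95, 1.01, 0.99, 1.03,
                  0.97, 1.00, 1.04, 0.96, 1.00, 5.00])
print(sigma_clip(rates))  # the 5.0 counts/s bin is flagged as a flare
```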

Table 2: The exposure time of every Obs ID after filtering on GTIs. The total filtered exposure time is 101.46 ks.

Obs ID   Unfiltered exp time (ks)   Filtered exp time (ks)
919      19.72                      18.70
7939     49.54                      49.02
9792     33.74                      33.74

The event file contains the raw data of the observation. However, this file still contains instrumental artifacts. These artifacts are a result of the fact that the total observing area of the telescope is not the same as the effective area. The difference is caused by numerous position- and energy-dependent effects, such as mirror reflectivity and the decrease in throughput for off-axis objects (vignetting). The effective area is therefore dependent on energy and position. To remove these artifacts, it is necessary to generate an exposure map, which specifies the effective area at each pixel of the image. Dividing the counts image by the exposure map will yield a flux image, which can be used to calculate the surface brightness of the object.

To calculate the exposure map, we first need to calculate the instrument map: a map of the effective area at each location as projected on the detector. Because the effective area is energy-dependent, we need to know at which energies to perform our calculation. We achieve this by taking a spectrum of the central 80 arcseconds of the source, and fitting a spectral model to it with Xspec. The fit uses the MEKAL plasma emission model (Liedahl et al., 1995). This model describes an emitting plasma in collisional equilibrium. The MEKAL model is combined with an absorption model (wabs) to describe the galactic absorption. A single-component model did not provide a good fit to the spectrum, so we have fit a two-component model where each component has its own temperature, hydrogen density, abundance, redshift and spectral normalization. Such a two-component model is also consistent with observations by Mazzotta et al. (2003) and Sanders et al. (2009), who found that the bright clumps in the core have a temperature different from the surrounding ICM.

For this fit, the hydrogen density and the redshift were frozen while the galactic column density, temperature, abundance and spectral normalization were allowed to vary. The spectrum and the model are shown in figure 6. The results of the fit are shown in table 3.

Table 3: The fit result of the two-component spectral fit. The column density is part of the WABS galactic absorption model and has only one component. The other parameters are part of the MEKAL model.

Fit parameter                    Component 1                    Component 2
Column density 10^22 (cm^-2)     0.22759 ± 0.00182467
kT (keV)                         2.61061 ± 0.0380809            1.15666 ± 0.095996
nH (cm^-3) (frozen)              1.0                            1.0
Abundance                        0.631834 ± 0.0162257           0.631834 ± 0.0162257
Redshift (frozen)                0.0349                         0.0349
Normalization                    (4.17133 ± 0.09809) × 10^-2    (4.47044 ± 1.11160) × 10^-3

CIAO can use the spectrum as a means of weighting each energy band, and calculate the instrument map. The exposure map is then made by projecting this instrument map on the sky. This is different from the static instrument map, because the instrument makes small motions throughout the exposure. This information is saved in the aspect solution file. With the instrument map and the aspect solution file, CIAO can calculate the exposure map with the mkexpmap command.


Figure 6: The spectrum of the central 80 arcseconds of 2A0335+096. A two-component MEKAL model is fit to the spectrum. The fit residuals are plotted below.


Before making a flux image, the background image must also be determined. This can be done by choosing a region within the image that is far from any source, and taking the average count rate in this region as the background. Because 2A0335+096 is an extended source, it is hard to find such a background region in our data. We therefore used background images from the ACIS Calibration Database. These are sets of blank-sky images, created for different CCDs and focal plane temperatures. These background images are then scaled to match the exposure of our datasets in the 9.0 - 12.0 keV band. This energy range is chosen because it is dominated primarily by background photons. The static background images do not account for the motion of the instrument. Therefore, they also need to be projected onto the sky, so that the resulting background images follow the exact same motion as our data.

We now have exposure maps and background maps for every chip of the ACIS observation. For each observation, the maps for every chip are reprojected to a common set of World Coordinate System (WCS) coordinates and combined into one map that includes all chips of that observation. These background maps and exposure maps can then be added to make a large mosaic 'total background map' and 'total exposure map'. In the same way, the event files for each observation can also be added and reprojected to WCS to form the 'total event file'. With the mosaic files, we have combined all the information of the three observations. The total event file mosaic is shown in figure 7.

We can now create a flux image by subtracting the background from the counts image, and dividing by the exposure map:

flux image = (counts image − background) / exposure map

The flux image is filtered to show only the 0.5 - 7.0 keV band. The core of the flux image is shown in figure 8. The last step that we need to perform on the flux image is filtering out the point sources. This is done manually, by drawing regions over point sources on the image and setting the flux there to zero. In this way, the point sources will not interfere with our spectral analysis and surface brightness determination.
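In code, this step is a simple array operation. A minimal numpy sketch (the file names are placeholders; the real maps are the CIAO products described above):

```python
import numpy as np
from astropy.io import fits  # assuming the maps are stored as FITS images

# Placeholder file names for the mosaic maps produced by CIAO.
counts = fits.getdata("total_counts.fits").astype(float)
background = fits.getdata("total_background.fits").astype(float)
expmap = fits.getdata("total_expmap.fits").astype(float)

# flux = (counts - background) / exposure, guarding against zero exposure
flux = np.zeros_like(counts)
good = expmap > 0
flux[good] = (counts[good] - background[good]) / expmap[good]
```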

4 Determining the Pressure Profile

The pressure profile is one of the two components that we need to compute the cavity energy. The pressure is in turn dependent on the temperature and the density, through the ideal gas law. The ACIS instrument is able to determine the energy of each incoming photon. This allows us to extract a spectrum from the data, and fit a spectroscopic model to it. By fitting the spectrum, we can determine the temperature and the density of the gas.

4.1 Binning

The event file of 2A0335+096 does not contain enough counts to be able to do reliable spectral fitting for each pixel: many of the pixels have only one or zero counts.


Figure 7: Mosaic of all counts images, combining the event files of all three observations into one


Figure 9: Example annular binmap for a S/N ratio of 100

We therefore need to bin the data, which will give a spectrum with more data points, and consequently better constraints on the quantities we want to fit to the spectrum.

We want to bin the data in such a way that we can eventually fit a radial profile through the value in every bin. To obtain a radial profile, we wrote a script that bins the data adaptively into annular bins around a specified center pixel. The adaptive binning ensures that each bin contains around the same number of counts, and therefore that the S/N ratio stays constant. This also means that the errors in each bin are constant. In reality, the errors of the outer bins will be higher, because the contribution of background counts increases. For the center pixel, we used the BCG of the target (as shown in Sanders et al., 2009). The script also calculates the surface brightness for every bin, which will be used later. The resulting binmap is shown in figure 9. With annular bins we can eventually fit a profile through the computed value in every bin. This will allow us to get a smooth profile, from which we will be able to compute a pressure value at every pixel.
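The binning script itself is not reproduced in this thesis, but the core idea can be sketched as follows. This is a simplified illustration: the background contribution is ignored, so the S/N of a bin with N counts is approximated as √N.

```python
import numpy as np

def annular_binmap(counts, center, target_snr=100.0):
    """Grow annuli outward until each reaches the target S/N (~sqrt(counts)).

    Simplified sketch: ignores background, so the S/N of a bin holding N
    counts is taken as sqrt(N). Returns an array of bin indices per pixel
    (-1 for pixels beyond the last complete bin).
    """
    ny, nx = counts.shape
    yy, xx = np.indices((ny, nx))
    r = np.hypot(xx - center[0], yy - center[1]).astype(int)  # pixel radius

    binmap = np.full((ny, nx), -1)
    bin_id, accumulated, inner = 0, 0.0, 0
    for radius in range(r.max() + 1):
        accumulated += counts[r == radius].sum()
        if np.sqrt(accumulated) >= target_snr:      # bin is full: assign and reset
            shell = (r >= inner) & (r <= radius)
            binmap[shell] = bin_id
            bin_id, accumulated, inner = bin_id + 1, 0.0, radius + 1
    return binmap
```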

We have made maps for S/N ratios of 50 and 100. However, the core of the cluster is bright enough that the annular bins will still be small in radius even with a S/N of 100. We have therefore chosen to continue with the S/N 100 binmap. It gives us better constraints on the temperature and the density, while the bins are still not so far from each other that they significantly increase the errors on the pressure profile fit.

4.2 Spectral fitting

The spectrum that is extracted from the data is not the same as the physical spectrum. Two files are needed to correct for instrumental effects. The Ancillary Response File (ARF) is a 1-dimensional array that corrects for the quantum efficiency and effective area of the telescope. The Redistribution Matrix File (RMF) is a matrix that maps the relation between the detector pulse height and the energy of the photon. Because the resolution of a detector is not infinite, there will be some 'counts spreading' by the detector. For high resolution instruments the matrix will be almost completely diagonal, but with ACIS there can be significant deviations.

With the binmaps, we can proceed to do spectral fitting for every bin, and obtain maps of the variables that we fit. The fitting is automated by the script in the following way:

1. From the input binmap, make CIAO region files

2. For every bin extract the source and background spectra from the input event files, with the CIAO specextract command. Specextract takes the event file, the bad pixel file, aspect solution file and mask file and generates the spectrum and the ARF and RMF files.

3. Call SHERPA, CIAO's fitting and modelling package, to fit a MEKAL spectral model to every bin. The redshift and an estimate for the galactic column density are required input. Single or multiple temperature models can be chosen, and different parameters can be frozen or thawed. The fits are performed simultaneously on all 3 observations (a sketch of this step is given below).

4. Store the resulting parameters (T, σ_T, Z, σ_Z, norm, σ_norm, χ²) in parameter maps, where every bin now contains the value of the associated parameter in that bin.

The spectrum in any given bin could consist of gas in different phases, each phase carrying its own temperature. It is therefore possible to treat the spectrum as a sum of the emission from different gases with different temperatures. In this case, just fitting one temperature will not lead to a satisfactory result, and we would need to perform a fit with 2 or more components. While making the exposure map, we have seen that a two-component model gives a good fit to the spectrum of the inner 80 arcseconds of the cluster (figure 6). This is backed up by Sanders et al. (2009), who also see evidence for a multi-component core, because the bright blobs in the core have a different temperature than the surrounding medium.
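The per-bin SHERPA call in step 3 might look roughly like the sketch below. This is a schematic reconstruction rather than the thesis script: the file name and the frozen column density value are placeholders, and the exact calls in the actual pipeline may differ.

```python
# Schematic per-bin Sherpa fit (Sherpa is CIAO's fitting package).
# The file name is a placeholder for a spectrum produced by specextract.
from sherpa.astro.ui import *

load_pha("bin_042.pi")          # spectrum of one bin (ARF/RMF found via header)
subtract()                      # subtract the associated background spectrum
notice(0.5, 7.0)                # restrict to the 0.5 - 7.0 keV band

set_source(xswabs.absorption * xsmekal.plasma)
absorption.nH = 0.18            # galactic column density estimate (placeholder)
plasma.redshift = 0.0349        # frozen at the known cluster redshift
freeze(absorption.nH, plasma.redshift)

fit()                           # thawed: temperature, abundance, normalization
result = get_fit_results()      # parameter values and fit statistic for this bin
```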

However, while the core as a whole is well approximated by a two-component model, we have found that a single-component model gives a good fit for an individual annular bin. Because the region in a single bin is much smaller and has far fewer counts, fitting a two-component model would lead to a bad quality fit. At a S/N ratio of 100, adding an extra component to the fit did not improve the quality. Therefore we have assumed a single-component model throughout.

Figure 10: Plot of temperature (kT) as a function of radius (arcsec), with 1σ error bars.

The MEKAL spectral model fits several parameters to the spectrum. Some of these parameters can be frozen, in order to reduce the errors on the other parameters. Before the fit, we estimate the galactic column density nH with the help of HEASARC's nH tool, and keep it frozen during the fit. The redshift is also frozen at the known value of 0.0349, and the hydrogen density is frozen at 1.0 cm^-3. This leaves us with 3 parameters to fit: the temperature, the metallicity, and the spectral normalization, as well as their associated errors and the χ² value for each bin. The output of the script is a total of seven parameter maps.

We have also run the fitting procedure with the hydrogen density unfrozen, but found that this leads to bigger errors on the other parameters, while not significantly changing the output.

4.3 Temperature profile

The temperature map is obtained by fitting a temperature to the spectrum in each bin. A plot of the temperature as a function of radius is shown in figure 10. The temperature is coolest in the center, which is expected because there the gas is densest and has the most collisional cooling. Moving outwards, the temperature first decreases a very small amount before it slopes back up. This effect is seen in other clusters as well, and is likely caused by the contribution of non-thermal emission from the central AGN to the innermost bin. At larger radii, the error bars increase, and they get very large for the outermost bins. This is because, for the outermost bins, the spectrum consists almost entirely of background counts. This makes it much harder to constrain the temperature in these bins.


4.4 Density profile

The density is dependent on the spectrum normalization, which has been calculated in the fitting process. The spectrum normalization is dependent on the emission measure, which is a measure of the amount of emitting material within the volume sampled by the spectrum. The relationship between emission measure and density is given by equation 2. To obtain the density for each bin, we convert between the emission measure (EM) and the spectral normalization (Y) by multiplying with the MEKAL constant in equation 3 (Dorman, 2013). The MEKAL constant depends on the angular diameter distance D_a and the redshift z. We also assume abundances of X=0.75 and Y=0.24. From this, it follows that n_H n_e = 1.23 n². We can then compute the density through equation 4.

EM = \int n_H n_e \, dV    (2)

Y = EM \cdot \frac{10^{-14}}{4\pi (D_a(1 + z))^2} = EM \cdot c_{MEKAL}    (3)

n = \sqrt{\frac{Y}{1.23 \, c_{MEKAL} \, V}}    (4)

The volume can be calculated analytically because we are using spherical shells with known inner and outer radii. Figure 11 gives an illustration of the volume along the line of sight. Suppose that we want to know the volume inside the green bin, as we see it in the 2-dimensional image. Regions A1 and A2 are not part of the 2-dimensional bin, so they are not counted in the volume calculation. The total bin volume is the solid of rotation of region B around the vertical axis. In terms of an inner radius r_min and an outer radius r_max, the volume is given by equation 5. The three terms in the equation correspond to the total volume contained within the bin, minus the volume of cylinder DEFG, minus the volume of the spherical caps north and south of the cylinder.

V = \frac{4\pi}{3} r_{max}^3 - 2\pi r_{min}^2 \sqrt{r_{max}^2 - r_{min}^2} - \frac{2\pi}{6}\left(r_{max} - \sqrt{r_{max}^2 - r_{min}^2}\right)\left[3 r_{min}^2 + \left(r_{max} - \sqrt{r_{max}^2 - r_{min}^2}\right)^2\right]    (5)

With equations 4 and 5, we can calculate the density. The resulting density plot is shown in figure 12. The gas is dense in the center of the cluster, and the density slowly decreases outwards, as we would expect. The gas density increases for a few bins before it starts decreasing. This is the same effect as we saw in the temperature profile, and reflects the fact that our central bin may include an additional spectral component due to the central AGN.
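Putting equations 3, 4 and 5 together, the per-bin density computation might look like the following sketch (the angular diameter distance D_a and the input values are placeholders; the real script works on the parameter maps described in section 4.2):

```python
import numpy as np

def shell_volume(r_min, r_max):
    """Equation 5: line-of-sight volume of an annular bin (sphere of r_max
    minus the inner cylinder and its two spherical caps). Radii in cm."""
    w = np.sqrt(r_max**2 - r_min**2)     # half-height of the inner cylinder
    h = r_max - w                        # height of each spherical cap
    return (4 * np.pi / 3) * r_max**3 \
        - 2 * np.pi * r_min**2 * w \
        - (2 * np.pi / 6) * h * (3 * r_min**2 + h**2)

def density(norm, r_min, r_max, D_a, z):
    """Equations 3 and 4: gas density from the MEKAL normalization Y."""
    c_mekal = 1e-14 / (4 * np.pi * (D_a * (1 + z)) ** 2)
    return np.sqrt(norm / (1.23 * c_mekal * shell_volume(r_min, r_max)))
```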

4.5 Pressure profile

The pressure is dependent on the temperature and the density through the ideal gas law, which can be written as equation 6.


Figure 11: Illustration of the bin volume for a sample cluster (M. Wise, private communication).


Figure 12: Plot of the density (cm^-3) as a function of radius (arcsec), with 1σ error bars.


We can now take the temperature and density in each bin, and compute the pressure in exactly the same manner. The calculation yields a pressure value for each bin, from which we can derive the radial pressure profile. These binned pressures are shown in figure 13(a).

We would like to model the pressure with a smooth function. We therefore fit a beta model through the data (Cavaliere & Fusco-Femiano, 1976). The beta model is often used to describe the properties of X-ray emitting gas. This profile assumes a fairly flat dependence in the core, followed by a steep dropoff beyond a certain radius, as given by equation 7. P_core indicates the value of the pressure at the core. r_core is the core radius, where the pressure begins to drop off. β is the factor that determines the steepness of the pressure drop. As we can see from figure 13(a), the errors on the outer bins grow increasingly large. Similar to the outer bins of the temperature profile, these larger error bars are caused by the increasing contribution of background counts. For this reason, we have not taken the outer few bins into account during the fitting. The beta model fit is shown in figure 13(b). The fit output is shown in table 4.

P = nkT    (6)

P(r) = P_{core} \left(1 + \left(\frac{r}{r_{core}}\right)^2\right)^{-3\beta + 1/2}    (7)

Once we have performed the fit, we can find out the error at each point by Monte Carlo simulation. With the fit parameters and the covariance matrix, we generate a set of 200 random, normally distributed fit parameters. With these randomly generated fit parameters we plot a distribution of beta models, and take the width of this distribution as the 1σ error bar. The pressure profile with smooth error bars is shown in figure 13(c), and is the profile that we will ultimately use for our energy calculation.
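A sketch of this Monte Carlo step is shown below. The parameter vector follows equation 7; the numerical values are placeholders loosely based on table 4 (the β value and the covariance matrix in particular are invented for illustration), and in practice popt and pcov come from the fitting routine.

```python
import numpy as np

def beta_model(r, p_core, r_core, beta):
    """Pressure beta model, equation 7."""
    return p_core * (1.0 + (r / r_core) ** 2) ** (-3.0 * beta + 0.5)

rng = np.random.default_rng(42)
r = np.linspace(1.0, 300.0, 200)             # radii in arcsec

# popt / pcov as returned by the fit; placeholder values for illustration.
popt = np.array([3.1e-10, 94.3, 0.62])       # P_core (erg/cm^3), r_core, beta
pcov = np.diag([0.1e-10, 6.4, 0.03]) ** 2    # diagonal covariance (placeholder)

# Draw 200 parameter sets consistent with the fit and its covariance,
# evaluate the model for each, and use the spread as the 1-sigma band.
samples = rng.multivariate_normal(popt, pcov, size=200)
curves = np.array([beta_model(r, *p) for p in samples])
sigma = curves.std(axis=0)                   # smooth error bar at each radius
```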

Table 4: Output of the pressure fit

P_core (erg cm^-3)   (3.086 ± 0.100) × 10^-10
r_core (arcsec)      94.34 ± 6.447


Figure 13: Steps to obtaining a smooth pressure profile. The top image (a) shows the pressure for each bin. The middle image (b) shows a beta model fit through these data. The bottom image (c) shows the resulting pressure profile with smooth 1σ error bars.


5 Cavity Identification and Volume

Now that we have the smooth pressure profile, the next step is to determine the total volume contained in the cavities. To do that, we first have to identify the total cavity area on our image. We therefore need a method to determine which pixels we classify as cavity pixels.

To identify cavities, we make the assumption that the flux image of 2A0335+096 is composed of two different parts. The first is the large-scale, smooth surface brightness profile. This surface brightness profile follows a beta model, similar to the pressure profile in section 4.5: the cluster contains a bright core, a steep drop in brightness beyond a certain core radius, eventually flattening out at a background level. The second component of the image is the smaller-scale deviations from the smooth model. These deviations are seen in the center of the cluster and are either brighter clumps of gas or dips in the surface brightness, relative to the smooth component.

If we identify the smooth profile, and subtract it from the flux image, we are left with a residual image. This residual image should only show the deviations, without any interference from the smooth surface brightness profile. We can then try to identify cavity pixels on this image, by looking at how much the pixels deviate from the mean. To make a good residual image we will have to determine the smooth surface brightness profile as precisely as possible. There are multiple approaches to this, as we will discuss in sections 5.1 to 5.4.

Once we have identified all the cavity pixels on our image, the next step is to convert the cavity area into a cavity volume. We have seen in the introduction that the approach that has often been taken is to assume a spherical or elliptical shape. Other projection methods are also possible. We discuss our method in section 5.5.

5.1 Unsharp masking

Unsharp masking is a technique that can enhance features of a certain scale in an image by subtracting a smoothed version of the original image. This unsharp mask is a blurry version of the actual image, smoothed over certain scales. When the unsharp mask is subtracted from the image, we are left with a residual image that enhances features below the smoothing scale, while features with sizes around the smoothing scale will generally disappear. This technique is also used by Sanders et al. (2009) to highlight the cavities of 2A0335+096. Two unsharp masked residuals made with two different smoothing scales are shown in figure 14.
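As a sketch, an unsharp-masked residual with a Gaussian kernel can be produced in a few lines (the file name is a placeholder):

```python
import numpy as np
from astropy.io import fits
from scipy.ndimage import gaussian_filter

flux = fits.getdata("flux_image.fits").astype(float)  # placeholder file name

# Unsharp mask: subtract a heavily smoothed version of the image so that
# features smaller than the smoothing scale stand out in the residual.
smoothing_scale = 30                    # pixels, following Sanders et al. (2009)
mask = gaussian_filter(flux, smoothing_scale)
residual = flux - mask
```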

While this is a quick and easy technique that immediately gives better insight into the core structure of 2A0335+096, there are also some problems with unsharp masking. The creation of an unsharp mask introduces extra parameters to the problem. To make the residual image we need to decide which smoothing scale we would like to use.


Figure 14: Residuals made by unsharp masking, by subtracting the data smoothed over 20 pixels (top) or over 30 pixels (bottom). The cavities and bright features of the object become more enhanced when smoothed over a radius of 30 pixels. Arc-like artifacts can also be vaguely seen around the object.


Choosing a scale that is too large or too small will not properly highlight the features that we would like to see. We have followed the example of Sanders et al. (2009), and used a smoothing scale of 30 pixels. From figure 14, we can see that a smoothing scale of 30 pixels highlights the cavities better than a smoothing scale of 20 pixels. However, this choice is a subjective one, made by eye and by guessing a good smoothing scale based on the scale length of the cavities. It is hard to quantify which smoothing scale corresponds best with the real smooth surface brightness profile of the object. It is also hard to automate such a subjective definition, and automation is ultimately one of our goals.

Another problem is that the image can be smoothed with different smoothing functions. In the image above we have smoothed with a Gaussian. However, a Gaussian smoothing function introduces arc-like artifacts in the image, because the shape of the smoothing function might not correspond with the shape of the surface brightness function. These artifacts can bias our search for cavity pixels, because the surface brightness might be over- or underestimated at certain points.

Because of the smoothing and the subtraction it is hard to quantify the uncertainties involved when making the residual image. We can see in figure 14 that the residuals for different smoothing scales can look very different. Yet this difference hinges on a fairly arbitrary decision, and we do not have a good way of saying which residual image corresponds most closely to the real situation. This is one of the reasons why it is hard to quote errors on the cavity energy calculation.

5.2 Surface brightness profile fit

Because of the problems with unsharp masking, we try a different technique. We have calculated the surface brightness and associated errors for the same annular bins that we used in the pressure calculation in section 4.1. We can fit a beta model to these data to obtain a smooth radial surface brightness profile. Subtracting this profile from the flux image gives us another way to obtain a residual image. We fit a beta model to the data, as described in equation 8:

S(r) = S_{core} \left(1 + \left(\frac{r}{r_{core}}\right)^2\right)^{-3\beta + 1/2} + S_b    (8)

Analogous to the pressure profile fit, S(r) is the surface brightness at radius r, S_core is the surface brightness in the core, r_core the radius of the core, and β a factor that determines how steeply the surface brightness drops outwards. Finally, S_b is a background term that the function should eventually flatten towards. The fit can be seen in figure 15. The output fit parameters can be seen in table 5. With this fit, we can make a radial profile image, where each pixel has the surface brightness value that is determined by the beta model and the fitted parameters. This image can then be subtracted from the real image to obtain the residual image, shown in figure 16.


Figure 15: Beta model fit of the radial surface brightness profile

Figure 16: Residual image made by subtracting the radial SB beta model fit from the image


Table 5: Output of the 1-dimensional surface brightness fit

S_core (counts cm^-2 arcsec^-2 s^-1)   (1.466 ± 0.053) × 10^-7
r_core (arcsec)                        31.22 ± 1.125
β                                      (5.580 ± 0.064) × 10^-1
S_b (counts cm^-2 arcsec^-2 s^-1)      (2.399 ± 0.157) × 10^-10

Figure 16 shows the residual image that is created by subtracting the radial surface brightness profile from the image. Similar to the residual image created by unsharp masking, we can again see arc-like artifacts in the residual image. These seem to be caused by the same issue: we have assumed a radially symmetric brightness profile, while our original target is not a perfectly radially symmetric source. The mismatch in shapes causes these artifacts. Something else we can see in this residual image is the big black spot on the north side of the center. This black spot would suggest the presence of a large cavity at this location, which is not visible in the unsharp masked residual image. This suggests that the beta model fit is not precise enough, causing too much flux to be subtracted from this area. Another factor that plays a role here is the strength of the surface brightness dips compared to the overall surface brightness function. What we are trying to fit is the smooth surface brightness function, yet we have to perform our fit on the entire image, including all deviations from the smooth function. Larger deviations will distort our fit more, and cause the resulting residual image to be less accurate.

The large surface brightness dip on the residual image will result in artificially large values in our cavity energy calculations. We will therefore consider a more flexible model in the next section.

5.3 Elliptical fit

One problem with the two residuals that we have created above is that arc-like artifacts are created when doing the subtraction. In the radial surface brightness profile fit, we have also seen that the fit is not precise enough, causing too much or too little flux to be subtracted from certain areas. This will heavily distort our cavity identification procedure.

To obtain a better smooth profile, we need to find a surface brightness profile that better matches the actual shape of the object. We can improve on the surface brightness profile by performing a 2-dimensional, rather than a 1-dimensional fit. By fitting in 2 dimensions, we don't have to assume radial symmetry and we can fit a more general shape. The fitting procedure is inspired by GALFIT, software designed to fit surface brightness models to optical data (Peng et al., 2002; Peng et al., 2010). GALFIT uses a general elliptical shape, which can be modified by Fourier modes to make highly irregular shapes. The shape is described by equations 9 and 10.


Figure 17: Sample surface brightness images for different parameter choices (Peng et al., 2010).

r_0(x, y) = \sqrt{x^2 + \left(\frac{y}{q}\right)^2}    (9)

r(x, y) = r_0(x, y) \left(1 + \sum_m a_m \cos(m[\theta_{\rho a} + \phi_m])\right)    (10)

Equation 9 describes the radius of an ellipse with an ellipticity factor q, with q=1 for a circular shape that becomes more elliptical as q approaches 0. Equation 10 then modifies the radius of equation 9 by adding Fourier terms to it. In this equation, θ_ρa is the cosine phase angle. φ_m and a_m are free parameters that describe the rotation and the amplitude of the Fourier mode. Figure 17 shows how these parameters modify the elliptical shape for varying q, a_m, θ_ρa and φ_m.

In principle, endlessly complicated shapes can be made by adding many different Fourier modes with different amplitudes and rotations. In our fitting procedure, we choose to only add the first-order Fourier term, m=1. This is done for two reasons: firstly, the shape of 2A0335+096 seems to match this type of shape quite well. Secondly, our X-ray data is much more limited than the optical data that GALFIT was designed for. With just the first-order Fourier term, we already have 9 free parameters, and we would like to limit the number of free parameters so that they can be constrained as well as possible. By filling in equation 10 for m=1, we obtain equation 11.


r(x, y) = r_0(x, y) \{1 + a \cos(\theta_{\rho a} + \phi)\}    (11)

Equations 9 and 11 describe the shape that we want to fit, through the radius. We can insert this radius into the beta profile, given by equation 8. These three equations together describe the model that we want to fit to the flux image (a sketch implementing this shape is given after the parameter list below). Taken together, we end up with 9 free parameters to fit:

Central pixel coordinates: These two free parameters, cp_x and cp_y, describe the coordinates of the center of the ellipse. Initially, we used the location of the central BCG (Sanders et al., 2009), but found that taking this as the central pixel does not necessarily correspond to the best fit. These parameters turn out to have a large impact on the χ² value, and consequently are left as free parameters.

Ellipticity: The factor q that describes the ellipticity of the shape. The shape is circular for q=1 and becomes more elliptical as q goes to 0.

Fourier parameters: These are the Fourier amplitude a and the rotation φ.

Beta model parameters: These are the parameters that we fit in our original beta model: the central surface brightness S_core, the core radius r_core, the beta factor β, and the background S_b.
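To make the model concrete, the following minimal sketch evaluates equations 9 and 11 on a pixel grid and inserts the resulting radius into the beta profile. All names are our own illustration; we assume here that equation 8 has the standard beta-model form S(r) = S_core (1 + (r/r_core)²)^(0.5 - 3β) + S_b, and that the phase angle is measured around the ellipse center.

```python
import numpy as np

def elliptical_beta_model(shape, cp_x, cp_y, q, a, phi,
                          s_core, r_core, beta, s_b):
    """Sketch of the 9-parameter surface brightness model:
    equations 9 and 11 feed a modified radius into the beta profile."""
    y, x = np.indices(shape)
    dx, dy = x - cp_x, y - cp_y
    # Equation 9: generalized elliptical radius with axis ratio q.
    r0 = np.sqrt(dx**2 + (dy / q)**2)
    # Equation 11: first-order (m=1) Fourier perturbation of the radius;
    # we take the phase angle as the position angle around the center.
    theta = np.arctan2(dy, dx)
    r = r0 * (1.0 + a * np.cos(theta + phi))
    # Beta profile plus constant background (assumed form of equation 8).
    return s_core * (1.0 + (r / r_core)**2)**(0.5 - 3.0 * beta) + s_b
```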

The fit is done with Maximum Likelihood Estimation (MLE). MLE is a way of determining the likelihood that a model with certain parameters reproduces the data that you observe. By maximizing this likelihood, we can find which fit parameters are most likely to reproduce our data. To find the likelihood, we need our physical model, described above in equations 8, 9 and 11. We also need the statistical distribution. For our data, which consists of photons arriving one by one at the telescope, this is a Poisson distribution. Equation 12 describes a simple Poisson distribution, dependent only on a rate λ.

$$ P(y|\lambda) = \frac{e^{-\lambda}\lambda^{y}}{y!} \qquad (12) $$

Equation 12 gives the probability that a single pixel has the value y, based on a Poisson distribution with just one parameter λ. In our case, the value of every pixel is described by the physical model, which we call m(θ) with K parameters θ = (θ_1, θ_2, ..., θ_K).

The likelihood that we are looking for is then the Poisson probability for every pixel, multiplied over all N pixels. This likelihood is given by equation 13.

$$ L(\theta) = P(y|\theta) = \prod_{i=0}^{N} \frac{e^{-m_i(\theta)}\, m_i(\theta)^{y_i}}{y_i!} \qquad (13) $$

Because the likelihood is a product of many small probabilities, it quickly becomes numerically unwieldy, and it is more convenient to work with the log of the likelihood instead. This change also simplifies the equation, because taking the log converts the product ∏ into a summation ∑. Furthermore, we switch signs, because it is numerically easier to minimize a function than to maximize it. So instead of maximizing the likelihood, we will minimize the negative log-likelihood:

$$ -\log L(\theta) = \sum_{i=0}^{N} \left( m_i(\theta) - y_i \log m_i(\theta) + \log y_i! \right) \qquad (14) $$
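A minimal sketch of equation 14, reusing the model sketch above and evaluating log(y_i!) via the log-gamma function for numerical stability (the clip guarding against log(0) is our addition):

```python
import numpy as np
from scipy.special import gammaln

def neg_log_likelihood(theta, data):
    """Equation 14: Poisson negative log-likelihood of the model image.
    log(y!) is computed as gammaln(y + 1)."""
    model = elliptical_beta_model(data.shape, *theta)
    model = np.clip(model, 1e-12, None)  # avoid log(0)
    return np.sum(model - data * np.log(model) + gammaln(data + 1.0))
```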

During the fitting process, we want to make sure that our parameters stay within acceptable ranges. We do this by setting priors on all of the parameters. Priors are a way of specifying our advance knowledge of the model. We can enforce bounds on every parameter by assigning a very high prior value to values outside of our permissible range: for example, we don't want the central pixel of the ellipse to lie outside of the image bounds. While fitting, the algorithm might try a parameter value outside of the specified range, but the high prior value will ensure that the minimization algorithm forces the parameter back into the desired range. The prior also allows us to assign a higher probability to parameter values that are more likely to be near the best-fit value. The prior function is multiplied with the likelihood function to obtain a posterior function. This is the function that we ultimately minimize.

Sample runs of the fitting procedure show that the log-likelihood term is orders of magnitude bigger than the prior term and dominates the value of the log-posterior function. This means that the precise form of the priors we set is relatively unimportant, since it has very little impact on the final value of the log-posterior function. The most important thing the priors do in our case is to make sure that the parameters don't jump to impermissible values. We have therefore set ranges on all fit parameters, and made all values within each range equally likely. The fitting output is shown in figure 18.
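As an illustration of such flat, bounded priors, a minimal sketch could look as follows. The ranges shown are placeholders, not the bounds actually used in our fit; note that in log space the multiplication of prior and likelihood becomes an addition.

```python
import numpy as np

# Placeholder bounds for (cp_x, cp_y, q, a, phi, S_core, r_core, beta, S_b);
# None means the parameter is unbounded on that side.
BOUNDS = [(0, 1024), (0, 1024), (0.05, 1.0), (-0.95, 0.95),
          (-np.pi, np.pi), (0, None), (1, None), (0.3, 3.0), (0, None)]

def neg_log_prior(theta):
    """Flat prior: zero inside the allowed ranges, infinite outside,
    which forces the minimizer back into the permissible region."""
    for value, (lo, hi) in zip(theta, BOUNDS):
        if (lo is not None and value < lo) or (hi is not None and value > hi):
            return np.inf
    return 0.0

def neg_log_posterior(theta, data):
    # Negative log-posterior = negative log-prior + negative log-likelihood.
    lp = neg_log_prior(theta)
    return lp if np.isinf(lp) else lp + neg_log_likelihood(theta, data)
```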

5.4 MCMC sampling

Now that we have performed the fit, we need some way to measure its quality. The most straightforward way to do this is with the χ²-statistic, which sums the squared deviations between the data and the model. In our case, the χ² is not a very good measure of the fit quality because of the way our problem is set up: deviations from our model are exactly what we are looking for. We already know that our data can deviate at certain points on the image, so the χ² value could be high in those places if the cavities are strong. The χ² value can therefore be high for two different reasons: either the cavities contain large surface brightness dips, or the quality of the fit is bad. The χ² value doesn't differentiate between these two cases, and as a result it does not tell us what we want to know.

Rather than trying to find a goodness-of-fit statistic, we would like to sample our log-posterior function around the best-fit parameters. Because we have 9 fit parameters, the log-posterior function is a highly complex function in a 9-dimensional parameter space. We can indirectly probe this function by setting up a Markov chain that has the same distribution as our probability distribution function. If we iterate through this Markov chain enough times, taking samples at every point proportional to the value of the log-posterior function at that point, we end up with a parameter distribution that looks like our actual log-posterior function. This is the basic idea behind Markov Chain Monte Carlo (MCMC) sampling (for a review, see Andrieu et al., 2003).

Different families of MCMC algorithms differ in how they set up the Markov chain, how exactly they walk through it, and which proposed values are accepted or rejected. We have used the Python emcee package, which implements Goodman and Weare's Affine Invariant Ensemble Sampler (Goodman & Weare, 2010). We draw a random sample of starting parameters from a normal distribution, with our best-fit parameters as the mean and the covariance matrix of the fit setting the spread. We feed these samples to the sampler, which then performs its Markovian walk through the log-posterior function.
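A sketch of this sampling step with emcee is shown below. Here `best_fit` and `covariance` stand for the MLE parameters and their covariance matrix from the previous section, and the walker and step counts are illustrative. Note that emcee expects a log-probability to maximize, so we flip the sign of our negative log-posterior.

```python
import numpy as np
import emcee

def log_posterior(theta, data):
    # emcee maximizes log-probability, so negate our minimization target.
    return -neg_log_posterior(theta, data)

ndim, nwalkers, nsteps = 9, 64, 5000            # illustrative settings
# Scatter the walkers around the best-fit parameters according to the
# covariance matrix estimated by the fit.
p0 = np.random.multivariate_normal(best_fit, covariance, size=nwalkers)

sampler = emcee.EnsembleSampler(nwalkers, ndim, log_posterior, args=(data,))
sampler.run_mcmc(p0, nsteps, progress=True)
samples = sampler.get_chain(discard=1000, flat=True)  # drop burn-in
```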

The result of the MCMC run is a large set of parameters that samples the log-posterior function. We can draw a random sample from this parameter set, make a surface brightness model from these parameters, make a residual image, and use this residual image in our cavity energy calculation. If we repeat this procedure with many residuals, created by sampling random parameter sets from the MCMC run, we end up with a distribution of energies. From this distribution we can calculate a mean and a variance, which tells us how different parameter sets, sampled around our best-fit parameters, affect our final calculations.
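Schematically, this error propagation looks like the sketch below, where `cavity_energy` is a hypothetical stand-in for the contour and pressure-profile pipeline described in the remainder of this chapter:

```python
import numpy as np

def energy_distribution(samples, data, n_draws=500):
    """Propagate MCMC parameter scatter into a distribution of energies
    by building one residual image per randomly drawn parameter set."""
    energies = []
    for theta in samples[np.random.randint(len(samples), size=n_draws)]:
        residual = data - elliptical_beta_model(data.shape, *theta)
        energies.append(cavity_energy(residual))  # hypothetical helper
    energies = np.asarray(energies)
    return energies.mean(), energies.std()
```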

We can check the results of the MCMC run by plotting the entire walk through parameter space for each parameter, shown in figure 19. From this plot we can see that the value of each parameter hovers nicely around the best-fit values. This is a strong indication that the parameter distribution we have obtained indeed resembles the log-posterior function.

5.5 Residuals, Cavity Areas and Volumes

5.5.1 Residuals overview

We have considered three methods to obtain a residual image. The first is the residual image made through unsharp masking, shown in figure 14. The advantage of this method is that it is very easy and computationally inexpensive, and that one can quickly gain insight into the structure of the cluster. The disadvantage is that it adds parameters to the problem that are hard to optimize: the smoothing scale and the smoothing shape. The smoothing shape leads to arc-like artifacts on the image, which might skew our calculations.

The second method is to take the surface brightness from each annular bin and fit a 1-dimensional beta model through it. This residual image is shown in figure 16. However, this yielded a residual image of poor quality. The same arc-like artifacts are visible on the residual image because of the mismatch in shape, and it is clear that some parts are severely over- or underestimated. Therefore, we decided to improve on it.

To better approximate the shape of the object, our third method was to perform a 2-dimensional surface brightness fit. Instead of trying to determine a goodness-of-fit statistic, we MCMC-sampled the log-posterior function of this fit, and then obtained residual images calculated from random parameter sets out of this sample. The best-fit residual image, made by subtracting the model calculated with the best-fit parameters, is shown in figure 18. We will use the unsharp masked residual image and the family of residuals created from the MCMC sample for our further analysis.

5.5.2 Cavity Identification

The original intent was to make a pixel-by-pixel mask for every residual image, and to calculate a volume from this mask. We can make a mask by comparing the data with a simulated model from the MCMC sample. For every pixel, we can check the likelihood that the data is reproduced by this simulated model. This likelihood then allows us to make a cavity mask. We could, for example, choose a cutoff likelihood of 0.001: this means that if the data perfectly agrees with the model, only 1 in 1000 pixels will fall below this likelihood. This should allow us to filter for deviations from the smooth surface brightness model, and give us a way to identify cavities.
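One plausible implementation of this per-pixel test, assuming we flag only downward deviations (surface brightness dips), uses the lower-tail Poisson probability; the threshold and clipping are our choices:

```python
import numpy as np
from scipy.stats import poisson

def cavity_mask(data, model, cutoff=0.001):
    """Flag a pixel as a cavity candidate when the Poisson probability
    of observing at most the measured counts, given the model rate,
    falls below the cutoff likelihood."""
    p_low = poisson.cdf(data, np.clip(model, 1e-12, None))
    return p_low < cutoff  # boolean mask of candidate cavity pixels
```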

Unfortunately, a mask created in this way, shown in figure 20, does not allow us to reliably calculate a cavity volume. The pixels with a brightness dip big enough to be classified as cavity pixels are too spread out and never form a single area that can be classified as a cavity. This makes it hard to specify the edges of every cavity, and to determine which areas constitute a single cavity in the first place: something that is important for the volume calculation. More data and a better signal-to-noise ratio would improve this situation. The fact that 2A0335+096 is such a complex object exacerbates the problem: the relatively large deviations from the smooth profile make both unsharp masking and the surface brightness fitting methods less reliable.

Instead of using the mask shown in figure 20, we use a contour algorithm on the residual image. Creating contours at a certain smoothing scale and contour limit produces cavities with well-defined edges. This allows us to consider the volume of each cavity separately, before adding them all together. An example of contours around the cavities is shown in figure 21. When making contours, two additional parameters enter the problem: the smoothing scale of the image, and the flux limit at which we place the contours. We need to consider these parameters more closely to find suitable choices for them.


Figure 20: Example of a mask created by comparing the data and the model, with a cutoff likelihood of 0.05. The red pixels in the image are cavity pixels.

The smoothing scale needs to be bigger than the PSF of the telescope. Smoothing over a scale close to or bigger than the cavity size will smooth out the cavities, making us unable to identify them. Not smoothing at all will give us the same problem as when making the mask: it will yield many small, one- or few-pixel-sized cavities spread out over the image. We therefore choose several smoothing scales between these two extremes, at 3, 5, and 7 pixels. This range of smoothing scales allows us to get an idea of how the smoothing scale affects our results. We will discuss the choice of a contour limit in the next section.
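A minimal sketch of this contouring step, assuming the smoothing scale is used as the sigma of a Gaussian kernel and using a generic contour tracer (our choice of tools; the text does not specify an implementation):

```python
from scipy.ndimage import gaussian_filter
from skimage import measure

def cavity_contours(residual, smooth_scale, contour_level):
    """Smooth the residual image and trace closed contours at the given
    flux level; scales of 3, 5 and 7 pixels are tried in the text."""
    smoothed = gaussian_filter(residual, sigma=smooth_scale)
    # Each element is an (N, 2) array of (row, col) contour vertices.
    return measure.find_contours(smoothed, contour_level)
```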

5.5.3 Choosing a contour limit

Choosing a correct flux limit at which to place the contour is not straightforward. We can't use a pixel-by-pixel treatment like when making the mask; in placing contours, we are forced to use a single value over the whole image. Naively, we would just take the mean and standard deviation of the entire residual image, and then place the contour at µ − nσ. This, however, assumes that the residual image is normally distributed, which is likely not the case: the underlying distribution, i.e. our original event file, is a Poisson distribution.

Figure 21: Example of cavity contours on the best-fit residual image with a smoothing scale of 5 pixels

To check the distribution of the residual image, we plot a histogram of the central 400x400 pixels of the best-fit residual image, as shown in figure 22. A normal distribution with the mean and variance of the pixel values is plotted over the histogram. One can clearly see that the pixels of the residual image do not follow a normal distribution. This is not surprising, considering that the original counts image follows a Poisson distribution.

The distribution of the central region in figure 22 already contains the cavity pixels that we are looking for. Instead of looking at the entire central region, we can also plot the distribution of a region slightly outside the center. In such a region, the surface brightness is still close to the value it has in the center, but there are no cavities or bright clumps that skew the distribution. This means that the region should show only the smooth part of the residual image, where the pixels are dominated by the smooth surface brightness profile. Such a region is shown in figure 23. If we plot the distribution in this region, we should get the expected distribution in the absence of any physical deviations from the smooth model.

The pixel distribution of the box shown in figure 23 is shown in figure 24. Based on this distribution, we can define a cutoff value at a certain percentage of the total histogram area. We can, for example, choose a cutoff at an area of 0.1, or 10%, meaning that 10% of all pixels in the distribution lie to the left of the cutoff value. We make the assumption that the distribution of this region represents the smooth part of the residual image over the entire center, and that this is the distribution we would see in the absence of deviations from the smooth model. The 10% cutoff is illustrated in figure 24 by the green bins.

If a pixel has a value below the cutoff, the chance is 10% that it is not a cavity pixel but simply an outlier of the smooth part of the residual image. We base our contour limits on these cutoffs. In the rest of this thesis, when we refer to 'a contour limit of 10%', we mean 'the contour limit to the left of which lie 10% of the pixels of the distribution of a smooth region just outside the center of the image'.

The proximity of the pixels influences the cavity area that will be detected: we expect the outliers to be randomly distributed throughout the image, while the cavity pixels are likely to lie close to one another. Smoothing the image ensures that a one-pixel contour will not easily form around outlier pixels, while at the same time increasing the chance that a group of nearby cavity pixels is seen as one cavity. We choose our cutoff values at 1%, 5% and 10%, so that we can study the impact of the contour limit on the final energy distribution.
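In code, such a cutoff is simply a percentile of the pixel values in the chosen cavity-free region; the box coordinates below are placeholders:

```python
import numpy as np

def contour_limit(residual, box, percent):
    """Flux value below which `percent` % of the pixels of a smooth,
    cavity-free region lie; used as the contour limit."""
    x0, x1, y0, y1 = box
    return np.percentile(residual[y0:y1, x0:x1], percent)

# e.g. limits for the 1%, 5% and 10% cutoffs (placeholder box):
# limits = [contour_limit(res, (450, 550, 200, 300), p) for p in (1, 5, 10)]
```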

The distribution in figure 24 is double-peaked. This illustrates that our assumption is not perfect, and that the distribution within the chosen region might not perfectly represent the smooth distribution throughout the entire center. The double peak is likely caused by the same arc-like artifacts that we saw in the unsharp masked and 1-dimensional surface brightness fit residual images. The artifact causes the peak pixel value to differ between different parts of the chosen region. While this has a large impact on the middle of the distribution, the left part, where the cutoff value is placed, is not affected too much. Nevertheless, the imperfection of our assumption is another reason to try several different cutoff values.

5.5.4 Conversion to Volume

The contours yield a 2-dimensional cavity area, but we still need to convert this to a volume. In principle, the value of the surface brightness dip at each pixel could be used as a measure of cavity depth: a bigger negative deviation from the smooth surface brightness profile implies that the cavity at that point is deeper. We would need to make assumptions about the geometry of the cluster to determine how exactly the surface brightness dip corresponds to a cavity depth. This method would allow us to map the 3-dimensional structure of the cavity more precisely.

However, our attempts to make a mask in the previous section have already shown that the data is not sensitive enough to identify cavities pixel-by-pixel. This also excludes using the surface brightness dip at each pixel as a measure of the depth. Given 'perfect' data with infinite sensitivity, this would surely be the best method, as it would require no assumptions about the volume of the cavity; one could simply map it pixel-by-pixel. Since our data is not sensitive enough, we have to find another way to convert from cavity area to volume.
