
Combined analysis of FDG-PET and CT images with application to brain cancer using Matlab-based program

Leon Walter Vink 10189912

University of Amsterdam

Master Thesis

Study: Physics: Life and Health

Period of writing: 1st of September – 26th of June 2015
Date of handing in: June 2016

Supervisor Holland: Gustav Strijkers
Supervisor Portugal: Nuno Matela
Second assessor: Ton van Leeuwen


Abbreviations

FDG Fluorodeoxyglucose
CT Computed Tomography
PET Positron Emission Tomography
FC Fundação Champalimaud
ITA Injection-to-Acquisition
AD Alzheimer Disease
DLB Dementia with Lewy Bodies
FTD Fronto-Temporal Dementia
PPA Primary Progressive Aphasia
NPH Normal Pressure Hydrocephalus
TSH Thyroid Stimulating Hormone
QOL Quality of Life
DOF Degrees of Freedom
LOR Line of Response
FT Fourier Transformation
HU Hounsfield Unit
TOF Time of Flight
BP Back Projection
BPF Back Projection Filtered
FBP Filtered Back Projection
MI Mutual Information
SSD Sum of Squared Differences
SAD Sum of Absolute Differences


Index

1. Introduction
   1.1. Medical Imaging Modalities
      1.1.1. Computed tomography
      1.1.2. Filtered back projection
      1.1.3. Hounsfield units
      1.1.4. Fluorodeoxyglucose - positron emission tomography
      1.1.5. Decay correction
      1.1.6. Standard Uptake Value
   1.2. Image registration
      1.2.1. Intensity based image registration
      1.2.2. Mutual information based image registration
      1.2.3. Transformations
      1.2.4. Nearest neighbor interpolation
2. Methods
   2.1. Sample of patients
   2.2. Loading images into the main program
   2.3. Visualization of images
   2.4. Standard Uptake Value
   2.5. Injection-to-Acquisition time
   2.6. Rescaling the images
   2.7. Creation of the reference image and standard deviation image
   2.8. Image registration
   2.9. Abnormality detection
   2.10. Region of interest
   2.11. Grey- and white matter detection
   2.12. Additional research
3. Results
4. Discussion
   4.1. Data
   4.2. Grey matter detection
   4.3. Grey matter layer detection
   4.4. Extra visualization tools
   4.5. Injection-to-Acquisition time differences
   4.6. Brain map
   4.8. Suggested future research
5. Conclusion
6. References

Abstract:

Background: Improvement of imaging diagnostics is always desired. Early and correct diagnosis of brain cancer is needed in order to start treatment at an early stage and to increase the survival rate and quality of life (QOL) of patients.

Aim: This thesis aims to improve imaging diagnostics by developing a program that assists during brain cancer diagnosis. The program should be able to analyze combined FDG-PET and CT images, which should improve the accuracy and reproducibility of the diagnosis. Combining both modalities provides the information needed to diagnose brain cancer and plan the treatment.

Method: Matlab is used to develop such a program. The program should contain abnormality detection, image registration and a comparison of the abnormality locations of both modalities. FDG-PET describes the metabolic activity in the brain and can be used to characterize the tumor, since a tumor will show a higher metabolic activity. The CT images are used as anatomical reference in order to determine the location and the size of the tumor.

Results: For optimal abnormality detection, a reference image and a standard deviation map can be produced from a maximum of 10 loaded images. These can be loaded into the program and used for the abnormality detection of either FDG-PET or CT, depending on which images are used to calculate the reference image and standard deviation image. Before detecting abnormalities in FDG-PET, the image needs to be corrected for the injection-to-acquisition (ITA) time. After the abnormality detection, the locations of the abnormalities of both modalities are compared; if they coincide, the location is marked as a region of interest. Semi-automatic grey matter detection was also developed.

Conclusion: The program works and is able to execute a combined analysis of FDG-PET and CT images. It might be used as a clinical research tool, in which case it will be used to detect differences in FDG uptake.

Keywords: Abnormalities, FDG-PET, CT, Injection-to-acquisition time, standard deviation


1. Introduction

In the age group between 15 and 34 years, brain cancer accounts for about 2% of the mortality in men and 1.4% of the mortality in women. In 39% of the cases the tumor is a glioblastoma multiforme (GBM), a grade IV tumor with a survival rate of only 6% two years after diagnosis [1]. Early and correct diagnosis is therefore critical to start treatment at an early stage and to increase the survival rate and quality of life (QOL) of patients [2]. In recent years several methods have been developed to correctly classify brain cancer by imaging. However, improved imaging diagnostics are still dearly needed. In order to improve imaging diagnostics, Fundação Champalimaud came up with the idea to create a Matlab-based computer program that assists in the analysis of fluorodeoxyglucose positron emission tomography (FDG-PET) and computed tomography (CT) brain images of patients with brain cancer. Radioactive

fluorodeoxyglucose (FDG), a sugar analog, accumulates in cancerous tissue because of the increased metabolic activity of tumors and can therefore be used to diagnose, localize and characterize brain tumors [3][4][5]. CT images are used to provide anatomical reference. When the tumor reaches a certain size, it also becomes visible on CT images, which provides additional confirmation of tumor location and size. Combining both modalities provides the information needed to diagnose brain cancer and plan the treatment [6][7]. Visual

interpretation of FDG-PET and CT images is difficult and subjective. Objective quantification of FDG uptake requires co-registration of CT and PET images and segmentation of the brain to objectively quantify FDG-PET uptake in the tumor and various relevant regions of the brain. This approach improves accuracy and reproducibility. PET and CT images used in this report were acquired by the Fundação Champalimaud (FC).

The primary features of the developed program are:

(1) Program that makes a reference image and a standard deviation map from loaded images. This is practical for research purposes.

(2) Image registration: registering the reference image with the patient image in order to prepare for analysis.

(3) Find abnormalities in terms of standard deviation differences compared to a reference image and its standard deviation map to see whether the patient image differs from an average normal brain image.

(4) Comparing abnormality locations in both modalities to see whether the locations of the abnormalities in CT coincide with the ones from FDG-PET.

(5) Enlargement and 3D visualization for efficient diagnosis

(6) Semi-automatic grey matter detection and volume calculation to apply grey matter layer detection.

(7) Grey matter layer detection to exclude false-positives in increased metabolic activity regions. (This was not achieved within the available time. Moreover, since the grey matter detection is itself an approximation, a grey matter layer detection built on it would also be an approximation and therefore unreliable; a better grey matter detection would be needed before the grey matter layer detection can be created.)


The analysis part consists of a program that is based on semi-automatic grey- and white matter detection in the CT images based on thresholds. A well-known confounding factor in FDG-PET images is that thick grey matter layers display increased metabolic activity [3]. The program will exclude these false-positives from further analysis. Since grey- and white matter detection is difficult for CT due to the fact that Hounsfield Units (HU) values can differ between patients, user interaction is required to set appropriate HU thresholds for correct segmentation [3]. The Matlab program can be applied beyond analysis of brain cancer, e.g. for the

characterization of depression, epilepsy and dementia, conditions which are known to display alterations in brain FDG uptake. After further validation studies, the developed program might develop into a clinical tool to assist the radiologist and oncologist in analyzing these types of images. Analysis of differences in FDG uptake, in terms of standard deviations, in normal-appearing grey matter can then be performed for brain conditions including epilepsy and dementia. Additionally, information can be extracted from FDG-PET image analysis of measurements with different injection-to-acquisition times.

In the FC, FDG-PET images are acquired using either an injection-to-acquisition (ITA) time of < 60 minutes or > 3 hours. If a patient is scanned with different ITA times, the differences can be revealed in standard deviations. This may help to determine which ITA time is preferred and how fast the contrast agent is distributed. Since after three hours some of the contrast agent will have been filtered out of the vascular system by the kidneys, the amount of contrast agent is lower when the patient is scanned > 3 hours after injection compared to < 60 minutes. This is, however, only true for longer ITA times; for short ITA times the uptake mainly changes with tumor grade [8].

Proposed additional research using the program:

(1) Analysis of differences in FDG uptake between normals and abnormals, in which normals are healthy subjects with no abnormal FDG uptake and abnormals are patients with an abnormal FDG uptake. Using this analysis new diagnostic parameters may be found.

(2) Analysis of the differences in FDG uptake between patients with an ITA time of < 60 minutes and patients with an ITA time of > 3 hours. This may give some information about the optimal ITA time.

The layout of this report is as follows. In order to understand the underlying theory of this program, some basic information will be explained first. The basics of Computed Tomography (CT) and Fluorodeoxyglucose Positron Emission Tomography (FDG-PET) will be introduced. This includes X-ray interaction and attenuation in the body, filtered back projection, and the transport and detection of FDG as contrast agent. Furthermore, the mathematical explanation of image registration will be provided. In the methods, the content of the project will be discussed and the sample of patients will be described. Finally, the results show the completed program and the outcome of the additional research.


1.1. Medical Imaging Modalities

The program created during this project is called CompareMechanism and uses CT and FDG-PET images. Combining these two modalities for diagnosis creates a reliable set of parameters to diagnose brain cancer. FDG-PET shows the metabolic activity in the brain, which is particularly interesting since a cancerous region will have a significantly higher metabolic activity, helping to pinpoint the location and characterize the tumor [4]. The CT is helpful because of the morphological images that might confirm the localization and the size of the tumor [7]. At the FC, the Philips Gemini TF TOF 16 PET-CT scanner with 120 kVp and time of flight (TOF) is used. This scanner provides 128 x 128 matrix images with a field of view (FOV) of 256 mm and a slice thickness of 2 mm.

1.1.1. Computed tomography

Computed tomography (CT) is a medical imaging technique based on the attenuation of X-rays by tissues in the body. CT is superior in showing bony structures, in contrast to MR, which better shows soft-tissue structures [9]. The word "tomography" refers to a slice, which describes the method of scanning: the CT scanner scans per slice, so that the reconstruction shows multiple slices representing the complete region of interest.

CT uses X-rays that penetrate the body. Due to the different absorption of the different types of tissue, the image can be reconstructed. There are several types of CT scanners, but the main idea is that the X-ray tube turns around the body, shooting X-ray beams at the body as the patient moves through the center of the machine [10].

Figure 1: The rotating X-ray tube sends X-rays through the body, which are detected by rotating X-ray detectors on the other side of the body. The patient lies on a bed that slides through the machine.

In the patient, the photons of the X-ray beam interact with molecules. Depending on the energy of the photon, the X-ray interaction might either be photoelectric absorption or Compton (incoherent) scatter [10]. In photoelectric absorption, the photon gets absorbed completely and uses its energy to eject an electron in a random direction, which is able to ionize neighboring atoms. However, there will be no scatter events [11].

Figure 2: Photoelectric absorption. The energy of the incident gamma ray gets completely absorbed by the electron. The energy absorption causes the electron to be ejected [11].

Compton scattering is an incoherent kind of scatter. As the photon interacts with the atom, part of the energy of the photon is used to eject or ionize an electron. However, because the photon still has energy left, it scatters off and moves on in an altered direction [12]. The resulting wavelength shift can be calculated from the scattering angle with the following formula:

$$\lambda' - \lambda = \frac{h}{m_e c}\,(1 - \cos\theta) \qquad (1)$$

where λ′ is the wavelength of the scattered photon in meters, λ the wavelength of the original photon in meters, h Planck's constant in J·s, me the mass of an electron in kilograms, c the speed of light in meters per second and θ the scattering angle of the scattered photon in degrees [12]. In this formula h/(me c) is the Compton wavelength, equal to 2.43·10⁻¹² meters. Using the photon energy formula

$$E = \frac{hc}{\lambda} \qquad (2)$$

where E is the energy in joules, the energy of the photons can be calculated [12]. Substituting into formula (1), the formula can be rewritten as

$$E_\gamma' = \frac{E_\gamma}{1 + \frac{E_\gamma}{m_0 c^2}(1 - \cos\theta)} \qquad (3)$$

which, for an incident photon energy of Eγ = m0c² = 511 keV, reduces to

$$E_\gamma' = \frac{511\ \mathrm{keV}}{2 - \cos\theta} \qquad (4)$$

where Eγ′ is the photon energy after the collision and Eγ the photon energy before the collision [14]. The scattered photon can subsequently be absorbed or scattered again. The ionized electron can cause DNA damage [12].

Figure 3: Compton scattering. The energy of the incident photon is partly absorbed by an electron, which is ejected due to this energy absorption. The remaining energy leaves as a lower-energy photon that scatters in a random direction [12].

The probability of photoelectric absorption is approximately proportional to (Z/E)³, where Z is the atomic number of the impacted atom and E the photon energy. As the energy increases, the likelihood of interaction with atoms decreases. The effect is dominant for energies lower than 100 keV. Compton scattering occurs over a wider range of energies; however, with higher energies the number of scattering events decreases [13].
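
As a numerical illustration of formulas (1)-(4), the following MATLAB sketch computes the scattered photon energy for a given incident energy and scattering angle. It is not part of CompareMechanism; the energy and angle values are arbitrary examples.

```matlab
% Scattered photon energy after Compton scattering, formula (3).
% All numbers are arbitrary illustrative values, not patient or scanner data.
E_gamma = 511;      % incident photon energy in keV (e.g. a PET annihilation photon)
m0c2    = 511;      % electron rest energy in keV
theta   = 60;       % scattering angle in degrees

E_prime = E_gamma / (1 + (E_gamma/m0c2)*(1 - cosd(theta)));  % formula (3)
fprintf('Scattered photon energy: %.1f keV\n', E_prime);

% For E_gamma = m0c2 = 511 keV this reduces to formula (4):
E_prime_511 = 511 / (2 - cosd(theta));
```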

The detectors on the other side of the x-ray tube record the amount of x-rays coming from the body. After the x-rays are recorded, the image can be reconstructed using a filtered back projection reconstruction [15].

The attenuation of the x-rays can be described by the Lambert-Beer law [15]:

$$I = I_0\, e^{-\mu x} \qquad (5)$$

where I is the intensity of the X-ray beam after penetrating the body, I0 the intensity of the beam entering the body, x the distance traveled and µ the attenuation coefficient, which varies per tissue type.

Figure 4: The simplest graphical explanation of the Lambert-Beer law [15].

However, the body is not homogeneous and will have multiple tissue types.

Figure 5: The Lambert-Beer law for multiple tissue types [15]

Therefore the Lambert-Beer law becomes:

$$I = I_0\, e^{-(\mu_1 x_1 + \mu_2 x_2 + \mu_3 x_3 + \cdots)} \qquad (6)$$

$$I = I_0\, e^{-\sum_{i}^{n} \mu_i x_i} \qquad (7)$$

In this case every type of tissue has a different thickness xi, for i = 1, 2, 3, etc. This formula can be extended because the X-rays produced by the CT scanner span a range of energies up to Emax:

$$I = I_0 \int_0^{E_{max}} e^{-\sum_{i}^{n} \mu_i(E)\, x_i}\, dE \qquad (8)$$

This can be rewritten as:

$$\ln\frac{I_0}{I} = \int_0^{x} \mu(E)\, dx \qquad (9)$$

In the case of a brain tumor the attenuation coefficient is higher than that of the surrounding brain matter due to its dense structure; it should therefore appear brighter on the image. However, as stated before, CT is better at depicting bony structures, so small brain tumors are not well visualized.
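
To make formula (7) concrete, the sketch below computes the transmitted intensity along a ray crossing several tissue types. The attenuation coefficients and thicknesses are rough example values, not measured data.

```matlab
% Transmitted X-ray intensity through a stack of tissues, formula (7).
% Attenuation coefficients and thicknesses are rough example values only.
I0 = 1.0;                      % intensity of the beam entering the body (arbitrary units)
mu = [0.020 0.038 0.050];      % attenuation coefficient per tissue type [1/mm]
x  = [30    80    5];          % path length through each tissue type [mm]

I = I0 * exp(-sum(mu .* x));   % Lambert-Beer law for multiple tissue types
fprintf('Transmitted fraction I/I0 = %.3f\n', I/I0);
```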

1.1.2. Filtered back projection

As the detectors, which rotate at the same speed as the X-ray tube, detect the X-rays, several 1-D images are stored. These images are intensity plots or projections of the amount of X-rays per location: the y-axis shows the intensity of X-rays coming through the object of interest and the x-axis holds the spatial information.

Figure 6: Through the rotating acquisition of the CT scanner, the projections show the 1-D representation of a 2-D object [16].

These projections can be explained by the Radon transform theorem. This theorem shows a relationship between the projections of a two-dimensional object obtained by rotational scanning and the 2-dimensional object itself. Each single projection is obtained by a scan along the line of response (LOR) at a fixed angle θ, which represents a projective transformation of the 2-dimensional function onto a polar coordinate space (x′, θ). The result can be shown in a sinogram [16]. An example is shown below in figure 7.

Figure 7: The sinogram stacks the projections of all angles, transforming the result into polar coordinates [16].

The Radon transform is defined as

$$p_\theta(x') = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(x,y)\,\delta(x\cos\theta + y\sin\theta - x')\, dx\, dy \qquad (10)$$

where pθ(x′) is the projection of f(x,y) in the polar coordinate space (x′, θ) and f(x,y) is the 2-dimensional function; the projection is obtained by integrating along the line whose normal vector points in the θ direction, as shown in figure 8.


Figure 8: A single projection of a 2-D object [16].

Since f(x,y) is obtained by integrating over y’ and since the coordinate system (x’,y’) is obtained by rotating the (x,y) coordinate system over the angle θ, the relation between x, y, x’, y’ and θ can be expressed as:

$$x' = x\cos\theta + y\sin\theta \qquad (11)$$

$$y' = -x\sin\theta + y\cos\theta \qquad (12)$$

Therefore, since the delta function only fires when x′ = x cos θ + y sin θ, and since dx dy = dx′ dy′, pθ(x′) can be written as follows:

$$p_\theta(x') = \int_{-\infty}^{+\infty} f(x', y')\, dy' \qquad (13)$$

Using the Fourier slice theorem, or central slice theorem, it can be shown that the Fourier transformation (FT) of a projection pθ(x′) is equal to the cross section of the 2-D FT of the object perpendicular to the direction of the projections. This is important for reconstructing the original image [16]. A visualization of the central slice theorem is shown in the image below.

Figure 9: The 1-D Fourier transformation of the projection is equal to the 2-D Fourier transformation of the object [16].

Taking the 1-D FT of a single projection Pθ(v) can be expressed as follows

$$P_\theta(v) = \int_{-\infty}^{+\infty} p_\theta(x')\, e^{-ivx'}\, dx' \qquad (14)$$

Using formula (13) and implementing the formulas (11) and (12) we get

$$P_\theta(v) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(x', y')\, e^{-ivx'}\, dx'\, dy' \qquad (15)$$

$$P_\theta(v) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(x, y)\, e^{-iv(x\cos\theta + y\sin\theta)}\, dx\, dy \qquad (16)$$

This expression for Pθ(v) should be equal to the 2-D FT of f(x,y). In order to calculate the 2-D FT of f(x,y), we transform to polar coordinates using

$$v_x = v\cos\theta \qquad (17)$$

$$v_y = v\sin\theta \qquad (18)$$

$$F(v_x, v_y) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(x,y)\, e^{-i(v_x x + v_y y)}\, dx\, dy \qquad (19)$$

Implementing formulas (17) and (18) in formula (19) gives

$$F(v\cos\theta, v\sin\theta) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(x,y)\, e^{-iv(x\cos\theta + y\sin\theta)}\, dx\, dy \qquad (20)$$

This shows that formulas (16) and (20) are equal, so that, as shown in figure 9, we can state

$$P_\theta(v) = F(v\cos\theta, v\sin\theta) = F(v_x, v_y) \qquad (21)$$

Until now it has been shown that taking the 1-D Fourier transform of a single projection pθ(x′) gives Pθ(v), which is equal to the 2-D FT of the object F(vx, vy) itself. In order to reconstruct the original image f(x,y), the 2-D inverse FT needs to be taken of F(vx, vy) [16].

The simplest reconstruction method is the back-projection (BP) method, which sums the projections over all angles θ. However, using only the simple BP method, a blur will appear [16]. Using formula (10), the BP can be expressed as

$$BP(x,y) = \int_0^\pi p_\theta(x')\, d\theta = \int_0^\pi \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(x,y)\,\delta(x\cos\theta + y\sin\theta - x')\, dx\, dy\, d\theta \qquad (22)$$

In order to avoid confusion, x and y inside the integral are replaced by q and r. Using formula (11) we get

$$BP(x,y) = \int_0^\pi \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(q,r)\,\delta\big(q\cos\theta + r\sin\theta - (x\cos\theta + y\sin\theta)\big)\, dq\, dr\, d\theta \qquad (23)$$

$$= \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(q,r)\, dq\, dr \int_0^\pi \delta\big((q-x)\cos\theta + (r-y)\sin\theta\big)\, d\theta$$

In order to proceed an assumption is needed about the term in the delta function. We state that

$$(q-x)\cos\theta + (r-y)\sin\theta = \sqrt{(q-x)^2 + (r-y)^2}\,(\sin\varphi\cos\theta + \cos\varphi\sin\theta) \qquad (24)$$

where sin φ·cos θ + cos φ·sin θ can be replaced by sin(θ + φ). To simplify, the Dirac delta function will be rewritten as

$$\delta(g(\theta)) = \sum_k \frac{1}{\left|\frac{dg(\theta)}{d\theta}\right|_{\theta=\theta_k}}\,\delta(\theta - \theta_k) \qquad (25)$$

This formula states that g(θ) = 0 for a finite number of angles θ = θk. For this to hold, sin(θ + φ) needs to be zero, which implies θ = π − φ. Implementing formulas (24) and (25) in formula (23) provides

$$BP(x,y) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(q,r)\,\frac{1}{\left|\sqrt{(q-x)^2 + (r-y)^2}\,\cos(\pi)\right|}\,\delta(\theta - (\pi - \varphi))\, dq\, dr \qquad (26)$$

$$= \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} f(q,r)\,\frac{1}{\left|\sqrt{(q-x)^2 + (r-y)^2}\cdot 1\right|}\cdot 1\, dq\, dr$$

Using the definition of convolution, BP becomes

$$BP(x,y) = f(x,y) * \frac{1}{\sqrt{x^2 + y^2}} = f(x,y) * \frac{1}{s} \qquad (27)$$

This shows that when the image is reconstructed using the simple BP method, it will be blurred by convolution with a factor 1/s. In order to de-blur this image, the back-projection filtering (BPF) method can be used. However, this has the disadvantage that, due to the convolution with the filter term, BP(x,y) is accentuated more than the original image f(x,y). The problem lies in the fact that the projection data is first back-projected, then filtered in Fourier space and finally inverse Fourier transformed. In order to apply this method, first consider how the blur behaves under the FT [16].

$$FT\left[\frac{1}{s}\right] = FT\left[\frac{1}{\sqrt{x^2 + y^2}}\right] = \frac{1}{\sqrt{v_x^2 + v_y^2}} = \frac{1}{v} \qquad (28)$$

This shows that

$$FT[f(x,y)] = F(v_x, v_y) = v \cdot FT[BP(x,y)] \qquad (29)$$

In words, this means that the 2-D FT of the original image, F(vx, vy), is equal to the un-blur factor v times the 2-D FT of the back-projected image. This operation is called inverse filtering or deconvolution. Now, in order to reconstruct the original image, the inverse FT needs to be taken of the 2-D FT of the back-projection-filtered image F(vx, vy) [16]. This is the same as

$$f(x,y) = BP(x,y) * FT^{-1}[v] \qquad (30)$$

However, in order to avoid the disadvantage of accentuating BP(x,y) more than the original image f(x,y), another method, the filtered back-projection (FBP) method, is used. This method applies the filter before performing the back projection, which prevents BP(x,y) from being accentuated more than the original image f(x,y), as happens in the BPF method where the filtering is applied after the back projection. In order to reconstruct the original image, the FBP method uses an inverse FT of the 2-D FT of the image F(vx, vy) [16].

$$f(x,y) = \int_{-\infty}^{+\infty}\int_{-\infty}^{+\infty} F(v_x, v_y)\, e^{2i\pi(v_x x + v_y y)}\, dv_x\, dv_y \qquad (31)$$

Converting this back to polar coordinates provides

$$f(x,y) = \int_0^{2\pi}\int_0^{+\infty} v\, F(v\cos\theta, v\sin\theta)\, e^{2\pi i v(x\cos\theta + y\sin\theta)}\, dv\, d\theta \qquad (32)$$

Using the Fourier slice theorem, the outer integral can be separated into two subintervals, (0, π) and (π, 2π). Also, F(v cos θ, v sin θ) can be substituted by Pθ(v), as proven before by the central slice theorem and shown in formula (21).

$$f(x,y) = \int_0^{\pi}\int_0^{+\infty} v\, P_\theta(v)\, e^{2\pi i v(x\cos\theta + y\sin\theta)}\, dv\, d\theta \qquad (33)$$

Of these two terms, the subinterval (π, 2π) can be brought to the form

$$f_2(x,y) = \int_0^{\pi}\int_0^{+\infty} v\, P_{\theta+\pi}(v)\, e^{2i\pi v(x\cos(\theta+\pi) + y\sin(\theta+\pi))}\, dv\, d\theta \qquad (34)$$

$$f_2(x,y) = \int_0^{\pi}\int_0^{+\infty} v\, P_{\theta+\pi}(v)\, e^{-2i\pi v(x\cos\theta + y\sin\theta)}\, dv\, d\theta \qquad (35)$$

$$f_2(x,y) = \int_0^{\pi}\int_{-\infty}^{0} (-v)\, P_{\theta+\pi}(-v)\, e^{2i\pi v(x\cos\theta + y\sin\theta)}\, dv\, d\theta \qquad (36)$$

This shows that in frequency space the following relation can be stated:

$$P_\theta(v) = P_{\theta+\pi}(-v) \qquad (37)$$

This changes formula (33) to

$$f(x,y) = \int_0^\pi\int_{-\infty}^{+\infty} |v|\, P_\theta(v)\, e^{2i\pi v(x\cos\theta + y\sin\theta)}\, dv\, d\theta = \int_0^\pi\int_{-\infty}^{+\infty} |v|\, P_\theta(v)\, e^{2i\pi v x'}\, dv\, d\theta \qquad (38)$$

which is the final formula for the FBP. As stated before, the 1-D FTs of the single projections, Pθ(v), over all angles θ are first multiplied by the 1-D ramp filter |v|, after which they are back-projected. In the image below, an illustration is shown of the simple BP method without filtering and of the FBP [16].

Figure 10: The result of the back projection method and the filtered back projection method [16].

In this image it is obvious that without a filter the image will be blurred by the convolution. Besides the halo around the white ball, there is also an intensity difference inside the ball: the inside appears whiter than the edges, because the projection makes the middle of the ball over-intense. This blur has been determined to be 1/s. Some of this so-called 1/s noise can be filtered out using the FBP [16]. Image reconstruction can, however, also be done with iterative algorithms; due to increased computer power, the use of iterative reconstruction methods is increasing [17].

1.1.3. Hounsfield units

Pixel intensity in CT images is expressed in terms of Hounsfield units (HU). The HU is a quantitative measure of the linear attenuation coefficient [37] on a linear scale [18].

$$HU = 1000 \cdot \frac{\mu - \mu_{water}}{\mu_{water} - \mu_{air}} \qquad (39)$$

As the tissue gets denser, the attenuation coefficient gets larger, and so does the HU. In the image such tissue will appear brighter [19].
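
A minimal sketch of formula (39), converting a measured linear attenuation coefficient to Hounsfield units; the µ values below are example numbers, not calibration data from the scanner.

```matlab
% Hounsfield unit conversion, formula (39). The mu values are example numbers.
mu_water = 0.19;    % linear attenuation coefficient of water [1/cm] (example)
mu_air   = 0.0;     % attenuation of air, taken as approximately zero
mu       = 0.21;    % attenuation of the tissue of interest [1/cm] (example)

HU = 1000 * (mu - mu_water) / (mu_water - mu_air);
fprintf('HU = %.0f\n', HU);     % positive HU: tissue denser than water
```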

1.1.4. Fluorodeoxyglucose - Positron Emission Tomography

Positron emission tomography (PET) is a noninvasive medical imaging technique that assesses biochemical processes, such as the metabolic activity in, for instance, the brain. It images biological function through molecular imaging. In order to do so a contrast agent is used. The main contrast agent is fluoro-deoxy-D-glucose (FDG), a close analog of glucose labeled with the isotope 18F. This has the great advantage that the contrast agent can be guided past the blood-brain barrier [20]. Like normal sugar, FDG is transported by the transporter GLUT1 (glucose transporter 1) [45]. Due to an increased glucose consumption and an increased hexokinase activity in cancerous cells, over-expression of metabolic activity can point to cancerous regions. However, in contrast to glucose, FDG cannot be metabolized further [21], [22]. 18F is produced in a cyclotron and has a half-life of 109.7 minutes [20]. Using the half-life, the activity of the contrast agent can be calculated with

$$A(t) = A(0)\, e^{-t\frac{\ln(2)}{\tau}} \qquad (40)$$

where A(t) is the activity in becquerel at time t in minutes, A(0) the activity in Bq at t = 0 and τ the half-life in minutes [24]. The isotope decays by positron decay, in which a proton (p) transforms into a neutron (n) while emitting a positron (e+) and a neutrino (ν) [23].

$$p \rightarrow n + e^+ + \nu \qquad (41)$$

This positron travels several millimeters before it loses its kinetic energy and stops moving. At the halting position the positron is captured in the electric field of an electron (e−), and the two annihilate into two photons (γ) [23].

$$e^+ + e^- \rightarrow \gamma + \gamma \qquad (42)$$

This is a process in which the mass of both the electron and the positron is converted into energy according to Einstein's formula:

$$E = mc^2 \qquad (43)$$

where E is the energy in joules, m the mass in kilograms and c the speed of light in meters per second. Since the mass of a positron is the same as that of an electron, 9.10938356·10⁻³¹ kilograms but with opposite charge, and the speed of light is 2.998·10⁸ meters per second, the energy released per annihilation is about 1022 keV. This is ~511 keV per gamma photon, since the net momentum of the environment is negligible.

Figure 11: Annihilation of a positronium into two photons with equal energy [23].

The emitted photons travel at an angle of 180 degrees from each other if the velocity of the positronium (the electron and its counterpart, the positron) is neglected. Due to this phenomenon the line of response (LOR) can be determined by so-called coincidence detection [46]. Using timers that record the time of flight (TOF) of both photons, the position can be determined with more precision, so that the reconstruction will be more accurate [23]. In the image below, the conventional and the TOF PET are shown.

Figure 12: Time of flight determines which of the two photons produced during one annihilation arrives earlier at the detectors, in order to determine a more precise annihilation location [25].

Without TOF the annihilation could have happened anywhere on the LOR; there is no indication of a specific location. Using TOF, from the times at which both photons arrive at detectors on opposite sides, the distance from the center point can be determined using the formula

$$\Delta x = \frac{c\,\Delta t}{2} \qquad (44)$$

where Δt = t2 − t1 and Δx is the distance from the center point. This helps to improve the signal-to-noise ratio of the image and leads to a decreased radiation dose and a decreased scan time [26]. However, not all recorded events are true coincidence events. Some of them might be random or accidental, due to multiple events occurring at the same time, or due to a changed LOR caused by Compton scattering. These events are called random or scatter coincidences and are shown in the image below. All physical effects, Compton scattering and photoelectric absorption, also exist in PET; therefore, PET requires an image reconstruction procedure as well [25].

Figure 13: The three different coincidences in PET: true, random and scatter coincidences [25].

The CT part of PET-CT is there to generate topographic images that allow attenuation correction and visualization of morphological and anatomical structures [20].

1.1.5. Decay correction

Since the FDG-PET scan can be made with differing ITA times, the intensity of the voxels will differ per ITA time as well. As the contrast agent is filtered out of the blood by the kidneys, images acquired with a larger ITA time will have a lower contrast than images acquired with a smaller ITA time. In order to correct for this decay artifact, formula (40) is used:

$$A(t) = A(0)\, e^{-t\frac{\ln(2)}{\tau}} \qquad (45)$$

where A(t) is the activity in becquerel at time t in minutes, A(0) the activity in Bq at t = 0 and τ the half-life in minutes [24]. In order to correct the image, formula (45) is rearranged to the form

$$Im(0) = Im(t)\cdot e^{t\frac{\ln(2)}{\tau}} \qquad (46)$$

where A(t) is replaced by the obtained image Im(t) without any decay correction, and A(0) by Im(0), the decay-corrected image [24].
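
A minimal sketch of the decay correction of formula (46), assuming the voxel values are proportional to activity; imPET is a hypothetical variable standing for a loaded FDG-PET volume and the ITA time is an example value.

```matlab
% Decay correction of an FDG-PET volume back to injection time, formula (46).
% imPET is a hypothetical 3-D matrix of voxel intensities; the ITA time is an example.
tau   = 109.7;                                  % half-life of 18F in minutes
t_ITA = 75;                                     % injection-to-acquisition time in minutes

imPET_corrected = imPET .* exp(t_ITA * log(2) / tau);   % Im(0) = Im(t) * e^(t*ln(2)/tau)
```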


1.1.6. Standard Uptake Value

The standard uptake value (SUV) is a measure of the FDG uptake and is commonly used in FDG-PET/CT oncology imaging and assessment. It can be used as a marker for assessing the patient's response to cancer therapy. Because the injected dose and the weight of the patient are used to calculate the SUV, some of the variability related to these two parameters is removed [43]. The SUV can be calculated using the formula

$$SUV = \frac{a}{D_{inj}/W} \qquad (47)$$

where a is the radioactivity concentration in kBq/mL, Dinj the decay-corrected injected dose in kBq and W the weight of the patient in grams. This formula shows that a contrast agent distributed uniformly in the body would give an SUV of 1 g/mL in every part of the body [43].

1.2. Image registration.

Image registration is an iterative procedure that overlays two images and geometrically aligns them. These images can be either mono-modal or multi-modal, meaning that the so-called fixed and moving images come from the same or from different imaging modalities. First, the moving image is transformed using an initial transformation matrix. Because the set of data points of the moving image has changed, it needs to be interpolated to match the grid of the fixed image. After that, the moving and fixed images go through a similarity measurement in which the images are analyzed by some registration metric. If, according to this metric, the images are considered registered, the image registration is done. If not, the cycle repeats itself until the similarity between both images is maximized and the images can be considered registered [27], [44].

Figure 14: The iterative execution of image registration. The moving image gets transformed and interpolated before it is compared to the fixed image. This cycle repeats itself until the maximal registration is reached and the image is registered [29].

There are several image registration types, such as intensity-based, partition-based and mutual-information-based image registration [28]. The image registration tool used in CompareMechanism is an intensity-based image registration.

1.2.1. Intensity based image registration

Intensity-based registration is one of the best-known image registration methods and is used in the 'imregister' function of Matlab [29]. As the name indicates, the registration is done based on the intensities of the voxels. The registration can be either mono-modal or multi-modal.

There are multiple mono-modal similarity measures, of which two are the sum of squared differences (SSD) [30] and the sum of absolute differences (SAD) [31]. The SSD is a simple measure, while the SAD is less sensitive to large intensity differences within the images [31]. Their formulas are shown below [31].

$$SSD = \sum_i (A_i - B_i)^2 \qquad (48)$$

$$SAD = \sum_i |A_i - B_i| \qquad (49)$$

Both SSD and SAD work the same way. They use two image blocks: one block of the fixed image and one block of the moving image. In order to compare these two blocks with each other, the blocks are laid over each other. Per overlaid pixel the difference is calculated between A, a pixel of the moving image block, and B, a pixel of the fixed image block. Summing the values provides either the SSD or the SAD, depending on whether the squared or the absolute differences are calculated. Although the blocks might differ in size, the block of the moving image must always be smaller. In that case the SSD or SAD is calculated per location at which the moving image block fits within the fixed image block. The lowest SSD or SAD is the best match [30].
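
The two mono-modal similarity measures of formulas (48) and (49) can be written directly as array operations. The sketch below compares two equally sized blocks; fixedBlock and movingBlock are hypothetical variables, not names used in CompareMechanism.

```matlab
% Sum of squared differences (48) and sum of absolute differences (49)
% between two equally sized image blocks. fixedBlock and movingBlock are
% hypothetical blocks cut from the fixed and moving images.
A = double(movingBlock);    % block from the moving image
B = double(fixedBlock);     % block from the fixed image

SSD = sum((A(:) - B(:)).^2);    % sensitive to large intensity differences
SAD = sum(abs(A(:) - B(:)));    % less sensitive to large intensity differences
```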

1.2.2. Mutual information based image registration

A second way to make multi-modal registration work is the application of mutual information (MI). This method is used in the 'imregtform' function of Matlab [29]. MI was introduced by Maes et al. (1997) and Viola and Wells (1997) for rigid image registration of multi-modal scans [31]. Since then it has been applied to a wide range of medical image analysis tasks. Particularly the rigid-body registration of multi-modal images showed abundant registration accuracy; therefore it is used in many similarity measures for intensity-based multi-modal image registration [32]. This method looks at the mutual information in both images or, in simpler terms, how well one image is able to explain the other [33].

Consider two blocks that partially overlap. The surface of the left one is H(X) and that of the right one H(Y). Together they cover a total surface H(X,Y), so that the overlapping part of H(X) and H(Y) is the MI, I(X:Y). This leaves only the non-overlapping parts, H(X|Y) and H(Y|X). The MI can therefore be defined as

$$MI = H(X) + H(Y) - H(X,Y) \qquad (50)$$

where H(X) and H(Y) are the marginal entropies of both images X and Y, and H(X,Y) the joint entropy of both images [30]. However, it can also be defined as

$$I(X:Y) = H(X) - H(X|Y) \qquad (51)$$

$$I(X:Y) = H(Y) - H(Y|X) \qquad (52)$$

$$I(X:Y) = H(X,Y) - H(X|Y) - H(Y|X) \qquad (53)$$

where H(X|Y) and H(Y|X) are the conditional entropies (entropies that depend on certain conditions or variables). To keep it simple, formula (50) is used to calculate the MI. To do so the Shannon entropies are used. The Shannon entropies in the formula of MI can be defined as

$$H(X) = -K \sum_{i=1}^{n} p(x_i) \cdot \log p(x_i) \qquad (54)$$

$$H(Y) = -K \sum_{j=1}^{m} p(y_j) \cdot \log p(y_j) \qquad (55)$$

where X is the image (and analogously Y for the second image), n and m the number of different intensities, p(xi) the estimated intensity probabilities, and K a positive constant.

H(X,Y) is the joint entropy and is defined as

$$H(X,Y) = -K \sum_{i=1}^{n}\sum_{j=1}^{m} p(x_i, y_j) \cdot \log p(x_i, y_j) \qquad (56)$$

where p(xi, yj) is the probability of having an intensity pair (xi, yj) in the image pair (X,Y) [30]. Using Bayes' classical theorem and formulas (54), (55) and (56), the mutual information can be calculated [34]. The result is

$$I(X:Y) = \sum_{i=1}^{n}\sum_{j=1}^{m} p(x_i, y_j) \cdot \log\left(\frac{p(x_i, y_j)}{p(x_i)\, p(y_j)}\right) \qquad (57)$$

MI has, however, a disadvantage: it is a global measure. It ignores the spatial neighborhood of a chosen pixel within one image and therefore ignores the spatial information which is shared across the images. For this reason it can be difficult to estimate MI locally, which may lead to false local optima in non-rigid image registrations. In addition, MI is computationally complex and can therefore take more time to compute than a simple intensity metric, such as the SSD [31]. In order to reduce the effect of a changing image overlap on MI and to improve alignment, Studholme et al. (1999) introduced the normalized mutual information (NMI) [33]. The NMI is defined as

$$NMI = \frac{H(X) + H(Y)}{H(X,Y)} \qquad (58)$$

which is the sum of the marginal entropies of the fixed and moving images divided by their joint entropy [30], [33].
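
The following sketch, assuming K = 1 and base-2 logarithms, computes MI and NMI from a joint histogram, following formulas (50) and (54)-(58). It is illustrative only; imA and imB are hypothetical registered images of equal size and the 64-bin choice is arbitrary.

```matlab
% Mutual information of two registered images from their joint histogram,
% following formulas (50), (54)-(58). imA and imB are hypothetical input images
% of equal size; 64 intensity bins are an arbitrary choice.
nbins = 64;
A = double(imA(:));  B = double(imB(:));
[~,~,binA] = histcounts(A, linspace(min(A), max(A), nbins+1));
[~,~,binB] = histcounts(B, linspace(min(B), max(B), nbins+1));
valid = binA > 0 & binB > 0;
jointCounts = accumarray([binA(valid) binB(valid)], 1, [nbins nbins]);

pAB = jointCounts / sum(jointCounts(:));     % joint probabilities p(x_i, y_j)
pA  = sum(pAB, 2);                           % marginal probabilities p(x_i)
pB  = sum(pAB, 1);                           % marginal probabilities p(y_j)

HA  = -sum(pA(pA > 0)   .* log2(pA(pA > 0)));      % marginal entropy, formula (54)
HB  = -sum(pB(pB > 0)   .* log2(pB(pB > 0)));      % marginal entropy, formula (55)
HAB = -sum(pAB(pAB > 0) .* log2(pAB(pAB > 0)));    % joint entropy, formula (56)

MI  = HA + HB - HAB;        % formula (50)
NMI = (HA + HB) / HAB;      % normalized mutual information, formula (58)
```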

1.2.3. Transformations

As the image registration takes place, the moving image is transformed in order to match the fixed image. There are four types of transformations: rigid, linear (similarity), affine and nonlinear. The rigid transformation is simply rotating the image or shifting it, defining six degrees of freedom (DOF): 3 translational and 3 rotational [33].

The similarity or linear transformation is similar to the rigid transformation, but includes an isotropic scaling factor. It defines 9 DOF: the six DOF of the rigid-body transformation plus three scaling DOF. If this scaling factor is anisotropic, the transformation is called affine [33].

In the affine transformation parallelism is largely preserved. It defines 12 DOF: 3 shearing DOF plus the 9 DOF of the similarity or linear transformation. It is a general linear transformation. Parallelism is, however, not preserved in projective transformations, which exceed the 12 DOF of the affine transformation depending on the transformation models used in the image registration. The deformable transformation is therefore not linear: it changes lines and points [33].

1.2.4. Nearest neighbor interpolation

After the transformation of the moving image, the coordinate systems of the moving and the fixed image do not coincide. In order to merge them, a simple interpolation called nearest neighbor interpolation can be used. This interpolation looks at a point in image A and checks which point in image B is closest; the voxel of image A then takes the value of that closest voxel in image B [35], as shown in figure 15. To be even more precise, tri-linear interpolation can be used. This interpolation gives the voxel of image A a weighted average of its neighboring voxels in image B, depending on their closeness: the closer the voxel in image B to the voxel in image A, the more that voxel value contributes to the average. For optimal registration of different types of images, multiple interpolation types exist. The interpolation that is used by the image registration tool of CompareMechanism is bilinear interpolation. This interpolation takes a weighted average of the four closest points; this average becomes the voxel value of that particular point [36].

Figure 15: Graphical representation of nearest neighbor interpolation and bilinear interpolation. Nearest neighbor interpolation searches for the pixel closest to the pixel to be interpolated, which then takes the value of that closest pixel. In bilinear interpolation the interpolated pixel gets a weighted value of the four surrounding pixels.


2. Methods

The program created in this project is a multifunctional program. Images can be loaded into the program, altered, analyzed and visualized. In this section the formulas behind the operations in Matlab will be explained, together with some information about what the program can do. First, some information about the images used to create this program.

2.1. Sample of patients

The data acquired for this project come from the Champalimaud Foundation. All data were selected and clinically classified by a nuclear medicine specialist of the Champalimaud Foundation. The data consist of three parts: healthy subjects (9), patients with defects (12) and patients with metastases (9). The patients with defects can have degenerative dementia (DD), which can include:

• Alzheimer disease (AD). This is the most common neurodegenerative dementia [39]; it starts slowly and speeds up over time. A symptom is short-term memory loss [38].

• Dementia with Lewy bodies (DLB). This is the second most common dementia type. It is closely related to Parkinson's disease and can cause rapid cognitive decline, which might lead to hallucinations. Patients with DLB are unable to plan and lack analytical and abstract thinking [39].

• Fronto-temporal dementia (FTD). This is a progressive degeneration of the fronto-temporal lobe. Rapid decline in cognitive and motor functions is a symptom of the disease [40].

Furthermore, there are cases of primary progressive aphasia (PPA), which can pathologically overlap with fronto-temporal lobar degeneration, and of normal pressure hydrocephalus (NPH), a brain malfunction due to decreased absorption of cerebrospinal fluid. This is also one of the causes of dementia [41].

Patient | ITA time | Disease | Abnormality detected by nuclear medicine specialist
1 | 3 h 30 min | Normal | -
2 | 3 h 50 min / 30 min | Depression? DFT | Hypometabolism back cingulate cortex, back temporal lobes
3 | | DFT | Hypermetabolism frontal dorso-lateral, back cingulate cortex, temporal cortex
4 | 3 h 30 min | DFT/APP | Hypometabolism left hemisphere (dorso-lateral frontal, back cingulate, temporal cortex (complete left temporal lobe), dorso-lateral parietal, insular cortex and front cingulate left)
5/6 | | epilepsy | hemisphere
7 | | DFT? | Fronto-temporal lobe
8 | 3 h 60 min | Generalized idiopathic epilepsy | Hypometabolism in right front without abnormalities
9 | 3 h 30 min / 30 min | DFT/Bipolar disease | Areas with reduction in FDG uptake
10 | 4 h 30 min | DFT with 3 years of evolution | Hypometabolism in frontal, parietal, temporal and subcortical bilateral
11 | 5 h 30 min | Depression/DFT | -
12 | 3 h 30 min | Normal/Refractory epilepsy | Left temporal lobe

2.2. Loading images into the main program

The Matlab program is made for a variable number of 3-D images. These images can be dicom, v3d, gipl, hdr, isi, nii, vmp, raw, xif, vtk, mha, vff or par files. Using the uigetfile function the file is selected, after which it is opened using the fopen function; dicom files, however, are opened using the dicomread function. The loaded images, which can be rescaled to Hounsfield units, are saved as doubles in the workspace of the Matlab program and can therefore be called from the program. Pressing the info button displays all information about the file in an extra window. The loading tool was developed by Dirk Jan Kroon (http://www.mathworks.com/matlabcentral/fileexchange/29344-read-medical-data-3d) [29].

2.3. Visualization of images

The images can be visualized using the enlargement tool. This tool was originally developed by Joshua Stough (http://www.mathworks.com/matlabcentral/fileexchange/37268-3d-volume-visualization) and has been somewhat improved [29]. It shows the chosen image in the sagittal, coronal and axial planes and has the ability to change the brightness of the images using slide bars. Clicking on one of the images and sliding up and down changes the orientation in the other two images for better visualization. It shows a line in all planes so that the user can see what he/she is looking at and in what orientation. Another visualization tool is a three-dimensional visualization; the three-dimensional representation is smoothed.

2.4. Standard uptake value

When the FDG-PET patient image is loaded, the standard uptake value (SUV) map can be calculated. This value is often used to check the uptake in tissue. In order to calculate the SUV map, the activity concentration of the medical scanner needs to be known. For the Philips Gemini TF the activity concentration is 17.4 kBq/mL, which results in 125000 counts per second in the PET scanner [47]. This is used for the calculation of the SUV, which is the activity concentration per count times the weight and the counts per second in the patient image, divided by the injected dose at time zero. This is calculated per pixel so that a complete SUV map can be constructed. The maximum SUV is shown in an edit box next to the calculate button.

2.5. Injection-To-Acquisition correction

Before image registration, the images need to be scaled for the injection-to-acquisition time. This is the time between contrast agent injection and acquisition. Over time the contrast agent is filtered out of the blood stream by the kidneys. This results in a different decay, because there is less contrast agent present in the vascular system for a larger ITA time. Therefore, the voxel intensities differ between images with different ITA times. In order to correct for the decay artifact, the images are rescaled by formula (46):

$$Im(0) = Im(t)\cdot e^{t\frac{\ln(2)}{\tau}} \qquad (59)$$

where t is the ITA time in seconds, τ the half-life of the contrast agent used in seconds, Im(0) the corrected image and Im(t) the obtained image before correction. In the program there is an edit box where the ITA time can be inserted; pressing the calculate button rescales the images.

2.6. Rescaling the images

Although the images are rescaled for the ITA time, they are not yet scaled with respect to each other. Because some voxel intensities are too low for correct abnormality detection, some detections cannot be performed. In order to execute the detection, the images need to be rescaled with respect to each other. Also, in order to make the comparison easier to interpret, the images are rescaled per modality between 0 and a chosen value. This value can be inserted into an edit box; pressing the rescale button rescales the images. When the reference and the patient image are rescaled and a new patient image is added, it is rescaled automatically.

The rescaling option is not only present in the main program. It is also used in the program that calculates the reference image and the standard deviation map, since when the patient image is rescaled, the reference image and its standard deviation map need to be rescaled as well for correct abnormality detection. Another part of the main program where the rescale option is added is the grey matter detection: to make the grey matter detection more user friendly, the CT image used for grey matter detection can be rescaled. The formula used to rescale is

$$Im_R = A \cdot \frac{Im - Im_{min}}{Im_{max} - Im_{min}} \qquad (60)$$

where ImR is the rescaled image, A the maximum value to which the image is rescaled, Im the image itself, Immin the minimum pixel value in image Im and Immax the maximum pixel value in image Im. In order to average the images they need to be registered.


2.7. Creation and use of reference image and standard deviation map

In order to create a reference image, an averaging tool was developed. Images can be loaded using the same loading tool used in the main program and are displayed in a listbox. All images can be corrected for atomic decay by inserting the ITA time and the half-life of the contrast agent used, 109.8 minutes in the case of FDG. The ITA time can be inserted in an edit box and the half-life of the contrast agent can be adjusted when other contrast agents with different half-lives are used. Pressing the calculate button rescales the images for the inserted ITA time. The formula used is

$$Im(0) = Im(t)\cdot e^{t\frac{\ln(2)}{\tau}} \qquad (61)$$

where t is the ITA time in seconds, τ the half-life of the contrast agent used in seconds, Im(0) the corrected image and Im(t) the obtained image before correction. If necessary, all images can be intensity-scaled as well. This rescaling is mainly performed when abnormality detection is not possible due to low voxel intensities. If so, the images should be scaled between the same intensity values as the patient image loaded in the main program. The formula used is:

$$Im_R = A \cdot \frac{Im - Im_{min}}{Im_{max} - Im_{min}} \qquad (62)$$

where ImR is the rescaled image, A the maximum value to which the image is rescaled, Im the image itself, Immin the minimum pixel value in image Im and Immax the maximum pixel value in image Im. In order to average the images they need to be registered first. Clicking on one of the images, the image registration tool (explained later on) opens the program that registers the chosen image with the first loaded image. When all images are loaded, corrected, scaled and registered, the average can be calculated. This calculation is done per pixel using the formula

$$I_{average} = \frac{\sum_{n=1}^{N} I_n}{N} \qquad (63)$$

where Iaverage is the average pixel value at position (x,y,z) in the average image, In the pixel value at position (x,y,z) in image n, and N the number of images loaded into the program. The created reference image is automatically saved as a dicom file. In order to detect abnormalities, a standard deviation map is needed as well. Using the average image, the standard deviation can be calculated per pixel using the formula

$$\sigma = \sqrt{\frac{1}{N}\sum_{n=1}^{N}\left(I_n - I_{average}\right)^2} \qquad (64)$$

where σ is the standard deviation value at position (x,y,z) in the standard deviation image, N the number of images used to calculate the standard deviation, and In the pixel value at position (x,y,z) in image n. For research purposes it is also possible to create a standard deviation map for just one subject. This can be selected in a popup menu.

In order to save the reference image and the standard deviation map correctly, the modality and, in the case of PET, the ITA time can be selected in a popup menu. This saves the images automatically under a different name. Using a popup menu in the main program, the saved reference image or standard deviation map can be opened in the main program.

This tool is also used to create a pre-made reference image and standard deviation map for two different ITA times: < 60 minutes and > 3 hours. These images are used for detecting differences in the patient image compared to the reference image. The reference image is built up from an average of the per-pixel intensities of 10 healthy subjects.
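
A condensed sketch of formulas (63) and (64): averaging registered, decay-corrected volumes into a reference image and computing the per-voxel standard deviation map. The variable vols is a hypothetical cell array, not a name taken from the program.

```matlab
% Reference image (63) and standard deviation map (64) from registered volumes.
% vols is a hypothetical cell array of equally sized, registered and
% decay-corrected 3-D matrices.
stack  = cat(4, vols{:});      % stack the N volumes along a 4th dimension
refImg = mean(stack, 4);       % per-voxel average, formula (63)
sdMap  = std(stack, 1, 4);     % per-voxel population standard deviation, formula (64)
```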

2.8. Image registration

This image registration tool was developed by Bret Shoelsen (http://www.mathworks.com/matlabcentral/fileexchange/34510-image-registration-app) and was adapted to this program [29]. It can register mono-modal and multi-modal images using the translation, rigid, similarity or affine transformations. When a transform type is chosen, the image registration can be tuned using optimizer parameters. These parameters include the growth factor, maximum iterations, number of spatial samples, epsilon, initial radius, pyramid level and the number of histogram bins. The growth factor is the rate at which the search radius grows per iteration; its default is 1.05. The search radius starts with the initial radius, set by Matlab at 6.25e-3 by default, and ends when the maximum number of iterations is reached. This maximum is the number of iterations performed per pyramid level and is set by Matlab to 100. The number of spatial samples is the number of samples taken from the image, which in the case of Matlab is 500. Epsilon is a scalar that adjusts the minimal size of the search radius; it therefore controls the accuracy of the convergence. The default epsilon in Matlab is 1.5e-6. Finally, the number of histogram bins is the number of histogram towers of the pixel intensities [29]. In order to run the image registration, Matlab version 2012a or later needs to be used. To register two images it uses the imregister and imregtform functions. Imregtform uses mutual information to estimate the geometric transformation that will be used during image registration. Imregister registers the moving image with the fixed image using the geometric transformation estimated by imregtform. Imregister uses an intensity-based image registration [29].
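
The registration step can be reproduced with the Image Processing Toolbox calls named above. The sketch below shows a typical multi-modal configuration with the optimizer parameters mentioned in the text; the variables moving and fixed are placeholders for the loaded volumes.

```matlab
% Multi-modal intensity-based registration with imregtform / imregister.
% moving and fixed are hypothetical 3-D volumes (e.g. FDG-PET and CT).
[optimizer, metric] = imregconfig('multimodal');   % one-plus-one evolutionary optimizer + Mattes MI
optimizer.GrowthFactor      = 1.05;                % defaults mentioned in the text
optimizer.InitialRadius     = 6.25e-3;
optimizer.Epsilon           = 1.5e-6;
optimizer.MaximumIterations = 100;

tform      = imregtform(moving, fixed, 'affine', optimizer, metric);   % estimate the geometric transformation
registered = imregister(moving, fixed, 'affine', optimizer, metric);   % resample moving onto the fixed grid
```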


2.9. Abnormality detection

Now that the images are registered, the detection of abnormalities can start. This calculation is done per modality. Per pixel, the intensity at location (x,y,z) of the patient image is compared to the intensity at location (x,y,z) of its reference image plus A times the intensity at location (x,y,z) of the standard deviation map of that reference image. A stands for the number of standard deviations: to check whether the patient image differs from the reference image by 3 standard deviations, A will be +3. For the abnormality detection 7 different formulas are used. In order to speed up the comparison, a loop runs per slice and compares each pixel of the patient image with the corresponding pixel of the reference image and its standard deviation map. The comparison is based on the number of standard deviations by which the patient image differs from the reference image. This reference image is an average image of 10 healthy brains. Using this reference image the standard deviation is calculated, which is the variation of a pixel and its neighbors.

This standard deviation image of the reference image allows the differences between the reference and the patient image to be classified by

$$P_{i,j,k} < R_{i,j,k} - 3\sigma \qquad (65)$$
$$R_{i,j,k} - 3\sigma \le P_{i,j,k} < R_{i,j,k} - 2\sigma \qquad (66)$$
$$R_{i,j,k} - 2\sigma \le P_{i,j,k} < R_{i,j,k} - \sigma \qquad (67)$$
$$R_{i,j,k} - \sigma \le P_{i,j,k} \le R_{i,j,k} + \sigma \qquad (68)$$
$$R_{i,j,k} + \sigma < P_{i,j,k} \le R_{i,j,k} + 2\sigma \qquad (69)$$
$$R_{i,j,k} + 2\sigma < P_{i,j,k} \le R_{i,j,k} + 3\sigma \qquad (70)$$
$$P_{i,j,k} > R_{i,j,k} + 3\sigma \qquad (71)$$

where Pi,j,k is the patient image, i, j and k the locations in x, y and slice, Ri,j,k the reference image, σi,j,k the standard deviation image of the reference image and a = 1, 2, 3 the number of standard deviations. If one of these formulas applies to a certain pixel, the pixel in Pi,j,k has an abnormality of a standard deviations at the location (x, y, slice) = (i, j, k). Depending on which formula applies to the pixel, the amount of standard deviation difference can be determined; for example, if formula (65) applies, there is a standard deviation difference of −3σ at location (x, y, slice) = (i, j, k). After running the code, the abnormalities are expressed as a number of standard deviations of difference, ranging from −3 to 3. The abnormalities are shown on a morphological background. This part of the code builds on a previous master thesis by Carolina Nogueira Guedes of the University of Lisbon.


2.10. Regions of interest

Using the abnormalities of both the CT and the FDG-PET the location comparison is initiated. In this step the abnormalities of both modalities (1) and (2) are compared with each other in terms of location.

Figure 16: The main program. FDG-PET and CT images can be loaded into the program in order to detect abnormalities. Abnormalities of both modalities can be compared in terms of location in order to pinpoint regions of interest that correspond in both modalities.

Again this is done per slice, per pixel. The program converts the fourth dimension of both abnormality images (one per modality) into a binary image, where 1 stands for an abnormality and 0 for normal. Adding the binary images of both modalities gives an image containing 0's, 1's and 2's. Keeping only the 2's yields the abnormalities that correspond in location in both the FDG-PET and the CT scan. If abnormalities match in location, they are marked as a region of interest (ROI). This region is a three-dimensional volume that contains one complete ROI including the tumor(s) (3). The coordinates of the eight corner points of the ROI are used to extract the ROI from the CT patient image so that it can be displayed (5). This region of interest can be zoomed into (5). Using the popup menu (4), the number of standard deviations of difference can be selected, so that only the ROIs with the selected standard deviation difference are shown. The ROIs are extracted from the patient image using the fourth dimension of the abnormality image.
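A sketch of this location comparison in Matlab, assuming D_pet and D_ct are the standard deviation difference volumes of the two modalities (the variable names and the use of bounding boxes are assumptions):

% Binarize the abnormality maps of both modalities and keep only the voxels
% that are abnormal in both (value 2 after the addition).
binPET = abs(D_pet) >= 1;           % 1 = abnormal voxel in the FDG-PET
binCT  = abs(D_ct)  >= 1;           % 1 = abnormal voxel in the CT
both   = (binPET + binCT) == 2;     % abnormal in both modalities

% Group the coinciding voxels into 3-D regions; the eight corner points of
% each bounding box define one region of interest.
cc   = bwconncomp(both, 26);        % 26-connected 3-D components
rois = regionprops(cc, 'BoundingBox');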


2.11. Grey matter detection

Having some regions of interest with abnormal activity in the FDG-PET and some abnormalities in the CT does not necessarily mean that something abnormal is present in that particular region. Higher metabolic activity in the FDG-PET can be caused by multiple grey matter layers amplifying the FDG-PET signal. Since grey matter takes up more FDG, and therefore emits more gamma rays, than white matter, multiple adjacent grey matter layers generate more activity than is actually present and shadow the activity of the small white matter regions in between them. If the location of the high metabolic activity region in the FDG-PET matches a region with a thick grey matter layer in the grey matter detection image, the region can be ruled out as an ROI. Since grey and white matter detection on CT is not straightforward, a tool was developed to extract the grey and white matter from the patient image (1).

Figure 17: The semi-automatic grey matter detection. The CT image is loaded and, using a lower and upper threshold, the grey and white matter can be detected. While the grey and white matter is detected, the volume is calculated as well.

Using the Hounsfield unit thresholds (5, 6), parts of the brain can be extracted (2, 3). Since the HU values of grey and white matter differ between CT images, these thresholds can be adjusted with a lower and upper limit (5, 6). In this way the grey and white matter can be detected semi-automatically. Unfortunately, the grey matter layer detection was not developed; due to time and material constraints it is left as suggested future research. Additionally, using the voxel size of the medical images, the volume of either the grey or the white matter is determined (7).
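A minimal sketch of such a threshold-based segmentation and volume calculation in Matlab is given below; the threshold values, variable names and DICOM file name are assumptions (grey matter typically lies roughly between 37 and 45 HU and white matter between 20 and 30 HU, but the tool lets the user adjust the limits per scan).

% Threshold the CT volume (in Hounsfield units) between a user-chosen lower
% and upper limit to obtain a grey (or white) matter mask.
lowerHU  = 37;  upperHU = 45;                        % example grey matter limits
greyMask = (ctHU >= lowerHU) & (ctHU <= upperHU);

% Volume from the voxel size stored in the DICOM header (PixelSpacing and
% SliceThickness are given in millimetres).
info        = dicominfo('ct_slice_001.dcm');         % hypothetical file name
voxelVolMm3 = prod(info.PixelSpacing) * info.SliceThickness;
greyVolMl   = nnz(greyMask) * voxelVolMm3 / 1000;    % 1 mL = 1000 mm^3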

2.12. Additional research

Because the program is capable of detecting abnormalities, it can also be used for research purposes, assisting the radiologist and oncologist in analyzing FDG-PET and CT images. Besides analyzing differences in FDG uptake in brain cancer, FDG uptake in the brains of patients with epilepsy or dementia can also be analyzed and might provide valuable information in terms of new diagnostic parameters. Twelve cases were examined in order to check certain regions. However, because images from the FC could not be opened, there are not enough cases to be analyzed. In addition to the abnormality detection, the differences in FDG uptake are compared for two different ITA times: > 3 hours and < 60 minutes. Twelve images of the same patient acquired with the two different injection times are compared in order to obtain the differences. This might say something about the accuracy of FDG-PET images acquired with different injection times. Proposed additional research using the program:

(1) Analysis of differences in FDG uptake between normals and abnormals, in which normals are healthy subjects with no abnormal FDG uptake and abnormals are patients with an abnormal FDG uptake. This analysis may reveal new diagnostic parameters.

(2) Analysis of the differences in FDG uptake for patients with an ITA time of < 60 minutes and patients with an ITA time of > 3 hours. This may give some information about the optimal ITA time.


3. Results

The main purpose of this paper was to create a program that can analyze CT and FDG-PET images in order to improve the diagnosis of brain cancer. Images are loaded and, per modality, compared with a reference image that is based on an average of 10 healthy subjects; the results of this comparison can then be analyzed. The program is created in MATLAB and has several functions.

Figure 18: The main program. FDG-PET and CT images can be loaded into the program in order to detect abnormalities. Abnormalities of both modalities can be compared in terms of location in order to pinpoint regions of interest that correspond to both modalities.

The program can load several types of images: dicom, v3d, gipl, hdr, isi, nii, vmp, raw, xif, vtk, mha, vff and par files. When a patient image is loaded, its file name appears in a listbox (14) from which previously opened images can be opened again. After loading the FDG-PET and CT scans (2, 4), the comparison can be performed to obtain images with standard deviation differences (6, 10) with respect to a reference image (1, 7). The reference image differs per modality. Before the comparison, however, the images need to be corrected for the radioactive decay of the FDG by inserting the ITA time (3, 15). The images are also scaled (4, 9) for an optimal comparison.
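A minimal sketch of such a decay correction in Matlab, assuming the FDG-PET volume pet and an ITA time in minutes (the variable names and the example ITA time are assumptions):

% Correct the measured FDG-PET activity back to the moment of injection.
% 18F has a physical half-life of about 109.8 minutes.
halfLifeMin  = 109.8;
lambda       = log(2) / halfLifeMin;     % decay constant per minute
itaMin       = 75;                       % example injection-to-acquisition time
petCorrected = pet .* exp(lambda * itaMin);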


For the FDG-PET there is a choice of two reference images, since the patient images can be acquired with an ITA time of either < 60 minutes or > 3 hours. Each reference image is the average of 10 healthy subjects acquired with the corresponding ITA time.

To calculate the reference image, a separate tool was developed. In this program, images can be loaded using the same loader as in the main program, so dicom, v3d, gipl, hdr, isi, nii, vmp, raw, xif, vtk, mha, vff or par files can be loaded (A). As the images are loaded, they are shown in the listbox. Before the image registration is applied, the images can be corrected for radioactive decay by inserting the ITA time and pressing correct (B); this compensates for the activity of the contrast agent decreasing over time. Secondly, the images can be scaled, which is important when the reference and patient images are compared. Once the images are scaled they can be registered: clicking on one of the images in the listbox opens the image registration tool, and the chosen image is registered to the first loaded image. When the images are scaled, corrected and registered, the reference image and its standard deviation map over all loaded images (C) can be calculated and saved as a dicom file. The standard deviation is calculated from the average of the 10 healthy subjects. The reference image and its standard deviation map can then be loaded into the main program, after which the abnormality detection can start.
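A minimal sketch of this last step in Matlab, assuming the 10 registered, corrected and scaled volumes are stacked along the fourth dimension of a single array vols:

% Voxel-wise mean and standard deviation over the healthy subjects.
refImage = mean(vols, 4);     % reference image
stdMap   = std(vols, 0, 4);   % standard deviation map (normalised by N-1)

% Both volumes can then be written out slice by slice, e.g. with dicomwrite,
% so that the main program can load them as reference and standard deviation map.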

Alternatively, a standard deviation map of a single image can also be made. This is useful for research purposes, for instance when the difference in FDG uptake between two different ITA times is determined: one of the images then serves as the reference image, and its standard deviation map is calculated from that single image.

Figure 20: The reference image and standard deviation map creator. Images can be loaded into the program and intensity corrected for the ITA time. A general intensity scaling can also be applied.

In order for an optimal abnormality detection, the patient and the reference image need to be registered. For the registration there is a choice of rigid, translational, similarity and affine transformations. The registration tool is used to register the reference image with the patient image per modality, and to register the FDG-PET and CT patient images with each other. Image registration is also used for the calculation of the reference image, since the images need to be aligned before the average over the loaded images is calculated.

Figure 21: Image registration tool by Brett Shoelson. There are multiple options to register the image. The program is able to execute monomodal or multimodal registration with translational, rigid, similarity or affine transformations.

For FDG-PET, the weight of the patient and the injected amount of contrast agent can be entered to calculate the standard uptake value (SUV) (5). Entering these values and pressing the calculate button shows the maximum SUV in the patient image. Selecting the SUV check box (5) shows the complete SUV image, calculated per pixel, in the field where normally the FDG-PET abnormalities are shown (6). Unselecting the check box shows the FDG-PET abnormalities again.
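A minimal sketch of the per-pixel SUV calculation in Matlab, assuming the FDG-PET volume is given in Bq/mL and the injected, decay-corrected activity and patient weight are known (all variable names and values are examples):

% SUV = tissue activity concentration / (injected activity / body weight).
doseMBq  = 250;                                                 % example injected activity
weightKg = 75;                                                  % example patient weight
suvImage = petBqPerMl ./ (doseMBq * 1e6 / (weightKg * 1000));   % result in g/mL
suvMax   = max(suvImage(:));                                    % maximum SUV shown by the tool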

When the standard deviation difference images of both modalities have been made, they can be compared. This results in a map that shows where the differences of both modalities coincide in location (11). For an optimal comparison, both images are registered to each other first. The coinciding standard deviation difference map is used to show the regions of interest.
