
Detection of active spleen haemorrhage in whole-body computed tomography: A patch based multi-atlas approach




Detection of active spleen haemorrhage

in whole-body computed tomography

A patch based multi-atlas approach

Jorrit Posthuma j.s.posthuma@amc.uva.nl

10459251

Biomedical Engineering and Physics (BMEP) Academic Medical Center, Amsterdam, The Netherlands

University of Amsterdam, Amsterdam, The Netherlands

Thesis submitted to obtain the degree of Master of Science in Medical Informatics

Date: September 15, 2017
University of Amsterdam, BMEP
Amsterdam, The Netherlands

Dr. H.A. Marquering
R. S. Barros MSc
Dr. F.P.J.M. Voorbraak


Mentors

Dr. Henk A. Marquering
• Scientific staff member
• Academic Medical Center
• Biomedical Engineering and Physics, Radiology
• h.a.marquering@amc.uva.nl
• Room L0-106 (65182)

Renan Sales Barros, MSc
• Daily mentor
• PhD candidate
• Academic Medical Center
• Biomedical Engineering and Physics
• r.salesbarros@amc.uva.nl

Tutor

Dr. Frans P.J.M. Voorbraak
• Academic Medical Center, University of Amsterdam
• Department of Medical Informatics
• f.p.voorbraak@amc.uva.nl
• Room J1B-110 (65853)


Acknowledgements

I would like to thank a few people for helping and supporting me during the course of this research project.

First of all, I would like to thank Renan, Henk and Frans for their endless support and feedback during the time of my research. Their enthusiasm and positive feedback helped me pursue my goals. I would also like to thank Ludo and TraumaNet for providing me with the necessary data to develop, test and validate the method. I would further like to thank the people of the Biomedical Engineering and Physics department for making me feel welcome and making it a very pleasant time.

At home, my friends and family, Jasper, Tim, Maaike, Nella and Hessel, never failed to support and motivate me and keep me on the right track. And last, but certainly not least, I would like to thank José, my girlfriend, for supporting me in every way possible, and always keeping faith.

Jorrit Posthuma September 15, 2017


Abstract

Whole-Body CT (WBCT) scanning has been shown to have a positive effect on clinical outcome compared with selective scanning. As a result, the increased availability of WBCT scans leads to a higher workload for radiologists. This research demonstrates an automated method for the segmentation of active haemorrhage within the spleen on WBCT scans in a trauma setting. Automated methods can support radiologists, helping to reduce workload and increase the quality of diagnosis. Our method creates the segmentation in three steps: it first estimates the approximate location of the spleen, then divides it into classifiable patches, and finally classifies these patches with a manually constructed decision tree. For the intermediate trauma-organ segmentation step, our method achieves an average DICE of 0.80, and for haemorrhage segmentation an average DICE of 0.54. This shows that automated methods are capable of active haemorrhage segmentation in WBCT and, after extensive future research, could be of value in the clinic. A larger dataset is required to investigate more advanced classification methods and to validate the results for clinical use.


Samenvatting (Dutch summary, translated)

Whole-body CT (WBCT) scanning has been shown to have a positive clinical effect compared with selective CT scanning. This has resulted in an increase in the number of WBCT scans and a higher workload for radiologists. This research demonstrates an automated method for segmenting active haemorrhages in the spleen on WBCT scans in a trauma setting. Automated methods can support radiologists, reduce the workload and improve the quality of the diagnosis. Our method arrives at the segmentation in three steps. The first step produces a rough estimate of the location and region of the spleen. This region is then divided into classifiable patches. In the third step, the patches are classified using a manually constructed decision tree. The spleen segmentation step achieves an average DICE index of 0.80, and the final active-haemorrhage segmentation an average DICE index of 0.54. This shows that automated methods are capable of segmenting active haemorrhages and could be of value in the clinic. A larger dataset is needed to investigate more advanced classification methods and to validate the results for clinical use.


Table of Contents

Acknowledgements
Abstract
Glossary

1 Introduction
  1.1 Narrative
  1.2 Medical imaging in trauma care
    1.2.1 CT imaging
      1.2.1.1 Benefits
      1.2.1.2 Detriments
  1.3 Spleen and haemorrhages
    1.3.1 Contrast
  1.4 Limiting scope
  1.5 Data
  1.6 Libraries
  1.7 Structure

2 Organ segmentation
  2.1 Whole body haemorrhage detection
  2.2 Organ segmentation methods
    2.2.1 Trauma imaging
  2.3 Registration
    2.3.1 Knowledge transfer
    2.3.2 Multi-Atlas registration
  2.4 Atlas selection and label fusion
  2.5 Multi-phase registration
  2.6 Parameter search
  2.7 Result

3 Patch segmentation
  3.1 Tissue segmentation
    3.1.1 Clustering
      3.1.1.1 Fuzzy clustering
    3.1.2 Watershed
    3.1.3 Metric segmentation
  3.2 Filtering
    3.2.1 Bone filtering
    3.2.2 Kidney filtering

4 Classification
  4.1 Manual decision tree
  4.2 Random forests
  4.3 Neural networks
  4.4 Method selection

5 Validation
  5.1 Organ segmentation
    5.1.1 Liver
    5.1.2 Spleen
  5.2 Spleen haemorrhage segmentation

6 Discussion and conclusion
  6.1 Comparison with existing methods
  6.2 Strengths and weaknesses
  6.3 Future work
  6.4 Clinical Relevance
  6.5 Conclusion

Bibliography


Glossary

ANN: Artificial Neural Network
ATLS: Advanced Trauma Life Support
CECT: Contrast-Enhanced Computed Tomography
CT: Computed Tomography
DICE: Sørensen-Dice coefficient
GPU: Graphics Processing Unit
HU: Hounsfield Unit
NCCT: Non-Contrast Computed Tomography
ROI: Region Of Interest
SSD: Sum of Squared Differences
TBCT: Total-Body Computed Tomography
Voxel: a value represented in three-dimensional space; the two-dimensional equivalent is called a pixel. Voxels can have an asymmetric rectangular (non-cubic) shape.


Chapter 1: Introduction

This chapter provides an introduction to trauma care, its use of radiological imaging, and the related research possibilities we investigated.

1.1 Narrative

Steven, a 21-year-old law student, is biking back home from having drinks with his friends. He drank a little more than planned, so when his wheel hits the kerb, he is not able to keep his balance and falls.

Luckily, a bystander sees him lying on the street shortly after the accident and calls an ambulance. He does not remember much, but indicates that his head hurts most. After arriving at the hospital, the attending physician decides to make a CT scan of Steven’s head. The scan does not show any severe problems, so he is sent home with the diagnosis of a mild concussion.

In the middle of the night, Steven wakes up with a heavy pain in his abdomen and decides to go back to the hospital. After performing a CT scan of his abdomen, the physicians discover blood within his spleen. Apparently, when Steven fell off his bike, his stomach first hit one of his handlebars, causing the rupture in the spleen, which was missed during the initial examination.


1.2 Medical imaging in trauma care

The story about Steven is an example of a situation where a selective CT scanning procedure was applied. With this procedure, only the part of the body that the physician deems relevant for making a diagnosis is scanned. This is the default procedure in a trauma care setting, as recommended by ATLS [1].

1.2.1 CT imaging

A CT scanner is a machine that uses X-rays to create volumetric images of its content. On these images, matter that is radiodense (harder for the X-rays to pass through) appears brighter than matter that is radiolucent. This lets us look inside the human body and at the organs without intervention, allowing us to make assessments about the health of a patient.

Over time, CT scanners have become faster and require lower radiation levels while providing better quality [2]. In addition, scanners are becoming more widespread and are located more closely to the trauma centre in a hospital [3]. Because of these improvements, the idea arose to not only perform a selective scan (figure 1.1a) of a patient but to investigate the benefits of performing a Whole-Body CT scan (figure 1.1b, also known as WBCT, Total-Body CT scan, TBCT or pan-scan) instead [4].

(a) Selective scan

(b) Whole Body CT


1.2.1.1 Benefits

One of the benefits of performing a WBCT is that it reduces the probability of missing information relevant to the patient, resulting in an increased survival rate [5]. For example, in Steven's case, a radiologist could have seen the ruptured spleen if a WBCT had been performed instead of a selective scan of his head. This way, the bleeding could have been stopped while it was still ongoing.

1.2.1.2 Detriments

For the patient, the main disadvantage of a WBCT scan, compared to a selective scan, is an increased radiation dose. It is still unclear if the long-term benefit of increased clinically relevant findings outweighs the negative impact of the increased radiation dose [6].

For the radiologist, an important difference when performing a WBCT scan is the increased assessment time of the scan. For example, a head CT scan has around 120 slices, each with a 512 x 512 resolution, whereas a WBCT scan can reach 1000 slices at the same resolution. For a radiologist, this means a significant increase in the amount of data to be evaluated. This is where computer automation could play a role: it could help evaluate the data more quickly and more precisely. This research develops an automated method that can be used to assist radiologists in assessing WBCT images.

1.3 Spleen and haemorrhages

Instead of investigating all possible traumas, we first focus on internal bleeding, also called haemorrhage. Haemorrhage after blunt trauma is a major contributor to death after trauma [7]. Of all possible locations where a haemorrhage can occur, an injured spleen is the most frequent cause of haemorrhages occurring in the abdomen.

We can formulate our goal as the following question:

“Is it possible to automate detection of active spleen haemorrhages in whole-body CT scans within trauma care?”

Before we can answer this question, it is important to know how haemorrhages appear on a CT image. As explained, matter is displayed based on its radiographic density. This is represented in Hounsfield Units (HU), where demineralised water is defined as 0 HU.


1.3.1 Contrast

For a radiologist, a CT scan is a tool that can be used to discover a disruption of blood circulation within the body. To increase the visibility of blood, it is possible to add a contrast agent into the blood volume of the patient. This contrast agent has a higher radiological density and therefore provides more contrast within the image.

Figure 1.2a shows a Non-Contrast CT (NCCT). On average, the intensity of the spleen on an NCCT is around 45 HU [8]. However, biological aspects, like genetics and diseases such as haemosiderosis, sickle-cell disease and lymphoma, can influence its appearance on a CT scan [8]. In figure 1.2b, a contrast agent was administered to the patient, and the CT scan was made during the arterial phase. During this arterial phase, the density of the spleen typically appears heterogeneous. This irregular pattern can be mistaken for an active haemorrhage. During the venous phase, as shown in figure 1.2c, a healthy spleen appears (close to) homogeneous on a Contrast-Enhanced CT [8] and is, therefore, better suited for assessing haemorrhages.

(a) Non-Contrast CT (b) Arterial phase Contrast CT (c) Venous phase Contrast CT

Fig. 1.2: Contrast and timing intensity differences of a healthy spleen [8]

1.4 Limiting scope

To make the detection (a binary question) of active spleen haemorrhages more quantifiable, we focus mainly on haemorrhage segmentation. Within image processing, segmentation is the process of partitioning an image into multiple segments. In our case, this is the segmentation of the image into haemorrhagic tissue and non-haemorrhagic tissue. Therefore, our main research question is:

“How to segment active spleen haemorrhages in whole-body CT scans in an acute trauma setting?”

In section 2.1, a naive method for whole-body haemorrhage detection is described and shown to have a serious problem. To overcome this problem, we propose in section 2.2 an approach consisting of two parts: segmenting the spleen within the whole-body scan, and subsequently analysing the spleen tissue and segmenting the haemorrhagic areas within the segmented spleen. This naturally leads to the following research sub-questions:

“How to segment the spleen within a traumatic whole-body CT scan?”

“How to detect haemorrhagic tissue within the spleen?”

1.5 Data

Throughout this research, we use the following datasets to build, test, train and validate the methods:

• 20 healthy-patient whole-body CT atlases with multi-organ segmentations
  • From the Visual Concept Extraction Challenge in Radiology (VISCERAL) Anatomy2 Gold Corpus
  • Used as part of the organ segmentation method

• 5 trauma-patient whole-body & abdominal CT scans with spleen segmentation
  • From the Academic Medical Center, Amsterdam, The Netherlands
  • Segmentations performed by an external expert
  • Used to develop and fine-tune the organ segmentation method
  • ID: WHITEFEET SEG {2, 5, 6, 9, 10}

• 5 trauma-patient whole-body & abdominal CT scans with spleen and liver segmentation
  • From the Academic Medical Center, Amsterdam, The Netherlands
  • Segmentations performed by an external expert
  • Used to validate the organ segmentation method
  • ID: WHITEFEET SEG {1, 3, 4, 7, 8}

• 8 trauma-patient whole-body & abdominal CT scans with spleen haemorrhage segmentation
  • From the Academic Medical Center, Amsterdam, The Netherlands
  • Segmentations performed by an external expert
  • Used to train the patch segmentation and classification method
  • ID: WHITEFEET PATCH {2, 7, 8, 9, 13, 15, 16, 17}

• 4 trauma-patient whole-body & abdominal CT scans with spleen haemorrhage segmentation
  • From the Academic Medical Center, Amsterdam, The Netherlands
  • Segmentations performed by an external expert
  • Used to validate the patch classification method
  • ID: WHITEFEET PATCH {10, 12, 14, 18}

1.6 Libraries

Our method is implemented using the Python 3 programming language and the Jupyter (previously known as iPython) environment. We used several libraries and tools to develop our software, of which the following are the most important:

• ITK (Insight Segmentation and Registration Toolkit): basic image and segmentation operations, https://itk.org
• Elastix: rigid and nonrigid registration, http://elastix.isi.uu.nl
• NumPy: vector and matrix operations and analysis, http://www.numpy.org
• Keras: high-level neural network design, http://keras.io
• TensorFlow: low-level neural network execution, http://tensorflow.org
• Scikit-learn: clustering and classification, http://scikit-learn.org
• VMTK (The Vascular Modeling Toolkit): vesselness filtering, http://vmtk.org
• ANTs (Advanced Normalization Tools): statistical label fusion


1.7 Structure

The rest of this thesis is structured as follows. Chapter 2 discusses existing methods for organ segmentation and their applicability to whole-body and emergency CT images, and explains the design choices made before and during the development of our organ segmentation method. Chapters 3 and 4 focus on the haemorrhage segmentation method: chapter 3 describes the details of the patch segmentation, and chapter 4 explains how these patches are classified into haemorrhagic and non-haemorrhagic tissue. Chapter 5 presents the validation results on a test dataset, both for the separate parts of the method and for the method as a whole. The final chapter summarises and evaluates the results of the research as well as the research itself.


Chapter 2: Organ segmentation

This chapter explains the design choices made before and during the development of the organ segmentation method. We evaluate previously reported methods for organ segmentation, and their applicability to whole-body and emergency CT images.

A CT scan is a three-dimensional scan that allows a look inside the human body without invasive measures. Tissue on a CT scan is represented by the amount of (X-ray) radiation that can pass through it, describing its radiodensity. As different tissue types have different radiodensities, they also appear as different intensities in the image. A radiologist uses prior knowledge about the expected radiodensities of human tissue to look for anomalies. This knowledge comes from years of training and results in heuristics that take not only the local intensity but also spatial information and the relationships between different tissues into account. The heuristics applied by radiologists also vary, which explains interobserver variability [9].

2.1 Whole body haemorrhage detection

When a haemorrhage is ongoing during the scan, it leaks blood with contrast into the surrounding tissue. The haemorrhagic tissue appears hyperdense (brighter, with a higher HU) compared to the surrounding non-haemorrhagic tissue. Our primary goal is to detect these hyperdense areas. Because the radiodensity of human tissue varies strongly, the radiodensity of haemorrhagic tissue varies similarly.

To get a better sense of the variation in intensities that can be found in a whole-body CT, we use a sample image from our spleen haemorrhage dataset. Within a whole-body CT scan, the intensity values commonly lie between -1024 HU and 3071 HU. Inhaled air is a mixture consisting mostly of oxygen, nitrogen and water, which results in a slightly higher HU than pure oxygen. This knowledge makes it possible to locate the air in the lungs using a threshold method. A threshold method creates a mask in which only the voxels between a lower and an upper limit are selected. Using a lower limit of -950 HU and an upper limit of -700 HU, we get the mask shown in figure 2.1a.
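The threshold step described above amounts to an element-wise comparison over the volume. A minimal NumPy sketch (the toy volume below is illustrative, not taken from our dataset; the -950 and -700 HU limits are the ones used for the lung mask above):

```python
import numpy as np

def threshold_mask(volume_hu, lower, upper):
    """Boolean mask selecting voxels whose intensity lies in [lower, upper] HU."""
    return (volume_hu >= lower) & (volume_hu <= upper)

# Toy 3D "scan": surrounding air at -1000 HU, lung-like air mixture at -850 HU,
# soft tissue at 60 HU.
scan = np.full((4, 4, 4), 60.0)
scan[0] = -1000.0   # surrounding air, below the lower limit
scan[1] = -850.0    # inhaled air mixture, inside the threshold window

lung_mask = threshold_mask(scan, -950, -700)  # selects only the -850 HU slab
```

The same function with the 175 to 225 HU bounds of figure 2.1d illustrates the problem discussed next: the mask would also fire on non-haemorrhagic tissue elsewhere in the body.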

When looking at abdominal tissue, the intensity differences between the abdominal organs are significantly smaller. In our example Contrast-Enhanced CT (CECT) image, shown in figures 2.1b and 2.1c, the intensity of the liver ranges between 50 HU and 130 HU, while the spleen ranges between 65 HU and 175 HU. For these organs, differentiation based on intensity levels is more difficult. In figure 2.1d, the lower and upper bounds of the threshold are selected to show only the haemorrhagic tissue in the spleen. However, as shown, these intensity values can be found throughout the whole body, and not only in haemorrhagic tissue.

(a) Threshold mask from -950 HU to -700 HU

(b) Threshold mask from 50 HU to 130 HU

(c) Threshold mask from 65 HU to 175 HU

(d) Threshold mask from 175 HU to 225 HU

Fig. 2.1: Different intensity threshold masks of a single contrast-enhanced whole-body CT image


When developing a whole-body haemorrhage detection method, we need to take into account what values of radiodensity are expected in a particular spatial area to be able to differentiate between healthy and haemorrhagic tissue.

2.2 Organ segmentation methods

A way to make use of spatial information during whole-body haemorrhage segmentation is to start by segmenting the separate organs. This allows organ-specific heuristics to overcome the intensity and structural differences between organs when segmenting haemorrhagic tissue.

Region-based methods, as well as edge-detection-based methods, use only the image data as input. Both methods have proven to work well in situations where organ boundaries are visible and clear [10]. However, within a trauma setting, boundaries often fade or disappear.

2.2.1 Trauma imaging

In figure 2.2, we see a sample CT image from our training dataset. Figure 2.2a shows the image as scanned, and figure 2.2b the same image with the spleen manually segmented by a human expert. The patient has a grade IV (AAST spleen injury scale) spleen rupture with contrast extravasation, which is an indicator of active haemorrhage.

Because of the devascularisation of the spleen, the intensity level is unpredictable, which makes it harder to perform intensity-based segmentation. In addition, haemorrhages can still appear in the devascularised tissue. Segmentation based solely on intensity values would, therefore, exclude the devascularised tissue. A human expert also uses prior knowledge and spatial information to guide the manual segmentation and prevent exclusion of devascularised tissue. For our automated segmentation, we use image registration to make use of prior knowledge and spatial information. Image registration is commonly used as a method for organ segmentation in healthy patients [11], [12], [13], [14].

2.3 Registration

(a) Without spleen mask (b) With spleen mask

Fig. 2.2: Example slice of a trauma CT scan, illustrating the segmentation complexity in trauma imaging.

Image registration is the method of transforming an image in such a way that its similarity with a reference image increases. This transformation makes it possible to combine or transfer knowledge between images that would otherwise be impossible to combine. Within registration, the transformed image is called the moving image, and the reference image is called the fixed image. To illustrate this process, we register one of our training WBCT images and show the intermediate steps of the registration process and the final knowledge transfer. We start with a fixed CT image with spleen haemorrhage (figure 2.3a) and a moving CT atlas image (figure 2.3b). The goal is to transform the moving image (2.3b) in three dimensions in such a way that the differences are (close to) minimal. As shown in figure 2.3c, when the two images are combined, the differences are clear.

Within image registration, there are several transformation models. A division can be made between global and local transformations. Global transformations like translation (moving), scaling and rotation apply to the image as a whole. Local transformations, on the other hand, like B-spline, are capable of making local 'elastic' adjustments. It is common to use a combination of global and local transformation models to improve robustness [15]. The output of each registration step is the moving input for the next.

Our first registration step is a translation registration. During this step, the moving image is translated to increase the similarity. The resulting image is shown in figure 2.3d, and the same result combined with the original fixed image (figure 2.3a) is shown in figure 2.3g. Compared to the original combined image in figure 2.3c, this is already a significant improvement. The second registration step is a combination of the other affine transformations: rotation, scaling and shearing. As it uses the result of the translation step (figure 2.3d) as input, the location of the image does not change much. Figure 2.3e shows the result of the affine registration. Because we can only show a two-dimensional slice, it might look as if the fit has not improved, while the overall three-dimensional fit has. In figure 2.3h, we again show the result combined with the original fixed image (figure 2.3a).

We conclude by refining the result using a B-spline registration. A B-spline uses a grid in which every point can be adjusted to increase the similarity; the surrounding points are stretched or compressed accordingly to compensate for the adjustment. The grid can therefore be thought of as elastic. The result of the B-spline registration is shown in figure 2.3f, and combined with the original image in figure 2.3i.
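In our implementation these transformation models are handled by elastix, but the idea behind the translation step can be illustrated with a brute-force search that minimises a sum-of-squared-differences dissimilarity over integer shifts. A conceptual NumPy sketch (the 2-D toy images, function names and shift range are ours, not part of the actual method):

```python
import numpy as np

def ssd(a, b):
    """Sum of squared differences between two images of equal shape."""
    return float(np.sum((a - b) ** 2))

def register_translation(fixed, moving, max_shift=3):
    """Exhaustively try integer 2-D shifts of `moving` and keep the one that
    minimises the SSD against `fixed` (np.roll wraps around, which is
    acceptable for this toy example)."""
    best, best_ssd = (0, 0), ssd(fixed, moving)
    for dy in range(-max_shift, max_shift + 1):
        for dx in range(-max_shift, max_shift + 1):
            s = ssd(fixed, np.roll(moving, (dy, dx), axis=(0, 1)))
            if s < best_ssd:
                best_ssd, best = s, (dy, dx)
    return best

# A bright 2x2 "organ" at different positions in two 8x8 images.
fixed = np.zeros((8, 8)); fixed[4:6, 4:6] = 100.0
moving = np.zeros((8, 8)); moving[2:4, 3:5] = 100.0

shift = register_translation(fixed, moving)  # shift that aligns moving to fixed
```

Real registration frameworks replace the exhaustive search with gradient-based optimisers and add the affine and B-spline models on top, but the optimise-a-similarity-metric structure is the same.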

2.3.1 Knowledge transfer

As shown in figure 2.3i, the results of the registration are quite good, making it possible to transfer knowledge from one image to another. For the moving image in figure 2.3b, we have a mask of the right lung, as shown in figure 2.4a. By applying the transformations from the three registration steps to this mask, we obtain a new mask, shown in figure 2.4b. This mask can then be used as a lung mask for the original fixed image (figure 2.3a), as shown in figure 2.4c.

2.3.2 Multi-Atlas registration

To be able to transfer segmentations to our images, we first need images in which the spleen is already segmented. For this, we use the collection of twenty healthy-patient atlases. These atlases are WBCT scans with (a subset of) the organs segmented. To accommodate the diversity of the human body, we use a multi-atlas approach, which has been shown to result in overall higher-quality segmentations [16]. This multi-atlas approach means applying the registration steps to each atlas separately, and then reducing or combining the individual segmentations into one single segmentation.


(a) Fixed image (b) Moving image (c) Fixed + moving image

(d) After translation step (e) After affine step (f) After B-spline step

(g) Fixed + translation step (2.3d) (h) Fixed + affine step (2.3e) (i) Fixed + B-spline step (2.3f)

Fig. 2.3: Three registration models applied to two testing WBCT scans. Figure 2.3a is the fixed image and figure 2.3b the moving image. Figures 2.3d to 2.3f show the registration results, and figures 2.3g to 2.3i show the same results combined with the fixed image (figure 2.3a), demonstrating the improving similarity.


(a) Moving image (2.3b) + mask (b) Transformed (2.3i) + final mask (c) Fixed image (2.3a) + final mask

Fig. 2.4: Example of knowledge transfer after registration

2.4 Atlas selection and label fusion

Commonly used label fusion methods to reduce or combine segmentations are majority vote [16], weighted vote, and online and offline statistical fusion. Modern multi-atlas healthy-organ segmentation methods ([11], [12]) and spleen-specific segmentation methods ([13], [14]) all use some form of statistical fusion. Online statistical fusion uses not only the labels but also the images to determine the best fit. However, as we will show later, because we use healthy patients as atlases and trauma patients as target images, these online statistical methods perform worse. As weighted voting uses weights as additional input for merging, we analysed whether there is any correlation between the similarity of the image and atlas and the quality of the registration. As a measure between the images, we use the Sum of Squared Differences (SSD) dissimilarity measure, which is defined as:

SSD = \sum_{i=1}^{n} (T_i - \hat{A}_i)^2

Here, T is the target image (e.g. figure 2.3a) and Â the registered atlas image (e.g. figure 2.3f), where i loops over the voxels. The quality of the registration is determined by calculating the DICE coefficient between the segmentation after registration and a manually segmented reference segmentation. The DICE coefficient is defined as:

DICE = \frac{2\,|T_{rs} \cap \hat{A}_s|}{|T_{rs}| + |\hat{A}_s|}


Here, T_{rs} is the set of spleen-labelled voxels of the reference segmentation, and Â_s is the set of spleen-labelled voxels of the segmentation resulting from registration.
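Both measures follow directly from their definitions; a minimal NumPy sketch with toy data (function names and arrays are ours):

```python
import numpy as np

def ssd(target, atlas):
    """Sum of squared differences between target image T and registered atlas."""
    return float(np.sum((target - atlas) ** 2))

def dice(ref_mask, seg_mask):
    """Sørensen-Dice coefficient between two boolean segmentation masks."""
    intersection = np.logical_and(ref_mask, seg_mask).sum()
    return 2.0 * intersection / (ref_mask.sum() + seg_mask.sum())

# Toy 1-D "segmentations" of five voxels.
ref = np.array([1, 1, 1, 0, 0], dtype=bool)
seg = np.array([1, 1, 0, 0, 1], dtype=bool)
# |ref ∩ seg| = 2, |ref| = 3, |seg| = 3, so DICE = 2*2 / (3 + 3) = 2/3
```

Note that SSD is a dissimilarity measure (lower is better) while DICE is a similarity measure (higher is better), which is why the correlation reported below is negative.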

As shown in figure 2.5, with an average Pearson correlation coefficient of -0.140, only a weak correlation exists between the image SSD and the segmentation DICE. The correlation is negative because we compare a similarity measure with a dissimilarity measure. Because of this weak correlation, we decided not to use the SSD as weights for weighted voting, and to use the SSD only to eliminate the absolute worst registrations. This means we focus only on the majority vote and the offline statistical fusion method.

Fig. 2.5: The circles and line show the SSD between the target image and the registered atlas image. The SSD is normalised so that the worst-matching atlas (with the highest SSD) has value 1. The graph is ordered from the atlas with the smallest to the atlas with the largest SSD value. The squares show the DICE coefficients of the atlas segmentations against the reference segmentation. The Pearson correlation coefficients for the different images are -0.173, -0.325, -0.105, 0.003 and -0.100.

For the comparison between majority voting and the statistical methods, we use STAPLE [17] and Local Weighted Joint-Fusion [18] as the reference statistical methods. These statistical methods have been shown to perform significantly better than majority voting when applied to images from healthy subjects [11], [12]. Majority voting uses a qualified majority of atlases to determine whether a voxel should be classified as spleen or not. The number of atlases required for a positive classification is determined in the combined parameter search with the other registration parameters, which is discussed in depth in section 2.6. Figure 2.6 shows the DICE coefficients for all segmentations. It shows that Joint-Fusion performs similarly or worse on every image compared with the majority voting and STAPLE methods. We suspect it performs worse because the Joint-Fusion method uses not only the organ segmentations as input but also the registration images, and can be confused by the large intensity differences of the traumatic tissue. The Joint-Fusion method works well when the boundaries of the organs are clear, as in healthy patients, but appears to have a negative effect when segmenting organs in trauma patients because of the larger intensity differences.

Fig. 2.6: Comparison between the statistical fusion methods Local Weighted Joint-Fusion by Wang [18] and STAPLE, and the majority voting method. All methods are compared against the reference segmentation.
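Majority voting itself is a simple reduction over the registered atlas labels. A minimal NumPy sketch, where the `threshold` argument stands in for the required number of agreeing atlases (a tuned parameter in our method); the function name and toy masks are ours:

```python
import numpy as np

def majority_vote(label_masks, threshold=None):
    """Fuse binary atlas segmentations: a voxel is labelled spleen when at
    least `threshold` atlases agree (default: strict majority)."""
    votes = np.sum(np.asarray(label_masks, dtype=int), axis=0)
    if threshold is None:
        threshold = len(label_masks) // 2 + 1
    return votes >= threshold

# Three registered atlas segmentations of the same 1-D strip of voxels.
atlas_segs = [
    np.array([1, 1, 0, 0], dtype=bool),
    np.array([1, 0, 1, 0], dtype=bool),
    np.array([1, 1, 0, 0], dtype=bool),
]
fused = majority_vote(atlas_segs)  # votes are [3, 2, 1, 0]
```

Raising `threshold` makes the fused mask more conservative, which is exactly the degree of freedom explored in the parameter search of section 2.6.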

2.5 Multi-phase registration

The whole-body registration method tries to optimise the similarity between the moving and the fixed image over the whole body, without giving priority to a specific area. By using a multi-phase approach [19], we can give higher priority to the region surrounding the target organ. The location of this region can be constructed from the result of the whole-body registration. From here on, we refer to the whole-body registration step as the global registration, and to the locally focused registration step as the local registration.

The complete organ segmentation procedure is as follows:

Global registration

1. Perform three-step registration (translation, affine, B-spline) for all atlases (figure 2.7a)

2. Combine registration results using majority voting (figure 2.7b)

3. Dilate the combined mask and create a bounding box (figure 2.7c)

The bounding box resulting from the global step is concentrated around the targeted organ. To lower the chance of the focus region for the local registration being too narrow, we include all atlas segmentations and add a small margin to the region by dilating the resulting mask. The SSD similarity between the local registration result and the target image has a stronger correlation with the quality of the segmentation than that of the global registration step. This correlation allows us to select the top atlases before performing the local majority voting.
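The dilate-and-box step can be approximated by taking the extent of the combined mask per axis and padding it with a margin. A hypothetical NumPy sketch (in the actual method the margin comes from morphological dilation of the mask; function name, margin value and toy mask are ours):

```python
import numpy as np

def bounding_box(mask, margin=2):
    """Axis-aligned bounding box (as slices) around a boolean mask, expanded
    by `margin` voxels per side and clipped to the volume."""
    slices = []
    for axis in range(mask.ndim):
        other = tuple(a for a in range(mask.ndim) if a != axis)
        hits = np.where(mask.any(axis=other))[0]  # indices with any True voxel
        lo = max(int(hits[0]) - margin, 0)
        hi = min(int(hits[-1]) + margin + 1, mask.shape[axis])
        slices.append(slice(lo, hi))
    return tuple(slices)

# Toy 2-D "combined segmentation": a 2x2 organ inside a 10x10 image.
mask = np.zeros((10, 10), dtype=bool)
mask[4:6, 5:7] = True
box = bounding_box(mask, margin=2)
region = mask[box]   # the cropped focus region for the local registration
```

The resulting slices can be applied directly to the target image and every atlas before the local registration step.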

Local registration

4. Apply the bounding box to all atlases
5. Perform three-step registration on all boxed atlases
6. Select the top atlases based on SSD (figure 2.7d)
7. Combine registration results using majority voting (figure 2.7f)

Since we use elastix [20] as our registration framework, we can choose how the similarity between the fixed and moving image is calculated. The framework optimises a metric that can be specified as a combination of one or more similarity metrics and penalty metrics. For the global registration, we use the Advanced Normalized Correlation (ANC) [20] as the similarity metric. This is a normalised metric based on the mean squared differences, similar to the SSD metric. It works well for finding an approximate target region for the organ of interest, as it tries to minimise overall differences.

For the local registration, we use the Advanced Mattes Mutual Information (AMMI) metric [21] as the similarity metric. AMMI gives a higher priority to


(a) The global registration segmentations of all atlases overlapped

(b) The majority voted merging result of the global registration segmentations

(c) The dilated registration from which the bounding box will be determined

(d) The local registration segmentations of the atlases overlapped

(e) The majority voted merging result of the local registration segmentations

(f) Figure as final result overlapping the original image

Fig. 2.7: Intermediate segmentation results from the organ segmentation method, run on the example image in figure 2.2a


areas that are more similar between the fixed and moving image than to regions that are less similar. Because we register traumatic patients to healthy-patient atlases, we want the method to give priority to the healthy parts of the organ and the surrounding healthy tissue.
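The SSD-based atlas selection (step 6 above) can be illustrated with a small sketch; the intensity vectors below are invented stand-ins for the locally registered region:

```python
# Illustrative sketch: rank locally registered atlases by their SSD against
# the target region and keep the top N before label fusion. Toy values.

def ssd(a, b):
    """Sum of squared differences between two intensity vectors."""
    return sum((x - y) ** 2 for x, y in zip(a, b))

def select_top_atlases(target, atlases, n):
    """Return the indices of the `n` atlases most similar to the target."""
    ranked = sorted(range(len(atlases)), key=lambda i: ssd(target, atlases[i]))
    return ranked[:n]

target = [100, 150, 200]
registered = [[100, 148, 205],   # atlas 0: close match
              [90, 160, 180],    # atlas 1
              [10, 20, 30]]      # atlas 2: poor registration
print(select_top_atlases(target, registered, n=2))  # [0, 1]
```

Dropping the poorly registered atlas before majority voting is what keeps a bad local registration from polluting the fused label.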

2.6 Parameter search

In addition to the ANC metric, we added a Transform Bending Energy penalty to restrict the amount of local transformation during the B-spline step of the global registration. This prevents the B-spline method from overfitting the moving image towards the fixed image. Overfitting would lead to artefacts and lower-quality segmentations. The similarity metric used during the global registration is defined as:

Similarity(A, T) = W_1 · AdvancedNormalizedCorrelation(A, T) + W_2 · TransformBendingEnergyPenalty(A)

As the weights W_1 and W_2 give relative priority to one of the two components, we set W_1 to one and determined the optimal value for W_2. During our parameter search, we optimised five different parameters, measured by the resulting DICE with respect to the reference segmentation. As figure 2.8 shows, a high transform bending energy penalty weight (TPW) W_2 combined with a high voting threshold results in the best segmentation compared with the reference image. The highest average DICE coefficient is 0.72 with a standard deviation of 0.11.

2.7 Result

Our final injured-organ segmentation has a DICE performance of 0.72, which is expectedly lower than typically achieved by healthy-organ segmentation methods. Then again, the requirement of the method is to create organ segmentations that are precise enough to be used as input for an organ-specific haemorrhage segmentation. In the next chapter, we will show that our organ segmentation quality is sufficient.


Fig. 2.8: Parameter search results, optimising for the highest average DICE overlap with the reference images. TPW is the transform bending energy penalty weight; LF are the label fusion parameters: top N atlases for the global registration, global weighted voting threshold, top N atlases for the local registration, and local weighted voting threshold


Chapter 3

Patch segmentation

This chapter describes the design choices made before and during the development of the patch segmentation method. We evaluate previously reported segmentation and clustering methods, and their applicability to segment haemorrhagic tissue within the spleen.

Within the organ, a distinction has to be made between haemorrhagic and non-haemorrhagic tissue. We start with the approximate segmentation of our target organ resulting from the method in chapter 2. From the approximate segmentation we create a target search area by selecting the bounding box of the registration and adding a margin. This margin increases the sensitivity of the method and reduces the risk of missing haemorrhagic tissue close to the border of the organ. The target search image is on average one fiftieth of the original whole-body image. Figure 3.1 shows an example target search image that will be used during the description of the implemented method.

3.1 Tissue segmentation

The example scan and the expert-segmented haemorrhagic tissue show an average, minimum and maximum intensity of 191 HU, 53 HU and 305 HU. When we use these minimum and maximum intensities as lower and upper boundaries for a binary threshold filter, we get the mask shown in figure 3.2a. This, however, results in a mask with a DICE overlap of 0.007 compared to the reference segmentation. By ignoring the outlier values shown in figure 3.2c,


(a) CT scan of the spleen with haemorrhagic tissue

(b) Same scan as 3.1a with an approximate segmentation mask of the organ (green) and expert-segmented haemorrhagic tissue

Fig. 3.1: Cropped image from a WBCT scan showing the result of the organ segmentation method


we get a narrower intensity range. As figure 3.2b shows, using the narrower intensity range the number of false positives decreases, but about 16% of the haemorrhagic tissue is missed. The DICE improves only to 0.039.
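The two thresholding experiments can be sketched on toy HU values; the windows below are the ones from the text, applied to an invented voxel list:

```python
# Sketch of the thresholding experiment: classify voxels by an intensity
# window, first with the raw min/max of the expert mask and then with a
# narrower, outlier-trimmed window. Toy HU intensities, not real scan data.

def threshold_mask(values, lo, hi):
    """Binary mask: 1 where lo <= value <= hi."""
    return [1 if lo <= v <= hi else 0 for v in values]

voxels = [40, 60, 170, 200, 240, 300, 420]
print(threshold_mask(voxels, 53, 305))   # [0, 1, 1, 1, 1, 1, 0]
print(threshold_mask(voxels, 150, 250))  # [0, 0, 1, 1, 1, 0, 0]
```

The wide window admits many non-haemorrhagic voxels (hence the DICE of 0.007), while the narrow one trades false positives for missed haemorrhagic tissue.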

3.1.1 Clustering

A different strategy than classifying individual voxels is to cluster (group) similar voxels. This allows overcoming small variations in intensity that could lead to misclassification.

3.1.1.1 Fuzzy clustering

Fuzzy clustering is a commonly used method for medical image segmentation. Based on the intensity values, it partitions the data into a predefined number of clusters. Figure 3.3a shows the result of the image divided into 20 clusters using the fuzzy c-means clustering method (FCM) [22]. As noise has a strong influence, we first perform curvature anisotropic diffusion [23] to denoise the image while keeping its main features. Results are shown in figure 3.3b. The result of the 20-cluster fuzzy c-means clustering method on the denoised image is shown in figure 3.3c. The cluster containing the haemorrhage is shown in 3.3d. An aspect of FCM is that it does not take spatial information into account, while spatial information can be very useful during clustering. It is possible to split the clusters into smaller, spatially separated clusters with a connected-components method, but a better way is to include the spatial information during the clustering itself. Spatial Distance Weighted Fuzzy C-Means (SDWFCM) does exactly this; its result is shown in figure 3.3e. However, SDWFCM is so computationally intensive that we were unable to cluster our three-dimensional images using this method.
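The FCM update loop can be sketched in a few lines. The toy 1-D example below (plain Python, fuzzifier m = 2, two clusters; not the implementation used in the thesis) alternates between the membership and centroid updates:

```python
# Minimal 1-D fuzzy c-means sketch, fuzzifier m = 2. Illustrative only.

def fcm(values, centroids, iterations=20, m=2):
    for _ in range(iterations):
        # Membership update: inverse-distance weighting between centroids.
        u = []
        for v in values:
            d = [abs(v - c) + 1e-9 for c in centroids]  # avoid divide-by-zero
            row = [1.0 / sum((d[i] / d[j]) ** (2.0 / (m - 1))
                             for j in range(len(centroids)))
                   for i in range(len(centroids))]
            u.append(row)
        # Centroid update: membership-weighted mean of the values.
        centroids = [
            sum(u[k][i] ** m * values[k] for k in range(len(values))) /
            sum(u[k][i] ** m for k in range(len(values)))
            for i in range(len(centroids))
        ]
    return centroids, u

intensities = [40.0, 45.0, 50.0, 200.0, 210.0, 220.0]  # toy HU values
centroids, memberships = fcm(intensities, centroids=[0.0, 100.0])
print(sorted(round(c, 1) for c in centroids))
```

On these well-separated toy values the centroids converge near the two group means (around 45 and 210 HU); each voxel's membership row sums to one, which is the "fuzzy" part.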

3.1.2 Watershed

By performing a watershed transformation on a gradient magnitude image of the scan, we can divide the image into patches based on the borders found in the image. Figure 3.3g shows the gradient magnitude image of the denoised image. The manually selected patches matching blood resulting from the watershed are shown in figure 3.3h. The advantage of this method is that the resulting patches are always connected. When we select the patches with more than 50% overlap with the reference blood mask from our training set, the resulting


(a) Binary threshold filter applied to 3.1a with lower and upper bounds of 53 and 305 HU

(b) Binary threshold filter applied to 3.1a with lower and upper bounds of 150 and 250 HU

(c) Intensity histogram of the expert-segmented blood of example image 3.1a (intensity in HU versus frequency)


masks have an average DICE of 0.70. This would be the maximum achievable DICE even if we had a perfect classification method.
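The patch-selection rule used here ("keep labels with more than 50% overlap with the reference blood mask") can be sketched on toy 1-D label and mask arrays:

```python
# Sketch of overlap-based label selection. Toy 1-D arrays stand in for the
# watershed label volume and the reference blood mask.

def select_labels(labels, reference, min_overlap=0.5):
    """Return labels whose voxels overlap the reference by more than
    `min_overlap` (as a fraction of the label's own size)."""
    selected = []
    for lab in set(labels) - {0}:            # 0 = background label
        idx = [i for i, l in enumerate(labels) if l == lab]
        overlap = sum(reference[i] for i in idx) / len(idx)
        if overlap > min_overlap:
            selected.append(lab)
    return sorted(selected)

labels    = [1, 1, 2, 2, 2, 3, 3, 0]
reference = [1, 1, 1, 0, 0, 0, 1, 0]
print(select_labels(labels, reference))  # [1]
```

Because the whole patch is kept or discarded, the boundaries of the watershed patches cap the achievable DICE, which is why the 0.70 ceiling above exists.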

3.1.3 Metric segmentation

Because we wanted better performance than the FCM, SDWFCM and watershed methods, we developed a new method inspired by the surface segmentation method in [24]. Our goal was to increase the DICE overlap with the reference blood segmentations while keeping the number of patches as low as possible. The surface segmentation starts by assigning a patch to every voxel. Then the following error metric E is calculated between every two neighbouring patches P_i and P_j:

E(P_i, P_j) = k_1 · (N_i − N_j)^2 + k_2 · ||C_i − C_j|| + k_3 · (|P_i| + |P_j|) + k_4 · B(P_i + P_j) + k_5 · B(P_i, P_j) / (|P_i| + |P_j|)

where N_i, C_i, |P_i| and B denote the patch's mean HU, centroid location, patch size in voxels and the bounding box volume, respectively. k_1, ..., k_5 are weights used to assign importance to the different terms. Our three-dimensional method is implemented as follows:

Initialization step

1. For every voxel, create a patch containing only that voxel
2. Calculate the metric for every 26-connected neighbouring voxel

Merging step

3. Get the two neighbouring patches with the lowest metric
4. Merge the patches into one patch
5. Recalculate the metrics between the new patch and its neighbouring patches
6. If the metric is below the threshold, go back to 3; otherwise, stop

With an average of 1.4 million voxels in the ROI resulting from the registration step, 18 million error metrics are calculated during the initialization step. The 1.4 million voxels are then merged step by step until the metric reaches a threshold. On average, the image is reduced to 2442 patches, during which 43 million new metrics between newly created patches are calculated. Figures 3.3i, 3.3j and 3.3k
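The merging loop above can be sketched on a 1-D toy signal. The sketch below is greatly simplified: it keeps only the intensity term k_1·(N_i − N_j)^2 and the size term k_3·(|P_i| + |P_j|), uses 1-D adjacency instead of 26-connectivity, and recomputes all metrics each round instead of updating incrementally:

```python
# Greatly simplified 1-D sketch of the greedy patch-merging loop: start with
# one patch per voxel and repeatedly merge the adjacent pair with the lowest
# error metric until the best metric exceeds a threshold.

def merge_patches(values, k1=5.0, k3=1.0, limit=100.0):
    patches = [[v] for v in values]          # one patch per voxel
    while len(patches) > 1:
        # Metric between each pair of adjacent patches.
        metrics = []
        for i in range(len(patches) - 1):
            a, b = patches[i], patches[i + 1]
            mean_a, mean_b = sum(a) / len(a), sum(b) / len(b)
            metrics.append(k1 * (mean_a - mean_b) ** 2 + k3 * (len(a) + len(b)))
        best = min(range(len(metrics)), key=metrics.__getitem__)
        if metrics[best] > limit:
            break                             # stopping criterion reached
        patches[best:best + 2] = [patches[best] + patches[best + 1]]
    return patches

print(merge_patches([10, 11, 12, 90, 91, 92]))
# [[10, 11, 12], [90, 91, 92]]
```

Raising `limit` corresponds to the error metric thresholds of 1000, 2000 and 4000 explored in the text: larger limits merge longer and yield fewer, coarser patches.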


(a) Fuzzy C-Means clustering on original image

(b) Denoised image after curvature anisotropic diffusion

(c) Fuzzy C-Means clustering on denoised image 3.3b

(d) Cluster from 3.3c matching the blood reference mask

(e) Spatial Distance Weighted Fuzzy C-Means clustering on 3.3b

(f) Cluster from 3.3e matching blood reference mask

(g) Gradient magnitude image of 3.3b

(h) Labels matching reference blood mask from watershed method applied at 3.3g

(i) Our own patch segmentation method stopped at error metric 1000

(j) Our own patch segmentation method stopped at error metric 2000

(k) Our own patch segmentation method stopped at error metric 4000

(l) Labels matching the reference blood mask from our method applied to 3.3k

Fig. 3.3: Example results for the four segmentation methods: FCM, SDWFCM, watershed, and our own method.


show the method stopped at error metric thresholds of 1000, 2000 and 4000. We automated the exploration of parameter values and found that the weights k_1 = 5, k_2 = 5, k_3 = 1, k_4 = 1, k_5 = 1 with a metric limit of 4000 gave us a good balance between patch size and patch accuracy, measured by the DICE overlap with the reference blood mask.

3.2 Filtering

During classification, we discovered that some structures close to the spleen could confuse our classification methods. We therefore implemented a kidney filter and a bone filter. For a schematic overview of these filters, see appendix A.

3.2.1 Bone filtering

Our bone filtering consists of a combination of morphological and region growing methods, executed as follows:

1. Select all voxels with HU > 500 as seed points
2. Region grow with an intensity lower limit of 250 HU
3. Dilate the mask with a kernel radius of 4 mm
4. Threshold region grow with a lower limit of 150 HU using the result from step 1
5. Mask the result with the mask from step 3
6. Fill in holes by applying a voting operation on each voxel, with a radius of 1 mm

The two-fold region growing allows growing into parts with a lower intensity, without the risk of growing into areas that do not belong to the bone. Example results for both the kidney and bone filters are shown in figure 3.4.
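The core operation in these filters, threshold-based region growing from seed points, can be sketched as a breadth-first search on a toy 2-D image (the thesis operates in 3-D with 26-connectivity; this sketch uses 4-connectivity):

```python
# Sketch of threshold-based region growing (BFS from seed points).
# Toy 2-D image; thresholds follow the bone-filter steps above.

from collections import deque

def region_grow(image, seeds, lower):
    """Grow from seed pixels into 4-connected neighbours with value >= lower."""
    rows, cols = len(image), len(image[0])
    grown = set(seeds)
    queue = deque(seeds)
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < rows and 0 <= nc < cols
                    and (nr, nc) not in grown and image[nr][nc] >= lower):
                grown.add((nr, nc))
                queue.append((nr, nc))
    return grown

image = [[100, 300, 600],
         [100, 300, 300],
         [100, 100, 300]]
seeds = [(r, c) for r in range(3) for c in range(3) if image[r][c] > 500]
print(sorted(region_grow(image, seeds, lower=250)))
# [(0, 1), (0, 2), (1, 1), (1, 2), (2, 2)]
```

The bright seed (> 500 HU) lets the region expand through the connected > 250 HU tissue while the 100 HU background is never entered; this is the mechanism that keeps the bone mask from leaking into soft tissue.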

3.2.2 Kidney filtering

The kidney filter is a little different from the bone filter, as it also requires input from the registration step. The steps are as follows:

1. Create a mask using HU > 140

2. Separate the mask into connected components

3. Filter masks using a threshold on the amount of overlap with the approximate kidney mask from the registration step


4. Take the largest mask and apply voted hole filling with a radius of 2 mm

The masks shown in figure 3.4 are used to filter the patches from the patch creation step. Every patch that has more than 50% overlap with either the kidney or the bone filter is discarded.
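Steps 2 and 3 of the kidney filter, connected components plus overlap filtering, can be sketched on toy 1-D arrays (in 1-D a component is simply a run of consecutive foreground voxels):

```python
# Sketch of kidney-filter steps 2-3: split a binary mask into connected
# components and keep those overlapping the approximate registration mask
# by more than a threshold. Toy 1-D arrays stand in for the 3-D volumes.

def connected_components_1d(mask):
    """Group consecutive runs of 1s into components (lists of indices)."""
    components, current = [], []
    for i, v in enumerate(mask):
        if v:
            current.append(i)
        elif current:
            components.append(current)
            current = []
    if current:
        components.append(current)
    return components

def filter_by_overlap(components, approx_mask, threshold=0.5):
    """Keep components overlapping the approximate mask by > threshold."""
    return [c for c in components
            if sum(approx_mask[i] for i in c) / len(c) > threshold]

mask   = [1, 1, 0, 1, 1, 1, 0, 1]
approx = [0, 0, 0, 1, 1, 0, 0, 0]
components = connected_components_1d(mask)
print(components)                             # [[0, 1], [3, 4, 5], [7]]
print(filter_by_overlap(components, approx))  # [[3, 4, 5]]
```

Only the component that sufficiently overlaps the approximate kidney location from the registration survives, which is how the filter avoids keeping other bright structures that pass the 140 HU threshold.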

Fig. 3.4: 3D rendering of two examples of the bone (blue) and kidney (red) filters. The spleen (not shown) lies in between the bones and the kidney.


Chapter 4

Classification

This chapter describes the design choices made before and during the development of the patch classification method. We evaluate previously reported classification methods and their applicability to classify haemorrhagic tissue within the spleen.

The final step is to classify the patches as haemorrhagic or non-haemorrhagic tissue. We will look into the following classification methods: a decision tree, random forests, and neural networks. Before we use these methods to classify the patches, we convert the patches to a data format the tested classification methods can work with. As we want to compare the methods, all methods should have the same available input. The only limiting factor is that both the random forest classifier and neural networks need a fixed-size input structure, so we cannot use the raw intensity values of the patches. To overcome this limitation, we use a normalised histogram of the intensities. Our data structure is defined with the following features:

1. Patch statistics
   (a) Mean HU intensity
   (b) Minimal HU intensity
   (c) Maximal HU intensity
   (d) Voxel count

2. Vesselness statistics
   (a) Mean vesselness level
   (b) Minimal vesselness level
   (c) Maximal vesselness level

3. Normalised patch histogram with 100 bins over the values between 0 HU and 500 HU

4. Normalised histogram with 100 bins over the neighbouring patch values between 0 HU and 500 HU

Fig. 4.1: Three-dimensional render of the thresholded vesselness result

The vesselness is defined by performing a vesselness analysis developed by Sato [25] on the denoised image. Figure 4.1 shows a three-dimensional render of the thresholded result. As the method looks for tubular structures, we can use it to exclude blood inside the vessels close to and inside the spleen.

4.1 Manual decision tree

The manually constructed decision tree is the simplest method we evaluate to classify the data. We chose to construct a manual decision tree because it gives us a good measure of the minimal performance we should expect from the more advanced automated methods for them to be of value. The decision tree classifies patches based on a series of rules, which in our case are the HU intensity range and the vesselness value. As we saw in figure 3.2c, the intensity is normally distributed, so we can use our training dataset to calculate a confidence range for the expected haemorrhagic HU intensity, which is used by the decision tree. The vesselness cut-off value is manually determined by inspecting the training images. For a more detailed view of the decision tree, see appendix A. A limitation of the manually constructed decision tree is that you have to specify the rules that are used, and on which features they are based. In our case, we only use the mean intensity and the mean vesselness level to classify the patches. For large numbers of features, this would become very complex.

4.2 Random forests

Decision tree machine learning methods are capable of automatically determining the cut-off values for the decision tree. We focus on an improved version of the decision tree method, called random forests, which is less prone to overfitting. Random forests is a more advanced classification method that uses multiple randomly generated decision trees to classify data. The random forest classifier has been shown to work well for medical image segmentation [26]. We evaluated the following models:

• Features 1 and 2 (Statistics)

• Features 1, 2 and 3 (Statistics and patch histogram)
• Features 1, 2, 3 and 4 (Statistics and all histograms)

We evaluated all models with 3, 7 and 10 classifiers, and the latter also with 30 classifiers. However, the DICE overlap with the reference images never rose above 0.02. As our training set had a total of 29304 patches, of which 158 are haemorrhagic tissue, the method learns to classify everything as non-haemorrhagic, which is correct for 99.5% of the patches but will never result in the classification of haemorrhagic tissue. To overcome this problem, we balanced the numbers of haemorrhagic and non-haemorrhagic patches in different ways, but without any notable improvement of the result.
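The class-imbalance problem can be made concrete with the numbers from the text: a degenerate classifier that labels every patch non-haemorrhagic scores very high accuracy while being useless for the positive class.

```python
# Illustration of the class-imbalance failure mode described above, using
# the patch counts from the text (158 haemorrhagic out of 29304 patches).

positives, total = 158, 29304

# Accuracy of the all-negative classifier: correct on every negative patch.
accuracy_all_negative = (total - positives) / total
print(round(accuracy_all_negative, 4))  # 0.9946

# DICE of the all-negative prediction against the reference: no overlap,
# so 2*|intersection| / (|prediction| + |reference|) collapses to zero.
intersection = 0
dice = 2 * intersection / (0 + positives)
print(dice)  # 0.0
```

This is why a high-accuracy model can still yield a DICE near zero: the loss the forest optimises rewards the majority class unless the training set is rebalanced or reweighted.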

4.3 Neural networks

Artificial Neural Networks (ANNs) are a class of machine learning methods that learn using a mathematical model inspired by the biological neurones found in the human brain. By creating a network of mathematical neurones, they can learn from training data and apply this knowledge to new data. Compared to a random forest classifier, an ANN can learn more complex patterns. We tried six deep neural network designs with one to four layers and different combinations of neurones per layer, applied to all feature sets described in section 4.2. Every network design, however, resulted in overfitting.


4.4 Method selection

As the manually constructed decision tree has the best result, we will use it in our complete method for haemorrhage detection. In chapter 6, we will discuss why we think the more advanced classification methods failed to improve performance.


Chapter 5

Validation

This chapter shows the evaluation results of the organ segmentation and haemorrhage segmentation methods. Both methods are validated with our expert segmented validation test set.

Because the organ segmentation and haemorrhage segmentation are two separate methods, we also validated them separately. Performance is measured by calculating the DICE overlap with the reference images. We present the best and worst performing images to illustrate the strengths and weaknesses of the methods. Because of the limited dataset of spleen haemorrhage patients, we use additional datasets for the organ segmentation part.

5.1 Organ segmentation

To demonstrate the method's performance for organs other than the spleen, we also use a test set to evaluate liver segmentation.

5.1.1 Liver

The five validation images resulted in an average DICE of 0.80, with a minimum of 0.72 and maximum of 0.88 for liver segmentation. Figure 5.1a and 5.1b show the best and worst result.


5.1.2 Spleen

The five validation images resulted in an average DICE of 0.76, with a minimum of 0.62 and maximum of 0.85 for spleen segmentation. Figure 5.1c and 5.1d show the best and worst result.

5.2 Spleen haemorrhage segmentation

As both the random forest classifiers and the neural networks were unable to produce any results, we only validated the manually constructed decision tree. The DICE coefficients of the haemorrhage segmentations using the decision tree are 0.84, 0.74, 0.50 and 0.10, resulting in an average of 0.54. Figure 5.2 shows slices from the validation set, ordered from best to worst.

The low DICE of 0.10 results from a misclassification by the vesselness filter. Our vesselness filter [25] looks for tubular structures, and in this case the haemorrhage has a tubular shape. We could overcome this by looking at the other vessels and their intensity, and using this to detect outliers within the tubular structures.


(a) Best liver segmentation (b) Worst liver segmentation

(c) Best spleen segmentation (d) Worst spleen segmentation


Fig. 5.2: Slices from all spleen haemorrhage segmentation validation images. Light blue is the true-positive overlap between the reference and our method's result.


Chapter 6

Discussion and conclusion

This chapter presents the conclusions and reflects on their place within the literature. It also discusses future work and the applicability of the method within clinical practice.

The multi-atlas, multi-phase registration method proved to be a good method for segmenting the spleen and liver within a traumatic whole-body CT scan. By using multiple atlases, we largely overcame differences between patients. By using the multi-phase approach, we can estimate the location of the organ while still being able to make local adjustments. Using the patch-based, manually constructed decision tree based on intensity and vesselness, after filtering the bone and kidney, allowed us to find the small structures of haemorrhagic tissue. More advanced methods such as clustering methods and neural networks were unable to outperform our own decision tree on our small dataset.

6.1 Comparison with existing methods

There are several published methods for non-traumatic spleen segmentation [27], [11], [28], [29]. Patients from some of these studies have diseases in organs like the liver [29] or cancer in other organs [11]. In 2016, Dandin [30] published the first method for segmenting an injured spleen. This method uses a manually created probability atlas to estimate the initial location of the organ. The probability atlas and validation images resulted from the same abdominal CT scanning protocol. In comparison, our method supports different scan types, such as WBCT and abdominal CT from different machines, different slice thicknesses, and different timings. Finally, whereas Dandin's method has only been tested with injured spleens, our method works for both spleen and liver.

As segmentation of the injured spleen is an only recently researched topic, because of limited whole-body CT scan availability, haemorrhage segmentation within the segmented injured spleen has not been done before. However, as [31] suggests, it is the logical next step. Haemorrhage segmentation within the brain is already much more advanced: segmentation of intracerebral haemorrhage [32] is approaching a DICE of 0.90 using a random forest classifier. Comparing our patch-based method, which is only a part of the haemorrhage segmentation method, with existing methods like SDWFCM [33] shows that it delivers higher segmentation accuracy than FCM while being significantly more computationally efficient.

6.2 Strengths and weaknesses

The presented organ segmentation is flexible in accepting different scan types and image qualities. The multi-atlas registration automatically filters atlases with low agreement. We used a set of standard low-quality atlases, but the method could easily be improved by using more, and higher-quality, atlases. This also allows adapting to a wider variety of body types or conditions like situs inversus. As we mostly use the organs surrounding the targeted organ to determine its location, our precision is lower than that of state-of-the-art healthy-organ segmentation methods. However, the quality is good enough to provide an ROI for the haemorrhage segmentation, which looks for haemorrhages not only inside the organ but also close to it.

The haemorrhage segmentation method is capable of finding blood inside or close to the organ. By first filtering the bones and kidney, no false positives are possible within those areas. It is also capable of making a fairly accurate distinction between blood within arteries running through the spleen and blood outside the arteries. However, this distinction is mostly based on the shape of the blood segment, so a tubular-shaped haemorrhage may be classified as an artery. An important weakness is our limited training and test set. This made it difficult to properly validate the method on a broader range of images. It also made it impossible to work with more advanced classification methods like random forests and neural networks.


Finally, because speed is an important factor in trauma care, optimisations should be done before this method can be used in practice. However, by using GPUs and parallelisation, a large improvement can already be made.

6.3 Future work

We believe that exploration of more advanced classification methods is the area in which the most improvement can be made. It would require a significantly larger training and test dataset, but would also open opportunities to better understand the effect of scan types, reconstruction methods and contrast timings on the intensity levels of the organs and the haemorrhages.

The organ segmentation method could be improved by looking at edge-finding methods like active contour methods. However, because edges are sometimes non-existent, it is important to use them only as a guide, as the method might otherwise try to find an edge that does not exist.

6.4 Clinical Relevance

The clinical relevance of this study is limited. This is partly due to our small dataset. We also believe that the manually constructed decision tree is not precise enough to base medical decisions on, as our DICE performance shows.

6.5 Conclusion

Our method shows that automated haemorrhage segmentation within the spleen is a promising approach for supporting physicians in making better decisions in trauma care. However, there is still a lot of research to be done in terms of speed and accuracy before it can be used in practice.


Bibliography

1. ATLS Subcommittee, "Advanced Trauma Life Support (ATLS): the ninth edition," The Journal of Trauma and Acute Care Surgery, vol. 74, no. 5, p. 1363, 2013.

2. J. L. Wichmann, A. D. Hardie, U. J. Schoepf, L. M. Felmly, J. D. Perry, A. Varga-Szemes, S. Mangold, D. Caruso, C. Canstein, T. J. Vogl, and C. N. De Cecco, “Single- and dual-energy CT of the abdomen: comparison of radiation dose and image quality of 2nd and 3rd generation dual-source CT,” European Radiology, pp. 1–9, 2016.

3. S. Huber-Wagner, C. Mand, S. Ruchholtz, C. A. Kühne, K. Holzapfel, K. G. Kanz, M. Van Griensven, P. Biberthaler, and R. Lefering, "Effect of the localisation of the CT scanner during trauma resuscitation on survival - a retrospective, multicentre study," Injury, vol. 45, pp. S76–S82, 2014.

4. J. C. Sierink, T. P. Saltzherr, L. F. M. Beenen, J. S. K. Luitse, M. W. Hollmann, J. B. Reitsma, M. J. R. Edwards, J. Hohmann, B. J. A. Beuker, P. Patka, J. W. Suliburk, M. G. W. Dijkgraaf, and J. C. Goslings, "A multicenter, randomized controlled trial of immediate total-body CT scanning in trauma patients (REACT-2)," BMC Emergency Medicine, vol. 12, no. 1, p. 4, 2012.

5. Z.-j. Hong, C.-j. Chen, J.-c. Yu, D.-c. Chan, Y.-c. Chou, C.-m. Liang, and S.-d. Hsu, "The evolution of computed tomography from organ-selective to whole-body scanning in managing unconscious patients with multiple trauma," Medicine, vol. 95, no. September 2016, pp. e3465–e4935, 2016.

6. J. C. Sierink, K. Treskes, M. J. R. Edwards, B. J. A. Beuker, D. den Hartog, J. Hohmann, M. G. W. Dijkgraaf, J. S. K. Luitse, L. F. M. Beenen, M. W. Hollmann, and J. C. Goslings, “Immediate total-body CT scanning versus conventional imaging and selective CT scanning in patients with severe trauma (REACT-2): A randomised controlled trial,” The Lancet, vol. 388, no. 10045, pp. 673–683, 2016.

7. T. Dehli, A. Bågenholm, N. C. Trasti, S. A. Monsen, and K. Bartnes, "... a retrospective study," Scandinavian Journal of Trauma, Resuscitation & Emergency Medicine, vol. 23, no. 1, p. 85, 2015.

8. C. A. Karlo, P. Stolzmann, R. K. Do, and H. Alkadhi, “Computed tomography of the spleen: How to interpret the hypodense lesion,” Insights into Imaging, vol. 4, no. 1, pp. 65–76, 2013.

9. M. G. Linguraru, J. K. Sandberg, Z. Li, F. Shah, and R. M. Summers,

“Automated segmentation and quantification of liver and spleen from CT images using normalized probabilistic atlases and enhancement estimation,” Medical Physics, vol. 37, no. 2, p. 771, 2010.

10. A. Berlgherbi, I. Hadjidj, and A. Bessaid, “Morphological Segmentation of the Kidneys From Abdominal Ct Images,” Journal of Mechanics in Medicine and Biology, vol. 14, no. 5, p. 1450073, 2014.

11. R. Wolz, C. Chu, K. Misawa, M. Fujiwara, K. Mori, and D. Rueckert, “Automated abdominal multi-organ segmentation with subject-specific atlas generation,” IEEE Transactions on Medical Imaging, vol. 32, no. 9, pp. 1723–1730, 2013.

12. Z. Xu, R. P. Burke, C. P. Lee, R. B. Baucom, B. K. Poulose, R. G. Abramson, and B. A. Landman, “Efficient multi-atlas abdominal segmentation on clinically acquired CT with SIMPLE context learning,” Medical Image Analysis, vol. 24, no. 1, pp. 18–27, 2015.

13. B. Li, S. Panda, and Z. Xu, “Regression forest region recognition enhances multi-atlas spleen labeling,” MICCAI Challenge . . . , 2013.

14. Z. Xu, B. Li, S. Panda, A. J. Asman, K. L. Merkle, P. L. Shanahan, R. G. Abramson, and B. A. Landman, “Shape-Constrained Multi-Atlas Segmentation of Spleen in CT.,” Proceedings of SPIE–the International Society for Optical Engineering, vol. 9034, p. 903446, 2014.

15. R. Shams, P. Sadeghi, R. a. Kennedy, and R. I. Hartley, “A Survey of Medical Image Registration on Multicore and the GPU,” IEEE signal processing magazine, vol. 27, no. March, pp. 50 – 60, 2010.

16. R. A. Heckemann, J. V. Hajnal, P. Aljabar, D. Rueckert, and A. Hammers, “Automatic anatomical brain MRI segmentation combining label propagation and decision fusion,” NeuroImage, vol. 33, no. 1, pp. 115–126, 2006.

17. S. K. Warfield, K. H. Zou, and W. M. Wells, “Validation of Image Segmentation and Expert Quality with an Expectation-Maximization Algorithm,” pp. 298–306, 2002.

18. H. Wang, "Multi-Atlas Segmentation with Joint Label Fusion," IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 35, pp. 611–623, 2012.

19. S. K. Shah, K. Ullah, A. Khan, A. Kkan, W. Abdul, and I. Qasim, “Single versus Multi-step Non-Rigid Medical Image Registration of 3D Medical Images,” vol. 53, no. 1, pp. 99–105, 2016.


20. S. Klein, M. Staring, K. Murphy, M. Viergever, and J. Pluim, “elastix: A Toolbox for Intensity-Based Medical Image Registration,” IEEE Transactions on Medical Imaging, vol. 29, no. 1, pp. 196–205, 2010.

21. D. Mattes, D. R. Haynor, H. Vesselle, T. K. Lewellen, and W. Eubank, “PET-CT image registration in the chest using free-form deformations,” IEEE Transactions on Medical Imaging, vol. 22, no. 1, pp. 120–128, 2003.

22. J. C. Bezdek, R. Ehrlich, and W. Full, "FCM: The fuzzy c-means clustering algorithm," Computers & Geosciences, vol. 10, no. 2-3, pp. 191–203, 1984.

23. R. T. Whitaker and X. Xue, "Variable-conductance, level-set curvature for image denoising," Proceedings 2001 International Conference on Image Processing (Cat. No.01CH37205), vol. 2, pp. 142–145, 2001.

24. H. Fang and J. C. Hart, "Textureshop: Texture Synthesis as a Photograph Editing Tool," ACM Transactions on Graphics, vol. 23, no. 3, pp. 354–359, 2004.

25. Y. Sato, S. Nakajima, N. Shiraga, H. Atsumi, S. Yoshida, T. Koller, G. Gerig, and R. Kikinis, "Three-dimensional multi-scale line filter for segmentation and visualization of curvilinear structures in medical images," Medical Image Analysis, vol. 2, no. 2, pp. 143–168, 1998.

26. R. Cuingnet, R. Prevost, D. Lesage, L. D. Cohen, B. Mory, and R. Ardon, “Automatic Detection and Segmentation of Kidneys in 3D CT Images Using Random Forests,” Medial Image Computing and Computer-Assisted Invervention - MICCAI’12. Lecture Notes in Computer Science, vol 7512, vol. 7512, pp. 66–74, 2012.

27. M. G. Linguraru, J. A. Pura, V. Pamulapati, and R. M. Summers, “Statistical 4D graphs for multi-organ abdominal segmentation from multiphase CT,” Medical Image Analysis, vol. 16, no. 4, pp. 904–914, 2012.

28. P. Campadelli, E. Casiraghi, S. Pratissoli, and G. Lombardi, “Automatic Abdominal Organ Segmentation from CT images,” ELCVIA: electronic letters on computer vision and image analysis, vol. 8, no. 1, pp. 1–14, 2009.

29. C. Li, X. Wang, J. Li, S. Eberl, M. Fulham, Y. Yin, and D. D. Feng, “Joint probabilistic model of shape and intensity for multiple abdominal organ segmentation from volumetric CT images,” IEEE Journal of Biomedical and Health Informatics, vol. 17, no. 1, pp. 92–102, 2013.

30. O. Dandin, U. Teomete, O. Osman, G. Tulum, T. Ergin, and M. Z. Sabuncuoglu, “Automated segmentation of the injured spleen,” International Journal of Computer Assisted Radiology and Surgery, vol. 11, no. 3, pp. 351–368, 2016.

31. S. M. Reza Soroushmehr, P. Davuluri, S. Molaei, R. H. Hargraves, Y. Tang, C. H. Cockrell, K. Ward, and K. Najarian, “Spleen Segmentation and Assessment in CT Images for Traumatic Abdominal Injuries,” Journal of Medical Systems, vol. 39, no. 9, 2015.

32. J. Muschelli, E. M. Sweeney, N. L. Ullman, P. Vespa, D. F. Hanley, and C. M. Crainiceanu, “PItcHPERFeCT: Primary Intracranial Hemorrhage Probability Estimation using Random Forests on CT,” NeuroImage: Clinical, vol. 14, pp. 379–390, 2017.

33. Y. Guo, K. Liu, Q. Wu, Q. Hong, and H. Zhang, “A New Spatial Fuzzy C-Means for Spatial Clustering,” WSEAS Transactions on Computers, vol. 14, pp. 369–381, 2015.


Appendix A

Decision tree and filter methods

The appendix flowchart (reconstructed here as text) shows the filter pipelines used to construct the bone and kidney masks.

Bone mask pipeline (branches A and B as labelled in the flowchart):
- Input: image
- Threshold > 500 HU
- Region grow > 250 HU
- Region grow > 150 HU
- Dilate 4 mm
- Mask A using B
- Voting binary hole filling
- Output: bone mask

Kidney mask pipeline:
- Input: approximate kidney mask from registration
- Threshold > 140 HU
- Connected components
- Filter components with > 50% overlap with the approximate mask
- Voting binary hole filling
- Output: kidney mask
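As an illustration of the component-filtering step in the kidney-mask pipeline, the following minimal pure-Python sketch (the array layout, function name, and 4-connectivity labelling are illustrative choices, not the thesis implementation) thresholds a small HU grid, labels connected components, and keeps only components with more than 50% overlap with the approximate mask:

```python
from collections import deque

def filter_components(hu, approx_mask, threshold=140):
    """Threshold > 140 HU, label 4-connected components, and keep
    components overlapping > 50% with the approximate kidney mask."""
    rows, cols = len(hu), len(hu[0])
    binary = [[hu[r][c] > threshold for c in range(cols)] for r in range(rows)]
    seen = [[False] * cols for _ in range(rows)]
    out = [[False] * cols for _ in range(rows)]
    for r in range(rows):
        for c in range(cols):
            if binary[r][c] and not seen[r][c]:
                # breadth-first search collects one connected component
                comp, queue = [], deque([(r, c)])
                seen[r][c] = True
                while queue:
                    y, x = queue.popleft()
                    comp.append((y, x))
                    for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                        ny, nx = y + dy, x + dx
                        if (0 <= ny < rows and 0 <= nx < cols
                                and binary[ny][nx] and not seen[ny][nx]):
                            seen[ny][nx] = True
                            queue.append((ny, nx))
                # fraction of the component covered by the approximate mask
                overlap = sum(approx_mask[y][x] for y, x in comp) / len(comp)
                if overlap > 0.5:
                    for y, x in comp:
                        out[y][x] = True
    return out
```

For example, with a 3x3 grid where one bright component lies inside the approximate mask and another outside it, only the first component survives the filter.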

Vesselness:
- Determine vesselness with the Sato filter
- Output: vesselness map

Decision tree (patch classification):
- Input: patch
- Overlap with kidney mask > 50%: kidney; otherwise continue
- Overlap with bone mask > 50%: bone; otherwise continue
- Mean vesselness intensity >= 14: vessel; otherwise continue
- Mean patch HU between (µ - 1.5 · σ) and (µ + 1.5 · σ), where µ and σ are calculated from the training dataset: haemorrhagic tissue
- Rest: non-haemorrhagic tissue
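The decision tree above can be sketched as a simple classification function. This is a minimal sketch, assuming the per-patch overlap fractions and mean intensities have already been computed elsewhere; the function signature and names are illustrative, while the thresholds are those shown in the flowchart:

```python
def classify_patch(kidney_overlap, bone_overlap, mean_vesselness,
                   mean_hu, mu, sigma):
    """Classify a patch following the appendix decision tree.

    kidney_overlap / bone_overlap: fraction of the patch covered
    by the respective mask (0..1); mu and sigma are the mean and
    standard deviation of haemorrhage HU from the training dataset.
    """
    if kidney_overlap > 0.5:        # > 50% overlap with kidney mask
        return "kidney"
    if bone_overlap > 0.5:          # > 50% overlap with bone mask
        return "bone"
    if mean_vesselness >= 14:       # strong Sato vesselness response
        return "vessel"
    # mean patch HU within mu +/- 1.5 * sigma (training statistics)
    if mu - 1.5 * sigma <= mean_hu <= mu + 1.5 * sigma:
        return "haemorrhagic tissue"
    return "non-haemorrhagic tissue"
```

For instance, a patch with low mask overlap, low vesselness, and a mean HU close to the training mean falls through to the haemorrhagic-tissue branch.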
