
The Development of a Body Fat Percentage

Calculation Method Based on CT Scan Tissue

Segmentation for a Model for Post-Mortem

Interval Estimation


The Development of a Body Fat Percentage Calculation Method Based on CT Scan Tissue

Segmentation for a Model for Post-Mortem Interval Estimation

MSc thesis in Medical Informatics, University of Amsterdam

Student:

Imane Haltout, 10857893

Research carried out at:

Academic Medical Center, University of Amsterdam

Department of Biomedical Engineering and Physics

Meibergdreef 9

1105 AZ Amsterdam

November 2016 – July 2018

Mentor:

Prof. Dr. M.C.G. Aalders

Academic Medical Center, University of Amsterdam

Department of Biomedical Engineering and Physics

m.c.aalders@amc.uva.nl

Tutor:

J.H. Leopold

Academic Medical Center, University of Amsterdam

Department of Medical Informatics

j.h.leopold@amc.nl

This thesis is the author's own work. It is the second thesis on this project; another master's thesis, also the author's own work, has previously been submitted to the University of Amsterdam. See references.


Abstract

In the forensic field, body temperature is often used to determine the post-mortem interval (PMI) and thereby the time of death. The current gold standard for PMI determination is Henssge's nomogram, which is unfortunately not very accurate. With the growing role of technology in society, new digital models are being developed that could serve as a more accurate replacement for the nomogram. In this study, it was tested whether the Phoebe model, a finite element model developed in Matlab at the Academic Medical Center in Amsterdam, could be improved. One of the input parameters of the model is the body fat percentage, which is currently calculated with a formula using the body weight and body measurements. This study introduces a new method for calculating the body fat percentage from CT scan images. An additional script was written in Matlab, which calculates the fat percentage by segmenting fat tissue from the images. This script could replace the old procedure for fat percentage calculation and be used alongside the Phoebe model for the determination of the fat percentage. Not enough bodies were available to test the code on; with the only full-body CT scan available for testing, the calculated fat percentage was incorrect, overestimating the true value by 7%. The cooling curves generated by the Phoebe model using the new and old fat percentages were compared with cooling curves measured in real time. The accuracy of the model increased over time, as the predicted body temperature came closer to the measured body temperature as time passed. A sensitivity analysis was also performed to investigate the effect of the fat percentage on the cooling curves generated by the Phoebe model.
The fat percentage has only a minimal influence on these curves, but both the imaging techniques used for the segmentation and the Phoebe model need to be tested thoroughly on more bodies before they can be used in practice.

Abstract (Dutch)

De lichaamstemperatuur van een individu is in het forensische veld een veelgebruikt gegeven waarmee het post-mortem interval (PMI), en daarmee het tijdstip van overlijden wordt bepaald. Er bestaat momenteel geen methode waarmee laatstgenoemde gegevens met een kleine foutenmarge kunnen worden berekend. De huidige gouden standaard is het Henssge nomogram, een methode met enige beperkingen in het gebruik. Hierdoor zijn er in de afgelopen jaren verscheidene computermodellen ontwikkeld, waarmee wordt getracht op een innovatieve manier het PMI accurater te kunnen bepalen. Eén van die modellen is het Phoebe model, ontwikkeld op het Academisch Medisch Centrum te Amsterdam. In dit project is onderzocht of het mogelijk is om het lichaamsvetpercentage, één van de input parameters van dit model, op een andere manier te kunnen berekenen en daarmee het model te verbeteren. Momenteel wordt het vetpercentage berekend met een formule waarvoor het lichaamsgewicht en lichaamsafmetingen benodigd zijn. CT scans, die tegenwoordig in sommige gevallen ook van overledenen worden gemaakt, bevatten informatie over de lichaamssamenstelling. Voor dit project is er in Matlab script een code geschreven die apart van het Phoebe model kan worden gebruikt ter vervanging van de huidige berekeningsmethode. Met de code wordt vetweefsel in de afbeeldingen van de scan gesegmenteerd van overig weefsel, waarmee vervolgens het vetpercentage wordt berekend. Met de enige CT scan die beschikbaar was om de code te testen werd het percentage overschat met 7% en was daarmee incorrect. Het Phoebe model gebruikt afkoelingsgrafieken om het PMI te berekenen, welke zijn vergeleken met echte gemeten grafieken. De accuraatheid van het model verbetert over de tijd, daar de vergeleken grafieken kleinere verschillen vertonen naarmate de tijd verstrijkt. 
In een sensitiviteitsanalyse waarin de invloed van het vetpercentage op de gesimuleerde grafieken werd onderzocht, bleek het percentage een minimale invloed te hebben op het verloop van de grafieken. Echter moeten zowel het Phoebe model als de gebruikte berekeningsmethode voor het vetpercentage in Matlab uitgebreid getest en verbeterd worden alvorens in de praktijk te kunnen worden toegepast.


Keywords: Post-mortem interval (PMI) estimation, Body cooling, Finite element model, Tissue segmentation, Fat percentage calculation

Table of contents

1. Introduction ... 4

2. Preliminaries ... 6

2.1 Finite element computer models for body cooling ... 6

2.2 Morphological image processing techniques ... 8

2.3 The Hounsfield Unit scale ... 8

3. Methods ... 8

3.1 Testing image processing techniques ... 8

3.2 Writing the code in Matlab ... 9

3.3 Fat segmentation in Matlab ... 10

3.4 Air segmentation in Matlab ... 11

3.5 Calculation of the body fat percentage ... 12

3.6 Generating and comparing cooling curves ... 13

3.7 Sensitivity analysis ... 15

4. Results ... 16

4.1 Fat segmentation in Matlab ... 16

4.2 Air segmentation in Matlab ... 18

4.3 Calculation of the body fat percentage ... 19

4.4 Cooling curves comparing the method of fat percentage calculation ... 21

4.5 Sensitivity analysis ... 24

5. Discussion ... 25

6. Conclusion ... 32

References ... 34

Appendix: The written code in Matlab ... 38

1. Introduction

The post-mortem interval (PMI), the time that has passed since the death of an individual, is an important factor in forensic science. The PMI can be used to determine the time of death and is therefore an important type of evidence. Different methods are available for the determination of the PMI. Multiple indicators of the PMI have been investigated, including algor mortis, rigor mortis, entomology and skin dehydration. However, the most commonly used, and also the most reliable, method is the use of body temperature. The body temperature was already introduced as a determinant of the PMI in the 19th century [1, 2]. Since then, the use of this parameter has been investigated further and many papers have been published on the matter.

This led to the current gold standard for the determination of the PMI, the Henssge nomogram [3]. However, many inaccuracies still exist when the body temperature is used as a determinant of the PMI, and the Henssge nomogram itself has several limitations [4]. For example, it is not possible to take into account variation of factors over time, such as changes in environmental temperature. Besides this, the range of the PMI given by this method can be quite large. The PMI and time of death are often used as types of evidence, but when the range is too large they become less reliable for use in court.

In a project that has been running for the past years at the Academic Medical Center (AMC), thermodynamics has been used to develop a finite element model of the cooling process of the body in Matlab. In the Phoebe model (PHysics Operated Exemplar of Body-temperature decrease), the environment and the body are built from small cubes of a fixed size of 1 cm³. Each cube is assigned an element: fat, tissue, clothes, floor, air or water. The elements have their own thermal and physical properties, including density, specific heat and conductivity coefficient. These properties, together with heat transfer formulas, are used to distinguish between the six elements. The size and dimensions of the body, the fat percentage, the amount of clothing and the type of surroundings are entered by the user, after which the model simulates the body.
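The element idea above can be made concrete with a small sketch. The Phoebe model itself is written in Matlab and its actual constants are not reproduced here; the Python/NumPy sketch below is purely illustrative, and the property values are rough textbook figures, not the model's.

```python
import numpy as np

# Illustrative only: each 1 cm^3 cube carries an element code, and each
# element has its own thermal properties. Values below are rough,
# assumed figures, not the Phoebe model's actual constants.
ELEMENTS = {
    # element: (density kg/m^3, specific heat J/(kg K), conductivity W/(m K))
    "fat":    (900.0, 2300.0, 0.20),
    "tissue": (1050.0, 3600.0, 0.50),
    "air":    (1.2, 1005.0, 0.026),
    "water":  (1000.0, 4186.0, 0.60),
}

grid = np.full((4, 4, 4), "air", dtype=object)  # environment cubes
grid[1:3, 1:3, 1:3] = "tissue"                  # a small "body"
grid[1, 1:3, 1:3] = "fat"                       # outer fat layer on one side

# look up the conductivity of every cube
conductivity = np.vectorize(lambda e: ELEMENTS[e][2])(grid)
```

In the real model, such per-cube property lookups feed the heat transfer equations that produce the cooling curves.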

Initial experiments on body cooling with the Phoebe model also show the great potential of the model [5]. However, several aspects of the model can still be improved. Currently, all measurements of the bodies are taken by hand and entered manually into the model, including the body fat percentage. In several cases, CT and/or MRI scans are made of deceased individuals after they arrive at the hospital. At the radiology department of the Academic Medical Center, several CT scans of deceased individuals made in the past are already available. The DICOM images from these CT scans are part of the data used in this study. These images represent different types of body materials; this project focuses on the fat layers and the presence of air inside the body. To obtain information about the composition of the body, these materials will be segmented from the CT images. Body fat is usually present as an outer layer of the body, and the body fat percentage is one of the input parameters of the model. If that fat layer can be segmented from the CT scans and used to calculate the fat percentage, this percentage could be used as input for the Phoebe model, which in turn creates a model of the body. It would then no longer be necessary to calculate this data manually, which could save time in forensic investigations. If the technique for extracting the fat layer proves successful, it can also be applied to other materials.

In the field of medical imaging, specifically image analysis, different techniques are available for editing digital images. These image processing techniques can be used to extract meaningful information from images, in this case the different materials inside the body. For a human with knowledge of these tissues, this is a relatively easy task; for a computer, such an analysis will probably be more difficult. It is therefore of interest to find a combination of image processing techniques with which the segmentation of tissues in the body can be carried out. Software packages usually provide image analysis tools for this, such as object or edge detection.

The second part of this project focuses on the accuracy and sensitivity of the model. For this, real post-mortem body cooling curves of deceased individuals, measured with sensors, are compared with the predictions of the model using the new and old methods of fat percentage calculation.

The research questions of this study are:

• Is it possible to extract the fat percentage from CT scans?

• Can this data be used as input data for a model in which bodies and their body cooling curves are modeled?


These are answered using the following two sub-questions:

• What is the accuracy of the model in predicting the body cooling curve of a body?

• What is the sensitivity of the model to variations in the input parameters?

The first two questions refer to the extraction of the body fat layer. First, it is important to know whether it is possible to extract the body fat percentage from CT scans. If so, the next question is whether this data is correct and can therefore be used as input for the existing model. The two sub-questions refer to the validation part. The aim of this project is to improve the Phoebe model by implementing the use of data from CT scans and to increase the accuracy of the model.

2. Preliminaries

2.1 Finite element computer models for body cooling

In recent years, computer models have been adopted in many fields. In forensic science, too, research is being done into how models can be used in practice, for example to improve the accuracy of PMI estimation. Although several computer models have been developed to determine the time since death from the body temperature, the applicability and accuracy of these models are still not optimal [6, 7, 8].

The TNO Model

One example of an existing model is the model developed at the "Nederlandse Organisatie voor Toegepast-Natuurwetenschappelijk Onderzoek" (TNO) [6]. This TNO model was developed by E. den Hartog and W. Lotens in 2004. It is based on temperature differences between the skin and the core of the body. Only the rectal temperature is used for the determination of the PMI, and the body is modeled using sixteen compartments. One advantage of this model is that, contrary to the Henssge nomogram, it is possible to model changes in the environment and in the body. However, the model still has some limitations. For example, the simulated body is quite schematic. Also, the body proportions are calculated from the length of the body only and are then deduced using ratios. Besides this, the use of the rectal temperature as a determinant of the PMI is itself a method with several limitations. Inaccurate measurement of the rectal temperature leads to an error in the PMI: for some individuals, an error of only 0.5 °C can give an error in the PMI of around half an hour. The depth at which the temperature is measured also influences the outcome: if the insertion depth of the thermometer is too small, the body temperature will be underestimated, which leads to an overestimation of the PMI.

The Mall and Eisenmenger model

Another computer model is the one developed by Mall and Eisenmenger in 2005. They presented a three-dimensional model with which it is possible to model the body and to simulate body heat loss and heat gain from external factors, which was not possible with the TNO model. Heat loss can be simulated by conduction, convection and radiation, using the physical laws of heat transfer [7, 8]. The model of the body consists of 8,000 cubes and 10,000 nodes. These make up compartments that represent different tissues, each with its own thermal properties. However, the Mall and Eisenmenger model also has several limitations, the main disadvantage being its complexity. All organs and bones are simulated using the cubes. However, when a body is found at a crime scene, it is complicated to determine the position of the organs and bones, which in turn makes it difficult to model them. The temperature is modeled as a gradient between the core and the skin of the body, and in this model the rectal temperature is also used to estimate the PMI. This temperature is obtained from one cube in the rectal area, which means that the same limitations arise for the PMI estimation as for the TNO model. The limitations of each model mean that neither the TNO model nor the Mall and Eisenmenger model is optimal for use in practice for PMI estimation.

The finite element method

The computer models described above are based on the finite element method, a numerical technique used to solve problems in several fields, including physics. One of the problem areas of interest is heat transfer, which makes the finite element method usable for solving the heat equation of body cooling. The finite element method solves problems using multiple equations. To do this, a problem is subdivided into smaller and simpler compartments, called finite elements. The equations for these smaller compartments are easier to solve and are combined to model the larger problem [9]. In the two models described above, the larger problem, the body, is divided into smaller compartments. Each compartment is assigned its own thermal properties and initial temperature. Using the physical equations of heat transfer, the heat flux between each compartment and its neighbors can be calculated. At every time step this heat flux takes place simultaneously for all compartments. When the heat flux is calculated for all compartments at successive time steps, the calculations together describe the cooling of the body over time.
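The time-stepping idea described above can be illustrated with a minimal sketch. The sketch below (Python/NumPy, purely for illustration; the models discussed here are not implemented this way) steps a one-dimensional row of cubes through time, each cube exchanging heat with its neighbours and, at the ends, with a fixed ambient temperature. The exchange coefficient, time step and temperatures are assumed toy values.

```python
import numpy as np

def cool(T, k, dt, steps, T_ambient):
    """Explicit update of a 1-D row of cubes.
    T: initial temperatures per cube (degC); k: assumed exchange
    coefficient per step; boundary cubes exchange heat with the ambient."""
    T = T.astype(float).copy()
    for _ in range(steps):
        # pad with ambient so the outer cubes lose heat to the environment
        padded = np.concatenate(([T_ambient], T, [T_ambient]))
        flux = padded[:-2] - 2 * T + padded[2:]  # discrete Laplacian
        T = T + k * dt * flux
    return T

T0 = np.full(10, 37.0)  # the body starts at 37 degC everywhere
T_end = cool(T0, k=0.2, dt=1.0, steps=200, T_ambient=18.0)
```

With a stable step size (here k·dt = 0.2), every cube cools toward the ambient temperature, and the outer cubes cool faster than the inner ones, which is the qualitative behaviour the finite element models exploit.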

The Phoebe model

The "PHysics Operated Exemplar of Body-temperature decrease" (Phoebe) model developed at the Academic Medical Center is less complex than the Mall and Eisenmenger model and more precise than the TNO model. The Phoebe model has several advantages over the two other models. First of all, because of the small cube size it is possible to create a fairly realistic model of a body. This makes the Phoebe model more accurate than the TNO model, as the body is less schematic. On the other hand, the Phoebe model is not as complex as the model of Mall et al., making it more suitable for use in practice. Besides this, the body proportions are measured and entered by the user instead of being calculated by the model. It is therefore possible to take into account differences in posture, which also affect the cooling curve that is created. Lastly, the problems that arise when using the rectal temperature as a determinant of the PMI do not apply to the Phoebe model. In the model, a cooling curve is calculated for every cube over time using formulas for three different methods of heat transfer: conduction, convection and radiation. Every cube is given an initial temperature, and through heat exchange with the six surrounding cubes the cooling curve of the cube is created. From these cooling curves the PMI can be determined. Several cooling curves at different body locations are used to calculate the PMI more accurately. A margin of error is also taken into account for the PMI, based on the variation that can occur with uncertain variables. Places on the skin used for the determination of the PMI include the stomach, chest, leg and head, but the user can obtain the cooling curve for as many different places on the skin as needed. The rectal temperature can also be determined, but is only used for reference purposes and not to determine the PMI.
The user can specify the initial temperature in °C, and this temperature is applied to the entire body, without a temperature gradient. It is not yet known what influence this has on the correctness of the simulated cooling curves. Other issues might also cause a simulated cooling curve to deviate a few degrees from the true cooling curve; if so, this difference may result in errors in the determination of the PMI in this model as well. Also, both the generated cooling curves and the measured curves with which they are compared are skin temperature cooling curves, not curves for locations inside the body. An issue here is that the skin temperature is lower than temperatures inside the body and also reaches a constant value more quickly.


2.2 Morphological image processing techniques

Erosion and dilation

The two basic operations in morphological image processing are erosion and dilation [10, 11]. These operations were originally developed for binary images and were later applied to grayscale images as well. For these operations, a small image with a pre-defined shape is used to probe the original image. This small probing image, called the structuring element, can fit, hit (intersect with) or miss the original image. The edges of the original image are the areas where the probing image hits it [10, 11]. At the locations where the structuring element hits the original image, the original is altered and a new image is created. Erosion creates an image that is smaller than the original: it deletes a layer of pixels from the original, where the size of the deleted layer depends on the size and shape of the structuring element. Dilation enlarges an image by adding a layer of pixels to it; the size of this layer likewise depends on the shape and size of the structuring element [10, 11].
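The fit/hit idea can be sketched directly. The short Python/NumPy illustration below (not the code used in this study) implements binary erosion and dilation with a 3×3 structuring element: a pixel survives erosion only if the structuring element fits entirely inside the foreground, and is set by dilation if the structuring element hits the foreground anywhere.

```python
import numpy as np

def erode(img, se):
    """Keep a pixel only if the SE, centred there, fits inside the foreground."""
    h, w = img.shape
    r, c = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((r, r), (c, c)), constant_values=0)
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + se.shape[0], j:j + se.shape[1]]
            out[i, j] = np.all(window[se == 1] == 1)  # SE fits
    return out

def dilate(img, se):
    """Set a pixel if the SE, centred there, hits (intersects) the foreground."""
    h, w = img.shape
    r, c = se.shape[0] // 2, se.shape[1] // 2
    padded = np.pad(img, ((r, r), (c, c)), constant_values=0)
    out = np.zeros_like(img)
    for i in range(h):
        for j in range(w):
            window = padded[i:i + se.shape[0], j:j + se.shape[1]]
            out[i, j] = np.any(window[se == 1] == 1)  # SE hits
    return out

se = np.ones((3, 3), dtype=int)
square = np.zeros((7, 7), dtype=int)
square[2:5, 2:5] = 1  # a 3x3 foreground square
```

Eroding the 3×3 square with a 3×3 structuring element leaves only its centre pixel, while dilating it grows it to 5×5, which shows how a layer of pixels is removed or added.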

Opening and closing

Other morphological operations are based on these two basic operations. Two examples are the opening and closing operations. Opening is defined as erosion followed by dilation. On the other hand, the closing operation is defined as dilation followed by erosion [10, 11]. In image processing, the closing and opening operations are used for morphological noise removal in images. Using the opening operation, small objects causing noise can be removed from the image. Since with opening the erosion is followed by dilation, the opening operation is less destructive to the image compared to erosion only. Closing can be used to remove small holes in images. Just like dilation it enlarges the edges of regions in images, but it is less destructive to the original image shape compared to dilation only [11, 12].
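The noise-removal use of opening and closing described above can be shown on a toy binary image. The sketch below uses SciPy's morphology routines purely as an illustration (the study itself used ImageJ and Matlab): opening deletes an isolated speck while preserving a larger object, and closing fills a one-pixel hole.

```python
import numpy as np
from scipy import ndimage

se = np.ones((3, 3), dtype=bool)  # 3x3 structuring element

# Opening = erosion followed by dilation: removes small bright specks.
noisy = np.zeros((9, 9), dtype=bool)
noisy[2:7, 2:7] = True   # a 5x5 object
noisy[0, 8] = True       # an isolated noise speck
opened = ndimage.binary_opening(noisy, structure=se)
# the speck is gone; the 5x5 object survives intact

# Closing = dilation followed by erosion: fills small holes.
holey = np.ones((9, 9), dtype=bool)
holey[4, 4] = False      # a one-pixel hole
closed = ndimage.binary_closing(holey, structure=se)
# the hole at (4, 4) is filled
```

This mirrors the way opening was later used on the fat masks (speck removal) and closing on the air masks (edge smoothing) in this study.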

2.3 The Hounsfield Unit scale

The Hounsfield Unit (HU) is a quantity often used in the context of CT scans [13]. This scale expresses radiodensity, i.e. the extent to which X-rays are able to pass through a specific type of material. The radiodensity differs per type of material. Thus, the HU scale can be used to distinguish between different substances inside the body, as each has a certain value or range of values on the Hounsfield scale. Structures like bone and air are relatively easy to extract from images, as they are represented by colors that contrast with their environment: in the original DICOM images, bones are mostly white and air is black, due to their radiodensity. Fat, however, is represented by a range of grey values, which means that structures inside the body that are not necessarily fat could also have a value within this range.
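Classifying voxels by HU range amounts to simple threshold tests. In the Python/NumPy sketch below, the HU values of the tiny phantom slice are made up, but the fat range (-200 to -30 HU) follows the range cited later in this thesis [15], and the air cut-off is an assumed value near the -1000 HU of air.

```python
import numpy as np

# A made-up 3x3 "slice" of HU values, for illustration only.
hu = np.array([[-1000, -1000,  -990],    # air-like values
               [ -120,   -50,   -35],    # fat-range values
               [   40,    60,  1000]])   # soft tissue and bone

fat_mask = (hu >= -200) & (hu <= -30)    # fat HU range [15]
air_mask = hu <= -500                    # assumed cut-off for air
```

Each mask then marks the voxels belonging to one material, which is the starting point for the segmentation described in the Methods.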

3. Methods

3.1 Testing image processing techniques

A layer of fat tissue is usually present on the outside of the body. This layer could cause the body to cool differently than it would without it, since the thermal conductivity characteristics of fat differ slightly from those of other tissue. Air is present inside the body as well, for example in the lungs. The thermal conductivity characteristics of air differ more strongly from those of fat and tissue: the cooling speeds of fat and tissue are comparable, but the cooling speed of air is much higher [3]. This means that the presence of air inside the body influences the cooling process in the specific part of the body where the air is present, which makes it of interest to extract the presence of air as well. For the segmentation of these different materials from the CT scans, multiple image processing techniques and methods are available. In this case it was chosen to first investigate the effect of different image processing techniques on the DICOM images in ImageJ before starting to write the code [14].

Testing image processing techniques for fat segmentation in ImageJ

For this testing in ImageJ, a few images of the torso in which the fat layer was present were loaded into the software. First, the effect of segmenting the image using a threshold was investigated, to see whether this would be an effective method for initial segmentation. Specific values on the Hounsfield scale could be used to binarize the images: HU values between -200 and -30 correspond to fat [15]. However, in ImageJ it is not possible to set a threshold at a specific HU value, so a threshold range was chosen that appeared, by eye, to segment the fat layer. The most important step was to check whether the thresholding would separate the fat layer from the rest of the tissue; the exact numbers of the range would be specified later in Matlab. After this, the morphological image processing operations were applied to the images. Different combinations of these operations were tested in ImageJ to discover which combination best segments the different types of body material. The binarization of the images produced noise, consisting of small spots inside the body. To remove this noise and to smooth the shape of the fat layer, the images were eroded and then dilated, which is an opening operation.

Testing image processing techniques for air segmentation in ImageJ

For the segmentation of the presence of air inside the body, the images needed to be binarized using a low HU value (around -1000) as the threshold [15]. For the segmentation of air in ImageJ this number could likewise not be specified exactly, and the value was chosen by eye. The binarization extracted the rough shapes of air inside the body. To remove irregularities from the edges of these shapes, the images were dilated and then eroded, which is a closing operation.

In ImageJ, only the general effects of these image processing techniques on the original CT images were investigated. The goal was first to test whether simple image processing techniques, such as the erosion and dilation operations, would be sufficient to segment air and fat from the images. In ImageJ it is not possible to specify the strength of these operations exactly, as it is in Matlab. Therefore, only the effect of these operations in different orders, i.e. the closing and opening operations, was investigated, to see whether the image processing techniques would yield the expected results. After they proved relatively successful in segmenting air and fat from other tissues, these techniques were implemented in Matlab, as the Phoebe model is also written in Matlab script.
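The two recipes tested above, threshold plus opening for fat and threshold plus closing for air, can be summarized in one short sketch. The study itself used ImageJ and Matlab; the Python/SciPy version below is an illustration only, with made-up HU values and an assumed air cut-off.

```python
import numpy as np
from scipy import ndimage

def segment(hu_slice):
    """Toy version of the two ImageJ recipes described above."""
    fat = (hu_slice >= -200) & (hu_slice <= -30)  # fat HU range [15]
    fat = ndimage.binary_opening(fat)             # remove noise specks
    air = hu_slice <= -900                        # assumed cut-off near -1000 HU
    air = ndimage.binary_closing(air)             # smooth the edges
    return fat, air

hu = np.full((10, 10), 50)    # soft-tissue HU everywhere
hu[2:6, 2:6] = -100           # a patch in the fat range
hu[6:8, 6:8] = -1000          # a small pocket of air
hu[0, 9] = -100               # a lone fat-range noise pixel

fat, air = segment(hu)
# the noise pixel is removed by the opening; note that the opening also
# rounds off the corners of the fat patch, a side effect discussed later
```

The corner-rounding side effect of the opening is the same kind of destructiveness that had to be balanced when choosing structuring elements in Matlab.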

3.2 Writing the code in Matlab

The image processing toolkit in Matlab offers more options for image processing than ImageJ [16]. It is possible to customize the shape and size of a structuring element, and to choose different structuring elements for the dilation and erosion steps of which the closing and opening operations consist. These choices all produce different results in the created image; some are, for example, more destructive than others. In ImageJ, the use of image processing techniques for the segmentation of fat tissue was only investigated in general terms. The code written in Matlab for this study is intended to be used alongside the Phoebe model, as a separate script to calculate one of the input parameters, the fat percentage. It is therefore convenient to write a loop in which all images are processed, to simplify the script: all loaded images can then be edited with that single code, and the script remains easy enough to use in practice. In Matlab, multiple combinations of shapes and sizes of structuring elements were tested for the different steps of the image processing procedure, to see which combination works best in a loop over all images. However, the body composition differs throughout the body, which is reflected in the images. The code could therefore work differently on different images. Due to time constraints it was also not possible to create a code that works perfectly on every image. As a result, on some images the code does not completely yield the desired effect, because it was written as a loop. For example, the fat layer does not have the same thickness in every image. Where the fat layer is thicker, the structuring element chosen for the erosion may be just right, while where the fat layer is thinner, the same erosion operation may be too destructive. If the image is then dilated, the fat layer created around the body is not a continuous line but consists of individual parts. These were sacrifices that had to be made when writing the code.

CT scan for testing the code

For the writing process of the code and for intermediate testing, a CT scan of one body was made available. The final output of the code only displays the calculated body fat percentage. However, many sub-steps take place to calculate this percentage, which are not necessarily displayed in between. An example of such a sub-step is the effect of eroding the CT images with a structuring element of a certain shape and size. In order to write the code and test these sub-steps, their effects were visualized. However, it was not possible to visualize every small adjustment to the code on the complete body, as the code takes a while to execute on all images. Many different options were also tested, for example the effects of structuring elements of different shapes and sizes and of different functions. It would therefore have been too time-consuming to visualize the effect of every step immediately on all images of the CT scan: one would have to wait for the entire loop to finish before viewing the images. Since many combinations of code were tested, this would have taken unnecessarily long.

Selection of the test set

To test the effects of the code on the DICOM images, the code was therefore first executed on a test set. This test set consisted of a few images from the previously mentioned CT scan made available for testing. The images were located at different places in the body, with various body compositions: an image at chest height contains a different combination of body tissues than an image of the skull. The effects of the image processing techniques could therefore also vary between the images, for example by being more or less destructive. Every small adjustment to the code was first executed on the images of the test set. The images were displayed and compared at intermediate steps to visualize the effect of the code. Comparisons were made, for example, with the image of the previous step, to analyze the effect of the next image processing technique; with images on which different functions of the same image processing technique were applied; or with images on which a different structuring element was used. Once the image processing techniques appeared successful on the test set, they were written into a loop in which all images of the CT scan are processed, instead of a code that is executable only on a single image. This code was then executed for all images of the test CT scan, after which its effect on the whole body was visualized.

3.3 Fat segmentation in Matlab

The opening operation

The images of the scan were first read into Matlab with a 3D viewer, in which the HU value range for the binarization could be specified. Just as in ImageJ, this process resulted in noise inside the body, coming from structures with HU values similar to fat. To remove as much of this noise as possible, the images were eroded using the Matlab function imerode. After testing several combinations of structuring elements for the erosion, the choice was made to erode the binarized images with a round structuring element with a radius of one, the smallest radius available [16]. A larger or differently shaped structuring element destroyed the fat layer at places where it was too thin. To restore the effect of the erosion as much as possible and make the fat layer more continuous, the images were then dilated using the imdilate function. For the dilation a line-shaped structuring element with a size of three was chosen, first in a horizontal and then in a vertical orientation. The effect of different sizes was tested for this element as well: a larger size altered the shape of the image too much, while a smaller size did not yield the desired dilation strength. Dilating the image with a round structuring element was also possible [16], but a round element with a radius larger than one or two made the edges of the objects in the image, and thus of the fat layer, too pointed and diamond-shaped, while a smaller round element again did not yield the desired dilation strength. Of the many structuring elements of different shapes and sizes that were tested, this combination of a small round element for the erosion and a line-shaped element in two orientations for the dilation yielded the best effect, while also keeping the original shape of the image largely intact.
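The Matlab code itself is listed in the appendix. As an illustration only, the sketch below re-creates this opening-style operation in Python with scipy.ndimage, under the assumption that the radius-1 disk corresponds to a 3×3 cross and the size-3 line elements to 1×3 and 3×1 bars; the function name open_fat_mask is made up for this example.

```python
import numpy as np
from scipy import ndimage

# Assumed analogues of the Matlab structuring elements described above:
# a disk of radius 1 is the 3x3 cross, and the two line elements are a
# 1x3 horizontal and a 3x1 vertical bar.
DISK_R1 = np.array([[0, 1, 0],
                    [1, 1, 1],
                    [0, 1, 0]], dtype=bool)
LINE_H = np.ones((1, 3), dtype=bool)
LINE_V = np.ones((3, 1), dtype=bool)

def open_fat_mask(binary_slice):
    """Erode with a radius-1 disk, then dilate with a horizontal and a
    vertical line element, mirroring the imerode/imdilate steps."""
    eroded = ndimage.binary_erosion(binary_slice, structure=DISK_R1)
    dilated = ndimage.binary_dilation(eroded, structure=LINE_H)
    return ndimage.binary_dilation(dilated, structure=LINE_V)
```

On a synthetic mask this removes isolated noise pixels while restoring a solid block to its original extent, which is the behaviour aimed for in the text.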

Further steps in extracting the fat layer

After the morphological opening operation, two further operations removed the remaining noise in the images. First, any objects smaller than a certain pixel size were removed, as these are mostly specks of noise. The choice was made to remove structures smaller than 45 pixels, a number that was also found by testing different options: with a smaller threshold too much noise remained in the images, while with a larger threshold too many structures that were part of the fat layer were removed as well. Lastly, any holes within the larger objects that make up the fat layer were filled to create more continuous shapes.
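A minimal Python sketch of these two clean-up steps, assuming scipy.ndimage as a stand-in for the Matlab functions used in the thesis code; the name clean_fat_mask is illustrative only, with the 45-pixel threshold from the text as default.

```python
import numpy as np
from scipy import ndimage

def clean_fat_mask(mask, min_size=45):
    """Remove connected components smaller than min_size pixels and
    fill holes inside the remaining objects (illustrative port of the
    noise-removal and hole-filling steps described in the text)."""
    labels, n = ndimage.label(mask)
    sizes = ndimage.sum(mask, labels, index=list(range(1, n + 1)))
    keep = np.zeros(n + 1, dtype=bool)
    keep[1:] = sizes >= min_size
    cleaned = keep[labels]
    return ndimage.binary_fill_holes(cleaned)
```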

Other options for coding the opening operation

The effect of the function bwmorph was also tested, with which it is possible to perform the opening operation as well as the closing operation immediately, instead of coding the erosion and dilation separately. With this function the same structuring element is used for both the erosion and the dilation operation out of which the opening or closing operation consists. However, personalizing the structuring elements by testing the effect of different shapes and sizes yielded better results than the standard function. The structuring element that was used for the erosion would be too weak for the dilation operation. On the other hand, the two line shaped structuring elements that were used for the dilation operation would be too destructive for the erosion.

3.4 Air segmentation in Matlab

For the segmentation of air inside the body, the opposite operations were performed in Matlab compared to the fat segmentation process. Before the image processing techniques were applied, the CT scan images first needed to be segmented again: as mentioned before, the HU threshold for the segmentation of air differs from the value range for fat [15], so the images were binarized again at the HU value corresponding to air. For the closing operation used in the air segmentation, the dilation and erosion were again coded separately instead of executing the closing operation with a single function. The dilation was performed with a round structuring element with a larger radius, in order to fill the shape of the air structures as much as possible. The erosion was also performed with a round structuring element, but with a smaller radius, to minimize distortion of the shape. For these dilation and erosion steps, structuring elements of different shapes and sizes were again tested beforehand. The last steps in the image processing procedure were once more to remove small specks of noise and to fill any remaining holes in the air shapes.
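The closing-style sequence for the air segmentation can be sketched in the same illustrative Python form. The radii below are placeholders (the thesis tuned the actual elements visually), and disk and close_air_mask are names made up for this example.

```python
import numpy as np
from scipy import ndimage

def disk(radius):
    """Round structuring element, an assumed analogue of a Matlab
    disk-shaped strel of the given radius."""
    y, x = np.ogrid[-radius:radius + 1, -radius:radius + 1]
    return x * x + y * y <= radius * radius

def close_air_mask(air_slice, dilate_r=3, erode_r=2):
    """Closing coded as separate steps, as in the text: a larger disk
    for the dilation, a smaller one for the erosion, then hole filling."""
    grown = ndimage.binary_dilation(air_slice, structure=disk(dilate_r))
    shrunk = ndimage.binary_erosion(grown, structure=disk(erode_r))
    return ndimage.binary_fill_holes(shrunk)
```

Because the dilation radius exceeds the erosion radius, nearby air fragments merge into one continuous structure, which is the intent described above.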

3.5 Calculation of the body fat percentage

To calculate the total body fat percentage, the fraction of every single image that consists of fat was calculated. For this, the shape of the body outline of every image needed to be extracted as well.

Edge detection techniques

The first step in extracting the body outline was to perform edge detection using the Matlab function edge. Within this function four edge detection methods are available: Sobel, Prewitt, Roberts and Canny, each using a different approach to the segmentation. For extracting the outline of the image, the result of the edge detection on the structures inside the body is not particularly relevant; the effect on the outer shape of the body matters most. To compare the four methods, all of them were run on one image of the CT scan. It was not necessary to test these techniques on different images, as was done for the extraction of the fat layer, because the operations needed to extract the outline should not behave differently at different places of the body: the body outline consists of the same tissue throughout. The images obtained from the different edge detection techniques were zoomed in at several places to visualize their effects. After visual comparison, the Canny approach seemed the most effective: it created an image in which the edges were smooth rather than pixelated compared to the other methods. The Canny approach finds edges by looking for local maxima of the gradient of the image [17]. The method also uses two thresholds to detect strong and weaker edges, where the weaker edges only appear in the output if they are connected to strong edges. As a result, there was less noise present with the Canny approach than with the other edge detection methods. The edge detection also detected the edges of structures inside the body, but only the edge of the body shape is important for the extraction of the outline.
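All four Matlab methods start from the image gradient; Canny adds Gaussian smoothing, non-maximum suppression and hysteresis thresholding on top of it. As scipy has no built-in Canny, the sketch below illustrates only the shared gradient-magnitude idea with a Sobel operator; the function name and the normalised threshold are choices made for this example.

```python
import numpy as np
from scipy import ndimage

def sobel_edges(image, threshold=0.5):
    """Gradient-magnitude edge map, the idea underlying Matlab's
    edge(I, 'sobel'); the Canny method chosen in the thesis refines
    this gradient with smoothing, thinning and double thresholding."""
    gx = ndimage.sobel(image.astype(float), axis=1)  # horizontal gradient
    gy = ndimage.sobel(image.astype(float), axis=0)  # vertical gradient
    magnitude = np.hypot(gx, gy)
    if magnitude.max() > 0:
        magnitude /= magnitude.max()                 # normalise to [0, 1]
    return magnitude > threshold
```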

Further steps in extracting the body outline

Due to different kinds of noise, the edge of the body outline that was created was not a continuous line. To solve this, the detected edges were dilated until no holes remained in the edge of the body outline. This dilation was again done with a larger line-shaped structuring element in a horizontal and a vertical orientation, which yielded the best result without distorting the image shape; here too, larger round structuring elements distorted the original shape and made it too pointed and diamond-shaped. After the dilation, the holes in the resulting shape were filled. One small problem arose with the fill function: the edges of the image itself are not detected as the edge of the shape. This affects images where the black holes inside the shape touch the bottom of the image; there, the shape is not treated as a continuous, closed object, and holes touching the image edge are not filled. To solve this, a white line of one pixel was drawn at the bottom of the image to seal off the shape, after which the imfill function filled the remaining holes. This resulted in a filled silhouette of the body outline. However, because the image had previously been dilated, the silhouette was larger than the actual body outline, so the image was eroded to shrink it back. To determine how large the structuring element for this erosion needed to be, the imshowpair function was used to compare two images by placing them over each other. The image of the dilated body outline was placed over the image of the extracted fat layer, which has roughly the same shape and size as the original body outline. With the imshowpair function it could be visualized how much larger the dilated body outline was. Different sizes of a line-shaped structuring element in two orientations were then tested, to find the element that shrank the silhouette back to the original size of the body outline. The imshowpair function was also used at various steps in the fat and air segmentation process, to test the effect of the structuring elements on the original images.
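The dilate, seal, fill and erode sequence above can be sketched as follows, again as an illustrative Python port rather than the thesis code. The element sizes are placeholders (the thesis matched them visually with imshowpair), and fill_body_outline is a name made up here.

```python
import numpy as np
from scipy import ndimage

LINE_H = np.ones((1, 5), dtype=bool)   # placeholder line elements
LINE_V = np.ones((5, 1), dtype=bool)

def fill_body_outline(edge_map):
    """Dilate the edge map so the outline closes, seal it with a white
    one-pixel line at the bottom row, fill the silhouette, and erode
    back towards the original size."""
    grown = ndimage.binary_dilation(edge_map, structure=LINE_H)
    grown = ndimage.binary_dilation(grown, structure=LINE_V)
    grown[-1, :] = True                          # seal the bottom edge
    filled = ndimage.binary_fill_holes(grown)
    shrunk = ndimage.binary_erosion(filled, structure=LINE_H)
    return ndimage.binary_erosion(shrunk, structure=LINE_V)
```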

Formula for body fat percentage calculation

Now that both the fat layer and the body outline were extracted, the percentage of body fat could be calculated as the proportion of fat relative to the outline. Since the binarized images had a black background and white shapes, this proportion was obtained by counting the number of white pixels in each image and using them in the following formula:

(number of white pixels in fat layer · slice thickness) / (number of white pixels in body outline · slice thickness) · 100%

The slice thickness of the DICOM images could be extracted from the images. However, as can be seen in the formula above, this parameter cancels out and was therefore not needed to calculate the percentage of body fat. The formula can thus be rewritten as:

(number of white pixels in fat layer) / (number of white pixels in body outline) · 100%

This formula applies to every image of which a CT scan consists, and a CT scan consists of hundreds, sometimes a few thousand, images. To calculate the fat percentage of the entire body, the pixel counts of all images were therefore summed first: the sum of the number of white pixels in the fat layer over all images is divided by the sum of the number of white pixels in the body outline:

(Σ number of white pixels in fat layer) / (Σ number of white pixels in body outline) · 100%

Just like the image processing techniques, this calculation was written in a loop in Matlab script, in which all DICOM images of a CT scan could be processed. The final result of the code can be found in the appendix.
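The whole-body calculation reduces to two pixel sums and a ratio. A minimal Python sketch (the actual loop is Matlab, in the appendix; body_fat_percentage is a name invented here):

```python
import numpy as np

def body_fat_percentage(fat_masks, outline_masks):
    """Apply the formula from the text over all slices:
    100 * (sum of fat pixels) / (sum of outline pixels).
    The slice thickness cancels, so it never enters the computation."""
    fat = sum(int(m.sum()) for m in fat_masks)
    outline = sum(int(m.sum()) for m in outline_masks)
    return 100.0 * fat / outline
```

Note that summing the pixel counts first, rather than averaging per-slice percentages, correctly weights every slice by its cross-sectional area.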

3.6 Generating and comparing cooling curves

For the rest of this study, two kinds of data were needed. First, CT scans of bodies had to be available to run the written code on, so that the fat percentage of an individual could be calculated and the calculation method could be evaluated. Secondly, measured cooling curves of these same bodies were needed.

Obtaining subject data

The cooling curves of the bodies used in this study had been measured in a previous study [5]. For these measurements, bodies that were brought to the morgue of the Academic Medical Center were used as soon as possible after death. The bodies were donated for research purposes, for which the participants had willingly signed consent. It is important to bring the bodies to the morgue as soon as possible, as no cooling curves can be observed once a body has already cooled down. In that study the temperature was measured using Thermochron iButtons, which have an operating temperature range from -10 °C to 65 °C and a specified accuracy of about 0.5 °C [18], although studies with these iButtons have shown that their accuracy can be better, around 0.09 °C [19]. The skin temperature was measured at several places of the body, including the abdomen, chest, forehead and upper leg. The iButtons were placed on the skin at these locations and removed after 24 hours. Throughout this time the bodies stayed in the morgue, at an environmental temperature of 2 °C to 3 °C. The curves measured during this period are part of the data eventually used in this study. Only two participants were available for whom both the measured cooling curves and the DICOM images of a CT scan were available. These are the bodies of 22-09-2015 and 07-07-2016, where the dates refer to the date of death. The CT scans were made at the radiology department of the Academic Medical Center. The body whose CT scan images were used to test the code on could not be used for this comparison, as it had entered the morgue after 48 hours and no cooling curves could be observed.

Calculation of the body fat percentage

The measured cooling curves were compared with two kinds of curves simulated by the Phoebe model for this study: cooling curves generated using the fat percentage from the previously used calculation method, and cooling curves generated using the fat percentage found from the CT scans. By comparing these two kinds of curves with the 'true' measured cooling curves, the effect of the fat percentage on the cooling curves could be assessed. For the previous fat percentage calculation method, a formula was used that requires the body weight and a few body measurements [20]. The weight and body measurements are also input parameters of the Phoebe model, and both were measured in the study in which the cooling curves were measured [5]. The formula additionally requires measurements at the arm, hip and waist, which were measured as well. The fat percentages had also been calculated during that study, using an online body fat percentage calculator that implements the formula, and these percentages were used here. For the calculation of the body fat percentage from the CT scans, the images of the two participants were read into Matlab and binarized with the same 3D viewer that was used for the test CT scan. The written code was then run, executing the previously described image processing techniques in the right order. The final output of this Matlab script was a single number: the calculated body fat percentage.

Issues with one CT scan

The scan of the body of 22-09-2015 was not made consecutively from head to toe. The body was first scanned from the head to the bottom of the torso; a second scan was then made of the other half of the body, starting from the feet and ending around the middle of the torso. As a result, the two halves of the CT scan partly overlap and duplicate images exist of a certain part of the body. Unfortunately it was not possible to determine exactly which part of the body, and which images, overlapped. Since the acquisition times of the two halves were not equal, DICOM viewing software also treated them as two different CT scans, and the duplicate images could not be deleted by name either, as the names of the overlapping images did not match. The body fat percentage could therefore not be calculated using all available images, as the overlapping images would influence the result. During the writing and testing process, the code had already been run on parts of the body of the test CT scan, and using only the legs appeared to be a less reliable way to calculate the fat percentage: the outcome then deviated more from the real fat percentage than when only the upper body was used. This could be due to the fact that more fat is stored in the upper body, especially in obese bodies [21]. In some individuals the legs could hold more fat, but Kissebah et al. found that upper-body-segment obese individuals hold more large fat cells at their sites of adiposity, whereas in lower-body-segment obese subjects these sites consist of normal-sized cells [21]. The choice was therefore made to calculate the fat percentage with the CT scan of the upper body.

Estimating missing body measurements

One of the input parameters of the Phoebe model to simulate the body is the set of body measurements. As explained before, these were measured at the morgue when the body cooling curves were measured. For the body of 22-09-2015 the body width and thickness had not been measured. The choice was therefore made to upload the images of the body into Horos, a 3D medical viewer, and estimate these sizes there [22]. This also served to investigate whether the body measurements needed as input can be determined digitally, instead of having to measure and enter them manually. In Horos, the body width and thickness were first measured on the body of 07-07-2016, for which these sizes were known from the measurements at the morgue. This was done to test the software and to establish at which location on the body of 22-09-2015 these sizes needed to be measured. The missing sizes were then measured at the correct locations on the body of 22-09-2015.

Running the simulations

After the calculation of the fat percentage and the estimation of the missing body measurements, the graphs were created with the Phoebe model (version 2015) in Matlab. The simulations for both bodies were first run using the fat percentage calculated with the previous method, and then using the fat percentage calculated in Matlab from the CT scans. The other parameters, some of whose values were found in literature, were kept constant. The emissivity of the body was set to 0.96 [23], and the thermal conductivity coefficients for the body, fat and clothes were 0.55, 0.2 and 0.03 W·m⁻¹·K⁻¹ respectively [24]. The specific heat of the clothes was 4 J·kg⁻¹·K⁻¹ [25], although the body of 07-07-2016 was brought to the morgue without any clothes. The participant of 22-09-2015 wore a diaper and a shirt, which were incorporated in the Phoebe model by selecting a shirt and short pants. The initial body temperature was set to 25 °C, as the measurements with the iButtons started at that temperature for both bodies. The environmental temperature was set to 3 °C, the temperature at the morgue. During the measurements the bodies lay on a table at the morgue [5]; since the tabletop consisted of thin conductive material with air underneath, no floor was added to the simulations. The simulations were run for 24 hours.
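For reference, the fixed settings above can be collected in a small configuration structure. The dictionary below is purely illustrative: the key names are invented here and do not reflect the Phoebe model's actual input format, but the values are the ones quoted in the text.

```python
# Illustrative configuration with the fixed simulation settings from the
# text; the real Phoebe model's parameter names are not documented here.
PHOEBE_PARAMS = {
    "emissivity": 0.96,            # body emissivity [23]
    "k_body_W_per_mK": 0.55,       # thermal conductivity, body [24]
    "k_fat_W_per_mK": 0.2,         # thermal conductivity, fat [24]
    "k_clothes_W_per_mK": 0.03,    # thermal conductivity, clothes [24]
    "c_clothes_J_per_kgK": 4,      # specific heat of clothes [25]
    "T_initial_C": 25,             # iButton measurements start here
    "T_environment_C": 3,          # morgue temperature
    "duration_h": 24,              # simulated time span
}
```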

3.7 Sensitivity analysis

Lastly, a sensitivity analysis was performed for the fat percentage. The goal of this study was to test whether the fat percentage could be calculated with the written Matlab script instead of with the current calculation method; however, if the percentage calculated in Matlab is incorrect, this could influence the simulated cooling graphs. The sensitivity analysis investigated this influence. For both bodies two extra simulations were run: one using a fat percentage 3% higher than the percentage calculated in Matlab, and one using a percentage 3% lower. The results of the simulations comparing the two fat percentage calculation methods, and the results of the sensitivity analysis, were visualized in graphs using Excel.


4. Results

One of the images in the previously mentioned test set was the 500th image of the CT scan, an image at chest height in which the fat layer is somewhat visible as a grey edge that is darker than the internal tissues (figure 1). Another image in the test set is the 250th image of the CT scan, a slice of the skull with a different body composition compared to the first image (figure 2).

In the image of the skull more bone structures and less outer tissue and fat are present compared to the image in figure 1.

4.1 Fat segmentation in Matlab

For the first step of the extraction of the fat layer these two images were binarized, along with other test-set images at comparable body locations. In figures 3 and 4 the effect of the binarization is visible for the images at chest and skull height respectively. The fat layer is roughly separated from the other body tissues in these images. The segmented fat layer at chest height is thicker and more continuous than the fat layer at skull height.

Figure 1: the original DICOM image of an image at chest height of the test set

Figure 2: the original DICOM image of an image at skull height of the test set

Figure 3: the binarized image at chest height of the test set

Figure 4: the binarized image at skull height of the test set


In these figures also the noise inside the body is visible. After the erosion step, this noise was mostly removed from the images. The effect of this erosion operation on both images is visible in figures 5 and 6.

On the image at chest height, the fat layer after erosion (figure 5) is almost the same size as the fat layer right after binarization (figure 3), and is not thinned out too much; the shape of the fat layer is also kept intact as much as possible. The fat layer of the image at skull height is already only a few pixels thin at some places after binarization (figure 4), and it was therefore almost impossible to obtain a completely continuous fat layer in this image after erosion. The fat layer was thinned out and disappeared at some places as a result of the erosion (figure 6), but this structuring element was the least destructive to the fat layer at this body location. The next step after the erosion was the dilation operation, whose effect on both images is visible in figures 7 and 8.

As can be seen in figures 7 and 8, the dilation operation also dilated the small white specks inside the body, which again resulted in too much noise there. With the last two operations, objects below a certain pixel size were removed and any small holes inside the fat layer were filled to make the shape of this layer more continuous. The effect of these two operations is visible in figures 9 and 10. As can be seen, small specks were also removed from the fat layer of the image at skull height, but this was unavoidable when writing one loop for all images.

Figure 5: the eroded image of an image at chest height of the test set

Figure 6: the eroded image of an image at skull height of the test set

Figure 7: the dilated image of an image at chest height of the test set

Figure 8: the dilated image of an image at skull height of the test set

4.2 Air segmentation in Matlab

The air segmentation inside the body was performed in a similar manner. First the air inside the body was filtered out by binarizing the images at a low HU value. The 500th image of the CT scan that was used in the test set for the fat segmentation was used once again, since in this image there was also air present from the lungs. The result of the binarization is visible in figure 11.

To make the air inside the body a more continuous structure, the image was dilated with a round structuring element, the effect of which is visible in figure 12. The next step was to erode the image again (figure 13), since the dilation had made the air structures larger than the original ones. The result of the last two steps, in which holes in the air structures were filled and small specks of noise were removed, is visible in figure 14.

Figure 9: final result of the operations for the extraction of the fat layer for the image at chest height

Figure 10: final result of the operations for the extraction of the fat layer for the image at skull height

Figure 11: the binarized image for air segmentation

Figure 12: the dilated binarized image for air segmentation


4.3 Calculation of the body fat percentage

The effect of all four edge detection techniques on the 500th image can be seen in figures 15-18. The effects are similar, as most methods find the edges at points where the gradient of the image has a maximum value [17]; still, there are slight differences between the methods.

Figure 15: the effect of the Sobel edge detection technique on the image at chest height

Figure 16: the effect of the Prewitt edge detection technique on the image at chest height

Figure 13: the eroded image of figure 12 for air segmentation

Figure 14: the final result of the air segmentation, after filling holes and removing noise

The Canny method was selected for the extraction of the body shape, as it yielded the smoothest result. Next, the image was dilated in order to fill the holes in the outline of the body shape and to connect the outline; this step was needed to make the edge of the image continuous (figure 19). After the dilation, the white line at the bottom of the image sealed off the shape and the holes inside the image were filled; the effect of these two operations is visible in figure 20.

Because of the dilation operation (figure 19) the body outline became larger than the original outline, so the next step was to erode the image back towards the size of the original shape. The effect of the erosion is visible in figure 21. As can be seen, at parts where the shape touches the (left) edge of the image, the shape stays connected to that edge; this was a small artifact in the image processing that could not be solved.

Figure 17: the effect of the Roberts edge detection technique on the image at chest height

Figure 18: the effect of the Canny edge detection technique on the image at chest height

Figure 19: the effect of the dilation after the Canny edge detection technique

Figure 20: the effect after the holes in the image of figure 19 are filled


In figure 22 the effect of the imshowpair function is visible, which was used to place images on top of each other and assess the effect of the erosion operation; here the images of figures 9 and 21 are placed on top of each other. When figure 20 was used instead of figure 21, a thick white edge was present outside the fat layer, because the body outline was still too large. The imshowpair function visualized the effect of the erosion: if the structuring element was not strong enough, this white line remained present. As can be seen in figure 22, almost no white line is present outside the fat layer, which means the chosen erosion was strong enough to shrink the image almost back to its original size.

4.4 Cooling curves comparing the method of fat percentage calculation

The results described above concern the writing and testing of the code. Below, the results of comparing the measured cooling curves of the bodies with the simulations are described.

The first body for which the cooling curves were simulated was the body of 22-09-2015: a woman of 61 years old, with a length of 157 cm and a body weight of 87 kg. The body cooling curves were measured at the morgue 26 hours after death; she had arrived at the morgue 24 hours after death, and the CT scan was made the day after [5]. The body fat percentage found with the previous method was 34%. As explained in the methods section, only a CT scan of half of the body could be used; the fat percentage calculated from that scan was 37%. For testing purposes the code was also run on the complete scan with overlapping images, which yielded a fat percentage of 30%. The fat percentage of the upper-body scan is the percentage that was eventually used as input for the Phoebe model. With the Horos DICOM viewer, the missing body width and thickness were estimated to be 47 cm and 25 cm respectively. For this body the cooling curves were measured only at the abdomen, chest, forehead and upper leg. The Phoebe model also generated cooling curves at several other places of the body, but since there were no measured cooling curves to compare them with, only the simulated curves at these four locations were selected.

In figure 23 below, the graphs of both simulations at these four body locations can be found: one simulation uses the fat percentage calculated with the previous method, the other the fat percentage calculated with the code in Matlab. The measured cooling curves are also plotted in the figure.

Figure 21: the effect after erosion on figure 20

Figure 22: the effect of the imshowpair function


In the graphs of figure 23 it can be seen that the curves of the two simulations (red and green) overlap. Only the graph of the chest is an exception: there, the simulation in which the CT scan images were used for the fat percentage calculation follows the measured cooling curve, while the other simulation cools down less quickly. The measured cooling curves at the other body locations differ somewhat from the simulations: the measured curves of the forehead and upper leg initially cool down more quickly than the simulations, whereas the measured curve of the abdomen cools down less quickly after two hours. Over time the simulated curves approach the measured curves and therefore become more accurate, as the difference between them becomes smaller. Only in the graph of the abdomen does a relatively large difference remain between the simulated and measured curves.

The second body is the body of 07-07-2016: a woman of 94 years old, with a length of 159 cm and a body weight of 39 kg. The body cooling curves were measured at the morgue 11 hours after death [5]. The CT scan was made the same day and was one consecutive scan, so all images could be used for the calculation of the fat percentage. The body fat percentage found with the previous method was 21%, and the percentage found using the full-body CT scan was 28%. For this woman all body measurements had been taken when the cooling curves were measured. Curves were also measured at the belly button and lower leg, but only the curves of the same four body locations as for the previous body were used. The measured and simulated cooling curves of this body are shown in figure 24 below.

Figure 23 a-d: cooling curves of the body of 22-09-2015. The measured cooling curves (purple) are plotted against the simulated cooling curves using the previous method of fat percentage calculation (green), and the code in Matlab for fat percentage calculation (red)

Also in these graphs it can be seen that both curves that are generated in Matlab with the two fat percentages overlap, and that there is almost no difference between the red and green curves. The measured cooling curves deviate slightly from the simulated cooling curves, although the simulations of the curves of the forehead and abdomen are relatively similar to the measured curves. The measured curves of the chest and upper leg cool down more quickly in the first few hours compared to the simulations. The difference between the simulated and measured curves is also smaller compared to the body of 22-09-2015. The difference over time between the simulated and measured curves is also smaller compared to the previous body. The

Figure 24 a-d: cooling curves of the body of 07-07-2016. The measured cooling curves (purple) are plotted against the simulated cooling curves using the previous method of fat percentage calculation (green), and the code in Matlab for fat percentage calculation (red)



simulated curves, except for the chest, meet the measured curves more quickly, and therefore the model becomes more accurate over time.
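The observation that the model becomes more accurate over time can be made quantitative by tracking the absolute temperature difference between the simulated and measured curves at each time point. The Python sketch below does this with hypothetical, hand-picked temperature values (the actual curves live in the Matlab workspace of the Phoebe model); only the shrinking-error pattern matters, not the numbers.

```python
import numpy as np

# Hypothetical measured and simulated chest temperatures (deg C) at
# hourly intervals; the values are illustrative, not thesis data.
measured = np.array([34.0, 31.5, 29.8, 28.4, 27.3, 26.5])
simulated = np.array([36.0, 33.0, 30.8, 29.0, 27.7, 26.7])

# Absolute error per time point; a strictly decreasing sequence means
# the prediction becomes more accurate as the post-mortem interval grows.
abs_error = np.abs(simulated - measured)
print(np.all(np.diff(abs_error) < 0))
```

Plotting `abs_error` against time would make the convergence visible in the same way as the overlapping tails of the curves in figures 23 and 24.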

4.5 Sensitivity analysis

The sensitivity analysis was done for the curves of the calculated fat percentage of both bodies. The calculated fat percentage of the body of 22-09-2015 was 37%, and that of the body of 07-07-2016 was 28%. The graphs of the body of 22-09-2015, together with the simulations using a fat percentage 3% higher and 3% lower, can be found below.

In figures 23 and 24 it was already visible that the simulated cooling curves did not differ significantly when different fat percentages were used, and the same can be seen in these graphs. Only the graphs of the chest vary, with the curve of the 3% lower fat percentage differing most: it has the same shape as the other curves but lies a few degrees higher.
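A perturbation analysis of this kind can be scripted generically: run the model once with the calculated fat percentage and once with the percentage shifted by ±3 percentage points, then record the largest temperature deviation between the curves. Since the Phoebe model itself is a Matlab finite element model, the Python sketch below uses a simple Newtonian-cooling stand-in (`simulate_cooling` is hypothetical, not the Phoebe code) purely to show the bookkeeping.

```python
import numpy as np

def simulate_cooling(fat_pct, hours):
    """Hypothetical stand-in for the Phoebe model: Newtonian cooling in
    which a higher fat fraction mildly slows the cooling constant."""
    t_env, t_body = 20.0, 37.0
    k = 0.12 * (1.0 - 0.3 * fat_pct / 100.0)  # illustrative insulation effect
    return t_env + (t_body - t_env) * np.exp(-k * hours)

hours = np.linspace(0.0, 12.0, 25)
base = simulate_cooling(37.0, hours)  # calculated fat percentage: 37%

# Shift the fat percentage by +/- 3 percentage points, as in the analysis,
# and record the largest deviation from the baseline curve.
deviation = {d: float(np.max(np.abs(simulate_cooling(37.0 + d, hours) - base)))
             for d in (-3.0, 3.0)}
print(deviation)
```

With this toy stand-in the maximum deviation stays well below one degree, mirroring the small differences seen in the thesis's own sensitivity graphs.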

Figure 25 a-d: cooling curves of the sensitivity analysis of the body of 22-09-2015. The simulated curves using the calculated fat percentage in Matlab of 37% (green) are plotted against simulated cooling curves using a percentage of 34% (red) and 40% (purple)



The results of the sensitivity analysis for the curves of the body of 07-07-2016 can be found in figure 26 below.

Also in this sensitivity analysis the graphs of the simulations are similar. There is almost no difference between the graphs where different fat percentages are used. Only the graphs of the chest vary slightly, but not as much as for the body of 22-09-2015.

5. Discussion


The two experiments that were conducted for this study focused on the fat percentage calculation of an individual. The development of the Phoebe model is part of a larger project, so there are multiple aspects that studies on this model could focus on. For example, a study could test the PMI predictions of the Phoebe model, could focus on improving the model by adapting its source code or by writing additional code, or could examine the process of how the model is used and whether this process could be improved.

Figure 26 a-d: cooling curves of the sensitivity analysis of the body of 07-07-2016. The simulated curves using the calculated fat percentage in Matlab of 28% (green) are plotted against simulated cooling curves using a percentage of 25% (red) and 31% (purple)



In this study the focus was on the last two aspects. Code was written in Matlab that can be used alongside the Phoebe model to calculate the fat percentage of an individual. This would replace the current method of fat percentage calculation and introduce a scanning method into the process of using the Phoebe model. The code was written for DICOM images in general, not specifically for CT scans; DICOM images from another type of scan, such as MRI, can therefore also be used. A sensitivity analysis was also performed in order to investigate the effect of the fat percentage on the curves.
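For CT data specifically, a common way to segment fat is to threshold the Hounsfield unit (HU) values, since adipose tissue typically falls in roughly the −190 to −30 HU range. The Python sketch below illustrates that idea on synthetic slices; it is a simplified illustration of HU thresholding, not the thesis's Matlab pipeline, and the thresholds are assumptions (in practice, DICOM stored values must first be converted to HU using the rescale slope and intercept).

```python
import numpy as np

# Assumed thresholds: adipose tissue on CT lies roughly between
# -190 and -30 HU; voxels above -500 HU are counted as body (not air).
FAT_LOW, FAT_HIGH = -190.0, -30.0
BODY_THRESHOLD = -500.0

def fat_percentage(hu_slices):
    """Fat voxels as a percentage of body voxels, summed over all slices."""
    fat_voxels = body_voxels = 0
    for hu in hu_slices:
        body = hu > BODY_THRESHOLD
        fat = (hu >= FAT_LOW) & (hu <= FAT_HIGH) & body
        fat_voxels += int(fat.sum())
        body_voxels += int(body.sum())
    return 100.0 * fat_voxels / body_voxels

# Two synthetic 4x4 slices: air at -1000 HU, one fat region (-100 HU)
# and one soft tissue region (+40 HU) of equal size.
slice_fat = np.full((4, 4), -1000.0); slice_fat[1:3, 1:3] = -100.0
slice_soft = np.full((4, 4), -1000.0); slice_soft[1:3, 1:3] = 40.0
print(fat_percentage([slice_fat, slice_soft]))  # fat fills half of the body voxels
```

Summing counts over all slices before dividing, rather than averaging per-slice percentages, ensures that slices containing little body tissue do not distort the result.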

5.1 Interpretation of the results

As explained before, of the two bodies that were made available for testing the final code, the scans of only one and a half bodies were suitable for use. Unfortunately, this is too small a number to draw firm conclusions about the correctness of the calculated fat percentage.

The body of 22-09-2015

From the body of 22-09-2015, of which only the scan of the upper body was used, the calculated fat percentage was relatively close to (3% higher than) the percentage that was found with the previous method. The fat percentage found using the two CT scan halves of the complete body was 4% lower than the percentage of the previous method. The difference between the fat percentages of the upper body scan and the full body scan is thus 7%, which is considerable. For this specific body it is not possible to attribute this difference to a single cause, since it is unknown whether it was caused by the overlapping images or by the calculation method used in the written code. Therefore no clear conclusion about the correctness of the calculations can be drawn from these results. The body can still be used for conclusions about the sensitivity analysis, since there the difference between fat percentages is of interest rather than the fat percentage itself.

The body of 07-07-2016

For the second body, a full body CT scan could be used to calculate the fat percentage. The fat percentage found with the code differed 7% from the fat percentage of the previous method. Unfortunately, this difference is quite large: it exceeds the 3% difference that was found with only the upper body scan of the other body. However, nothing can be said yet about how meaningful this number is, since there were no CT scans from other bodies to compare these results to. It could be that the position of the body during the CT scan influences the operation of the code. For example, the hands of the subject were placed on the stomach while the CT scan was made. When this is the case, small specks may remain at the location of the hands after the erosion operation. When the images are then dilated, these specks can blend into the body and therefore be included in the stomach tissue. As a result the stomach region is expanded and the hands are treated as fat tissue, while in fact not much fat tissue is present on the hands. This might be avoided by placing the hands of the individual next to the body without letting them touch the sides. Whether this is really the cause, and whether it could really be solved by positioning the hands differently, cannot be determined with the information currently available for this study, but it is an aspect that should be kept in mind. Over time the difference between the simulated and measured graphs of both bodies became smaller, and the prediction of the model became more accurate.
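This failure mode can be reproduced with standard morphological operations. The Python sketch below (using `scipy.ndimage`; the thesis code itself is in Matlab) builds a synthetic binary mask of a torso with a small "hand" blob next to it: erosion shrinks the hand to a speck instead of removing it, and the subsequent dilation grows the speck back until it merges with the torso, exactly as hypothesized.

```python
import numpy as np
from scipy.ndimage import binary_erosion, binary_dilation, label

# Synthetic body mask: a large "torso" block and a small "hand" blob
# separated from it by a one-pixel gap, as when hands rest on the stomach.
mask = np.zeros((12, 12), dtype=bool)
mask[2:10, 2:7] = True   # torso
mask[4:7, 8:11] = True   # hand

n_before = label(mask)[1]            # two separate components
eroded = binary_erosion(mask)        # the 3x3 hand survives as one speck
dilated = binary_dilation(eroded, iterations=2)
n_after = label(dilated)[1]          # the speck has merged with the torso
print(n_before, n_after)
```

With a wider gap between hand and torso, the dilated speck would remain a separate connected component and could be discarded, which is consistent with the suggestion to position the hands away from the body.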

Sensitivity analysis

With the version of the Phoebe model that has been used in this study, the sensitivity analysis showed that the fat percentage does not have a large influence on the shape of the curve. This was already visible in the simulation graphs of the body of 07-07-2016, where the fat
