
From reflectance as a function of absorption coefficient to path length distribution for photons in tissue using an inverse Laplace transform

Nathalie van Sterkenburg
Student Number: 11037466

Supervisor: Dr. ir. D. J. Faber

Second Examiner: Prof. dr. ing. M.C.G. Aalders

Text: Report Bachelor Project Physics and Astronomy

Size: 15 EC

Conducted Between: 04-04-2018 and 29-06-2018

Department: Biomedical Engineering and Physics

Institute: Amsterdam UMC - location AMC

Faculty: Faculteit der Natuurwetenschappen, Wiskunde en Informatica

University: Universiteit van Amsterdam


Abstract

In optical imaging, it is useful to know the path length distribution of the light that is being measured. This distribution is the inverse Laplace transform of the reflectance as a function of absorption coefficient, which can be measured. However, an exact inverse Laplace transform cannot be performed on measured data, so numerical inversion algorithms are needed. One of those algorithms, proposed by Harald Stehfest, was tested in this project. The algorithm was then applied to a signal of reflectance as a function of absorption coefficient obtained from a data set created by a Monte Carlo simulation of photons in tissue. The results show that the algorithm does not work for signals that are not completely continuous. A way to work around this and still use the algorithm on the signal from the data set is to fit a model to the signal and invert the fit.


Summary

When photons travel through tissue, the properties of the tissue mean they will not always keep moving straight ahead. Instead, they can be scattered in a random direction until they leave the tissue or are absorbed. Some of the photons will leave the tissue on the same side where they entered; such a photon is said to be reflected. By measuring how many photons are reflected by the tissue, something can be said about the properties of the tissue. For example, one can look at how the reflectance depends on the absorption coefficient, which indicates how strongly the tissue absorbs photons. This relation is interesting because it is connected to the path length distribution of the photons: the connection is an inverse Laplace transform. With the reflectance as a function of the absorption coefficient, something can therefore be said about the path lengths that the photons have travelled in the tissue. However, the inverse Laplace transform is very difficult to carry out, so it cannot be used directly to extract a path length distribution from experimental data. To recover the path length distribution anyway, an algorithm can be used to compute the inverse Laplace transform numerically.

In this study such an algorithm, devised by Harald Stehfest, was tested. It was examined how the algorithm handles smooth signals and signals with added noise, and how well it works on experimental data. The experimental data were created with a Monte Carlo simulation of photons moving through tissue. In the end it turned out that the algorithm can only handle smooth functions and gives unusable solutions when there is noise in the signal. It is, however, possible to use the algorithm on a fit made to the experimental data. This was tried out in the project, but further research is still needed.


Contents

1 Introduction
2 Theory
  2.1 The Algorithm
  2.2 Reflectance and Path Length Distribution
3 Methods
  3.1 Simulation
  3.2 Testing the Algorithm
4 Results
5 Conclusion and Discussion
6 Acknowledgements
Appendices


1 Introduction

The goal of the medical field is to cure human beings. But when a condition is not visible on the outside, it first has to be detected. Optical imaging is a preferred method of characterizing tissue to detect abnormalities because of its non-invasive nature [1, 2]. Examples include measurements of tissue perfusion and oxygenation [3, 4]. To make use of optical imaging techniques, however, they have to be developed and understood, which is why it is important to conduct research on them. This project investigated the technique of reflectance spectroscopy, which has for example been used to study optical properties of tissue [5, 6] or volume reflection [7]. More specifically, the goal was to find a method to obtain the path length distribution of the reflected photons. Because this project was done in the context of medical imaging, the medium the photons travelled through was always tissue.

There are multiple reasons why it is useful to know the path length distribution [8]. One of them is that the distribution indicates how deep into the tissue a photon could have gone. This helps determine what type of tissue is being measured, or it can serve as a check that the right type of tissue is being measured [9]. Suppose, for example, that measurements have to be performed on blood vessels 1 cm into the skin, but the longest path lengths in the distribution are only 0.5 cm; then it can be concluded that the wrong type of tissue is being measured and that the experiment has to be conducted differently.

Another reason to study the path length distribution is to find out more about the properties of tissue that influence the propagation of light. These properties are the absorption coefficient µ_a, the scattering coefficient µ_s and the anisotropy g. A way of doing this is by studying the relation between the path length distribution and the reflectance as a function of absorption coefficient. These two are related by a Laplace transform, and this relation is central to this project [8].

The path length of a photon cannot be measured directly with reflectance spectroscopy. There are other techniques where this is possible [10, 11, 12, 13], but reflectance spectroscopy is an easy-to-use and therefore preferable method. Because the path length distribution is related to the reflectance as a function of absorption coefficient, and this reflectance can be measured directly in reflectance spectroscopy, the path length distribution can be obtained through the reflectance. However, to get from reflectance to path length distribution, an inverse Laplace transform must be performed, and there is generally no easy analytical way of doing this. So instead, we looked at an algorithm proposed by Harald Stehfest [14] to perform the inverse Laplace transform and investigated whether the results were accurate enough to be useful. Because no experiments are available that measure both the path length distribution and the reflectance as a function of absorption coefficient, a direct validation was not possible. Instead, a Monte Carlo simulation was made which simulated photons moving through tissue. The algorithm used for the simulation was MCML. This simulation produced a data set of reflected photons that also contained their path lengths. From this data set the reflectance could be calculated, and the algorithm could be used on this reflectance to get a path length distribution, which could then be compared to the path length distribution derived directly from the data set.

The algorithm by Stehfest was intended to be used with "real procedures" to determine the input values P(s) [14]. These values are noiseless. To investigate how the algorithm handles real data with noise, such as the outcome of the Monte Carlo simulation or a discrete set of data points on P(s), we tested two options. The first was using interpolation to obtain any data points necessary. The second was fitting a model of P(s) to the data and using the fitted curve.

2 Theory

2.1 The Algorithm

First, a definition of the Laplace transform has to be given:

\mathcal{L}\{F(t)\} = P(s) = \int_0^\infty e^{-st} F(t)\,dt.    (1)

Here P(s) is the Laplace transform of F(t). The inverse Laplace transform is then

F(t) = \mathcal{L}^{-1}\{P(s)\}.    (2)

This inverse Laplace transform is much harder to perform than the forward Laplace transform. There are still ways to do it, albeit approximate ones. One of those is an algorithm proposed by Harald Stehfest [14].

The algorithm is based on the approximation

F(t) \approx \frac{\ln 2}{t} \sum_{i=1}^{N} V_i \, P\!\left(\frac{\ln 2}{t}\, i\right).    (3)

Here, V_i is given by

V_i = (-1)^{N/2+i} \sum_{k=\lfloor (i+1)/2 \rfloor}^{\min(i,\,N/2)} \frac{k^{N/2}\,(2k)!}{(N/2-k)!\,k!\,(k-1)!\,(i-k)!\,(2k-i)!}.    (4)

This equation was found based on a probabilistic derivation [14].

The formula consists of two parts. To calculate a point of F(t), a list of V_i and a way to calculate P(a · i) are needed, where a = ln(2)/t and s = a · i. V_i only depends on N, the size of the summation. A higher N should theoretically give a more accurate F(t), but in practice the finite precision of the computer plays a role, so an optimal N exists that depends on both the function being evaluated and the accuracy of the machine being used. N also has to be an even number for all calculations in the algorithm to be performed successfully. Realistic values for N are in the range 10 to 30.

The algorithm works best for outcomes F(t) that converge to 0 at infinity. This means it should lend itself well to exponential signals, which is the type of signal used in this project.
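As an illustration of equations (3) and (4), a minimal Python sketch of the coefficients V_i and the inversion formula is given below. It assumes an exact, callable P(s) and uses N = 22, the value found to be optimal later in this project; the function names are illustrative and not taken from the project's source code [17].

import math

def stehfest_coefficients(N):
    # Coefficients V_i from equation (4); N must be even.
    V = []
    for i in range(1, N + 1):
        total = 0.0
        for k in range((i + 1) // 2, min(i, N // 2) + 1):
            total += (k ** (N // 2) * math.factorial(2 * k)) / (
                math.factorial(N // 2 - k) * math.factorial(k)
                * math.factorial(k - 1) * math.factorial(i - k)
                * math.factorial(2 * k - i))
        V.append((-1) ** (N // 2 + i) * total)
    return V

def stehfest_invert(P, t, N=22):
    # Approximate F(t) from equation (3), given a callable P(s).
    V = stehfest_coefficients(N)
    a = math.log(2.0) / t
    return a * sum(V[i - 1] * P(a * i) for i in range(1, N + 1))

# Quick check with the known pair P(s) = 1/(s + 1)  <->  F(t) = exp(-t):
print(stehfest_invert(lambda s: 1.0 / (s + 1.0), 1.0), math.exp(-1.0))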


2.2 Reflectance and Path Length Distribution

When photons are sent into tissue, they will be scattered inside the tissue and one of three things will happen: the photon can be absorbed, transmitted through the other side of the tissue, or reflected on the same side of the tissue where it entered. The reflected photons are of interest to us. The reflectance is mainly determined by the anisotropy g, the scattering coefficient µ_s and the absorption coefficient µ_a. In figure 1, a few examples of possible photon paths are shown.

For this project the reflectance as a function of absorption coefficient is used. A model for this relation, which is not very accurate at the source but becomes more accurate for greater radial distance r, is

R(\mu_a) = \frac{1}{4\pi} \left[ z_0 \left( \mu_{eff} + \frac{1}{\rho_1} \right) \frac{e^{-\mu_{eff}\rho_1}}{\rho_1^2} + (z_0 + 2z_b) \left( \mu_{eff} + \frac{1}{\rho_2} \right) \frac{e^{-\mu_{eff}\rho_2}}{\rho_2^2} \right],    (5)

where \rho_1 = \sqrt{z_0^2 + r^2}, \rho_2 = \sqrt{(z_0 + 2z_b)^2 + r^2}, \mu_{eff} = \sqrt{3\mu_a(\mu_a + \mu_s')}, D = (3\mu_s')^{-1} and \mu_s' is the reduced scattering coefficient, given by (1 - g)\mu_s [15]. The model is based on the assumption that photons are generated at a source depth of z_0 = 1/\mu_s', which is one reduced mean free path. From there, photons are emitted in all directions. The most important boundary condition is that no reflected light can cross the boundary back into the tissue. This can be achieved mathematically by defining an "extrapolated boundary" at z_b = 2D above the real boundary and placing a "virtual source" mirroring the source at z_0 with respect to this extrapolated boundary. The virtual source is located at (z_0 + 2z_b) above the real boundary. This is where the terms z_0 and (z_0 + 2z_b) in equation 5 and the definitions of \rho_1 and \rho_2 come from.

This model was used to compare the simulation to and test the algorithm on.

The reflectance is the Laplace transform of the path length distribution of the photons for µ_a = 0 at the same radial distance r from the light source [8]. This path length distribution is derived from the time-resolved diffuse reflectance [2] by setting the path length l = c · t, where c is the speed of light:

p(r, l) = \text{const} \cdot l^{-5/2} \left[ z_0\, e^{-\rho_1^2/(4Dl)} + (z_0 + 2z_b)\, e^{-\rho_2^2/(4Dl)} \right],    (6)

where const is a normalization constant. Equation 6 is the inverse Laplace transform of equation 5. This means the algorithm of Harald Stehfest can be used to get equation 6 from equation 5: R(µ_a) plays the role of P(s) and p(l) that of F(t).
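For reference, the two model functions can be sketched in Python as follows. The default µ'_s = 10 cm⁻¹ matches the value used later in this project; function and parameter names are illustrative.

import numpy as np

def reflectance(mu_a, r, mu_s_red=10.0):
    # Diffusion-model reflectance R(mu_a) of equation (5) at radial distance r.
    z0 = 1.0 / mu_s_red                    # source depth: one reduced mean free path
    D = 1.0 / (3.0 * mu_s_red)             # diffusion constant as defined in the text
    zb = 2.0 * D                           # extrapolated boundary
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_red))
    rho1 = np.sqrt(z0**2 + r**2)
    rho2 = np.sqrt((z0 + 2*zb)**2 + r**2)
    return (z0 * (mu_eff + 1/rho1) * np.exp(-mu_eff*rho1) / rho1**2
            + (z0 + 2*zb) * (mu_eff + 1/rho2) * np.exp(-mu_eff*rho2) / rho2**2) / (4*np.pi)

def path_length_distribution(l, r, mu_s_red=10.0, const=1.0):
    # Un-normalized path length distribution p(r, l) of equation (6).
    z0 = 1.0 / mu_s_red
    D = 1.0 / (3.0 * mu_s_red)
    zb = 2.0 * D
    rho1 = np.sqrt(z0**2 + r**2)
    rho2 = np.sqrt((z0 + 2*zb)**2 + r**2)
    return const * l**-2.5 * (z0 * np.exp(-rho1**2 / (4*D*l))
                              + (z0 + 2*zb) * np.exp(-rho2**2 / (4*D*l)))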

3 Methods

3.1 Simulation

In the first half of this project, a Monte Carlo simulation was built in Python. This simulation was based on the MCML algorithm as described in [16]. The source code is available on GitHub [17].


Figure 1: A schematic display of photons moving through tissue. A source of light is pointed at the tissue; inside the tissue every photon follows a unique path, and three potential paths are shown. The model used in this project assumes that each photon is partially absorbed in each step, until so much of its weight has been absorbed that it vanishes. When a photon hits the boundary, part of it is transmitted and the other part continues propagating.


The simulation was run for absorption coefficients from 0 cm⁻¹ to 10 cm⁻¹ in steps of 0.05 cm⁻¹, with one million photons simulated for each absorption coefficient. Each reflected photon was saved along with its weight, path length and radial distance. This data could later be used in the second part of the project, where Harald Stehfest's algorithm [14] was tested.

In the simulation, photons entered the tissue and scattered until they were absorbed or reflected. It was assumed that the beam of photons entering the tissue was infinitely thin and that only photons that made it into the skin were simulated; no check was done on whether photons would be reflected at the surface, so all simulated photons were assumed to already be in the tissue. It was also assumed that the tissue had only one boundary, at z = 0, with tissue for z > 0 and air for z < 0. The slab of tissue was thus assumed to be infinitely thick and infinitely wide, meaning all photons had to be either absorbed or reflected; no transmission was possible.

Now the inner workings of the simulation will be described in more detail. Step one of the simulation is creating a list of photons. Each photon contains a position (x, y, z) and a trajectory (u_x, u_y, u_z), which start out as (0, 0, 0) and (0, 0, 1) respectively. The trajectories must always satisfy u_x² + u_y² + u_z² = 1. All of this is in accordance with the boundary of the tissue being at z = 0 and the beam being infinitely thin and pointing along the z-axis. Each photon also has a path variable, in which the current path length of the photon is stored, and a weight that starts at 1 and gradually decreases as the photon propagates, until the photon ceases to exist.
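A minimal Python representation of this photon state could look as follows; this is an illustrative sketch, not the data structure used in the project's source code [17].

import dataclasses

@dataclasses.dataclass
class Photon:
    # Position, trajectory, accumulated path length and weight of one photon.
    x: float = 0.0
    y: float = 0.0
    z: float = 0.0
    ux: float = 0.0
    uy: float = 0.0
    uz: float = 1.0      # launched along the z-axis into the tissue
    path: float = 0.0    # accumulated path length
    weight: float = 1.0  # reduced by absorption as the photon propagates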

Once the list of photons is created, a loop is entered in which each photon takes a step in every iteration. For each photon, the length of the step is decided by

s = -\ln(\text{rand}) / \mu_t,    (7)

where s is the length of the step, rand is a random number between 0 and 1 and µ_t = µ_s + µ_a, with µ_s the scattering coefficient and µ_a the absorption coefficient. 1/µ_t is the mean free path. Next, the step is performed by altering the position:

x = x + s \cdot u_x    (8)
y = y + s \cdot u_y    (9)
z = z + s \cdot u_z.    (10)
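In Python this step could be sketched as follows, building on the illustrative Photon class above; 1 − rand is used so that the logarithm of zero is never taken.

import math, random

def take_step(photon, mu_a, mu_s):
    # Sample a step length from equation (7) and move the photon, equations (8)-(10).
    mu_t = mu_a + mu_s
    s = -math.log(1.0 - random.random()) / mu_t   # exponentially distributed step
    photon.x += s * photon.ux
    photon.y += s * photon.uy
    photon.z += s * photon.uz
    photon.path += s
    return s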

In the next part the photon weight is used. If the photon is still inside the tissue after the step, part of the photon is absorbed (that part of the weight is removed), while the rest of the weight is scattered and continues to be simulated. If the photon has crossed the tissue boundary during the step, not much changes, except that first part of the photon escapes the tissue and this part of the weight is removed. The rest of the weight is sent back into the tissue, where it will again be absorbed and scattered. How exactly the reflecting, absorbing and scattering work will be described later.

It is important to note that in a regular MCML simulation both the absorbed photon weight and the reflected photon weight are stored in bins that correspond to the places where the weight was absorbed or reflected. The code in this project has three different modes. One of them saves photon weight in the regular way, with bins for both reflection and absorption. In a second mode only reflected photon weight is stored in bins; absorbed weight is simply discarded. In a third mode there is only one bin for reflected photon weight, at a certain radial distance, and all weight that does not go into that bin is discarded.

Because it is always a fraction of the photon that is absorbed or reflected, a photon will never be fully gone, but eventually the photon weight will be small enough to be inconsequential. So after a photon has been scattered, its weight is compared to a threshold value, 10⁻⁴ in this experiment. If a photon's weight drops below this threshold, one of two things happens, decided by a random number. If the random number is above a certain CHANCE, 0.1 in this experiment, the photon is terminated. To make up for the weight of all terminated photons, if the random number is below CHANCE, the photon weight is scaled up to

\text{weight} = \text{weight} / \text{CHANCE},    (11)

and the photon continues to propagate.
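This Russian-roulette step could be sketched as follows, with the threshold and CHANCE values quoted above; the function name is illustrative.

import random

WEIGHT_THRESHOLD = 1e-4   # threshold used in this project
CHANCE = 0.1              # survival probability used in this project

def roulette(photon):
    # Returns True if the photon keeps propagating, False if it is terminated.
    if photon.weight >= WEIGHT_THRESHOLD:
        return True
    if random.random() < CHANCE:
        photon.weight /= CHANCE   # equation (11): survivors are scaled up to conserve weight
        return True
    return False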

Because these simulations are run for hundreds of thousands of photons, and a list with all of them in it might cause a memory error, only a fraction of the photons run at the same time. Whenever a photon terminates, a new photon takes its place. Once the total number of photons has been reached, a terminated photon is no longer replaced and is completely removed from the list, until the list is empty.

That is the main part of the simulation. Now a more detailed description will be given of absorption, scattering and reflections.

The first and easiest is absorption. At the end of each step, the weight of the photon is diminished. The part of the weight that is absorbed is determined by µ_a as

\text{absorbedWeight} = \frac{\mu_a}{\mu_t} \cdot \text{weight},    (12)

which means the weight that continues propagating is given by

\text{weight} = \left(1 - \frac{\mu_a}{\mu_t}\right) \cdot \text{weight} = \frac{\mu_a + \mu_s - \mu_a}{\mu_a + \mu_s} \cdot \text{weight} = \frac{\mu_s}{\mu_t} \cdot \text{weight}.    (13)

So at the end of each step the photon weight has to be multiplied by a factor µ_s/µ_t to account for the absorption.
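In code this amounts to a one-line weight update; a sketch, again using the illustrative Photon class:

def absorb(photon, mu_a, mu_s):
    # Equations (12)-(13): the fraction mu_a/mu_t of the weight is absorbed,
    # the fraction mu_s/mu_t keeps propagating.
    mu_t = mu_a + mu_s
    absorbed = (mu_a / mu_t) * photon.weight
    photon.weight *= mu_s / mu_t
    return absorbed   # in the regular MCML mode this would be added to an absorption bin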

Whenever part of the photon is absorbed, the other part of the photon is scattered. To determine the new direction of the photon, the Henyey-Greenstein function is used. For the purpose of this simulation, the calculations performed to execute this function are

\cos\theta = \left[ 1 + g^2 - \left( \frac{1 - g^2}{1 - g + 2g \cdot \text{rand}_1} \right)^2 \right] / (2g)    (14)
\sin\theta = (1 - \cos^2\theta)^{0.5}    (15)
\text{temp} = (1 - u_z^2)^{0.5}    (16)
\phi = 2\pi \cdot \text{rand}_2,    (17)

where g is the anisotropy factor, θ is the angle of deflection, φ is the azimuthal angle, rand_1 and rand_2 are random numbers, and temp is a temporary variable used in the calculations.

If the current direction of the photon is (nearly) aligned with the z-axis, in this experiment if 1 − |u_z| < 10⁻¹², the equations to calculate the new direction are a simplified version of the general ones. The simplified versions are

u_x = \sin\theta \cdot \cos\phi    (18)
u_y = \sin\theta \cdot \sin\phi    (19)
u_z = \cos\theta \cdot u_z,    (20)

and the versions for the general case are

u_x = \sin\theta \, (u_x u_z \cos\phi - u_y \sin\phi) / \text{temp} + u_x \cos\theta    (21)
u_y = \sin\theta \, (u_y u_z \cos\phi + u_x \sin\phi) / \text{temp} + u_y \cos\theta    (22)
u_z = -\sin\theta \cos\phi \cdot \text{temp} + u_z \cos\theta.    (23)

One of these two sets of equations is applied to scatter the photon.
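A sketch of this scattering step in Python, following equations (14)-(23) and assuming g ≠ 0 (g = 0.8 in this project); the function name is illustrative.

import math, random

def scatter(photon, g):
    # Sample the deflection angle from the Henyey-Greenstein function, eq. (14),
    # and the azimuthal angle uniformly, eq. (17), then rotate the direction.
    rand1, rand2 = random.random(), random.random()
    cos_t = (1 + g*g - ((1 - g*g) / (1 - g + 2*g*rand1))**2) / (2*g)
    sin_t = math.sqrt(1 - cos_t**2)
    phi = 2 * math.pi * rand2
    ux, uy, uz = photon.ux, photon.uy, photon.uz
    if 1 - abs(uz) < 1e-12:
        # (nearly) aligned with the z-axis: simplified update, eqs. (18)-(20)
        photon.ux = sin_t * math.cos(phi)
        photon.uy = sin_t * math.sin(phi)
        photon.uz = math.copysign(cos_t, uz)
    else:
        # general update, eqs. (21)-(23)
        temp = math.sqrt(1 - uz**2)
        photon.ux = sin_t * (ux*uz*math.cos(phi) - uy*math.sin(phi)) / temp + ux*cos_t
        photon.uy = sin_t * (uy*uz*math.cos(phi) + ux*math.sin(phi)) / temp + uy*cos_t
        photon.uz = -sin_t * math.cos(phi) * temp + uz*cos_t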

Whenever a photon crosses the boundary from tissue into the other medium during its step, the photon is reflected before it is absorbed and scattered. To reflect the photon, it first has to take back the step it just took. After this, the photon takes a partial step that puts it exactly on the boundary. The size of this partial step is

s_1 = |z / u_z|.    (24)

The photon then takes this step and lands on the boundary. The part of the photon that is sent back into the tissue will have the same direction as before the reflection, except that its movement in the z-direction is turned around so the photon moves back into the tissue rather than out of it:

u_z = -u_z.    (25)

The other thing that needs to be determined is what part of the photon's weight leaves the tissue as reflectance and what part is sent back into it. This is done using the Fresnel reflection equation. To calculate this, some trigonometric expressions are first specified:

\cos\theta_1 = u_z    (26)
\sin\theta_1 = (1 - u_z^2)^{0.5}    (27)
\sin\theta_2 = \sin\theta_1 \cdot n_{tissue} / n_{medium}    (28)
\cos\theta_2 = (1 - \sin^2\theta_2)^{0.5},    (29)

where θ_1 is the angle of incidence, θ_2 the angle of refraction, n_tissue the refractive index of the tissue and n_medium the refractive index of the medium. For the purpose of this experiment, both indices were 1. The Fresnel reflection equation can now be calculated as

R_i = \frac{(\sin\theta_1\cos\theta_2 - \cos\theta_1\sin\theta_2)^2}{2} \cdot \frac{(\cos\theta_1\cos\theta_2 + \sin\theta_1\sin\theta_2)^2 + (\cos\theta_1\cos\theta_2 - \sin\theta_1\sin\theta_2)^2}{(\sin\theta_1\cos\theta_2 + \cos\theta_1\sin\theta_2)^2 \cdot (\cos\theta_1\cos\theta_2 + \sin\theta_1\sin\theta_2)^2},    (30)

where R_i is the fraction of the weight that will continue to propagate in the tissue. The weight of the photon is adjusted accordingly. After this, the photon takes the rest of the step, and then part of it is absorbed and the rest scattered. The size of the step the photon still has to take is

s_2 = s - s_1.    (31)
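The boundary handling could be sketched in Python as follows. This combines equations (24)-(31) for the illustrative Photon class above, adds guards for normal incidence and total internal reflection that the equations do not spell out, and returns the weight that escapes the tissue; absorption and scattering of the remaining weight are left to the caller.

import math

def cross_boundary(photon, s, n_tissue=1.0, n_medium=1.0):
    # Undo the full step, then step exactly onto the z = 0 boundary, eq. (24).
    photon.x -= s * photon.ux
    photon.y -= s * photon.uy
    photon.z -= s * photon.uz
    s1 = abs(photon.z / photon.uz)
    photon.x += s1 * photon.ux
    photon.y += s1 * photon.uy
    photon.z += s1 * photon.uz
    # Fresnel reflection for unpolarized light, eqs. (26)-(30).
    cos1 = abs(photon.uz)
    sin1 = math.sqrt(1.0 - cos1**2)
    sin2 = sin1 * n_tissue / n_medium
    if sin2 >= 1.0:
        ri = 1.0                                                   # total internal reflection
    elif sin1 == 0.0:
        ri = ((n_tissue - n_medium) / (n_tissue + n_medium))**2    # normal incidence
    else:
        cos2 = math.sqrt(1.0 - sin2**2)
        sap = sin1*cos2 + cos1*sin2          # sin(theta1 + theta2)
        sam = sin1*cos2 - cos1*sin2          # sin(theta1 - theta2)
        cap = cos1*cos2 - sin1*sin2          # cos(theta1 + theta2)
        cam = cos1*cos2 + sin1*sin2          # cos(theta1 - theta2)
        ri = 0.5 * sam**2 * (cam**2 + cap**2) / (sap**2 * cam**2)  # eq. (30)
    escaped = (1.0 - ri) * photon.weight     # weight leaving the tissue (reflectance)
    photon.weight *= ri                      # weight sent back into the tissue
    photon.uz = -photon.uz                   # eq. (25)
    s2 = s - s1                              # eq. (31): remainder of the step
    photon.x += s2 * photon.ux
    photon.y += s2 * photon.uy
    photon.z += s2 * photon.uz
    return escaped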

Those are all the steps of the simulation. To test whether the simulation worked, the reflectance as a function of radial distance was plotted for three different µ_a, for both the simulation and the analytical solution from equation 5.

3.2 Testing the Algorithm

In the second half of the project, it was tested if Stehfest’s algorithm [14] could accurately perform an inverse Laplace transform on a signal of reflectance as a function of absorption coefficient. To do this the algorithm was first tested with test signals and an analytical solution for the reflectance, before moving on to test the algorithm on a data set and eventually testing it on a fit to the data set.

But before these tests could be done, the algorithm had to be implemented in code. At the start of the program, a list of V_i, equation 4, is created. There also has to be a way to calculate P(s). P(s) is a point on the function that is being inverted; this point can be determined either through an exact function or from a discrete set of points, where additional values are obtained through interpolation. Using these components, F(t) can then be determined for a set of t.

The first step is to create a list G that contains all necessary factorials:

G[0 : N + 1]
G[0] = 1
G[i] = G[i - 1] · i.

V_i can be split up into two parts: a part dependent on k and i, and a part solely dependent on k. Here i is the index used to iterate through the summation in equation 3 and k the index in equation 4. k is looped through for every value of i, so it is unnecessary to calculate the part solely dependent on k over and over again. That is why a list H is created that holds all possible values of the part of V_i that depends only on k. This list H looks like

H[0 : (N/2) + 1]
H[0] = 1
H[1] = 2 / G[(N/2) - 1]
H[k] = k^{N/2} · G[2k] / (G[(N/2) - k] · G[k] · G[k - 1]).

With these two lists, the list of V[i] can be created. How this is done is described in the next piece of pseudocode:

sn = (-1)^{N/2 + 1}
for each i from 1 to N:
    V[i] = 0
    for each k from ⌊(i + 1)/2⌋ to the smallest of i and N/2:
        V[i] = V[i] + H[k] / (G[i - k] · G[2k - i])
    V[i] = V[i] · sn
    sn = -sn

That was the first part needed to calculate F(t); the second part is the formula for calculating P(s). As mentioned earlier, the algorithm has to be tested. The first step of the tests is performing the algorithm with an exact analytical formula for P(s) of which the inverse Laplace transform is known as well. Two pairs are used to do this [18]:

F(t) = e^{-t}    (32)
P(s) = \frac{1}{s + 1}    (33)

and

F(t) = \cos(t)    (34)
P(s) = \frac{s}{s^2 + 1}.    (35)

The first pair is used to verify that the algorithm works properly for exponential functions. The second pair is used to check whether a function that does not converge to 0 at infinity does indeed yield a less accurate result.

For both pairs, F(t) and the outcome of the inverse Laplace transformation of P(s) were plotted and compared. This gave an idea of the accuracy of the algorithm.

To get an idea of how the algorithm handles inaccuracies, two more tests were done with the exponential test signal, equations 32 and 33. First P(s) was multiplied by 1.1, giving the signal a systematic deviation. Then, each time a value of P(s) was calculated, it was multiplied by a random factor between 10/11 and 11/10, giving the signal a random deviation.

After the tests with test signals were completed, the algorithm also had to be tested for the model of the path length distribution and its Laplace transform, the reflectance as a function of absorption coefficient as described in equations 6 and 5 respectively.

So far all tests had been based on continuous, analytical functions where P could be calculated exactly for each possible s. A data set, by contrast, only has a discrete set of s for which P is known. The solution to this was interpolating for any other s required: for each s the two closest data points were found and the slope between these points was used to determine the corresponding value. It was assumed that the data points were close enough together that the part of the graph between them could be approximated linearly.
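A sketch of this interpolation in Python (numpy.interp performs exactly this piecewise-linear interpolation for a sorted grid of s values):

import numpy as np

def P_interpolated(s, s_data, P_data):
    # Linear interpolation between the two nearest data points of P(s).
    # The resulting callable can be handed to the inversion routine in place of an exact P(s).
    return np.interp(s, s_data, P_data)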

To test whether this assumption was correct, this procedure was first used on the exponential pair, equations 32 and 33, and on the analytical functions for the reflectance and the path length distribution. In these tests, instead of calculating P(s) exactly, a set of data points was first created and only these data points were used to determine additional values of P(s). With these tests it could be determined how the interpolation influences the accuracy of the algorithm.

After this, the algorithm could be tested on the data set generated with the simulation described earlier. Because the path length of each reflected photon was saved as well, a path length distribution could also be created directly from the data set. This path length distribution was then compared to the one from the algorithm, and a conclusion could be drawn on whether or not Stehfest's algorithm can accurately perform an inverse Laplace transformation on the reflectance as a function of absorption in tissue.

To finish things off, a fit of equation 5 was made to the data from the data set and the algorithm was used on this fit. There were three unknown constants in the model to determine: the radial distance r, the scattering coefficient µ_s and the anisotropy g. Because µ_s and g cannot be separated from each other in the equation, we used only two fitting parameters, the radial distance r and the reduced scattering coefficient µ'_s. The fit was made to create a smooth P(s) for the algorithm, rather than the erratic one from the data set, and to check how well the algorithm did for the smooth function. This does present the problem that the path length distribution becomes dependent on the model that is fitted to the data, which should always be borne in mind when performing inverse Laplace transformations on fitted functions.
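Such a two-parameter fit could be sketched with scipy.optimize.curve_fit as below. The model function restates equation 5, and the data arrays are synthetic placeholders standing in for the reflectance extracted from the Monte Carlo data set.

import numpy as np
from scipy.optimize import curve_fit

def model(mu_a, r, mu_s_red):
    # Reflectance model of equation (5) with fit parameters r and mu_s_red.
    z0, D = 1.0 / mu_s_red, 1.0 / (3.0 * mu_s_red)
    zb = 2.0 * D
    mu_eff = np.sqrt(3.0 * mu_a * (mu_a + mu_s_red))
    rho1 = np.sqrt(z0**2 + r**2)
    rho2 = np.sqrt((z0 + 2*zb)**2 + r**2)
    return (z0 * (mu_eff + 1/rho1) * np.exp(-mu_eff*rho1) / rho1**2
            + (z0 + 2*zb) * (mu_eff + 1/rho2) * np.exp(-mu_eff*rho2) / rho2**2) / (4*np.pi)

# Placeholder data: in the project these would be the binned Monte Carlo reflectances.
mu_a_data = np.arange(0.05, 10.0, 0.05)
R_data = model(mu_a_data, 1.0, 10.0) * (1 + 0.01 * np.random.randn(mu_a_data.size))

popt, pcov = curve_fit(model, mu_a_data, R_data, p0=[1.0, 10.0])
r_fit, mu_s_red_fit = popt

The smooth fitted curve, model(µ_a, r_fit, mu_s_red_fit), can then be handed to the inversion routine in place of an exact P(s).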

4 Results

The reflectance depends on more variables than the absorption coefficient and the radial distance, but the other variables were kept constant throughout the project: the reduced scattering coefficient µ'_s = 10 cm⁻¹ and the anisotropy factor g = 0.8.

In the first part of the project, a simulation was made to simulate photons in tissue. It was necessary to check if the simulation was accurate. To do this the reflectance as a function of radial distance was measured in the simulation and compared to equation 5, where reflectance was a function of radial distance rather than absorption coefficient and it could be tested for different absorption coefficients. The results of three of those tests can be found in figure 2. The data was normalized by the number of photons in the simulation and the area of the respective bin.


As expected, in all three figures the simulation and the function have different values at low radial distance, but around r = 0.05 cm the two lines come together and fall off with the same slope. It is also expected that the lines become steeper for higher absorption coefficients, because more absorption means fewer photons make it far into the tissue. This is indeed what the figures show.


Figure 2: The reflectance as a function of radius from the simulation for three different absorption coefficients, compared to equation 5. As µ_a gets larger, the slope of the graph becomes steeper. The data was normalized by dividing by the number of photons in the simulation and the area of the respective bin.


The rest of the results all relate to the testing of the algorithm. As mentioned, the algorithm depends on the size of the summation, N: the greater N, the greater the accuracy, until the maximum number of significant figures of the computer is reached. For the computer used here this was tested, and N = 22 turned out to be the ideal value, so this value was used everywhere the algorithm was applied.

The algorithm was first tested on two test signals; the results of these tests can be seen in figure 3. One of the tests was to check whether it is true that the algorithm works better for F(t) that converge to 0 at infinity, which can be concluded to be true based on the results. In figure 3a the outcome of the algorithm overlaps with the analytical solution, whereas the outcome of the algorithm in figure 3b attenuates quickly.

Figure 3a also confirms that the algorithm works and is accurate for at least some functions.

(a) F(t) = e^{-t}. (b) F(t) = cos t.

Figure 3: The result of using the algorithm on two exact test signals. As expected, the algorithm performs better on the converging exponential signal; it is inaccurate for the trigonometric function.


After the initial tests on exact functions, the algorithm was tested again on the exponential functions, but this time with slight deviations. Figure 4 contains the results. The algorithm handled a systematic deviation well, but did not give any useful results for the random deviation, suggesting that the algorithm does not handle noise well.

(a) Systematic deviation: F(t) belonging to P(s) = 1.1 · 1/(s+1). (b) Random deviation: P(s) = a · 1/(s+1); P(s) is still close to the original function. (c) Random deviation: F(t) belonging to P(s) = a · 1/(s+1) with 0.91 < a < 1.1.

Figure 4: The result of using the algorithm on an exponential function with a slight deviation, to see how well the algorithm handles noise. The systematic deviation gives a usable result; the random deviation does not, even though (b) confirms that the function with deviation is still close to the one without.


Next the algorithm was tested on the model for reflectance and its inverse Laplace transform, as described by equations 5 and 6. The solution for the path length distribution had to be normalized in the process. This was done by measuring the area under both graphs and equating them. For N = 22, the proper normalization constant was 1.85. This constant was accurate above r = 0.33 cm. The results of the test on the model for three different radial distances are shown in figure 5. It is clear that the model does not work properly for small radial distances, but it gets more accurate for larger radial distances. This is in accordance with figure 2.
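The area-equating normalization itself is a small computation; a sketch, assuming the model solution and the algorithm output are sampled on the same grid of path lengths l (variable names are illustrative):

import numpy as np

def equal_area_factor(l, p_model, p_algorithm):
    # Trapezoidal areas under both curves; their ratio is the constant by which
    # the model solution is scaled so that the two areas match.
    area = lambda p: np.sum(0.5 * (p[1:] + p[:-1]) * np.diff(l))
    return area(p_algorithm) / area(p_model)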


Figure 5: The result of using the algorithm on the model for reflectance at three different radial distances, compared to the solution from the model for the path length distribution. The normalization for the solution from the model was 1.85. The algorithm becomes more accurate for larger r.


Because the goal was to use the algorithm on a data set, the algorithm was also tested on discrete versions of the exact functions, where the values for P(s) were determined through interpolation. This was done for both the exponential function and the model. The results are shown in figure 6. As seen in figures 6b and 6d, the algorithm does not give a useful outcome: there is no apparent logic in the values of F(t), nor is there a conventional function that could reasonably be fitted to the outcome. Because the deviations caused by the interpolation are small, this means the algorithm cannot handle any noise; it only works on continuous functions.

(a) The points on P(s) used by the algorithm to calculate F(t), for both the continuous function and the discrete function. This is the test signal where F(t) is exponential. (b) The outcome of using the algorithm on a discrete set based on the exponential test signal. (c) The points on R(µ_a) used by the algorithm to calculate p(l), for both the continuous function and the discrete function. (d) The outcome of using the algorithm on a discrete set based on R(µ_a) at r = 1 cm.

Figure 6: The result of making P(s) discrete and interpolating, rather than keeping P(s) continuous. This was done for the exponential test signal and for the model. It does not yield useful results.


The results of the algorithm used on the data set are shown in figure 7. The data in figures 7a and 7b are normalized by dividing the values by the total number of photons in the simulation, which was one million for every absorption coefficient.

As before, the algorithm does not yield useful information for a discrete input, and its output does not resemble the path length distribution from figure 7b. It can be concluded that interpolating is not enough to use the algorithm on a data set.

(a) The reflectance as a function of absorption coefficient as determined from the data set. (b) The path length distribution as determined from the data set. This is what should come out of the algorithm. (c) The result of using the algorithm on the data set.

Figure 7: The results of using the algorithm on data from the data set at r = 1 cm, compared to what should come out of it. The algorithm does not yield useful results and is in no way similar to the path length distribution taken directly from the data set.


Lastly, a fit was made to the data and the algorithm was used on this fit. This was done at r = 1 cm, where the results for the two fitting parameters were r = 0.931 ± 0.003 cm and µ'_s = 12.4 ± 0.1 cm⁻¹. The fit and the graphs in figure 8 were made using a fitting and plotting program written in LabVIEW (version 2017, National Instruments, USA), which was available at the department.

As can be seen in figure 8a, the fit agrees closely with both the model and the simulation. After using the algorithm on the fit, the result still agrees closely with the model and the simulation, as can be seen in figure 8b. It is important to note that of the three graphs in figure 8b, the only one the algorithm was used on is the fit; the graph of the model comes from equation 6 and the graph of the simulation was created directly from the path lengths saved in the Monte Carlo data set. Because all three graphs are very similar, it can be concluded that there is potential in fitting a model to a data set to obtain the path length distribution.

(a) Input is a graph of the model as described by equation 5. MC is a graph of R(µ_a) gathered from the data set. Fit is a graph of a fit of equation 5 to the data set with fitting parameters r = 0.931 ± 0.003 cm and µ'_s = 12.3 ± 0.1 cm⁻¹. (b) Input is a graph of the model as described by equation 6. Monte Carlo is a graph of p(l) gathered from the data set. invLaplace of fit is the algorithm applied to the fit.

Figure 8: The results of using the algorithm on a fit made from the data at r = 1 cm. The result is dependent on the fitting model, but the fit is smooth enough to yield a useful result.

5 Conclusion and Discussion

The results of each test can be sorted into two categories: useless and useful. Useless results are all results with scattered points many orders of magnitude greater than expected; useful results are all results consisting of points that line up in a graph of approximately the right height. When looking at which functions for P(s) yielded useful results, there is one characteristic they all have in common: their smoothness. The algorithm worked for all exact functions it was used on. But signals that were not completely continuous, such as signals with a random deviation in them or points calculated from linear interpolation, rendered useless results. In the case of turning a continuous function into a discrete one, the discrepancy between interpolating P(s) and calculating it exactly was a factor of 1000 smaller than the actual values, and still this was enough to make the algorithm useless. This means that making the simulation more accurate by simulating more photons will likely not make a difference when using the algorithm. The algorithm is only useful in situations where a completely continuous function is used as P(s); it can never be used directly on a data set.

When looking more closely at the list V, it becomes evident why even small discrepancies can have big effects on the outcome. V depends on k^{N/2} · (2k)!, where k can get as big as N/2. These numbers start small for k = 1 but rise rapidly. Only fairly accurate models cancel out the big numbers in V and are able to produce a believable outcome.

But it might still be possible to use the algorithm by making a fit to the data, as was done in this experiment. This does indeed give a useful result, albeit one dependent on the model used for the fit, because it is the inverse Laplace transform of the chosen model that will be generated. This means that the algorithm can still be useful for testing models or determining parameters in a model; after all, an invalid model will give a useless, scattered outcome that cannot be mistaken for a real one. This can work as long as one bears in mind that the result is dependent on the model.

There are still many things to be tested about this algorithm, and about using inverse Laplace transformation in general to obtain the path length distribution. In this project only one model for the reflectance as a function of absorption coefficient was tested, and this was done with one data set from a simplified setting: an infinitely thin beam of light travelling through an infinitely thick and wide slab of tissue. This project shows that Harald Stehfest's algorithm for performing inverse Laplace transformation only works on perfectly smooth functions. It is still to be determined whether useful information can be gained from the path length distributions the algorithm provides. It is also unknown whether the algorithm holds up for a fit to a real data set rather than one created from a simulation, and it is not yet clear whether the useful results from the algorithm, the ones that actually look like graphs, are accurate enough to work with in the long run. This is a first step in using inverse Laplace transformation to obtain path length distributions; a lot is still left to be investigated.


6 Acknowledgements

I would like to thank my supervisor Dirk Faber for giving me the opportunity to do this project and also for answering all my questions, giving me feedback when necessary and guiding me through the past three months.

References

[1] Welch AJ, van Gemert MJC. Overview of Optical and Thermal Laser-Tissue Interaction and Nomenclature. In: Welch AJ, van Gemert MJC, editors. Optical-Thermal Response of Laser-Irradiated Tissue. 2nd ed. Dordrecht: Springer Netherlands; 2011. Chapter 1.

[2] Patterson MS, Chance B, Wilson BC. Time Resolved Reflectance and Transmittance for the Non-Invasive Measurement of Tissue Optical Properties. Appl Opt. 1989;28:2331-6.

[3] Miyake H, Nioka S, Zaman A, Smith DS, Chance B. The Detection of Cytochrome Oxidase Heme Iron and Copper Absorption in the Blood-Perfused and Blood-Free Brain in Normoxia and Hypoxia. Anal Biochem. 1991;192:149-55.

[4] Cope M, Delpy DT. System for Long-Term Measurement of Cerebral Blood and Tissue Oxygenation on Newborn Infants by Near Infra-Red Transillumination. Med & Biol Eng & Comput. 1988;26:289-94.

[5] Patterson MS, Wilson BC, Wyman DR. The Propagation of Optical Radiation in Tissue I: Models of Radiation Transport and their Application. Lasers Med Sci. 1991;6:155-68.

[6] Patterson MS, Wilson BC, Wyman DR. The Propagation of Optical Radiation in Tissue II: Optical Properties of Tissues and Resulting Fluence Distributions. Lasers Med Sci. 1991;6:379-90.

[7] Bolt RA, Ten Bosch JJ. Method for Measuring Position-Dependent Volume Reflection. Appl Opt. 1993;32:4641-5.

[8] Bolt RA, Rinzema KR, Ten Bosch JJ. Path-Length Distributions of Photons Re-Emitted from Turbid Media Illuminated by a Pencil-Beam. Pure Appl Opt. 1995;4:123-37.

[9] Hyde DE, Farrell TJ, Patterson MS, Wilson BC. A Diffusion Theory Model of Spatially Resolved Fluorescence from Depth-Dependent Fluorophore Concentrations. Phys Med Biol. 2001;46:369-83.

[10] Faber DJ, van Leeuwen TG. Optical Coherence Tomography. In: Welch AJ, van Gemert MJC, editors. Optical-Thermal Response of Laser-Irradiated Tissue. 2nd ed. Dordrecht: Springer Netherlands; 2011. Chapter 18.

[11] Bosschaart N, Faber DJ, van Leeuwen TG, Aalders MC. Measurements of Wavelength Dependent Scattering and Backscattering Coefficients by Low-Coherence Spectroscopy. J Biomed Opt. 2011;16:030503.

[12] Petoukhova AL, Steenbergen W, de Mul FFM. Path-length Distribution and Path-length-resolved Doppler Measurements of Multiply Scattered Photons by Use of Low-Coherence Interferometry. 2001;19:1492-4.

[13] Popescu G, Dogariu A. Optical Path-length Spectroscopy of Wave Propagation in Random Media. 1999;24:442-4.

[14] Stehfest H. Numerical Inversion of Laplace Transforms. Communications of the ACM. 1970;13:47-9.

[15] Kim A, Wilson BC. Measurements of Ex Vivo and In Vivo Tissue Optical Properties: Methods and Theories. In: Welch AJ, van Gemert MJC, editors. Optical-Thermal Response of Laser-Irradiated Tissue. 2nd ed. Dordrecht: Springer Netherlands; 2011. Chapter 8.

[16] Jacques SL. Monte Carlo Modeling of Light Transport in Tissue (Steady State and Time of Flight). In: Welch AJ, van Gemert MJC, editors. Optical-Thermal Response of Laser-Irradiated Tissue. 2nd ed. Dordrecht: Springer Netherlands; 2011. Chapter 5.

[17] van Sterkenburg NAC. Source code at https://github.com/nathhje/bachelorproject. 2018. DOI:10.5281/zenodo.1311759.

[18] Cohen AM. Numerical Methods for Laplace Transform Inversion. New York: Springer US; 2007. Page 198.
