
Magnetic resonance fingerprinting

Bart Tukker

July 14, 2017

Bachelor Thesis Mathematics, Physics and Astronomy
Supervisors: dr. Rudolf Sprik, dr. Chris Stolk

Korteweg-de Vries Instituut voor Wiskunde


Abstract

Magnetic resonance fingerprinting (MRF) is a new approach to quantitative magnetic resonance imaging proposed in the paper “Magnetic Resonance Fingerprinting” from Nature by Duerk et al. MRF uses a random RF pulse sequence, a dictionary of pre-determined signal evolutions for different physical parameters and a pattern matching algorithm to determine the parameters of a measured signal. The goal of this thesis is to determine whether a random pulse sequence works better than a non-random pulse sequence for MRF. Also, two pattern matching algorithms are compared with each other. The first pattern matching algorithm uses the correlation and the second one uses the least squares. MRF is implemented for three different pulse sequences and the two matching algorithms, for 50 pulses and 500 pulses, where the parameters T1, T2

and off-resonance frequency δω are estimated. In the first pulse sequence the flip angles, axis angles and repetition times are chosen constant. The second pulse sequence has random flip angles, a constant axis angle and random repetition times. In the last pulse sequence the flip angles, axis angles and repetition times are all chosen randomly. The implementation is done for the single voxel case and with a known proton density. The pulse sequences and matching algorithms are compared with each other by running multiple simulations with noise and using statistics to quantify how well the parameters are estimated. For 50 pulses no pulse sequence performs significantly better than the others. When the number of pulses is increased to 500, the random pulse sequence with random axis angles performs better than the other two. Least squares estimates the parameters best when the proton density is known.

Title: Magnetic resonance fingerprinting
Author: Bart Tukker, 10797025
Supervisors: dr. Rudolf Sprik, dr. Chris Stolk
Second graders: dr. Tom Hijmans, prof. dr. Rob Stevenson
Date: July 14, 2017

Korteweg-de Vries Instituut voor Wiskunde
Universiteit van Amsterdam
Science Park 904, 1098 XH Amsterdam
http://www.science.uva.nl/math


Contents

1. Introduction
2. Magnetic dipole moment in a magnetic field
   2.1. Proton spin in a magnetic field
   2.2. Larmor precession
   2.3. Solution of a magnetic dipole moment in a static magnetic field
3. Magnetization in a magnetic field
   3.1. Magnetization of non-interacting protons
   3.2. Longitudinal relaxation time
   3.3. Transverse relaxation time
   3.4. Bloch equations
4. RF pulse sequences
   4.1. RF pulse and RF pulse sequence
   4.2. The spin echo and T2 measurement
   4.3. Inversion recovery and T1 measurement
5. Magnetic resonance fingerprinting and its implementation
   5.1. Magnetic resonance fingerprinting
   5.2. Equivalence of the correlation and least squares
   5.3. Implementation of MRF
   5.4. Statistics
6. Results of the MRF simulations
   6.1. Non-random pulse sequence
   6.2. Random pulse sequence with a constant axis angle
   6.3. Random pulse sequence with random axis angles
   6.4. Overview results and discussion
7. Conclusion
8. Popular summary
A. Matlab code
B. Figures of the results


1. Introduction

Magnetic resonance imaging (MRI) is an imaging technique that uses nuclear spin resonance to form an image. MRI is noninvasive and relatively safe, since it uses magnetic fields and no ionizing radiation, unlike other imaging techniques such as CT scans or radiography, which use X-rays. That is why MRI is widely used in the medical field as a diagnostic tool and for medical research.

An additional benefit of MRI is that it can use multiple physical parameters, such as the relaxation times of nuclear spins and the proton density, to form an image. Since these parameters are characteristic for different materials, they can be used to distinguish different types of tissue in an image. However, in practice the images that are created with MRI are essentially qualitative: only the relative values of the parameters are used for imaging. With quantitative MRI additional information can be obtained by estimating the spatial variation of the physical parameters. This additional information can help in the discrimination of different types of tissue. However, the quantitative MRI methods that are used typically provide information for just a single parameter at a time. Moreover, the imaging can be very slow and is highly sensitive to imperfections of the system.

In a paper from Nature by Duerk et al. [5] a novel approach to quantitative imaging is proposed. They call this method magnetic resonance fingerprinting (MRF). This technique is based on the idea that for different parameter values the measured signal evolution is unique, like a fingerprint. With MRF a dictionary of predetermined signal evolutions for different parameter values is made. Then the parameters of a measured signal are simultaneously determined by matching the signal evolution with a signal evolution from the dictionary.

Because this technique seems very promising, this thesis investigates whether a random pulse sequence performs better than a non-random pulse sequence for MRF. Two different pattern matching algorithms are used for this and are also compared with each other. Due to the complexity of MRF, analysing this in full generality is too much for a bachelor thesis. That is why the choice is made to analyse the pulse sequences and pattern matching algorithms by implementing them. The pulse sequences and matching algorithms are compared with each other by running multiple simulations with noise and using statistics to quantify how well the parameters are estimated.

To be able to understand how MRF works, first the basic theory of MRI is studied. This is covered in the next three chapters. Chapter 2 explains how a single proton spin behaves in a magnetic field. This motion is captured by differential equations, which will also be solved. Chapter 3 introduces the relaxation times of the nuclear spins. They are used to derive the differential equations that describe how an ensemble of spins behaves in a static external magnetic field. These differential equations are known as the Bloch equations and will also be solved. Chapter 4 introduces the RF pulse sequence and how it is used to determine the relaxation times. These three chapters about the basic theory of MRI are written using the books by Brown et al. [2], Constantinides [3] and Oldendorf et al. [7].

The second half of the thesis is about MRF. Chapter 5 describes how MRF works, how it is implemented and which statistics are used for the simulations. Chapter 6 shows how well the parameters are estimated for three different pulse sequences and two different matching algorithms. This is done by running multiple simulations and using statistics to quantify how well each combination performs. The results are then discussed and explained.


2. Magnetic dipole moment in a magnetic field

To be able to understand how MRI uses the motion of a nuclear spin in a magnetic field to obtain the information for imaging, it is first studied how a magnetic dipole moment behaves in a magnetic field. In this chapter the magnetic dipole moment of a single proton spin is introduced, while the proton's interactions with its surroundings are ignored. The motion of a magnetic dipole moment is described by differential equations. With these the Larmor precession formula is derived, which tells with which frequency a magnetic dipole moment precesses around the magnetic field. Then the solutions of the differential equations are found. The obtained equations are important for the next chapter, where they are used to derive the Bloch equations and their solutions, including the proton spin's interactions with its surroundings.

2.1. Proton spin in a magnetic field

The proton spin can be viewed as a small circulating current. This circulating current is associated with a magnetic dipole moment. The magnetic dipole moment µ is defined as

\[
\vec{\mu} \equiv I \int d\vec{a} = I\vec{a}, \qquad (2.1)
\]

where I is the current and a the vector area. The magnetic dipole moment can be written in terms of its spin angular momentum J:

\[
\vec{\mu} = \gamma \vec{J}, \qquad (2.2)
\]

where γ is the proportionality constant. This constant is called the gyromagnetic ratio. The gyromagnetic ratio is determined experimentally and depends on the nucleus or particle. The gyromagnetic ratio for the proton is found to be

\[
\gamma = 2.675 \times 10^{8}\ \mathrm{rad/s/T}. \qquad (2.3)
\]

The torque N on a current distribution in a constant magnetic field B, in terms of the magnetic dipole moment, is

\[
\vec{N} = \vec{\mu} \times \vec{B}. \qquad (2.4)
\]

The derivation of this formula can be found in the book of Griffiths [6]. If the torque is nonzero, then the change in the total angular momentum J is

\[
\frac{d\vec{J}}{dt} = \vec{N}. \qquad (2.5)
\]

By combining all these expressions, the following formula is obtained:

\[
\frac{d\vec{\mu}}{dt} = \gamma\,\vec{\mu} \times \vec{B}. \qquad (2.6)
\]

By the right-hand rule, µ rotates clockwise around the magnetic field, which is shown in figure 2.1. Since dµ/dt is perpendicular to µ, its magnitude remains constant:

\[
\frac{d|\vec{\mu}|}{dt} = 0. \qquad (2.7)
\]

Figure 2.1.: Clockwise precession of a proton's magnetic dipole moment µ around the magnetic field B. Figure comes from Brown et al. [2].

2.2. Larmor precession

By looking at figure 2.1, the following formula can be constructed:

\[
|d\vec{\mu}| = \mu \sin\theta\, |d\phi|. \qquad (2.8)
\]

One also has

\[
|d\vec{\mu}| = \gamma\,|\vec{\mu} \times \vec{B}|\,dt = \gamma \mu B \sin\theta\, dt. \qquad (2.9)
\]

This gives the equality γB|dt| = |dφ|. With this the Larmor precession formula is obtained:

\[
\omega \equiv \left|\frac{d\phi}{dt}\right| = \gamma B. \qquad (2.10)
\]

So if the magnetic dipole moment is not perfectly aligned with a static magnetic field, the magnetic moment precesses around the magnetic field with the Larmor frequency.
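As a quick numerical illustration (the field strength is assumed here, not taken from the thesis): for the proton gyromagnetic ratio of (2.3) in a static field of B = 3 T,

\[
\omega_0 = \gamma B \approx 2.675 \times 10^{8} \times 3 \approx 8.0 \times 10^{8}\ \mathrm{rad/s},
\qquad
f_0 = \frac{\omega_0}{2\pi} \approx 128\ \mathrm{MHz}.
\]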

2.3. Solution of a magnetic dipole moment in a static magnetic field

Let x̂, ŷ and ẑ be unit vectors parallel to the x-, y- and z-axis respectively. Then one can write µ(t) as µx(t)x̂ + µy(t)ŷ + µz(t)ẑ, where µx(t), µy(t) and µz(t) are the x-, y- and z-components of µ respectively. Suppose B = B0ẑ. Then equation (2.6) can be written as

\[
\frac{d\vec{\mu}}{dt} = \gamma\,[\mu_x(t)\hat{x} + \mu_y(t)\hat{y} + \mu_z(t)\hat{z}] \times B_0\hat{z}
= \gamma\,[-\mu_x(t)\hat{y} + \mu_y(t)\hat{x}]\,B_0. \qquad (2.11)
\]

By splitting it into its components, the following three differential equations are obtained:

\[
\frac{d\mu_x}{dt} = \gamma B_0\,\mu_y(t) = \omega_0\,\mu_y(t), \qquad (2.12)
\]
\[
\frac{d\mu_y}{dt} = -\gamma B_0\,\mu_x(t) = -\omega_0\,\mu_x(t), \qquad (2.13)
\]
\[
\frac{d\mu_z}{dt} = 0, \qquad (2.14)
\]

where ω0 is the Larmor frequency. To find the solutions of (2.12) and (2.13), take the second derivative:

\[
\frac{d^2\mu_x}{dt^2} = -\omega_0^2\,\mu_x(t), \qquad
\frac{d^2\mu_y}{dt^2} = -\omega_0^2\,\mu_y(t).
\]

The solutions of these two second-order differential equations are

\[
\mu_x(t) = a\cos(\omega_0 t) + b\sin(\omega_0 t), \qquad
\mu_y(t) = c\cos(\omega_0 t) + d\sin(\omega_0 t),
\]

where a, b, c and d are constants. To satisfy the initial conditions, a must be µx(0) and c must be µy(0). Putting these two solutions back into the first-order differential equations, one gets

\[
\frac{d\mu_x}{dt} = \omega_0[-\mu_x(0)\sin(\omega_0 t) + b\cos(\omega_0 t)] = \omega_0[\mu_y(0)\cos(\omega_0 t) + d\sin(\omega_0 t)],
\]
\[
\frac{d\mu_y}{dt} = \omega_0[-\mu_y(0)\sin(\omega_0 t) + d\cos(\omega_0 t)] = -\omega_0[\mu_x(0)\cos(\omega_0 t) + b\sin(\omega_0 t)].
\]

This implies that b = µy(0) and d = −µx(0). Putting these two expressions back into the general solutions, the following solutions of (2.12) and (2.13) are obtained:

\[
\mu_x(t) = \mu_x(0)\cos(\omega_0 t) + \mu_y(0)\sin(\omega_0 t), \qquad (2.15)
\]
\[
\mu_y(t) = \mu_y(0)\cos(\omega_0 t) - \mu_x(0)\sin(\omega_0 t), \qquad (2.16)
\]
\[
\mu_z(t) = \mu_z(0), \qquad (2.17)
\]

for a magnetic field B = B0ẑ.

The solution can also be written in a complex form. Define µ+(t) as µx(t) + iµy(t). By writing out µx(t) + iµy(t), one obtains

\[
\begin{aligned}
\mu_x(t) + i\mu_y(t) &= \mu_x(0)\cos(\omega_0 t) + \mu_y(0)\sin(\omega_0 t) + i\mu_y(0)\cos(\omega_0 t) - i\mu_x(0)\sin(\omega_0 t) \\
&= \mu_x(0)[\cos(\omega_0 t) - i\sin(\omega_0 t)] + i\mu_y(0)[\cos(\omega_0 t) - i\sin(\omega_0 t)] \\
&= \mu_x(0)[\cos(-\omega_0 t) + i\sin(-\omega_0 t)] + i\mu_y(0)[\cos(-\omega_0 t) + i\sin(-\omega_0 t)] \\
&= [\mu_x(0) + i\mu_y(0)]\,e^{-i\omega_0 t}.
\end{aligned}
\]

So the following formula is obtained:

\[
\mu_+(t) = \mu_+(0)\,e^{-i\omega_0 t}. \qquad (2.18)
\]

With this expression it can easily be seen that the magnetic dipole moment in the xy-plane rotates clockwise with the Larmor frequency and that its magnitude is constant.


3. Magnetization in a magnetic field

In the last chapter the magnetic dipole moment of a single proton spin in the presence of a magnetic field was analysed. However, the interactions of the proton spin with its surroundings were ignored. The proton spin's interactions with its neighboring atoms cause a change of the local magnetic field. This leads to an ensemble of spins precessing at frequencies different from the Larmor frequency. Protons can also exchange spin energy with their surroundings due to these interactions. The quantum physical description can be found in the books of Brown et al. [2] and Constantinides [3].

In this chapter the magnetization is introduced. Its motion in a magnetic field, including the effects of the proton interactions, is described by differential equations known as the Bloch equations. The solution of the Bloch equations will also be derived in this chapter. The dynamics of the spin system is characterized by two constants: the longitudinal relaxation time T1 and the transverse relaxation time T2. Since

these relaxation times are characteristic for a material, these two parameters are very important for MRI to differentiate tissues from each other.

3.1. Magnetization of non-interacting protons

The magnetization M is defined as the sum over the magnetic moments µi in a volume V divided by this volume:

\[
\vec{M} = \frac{1}{V}\sum_i \vec{\mu}_i. \qquad (3.1)
\]

If the interactions of the protons with their environment are neglected, the derivative of the magnetization with respect to time is

\[
\frac{1}{V}\sum_i \frac{d\vec{\mu}_i}{dt} = \frac{\gamma}{V}\sum_i \vec{\mu}_i \times \vec{B}_{\mathrm{ext}},
\]

where equation (2.6) is used. The differential equation for the magnetization is then

\[
\frac{d\vec{M}}{dt} = \gamma\,\vec{M} \times \vec{B}_{\mathrm{ext}}. \qquad (3.2)
\]

If this expression is written out in its x-, y- and z-components and B_ext is set to B0ẑ, it becomes the same as the equations for a single spin:

\[
\frac{dM_x}{dt} = \gamma B_0 M_y(t) = \omega_0 M_y(t), \qquad (3.3)
\]
\[
\frac{dM_y}{dt} = -\gamma B_0 M_x(t) = -\omega_0 M_x(t), \qquad (3.4)
\]
\[
\frac{dM_z}{dt} = 0. \qquad (3.5)
\]

Their solutions were derived in the last chapter:

\[
M_x(t) = M_x(0)\cos(\omega_0 t) + M_y(0)\sin(\omega_0 t), \qquad (3.6)
\]
\[
M_y(t) = M_y(0)\cos(\omega_0 t) - M_x(0)\sin(\omega_0 t), \qquad (3.7)
\]
\[
M_z(t) = M_z(0). \qquad (3.8)
\]

From equation (2.18) it can be seen that in a reference frame that rotates the xy-plane with the Larmor frequency ω0, the solutions for the magnetization become

\[
M_x(t) = M_x(0), \qquad (3.9)
\]
\[
M_y(t) = M_y(0), \qquad (3.10)
\]
\[
M_z(t) = M_z(0). \qquad (3.11)
\]

So the derivative of the magnetization with respect to time in this reference frame is

\[
\frac{d\vec{M}}{dt} = 0. \qquad (3.12)
\]

It is useful to split the magnetization into its parallel and perpendicular components with respect to the external field. If B_ext is B0ẑ, then the parallel component of the magnetization is

\[
M_\parallel = M_z, \qquad (3.13)
\]

and the perpendicular component is

\[
\vec{M}_\perp = M_x\hat{x} + M_y\hat{y}. \qquad (3.14)
\]

If these two equations are put into (3.2), one gets the differential equations for the case where the proton interactions are ignored:

\[
\frac{dM_\parallel}{dt} = 0, \qquad (3.15)
\]
\[
\frac{d\vec{M}_\perp}{dt} = \gamma\,\vec{M}_\perp \times \vec{B}_{\mathrm{ext}}. \qquad (3.16)
\]

3.2. Longitudinal relaxation time

It is energetically beneficial for the protons to align their spins with the external magnetic field. The spins can transfer their excess energy by interacting with neighboring atoms and by radiating electromagnetic fields. The magnitude of the magnetization in this equilibrium state is proportional to the external field. For an external field B_ext = B0ẑ, let the magnitude of the magnetization in this equilibrium state be M0. Due to the proton interactions with the surroundings, the change of the longitudinal magnetization is proportional to the difference M0 − Mz. The differential equation for the longitudinal magnetization when the static external field B_ext = B0ẑ is applied then becomes

\[
\frac{dM_z}{dt} = \frac{1}{T_1}(M_0 - M_z), \qquad (3.17)
\]

where T1 is the longitudinal relaxation time. T1 depends on the environment of the protons. The longitudinal relaxation time is very important for MRI because MRI uses this parameter to differentiate different types of tissue.

To obtain the solution of this differential equation, replace Mz with Mz* + M0. The differential equation then becomes

\[
\frac{d(M_z^* + M_0)}{dt} = \frac{dM_z^*}{dt} = -\frac{1}{T_1} M_z^*.
\]

The solution of this equation is

\[
M_z^*(t) = c\,e^{-t/T_1},
\]

where c is a constant. If Mz*(t) is replaced with Mz(t) − M0, then the longitudinal magnetization at t = 0 is

\[
M_z(0) = M_0 + c.
\]

To satisfy the initial condition, c must be Mz(0) − M0. So the solution of differential equation (3.17) is

\[
M_z(t) = M_0 + (M_z(0) - M_0)\,e^{-t/T_1}. \qquad (3.18)
\]

This equation states that when the magnetization is tipped away from its equilibrium state, the longitudinal magnetization regrows exponentially from the initial value, Mz(0), to the equilibrium value, M0, with a rate that depends on T1.

3.3. Transverse relaxation time

Interactions of spins with their neighboring atoms cause a change of the local magnetic fields. The local magnetic field that a spin experiences is a combination of the applied external field and the field of its neighbours. The variations in the local fields cause spins to precess at different frequencies. The individual spins tend to fan out in time. This is


Figure 3.1.: The upper pictures show the dephasing of a set of spins after tipping them into the transverse plane. The lower pictures show the corresponding magnetization vector of the spins. This figure comes from Brown et al. [2].

referred to as dephasing. The net magnetization vector decreases in magnitude because of the dephasing. This effect is illustrated in figure 3.1.

This decay in transverse magnetization adds an additional term to the differential equation, which becomes

\[
\frac{d\vec{M}_\perp}{dt} = \gamma\,\vec{M}_\perp \times \vec{B}_{\mathrm{ext}} - \frac{1}{T_2}\vec{M}_\perp, \qquad (3.19)
\]

where T2 is the transverse relaxation time. Like the longitudinal relaxation time T1, T2 depends on the environment of the spins and is used in MRI to differentiate different types of tissue.

In a reference frame that rotates with the Larmor frequency, differential equation (3.19) becomes

\[
\frac{d\vec{M}_\perp}{dt} = -\frac{1}{T_2}\vec{M}_\perp. \qquad (3.20)
\]

This has the solution

\[
\vec{M}_\perp(t) = \vec{M}_\perp(0)\,e^{-t/T_2}. \qquad (3.21)
\]

If the magnitude of the transverse magnetization is defined as M⊥ ≡ |M⊥|, the solution becomes

\[
M_\perp(t) = M_\perp(0)\,e^{-t/T_2}, \qquad (3.22)
\]

which also describes the decay of the transverse magnetization in a non-rotating reference frame. From the solution it can be seen that the decay is exponential with a rate that depends on T2, and that the transverse magnetization goes to zero as time goes to infinity.

In practice, the external magnetic field is not perfectly homogeneous. This adds an additional dephasing of the magnetization. The additional decay is characterized by the decay time T2′. The overall relaxation time T2* is defined by

\[
\frac{1}{T_2^*} = \frac{1}{T_2} + \frac{1}{T_2'}. \qquad (3.23)
\]

In the next chapter it will be shown that the loss of transverse magnetization due to T2′ is recoverable. The intrinsic losses due to T2 are not recoverable, since they are caused by local, random, time-dependent field variations.
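As an illustration (the values are assumed, not taken from the thesis): with T2 = 100 ms and T2′ = 50 ms, equation (3.23) gives

\[
\frac{1}{T_2^*} = \frac{1}{100\ \mathrm{ms}} + \frac{1}{50\ \mathrm{ms}} = \frac{3}{100\ \mathrm{ms}}
\quad\Longrightarrow\quad T_2^* \approx 33\ \mathrm{ms},
\]

so the observed decay would be dominated by the field inhomogeneity rather than by the intrinsic T2.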

3.4. Bloch equations

By combining the differential equation of the longitudinal magnetization (3.17) with that of the transverse magnetization (3.19), the differential equation of the magnetization becomes

\[
\frac{d\vec{M}}{dt} = \gamma\,\vec{M} \times \vec{B}_{\mathrm{ext}} + \frac{1}{T_1}(M_0 - M_z)\hat{z} - \frac{1}{T_2}\vec{M}_\perp. \qquad (3.24)
\]

This differential equation is known as the Bloch equation. The Bloch equation is very important for MRI to determine the relaxation times T1 and T2.

Suppose B_ext = B0ẑ. By splitting the Bloch equation into its x-, y- and z-components, the differential equations become

\[
\frac{dM_x}{dt} = \omega_0 M_y - \frac{M_x}{T_2}, \qquad (3.25)
\]
\[
\frac{dM_y}{dt} = -\omega_0 M_x - \frac{M_y}{T_2}, \qquad (3.26)
\]
\[
\frac{dM_z}{dt} = \frac{M_0 - M_z}{T_1}, \qquad (3.27)
\]

where ω0 ≡ γB0 is the Larmor frequency. The differential equation of the z-component is the same as equation (3.17). Its solution has already been derived, which is equation (3.18). To solve the other two equations, replace Mx with µx e^{-t/T2} and My with µy e^{-t/T2}. The differential equation of the x-component then becomes

\[
\frac{d(\mu_x e^{-t/T_2})}{dt} = e^{-t/T_2}\frac{d\mu_x}{dt} - \frac{\mu_x e^{-t/T_2}}{T_2} = \omega_0 \mu_y e^{-t/T_2} - \frac{\mu_x e^{-t/T_2}}{T_2}.
\]

After canceling the terms µx e^{-t/T2}/T2 and dividing both sides by e^{-t/T2}, the equation becomes

\[
\frac{d\mu_x}{dt} = \omega_0 \mu_y.
\]

Similarly, for the y-component it becomes

\[
\frac{d\mu_y}{dt} = -\omega_0 \mu_x.
\]

Their solutions were already found in chapter 2:

\[
\mu_x(t) = \mu_x(0)\cos(\omega_0 t) + \mu_y(0)\sin(\omega_0 t),
\]
\[
\mu_y(t) = \mu_y(0)\cos(\omega_0 t) - \mu_x(0)\sin(\omega_0 t).
\]

Changing µx back into Mx e^{t/T2} gives

\[
M_x(t)\,e^{t/T_2} = M_x(0)\,e^{0/T_2}\cos(\omega_0 t) + M_y(0)\,e^{0/T_2}\sin(\omega_0 t),
\]
\[
M_y(t)\,e^{t/T_2} = M_y(0)\,e^{0/T_2}\cos(\omega_0 t) - M_x(0)\,e^{0/T_2}\sin(\omega_0 t).
\]

Then the complete set of solutions of the Bloch equations is

\[
M_x(t) = e^{-t/T_2}[M_x(0)\cos(\omega_0 t) + M_y(0)\sin(\omega_0 t)], \qquad (3.28)
\]
\[
M_y(t) = e^{-t/T_2}[M_y(0)\cos(\omega_0 t) - M_x(0)\sin(\omega_0 t)], \qquad (3.29)
\]
\[
M_z(t) = M_0 + (M_z(0) - M_0)\,e^{-t/T_1}. \qquad (3.30)
\]

In the reference frame that rotates the xy-plane with the Larmor frequency, the solutions are

\[
M_x(t) = e^{-t/T_2} M_x(0), \qquad (3.31)
\]
\[
M_y(t) = e^{-t/T_2} M_y(0), \qquad (3.32)
\]
\[
M_z(t) = M_0 + (M_z(0) - M_0)\,e^{-t/T_1}. \qquad (3.33)
\]

When t is taken to infinity, the magnetization goes to the equilibrium value M0ẑ. Important to note is that the longitudinal and transverse magnetizations behave independently of each other, so the magnitude of the magnetization, |M|, does not necessarily have a fixed length. In figure 3.2 the trajectory for an initial magnetization that lies in the transverse plane can be seen.
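To make the independent behaviour of the two components concrete, the following minimal MATLAB sketch (an illustration, not the thesis code from appendix A; the relaxation times are example values) evaluates the rotating-frame solutions (3.31)-(3.33) for an initial magnetization along the y-axis.

% Rotating-frame Bloch solutions (3.31)-(3.33) after a pi/2 pulse: M(0) = (0, M0, 0).
M0 = 1;                              % equilibrium magnetization
T1 = 0.165;                          % longitudinal relaxation time (s), example value
T2 = 0.055;                          % transverse relaxation time (s), example value
t  = linspace(0, 0.5, 500);          % time axis (s)
My = M0 * exp(-t/T2);                % transverse decay, eq. (3.32)
Mz = M0 + (0 - M0) * exp(-t/T1);     % longitudinal regrowth, eq. (3.33)
plot(t, My, t, Mz);
xlabel('t (s)'); ylabel('magnetization');
legend('M_y (transverse)', 'M_z (longitudinal)');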

The Bloch equations in the non-rotating reference frame can be simplified by using the complex representation

\[
M_+(t) \equiv M_x(t) + iM_y(t). \qquad (3.34)
\]

Figure 3.2.: The trajectory of the magnetization for an initial magnetization that lies in the transverse plane along the y-axis. It shows the regrowth of the longitudinal magnetization and the decay of the transverse magnetization, which happen independently of each other. This figure comes from Brown et al. [2].

Writing out Mx and My gives

\[
\begin{aligned}
M_x(t) + iM_y(t) &= e^{-t/T_2}[(M_x(0) + iM_y(0))\cos(\omega_0 t) + (M_y(0) - iM_x(0))\sin(\omega_0 t)] \\
&= e^{-t/T_2}[(M_x(0) + iM_y(0))\cos(\omega_0 t) - i(M_x(0) + iM_y(0))\sin(\omega_0 t)] \\
&= e^{-t/T_2}(M_x(0) + iM_y(0))[\cos(-\omega_0 t) + i\sin(-\omega_0 t)] \\
&= e^{-t/T_2 - i\omega_0 t}(M_x(0) + iM_y(0)).
\end{aligned}
\]

Then the following equation is obtained:

\[
M_+(t) = M_+(0)\,e^{-t/T_2 - i\omega_0 t}. \qquad (3.35)
\]

With this equation it can easily be seen, by looking at the complex exponent e^{-iω0t}, that the magnetization in the xy-plane rotates with the Larmor frequency. Since the magnitude of M+(t) is the same as the transverse magnetization M⊥ and is not changed by the complex exponent, the solution can be written as M⊥(t) = M⊥(0) e^{-t/T2}, which agrees with (3.22). By writing the Bloch equations in a longitudinal and a transverse component with respect to the external magnetic field, the following solutions are obtained:

\[
M_\parallel(t) = M_0 + (M_\parallel(0) - M_0)\,e^{-t/T_1}, \qquad (3.36)
\]
\[
M_\perp(t) = M_\perp(0)\,e^{-t/T_2}. \qquad (3.37)
\]


4. RF pulse sequences

The solutions of the Bloch equations can be used to describe how the magnetization is rotated away from its equilibrium state by applying an additional magnetic field. In this chapter it will be shown how this is done with RF pulses. Two basic RF pulse sequences will also be introduced to see how an RF pulse sequence works. The first one is the spin echo method, which is used to determine T2. The second one is the inversion recovery method, which

is used to determine T1.

4.1. RF pulse and RF pulse sequence

An RF pulse rotates the magnetization by applying an external magnetic field B1 for a finite time. In the reference frame that rotates the transverse plane with the Larmor frequency, the following equations tell how the magnetization is changed by the RF pulse:

\[
M_\parallel(t) = M_1 + (M_\parallel(0) - M_1)\,e^{-t/T_1}, \qquad (4.1)
\]
\[
M_+(t) = e^{-t/T_2 - i\omega_1 t}\,M_+(0), \qquad (4.2)
\]

where M1 is the equilibrium magnetization corresponding to the external field B1.

If the RF pulse is turned on for a very short duration of time, the effect of the relaxation times T1 and T2 can be ignored. That is because typical values of ω1 = γB1 are much larger than 1/T1 and 1/T2. This reduces the equations to

\[
M_\parallel(t) = M_\parallel(0), \qquad (4.3)
\]
\[
M_+(t) = e^{-i\omega_1 t}\,M_+(0). \qquad (4.4)
\]

Thus, if an RF pulse is applied for a short amount of time τ, the RF pulse causes a rotation of the magnetization by an angle of ω1τ radians around B1.

If the applied magnetic field ~B1 is perpendicular to ~B0 and rotates the magnetization

by an angle 90 degrees, then the RF pulse is called a π/2-pulse. A rotation of 180 degrees is called a π-pulse.
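As an order-of-magnitude illustration (the B1 value is assumed, not taken from the thesis): for B1 = 10 µT,

\[
\omega_1 = \gamma B_1 \approx 2.675 \times 10^{8} \times 10^{-5} \approx 2.7 \times 10^{3}\ \mathrm{rad/s},
\qquad
\tau_{\pi/2} = \frac{\pi/2}{\omega_1} \approx 0.6\ \mathrm{ms},
\]

which is indeed short compared with relaxation times of tens to hundreds of milliseconds.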

A series of RF pulses performed one after another is called an RF pulse sequence. The time between RF pulses is called the repetition time TR. Between the RF pulses the signal can be measured and used to determine T1 or T2. However, the coils that measure the magnetization only pick up the rotating magnetization, so the measured signal contains only the transverse component of the magnetization. How this works in more detail can be read in Brown et al. [2].


4.2. The spin echo and T2 measurement

In practice, the external magnetic field is not perfectly uniform. This causes additional dephasing of the transverse magnetization, characterized by the time constant T2′. T2′ can often be so small that 1/T2′ dominates 1/T2. Fortunately, the loss in magnetization caused by T2′ is recoverable. This can be done with an RF pulse sequence called the spin echo method.

The spin echo method consists of two RF pulses. First a π/2-pulse is applied, followed by a π-pulse. Suppose a π/2-pulse is applied at t = 0 about the x-axis in the reference frame that rotates with the Larmor frequency. Then the magnetization points along the y-axis. The individual spins at different positions r experience different magnetic fields that are not exactly equal to B0. The spins rotate with different frequencies and start to dephase. The accumulated phase φ of a spin relative to the y-axis is

\[
\phi(\vec{r}, t) = -\gamma\,\Delta B(\vec{r})\,t, \qquad (4.5)
\]

where ΔB(r) is the difference between the field a spin at position r experiences and the external field B0. At time t = τ the phase is −γΔB(r)τ. If at t = τ a π-pulse is applied about the y-axis, a positive phase becomes negative, and vice versa:

\[
\phi(\vec{r}, \tau^+) = -\phi(\vec{r}, \tau^-) = \gamma\,\Delta B(\vec{r})\,\tau. \qquad (4.6)
\]

Then for t > τ the accumulated phase is

\[
\phi(\vec{r}, t) = \phi(\vec{r}, \tau^+) - \gamma\,\Delta B(\vec{r})(t - \tau) = -\gamma\,\Delta B(\vec{r})(t - 2\tau) = -\gamma\,\Delta B(\vec{r})(t - T_E), \qquad (4.7)
\]

with the echo time TE defined as 2τ. Since the rate of phase accumulation of each spin remains unchanged, at time TE all spins have phase φ = 0, regardless of their positions and the fields they experience. In figure 4.1 it can be seen how the echo is formed by the spin echo method.

Additional echoes can be formed by applying π-pulses at times t = (2n − 1)τ with n ∈ N. This causes echoes to form at t = 2nτ = nTE. By measuring the signal at the echoes, the transverse relaxation time T2 can be determined. In figure 4.2 the pulse sequence diagram of the repeated spin echo method and the corresponding signal are shown. In this figure it can be seen that at the times t = 2nτ an echo is formed. Also a signal is drawn for the case where the dephasing effect of T2′ is removed.

4.3. Inversion recovery and T1 measurement

The measurement of T1 can be done through inversion recovery. However, unlike the spin echo method, which needs just a single experiment, multiple experiments are required to determine T1. Inversion recovery starts with the application of a π-pulse which inverts


Figure 4.1.: The formation of an echo by the spin echo method. Figure comes from Brown et al. [2].

the magnetization. Then at time t = TI a π/2-pulse is applied. In figure 4.3 the pulse sequence diagram of inversion recovery can be seen.

If at t = 0 a π-pulse is applied, the magnetization after the RF pulse is

\[
M_z(0^+) = -M_0. \qquad (4.8)
\]

The magnetization at 0 < t < TI, using the solution of the z-component of the Bloch equation (3.30), is

\[
M_z(t) = M_0 + (-M_0 - M_0)\,e^{-t/T_1} = M_0\left(1 - 2e^{-t/T_1}\right). \qquad (4.9)
\]

At time t = TI a π/2-pulse is applied. The magnetization at t > TI is

\[
M_\perp(t) = \left|M_0\left(1 - 2e^{-T_I/T_1}\right)\right|\,e^{-(t - T_I)/T_2}. \qquad (4.10)
\]

Figure 4.2.: The pulse sequence diagram for multiple spin echoes and the corresponding signal. At times t = 2nτ, with n ∈ N, an echo is formed. At the echoes the signal has the same magnitude as a signal without the dephasing effect of T2′. Figure comes from Brown et al. [2].

Figure 4.3.: The pulse sequence diagram for the inversion recovery. It consists of an initial π-pulse followed by a π/2-pulse after time TI.

This shows that after the magnetization has been rotated into the transverse plane, the magnitude of the initial transverse magnetization depends on TI. If a π/2-pulse is applied at time

\[
T_I = T_1 \ln 2, \qquad (4.11)
\]

the factor (1 − 2e^{-TI/T1}) becomes zero. So T1 can be found by varying TI until a zero in the signal is measured. The graph in figure 4.4 shows the transverse magnetization right after t = TI as a function of TI.
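For example, for the T1 value that is used for the observed signal in chapter 6, T1 = 165 ms, the signal null occurs at

\[
T_I = T_1 \ln 2 \approx 165\ \mathrm{ms} \times 0.693 \approx 114\ \mathrm{ms}.
\]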


Figure 4.4.: The magnitude of the transverse magnetization for an inversion recovery at time TI as a function of TI. For TI = T1 ln 2 the magnitude is zero.


5. Magnetic resonance fingerprinting and its implementation

In practice, the images that are created with MRI are essentially qualitative: only the relative values of the parameters are used for imaging. With quantitative MRI the spatial variation of the parameter values is also estimated. This provides additional information which can help in the discrimination of different types of tissue. However, imaging with quantitative MRI is very slow and often only one parameter can be estimated at a time.

In a paper from Nature by Duerk et al. [5] another approach to quantitative imaging is proposed. They call this method magnetic resonance fingerprinting (MRF). The difference of MRF compared to traditional MRI is that MRF uses a random pulse sequence, a dictionary of predetermined signal evolutions for different known values of the parameters, and a pattern matching algorithm. By matching an observed signal with one from the dictionary, all parameters are determined quantitatively at the same time.

In this chapter it is first explained how MRF works. The mathematical notation for MRF closely follows the paper by Davies et al. [4]. Then it is shown how MRF is implemented. Two pattern matching algorithms are used in the implementation. The first one uses the correlation and the second uses the least squares. Statistics is used to quantify how well MRF performs.

Due to the time constraint, only the single voxel case is implemented. In the paper from Nature it can be read how it is done in three dimensions. MRI in more dimensions is covered in the books of Brown et al. and Constantinides.

5.1. Magnetic resonance fingerprinting

MRF uses a random pulse sequence. The random pulse sequence starts with an inversion pulse. Let N be the number of pulses after the initial inversion pulse in the pulse sequence. Then the random pulse sequence consists of RF pulses with flip angles α = {α1, ..., αN} and repetition times TR = {TR1, ..., TRN}, where each flip angle and repetition time is chosen randomly. In figure 5.1 a pulse sequence diagram of a random pulse sequence is illustrated. The random pulse sequence can also have axis angles φ = {φ1, ..., φN} where the angles are chosen randomly. They indicate that the n-th RF pulse rotates the magnetization around the axis y = x tan(φn) for n = 1, ..., N.

After each pulse the signal Mx + iMy is measured. Let the time between the pulse and the measurement be TS, with TS < min{TR1, ..., TRN}. Let the measured signal evolution be X ∈ C^N, where the i-th component of X is the signal measured after the i-th RF pulse for i = 1, ..., N. X depends on the parameters T1, T2, off-resonance δω


Figure 5.1.: An RF pulse sequence with random repetition times TR1, ..., TRN and random flip angles α1, ..., αN.

and proton density ρ. δω is the difference between the frequency with which a proton spin precesses and the Larmor frequency. A mapping can be made from the parameters to X. For a chosen random pulse sequence with flip angles α, axis angles φ, repetition times TR and sample time TS, let B be the parameter map such that

\[
X = \rho\,B(\theta), \qquad (5.1)
\]

where θ = {T1, T2, δω} is the parameter set of the parameters T1, T2 and δω. ρ is multiplied with B because the signal is proportional to the proton density.

With this random pulse sequence a dictionary can be built. The dictionary consists of predetermined signal evolutions for different values of the parameters T1, T2 and δω. For a discrete set of parameter sets θ^(k) = {T1^(k), T2^(k), δω^(k)} with k = 1, ..., K, define the dictionary as D = {D1, ..., DK} with

\[
D_k = B(\theta^{(k)}) \in \mathbb{C}^N, \qquad (5.2)
\]

where ρ is set to 1. It is not essential to use a dictionary for MRF, but the benefit of a dictionary is that the signal evolutions can be determined in advance and the dictionary can be used multiple times to determine the parameters in each voxel. It is also not a requirement to use a random pulse sequence for MRF, but it is an easy way to obtain a dictionary with signal evolutions that are distinct from each other.

After the dictionary is generated, a signal is observed. Let the observed signal be

\[
d = \rho\,B(\theta) \in \mathbb{C}^N, \qquad (5.3)
\]

where ρ and θ are unknown. To find the parameters of the observed signal, a pattern matching algorithm is used. The pattern matching algorithm matches the observed signal with a signal Dk from the dictionary D. One pattern matching algorithm uses the correlation. For x, y ∈ C^N define the inner product as \(\langle x, y\rangle = \sum_{i=1}^{N} x_i \overline{y_i}\) and the norm as \(\|x\| = \sqrt{\langle x, x\rangle}\). Then define the correlation between the observed signal d and a signal Dk from the dictionary as

\[
\mathrm{Cor}(d, D_k) = \frac{|\langle d, D_k\rangle|}{\|d\|\,\|D_k\|}. \qquad (5.4)
\]

From the Cauchy-Schwarz inequality it is known that |⟨d, Dk⟩| ≤ ||d|| ||Dk||, so the correlation is less than or equal to 1. To find the signal from the dictionary that corresponds best to the observed signal, the correlation is maximized over k:

\[
\hat{k} = \arg\max_{k}\ \mathrm{Cor}(d, D_k). \qquad (5.5)
\]

Then the parameters of the observed signal are θ^(k̂) = {T1^(k̂), T2^(k̂), δω^(k̂)} and

\[
\rho = \frac{|\langle d, D_{\hat{k}}\rangle|}{\|D_{\hat{k}}\|^2}. \qquad (5.6)
\]

Another pattern matching algorithm uses the least squares. However, for the least squares to work it is assumed that the proton density of the observed signal is known. For the single voxel case this is not a problem, since the proton density can be determined by looking at the equilibrium magnetization M0. For an observed signal d with proton density ρ and a signal Dk from the dictionary, define the least squares as

\[
LS(d, D_k) = \left\| \frac{d}{\rho} - D_k \right\|^2. \qquad (5.7)
\]

The value is non-negative and is zero if d/ρ = Dk. The signal that represents the observed signal best is found by minimizing the least squares over k:

\[
\hat{k} = \arg\min_{k}\ LS(d, D_k). \qquad (5.8)
\]

Then the matched parameters of the observed signal are θ^(k̂) = {T1^(k̂), T2^(k̂), δω^(k̂)}. Because the dictionary is made for a discrete set of parameters, there is a discretization error on the estimated parameters. But there are also errors in the estimation of the parameters due to the noise on the observed signal. Part of the noise is due to the experimental equipment, but the main noise source is the presence of strong RF pulses that are used to make a three-dimensional image. This noise is more like a background signal and is assumed to be statistically uncorrelated for a single voxel.
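The two matching rules (5.5) and (5.8) amount to a single pass over the dictionary. The following minimal MATLAB sketch (an illustration, not the thesis code from appendix A; the variable names and placeholder inputs are assumptions) shows both rules, with D an N-by-K matrix whose columns are the dictionary signals and d the observed signal:

% Placeholder inputs for illustration only:
N = 50; K = 1100;
D   = randn(N, K) + 1i*randn(N, K);   % would come from the simulated dictionary
d   = randn(N, 1) + 1i*randn(N, 1);   % would be the observed (noisy) signal
rho = 1;                              % proton density, set to 1 in the thesis

% Correlation matching, eqs. (5.4)-(5.6)
corrVals = abs(d' * D) ./ (norm(d) * sqrt(sum(abs(D).^2, 1)));   % 1-by-K
[~, kCorr] = max(corrVals);                                      % best dictionary index
rhoEst = abs(d' * D(:, kCorr)) / norm(D(:, kCorr))^2;            % eq. (5.6)

% Least squares matching, eqs. (5.7)-(5.8), assuming rho is known
res = D - repmat(d / rho, 1, K);       % residual d/rho - D_k in each column
lsVals = sum(abs(res).^2, 1);          % squared norm per dictionary entry
[~, kLS] = min(lsVals);

Note that the correlation rule is insensitive to the overall magnitude of the signal, which is exactly the property discussed in section 6.4.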

5.2. Equivalence of the correlation and least squares

In this section it will be shown that for a certain proton density ρ the least squares and the correlation matching algorithms are equivalent. Physically, ρ is real valued and non-negative, but it is common in MRI to allow this parameter to absorb additional phase terms [4]. Therefore ρ is often allowed to be complex valued, which will be assumed in this section. In the last section the least squares was defined as

\[
LS(d, D_k) = \left\| \frac{d}{\rho} - D_k \right\|^2,
\]

where d is the observed signal and Dk = B(θk) the signal from the dictionary for parameter set θk. To show the equivalence of the two matching algorithms, multiply the least squares by |ρ|² and replace Dk with B(θk). Then, by writing this out:

\[
\|d - \rho B\|^2 = \langle d - \rho B,\ d - \rho B\rangle
= \langle d, d\rangle - 2\,\mathrm{Re}\big(\rho\langle B, d\rangle\big) + |\rho|^2\langle B, B\rangle. \qquad (5.9)
\]

Let δ be a disturbance on ρ. Then \(|\rho + \delta|^2 = |\rho|^2 + 2\,\mathrm{Re}(\rho\bar{\delta}) + |\delta|^2\). If ρ is replaced with ρ + δ in equation (5.9), the additional terms are

\[
-2\,\mathrm{Re}\big(\delta\langle B, d\rangle\big) + 2\,\mathrm{Re}\big(\rho\bar{\delta}\big)\langle B, B\rangle + |\delta|^2\langle B, B\rangle. \qquad (5.10)
\]

For a stable ρ, this expression must be equal to zero. Assuming δ is small enough that the last term can be neglected, equation (5.10) is zero when

\[
\rho = \frac{\langle B, d\rangle}{\|B\|^2}. \qquad (5.11)
\]

If the absolute value is taken, this is the same as the ρ in equation (5.6) that is calculated with the correlation. Putting this condition into equation (5.9), the least squares becomes

\[
\left\| d - \frac{\langle B, d\rangle}{\|B\|^2} B \right\|^2
= \|d\|^2 - 2\,\mathrm{Re}\!\left(\frac{\langle B, d\rangle}{\|B\|^2}\,\overline{\langle B, d\rangle}\right) + \left|\frac{\langle B, d\rangle}{\|B\|^2}\right|^2 \|B\|^2
= \|d\|^2 - \frac{|\langle B, d\rangle|^2}{\|B\|^2}.
\]

With the least squares matching algorithm, the parameters that correspond best to the observed signal d are found by minimizing this expression over k:

\[
\|d\|^2 - \frac{|\langle B(\theta^{(k)}), d\rangle|^2}{\|B(\theta^{(k)})\|^2}. \qquad (5.12)
\]

Dividing by ||d||², minimizing this expression over k is the same as maximizing

\[
\frac{|\langle B(\theta^{(k)}), d\rangle|}{\|d\|\,\|B(\theta^{(k)})\|}, \qquad (5.13)
\]

which is the same as the correlation.

5.3. Implementation of MRF

To assess how well MRF performs, MRF is implemented for the single voxel case. In this implementation the proton density ρ is set to 1, so only the parameters T1, T2 and δω are estimated.


Let M(t) = (Mx(t), My(t), Mz(t))^T be the magnetization, with M = (0, 0, M0)^T the magnetization in the equilibrium state. Let the reference frame rotate with the Larmor frequency in the transverse plane. Assume for now that the off-resonance δω = 0. Then, by using the solutions of the Bloch equations, the magnetization for t > 0 can be written as

\[
\vec{M}(t) =
\begin{pmatrix} M_x(0)\,e^{-t/T_2} \\ M_y(0)\,e^{-t/T_2} \\ M_0 + (M_z(0) - M_0)\,e^{-t/T_1} \end{pmatrix}
=
\begin{pmatrix} e^{-t/T_2} & 0 & 0 \\ 0 & e^{-t/T_2} & 0 \\ 0 & 0 & e^{-t/T_1} \end{pmatrix}
\begin{pmatrix} M_x(0) \\ M_y(0) \\ M_z(0) \end{pmatrix}
+
\begin{pmatrix} 0 \\ 0 \\ M_0(1 - e^{-t/T_1}) \end{pmatrix}.
\]

If the magnetization does not spin with the Larmor frequency, then the magnetization rotates by an angle φ = 2πδωt around the z-axis. The rotation matrix

\[
R_z(\phi) = \begin{pmatrix} \cos\phi & -\sin\phi & 0 \\ \sin\phi & \cos\phi & 0 \\ 0 & 0 & 1 \end{pmatrix} \qquad (5.14)
\]

rotates a three-dimensional vector around the z-axis by an angle of φ radians. For off-resonance δω the magnetization becomes

\[
\vec{M}(t) = R_z(\phi(t))
\begin{pmatrix} e^{-t/T_2} & 0 & 0 \\ 0 & e^{-t/T_2} & 0 \\ 0 & 0 & e^{-t/T_1} \end{pmatrix}
\begin{pmatrix} M_x(0) \\ M_y(0) \\ M_z(0) \end{pmatrix}
+
\begin{pmatrix} 0 \\ 0 \\ M_0(1 - e^{-t/T_1}) \end{pmatrix},
\]

where φ(t) = 2πδωt; the regrowth term is unaffected by Rz because it points along the z-axis.

For easier notation, define the matrices A and B as

\[
A_\theta(t) = R_z(2\pi\,\delta\omega\,t)\begin{pmatrix} e^{-t/T_2} & 0 & 0 \\ 0 & e^{-t/T_2} & 0 \\ 0 & 0 & e^{-t/T_1} \end{pmatrix}, \qquad (5.15)
\]
\[
B_\theta(t) = \begin{pmatrix} 0 \\ 0 \\ M_0(1 - e^{-t/T_1}) \end{pmatrix}, \qquad (5.16)
\]

where θ = {T1, T2, δω} is the parameter set, written as a subscript to show the dependence of the matrices on the parameters.

An RF pulse that rotates the magnetization by α radians around the y-axis can be calculated with the rotation matrix

\[
R_y(\alpha) = \begin{pmatrix} \cos\alpha & 0 & -\sin\alpha \\ 0 & 1 & 0 \\ \sin\alpha & 0 & \cos\alpha \end{pmatrix}. \qquad (5.17)
\]

An RF pulse that rotates the magnetization by α radians around the axis y = x tan(φ) can be calculated with the rotation matrix

\[
R(\alpha, \phi) = R_z(\phi)\,R_y(\alpha)\,R_z(-\phi). \qquad (5.18)
\]

By putting these together, the pulse sequence can be simulated. Let the random pulse sequence consist of N RF pulses with flip angles α = {α1, ..., αN}, axis angles φ = {φ1, ..., φN} and repetition times TR = {TR1, ..., TRN}. Let the sample time be TS. The pulse sequence begins with an inversion pulse. For parameter set θ, the magnetization right after the inversion pulse at t = 0 is

\[
\vec{M}_\theta(0^+) = \begin{pmatrix} 0 \\ 0 \\ -M_0 \end{pmatrix}.
\]

At the time just before TR1 the magnetization is

\[
\vec{M}_\theta(TR_1^-) = A_\theta(TR_1)\,\vec{M}_\theta(0^+) + B_\theta(TR_1).
\]

Then an RF pulse with flip angle α1 around the axis y = x tan(φ1) is applied, and the magnetization right after the RF pulse is

\[
\vec{M}_\theta(TR_1^+) = R(\alpha_1, \phi_1)\,\vec{M}_\theta(TR_1^-).
\]

After a time TS the signal Mx + iMy is read:

\[
\vec{M}_\theta(TR_1 + TS) = A_\theta(TS)\,\vec{M}_\theta(TR_1^+) + B_\theta(TS). \qquad (5.19)
\]

The magnetization at the times when the signal is read can be calculated by

\[
\vec{M}_\theta(\tau_n + TS) = A_\theta(TS)\,R(\alpha_n, \phi_n)\big[A_\theta(TR_n - TS)\,\vec{M}_\theta(\tau_{n-1} + TS) + B_\theta(TR_n - TS)\big] + B_\theta(TS), \qquad (5.20)
\]

where \(\tau_n = \sum_{i=1}^{n} TR_i\) and n = 2, ..., N. Then for a signal Dk ∈ C^N from the dictionary, the n-th component is equal to

\[
D_{k,n} = M_{x,\theta}(\tau_n + TS) + i\,M_{y,\theta}(\tau_n + TS). \qquad (5.21)
\]
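A compact MATLAB sketch of this recursion (an illustration of equations (5.14)-(5.21), not the thesis code from appendix A; the sequence parameters below are placeholders) could look as follows:

% Simulate one signal evolution for parameter set theta = {T1, T2, dw}.
M0 = 1; T1 = 0.165; T2 = 0.055; dw = 1.5;        % example parameters (s, s, Hz)
N  = 50; TS = 0.0058;                            % number of pulses, sample time (s)
alpha = deg2rad(45) * ones(1, N);                % flip angles (non-random example)
phi   = zeros(1, N);                             % axis angles
TR    = 0.015 * ones(1, N);                      % repetition times (s)

Rz = @(p) [cos(p) -sin(p) 0; sin(p) cos(p) 0; 0 0 1];                % eq. (5.14)
Ry = @(a) [cos(a) 0 -sin(a); 0 1 0; sin(a) 0 cos(a)];                % eq. (5.17)
R  = @(a, p) Rz(p) * Ry(a) * Rz(-p);                                 % eq. (5.18)
A  = @(t) Rz(2*pi*dw*t) * diag([exp(-t/T2) exp(-t/T2) exp(-t/T1)]);  % eq. (5.15)
B  = @(t) [0; 0; M0*(1 - exp(-t/T1))];                               % eq. (5.16)

M = [0; 0; -M0];                   % magnetization right after the inversion pulse
D = zeros(1, N);                   % signal evolution Mx + i*My at the sample times
for n = 1:N
    if n == 1
        M = A(TR(1)) * M + B(TR(1));             % free evolution until TR_1
    else
        M = A(TR(n) - TS) * M + B(TR(n) - TS);   % evolution since the last sample
    end
    M = R(alpha(n), phi(n)) * M;                 % RF pulse
    M = A(TS) * M + B(TS);                       % evolve until the sample time
    D(n) = M(1) + 1i*M(2);                       % eq. (5.21)
end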

To simulate noise on the observed signal, a disturbance δM is added at the measurements of the signal. Since it is assumed that the background noise is uncorrelated, the noise is normally distributed. The probability density of the normal distribution is

\[
f(x) = \frac{1}{\sigma\sqrt{2\pi}}\,e^{-\frac{(x-\mu)^2}{2\sigma^2}}, \qquad (5.22)
\]

where µ is the mean and σ the standard deviation. Let a value generated from the normal distribution be notated as

\[
X \sim \mathcal{N}(\mu, \sigma^2). \qquad (5.23)
\]

Then let the magnitude of the disturbance be normally distributed with mean µ = 0 and proportional to M0:

\[
\delta M(\sigma) \sim M_0\,|\mathcal{N}(0, \sigma^2)|. \qquad (5.24)
\]

The uniform distribution is used to rotate δM randomly in any direction of three-dimensional space. The probability density of the uniform distribution with minimum value a and maximum value b is

\[
f(x) = \begin{cases} \frac{1}{b-a} & a \le x \le b, \\ 0 & \text{otherwise}. \end{cases} \qquad (5.25)
\]

Let a value from the uniform distribution with minimum value a and maximum value b be notated as

\[
X \sim U(a, b). \qquad (5.26)
\]

Then the disturbance is generated with this expression:

\[
\delta\vec{M}(\sigma) \sim R_z\big(2\pi\,U(0,1)\big)\,R_y\big(\pi\,U(0,1)\big)\begin{pmatrix} 0 \\ 0 \\ M_0\,|\mathcal{N}(0, \sigma^2)| \end{pmatrix}, \qquad (5.27)
\]

where Rz and Ry are the rotation matrices from formulas (5.14) and (5.17) respectively. So this formula generates a magnetization with a polar angle uniformly distributed between 0 and π radians, an azimuthal angle uniformly distributed between 0 and 2π radians, and a normally distributed magnitude with mean µ = 0 multiplied by M0. Then the n-th component of the observed signal d ∈ C^N for parameter set θ is

\[
d_n = M_{x,\theta}(\tau_n + TS) + \delta M_x(\sigma) + i\big(M_{y,\theta}(\tau_n + TS) + \delta M_y(\sigma)\big), \qquad (5.28)
\]

where \(\tau_n = \sum_{i=1}^{n} TR_i\).

5.4. Statistics

In the implementation of MRF, statistics is used to quantify how well MRF performs when noise is added to the observed signal. This is done by simulating the signal multiple times and calculating statistical quantities on the found parameters.


Let S be the number of simulations. Then MRF finds parameter values (T1^1, ..., T1^S), (T2^1, ..., T2^S) and (δω^1, ..., δω^S). Let x = (x1, ..., xS) with

\[
x_s = \begin{pmatrix} T_1^s \\ T_2^s \\ \delta\omega^s \end{pmatrix}, \qquad (5.29)
\]

for s = 1, ..., S. Then the sample mean of x is

\[
\bar{x} = \begin{pmatrix} \bar{T}_1 \\ \bar{T}_2 \\ \overline{\delta\omega} \end{pmatrix}
= \frac{1}{S}\sum_{s=1}^{S} \begin{pmatrix} T_1^s \\ T_2^s \\ \delta\omega^s \end{pmatrix}
= \frac{1}{S}\sum_{s=1}^{S} x_s. \qquad (5.30)
\]

The sample covariance matrix of x is defined as

\[
C(x) = \frac{1}{S-1}\sum_{s=1}^{S} (x_s - \bar{x})(x_s - \bar{x})^T. \qquad (5.31)
\]

For i, j = 1, 2, 3, the (i, j)-th element of the covariance matrix is given by

\[
C_{i,j}(x) = \frac{1}{S-1}\sum_{s=1}^{S} (x_{i,s} - \bar{x}_i)(x_{j,s} - \bar{x}_j).
\]

The covariance matrix is a symmetric matrix with the variances on the diagonal and the covariances off the diagonal. The standard deviation of x is the square root of the variance:

\[
\sigma(x) = \begin{pmatrix} \sqrt{C_{1,1}} \\ \sqrt{C_{2,2}} \\ \sqrt{C_{3,3}} \end{pmatrix}. \qquad (5.32)
\]

According to Taylor [8], the parameters of the observed signal can be estimated as

\[
\begin{pmatrix} T_1 \\ T_2 \\ \delta\omega \end{pmatrix}
= \bar{x} \pm \sigma(x)
= \begin{pmatrix} \bar{T}_1 \\ \bar{T}_2 \\ \overline{\delta\omega} \end{pmatrix}
\pm \begin{pmatrix} \sqrt{C_{1,1}} \\ \sqrt{C_{2,2}} \\ \sqrt{C_{3,3}} \end{pmatrix}. \qquad (5.33)
\]
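In MATLAB the quantities (5.30)-(5.32) follow directly from the built-in mean and cov functions; a minimal sketch (illustrative placeholder data, not the thesis code) is:

% x is an S-by-3 matrix: row s holds the matched (T1, T2, dw) of simulation s.
S = 20;
x = [165 + 30*randn(S,1), 55 + 10*randn(S,1), 1.5 + 0.5*randn(S,1)];  % placeholder
xbar  = mean(x, 1);        % sample mean, eq. (5.30)
C     = cov(x);            % sample covariance matrix, eq. (5.31) (1/(S-1) scaling)
sigma = sqrt(diag(C))';    % standard deviations, eq. (5.32)
% Parameter estimates with errors, eq. (5.33): xbar +/- sigma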


6. Results of the MRF simulations

Three different pulse sequences are compared with each other in the simulations to see whether a random pulse sequence works better than a non-random pulse sequence. The first one is the non-random pulse sequence which has a constant flip angle, axis angle and repetition time. The second one is the random pulse sequence with random flip angles and repetition times, but a constant axis angle. The last pulse sequence is the random pulse sequence with random flip angles, random repetition times and random axis angles. Both the correlation algorithm and the least squares algorithm are used for every pulse sequence to see whether one performs better than the other. The simulations are done with a sequence length of 50 pulses and 500 pulses.

In the simulations M0 is set to 1 and the sampling time TS to 5.8 ms. All pulse sequences use a dictionary with signal evolutions for the same parameters. The dictionary is built up as follows. The parameter T1 ranges from 30 ms to 300 ms in steps of 30 ms, T2 ranges from 10 ms to 100 ms in steps of 10 ms, and δω from -5 Hz to 5 Hz in steps of 1 Hz. The dictionary contains the signal evolutions of all possible parameter value combinations. So the dictionary is a grid that consists of 10 ∗ 10 ∗ 11 = 1100 signal evolutions. The dictionary is kept small so that the amount of memory needed to store it on a computer is limited, but also to keep the simulation time low so that many simulations can be done in a reasonable time.
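A sketch of how such a parameter grid could be laid out in MATLAB (illustrative, not the thesis code from appendix A):

% The 10 * 10 * 11 = 1100 parameter combinations of the dictionary.
T1vals = (30:30:300) / 1000;         % 30 ms to 300 ms in steps of 30 ms (in s)
T2vals = (10:10:100) / 1000;         % 10 ms to 100 ms in steps of 10 ms (in s)
dwvals = -5:1:5;                     % -5 Hz to 5 Hz in steps of 1 Hz
[T1g, T2g, dwg] = ndgrid(T1vals, T2vals, dwvals);
theta = [T1g(:), T2g(:), dwg(:)];    % 1100-by-3 list of parameter sets
% Each row of theta would then be fed to the pulse sequence simulation
% to produce one dictionary signal evolution.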

For all experiments, the observed signal has parameters T1 = 165 ms, T2 = 55 ms and δω = 1.5 Hz, and noise δM(σ) with σ = 0.5. Normally, the parameters of the observed signal are not exactly the same as those of a dictionary entry. That is why the parameters are chosen in between two values from the dictionary. This gives a discretization error of 15 ms for T1, 5 ms for T2 and 0.5 Hz for δω. This is also why the dictionary is chosen with a constant step size, so that the discretization error is constant.

To show how much the signal is disturbed by the noise, figure 6.1 shows a signal evolution with noise and a signal evolution with the same parameters but without noise, for some random pulse sequence. The two upper pictures show the x-component of the magnetization and the lower two the y-component. The pictures show only snapshots of the magnetization measured at the sample time TS after each pulse. In these pictures it can be seen that the signal with noise is quite different from the signal without noise.

Now, let an observed signal be simulated and matched with a signal from the dictionary for some pulse sequence. Figure 6.2 shows the value of the matching for different parameters. The left pictures show the values for the correlation and the right pictures the values for the least squares. In each picture, one parameter varies while the other two remain fixed. In the blue pictures T1 is varied, in the red pictures T2, and in the green pictures δω.


Figure 6.1.: A signal evolution without noise and a signal evolution with noise are shown for some random pulse sequence. Both signals have the same parameters. The noise is δM(0.5). In the upper two pictures the x-component of the magnetization is shown and in the lower two pictures the y-component.


The correlation algorithm finds the maximum whereas the least squares finds the minimum. Due to the noise on the observed signal, the extremum is not exactly at the true value. In the pictures it can be seen that the extrema lie mostly around the true value of the observed signal. The pictures also show that the value of the matching changes monotonically towards the extremum and then monotonically away from it. This happens for all three pulse sequences. Because of this, the true extremum point, if the dictionary were continuous, could be found by fitting a curve through the points. Furthermore, this shows that increasing the density of the dictionary does not cause the matching algorithm to match the observed signal with completely different parameters.

Figure 6.2.: The matching algorithms calculate the value of the matching with different signals from the dictionary. The observed signal has parameter values T1 = 165 ms, T2 = 55 ms and δω = 1.5 Hz. The left pictures show the values for the correlation and the right pictures the values for the least squares. In each picture, one parameter varies while the other two remain fixed. In the blue pictures T1 is varied, in the red pictures T2 and in the green pictures δω. The parameters range over all the dictionary values. The correlation algorithm finds the maximum whereas the least squares finds the minimum.


6.1. Non-random pulse sequence

For these simulations the non-random pulse sequence is used, where the flip angles, axis angles and repetition times are taken constant. The flip angles are set to 45 degrees, the axis angle to 0 degrees and the repetition time to 15 ms. These values tended to give the best results. The number of pulses is set to 50.

Figure 6.3.: The x-component of a few signal evolutions from the dictionary for the non-random pulse sequence. In the blue graphs T1 is varied, in the green graphs

T2 is varied and in the red graphs δω is varied. The black lines have the

smallest value of the varying parameter and the brightest colors the largest value.


Figure 6.4.: The y-component of a few signal evolutions from the dictionary for the non-random pulse sequence. In the blue graphs T1 is varied, in the green graphs

T2 is varied and in the red graphs δω is varied. The black lines have the

smallest value of the varying parameter and the brightest colors the largest value.

In figure 6.3 the x-component of a few signal evolutions from the dictionary is shown and in figure 6.4 the y-component of the same signal evolutions. The pictures show only snapshots of the magnetization measured at the sample time TS after each pulse. In each picture, one parameter varies while the other two remain fixed. The fixed values are T1 = 150 ms, T2 = 50 ms and δω = 2 Hz. In the blue graphs T1 is varied from 30 ms, in the green graphs T2 is varied in steps of 20 ms, and in the red graphs δω is varied from -4 Hz to 4 Hz in steps of 2 Hz. The black lines have the smallest value of the varying parameter and the brightest colors the largest value. In the pictures it can be seen that the different signals evolve somewhat differently from each other. An important observation is that they all go to an equilibrium value after about 20 pulses. Also the symmetry around δω = 0 in the x-component of the signal can be seen. This is due to the fact that for an axis angle of 0 degrees, the axis around which the magnetization is rotated by the RF pulses points along the y-axis.

Now the observed signal is simulated and matched with one from the dictionary. For each matching algorithm this is simulated 20 times. Figure 6.5 shows which parameter values are chosen in each simulation. In the upper pictures the signals are matched with the correlation algorithm and in the lower pictures with the least squares algorithm. Each simulation is indicated with a circle that differs in size. The red cross is the true value of the observed signal. The left pictures show the matched parameters T1 and T2 for each simulation, the pictures in the middle show the matched parameters T1 and δω, and the right pictures show the matched parameters T2 and δω. In these pictures it can be seen that the matched parameters are quite widely spread, but they lie around the true value and are not biased to one side, for example. From these simulations the mean and the covariance matrix of the matched parameters can be calculated for the correlation and the least squares:

\[
\mu = \begin{pmatrix} 180 \\ 67 \\ 1.8 \end{pmatrix}, \quad
C = \begin{pmatrix} 3410.5 & 284.2 & 12.6 \\ 284.2 & 832.6 & -12.2 \\ 12.6 & -12.2 & 3.3 \end{pmatrix} \quad \text{(correlation)},
\]
\[
\mu = \begin{pmatrix} 171 \\ 60.5 \\ 1.5 \end{pmatrix}, \quad
C = \begin{pmatrix} 2851.6 & 273.2 & 9.5 \\ 273.2 & 531.3 & -10.3 \\ 9.5 & -10.3 & 1.9 \end{pmatrix} \quad \text{(least squares)}.
\]

Then the estimated parameter values with error are

\[
(T_1, T_2, \delta\omega) = (180 \pm 58.4\ \mathrm{ms},\ 67 \pm 28.8\ \mathrm{ms},\ 1.8 \pm 1.82\ \mathrm{Hz}) \quad \text{(correlation)},
\]
\[
(T_1, T_2, \delta\omega) = (171 \pm 53.4\ \mathrm{ms},\ 60.5 \pm 23.1\ \mathrm{ms},\ 1.5 \pm 1.4\ \mathrm{Hz}) \quad \text{(least squares)},
\]

where the errors are calculated by taking the square root of the diagonal elements of the covariance matrix. The errors in the estimation of the parameters are quite large. For T1 the error is about 4 times the discretization error, for T2 about 5 times and for δω about 3 times, for both the correlation and the least squares. There is not much of a difference between the correlation and the least squares, but the least squares has a slightly smaller error and estimates T2 and δω better than the correlation.

The same is done with 500 pulses. In figure 6.6 the matched parameters are shown for 20 simulations, where in the upper pictures the correlation is used and in the lower pictures the least squares. The pictures are quite similar to the pictures with 50 pulses. The biggest difference is that with the least squares the matched values of T1 and especially δω are less widely spread.


Figure 6.5.: The parameter matching for the non-random pulse sequence with 50 pulses. This is done for 20 simulations and each simulation is indicated with a circle that differs in size. In the upper three pictures the matching is done with the correlation and in the lower three pictures with the least squares. The red cross is the real value of the parameters of the observed signal.

The mean and the covariance matrix for these matched parameters are

\[
\mu = \begin{pmatrix} 184 \\ 58.5 \\ 1.6 \end{pmatrix}, \quad
C = \begin{pmatrix} 1998.9 & -633.2 & 36.9 \\ -633.2 & 676.6 & -13.3 \\ 36.9 & -13.3 & 1.4 \end{pmatrix} \quad \text{(correlation)},
\]
\[
\mu = \begin{pmatrix} 165 \\ 61.5 \\ 1.3 \end{pmatrix}, \quad
C = \begin{pmatrix} 236.8 & 39.5 & 0.05 \\ 39.5 & 350.3 & -7.32 \\ 0.05 & -7.32 & 0.33 \end{pmatrix} \quad \text{(least squares)}.
\]

The estimated parameter values with error are

\[
(T_1, T_2, \delta\omega) = (184 \pm 44.7\ \mathrm{ms},\ 58.5 \pm 26\ \mathrm{ms},\ 1.6 \pm 1.19\ \mathrm{Hz}) \quad \text{(correlation)},
\]
\[
(T_1, T_2, \delta\omega) = (165 \pm 15.4\ \mathrm{ms},\ 61.5 \pm 18.7\ \mathrm{ms},\ 1.3 \pm 0.58\ \mathrm{Hz}) \quad \text{(least squares)}.
\]

The correlation does not perform much better than with 50 pulses. However, the least squares has a much lower error for T1 and δω, which can already be seen in figure 6.6.


Figure 6.6.: The parameter matching for the non-random pulse sequence with 500 pulses. This is done for 20 simulations and each simulation is indicated with a circle that differs in size. In the upper three pictures the matching is done with the correlation and in the lower three pictures with the least squares. The red cross is the real value of the parameters of the observed signal.

For the next two pulse sequences, the pictures of the dictionary and of the parameter estimation with the correlation and the least squares are omitted. This is done because the pictures take up a lot of space, which reduces the readability of the text. These pictures, together with the pictures from the non-random pulse sequence, are put in appendix B. The pictures are placed next to each other so that they can be compared more easily.

6.2. Random pulse sequence with a constant axis angle

In this section the simulations are done with the random pulse sequence with a constant axis angle. The flip angles are chosen from a uniform distribution between 30 and 60 degrees, the axis angle is set to zero and the repetition times are chosen from a uniform distribution between 10.5 and 14 ms. Let the pulse sequence consist of 50 pulses. In figures B.3 and B.4 from appendix B the x-component and y-component, respectively, of a few signal evolutions from the dictionary are shown. The values of the parameters are the same as in the pictures of the previous dictionary. The signals look quite similar to the signals of the non-random pulse sequence, but they keep oscillating a little bit. There is still a symmetry in the x-component of the signal around δω = 0. The expectation is that the random pulse sequence with a constant axis angle performs similarly to the non-random pulse sequence.

Now, let the observed signal be simulated and matched with a signal from the dictionary 20 times. The mean and the covariance matrix of the parameters of these simulations are

\[
\mu = \begin{pmatrix} 183 \\ 53 \\ 2 \end{pmatrix}, \quad
C = \begin{pmatrix} 3022.1 & 22.1 & 26.8 \\ 22.1 & 516.8 & -6.8 \\ 26.8 & -6.8 & 2.2 \end{pmatrix} \quad \text{(correlation)},
\]
\[
\mu = \begin{pmatrix} 172.5 \\ 58 \\ 1.8 \end{pmatrix}, \quad
C = \begin{pmatrix} 3114.5 & 142.1 & 23.7 \\ 142.1 & 258.9 & -0.4 \\ 23.7 & -0.4 & 1.1 \end{pmatrix} \quad \text{(least squares)}.
\]

The estimated parameter values with error are

\[
(T_1, T_2, \delta\omega) = (183 \pm 55\ \mathrm{ms},\ 53 \pm 22.7\ \mathrm{ms},\ 2 \pm 1.49\ \mathrm{Hz}) \quad \text{(correlation)},
\]
\[
(T_1, T_2, \delta\omega) = (172.5 \pm 55.8\ \mathrm{ms},\ 58 \pm 16.1\ \mathrm{ms},\ 1.8 \pm 1.05\ \mathrm{Hz}) \quad \text{(least squares)}.
\]

The errors are quite large for almost all parameters, except δω for the least squares. The least squares performs a bit better than the correlation for all parameters, because the mean of T1 is much closer to the true value of the observed signal and the error in T2 is considerably smaller than the error with the correlation.

The matching is also done with 500 pulses for 20 simulations:

\[
\mu = \begin{pmatrix} 180 \\ 61.5 \\ 1.8 \end{pmatrix}, \quad
C = \begin{pmatrix} 2084.2 & -868.4 & 37.9 \\ -868.4 & 602.9 & -19.7 \\ 37.9 & -19.7 & 1.9 \end{pmatrix} \quad \text{(correlation)},
\]
\[
\mu = \begin{pmatrix} 171 \\ 56.5 \\ 1.65 \end{pmatrix}, \quad
C = \begin{pmatrix} 483.2 & -64.7 & 4.58 \\ -64.7 & 318.7 & -6.02 \\ 4.58 & -6.02 & 0.45 \end{pmatrix} \quad \text{(least squares)}.
\]

The estimated parameter values with error are

\[
(T_1, T_2, \delta\omega) = (180 \pm 45.7\ \mathrm{ms},\ 61.5 \pm 24.6\ \mathrm{ms},\ 1.8 \pm 1.36\ \mathrm{Hz}) \quad \text{(correlation)},
\]
\[
(T_1, T_2, \delta\omega) = (171 \pm 22\ \mathrm{ms},\ 56.5 \pm 17.9\ \mathrm{ms},\ 1.65 \pm 0.67\ \mathrm{Hz}) \quad \text{(least squares)}.
\]

The correlation does not perform better with 500 pulses than with 50 pulses, but the least squares shows an improvement in the errors of T1 and δω. These two errors are about as small as the discretization error. There is no improvement in the estimation of T2 with 500 pulses.


6.3. Random pulse sequence with random axis angles

The random pulse sequence that is used for the next simulations has the same flip angles and repetition times as the last random pulse sequence, but the axis angles are chosen randomly from a uniform distribution between 0 and 360 degrees. Let the number of pulses be 50. In figures B.5 and B.6 from appendix B a few signal evolutions from the dictionary are shown, with the same setup as in the figures of the dictionaries of the last two pulse sequences. In these pictures it can be seen that the signals behave quite differently from the signals of the last two pulse sequences. The signals do not go to an equilibrium and there is no symmetry anymore around δω = 0.

The mean and the covariance matrix of the parameters for these simulations are

\[
\mu = \begin{pmatrix} 175.5 \\ 61.5 \\ 1.2 \end{pmatrix}, \quad
C = \begin{pmatrix} 1541.8 & -572.4 & -3.8 \\ -572.4 & 661.8 & 1 \\ -3.8 & 1 & 2.2 \end{pmatrix} \quad \text{(correlation)},
\]
\[
\mu = \begin{pmatrix} 175.5 \\ 59 \\ 1.35 \end{pmatrix}, \quad
C = \begin{pmatrix} 973.4 & -4.74 & 6.4 \\ -4.74 & 241.1 & 0.89 \\ 6.4 & 0.89 & 1.61 \end{pmatrix} \quad \text{(least squares)}.
\]

The estimated parameter values with their errors are

(T1, T2, δω) = (175.5 ± 39.3 ms, 61.5 ± 25.7 ms, 1.2 ± 1.47 Hz) (correlation),

(T1, T2, δω) = (175.5 ± 31.2 ms, 59 ± 15.5 ms, 1.35 ± 1.27 Hz) (least squares).

This pulse sequence gives errors of about two to three times the discretization error. For this pulse sequence, least squares also performs better than the correlation.

The mean and the covariance matrix of the parameters for 20 simulations with 500 pulses are
\[
\mu = \begin{pmatrix} 163.5 \\ 58.5 \\ 1.4 \end{pmatrix}, \quad
C = \begin{pmatrix} 613.4 & -41.8 & 0.63 \\ -41.8 & 66.1 & 0.11 \\ 0.63 & 0.11 & 0.25 \end{pmatrix} \quad \text{(correlation)},
\]
\[
\mu = \begin{pmatrix} 166.5 \\ 57 \\ 1.55 \end{pmatrix}, \quad
C = \begin{pmatrix} 329.2 & 52.1 & 1.5 \\ 52.1 & 53.7 & -0.37 \\ 1.5 & -0.37 & 0.26 \end{pmatrix} \quad \text{(least squares)}.
\]

The estimated parameter values with their errors are

(T1, T2, δω) = (163.5 ± 24.8 ms, 58.5 ± 8.1 ms, 1.4 ± 0.5 Hz) (correlation),

(T1, T2, δω) = (166.5 ± 18.1 ms, 57 ± 7.3 ms, 1.55 ± 0.51 Hz) (least squares).

This is a big improvement for both the correlation and the least squares. All parameters are estimated accurately and all errors are about as small as the discretization error, especially the error in δω. Again, least squares performs better than the correlation.


6.4. Overview of the results and discussion

Table 6.1 shows the estimated parameters and their errors for all three pulse sequences, for 50 pulses and 500 pulses, and for both the correlation and the least squares algorithm. In this table the true value of the parameter of the observed signal is subtracted from the estimated parameter value; this difference is indicated with ∆.

This table shows that the non-random pulse sequence performs about as well as the random pulse sequence with a constant axis angle, for both matching algorithms and for both numbers of pulses. When the correlation is used, these two pulse sequences do not perform better when the number of pulses is increased to 500. This is expected from the pictures of the dictionary in appendix B: after about 20 pulses the signal evolution of the non-random pulse sequence reaches an equilibrium value, and the signal evolution of the random pulse sequence with a constant axis angle oscillates slightly around an equilibrium value. Increasing the length of the pulse sequence therefore does not help to differentiate the signal evolutions from each other. Because the correlation normalizes the magnitude of the magnetization, all signals look the same, since they are essentially straight lines with different magnitudes. So the correlation keeps having problems matching the parameters when the number of pulses is increased for these two pulse sequences. The pulse sequences do perform better when least squares is used. Because least squares takes the magnitude of the signal into account, the different signal evolutions from the dictionary can still be distinguished from each other, and by increasing the length of the pulse sequence the noise in the magnitude of the observed signal averages out. This is why knowing the magnitude of the signal helps significantly in determining the parameters.
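As a schematic illustration of the difference between the two matching rules, a sketch is given below. The actual algorithms are the ones defined in chapter 5; the function names and the assumption that the signals are stored as complex-valued arrays combining the x- and y-components are only for this sketch.

    import numpy as np

    def match_correlation(observed, dictionary):
        # Normalized inner product: the magnitudes of both signals are divided
        # out, so only the shape of the signal evolution is compared.
        inner = np.abs(dictionary.conj() @ observed)
        norms = np.linalg.norm(dictionary, axis=1) * np.linalg.norm(observed)
        return np.argmax(inner / norms)

    def match_least_squares(observed, dictionary):
        # Residual with the proton density fixed at 1: the dictionary entries
        # are not rescaled, so the magnitude of the observed signal matters.
        residuals = np.linalg.norm(dictionary - observed, axis=1)
        return np.argmin(residuals)

    # dictionary has shape (number of entries, number of pulses) and observed
    # has shape (number of pulses,); both functions return the index of the
    # best-matching dictionary entry.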

Pulse sequence         Pulses   Matching alg.    ∆T1 (ms)       ∆T2 (ms)      ∆δω (Hz)
Non-random             50       Correlation      15 ± 58.4      12 ± 28.8     1.8 ± 1.82
Constant axis angle    50       Correlation      18 ± 55        −2 ± 22.7     0.5 ± 1.49
Random axis angle      50       Correlation      10.5 ± 39.3    6.5 ± 25.7    −0.3 ± 1.47
Non-random             50       Least squares    6 ± 53.4       5.5 ± 23.1    0 ± 1.4
Constant axis angle    50       Least squares    7.5 ± 55.8     3 ± 16.1      0.3 ± 1.05
Random axis angle      50       Least squares    10.5 ± 31.2    4 ± 15.5      −0.15 ± 1.27
Non-random             500      Correlation      19 ± 44.7      3.5 ± 26      0.1 ± 1.19
Constant axis angle    500      Correlation      15 ± 45.7      6.5 ± 24.6    0.3 ± 1.36
Random axis angle      500      Correlation      −1.5 ± 24.8    3.5 ± 8.1     −0.1 ± 0.5
Non-random             500      Least squares    0 ± 15.4       6.5 ± 18.7    −0.2 ± 0.58
Constant axis angle    500      Least squares    6 ± 22         1.5 ± 17.9    0.15 ± 0.67
Random axis angle      500      Least squares    1.5 ± 18.1     2 ± 7.3       0.05 ± 0.51

Table 6.1.: The difference between the estimated parameter value and the true value of the observed signal, with the corresponding error, for all performed simulations. The difference is calculated by subtracting the true value from the estimated parameter value and is indicated with ∆.


The random pulse sequence with random axis angles shows a significant improvement in the estimation of the parameters and in their errors when the number of pulses is increased. This is because the signal evolutions from the dictionary do not go to an equilibrium value and keep showing patterns that differ from each other. So the results show that when the pulse sequence is long enough, the random pulse sequence with random axis angles becomes better than the random pulse sequence with a constant axis angle and the non-random pulse sequence.

The table also shows that for the random pulse sequence with random axis angles and 500 pulses, the errors are almost as small as the discretization error, which is 15 ms for T1, 5 ms for T2 and 1 Hz for δω. In this case the error can probably be reduced further by increasing the density of the dictionary.

The mean of T1 is consistently larger than the true value because the signal evolutions from the dictionary with a larger value of T1 are very similar to each other in both the x-component and the y-component of the signal. So the chance is higher that the observed signal is matched with a larger value of the parameter than with a smaller value. T2 and δω are also consistently estimated too large for the non-random pulse sequence and the random pulse sequence with a constant axis angle. This is probably because, although the magnitude of the signal increases for a larger parameter value, the relative increase in magnitude decreases, so the signals become relatively more similar. For the random pulse sequence with random axis angles, T2 is also consistently estimated too large. The reason is less obvious, but it is probably because the signal evolutions of the y-component in the dictionary are more similar to each other for larger values of T2.

For all pulse sequences there is a trend that least squares performs better than the correlation. This is probably because the proton density is known, which is set to 1 here. In that case the correlation and least squares are not equivalent, because $\langle B, d \rangle / \lVert B \rVert^2$ is normally not equal to 1 in the simulations. Least squares takes the magnitude into account, whereas the correlation normalizes it. This extra information seems to help to match the observed signal better. So if the proton density is known, least squares probably performs better than the correlation.
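To spell this out, suppose (as the expression above suggests; the precise convention is the one of section 5.2) that the scale factor multiplies B. Least squares with a free scale ρ then gives
\[
\min_{\rho}\, \lVert d - \rho B \rVert^{2}
\quad \Longrightarrow \quad
\rho^{*} = \frac{\langle B, d \rangle}{\lVert B \rVert^{2}},
\]
so least squares with the proton density fixed at ρ = 1 only coincides with the correlation match when this ratio happens to equal 1.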

In table 6.2 the covariances of the parameters are shown for all three pulse sequences, for 50 pulses and 500 pulses, and for the correlation and the least squares algorithms. The covariance is scaled by dividing it by the step sizes of the parameters in the dictionary. So the covariance of T1 and T2 is divided by 30 · 10 = 300, the covariance of T1 and δω by 30 · 1 = 30, and the covariance of T2 and δω by 10 · 1 = 10. The table suggests that there is no particular connection in the covariance between the different numbers of pulses or matching algorithms. The only remarkable observation is that the covariance for both the non-random pulse sequence and the random pulse sequence with a constant axis angle, when the correlation is used, has significant values with the same signs for all parameter combinations. This is probably because the signal evolutions from the dictionary for these pulse sequences go to an equilibrium after about 20 pulses. In the pictures of the dictionaries it can be seen that a larger T1 gives a smaller magnitude in the x-component of the signal than a smaller T1, and larger values of T2 and δω give a larger magnitude than smaller values of T2 and δω.


Pulse sequence         Pulses   Matching alg.    T1, T2     T1, δω     T2, δω
Non-random             50       Correlation      0.947      0.42       −1.22
Constant axis angle    50       Correlation      0.074      0.893      −0.68
Random axis angle      50       Correlation      −1.908     −0.127     0.1
Non-random             50       Least squares    0.911      0.317      −1.03
Constant axis angle    50       Least squares    0.474      0.79       −0.04
Random axis angle      50       Least squares    −0.016     0.213      0.089
Non-random             500      Correlation      −2.111     1.23       −1.33
Constant axis angle    500      Correlation      −2.895     1.263      −1.97
Random axis angle      500      Correlation      −0.139     0.021      0.011
Non-random             500      Least squares    0.132      0.002      −0.732
Constant axis angle    500      Least squares    −0.216     0.153      −0.602
Random axis angle      500      Least squares    0.174      0.05       −0.037

Table 6.2.: The covariances of the estimated parameters for all performed simulations. The covariances of the different parameter pairs are scaled by dividing them by the step sizes of the parameters in the dictionary.

Because the correlation does not take the magnitude of the signal into account, it becomes more likely that the observed signal is matched with a signal of a different magnitude. When the observed signal is matched with a signal that has a small x-component, it is more likely to be matched with a large value of T1 and small values of T2 and δω. But if the observed signal is matched with a signal that has a large x-component, it is more likely to be matched with a small value of T1 and large values of T2 and δω. Because T1 is chosen small while T2 or δω are chosen large, or vice versa, the covariance of T1 with T2 or δω is negative. In the same way a large T2 and a large δω are chosen together and a small T2 and a small δω are chosen together, so the covariance of T2 and δω is positive. It seems that this effect begins to
