
Intrasubject Multimodal Groupwise Registration with the Conditional Template Entropy

Mathias Polfliet, Stefan Klein, Wyke Huizinga, Margarethus M. Paulides, Wiro J. Niessen, Jef Vandemeulebroucke

PII: S1361-8415(18)30028-8
DOI: 10.1016/j.media.2018.02.003
Reference: MEDIMA 1339
To appear in: Medical Image Analysis
Received date: 13 April 2017
Revised date: 6 February 2018
Accepted date: 14 February 2018

Please cite this article as: Mathias Polfliet, Stefan Klein, Wyke Huizinga, Margarethus M. Paulides, Wiro J. Niessen, Jef Vandemeulebroucke, Intrasubject Multimodal Groupwise Registration with the Conditional Template Entropy, Medical Image Analysis (2018), doi: 10.1016/j.media.2018.02.003

This is a PDF file of an unedited manuscript that has been accepted for publication. As a service to our customers we are providing this early version of the manuscript. The manuscript will undergo copyediting, typesetting, and review of the resulting proof before it is published in its final form. Please note that during the production process errors may be discovered which could affect the content, and all legal disclaimers that apply to the journal pertain.


ACCEPTED MANUSCRIPT

Highlights

• A novel similarity metric for multimodal groupwise registration is proposed.

• The proposed method showed equivalent or improved registration accuracy.

• Pairwise mutual information is outperformed in terms of transitivity.


Intrasubject Multimodal Groupwise Registration with the Conditional Template Entropy

Mathias Polfliet (a,b,c,*), Stefan Klein (c), Wyke Huizinga (c), Margarethus M. Paulides (d), Wiro J. Niessen (c,e), Jef Vandemeulebroucke (a,b)

a) Vrije Universiteit Brussel (VUB), Department of Electronics and Informatics (ETRO), Pleinlaan 2, 1050 Brussels, Belgium
b) imec, Kapeldreef 75, B-3001 Leuven, Belgium
c) Biomedical Imaging Group Rotterdam, Depts. of Radiology & Medical Informatics, Erasmus MC, Rotterdam, the Netherlands
d) Hyperthermia Unit, Department of Radiation Oncology, Erasmus MC Cancer Institute - University Medical Centre Rotterdam, the Netherlands
e) Imaging Science and Technology, Faculty of Applied Sciences, Delft University of Technology, the Netherlands

Abstract

Image registration is an important task in medical image analysis. Whereas most methods are designed for the registration of two images (pairwise registration), there is an increasing interest in simultaneously aligning more than two images using groupwise registration. Multimodal registration in a groupwise setting remains difficult, due to the lack of generally applicable similarity metrics. In this work, a novel similarity metric for such groupwise registration problems is proposed. The metric calculates the sum of the conditional entropy between each image in the group and a representative template image constructed iteratively using principal component analysis. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical image data. These experiments showed equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics and improved transformation consistency compared to pairwise mutual information.

Keywords: groupwise image registration, multimodal, conditional entropy, principal component analysis, mutual information

* Corresponding author. Email address: mpolflie@etrovub.be (Mathias Polfliet)

1. Introduction

Biomedical image registration is the process of spatially aligning medical images, allowing for an accurate and quantitative comparison. An increasing number of image analysis tasks calls for the alignment of multiple (more than two) images. Examples include the joint analysis of tissue properties using multi-parametric MRI (Huizinga et al., 2016; Wells et al., 2015), spatio-temporal motion estimation from dynamic sequences (Metz et al., 2011; Vandemeulebroucke et al., 2011), atlas construction (Fletcher et al., 2009; Joshi et al., 2004; Wu et al., 2011) and population analyses (Geng et al., 2009).

One approach to perform such a registration task would be to take one image in the group as a reference and register all other images to this reference in a pairwise manner. However, such an approach has two distinct shortcomings. First, the choice of the reference image inherently biases the resulting transformations and subsequent data analysis towards the chosen reference. Secondly, only a fraction of the total information in the group is used in each pairwise registration, possibly leading to sub-optimal results.

An alternative is to perform a groupwise registration in which all transformations are optimized simultaneously. Transformations are expressed with respect to a common reference space, thereby removing the need for choosing a particular reference image, and the bias associated with that choice. Additionally, a global cost function simultaneously takes into account all information in the group of images. In this work we will address such groupwise similarity metrics for multimodal registration problems.

Multimodal intensity-based pairwise registration is commonly solved using mutual information (MI) (Collignon et al., 1995; Viola and Wells III, 1995; Wells et al., 1996), since it assumes a stochastic relationship between the two images to be registered. Extending MI to groupwise registration leads to a high-dimensional joint probability density function with an exponentially increasing number of histogram bins. Sparsity becomes a major concern as the number of images grows larger and limits the application to small groups of images (Wachinger and Navab, 2013).

A number of alternatives have been proposed to perform multimodal groupwise registration. Orchard and Mann (2010) proposed to use a Gaussian mixture model instead of histograms to approximate the joint probability density functions, and Spiclin et al. (2012) approximated the joint probability density functions with a nonparametric approach based on a hierarchical intensity-space subdivision scheme. However, both approaches remain limited by the sparsity in the joint intensity space and perform poorly for large groups of images.

Alternatively, one could represent the intensities as a graph and relate the length of such a graph to the entropy of the images (Hero et al., 2002). Such an approach requires a computationally expensive optimization for the construction of the graph and is not continuously differentiable, making gradient-based optimization difficult.

Zöllei et al. (2005) proposed the use of a voxelwise stack entropy. Herein, the intensities of all separate images in the group at a given sampled coordinate are grouped into a one-dimensional probability density distribution. For each sampled coordinate, the entropy is calculated and summed. However, for a low number of images in the group, the probability density functions are sparse, which limits its use to larger groups of images.

Wachinger et al. (2007) proposed to accumulate all pairwise estimates of mutual information for all possible pairs of images in the group under consideration. Such an approach leads to a computation time which is proportional to the square of the number of images, making its application to larger groups of images increasingly difficult.

Joshi et al. (2004) developed an interesting metric where the mean squared difference is used as a pairwise metric to compare every image in the group to the average image. Herein the average image is updated in each iteration. They applied the method to monomodal brain atlas construction, and it has also been applied to thoracic 4D CT data (Metz et al., 2011) and 4D ultrasound of the liver (Vijayan et al., 2014). The approach carries a number of advantages, such as the linear scaling of the computational complexity with respect to the number of images in the group and the possibility to parallelize the algorithm, making it feasible for both small and large groups of images. Bhatia et al. (2007) proposed to use the normalized mutual information (Studholme et al., 1999) as a pairwise similarity metric and the average image as a template image on monomodal intersubject data. The metric was termed the average normalized mutual information and has been used (together with the average mutual information) in subsequent literature as a metric for multimodal groupwise registrations (Ceranka et al., 2017; Hallack et al., 2014; Huizinga et al., 2016; Polfliet et al., 2016, 2017). However, the use of the average image as the template image might not be appropriate in multimodal data with intensities of varying scales, ranges and contrast.

In this work a novel similarity metric, the conditional template entropy (CTE), is introduced for multimodal groupwise registration based on this principle of pairwise similarity with respect to a template image. Following the original formulation by Joshi et al. (2004), we first design a suitable pairwise metric to compare the template image with every image in the group. Afterwards we investigate the use of a template image based on principal component analysis.

Given the linear scaling of the computational complexity, the metric can be applied to a wide range of intrasubject multimodal groupwise registration problems, for both small and large groups of images, and can be used as a general purpose metric. The proposed metric is validated in extensive experiments on synthetic and intrasubject clinical data, demonstrating equivalent or improved registration accuracy compared to other state-of-the-art methods and improved transformation consistency compared to pairwise MI.

2. Materials and Methods

2.1. Pairwise Registration

In pairwise registration, a target (moving, floating) image I_T is registered to a reference (fixed, source) image I_R. The transformation T_\theta, parameterized by \theta, needs to be determined that maps coordinates from the reference image domain to the target image domain (Fig. 1(a)). The registration can be defined as an optimization problem

\hat{\theta} = \arg\min_{\theta} \mathcal{C}(I_R, I_T \circ T_\theta) .   (1)

Here, \mathcal{C} is the cost function or objective value of the registration problem, which is often represented as a weighted sum of a dissimilarity metric, \mathcal{D}, and a regularization term, \mathcal{R}, such that

\mathcal{C} = \mathcal{D} + \lambda \mathcal{R} ,   (2)

in which \lambda is the weight for the regularization.

2.2. Mutual information

In the pairwise approach, mutual information (MI) (Collignon et al., 1995; Viola and Wells III, 1995; Wells et al., 1996) is defined as a similarity metric (\mathcal{S} = -\mathcal{D})

\mathcal{S}_{MI}(I_R, I_T \circ T_\theta) = H(I_R) + H(I_T \circ T_\theta) - H(I_R, I_T \circ T_\theta) .   (3)

Figure 1: Graphical illustration of (a) pairwise registration and (b) groupwise registration.

Here, H(·) and H(·,·) refer to, respectively, the marginal and joint entropy of the marginal and joint intensity distributions, often calculated via normalized histograms. In Eq. (3), the first term expresses the complexity of the reference image and the second term is the entropy of the target image mapped onto the reference, which favors transformations that map onto complex parts of the target image. The final term expresses the complexity of the shared or common relationship between the reference and target image. It is maximized when the (statistical or stochastic) relationship is stronger and thus less complex (Wells et al., 1996).

Following Maes et al. (1997), MI can be rewritten in terms of the conditional entropy (CE)

\mathcal{S}_{MI}(I_R, I_T \circ T_\theta) = H(I_R) - H(I_R | I_T \circ T_\theta) .   (4)

The conditional entropy H(A|B) describes the amount of information that remains in a random variable A once the random variable B is known. With the entropy of the reference image being independent of the transformation parameters, minimization of the negated conditional entropy and maximization of the mutual information lead to equivalent solutions of the registration problem.
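The relation between Eqs. (3) and (4) can be checked numerically. The sketch below is an illustration only, not the implementation used in this work: it estimates entropies with hard-binned NumPy histograms (rather than the Parzen windows of Section 2.6) on two synthetic "modalities", and verifies that H(A) - H(A|B) equals the mutual information of Eq. (3).

```python
import numpy as np

def entropies(a, b, bins=32):
    """Marginal and joint entropies of two intensity samples (hard binning)."""
    joint, _, _ = np.histogram2d(a, b, bins=bins)
    p_ab = joint / joint.sum()                     # normalized joint histogram
    p_a, p_b = p_ab.sum(axis=1), p_ab.sum(axis=0)  # marginal distributions
    h = lambda p: -np.sum(p[p > 0] * np.log(p[p > 0]))
    return h(p_a), h(p_b), h(p_ab)

rng = np.random.default_rng(0)
a = rng.normal(size=20000)
b = np.exp(a) + rng.normal(scale=0.05, size=20000)  # nonlinear "second modality"

H_a, H_b, H_ab = entropies(a, b)
mi = H_a + H_b - H_ab          # Eq. (3)
H_a_given_b = H_ab - H_b       # conditional entropy H(A|B)
assert np.isclose(mi, H_a - H_a_given_b)   # Eq. (4)
assert mi > 0.0                # stochastically related images share information
```

Since the relationship between the two samples is nonlinear, a correlation-based measure would understate it, while the entropy-based measure does not.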

2.3. Groupwise registration

In groupwise registration we consider a group of n images for which the transformations to a common reference frame are unknown. We can consider the following optimization problem to determine these transformations:

\hat{\mu} = \arg\min_{\mu} \mathcal{C}(I_1 \circ T_{\mu_1}, \ldots, I_n \circ T_{\mu_n}) ,   (5)

where T_{\mu_i} is the transformation, parameterized by \mu_i, that maps the coordinates from the common reference domain to the domain of the ith image (Fig. 1(b)). \mu is the vector formed by the concatenation of all separate transformation parameters \mu_i, and I_i is the continuous intensity function of the ith image.

2.4. Template construction

Joshi et al. (2004) proposed the following formulation for monomodal groupwise registration, in which both the transformation parameters and a template image are optimized:

\hat{\mu}, \hat{J} = \arg\min_{\mu, J} \frac{1}{n|S|} \sum_{i=1}^{n} \sum_{x \in S} (I_i \circ T_{\mu_i}(x) - J(x))^2 ,   (6)

with J the continuous intensity function of a template image, x the coordinate samples drawn from the image and S the set of these samples. The template image can be interpreted as being the image that is most similar to the other images in the group in terms of the mean squared differences. For a given value of the transform parameters, the optimization with respect to the template image J was solved analytically to be the average image

J(x) = \bar{I}_{\mu}(x) = \frac{1}{n} \sum_{i=1}^{n} I_i \circ T_{\mu_i}(x) .   (7)
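For the monomodal case this template is straightforward to sketch. In the toy example below, randomly generated arrays stand in for the warped images I_i ∘ T_{μ_i}; the average template of Eq. (7) is formed and the groupwise mean squared difference evaluated, confirming that the average is the minimizing template.

```python
import numpy as np

rng = np.random.default_rng(1)
# Stand-ins for the warped images I_i ∘ T_{μ_i} sampled on the common grid.
warped = rng.random((5, 64, 64))

template = warped.mean(axis=0)               # Eq. (7): the average image
msd = np.mean((warped - template) ** 2)      # groupwise mean squared difference

# The average is the analytic minimizer: any shifted template does worse.
assert np.mean((warped - (template + 0.1)) ** 2) > msd
```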

As such, the registration problem in Joshi et al. (2004) is reduced to

\hat{\mu} = \arg\min_{\mu} \frac{1}{n|S|} \sum_{i=1}^{n} \sum_{x \in S} \left( I_i \circ T_{\mu_i}(x) - \bar{I}_{\mu}(x) \right)^2 .   (8)

2.5. The conditional template entropy

In this work, a novel similarity metric for multimodal groupwise registration is proposed, based on this paradigm in which similarity of the group of images is measured with respect to an iteratively updated template image. Considering the interpretation of the entropy terms given in Section 2.2, we propose to measure similarity using the negated joint entropy of each image in the group with the template image, favoring transformations for which the template explains the group of images well; and the marginal entropies of each image in the group, encouraging transformations that map onto complex parts of the images in the group. Note that this is equivalent to a formulation based on the conditional entropy:

\hat{\mu}, \hat{J} = \arg\max_{\mu, J} \frac{1}{n} \sum_{i=1}^{n} \left[ H(I_i \circ T_{\mu_i}) - H(J, I_i \circ T_{\mu_i}) \right] = \arg\max_{\mu, J} -\frac{1}{n} \sum_{i=1}^{n} H(J | I_i \circ T_{\mu_i}) .   (9)

Observing the resulting metric, one can notice the resemblance with a formulation based on mutual information. The difference lies in the absence of the marginal entropy of the template image, H(J). As we will demonstrate, this term counteracts the alignment of the group of images. A representative template image is likely to grow sharper when converging towards the optimal registration solution, leading to a reduced complexity of its intensity distribution and a decrease in the marginal entropy, which is opposite of the desired optimization behavior. The proposed method based on the conditional entropy, as shown in Eq. (9), does not include this counteracting term.

To find the appropriate template image, we revisit Eq. (6), where the template image could be obtained analytically as the average image. Unfortunately, Eq. (9) cannot be solved analytically with respect to the template image, J, for a given set of transformations if the trivial solution of a constant template image with a single intensity is excluded. Hypothetically, one could set up an optimization scheme where the template image is predefined by a functional relationship and weights corresponding to the images in the group. Herein, the optimization of the transformation parameters could be alternated with the optimization of the weights for the template image. Such nested optimization is error-prone and costly, and undesirable in this context.

Alternatively, instead of maximizing Eq. (9) with respect to the template image, we propose a more pragmatic approach which maximizes the variance in the template image. By defining J as the linear combination of the images in the group, principal component analysis (PCA) can be used to find the weights associated to the images. This has previously been shown to reduce the noise due to motion in the template image (Melbourne et al., 2007). Additionally, negatively correlated intensities can be accounted for to increase the contrast in the template image, instead of decreasing the contrast as might be the case for simple intensity averaging.

PCA defines a linear transformation from a given high-dimensional space to a low-dimensional subspace whilst retaining as much variance as possible. In this work, PCA is performed with each sampled coordinate as a separate observation and the different images in the group corresponding to different features. The transformation to the 1-dimensional subspace along which the most variance is observed is given by the eigenvector associated with the largest eigenvalue. As such, the elements of this eigenvector can serve as the weights for the construction of the template image:

J(x) = \bar{I}^{PCA}_{\mu}(x) = \sum_{i=1}^{n} v_{i,\mu} \, I_i \circ T_{\mu_i}(x) .   (10)

Here, v_\mu is the eigenvector associated with the largest eigenvalue, and the subscript \mu is added to show its dependence on the transformation parameters. This template image, based on the principal component of the PCA, will hereafter be referred to as the principal component image.
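The construction of the principal component image can be sketched with NumPy as below. This is an illustration under simplifying assumptions (the eigendecomposition of the sample covariance is used, and the sign of the leading eigenvector is arbitrary); it demonstrates how a group member with inverted contrast receives an opposite-sign weight and therefore reinforces, rather than cancels, the template contrast.

```python
import numpy as np

def pca_template(samples):
    """samples: (|S|, n) matrix; one row per sampled coordinate x, one
    column per warped image I_i ∘ T_{μ_i}. Returns the principal component
    image of Eq. (10) and the weights v_{i,μ} (sign is arbitrary)."""
    centered = samples - samples.mean(axis=0)
    cov = centered.T @ centered / (samples.shape[0] - 1)
    eigvals, eigvecs = np.linalg.eigh(cov)   # eigenvalues in ascending order
    v = eigvecs[:, -1]                       # eigenvector of largest eigenvalue
    return samples @ v, v

rng = np.random.default_rng(2)
s = rng.normal(size=(1000, 1))
# Three "modalities" of the same structure, one with inverted contrast.
group = np.hstack([s, -s, s]) + 0.01 * rng.normal(size=(1000, 3))
_, v = pca_template(group)
# Negatively correlated intensities receive an opposite-sign weight.
assert np.sign(v[0]) != np.sign(v[1])
```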

Combining Eqs. (9) and (10) leads to a novel similarity metric, the conditional template entropy (CTE), where similarity is expressed as the sum of the conditional entropy between every image in the group and the principal component image:

\mathcal{S}_{CTE}(I_1 \circ T_{\mu_1}, \ldots, I_n \circ T_{\mu_n}) = -\frac{1}{n} \sum_{i=1}^{n} H\left( \bar{I}^{PCA}_{\mu} \mid I_i \circ T_{\mu_i} \right) .   (11)

2.6. Optimization

The proposed metric was implemented as part of the software package elastix (Klein et al., 2010) and is publicly available. An adaptive stochastic gradient descent was employed to minimize the cost function (Klein et al., 2009). As such, the negated form of Eq. (11) is used, to allow a minimization to take place. The derivative of the proposed metric with respect to \mu was determined following the approach of Thévenaz and Unser (2000), in which B-splines were used as a Parzen windowing function, such that the joint probability density functions p_i between the template image and the ith image in the group become

p_i(\iota, \kappa; \mu) = \alpha \sum_{x \in S} \beta^m\!\left( \iota - \frac{\bar{I}^{PCA}_{\mu}(x)}{\epsilon_{PCA}} \right) \beta^m\!\left( \kappa - \frac{I_i(T_{\mu_i}(x))}{\epsilon_i} \right) .   (12)

Here, \alpha is a normalization factor to obtain a density function, \epsilon is related to the width of the histogram bin, and \beta^m is a B-spline function of order m. \iota and \kappa are the discretized intensities corresponding to the template image and the images in the group, respectively. With B-splines fulfilling the partition of unity constraint (Thévenaz and Unser, 2000), we have

\sum_{\iota \in L_{PCA}} \sum_{\kappa \in L_i} \frac{\partial p_i(\iota, \kappa; \mu)}{\partial \mu} = 0 \quad \forall i ,   (13)

where L_{PCA} and L_i are the discrete sets of intensities associated with the principal component and the ith image. This leads to

\frac{\partial \mathcal{S}_{CTE}}{\partial \mu} = -\frac{1}{n} \sum_{i=1}^{n} \sum_{\iota \in L_{PCA}} \sum_{\kappa \in L_i} \frac{\partial p_i(\iota, \kappa; \mu)}{\partial \mu} \log \frac{p_i(\iota, \kappa; \mu)}{p_{I_i}(\kappa; \mu)} ,   (14)

with p_{I_i}(\kappa; \mu) the probability density function of the ith image. In Appendix A the derivative of the principal component image with respect to the transformation parameters is given.
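The B-spline Parzen window underlying Eq. (12) can be sketched as follows. This is an illustrative re-implementation, not the elastix code: the cubic B-spline kernel satisfies the partition-of-unity property used in Eq. (13), and the bin placement and the normalization α are simplified assumptions.

```python
import numpy as np

def bspline3(t):
    """Cubic B-spline β³ (support |t| < 2), the Parzen window of Eq. (12)."""
    t = np.abs(np.asarray(t, dtype=float))
    out = np.zeros_like(t)
    m1, m2 = t < 1, (t >= 1) & (t < 2)
    out[m1] = 2 / 3 - t[m1] ** 2 + t[m1] ** 3 / 2
    out[m2] = (2 - t[m2]) ** 3 / 6
    return out

def parzen_joint_pdf(a, b, n_bins=32):
    """Smoothed joint histogram of two intensity sample vectors."""
    def to_bin_coords(v):                # map intensities into [1, n_bins - 2]
        eps = (v.max() - v.min()) / (n_bins - 3)   # bin width, "ε" in Eq. (12)
        return (v - v.min()) / eps + 1.0
    ia, ib = to_bin_coords(a), to_bin_coords(b)
    bins = np.arange(n_bins)
    pdf = np.zeros((n_bins, n_bins))
    for x, y in zip(ia, ib):             # α is realized by normalizing below
        pdf += np.outer(bspline3(bins - x), bspline3(bins - y))
    return pdf / pdf.sum()

# Partition of unity: kernel weights at the integer bins sum to one.
assert np.isclose(bspline3(np.arange(-3, 4) - 0.3).sum(), 1.0)

rng = np.random.default_rng(3)
a = rng.normal(size=500)
pdf = parzen_joint_pdf(a, a + rng.normal(scale=0.1, size=500))
assert np.isclose(pdf.sum(), 1.0) and (pdf >= 0).all()
```

Because the kernel is continuously differentiable in the sample positions, the resulting histogram (and hence the entropies built from it) admits the analytic derivatives of Eq. (14), which is what makes gradient-based optimization possible.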

2.7. Transformation degeneracy

Given the degeneracy of estimating n transformations for n images with an arbitrary global transformation, we chose to constrain our transformations following Bhatia et al. (2004) with

\frac{1}{n} \sum_{i=1}^{n} T_{\mu_i}(x) = x, \quad \forall x ,   (15)

i.e. the average of all transformations is the identity, effectively registering the group of images to the mean space. With Rosen's Gradient Projection Method (Luenberger, 1973) this is solved by setting

\left. \frac{\partial \mathcal{C}}{\partial \mu_i} \right|' = \frac{\partial \mathcal{C}}{\partial \mu_i} - \frac{1}{n} \sum_{j=1}^{n} \frac{\partial \mathcal{C}}{\partial \mu_j} ,   (16)

and using this projected gradient in the stochastic gradient descent optimization.
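The projection of Eq. (16) simply removes the mean over the per-image gradients. A minimal sketch (assuming all n transformations share the same parameterization, so their gradients can be stacked in one matrix):

```python
import numpy as np

rng = np.random.default_rng(4)
grads = rng.normal(size=(6, 10))   # rows: ∂C/∂μ_i for each of n = 6 images

projected = grads - grads.mean(axis=0, keepdims=True)   # Eq. (16)

# The projected update has zero mean over the group, so a gradient descent
# step preserves the mean-space constraint of Eq. (15).
assert np.allclose(projected.mean(axis=0), 0.0)
```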

2.8. Regularization

Following Geng et al. (2009) we used a groupwise regularization term, the groupwise bending energy (GBE):

\mathcal{R}_{GBE}(T_{\mu_1}, \ldots, T_{\mu_n}) = \frac{1}{|S|} \sum_{x \in S} \frac{1}{n} \sum_{i=1}^{n} \sum_{l,m=1}^{d} \left\| \frac{\partial^2 T_{\mu_i}(x)}{\partial x_l \partial x_m} \right\|^2 .   (17)

Herein, d is the spatial dimension of the images. Regularization was performed in all clinical experiments with a deformable transformation model.
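The groupwise bending energy of Eq. (17) can be approximated on a discrete displacement grid with finite differences; a sketch under simplifying assumptions (unit voxel spacing, 2-D for brevity):

```python
import numpy as np

def groupwise_bending_energy(disp):
    """disp: (n, H, W, 2) displacement fields T_{μ_i}(x) − x on a pixel grid.
    Second derivatives of T equal those of the displacement (the identity
    part vanishes); the factor 2 covers the (l,m) and (m,l) pairs of Eq. (17)."""
    n = disp.shape[0]
    total = 0.0
    for i in range(n):
        for c in range(disp.shape[-1]):          # each transform component
            d = disp[i, ..., c]
            dxx = np.gradient(np.gradient(d, axis=0), axis=0)
            dyy = np.gradient(np.gradient(d, axis=1), axis=1)
            dxy = np.gradient(np.gradient(d, axis=0), axis=1)
            total += np.mean(dxx ** 2 + dyy ** 2 + 2 * dxy ** 2)
    return total / n

# An affine deformation (here: pure shear/scaling) has zero bending energy ...
yy, xx = np.meshgrid(np.arange(32), np.arange(32), indexing="ij")
affine = np.stack([np.stack([0.1 * xx, 0.2 * yy], axis=-1)] * 3)
assert np.isclose(groupwise_bending_energy(affine), 0.0)
# ... while a noisy deformation is penalized.
noisy = np.random.default_rng(5).normal(size=(3, 32, 32, 2))
assert groupwise_bending_energy(noisy) > 0.0
```

This illustrates why the GBE penalizes only non-affine (bending) behaviour: global scalings and shears pass through for free, while locally irregular deformations are discouraged.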

3. Data and Experiments

A total of six experiments were conducted: two on synthetic data and four on clinical intrasubject data. Herein, the proposed conditional template entropy (S_CTE) was compared to the average mutual information (S_AMI):

\mathcal{S}_{AMI}(I_1 \circ T_{\mu_1}, \ldots, I_n \circ T_{\mu_n}) = \frac{1}{n} \sum_{i=1}^{n} \left[ H(\bar{I}_{\mu}) + H(I_i \circ T_{\mu_i}) - H(\bar{I}_{\mu}, I_i \circ T_{\mu_i}) \right] .   (18)

Furthermore, two auxiliary similarity metrics were implemented to investigate the complementary advantages of the proposed methodology, respectively the advantage of using the conditional entropy (S_CE) and the advantage of using the principal component image (S_PC):

\mathcal{S}_{CE}(I_1 \circ T_{\mu_1}, \ldots, I_n \circ T_{\mu_n}) = -\frac{1}{n} \sum_{i=1}^{n} H(\bar{I}_{\mu} \mid I_i \circ T_{\mu_i}) ,   (19)

\mathcal{S}_{PC}(I_1 \circ T_{\mu_1}, \ldots, I_n \circ T_{\mu_n}) = \frac{1}{n} \sum_{i=1}^{n} \left[ H(\bar{I}^{PCA}_{\mu}) + H(I_i \circ T_{\mu_i}) - H(\bar{I}^{PCA}_{\mu}, I_i \circ T_{\mu_i}) \right] .   (20)

For the clinical data, the four previously discussed groupwise similarity metrics were used in addition to the PCA2 metric proposed in Huizinga et al. (2016) and pairwise MI (Eq. 3) as a baseline for comparison. PCA2 was proposed for the registration of images for which the intensity distribution can be represented in a low-dimensional subspace and is given as

\mathcal{D}_{PCA2}(I_1 \circ T_{\mu_1}, \ldots, I_n \circ T_{\mu_n}) = \sum_{i=1}^{n} i \, \lambda_i .   (21)

Herein, \lambda_i refers to the ith eigenvalue of the correlation matrix of the images in the group. In Huizinga et al. (2016) it was subsequently validated on monomodal and quantitative MRI image data for which such a low-dimensional subspace exists. PCA2 can thus be considered as a specialist metric specifically designed to register such images. To demonstrate the more generic nature of the proposed methodology, CTE was compared to PCA2 for both quantitative MRI and multimodal image data.
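For reference, the PCA2 dissimilarity of Eq. (21) can be sketched directly from the correlation matrix of the intensity samples (an illustration only; Huizinga et al. (2016) embed this in the full registration framework):

```python
import numpy as np

def pca2(samples):
    """Eq. (21): D = Σ_i i·λ_i, with λ_1 ≥ λ_2 ≥ ... the eigenvalues of the
    correlation matrix of the (|S|, n) intensity samples."""
    lam = np.sort(np.linalg.eigvalsh(np.corrcoef(samples, rowvar=False)))[::-1]
    return float(np.sum(np.arange(1, lam.size + 1) * lam))

rng = np.random.default_rng(6)
base = rng.normal(size=(2000, 1))
aligned = np.hstack([base * s for s in (1.0, 2.0, -1.0, 0.5)])   # rank-1 group
misaligned = rng.normal(size=(2000, 4))                          # independent

# Alignment concentrates the variance in the first eigenvalue, which
# lowers the eigenvalue-index-weighted sum.
assert pca2(aligned) < pca2(misaligned)
```

The index weighting is the key design choice: misalignment spreads variance over higher-order components, which receive larger weights and so increase the dissimilarity.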

All registrations were performed in an intrasubject manner and the images were normalized by z-scoring to allow for a fair comparison to the similarity metrics employing the average image. In the pairwise registration of a group of images, one image (the first in the sequence) was chosen as a reference to which all others were mapped. Note that other strategies for choosing the reference image in pairwise registrations for a group exist, such as the pre-contrast image in dynamic contrast enhanced sequences (Kim et al., 2011), the end-expiration phase in 4D CT (Saito et al., 2009) or the mid-way image in computational anatomy (Reuter et al., 2010).

As the optimization strategy, interpolation algorithm, random sampler and transformation model are equivalent for all (dis)similarity metrics, any difference in results can be solely attributed to the use of a different (dis)similarity metric.

The proposed methods were validated with two validation criteria. First, the groupwise target registration error (gTRE),

\mathrm{gTRE}(\mu) = \frac{1}{n} \sum_{i \neq r}^{n} \frac{1}{|P_i|} \sum_{j}^{|P_i|} \left\| T_{i,r}(p_{i,j}) - p_{r,j} \right\| ,   (22)

was used as a measure for the accuracy of the registration with ground truth annotations of certain anatomical landmarks in the images. In Eq. (22), r is the index of the reference image, P_i the collection of landmarks in the ith image, T_{i,r} the transformation that maps the coordinates from the ith image to the reference image, and p_{i,j} the jth landmark from the ith image. In a groupwise setting, T_{i,r} was determined through the composition of the forward transformation, that maps the coordinates from the common reference space to the reference image, with the inverse transformation, that maps the coordinates from the ith image to the common reference space: T_{i,r} = T_{\mu_r} \circ T_{\mu_i}^{-1} (Fig. 2) (Metz et al., 2011).

Figure 2: Composition of T_{\mu_r} and T_{\mu_i}^{-1} to obtain T_{i,r}.

To allow for a fair comparison between pairwise and groupwise registrations, all validation measurements were performed in the same reference space, i.e. the same image which was chosen as a reference in the pairwise registrations.
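Given landmark sets and the mappings T_{i,r}, the gTRE of Eq. (22) reduces to a few lines. The sketch below uses simple translations as stand-in transformations; the function name and the list layout are illustrative assumptions, and the normalization by n follows the printed equation.

```python
import numpy as np

def gtre(landmarks, to_ref, r=0):
    """Eq. (22). landmarks[i]: (k, d) array for image i; to_ref[i]: callable
    T_{i,r} mapping points of image i into the reference image r (in the
    groupwise case T_{i,r} = T_{μ_r} ∘ T_{μ_i}^{-1})."""
    n = len(landmarks)
    err = sum(np.linalg.norm(to_ref[i](landmarks[i]) - landmarks[r], axis=1).mean()
              for i in range(n) if i != r)
    return err / n

rng = np.random.default_rng(7)
truth = rng.random((20, 3)) * 100           # landmark positions in the reference
shifts = [np.zeros(3), np.array([5.0, 0, 0]), np.array([0, -3.0, 2.0])]
landmarks = [truth + t for t in shifts]     # each image sees shifted landmarks
to_ref = [lambda p, t=t: p - t for t in shifts]

assert np.isclose(gtre(landmarks, to_ref), 0.0)   # perfect registration
to_ref[1] = lambda p: p                           # drop one transformation
assert gtre(landmarks, to_ref) > 0.0              # misregistration is detected
```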

Secondly, we computed the transitivity error (Christensen et al., 2006; Metz et al., 2011) to assess the quality of the transformations:

\mathrm{Tra}(\mu) = \frac{1}{|S|} \sum_{x \in S} \sum_{i}^{n} \sum_{l \neq i}^{n} \left\| T_{i,r}(x) - T_{i,l}(T_{l,r}(x)) \right\| .   (23)

The transitivity error measures the transitive property of the transformations in a group of images and can be interpreted as a measure for the consistency of the transformations in a groupwise setting. For pairwise registration the use of different reference images is required to measure the transitivity, and the bias associated with that choice will influence the results, whereas in groupwise registration all transformations are estimated simultaneously and are inherently transitive (when the inverse transformation is available). As the inverse is approximated iteratively and is the source of the transitivity error in the groupwise methods, no comparisons are made among the groupwise metrics based on the transitivity error. The maximum transitivity error of the groupwise methods is reported and compared to the transitivity error of the pairwise method.

The cost function hyperparameters (the number of histogram bins and regularization weight) were chosen per dataset. The different regularization weights are reported in Table 1. Due to the arbitrary sign of the projection vector for the principal component image, the number of histogram bins (used to calculate the entropy) is at least doubled compared to the number of histogram bins in registrations using the average image. Other optimization hyperparameters such as the spatial samples in the stochastic optimizer and the number of iterations were set to their default values. All registration hyperparameters in pairwise registrations were kept equal to those in the groupwise approach.

Results for the gTRE were compared in a pairwise manner among all similarity metrics (totaling 64 comparisons). The Wilcoxon signed-rank test was used for significance testing at a significance level of 0.05, adjusted by the Bonferroni correction for multiple comparisons.

3.1. Black&White

To illustrate the effect the entropy term of the template image has on the optimization, an experiment was performed on synthetic data. Eleven identical black-and-white images were progressively and simultaneously translated along the horizontal axis and the similarity metric values were computed. A mask was used to keep the sampling domain constant. Fig. 3 shows a single black-and-white image and the average image of the group of images when they are at maximal displacement (15 mm).

3.2. Multimodal Cubes

To further investigate registration accuracy, 100 registrations were performed on a group of six images (256 × 256 × 256 voxels), each containing two cubes, one surrounding the other. The intensities of the cubes and the backgrounds were set at random intensities to simulate a multimodal setting (Fig. 4). For each group of images a random set of deformable transformations was generated with a grid spacing of 8 × 8 × 8 voxels. The gTRE of the corners of the cubes was used to quantify the registration accuracy.

Figure 3: (a) A single black-and-white image. (b) Average image of the group at their maximal misalignment.

3.3. Thoracic 4D CT

Thoracic 4D CT data (Fig. 5) was taken from the publicly available POPI and DIR-LAB datasets, which include, respectively, 6 and 10 sequences of 10 respiratory phases each (Castillo et al., 2009; Vandemeulebroucke et al., 2011). Thoracic 4D CT data is often considered as monomodal data. However, minor intensity changes can occur due to changes in the voxel density in the lungs associated with the inhalation and exhalation of air (Sarrut et al., 2006), leading several authors to employ adapted or multimodal metrics for lung registration (Murphy et al., 2011).

The POPI dataset contains three patients with 100 manually identified landmarks in the lungs for every breathing phase and three patients with 100 landmarks in end-inspiration and end-expiration phases, with an inter-rater error of 0.5 ± 0.9 mm. In the DIR-LAB dataset, all patients have 300 landmarks in the lungs for the inspiration and expiration phases and 75 in the four phases in between, with an intra-rater error between 0.70 and 1.13 mm. Accuracy of the registration was determined using the gTRE with respect to the inspiration phase, the first image in the dynamic series.

A deformable registration was performed using cubic B-splines with a final grid spacing of 12.0 mm.


Figure 4: (a-f) A single slice of the six cubes used in the Multimodal Cubes experiment. (g) The average image and (h) the principal component image at alignment.


Figure 5: (a-c) Three of the ten phases used in the Thoracic 4D CT experiment. The images differ mainly in the position of the diaphragm and structures in the lungs due to breathing. (d) The average image at misalignment. (e) The principal component image at misalignment. (f) Absolute difference image of the average and principal component image. Note that the largest differences occur in regions where motion is present (i.e. the diaphragm), indicated by red arrows. The image contrast is optimized for the range of intensities present in each individual image.


Figure 6: (a-c) Three of the five images used in the Carotid MR experiment. (d) The average image at misalignment. (e) The principal component image at misalignment. (f) Absolute difference image of the average and principal component image. Note that the largest differences occur either at borders of structures due to motion, indicated by red arrows, or in homogeneous regions due to the multimodal nature of the data, indicated by a green arrow. The image contrast is optimized for the range of intensities present in each individual image.


Table 1: The regularization weights used for each metric and clinical dataset.

        Thoracic 4D CT   Carotid MR   Head&Neck   RIRE
PCA2    500              100          2×10⁶       -
MI      0.02             50           100         -
AMI     0.05             100          2000        -
PC      0.2              100          2000        -
CE      0.01             100          5000        -
CTE     0.2              100          5000        -

Table 2: Summary of the registration parameters used in the experiments. Two values are reported for the number of histogram bins, separated by a forward slash: the first is the number of bins used in pairwise registrations and in groupwise registrations based on the average image; the second is the number of bins used in groupwise registrations based on the principal component image. Values separated by a backward slash indicate multiple settings within the applied optimization strategy.

Dataset            Histogram bins   Resolutions   Grid spacing   Spatial samples   Iterations
Multimodal Cubes   32/96            2             6.0            2048              2000
Thoracic 4D CT     48/96            4             12.0           2048              2000\4000
Carotid MR         48/128           2             8.0            2048              2000
Head&Neck          64/144           2             64.0           2048              2000
RIRE               48/128           5\2           -              2048              2000

Motion masks were used, following Vandemeulebroucke et al. (2012). For each resolution level 2000 iterations were performed, except for the last resolution, where 4000 iterations were allowed.

3.4. Carotid MR

MR image sequences of the carotid artery were acquired by Coolen et al. (2015). The acquisitions were performed with a gradient echo MRI sequence for different flip angles and TE preparation times (Fig. 6). Each sequence consisted of five images and was performed for eight patients. The bifurcation of both carotid arteries was identified for each patient and consequently used as a landmark in the validation of the registration.

For this data we performed a deformable registration with cubic B-splines and a final grid spacing of 8.0 mm. van 't Klooster et al. (2013) have shown that a deformable registration is needed in such acquisitions of the carotid arteries. Masks around the carotid arteries were used as the region of interest for registration.

3.5. Head&Neck

As part of radiotherapy planning, 22 patients underwent a CT, MR-T1 and MR-T2 imaging protocol of the head and neck region (Fortunati et al., 2014, 2015; Verhaart et al., 2014) (Fig. 7). In each acquisition, between 15 and 21 landmarks were used to quantify the registration accuracy in terms of gTRE. The intra-rater variability of the landmarks was approximately 1 mm.

Prior to registration, all images were resampled to the smallest voxel spacing present in the group of images. A deformable transformation was used in two resolution levels using cubic B-splines with a final grid spacing of 64.0 mm, as suggested by Fortunati et al. (2014).

3.6. RIRE

The RIRE database (West et al., 1997) includes 18 patients with up to five different imaging modalities


Figure 8: (a) CT image, (b) MR-PD image, (c) MR-T1 image, (d) MR-T2 image and (e) PET image used in the RIRE experiment.

Figure 9: Results for the Black&White experiment where 11 black-and-white images were progressively and simultaneously translated. (a) The metric values. (b) The average of the entropies of the images in the group. (c) The entropy of the template image. (d) The average of the joint entropies. Each panel compares templates based on the average image and the principal component image.

of the following modalities available: CT, PET, MR-T1, MR-T2, MR-PD. Fiducial markers and a stereotactic frame were used to determine the ground truth transformations for CT to MR and PET to MR. Four to ten landmarks were available for each patient as a ground truth for the registrations, and their target registration error was computed through the webform of the RIRE project, where rigid displacements between acquisitions were assumed.

To increase the robustness of the optimization, a two-step approach is used. First, a translation is optimized and used as an initialization for a second, full rigid transformation with three translational and three rotational degrees of freedom. The registrations were performed with five and two resolution levels, respectively. Similar to the Head&Neck dataset, preprocessing was performed by resampling the images in the group to the smallest voxel spacing.

The registration hyperparameters for the different experiments are summarized in Table 2.
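The two-step rigid strategy described for the RIRE experiment can be sketched as follows. The parameter layout (three rotation angles followed by three translations) and the `euler_matrix` helper are illustrative assumptions, not the actual implementation used in the experiments:

```python
import numpy as np

def euler_matrix(rx, ry, rz):
    """Rotation matrix from Euler angles (radians), R = Rz @ Ry @ Rx."""
    cx, sx = np.cos(rx), np.sin(rx)
    cy, sy = np.cos(ry), np.sin(ry)
    cz, sz = np.cos(rz), np.sin(rz)
    Rx = np.array([[1, 0, 0], [0, cx, -sx], [0, sx, cx]])
    Ry = np.array([[cy, 0, sy], [0, 1, 0], [-sy, 0, cy]])
    Rz = np.array([[cz, -sz, 0], [sz, cz, 0], [0, 0, 1]])
    return Rz @ Ry @ Rx

def apply_rigid(params, points):
    """params = (rx, ry, rz, tx, ty, tz); points has shape (n, 3)."""
    R = euler_matrix(*params[:3])
    t = np.asarray(params[3:])
    return points @ R.T + t

# Step 1 optimizes a translation only, yielding t0; step 2 then starts
# from a full rigid transform with zero rotations and t0 as translation.
t0 = np.array([2.0, -1.0, 0.5])
init = np.concatenate([np.zeros(3), t0])
pts = np.zeros((4, 3))
print(apply_rigid(init, pts))  # every row equals t0
```

Starting the six-parameter optimization from the translation-only solution narrows the capture range problem to the rotations, which is what makes the two-step scheme more robust than optimizing all six parameters at once.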

4. Results

4.1. Synthetic Data

The behavior of the metric value and its separate components in the Black&White experiment is shown in Fig. 9 as a function of the translation. The Black&White experiment shows that the metric behavior of S_AMI and S_PC is equal to the behavior of the entropy of the images in the group. The contribution of the entropy of the template image completely cancels out the contribution of the joint entropy in S_AMI and S_PC, as can be seen in Fig. 9(c-d). The resulting optimization is only driven by the complexity of the images in the group and not by their shared information.


Figure 10: Boxplots for the results of the Multimodal Cubes experiment. Significant differences between two methods are indicated with black bars below the boxplots.

The results for the Multimodal Cubes experiment are shown in Fig. 10. When comparing the similarity metrics, S_CTE (1.71±0.11 mm) significantly outperformed all other entropy-based groupwise metrics (2.80±0.32 mm, 2.73±0.34 mm and 1.74±0.11 mm for S_AMI, S_PC and S_CE, respectively).

4.2. Clinical Data

Results for the gTRE in the experiments on clinical data are visualized with boxplots in Fig. 11 and 12. For the experiments on the Thoracic 4D CT and Carotid MR datasets (Fig. 11), no statistically significant differences were observed in terms of gTRE for the investigated information-based metrics.

In the Head&Neck experiment (Fig. 12) the best results were achieved by S_CTE, with a gTRE of 2.74±1.17 mm, performing significantly better compared to S_AMI, S_PC and D_PCA2.

Pairwise S_MI performed best in the RIRE experiment (Fig. 12) with a gTRE of 2.29±0.72 mm (S_CTE: 2.33±0.57 mm), but no significant differences were found compared to the other entropy-based metrics. D_PCA2 performed worst, with the differences being statistically significant. Following Tomaževič et al. (2012), a group of images was considered misregistered when the gTRE was larger than the largest voxel spacing in the images. No misregistrations were obtained for S_CTE, S_CE and S_MI, whereas S_AMI and S_PC misregistered two patients and D_PCA2 misregistered 14 patients.

In all four experiments on clinical data, pairwise MI performed worst in terms of transitivity, whereas the transitivity error for the groupwise metrics reduced to (close to) zero (Table 3).
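Why groupwise registration is transitive (almost) by construction can be illustrated with synthetic affine transforms: when every pairwise mapping is derived from per-image transforms to a common implicit reference, composition errors vanish up to floating-point precision. A sketch, with random affine matrices standing in for estimated transformations:

```python
import numpy as np

rng = np.random.default_rng(0)

def rand_affine():
    """A random, near-identity 4x4 homogeneous affine transform."""
    A = np.eye(4)
    A[:3, :3] += 0.05 * rng.standard_normal((3, 3))
    A[:3, 3] = rng.standard_normal(3)
    return A

# Groupwise: each image i has a transform T_i into the implicit reference.
T = [rand_affine() for _ in range(3)]

def pair(i, j):
    """Mapping from image i to image j derived from the groupwise transforms."""
    return np.linalg.inv(T[j]) @ T[i]

x = np.array([10.0, -5.0, 3.0, 1.0])      # homogeneous point
direct = pair(0, 2) @ x                   # image 0 -> image 2 directly
via_1 = pair(1, 2) @ (pair(0, 1) @ x)     # image 0 -> 1 -> 2
print(np.linalg.norm(direct - via_1))     # ~0: transitive by construction
```

Independently estimated pairwise transforms offer no such guarantee, which is why pairwise MI shows the largest transitivity errors in Table 3.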

In Table 4, the average runtimes of the registrations in the different experiments are given. The use of the conditional entropy does not induce an extra computational burden, whereas the use of the principal component image does. This discrepancy originates from an additional loop over the sampled coordinates, needed to perform the PCA and determine the weights of the eigenvector. Note that for more complex registrations with a regularizer, the additional computation time is relatively small compared to the total cost.

5. Discussion

Results on the Thoracic 4D CT and Carotid MR datasets showed equivalent performance of the proposed methodology compared to other state-of-the-art methods in terms of registration accuracy.

The results for the Multimodal Cubes, Head&Neck and RIRE experiments were consistent. In all three datasets the accuracy improved for the proposed formulation compared to S_AMI, and the improvement was found to be statistically significant in the former two experiments. Throughout these experiments the behavior of the auxiliary metrics S_CE and S_PC was also consistent. Using the conditional entropy instead of mutual information led to a large improvement, while using the principal component image improved the accuracy modestly. The combination of both contributions led to the best results in all three experiments compared to the other groupwise metrics. As expected, the PCA2 metric performed poorly in multimodal registrations, where a quantitative model or low-dimensional subspace is not available.

In all experiments based on clinical data, the transitivity error was substantially lower com-


Figure 11: Boxplots for the results of the Thoracic 4D CT and Carotid MR experiments. Significant differences between two methods are indicated with black bars below the boxplots.

Figure 12: Boxplots for the results of the Head&Neck and RIRE experiments. Significant differences between two methods are indicated with black bars above the boxplots. Note the logarithmic scale on the y-axis.

Figure 13: (a) CT image, (b) MR-PD image, (c) MR-T1 image, (d) MR-T2 image, (e) PET image, (f) average image and (g) principal component image.


Table 3: Average transitivity errors for the clinical datasets. For the groupwise approaches, the maximum average transitivity error among all groupwise methods is reported. The values are given in mm.

                       Thoracic 4D CT   Carotid MR    Head&Neck     RIRE
MI                     5.65×10⁻¹        2.68×10⁻¹     2.14          1.47
Groupwise approaches   < 3.39×10⁻²      < 7.66×10⁻³   < 1.85×10⁻²   0

Table 4: Average runtime for the registrations in the different experiments. The values are given in minutes.

       Multimodal Cubes   Thoracic 4D CT   Carotid MR   Head&Neck   RIRE
PCA2   -                  212              28           20          4
AMI    22                 238              31           23          7
CE     22                 252              31           23          7
PC     26                 248              36           36          54
CTE    26                 276              36           36          55

pared to S_MI for the groupwise approaches. These results emphasize the added value of the implicit reference space in multimodal groupwise registration. Whereas a pairwise approach has to perform two separate registrations with different reference images to obtain a concatenated transformation, in a groupwise approach all transformations are evaluated simultaneously and with a substantially lower transitivity error. These results are consistent with previous findings in monomodal data (Geng et al., 2009; Metz et al., 2011).

634

In summary, for experiments based on images

635

where no or modest changes in intensity

distribu-636

tions are present (‘Thoracic 4D-CT’ and ‘Carotid 637

MR’), CTE showed comparable performance to

pre-638

viously proposed groupwise methods and pairwise

639

MI. In experiments with strongly varying intensity

640

distributions (‘Multimodal Cubes’, ‘Head&Neck’ and 641

‘RIRE’), CTE showed superior performance to

pre-642

viously proposed groupwise methods and performed

643

on par to pairwise MI, with little to no transitivity

644

error.

645
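The contrast-preserving effect of a principal-component template over a plain average, which the discussion of Fig. 13 relies on, can be sketched on synthetic 1-D "images". The signal model and CNR definition below are illustrative assumptions, not the actual Multimodal Cubes setup:

```python
import numpy as np

rng = np.random.default_rng(1)
structure = np.zeros(1000)
structure[:500] = 1.0                                # two "tissues"
i1 = structure + 0.05 * rng.standard_normal(1000)
i2 = -structure + 0.05 * rng.standard_normal(1000)   # inverted contrast

def cnr(img):
    """Contrast-to-noise ratio between the two tissue regions."""
    a, b = img[:500], img[500:]
    return abs(a.mean() - b.mean()) / np.sqrt(0.5 * (a.var() + b.var()))

average = 0.5 * (i1 + i2)                # structure cancels out

# First principal component over the standardized image intensities:
# with anti-correlated inputs the leading eigenvector is ~(1, -1)/sqrt(2),
# so the PC template subtracts the inverted image and keeps the contrast.
X = np.stack([i1, i2], axis=1)
Xs = (X - X.mean(0)) / X.std(0)
_, vecs = np.linalg.eigh(np.corrcoef(Xs, rowvar=False))
pc = Xs @ vecs[:, -1]                    # eigenvector of largest eigenvalue

print(cnr(average), cnr(pc))             # contrast survives only in the PC image
```

This mirrors the mechanism described for the RIRE T2 images: intensities with inverted profiles cancel in the average template but are re-weighted with opposite signs by the PCA.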

Fig. 5(f) and 6(f) highlight the differences between the average and principal component images: the absolute difference image between the two is given for a single patient in the ‘Thoracic 4D CT’ and ‘Carotid MR’ datasets, respectively. The largest differences occur in regions where the motion is greatest, near moving structures or edges. This is consistent with previous work, where the principal component image was used to separate out motion present in the images (Feng et al., 2016; Hamy et al., 2014; Melbourne et al., 2007). For multimodal registrations, the benefit of PCA over averaging can be seen by considering cases in which images with an inverted intensity profile are merged into the template image, as shown in Fig. 4(g-h) and Fig. 13. For the ‘Multimodal Cubes’ experiment, PCA led to an increase of the contrast-to-noise ratio from 7.4 to 32.5 compared to simple averaging. Fig. 13 shows the average and principal component image of the ventricles for an arbitrary patient in the RIRE dataset. With the T2 modality having an inverted intensity profile, the principal component image is able to retain the contrast in the template image, whereas in the average image the intensities cancel out and the ventricles are poorly visible.

Two limitations should be stated with respect to the current work. Firstly, only intrasubject data has been employed. Intersubject data is characterized by greater variability of intensity profiles and morphology, and has been reported to considerably increase

imagewhen applied to the ventricles for an arbitrary 665 patient in the RIRE dataset. With the T2 modal- 666 ity having an inverted intensity profile, the principal 667 component image is able to retain the contrast in the 668 template image. In the average image the intensities 669 cancel out and the ventricles are poorly visible. 670 Two limitations should be stated with respect to 671 current work. Firstly, only intrasubject data has 672 been employed. Intersubject data is characterized by 673 greater variability of intensity profiles and morphol- 674 ogy, and has been reported to considerably increase 675


the complexity of groupwise registration (Hamm et al., 2009; Tang et al., 2009). It remains to be verified how CTE would perform when confronted with such data.

Secondly, in this work a methodology was used where the images are deformed and compared to the template image in the implicit reference system. However, previous work has shown that deforming the template image to the images in the group suits a generative model better (Allassonnière et al., 2007; Ma et al., 2008). In methodologies where the template is deformed to the images in the group, there is no need to constrain the transformations to the average deformation space (Eq. 16). This was shown to be advantageous, as such constraints could exclude some legitimate results (Aganj et al., 2017). We expect the proposed metric to perform equally well in such frameworks, as it is independent of the transformations that were used.

6. Conclusion

In this work we proposed a novel similarity metric for intrasubject multimodal groupwise registration, the conditional template entropy. The proposed metric was evaluated in experiments based on synthetic and clinical intrasubject data, and showed equivalent or improved registration accuracy compared to other state-of-the-art (dis)similarity metrics, as well as improved transformation consistency compared to pairwise mutual information. These improvements were achieved mainly by the use of the conditional entropy, whereas the use of the principal component image contributed modestly in our experiments.

Acknowledgment

The research of S. Klein, W.J. Niessen and W. Huizinga is supported by the European Union Seventh Framework Programme (FP7/2007-2013) under grant agreement no. 601055, VPH-DARE@IT. Additionally, the authors would like to thank Dr. B.F. Coolen and Dr. A.J. Nederveen for providing the ‘Carotid MR’ data. The authors would also like to thank the anonymous reviewers for their feedback and remarks on this work, which substantially improved its quality.

Appendix A. Derivative of the principal component image

We determined the derivative of the principal component image with respect to the transformation parameters. The principal component image is given by Eq. (10) and repeated here:

I^{PCA}_\mu(x) = \sum_{i=1}^{n} v_{i,\mu}\, I_i \circ T^i_\mu(x) = v_\mu^T I(x). \quad (A.1)

Herein, I(x) is the column vector representing all image intensities across the group for a given sampled coordinate. The derivative becomes

\frac{\partial I^{PCA}_\mu(x)}{\partial \mu} = \frac{\partial v_\mu^T}{\partial \mu} I(x) + v_\mu^T \frac{\partial I(x)}{\partial \mu}. \quad (A.2)

Following de Leeuw (2007) for the derivative of an eigenvector:

\frac{\partial v_\mu}{\partial \mu} = -(C - eI)^{+} \frac{\partial C}{\partial \mu} v_\mu, \quad (A.3)

with C the correlation matrix of the intensities, similar to Huizinga et al. (2016), I the identity matrix, e the eigenvalue associated with v_\mu, and ^{+} the notation for the Moore-Penrose inverse (de Leeuw, 2007). The derivative of the correlation matrix is given as

\frac{\partial C}{\partial \mu} = \frac{1}{|S|-1} \left( \frac{\partial \Sigma^{-1}}{\partial \mu} (M - \bar{M})^T (M - \bar{M}) \Sigma^{-1} + \Sigma^{-1} \frac{\partial M^T}{\partial \mu} (M - \bar{M}) \Sigma^{-1} + \Sigma^{-1} (M - \bar{M})^T \frac{\partial M}{\partial \mu} \Sigma^{-1} + \Sigma^{-1} (M - \bar{M})^T (M - \bar{M}) \frac{\partial \Sigma^{-1}}{\partial \mu} \right). \quad (A.4)

Herein, M refers to the data matrix with the intensities, \bar{M} the matrix with the average image intensity repeated along its columns, and \Sigma the diagonal matrix with the standard deviations of the image intensities as its diagonal elements. All notations correspond to those found in Huizinga et al. (2016), and we have likewise ignored the derivative of the average image intensities.
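Eq. (A.3) can be checked numerically against finite differences of the eigenvector itself. The parameterized matrix below is a stand-in for the correlation matrix C(µ), not derived from image data, and µ is reduced to a single scalar t:

```python
import numpy as np

def correlation(t):
    """A smooth one-parameter family of symmetric 3x3 matrices, C(t)."""
    A = np.array([[2.0, 0.3, 0.1],
                  [0.3, 1.0, 0.2],
                  [0.1, 0.2, 0.5]])
    B = np.array([[0.0, 0.1, 0.0],
                  [0.1, 0.0, 0.3],
                  [0.0, 0.3, 0.0]])
    return A + t * B

def top_eigpair(C):
    w, V = np.linalg.eigh(C)        # eigenvalues in ascending order
    return w[-1], V[:, -1]

t, h = 0.0, 1e-6
C = correlation(t)
e, v = top_eigpair(C)
dC = (correlation(t + h) - correlation(t - h)) / (2 * h)

# Eq. (A.3): dv/dmu = -(C - e I)^+ (dC/dmu) v
dv_analytic = -np.linalg.pinv(C - e * np.eye(3)) @ dC @ v

# Finite differences of the eigenvector itself (sign-aligned).
_, vp = top_eigpair(correlation(t + h))
_, vm = top_eigpair(correlation(t - h))
vp *= np.sign(vp @ v)
vm *= np.sign(vm @ v)
dv_numeric = (vp - vm) / (2 * h)

print(np.max(np.abs(dv_analytic - dv_numeric)))  # small: formula agrees
```

The Moore-Penrose inverse handles the singular direction along v, so the formula is well defined as long as the eigenvalue e is simple, which is also the assumption underlying de Leeuw (2007).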

References

Aganj, I., Iglesias, J.E., Reuter, M., Sabuncu, M.R., Fischl, B., 2017. Mid-space-independent deformable image registration. NeuroImage 152, 158–170.

Allassonnière, S., Amit, Y., Trouvé, A., 2007. Towards a coherent statistical framework for dense deformable template estimation. Journal of the Royal Statistical Society: Series B (Statistical Methodology) 69, 3–29.

Bhatia, K.K., Hajnal, J., Hammers, A., Rueckert, D., 2007. Similarity metrics for groupwise non-rigid registration, in: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2007. Springer, pp. 544–552. doi:10.1007/978-3-540-75759-7_66.

Bhatia, K.K., Hajnal, J.V., Puri, B.K., Edwards, A.D., Rueckert, D., 2004. Consistent groupwise non-rigid registration for atlas construction, in: Biomedical Imaging: Nano to Macro, 2004. IEEE International Symposium on, IEEE. pp. 908–911. doi:10.1109/isbi.2004.1398686.

Castillo, R., Castillo, E., Guerra, R., Johnson, V.E., McPhail, T., Garg, A.K., Guerrero, T., 2009. A framework for evaluation of deformable image registration spatial accuracy using large landmark point sets. Physics in Medicine and Biology 54, 1849.

Ceranka, J., Polfliet, M., Lecouvet, F., Michoux, N., de Mey, J., Vandemeulebroucke, J., 2017. Registration strategies for multi-modal whole-body MRI mosaicing. Magnetic Resonance in Medicine.

Christensen, G.E., Geng, X., Kuhl, J.G., Bruss, J., Grabowski, T.J., Pirwani, I.A., Vannier, M.W., Allen, J.S., Damasio, H., 2006. Introduction to the non-rigid image registration evaluation project (NIREP), in: International Workshop on Biomedical Image Registration, Springer. pp. 128–135. doi:10.1007/11784012_16.

Collignon, A., Maes, F., Delaere, D., Vandermeulen, D., Suetens, P., Marchal, G., 1995. Automated multi-modality image registration based on information theory, in: Information Processing in Medical Imaging, pp. 263–274.

Coolen, B.F., Poot, D.H., Liem, M.I., Smits, L.P., Gao, S., Kotek, G., Klein, S., Nederveen, A.J., 2015. Three-dimensional quantitative T1 and T2 mapping of the carotid artery: Sequence design and in vivo feasibility. Magnetic Resonance in Medicine. doi:10.1002/mrm.25634.

Feng, Q., Zhou, Y., Li, X., Mei, Y., Lu, Z., Zhang, Y., Feng, Y., Liu, Y., Yang, W., Chen, W., 2016. Liver DCE-MRI registration in manifold space based on robust principal component analysis. Scientific Reports 6.

Fletcher, P.T., Venkatasubramanian, S., Joshi, S., 2009. The geometric median on Riemannian manifolds with application to robust atlas estimation. NeuroImage 45, S143–S152. doi:10.1016/j.neuroimage.2008.10.052.

Fortunati, V., Verhaart, R.F., Angeloni, F., van der Lugt, A., Niessen, W.J., Veenland, J.F., Paulides, M.M., van Walsum, T., 2014. Feasibility of multimodal deformable registration for head and neck tumor treatment planning. International Journal of Radiation Oncology* Biology* Physics 90, 85–93. doi:10.1016/j.ijrobp.2014.05.027.

Fortunati, V., Verhaart, R.F., Verduijn, G.M., van der Lugt, A., Angeloni, F., Niessen, W.J., Veenland, J.F., Paulides, M.M., van Walsum, T., 2015. MRI integration into treatment planning of head and neck tumors: Can patient immobilization be avoided? Radiotherapy and Oncology 115.

Geng, X., Christensen, G.E., Gu, H., Ross, T.J., Yang, Y., 2009. Implicit reference-based group-wise image registration and its application to structural and functional MRI. NeuroImage 47, 1341–1351. doi:10.1016/j.neuroimage.2009.04.024.

Hallack, A., Chappell, M.A., Gooding, M.J., Schnabel, J.A., 2014. A new similarity metric for groupwise registration of variable flip angle sequences for improved T10 estimation in DCE-MRI, in: Biomedical Image Registration. Springer, pp. 154–163. doi:10.1007/978-3-319-08554-8_16.

Hamm, J., Davatzikos, C., Verma, R., 2009. Efficient large deformation registration via geodesics on a learned manifold of images, in: International Conference on Medical Image Computing and Computer-Assisted Intervention, Springer. pp. 680–687.

Hamy, V., Dikaios, N., Punwani, S., Melbourne, A., Latifoltojar, A., Makanyanga, J., Chouhan, M., Helbren, E., Menys, A., Taylor, S., et al., 2014. Respiratory motion correction in dynamic MRI using robust data decomposition registration–application to DCE-MRI. Medical Image Analysis 18, 301–313.

Hero, A., Ma, B., Michel, O.J., Gorman, J., 2002. Applications of entropic spanning graphs. Signal Processing Magazine, IEEE 19, 85–95. doi:10.1109/msp.2002.1028355.

Huizinga, W., Poot, D., Guyader, J.M., Klaassen, R., Coolen, B., van Kranenburg, M., van Geuns, R., Uitterdijk, A., Polfliet, M., Vandemeulebroucke, J., Leemans, A., Niessen, W., Klein, S., 2016. PCA-based groupwise image registration for quantitative MRI. Medical Image Analysis 29, 65–78. doi:10.1016/j.media.2015.12.004.

Joshi, S., Davis, B., Jomier, M., Gerig, G., 2004. Unbiased diffeomorphic atlas construction for computational anatomy. NeuroImage 23, S151–S160. doi:10.1016/j.neuroimage.2004.07.068.

Kim, M., Wu, G., Shen, D., 2011. Groupwise registration of breast DCE-MR images for accurate tumor measurement, in: Biomedical Imaging: From Nano to Macro, 2011 IEEE International Symposium on, IEEE. pp. 598–601.

Klein, S., Pluim, J.P., Staring, M., Viergever, M.A., 2009. Adaptive stochastic gradient descent optimisation for image registration. International Journal of Computer Vision 81, 227–239. doi:10.1007/s11263-008-0168-y.

Klein, S., Staring, M., Murphy, K., Viergever, M.A., Pluim, J.P., 2010. elastix: a toolbox for intensity-based medical image registration. Medical Imaging, IEEE Transactions on 29, 196–205. doi:10.1109/tmi.2009.2035616.

van 't Klooster, R., Staring, M., Klein, S., Kwee, R.M., Kooi, M.E., Reiber, J.H., Lelieveldt, B.P., van der Geest, R.J., 2013. Automated registration of multispectral MR vessel wall images of the carotid artery. Medical Physics 40.

de Leeuw, J., 2007. Derivatives of generalized eigen systems with applications. Department of Statistics, UCLA.

Luenberger, D.G., 1973. Introduction to Linear and Nonlinear Programming. volume 28. Addison-Wesley, Reading, MA.

Ma, J., Miller, M.I., Trouvé, A., Younes, L., 2008. Bayesian template estimation in computational anatomy. NeuroImage 42, 252–261.

Maes, F., Collignon, A., Vandermeulen, D., Marchal, G., Suetens, P., 1997. Multimodality image registration by maximization of mutual information. Medical Imaging, IEEE Transactions on 16, 187–198. doi:10.1109/42.563664.

Melbourne, A., Atkinson, D., White, M., Collins, D., Leach, M., Hawkes, D., 2007. Registration of dynamic contrast-enhanced MRI using a progressive principal component registration (PPCR). Physics in Medicine and Biology 52, 5147.

Metz, C., Klein, S., Schaap, M., van Walsum, T., Niessen, W.J., 2011. Nonrigid registration of dynamic medical imaging data using nD+t B-splines and a groupwise optimization approach. Medical Image Analysis 15, 238–249. doi:10.1016/j.media.2010.10.003.

Murphy, K., Van Ginneken, B., Reinhardt, J.M., Kabus, S., Ding, K., Deng, X., Cao, K., Du, K., Christensen, G.E., Garcia, V., et al., 2011. Evaluation of registration methods on thoracic CT: the EMPIRE10 challenge. IEEE Transactions on Medical Imaging 30, 1901–1920.

Orchard, J., Mann, R., 2010. Registering a multisensor ensemble of images. Image Processing, IEEE Transactions on 19, 1236–1247. doi:10.1109/tip.2009.2039371.

Polfliet, M., Klein, S., Huizinga, W., de Mey, J., Vandemeulebroucke, J., 2016. The Pythagorean averages as group images in efficient groupwise registration, in: 2016 IEEE 13th International Symposium on Biomedical Imaging (ISBI), IEEE. pp. 1261–1264. doi:10.1109/isbi.2016.7493496.

Polfliet, M., Klein, S., Niessen, W.J., Vandemeulebroucke, J., 2017. Laplacian eigenmaps for multimodal groupwise image registration, in: SPIE Medical Imaging, International Society for Optics and Photonics. pp. 101331N–101331N.

Reuter, M., Rosas, H.D., Fischl, B., 2010. Highly accurate inverse consistent registration: a robust approach. NeuroImage 53, 1181–1196.

Saito, T., Sakamoto, T., Oya, N., 2009. Comparison of gating around expiration and end-inspiration in radiotherapy for lung cancer. Radiotherapy and Oncology 93, 430–435.

Sarrut, D., Boldea, V., Miguet, S., Ginestet, C., 2006. Simulation of four-dimensional CT images from deformable registration between inhale and exhale breath-hold CT scans. Medical Physics 33, 605–617. doi:10.1118/1.2161409.

Spiclin, Z., Likar, B., Pernus, F., 2012. Groupwise registration of multimodal images by an efficient joint entropy minimization scheme. Image Processing, IEEE Transactions on 21, 2546–2558. doi:10.1109/tip.2012.2186145.

Studholme, C., Hill, D.L., Hawkes, D.J., 1999. An overlap invariant entropy measure of 3D medical image alignment. Pattern Recognition 32, 71–86. doi:10.1016/s0031-3203(98)00091-0.

Tang, S., Fan, Y., Wu, G., Kim, M., Shen, D., 2009. RABBIT: rapid alignment of brains by building intermediate templates. NeuroImage 47, 1277–1287.

Thévenaz, P., Unser, M., 2000. Optimization of mutual information for multiresolution image registration. Image Processing, IEEE Transactions on 9, 2083–2099. doi:10.1109/83.887976.

Tomaževič, D., Likar, B., Pernuš, F., 2012. Multi-feature mutual information image registration. Image Analysis & Stereology 31, 43–53. doi:10.5566/ias.v31.p43-53.

Vandemeulebroucke, J., Bernard, O., Rit, S., Kybic, J., Clarysse, P., Sarrut, D., 2012. Automated segmentation of a motion mask to preserve sliding motion in deformable registration of thoracic CT. Medical Physics 39, 1006–1015. doi:10.1118/1.3679009.

Vandemeulebroucke, J., Rit, S., Kybic, J., Clarysse, P., Sarrut, D., 2011. Spatiotemporal motion estimation for respiratory-correlated imaging of the lungs. Medical Physics 38, 166–178. doi:10.1118/1.3523619.

Verhaart, R.F., Fortunati, V., Verduijn, G.M., van der Lugt, A., van Walsum, T., Veenland, J.F., Paulides, M.M., 2014. The relevance of MRI for patient modeling in head and neck hyperthermia treatment planning: A comparison of CT and CT-MRI based tissue segmentation on simulated temperature. Medical Physics 41, 123302. doi:10.1118/1.4901270.

Vijayan, S., Klein, S., Hofstad, E.F., Lindseth, F., Ystgaard, B., Langø, T., 2014. Motion tracking in the liver: Validation of a method based on 4D ultrasound using a nonrigid registration technique. Medical Physics 41. doi:10.1118/1.4890091.

Viola, P., Wells III, W., 1995. Alignment by maximization of mutual information, in: Proceedings of the Fifth International Conference on Computer Vision, IEEE Computer Society. p. 16.

Wachinger, C., Navab, N., 2013. Simultaneous registration of multiple images: Similarity metrics and efficient optimization. Pattern Analysis and Machine Intelligence, IEEE Transactions on 35, 1221–1233. doi:10.1109/tpami.2012.196.

Wachinger, C., Wein, W., Navab, N., 2007. Three-dimensional ultrasound mosaicing, in: Medical Image Computing and Computer-Assisted Intervention–MICCAI 2007. Springer, pp. 327–335. doi:10.1007/978-3-540-75759-7_40.

Wells, J.A., O'Callaghan, J., Holmes, H., Powell, N.M., Johnson, R., Siow, B., Torrealdea, F., Ismail, O., Walker-Samuel, S., Golay, X., et al., 2015. In vivo imaging of tau pathology using multi-parametric quantitative MRI. NeuroImage 111, 369–378. doi:10.1016/j.neuroimage.2015.02.023.

Wells, W.M., Viola, P., Atsumi, H., Nakajima, S., Kikinis, R., 1996. Multi-modal volume registration by maximization of mutual information. Medical Image Analysis 1, 35–51. doi:10.1016/s1361-8415(96)80004-1.

West, J., Fitzpatrick, J.M., Wang, M.Y., Dawant, B.M., Maurer Jr, C.R., Kessler, R.M., Maciunas, R.J., Barillot, C., Lemoine, D., Collignon, A., et al., 1997. Comparison and evaluation of retrospective intermodality brain image registration techniques. Journal of Computer Assisted Tomography 21, 554–568. doi:10.1097/00004728-199707000-00007.

Wu, G., Jia, H., Wang, Q., Shen, D., 2011. SharpMean: Groupwise registration guided by sharp mean image and tree-based registration. NeuroImage 56, 1968–1981.

Zöllei, L., Learned-Miller, E., Grimson, E., Wells, W., 2005. Efficient population registration of 3D data, in: Computer Vision for Biomedical Image Applications. Springer, pp. 291–301. doi:10.1007/11569541_30.
