Academic year: 2021
Cover Page

The handle http://hdl.handle.net/1887/92883 holds various files of this Leiden University dissertation.

Author: Koolstra, K.

2

ACCELERATING COMPRESSED SENSING IN PARALLEL IMAGING RECONSTRUCTIONS USING AN EFFICIENT CIRCULANT PRECONDITIONER FOR CARTESIAN TRAJECTORIES

KIRSTEN KOOLSTRA†, JEROEN VAN GEMERT†, PETER BÖRNERT, ANDREW WEBB, ROB REMIS

ABSTRACT

Purpose: Design of a preconditioner for fast and efficient parallel imaging and compressed sensing reconstructions for Cartesian trajectories.

Theory: Parallel imaging and compressed sensing reconstructions become time consuming when the problem size or the number of coils is large, due to the large linear system of equations that has to be solved in ℓ1- and ℓ2-norm based reconstruction algorithms. Such linear systems can be solved efficiently using effective preconditioning techniques.

Methods: In this paper we construct such a preconditioner by approximating the system matrix of the linear system, which comprises the data fidelity and includes total variation and wavelet regularization, by a matrix that is block circulant with circulant blocks. Due to this structure, the preconditioner can be constructed quickly and its inverse can be evaluated fast using only two fast Fourier transformations. We test the performance of the preconditioner for the conjugate gradient method as the linear solver, integrated into the well-established Split Bregman algorithm.

Results: The designed circulant preconditioner reduces the number of iterations required in the conjugate gradient method by almost a factor of 5. The speed-up results in a total acceleration factor of approximately 2.5 for the entire reconstruction algorithm when implemented in MATLAB, while the initialization time of the preconditioner is negligible.

Conclusion: The proposed preconditioner reduces the reconstruction time for parallel imaging and compressed sensing in a Split Bregman implementation without compromising reconstruction stability, and can easily handle large systems since it is Fourier-based, allowing for efficient computations.

2.1. INTRODUCTION

The undersampling factor in Parallel Imaging (PI) is in theory limited by the number of coil channels [1–4]. Higher factors can be achieved by using Compressed Sensing (CS), which estimates missing information by adding a priori information [5, 6]. The a priori knowledge relies on the sparsity of the image in a certain transform domain. It is possible to combine PI and CS, e.g. [7] and [8], achieving almost order-of-magnitude speed-up factors in cardiac perfusion MRI and enabling free-breathing MRI of the liver [9].

CS allows reconstruction of an estimate of the true image even in the case of considerable undersampling factors, for which the data model generally describes an ill-posed problem without a unique solution. This implies that the true image cannot be found by directly applying Fourier transforms. Instead, regularization is used to solve the ill-posed problem by putting additional constraints on the solution. For CS, such a constraint enforces sparsity of the image in a certain domain, which is promoted by the ℓ0-norm [6, 10, 11]. However, in practice the ℓ1-norm is used instead, as it is the closest representation that is numerically feasible to implement. The wavelet transform and derivative operators, integrated in total variation regularization, are examples of sparsifying transforms that can be used in the spatial direction [8, 12–16] and temporal dimension [9], respectively.

Although CS has led to a considerable reduction in acquisition times in both parallel imaging applications and single coil acquisitions, the benefit of the ℓ1-norm regularization constraint comes with the additional burden of increased reconstruction times, because ℓ1-norm minimization problems are in general difficult to solve.

Many methods have been proposed that solve the problem iteratively [12, 14, 17–23]. In this work, we focus on the Split Bregman (SB) approach because of its computational performance and its well-established track record [14, 24–28]. SB transforms the initial minimization problem, containing both ℓ1- and ℓ2-norm terms, into a set of subproblems that require solving either an ℓ2-norm or an ℓ1-norm minimization problem, each of which can be approached using standard methods.

The most expensive step in SB, which is also present in many other methods, is to solve an ℓ2-norm minimization problem, which can be formulated as a linear least squares problem, e.g. [29]. The system matrix of the least squares problem remains constant throughout the SB iterations, and this feature has been shown to be convenient for finding an approximation of the inverse system matrix, as is done in e.g. [30]. This approach eliminates the need for an iterative scheme to solve the ℓ2-norm minimization problem, but for large problem sizes the initial computational costs are high, making it less profitable in practice.

An alternative approach for eliminating the iterative scheme to solve the ℓ2-norm minimization problem was demonstrated in [31]. In this approach, extra variable splitting is introduced to separate the coil sensitivity matrices from the Fourier transforms, such that all individual subproblems can be solved directly in the case of Cartesian sampling. This can lead to a considerable reduction in reconstruction time, provided that the reconstruction parameters are optimized. Simulations and in vivo experiments showed significant improvements in convergence compared to nonlinear conjugate gradient and monotone fast iterative shrinkage-thresholding algorithms. The extra variable splitting introduces penalty parameters, however, and unstable behavior can occur for certain parameter choices due to nontrivial null-spaces of the operators [31–33]. This can be seen as a drawback of this approach. Furthermore, determining the extra parameters is obviously nonunique. Considering the fact that each image slice would be reconstructed optimally with possibly different reconstruction parameters, we prefer the more straightforward SB scheme. Moreover, for non-Cartesian trajectories, direct solutions are no longer possible and iterative schemes are needed.

Alternatively, to keep the number of reconstruction parameters to a minimum and to minimize possible stability issues, preconditioners can be used to reduce the number of iterations required for solving the least squares problem [34]. The incomplete Cholesky factorization and hierarchically structured matrices are examples of preconditioners that reduce the number of iterations drastically in many applications [35, 36]. The drawback of these types of preconditioners is that the full system matrix needs to be built before the reconstruction starts, which for larger problem sizes can only be done on a very powerful computer due to memory limitations. Although in [37–39] a penta-diagonal matrix was constructed as a preconditioner, solving such a system is still relatively expensive. In addition, before constructing the preconditioner, patient-specific coil sensitivity profiles need to be measured, which often leads to long initialization times. In [31, 40], the extra variable splitting enables building a preconditioner independent of the coil sensitivity maps, resulting in a preconditioner for non-Cartesian reconstructions, but one that is not applicable to the more stable SB scheme.

In this work, we design a Fourier transform-based preconditioner for PI-CS reconstructions with Cartesian trajectories in a stable SB framework. It takes the coil sensitivities into account on a patient-specific basis, has negligible initialization time, and is highly scalable to the large numbers of unknowns often encountered in MRI.

2.2. THEORY

In this section we first describe the general parallel imaging and compressed sensing problems. Subsequently, the Split Bregman algorithm, which is used to solve these problems, is discussed. We then introduce the preconditioner that is used to speed up the PI-CS algorithm and elaborate on its implementation and complexity.

2.2.1. PARALLEL IMAGING RECONSTRUCTION

In parallel imaging with full k-space sampling, the data, including noise, is described by the model

    F S_i x = y_{full,i}    for i = 1, ..., Nc,

where y_{full,i} ∈ C^{N×1} are the fully sampled k-space data sets containing noise for i ∈ {1, ..., Nc}, with Nc the number of coil channels, and x ∈ C^{N×1} is the true image [3]. Here, N = m · n, where m and n define the image matrix size in the x- and y-directions, respectively, for a 2D sampling case. Furthermore, S_i ∈ C^{N×N} are diagonal matrices representing the complex coil sensitivity maps for each channel. Finally, F ∈ C^{N×N} is the discrete two-dimensional Fourier transform matrix. In the case of undersampling, the data is described by the model

    R F S_i x = y_i    for i = 1, ..., Nc,    (2.1)

where y_i ∈ C^{N×1} are the undersampled k-space data sets for i ∈ {1, ..., Nc}, with zeros at non-measured k-space locations. The undersampling pattern is specified by the binary diagonal sampling matrix R ∈ R^{N×N}, so that the undersampled Fourier transform is given by RF. Here it is important to note that R reduces the rank of RFS_i, which means that solving for x in Eq. (2.1) is in general an ill-posed problem for each coil and a unique solution does not exist. However, if the individual coil data sets are combined and the undersampling factor does not exceed the number of coil channels, the image x can in theory be reconstructed by finding the least squares solution, that is, by minimizing

    x̂ = argmin_x { Σ_{i=1}^{Nc} ||R F S_i x − y_i||_2^2 },    (2.2)

where x̂ ∈ C^{N×1} is an estimate of the true image.
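To make the data model concrete, the forward operator R F S_i and its adjoint can be sketched in a few lines of NumPy (an illustrative sketch only; the paper's implementation is in MATLAB, and the coil maps and mask below are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(0)
m = n = 8
N = m * n
Nc = 4

# Hypothetical coil sensitivity maps (the diagonals of S_i), stacked (Nc, N)
S = rng.standard_normal((Nc, N)) + 1j * rng.standard_normal((Nc, N))

# Binary sampling mask (the diagonal of R): keep every other k-space line
mask = np.zeros((m, n)); mask[::2, :] = 1.0
r = mask.ravel()

F  = lambda v: np.fft.fft2(v.reshape(m, n), norm="ortho").ravel()   # unitary F
FH = lambda v: np.fft.ifft2(v.reshape(m, n), norm="ortho").ravel()  # F^H

def forward(x):
    """y_i = R F S_i x for all coils, as in Eq. (2.1)."""
    return np.stack([r * F(S[i] * x) for i in range(Nc)])

def adjoint(y):
    """sum_i S_i^H F^H R^H y_i."""
    return sum(np.conj(S[i]) * FH(r * y[i]) for i in range(Nc))

# Adjoint test: <forward(x), y> == <x, adjoint(y)>
x = rng.standard_normal(N) + 1j * rng.standard_normal(N)
y = rng.standard_normal((Nc, N)) + 1j * rng.standard_normal((Nc, N))
assert np.allclose(np.vdot(y, forward(x)), np.vdot(adjoint(y), x))
```

Minimizing Eq. (2.2) then amounts to solving the normal equations built from these two operators.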

2.2.2. PARALLEL IMAGING RECONSTRUCTION WITH COMPRESSED SENSING

In the case of higher undersampling factors, the problem of solving Eq. (2.2) becomes ill-posed and additional regularization terms need to be introduced to transform the problem into a well-posed one. Since MR images are known to be sparse in some domains, adding ℓ1-norm terms is a suitable choice for regularization. The techniques of parallel imaging and compressed sensing are then combined in the following minimization problem

    x̂ = argmin_x { (μ/2) Σ_{i=1}^{Nc} ||R F S_i x − y_i||_2^2 + (λ/2) (||D_x x||_1 + ||D_y x||_1) + (γ/2) ||W x||_1 },    (2.3)

with μ, λ and γ the regularization parameters for the data fidelity, the total variation, and the wavelet terms, respectively [8]. A total variation regularization constraint is introduced by the first-order derivative matrices D_x, D_y ∈ R^{N×N}, representing the numerical finite difference scheme

    D_x(x)|_{i,j} = x_{i,j} − x_{i−1,j},    i = 2, ..., m,  j = 1, ..., n,
    D_y(x)|_{i,j} = x_{i,j} − x_{i,j−1},    i = 1, ..., m,  j = 2, ..., n,

with periodic boundary conditions

    D_x(x)|_{1,j} = x_{1,j} − x_{m,j},    j = 1, ..., n,
    D_y(x)|_{i,1} = x_{i,1} − x_{i,n},    i = 1, ..., m,

so that D_x and D_y are circulant. A unitary wavelet transform W ∈ R^{N×N} further promotes sparsity of the image in the wavelet domain.
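Because of the periodic boundary conditions, D_x and D_y act as circular convolutions, so D_x^H D_x + D_y^H D_y is diagonalized by the 2D DFT. A small NumPy check of this property (sizes illustrative):

```python
import numpy as np

rng = np.random.default_rng(1)
m, n = 6, 6

# Periodic first-order differences (circulant) and their adjoints
Dx  = lambda v: v - np.roll(v, 1, axis=0)
Dy  = lambda v: v - np.roll(v, 1, axis=1)
DxH = lambda v: v - np.roll(v, -1, axis=0)
DyH = lambda v: v - np.roll(v, -1, axis=1)

# Eigenvalues of a circulant operator = 2D DFT of its impulse response
e = np.zeros((m, n)); e[0, 0] = 1.0
lam = np.abs(np.fft.fft2(Dx(e))) ** 2 + np.abs(np.fft.fft2(Dy(e))) ** 2

x = rng.standard_normal((m, n))
lhs = DxH(Dx(x)) + DyH(Dy(x))                  # operator applied in image space
rhs = np.fft.ifft2(lam * np.fft.fft2(x)).real  # same operator via two FFTs
assert np.allclose(lhs, rhs)
```

Note that lam[0, 0] = 0: the operator has a null space containing the constant image, which is one reason the γ term (with W^H W = I) keeps the final system matrix positive definite.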

2.2.3. SPLIT BREGMAN ITERATIONS

Solving Eq. (2.3) is not straightforward, as the partial derivatives of the ℓ1-norm terms are not well-defined around 0. Instead, the problem is transformed into one that can be solved easily. In this work, we use Split Bregman to convert Eq. (2.3) into multiple minimization problems in which the ℓ1-norm terms have been decoupled from the ℓ2-norm term, as discussed in detail in [14, 24]. For convenience, the Split Bregman method is shown in Algorithm 1. The Bregman parameters b_x, b_y, b_w are introduced by the Bregman scheme, and the auxiliary variables d_x, d_y, d_w are introduced by writing the constrained problem as an unconstrained problem. The algorithm consists of two loops: an outer loop and an inner loop. In the inner loop (steps 4–11), we first compute the vector b that serves as the right-hand side in the system of equations of step 5. Subsequently, the ℓ1-norm subproblems are solved using the shrink function in steps 6–8. Thereafter, the residuals for the regularization terms are computed in steps 9–11 and are fed back into the system by updating the right-hand side vector b in step 5. Steps 4–11 can be repeated several times, but one or two inner iterations are normally sufficient for convergence. Similarly, the outer loop feeds the residual encountered in the data fidelity term back into the system, after which the inner loop is executed again. The system of linear equations,

    A x̂ = b,    (2.4)

in line 5 of the algorithm follows from a standard least squares problem, where the system matrix is given by

    A = μ Σ_{i=1}^{Nc} (R F S_i)^H R F S_i + λ (D_x^H D_x + D_y^H D_y) + γ W^H W,

with right-hand side

    b = μ Σ_{i=1}^{Nc} (R F S_i)^H y_i + λ [D_x^H (d_x^k − b_x^k) + D_y^H (d_y^k − b_y^k)] + γ W^H (d_w^k − b_w^k).

In this work we focus on solving Eq. (2.4), which is computationally the most expensive part of Algorithm 1. It is important to note that the system matrix A remains constant throughout the algorithm and only the right-hand side vector b changes, which allows us to solve Eq. (2.4) efficiently by using preconditioning techniques.
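The matrix A never needs to be formed explicitly: its action on a vector consists only of FFTs, diagonal multiplications and circular differences. A hypothetical NumPy sketch (sizes, maps and mask are illustrative; W^H W = I is used for the wavelet term) that also checks the Hermitian positive-definite property exploited later:

```python
import numpy as np

rng = np.random.default_rng(2)
m = n = 8; Nc = 3
mu, lam_, gam = 1e-3, 4e-3, 1e-3           # regularization parameters
S = rng.standard_normal((Nc, m, n)) + 1j * rng.standard_normal((Nc, m, n))
mask = np.zeros((m, n)); mask[::2, :] = 1.0  # binary sampling pattern R

F  = lambda v: np.fft.fft2(v, norm="ortho")
FH = lambda v: np.fft.ifft2(v, norm="ortho")
Dx  = lambda v: v - np.roll(v, 1, axis=0)
DxH = lambda v: v - np.roll(v, -1, axis=0)
Dy  = lambda v: v - np.roll(v, 1, axis=1)
DyH = lambda v: v - np.roll(v, -1, axis=1)

def A_apply(x):
    # mu * sum_i S_i^H F^H R^H R F S_i x + lam*(Dx^H Dx + Dy^H Dy) x + gam * x
    data = sum(np.conj(S[i]) * FH(mask * F(S[i] * x)) for i in range(Nc))
    return mu * data + lam_ * (DxH(Dx(x)) + DyH(Dy(x))) + gam * x

x = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
y = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
# A is Hermitian: <Ax, y> == <x, Ay>, and positive definite: <x, Ax> > 0
assert np.allclose(np.vdot(y, A_apply(x)), np.vdot(A_apply(y), x))
assert np.vdot(x, A_apply(x)).real > 0
```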

2.2.4. STRUCTURE OF THE SYSTEM MATRIX A

The orthogonal wavelet transform is unitary, so that W^H W = I. Furthermore, the derivative operators are constructed such that the matrices D_x, D_y, D_x^H and D_y^H are block circulant with circulant blocks (BCCB). The product and the sum of two BCCB matrices are again BCCB, showing that D_x^H D_x + D_y^H D_y is also BCCB. These types of matrices are diagonalized by the two-dimensional Fourier transformation, that is,

    D_1 = F C F^H    or    D_2 = F^H C F,

where C is a BCCB matrix and D_1 and D_2 are diagonal matrices. This motivates us to write the system matrix A in Eq. (2.4) in the form

    A = F^H F A F^H F = F^H K F,    (2.5)

with K ∈ C^{N×N} given by

    K = μ K_c + λ K_d + γ K_w,    (2.6)

where K_c = Σ_{i=1}^{Nc} F S_i^H F^H R^H R F S_i F^H, K_d = F (D_x^H D_x + D_y^H D_y) F^H, and K_w = I.

The term D_x^H D_x + D_y^H D_y is BCCB, so that K_d in K becomes diagonal. If there is no sensitivity encoding, that is, S_i = I for all i ∈ {1, ..., Nc}, the entire matrix K becomes diagonal, in which case the solution x̂ can be found efficiently by computing

    x̂ = A^{−1} b = F^H K^{−1} F b    (2.7)

for invertible K. In practice, Fast Fourier Transforms (FFTs) are used for this step. With sensitivity encoding, S_i ≠ I and S_i^H F^H R^H R F S_i is not BCCB for any i, hence the matrix K is not diagonal. In that case we prefer to solve Eq. (2.4) iteratively, since finding K^{−1} is now computationally too expensive. It can be observed that the system matrix A is Hermitian and positive definite, which motivates the choice of the conjugate gradient (CG) method as an iterative solver.

2.2.5. PRECONDITIONING

A preconditioner M ∈ C^{N×N} can be used to reduce the number of iterations required for CG convergence [41]. It should satisfy the following conditions:

1. M^{−1} A ≈ I, to cluster the eigenvalues of the matrix pair around 1, and
2. the determination of M^{−1} and its evaluation on a vector should be computationally cheap.

Algorithm 1: Split Bregman Iteration
1: Initialize y_i^[1] = y_i for i = 1, ..., Nc, x^[1] = Sum of Squares(F^H y_i, i = 1, ..., Nc),
   and b_x^[1], b_y^[1], b_w^[1], d_x^[1], d_y^[1], d_w^[1] = 0
2: for j = 1 to nOuter do
3:   for k = 1 to nInner do
4:     b = μ Σ_{i=1}^{Nc} S_i^H F^H R^H y_i^[j] + λ [D_x^H(d_x^[k] − b_x^[k]) + D_y^H(d_y^[k] − b_y^[k])] + γ W^H(d_w^[k] − b_w^[k])
5:     solve A x^[k+1] = b with x^[k] as initial guess

Ideally, we would like to use a diagonal matrix as the preconditioner, as this is computationally inexpensive. For this reason, the Jacobi preconditioner, which takes the diagonal elements of the matrix A as its input, is used in many applications. However, for the current application of PI and CS the Jacobi preconditioner is not efficient, since it does not provide an accurate approximate inverse of the system matrix A. In this work, we use a different approach and approximate the diagonal of K in Eq. (2.6) instead. The motivation behind this approach is that the Fourier matrices in K center a large part of the information contained in S_i^H F^H R^H R F S_i around the main diagonal of K, so that neglecting the off-diagonal elements of K has less effect than neglecting the off-diagonal elements of A.

For the preconditioner used in this work we approximate A^{−1} by

    M^{−1} = F^H diag{k}^{−1} F,    (2.8)

where diag{·} places the elements of its argument on the diagonal of a matrix. Furthermore, the vector k is the diagonal of the matrix K and can be written as

    k = μ k_c + λ k_d + γ k_w,    (2.9)

where k_c, k_d and k_w are the diagonals of K_c, K_d and K_w, respectively. Note that K_d and K_w are diagonal matrices already, so that only the k_c term introduces an approximation in the inverse of the final system matrix A.
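Putting Eqs. (2.8)–(2.9) to work, the following self-contained NumPy sketch builds k, applies M^{−1} with two FFTs, and uses it inside a preconditioned CG solver (the paper's implementation is in MATLAB; the coil maps, mask and sizes here are illustrative, and for this small demo k_c is read off an explicitly built C_i = F S_i F^H rather than with the FFT-only shortcut of Section 2.2.6):

```python
import numpy as np

rng = np.random.default_rng(5)
m = n = 16; N = m * n; Nc = 4
mu, lam_, gam = 1e-3, 4e-3, 1e-3          # parameter set 1 of Section 2.3.5

# Hypothetical smooth coil maps (Gaussian bumps) and a random binary mask
yy, xx = np.mgrid[0:m, 0:n]
centers = [(0, 0), (0, n - 1), (m - 1, 0), (m - 1, n - 1)]
S = np.stack([np.exp(-((yy - cy) ** 2 + (xx - cx) ** 2) / (2 * (m / 2.0) ** 2))
              for cy, cx in centers]).astype(complex)
mask = (rng.random((m, n)) < 0.5).astype(float); mask[0, :] = 1.0

F2  = lambda v: np.fft.fft2(v, norm="ortho")
F2H = lambda v: np.fft.ifft2(v, norm="ortho")
Dx  = lambda v: v - np.roll(v, 1, axis=0)
DxH = lambda v: v - np.roll(v, -1, axis=0)
Dy  = lambda v: v - np.roll(v, 1, axis=1)
DyH = lambda v: v - np.roll(v, -1, axis=1)

def A_apply(x):
    data = sum(np.conj(S[i]) * F2H(mask * F2(S[i] * x)) for i in range(Nc))
    return mu * data + lam_ * (DxH(Dx(x)) + DyH(Dy(x))) + gam * x

# Diagonal k of K, Eq. (2.9); k_c = diag(sum_i C_i^H R C_i) built explicitly
Fm = np.fft.fft(np.eye(m), norm="ortho")
Fn = np.fft.fft(np.eye(n), norm="ortho")
Fbig = np.kron(Fm, Fn)                 # 2D DFT on row-major vectorized images
r = mask.ravel()
kc = np.zeros(N)
for i in range(Nc):
    Ci = Fbig @ np.diag(S[i].ravel()) @ Fbig.conj().T   # BCCB matrix C_i
    kc += (r[:, None] * np.abs(Ci) ** 2).sum(axis=0)    # diag of C_i^H R C_i
e = np.zeros((m, n)); e[0, 0] = 1.0
kd = (np.abs(np.fft.fft2(Dx(e))) ** 2 + np.abs(np.fft.fft2(Dy(e))) ** 2).ravel()
k = mu * kc + lam_ * kd + gam          # k_w = 1

def M_inv(v):                          # M^{-1} v = F^H diag(k)^{-1} F v
    return F2H((F2(v).ravel() / k).reshape(m, n))

def cg(apply_A, b, precond=lambda v: v, tol=1e-8, maxit=500):
    x = np.zeros_like(b); res = b - apply_A(x); z = precond(res); p = z
    rz = np.vdot(res, z); nb = np.linalg.norm(b); it = 0
    while np.linalg.norm(res) > tol * nb and it < maxit:
        Ap = apply_A(p); alpha = rz / np.vdot(p, Ap)
        x = x + alpha * p; res = res - alpha * Ap
        z = precond(res); rz_new = np.vdot(res, z)
        p = z + (rz_new / rz) * p; rz = rz_new; it += 1
    return x, it

x_true = rng.standard_normal((m, n)) + 1j * rng.standard_normal((m, n))
b = A_apply(x_true)
x_cg,  it_cg  = cg(A_apply, b)
x_pcg, it_pcg = cg(A_apply, b, precond=M_inv)
assert np.allclose(x_cg,  x_true, atol=1e-4)
assert np.allclose(x_pcg, x_true, atol=1e-4)
```

Both runs recover x_true; the preconditioned run applies M^{−1} once per iteration at the cost of two extra FFTs and one division.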

2.2.6. EFFICIENT IMPLEMENTATION OF THE PRECONDITIONER

The diagonal elements k_{c;i} of

    K_{c;i} = (F S_i^H F^H)(R^H R)(F S_i F^H) = C_i^H R C_i

for a certain i are found by noting that C_i = F S_i F^H is in fact a BCCB matrix, and that R^H R = R for the binary sampling matrix. The diagonal elements k_{c;i} of K_{c;i} can now be found on the diagonal of C_i^H R C_i, so that

    k_{c;i} = Σ_{j=1}^{N} e_j (c_{j;i}^H R c_{j;i}),

with c_{j;i}^H being the j-th row of the matrix C_i^H and e_j the j-th standard basis vector. Note that the scalar (c_{j;i}^H R c_{j;i}) is the j-th entry of the vector k_{c;i}. Since R is a diagonal matrix which can be written as R = diag{r}, we can also write

    k_{c;i} = (c_{1;i}^H ◦ c_{1;i}^T)^T ∗ r,    (2.10)

where ◦ denotes the element-wise (Hadamard) product and ∗ denotes circular convolution. Since the element-wise product of two BCCB matrices is again a BCCB matrix, the circular convolution theorem [42, 43] tells us that

    F k_{c;i} = F [(c_{1;i}^H ◦ c_{1;i}^T)^T ∗ r] = F [(c_{1;i}^H ◦ c_{1;i}^T)^T] ◦ F r.

The resulting matrix-vector product in Eq. (2.10) can now be computed efficiently as

    k_{c;i} = F^H { F [(c_{1;i}^H ◦ c_{1;i}^T)^T] ◦ F r }.    (2.11)

Finally, the diagonal elements d of a diagonal matrix D with structure D = F C F^H can be computed efficiently by using d = F c_1, where c_1 is the first row of C. Therefore, the first row c_{1;i}^H of the matrix C_i^H is found as (c_{1;i}^H)^T = F^H (s_i^H)^T, with s_i^H a row vector containing the diagonal elements of the matrix S_i. For multiple coils, Eq. (2.11) becomes

    k_c = F^H { [F Σ_{i=1}^{Nc} (c_{1;i}^H ◦ c_{1;i}^T)^T] ◦ F r },    (2.12)

where the action of the Fourier matrix on a vector can be computed efficiently using the FFT.

Since D_x^H D_x + D_y^H D_y is BCCB, the elements of k_d can be found quickly by evaluating k_d = F t_1, where t_1 is the first row of D_x^H D_x + D_y^H D_y. Finally, the elements of k_w are all equal to one, since K_w is the identity matrix.

Alternatively, in Eq. (2.2) the summation over the coil sensitivity matrices can be removed by stacking the matrices. The derivation following this approach can be found online as supporting information.
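The key identity used above — the diagonal of D = F C F^H is the DFT of the generating vector of the circulant matrix C — can be checked on a small example (whether that vector is the first row or the first column depends on the DFT sign convention; with NumPy's convention it is the first column):

```python
import numpy as np

rng = np.random.default_rng(4)
N = 8
c = rng.standard_normal(N) + 1j * rng.standard_normal(N)

# Circulant matrix with first column c: C[j, k] = c[(j - k) mod N]
C = np.array([[c[(j - k) % N] for k in range(N)] for j in range(N)])

# Unitary DFT matrix
F = np.fft.fft(np.eye(N), norm="ortho")

D = F @ C @ F.conj().T          # numerically diagonal
d = np.fft.fft(c)               # unnormalized DFT of the generating column

assert np.allclose(np.diag(D), d)
assert np.allclose(D - np.diag(np.diag(D)), 0, atol=1e-10)
```

The same identity, applied blockwise, is what lets k_c and k_d be formed with a handful of FFTs instead of ever building K.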

2.2.7. COMPLEXITY

For every inner iteration of the Split Bregman algorithm we need to solve the linear system given in Eq. (2.4), which is done iteratively using the Preconditioned Conjugate Gradient (PCG) method. In this method, the preconditioner constructed above is used as a left preconditioner by solving the following system of equations:

    M^{−1} A x̂ = M^{−1} b,    (2.13)

where x̂ is the approximate solution constructed by the PCG algorithm. In PCG this implies that for every iteration the preconditioner should be applied once to the residual vector r = A x̂ − b. The preconditioner M can be constructed beforehand, since it remains fixed for the entire Split Bregman algorithm as the regularization parameters μ, λ, and γ are constant. As can be seen in Table 2.1, M^{−1} is constructed in (3 + 2Nc)N + (4 + Nc)N log N FLOPS. Evaluation of the diagonal preconditioner M^{−1} from Eq. (2.8) on a vector amounts to two Fourier transforms and a single multiplication, and therefore requires N + 2N log N FLOPS.

To put this into perspective, evaluation of the matrix A on a vector requires (6 + 4Nc)N + 2Nc N log N FLOPS, as shown in Table 2.1.

Table 2.1: FLOPS required for construction of M^{−1} and for evaluation of A on a vector.

Construction of M^{−1}:
    (c_{1;i}^H)^T = F^H (s_i^H)^T, ∀i ∈ {1, ..., Nc}        Nc N log N
    Σ_i (c_{1;i}^H ◦ c_{1;i}^T)^T                           2Nc N − N
    F^H [F(...) ◦ F(...)]                                   N + 3N log N
    k_d = F^H t_1                                           N log N
    k = k_c + k_d + k_w                                     2N
    k^{−1}                                                  N
    Total                                                   (3 + 2Nc)N + (4 + Nc)N log N

Evaluation of A on a vector:
    Σ_{i=1}^{Nc} (R F S_i)^H R F S_i                        Nc(3N + 2N log N) + Nc N − N
    D_x^H D_x + D_y^H D_y                                   5N
    W^H W                                                   0
    Summation of the three terms above                      2N
    Total                                                   (6 + 4Nc)N + 2Nc N log N

The upper bound on the additional cost per iteration, relative to the cost of evaluating A on a vector, is therefore

    lim_{N→∞} (N + 2N log N) / ((6 + 4Nc)N + 2Nc N log N) = 1/Nc,

showing that the preconditioner evaluation step becomes relatively cheaper for an increasing number of coil elements. The scaling of the complexity with respect to the problem size is depicted in Figure 2.1 for a fixed number of coils Nc = 12.
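The limit can be checked numerically; for Nc = 12 the per-iteration overhead ratio approaches 1/Nc ≈ 0.083 from below as N grows (the logarithm base drops out of the limit):

```python
import math

Nc = 12

def overhead_ratio(N):
    # (N + 2N log N) / ((6 + 4Nc)N + 2Nc N log N), using log base 2
    return (N + 2 * N * math.log2(N)) / \
           ((6 + 4 * Nc) * N + 2 * Nc * N * math.log2(N))

ratios = [overhead_ratio(2 ** p) for p in (16, 24, 40, 60)]
assert all(r1 < r2 for r1, r2 in zip(ratios, ratios[1:]))  # increasing
assert abs(ratios[-1] - 1 / Nc) < 0.005                    # close to 1/Nc
```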

Table 2.2: Initialization times for constructing the preconditioner for different problem sizes. Even for very large problem sizes the initialization time does not exceed 2 seconds. Additional costs are given as a percentage of the total reconstruction time without preconditioning.

    Problem size             128 × 128    256 × 256    512 × 512    1024 × 1024
    Initialization time (s)  0.0395       0.0951       0.3460       1.3371
    Additional costs (%)     1.7          0.85         0.52         0.48

2.3. METHODS

2.3.1. MR DATA ACQUISITION

Figure 2.1: The complexity for different problem sizes. The number of FLOPS for the action of the preconditioner M^{−1} on a vector (black), A on a vector (red), and the combination of the two (yellow) are depicted for Nc = 12.

the in vivo data. A 12-element posterior receiver array, a 15-channel head coil, a 16-channel knee coil (also used for transmission) and a 16-element anterior receiver array were used for reception in the spine, the brain, the knee and the lower legs, respectively. The body coil was used for RF transmission, except for the knee scan.

For the spine data set, T1-weighted images were acquired using a turbo spin-echo (TSE) sequence with the following parameters: field of view (FOV) = 340×340 mm²; in-plane resolution 0.66×0.66 mm²; 4 mm slice thickness; 15 slices; echo time (TE)/repetition time (TR)/TSE factor = 8 ms/648 ms/8; flip angle (FA) = 90°; refocusing angle = 120°; water-fat shift (WFS) = 1.5 pixels; and scan time = 2:12 min. T2-weighted TSE scans had parameters: FOV = 340×340 mm²; in-plane resolution 0.66×0.66 mm²; 4 mm slice thickness; 15 slices; TE/TR/TSE factor = 113 ms/4008 ms/32; FA = 90°; WFS = 1.1 pixels; and scan time = 3:36 min.

For the brain data set, T1-weighted images were acquired using an inversion recovery turbo spin-echo (IR TSE) sequence with parameters: FOV = 230×230 mm²; in-plane resolution 0.90×0.90 mm²; 4 mm slice thickness; 24 slices; TE/TR/TSE factor = 20 ms/2000 ms/8; refocusing angle = 120°; IR delay = 800 ms; WFS = 2.6 pixels; and scan time = 05:50 min. T2*-weighted images were measured using a gradient echo (FFE) sequence with parameters: FOV = 230×230 mm²; in-plane resolution 0.90×0.90 mm²; 4 mm slice thickness; 28 slices; TE/TR = 16 ms/817 ms; FA = 18°; WFS = 2 pixels; and scan time = 3:33 min.

For the knee data set, T1-weighted images were acquired using an FFE sequence with parameters: FOV = 160×160 mm²; in-plane resolution 1.25×1.25 mm²; 3 mm slice thickness; 32 slices; TE/TR = 10 ms/455 ms; FA = 90°; WFS = 1.4 pixels; and scan time = 1:01 min.

For the calf data set, T1-weighted images were acquired using an FFE sequence with parameters: FOV = 300×300 mm²; in-plane resolution 1.17×1.17 mm²; 7 mm slice thickness; 24 slices; TE/TR = 16 ms/500 ms; FA = 90°; WFS = 1.5 pixels; and scan time = 2:11 min.

The different acquisition techniques (TSE, FFE) were chosen to address different basic contrasts used in daily clinical practice. Undersampling in the case of non-stationary echo signals, such as during a T2-decaying TSE train, can result in image quality degradation. This effect can be mitigated, for example, in TSE scans using variable refocusing angle schemes as outlined in [44].

To show the performance of the preconditioner also in the presence of these and similar effects, scans in the brain were acquired directly in undersampled mode employing a simple variable density sampling pattern, with acceleration factors R = 2 and R = 3. To validate the results, fully sampled data was acquired as well in a separate scan. Data were acquired for a T2-weighted TSE scan (R = 2; FOV = 230×230 mm²; in-plane resolution 0.90×0.90 mm²; 4 mm slice thickness; 1 slice; TE/TR/TSE factor = 80 ms/3000 ms/16; refocusing angle = 120°; WFS = 2.5 pixels; and scan time = 00:30 min), a FLAIR scan (R = 2; FOV = 240×224 mm²; in-plane resolution 1.0×1.0 mm²; 4 mm slice thickness; 1 slice; TE/TR/TSE factor = 120 ms/9000 ms/24; IR delay = 2500 ms; refocusing angle = 110°; WFS = 2.7 pixels; and scan time = 01:30 min) and a 3D magnetization-prepared T1-weighted turbo field echo (TFE) scan (R = 3; FOV = 250×240×224 mm³; 1.0 mm³ isotropic resolution; TE/TR = 4.6 ms/9.9 ms; TFE factor = 112; TFE prepulse delay = 1050 ms; flip angle = 8°; WFS = 0.5 pixels; and scan time = 04:17 min).

2.3.2. COIL SENSITIVITY MAPS

Unprocessed k-space data was stored per channel and used to construct complex coil sensitivity maps for each channel [45]. Note that the coil sensitivity maps are normalized such that

    Ŝ_i = [Σ_{j=1}^{Nc} S_j^H S_j]^{−1/2} S_i    for i = 1, ..., Nc.

The normalized coil sensitivity maps were given zero intensity outside the subject, resulting in an improved SNR of the final reconstructed image. For the data model to be consistent, the individual coil images were also normalized according to

    m_i = Ŝ_i Σ_{j=1}^{Nc} Ŝ_j^H m_j    for i = 1, ..., Nc.
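Because the S_i are diagonal, the map normalization above reduces to a pixelwise division by the root-sum-of-squares of the maps; a quick NumPy check (random maps purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(6)
Nc, m, n = 4, 8, 8
S = rng.standard_normal((Nc, m, n)) + 1j * rng.standard_normal((Nc, m, n))

# S_i diagonal  =>  S_hat_i = S_i / sqrt(sum_j |S_j|^2), pixel by pixel
rss = np.sqrt(np.sum(np.abs(S) ** 2, axis=0))
S_hat = S / rss

# After normalization, sum_i S_hat_i^H S_hat_i = I (pixelwise)
assert np.allclose(np.sum(np.abs(S_hat) ** 2, axis=0), 1.0)
```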

2.3.3. COIL COMPRESSION

Reconstruction of the spine data set was performed with and without coil compression. A compression matrix was constructed as in [46] and multiplied by the normalized individual coil images and the coil sensitivity maps to obtain virtual coil images and sensitivity maps. The six least dominant virtual coils were ignored to speed up the reconstruction.
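An illustrative sketch of SVD-based coil compression in NumPy (the exact compression matrix of [46] may differ; here 12 physical coils are reduced to 6 virtual coils, matching the spine experiment, and the data are random placeholders):

```python
import numpy as np

rng = np.random.default_rng(7)
Nc, Nv, N = 12, 6, 64               # physical coils, kept virtual coils, pixels
coil_images = rng.standard_normal((Nc, N)) + 1j * rng.standard_normal((Nc, N))

# Principal components of the stacked coil data define the virtual coils
U, s, Vh = np.linalg.svd(coil_images, full_matrices=False)
Acomp = U[:, :Nv].conj().T           # Nv x Nc compression matrix
virtual = Acomp @ coil_images        # virtual coil images

# Fraction of signal energy retained by the kept virtual coils
retained = np.sum(s[:Nv] ** 2) / np.sum(s ** 2)
assert virtual.shape == (Nv, N)
assert 0 < retained <= 1
```

The same matrix Acomp is applied to the sensitivity maps so that the compressed data model stays consistent.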

2.3.4. UNDERSAMPLING

Two variable density undersampling schemes were studied: a random line pattern in the foot-head direction, referred to as structured sampling, and a fully random pattern, referred to as random sampling. Different undersampling factors were used for both schemes.
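The two mask types can be sketched as follows (the paper does not specify its exact density function; a Gaussian variable-density weighting is assumed here purely for illustration):

```python
import numpy as np

rng = np.random.default_rng(8)
m = n = 64
R = 4                                        # undersampling factor

# Variable-density weight: denser sampling near the k-space center
w_line = np.exp(-((np.arange(m) - m // 2) / (m / 4.0)) ** 2)

# Structured sampling: full readout lines at randomly chosen phase encodes
lines = rng.choice(m, size=m // R, replace=False, p=w_line / w_line.sum())
mask_struct = np.zeros((m, n)); mask_struct[lines, :] = 1.0

# Random sampling: individual k-space points, same variable-density idea
w2d = np.outer(w_line, np.exp(-((np.arange(n) - n // 2) / (n / 4.0)) ** 2))
p2d = np.minimum(1.0, w2d / w2d.mean() / R)
mask_rand = (rng.random((m, n)) < p2d).astype(float)

assert mask_struct.sum() == (m // R) * n     # exactly m/R lines kept
assert 0 < mask_rand.mean() < 1
```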

2.3.5. RECONSTRUCTION

The Split Bregman algorithm was implemented in MATLAB (The MathWorks, Inc.). All image reconstructions were performed in 2D on a Windows 64-bit machine with an Intel i3-4160 CPU @ 3.6 GHz and 8 GB internal memory.

Reconstructions were performed for reconstruction matrix sizes of 128×128, 256×256, and 512×512, and the largest reconstruction matrix was interpolated to obtain a simulated data set of size 1024×1024 for theoretical comparison. For prospectively undersampled scans, additional matrix sizes of 240×224 were acquired. For the 3D scan, an FFT was first performed along the readout direction, after which one slice was selected. To investigate the effect of the regularization parameters on the performance of the preconditioner, three different regularization parameter sets were chosen:

1. Set 1: μ = 10^{-3}, λ = 4·10^{-3}, and γ = 10^{-3};
2. Set 2: μ = 10^{-2}, λ = 4·10^{-3}, and γ = 10^{-3};
3. Set 3: μ = 10^{-3}, λ = 4·10^{-3}, and γ = 4·10^{-3}.

The Daubechies-4 wavelet transform was used for W. Furthermore, the SB algorithm was performed with an inner loop of one iteration and an outer loop of 20 iterations. The tolerance (relative residual norm) in the PCG algorithm was set to ε = 10^{-3}.

2.4. RESULTS

Figure 2.2 shows the T1-weighted TSE spine images for a reconstruction matrix size of 512×512, reconstructed with the SB implementation for a fully sampled data set and for undersampling factors of four (R = 4) and eight (R = 8), where structured Cartesian sampling masks were used. The quality of the reconstructed images for R = 4 and R = 8 demonstrates the performance of the compressed sensing algorithm. The differences between the fully sampled and undersampled reconstructed images are shown (magnified five times) in Figure 2.2d and Figure 2.2e for R = 4 and R = 8, respectively.

The fully built system matrix A = F^H K F is compared with its circulant approximation F^H diag{k} F in Figure 2.3a for both structured and random Cartesian undersampling in the spine, without regularization, to focus on the approximated term containing the coil sensitivities. The elements of A contain many zeros due to the lack of coil sensitivity in a large part of the image domain when using cropped coil sensitivity maps. These zeros are not present in the circulant approximation, since the circulant property is enforced by neglecting all off-diagonal elements in K. The entries introduced into the circulant approximation do not add relevant information to the system, because the image vector on which the system matrix acts contains zero signal in the region corresponding to the newly introduced entries. For the same reason, the absolute difference maps in the bottom row were masked by the coil-sensitive region of A, showing that the magnitude and phase are well approximated by assuming the circulant property. Figures 2.3b–d show the same results for the brain, the knee and the calves, respectively, demonstrating the generalizability of this approach to different coil set-ups and geometries.

The product of the inverse of the preconditioner, M^{−1}, and the system matrix A is shown for the spine, the brain, the knee and the calves in Figure 2.4a–d, respectively.

Figure 2.2: Reconstruction results for different structured Cartesian undersampling factors. (a) shows the fully sampled scan as a reference, whereas (b) and (c) depict the reconstruction results for undersampling factors four (R = 4) and eight (R = 8), respectively. The absolute difference, magnified five times, is shown in (d) and (e) for R = 4 and R = 8, respectively. The reconstruction matrix has dimensions 512 × 512. Regularization parameters were set to μ = 1·10^{-3}, λ = 4·10^{-3}, and γ = 1·10^{-3}.

Different regularization parameter sets show that the preconditioner is a good approximate inverse, suggesting efficient convergence.

Table 2.2 reports the number of seconds needed to build the circulant preconditioner in MATLAB before the reconstruction starts, for different orders of the reconstruction matrix. Note that the actual number of unknowns in the corresponding systems is equal to the number of elements in the reconstruction matrix, which leads to more than 1 million unknowns for the 1024 × 1024 case. For all matrix sizes the initialization time is negligible compared with the image reconstruction time.

Figure 2.5a shows the number of iterations required for PCG to converge in each Bregman iteration without a preconditioner, with the Jacobi preconditioner, and with the circulant preconditioner, for regularization parameters μ = 10^{-3}, λ = 4·10^{-3} and γ = 10^{-3} and a reconstruction matrix size of 256 × 256. The Jacobi preconditioner does not reduce the number of iterations, which shows that the diagonal of the system matrix A does not contain enough information to result in a good approximation of A^{−1}. Moreover, it shows that the linear system is invariant under scaling. The circulant preconditioner, however, reduces the number of iterations considerably, leading to a total speed-up factor of 4.65 in the PCG part.

Figure 2.3: System matrix and its circulant approximation. The first and the second columns show the system matrix elements for structured and random undersampling and R = 4, respectively, for the spine (a), the brain (b), the knee (c) and the calves (d). The top row depicts the elementwise magnitude for the true system matrix A, the second row depicts the elementwise magnitude for the circulant approximated system matrix, and the bottom row shows the absolute difference between the true system matrix and the circulant approximation. The difference maps were masked by the nonzero region of A, since only elements in the coil-sensitive region of the preconditioner describe the final solution.


Figure 2.4: The new system matrix. The first and second columns show the elements of the effective new system matrix M⁻¹A for structured and random undersampling with R = 4, respectively, for the spine (a), the brain (b), the knee (c) and the calves (d). The rows show this result for the three studied regularization parameter sets.


Figure 2.5: Number of iterations needed per Bregman iteration. The circulant preconditioner reduces the number of iterations considerably compared with the non-preconditioned case. The Jacobi preconditioner does not reduce the number of iterations due to the poor approximation of the system matrix's inverse. (a) depicts the iterations for Set 1 (µ = 1·10⁻³, λ = 4·10⁻³, γ = 1·10⁻³), whereas (b) depicts the iterations for Set 1, Set 2 (µ = 1·10⁻², λ = 4·10⁻³, γ = 1·10⁻³) and Set 3 (µ = 1·10⁻³, λ = 4·10⁻³, γ = 4·10⁻³). The preconditioner shows the largest speed-up factor when the regularization parameters are well balanced. (c) Number of iterations needed per Bregman iteration with and without coil compression applied. The solid lines and the dashed lines depict the results with and without coil compression, respectively.

Figure 2.6: Computation time for 20 Bregman iterations and different problem sizes. (a) Using the preconditioner, the total computation time for the PCG part in 20 Bregman iterations is reduced by more than a factor of 4.5 for all studied problem sizes. (b) The computation time for 20 Bregman iterations of the entire algorithm also includes the Bregman update steps, so that the total speed-up factor is approximately 2.5 for the considered problem sizes. (c) The two methods converge to the same solution, plotted here for R = 4 and a reconstruction matrix size of 256 × 256.

The reduction in PCG iterations lowers the total computation time for the reconstruction algorithm, plotted in Figure 2.6 for different problem sizes. Figure 2.6a shows the total PCG computation time when completing the total SB method, whereas Figure 2.6b shows the total computation time required to complete the entire reconstruction algorithm. A fivefold gain is achieved in the PCG part by reducing the number of PCG iterations, which directly relates to the results shown in Figure 2.5a. The overall gain of the complete algorithm, however, is a factor of 2.5 instead of 5, which can be explained by the computational costs of the update steps outside the PCG iteration loop (see Algorithm 1). Figure 2.6c also shows the error, defined as the normalized 2-norm difference with respect to the fully sampled image, as a function of time. The preconditioned SB scheme converges to the same accuracy as the original SB scheme, since the preconditioner only affects the required number of PCG iterations.
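The iteration reduction achieved by a circulant preconditioner can be reproduced on a classic 1D analogue (a sketch in Python/NumPy, not the chapter's 2D multicoil system): a symmetric banded Toeplitz system preconditioned with its Strang-type circulant approximation, whose inverse is applied with two FFTs and a diagonal division.

```python
import numpy as np

def pcg(A, b, Minv, tol=1e-10, maxit=500):
    """Preconditioned conjugate gradients; returns (solution, iteration count)."""
    x = np.zeros_like(b)
    r = b - A @ x
    z = Minv(r)
    p = z.copy()
    rz = r @ z
    for k in range(1, maxit + 1):
        Ap = A @ p
        alpha = rz / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        if np.linalg.norm(r) < tol * np.linalg.norm(b):
            return x, k
        z = Minv(r)
        rz_new = r @ z
        p = z + (rz_new / rz) * p
        rz = rz_new
    return x, maxit

# Symmetric banded Toeplitz system (a 1D stand-in for the 2D system matrix).
n = 256
t = np.zeros(n)
t[0], t[1], t[2] = 4.0, 1.0, 0.5
A = t[np.abs(np.arange(n)[:, None] - np.arange(n))]

# Strang circulant preconditioner: wrap the band around, then diagonalize by FFT.
c = t.copy()
c[-1], c[-2] = t[1], t[2]
d = np.fft.fft(c).real                                # eigenvalues of the circulant
Minv = lambda r: np.fft.ifft(np.fft.fft(r) / d).real  # two FFTs + diagonal solve

b = np.random.default_rng(1).standard_normal(n)
x0, it_plain = pcg(A, b, lambda r: r)  # no preconditioner
x1, it_circ = pcg(A, b, Minv)          # circulant preconditioner
```

Since the circulant matrix differs from the Toeplitz one only in its corners, the preconditioned spectrum clusters at 1 and PCG converges in far fewer iterations, in the same spirit as the reductions reported above.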

The number of iterations required by PCG for each Bregman iteration is shown in Figure 2.5b for the three parameter sets studied. The preconditioned case always outperforms the non-preconditioned case, but the speed-up factor depends on the regularization parameters. Parameter set 1 depicts the same result as shown in Figure 2.5a and results in the best reconstruction of the fully sampled reference image. In parameter set 2 more weight is given to the data fidelity term by increasing the parameter µ. Since the preconditioner relies on an approximation of the data fidelity term, it performs less optimally than for smaller µ (such as in set 1) during the first few Bregman iterations, but there is still a threefold gain in performance. This behavior was already predicted in Figure 2.4. Finally, there is very little change between parameter set 3 and parameter set 1, because the larger wavelet regularization parameter γ gives more weight to a term that was integrated in the preconditioner in an exact way, as for the total variation term, without any approximations.

Figure 2.5c illustrates the required iterations when half of the coils are taken into account by coil compression. Only a small discrepancy is encountered for the first few iterations, since the global structure and content of the system matrix A remain the same, which demonstrates that coil compression and preconditioning can be combined to optimally reduce the reconstruction time.
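Coil compression in the spirit of [46, 47] can be sketched as an SVD truncation of the multichannel data. The snippet below (illustrative Python/NumPy; the data model is hypothetical, not from this study) compresses 16 strongly correlated channels to 8 virtual coils.

```python
import numpy as np

rng = np.random.default_rng(2)
n_coils, n_samples, rank = 16, 4096, 4

# Hypothetical multicoil k-space data: channels are strongly correlated,
# modeled as a low-rank mixture plus a small amount of complex noise.
latent = rng.standard_normal((rank, n_samples)) + 1j * rng.standard_normal((rank, n_samples))
mix = rng.standard_normal((n_coils, rank))
noise = 0.01 * (rng.standard_normal((n_coils, n_samples))
                + 1j * rng.standard_normal((n_coils, n_samples)))
data = mix @ latent + noise

# Compress 16 physical coils to 8 virtual coils via an SVD truncation.
U, s, _ = np.linalg.svd(data, full_matrices=False)
n_virtual = n_coils // 2
compressed = U[:, :n_virtual].conj().T @ data

# Fraction of the signal energy retained by the virtual coils.
energy = float((s[:n_virtual] ** 2).sum() / (s ** 2).sum())
```

Because the channels are highly correlated, nearly all the signal energy survives the compression, which is why halving the coil count barely affects the preconditioner's performance.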

The method also works for different coil configurations. In Figure 2.7 the result is shown when using the 15-channel head coil for the brain scans, the 16-channel knee coil for a knee scan and the 16-channel receive array for the calf scan. The circulant preconditioner clearly reduces the number of iterations, with an overall speed-up factor of 4.1/4.4 and 4.5 in the PCG part for the brain (TSE/FFE) and the knee, respectively.

Figure 2.8 shows reconstruction results for scans where the data was directly acquired in undersampled mode instead of retrospectively undersampled, for a T2-weighted TSE scan, a FLAIR scan and a 3D magnetization-prepared T1-weighted TFE scan, leading to PCG acceleration factors of 4.2, 5.1 and 5.4, respectively. The convergence behavior is similar to that observed for the retrospectively undersampled data, demonstrating the robustness of the preconditioning approach in realistic scan setups.

The performance of the preconditioner is also stable in the presence of different noise levels, as shown by experiments in which the excitation tip angle was varied from 10 to 90 degrees in a TSE sequence; the results can be found online in Supporting Figure S1.

2.5. DISCUSSION AND CONCLUSION

In this work we have introduced a preconditioner that reduces the reconstruction times for CS and PI problems, without compromising the stability of the numerical SB scheme. Solving an ℓ2-norm minimization problem is the most time-consuming part of this algorithm. This ℓ2-norm minimization problem is written as a linear system of equations.


Figure 2.7: Reconstruction results for different anatomies. (a) shows the fully sampled scan as a reference for the brain, whereas (b) depicts the reconstruction results for an undersampling factor of four (R = 4). The absolute difference, magnified five times, is shown in (c). The reconstruction matrix has dimensions 256 × 256 and regularization parameters were chosen as µ = 1·10⁻³, λ = 4·10⁻³, and γ = 2·10⁻³. The convergence results for the PCG part with and without preconditioner are plotted in (d), showing similar reduction factors as with the posterior coil. Results for the knee are shown in (e)-(h) for a reconstruction matrix size of 128 × 128 and an undersampling factor of 2 (R = 2); regularization parameters were chosen as µ = 0.1, λ = 0.4, and γ = 0.1. Results for the calves are shown in (i)-(l) for a reconstruction matrix size of 256 × 256 and R = 4; regularization parameters were chosen as µ = 0.1, λ = 0.4, and γ = 0.1. Results for the brain FFE scan are shown in (m)-(p) for a reconstruction matrix size of 256 × 256 and R = 3; regularization parameters were chosen as µ = 1, λ = 4, and γ = 1.


Figure 2.8: Reconstruction results for data acquired in fully sampled and undersampled mode. (a) shows a fully sampled scan as a reference for a T2-weighted TSE scan in the brain, whereas (b) depicts the reconstruction results for a prospectively undersampled scan with an acceleration factor of two (R = 2). The reconstruction matrix has dimensions 256 × 256 and regularization parameters were chosen as µ = 1, λ = 4, and γ = 1. The convergence results for the PCG part with and without preconditioner are plotted in (c). Results for the FLAIR brain scan are shown in (d)-(f) for a reconstruction matrix size of 240 × 224 and R = 2; regularization parameters were chosen as µ = 1.4·10², λ = 5.7·10², and γ = 1.4·10². Results for a 3D magnetization-prepared T1-weighted TFE scan in the brain are shown in (g)-(i) for a reconstruction matrix size of 240 × 224 and R = 3; regularization parameters were chosen as µ = 0.5, λ = 2, and γ = 0.5. Note that the data in the left column stem from a different measurement than the data in the middle column.


Applying the preconditioner requires only inverting a diagonal matrix and applying two additional FFTs.

With the designed preconditioner the most expensive ℓ2-norm problem was solved almost 5 times faster than without preconditioning, resulting in an overall speed-up factor of about 2.5. The discrepancy between the two speed-up factors can be explained by the fact that, apart from solving the linear problem, update steps also need to be performed. Step 4 and steps 13-15 of Algorithm 1 are especially time consuming, since for each coil a 2D Fourier transform needs to be performed. Furthermore, the wavelet computations in steps 4, 8, and 11 are time consuming as well. Therefore, speed-up factors higher than 2.5 are expected for an optimized Bregman algorithm. Further acceleration can be obtained through coil compression [46, 47], as the results in this study showed that it has negligible effect on the performance of the preconditioner.

The time required to construct the preconditioner is negligible compared with the reconstruction times, as it involves only a few FFTs. The additional cost of applying the preconditioner to a vector is negligible as well, because it involves only two Fourier transformations and an inexpensive multiplication with a diagonal matrix. Therefore, the method is highly scalable and can handle large problem sizes.
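This two-FFT application can be sketched as follows (illustrative Python/NumPy, shown in 1D for a circulant rather than a BCCB matrix; the 2D case is analogous with 2D FFTs): the eigenvalues of a circulant matrix are the FFT of its first column, so applying the inverse costs O(n log n) instead of the O(n³) of a dense solve.

```python
import numpy as np

n = 64
c = np.zeros(n)
c[0], c[1], c[-1] = 3.0, 1.0, 1.0  # first column of a circulant matrix M
M = np.stack([np.roll(c, k) for k in range(n)], axis=1)  # dense M for reference
d = np.fft.fft(c)                  # eigenvalues of M: FFT of the first column

v = np.random.default_rng(3).standard_normal(n)
fast = np.fft.ifft(np.fft.fft(v) / d).real  # M^{-1} v via two FFTs + diagonal division
ref = np.linalg.solve(M, v)                 # dense solve for comparison
```

The FFT-based result matches the dense solve to machine precision, illustrating why both constructing and applying the preconditioner are negligible in cost.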

The preconditioner works optimally when the regularization terms in the minimization problem are BCCB matrices in the final system matrix. This implies that the total variation operators should be chosen such that the final total variation matrix is BCCB, and that the wavelet transform should be unitary. Both the system matrix and the preconditioner can be easily adjusted to support single regularization instead of the combination of two regularization approaches.
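The circulant structure of a periodic total variation term can be verified directly (a 1D sketch in Python/NumPy; the 2D BCCB case is analogous): for a cyclic difference operator D, the matrix DᵀD entering the system matrix is circulant, so its eigenvalues follow from the FFT of its first column.

```python
import numpy as np

n = 32
# Periodic (cyclic) difference operator D: (D v)_i = v_{i-1} - v_i.
D = -np.eye(n) + np.roll(np.eye(n), -1, axis=1)
DtD = D.T @ D                      # the TV term as it enters the system matrix

# Because DtD is circulant, its eigenvalues are the FFT of its first column.
eig_fft = np.fft.fft(DtD[:, 0])
eig_ref = np.linalg.eigvalsh(DtD)  # dense reference, ascending order
```

The FFT of the first column reproduces the full eigendecomposition, which is exactly the property that lets the total variation term be absorbed into the preconditioner without approximation.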

The BCCB approximation for the data fidelity term supports both structured and random Cartesian undersampling patterns and works well for different undersampling factors. The performance of the preconditioner was experimentally validated using a variable-density sampling scheme to prospectively undersample the data. The convergence behavior shows similar results to the retrospectively undersampled case.
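A variable-density Cartesian line mask of the kind used for the prospective experiments can be generated as follows (a hypothetical sketch in Python/NumPy; the density profile, its exponent, and the size of the fully sampled center are illustrative choices, not the ones used in this study).

```python
import numpy as np

def vd_mask(n, R, seed=0):
    """Hypothetical variable-density Cartesian line mask with a fully sampled center."""
    rng = np.random.default_rng(seed)
    dist = np.abs(np.arange(n) - n // 2) / (n // 2)   # normalized distance to k-space center
    p = (1.0 - dist) ** 4                             # density decays toward high frequencies
    p[np.abs(np.arange(n) - n // 2) < n // 32] = 1.0  # always keep the center lines
    p *= (n / R) / p.sum()                            # aim at roughly n/R sampled lines
    return rng.random(n) < np.clip(p, 0.0, 1.0)

mask = vd_mask(256, 4)
```

Each True entry marks a fully acquired k-space line; the densely sampled center preserves the low frequencies on which the BCCB approximation, like the reconstruction itself, relies most.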

The regularization parameters were shown to influence the performance of the preconditioner. Since the only approximation in the preconditioner comes from the approximation of the data fidelity term, the preconditioner results in poorer performance if the data fidelity term is very large compared with the regularization terms. In practice, such a situation is not likely to occur if the regularization parameters are chosen such that an optimal image quality is obtained in the reconstructed image. In this work, the regularization parameters were chosen empirically and were kept constant throughout the algorithm. For SB-type methods, however, updating the regularization parameters during the algorithm makes the performance of the algorithm less dependent on the initial choice of the parameters [48]. Moreover, it might result in improved convergence, from which our work can benefit.

This work focused on the linear part of the SB method, in which only the right-hand side vector changes in each iteration and not the system matrix. Other ℓ1-norm minimization algorithms exist that require a linear solver [49], such as IRLS or Second-Order Cone Programming. For those types of algorithms linear preconditioning techniques can be applied as well. Although the actual choice of the preconditioner depends on the system matrix of the linear problem, which is in general different for different minimization algorithms, techniques similar to those used in the current work can be exploited to construct a preconditioner for other minimization algorithms.


As outlined earlier in the introduction, there are alternative approaches to eliminating the iterative scheme for solving the ℓ2-norm minimization problem. Although a detailed comparison of techniques is difficult due to the required choice of reconstruction parameters, it is worth noting that in [31] a comparison was made between the non-preconditioned SB scheme, which we also use as a baseline in our work, and the authors' extra variable splitting method. Their results suggest that the preconditioned SB scheme with an acceleration factor of 2.5 performs very similarly to the method adopting extra variable splitting. Moreover, variable splitting is not possible for non-Cartesian data acquisition, whereas non-Cartesian trajectories are easily incorporated into the preconditioned SB approach. In this extension, the block circulant matrix with circulant blocks is replaced by a block Toeplitz matrix with Toeplitz blocks [40]. Given the promising results for Cartesian trajectories, future work will therefore focus on including non-Cartesian data trajectories in a single unified preconditioned SB framework.

Another large group of reconstruction algorithms involves gradient update steps; examples in this group are the Iterative Shrinkage-Thresholding Algorithm (ISTA), FISTA, MFISTA, and BARISTA [21, 50–52]. In [52] it was discussed that the performance of FISTA, whose convergence depends on the maximum sum of squared absolute coil sensitivity values, can be poor due to large variations in coil sensitivities. In our work, however, the coil sensitivity maps were normalized such that the corresponding sum-of-squares map is constant and equal to one at each spatial location within the object region. The normalization of these coil sensitivities might therefore lead to acceleration of FISTA-type algorithms. Thus, it would be interesting to compare the performance of the preconditioned SB algorithm with that of FISTA when incorporating normalized coil sensitivities into both algorithms.
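The sum-of-squares normalization of the sensitivity maps can be sketched in a few lines (illustrative Python/NumPy with hypothetical random maps; real maps would come from a calibration method such as ESPIRiT [45]).

```python
import numpy as np

rng = np.random.default_rng(4)
n_coils, ny, nx = 8, 64, 64
# Hypothetical complex coil sensitivity maps on a 64 x 64 grid.
sens = rng.standard_normal((n_coils, ny, nx)) + 1j * rng.standard_normal((n_coils, ny, nx))

# Normalize so that the sum-of-squares map equals one at every pixel.
rss = np.sqrt((np.abs(sens) ** 2).sum(axis=0))  # root-sum-of-squares over coils
sens_norm = sens / rss                          # broadcast over the coil axis
check = (np.abs(sens_norm) ** 2).sum(axis=0)    # should be identically one
```

After this step the coil-combination operator has a flat spatial response, which is the property exploited both by the BCCB approximation in this chapter and, potentially, by FISTA-type step-size bounds.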

In conclusion, the designed FFT-based preconditioner considerably reduces the number of iterations required for solving the linear problem in the SB algorithm, resulting in an overall acceleration factor of 2.5 for PI-CS reconstructions. The approach works for different coil-array configurations, MR sequences, and non-power-of-two acquisition matrices, and the time to construct the preconditioner is negligible. Therefore, it can be easily used and implemented, allowing for efficient computations.

ACKNOWLEDGEMENT

We would like to thank Ad Moerland from Philips Healthcare Best (The Netherlands) and Mariya Doneva from Philips Research Hamburg (Germany) for helpful discussions on reconstruction. Furthermore, we express our gratitude towards the reviewers for their constructive criticism.


REFERENCES

[1] A. Deshmane, V. Gulani, M. A. Griswold, and N. Seiberlich, Parallel MR imaging, Journal of Magnetic Resonance Imaging 36, 55–72 (2012).

[2] M. Blaimer, F. Breuer, M. Mueller, R. M. Heidemann, M. A. Griswold, and P. M. Jakob, SMASH, SENSE, PILS, GRAPPA: how to choose the optimal method, Topics in Magnetic Resonance Imaging 15, 223–236 (2004).

[3] K. P. Pruessmann, M. Weiger, M. B. Scheidegger, and P. Boesiger, SENSE: sensitivity encoding for fast MRI, Magnetic Resonance in Medicine 42, 952–962 (1999).

[4] M. A. Griswold, P. M. Jakob, R. M. Heidemann, et al., Generalized autocalibrating partially parallel acquisitions (GRAPPA), Magnetic Resonance in Medicine 47, 1202–1210 (2002).

[5] D. L. Donoho, M. Elad, and V. N. Temlyakov, Stable recovery of sparse overcomplete representations in the presence of noise, IEEE Transactions on Information Theory 52, 6–18 (2005).

[6] M. Lustig, D. Donoho, and J. M. Pauly, Sparse MRI: The application of compressed sensing for rapid MR imaging, Magnetic Resonance in Medicine 58, 1182–1195 (2007).

[7] R. Otazo, D. Kim, L. Axel, and D. K. Sodickson, Combination of compressed sensing and parallel imaging for highly accelerated first-pass cardiac perfusion MRI, Magnetic Resonance in Medicine 64, 767–776 (2010).

[8] D. Liang, B. Liu, J. Wang, and L. Ying, Accelerating SENSE using compressed sensing, Magnetic Resonance in Medicine 62, 1574–1584 (2009).

[9] H. Chandarana, L. Feng, T. K. Block, et al., Free-breathing contrast-enhanced multiphase MRI of the liver using a combination of compressed sensing, parallel imaging, and golden-angle radial sampling, Investigative Radiology 48 (2013).

[10] S. Boyd and L. Vandenberghe, Convex optimization (Cambridge University Press, 2004).

[11] E. J. Candes, M. B. Wakin, and S. P. Boyd, Enhancing sparsity by reweighted ℓ1 minimization, Journal of Fourier Analysis and Applications 14, 877–905 (2008).

[12] K. T. Block, M. Uecker, and J. Frahm, Undersampled radial MRI with multiple coils: iterative image reconstruction using a total variation constraint, Magnetic Resonance in Medicine 57, 1086–1098 (2007).

[13] S. Vasanawala, M. Murphy, M. T. Alley, et al., Practical parallel imaging compressed sensing MRI: Summary of two years of experience in accelerating body MRI of pediatric patients, in IEEE International Symposium on Biomedical Imaging (2011) p. 1039–1043.

[14] T. Goldstein and S. Osher, The split Bregman method for L1-regularized problems, SIAM Journal on Imaging Sciences 2, 323–343 (2009).


[15] M. Murphy, M. Alley, J. Demmel, K. Keutzer, S. Vasanawala, and M. Lustig, Fast ℓ1-SPIRiT compressed sensing parallel imaging MRI: Scalable parallel implementation and clinically feasible runtime, IEEE Transactions on Medical Imaging 31, 1250–1262 (2012).

[16] S. Ma, W. Yin, Y. Zhang, and A. Chakraborty, An efficient algorithm for compressed MR imaging using total variation and wavelets, in IEEE Conference on Computer Vision and Pattern Recognition (2008) p. 1–8.

[17] S. Ramani and J. A. Fessler, An accelerated iterative reweighted least squares algorithm for compressed sensing MRI, in IEEE International Symposium on Biomedical Imaging (2010) p. 257–260.

[18] B. Liu, K. King, M. Steckner, J. Xie, J. Sheng, and L. Ying, Regularized sensitivity encoding (SENSE) reconstruction using Bregman iterations, Magnetic Resonance in Medicine 61, 145–152 (2009).

[19] S. Kim, K. Koh, M. Lustig, and S. Boyd, An efficient method for compressed sensing, in IEEE International Conference on Image Processing (2007) p. III 117–III 120.

[20] E. J. Candes and J. K. Romberg, Signal recovery from random projections, in SPIE Computational Imaging III, Vol. 5674 (2005) p. 76–86.

[21] I. Daubechies, M. Defrise, and C. De Mol, An iterative thresholding algorithm for linear inverse problems with a sparsity constraint, Communications on Pure and Applied Mathematics: A Journal Issued by the Courant Institute of Mathematical Sciences 57, 1413–1457 (2004).

[22] S. G. Lingala, Y. Hu, E. DiBella, and M. Jacob, Accelerated dynamic MRI exploiting sparsity and low-rank structure: kt SLR, IEEE Transactions on Medical Imaging 30, 1042–1054 (2011).

[23] D. L. Donoho, Compressed sensing, IEEE Transactions on Information Theory 52, 1289–1306 (2006).

[24] L. M. Bregman, The relaxation method of finding the common point of convex sets and its application to the solution of problems in convex programming, USSR Computational Mathematics and Mathematical Physics 7, 200–217 (1967).

[25] R.-Q. Jia, H. Zhao, and W. Zhao, Convergence analysis of the Bregman method for the variational model of image denoising, Applied and Computational Harmonic Analysis 27, 367–379 (2009).

[26] S. Setzer, Operator splittings, Bregman methods and frame shrinkage in image processing, International Journal of Computer Vision 92, 265–280 (2011).

[27] J.-F. Cai, S. Osher, and Z. Shen, Split Bregman methods and frame based image restoration, SIAM Journal on Multiscale Modeling and Simulation 8, 337–369 (2009).


[28] O. L. Elvetun and B. F. Nielsen, The split Bregman algorithm applied to PDE-constrained optimization problems with total variation regularization, Computational Optimization and Applications 64, 699–724 (2016).

[29] J. Zhou, J. Li, and J. C. Gombaniro, Combining SENSE and compressed sensing MRI with a fast iterative contourlet thresholding algorithm, in 12th International Conference on Fuzzy Systems and Knowledge Discovery (FSKD) (2015) p. 1123–1127.

[30] S. F. Cauley, Y. Xi, B. Bilgic, et al., Fast reconstruction for multichannel compressed sensing using a hierarchically semiseparable solver, Magnetic Resonance in Medicine 73, 1034–1040 (2015).

[31] S. Ramani and J. A. Fessler, Parallel MR image reconstruction using augmented Lagrangian methods, IEEE Transactions on Medical Imaging 30, 694–706 (2010).

[32] N. Cai, W. Xie, Z. Su, S. Wang, and D. Liang, Sparse parallel MRI based on accelerated operator splitting schemes, Computational and Mathematical Methods in Medicine 2016 (2016).

[33] M. V. Afonso, J. M. Bioucas-Dias, and M. A. Figueiredo, Fast image recovery using variable splitting and constrained optimization, IEEE Transactions on Image Processing 19, 2345–2356 (2010).

[34] M. Benzi, Preconditioning techniques for large linear systems: a survey, Journal of Computational Physics 182, 418–477 (2002).

[35] J. Scott and M. Tùma, On signed incomplete Cholesky factorization preconditioners for saddle-point systems, SIAM Journal on Scientific Computing 36, A2984–A3010 (2014).

[36] M. Baumann, R. Astudillo, Y. Qiu, E. Y. Ang, M. Van Gijzen, and R.-E. Plessix, An MSSS-preconditioned matrix equation approach for the time-harmonic elastic wave equation at multiple frequencies, Computational Geosciences 22, 43–61 (2018).

[37] C. Chen, Y. Li, L. Axel, and J. Huang, Real time dynamic MRI with dynamic total variation, in International Conference on Medical Image Computing and Computer-Assisted Intervention (2014) p. 138–145.

[38] R. Li, Y. Li, R. Fang, S. Zhang, H. Pan, and J. Huang, Fast preconditioning for accelerated multi-contrast MRI reconstruction, in International Conference on Medical Image Computing and Computer-Assisted Intervention (2015) p. 700–707.

[39] Z. Xu, Y. Li, L. Axel, and J. Huang, Efficient preconditioning in joint total variation regularized parallel MRI reconstruction, in International Conference on Medical Image Computing and Computer-Assisted Intervention (2015) p. 563–570.

[40] D. S. Weller, S. Ramani, and J. A. Fessler, Augmented Lagrangian with variable splitting for faster non-Cartesian ℓ1-SPIRiT MR image reconstruction, IEEE Transactions on Medical Imaging 33, 351–361 (2013).


[41] Y. Saad, Iterative methods for sparse linear systems, Vol. 82 (SIAM, 2003).

[42] R. N. Bracewell, The Fourier transform and its applications (McGraw-Hill, New York, 1986).

[43] R. M. Gray, Toeplitz and circulant matrices: A review, Foundations and Trends in Communications and Information Theory 2, 155–239 (2006).

[44] J. P. Mugler III, Optimized three-dimensional fast-spin-echo MRI, Journal of Magnetic Resonance Imaging 39, 745–767 (2014).

[45] M. Uecker, P. Lai, M. J. Murphy, et al., ESPIRiT—an eigenvalue approach to autocalibrating parallel MRI: where SENSE meets GRAPPA, Magnetic Resonance in Medicine 71, 990–1001 (2014).

[46] F. Huang, S. Vijayakumar, Y. Li, S. Hertel, and G. R. Duensing, A software channel compression technique for faster reconstruction with many channels, Magnetic Resonance Imaging 26, 133–141 (2008).

[47] T. Zhang, J. M. Pauly, S. S. Vasanawala, and M. Lustig, Coil compression for accelerated imaging with Cartesian sampling, Magnetic Resonance in Medicine 69, 571–582 (2013).

[48] S. Boyd, N. Parikh, E. Chu, B. Peleato, J. Eckstein, et al., Distributed optimization and statistical learning via the alternating direction method of multipliers, Foundations and Trends in Machine Learning 3, 1–122 (2011).

[49] P. Rodríguez and B. Wohlberg, Efficient minimization method for a generalized total variation functional, IEEE Transactions on Image Processing 18, 322–332 (2008).

[50] A. Beck and M. Teboulle, A fast iterative shrinkage-thresholding algorithm for linear inverse problems, SIAM Journal on Imaging Sciences 2, 183–202 (2009).

[51] A. Beck and M. Teboulle, Fast gradient-based algorithms for constrained total variation image denoising and deblurring problems, IEEE Transactions on Image Processing 18, 2419–2434 (2009).

[52] M. J. Muckley, D. C. Noll, and J. A. Fessler, Fast parallel MR image reconstruction via B1-based, adaptive restart, iterative soft thresholding algorithms (BARISTA), IEEE Transactions on Medical Imaging 34, 578–588 (2014).

