


Application of artificial intelligence in cardiac CT: From basics to clinical practice

L.B. van den Oever (a), M. Vonder (b), M. van Assen (c,d), P.M.A. van Ooijen (a), G.H. de Bock (b), X.Q. Xie (e), R. Vliegenthart (f,*)

a University of Groningen, University Medical Center Groningen, Department of Radiation Oncology, the Netherlands
b University of Groningen, University Medical Center Groningen, Department of Epidemiology, the Netherlands
c University of Groningen, University Medical Center Groningen, Faculty of Medicine, Groningen, the Netherlands
d Divisions of Cardiothoracic Imaging, Nuclear Medicine and Molecular Imaging, Department of Radiology and Imaging Sciences, Emory University Hospital, Atlanta, GA, USA
e Shanghai General Hospital, Shanghai Jiao Tong University School of Medicine, Department of Radiology, Shanghai, The People's Republic of China
f University of Groningen, University Medical Center Groningen, Department of Radiology, the Netherlands

ARTICLE INFO

Keywords: Cardiovascular Disease; Artificial Intelligence; Machine Learning; Computed Tomography

European Journal of Radiology 128 (2020) 108969, https://doi.org/10.1016/j.ejrad.2020.108969
Received 7 November 2019; Received in revised form 30 January 2020; Accepted 11 March 2020
* Corresponding author at: University Medical Center Groningen, Dept of Radiology, EB45, Hanzeplein 1, 9713 GZ, Groningen, the Netherlands. E-mail address: r.vliegenthart@umcg.nl (R. Vliegenthart).

ABSTRACT

Research into the possibilities of AI in cardiac CT has been growing rapidly in the last decade. With the rise of publicly available databases and AI algorithms, many researchers and clinicians have started investigating the use of AI in the clinical workflow. This review is a comprehensive overview of the types of tasks and applications in which AI can aid the clinician in cardiac CT, and can be used as a primer for medical researchers starting in the field of AI. The applications of AI algorithms are explained, and recent examples of these algorithms in cardiac CT are elaborated on. The critical factors for future implementation are discussed.

1. Introduction

The capabilities of artificial intelligence (AI) are rapidly progressing, and the research community is getting increasingly interested in its possibilities. With advances in computing power, storage capabilities and innovative algorithms, the use of machine learning (ML) and deep learning (DL) in medical imaging research has grown rapidly. The public availability and open-source nature of AI libraries, such as TensorFlow [1] and PyTorch [2], make research on and use of AI algorithms possible not only for professional computer scientists but also for clinical researchers from a wide variety of fields. The application of AI in cardiac CT is of interest due to cardiac CT's fast innovative nature, increasing use, complex imaging protocols and extensive analysis procedures. As an illustration, a quick PubMed search on the MeSH terms "Diagnostic Imaging" AND "Cardiovascular Disease" AND "Machine Learning" shows that 7 articles were published in this field in 2013, and 76 articles in 2018.

The use of AI algorithms can have several advantages. First of all, AI algorithms are able to work continuously and at high speed, and they can perform very tedious tasks automatically, like segmentation of cardiac structures for calculating ventricular ejection fraction, left ventricle mass or myocardium thickness [3]. This could increase objectivity, decrease the need for manual intervention, and reduce human workload. At the same time, the reproducibility of the process could increase drastically. Furthermore, in both the clinic and screening, where radiologists are faced with an increasing amount of data, AI saves radiologists precious time that they can spend on more complex data or rarer cases. Second, in some specific fields, AI algorithms can be used to detect relations between biomarkers and outcomes, or to detect patterns that are too complex for human readers, for instance because they require too many variables to be connected or are indistinguishable to the human eye.

In this review, the application of AI algorithms in the field of cardiac CT will be discussed based on the different task categories of AI algorithms, clarifying the possibilities of AI research to further the field of cardiac CT. To understand the possibilities of AI, it is important to first introduce the basic terminology of application tasks. Second, a state-of-the-art example is elaborated to illustrate what type of clinical question can be answered with the particular AI task category. The algorithms used and how they work are described. Literature from the last three years is summarized in tables for comparison and further illustration of clinical questions that might be answered by using AI. Research on AI moves very quickly and is very innovative. Better performing software libraries, training methods and AI models, accompanied by advances in hardware performance, result in previous work becoming obsolete very rapidly. For this reason, this review only covers papers published in the last three years. Finally, the limitations of current AI algorithms are discussed and suggestions are made on how to bring AI closer to clinical implementation.


2. Definition of AI task categories in cardiac CT

AI techniques can be divided into categories based on the primary purpose of the algorithm. This results in the following AI task categories: image improvement, diagnostic classification, object detection, segmentation of structures, and prognosis and outcome prediction. Although some methods have overlapping purposes, or use multiple algorithms, for this review we choose the primary outcome of the last AI algorithm as the main purpose. For instance, the volume of the left ventricle can be calculated by using an AI algorithm to segment the left ventricle and counting the number of voxels in the result. The volume can also be calculated by the AI algorithm itself, without segmenting it first. In the former, the task category is segmentation and in the latter, it is prediction. The distinction is in the output of the network: segmentation algorithms have a binary image as output, while prediction algorithms have a number as output. We will further elaborate on the categories in this chapter.
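To make the distinction concrete, the sketch below shows how a left ventricular volume would be derived from a segmentation output by counting mask voxels, whereas a prediction algorithm would return such a number directly; the mask, region and voxel spacing are made-up stand-ins for real data.

```python
import numpy as np

# Hypothetical binary segmentation mask (1 = left ventricle, 0 = background)
# and voxel spacing in mm; both are illustrative stand-ins for real CT output.
mask = np.zeros((64, 64, 64), dtype=np.uint8)
mask[20:40, 20:40, 20:40] = 1          # toy "left ventricle" region
voxel_spacing_mm = (0.5, 0.5, 0.5)     # (z, y, x) spacing

voxel_volume_ml = np.prod(voxel_spacing_mm) / 1000.0   # mm^3 -> ml
lv_volume_ml = mask.sum() * voxel_volume_ml
print(f"LV volume derived from segmentation: {lv_volume_ml:.1f} ml")
```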

2.1. Image improvement & generation

An emerging AI category of interest is the field of image improvement. Image improvement algorithms in the cardiac CT field are mainly focused on reducing noise or artefacts, as well as on creating additional images. In accordance with the ALARA principle, research is done on the reduction of the radiation dose given to a patient [4–6]. However, this often results in reduced image quality, hampering optimal image analysis. AI algorithms that allow for optimizing image quality could greatly help lower radiation dose while maintaining diagnostic image quality; an example of this is seen in Fig. 1. Another way to reduce radiation dose is by creating multiple virtual images from a set of true images. Potentially, with the use of AI algorithms, it might be possible to create contrast-enhanced CT images out of non-enhanced images, eliminating the need for a contrast-enhanced acquisition and thus reducing radiation dose and the use of contrast agent in patients [7].

Fig. 1. Example of a deep learning noise reduction algorithm. On the left (A), an ultra-low-dose chest CT image and, on the right (B), the noise-reduced chest CT image as generated by the AI algorithm.

2.2. Diagnostic classification

Classification algorithms, such as support vector machines (SVM) or convolutional neural networks (CNN), use their input to sort images or patients into a pre-specified number of categories. Usually, their output consists of a probability of the input falling into each of these categories. An example question would be "Is this an image of a patient with or without myocardial infarction?". In Fig. 2a, an example of this classification problem is presented. Two different classes are represented by dots and crosses. The algorithm will then separate the two datasets, as symbolized by the yellow dotted line.

Fig. 2. Basic 2D representation of classification (A) and regression (B). In classification (A), the division (yellow dotted line) between patients with (red crosses) and without (blue dots) myocardial infarction is sought. In regression (B), the outcome is a continuous variable instead of two classes. Based on the values of the variables on the x- and y-axes, a prediction of the outcome (the yellow dotted line) is made, for instance, the 5-year survival odds.
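A minimal sketch of the two-class problem from Fig. 2a is given below, assuming synthetic feature values and scikit-learn's SVC as the classifier; the feature names, class centres and all numbers are illustrative and not taken from any study discussed here.

```python
import numpy as np
from sklearn.svm import SVC

# Toy two-class problem mirroring Fig. 2a: two imaging-derived features per
# patient (all values are synthetic; the feature meaning is hypothetical).
rng = np.random.default_rng(0)
X_no_mi = rng.normal(loc=[1.0, 1.0], scale=0.3, size=(50, 2))   # no infarction
X_mi = rng.normal(loc=[2.0, 2.0], scale=0.3, size=(50, 2))      # infarction
X = np.vstack([X_no_mi, X_mi])
y = np.array([0] * 50 + [1] * 50)

# Fit a classifier that outputs class probabilities for new patients.
clf = SVC(kernel="rbf", probability=True).fit(X, y)

new_patient = np.array([[1.8, 1.9]])
print("P(no MI), P(MI):", clf.predict_proba(new_patient)[0])
```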

2.3. Object detection

Object detection tries to distinguish specific objects within an image and label them, for instance, finding the heart in a CT image with a 'regions with convolutional neural network' (R-CNN). The output of such algorithms is usually a set of bounding boxes indicating where the object in the image is. A bounding box is a set of coordinates within which an object can be found in the image, as can be seen in Fig. 3. Currently, a method purely for object detection alone is not often used in cardiac CT AI research, since most algorithms not only detect objects, but directly segment them. Therefore, for the review of AI applications in cardiac CT, we combine object detection and segmentation into one category.

Fig. 3. Illustration of a bounding box. A bounding box (in red) around the heart in a non-contrast cardiac CT as found by an object detection neural network. The image is cropped to the bounding box to reduce input variables and memory usage for further analysis on coronary artery calcium and epicardial fat.
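Given a bounding box such as the one in Fig. 3, cropping the volume to it is straightforward, as in the sketch below; the array size and box coordinates are invented for illustration.

```python
import numpy as np

# Hypothetical CT volume and a bounding box as it might be returned by an
# object detection network: (z_min, y_min, x_min, z_max, y_max, x_max).
ct_volume = np.random.default_rng(1).normal(size=(100, 256, 256)).astype(np.float32)
box = (20, 60, 50, 70, 210, 200)   # illustrative coordinates around the heart

z0, y0, x0, z1, y1, x1 = box
heart_crop = ct_volume[z0:z1, y0:y1, x0:x1]
print("Cropped region shape:", heart_crop.shape)   # passed to downstream analysis
```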

2.4. Segmentation

Segmentation in cardiac CT can be considered as pixel-based labelling. Contrary to object detection, which outputs a bounding box around a target object, segmentation algorithms, such as U-net [8], label each pixel in the image as either belonging to the target object or not. This results in binary masks for each of the target objects, where the voxels are 0 for not belonging to the object and 1 for belonging to the object. This can then be converted to contours if necessary. After this, the segmented object can be analysed separately. Pixel-wise segmentation can also be interpreted as pixel-based classification: each pixel is classified as belonging to a class, where 'no class' is also considered a class and is mostly used to remove background without further specification. Figs. 4 and 5 give examples of segmentation tasks: one that segments coronary calcium, where all pixels outside those containing coronary calcium are labelled as 'no class' or background, and one that does the same for the heart itself.

Fig. 4. Example of an AI segmentation task for coronary artery calcium. The non-contrast CT input image on the left is processed by a deep learning algorithm similar to U-net to detect and segment the calcium in the coronary arteries. The output of the network is a binary image (right) of pixels with (white) and without (black) coronary artery calcium.

Fig. 5. Example of an AI segmentation task for the heart in a cardiac CT. The input is an axial CT image (A). Whereas in Fig. 3 a bounding box was used, this segmentation algorithm based on U-net creates a binary image of the location of the heart. This binary image can then be projected over the original CT, as seen in (B).

For segmentation results, the Dice coefficient (DC) is often used [9–11]. The DC is a measure of the overlap between the predicted segmentation and the truth: a DC of 1 means perfect overlap and a DC of 0 means no overlap at all between prediction and truth.
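As a concrete illustration, the DC of two binary masks A and B is 2|A ∩ B| / (|A| + |B|), which can be computed as in the sketch below; the toy masks are made up.

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice coefficient DC = 2|A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denominator = pred.sum() + truth.sum()
    return 1.0 if denominator == 0 else 2.0 * intersection / denominator

# Toy example: predicted and manual calcium masks on a small grid.
pred = np.zeros((10, 10), dtype=np.uint8)
truth = np.zeros((10, 10), dtype=np.uint8)
pred[2:6, 2:6] = 1
truth[3:7, 3:7] = 1
print(f"DC = {dice_coefficient(pred, truth):.2f}")   # partial overlap
```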

2.5. Prognosis and outcome prediction

Where it could be argued that the previous categories are based on the classification of an image, object or pixel, prognosis and prediction algorithms are mainly based on regression. Whereas classification tries to sort data into pre-defined classes, prediction is done on a continuous scale instead, resulting in a numerical output such as ejection fraction or cardiovascular disease risk. Fig. 2b shows an example of a typical regression problem. The dots represent the training data, which the network uses to estimate the relationship between variables and outcome. The input of the algorithm is not limited to imaging data, but can also include other biomarkers. Imaging biomarkers are often first extracted from the image, either by other AI algorithms, by image processing techniques or manually, before being used as input for prognosis or outcome prediction algorithms.
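A minimal regression sketch in this spirit is given below; the combination of an imaging-derived marker with a clinical variable, the random forest model and all values are illustrative assumptions rather than any published workflow.

```python
import numpy as np
from sklearn.ensemble import RandomForestRegressor

# Synthetic training set: each row combines an imaging-derived marker
# (a calcium burden measure) with a clinical variable (age); the target is a
# continuous outcome such as a risk estimate. The relationship below is made up.
rng = np.random.default_rng(2)
calcium_burden = rng.uniform(0, 400, 500)
age = rng.uniform(40, 80, 500)
X = np.column_stack([calcium_burden, age])
y = 0.05 * calcium_burden + 0.3 * age + rng.normal(0, 5, 500)   # toy outcome

model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)
print("Predicted outcome for a new patient:", model.predict([[150.0, 62.0]])[0])
```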

3. Applications of AI in cardiac CT

In this chapter, one task for each previously mentioned AI category is further discussed using a cardiac CT example from state-of-the-art literature from the last three years. First, the clinical relevance of the application is briefly introduced. Second, the method and the AI algorithms concerned are explained. Other tasks for each category and their accuracy are summarized in the tables mentioned in the subchapters.

3.1. Image improvement & generation

DL architectures can help with improving image quality or can even be used to convert non-contrast CT to contrast-enhanced CT scans, reducing the number of CT acquisitions needed for a full clinical work-up and potentially reducing radiation and contrast dose. An example is the work of Santini et al. [7], in which they attempt to synthesize coronary CT angiography (CCTA) by using non-contrast CT images as input and CCTA images as output of a publicly available DL neural network. Manual segmentations of the left atrium and left ventricle on the non-contrast images were made by independent observers for later use as a metric for the quality of the scans. The segmentations of the left atrium and ventricle were also acquired by thresholding the axial contrast CT images generated by the neural network. The manual and the predicted segmentation were then compared to obtain the DC score. Manual segmentation by observers reached a DC score of 0.85 for the task of LA and LV segmentation, whereas the proposed AI method reached a DC of 0.89, with more reliable geometry than the observers. This AI algorithm is based on a supervised learning method, meaning that for the training phase of this algorithm both the non-contrast and the contrast CT need to be present.
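Supervised image-to-image synthesis of this kind requires paired inputs and targets during training. The PyTorch sketch below shows one training step under that assumption; the tiny network, the random tensors standing in for non-contrast and contrast slices, and the loss and optimizer choices are illustrative and not the architecture used by Santini et al.

```python
import torch
import torch.nn as nn

# Stand-ins for paired training data: batches of non-contrast slices (input)
# and co-registered CCTA slices (target). Real data would replace these tensors.
non_contrast = torch.randn(4, 1, 128, 128)
contrast_target = torch.randn(4, 1, 128, 128)

# A deliberately tiny convolutional network; not the published architecture.
model = nn.Sequential(
    nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
    nn.Conv2d(16, 1, kernel_size=3, padding=1),
)
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# One supervised training step: the loss directly compares the synthesized
# image with the paired contrast image, which is why paired data is required.
optimizer.zero_grad()
synthesized = model(non_contrast)
loss = loss_fn(synthesized, contrast_target)
loss.backward()
optimizer.step()
print("training loss:", loss.item())
```

Because the loss directly compares the synthesized image with its paired contrast image, combined non-contrast and contrast examinations are a prerequisite for this form of training.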

Since combined non-contrast and contrast CT examinations are not always available, Geng et al. used an unsupervised method on the sinograms to reduce noise in non-contrast low-dose CT images [12]. Unsupervised learning does not require labelled data, thereby solving this problem by excluding the need for a contrast-enhanced scan. By creating labels with a maximum a posteriori probability (MAP) model [13], a network can be trained to create clean sinograms out of noisy ones, reducing the amount of noise in the reconstructed non-contrast low-dose CT. A MAP model is computationally very heavy, taking a long time to run and needing specialized computing clusters. Also, MAP models cannot process sinograms separately and use learned features from other sinograms to make predictions. However, by using the results of the MAP model for the creation of labels, a neural network can be trained to create cleaner sinograms in a shorter time, with better results than the MAP model itself. In this study, a supervised network was also trained to compare supervised versus unsupervised learning. The structural similarity (SSIM) index is used as a metric for the amount of

perceivable noise in the image [14]. The supervised and unsupervised methods reached average SSIM scores of 0.926 and 0.930, respectively. An SSIM of 0.6157 was reached with normal filtered back projection, showing considerable improvement.
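For reference, SSIM (and the related PSNR) can be computed with scikit-image as sketched below; the two random arrays merely stand in for a reference reconstruction and a denoised result.

```python
import numpy as np
from skimage.metrics import structural_similarity, peak_signal_noise_ratio

# Stand-ins for a reference reconstruction and a denoised reconstruction;
# real images (e.g. filtered back projection vs. network output) would be used.
rng = np.random.default_rng(3)
reference = rng.uniform(0.0, 1.0, size=(256, 256))
denoised = np.clip(reference + rng.normal(0.0, 0.02, size=(256, 256)), 0.0, 1.0)

ssim = structural_similarity(reference, denoised, data_range=1.0)
psnr = peak_signal_noise_ratio(reference, denoised, data_range=1.0)
print(f"SSIM = {ssim:.3f}, PSNR = {psnr:.1f} dB")
```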

Both papers, as described earlier and summarized in Table 1, show that image enhancement by AI might be a real option in the future. The number of datasets used in the papers described in this section is small, with test sets of only 20 patients, which makes it difficult to comment on possible implementation. The next step for these algorithms would be to increase and diversify the datasets.

Table 1. State-of-the-art image enhancement studies in cardiac CT. The reference in bold is an article discussed in the subchapter.

First author | Aim | Modality | Participants in training set | Participants in test set | Reported metrics | DC | Other metric
Santini et al., 2018 [7] | Synthetic creation of contrast-enhanced CT from non-contrast CT | CT | 120 | 20 | DC, error, NMI index, PSNR | 0.89 | -
Geng et al., 2018 [12] | Noise reduction on non-contrast low-dose CT | CT | 50 | 20 | PSNR, SSIM, FSIM | - | SSIM (0.97)

CT = Computed Tomography, DC = Dice Similarity Coefficient, NMI = Normalized Mutual Information index, PSNR = Peak Signal-to-Noise Ratio, SSIM = Structural Similarity Index, FSIM = Feature Similarity Index.

3.2. Diagnostic classification

In cardiac CT, classification is commonly used to determine the presence of cardiac diseases. Frequently used classes are 'diseased' versus 'non-diseased', which can be applied to imaging data for a wide variety of diseases. Some examples of AI classification applications and their results published in the last three years are shown in Table 2.

Table 2. State-of-the-art classification studies in cardiac CT. The reference in bold concerns an article discussed in the subchapter.

Author | Aim | Modality | Participants in training set | Participants in test set | Reported metrics | AUC | Sensitivity | Specificity | Accuracy
Mannil et al., 2018 [40] | Detection of MI on non-contrast CT | Non-contrast CT | 58 | 29 | AUC, sens, spec, FPR | 0.78 | 0.86 | 0.81 | 0.862
Jin et al., 2018 [41] | Assisted diagnosis of AF | Non-contrast CT | 55 | 45 | Acc, OR | - | - | - | 0.93
Zreik et al., 2018 [21] | Automatic detection of coronary artery stenosis | CCTA | 20 | 126 | DC, MD, sens, spec, AUC | 0.74 | 0.70 | 0.71 | -
Alizadehsani et al., 2018 [42] | Automated detection of CAD | CCTA | 450 | 50 | Acc, sens, spec, FP, FN | 0.92 | 1 | 0.881 | -
Van Hamersvelt et al., 2019 [20] | Automatic detection of coronary artery stenosis | CCTA | 101 | 126 | AUC, sens, spec, PPV, NPV, acc | 0.76 | 0.846 | 0.484 | 0.717

MI = Myocardial Infarction, CT = Computed Tomography, CCTA = Coronary Computed Tomography Angiography, AF = Atrial Fibrillation, CAD = Coronary Artery Disease, AUC = Area Under the Curve, sens = sensitivity, spec = specificity, FPR = False Positive Rate, Acc = accuracy, OR = odds ratio, DC = Dice Similarity Coefficient, MD = Mean Difference, PPV = Positive Predictive Value, NPV = Negative Predictive Value, FP = False Positives, FN = False Negatives.

Coronary artery disease (CAD) is one of the leading causes of mortality worldwide [15,16]. The early detection of CAD by non-invasive imaging is an important step toward timely treatment of the disease and prevention of cardiovascular events [17]. Non-invasive diagnosis of CAD can be done by functional imaging, detecting haemodynamic changes caused by CAD, or by anatomical imaging, where the coronary artery tree is evaluated. CCTA has become the primary gatekeeper for further invasive testing, with its excellent sensitivity and negative predictive value for coronary stenosis [18,19]. To further clarify the application of classification in cardiac CT image analysis, the detection of coronary artery stenosis is taken as an example and the potential role of AI is explained in more detail.

Van Hamersvelt et al. [20] expanded on previous work by Zreik et al. [21] of the same research group. The main goal of their study was to automatically detect functionally significant coronary artery stenosis based on a rest CCTA, as measured by a fractional flow reserve (FFR) of ≤ 0.78. Their method evaluated the textural information of the myocardium of the left ventricle (LVM) from the image instead of using expert-engineered features, such as the degree of stenosis. Functionally significant stenosis causes stress-induced ischemia of the LVM that leads


to perfusion defects. The early effects of ischemia might be invisible to the human eye, but could be detectable by an AI algorithm. The output of this network was whether a patient has functionally significant stenosis or not. Van Hamersvelt et al. used multiple AI techniques to reach their goal. First, the LVM was segmented using a CNN, a task that will be further elucidated in an example in a later paragraph. Second, a convolutional auto-encoder, an unsupervised method, collected the most relevant features of the LVM. Third, these features were passed to an SVM that classified the regions of the LVM as either stenosis or no stenosis. The SVM is a supervised technique commonly used for classification or regression analysis in two-class problems such as this one.
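The overall shape of such a pipeline, unsupervised feature extraction followed by a supervised classifier, can be sketched as follows; PCA is used purely as a simple stand-in for the convolutional auto-encoder of the original work, and the synthetic patches and labels are invented.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline

# Stand-in data: each row is a flattened myocardium patch (synthetic values);
# labels indicate functionally significant stenosis (1) or not (0).
rng = np.random.default_rng(4)
patches = rng.normal(size=(200, 32 * 32))
labels = rng.integers(0, 2, size=200)

# PCA acts here only as a simple unsupervised feature extractor standing in
# for the convolutional auto-encoder; the SVM then classifies based on the
# extracted features.
pipeline = make_pipeline(PCA(n_components=16), SVC(kernel="rbf", probability=True))
pipeline.fit(patches, labels)
print("P(significant stenosis):", pipeline.predict_proba(patches[:1])[0, 1])
```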

For the final evaluation, the DL method with classification was combined with visual assessment of the degree of stenosis. The method used by the authors reached a sensitivity and specificity of 0.85 and 0.48 and an AUC of 0.76 compared to invasive FFR. Although the AUC of 0.76 is similar to that of other medical AI papers, as can be seen in Table 2 on classification algorithms, it is higher than the reported AUC for evaluation of the degree of stenosis on CCTA alone versus the FFR reference standard (AUC = 0.68).

3.3. Object detection & Segmentation

As explained earlier, in the medical field, object detection is combined with segmentation, and therefore the two are discussed together in this chapter.

An example of a segmentation task in cardiac imaging is the automatic detection and segmentation of coronary artery calcium (CAC) on ECG-triggered non-contrast CT acquisitions. The CAC score is a strong predictor of cardiac events [17]. A summary with other examples is shown in Table 3.

Table 3. State-of-the-art segmentation studies in cardiac CT. The pixels in the prediction of the network versus the manual truth are used to calculate the sensitivity and specificity of the reported works. The reference in bold concerns an article discussed in the subchapter.

First author | Aim | Modality | Participants in training set | Participants in test set | Reported metrics | Sensitivity | Specificity | DC
Wolterink et al., 2016 [23] | Automatic segmentation of CAC in CCTA | CCTA | 90 | 100 | ICC, absolute agreement, Acc, k, sens, spec, FP, BAp, Pearson | 0.72 | - | -
Zreik et al., 2016 [43] | Automatic segmentation of LV | CCTA | 55 | 5 | DC, MD, sens, spec | 0.95 | 0.966 | 0.85
Santini et al., 2017 [37] | Automatic scoring of CAC | Cardiac CT | 45 | 56 | Sens, spec, TPR, PPV, k, Pearson, BAp | 0.912 | 0.9537 | -
Lessmann et al., 2018 [24] | Automatic segmentation of CAC, thoracic calcium, cardiac valve calcium | Chest CT | 1,012 | 506 | Sens, PF, F1, k, DC | 0.913 | - | 0.90
Vesal et al., 2019 [44] | Automatic multi-organ segmentation in thoracic CT | CT | 40 | 20 | DC, HD | - | - | 0.916

CT = Computed Tomography, CCTA = Coronary Computed Tomography Angiography, sens = sensitivity, spec = specificity, Acc = accuracy, DC = Dice Similarity Coefficient, MD = Mean Difference, PPV = Positive Predictive Value, ICC = Intraclass Correlation, k = Cohen's Kappa, FP = False Positives, BAp = Bland-Altman percentage, Pearson = Pearson's correlation, HD = Hausdorff Distance.

Research on CAC scoring is currently being done based on dedicated

cardiac CT acquisitions [17], but also using non-dedicated chest CT

scans [22]. Results from the MICCAI 2014 challenge, in which multiple research groups tried to achieve the best accuracy in automatic CAC scoring, and the work of Wolterink et al. [23] show the ability of AI to perform CAC scoring on CCTA images. Wolterink et al. investigated the accuracy of a CNN-based AI algorithm to segment CAC in 200 patients, reaching a sensitivity of 72% for the detection of calcified lesions.

An additional pre-processing step was used by Wolterink et al. to resolve the issue of inter-patient variability and to reduce redundant information. This pre-processing step first selected the cardiac region on CCTA. Subsequently, a region of interest around the heart was selected by analysing the volume in three different views (axial, coronal and sagittal) with three CNNs and combining the predictions made by the three networks. This method, called 2.5D analysis, makes more use of spatial information than 2D analysis, and is an example of object detection in cardiac CT. After the heart localization, paired CNNs were used: the first to select potential calcium and the second to classify it pixel-based, reaching a sensitivity of 0.72 [23].
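The 2.5D idea of combining three orthogonal 2D views around a candidate location can be illustrated as below; the volume, coordinates and patch size are arbitrary placeholders, not the configuration used in the cited work.

```python
import numpy as np

def extract_2p5d_patches(volume: np.ndarray, z: int, y: int, x: int, half: int = 16):
    """Return axial, coronal and sagittal patches centred on a candidate voxel.

    This mimics the general idea of 2.5D analysis (three orthogonal 2D views
    instead of a full 3D block); patch size and border handling are illustrative.
    """
    axial = volume[z, y - half:y + half, x - half:x + half]
    coronal = volume[z - half:z + half, y, x - half:x + half]
    sagittal = volume[z - half:z + half, y - half:y + half, x]
    return axial, coronal, sagittal

volume = np.random.default_rng(5).normal(size=(120, 256, 256)).astype(np.float32)
axial, coronal, sagittal = extract_2p5d_patches(volume, z=60, y=128, x=128)
print(axial.shape, coronal.shape, sagittal.shape)   # three 32 x 32 views
```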

With the possible implementation of lung screening in the future, a large increase in non-contrast chest CTs is expected. The increasing number of chest CTs made for lung cancer screening, and the fact that lung cancer and CAD are often co-existing diseases, make chest CT the ideal candidate to simultaneously assess lung cancer and cardiovascular risk.

Lessmann et al. [24] used non-contrast, non-ECG-triggered chest CT for detecting coronary calcium. The authors reconstructed the 1 mm slice thickness chest scans to 3 mm slice thickness scans. These reconstructed scans were used for manual labelling as the reference standard. As with the algorithm of Wolterink et al., paired 2.5D CNNs were used, one to detect potential calcium and a second to reduce the number of false positives by judging whether a potential calcium voxel was a true calcification or a false positive. The second CNN looked at small patches of voxels around the potential calcium voxel selected by the first CNN and classified the voxel as either CAC or non-CAC, e.g. valve calcifications or noise. This approach led to DC scores of 0.9. Although the DC

scores were high for detecting coronary calcium lesions, the detection of valvular calcium was still an issue, with DC scores of 0.66 and 0.64 for the detection of mitral and aortic valve calcifications, respectively. Despite the low accuracy on mitral and aortic valve calcifications, this work shows that automated detection can work even on non-contrast images such as chest CTs, resulting in high accuracy on CAC and possibly reliable risk categorization when compared to the manual labelling and risk categorization of the chest scans. The accuracy on the test sets suggests that future implementation is possible; however, it will require testing on different and larger datasets, and clinicians to test the software.

3.4. Prognosis and outcome prediction

Another important application of AI algorithms in the medical field is prediction. Recent developments investigate the use of risk stratification for therapy decision making, and its use may result in the reduction of cardiac fatalities [25,26]. Summaries of recent studies can be seen in Table 4.

Table 4. State-of-the-art prediction studies in cardiac CT. The reference in bold concerns an article discussed in the subchapter.

First author | Aim | Modality | Participants in training set | Participants in test set | Reported metrics | AUC | Sensitivity | Specificity | DC
Motwani et al., 2017 [27] | Predict 5-year mortality from CCTA | CCTA | 10,030 | 69 | AUC, sens, spec | 0.79 | - | - | -
van Rosendael et al., 2018 [28] | Compare ML risk stratification vs current CCTA risk stratification | CCTA | 7075 | 1769 | AUC, NRI | 0.771 | - | - | -
De Vos et al., 2019 [22] | Automatic CAC scoring | Cardiac and chest CT | 1,148 | 1,036 | ICC, BAp, k, acc | - | 0.86 | - | 0.82

CT = Computed Tomography, CCTA = Coronary Computed Tomography Angiography, CAC = Coronary Artery Calcium, ML = Machine Learning, sens = sensitivity, spec = specificity, Acc = accuracy, ICC = Intraclass Correlation, k = Cohen's Kappa, BAp = Bland-Altman percentage, NRI = Net Reclassification Improvement.

Two recent studies on prognosis and survival prediction are by Motwani et al. [27] and van Rosendael et al. [28] on a relatively large dataset (n = 8,844), the CONFIRM registry [29,30], to predict all-cause mortality and the risk of CVD events. Motwani et al. combined both CCTA markers and clinical markers as input, while van Rosendael et al. only used imaging markers. By ranking the importance of features for predicting mortality, the highest-ranking features, such as the severity of the plaques in the left anterior descending (LAD) coronary artery, were selected for further training of the AI algorithm. Similar AUCs were reached in both studies: 0.79 and 0.77 for the Motwani and van Rosendael studies, respectively. These AUCs are higher than those of predictions based on the Framingham risk score, which uses biomarkers such as age, sex and cholesterol levels, and the CCTA risk score (AUC = 0.69). Although still needing manual input, such as the degree of stenosis, this workflow shows potential to improve patient care by providing more accurate or earlier prediction of people at risk of CVD.
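The general pattern of ranking candidate predictors and keeping the top-ranked ones can be sketched as below; the feature names, data and the gradient boosting model are illustrative assumptions, not the feature selection or boosting approach used in the CONFIRM analyses.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingClassifier

# Synthetic example: rank candidate predictors by importance and keep the top
# ones for further modelling. Feature names, data and outcome are made up.
rng = np.random.default_rng(6)
feature_names = ["LAD_plaque_severity", "age", "diabetes", "LCX_plaque_severity",
                 "smoking", "hypertension"]
X = rng.normal(size=(1000, len(feature_names)))
y = (X[:, 0] + 0.5 * X[:, 1] + rng.normal(0, 1, 1000) > 0).astype(int)  # toy outcome

model = GradientBoostingClassifier(random_state=0).fit(X, y)
ranking = sorted(zip(feature_names, model.feature_importances_),
                 key=lambda item: item[1], reverse=True)
top_features = [name for name, _ in ranking[:3]]
print("Top-ranked features:", top_features)
```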

Whereas earlier we described calcium scoring as part of the segmentation class, it can also be approached as a prediction task. De Vos et al. [22] used both cardiac and chest CT images for training and testing a network that does not segment CAC, but only gives an estimation of the CAC score as output. The input of this type of network is still the non-contrast CT, as discussed in the segmentation subchapter, but the label is the CAC score and not a segmentation of the CAC. Accuracy of over 0.9 was reached for risk categorization on chest or cardiac CT by the AI algorithm compared to manual scoring. The authors also improved on computational time, reducing the processing time needed per patient for predictions from 12 seconds to 0.3 seconds, making clinical implementation promising.
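Once a CAC score is available, whether derived from a segmentation or predicted directly, it is typically mapped to a risk category. The sketch below uses one commonly used set of Agatston cut-offs; the exact category boundaries in a given study or screening setting may differ.

```python
def cac_risk_category(agatston_score: float) -> str:
    """Map a (predicted or manually derived) Agatston score to a risk category.

    The cut-offs below (0, 1-10, 11-100, 101-400, >400) are one commonly used
    categorization; individual studies may use slightly different boundaries.
    """
    if agatston_score == 0:
        return "no identifiable calcium"
    if agatston_score <= 10:
        return "minimal"
    if agatston_score <= 100:
        return "mild"
    if agatston_score <= 400:
        return "moderate"
    return "severe"

for score in (0, 7, 85, 250, 900):
    print(score, "->", cac_risk_category(score))
```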

4. Discussion

In this review, we show that AI can potentially be used for several

different tasks within the clinical workflow in cardiac CT. Besides the tasks discussed in this paper, other developments that are also amenable to automation with AI are entering the clinical arena. For example, we have excluded research on CT-FFR, since it was deemed too extensive for this review, but for the current status of CT-FFR, one can refer to Litjens et al. [31] or Benjamins et al. [32]. Other biomarkers that may be used for risk prediction are emerging, such as epicardial and pericardial fat, as discussed in recent work [33–35].

Different AI algorithms, based on various architectures of neural networks and designed for different types of tasks, are being tested for a

variety of applications. Classification, segmentation, prognosis and

outcome prediction and image optimization are tasks that can be performed by AI and are of great interest for radiology, specifically for the field of cardiac CT.

One goal of AI is to perform tasks at similar or higher accuracy than

humans while reducing the time it takes to perform these tasks. The first

step AI researchers and developers can take in order for their AI algorithms to be clinically implementable is to reach similar or better accuracy than human observers. Of the above-mentioned task categories, segmentation tasks are currently the main type of task in cardiac CT that achieves almost similar accuracy to human readers, with faster processing time than a human reader [22,36,37]. One of the reasons for the excellent performance of AI in segmentation tasks might be that the articles described in this review concern well-defined tasks, for example, finding the edge of the LVM or detecting calcium spots that have a high contrast with surrounding tissue. Well-defined tasks make it easier for the network to quickly learn to distinguish these edges. Segmentation tasks may, for example, have worse accuracy if the object to be segmented has lower contrast with the surrounding tissue, e.g., epicardial adipose tissue. Classification outcomes are usually dependent on more variables than segmentation tasks are. The accuracy of a human reader on classification tasks is usually lower than on a segmentation task. Since an AI algorithm is dependent on the labels created by human readers, the accuracy of the AI algorithm might be lower as well. The output, and label, of a segmentation task is usually a large area, which gives the network a clear indication of where to look, whereas classification tasks do not look at one area, but have to take more variables into account. This causes the accuracy of classification tasks to lag behind that of segmentation tasks. The same holds true for the prognostication and prediction tasks based on regression: due to the increased complexity of the problem defined for the AI algorithm, reaching high accuracy becomes an increasingly difficult task.

A second step that needs to be taken in order for AI algorithms to reach clinical implementation is proper testing and validation.

Although the work done in AI in cardiac CT is promising, it is difficult

to determine how far from clinical implementation specific algorithms

currently are. The field of AI is rapidly developing due to the use of online platforms, such as arXiv [38], where articles can be publicly released before being published in a peer-reviewed journal, and GitHub [39], where databases and algorithms can be shared. These platforms allow for rapid and easy sharing of algorithms and findings; however, they do not have the quality assurance that peer review offers. For widespread clinical implementation, where decisions based on these AI algorithms will influence and impact patient treatment and outcomes, critical review and validation of these algorithms are highly necessary.

The rapid pace of innovation in the AI field has led to the introduction of successive algorithms. As a result, little to no research on medical AI is being performed using existing networks to compare which algorithms work best for which problem. Another issue hampering the comparison of different algorithms is the variation in training data used, making it difficult to compare the results from the various publications even if they are aiming to solve the same clinical problem. Except for the prognosis algorithms, many of the works presented in this review make use of very small databases, with training datasets consisting of fewer than 100 patients. Usually, the studies are either based on local data or use older public data that are not comparable to the data quality of currently used CT systems. This makes it very difficult for these algorithms to perform well on data from other centres. AI algorithms work by learning to recognize features. Some features might be more or less pronounced in the training data than in data from other centres or areas of the world. These algorithms are less precise for features, and therefore for data, that they have seen rarely or not at all. Validation in other centres therefore becomes even more important to ensure that algorithms work not only on the training data but also on cohorts elsewhere. It is also difficult to interpret the results of the algorithms, since not all the features learned by neural networks can be extracted, the so-called black-box problem. Therefore, it is difficult for patients and clinicians to get explanations as to why decisions by the AI are made. If the validation is not done correctly, inherent biases, developed, for instance, by training the AI on human errors or skewed databases, will not be detected. The support of clinicians in setting up validation studies and assisting in development by identifying relevant areas of research and helping to integrate AI algorithms in the workflow is of paramount importance for the future goal of embedding AI in the clinical setting.

The validation and reproducibility issues are made even more challenging by the variation in performance metrics used to evaluate the algorithms, as can be seen in the overview tables: more than 25 different performance metrics are used in the small, specific field discussed in this paper. The metrics might also be used differently in the clinical field than in the computer science field. This can make the published papers harder for clinicians to interpret. Another issue is that it is often unknown what the performance metric values would be if the work were done by a human observer. It is, therefore, hard to say whether an algorithm is good enough for implementation. Commonly, reproducibility is not reported. This would help indicate how difficult the task is for human readers and give a performance metric for comparison with the AI algorithm. AI challenges, in which a central problem is presented and a benchmark database is provided to all participants, can play a major role in standardizing metrics and comparing neural networks. In general, releasing databases and algorithms in the public domain, allowing for external replication and validation, could lead to solid scientific results, promoting trust in these algorithms and enhancing the chance of these algorithms being used clinically.

In conclusion, AI algorithms in cardiac CT can be used for a wide

variety of tasks such as classification, segmentation, prediction and

image optimization. However, for AI to reach its full potential and make it into widespread clinical practice, algorithms should be compared to select the most appropriate algorithm for each task. In order to accurately validate the AI algorithms, large publicly accessible benchmark data sets are needed. Future efforts should not only focus on the development of new algorithms but also on the creation of heterogeneous open databases and the subsequent validation of existing AI algorithms to make them into robust applications for clinical implementation.

Declaration of Competing Interest

None.

Acknowledgements

This research did not receive any specific grant from funding

agencies in the public, commercial, or not-for-profit sectors.

References

[1] M. Abadi, A. Agarwal, P. Barham, E. Brevdo, Z. Chen, C. Citro, G.S. Corrado, A. Davis, J. Dean, M. Devin, S. Ghemawat, I. Goodfellow, A. Harp, G. Irving, M. Isard, Y. Jia, R. Jozefowicz, L. Kaiser, M. Kudlur, J. Levenberg, D. Mane, R. Monga, S. Moore, D. Murray, C. Olah, M. Schuster, J. Shlens, B. Steiner, I. Sutskever, K. Talwar, P. Tucker, V. Vanhoucke, V. Vasudevan, F. Viegas, O. Vinyals, P. Warden, M. Wattenberg, M. Wicke, Y. Yu, X. Zheng, TensorFlow: Large-Scale Machine Learning on Heterogeneous Distributed Systems, (2016) (Accessed 11 October 2019),http://arxiv.org/abs/1603.04467.

[2] A. Paszke, S. Gross, S. Chintala, G. Chanan, E. Yang, Z. DeVito, Z. Lin, A. Desmaison, L. Antiga, A. Lerer, Automatic differentiation in PyTorch, (2017) (Accessed 11 October 2019),https://openreview.net/forum?id=BJJsrmfCZ.

[3] R. Shahzad, D. Bos, R.P.J.J. Budde, K. Pellikaan, W.J. Niessen, A. Van Der Lugt, T. van Walsum, Automatic segmentation and quantification of the cardiac struc-tures from non-contrast-enhanced cardiac CT scans, Phys. Med. Biol. 62 (2017) 3798–3813,https://doi.org/10.1088/1361-6560/aa63cb.

[4] D.W. Entrikin, J.A. Leipsic, J.J. Carr, Optimization of Radiation Dose Reduction in Cardiac Computed Tomographic Angiography, Cardiol. Rev. 19 (2011) 163–176, https://doi.org/10.1097/CRD.0b013e31821daa8f.

[5] M.I. Karamat, Strategies and Scientific Basis of Dose Reduction on State-of-the-Art Multirow Detector X-Ray CT Systems, Crit. Rev. Biomed. Eng. 43 (2015) 33–59, https://doi.org/10.1615/CritRevBiomedEng.2015013977.

[6] T. Henzler, M. Hanley, E. Arnoldi, G. Bastarrika, U.J. Schoepf, H.-C. Becker, Practical Strategies for Low Radiation Dose Cardiac Computed Tomography, J. Thorac. Imaging. 25 (2010) 213–220,https://doi.org/10.1097/RTI. 0b013e3181ec9096.

[7] G. Santini, …, D. Chiappino, D. Della Latta, Synthetic contrast enhancement in cardiac CT with Deep Learning, (2018), pp. 1–8, http://arxiv.org/abs/1807.01779.

[8] O. Ronneberger, P. Fischer, T. Brox, U-net: Convolutional networks for biomedical image segmentation, Lect. Notes Comput. Sci. (including Subser. Lect. Notes Artif. Intell. Lect. Notes Bioinformatics), (2015), pp. 234–241, https://doi.org/10.1007/978-3-319-24574-4_28.

[9] C.H. Sudre, W. Li, T. Vercauteren, S. Ourselin, M. Jorge Cardoso, Generalised Dice Overlap as a Deep Learning Loss Function for Highly Unbalanced Segmentations, Springer, Cham, 2017, pp. 240–248, https://doi.org/10.1007/978-3-319-67558-9_28.

[10] W.R. Crum, O. Camara, D.L.G. Hill, Generalized overlap measures for evaluation and validation in medical image analysis, IEEE Trans. Med. Imaging. 25 (2006) 1451–1461,https://doi.org/10.1109/TMI.2006.880587.

[11] L.R. Dice, Measures of the Amount of Ecologic Association Between Species, Ecology. 26 (1945) 297–302,https://doi.org/10.2307/1932409. [12] M. Geng, Y. Deng, Q. Zhao, Q. Xie, D. Zeng, D. Zeng, W. Zuo, D. Meng,

Unsupervised/Semi-supervised Deep Learning for Low-dose CT Enhancement, (2018)http://arxiv.org/abs/1808.02603.

[13] Q. Xie, D. Zeng, Q. Zhao, D. Meng, Z. Xu, Z. Liang, J. Ma, Robust Low-dose CT Sinogram Preprocessing via Exploiting Noise-generating Mechanism, IEEE Trans. Med. Imaging. 36 (2017) 2487–2498,https://doi.org/10.1109/TMI.2017.2767290. [14] Z. Wang, A.C. Bovik, H.R. Sheikh, E.P. Simoncelli, Image Quality Assessment: From

Error Visibility to Structural Similarity, IEEE Trans. Image Process. 13 (2004) 600–612,https://doi.org/10.1109/TIP.2003.819861.

[15] D. Mozaffarian, E.J. Benjamin, A.S. Go, D.K. Arnett, M.J. Blaha, M. Cushman, S.R. Das, S. de Ferranti, J.-P. Després, H.J. Fullerton, V.J. Howard, M.D. Huffman, C.R. Isasi, M.C. Jiménez, S.E. Judd, B.M. Kissela, J.H. Lichtman, L.D. Lisabeth, S. Liu, R.H. Mackey, D.J. Magid, D.K. McGuire, E.R. Mohler, C.S. Moy, P. Muntner, M.E. Mussolino, K. Nasir, R.W. Neumar, G. Nichol, L. Palaniappan, D.K. Pandey, M.J. Reeves, C.J. Rodriguez, W. Rosamond, P.D. Sorlie, J. Stein, A. Towfighi, T.N. Turan, S.S. Virani, D. Woo, R.W. Yeh, M.B. Turner, Heart Disease and Stroke Statistics—2016 Update, Lippincott Williams & WilkinsHagerstown, MD, 2015, https://doi.org/10.1161/cir.0000000000000350.

[16] E. Wilkins, W. L, K. Wickramasinghe, B. P, European Cardiovascular Disease Statistics, Eur. Heart Netw. 94 (118) (2017) 8–15 (2017) 127, 149, 162, 174.
[17] S.J. Denissen, C.M. van der Aalst, M. Vonder, M. Oudkerk, H.J. de Koning, Impact of a cardiovascular disease risk screening result on preventive behaviour in asymptomatic participants of the ROBINSCA trial, Eur. J. Prev. Cardiol. (2019), https://doi.org/10.1177/2047487319843396.

[18] A.J. Moss, M.C. Williams, D.E. Newby, E.D. Nicol, The Updated NICE Guidelines: Cardiac CT as the First-Line Test for Coronary Artery Disease, Curr. Cardiovasc. Imaging Rep. 10 (2017) 15,https://doi.org/10.1007/s12410-017-9412-6. [19] J. Knuuti, W. Wijns, A. Saraste, D. Capodanno, E. Barbato, C. Funck-Brentano,

E. Prescott, R.F. Storey, C. Deaton, T. Cuisset, S. Agewall, K. Dickstein, T. Edvardsen, J. Escaned, B.J. Gersh, P. Svitil, M. Gilard, D. Hasdai, R. Hatala, F. Mahfoud, J. Masip, C. Muneretto, M. Valgimigli, S. Achenbach, J.J. Bax, F.-J. Neumann, U. Sechtem, A.P. Banning, N. Bonaros, H. Bueno, R. Bugiardini, A. Chieffo, F. Crea, M. Czerny, V. Delgado, P. Dendale, F.A. Flachskampf, H. Gohlke, E.L. Grove, S. James, D. Katritsis, U. Landmesser, M. Lettino, C.M. Matter, H. Nathoe, A. Niessner, C. Patrono, A.S. Petronio, S.E. Pettersen, R. Piccolo, M.F. Piepoli, B.A. Popescu, L. Räber, D.J. Richter, M. Roffi, F.X. Roithinger, E. Shlyakhto, D. Sibbing, S. Silber, I.A. Simpson, M. Sousa-Uva, P. Vardas, A. Witkowski, J.L. Zamorano, S. Achenbach, S. Agewall, E. Barbato, J.J. Bax, D. Capodanno, T. Cuisset, C. Deaton, K. Dickstein, T. Edvardsen, J. Escaned, C. Funck-Brentano, B.J. Gersh, M. Gilard, D. Hasdai, R. Hatala, F. Mahfoud, J. Masip, C. Muneretto, E. Prescott, A. Saraste, R.F. Storey, P. Svitil, M. Valgimigli, S. Windecker, V. Aboyans, C. Baigent, J.-P. Collet, V. Dean, V. Delgado, D. Fitzsimons, C.P. Gale, D. Grobbee, S. Halvorsen, G. Hindricks, B. Iung, P. Jüni, H.A. Katus, U. Landmesser, C. Leclercq, M. Lettino, B.S. Lewis, B. Merkely, C. Mueller, S. Petersen, A.S. Petronio, D.J. Richter, M. Roffi, E. Shlyakhto, I.A. Simpson, M. Sousa-Uva, R.M. Touyz, S. Benkhedda, B. Metzler, V. Sujayeva, B. Cosyns, Z. Kusljugic, V. Velchev, G. Panayi, P. Kala, S.A. Haahr-Pedersen, H. Kabil, T. Ainla, T. Kaukonen, G. Cayla, Z. Pagava, J. Woehrle, J. Kanakakis, K. Tóth, T. Gudnason, A. Peace, D. Aronson, C. Riccio, S. Elezi, E. Mirrakhimov, S. Hansone, A. Sarkis, R. Babarskiene, J. Beissel, A.J.C. Maempel, V. Revenco, G.J. de Grooth, H. Pejkov, V. Juliebø, P. Lipiec, J. Santos, O. Chioncel, D. Duplyakov, L. Bertelli, A.D. Dikic, M. Studenčan, M. Bunc, F. Alfonso, M. Bäck, M. Zellweger, F. Addad, A. Yildirir, Y. Sirenko, B. Clapp, ESC Guidelines for the diagnosis and management of chronic coronary syndromes, Eur. Heart J. 2019 (2019),https://doi.org/10.1093/eurheartj/ehz425.

[20] R.W. van Hamersvelt, M. Zreik, M. Voskuil, M.A. Viergever, I. Išgum, T. Leiner, Deep learning analysis of left ventricular myocardium in CT angiographic intermediate-degree coronary stenosis improves the diagnostic accuracy for identification of functionally significant stenosis, Eur. Radiol. 29 (2019) 2350–2359, https://doi.org/10.1007/s00330-018-5822-3.

[21] M. Zreik, N. Lessmann, R.W. van Hamersvelt, J.M. Wolterink, M. Voskuil, M.A. Viergever, T. Leiner, I. Išgum, Deep learning analysis of the myocardium in coronary CT angiography for identification of patients with functionally significant coronary artery stenosis, Med. Image Anal. 44 (2018) 72–85,https://doi.org/10. 1016/j.media.2017.11.008.

[22] B.D. de Vos, J.M. Wolterink, T. Leiner, P.A. de Jong, N. Lessmann, I. Isgum, Direct Automatic Coronary Calcium Scoring in Cardiac and Chest CT, IEEE Trans. Med. Imaging. 0062 (2019) 1–12,https://doi.org/10.1109/TMI.2019.2899534. [23] J.M. Wolterink, T. Leiner, B.D. de Vos, R.W. van Hamersvelt, M.A. Viergever,

I. Išgum, Automatic coronary artery calcium scoring in cardiac CT angiography using paired convolutional neural networks, Med. Image Anal. 34 (2016) 123–136,

https://doi.org/10.1016/j.media.2016.04.004.

[24] N. Lessmann, B. van Ginneken, M. Zreik, P.A. de Jong, B.D. De Vos, M.A. Viergever, I. Isgum, Automatic Calcium Scoring in Low-Dose Chest CT Using Deep Neural Networks With Dilated Convolutions, IEEE Trans. Med. Imaging. 37 (2018) 615–625,https://doi.org/10.1109/TMI.2017.2769839.

[25] M. Vonder, C.M. van der Aalst, R. Vliegenthart, P.M.A. van Ooijen, D. Kuijpers, J.W. Gratama, H.J. de Koning, M. Oudkerk, Coronary Artery Calcium Imaging in the ROBINSCA Trial: Rationale, Design, and Technical Background, Acad. Radiol. 25 (2018) 118–128,https://doi.org/10.1016/j.acra.2017.07.010.

[26] M.C. Williams, A. Hunter, A.S.V. Shah, V. Assi, S. Lewis, J. Smith, C. Berry, N.A. Boon, E. Clark, M. Flather, J. Forbes, S. McLean, G. Roditi, E.J.R. van Beek, A.D. Timmis, D.E. Newby, Use of Coronary Computed Tomographic Angiography to Guide Management of Patients With Coronary Disease, J. Am. Coll. Cardiol. 67 (2016) 1759–1768,https://doi.org/10.1016/J.JACC.2016.02.026.

[27] M. Motwani, D. Dey, D.S. Berman, G. Germano, S. Achenbach, M.H. Al-Mallah, D. Andreini, M.J. Budoff, F. Cademartiri, T.Q. Callister, H.-J. Chang, K. Chinnaiyan, B.J.W. Chow, R.C. Cury, A. Delago, M. Gomez, H. Gransar, M. Hadamitzky, J. Hausleiter, N. Hindoyan, G. Feuchtner, P.A. Kaufmann, Y.-J. Kim, J. Leipsic, F.Y. Lin, E. Maffei, H. Marques, G. Pontone, G. Raff, R. Rubinshtein, L.J. Shaw, J. Stehli, T.C. Villines, A. Dunning, J.K. Min, P.J. Slomka, Machine learning for prediction of all-cause mortality in patients with suspected coronary artery disease: a 5-year multicentre prospective registry analysis, Eur. Heart J. 38 (2017) 500–507, https://doi.org/10.1093/eurheartj/ehw188.

[28] A.R. van Rosendael, G. Maliakal, K.K. Kolli, A. Beecy, S.J. Al’Aref, A. Dwivedi, G. Singh, M. Panday, A. Kumar, X. Ma, S. Achenbach, M.H. Al-Mallah, D. Andreini, J.J. Bax, D.S. Berman, M.J. Budoff, F. Cademartiri, T.Q. Callister, H.J. Chang, K. Chinnaiyan, B.J.W. Chow, R.C. Cury, A. DeLago, G. Feuchtner, M. Hadamitzky, J. Hausleiter, P.A. Kaufmann, Y.J. Kim, J.A. Leipsic, E. Maffei, H. Marques, G. Pontone, G.L. Raff, R. Rubinshtein, L.J. Shaw, T.C. Villines, H. Gransar, Y. Lu, E.C. Jones, J.M. Peña, F.Y. Lin, J.K. Min, Maximization of the usage of coronary CTA derived plaque information using a machine learning based algorithm to im-prove risk stratification; insights from the CONFIRM registry, J. Cardiovasc. Comput. Tomogr. 12 (2018) 204–209,https://doi.org/10.1016/j.jcct.2018.04.011. [29] J.K. Min, A. Dunning, F.Y. Lin, S. Achenbach, M.H. Al-Mallah, D.S. Berman,

M.J. Budoff, F. Cademartiri, T.Q. Callister, H.-J. Chang, V. Cheng, K.M. Chinnaiyan, B. Chow, A. Delago, M. Hadamitzky, J. Hausleiter, R.P. Karlsberg, P. Kaufmann, E. Maffei, K. Nasir, M.J. Pencina, G.L. Raff, L.J. Shaw, T.C. Villines, Rationale and design of the CONFIRM (COronary CT Angiography EvaluatioN For Clinical Outcomes: An InteRnational Multicenter) Registry, J. Cardiovasc. Comput. Tomogr. 5 (2011) 84–92,https://doi.org/10.1016/J.JCCT.2011.01.007.

[30] M. Hadamitzky, S. Achenbach, M. Al-Mallah, D. Berman, M. Budoff, F. Cademartiri, T. Callister, H.-J. Chang, V. Cheng, K. Chinnaiyan, B.J.W. Chow, R. Cury, A. Delago, A. Dunning, G. Feuchtner, M. Gomez, P. Kaufmann, Y.-J. Kim, J. Leipsic, F.Y. Lin, E. Maffei, J.K. Min, G. Raff, L.J. Shaw, T.C. Villines, J. Hausleiter, Optimized Prognostic Score for Coronary Computed Tomographic Angiography: Results From the CONFIRM Registry (COronary CT Angiography EvaluatioN For Clinical Outcomes: An InteRnational Multicenter Registry), J. Am. Coll. Cardiol. 62 (2013) 468–476,https://doi.org/10.1016/J.JACC.2013.04.064.

[31] G. Litjens, F. Ciompi, J.M. Wolterink, B.D. de Vos, T. Leiner, J. Teuwen, I. Išgum, State-of-the-Art Deep Learning in Cardiovascular Image Analysis, JACC Cardiovasc. Imaging. 12 (2019) 1549–1565,https://doi.org/10.1016/J.JCMG.2019.06.009. [32] J.W. Benjamins, T. Hendriks, J. Knuuti, L.E. Juarez-Orozco, P. van der Harst, A primer in artificial intelligence in cardiovascular medicine, Netherlands Hear. J. 27 (2019) 392–402,https://doi.org/10.1007/s12471-019-1286-6.

[33] C. Militello, L. Rundo, P. Toia, V. Conti, G. Russo, C. Filorizzo, E. Maffei, F. Cademartiri, L. La Grutta, M. Midiri, S. Vitabile, A semi-automatic approach for epicardial adipose tissue segmentation and quantification on cardiac CT scans, Comput. Biol. Med. 114 (2019),https://doi.org/10.1016/j.compbiomed.2019. 103424.

[34] F. Commandeur, M. Goeller, J. Betancur, S. Cadet, M. Doris, X. Chen, D.S. Berman, P.J. Slomka, B.K. Tamarappoo, D. Dey, Deep Learning for Quantification of Epicardial and Thoracic Adipose Tissue from Non-Contrast CT, IEEE Trans. Med. Imaging. 37 (2018) 1835–1846,https://doi.org/10.1109/TMI.2018.2804799. [35] E.K. Oikonomou, M.C. Williams, C.P. Kotanidis, M.Y. Desai, M. Marwan,

A.S. Antonopoulos, K.E. Thomas, S. Thomas, I. Akoumianakis, L.M. Fan, S. Kesavan, L. Herdman, A. Alashi, E.H. Centeno, M. Lyasheva, B.P. Griffin, S.D. Flamm, C. Shirodaria, N. Sabharwal, A. Kelion, M.R. Dweck, E.J.R. Van Beek, J. Deanfield, J.C. Hopewell, S. Neubauer, K.M. Channon, S. Achenbach, D.E. Newby, C. Antoniades, A novel machine learning-derived radiotranscriptomic signature of perivascular fat improves cardiac risk prediction using coronary CT angiography, Eur. Heart J. 40 (2019) 3529–3543,https://doi.org/10.1093/eurheartj/ehz592. [36] G. Yang, Y. Chen, X. Ning, Q. Sun, H. Shu, J.L. Coatrieux, Automatic coronary

calcium scoring using noncontrast and contrast CT images, Med. Phys. 43 (2016) 2174–2186,https://doi.org/10.1118/1.4945045.

[37] G. Santini, D. Della Latta, N. Martini, G. Valvano, A. Gori, A. Ripoli, C.L. Susini, L. Landini, D. Chiappino, An automatic deep learning approach for coronary artery calcium segmentation, IFMBE Proc. 65 (2017) 374–377,https://doi.org/10.1007/ 978-981-10-5122-7_94.

[38] arXiv.org e-Print archive, (n.d.), https://arxiv.org/ (Accessed 3 October 2019).
[39] The world's leading software development platform · GitHub, (n.d.), https://github.com/ (Accessed 3 October 2019).

[40] M. Mannil, J. Von Spiczak, R. Manka, H. Alkadhi, Texture Analysis and Machine Learning for Detecting Myocardial Infarction in Noncontrast Low-Dose Computed Tomography: Unveiling the Invisible, Invest. Radiol. 53 (2018) 338–343,https:// doi.org/10.1097/RLI.0000000000000448.

[41] C. Jin, J. Feng, L. Wang, H. Yu, J. Liu, J. Lu, J. Zhou, Left atrial appendage segmentation and quantitative assisted diagnosis of atrial fibrillation based on fusion of temporal-spatial information, Comput. Biol. Med. 96 (2018) 52–68, https://doi.org/10.1016/j.compbiomed.2018.03.002.

[42] R. Alizadehsani, M.J. Hosseini, A. Khosravi, F. Khozeimeh, M. Roshanzamir, N. Sarrafzadegan, S. Nahavandi, Non-invasive detection of coronary artery disease in high-risk patients based on the stenosis prediction of separate coronary arteries, Comput. Methods Programs Biomed. 162 (2018) 119–127, https://doi.org/10.1016/j.cmpb.2018.05.009.
[43] M. Zreik, T. Leiner, B.D. De Vos, R.W. Van Hamersvelt, M.A. Viergever, I. Isgum, Automatic segmentation of the left ventricle in cardiac CT angiography using convolutional neural networks, Proc. Int. Symp. Biomed. Imaging 2016-June (2016) 40–43, https://doi.org/10.1109/ISBI.2016.7493206.

[44] S. Vesal, N. Ravikumar, A. Maier, A 2D dilated residual U-net for multi-organ segmentation in thoracic CT, CEUR Workshop Proc. 2349 (2019) 2–5.
