
2 Anatomical models in medical image analysis

Anatomical modeling is a rapidly growing field of research, where three main application areas have evolved over the last decades: visualization and education, functional analysis and segmentation. Since the focus of attention in this thesis is directed to anatomical models for segmentation purposes, this section will briefly discuss the first two applications, whereas models aimed at medical image segmentation are discussed in more detail.

2.1 Anatomical models for visualization and educational purposes

Recent advances in computer technology have enabled the development of digital anatomical models for education and training purposes. For example, highly detailed anatomical atlas models are digitally represented in the form of a set of labeled voxels [1-9].

By combining photorealistic renderings of segmented voxel data with background information about the visualized structures, different aspects of human anatomy can be explored three-dimensionally. The added value of such atlases over plain paper atlases is the facility to interactively explore the spatial structure of an organ in three dimensions, where background information about the function of an organ can be retrieved on demand. A well-known example of such an atlas model for visualization and education purposes is the Visible Human atlas [4-9], where human anatomy can be viewed interactively in combination with its appearance in a number of radiological imaging modalities.

A novel educational application of anatomical atlas models is surgical simulation and training in a virtual reality environment. Again, the rapid progression in computer hardware has triggered the development of a broad spectrum of such applications, where realistic atlas models are visualized three-dimensionally, and response to user interaction is provided by visual and in some cases tactile feedback [10-12]. The actual clinical application of virtual and augmented reality methods in medicine is slowly increasing, though currently such techniques are mainly applied experimentally in a research setting. Clinical application examples such as augmented reality displays have been described in e.g. [13, 14]. For a recent overview of state-of-the-art applications of anatomical models in surgical simulation, the reader is referred to [15].

A third promising application of anatomical atlas models in combination with visualization techniques lies in radiological image simulation. In this application, the appearance of different organs in a radiological image is simulated by modeling the entire imaging chain from physical tissue characteristics to the underlying physical principles and transfer functions of an imaging modality. This way, the influence of specific parameters in the imaging chain on the resulting image quality can be investigated. Examples of this application of anatomical atlas models have been described in [16-19] for nuclear imaging and in [20, 21] for magnetic resonance images.

2.2 Anatomical models for functional analysis

A second important application field of anatomical models in medicine is the analysis and simulation of physical processes as they occur throughout the human body. By coupling physiological knowledge with image data, a better insight can be obtained into the features that distinguish normal from pathological conditions. An excellent example of this application is provided by Kaye [22], who modeled cardiothoracic interactions during respiration and the changes therein as a result of a pneumothorax based on an anatomical model of the thoracic contents. Other examples of the applications of anatomical models for functional analysis are given in [23, 24].

A specialized application area of analytical anatomical models is formed by the dynamic cardiac deformation models [25-63], which are aimed at modeling the contraction pattern of left [27-62] and right [63] ventricles over the cardiac cycle. These deformation models are dedicated to the tracking and analysis of wall motion, often by means of a set of characteristic parameter functions, from which wall motion abnormalities can be recognized as deviations from a normal set of functions (e.g. [27, 42, 43]). Furthermore, regional parameters for cardiac function like wall stress and strain can be estimated from the deformations mapping the model to an image set.

A third class of analytical shape models is aimed at quantitatively analyzing anatomical variations and group differences in organ shapes over a population [64-76]. The idea behind such models is the definition of a non-linear transformation (B-spline basis functions [64], thin-plate spline interpolants [65-69], elastic [70] or viscous fluid deformations [71, 72, 74, 76]), which allows the mapping of a set of training samples into a standardized space. The deformation effort required to map a shape sample onto the standardized shape template is a measure of the shape difference between a sample and the template model. With these models, insight can be obtained into anatomical variabilities, where pathological organ shapes can be distinguished from normal shapes [69, 74-76]. Many such statistical shape models have been applied for segmentation purposes as well, as will be discussed later in this section.


2.3 Anatomical models for medical image segmentation

The need to incorporate prior knowledge into image segmentation methods is nowadays widely recognized. Especially in medical imaging, where many aspects of the imaging conditions are difficult to control, the incorporation of knowledge about the shape, location, appearance and spatial context of an organ is essential. In this section, different classes of anatomical models are discussed in the context of a commonly applied subdivision of segmentation methods into five abstraction levels. From high-level down to low-level operations, one can distinguish the scene level, the single object level, the image entity level, the low-level segmentation level and the preprocessing level (see Figure 2.1).

Each stage in the image interpretation hierarchy is classified according to the degree of prior knowledge involved in operations at that level. The different anatomical modeling methods described in the literature can be placed mainly at the top three levels of the image interpretation pyramid, and are discussed in a bottom-up order.

Physically-based deformable models (snakes):

A widely acknowledged object representation applied for segmentation of medical imagery is the deformable model, which has recently been surveyed by Singh et al. [77]. Terzopoulos et al. [78] introduced the deformable model concept to create realistic computer animations by viewing an object surface as an elastic sheet and deforming the object by mimicking the physical deformation behavior of the sheet.

[Figure 2.1: General classification of segmentation methods. The image interpretation pyramid comprises, from bottom to top, preprocessing, low-level image analysis, lower image interpretation, object recognition and high-level scene interpretation, operating on raw images, filtered images, image features (edges, regions, texture), scene elements (3D surfaces, volumes, contours), objects and, finally, the scene.]

The first application to image segmentation was described by Kass et al. [79], who introduced the nowadays widely applied active contours, also referred to as snakes. Roughly, three types of snakes have evolved since then: parametric snakes, implicit snakes and probabilistic snakes.

Parametric snakes

In brief, a parametric snake is a curve expressed in coordinate functions x(s) and y(s), where s represents the parametric domain [0,1]. The shape of the contour is governed by an energy functional:

E(v) = \int_0^1 \left( w_1(s) \left| \frac{\partial v}{\partial s} \right|^2 + w_2(s) \left| \frac{\partial^2 v}{\partial s^2} \right|^2 \right) ds + \int_0^1 P(v(s))\, ds + \int_0^1 E_{user}\, ds \qquad (2.1)

where the first integral term represents an internal deformation energy of the model, which is balanced with an external scalar field P(v), typically defined from an image feature such as the local image gradient. The parameter functions w_1(s) and w_2(s) represent two physical properties of the contour, i.e. the ability to stretch and bend respectively. These functions can be used to impose a preferred shape on the model and to locally control shape characteristics like the smoothness of the resulting segmentation. The third term E_user represents shape constraints introduced by the user, e.g. by fixing a point of the contour to an image point. The final shape of an active contour in an image corresponds to a minimum of E(v), which can be found by numerically solving the Euler-Lagrange equation of Equation 2.1.
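
To make the roles of the terms in Equation 2.1 concrete, the following sketch evaluates a discretized version of the functional for a closed contour, using finite differences for the stretching and bending terms and the negated image gradient magnitude as the external potential P(v). The weights, the toy image and all function names are illustrative assumptions rather than part of the methods reviewed here, and the user-constraint term E_user is omitted.

```python
import numpy as np

def snake_energy(v, P, w1=0.1, w2=0.01):
    """Discrete approximation of Equation 2.1 for a closed contour.

    v  : (N, 2) array of contour points (x, y), treated as a closed curve.
    P  : external potential sampled on the image grid, e.g. the negated
         gradient magnitude of the image.
    w1 : weight of the stretching (first-derivative) term.
    w2 : weight of the bending (second-derivative) term.
    """
    dv = np.roll(v, -1, axis=0) - v                                # ~ dv/ds
    ddv = np.roll(v, -1, axis=0) - 2 * v + np.roll(v, 1, axis=0)   # ~ d2v/ds2
    internal = np.sum(w1 * np.sum(dv**2, axis=1) + w2 * np.sum(ddv**2, axis=1))
    # Sample the external potential at the (rounded) contour positions.
    xi = np.clip(v[:, 0].round().astype(int), 0, P.shape[1] - 1)
    yi = np.clip(v[:, 1].round().astype(int), 0, P.shape[0] - 1)
    external = np.sum(P[yi, xi])
    return internal + external

# Toy usage: a circular contour on a synthetic image containing a square object.
img = np.zeros((64, 64))
img[20:44, 20:44] = 1.0
gy, gx = np.gradient(img)
P = -np.hypot(gx, gy)                 # low (attractive) values near edges
s = np.linspace(0, 2 * np.pi, 40, endpoint=False)
v0 = np.stack([32 + 18 * np.cos(s), 32 + 18 * np.sin(s)], axis=1)
print("E(v0) =", snake_energy(v0, P))
```

Minimizing this discrete energy, e.g. by gradient descent on the contour points, moves the snake towards nearby edges while the internal terms keep it short and smooth.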

Extensions of the two-dimensional snake model to three dimensions (deformable balloons) have since been reported [49, 80-94], as well as many modifications of the original energy formulation to improve the robustness of the snake and balloon methods with respect to spurious feature points and initial positioning [81, 95, 96], transitions in topology [97, 98] and simultaneous detection of multiple objects [99]. Applications to left-ventricular segmentation in dynamic cardiac MR images are given in e.g. [53, 62, 84, 100-104].

Implicit snakes

One of the practical limitations of parametric snakes is the requirement of an initial guess that is reasonably close to the desired shape. Furthermore, these snakes are not suitable to describe shape protrusions or extrusions that a shape may possess. A different class of physically-based deformable models designed to circumvent these shortcomings is the level-set approach [105-111], which simulates the propagation of wave fronts with curvature dependent speeds. In [108, 109], the original formulation of Caselles [105, 106] is modified to an energy minimization, whereas in [111, 112] an extension for simultaneous segmentation of multiple objects is described. An advantage of these implicit snakes over the parametric snake formulation is the lack of assumptions made about the topological structure of an object. Therefore topologically adaptable snakes have proven useful for segmentation of complexly shaped objects of which little prior shape knowledge is available, for instance branching vessel structures. However, in the absence of geometric shape constraints other than connectivity, the ways to incorporate shape knowledge are limited in case prior knowledge is available.
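
As an illustration of the front-propagation idea behind the level-set approach, the sketch below evolves an embedding function phi under the standard evolution phi_t + F|grad phi| = 0 with a curvature-dependent speed F = c - eps*kappa. The grid, constants, number of iterations and the simple non-upwind finite-difference scheme are illustrative assumptions; the image-derived stopping terms and reinitialization used in the cited methods are omitted.

```python
import numpy as np

def level_set_step(phi, c=1.0, eps=0.25, dt=0.2):
    """One explicit step of phi_t + (c - eps * kappa) * |grad phi| = 0."""
    gy, gx = np.gradient(phi)
    grad_mag = np.sqrt(gx**2 + gy**2) + 1e-8
    # Curvature kappa = div( grad phi / |grad phi| )
    nx, ny = gx / grad_mag, gy / grad_mag
    kappa = np.gradient(nx, axis=1) + np.gradient(ny, axis=0)
    # Simple central-difference scheme, sufficient for this smooth toy example.
    return phi - dt * (c - eps * kappa) * grad_mag

# A small circle (phi < 0 inside) expands with curvature-dependent speed.
yy, xx = np.mgrid[0:64, 0:64]
phi = np.sqrt((xx - 32.0) ** 2 + (yy - 32.0) ** 2) - 5.0
for _ in range(60):
    phi = level_set_step(phi)
print("enclosed area after 60 steps:", int((phi < 0).sum()))
```

Because the contour is represented implicitly as the zero level set of phi, splitting and merging of fronts require no special handling, which is exactly the topological flexibility discussed above.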

Probabilistic snakes

Due to the locally distributed nature of prior shape knowledge of the original parametric and implicit snake formulations, the facilities to incorporate shape knowledge other than smoothness constraints to restrict the allowable shape domain are limited. Therefore these snakes are less suitable for object recognition purposes and can be generally classified to the low-level image interpretation stage in Figure 2.1. As a result of this, parametric and implicit snakes are mainly suitable for application in a highly interactive setting.

By selecting a shape parametrization expressed on an orthonormal basis, i.e. a representation that allows the definition of an object shape as a weighted sum of known basis functions, the shape parameters become physically interpretable. A preferred shape can thus be imposed, based on the parameter distributions over a set of training samples, where the snake is allowed to deform following population-based shape deformations.
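
A minimal sketch of the kind of orthonormal shape parametrization meant here: a closed 2D contour expressed as a weighted sum of Fourier basis functions, so that the complex coefficients form a compact, interpretable parameter vector whose distribution could be learned from training shapes. The truncation order and helper names are illustrative assumptions.

```python
import numpy as np

def fourier_coefficients(contour, order=8):
    """Expand a closed contour (N, 2) on a truncated Fourier basis.

    Returns the frequencies k and complex coefficients c_k of
    z(s) = sum_k c_k * exp(2*pi*i*k*s), with z = x + i*y and s in [0, 1).
    """
    z = contour[:, 0] + 1j * contour[:, 1]
    c = np.fft.fft(z) / len(z)                       # expansion on the Fourier basis
    k = np.fft.fftfreq(len(z), d=1.0 / len(z)).astype(int)
    keep = np.abs(k) <= order
    return k[keep], c[keep]

def reconstruct(k, c, n_points=100):
    """Shape as a weighted sum of the retained basis functions."""
    s = np.linspace(0.0, 1.0, n_points, endpoint=False)
    z = np.sum(c[:, None] * np.exp(2j * np.pi * k[:, None] * s[None, :]), axis=0)
    return np.stack([z.real, z.imag], axis=1)

# Toy usage: parametrize and reconstruct an ellipse.
s = np.linspace(0, 2 * np.pi, 128, endpoint=False)
ellipse = np.stack([30 * np.cos(s), 15 * np.sin(s)], axis=1)
k, c = fourier_coefficients(ellipse, order=4)
approx = reconstruct(k, c, n_points=128)
print("max reconstruction error:", float(np.abs(approx - ellipse).max()))
```

Constraining such coefficients to the range observed in a training population is what allows a probabilistic snake to prefer anatomically plausible shapes.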

The probabilistic snake is preferentially attracted towards feature patterns in the image data that are consistent with its trained shape. This makes the model matching more robust with respect to initial position and noise, and therefore these snakes can be classified to the object recognition level in Figure 2.1.

Applications of probabilistic snakes have been described by Vemuri et al. [113-115], who developed a model based on a deformable superquadric in combination with a locally superimposed deformation field expressed on an orthonormal wavelet basis. Staib et al. describe a two-dimensional probabilistic snake [116] and a three-dimensional probabilistic balloon model [103] applicable to deformations of four surface topologies constructed on a Fourier basis, whereas orthonormal Fourier parametrizations applicable to segmentation and recognition problems of free-form closed surfaces have been described in [117-119].

Statistical shape models

Probabilistic snakes describe a shape and its natural variations by means of the parameter distributions of a shape parametrization and have proven to be a powerful representation for population-based shape knowledge. However, the prerequisite of a parametrization on an orthonormal basis introduces limitations on the shape topology of these models. In contrast, statistical shape models do not require a shape parametrization, and are therefore not subject to the topological or shape constraints intrinsic to a lumped parameter model. This makes these statistical models suitable to describe free-form shapes consisting of multiple objects simultaneously.

A widely acknowledged statistical shape model is the Point Distribution Model (PDM) introduced by Cootes et al. [120-125]. A PDM describes the average shape and characteristic shape variations of a set of training samples, which are given in the form of a set of points on the sample boundaries. After an affine registration of the shape samples, the most characteristic local shape variations around the shape average can be extracted by means of a principal component analysis on the sample point distributions. The only necessary condition for the calculation of a point distribution model is the definition of a point correspondence between points in successive training samples, which ensures a compact and specific model. In two dimensions this point correspondence is typically defined based on application-specific assumptions, which are difficult to generalize to three dimensions. The development of more generic methods to define point correspondence in two [126-128] and three [129, 130] dimensions is currently an active field of research.
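
The construction of a PDM from corresponded, aligned landmark shapes reduces to a principal component analysis of the stacked point coordinates. The sketch below assumes the training shapes are already brought into a common coordinate frame (the affine alignment step is omitted) and uses illustrative array names and toy data.

```python
import numpy as np

def build_pdm(shapes, n_modes=3):
    """Build a Point Distribution Model from aligned training shapes.

    shapes : (n_samples, n_points, 2) array with point correspondence across
             samples (the same point index marks the same anatomical location).
    Returns the mean shape vector, the leading modes of variation and their variances.
    """
    X = shapes.reshape(len(shapes), -1)              # each row: one shape vector
    mean = X.mean(axis=0)
    # PCA via SVD of the centered data matrix.
    U, S, Vt = np.linalg.svd(X - mean, full_matrices=False)
    variances = S**2 / (len(shapes) - 1)             # eigenvalues of the covariance
    return mean, Vt[:n_modes], variances[:n_modes]

def generate_shape(mean, modes, b):
    """Shape instance x = x_mean + P b for a vector of mode weights b."""
    return (mean + b @ modes).reshape(-1, 2)

# Toy usage: ellipses with varying axis lengths as 'training samples'.
rng = np.random.default_rng(0)
s = np.linspace(0, 2 * np.pi, 32, endpoint=False)
shapes = np.stack([np.stack([(30 + rng.normal(0, 3)) * np.cos(s),
                             (15 + rng.normal(0, 2)) * np.sin(s)], axis=1)
                   for _ in range(25)])
mean, modes, var = build_pdm(shapes)
new_shape = generate_shape(mean, modes, b=np.array([2 * np.sqrt(var[0]), 0.0, 0.0]))
```

The retained eigenvectors (modes) span the characteristic deformations; varying each mode weight within a few standard deviations generates plausible new shapes.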

The application of point distribution models to image segmentation is known as the Active Shape Model (ASM). A key difference between the ASM matching method and the matching mechanism for parametric and implicit snake models is the absence of an energy functional based on elastic material properties. For ASMs, the image matching is performed by calculating a suggested boundary location for each point in the PDM based on image information. This allows an elegant coupling between high-level knowledge about object shapes and low-level image features. The suggested boundary hypotheses can be generated using a simple edge filter, custom edge filters for each point in the shape samples, a statistical gray-value model for each sample point, or other forms of prior knowledge about an organ's image appearance [122, 131-133].

The model pose and shape parameters are iteratively updated to optimally fit the hypothesized shape, where the model is only allowed to deform along the most characteristic eigendeformations. A final solution is reached when the generated candidate boundary points coincide with the model boundaries.
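
A sketch of the shape-update half of one ASM iteration, given a PDM (mean, modes and variances as in the previous sketch): the suggested boundary positions are projected onto the model subspace and the resulting shape parameters are clipped to plausible bounds, here plus or minus three standard deviations, an assumed but commonly quoted choice. Pose estimation and the image-driven generation of the suggestions are omitted.

```python
import numpy as np

def asm_shape_update(suggested, mean, modes, variances, limit=3.0):
    """Project suggested boundary points onto the PDM and constrain them.

    suggested : (n_points, 2) boundary positions proposed from the image.
    mean, modes, variances : PDM as returned by a build_pdm()-style routine.
    Returns the constrained model shape and the shape parameters b.
    """
    x = suggested.reshape(-1)
    b = modes @ (x - mean)                       # b = P^T (x - x_mean)
    bound = limit * np.sqrt(variances)
    b = np.clip(b, -bound, bound)                # stay within plausible shapes
    x_model = mean + b @ modes                   # x = x_mean + P b
    return x_model.reshape(-1, 2), b

# Toy usage with a two-point, one-mode model.
mean = np.array([0.0, 0.0, 10.0, 0.0])
modes = np.array([[0.0, 1.0, 0.0, -1.0]]) / np.sqrt(2)   # one unit-norm mode
variances = np.array([4.0])
shape, b = asm_shape_update(np.array([[0.0, 5.0], [10.0, -5.0]]), mean, modes, variances)
print(shape, b)
```

Clipping b is what keeps the fitted boundary within the space of shapes seen in the training population.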

A second important statistical shape modeling method for segmentation purposes is based on spatial normalization of a set of training images (2D) or image volumes (3D).

By optimally registering a set of segmented voxel volumes into a standardized space using affine [134-140] or higher-dimensional transformations such as thin-plate spline interpolants [141-143], an image scene can be expressed as a probability map or as an average shape with a locally defined variance measure, respectively. These models are generally applied to segmentation problems by weighting a feature-based probability density function with a spatial probability distribution of an organ shape [138, 139, 141-143] in a Bayesian formulation.
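
The Bayesian weighting mentioned above can be illustrated per voxel: an intensity-based likelihood for an organ label is multiplied by a spatial prior probability taken from the normalized atlas, and the posterior decides the label. The Gaussian intensity models, the background model and the threshold are illustrative assumptions.

```python
import numpy as np

def posterior_organ_map(image, spatial_prior, mu=120.0, sigma=15.0):
    """Combine an intensity likelihood with a spatial atlas prior.

    image         : 2D (or 3D) intensity array, registered to the atlas space.
    spatial_prior : array of the same shape with P(organ at voxel) in [0, 1].
    Returns P(organ | intensity, position), normalized against a simple background model.
    """
    likelihood = np.exp(-0.5 * ((image - mu) / sigma) ** 2)   # assumed organ intensity model
    background = np.exp(-0.5 * ((image - 60.0) / 25.0) ** 2)  # assumed background model
    post_organ = likelihood * spatial_prior
    post_bg = background * (1.0 - spatial_prior)
    return post_organ / (post_organ + post_bg + 1e-12)

# Toy usage: a bright blob in the image, prior concentrated in the same region.
yy, xx = np.mgrid[0:64, 0:64]
image = 60 + 60 * ((xx - 32) ** 2 + (yy - 32) ** 2 < 15 ** 2)
prior = np.exp(-(((xx - 32) ** 2 + (yy - 32) ** 2) / (2 * 12.0 ** 2)))
segmentation = posterior_organ_map(image, prior) > 0.5
```

The spatial prior suppresses regions with the right intensity but an anatomically implausible position, which is the main benefit of the atlas-based formulation.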

Because statistical shape models allow a coupling between low-level image data and higher level knowledge about individual organ shapes and their spatial context in a scene, they can be classified to the scene interpretation level in the image interpretation hierarchy in Figure 2.1.

Boundary template models

Boundary template models (boundary = 2D contour or 3D surface) can be seen as pre-shaped deformable models of an organ boundary which are only allowed to deform within restricted modes of deformation. Generally, boundary templates represent one shape instance of a particular organ or set of organs in the form of one 2D contour [144-150], a set of contours forming a 3D surface [151] or a set of coupled analytical primitives, as has been demonstrated in 2D by Yuille [152], who describes the eye and the mouth as a combination of circles and parabolic arcs. In 3D this analytical approach has been exemplified by Delibasis [153], who modeled part of the brain stem as a combination of globally deformable superquadrics. Furthermore, in Chapters 4, 5 and 6 of this thesis [154-156], a novel boundary template model is presented that describes the thoracic anatomy as a set of analytical primitives combined by means of Constructive Solid Geometry.

The matching of boundary templates is performed either by balancing an internal energy term with an external energy similar to snake-based approaches [144, 151], by using global cost optimization strategies based on dynamic programming [144, 146-150], or by optimizing an energy function directly on explicit model parameters, thereby omitting an internal energy function [152-156]. During the matching, the allowed deformation modes are restricted by prior knowledge, which is not necessarily population-based.

Such prior knowledge can be, for instance, an allowed shape interval on a set of radials along the template boundaries [145], assumptions about small deformations from the template shape [151], deformations along cascaded affine transforms [154-156], a restriction of the template deformations along orthogonal curves [144], or a restriction on the parameter bounds in analytical templates [153].
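
To illustrate matching on explicit model parameters rather than on an internal energy, the sketch below fits a fixed template contour to an edge-strength image by optimizing only translation, rotation and scale of the template. The cost function, optimizer and toy data are illustrative assumptions that stand in for the restricted deformation modes used in the cited methods.

```python
import numpy as np
from scipy.optimize import minimize

def transform_template(template, params):
    """Apply scale, rotation and translation (the allowed deformation modes)."""
    s, theta, tx, ty = params
    R = np.array([[np.cos(theta), -np.sin(theta)],
                  [np.sin(theta),  np.cos(theta)]])
    return s * template @ R.T + np.array([tx, ty])

def template_cost(params, template, edge_map):
    """Negative summed edge strength under the transformed template points."""
    pts = transform_template(template, params)
    xi = np.clip(pts[:, 0].round().astype(int), 0, edge_map.shape[1] - 1)
    yi = np.clip(pts[:, 1].round().astype(int), 0, edge_map.shape[0] - 1)
    return -edge_map[yi, xi].sum()

# Toy usage: a circular template matched to a bright ring in a synthetic edge map.
yy, xx = np.mgrid[0:64, 0:64]
edge_map = np.exp(-((np.hypot(xx - 40, yy - 28) - 12) ** 2) / 4.0)
s = np.linspace(0, 2 * np.pi, 36, endpoint=False)
template = np.stack([np.cos(s), np.sin(s)], axis=1)        # unit circle
x0 = np.array([10.0, 0.0, 32.0, 32.0])                     # initial scale, angle, tx, ty
res = minimize(template_cost, x0, args=(template, edge_map), method="Nelder-Mead")
print("fitted scale, angle, tx, ty:", res.x)
```

Because only a handful of explicit parameters are optimized, the template cannot leak into implausible shapes, which is the robustness argument made in the surrounding text.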

Due to the restricted degrees of deformation in boundary template matching, explicit prior knowledge can be imposed on the shape and its deformations, resulting in more robust matching behavior, although boundary templates are less flexible than snake-based approaches. Due to their relative rigidity, boundary templates are applicable to single objects [144, 145] as well as to scenes consisting of multiple objects [151, 152, 154-156], where an optimum is sought for the scene model as a whole. Boundary templates can therefore be classified to the object level and the top (scene) level in Figure 2.1.

Volumetric templates

Volumetric templates consist of a segmented voxel set, which is often constructed by manually delineating a number of organs in a representative image set, and are therefore derived from a single shape instance. Such models are matched to image data from a different subject by deforming the model on the basis of attraction forces, which are derived from local similarity measures. The dimensionality of the defined transformations determines the matching accuracy that can be achieved with these deformations. Several approaches have been described based on affine [137], piecewise affine [157], non-rigid [158], elastic [159, 160], thin-plate spline interpolants spanned by landmarks [161] and viscous fluid deformations [71, 162-167], where such template models are most commonly applied to segmentation of brain structures. In [166, 167] examples of LV segmentation from SPECT [166] and MR [167] image data have been described. Due to the locally distributed nature of shape knowledge in these models, the model-image matching is computationally an order of magnitude more expensive than for boundary template approaches.


Since volumetric templates are matched as a whole, the topological structure of the model is preserved throughout the matching procedure, which makes them suitable to segment multiple objects in a scene simultaneously. Therefore, these models can be placed on the scene interpretation level in the image interpretation pyramid in Figure 2.1.
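
A minimal sketch of the final step of volumetric template matching: once a deformation field mapping the target grid into the atlas space has been recovered (here an assumed, hand-made translation field rather than an actual registration result), the labels of the segmented template are propagated to the target by resampling, so all organs and their topology are transferred in one pass.

```python
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_labels(atlas_labels, displacement):
    """Resample an integer-labeled atlas through a dense displacement field.

    atlas_labels : (H, W) integer label image (one slice of the template).
    displacement : (2, H, W) field giving, for every target voxel, the
                   offset (dy, dx) of its corresponding atlas voxel.
    """
    H, W = atlas_labels.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    coords = np.stack([yy + displacement[0], xx + displacement[1]])
    # Nearest-neighbour interpolation (order=0) keeps the labels discrete.
    return map_coordinates(atlas_labels, coords, order=0, mode="nearest")

# Toy usage: two 'organs' in the atlas, target shifted three voxels to the right.
atlas = np.zeros((64, 64), dtype=int)
atlas[20:40, 10:30] = 1                   # organ A
atlas[20:40, 35:55] = 2                   # organ B
shift = np.zeros((2, 64, 64))
shift[1, :, :] = -3.0                     # assumed deformation: pure translation
segmentation = propagate_labels(atlas, shift)
```

In practice the displacement field comes from one of the elastic or viscous fluid registration schemes cited above, which is where the computational cost lies.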

High level scene models

At the highest level of abstraction in the image interpretation pyramid, the high-level scene models can be found. These models explicitly describe knowledge about the scene domain under investigation. This can be knowledge about a typical size or volume of an organ, the typical image features for a set of organs in a particular image modality [168], the spatial relations of different organs with respect to each other [169-171] or the spatial embedding of objects in the image scene by means of a Voronoi diagram ([172], Chapter 3 [173] in this thesis). Two common representations for such high-level knowledge are explicit rules [174, 175] and semantic or frame networks [168-171, 176].

Explicit rules store knowledge in the form of 'if ... then' rules. This representation is often chosen to represent procedural knowledge consisting of a large number of discrete facts; it is highly flexible in its application, easy to extend and allows the combination of multiple rules in a straightforward manner. Examples of the application of explicit rules to trace the intrathoracic airway trees in CT images and to semantically interpret MR brain scans are given in [175] and [177] respectively.
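
A toy illustration of the 'if ... then' representation: each rule tests measured region properties and asserts a label when its conditions hold. The specific properties, thresholds and labels below are illustrative assumptions, not rules taken from the cited systems.

```python
# Each rule: (condition on region properties, label to assert).
rules = [
    (lambda r: r["mean_hu"] < -500 and r["volume_ml"] > 1000, "lung"),
    (lambda r: -100 < r["mean_hu"] < 100 and r["touches_lung"], "heart"),
    (lambda r: r["mean_hu"] > 200, "bone"),
]

def classify_region(properties):
    """Fire the first rule whose condition holds; otherwise leave unlabeled."""
    for condition, label in rules:
        if condition(properties):
            return label
    return "unknown"

print(classify_region({"mean_hu": -800, "volume_ml": 2500, "touches_lung": False}))
print(classify_region({"mean_hu": 40, "volume_ml": 600, "touches_lung": True}))
```

The appeal of this representation, noted above, is that adding or combining rules requires no change to the rest of the system.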

Semantic and frame networks are structured object descriptions, which describe a set of objects and their mutual relations as an attributed graph. Each object is described with a record data structure containing slots, which can be attributed a symbolic or numeric value. Relationships between different records are described by links, which characterize inheritance relations, neighborhood relations and part-subpart hierarchies. Frame and semantic net representations are commonly applied in a framework for hypothesis generation and verification such as a blackboard system [168, 171]. In [170, 171, 175, 178], detailed implementations of frame representations have been described for knowledge-driven segmentation of MR images of the brain, whereas applications to segmentation of thoracic and abdominal CT scans have been described in [168] and [169] respectively.
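
The frame idea can be sketched as a small attributed graph: each frame is a record with slots for expected properties, and typed links encode part-of and neighborhood relations that a hypothesis-and-verify loop (e.g. on a blackboard) could exploit. The slot names, values and relation types below are illustrative assumptions.

```python
# Frames: records with slots; links: typed relations between frames.
frames = {
    "thorax": {"slots": {"modality": "CT"}},
    "lung":   {"slots": {"expected_hu": (-900, -500), "paired": True}},
    "heart":  {"slots": {"expected_hu": (0, 100), "shape": "ellipsoidal"}},
}
links = [
    ("lung",  "part-of",   "thorax"),
    ("heart", "part-of",   "thorax"),
    ("heart", "medial-to", "lung"),
]

def related(frame, relation):
    """Return all frames connected to `frame` by the given relation type."""
    return [dst for src, rel, dst in links if src == frame and rel == relation]

# A verification step could check that a candidate 'heart' region indeed lies
# medial to regions already accepted as 'lung'.
print(related("heart", "medial-to"))              # ['lung']
print(frames["heart"]["slots"]["expected_hu"])
```

Slots hold the expected attributes against which image-derived hypotheses are checked, while the links supply the spatial and structural context.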

2.4 Summary

This chapter provides an overview of the applications of anatomical models in medical image analysis. Anatomical models have rapidly become valuable tools for visualization and education, functional and shape analysis, and segmentation. In particular for segmentation applications, anatomical models can be utilized to extend segmentation algorithms with prior knowledge about the scene under investigation. This application is discussed in detail following a common subdivision of image segmentation into five abstraction levels in an image interpretation pyramid: preprocessing, low-level segmentation, lower image interpretation, object recognition and scene interpretation (see Figure 2.1). Anatomical models for segmentation purposes can be classified mainly to the highest three levels in this image interpretation hierarchy.

Anatomical models on the lower image interpretation level are mainly limited in application to the formation of coherent scene elements such as contours and regions from low-level image information. Parametric and implicit snakes are examples of such models at this level, because the facilities to impose prior shape knowledge other than smoothness and connectivity constraints are limited. On the object recognition level, trainable models such as probabilistic snakes, point distribution models and object templates can be distinguished, which combine prior knowledge about an average object shape and its characteristic shape deformations with domain-specific low-level feature knowledge. At the highest level of abstraction, which involves the simultaneous processing of multiple objects within a scene, point distribution models, volumetric template models and boundary template models can be utilized to describe the shapes of multiple organs in the scene within their spatial context. These models allow a coupling between high-level anatomical knowledge and low-level image features by defining different types of transforms, which enable a data-driven deformation of the model to a feature pattern in the image data. A second class of anatomical models on the scene interpretation level are the high-level scene models, which explicitly describe knowledge about the scene domain under investigation in a set of rules, a frame representation or a semantic network. These knowledge representations are commonly applied in a framework for hypothesis generation and verification such as a blackboard system.

2.5 References

[1] E. Richter, H. Krämer, W. Lierse, R. Maas, and K. H. Höhne, “Visualization of neonatal anatomy and pathology with a new computerized three-dimensional model as a basis for teaching, diagnosis and therapy,” Acta Anatomica, vol. 150, pp. 75-79, 1994.

[2] R. Schubert, K. H. Höhne, A. Pommert, M. Riemer, T. Schiemann, U. Tiede, and W. Lierse, “A new method for practicing exploration, dissection, and simulation with a complete computerized three-dimensional model of the brain and skull,” Acta Anatomica, vol. 150, pp. 69-74, 1994.

[3] K. H. Höhne, B. Pflesser, A. Pommert, M. Riemer, T. Schiemann, R. Schubert, and U. Tiede, “A new representation of knowledge concerning anatomy and function,” Nature Medicine, vol. 1(6), pp. 506-511, 1995.

[4] T. Schiemann, J. Nuthmann, U. Tiede, and K. H. Höhne, “Generation of 3D anatomical atlases using the visible human,” in R. F. Kilcoyne, Proc. Computer Applications to Assist Radiology, pp. 62-67, Symposia Foundation, 1996.

[5] T. Schiemann, U. Tiede, and K. H. Höhne, “Segmentation and visualization of the Visible Human for high-quality volume-based visualization,” Medical Image Analysis, vol. 1(4), pp. 263-270, 1996.

[6] V. Spitzer, M. J. Ackerman, A. L. Scherzinger, and D. Whitlock, “The Visible Human male: a technical report,” Journal of the American Medical Informatics Association, vol. 3, pp. 118-130, 1996.

[7] U. Tiede, T. Schiemann, and K. H. Höhne, “Visualizing the Visual Human,” IEEE Computer Graphics and Applications, vol. 16, pp. 7-9, 1996.

[8] R. Mullick and Nguyen, “Visualization and labelling of the Visible Human dataset: challenges and resolves,” in K. H. Höhne and R. Kikinis, Proc. Visualization in Biomedical Computing, vol. 1131 of Lecture Notes in Computer Science, pp. 75-80, Springer Verlag, Berlin, 1996.

[9] J. E. Stewart, W. C. Broaddus, and J. H. Johnson, “Rebuilding the Visual Man,” in K. H. Höhne and R. Kikinis, Proc. Visualization in Biomedical Computing, vol. 1131 of Lecture Notes in Computer Science, pp. 81-86, Springer Verlag, Berlin, 1996.

[10] L. Serra, W. L. Nowinski, T. Poston, N. Hern, L. C. Meng, C. G. Guan, and P. K. Pillay, “The brain bench: virtual tools for stereotactic frame neurosurgery,” Medical Image Analysis, vol. 1(4), pp. 317-329, 1996.

[11] T. Schiemann and K. H. Höhne, “Definition of volume transformations for volume interaction,” in J. Duncan and G. Gindi, Proc. Information Processing in Medical Imaging, vol. 1230 of Lecture Notes in Computer Science, pp. 245-258, Springer Verlag, Berlin, 1997.

[12] S. Gibson, C. Fyock, E. Grimson, T. Kanade, R. Kikinis, H. Lauer, N. McKenzie, A. Mor, S. Nakajima, H. Ohkami, R. Osborne, J. Samosky, and A. Sawada, “Volumetric object modeling for surgical simulation,” Medical Image Analysis, vol. 2(2), pp. 121-132, 1998.

[13] H. Fuchs, A. State, E. D. Pisano, W. F. Garret, G. Hirota, M. Livingston, M. C. Whitton, and S. M. Pizer, “Towards performing ultrasound-guided needle biopsies from within a head-mounted display,” in K. H. Höhne and R. Kikinis, Proc. Visualization in Biomedical Computing, vol. 1131 of Lecture Notes in Computer Science, pp. 591-600, Springer Verlag, Berlin, 1996.

[14] Y. Sato, M. Nakamoto, Y. Tamaki, T. Sasama, I. Sakita, Y. Nakajima, M. Monden, and S. Tamura, “Image guidance of breast cancer surgery using 3-D ultrasound images and augmented reality visualization,” IEEE Transactions on Medical Imaging, vol. 17(5), pp. 681-693, 1998.

[15] W. M. Wells, A. C. F. Colchester, and S. Delp, Proc. “Medical Image Computing and Computer Assisted Intervention,” vol. 1496 of Lecture Notes in Computer Science, Springer Verlag, Berlin, 1998, 1258 pp.

[16] I. G. Zubal and C. R. Harrell, “Voxel based Monte-Carlo calculations of nuclear medicine images and applied variance reduction techniques,” Image and Vision Computing, vol. 10, pp. 342-348, 1992.

[17] I. G. Zubal, C. R. Harrell, E. O. Smith, Z. Rattner, G. Gindi, and P. B. Hoffer, “Computerized three-dimensional segmented human anatomy,” Medical Physics, vol. 21(2), pp. 299-302, 1994.

[18] H. Wang, R. J. Jaszczak, and R. E. Coleman, “Solid geometry-based object model for Monte Carlo simulated emission and transmission tomographic imaging systems,” IEEE Transactions on Medical Imaging, vol. 11(3), pp. 361-372, 1992.

[19] C.-L. Huang, W.-T. Chang, L.-C. Wu, and J.-K. Wang, “Three-dimensional PET emission scan registration and transmission scan synthesis,” IEEE Transactions on Medical Imaging, vol. 16(6), 1997.

[20] R. K. S. Kwan, A. C. Evans, and G. B. Pike, “An extensible MRI simulator for post-processing evaluation,” in K. H. Höhne and R. Kikinis, Proc. Visualization in Biomedical Computing, vol. 1131 of Lecture Notes in Computer Science, pp. 135-140, Springer Verlag, Berlin, 1996.

[21] D. L. Collins, A. P. Zijdenbos, V. Kollokian, J. G. Sled, N. J. Kabani, C. J. Holmes, and A. C. Evans, “Design and construction of a realistic digital brain phantom,” IEEE Transactions on Medical Imaging, vol. 17(3), pp. 463-468, 1998.

[22] J. M. Kaye, F. P. Primiano, and D. N. Metaxas, “A three-dimensional virtual environment for modeling mechanical cardiopulmonary interactions,” Medical Image Analysis, vol. 2(2), pp. 169-195, 1998.

[23] Z. A. Cohen, D. M. McCarthy, H. Roglic, J. H. Henry, W. G. Rodkey, J. R. Steadman, V. C. Mow, and G. A. Ateshian, “Computer-aided planning of patellofemoral joint OA surgery: developing physical models from patient MRI,” in W. M. Wells and A. C. F. Colchester, Proc. MICCAI, vol. 1496 of Lecture Notes in Computer Science, pp. 9-20, Springer Verlag, Berlin, 1998.

[24] P. Edwards, D. Hill, J. Little, and D. Hawkes, “A three-component deformation model for image-guided surgery,” Medical Image Analysis, vol. 2(4), pp. 355-367, 1998.

[25] Y. Zhu, M. Drangova, and N. J. Pelc, “Estimation of deformation gradient and strain from cine-PC velocity data,” IEEE Transactions on Medical Imaging, vol. 16(6), pp. 840-851, 1997.

[26] J. Declerck, J. Feldmar, M. Goris, and F. Betting, “Automatic registration and alignment on a template of cardiac stress and rest reoriented SPECT images,” IEEE Transactions on Medical Imaging, vol. 16(6), pp. 727-737, 1997.

[27] J. Declerck, J. Feldmar, and N. Ayache, “Definition of a four-dimensional continuous planispheric transformation for the tracking and the analysis of left-ventricle motion,” Medical Image Analysis, vol. 2(2), pp. 197-213, 1998.

[28] S. Benayoun, C. Nastar, and N. Ayache, “Dense non-rigid motion estimation in sequences of 3D images using differential constraints,” in N. Ayache, Proc. CVRMed, vol. 905 of Lecture Notes in Computer Science, pp. 309-318, Springer Verlag, Berlin, 1995.

[29] E. Bardinet, L. D. Cohen, and N. Ayache, “Tracking and motion analysis of the left ventricle with deformable superquadrics,” Medical Image Analysis, vol. 1(2), pp. 129-149, 1996.

[30] P. Clarysse, D. Friboulet, and I. E. Magnin, “Tracking geometrical descriptors on 3-D deformable surfaces: application to the left-ventricular surface of the heart,” IEEE Transactions on Medical Imaging, vol. 16(4), pp. 392-404, 1997.

[31] J. L. Prince and E. R. McVeigh, “Motion estimation from tagged MR image sequences,” IEEE Transactions on Medical Imaging, vol. 11(2), pp. 238-249, 1992.

[32] A. A. Young and L. Axel, “Tracking and finite element analysis of stripe deformation in magnetic resonance tagging,” IEEE Transactions on Medical Imaging, vol. 14(3), pp. 413-421, 1995.

[33] D. Friboulet, I. E. Magnin, and D. Revel, “Assessment of a model for overall left ventricular three-dimensional motion from MRI-data,” The International Journal of Cardiac Imaging, vol. 8, pp. 175-190, 1992.

[34] J. M. Gorce, D. Friboulet, P. Clarysse, and I. E. Magnin, “Three-dimensional velocity field estimation of moving cardiac walls,” in Proc. Computers in Cardiology, pp. 489-492, 1994.

[35] W. C. Huang and D. B. Goldgof, “Adaptive-size meshes for rigid and nonrigid shape analysis and synthesis,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15(6), pp. 611-616, 1993.

[36] C. W. Chen, T. S. Huang, and M. Arrot, “Modeling, analysis and visualization of left ventricle shape and motion by hierarchical decomposition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16(4), pp. 324-356, 1994.

[37] J. Duncan, R. Owen, P. Anandan, L. Staib, T. McCauley, A. Salazar, and F. Lee, “Shape-based tracking of left ventricular wall motion,” in Proc. Computers in Cardiology, pp. 41-44, 1991.

[38] S. Kumar and D. Goldgof, “Automatic tracking of SPAMM grid and the estimation of deformation parameters from cardiac MR images,” IEEE Transactions on Medical Imaging, vol. 13(1), pp. 122-132, 1994.

[39] J. S. Duncan, F. A. Lee, A. W. M. Smeulders, and B. L. Zaret, “A bending energy model for measurement of cardiac shape deformity,” IEEE Transactions on Medical Imaging, vol. 10(3), pp. 307-320, 1991.

[40] W. G. O'Dell, C. C. Moore, W. C. Hunter, E. A. Zerhouni, and E. R. McVeigh, “Three-dimensional myocardial deformations: calculation with displacement field fitting to tagged MR-images,” Radiology, vol. 195(3), pp. 165-175, 1995.

[41] H. Azhari, S. Sideman, R. Beyar, E. Grenadier, and U. Dinnar, “An analytical shape descriptor of 3-D geometry. Application to the analysis of the left ventricular shape and contraction,” IEEE Transactions on Biomedical Engineering, 1987.

[42] J. Park, D. Metaxas, and L. Axel, “Analysis of left ventricular wall motion based on volumetric deformable models and MRI-SPAMM,” Medical Image Analysis, vol. 1(1), pp. 53-71, 1996.

[43] J. Park, D. Metaxas, A. A. Young, and L. Axel, “Deformable models with parameter functions for cardiac motion analysis from tagged MRI data,” IEEE Transactions on Medical Imaging, vol. 15(3), pp. 278-289, 1996.

[44] E. L. Dove, K. P. Philip, D. D. McPherson, and B. Chandran, “Quantitative shape descriptors of left ventricular cine-CT images,” IEEE Transactions on Biomedical Engineering, vol. 38(12), pp. 1256-1261, 1991.

[45] S. K. Mishra, D. B. Goldgof, and T. S. Huang, “Motion analysis and epicardial deformation estimation from angiographic data,” in Proc. Computer Vision and Pattern Recognition, pp. 331-336, 1991.

[46] A. Pentland and B. Horowitz, “Recovery of nonrigid motion and structure,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13(7), pp. 730-742, 1991.

[47] J. C. McEachen and J. S. Duncan, “Shape-based tracking of left ventricular wall motion,” IEEE Transactions on Medical Imaging, vol. 16(3), pp. 270-283, 1997.

[48] F. G. Meyer, R. T. Constable, A. J. Sinusas, and J. S. Duncan, “Tracking myocardial deformation using phase contrast MR velocity fields: a stochastic approach,” IEEE Transactions on Medical Imaging, vol. 15(4), pp. 453-465, 1996.

[49] C. Nastar and N. Ayache, “A new physically based model for efficient tracking and analysis of deformations,” Lecture Notes in Computer Science, vol. 911, pp. 239-283, 1993.

[50] P. Shi, G. Robinson, A. Chakraborty, L. Staib, R. Constable, A. Sinusas, and J. Duncan, “A unified framework to assess myocardial function from 4D images,” Lecture Notes in Computer Science, vol. 905, pp. 327-340, 1995.

[51] C. Nastar and N. Ayache, “Non-rigid motion analysis in medical images: A physically based approach,” Lecture Notes in Computer Science, vol. 687, pp. 17-32, 1993.

[52] C. Nastar, “Vibration modes for nonrigid motion analysis in 3D images,” Lecture Notes in Computer Science, vol. 801, pp. 231-236, 1994.

[53] A. Gupta, T. O'Donnel, and A. Singh, “Segmentation and tracking of cine cardiac MR and CT images using a 3-D deformable model,” in Proc. Computers in Cardiology, pp. 661-664, 1994.

[54] P. Radeva, A. A. Amini, and J. T. Huang, “Deformable B-solids and implicit snakes for 3D localization and tracking of SPAMM MRI data,” Computer Vision and Image Understanding, vol. 66(2), pp. 163-178, 1997.

[55] S. M. Song and R. M. Leahy, “Computation of 3-D velocity fields from 3-D Cine CT Images of a Human Heart,” IEEE Transactions on Medical Imaging, vol. 10(3), pp. 295-306, 1991.

[56] R. T. Constable, K. M. Rath, A. J. Sinusas, and J. C. Gore, “Development and evaluation of tracking algorithms for cardiac wall motion analysis using phase velocity MR imaging,” Magnetic Resonance in Medicine, vol. 32, pp. 33-42, 1994.

[57] A. A. Young, “Model Tags: Direct 3D tracking of heart wall motion from tagged MR images,” in W. M. Wells and A. C. F. Colchester, Proc. MICCAI, vol. 1496 of Lecture Notes in Computer Science, pp. 92-101, Springer Verlag, Berlin, 1998.

[58] A. A. Young, D. L. Kraitchman, and L. Axel, “Deformable models for tagged MR images: reconstruction of two- and three-dimensional heart wall motion,” in Proc. IEEE Workshop on Biomedical Image Analysis, pp. 317-332, 1994.

[59] H. D. Tagare, “Non-rigid curve correspondence for estimating heart motion,” in J. Duncan and G. Gindi, Proc. Information Processing in Medical Imaging, vol. 1230 of Lecture Notes in Computer Science, pp. 489-494, Springer Verlag, Berlin, 1997.

[60] S. Sullivan, L. Sandford, and J. Ponce, “Using geometric distance fits for 3-D object modeling and recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 16(12), pp. 1183-1195, 1994.

[61] T. O'Donnel, A. Gupta, and T. Boult, “The hybrid volume ventriculoid: a model for MR-SPAMM 3-D analysis,” in Proc. Computers in Cardiology, pp. 5-8, 1995.

[62] C. Nastar and N. Ayache, “Frequency-based nonrigid motion analysis: application to four dimensional medical images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 18(11), 1996.

[63] E. Haber, D. Metaxas, and L. Axel, “Motion analysis of the right ventricle from MRI images,” in W. M. Wells and A. C. F. Colchester, Proc. MICCAI, vol. 1496 of Lecture Notes in Computer Science, pp. 177-188, Springer Verlag, Berlin, 1998.

[64] G. Subsol, J. P. Thirion, and N. Ayache, “A scheme for automatically building three-dimensional morphometric anatomical atlases: application to a skull atlas,” Medical Image Analysis, vol. 2(1), pp. 37-60, 1998.

[65] F. Bookstein, “Landmark methods for forms without landmarks: morphometrics of group differences in outline shape,” Medical Image Analysis, vol. 1(3), pp. 225-243, 1996.

[66] F. Bookstein, “Visualizing group differences in outline shape: methods from biometrics of landmark points,” in K. H. Höhne and R. Kikinis, Proc. Visualization in Biomedical Computing, vol. 1131 of Lecture Notes in Computer Science, pp. 405-410, Springer Verlag, Berlin, 1996.

[67] F. L. Bookstein, “Combining ‘vertical’ and ‘horizontal’ features from medical images,” Lecture Notes in Computer Science, vol. 905, pp. 184-191, 1995.

[68] F. L. Bookstein, “Principal warps: thin-plate splines and the decomposition of deformations,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 11(6), pp. 567-585, 1989.

[69] F. L. Bookstein, “Quadratic variation of deformations,” in J. Duncan and G. Gindi, Proc. Information Processing in Medical Imaging, vol. 1230 of Lecture Notes in Computer Science, Springer Verlag, Berlin, 1997.

[70] C. Davatzikos, “Spatial normalization of 3D brain images using deformable models,” Journal of Computer Assisted Tomography, vol. 20(4), pp. 656-665, 1996.

[71] G. E. Christensen, S. C. Joshi, and M. I. Miller, “Volumetric transformation of brain anatomy,” IEEE Transactions on Medical Imaging, vol. 16(6), pp. 864-877, 1997.

[72] S. Joshi, A. Banerjee, G. E. Christensen, J. G. Csernansky, J. W. Haller, M. I. Miller, and L. Wang, “Gaussian random fields on sub-manifolds for characterizing brain surfaces,” in J. Duncan and G. Gindi, Proc. Information Processing in Medical Imaging, vol. 1230 of Lecture Notes in Computer Science, Springer Verlag, Berlin, 1997.

[73] M. Miller, A. Banerjee, G. Christensen, S. Joshi, N. Khaneja, U. Grenander, and L. Matejic, “Statistical methods in computational anatomy,” Statistical Methods in Medical Research, vol. 6, pp. 267-299, 1997.

[74] P. Thompson and A. W. Toga, “Visualization and mapping of anatomic abnormalities using a probabilistic brain atlas based on random fluid transformations,” in K. H. Höhne and R. Kikinis, Proc. Visualization in Biomedical Computing, vol. 1131 of Lecture Notes in Computer Science, pp. 383-392, Springer Verlag, Berlin, 1996.

[75] D. Dean, P. Buckley, F. Bookstein, J. Kamath, D. Kwon, L. Friedman, and C. Lys, “Three-dimensional MR based, morphometric comparison of schizophrenic and normal cerebral ventricles,” in K. H. Höhne and R. Kikinis, Proc. Visualization in Biomedical Computing, vol. 1131 of Lecture Notes in Computer Science, pp. 363-372, Springer Verlag, Berlin, 1996.

[76] P. M. Thompson and A. W. Toga, “Detection, visualization and animation of abnormal anatomic structure with a deformable probabilistic brain atlas based on random vector field transformations,” Medical Image Analysis, vol. 1(4), pp. 271-294, 1996.

[77] A. Singh, D. Goldgof, and D. Terzopoulos, “Deformable models in medical image analysis,” IEEE Computer Society Press, Los Alamitos, CA, 1998.

[78] D. Terzopoulos, J. Platt, A. Barr, and K. Fleischer, “Elastically deformable models,” Computer Graphics, vol. 21(4), pp. 205-214, 1987.

[79] M. Kass, A. Witkin, and D. Terzopoulos, “Snakes: Active contour models,” International Journal of Computer Vision, vol. 1(4), pp. 321-331, 1988.

[80] J. V. Miller, D. E. Breen, W. E. Lorensen, R. M. O'Bara, and M. J. Wozny, “Geometrically deformed models: a method for extracting closed geometric models from volume data,” Computer Graphics, vol. 25(4), pp. 217-226, 1991.

[81] L. D. Cohen, “On active contour models and balloons,” CVGIP: Image Understanding, vol. 53(2), pp. 211-218, 1991.

[82] I. Cohen, L. D. Cohen, and N. Ayache, “Using deformable surfaces to segment 3-D images and infer differential structures,” CVGIP: Image Understanding, vol. 56(2), pp. 242-263, 1992.

[83] L. D. Cohen and I. Cohen, “Finite-element methods for active contour models and balloons for 2-D and 3-D images,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 15(11), pp. 1131-1147, 1993.

[84] T. McInerney and D. Terzopoulos, “A dynamic finite element surface model for segmentation and tracking in multidimensional medical images with application to cardiac 4D image analysis,” Computerized Medical Imaging and Graphics, vol. 19, pp. 69-83, 1995.

[85] T. N. Jones, “Automated 3D segmentation using deformable models and fuzzy affinity,” in J. Duncan and G. Gindi, Proc. Information Processing in Medical Imaging, vol. 1230 of Lecture Notes in Computer Science, pp. 113-126, Springer Verlag, Berlin, 1997.

[86] P. Thompson and A. W. Toga, “A surface based technique for warping three dimensional images of the brain,” IEEE Transactions on Medical Imaging, vol. 15(4), pp. 402-417, 1996.

[87] L. Gao, D. Heath, and E. K. Fishman, “Abdominal Image Segmentation Using Three-Dimensional Deformable Models,” Journal of Computer Assisted Tomography, vol. 33(6), pp. 348-355, 1998.

[88] H. Delingette, M. Hebert, and K. Ikeuchi, “Shape representation and image segmentation using deformable surfaces,” Image and Vision Computing, vol. 10(3), pp. 132-144, 1992.

[89] C. Davatzikos and R. N. Bryan, “Using a deformable surface model to obtain a shape representation of the cortex,” IEEE Transactions on Medical Imaging, vol. 15(6), pp. 785-795, 1996.

[90] C. Davatzikos, “Spatial transformation and registration of brain images using elastically deformable models,” Computer Vision and Image Understanding, vol. 66(2), pp. 207-222, 1997.

[91] S. Sandor and R. Leahy, “Surface-based labeling of cortical anatomy using a deformable atlas,” IEEE Transactions on Medical Imaging, vol. 16(1), pp. 41-54, 1997.

[92] M. Vaillant and C. Davatzikos, “Finding parametric representations of the cortical sulci using an active contour model,” Medical Image Analysis, vol. 1(4), pp. 295-315, 1996.

[93] D. Terzopoulos and D. Metaxas, “Dynamic 3D models with local and global deformations: deformable superquadrics,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 13(7), pp. 703-714, 1991.

[94] J. G. Snel, H. W. Venema, and C. A. Grimbergen, “Detection of the Carpal Bone Contours from 3-D MR Images of the Wrist Using a Planar Radial Scale-Space Snake,” IEEE Transactions on Medical Imaging, vol. 17(6), pp. 1049-1062, 1999.

[95] A. Chakraborty, L. H. Staib, and J. S. Duncan, “Deformable boundary finding in medical images by integrating gradient and region information,” IEEE Transactions on Medical Imaging, vol. 15(6), pp. 859-870, 1996.

[96] M. Worring, A. W. M. Smeulders, L. H. Staib, and J. S. Duncan, “Parametrized feasible boundaries in gradient vector fields,” Computer Vision and Image Understanding, vol. 63(1), pp. 135-144, 1996.

[97] F. Leitner and P. Cinquin, “From splines and snakes to SNAKE SPLINES,” Lecture Notes in Computer Science, vol. 911, pp. 264-281, 1991.

[98] S. Lobregt and M. A. Viergever, “A discrete dynamic contour model,” IEEE Transactions on Medical Imaging, vol. 14(1), pp. 12-24, 1995.

[99] T. B. Sebastian, H. Tek, J. J. Crisco, S. W. Wolfe, and B. B. Kimia, “Segmentation of carpal bones from 3D CT images using skeletally coupled deformable models,” in W. M. Wells and A. C. F. Colchester, Proc. MICCAI, vol. 1496 of Lecture Notes in Computer Science, pp. 1185-1194, Springer Verlag, Berlin, 1998.

[100] A. A. Amini, T. E. Weymouth, and R. C. Jain, “Using dynamic programming for solving variational problems in vision,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12(9), pp. 855-867, 1990.

[101] S. Ranganath, “Contour Extraction from Cardiac MRI Studies Using Snakes,” IEEE Transactions on Medical Imaging, vol. 14(2), pp. 328-338, 1995.

[102] A. Goshtasby and D. A. Turner, “Segmentation of Cardiac Cine MR Images for Extraction of Right and Left Ventricular Chambers,” IEEE Transactions on Medical Imaging, vol. 14(1), pp. 56-64, 1995.

[103] L. H. Staib and J. S. Duncan, “Model-based deformable surface finding for medical images,” IEEE Transactions on Medical Imaging, vol. 15(5), pp. 720-731, 1996.

[104] D. Geiger, A. Gupta, L. A. Costa, and J. Vlontzos, “Dynamic programming for detecting, tracking, and matching deformable contours,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17(3), pp. 294-302, 1995.

[105] V. Caselles, F. Catte, T. Coll, and F. Dibos, “A geometric model for active contours in image processing,” Numerische Mathematik, vol. 66, pp. 1-31, 1993.

[106] R. Malladi, J. A. Sethian, and B. C. Vemuri, “Shape modeling with front propagation: a level set approach,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17(2), pp. 158-175, 1995.

[107] T. McInerney and D. Terzopoulos, “Medical image segmentation using topologically adaptable snakes,” Lecture Notes in Computer Science, vol. 905, pp. 92-104, 1995.

[108] V. Caselles, R. Kimmel, and G. Sapiro, “Geodesic active contours,” International Journal of Computer Vision, vol. 22(1), pp. 61-79, 1997.

[109] A. Yezzi, S. Kichenassamy, A. Kumar, P. Olver, and A. Tannenbaum, “A geometric snake model for segmentation of medical imagery,” IEEE Transactions on Medical Imaging, vol. 16(2), pp. 199-209, 1997.

[110] L. M. Lorigo, O. Faugeras, W. E. L. Grimson, R. Keriven, and R. Kikinis, “Segmentation of bone in clinical knee MRI using texture-based geodesic active contours,” in Proc. MICCAI, Lecture Notes in Computer Science, vol. 1496, pp. 1195-1204, 1998.

[111] W. J. Niessen, B. M. ter Haar Romeny, and M. A. Viergever, “Geodesic deformable models for medical image analysis,” IEEE Transactions on Medical Imaging, vol. 17(4), pp. 634-641, 1998.

[112] X. Zheng, L. H. Staib, R. T. Schultz, and J. S. Duncan, “Segmentation and measurement of the cortex from 3D MR images,” in Proc. MICCAI, Lecture Notes in Computer Science, vol. 1496, pp. 519-530, 1998.

[113] B. C. Vemuri, A. Radisavljevic, and C. M. Leonard, “Multi-resolution Stochastic 3D Shape Models for Image Segmentation,” Lecture Notes in Computer Science, vol. 687, pp. 62-76, 1993.

[114] B. C. Vemuri and A. Radisavljevic, “Multiresolution stochastic hybrid shape models with fractal priors,” ACM Transactions on Graphics, vol. 13(2), pp. 177-207, 1994.

[115] B. C. Vemuri, Y. Guo, C. M. Leonard, and S.-H. Lai, “Fast numerical algorithms for fitting multiresolution hybrid shape models to brain MRI,” Medical Image Analysis, vol. 1(4), pp. 343-362, 1996.

[116] L. H. Staib and J. S. Duncan, “Boundary finding with parametrically deformable contour models,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 14(11), pp. 1061-1075, 1992.

[117] C. Brechbühler, G. Gerig, and O. Kubler, “Parametrization of closed surfaces for 3-D shape description,” Computer Vision and Image Understanding, vol. 61(2), pp. 154-170, 1995.

[118] G. Szekely, A. Kelemen, C. Brechbühler, and G. Gerig, “Segmentation of 3D objects from MRI volume data using constrained elastic deformation of flexible Fourier surface models,” Lecture Notes in Computer Science, vol. 905, pp. 494-505, 1995.

[119] G. Szekely, A. Kelemen, C. Brechbühler, and G. Gerig, “Segmentation of 2-D and 3-D objects from MRI volume data using constrained elastic deformations of flexible Fourier contour and surface models,” Medical Image Analysis, vol. 1(1), pp. 19-34, 1996.

[120] T. F. Cootes, A. Hill, C. J. Taylor, and J. Haslam, “The use of active shape models for locating structures in medical images,” Lecture Notes in Computer Science, vol. 687, pp. 33-47, 1993.

[121] T. F. Cootes, A. Hill, C. J. Taylor, and J. Haslam, “Use of active shape models for locating structures in medical images,” Image and Vision Computing, vol. 12(6), pp. 355-366, 1994.

[122] T. F. Cootes, C. J. Taylor, D. H. Cooper, and J. Graham, “Active shape models - their training and application,” Computer Vision and Image Understanding, vol. 61(1), pp. 38-59, 1995.

[123] A. Hill, T. F. Cootes, and C. J. Taylor, “A genetic system for image interpretation using flexible templates,” in Proc. British Machine Vision Conference, 1992.

[124] A. Hill and C. J. Taylor, “Model based image interpretation using genetic algorithms,” Image and Vision Computing, vol. 10, pp. 295-300, 1992.

[125] A. Hill, T. F. Cootes, C. J. Taylor, and K. Lindley, “Medical image interpretation: a generic approach using deformable templates,” Medical Informatics, vol. 19(1), pp. 47-60, 1994.

[126] A. C. W. Kotcheff and C. J. Taylor, “Automatic reconstruction of eigenshape models by genetic algorithm,” in J. Duncan and G. Gindi, Proc. Information Processing in Medical Imaging, vol. 1230 of Lecture Notes in Computer Science, pp. 441-446, Springer Verlag, Berlin, 1997.

[127] A. C. W. Kotcheff and C. J. Taylor, “Automatic construction of eigenshape models by direct optimization,” Medical Image Analysis, vol. 2(4), pp. 303-314, 1998.

[128] S. Sclaroff and A. P. Pentland, “Modal matching for correspondence and recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 17(6), pp. 545-561, 1995.

[129] C. Kambhamettu and D. B. Goldgof, “Curvature-based approach to point correspondence recovery in conformal nonrigid motion,” CVGIP: Image Understanding, vol. 60(1), pp. 26-43, 1994.

[130] A. Hill, A. D. Brett, and C. J. Taylor, “Automatic landmark identification using a new method of non-rigid correspondence,” in J. Duncan and G. Gindi, Proc. Information Processing in Medical Imaging, vol. 1230 of Lecture Notes in Computer Science, pp. 483-488, Springer Verlag, Berlin, 1997.

[131] N. Duta and M. Sonka, “Segmentation and interpretation of MR brain images using an improved knowledge-based active shape model,” in J. Duncan and G. Gindi, Proc. Information Processing in Medical Imaging, vol. 1230 of Lecture Notes in Computer Science, pp. 381-386, Springer Verlag, Berlin, 1997.

[132] P. P. Smyth, C. J. Taylor, and J. E. Adams, “Automatic measurement of vertebral shape using active shape models,” in J. Duncan and G. Gindi, Proc. Information Processing in Medical Imaging, vol. 1230 of Lecture Notes in Computer Science, pp. 441-446, Springer Verlag, Berlin, 1997.

[133] N. Duta and M. Sonka, “Segmentation and interpretation of MR brain images: an improved active shape model,” IEEE Transactions on Medical Imaging, vol. 17(6), pp. 1049-1062, 1999.

[134] T. L. Faber, E. M. Stokely, R. M. Peshock, and J. R. Corbett, “A model-based four-dimensional left ventricular surface detector,” IEEE Transactions on Medical Imaging, vol. 10(3), pp. 321-329, 1991.

[135] A. Zijdenbos, A. C. Evans, F. Riahi, J. Sled, J. Chui, and V. Kollokian, “Automatic quantification of multiple sclerosis lesion volume using stereotaxic space,” in K. H. Höhne and R. Kikinis, Proc. Visualization in Biomedical Computing, vol. 1131 of Lecture Notes in Computer Science, pp. 439-448, Springer Verlag, Berlin, 1996.

[136] D. L. Collins, P. Neelin, T. M. Peters, and A. C. Evans, “Automatic 3D intersubject registration of MR volumetric data in standardized Talairach space,” Journal of Computer Assisted Tomography, vol. 18(2), pp. 192-205, 1994.

[137] D. L. Collins, C. J. Holmes, T. M. Peters, and A. C. Evans, “Automatic 3-D model-based neuroanatomical segmentation,” Human Brain Mapping, vol. 3(3), pp. 190-208, 1995.

[138] N. Karssemeijer, “A statistical method for automatic labeling of tissues in medical images,” Machine Vision and Applications, vol. 3, pp. 75-86, 1990.

(18)

[139] K. van Leemput, F. Maes, D. Vandermeulen, and P. Suetens, “Automatic segmentation of brain tissues and MR bias field correction using a digital brain atlas,” in W. M. Wells and A. C. F. Colchester, Proc.

MICCAI, vol. 1496 of Lecture Notes in Computer Science, pp. 1222-1229, Springer Verlag, Ber- lin,1998.

[140] G. Le Goualher, D. L. Collins, C. Barillot, and A. C. Evans, “Automatic identification of cortical sulci using a 3D probabilistic atlas,” in W. M. Wells and A. C. F. Colchester, Proc. MICCAI, vol. 1496 of Lecture Notes in Computer Science, pp. 509-518, Springer Verlag, Berlin,1998.

[141] J. L. Boes, P. H. Bland, T. E. Weymouth, L. E. Quint, F. L. Bookstein, and C. R. Meyer, “Generating a normalized geometric liver model using warping,” Investigative Radiology, vol. 29(3), pp. 281-286, 1994.

[142] J. L. Boes, C. Meyer, and T. E. Weymouth, “Liver definition in CT using a population-based shape model,” Lecture Notes in Computer Science, vol. 905, pp. 506-512, 1995.

[143] J. L. Boes, T. E. Weymouth, and C. R. Meyer, “Multiple organ definition in CT using a Bayesian approach for 3D model fitting,” Proc. SPIE, vol. 2573, pp. 244-251, 1995.

[144] H. D. Tagare, “Deformable 2-D template matching using orthogonal curves,” IEEE Transactions on Medical Imaging, vol. 16(1), pp. 108-117, 1997.

[145] J. Brinkley, “A flexible, generic model for anatomic shape: application to interactive two-dimensional medical image segmentation and matching,” Computers and Biomedical Research, vol. 26, pp. 121- 142, 1993.

[146] J. G. Bosch, L. H. Savalle, G. van Burken, and J. H. C. Reiber, “Evaluation of a semiautomatic con- tour detection approach in sequences of short-axis two-dimensional echocardiographic images,” Jour- nal of the American Society Echocardiography, vol. 8, pp. 810-821, 1995.

[147] R. J. van der Geest, V. G. M. Buller, E. Jansen, H. J. Lamb, L. H. B. Baur, E. E. van der Wall, A. de Roos, and J. H. C. Reiber, “Comparison between manual and semiautomated analysis of left ventricular volume parameters from short-axis MR images,” Journal of Computer Assisted Tomography, vol. 21(5), pp. 756-765, 1997.

[148] R. J. van der Geest, R. A. Niezen, E. E. van der Wall, A. de Roos, and J. H. C. Reiber, “Automated measurement of volume flow in the ascending aorta using MR velocity maps: evaluation of inter- and intraobserver variability in healthy volunteers,” Journal of Computer Assisted Tomography, vol. 22(6), pp. 904-911, 1998.

[149] M. Sonka, X. Zhang, M. Siebes, M. S. Bissing, S. DeJong, S. M. Collins, and C. R. McKay, “Segmentation of intravascular ultrasound images: a knowledge guided approach,” IEEE Transactions on Medical Imaging, vol. 14, pp. 719-732, 1995.

[150] A. Krivanek and M. Sonka, “Ovarian ultrasound image analysis: follicle segmentation,” IEEE Transactions on Medical Imaging, vol. 17(6), pp. 935-944, 1998.

[151] J. Lötjönen, I. E. Magnin, P.-J. Reissman, J. Nenonen, and T. Katila, “Segmentation of magnetic resonance images using 3D deformable models,” in W. M. Wells and A. C. F. Colchester, Proc. MICCAI, vol. 1496 of Lecture Notes in Computer Science, pp. 9-20, Springer Verlag, Berlin, 1998.

[152] A. L. Yuille, P. W. Hallinan, and D. S. Cohen, “Feature extraction from faces using deformable templates,” International Journal of Computer Vision, vol. 8(2), pp. 99-111, 1992.

[153] K. Delibasis and P. E. Undrill, “Anatomical object recognition using deformable geometric models,” Image and Vision Computing, vol. 12(7), pp. 423-433, 1994.

[154] B. P. F. Lelieveldt, M. Sonka, L. Bolinger, T. D. Scholtz, H. W. M. Kayser, R. J. van der Geest, and J. H. C. Reiber, “Anatomical modeling with fuzzy implicit surfaces: application to automated localization of the heart and lung surfaces in thoracic MR images,” in A. Kuba and M. Samal, Proc. Information Processing in Medical Imaging, vol. 1613 of Lecture Notes in Computer Science, pp. 400-405, Springer Verlag, Berlin, 1999.

[155] B. P. F. Lelieveldt, R. J. van der Geest, and J. H. C. Reiber, “Automated model driven localization of the heart and lung surfaces in thoracic MR images,” Computers in Cardiology, vol. 25, pp. 9-12, 1998.

[156] B. P. F. Lelieveldt, R. J. van der Geest, M. Ramze Rezaee, J. G. Bosch, and J. H. C. Reiber, “Anatomical model matching with fuzzy implicit surfaces for segmentation of thoracic volume scans,” IEEE Transactions on Medical Imaging, vol. 18(2), pp. 218-230, 1999.

[157] P. St-Jean, A. F. Sadikot, L. Collins, D. Clonda, R. Kasrai, A. C. Evans, and T. M. Peters, “Automated atlas integration and interactive three-dimensional visualization tools for planning and guidance in functional neurosurgery,” IEEE Transactions on Medical Imaging, vol. 17(5), pp. 672-680, 1998.

[158] T. Greitz, C. Bohm, S. Holte, and L. Eriksson, “A computerized brain atlas: construction, anatomical content and some applications,” Journal of Computer Assisted Tomography, vol. 15, pp. 26-38, 1991.

[159] R. Dann, J. Hoford, S. Kovacic, M. Reivich, and R. Bajcsy, “Evaluation of elastic matching system for anatomic (CT, MR) and functional (PET) cerebral images,” Journal of Computer Assisted Tomography, vol. 13(4), pp. 603-611, 1989.

[160] R. Bajcsy and S. Kovacic, “Multiresolution elastic matching,” Computer Vision, Graphics and Image Processing, vol. 46, pp. 1-21, 1989.

[161] A. C. Evans, W. Dai, L. Collins, P. Neelin, and S. Marret, “Warping of a computerized 3-D atlas to match brain image volumes for quantitative neuroanatomical and functional analysis,” Proceedings SPIE Image Processing, vol. 1445, pp. 236-247, 1991.

[162] G. E. Christensen, R. D. Rabbitt, and M. I. Miller, “3D-brain mapping using a deformable neuroanatomy,” Physics in Medicine and Biology, vol. 39, pp. 609-618, 1994.

[163] G. E. Christensen, R. D. Rabbitt, and M. I. Miller, “Deformable templates using large deformation kinematics,” IEEE Transactions on Image Processing, vol. 5(10), pp. 1435-1447, 1996.

[164] J. W. Haller, A. Banerjee, G. E. Christensen, M. Gado, S. Joshi, M. I. Miller, Y. Sheline, M. W. Vannier, and J. G. Csernansky, “Three-dimensional hippocampal MR morphometry with high-dimensional transformation of a neuroanatomic atlas,” Radiology, vol. 202(2), pp. 504-510, 1997.

[165] M. Bro-Nielsen and C. Gramkow, “Fast fluid registration of medical images,” in K. H. Höhne and R. Kikinis, Proc. Visualization in Biomedical Computing, vol. 1131 of Lecture Notes in Computer Science, pp. 267-276, Springer Verlag, Berlin, 1996.

[166] J. P. Thirion, “Image matching as a diffusion process: an analogy with Maxwell's demons,” Medical Image Analysis, vol. 2(3), pp. 243-260, 1998.

[167] Y. Wang and L. Staib, “Elastic model based non-rigid registration incorporating statistical shape information,” in W. M. Wells and A. C. F. Colchester, Proc. MICCAI, vol. 1496 of Lecture Notes in Computer Science, pp. 1162-1173, Springer Verlag, Berlin, 1998.

[168] M. S. Brown, M. F. McNitt-Gray, N. J. Mankovitch, J. G. Goldin, J. Hiller, L. S. Wilson, and D. R. Aberle, “Method for segmenting chest CT image data using an anatomical model: preliminary results,” IEEE Transactions on Medical Imaging, vol. 16(6), pp. 828-839, 1997.

[169] N. Karssemeijer, L. J. Th. O. van Erning, and E. G. J. Eikman, “Recognition of organs in CT-image sequences: a model guided approach,” Computers and Biomedical Research, vol. 21(5), pp. 434-448, 1988.

[170] G. P. Robinson, A. C. F. Colchester, and L. D. Griffin, “Model-based recognition of anatomical objects from medical images,” Lecture Notes in Computer Science, vol. 687, pp. 197-211, 1993.

[171] H. Li, R. Deklerck, B. De Cuyper, A. Hermanus, E. Nyssen, and J. Cornelis, “Object recognition in brain CT scans: knowledge-based fusion of data from multiple feature extractors,” IEEE Transactions on Medical Imaging, vol. 14(2), pp. 212-229, 1995.

[172] F. Poupon, J.-F. Mangin, D. Hasboun, C. Poupon, I. Magnin, and V. Frouin, “Multi-object deformable templates dedicated to the segmentation of brain deep structures,” in W. M. Wells and A. C. F. Colchester, Proc. MICCAI, vol. 1496 of Lecture Notes in Computer Science, pp. 1134-1143, Springer Verlag, Berlin, 1998.

[173] B. P. F. Lelieveldt, J. T. Rijsdam, R. J. van der Geest, D. P. Huijsmans, and J. H. C. Reiber, “Model driven interpretation of velocity encoded aortic flow images by means of Voronoi Arrangement Matrices,” Computers in Cardiology, vol. 25, pp. 753-756, 1998.

[174] M. Sonka, S. K. Tadikonda, and S. M. Collins, “Knowledge-based interpretation of MR brain images,” IEEE Transactions on Medical Imaging, vol. 15(4), pp. 443-452, 1996.

[175] S. Dellepiane, C. Regazzoni, S. B. Serpico, and G. Vernazza, “Extension of IBIS for 3D organ recognition in NMR multislices,” Pattern Recognition Letters, vol. 8, pp. 65-72, 1988.

[176] H. Niemann, G. F. Sagerer, S. Schröder, and F. Kummert, “ERNEST: A semantic network system for pattern recognition,” IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 12(9), pp. 883-905, 1990.

[177] M. Sonka, W. Y. Park, and E. A. Hoffman, “Rule based detection of intrathoracic airway trees,” IEEE Transactions on Medical Imaging, vol. 15(3), pp. 314-326, 1996.

[178] G. P. Robinson, A. C. F. Colchester, and L. D. Griffin, “Model-based recognition of anatomical objects from medical images,” Image and Vision Computing, vol. 12(8), pp. 499-507, 1994.
