
Automatic infarct planimetry by means of swarm-based clustering

Pieter A. van Vuuren

School of Electrical, Electronic and Computer Engineering
North-West University
Private Bag X6001, Potchefstroom, 2520, South Africa
Email: pieter.vanvuuren@nwu.ac.za

Derick van Vuuren

Division of Medical Physiology
Faculty of Medicine and Health Sciences
Stellenbosch University, South Africa
Email: dvvuuren@sun.ac.za

Abstract—Infarct planimetry is an important tool in cardiology research. At present this technique entails that infarct size is manually determined from scanned images of prepared heart sections. Existing attempts at automating infarct planimetry are limited in that they require user input in the form of starting points for region-growing algorithms or template values for classification algorithms. In this paper a new automatic infarct planimetry (AIP) algorithm is presented. The algorithm entails colour contrast enhancement performed in the CIE LAB colour space. The distribution of the various tissue classes is thereafter modelled by means of a set of cluster centroids (multiple clusters are used to represent each tissue class in the RGB colour space). Finally, tissue pixels are classified by means of a nearest-neighbour rule. Two clustering algorithms are evaluated, namely the well-known k-means algorithm and particle swarm optimization (PSO) based clustering. The total AIP procedure is relatively robust to variations in background illumination as well as to condensation patterns occurring on individual heart sections. The main advantage of this AIP algorithm is that only limited user input is required: the user merely has to specify which heart section is to be used for training. The classification decisions made by both variants of the AIP algorithm correlate well with those made by a human expert, with the PSO-based clustering algorithm performing slightly better than k-means.

I. INTRODUCTION

Myocardial ischaemia/reperfusion (I/R) injury refers to injury associated with the loss of blood supply to the heart, or a portion of the heart, followed by the reinstitution of flow [1]. This form of injury is relevant in several clinical scenarios, such as heart transplantation and myocardial infarction, commonly known as a heart attack. The high incidence of ischaemic heart disease [2] has motivated substantial interest in finding ways to limit and treat I/R injury [3]. In the basic sciences, the determination of infarct size in hearts harvested from animal models has gained widespread use [4] and is viewed by many as the gold standard for determining the lethal effects of I/R [5].

Experimentally, the determination of infarct size entails the transient occlusion of the left descending coronary artery in either an isolated heart preparation or an open-chest in vivo model, especially in small animal models such as mice, rats and rabbits. After a suitable period of time, the occlusion is removed to allow reperfusion of the affected area. At the end of the experiment the artery is permanently occluded and the heart perfused with 0.5% Evans Blue dye to delineate the area of the heart not affected by the occlusion, called the viable area (VA), versus the ischaemic zone, called the area at risk (AAR). After freezing, the hearts are cut into 2 mm thick slices which are then incubated in a phosphate buffer containing 1% w/v triphenyltetrazolium chloride (TTC) for 15 minutes at room temperature. Cell death by necrosis is characterized by an increase in the permeability of the cell membrane [6]; cells that died during the I/R experiment have therefore lost their structure and content. TTC only reacts with active dehydrogenases to form a red precipitate [7]. Consequently, in the AAR, tissue containing cells that were still viable at the end of the experiment stains red in the TTC solution, while tissue that has died remains unstained and appears white: the so-called infarcted area. Following staining, the sections of each heart are placed between two transparent Perspex plates and the three different areas (VA, AAR and infarction) are traced by a human observer. An example of one such prepared section is shown in figure 1.

Fig. 1. Prepared heart section exhibiting three classes of tissue

These three tissue areas are then quantified by the observer using graphics analysis software such as the UTHSCSA ImageTool program¹. This procedure entails that each heart section is manually segmented into the three different tissue types. Finally, the surface areas of the various slices are added together to determine the AAR, VA and infarcted area representative of the whole heart.

The weakest link in the aforementioned procedure is that the manual image segmentation is performed by a human expert. Humans are error-prone, tire easily and are never absolutely objective.

¹Developed at the University of Texas Health Science Centre at San Antonio, Texas, and available at http://compdent.uthscsa.edu/dig/download.html.

It is therefore no wonder that automating the infarct planimetry process has received some attention in the past. As early as 1995, Porzio et al. presented an interactive technique, based on morphological and Boolean operators, that enables a human operator to eliminate unwanted tissue from an imaged heart section [8].

More recent techniques are still in part semi-automatic, since users are often required to define seed points for region-growing algorithms [9]. Although such solutions reduce the possibility of subjectivity creeping into analyses, they do not eliminate it altogether. The only fully automated procedure for the analysis of infarcted tissue is based on an anatomical model of heart cross-sections [10]. In this model the entire section, as well as its main constituent components, is represented by a set of ellipses. The size, position and shape of these ellipsoidal components are then fitted to the observed data by means of the Expectation-Maximization algorithm. Unfortunately, it often occurs that heart sections do not conform to the assumptions upon which this technique is based (see e.g. the sections in figure 3), thereby invalidating the approach.

Related work has also been done in the automated analysis and planimetry of PET/SPECT² images. At one end of the spectrum, template matching techniques have been used in commercial PET/SPECT image analysis software [11]. Threshold-based image segmentation techniques have also been proposed in the literature [12]. In the latter work, threshold levels are adapted locally based on prior knowledge of heart anatomy as well as the physiology of myocardial infarction. An alternative approach was followed by Lamash et al., in which each type of tissue is modelled by its own Gaussian distribution (optimized by means of a genetic algorithm) [13]. Image pixels are then classified by means of a Bayes-based classifier. The various tissue classes can be modelled with higher fidelity by means of Gaussian mixture models [14].

Despite previous work on automatic infarct planimetry, there is still room for improvement. Although this problem seems quite simple at face value, any automatic classifier has to contend with a large component of variability incurred during the entire process leading up to and including imaging of the prepared heart sections. Furthermore, infarcted tissue itself exhibits significant intra-class variability. Consequently, a need still exists for automatic infarct planimetry software that is accurate, robust and requires minimal user input.

The Achilles heel of classifiers based on supervised learning (e.g. support vector machines) is that their robustness is limited by the variability contained in their training data. It often occurs that a given set of training data is only valid for specific conditions. The cost of obtaining training data therefore makes supervised learning impractical for automatic infarct planimetry. In contrast, clustering algorithms hold out the promise of enabling infarct planimetry software to adapt to changing inputs and thereby become truly automatic.

²Positron Emission Tomography / Single-Photon Emission Computed Tomography.

This paper contributes to the field of automatic infarct planimetry by employing unsupervised clustering to obtain a classifier which is both robust and dependent on only minimal user input. The robustness of the classifier is in part due to the preprocessing performed on the input images. Section II describes the preprocessing algorithm, which aims to limit the degree of variation present in the image prior to classification. Clustering algorithms are used to model each of the tissue classes; two candidate clustering algorithms are presented in section III, namely k-means and a particle swarm optimization based clustering algorithm. A nearest neighbour decision rule is used to classify each image pixel. The results obtained by the entire algorithm are presented and discussed in section IV, while section V outlines avenues for future work.

II. PREPROCESSING

All forms of scientific endeavour require some form of modelling in which complexity is made understandable by means of simplification and abstraction. In this paper three models are used at various stages of the image segmentation process. The first step is to compensate for variation in background illumination, which requires a model of the background illumination. Secondly, the distribution of pixels in the RGB colour space is modelled by a structure similar to (but much simpler than) a Gaussian mixture model. Finally, each pixel is classified by a very simple model of the human decision-making process: a nearest neighbour classifier.

The entire classification system can also be viewed as a hierarchical classifier. At first the classifier only attempts to distinguish heart sections from the rest of the image. The output of this classifier serves two purposes. Firstly, with the heart sections removed from the image, the background illumination can be modelled accurately. Secondly, the identified heart sections are sent to the clustering algorithm to obtain the class centroids used for eventual classification.

A. Illumination correction

Significant variation exists in the background illumination between different scans of heart sections. To limit the impact of this variation on the eventual classification decisions, it is important to correct the background illumination to conform to a more standard background (e.g. pure white).

The observed value O(x, y) of each pixel at coordinates x and y in the image can be modelled as the point-wise product of the illumination I(x, y) falling on the object and the reflectance R(x, y) at each point on the object. In order to obtain a desired illumination I(x, y) (e.g. I(x, y) = [1 1 1] in normalized RGB colour values), it is necessary to estimate the actual illumination by means of a model Î(x, y). The corrected image Õ(x, y) is then obtained by point-wise division as follows:

Õ(x, y) = [I(x, y) / Î(x, y)] · R(x, y).    (1)

In general, the background illumination in an image can be modelled by the following two-dimensional polynomial function [15]:

Î(x, y) = a₀ + a₁x + a₂y + a₃x² + a₄y² + a₅xy + …    (2)

In this application the illumination is largely uniform, so Î(x, y) is adequately approximated by the constant value a₀. The value of a₀ is estimated by first removing all heart sections from the image and then taking the average of each RGB colour channel separately. The advantage of performing illumination correction via (1) is that illumination variations within each heart section can also be compensated for.
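As a minimal sketch of this correction (assuming normalized floating-point RGB images and a precomputed Boolean background mask; the function and parameter names are illustrative rather than taken from the paper):

```python
import numpy as np

def correct_illumination(image, background_mask, desired=1.0):
    """Apply equation (1) with a constant illumination estimate a0.

    image:           normalized RGB image, floats in [0, 1]
    background_mask: True wherever no heart section (or shadow) is present
    desired:         target illumination level (pure white = 1.0)
    """
    corrected = np.empty_like(image)
    for c in range(3):                              # each RGB channel separately
        a0 = image[..., c][background_mask].mean()  # constant estimate of (2)
        corrected[..., c] = desired * image[..., c] / a0
    return np.clip(corrected, 0.0, 1.0)
```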

A critical element in the illumination correction process is the ability to accurately identify heart sections. First, Sobel edge detection is performed on the luminance component of the image after transformation from RGB to the CIE LAB colour space. Thereafter, morphological closing [16] with a large disk-shaped structuring element, followed by morphological hole filling, is performed to obtain a binary mask with which the background image can be obtained. The size of the morphological structuring element is chosen to ensure that all heart sections, as well as their shadows, are excluded from the background image. Figure 2 shows an image before and after illumination correction.
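A sketch of this section-masking step, assuming scikit-image and SciPy building blocks (the edge threshold and disk radius below are illustrative placeholders, not values from the paper):

```python
from scipy.ndimage import binary_fill_holes
from skimage import color, filters, morphology

def compute_background_mask(rgb, disk_radius=25, edge_threshold=0.02):
    """Boolean mask that is True on background pixels only.

    Sobel edges on the CIE LAB luminance are closed with a large
    disk-shaped structuring element and hole-filled, so that whole
    heart sections (and their shadows) are excluded from the
    background estimate.
    """
    luminance = color.rgb2lab(rgb)[..., 0] / 100.0  # L channel, scaled to [0, 1]
    edges = filters.sobel(luminance) > edge_threshold
    closed = morphology.closing(edges, morphology.disk(disk_radius))
    sections = binary_fill_holes(closed)
    return ~sections
```

The returned mask is exactly what correct_illumination above expects as its background_mask argument.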

Fig. 2. The result of illumination correction

B. Shadow removal

After illumination correction has been performed, each individual heart section is automatically identified and separated from the background. This time the shadows are included in the background image. In order to remove both the general background and the shadows (which in some instances can be darker than infarct tissue), it is necessary to generate two separate binary mask images and fuse them with a Boolean OR operation.

The first binary mask is obtained from a grayscale version of the image. Foreground pixels are differentiated from background pixels in the grayscale image by means of a threshold value. The value of this threshold is determined adaptively from the histogram of pixel intensities for the particular image³.

The second binary mask is obtained by the same sequence of morphological operations mentioned earlier. The only difference is that the size of the structuring element is determined iteratively, until the number of identified heart sections is approximately the same as in the binary mask obtained by thresholding.

³More specifically, from the main valley in the histogram, which represents the boundary between foreground and background pixel values.
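A sketch of the mask fusion follows. Otsu's method is used here as a stand-in for the histogram-valley threshold described above, which is a substitution rather than the paper's exact procedure:

```python
from skimage import color, filters

def foreground_mask(rgb, morphological_mask):
    """Fuse the intensity-threshold mask with the morphological mask
    via a Boolean OR, as described in section II-B."""
    gray = color.rgb2gray(rgb)
    # Otsu picks a cut between the dark foreground and bright background
    # modes, approximating the main valley of the intensity histogram.
    threshold_mask = gray < filters.threshold_otsu(gray)
    return threshold_mask | morphological_mask
```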

C. Colour contrast enhancement

In this paper the object under scrutiny is each individual heart section. Examples of heart sections are shown in figure 3. Each section consists of connected tissue, but unfortunately no other simplifying assumptions can be made that will hold for all heart sections encountered in practice. This means that each tissue type can only be described in terms of its colour. The rest of the preprocessing is therefore concerned with enhancing the colour contrast between the various classes of pixels. The twin objectives of the colour contrast enhancement algorithm are to increase the differences between pixels belonging to different tissue classes and to improve the homogeneity within each tissue class.

Fig. 3. Examples of different heart sections

The first step of the colour contrast enhancement algorithm is to convert the RGB image to the CIE LAB colour space. Although the RGB colour space is inspired by the colour-sensitive receptors in the human retina, the CIE LAB colour space is a more accurate representation of human colour vision and perception [17]. Similar to the opponent-process colour perception model [18], the LAB colour space represents colour in two dimensions. The first dimension encompasses all colour on the axis between green and red/magenta, while the second ranges from blue to yellow.

The prime advantage of the CIE LAB colour space in this application is that the colouration of two of the tissue types lies at two of the poles of the LAB space. "At risk" tissue is typically reddish in colour, which corresponds to one end of the green-magenta spectrum. Viable tissue is frequently dark blue, which in turn lies at one end of the blue-yellow dimension. Although infarct tissue exhibits wideband colouration, it sometimes contains a yellow component, which is also highlighted on the blue-yellow axis of the CIE LAB space.

Colour contrast enhancement is accomplished in this paper by means of a soft threshold implemented with the well-known logistic function:

y = 1 / (1 + e^(−a(x−b))),    (3)

where x represents the colour value of a pixel in one of the LAB dimensions, y is the transformed pixel value, and a and b are two parameters that control the nature of the mapping. The crossover point of the logistic function is determined by b, while a controls the gradient of the function at the crossover point. Optimal classification results are obtained when a and b are chosen to be 0.5 and 10 respectively.
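A sketch of this soft threshold is shown below, assuming scikit-image for the colour-space conversions. How the logistic output y in [0, 1] is rescaled back onto the LAB chroma range is an assumption here, since the paper does not spell it out:

```python
import numpy as np
from skimage import color

def enhance_colour_contrast(rgb, a=0.5, b=10.0):
    """Soft-threshold the CIE LAB chroma channels with the logistic
    function of equation (3) and return the result as RGB."""
    lab = color.rgb2lab(rgb)
    for ch in (1, 2):                       # a* (green-red) and b* (blue-yellow)
        y = 1.0 / (1.0 + np.exp(-a * (lab[..., ch] - b)))   # equation (3)
        # Rescale y back onto the approximate chroma range (assumption).
        lab[..., ch] = 200.0 * y - 100.0
    return color.lab2rgb(lab)
```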

Figure 4 shows the effect of the various preprocessing algorithms on a typical image. A serendipitous consequence of the above-mentioned colour contrast enhancement procedure is that condensation patterns are largely removed from the image. Condensation frequently occurs during the imaging process, making it very difficult to distinguish between red and blue tissue, as can be seen in the left-most section in figure 3. Close scrutiny of figure 4 shows that condensation patterns are reduced significantly.

Fig. 4. The result of the various preprocessing algorithms

III. CLASSIFIER

A. Clustering based classification

Even after the above-mentioned preprocessing has been performed, the various tissue classes cannot be accurately modelled by single centroids in the RGB space⁴. A colour-coded scatterplot of the pixels in a typical heart section is shown in figure 5. In this plot each dot is rendered in its RGB colour from the colour-enhanced image. Clearly, the distribution of each tissue type in RGB space consists of more than one cluster of pixels (reminiscent of a Gaussian mixture model).

Fig. 5. Distribution of tissue pixels in the RGB colour space

⁴Class separation is sometimes improved by increasing the dimensionality of the space; classification is therefore performed in RGB space.

A simple, yet accurate model of each tissue class can be obtained by clustering the pixels in the RGB space into a large number of clusters⁵. In contrast with standard practice, the clustering algorithm should not be seeded with random centroid positions, because each cluster will eventually be associated with a particular tissue colour. Clustering therefore proceeds from idealized colour values for each of the tissue classes (e.g. [0.9 0 0] in the case of red tissue), to which a small component of random noise is added to obtain a number of centroids for the particular class. The other three idealized class centroids are: infarct [0.6 0.6 0], viable (blue) tissue [0 0.1 0.1] and ventricular lumen [0.6 0.6 0.6].

After the clustering algorithm has converged, each centroid is allocated the same class label as the nearest idealized tissue colour. The result of the above-mentioned modelling algorithm is a set of labelled centroids. These centroids can be used to classify image pixels by means of a nearest neighbour classification rule, in which a pixel assumes the identity of the nearest cluster centroid.

The above-mentioned algorithm is quite general and can accommodate almost any partitional clustering algorithm. The well-known k-means algorithm is a prime candidate due to its simplicity and speed. As an example, figure 6 shows a scatterplot of the pixels in a typical heart section, with the centroids determined by k-means superimposed. In this specific example (as well as in the rest of the paper) two centroids were used to model infarct tissue, four centroids each were allocated to viable and at-risk tissue, and three centroids were devoted to the combination of ventricular lumen and remaining shadow pixels.

Fig. 6. Cluster centroids obtained by k-means
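A minimal sketch of this seeded clustering and nearest-neighbour classification, assuming scikit-learn's k-means implementation (the noise scale below is illustrative; the paper only states that a small random component is added to the idealized colours):

```python
import numpy as np
from sklearn.cluster import KMeans

# Idealized class colours in normalized RGB and the number of centroids
# per class used in this paper's experiments.
IDEAL = {
    "at_risk": ([0.9, 0.0, 0.0], 4),
    "viable":  ([0.0, 0.1, 0.1], 4),
    "infarct": ([0.6, 0.6, 0.0], 2),
    "lumen":   ([0.6, 0.6, 0.6], 3),
}

def train_and_classify(train_pixels, pixels, noise=0.02, seed=0):
    """Cluster the training section (N x 3 RGB rows) from noise-perturbed
    idealized seeds, label each converged centroid by its nearest
    idealized colour, then classify pixels with a 1-NN rule."""
    rng = np.random.default_rng(seed)
    seeds = [np.asarray(colour) + rng.normal(0.0, noise, 3)
             for colour, count in IDEAL.values() for _ in range(count)]
    km = KMeans(n_clusters=len(seeds), init=np.array(seeds), n_init=1)
    km.fit(train_pixels)
    # Re-label each converged centroid with the nearest idealized colour.
    names = list(IDEAL)
    ideal_cols = np.array([colour for colour, _ in IDEAL.values()])
    labels = [names[np.argmin(((c - ideal_cols) ** 2).sum(axis=1))]
              for c in km.cluster_centers_]
    # Nearest-neighbour rule: each pixel takes its nearest centroid's label.
    return [labels[i] for i in km.predict(pixels)]
```

In keeping with section IV, train_pixels would come from the single user-selected training section, while pixels are drawn from the remaining sections of the same scan.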

Despite its simplicity, this classifier works quite well. Figure 7 shows a few heart sections (a) before processing and (b) after classification.

Fig. 7. Results obtained by k-means clustering

⁵Fitting this mixture model to the observed data corresponds to the clustering performed here.

B. Particle swarm optimization based clustering

Particle swarm optimization (PSO) can be used to solve optimization problems. Although PSO algorithms tend to be slow, they are easy to implement and are capable of finding a global minimum in the absence of a differentiable cost function [19]. If the clustering or classification problem is reformulated as an optimization problem, it is possible to bring the optimizing abilities of swarm intelligence to bear on clustering [20]. It has been reported in the literature that PSO-based clustering performs better than k-means, although at a much higher computational cost [21].

Various versions of PSO-based clustering exist. In one version, the position of each particle in the swarm is regarded as a candidate centroid for a cluster. The movement of each particle within the swarm is governed by its own previous best position and the best position discovered to date by the swarm, upon which a certain amount of randomness is superimposed. At each iteration of the algorithm the fitness of each particle is evaluated by means of a cost function (also known as a cluster validity measure) [22].

In this paper multiple swarms are used to find the centroids of the various clusters within the RGB space. Each swarm is devoted to a single cluster, and cooperation between swarms is limited to the cluster validity measure used. More specifically, the so-called combined measure is used, which attempts both to minimize the intra-cluster distance and to maximize the inter-cluster distance. Details of the algorithm can be found in [22].
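A minimal single-swarm sketch of these PSO mechanics is given below: each particle is a candidate position for one centroid, and only the intra-cluster term of the cost is kept. The multi-swarm cooperation through the combined validity measure [22] is omitted for brevity, and the inertia and acceleration constants (w, c1, c2) are conventional defaults rather than values taken from this paper:

```python
import numpy as np

def pso_centroid(pixels, n_particles=20, iters=100, w=0.7, c1=1.5, c2=1.5, seed=0):
    """Position ONE cluster centroid with a basic gbest PSO, minimizing
    the mean distance between the centroid and its pixels (N x 3 RGB rows)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(0.0, 1.0, (n_particles, pixels.shape[1]))  # candidate centroids
    vel = np.zeros_like(pos)

    def cost(p):
        # Intra-cluster term only; the paper's combined measure also
        # rewards separation from the other swarms' centroids.
        return np.linalg.norm(pixels - p, axis=1).mean()

    pbest = pos.copy()
    pbest_cost = np.array([cost(p) for p in pos])
    gbest = pbest[pbest_cost.argmin()].copy()
    for _ in range(iters):
        r1, r2 = rng.random(pos.shape), rng.random(pos.shape)
        # Velocity update: inertia + pull towards personal and global bests.
        vel = w * vel + c1 * r1 * (pbest - pos) + c2 * r2 * (gbest - pos)
        pos = np.clip(pos + vel, 0.0, 1.0)           # stay inside the RGB cube
        costs = np.array([cost(p) for p in pos])
        better = costs < pbest_cost
        pbest[better], pbest_cost[better] = pos[better], costs[better]
        gbest = pbest[pbest_cost.argmin()].copy()
    return gbest
```

One such swarm per cluster, each seeded near its idealized tissue colour, would reproduce the structure described above, with the converged gbest positions playing the role of the k-means centroids.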

IV. RESULTS

In practice large volumes of data have to be processed. As figure 8 shows, each (high-resolution) image consists of numerous heart sections. Adapting the class centroids to each particular scan entails that only one example section is taken from a scan and clustered by means of either k-means or PSO-based clustering. The only proviso is that the example section must contain tissue from all three classes.

Fig. 8. Typical scan of processed heart sections

The accuracy of the automatic infarct planimetry (AIP) algorithm presented in this paper can be objectively assessed by means of regression plots. These are scatter plots in which the software's segmentation of each heart section is compared to the corresponding gold standard, namely segmentation performed by a human expert.

Figure 9 shows the performance of the AIP algorithm on the scanned image shown in figure 8. In this regression plot the performance of AIP by means of k-means is shown with asterisks, while circles denote AIP by means of PSO-based clustering. A clear correlation exists between the decisions of the AIP algorithm and those of a human expert. Least-squares lines were fitted to both regression plots; the resultant expressions are given in (4) (for k-means clustering) and (5) (for PSO-based clustering):

y₁ = 1.11x − 3.65    (4)

y₂ = 1.03x − 1.15    (5)

The R² value⁶ for (4) is 83%, while the R² value for (5) is 92%. From these results we can conclude that the swarm-based AIP algorithm is slightly more accurate than k-means.
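The regression analysis described above amounts to an ordinary least-squares line fit followed by an R² computation. A minimal sketch (assuming the per-section areas from the expert and the software are available as paired arrays; names are illustrative):

```python
import numpy as np

def regression_fit(expert_areas, software_areas):
    """Least-squares line and R-squared for a regression plot that
    compares software planimetry against the human gold standard."""
    x = np.asarray(expert_areas, dtype=float)
    y = np.asarray(software_areas, dtype=float)
    slope, intercept = np.polyfit(x, y, 1)   # fitted line, as in (4) and (5)
    residuals = y - (slope * x + intercept)
    r2 = 1.0 - (residuals ** 2).sum() / ((y - y.mean()) ** 2).sum()
    return slope, intercept, r2
```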

A more detailed analysis of figure 9 is given in figure 10, in which the regression plots are broken down to class level. Each point in the plot represents the tissue areas determined by a human expert and by the software for a particular section. Figure 10 reveals that both versions of the AIP algorithm tend to overestimate the amount of non-risk tissue and to underestimate the amount of infarct tissue. Both of these issues can be addressed by changing the number of cluster centroids associated with the relevant classes.

⁶This is a measure of the percentage of variation in the data that can be explained by the fitted regression line.

Fig. 9. Regression plot of both versions of automatic infarct planimetry (AIP)

Fig. 10. Detailed regression plot

V. CONCLUSION AND RECOMMENDATIONS

The main disadvantage of PSO-based clustering is that it is computationally expensive. Typically, PSO-based clustering takes between 500 and 700 times longer to converge to a solution than k-means clustering. It is debatable whether the marginal improvement afforded by PSO-based clustering is worth the increased computational cost. The obvious point of departure for future work is therefore either to improve the accuracy of k-means clustering or to reduce the computational cost of swarm-based clustering.

The results presented in section IV indicate that the automatic infarct planimetry algorithm outlined in this paper is indeed a promising tool for medical laboratories. The large variance of the residuals in the regression plot of figure 9 is, however, cause for concern and should be investigated. Part of the problem is the inherent variability contained in any gold standard derived from human classification decisions. Future work should therefore include comparing the AIP algorithm against a number of human experts.

REFERENCES

[1] H. Piper, D. Garcia-Dorado, and M. Ovize, “A fresh look at reperfusion injury,” Cardiovascular research, vol. 38, pp. 291–300, 1998.

[2] R. Lozano, M. Naghavi, K. Foreman, S. Lim, K. Shibuya, and V. Aboyans, “Global and regional mortality from 235 causes of death for 20 age groups in 1990 and 2010: a systematic analysis for the global burden of disease study 2010,” Lancet, vol. 380, pp. 2095–2128, 2012.

[3] D. Sanz-Rosa, J. Garcia-Prieto, and B. Ibanez, “The future: therapy of myocardial protection,” Annals of the New York Academy of Sciences, vol. 1254, pp. 90–98, 2012.

[4] K. Ytrehus, “The ischemic heart – experimental models,” Pharmacological Research, vol. 42, no. 3, pp. 193–203, 2000.

[5] A. Lochner, S. Genade, and J. Moolman, “Ischemic preconditioning: Infarct size is a more reliable endpoint than functional recovery,” Basic research in cardiology, vol. 98, pp. 337–346, 2003.

[6] A. Edinger and C. Thompson, “Death by design: apoptosis, necrosis and autophagy,” Current opinion in cell biology, vol. 16, pp. 663–669, 2004.

[7] K. Pitts, A. Stiko, B. Buetow, F. Lott, P. Guo, D. Virca, and C. Toombs, “Washout of heme-containing proteins dramatically improves tetrazolium-based infarct staining,” Journal of Pharmacological and Toxicological Methods, vol. 55, pp. 201–208, 2007.

[8] S. Porzio, M. Masseroli, A. Messori, G. Forloni, G. Olivetti, G. Jeremic, E. Riva, G. Luvara, and R. Latini, “A simple, automatic method for morphometric analysis of the left ventricle in rats with myocardial infarction,” Journal of Pharmacological and Toxicological Methods, vol. 33, pp. 221–229, 1995.

[9] D. Nascimento, M. Valente, T. Esteves, M. de Fatima de Pina, J. Guedes, A. Freire, P. Quelhas, and P. Pinto-do O, “MIQuant – semi-automation of infarct size assessment in models of cardiac ischemic injury,” PLoS ONE, vol. 6, no. 9, Art. no. e25045, 2011.

[10] T. Esteves, M. Valente, D. Nascimento, P. Pinto-Do-O, and P. Quelhas, “Automatic myocardial infarction size extraction in an experimental murine model using an anatomical model,” in Proceedings - International Symposium on Biomedical Imaging, 2012, pp. 310–313.

[11] J. De Sutter, C. Van de Wiele, Y. D’Asseler, P. De Bondt, G. De Backer, P. Rigo, and R. Dierckx, “Automatic quantification of defect size using normal templates: A comparative clinical study of three commercially available algorithms,” European Journal of Nuclear Medicine, vol. 27, no. 12, pp. 1827–1834, 2000.

[12] H. Soneson, H. Engblom, E. Hedstrom, F. Bouvier, P. Sorensson, J. Pernow, H. Arheden, and E. Heiberg, “An automatic method for quantification of myocardium at risk from myocardial perfusion SPECT in patients with acute coronary occlusion,” Journal of Nuclear Cardiology, vol. 17, no. 5, pp. 831–840, 2010.

[13] Y. Lamash, J. Lessick, and A. Gringauz, “An automatic method for the identification and quantification of myocardial perfusion defects or infarction from cardiac CT images,” in Proceedings - International Symposium on Biomedical Imaging, 2011, pp. 1314–1317.

[14] S.-K. Woo, Y. Lee, K. Kim, J. Yu, K. Lee, M. Kim, J.-A. Park, J. Kang, B. Kim, and S. Lim, “Automated quantitative assessment of myocardial perfusion in rodent PET/SPECT images,” in IEEE Nuclear Science Symposium Conference Record, 2012, pp. 3054–3057.

[15] M. Haidekker, Advanced biomedical image analysis. Hoboken, New Jersey: John Wiley & Sons, 2011.

[16] R. Gonzalez and R. Woods, Digital image processing. Reading, Massachusetts: Addison-Wesley, 1992.

[17] A. Ford and A. Roberts, “Colour space conversions,” Westminster University, London, Tech. Rep., 1998.

[18] E. Thompson, Colour vision: a study in cognitive science and the philosophy of perception. London: Routledge, 1995.

[19] B. Panigrahi, Y. Shi, and M.-H. Lim, Eds., Handbook of swarm intelligence: concepts, principles and applications. Springer, 2010.

[20] N. Nouaouria, M. Boukadoum, and R. Proulx, “Particle swarm classification: a survey and positioning,” Pattern Recognition, vol. 46, pp. 2028–2044, 2013.

[21] S. Alam, G. Dobbie, Y. Koh, P. Riddle, and S. Rehman, “Research on particle swarm optimization based clustering: a systematic review of literature and techniques,” Swarm and evolutionary computation, vol. 17, pp. 1–13, 2014.

[22] A. Ahmadi, F. Karray, and M. Kamel, “Flocking based approach for data clustering,” Natural Computing, vol. 9, no. 3, pp. 767–791, September 2010.
