Detection of body movement using optical flow and clustering

Wim Van Looy1, Kris Cuppens1,2,3, Bert Bonroy1,2, Anouk Van de Vel4, Berten Ceulemans4,5, Lieven Lagae5,6,Sabine Van Huffel3, Bart Vanrumste1,2,3

1IBW, K.H. Kempen (Associatie KULeuven), Kleinhoefstraat 4, B-2440 Geel, Belgium

2Mobilab, K.H. Kempen, Kleinhoefstraat 4, B-2440 Geel, Belgium

info@mobilab-khk.be

3KULeuven, ESAT, SISTA, BioMed, Celestijnenlaan 4, B-3000 Heverlee

4University Hospital of Antwerp, Wilrijkstraat 10, 2650 Edegem, Belgium

5Epilepsy Centre for children and youth Pulderbos, Reebergenlaan 4, 2242 Zandhoven, Belgium

6University Hospital Leuven, Herestraat 49, 3000 Leuven, Belgium

Abstract— In this paper we investigate whether it is possible to detect movement in video images recorded from sleeping patients with epilepsy. This information is used to detect possible epileptic seizures, normal movement, breathing and other kinds of movement. For this we use optical flow and clustering algorithms. As a result, different motion patterns can be extracted from the clustered body parts.

Keywords

Epilepsy, optical flow, spectral clustering, movement, k-means

I. INTRODUCTION

Epilepsy is still an active area of medical research. A specific cure for the disease is very hard to find, but currently around 70% to 75% of patients can be treated with medication or specific operations. To gain new insights into these seizures, monitoring is important. Nowadays different methods have been proposed and developed to detect and examine epileptic seizures [10]. Mostly the video-EEG standard is used: neurologists monitor the patient using cameras and EEG electrodes [14]. They can observe the behaviour and meanwhile compare it with the brain's response in an EEG chart.

This is an effective but uncomfortable way of monitoring a patient. Electrodes have to be attached, which takes a lot of time, is uncomfortable for the patient and makes it hard to do other activities while being monitored. Moreover, medical staff is required and it is not possible to monitor a patient for a longer period.

In this paper we explain a new approach for detecting epileptic seizures from body movement, using video monitoring.

Our next goal is to achieve an accurate detection of the seizures using simple and affordable hardware. Eventually we would like to use a webcam-like camera. The video data recorded for this study was acquired with the cameras available in the hospital setting for nocturnal monitoring, which use an external infrared light source.

With a simple near-infrared camera and a computer it should be possible to perform the detection on video images with a resolution of 320 x 240 pixels. The use of simple hardware requires intelligent processing, as video data is computationally intensive.

We investigate which algorithms are efficient on our video data.

Previous research showed that clustering algorithms can be used to cluster optical flow results for image segmentation [9]. In this paper, algorithms such as optical flow and spectral clustering are tested on video recordings. The first algorithm, optical flow, is a motion detection algorithm that calculates vector fields from two video frames. For this we use the Horn-Schunck method [15], which is discussed in section II.

Optical flow calculates the movement of each pixel using properties such as brightness and contrast. Each vector contains the velocity and direction, which allows us to extract the information necessary for seizure detection. This is the main reason why this algorithm was chosen.

Other motion detection techniques such as background subtraction or temporal differencing do not give us information about the velocity and position of the movement.

Next, clustering is applied to the features produced by the optical flow algorithm. The goal is to separate the different body parts and to measure their velocity and direction so that the movement can be characterized accurately. The monitoring of respiration is also possible using our method.

The algorithms are applied using Matlab. Section II explains how the clustering can be optimized, how the threshold calculation is done and which criteria we used to come to our conclusions. The third section explains the results and what influences them. Finally, in the last section a vision is given on possible future improvements.

II. METHOD


A. Video specifications

The video data we use is recorded in Pulderbos epilepsy centre for children and youth in Antwerp. It is compressed using Microsoft’s WMV compression. The video has a resolution of 720 x 576 with a frame rate of 25 frames per second. The camera is positioned in the upper corner of the room, monitoring a child sleeping in a bed.

It is possible that the person is covered by a blanket, so the different body parts are not always visible. This should not be a problem for the final result, as the body movement is transferred to the blanket.

The image is recorded in grayscale using an infrared camera; no RGB information is available. Before we are able to use the data for our optical flow algorithm, the video sequences are reduced in size and frame rate. We downsize the image to 320 x 240 pixels, which contains enough detail to apply the optical flow. The frame rate is also lowered to 12.5 frames per second. Due to these reductions, processing time decreases significantly with only a minor loss of detail.
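As an illustration, the reduction step could look like the minimal sketch below, written in Python with OpenCV (the original work was implemented in Matlab); the file name is hypothetical.

```python
import cv2

# Open the recorded video (file name is hypothetical).
cap = cv2.VideoCapture("night_recording.wmv")

frames = []
keep = True                 # keep every second frame: 25 fps -> 12.5 fps
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if keep:
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)   # infrared video carries no colour information
        frames.append(cv2.resize(gray, (320, 240)))      # 720 x 576 -> 320 x 240
    keep = not keep
cap.release()
```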

Instead of the optical flow calculation we could also have used the motion information from the WMV compression. However, not all the video data from other datasets we have collected is WMV-compressed. Therefore we use an approach that is independent of the video format.

B. Optical flow

The next step is to deliver the reduced video data to the optical flow algorithm. The algorithm calculates a motion field from consecutive frames: a vector for every pixel, characterized by the direction and magnitude of the movement in the video. Mathematically, a translation of a pixel with intensity I at position (x, y) and time t can be written as follows:

I(x, y, t) = I(x + δx, y + δy, t + δt)    (1)

Using differential methods, the velocity in both the x and y directions is calculated. The Horn-Schunck method uses partial derivatives to calculate the motion vectors. It relies on a brightness constancy constraint: the brightness is assumed to stay constant over a certain time span, which is used to track pixels from one image to the next. Horn and Schunck also use a smoothness constraint: where a vector value is unknown, the algorithm assumes it will be similar to the surrounding ones. It is important to supply video with rigid objects and good image intensity; otherwise the algorithm responds less accurately.

By default the algorithm calculates movement from two consecutive frames. It is also possible to use frames over a larger time span to emphasize the movement (e.g. to monitor respiration, which is a slower movement).

First we need to specify the smoothness factor and the number of iterations. The smoothness factor we have chosen is 1. This value is proportional to the average magnitude of the movement and also depends on the noise level. In our approach the factor is determined experimentally.

Next, the number of iterations has to be specified. A higher number yields a more accurate motion field and reduces noise. The downside is that a higher value requires more computation.

Our experiments showed that 1000 iterations offered a good trade-off between accuracy and speed. As a result, the optical flow algorithm produces a motion vector field. The information is stored in a matrix containing a complex value for each pixel.

The absolute value of a complex entry gives the magnitude of the movement; its angle gives the direction of the moving pixel.
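The sketch below illustrates the Horn-Schunck scheme described above, using the smoothness factor of 1 and the 1000 iterations mentioned in the text, and returning the motion field as a complex matrix. It is a simplified Python/NumPy re-implementation for illustration only, not the authors' Matlab code.

```python
import numpy as np
from scipy.ndimage import convolve

def horn_schunck(im1, im2, alpha=1.0, n_iter=1000):
    """Minimal Horn-Schunck optical flow; returns the motion field as a
    complex matrix u + j*v (one value per pixel)."""
    im1 = im1.astype(np.float64) / 255.0
    im2 = im2.astype(np.float64) / 255.0

    # Spatial and temporal derivatives, averaged over both frames.
    kx = np.array([[-1.0, 1.0], [-1.0, 1.0]]) * 0.25
    ky = np.array([[-1.0, -1.0], [1.0, 1.0]]) * 0.25
    kt = np.ones((2, 2)) * 0.25
    Ix = convolve(im1, kx) + convolve(im2, kx)
    Iy = convolve(im1, ky) + convolve(im2, ky)
    It = convolve(im2, kt) - convolve(im1, kt)

    # Neighbourhood averaging kernel for the smoothness constraint.
    avg = np.array([[1/12, 1/6, 1/12],
                    [1/6,  0.0, 1/6 ],
                    [1/12, 1/6, 1/12]])

    u = np.zeros_like(im1)
    v = np.zeros_like(im1)
    for _ in range(n_iter):
        u_bar = convolve(u, avg)
        v_bar = convolve(v, avg)
        # Horn-Schunck iterative update.
        common = (Ix * u_bar + Iy * v_bar + It) / (alpha ** 2 + Ix ** 2 + Iy ** 2)
        u = u_bar - Ix * common
        v = v_bar - Iy * common

    return u + 1j * v

# Example: flow between two consecutive reduced frames.
# flow = horn_schunck(frames[40], frames[41])
# magnitude, direction = np.abs(flow), np.angle(flow)
```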

When this information is visualized, some noise becomes visible in the signal. This noise is the result of motion indirectly caused by the camera. A threshold is calculated to eliminate it (see Fig. 1). The maximum amplitude of each frame is plotted and compared to movie sections without movement; the maximum amplitude of these sections is used as the threshold.

Fig. 1 Maxima of the magnitude of the optical flow calculations. Video length is 10 seconds. Noise is visible around 0.08. Actual movement is indicated by magnitude values higher than 0.15.

For all the magnitude values beneath this threshold, the matching motion vector will be replaced by a zero and will be ignored in any further calculations.
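A sketch of this threshold step, assuming the complex flow fields produced by the previous sketch; the frame range used as the movement-free calibration section is hypothetical.

```python
import numpy as np

# Calibrate on a section known to contain no movement (frame range is hypothetical).
still_flows = [horn_schunck(frames[i], frames[i + 1]) for i in range(20)]
threshold = max(np.abs(f).max() for f in still_flows)

def suppress_noise(flow, threshold):
    """Zero all vectors whose magnitude falls below the noise threshold."""
    flow = flow.copy()
    flow[np.abs(flow) < threshold] = 0
    return flow
```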

C. Clustering

The next step in this process is to cluster the vector field. A clustering algorithm makes it possible to group objects or values that share certain similarities or characteristics.

In this approach we cluster pixels belonging to one moving part in the image having the same direction, speed and location. This results in different body parts moving separately from each other.

Different clustering methods are available to make this classification. We tested several spectral clustering methods as well as the standard Matlab k-means clustering on our dataset. For the spectral methods we used the "Parallel Spectral Clustering in Distributed Systems" toolbox provided by Wen-Yen Chen et al. [3], a Matlab toolbox that provides different clustering methods.

Before we apply the clustering, certain features have to be extracted from our optical flow field.

D. Clustering features

A clustering algorithm needs features to classify objects in different clusters. We tested the algorithms with different features, starting by the following three.

• Magnitude

• Direction

• Location

The magnitude is found by taking the absolute value of the vectors and represents the strength of the movement. For example, when the patient strongly moves his head from right to left, this results in vectors with a high magnitude for the pixels corresponding to the location of the head. If the patient moves his hand at the same moment, those pixels will have a similar magnitude but a different location, and are therefore grouped into two different clusters when two clusters are specified.

As a second feature the direction is used. By default we use the angular value of the vectors in radians; it can be converted to degrees, which makes no difference for the algorithm. Due to the wrap-around of the 0° to 360° scale, phase jumps occur: two vectors pointing towards 0° and 360° have the same physical direction, but as a feature this is falsely interpreted by the clustering algorithm, resulting in bad classification as shown in Fig. 2-B.

Fig. 2 Phase shift results in bad clustering: the clusters overlap each other. Ideally we want one cluster, as the direction is almost the same, but due to the phase shift it looks like there is a difference of 360°. Direction is plotted in radians. (B) shows the clustering; the two clusters are shown in white and gray, and the background where no motion is detected is black.

A solution to this problem is to split the angular feature into two parameters. When the imaginary and real parts are divided by the magnitude of the complex vector, the magnitude is suppressed and the direction is given as a coordinate on the unit circle. Phase jumps are thereby eliminated, but one feature is replaced by two, which has consequences for the weight of this feature; this is discussed in section F. As a result, this feature clusters all the movements that point in a single direction, independent of location or magnitude.

The third feature is the location of the movement vector: the coordinate of the pixel in the x-y plane. This feature is used to distinguish movements that occur in different parts of the image.
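Combining the three features into one matrix could look like the sketch below (one row per moving pixel), assuming the thresholded complex flow field from the earlier sketches.

```python
import numpy as np

def extract_features(flow):
    """Feature rows [x, y, magnitude, cos(direction), sin(direction)]
    for every pixel that survived the noise threshold."""
    ys, xs = np.nonzero(np.abs(flow) > 0)
    vecs = flow[ys, xs]
    mag = np.abs(vecs)
    # Dividing the real and imaginary parts by the magnitude places the
    # direction on the unit circle and removes the 0/360 degree phase jump.
    return np.column_stack([xs, ys, mag, vecs.real / mag, vecs.imag / mag])
```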

E. Clustering algorithm

First an appropriate clustering algorithm needs to be specified. The algorithms we investigate are [3]:

• spectral clustering using a sparse similarity matrix

• spectral clustering using the Nyström method with orthogonalization

• spectral clustering using the Nyström method without orthogonalization

• k-means clustering

Method 1: spectral clustering using a sparse similarity matrix, which contains the Euclidean distances between all data points [2,18]. This type of clustering gives good results with a higher number of clusters, but it requires too much processing time and gave bad results with two or three clusters. The calculation of the sparse similarity matrix alone takes about half a minute (see Table I), caused by the large amount of data our image features can contain. This makes the method infeasible on a normal PC.

Methods 2 and 3: using the Nyström method, with or without orthogonalization, the processing time decreases significantly (see Table I). This is because Nyström uses a fixed number of data points to calculate the similarity matrix: around 200 samples are compared to the rest of the data points. The cluster quality is more than adequate and usable for further processing. No difference in quality was noticeable between the two Nyström variants, but without orthogonalization the clustering is faster.

Method 4: k-means is an iterative method that first randomly chooses k centre points. Next it calculates the Euclidean distance between these centroids and the data points, and each data point is assigned to the closest centre point. Afterwards the centroid of each group is recalculated and used as the starting point for the next iteration.

Using the selected features, k-means provided good results; the performance of the clustering was inspected visually. It requires minimal processing time (see Table I in the Results) and gives an accurate clustering. We continue our research using this algorithm.

F. K-means and scaling of the features

In the next step the features are fed to the clustering algorithm. Without scaling, the x-coordinate varies from 0 to 320 and the y-coordinate from 0 to 240, while the magnitude of the complex vector varies between 0 and 1. A scale therefore has to be applied to define which features receive more weight and which less. To find good weights, we use visual inspection instead of a mathematical approach. This method is easy to use and provides a clear understanding of the data and algorithms involved.
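A minimal sketch of the scaling and clustering step (Python with scikit-learn rather than the Matlab k-means used in the paper); the weight values below are purely illustrative, as the paper tuned the weights by visual inspection and does not report final numbers.

```python
import numpy as np
from sklearn.cluster import KMeans

def cluster_pixels(features, n_clusters, weights=(1 / 320, 1 / 240, 5.0, 1.0, 1.0)):
    """Scale the feature columns [x, y, magnitude, cos, sin] and apply k-means.
    The weights are illustrative placeholders, not the values used in the paper."""
    scaled = features * np.asarray(weights)
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit(scaled)
    return km.labels_, km.cluster_centers_, scaled
```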

The features are plotted in a 3D plot, with the pixel coordinates on the x- and y-axes and the remaining feature on the z-axis (e.g. Fig. 3).



Fig. 3 Features direction (A) and magnitude (B) plotted versus the x- and y-coordinates. The direction feature has a larger weight; the algorithm clusters on this feature because its data points have a larger variation and more weight. Plot (B) shows the impact of this clustering on the corresponding magnitudes.

This visualization makes it easier to see the impact of the scaling, and adapting the scales soon leads to an appropriate clustering. The final result should be that pixels which differ in location, intensity of movement or direction end up in different clusters; ideally the clusters cover different body parts.

G. Cluster analysis

The next issue is providing the number of clusters before the clustering: every movement is different, which gives a varying number of clusters. We use inter- and intra-cluster distances to check the quality of the clustering [17]. The inter-cluster distance is the distance between the clusters; the larger this value, the better the clustering, as there is a clear distinction between the clusters.

The intra-cluster distance is the average distance between the centroid and the other points of the cluster; it should be minimized to obtain compact clusters.
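The two distance measures could be computed as in the sketch below, assuming the scaled features, labels and centroids returned by the clustering sketch above.

```python
import numpy as np

def cluster_quality(scaled, labels, centers):
    """Inter-cluster distances (centroid to centroid, larger is better) and
    intra-cluster distances (mean point-to-centroid distance, smaller is better)."""
    k = len(centers)
    inter = [np.linalg.norm(centers[i] - centers[j])
             for i in range(k) for j in range(i + 1, k)]
    intra = [np.linalg.norm(scaled[labels == i] - centers[i], axis=1).mean()
             for i in range(k)]
    return inter, intra
```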

In this research a maximum of four clusters is common; one cluster is the minimum, which occurs, for example, when a person only moves an arm.

H. Defining thresholds

All frames of the test video are clustered using one up to four clusters.

Several features are extracted from the distance measures. The most important are the following four.

• Maximum overall inter-cluster distance

• Standard deviation of the intra-cluster distance for two clusters

• Mean of the intra-cluster distance for two clusters

• Maximum of all intra-cluster distances for one frame

The frames are labeled with the right number of clusters and compared to the selected features. Our goal is to find a correspondence between the labeled frames and the features. The comparison showed that the maximum inter-cluster distance and the standard deviation for two clusters give the best results. Next, the labeled frames are classified using the selected features (e.g. Fig. 4). The classification is based on Euclidean distance.
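One simple way to realise such a Euclidean-distance classification is a nearest-mean classifier, sketched below in Python/NumPy; the paper does not specify the exact classifier used in Matlab, and the feature and label arrays (one row per labeled frame) are assumptions.

```python
import numpy as np

def train_nearest_mean(features_train, labels_train):
    """Class means in the 2-D feature space (max inter-cluster distance,
    std of the intra-cluster distance for two clusters)."""
    classes = np.unique(labels_train)
    means = np.array([features_train[labels_train == c].mean(axis=0) for c in classes])
    return classes, means

def predict(features, classes, means):
    """Assign each frame to the class with the nearest mean (Euclidean distance)."""
    d = np.linalg.norm(features[:, None, :] - means[None, :, :], axis=2)
    return classes[np.argmin(d, axis=1)]
```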

Fig. 4 This graph shows the classification between 1 or 2 clusters.

Next, thresholds can be defined using a training and test set to specify how many clusters should be used. The quality of the thresholds is evaluated using sensitivity, specificity, positive predictive value (PPV) and negative predictive value (NPV). These measures are stated below (for the distinction between class 1 and class 2).

Sensitivity = TP / (TP + FN)
            = (number of correctly classified items in class 1) / (all items in class 1)    (2)

Specificity = TN / (TN + FP)
            = (number of correctly classified items in class 2) / (all items in class 2)    (3)

PPV = TP / (TP + FP)
    = (number of correctly classified items in class 1) / (all items classified as class 1)    (4)

NPV = TN / (TN + FN)
    = (number of correctly classified items in class 2) / (all items classified as class 2)    (5)

TP stands for true positives, FP for false positives, TN for true negatives and FN for false negatives. The results of applying the thresholds can be found in section III.
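For reference, a direct translation of equations (2)-(5) into code:

```python
def quality_measures(tp, fp, tn, fn):
    """Sensitivity, specificity, PPV and NPV from the confusion counts."""
    return {
        "sensitivity": tp / (tp + fn),
        "specificity": tn / (tn + fp),
        "PPV": tp / (tp + fp),
        "NPV": tn / (tn + fn),
    }
```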

III. RESULTS

A. Cluster analysis results



First we selected the clustering method based on the clustering accuracy and speed, as shown in Table I.

To find the right number of clusters, thresholds are defined. First, the video is labeled (i.e. the right number of clusters is assigned manually to each frame). This is done for three videos.

Using the maximal inter-cluster distance, the standard deviation of the intra-cluster distance for two clusters and the right number of clusters, the data is trained. Our training set consists of 80 manually labeled frames; the rest of the data, 56 frames, is used as a test set. Manually labeling these frames is labour intensive, and although this does not cover a long period in the video sequence, 80 points for training and 56 for testing can still give valuable results. Using classification in Matlab, thresholds are calculated.

The thresholds are tested on the test set; the results are shown in Table II.

TABLE I
CLUSTERING SPEED COMPARISON
Time needed for clustering one frame (average of 20 iterations), on a Pentium D 830 with 3 GB RAM running Windows 7, implemented in Matlab.

Clustering type                        Processing time
Spectral clustering                    39 s
Nyström with orthogonalization         1.2616 s
Nyström without orthogonalization      0.5981 s
K-means                                0.1324 s

All of the values in Table II should be as high as possible. The results are discussed in section IV.

B. Movement analysis

In this section the results of our study are presented. In the following movie sequence a young patient is monitored; she randomly moves her head from the left side to the right side of the bed (Fig. 5, A and C).

Fig. 5 Sample of two frames. In frame 41 the head is moving towards the left side, as indicated by the red colour and the arrows. In frame 45 the head is moving towards the right side. Screenshots (B) and (D) show the clustering. Both clusters that cover the head contain different movement information.

We cut 60 frames, or five seconds, out of the video. Two clusters are selected: one cluster represents the head, the other the lower body part. The clusters are segmented using the standard k-means clustering method; the Nyström method would give similar results.

TABLE II
CLUSTER ANALYSIS RESULTS FOR THE DISTINCTION BETWEEN ONE OR TWO CLUSTERS AND BETWEEN TWO OR THREE CLUSTERS

                 Separation between    Separation between
                 class 1 and 2         class 2 and 3
Sensitivity      90.90%                93.30%
Specificity      62.50%                38.90%
PPV class A      83.30%                71.80%
NPV class B      76.90%                77.80%

For both body parts, the direction and intensity of the movement are plotted. It can be seen that the direction plot of the head crosses the horizontal axis multiple times (Fig. 6-A). In this example the head moves first towards the left side and then towards the right side, which appears as roughly -120° (left) and 15° (right) (Fig. 6-A).

Fig. 6 The plot above shows the direction and intensity of the head movement. This information is provided by the cluster that covers the head.

Figure 6-B plots the intensity of the movement. Figure 7 gives information about the direction and intensity of movement for the lower body part.

Fig. 7 The plot above shows the direction and intensity of the lower body part movement. This information is provided by the cluster that covers the lower body part.



This information can be used to decide whether a patient is simply moving or having a seizure. Strong movement and fast changes in direction are signals that can point to a seizure. Future work is needed to find measures that confirm this.

The next example shows a sleeping patient. The aim of this test was to measure the breathing of the patient. This is a separate video sequence in which the breathing was also monitored using sensors attached to the upper body; with our method, this can now be monitored using video detection. For a better detection, a region of interest focusing on the chest was chosen.

For this test, the algorithm uses one cluster. The plot shows 20 seconds, or 250 frames, of the original video. The breathing is clearly visible in the signal, as the angle changes by 180 degrees with each breathing cycle (Fig. 8). The intensity plot shows that inhaling produces slightly more movement than exhaling (Fig. 8-B).

Fig. 8 Breathing monitored and visualised with angular movement and intensity. The breathing pattern is clearly visible.

In future work the monitoring of respiration should be combined with that of nocturnal movement. This will not be easy, as the magnitude of the respiration is much smaller than the magnitude of gross movement, but it may be possible with proper filtering.

IV. DISCUSSION

We discuss several possible improvements to our method, such as automated scaling of the cluster features and improved cluster analysis.

It would be interesting to test a system that scales the features depending on the situation. As a limited number of frames give poor clustering quality, automatic scaling might solve the problem in those cases. Sometimes one feature should have more weight than the others; e.g. when the complete body is moving, the location feature might be emphasized a bit to better distinguish the clusters.

Cluster analysis provides a good automated distinction between one, two or three clusters. The distinction between two and three clusters is less accurate, but good in circumstances where the inter-cluster distance has a higher value. The specificity for class 3 is somewhat too low: there is a 61.10% chance that a frame labeled as class 3 should actually be clustered with two clusters (see Table II). Frames belonging to class 2 are thus regularly misclassified into class 3. Note that such frames may still be clustered correctly, as the labeling sometimes allows more than one correct number of clusters.

In conclusion, the algorithm is able to make a distinction between one, two or three clusters, but the right number of clusters needs to be supplied manually when four or more clusters are required. Other cluster features have to be investigated in the future.

Next, in our approach the ideal number of clusters is defined before information is extracted from the clusters.

On a system with enough processing power, different cluster properties (intensity, direction, ...) could be compared over time to see which number of clusters is ideal. E.g. if a body part is moving in a certain direction with a certain velocity, it is expected that the same body part will still be moving in the same direction in the next frame, with a slightly increased or decreased velocity. A cluster can thus be expected at roughly the same location as in the previous frame, with slightly different properties. Increasing the frame rate of the video makes the clusters change more gradually, but requires more processing power. This method can be described as cluster tracking.

We also experimented with the intensity of the original video pixels as a feature, but this resulted in bad clusters. The pixel intensity is not directly related to the movement of the patient, as the light source stays at the same position over time. Therefore it was not studied any further.

Finally, optical flow sometimes has problems with slow movement, as the magnitude of such movements becomes comparable to the magnitude of the noise. This requires an adjustment of the optical flow settings. Optical flow is calculated on two frames: the current frame and a frame shifted in time. This could be improved by measuring the average movement over a fixed period in order to decide which past frame should be used for the optical flow calculation. This way an accurate motion field with less noise can be obtained.

V. CONCLUSION

This research showed that it is possible to cluster movement in video images. Different body parts can be separated using the location, direction and intensity of the movement. From these clusters further information can be extracted on whether or not the patient is having an epileptic seizure, is breathing, etc. Of course this method still has room for improvement.

VI. ACKNOWLEDGMENTS

Special thanks to the Mobilab team and K.H. Kempen for making this research possible.



REFERENCES

[1] Casson, A., Yates, D., Smith, S., Duncan, J., & Rodriguez-Villegas, E. (2010). Wearable Electroencephalography. Engineering in Medicine and Biology Magazine, IEEE (Volume 29, Issue 3 ), 44.

[2] Chen, W.-Y., Song, Y., Bai, H., Lin, C.-J., & Chang, E. Y. (2008). Parallel Spectral Clustering in Distributed Systems. Lecture Notes in Artificial Intelligence (Vol. 5212), 374-389.

[3] Chen, W.-Y., Song, Y., Bai, H., Lin, C.-J., & Chang, E. Y. (2010). Parallel Spectral Clustering in Distributed Systems toolbox. Retrieved from http://alumni.cs.ucsb.edu/~wychen/sc.html

[4] Cuppens, K., Lagae, L., Ceulemans, B., Van Huffel, S., & Vanrumste, B. (2009). Automatic video detection of body movement during sleep based on optical flow in pediatric patients with epilepsy. Medical and Biological Engineering and Computing (Volume 48, Issue 9), 923-931.

[5] Dai, Q., & Leng, B. (2007). Video object segmentation based on accumulative frame difference. Tsinghua University, Broadband Network & Digital Media Lab of Dept. Automation, Beijing.

[6] De Tollenaere, J. (2008). Spectrale clustering. In J. De Tollenaere, Zelflerende Spraakherkenning (pp. 5-18). Katholieke Universiteit Leuven, Leuven, Belgium.

[7] Fleet, D. J., & Weiss, Y. (2005). Optical Flow Estimation. In N. Paragios, Y. Chen, & O. Faugeras, Mathematical Models for Computer Vision: The Handbook. Springer.

[8] Fuh, C.-S., & Maragos, P. (1989). Region-based optical flow estimation. Harvard University, Division of Applied Sciences, Cambridge.

[9] Galic, S., & Loncaric, S. (2000). Spatio-temporal image segmentation using optical flow and clustering algorithm. In Proceedings of the First International Workshop on Image and Signal Processing and Analysis. Zagreb, Croatia: IWISPA.

[10] International League Against Epilepsy. (n.d.). Epilepsy resources. Retrieved from www.ilae-epilepsy.org: http://www.ilae-epilepsy.org/Visitors/Centre/Brochuresforchapters.cfm

[11] Lee, Y., & Choi, S. (2004). Minimum entropy, k-means, spectral clustering. ETRI, Biometrics Technol. Res. Team, Daejon.

[12] Nijsen, N. M., Cluitmans, P. J., Arends, J. B., & Griep, P. A. (2007). Detection of Subtle Nocturnal Motor Activity From 3-D Accelerometry Recordings in Epilepsy Patients. IEEE Transactions on Biomedical Engineering.

[13] Raskutti, B., & Leckie, C. (1999). An Evaluation of Criteria for Measuring the Quality of Clusters. Telstra Research Laboratories.

[14] Schachter, S. C. (2006). Electroencephalography. Retrieved from www.epilepsy.com: http://www.epilepsy.com/epilepsy/testing_eeg

[15] Horn, B. K. P., & Schunck, B. G. (1980). Determining Optical Flow. Massachusetts Institute of Technology, Artificial Intelligence Laboratory, Cambridge.

[16] Top, H. (2007). Optical flow en bewegingillusies. University of Groningen, Faculty of Mathematics & Natural Sciences.

[17] Turi, R. H., & Ray, S. (1999). Determination of Number of Clusters in K-Means Clustering and Application in Colour Image Segmentation. Monash University, School of Computer Science and Software Engineering, Victoria, Australia.

[18] Von Luxburg, U. (2007). A Tutorial on Spectral Clustering. Statistics and Computing (Volume 17 Issue 4).

[19] Xu, L., Jia, J., & Matsushita, Y. (2010). Motion Detail Preserving Optical Flow Estimation. The Chinese University of Hong Kong, Microsoft Research Asia, Hong Kong.

[20] Zagar, M., Denis, S., & Fuduric, D. (2007). Human Movement Detection Based on Acceleration Measurements and k-NN Classification. Univ. of Zagreb, Zagreb.

[21] Zelnik-Manor, L. (2004, October). The Optical Flow Field. Retrieved from http://webee.technion.ac.il/~lihi/Presentations/OpticalFlowLesson.pdf
