
A home-based system for detection of asthma in children

S.M.M.D.P. (Sheona) Sequeira

BSc Report

Committee:

dr. B. Sirmaçek
dr.ir. M. Abayazid
dr. S.U. Yavuz

July 2019

030RAM2019 Robotics and Mechatronics

EE-Math-CS University of Twente

P.O. Box 217

7500 AE Enschede

The Netherlands


A Home Based System for the Detection of Asthma in Children

Sheona M. M. D. P. Sequeira Department of Electrical Engineering

University of Twente Enschede, Netherlands

s.m.m.d.p.sequeira@student.utwente.nl

Abstract—Asthma is an increasingly serious problem, especially in children. It is, however, difficult to detect the disorder in them, since the breathing motion of children tends to change as they approach the age of 6, which makes their respiratory state hard to monitor. In this paper, I present a cheap, non-contact alternative to the currently available methods: a stereo camera that captures a video of the patient breathing at a frame rate of 30 Hz.

For further processing, the captured video is rectified and converted into a point cloud. The point clouds are aligned so that the output is expressed with respect to a common plane, and are then converted into a surface mesh. The depth is estimated by subtracting every point cloud from the reference point cloud (i.e., that of the first frame). The resulting signal, plotted against real time, is very noisy; it is filtered by determining the signal frequency from the Fast Fourier Transform of the breathing signal.

The system was tested under four different breathing conditions: deep, shallow and normal breathing, and coughing. It was then tested with mixed breathing (a combination of normal and shallow breathing) and finally compared with the output of the expensive 3DMD system. The comparison yielded an output depth of 36.0661 mm using the 3DMD and 38.4005 mm using the stereo camera.

I. INTRODUCTION

The need for monitoring respiratory motion is growing rapidly. Many people suffer from breathing disorders, asthma being the most prominent. Asthma is a serious condition in which breathing becomes extremely difficult because the airway passage becomes narrow and swollen, preventing the proper flow of air into the lungs. The disorder tires the patient very quickly, making it difficult to perform the daily activities of a healthy individual. Asthma sadly cannot be healed permanently, but the intensity of asthmatic attacks can be reduced with proper treatment and care. The condition is, however, extremely difficult to diagnose properly in children [1].

This is because children who show recurring symptoms of asthma below the age of 6 will not necessarily continue to suffer from asthma later in their lives. The symptoms therefore often go undiagnosed in the hope that they are a passing phase. While children with mild asthma-like symptoms may indeed no longer suffer from them after the age of 6, children with more severe asthmatic conditions often still do. For this reason, monitoring respiratory motion, especially in children, is vital.

There are many devices that can be used for tracking respiratory motion [2]. These devices can be categorized by how they treat body motion: they are either invariant to rigid body motion or not. Devices that are invariant to body motion ideally assume that every motion of the patient is due to respiration. Non-invariant systems, on the other hand, monitor only the chest motion from multiple cameras and then obtain the depth of the region of interest. Devices can also be classified as contact or non-contact. An example of a contact-based system that is invariant to rigid body motion is a belt strapped around the chest [3]. Such devices, however, may produce inaccurate results, since every motion is treated as respiratory motion; at the same time, having an object strapped around the chest causes discomfort and may itself make the person breathe inefficiently.

For this reason, I propose a non-invariant, non-contact system: a video obtained with a stereo camera. This solution avoids the drawbacks of systems that hinder the breathing motion, while at the same time being cheaper and very user friendly, enabling it to be used for tracking a child's respiratory motion at home on a frequent basis.

II. MATERIALS AND METHODS

For simple and handy use, I have implemented a system that makes use of only a stereo camera. A stereo camera is a passive camera that can produce a depth image of the object under test, since it consists of two individual cameras. Together these provide two distinct views of the same object, much as the human eyes do. This property can be used to determine the depth of an object: the depth of a point as perceived by the camera is directly proportional to the focal length and the baseline of the camera, and inversely proportional to the disparity. The camera can in principle cover infinite depth, but the error grows with depth; maintaining a distance of 0.50-1 m from the camera produces more accurate results, since the breathing motion is most easily visible within this range. At the time of testing, it should be noted that the patient may breathe in two different manners: thoracic breathing or abdominal breathing.
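As a concrete illustration of this relation (all numbers here are hypothetical), the depth Z of a matched point follows from the focal length f, the baseline B and the disparity d:

f = 700;        % focal length in pixels (hypothetical value)
B = 0.06;       % baseline between the two lenses in metres (hypothetical)
d = 56;         % disparity of a matched point in pixels (hypothetical)
Z = f * B / d;  % depth in metres; 0.75 m here, inside the 0.50-1 m working range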

A video of the patient breathing should be recorded at a frame rate of 30 Hz and a resolution of 1280x480 (the two camera views side by side). The sampling rate used for processing can then be chosen such that the breathing motion is still well resolved [4]. A sampling frequency as high as 30 Hz results in a large video size, so halving the sampling rate may be a better solution; a minimal sketch of this follows below.

Attention should be paid to how far the sampling frequency is lowered, since lowering the sampling rate can degrade the data by making it noisier. For specific analysis, a region of interest can also be selected. However, as mentioned earlier, taking a region of interest limits the analysis to one of the two breathing mechanisms, which may yield inaccurate results. On the positive side, selecting a region of interest greatly decreases the computation time.
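A minimal sketch of halving the processing rate, reusing the reading pattern of the appendix scripts (the file name is hypothetical; stereoParams is assumed to be loaded):

vidLeft = VideoReader('left.avi');   % hypothetical file name
numFrames = floor(vidLeft.FrameRate * vidLeft.Duration);
for i = 1:2:numFrames                % process every second frame: 15 Hz effective
    frameLeft = read(vidLeft, i);
    % ... rectification, disparity and point-cloud steps as in the appendix;
    % a region of interest can likewise be enforced later with findPointsInROI
end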

A. System Description

1) Stereo Camera Calibration: Twenty images of a 7x10 checkerboard, with squares of 10 mm, are taken from different angles and viewpoints. For the calibration, the Stereo Camera Calibrator App in Matlab is used, wherein improper images are eliminated directly, reducing the calibration error. The stereo parameters obtained are then used for processing the breathing frames.
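For reference, a programmatic sketch of the same calibration; the thesis used the interactive app instead, and the folder names below are assumptions:

dsL = imageDatastore(fullfile('calib', 'left'));    % hypothetical folders of
dsR = imageDatastore(fullfile('calib', 'right'));   % checkerboard image pairs
[imagePoints, boardSize] = detectCheckerboardPoints(dsL.Files, dsR.Files);
squareSize = 10;  % checkerboard square size in millimetres
worldPoints = generateCheckerboardPoints(boardSize, squareSize);
stereoParams = estimateCameraParameters(imagePoints, worldPoints);
showReprojectionErrors(stereoParams);  % inspect errors, re-shoot bad pairs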

2) Tracking of Respiratory Motion: To track the respiratory motion of a patient, a video of about 30 seconds to 1 minute is first taken with the stereo camera placed at a fixed distance of about 1 m from the patient. Since the obtained video contains the frames of both cameras side by side, it is split into two individual videos, and every frame is then rectified.

Image rectification is extremely important to reduce the effort of finding matching points between the frames later on. After rectification, the distance between corresponding points in the two images can be determined, which in turn gives the disparity. It is very important to set the disparity range correctly in order to obtain a proper output. The disparity data is filtered before it is converted into a 3D point cloud, as can be seen in figure 2. The 3D point cloud is the collection of three-dimensional coordinates obtained from the disparity data, producing an accurate 3D digital reconstruction of the chest. For further processing, the invalid points present in the point cloud are removed and the point cloud is de-noised.
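Condensed from the appendix code (section 1.2.3), the core of this per-frame pipeline is:

[frameLeftRect, frameRightRect] = rectifyStereoImages(frameLeft, frameRight, stereoParams);
disparityMap = disparity(rgb2gray(frameLeftRect), rgb2gray(frameRightRect), ...
    'DisparityRange', [0 96], 'UniquenessThreshold', 10);
filled = imfill(disparityMap, 'holes');                      % filter the disparity data
points3D = reconstructScene(filled, stereoParams) ./ 1000;   % millimetres to metres
ptCloud = pointCloud(points3D, 'Color', frameLeftRect);
ptCloud = pcdenoise(removeInvalidPoints(ptCloud));           % drop invalid points, de-noise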

An essential step after obtaining the point clouds is aligning all video frames along the same axis, making the data suitable for tracking respiratory motion. This is done with the Iterative Closest Point (ICP) algorithm [5]. The algorithm repeatedly re-determines the closest point set and continues until it finds the locally optimal match between the two surfaces. It works in stages: identifying the closest model point for each data point, then finding the least-squares rigid body transformation relating these point sets. The initial frame is taken as the reference frame for aligning the point clouds to a common axis, thus obtaining the transformation.
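In the appendix code this alignment is done with pcregrigid from the Computer Vision Toolbox; the essential two lines are:

tform = pcregrigid(moving, refframe, 'Metric', 'pointToPoint', 'Extrapolate', true);
ptCloudAligned = pctransform(moving, tform);  % current cloud in the reference frame's axes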

To obtain the depth data, the aligned 3D point cloud is then converted into a surface mesh. The depth information is acquired by keeping the initial frame as the reference frame (chosen such that it is the intermediate stage between inhalation and exhalation) and subtracting the surface of every subsequent frame from it. The obtained output, however, contains a lot of noise apart from the breathing signal and must be filtered. The noise can be of various types, including salt-and-pepper noise, quantisation noise and white noise. The filtered output is then plotted with respect to time.
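Condensed from the appendix code, the depth of a frame relative to the reference is obtained by resampling both surfaces onto a common grid and subtracting (x1, y1, z1 and x2, y2, z2 are the reference- and current-frame point coordinates, following the appendix naming):

vRef = scatteredInterpolant(x1, y1, z1);       % reference-frame surface
vCur = scatteredInterpolant(x2, y2, z2);       % current-frame surface
[X, Y] = meshgrid(linspace(min(x1), max(x1), 40), linspace(min(y1), max(y1), 40));
depth = vRef(X, Y) - vCur(X, Y);               % chest displacement over the grid
sample = sqrt(norm(depth));                    % one sample of the breathing signal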

Fig. 2: Initial Image Processing: (a) Image Rectification (b) Disparity Map (c) Point Cloud (d) Triangular Mesh of the above Point Cloud

Based on the number of breaths that the person takes and the frequency with which they occur, the person's breathing condition can be determined. This is done by counting the peaks in the output plot, where each peak represents one full breath. According to the literature, children below the age of 6 should take 22-34 breaths per minute, while children between 6 and 12 years should take 18-30 breaths per minute [6]. If the child breathes more than 15-17 times or fewer than 10 times in 30 seconds, a conclusion can be drawn about the breathing condition of the person being tested: a person taking more breaths than expected is showing a possible symptom of asthma.
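A minimal sketch of this classification, following the appendix script; note that the appendix code itself uses a slightly different healthy range (6-10 peaks), so the thresholds below simply follow the rule stated above:

pks = findpeaks(outputfilt, totaltime);   % one peak per breath
numBreaths = numel(pks);
if numBreaths > 17 || numBreaths < 10     % per 30-second recording
    status = 'Possible asthma symptoms - consult a doctor';
else
    status = 'Breathing rate within the normal range';
end
msgbox({status; sprintf('Total number of breaths is %d', numBreaths)}, 'Result');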


Fig. 1: Block diagram of the process flow of the setup

III. RESULTS

The accuracy of the system was tested by checking the precision with which the output plot was produced with respect to time, and by visually matching that output with the input video of the patient. The input video covered a relatively short time frame, since its purpose was to test the accuracy of the respiration estimate. The sampling rate for processing was kept at 30 Hz; reducing it to 6.7 Hz increased the amount of noise in the output, as can be seen in Appendix figure 7.

Fig. 3: Unfiltered breathing signal

The initial output, shown in figure 3, needed further filtering. This was done by designing a band-pass filter with the help of the Fast Fourier Transform (FFT). From the FFT output, the frequency range containing only the required breathing data was selected and the rest, i.e. the noise, was ignored.
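Condensed from the appendix code (section 1.2.4), with the caveat that the pass band shown is hypothetical, since a suitable band had to be re-chosen per test:

Fs = 30;                                   % sampling rate = frame rate in Hz
NFFT = 2^nextpow2(length(distMeasure1));
mag = abs(fft(distMeasure1, NFFT));
freqAxis = Fs/2 * linspace(0, 1, NFFT/2 + 1);
plot(freqAxis, mag(1:NFFT/2 + 1));         % inspect where the breathing energy lies
[b, a] = butter(4, [0.2 0.8] * 2 / Fs, 'bandpass');  % band edges in Hz (hypothetical)
outputfilt = filter(b, a, distMeasure1);   % filtered breathing signal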

The system was tested by having a volunteer breathe in four different manners: normal breathing, deep breathing, shallow breathing and coughing. For deep breathing, the volunteer was asked to inhale very slowly and exhale very slowly; for shallow breathing, to partially inhale and exhale very rapidly. This data served as input to the FFT, yielding a clear output signal from which the breathing pattern can be determined, as shown in figure 4. The number of breaths taken is displayed in a dialog box together with the current health status, as shown in Appendix figure 10.

However, the frequency range selected for each test kept changing, and thus could not be generalized to a specific range per breathing condition, as can be seen in figure 5.

The data for normal breathing was further evaluated using a small region of interest. This saved a lot of computation time; however, limiting the region of interest to just a small black point on the chest resulted in a much noisier output. The region of interest was therefore changed to the entire chest area.

Another issue was that the volunteer switched between breathing mechanisms in the middle of the test run, as a result of which certain cycles were missed when only a small region of interest was considered. This can be seen in figure 6.

Fig. 4: Breathing outputs for each test (a) Normal breathing (b) Long Breathing (c) Short Breathing (d) Cough

Fig. 5: FFT for different input breathing signals

Fig. 6: Region of Interest (a) Entire Chest Area (b) Small region

On successful completion of the previous stages of testing, I performed a test wherein a volunteer was asked to shuffle between normal breathing and shallow breathing, in order to determine whether the system works efficiently. The output of the test can be seen in figure 7.

Fig. 7: Mixture of Breathing types: (a) FFT (b) Output Breath

To determine conclusively whether the designed system was successful, I compared my output with that of the more expensive 3DMD system. This was done by calculating the maximum expansion of the chest during deep breathing, relative to the intermediate state between inhalation and exhalation. Because implementing this comparison in Matlab would have been overly complex, the point clouds from the 3DMD and the stereo camera were loaded into CloudCompare. The maximum depth from the 3DMD was found to be 36.0661 mm with an error rate of 1.783, while my alternative gave a depth of 38.4005 mm with an error rate of 1.542. Figure 8 shows the CloudCompare output for both the 3DMD and the stereo camera.
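The distance computation itself was done in CloudCompare; a rough MATLAB equivalent (a sketch only, assuming the two clouds are already aligned, and where ptInhale and ptExhale are hypothetical point-cloud variables) would be a nearest-neighbour cloud-to-cloud distance:

% Nearest-neighbour distance from each inhalation point to the exhalation cloud
[~, dists] = knnsearch(ptExhale.Location, ptInhale.Location);  % metres
maxExpansion = max(dists) * 1000;  % maximum chest expansion in millimetres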

IV. DISCUSSION

A simple, low-cost system that is easy to use and comprehend was thus developed.


Fig. 8: Depth comparison (a) 3DMD (b) Stereo Camera

However, there is still scope for further research into noise elimination, since the frequency range used for determining the breathing pattern changed with every test that was performed.

The difference between the depth estimates of the 3DMD system and the stereo camera could be due to a difference in the induced noise (for instance from the relative thickness of the clothes), or to the camera calibration.

The system should also be generalized to be independent of whether the first frame is the intermediate breathing position or not. This could be done by recording one breathing cycle before the measurement starts and storing the reference position obtained from it, as sketched below.
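One possible realization (a sketch only; cycleFrames and the per-frame grids frameGrid are hypothetical names):

cycleFrames = 60;                    % ~2 s at 30 Hz, assumed to span one cycle
refGrid = zeros(40, 40);
for k = 1:cycleFrames
    refGrid = refGrid + frameGrid(:, :, k);  % 40x40 depth grid of frame k
end
refGrid = refGrid / cycleFrames;     % mean surface = intermediate breathing position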

The setup could not yet be tested with children, but this is not a major limitation, since it only requires adjusting the maximum and minimum number of breaths that children may take in 30 seconds.

V. CONCLUSION

The performance of this system was verified by testing under several breathing conditions, and it was compared with the benchmark 3DMD system, which yielded a small depth difference of about 2 mm. The system can be deployed without hindering the person's breathing, in a completely non-contact manner. It can thus be used to monitor a patient's breathing on a regular basis. This is especially valuable for children, since a child that used to show asthmatic symptoms may not necessarily continue to suffer from them after a certain age. Parents can thus keep a check on their child's breathing and, should problems arise, go on to see a doctor.

Conclusively, with a little more research, this will be a very successful method for the detection of respiratory disorders such as asthma.

REFERENCES

[1] Wim M. Van Aalderen. Childhood Asthma: Diagnosis and Treatment. 2012.

[2] Stefan Wiesner and Ziv Yaniv. Monitoring patient respiration using a single optical camera. Annual International Conference of the IEEE Engineering in Medicine and Biology - Proceedings, pages 2740–2743, 2007.

[3] Sergio Silvestri and Emiliano Schena. Respiratory Rate. pages 1–47, 2019.

[4] Jochen Kempfle and Kristof Van Laerhoven. Respiration Rate Estimation with Depth Cameras. (September):1–10, 2018.

[5] D. L.G. Hill, P. G. Batchelor, M. Holden, and D. J. Hawkes. Medical image registration. Physics in Medicine and Biology, 46(3), 2001.

[6] Susannah Fleming, Matthew Thompson, Richard Stevens, Carl Heneghan, Annette Plüddemann, Ian Maconochie, Lionel Tarassenko, and David Mant. Normal ranges of heart rate and respiratory rate in children from birth to 18 years of age: A systematic review of observational studies. The Lancet, 377(9770):1011–1018, 2011.


1 Appendix

1.1 Figures

Figure 1: Camera Calibration: Parameter Visualization

Figure 2: Camera Calibration: Error Estimation


Figure 3: FFT of breathing while coughing

Figure 4: FFT of Deep Breathing


Figure 5: FFT of Normal Breathing

Figure 6: FFT of Shallow Breathing


Figure 7: Sampling Rate (a) 30Hz (b) 6.7Hz

Figure 8: FFT of Shallow Breathing


Figure 9: Triangular mesh of 3DMD image

Figure 10: Output dialog box


1.2 Code

1.2.1 Obtaining input video

clear all;  % clear all previous variables
clc;
imaqreset;
vid1 = videoinput('winvideo', 2);  % to be adjusted based on which camera it is to the system
set(vid1, 'FramesPerTrigger', Inf);
set(vid1, 'ReturnedColorspace', 'rgb');
vid1.TriggerRepeat = 2;
vid1.FrameGrabInterval = 1;

start(vid1);
aviObject1 = VideoWriter('C:\Users\Sheona\Desktop\thesis\sheona2.avi');  % Create a new AVI file
open(aviObject1)

for iFrame = 1:900  % Creates a video of 30 seconds at 30 frames per second
    I = getsnapshot(vid1);
    F = im2frame(I);             % Convert I to a movie frame
    writeVideo(aviObject1, F);   % Add the frame to the AVI file
end
close(aviObject1);

stop(vid1)

1.2.2 Splitting the Video

clc;
v = VideoReader('C:\Users\s1938355\Desktop\thesis\cough\cough3.avi');
ii = 1;
rect1 = [1 1 640 480];     % left half of the side-by-side frame
rect2 = [640 1 1280 480];  % right half (imcrop clips the rectangle to the image bounds)

while hasFrame(v)
    img = readFrame(v);
    img1 = imcrop(img, rect1);
    img2 = imcrop(img, rect2);
    filename = [sprintf('%03d', ii) '.jpg'];
    fullname = fullfile('C:\Users\s1938355\Desktop\thesis\cough', 'left', filename);
    fullname2 = fullfile('C:\Users\s1938355\Desktop\thesis\cough', 'right', filename);
    imwrite(img1, fullname);
    imwrite(img2, fullname2);
    ii = ii + 1;
end
imageNames = dir(fullfile('C:\Users\s1938355\Desktop\thesis\cough', 'left', '*.jpg'));
imageNames = {imageNames.name};
imageNames2 = dir(fullfile('C:\Users\s1938355\Desktop\thesis\cough', 'right', '*.jpg'));
imageNames2 = {imageNames2.name};
outputLeft = VideoWriter('C:\Users\s1938355\Desktop\thesis\cough\coughLpatch30.avi');
outputRight = VideoWriter('C:\Users\s1938355\Desktop\thesis\cough\coughRpatch30.avi');
outputLeft.FrameRate = 30;
outputRight.FrameRate = 30;
open(outputLeft)
for ii = 1:(length(imageNames) - 1)
    img3 = imread(fullfile('C:\Users\s1938355\Desktop\thesis\cough', 'left', imageNames{ii}));
    writeVideo(outputLeft, img3);
end
close(outputLeft)
open(outputRight)
for ii = 1:(length(imageNames2) - 1)
    % bug fix: the original indexed imageNames (the left-camera list) here
    img4 = imread(fullfile('C:\Users\s1938355\Desktop\thesis\cough', 'right', imageNames2{ii}));
    writeVideo(outputRight, img4);
end
close(outputRight)

1.2.3 Rectification

% load('C:\Users\s1938355\Desktop\thesis\calibrationSessionfinAL.mat');
showExtrinsics(stereoParams);

videoFileLeft = VideoReader('C:\Users\s1938355\Desktop\thesis\normal\normalLeft.avi');
videoFileRight = VideoReader('C:\Users\s1938355\Desktop\thesis\normal\normalRight.avi');

%% image rectification
frameLeft = readFrame(videoFileLeft);
frameRight = readFrame(videoFileRight);
[frameLeftRect, frameRightRect] = rectifyStereoImages(frameLeft, frameRight, stereoParams);
figure()
imshow(stereoAnaglyph(frameLeftRect, frameRightRect));
drawnow;
title('Rectified Stereo Frames');

%% disparity computation
frameLeftGray = rgb2gray(frameLeftRect);
frameRightGray = rgb2gray(frameRightRect);
disparityRange = [0 96];
disparitymap = disparity(frameLeftGray, frameRightGray, 'DisparityRange', disparityRange, 'UniquenessThreshold', 10);
figure;
imshow(disparitymap, disparityRange);
title('Disparity Map');
colormap(gca, jet)
colorbar
disp = imfill(disparitymap, 'holes');  % fill holes in the disparity map (note: shadows the built-in disp)
figure;
imshow(disp, disparityRange);
title('Disparity Map');
colormap(gca, jet)
colorbar

%% Reconstruction of 3D scene
% K = medfilt2(disparitymap);
points3D = reconstructScene(disp, stereoParams);
points3D = points3D ./ 1000;  % millimetres to metres
ptCloud = pointCloud(points3D, 'Color', frameLeftRect);

pt = removeInvalidPoints(ptCloud);
pt = pcdenoise(pt);
gridSize = 0.001;
roi = [-0.3 0.31 -0.21 0.3 0.2 0.61];  % new 28/01
ptdown = pcdownsample(pt, 'gridAverage', gridSize);
indices = findPointsInROI(ptdown, roi);
ptdownsampled = select(ptdown, indices);
player3D = pcplayer([-0.3, 0.3], [-0.21, 0.3], [0.2, 0.61], 'VerticalAxis', 'y', 'VerticalAxisDir', 'down');

view(player3D, pt);

%% Mesh
refloc = ptdownsampled.Location;
x1 = double(refloc(:, 1, :)) * 1;
y1 = double(refloc(:, 2, :)) * 1;
z1 = double(refloc(:, 3, :)) * 1;
tri = delaunay(x1, y1);
figure;
trimesh(tri, x1, y1, z1)

1.2.4 Tracking Respiratory Motion

% clc;

videoFileLeft = 'C:\Users\s1938355\Desktop\thesis\short\shortL30.avi';
videoFileRight = 'C:\Users\s1938355\Desktop\thesis\short\shortR30.avi';
i = 1;
count = 3;
m = 0;
vidLeft = VideoReader(videoFileLeft);
Left = get(vidLeft);
numFrames = ceil(vidLeft.FrameRate * vidLeft.Duration);
vidRight = VideoReader(videoFileRight);
Right = get(vidRight);
distMeasure = 0;

mergeSize = 0.0001;
time = 0;
totaltime = [];
total = [];
n = 3;
counter = 0;
while i < numFrames
    dT = 0.1;
    frameLeft = read(vidLeft, i);
    frameRight = read(vidRight, i);
    % image rectification
    [frameLeftRect, frameRightRect] = ...
        rectifyStereoImages(frameLeft, frameRight, stereoParams);

    % Convert to grayscale.
    frameLeftGray = rgb2gray(frameLeftRect);
    frameRightGray = rgb2gray(frameRightRect);

    % Compute disparity.
    disparityRange = [0 112];
    disparitymap = disparity(frameLeftGray, frameRightGray, 'DisparityRange', disparityRange, 'UniquenessThreshold', 10);
    disp = imfill(disparitymap, 'holes');
    % post processing
    K = medfilt2(disp);
    % Reconstruct 3-D scene in a point cloud
    points3D = reconstructScene(K, stereoParams);
    points3D = points3D ./ 1000;
    ptCloud = pointCloud(points3D, 'Color', frameLeftRect);
    ptused = removeInvalidPoints(ptCloud);
    ptused = pcdenoise(ptused);
    gridSize = 0.001;
    % specifying region of interest
    roi = [-0.3 0.31 -0.21 0.3 0.2 0.61];  % new 28/01
    % roi = [0.06 0.121 -0.2 -0.13 0.48 0.56];  % marker
    ptdown = pcdownsample(ptused, 'gridAverage', gridSize);
    indices = findPointsInROI(ptdown, roi);
    ptdownsampled = select(ptdown, indices);

    if i < 2  % frame 1
        refframe = ptdownsampled;
        refloc = ptdownsampled.Location;
        moving = ptdownsampled;
        player3D = pcplayer([-0.3, 0.3], [-0.21, 0.3], [0.2, 0.61], 'VerticalAxis', 'y', 'VerticalAxisDir', 'down');
        player3D2 = pcplayer([-0.3, 0.3], [-0.21, 0.3], [0.2, 0.61], 'VerticalAxis', 'y', 'VerticalAxisDir', 'down');
        view(player3D, ptdownsampled);
        i = i + 1;
        % conversion into mesh
        x1 = double(refloc(:, 1, :)) * 100;
        y1 = double(refloc(:, 2, :)) * 100;
        z1 = double(refloc(:, 3, :)) * 100;
        j1 = linspace(min(x1), max(x1), 40);
        k1 = linspace(min(y1), max(y1), 40);
        [X, Y] = meshgrid(j1, k1);
        v = scatteredInterpolant(x1, y1, z1);
        F1 = v(X, Y);
        distMeasure1(1) = 0;
        totaltime(1) = 0;
        continue;
    elseif i == 2  % frame 2
        fixed = moving;
        moving = ptdownsampled;
        tform = pcregrigid(moving, fixed, 'Metric', 'pointToPoint', 'Extrapolate', true);
        ptCloudAligned = pctransform(moving, tform);
        ptCloudScene = pcmerge(fixed, ptCloudAligned, mergeSize);
        % mesh conversion
        x2 = double(ptCloudAligned.Location(:, 1, :)) * 100;
        y2 = double(ptCloudAligned.Location(:, 2, :)) * 100;
        z2 = double(ptCloudAligned.Location(:, 3, :)) * 100;
        j = linspace(min([x1; x2]), max([x1; x2]), 40);
        k = linspace(min([y1; y2]), max([y1; y2]), 40);
        [X2, Y2] = meshgrid(j, k);
        v1 = scatteredInterpolant(x2, y2, z2);
        % Depth estimation with respect to frame 1
        F2 = v1(X2, Y2);
        F1 = v(X2, Y2);
        time = time + 0.033;
        totaltime = [totaltime, time];
        depth1 = F1 - F2;
        distMeasure1(2) = sqrt(norm(F1 - F2));

        view(player3D, ptdownsampled);
        view(player3D2, ptCloudScene);
    elseif i > 2  % all the rest of the frames
        accumTform = tform;
        fixed = moving;
        moving = ptdownsampled;
        tform = pcregrigid(moving, refframe, 'Metric', 'pointToPoint', 'Extrapolate', true);
        accumTform = affine3d(tform.T * accumTform.T);
        ptCloudAligned = pctransform(moving, accumTform);
        ptCloudScene = pcmerge(fixed, ptCloudAligned, mergeSize);
        view(player3D, ptdownsampled);
        view(player3D2, ptCloudScene);
        a = double(ptCloudAligned.Location) * 100;
        x = [];
        y = [];
        z = [];
        s = [];
        t = [];
        v = scatteredInterpolant(x1, y1, z1);
        % mesh conversion and depth estimation with respect to frame 1
        while count == i
            x(:, n) = a(:, 1, :);
            y(:, n) = a(:, 2, :);
            z(:, n) = a(:, 3, :);
            s(:, n) = linspace(min([x1; x(:, n)]), max([x1; x(:, n)]), 40);
            t(:, n) = linspace(min([y1; y(:, n)]), max([y1; y(:, n)]), 40);
            [X1, Y1] = meshgrid(s(:, n), t(:, n));
            v3 = scatteredInterpolant(x(:, n), y(:, n), z(:, n));
            F1 = v(X1, Y1);
            F4 = v3(X1, Y1);

            depth = F1 - F4;
            distMeasure1(n) = sqrt(norm(depth));

            count = count + 1;
            n = n + 1;
            time = time + 0.033;
            totaltime = [totaltime, time];
        end
    end
    i = i + 1;
end
figure
plot(totaltime, distMeasure1);  % bug fix: the original plotted distMeasure, which stays 0
% Filtering the breathing signal by taking the FFT
m = 0;
Fs = 30;
totaltime = totaltime';
distMeasure1 = distMeasure1';
L = length(distMeasure1);
NEFT = 2^nextpow2(L);
hello = abs(fft(distMeasure1, NEFT));
freq = Fs/2 * linspace(0, 1, NEFT/2 + 1);
figure;
plot(freq, hello(1:NEFT/2 + 1));
title('FFT for noise elimination');
xlabel('frequency');
ylabel('|depth(freq)|');
o = 4;
wn = [0.7 0.99] * 2 / Fs;
[b, a] = butter(o, wn, 'bandpass');
figure;
freqz(b, a, 1024, Fs);
grid on
[h, w] = freqz(b, a, 1024, Fs);
plot(w, 20*log(10*abs(h)));
outputfilt = filter(b, a, distMeasure1);
figure
plot(totaltime, outputfilt);

title('Breathing with respect to time');
xlabel('time in seconds');
ylabel('difference between frames with respect to frame 1');
figure
curve = animatedline;
for my = 1:length(totaltime)
    addpoints(curve, totaltime(my), outputfilt(my));
    drawnow;
end

pks = findpeaks(outputfilt, totaltime);

hold off
title('Breathing with respect to time');
xlabel('time in seconds');
ylabel('difference between frames with respect to frame 1');
while m < length(pks)  % m ends up equal to the number of detected peaks
    m = m + 1;
end
% Display of output result of breath counts
if (m >= 6) && (m <= 10)
    f = msgbox({'You are Healthy'; sprintf('Total number of breaths is %d\n', m)}, 'Result');
else
    f = msgbox({'You are Sick'; sprintf('Total number of breaths is %d\n', m)}, 'Result');
end

1.2.5 Comparison with 3DMD

%% reading the 3DMD data files

pathname = 'C:\Users\s1938355\Desktop\thesis\sheona\20190613144755576\meshes\obj1.obj';
output = readObj(pathname);
p = output.v;
figure;
tri = delaunay(p(:, 1), p(:, 2));
a = trimesh(tri, p(:, 1), p(:, 2), p(:, 3));
title('Inhalation');
pathname2 = 'C:\Users\s1938355\Desktop\thesis\sheona\20190613144755576\meshes\obj2.obj';
output2 = readObj(pathname2);
p2 = output2.v;
figure;
tri2 = delaunay(p2(:, 1), p2(:, 2));
trimesh(tri2, p2(:, 1), p2(:, 2), p2(:, 3))
title('exhalation');

%% Reading the Stereo cam input
videoFileLeft = 'C:\Users\s1938355\Desktop\thesis\long\longLeft.avi';
videoFileRight = 'C:\Users\s1938355\Desktop\thesis\long\longRight.avi';
vidLeft = VideoReader(videoFileLeft);
Left = get(vidLeft);
numFrames = ceil(vidLeft.FrameRate * vidLeft.Duration);
vidRight = VideoReader(videoFileRight);

frameLeft = read(vidLeft, 66);  % The deepest breath is on frame 66
frameRight = read(vidRight, 66);
[frameLeftRect, frameRightRect] = ...
    rectifyStereoImages(frameLeft, frameRight, stereoParams);
frameLeftGray = rgb2gray(frameLeftRect);
frameRightGray = rgb2gray(frameRightRect);
disparityRange = [0 112];
disparitymap = disparity(frameLeftGray, frameRightGray, 'DisparityRange', disparityRange, 'UniquenessThreshold', 10);
disp = imfill(disparitymap, 'holes');
% post processing
K = medfilt2(disp);
% Reconstruct 3-D scene.
points3D = reconstructScene(K, stereoParams);
points3D = points3D ./ 1000;
ptCloud = pointCloud(points3D, 'Color', frameLeftRect);
ptused = removeInvalidPoints(ptCloud);
ptused = pcdenoise(ptused);
gridSize = 0.001;
roi = [-0.3 0.31 -0.21 0.3 0.2 0.61];  % new 28/01
ptdown = pcdownsample(ptused, 'gridAverage', gridSize);
indices = findPointsInROI(ptdown, roi);
ptdownsampled = select(ptdown, indices);
refframe = ptdownsampled;
refloc = ptdownsampled.Location;
moving = ptdownsampled;
player3D = pcplayer([-0.3, 0.3], [-0.21, 0.3], [0.2, 0.61], 'VerticalAxis', 'y', 'VerticalAxisDir', 'down');  % normal
view(player3D, ptdownsampled);
x1 = double(refloc(:, 1, :)) * 10;  % note: scale differs from the *100 used below
y1 = double(refloc(:, 2, :)) * 10;
z1 = double(refloc(:, 3, :)) * 10;  % 2920
tri = delaunay(x1, y1);
figure;
trimesh(tri, x1, y1, z1)
title('in');
%
frameLeft = read(vidLeft, 1);
frameRight = read(vidRight, 1);
[frameLeftRect, frameRightRect] = ...
    rectifyStereoImages(frameLeft, frameRight, stereoParams);
frameLeftGray = rgb2gray(frameLeftRect);
frameRightGray = rgb2gray(frameRightRect);
disparityRange = [0 112];
disparitymap = disparity(frameLeftGray, frameRightGray, 'DisparityRange', disparityRange, 'UniquenessThreshold', 10);
disp = imfill(disparitymap, 'holes');
% post processing
K = medfilt2(disp);
% Reconstruct 3-D scene.
points3D = reconstructScene(K, stereoParams);
points3D = points3D ./ 1000;
ptCloud = pointCloud(points3D, 'Color', frameLeftRect);
ptused = removeInvalidPoints(ptCloud);
ptused = pcdenoise(ptused);
gridSize = 0.001;
roi = [-0.3 0.31 -0.21 0.3 0.2 0.61];  % new 28/01
ptdown = pcdownsample(ptused, 'gridAverage', gridSize);
indices = findPointsInROI(ptdown, roi);
ptdownsampled = select(ptdown, indices);
refframe = ptdownsampled;
refloc = ptdownsampled.Location;
moving = ptdownsampled;
player3D = pcplayer([-0.3, 0.3], [-0.21, 0.3], [0.2, 0.61], 'VerticalAxis', 'y', 'VerticalAxisDir', 'down');  % normal
view(player3D, ptdownsampled);
x2 = double(refloc(:, 1, :)) * 100;
y2 = double(refloc(:, 2, :)) * 100;
z2 = double(refloc(:, 3, :)) * 100;
tri = delaunay(x2, y2);
figure;
trimesh(tri, x2, y2, z2)
