
Design and implementation of autonomous robotic scanning of the breast

J.J.A. (Judith) Schoot Uiterkamp

MSc Report

Prof.dr.ir. S. Stramigioli
V. Groenhuis, MSc
Dr.ir. J.R. Buitenweg
Dr. F.J. Siepel

September 2016
042RAM2016
Robotics and Mechatronics

EE-Math-CS University of Twente

P.O. Box 217

7500 AE Enschede

The Netherlands


Design and implementation of autonomous robotic scanning of the breast.

J.J.A. Schoot Uiterkamp

7 October 2016


Summary

MURAB stands for MRI and Ultrasound Robotic Assisted Biopsy and aims to improve MRI-guided breast biopsy. MRI breast biopsy is significantly more complicated than ultrasound-guided biopsy and causes increased discomfort for the patient and increased intervention time and costs. The MURAB project aims to reduce these drawbacks with an innovative, robot-based solution replacing the current procedure. In the first phase of MURAB, the scanning phase, ultrasound images and elastographic data of the breast are autonomously acquired by a KUKA robotic arm. After localizing the lesion by merging the 3D ultrasound scan with the MRI image, and correcting this location for needle-induced displacements using the elastographic model, the needle is correctly positioned for insertion during the second phase, the needle insertion phase.

The main topic of this master thesis consists of the analysis of the MURAB project, the design of the scanning phase, and the implementation of this design. Analysis revealed that the most important requirements for the scanning phase are patient safety, the acquisition of high-quality ultrasound images by maintaining a constant contact pressure and an orientation of the probe normal to the skin, and that the procedure should be autonomous. The design of the scanning phase consists of autonomous initialization of scanning, automatic trajectory planning, autonomous scanning movement and the acquisition of ultrasound and elastographic data. The design was implemented using a KUKA LWR4+ lightweight robot arm. The parts that were implemented are autonomous initialization of scanning, automatic trajectory planning and contact control.

Results show that initialization of the scanning motion using markers is very accurate: the robot is steered to the correct start position with an accuracy of 1.6 mm. The trajectory was successfully planned. Contact with the patient was made and maintained during the full scan. During the full trajectory of the scanning motion, the contact force of 5 N is maintained with a maximum deviation of 1.5 N. The end-effector is kept normal to the surface with an average deviation of seven degrees. Recommendations for future work include transitioning to a new robot, using the prone patient position, integrating prototypes of end-effector designs, fine-tuning the controller, and advancing to the acquisition of real-time US and elastographic data.


Samenvatting

MURAB stands for MRI and Ultrasound Robotic Assisted Biopsy and focuses on improving the current MRI-guided breast biopsy procedure. MRI-guided biopsy is considerably more complex than ultrasound-guided biopsy, is less comfortable for the patient, and entails longer procedure times, higher costs and a higher rate of false-negative diagnoses. The MURAB project aims to reduce these drawbacks with an innovative, robot-based solution that can replace the existing procedure. In the first phase, the scanning phase, an ultrasound scan of the breast is made with the help of a KUKA robot arm and the elastographic properties of the tissue are measured. The location of the lesion can be determined by registering the 3D ultrasound scan with an MRI image. The needle holder can then be positioned for insertion during the second phase, the needle insertion phase, in which an elastographic model is used to correct for deformations of the tissue.

This Master's assignment comprises the analysis of the MURAB project, the design of the scanning phase and the implementation of parts of this design. The analysis showed that the most important requirements for the scanning phase are: 1) safety of the patient, 2) acquisition of ultrasound images of sufficient quality by applying a constant contact force and keeping the transducer normal to the surface, and 3) autonomous execution of the procedure. The design of the scanning phase consists of autonomous initialization of the procedure, planning of the scanning trajectory, autonomous scanning, and the acquisition of ultrasound images and elastographic data. The scanning phase was implemented with a KUKA lightweight robot arm, a depth camera and a 6DOF force sensor. The implemented parts are the autonomous initialization, the scanning motion and contact control.

The use of markers for the autonomous initialization is very accurate: the robot moves to the correct start position with a precision of 1.6 mm. During the scan the contact force is kept constant at 5 N, with a maximum deviation of 1.5 N. The deviation of the end-effector from the normal during scanning is on average seven degrees. The advice for further study and development of the MURAB project is to move on to implementation in the intended clinical setting with the new robot and table, to integrate prototypes of end-effectors, to fine-tune the controller, and to implement ultrasound image and elastographic data acquisition.


Preface

About three years ago I came to the study advisor of the Biomedical Engineering programme at the University of Twente with a wish that I thought was not possible to realize: to start with the master Biomedical Engineering with a Bachelor in Veterinary Medicine as background. To my surprise this was possible, but only if I passed a custom-made pre-master year, which I did.

Then, even before I had started with the Master, I decided I wanted to learn more about robotics and how it can be applied in the medical field. At that time, graduating in a research department focused on robotics was not one of the official options within the Biomedical Engineering Master programme, but after a long time of considering the official options I decided to walk the less-travelled road.

Luckily a new project would soon be launched at the Robotics and Mechatronics department and they were happy to take me on board. Only one year after I got my Bachelor in Utrecht I found myself amongst electrical engineering, mechanical engineering and computer science students in courses about system control, computer vision, robotics and more, and I loved it. Then I got an internship position at the University of California, Berkeley, which seemed impossible when I started writing to them, and after three months of e-mailing it still seemed that way. After three months and one day, however, I was accepted as a visiting scholar for three months.

And now my Master project has been finished. I enjoyed working on this project very much: being in the lab, designing, playing, I mean, working with the robot, programming... It confirmed that I made the right choice back then. If there's one thing I learned during these three years that I will always remember it is this: even if it seems like something is completely impossible to do or achieve, it is always worth trying anyway.

Therefore I want to thank Theo van Dam, study advisor of BME, Dr. Jan Buitenweg, my supervisor during my pre-master, and Prof. Stefano Stramigioli, head of the RaM department, for giving me the right chances at the right moment.

Thanks to Dr. Françoise Siepel for insightful talks and detailed feedback (I love that), and to my daily supervisor Vincent Groenhuis, who gave me all the freedom to work my own way with this project and was always willing to help me with anything he could (especially designing the parts for 3D printing). Also thank you for the unforgettable boat trip that literally stranded on the beach!

Finally, my family; Dad, Mom and Tijnemans, for always being there for me with hugs, dishcloth fights, cookies, and honest advice at the right moment: 'Een brutaal mens heeft de halve wereld!' ("A bold person has half the world!").

Judith J.A. Schoot Uiterkamp

Enschede, September 24, 2016


Contents

1 Introduction
1.1 Breast cancer
1.2 Biopsy robots
1.3 Context: the MURAB project
1.4 Aim of this thesis
1.5 Organization of the report

2 Analysis
2.1 MURAB Project overview
2.2 The Scanning Phase
2.3 Plan of approach

3 Materials and systems
3.1 LWR4+
3.2 ROS
3.3 Force sensor
3.4 Camera and depth sensor
3.5 Breast phantom
3.6 Omega 6

4 Design and Implementation
4.1 Frames and screw theory
4.2 Autonomous scanning initialization
4.3 Depth sensing
4.4 Marker detection
4.5 Autonomous trajectory planning and scanning motion
4.6 Contact control

5 Experiments
5.1 Experimental set-up
5.2 Experiments and Results

6 Discussion
6.1 Markers and breast detection
6.2 Trajectory
6.3 Force sensor
6.4 Contact performance
6.5 Counterforce
6.6 Pointclouds

7 Conclusion and future recommendations

A Appendices
A.1 4DOF 3D printed sensor

B Appendices
B.1 Rotation angle visual servoing

C Appendices
C.1 Scanning initialization results

D Appendices

Bibliography


1 Introduction

1.1 Breast cancer

Breast cancer is the most common cancer in women worldwide, with nearly 1.7 million new cases diagnosed in 2012. This represents about 12% of all new cancer cases and 25% of all cancers in women. Breast cancer is the second most common cause of cancer mortality among women in developed countries (WCRFI, 2012). Currently, a mammography (X-ray image) is the first step in breast cancer diagnosis. An ultrasound (US) guided breast biopsy is indicated when the mammography shows a suspicious lesion. In the case that a lesion cannot sufficiently be localized using US, MRI breast biopsy is performed. Improvement of these breast biopsy methods, allowing early detection and reliable diagnosis, can reduce the mortality rate significantly (Khatib and Modjtabai, 2006).

1.2 Biopsy robots

Apart from the introduction of screening programs and high-quality professional training for radiologists, technical solutions have also contributed to fast and accurate biopsies. One approach is a robotic solution, since biopsy robots can achieve high accuracy, consistency, dexterity and maneuverability, properties that are required for biopsy (Priester et al., 2013). Several biopsy robots have been developed in the past decades with the aim of aiding the radiologist during the procedure. These robots include teleoperated US scanning systems with position tracking of the probe and automated needle insertion (Pua et al., 2006; Kettenbach et al., 2005), or reconstruction of a 3D US volume to aid needle guidance (Freschi et al., 2009). Mallapragada et al. (2009) designed a system with simultaneous needle insertion and US manipulation to compensate for lesion motion within the breast during the intervention. These types of systems perform well on phantom tissue, with reported accuracies of 2 mm or less. However, they typically address only one aspect of the biopsy procedure, like scanning, US image processing and lesion detection, or needle insertion. Also, most systems focus on ultrasound-guided biopsy, while in many cases MRI biopsy is required, a much more time- and cost-inefficient intervention. The MURAB project tries to address these issues with the development of an innovative US-MRI combined robotic biopsy procedure that replaces the current practice in MRI breast biopsy.

1.3 Context: the MURAB project

MURAB stands for MRI and Ultrasound Robotic Assisted Biopsy and is a four-year European project that is part of the Horizon 2020 Framework Programme for Research and Innovation. The aim of the MURAB project is to improve breast biopsy, which is a required routine intervention in the diagnosis of breast cancer. The project specifically focuses on cases where MRI-guided biopsy is indicated. Currently, mammography (X-ray imaging) is the first step in breast cancer diagnosis. If the mammogram shows suspicious lesions, an ultrasound breast biopsy is the next step. During the biopsy the lesion is detected and a needle is inserted into the breast to collect tissue for histological examination. Lesion detection and needle navigation are guided by real-time ultrasound imaging. In some cases, MRI-guided breast biopsy is required instead of ultrasound-guided biopsy because lesions are easier to visualize in MRI images. Examples of such cases are: the lesions are visible in the mammogram (X-ray image) but not in ultrasound images; the patient has highly indicative symptoms but both the mammography and ultrasound images do not show lesions; and the screening of a high-risk patient. MRI breast biopsy is significantly more complicated, since it requires the patient to be taken out of the MRI after the images have been made, and to be taken back in after the needle has been placed to check for correct placement. It can take multiple repetitions before the needle is found to have been inserted correctly.


This causes increased discomfort for the patient, higher intervention time and costs, and a larger percentage of false-negative diagnoses.

In the MURAB project, only one MRI image is required. The full procedure will consist of two phases that are performed autonomously by a KUKA robotic arm. In the first phase, the scanning phase, 2D ultrasound images of the breast area of the patient are acquired. These images are combined, and the resulting 3D ultrasound image is then merged with the MRI image to identify the location of the lesion with respect to the US scan. In the second phase, the intervention phase, the needle is positioned such that the optimal insertion point at the skin is chosen and the medical professional only needs to push the needle through the skin and apply force in only one direction in order to reach the lesion. In addition, elastographic data of the breast tissue is collected during the scanning phase to create an elastographic model of the breast.

This model will be used to predict movement of the lesion caused by the needle insertion. The advantage of a fully autonomous procedure is that it generates evenly spaced ultrasound image slices and enables 3D volume reconstruction that is superior to freehand techniques. It also eliminates hand tremor, uneven application forces, and lapses in concentration during the day that often negatively influence results (Priester et al., 2013).

1.4 Aim of this thesis

This Master thesis focuses on the design of the scanning phase of the MURAB project. The aim of the master project is to design and implement a procedure in which the breast of a patient is autonomously scanned by a robotic arm such that 2D ultrasound images and elastographic data of the full breast area can be collected. In addition, implementation of the design using a KUKA robotic arm is part of the project. The performance of the design is evaluated using the results from the experiments and the requirements from the analysis. Because this master project is part of a four-year international project, useful insights and recommendations that allow MURAB to progress should be presented at the end of the master project.

1.5 Organization of the report

This report is organized as follows: Chapter 2 presents the clinical and technical context of the MURAB project, the analysis of the master project, the design requirements and the plan of approach. In Chapter 3 the materials and hardware used in this project are described. This is presented before discussing any designs to maintain reading flow. Chapter 4 presents the design and implementation of the proposed approach per part of the scanning phase. Chapter 5 contains the experiments, describing the set-up and the results. The report concludes with a discussion addressing the performance of the implementation, a conclusion, and recommendations for future work on both the scanning phase implementation and the MURAB project. Throughout the work the reader is referred to the appendices for a more detailed technical description of the used equipment, mathematical derivations, and experimental data.


2 Analysis

This chapter provides an overview of the MURAB project. Because the project is currently in the exploration phase, an in-depth analysis of the MURAB project and its aims is first presented to derive the requirements. Special attention is paid to the scanning phase, for which the requirements regarding safety, contact with the patient, and technical aspects are discussed. From the analysis and the requirements, a plan of approach is formulated.

2.1 MURAB Project overview

MURAB is a four-year international project with the ambition to drastically improve the precision and effectiveness of the current MRI biopsy procedure for cancer diagnostics. The project includes partners from the University of Verona in Italy, the Medical University of Vienna in Austria, Radboud University Medical Centre in Nijmegen, ZGT hospital Hengelo and Siemens in the Netherlands, and KUKA Industrial Robotics in Germany, with the University of Twente as the leading partner. Responsibility for different parts of the project is divided among these partners. The two research focus areas of the project are breast and muscle disease diagnosis. This master project focuses on breast biopsy. One of the features of MURAB is the replacement of the current MRI-guided biopsy by a new procedure.

2.1.1 Current procedure

Most patients that arrive at the radiology clinic are referred by the general practitioner or by the Dutch breast cancer screening program for women over 50 and high-risk women.

In the first case the symptoms are usually palpable abnormalities in the breast; in the second case the mammogram (X-ray image) made by the screening organisation shows a suspicious lesion in the breast (Figure 2.2). When no mammogram has been made yet, this is the first step. The second step is US- or MRI-guided breast biopsy to collect cells for histological analysis, investigating the morphology in order to form a diagnosis. During US-guided biopsy, the radiologist scans the breast area with a US probe to find the lesion (see Figure 2.3). After applying local anaesthesia, a small incision is made at the desired position on the breast and a biopsy needle is inserted. The needle is navigated to the lesion and placed in front of it. The biopsy needle consists of an inner needle surrounded by an outer shaft. Activation of the device by pushing a button snaps the inner needle out of its shaft into the lesion, as can be seen in Figure 2.1.

The outer shaft is re-aligned with the inner needle such that it protects the biopted cells during needle retraction. The intervention time of US-guided biopsy is about 20 minutes.

Figure 2.1: Biopsy needle mechanism. The inner needle is protected by an outer shaft. After positioning of the needle in front of a lesion, the needle is snapped out from its shaft into the lesion. The outer shaft is re-aligned with the inner needle, protecting the biopted cells during needle retraction.


Figure 2.2: Mammogram with lesion at the arrow.

Figure 2.3: Ultrasound-guided breast biopsy.

There are cases in which MRI-guided breast biopsy is indicated, such as when the lesion is not visible in the US images and/or mammogram, or when it concerns a high-risk patient. During MRI-guided biopsy, the patient lies in prone position (face-down) on an MR table with two holes through which the breasts are positioned. Once the MR image has been taken, the patient is taken out of the MRI and, after making a small incision, a sheath with a stylet is inserted into the breast. The patient is taken back into the MRI scanner to check for correct placement. It can take multiple repetitions before the sheath is confirmed to be placed correctly.

Finally, the biopsy needle is led through the sheath to the correct position and the biopsy is taken. Due to this cyclic procedure, patients experience more discomfort during MRI-guided biopsy than during US-guided breast biopsy. Additionally, MRI-guided breast biopsy has a higher intervention time and costs, and sometimes the exact lesion is not reached by the needle because the correctness of the actual position of the needle cannot be confirmed. The total time required for MRI breast biopsy is about 45-60 minutes.

(a) MR Imaging for breast biopsy (b) MR Image (c) Breast biopsy

Figure 2.4: MRI guided breast biopsy


2.1.2 MURAB procedure

The MURAB procedure aims to replace the current procedure. When MRI-guided biopsy is indicated, only one MR image is made. Next, the breast area of the patient is scanned with a robotically steered US transducer equipped with an acoustically transparent pressure sensor. Scanning will be fully autonomous. During scanning, 2D US images and pressure data are acquired to construct a 3D US image and an elastographic model of the tissue. Using Tissue Active SLAM (TAS), the 3D US image is registered to the MR image. After the target is localized by the radiologist, the robotic arm steers the biopsy needle holder to the desired position, guided by real-time US imaging. The needle is inserted by the radiologist. Tissue deformations are predicted based on the acquired elastographic model of the tissue, which should allow more accurate needle navigation.

The full procedure can be completed within one appointment. Also, only one MR image is required, which saves a lot of time compared to current MRI-guided biopsy. The reduced intervention time results in lower costs and less patient discomfort. Using a robot arm to scan the patient and modelling the elastographic properties of the tissue to correct for tissue deformation result in a higher accuracy and therefore fewer false negatives. Higher accuracy also allows for the use of a thinner needle, which causes less damage to the breast tissue. An overview of the aims of MURAB can be found in Figure 2.5.

Figure 2.5: MURAB goals

Summarized, the MURAB procedure consists of the following parts:

• Scanning phase, including:

– Autonomous US scanning of the target area by a robot arm
– Acquiring elastographic data using an ultrasound-transparent pressure sensor array

• Tissue Active SLAM, during which a model of the breast area is constructed using:

– MRI-US image merging to locate the lesion
– Predictive elastography model construction

• Needle insertion phase, including:

– Autonomous US needle tracking
– Autonomous needle navigation
– Manual needle insertion
– Virtual stop when the target has been reached


Figure 2.6: Schematic overview of the MURAB procedure. First, one MR image is made. Next, the patient is scanned using a robotic arm. Pressure data and US images are acquired using Tissue Active SLAM to create an elastographic model of the tissue and a 3D US map. This map is merged with the MRI to localize the lesion. Finally, the robot positions the biopsy needle such that the radiologist can insert the needle pushing in only one direction.

During the scanning phase a 3D ultrasound scan of the breast is autonomously made by a KUKA robotic arm. This arm has a US probe attached to its end-effector. During scanning, pressure data is acquired using an ultrasound-transparent pressure sensor array. This data will be used to make an elastographic model of the tissue. The scanning phase has its own subsection in this chapter and is analysed more extensively there. The other three parts are briefly elaborated on here.

After the scanning phase, an elastographic model of the breast tissue is constructed using pressure data from the acoustically transparent pressure sensor. This model will be used during the needle insertion phase to predict deformation of the breast tissue and displacement of the lesion as a result of the needle insertion and the US probe.

The acquired US images are combined to obtain a 3D US image of the target area. This image is merged with the MRI image to obtain the location of the lesion. Because the position of the robot arm on the skin was stored for every US image, the location of the lesion within the breast will be known in robot coordinates.

During the needle insertion phase the robot arm moves the end-effector, containing the US probe and a biopsy needle guiding system, to the optimal position on the skin where the needle can be inserted. This location depends on the position of the lungs, ribs and nipple, because these should not be hit by the needle, and on the path that causes the least amount of tissue damage. During the positioning movement of the robot, collisions with the patient, the table and possibly other objects must be avoided. Although the location of the breast was stored by the robot arm during the scanning phase, the patient might have moved slightly. The robot should know the location of the patient and other objects, and the positioning algorithm should include collision avoidance.

The robot will autonomously execute the full process except the insertion of the needle, which will be performed by a medical professional such as the radiologist. After needle insertion, a soft virtual stop slows down the needle when the lesion has been reached, preventing damage to underlying tissue like the lungs and signalling that the lesion has been reached. However, this virtual stop is not a hard stop, and ultimately the radiologist controls the depth of the insertion.

During both phases, breathing and sudden movements must be taken into account. Movements of the patient may cause obstruction of the US probe, bending of the needle or, worse, tissue damage. Safety must be maintained without obstructing the acquisition of high-quality data or accurate needle navigation. Possible solutions have already been proposed: the


robot should move with the patient's movements, or the end-effector should be designed to be compliant enough to allow external motion.

2.2 The Scanning Phase

The scanning phase comprises the autonomous ultrasound scanning of the target area using a KUKA robot arm in order to acquire elastographic data using an acoustically transparent pressure sensor array and 2D US images using a US probe. This phase is the focus of this master thesis, and therefore this section is dedicated to the clinical context and clinical requirements of the scanning phase.

2.2.1 Patient configuration

In breast biopsy, there are two common patient configurations: supine position and prone position. US-guided biopsies are generally performed with the patient in supine position (Figure 2.3) and MRI-guided biopsies with the patient in prone position (Figure 2.4a). The current approach for the MURAB project is to place the patient in the same position as during MR imaging (prone) to minimize tissue deformation between the MR image and the 3D US image. This facilitates easier and more accurate merging of the two images. The table that will be used for the procedure is the breast biopsy table by Hologic, commonly used for stereotactic (= X-ray guided) breast biopsy (Figure 2.7).

Figure 2.7: MultiCare Platinum prone breast biopsy table by Hologic, used for stereotactic (= X-ray guided) breast biopsy.

However, during this Master's project the table that will eventually be used was not yet available. Also, the supine position allows for easier implementation using the robot arm that was used in this project. Therefore the breast phantoms used for the implementations and experiments in this thesis have a supine configuration.

2.2.2 Contact

During the scanning phase the robot has to maintain contact with the patient. In order to obtain high-quality images, a minimal amount of pressure has to be applied to the skin, in combination with the use of enough US gel for optimal US transmission. However, the tissue should not be damaged, nor should the patient experience pain or severe discomfort during scanning. Because the robot will work autonomously during the scanning phase, proper feedback loops are required. A constant pressure will ensure that the tissue is equally compressed at all positions and that the US images are of equal quality. This will result in a more accurate 3D ultrasound map of the breast. The optimal contact force for acquiring images is between 2 N and 7 N.

The orientation of the US probe with respect to the skin can be vertical (Figure 2.8a) or normal to the skin (Figure 2.8b). The advantage of the vertical orientation is that the control algorithms for maintaining contact are simpler. The disadvantages are that the deformation of the tissue is considerable, which negatively influences the elastography model of the breast. Also, the contact of the probe with the skin is not optimal, especially at steep slopes, which results in lower-quality images. When contact is kept normal to the surface of the skin, these disadvantages do not apply. It is also a more intuitive orientation of the probe, it reduces the shear forces caused by the sliding motion over the skin to a minimum, and it allows for progression to other common orientations of the probe, such as tilt (Figure 2.9T) or rotation (Figure 2.9R). In the future, these orientations might be included in MURAB to create a more complete set of features for the scanning phase and increase flexibility.

Figure 2.8: Possible orientations of the probe during motion. A: vertical orientation, B: normal orientation.

Figure 2.9: Possible US probe orientations used in breast ultrasound examination. P: Pressure on the skin in the direction of the normal, A: Normally oriented translation over the skin, R: Rotation around the normal axis of the skin, T: Tilt, creating an angle between the longitudinal axis of the probe and the normal axis of the skin, also called the 'angle pivoting movement' and used extensively by radiologists. From: Ihnatsenka and Boezaart (2010).

Sudden patient movements and breathing add complexity to the design of the scanning phase. The robot should not obstruct the breathing pattern of the patient or endanger patient safety during a sudden movement of the patient. In turn, the breathing pattern of the patient should not interfere with the scanning motion or data collection. Breathing influences both the external position of the breast and the internal configuration of the tissue. Different approaches towards solving this problem are: including real-time measurements of the breathing movements, designing a predictive model of the breathing pattern, and averaging multiple data samples over the full breathing cycle. Asking the patient to hold their breath is not an option, since the scanning phase will take too long for this. Also, the resulting tension on the breast might cause tremor that influences the scanning motion and image quality.

Another factor that introduces complexity to the scanning phase is the problem of counterforce in prone position. Figure 2.10 shows the lateral movement of the breast when a force is applied to the side of the breast. A counterforce is required that covers at least the green area on the right side of the breast when force is applied at the purple area on the left side. In supine position, breast movement is limited because force exerted on the skin is directed towards the body, as can be seen in Figure 2.11. The exact solution for this problem has not been decided yet. Because supine position is assumed during this master thesis, the problem of counterforce is less applicable and is neglected for the experiments described in this thesis.

Figure 2.10: Illustration of direction of force when applied in prone position. Counterforce is required in the green area on the right side of the breast. This area corresponds with probe contact at the purple area on the left side. Counterforce for probe contact at blue areas on the left side is generated by the body of the patient.

The scanning path, or trajectory, of the scan depends on the target area, the optimal US-image configuration for 3D image construction, and on the final solution for the counterforce problem. Several configurations for the probe motion are used in handheld scanning for 3D US images; these are shown in Figure 2.12. Implementation of all or a combination of these configurations can increase the flexibility of the system for different patients.

2.2.3 Safety

An important part of the scanning phase is safety. This includes patient safety as well as the prevention of damage to the equipment. Safety is a heavily discussed topic within the field of robotics. In the scanning phase, the two most important safety problems are the earlier mentioned unexpected movements of the patient, and damage as a result of clamping of tissue between the probe and the body of the patient or the table.


Figure 2.11: Illustration of the direction of force when applied in supine position. Counterforce is generated by the body at all probe contact points and external counterforce is not required.

Figure 2.12: Different scanning paths used in handheld US scanning for 3D US image construction (Kelly and Richwald, 2011).

Haddadin et al. (2009) argued that clamping of an object between a robot and a wall results in damage proportional to the mass of the robot, while the damage from merely being hit by the same robot depends on the mass of the object and the velocity of the robot; the latter usually causes less damage. In the scanning phase there is a risk of clamping, because the breast is between the probe and a counterforce.

There are different general approaches to reduce risk and damage:

• Recognition of a dangerous situation

• Prevention of a dangerous situation

• Reduction of damage in case of an accident

In terms of safety, the standard approach in the field of robotics is the immediate interruption of any type of movement of the robot when a dangerous situation has been identified. Activating a counter-movement to immediately remove the hazard, or initiating a free-floating mode (where the robot arm can be freely moved around manually), is inferior to this approach because it may introduce a new dangerous situation or even worsen the current one. Immediate discontinuation of movement will prevent any further damage.

Shear forces on the skin caused by the moving probe are considered of less importance compared to clamping or sudden movements, but they can still cause pain or damage to the skin. Shear forces are usually kept minimal during ultrasonography using a gel that is applied between the skin and the probe. During the scanning phase it is therefore required that gel is applied to the full target area to avoid high shear forces.

2.2.4 Requirements

Two requirements follow from the MURAB project proposal paper: a KUKA LWR4+ robotic arm is to be used, and the scanning phase must be carried out fully autonomously. These requirements have been determined by the MURAB consortium. The other requirements follow from the analysis and insights from this chapter.

During scanning, the end-effector should remain normal to the tissue to ensure optimal contact with the skin, reduce shear forces, and allow easy progression to other orientations of the probe when required. The contact force on the skin should be as constant as possible to ensure equal quality of US images and equal deformation of the underlying tissue. The optimal contact force is between 2 N and 7 N; outside this range the US image should be discarded. Deviations from the contact force reference caused by breathing should be minimized as much as possible. Also, scanning should not interfere with the breathing pattern of the patient and must remain safe. Detection of clamping is required to ensure safety. The contact force should never exceed a maximum that is set to twice the maximal optimal force, i.e. 14 N. In case of a dangerous situation, movement should immediately be discontinued to prevent damage to the patient or equipment. Table 2.1 summarizes the requirements for the scanning phase.

Table 2.1: Requirements of the scanning phase

• Scanning plan: fully autonomous procedure; use of the KUKA LWR4+ robotic arm
• Scanning procedure: end-effector normal to the skin; constant optimal contact pressure; breathing taken into account
• Safety: maximal contact pressure; movement discontinuation in dangerous situations
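As an illustration of how these requirements translate into control logic, the sketch below encodes the force thresholds of Table 2.1 in a small decision function: images are only accepted inside the 2-7 N band, and anything above 14 N triggers an immediate stop. This is a minimal sketch, not the implemented controller; the names ContactAction and checkContactForce are hypothetical.

```cpp
#include <iostream>

// Hypothetical safety check encoding the scanning-phase force requirements:
// US images are only valid in the 2-7 N band, and anything above 14 N
// (twice the maximal optimal force) must discontinue all movement.
enum class ContactAction { DiscardImage, AcceptImage, EmergencyStop };

ContactAction checkContactForce(double force_N)
{
    const double kMinOptimal = 2.0;  // lower bound of optimal contact force [N]
    const double kMaxOptimal = 7.0;  // upper bound of optimal contact force [N]
    const double kHardLimit = 14.0;  // safety limit: stop all movement [N]

    if (force_N > kHardLimit)
        return ContactAction::EmergencyStop;
    if (force_N < kMinOptimal || force_N > kMaxOptimal)
        return ContactAction::DiscardImage;  // keep scanning, drop the US image
    return ContactAction::AcceptImage;
}

int main()
{
    std::cout << (checkContactForce(5.0) == ContactAction::AcceptImage) << "\n";    // 1
    std::cout << (checkContactForce(15.0) == ContactAction::EmergencyStop) << "\n"; // 1
    return 0;
}
```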

There are two more requirements that may become important later in the MURAB project but are not included in this thesis. It is yet unknown which of the orientations of the US probe in Figure 2.9 will eventually be included in the scanning movement; therefore a neutral position normal to the skin is assumed in this thesis. Also, the patient will eventually be in prone position, and counterforce should be provided to minimize breast displacement in this position. However, because the Hologic table was not yet available, and to facilitate easier implementation, supine position is assumed in this project and the counterforce problem can be ignored.

2.3 Plan of approach

The scanning procedure will be subdivided into three parts that are approached individually.

These three parts are:

• Scanning initialization

• Trajectory planning and scanning motion

• Contact control

All parts will be autonomously performed by the KUKA robotic arm, and implemented and tested separately. The final implementation is an autonomously performed scanning procedure of a phantom breast in which the position of the phantom breast is determined, a trajectory plan is made, contact is initialized, and the scanning motion is executed while maintaining contact, after which the procedure finishes.


3 Materials and systems

Throughout the report the KUKA LWR4+ robotic arm is used for all implementations. All parts were implemented using the Robot Operating System (ROS) for communication between hardware components. The programs (nodes) for ROS were all written in C++. In the implementations, 3D-printed force sensors (1DOF and 3DOF), a 6DOF force sensor, a depth sensor camera, and a breast phantom were used. These materials are discussed here in more detail.

3.1 LWR4+

The LWR4+ is a lightweight industrial seven degrees-of-freedom (DOF) robotic arm (Figure 3.2).

Each joint is equipped with a position sensor on the input side and position and torque sensors on the output side. The robot can be operated with position, velocity and torque control using the KCP (KUKA Control Panel), see Figure 3.1. The robot can be operated without an external computer using this panel. With its sensitive sensors and one redundant DOF, the LWR4+ is specifically developed for handling and assembly tasks.

Figure 3.1: Overview of KUKA LWR4+ communication:

• 1: KUKA LWR4+ lightweight robotic arm

• 2: KCP control panel

• 3: robot controller

• 4: connecting cable to KCP

• 5: connecting cable to robot

• 6: ROS environment

• 7: connecting cable to external laptop

3.1.1 The Fast Research Interface

The LWR4+ was built for both industrial and research applications. The Fast Research Interface (FRI) facilitates communication between the LWR4+ and an external computer.


Figure 3.2: Seven joints of the LWR4+ with E1 as the redundant joint.

Data such as robot controller messages, sensor values and the robot state are transferred between the robot and the external computer through the FRI. The FRI is a state machine with two important states:

• Monitor mode: allows cyclical communication with transfer of robot sensor data to an external computer. Variables and parameters can be changed during this mode

• Command mode: allows cyclical communication with transmission of commands from an external computer to the robot controller and transfer of robot sensor data to an external computer. No changes in variables or parameters are allowed.

Switching between modes is facilitated by the FRI state machine, which is schematically illustrated in Figure 3.3. FRI commands allow switching between states. When opening the FRI, Monitor mode is initialized and opened with the 'FRI-open' command. No movement of the robot is allowed, only variable changes and parameter settings. Command mode can only be started ('FRI-start') when the data transfer quality is sufficient. In Command mode, data from the LWR4+ sensors is sent to the external computer, and robot commands are sent to the LWR4+ from the external computer in a cyclic fashion. Any fault or error, such as data loss or exceeding joint limits, immediately activates the joint brakes and forces the robot back to Monitor mode. Initiation of Command mode is only possible when Monitor mode is active first.

3.1.2 The KCP

The KUKA Control Panel (KCP) not only allows full programming of the robot but also manual jogging of the robot using its Joint Position or Cartesian Stiffness controller, monitoring of variables, interaction with the FRI state machine, and logging.

3.1.3 Controllers

The LWR4+ has three types of controllers:

• (Joint) Position controller

• Cartesian stiffness controller

• Axis-specific stiffness controller


Figure 3.3: The FRI state machine. FRI commands allow switching between modes.

• 1: Cmd initialisation data transfer

• 2: Fault

• 3: Monitor mode (MON)

• 4: Quality of data transfer is not sufficient

• 5: Command mode (CMD)

In this project, only the Cartesian stiffness controller was used because it allows control of the robot in Cartesian space. The control law of the Cartesian stiffness controller is:

$$\tau_{cmd} = J^T \left( k_c (x_{FRI} - x_{msr}) + F_{FRI} \right) + D(d_c) + f_{dyn}(q, \dot{q}, \ddot{q})$$

The law is defined via the transposed Jacobian matrix $J^T$ and consists of the following (adjustable) parameters:

• $x_{FRI}$, the Cartesian setpoint position
• $k_c$, the Cartesian stiffness, with $k_c(x_{FRI} - x_{msr})$ representing a virtual spring
• $D(d_c)$, the damping term
• $F_{FRI}$, the Cartesian force
• $f_{dyn}(q, \dot{q}, \ddot{q})$, the dynamic model of the LWR4+
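To make the structure of this control law explicit, the sketch below evaluates it numerically with the Eigen library. This is a minimal sketch under simplifying assumptions: the Cartesian pose error is represented as a plain 6-vector, and the damping term D(d_c) and dynamic model f_dyn are passed in as placeholder vectors, since on the real robot these terms are evaluated internally by the KUKA controller.

```cpp
#include <Eigen/Dense>

// Sketch of the Cartesian stiffness control law
//   tau = J^T * (k_c (x_FRI - x_msr) + F_FRI) + D(d_c) + f_dyn(q, qdot, qddot)
// for the seven-joint LWR4+. Representing the pose error as a 6-vector is a
// simplification; the damping and dynamic-model terms are placeholders here.
Eigen::Matrix<double, 7, 1> stiffnessLaw(
    const Eigen::Matrix<double, 6, 7>& J,       // geometric Jacobian
    const Eigen::Matrix<double, 6, 6>& k_c,     // Cartesian stiffness (diagonal)
    const Eigen::Matrix<double, 6, 1>& x_fri,   // Cartesian setpoint
    const Eigen::Matrix<double, 6, 1>& x_msr,   // measured Cartesian pose
    const Eigen::Matrix<double, 6, 1>& F_fri,   // commanded Cartesian force
    const Eigen::Matrix<double, 7, 1>& damping, // placeholder for D(d_c)
    const Eigen::Matrix<double, 7, 1>& f_dyn)   // placeholder for f_dyn(q, qdot, qddot)
{
    // Virtual spring k_c (x_FRI - x_msr) plus the commanded wrench F_FRI,
    // mapped to joint torques through the transposed Jacobian.
    return J.transpose() * (k_c * (x_fri - x_msr) + F_fri) + damping + f_dyn;
}
```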

The Cartesian stiffness controller accepts five types of input commands: velocity, position, stiffness, damping and torque. In the context of the experiments in Chapter 5, the most practical option would be hybrid control, switching between velocity and position control. However, hybrid control is not allowed during Command mode with an external computer. For this project velocity control is preferred, due to the extensive use of force feedback in the experiments. Movement of the robot to exact positions is still possible using position-feedback control loops: the direction of movement is determined by the error between the current position and the desired position. Velocity control with position-feedback loops achieves the same accuracy as position control (< 1 mm).
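A minimal version of such a position-feedback loop for the translational part could look as follows; the gain and velocity limit are illustrative values, not the tuned values used in the experiments.

```cpp
#include <Eigen/Dense>

// Proportional position-feedback loop producing a velocity command: the
// commanded velocity points from the measured position towards the desired
// position and is saturated to a maximum speed. Near the target |v| -> 0,
// so the robot settles at the desired position.
Eigen::Vector3d velocityCommand(const Eigen::Vector3d& x_msr,
                                const Eigen::Vector3d& x_des,
                                double kp = 1.0,      // illustrative gain [1/s]
                                double v_max = 0.05)  // illustrative limit [m/s]
{
    Eigen::Vector3d v = kp * (x_des - x_msr);
    if (v.norm() > v_max)
        v *= v_max / v.norm();  // saturate to keep the motion slow and safe
    return v;
}
```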


3.1.4 The frames

The LWR4+ handles a base frame and a tool frame. Cartesian control commands can be sent to the robot with respect to either the base or the tool frame, but not mixed. In Chapter 4 the relation between frames of the LWR4+ and several sensors are discussed in more detail.

3.2 ROS

Every experiment was implemented using the Robot Operating System (ROS) as the communication platform between the robot, the sensors and the user. ROS provides an open-source software framework for robot software development. Its main uses include device control, implementation of common functionality, message-passing between processes and package management (Thomas, 2014). Programs and other processes run in a 'Node', and Nodes communicate with each other through 'Topics'. Nodes publish data to Topics and take data from other Topics without knowing who the receiver or sender is. Data is referred to as 'Messages'. This graph-like structure allows for integrating parts of complex systems like robots with multiple sensors and actuators. The robot sensors, actuators and robot controller are processes that take place in Nodes, and their data, like images (for instance from a camera Node), velocity (for instance from an actuator Node), or even laser scans, are passed as Messages via Topics. A simple example can be seen in graph form in Figure 3.4, but in complex systems the graph can become a web with tens of nodes and topics.

Figure 3.4: Graphical representation of a simple ROS system. The publishing node publishes messages on a topic; the subscribing node takes the messages from the topic.

An important feature of this structure is that none of the nodes rely on each other. They merely publish their data to or receive data from topics without knowing anything of the full system behind it. Because of this structure, one can take a node out of one system and re-use it in another, or replace it with a different node. A practical example is that it does not matter what type of camera is used, as long as it publishes images on the topic that the Image Processing node is subscribed to. This way the implementation of parts of a complex system becomes flexible, and therefore ROS is very suitable for research and experimentation purposes in the field of robotics.
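As a minimal illustration of the publish/subscribe pattern, the sketch below shows a node that publishes string messages on a topic called 'chatter'; any node subscribing to that topic receives the messages without either side knowing about the other. The node and topic names are arbitrary examples.

```cpp
#include <ros/ros.h>
#include <std_msgs/String.h>

// Minimal ROS publisher node: publishes a string message on the topic
// 'chatter' ten times per second. A subscribing node would register a
// callback on the same topic and is otherwise completely decoupled.
int main(int argc, char** argv)
{
    ros::init(argc, argv, "talker");
    ros::NodeHandle n;
    ros::Publisher pub = n.advertise<std_msgs::String>("chatter", 10);

    ros::Rate rate(10);  // 10 Hz publishing rate
    while (ros::ok())
    {
        std_msgs::String msg;
        msg.data = "hello";
        pub.publish(msg);
        ros::spinOnce();
        rate.sleep();
    }
    return 0;
}
```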

Apart from the basic structure, ROS has other functionality including easy monitoring of and access to data, visualisation, and code-sharing with the community. In this thesis, all nodes were written in the programming language C++ because the author wished to become familiar with this widely used programming language.

3.2.1 FRI state machine ROS node

In order to facilitate synchronized data transfer between the robot and the external computer, an FRI state machine ROS node was developed by a previous master student of the Robotics and Mechatronics chair (Baarsma, 2015). This state machine is a copy of the FRI state machine of the LWR4+ (Figure 3.3), but runs on the external machine. Both state machines communicate in a cyclic fashion, and only when both machines are in the same state ('digital handshake') is data transfer possible. As mentioned in the FRI state machine section, errors such as a lost connection to the robot or uninitialized parameters prevent the initiation of Command mode. If such errors occur, one of the state machines falls back to Monitor mode, blocking communication. The FRI state machine ROS node also translates sensor data from the robot into ROS messages, publishes these on ROS topics, and translates the commanded messages into a sensible data package to send to the LWR4+.
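A control node communicating with this FRI node could be wired as in the sketch below: it subscribes to the measured pose on '/FRI_MsrPosition' and publishes velocity commands on '/FRI_send_velocity', the topic names shown in Figure 3.7. The message types (geometry_msgs::Pose and geometry_msgs::Twist) and the loop rate are assumptions; the actual types used in the project are not documented here.

```cpp
#include <ros/ros.h>
#include <geometry_msgs/Pose.h>
#include <geometry_msgs/Twist.h>

// Skeleton of a control node talking to the FRI state machine node.
// Topic names follow Figure 3.7; the message types are assumed.
geometry_msgs::Pose latest_pose;  // last measured end-effector pose

void poseCallback(const geometry_msgs::Pose::ConstPtr& msg)
{
    latest_pose = *msg;  // store the measured pose for the control loop
}

int main(int argc, char** argv)
{
    ros::init(argc, argv, "contact_control");
    ros::NodeHandle n;
    ros::Subscriber sub = n.subscribe("/FRI_MsrPosition", 1, poseCallback);
    ros::Publisher pub = n.advertise<geometry_msgs::Twist>("/FRI_send_velocity", 1);

    ros::Rate rate(100);  // assumed control loop rate [Hz]
    while (ros::ok())
    {
        ros::spinOnce();  // process incoming pose messages

        geometry_msgs::Twist cmd;  // velocity command, zero by default
        // A real controller would compute cmd from latest_pose and force
        // feedback; this skeleton only shows the wiring.
        pub.publish(cmd);
        rate.sleep();
    }
    return 0;
}
```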

3.3 Force sensor

The KUKA LWR4+ is able to calculate the force applied to the end-effector in six dimensions using the torque measurements in every joint. The torque sensors of the LWR4+ are designed to be very sensitive and accurate for small changes in force. However, the calculated torque at the end-effector has a poor repeatability and sometimes failed to recover to start values after interaction with the environment. Since accuracy, repeatability and reliability play a large role in the MURAB project for safety and precision reasons, a dedicated force sensor was used in this project. Early experiments were carried out with 3D-printed 1DOF and 3DOF force sensors. These sensors fulfilled their job well, but their sensitivity and accuracy were not sufficient to sense fine changes accurately. Therefore, the 3D-printed 3DOF sensor was replaced by a commercial 6DOF force sensor. The design, setup and implementation of the 1DOF and 3DOF sensors can be found in Appendix A and are mainly intended as a resource for future students.

The ATI Mini40 is a compact 6DOF strain gauge force-torque sensor typically used in telerobotics, robotic surgery and robotic hand and finger-force research (ATI, 2016). With a resolution of 1/100 N (Fz) and 1/8000 Nm (Tx, Ty), it is very suitable for measuring small changes in surface shape.

Figure 3.5: ATI Industrial Automation's Mini40 FT sensor.

Communication between the force sensor and ROS was facilitated by a dedicated ROS node publishing the measured wrench. Connection to the end-effector is enabled by a simple laser-cut interface, as shown in Figure 3.6. A 3D-printed hemisphere interface was used to transfer contact force from the patient to the sensor.

Figure 3.6: Camera, force sensor and probe mounted to the end-effector using a laser-cut white interface ring.


Figure 3.7: Graphical representation of all parts of the communication system. From left to right: example program ROS node '/contact_control', FRI state machine ROS node '/KUKA_LWR4_1', the FRI state machine of the LWR4+ itself, and the Cartesian stiffness controller. Communication between the ROS nodes '/contact_control' and '/KUKA_LWR4_1' is facilitated by the rostopics '/FRI_Control', '/FRI_send_velocity' and '/FRI_MsrPosition'.


3.4 Camera and depth sensor

The camera used is a CREATIVE SENZ3D depth imaging camera capable of capturing both RGB images and depth images (Figure 3.8). A depth sensor uses a laser to project a pattern of evenly spaced dots into the space. The dots are invisible to the human eye, but can be captured by the infrared camera of the depth sensor device. When the dots hit a surface, the pattern breaks: on near objects the pattern spreads out, on far objects the pattern is dense (see Figure 3.9). The result is a depth map of the scene.

Figure 3.8: Parts of the SENZ 3D camera and depth sensor.

Figure 3.9: Infra-red dot pattern projected by a depth sensor. Dots that hit an object far away appear more scattered than dots on objects nearby. The image was captured by an infra-red camera.

Similarly to the force sensor, the camera has its own dedicated node publishing RGB and depth images and image data. Processing of image data within a ROS node was done using OpenCV, an open source library for image processing and computer vision. Processing of the depth sensor data was done using PCL, an open source point cloud processing library.

3.5 Breast phantom

The breast phantoms are made from a mixture of liquid PVC and a plasticiser. The liquid was heated to approximately 200 degrees Celsius and poured into the 3D-printed mould (Figure 3.10). By varying the shares of PVC and plasticiser, the stiffness of the phantoms can be varied. The breast phantom with surrounding skin used in the experiments consists of 85 mass percent PVC and 15 mass percent plasticiser.


Figure 3.10: From left to right: heating pan, liquid PVC and plasticiser, 3D-printed mould. Bottom: breast phantoms with different stiffness and surrounding phantom skin.

3.6 Omega 6

During the early phases of the project, when autonomous scanning was not yet implemented, the Omega 6 haptic device was extensively used to command the LWR4+. The Omega 6 is a very sensitive 6DOF joystick with actuators that provide haptic feedback in the linear x, y and z directions. It allowed teleoperation of the robot. Figure 3.12 shows a demo in which a professional demo patient is scanned with an early design of the needle holder. Again, communication was facilitated via a ROS node. Once the autonomous scanning motion was implemented, the Omega 6 was no longer required.

Figure 3.11: Omega 6 haptic device.


Figure 3.12: The author tele-operating the LWR4+ with the omega 6. During this session, a scan was made to feature in a demo video for MURAB (photo by Vincent Groenhuis, MSc).


4 Design and Implementation

In this chapter, the design and implementation of the three parts of the scanning phase are presented. These parts are: scanning initialization, trajectory planning and scanning motion, and contact control. All code for the implementations was written in C++.

In literature and industry there are roughly two approaches to robotic scanning of a surface: preprogrammed motion, where the orientation of the surface is known beforehand, and motion with real-time feedback on the current state of the surface pose. In the MURAB project, the environment of the robot is dynamic during scanning due to breathing, patient motion, patient-specific differences and breast deformation. Therefore, real-time feedback on the state of the environment is required to ensure the safety of the patient, the robot and the surroundings.

4.1 Frames and screw theory

The set-up of the scanning phase contains several sensors and a robot that all have their own frames. Screw theory is a powerful tool to describe the geometric positions and orientations of points and bodies in space, to express velocities and forces with respect to a certain frame, and to change coordinates between frames, amongst other applications. The following section outlines a quick and intuitive explanation. For further mathematical elaboration and proofs, the reader is referred to Stramigioli and Bruyninckx (2001).

In screw theory, linear and angular velocities are represented by Twists, and forces and torques are represented by Wrenches. This section aims to provide an intuitive understanding of Twists and Wrenches, and explains the conversions between the frames of the robot base, end-effector, force sensor and camera. These frames are shown in Figure 4.1.

In screw theory, a Twist T is a 6x1 matrix that represents the velocity of a rigid body using a 3D angular velocity vector rot and a 3D translational velocity vector vel:

$$T = \begin{bmatrix} \omega_x \\ \omega_y \\ \omega_z \\ v_x \\ v_y \\ v_z \end{bmatrix} = \begin{bmatrix} \mathrm{rot} \\ \mathrm{vel} \end{bmatrix} \qquad (4.1)$$

A Twist in a frame i expressed in coordinates of a frame j is notated as $T^j_i$. A Wrench W is a 6x1 matrix that represents a force and torque exerted on a rigid body:

$$W = \begin{bmatrix} F_x \\ F_y \\ F_z \\ \tau_x \\ \tau_y \\ \tau_z \end{bmatrix} = \begin{bmatrix} \mathrm{force} \\ \mathrm{torque} \end{bmatrix} \qquad (4.2)$$

Figure 4.1 shows the frames of the base and end-effector of the LWR4+ (Ψ_b and Ψ_e respectively), the camera (Ψ_c) and the force sensor (Ψ_s). Any Twist or Wrench in a frame other than the base frame must be expressed in base frame coordinates before it can be sent to the LWR4+.


Figure 4.1: Overview of camera and force-torque sensor mounting. A: Robot and end-effector, B: Close-up of the end-effector with the mounting of the camera and force-torque sensor, C: Base frame Ψ_b and end-effector frame Ψ_e, D: End-effector frame Ψ_e, camera frame Ψ_c and ft-sensor frame Ψ_s.

The H-matrix in screw theory is a 4x4 matrix that contains the rotation matrix and translation vector describing the relation between two frames. The pose of a frame Ψ_f with respect to another (reference) frame Ψ_ref can be represented by H^ref_f:

$$H^{ref}_f = \begin{bmatrix} R^{ref}_f & p^{ref}_f \\ 0 & 1 \end{bmatrix} \qquad (4.3)$$

where $R^{ref}_f$ is the rotation matrix of Ψ_f with respect to Ψ_ref and $p^{ref}_f$ is the translation vector. The H-matrix can be used to perform a change of coordinates of a vector $p^f = [x, y, z, 1]^T$ from the frame Ψ_f to the frame Ψ_ref:

$$p^{ref} = H^{ref}_f \, p^f \qquad (4.4)$$

A change of coordinates of a Twist requires the H-matrix to be rewritten as a 6x6 adjoint H-matrix:

$$Ad_{H^{ref}_f} = \begin{bmatrix} R^{ref}_f & 0 \\ \tilde{p}^{ref}_f R^{ref}_f & R^{ref}_f \end{bmatrix} \qquad (4.5)$$

where $\tilde{p}$ denotes the skew-symmetric matrix of the translation vector p. The change of coordinates of a Twist from a frame Ψ_f to a reference frame Ψ_ref (for example from the sensor frame Ψ_s to the base frame Ψ_b) is given by the expression:

$$T^{ref} = Ad_{H^{ref}_f} \, T^f \qquad (4.6)$$

The frames of the sensors and the robot in this project are shown in Figure 4.1. The base frame Ψ_b of the robot is the reference frame. The pose of any of the frames Ψ_e (end-effector), Ψ_s (ft sensor) and Ψ_c (camera) with respect to Ψ_b can be described by H^b_e, H^b_s and H^b_c respectively. A twist in the sensor frame can be expressed in the base frame using the adjoint matrix of H^b_s:

$$T^b = Ad_{H^b_s} \, T^s \qquad (4.7)$$

In this equation, H^b_e can be derived from the position of the end-effector measured by the robot's sensors. H^b_s is not directly known, but the relation between frame Ψ_s and Ψ_e can be measured: H^e_s. Using the chain rule of screw theory, H^b_s can be calculated from H^b_e and H^e_s:

$$H^b_s = H^b_e H^e_s \qquad (4.8)$$

$H_s^e$ can be constructed with an identity rotation matrix $R_s^e = I_3$ and the translation vector between $\Psi_s$ and $\Psi_e$: $p_s^e = [0, 0, 0.054\ \mathrm{m}]^T$.
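A minimal sketch of this chain of transformations using the Eigen library is given below. Only the 0.054 m offset and the frame relations come from the text above; the function names and code organization are chosen here for illustration:

```cpp
#include <Eigen/Dense>

// Skew-symmetric matrix form p~ of a 3D vector, used in the adjoint (4.5).
Eigen::Matrix3d skew(const Eigen::Vector3d& p) {
    Eigen::Matrix3d S;
    S <<     0, -p.z(),  p.y(),
         p.z(),      0, -p.x(),
        -p.y(),  p.x(),      0;
    return S;
}

// 6x6 adjoint of a homogeneous matrix H (rotation R, translation p), eq. (4.5).
Eigen::Matrix<double, 6, 6> adjoint(const Eigen::Matrix4d& H) {
    const Eigen::Matrix3d R = H.block<3, 3>(0, 0);
    const Eigen::Vector3d p = H.block<3, 1>(0, 3);
    Eigen::Matrix<double, 6, 6> Ad = Eigen::Matrix<double, 6, 6>::Zero();
    Ad.block<3, 3>(0, 0) = R;
    Ad.block<3, 3>(3, 0) = skew(p) * R;
    Ad.block<3, 3>(3, 3) = R;
    return Ad;
}

// Express a twist given in the sensor frame in base frame coordinates,
// combining the chain rule (4.8) with the adjoint map (4.7).
Eigen::Matrix<double, 6, 1> sensorTwistInBase(
    const Eigen::Matrix4d& H_e_b,            // end-effector pose in base frame
    const Eigen::Matrix<double, 6, 1>& T_s)  // twist [rot; vel] in sensor frame
{
    // Fixed sensor-to-end-effector transform: identity rotation, 54 mm offset.
    Eigen::Matrix4d H_s_e = Eigen::Matrix4d::Identity();
    H_s_e(2, 3) = 0.054;

    const Eigen::Matrix4d H_s_b = H_e_b * H_s_e;  // chain rule (4.8)
    return adjoint(H_s_b) * T_s;                  // eq. (4.7)
}
```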

$H_e^b$ can be derived from the current measured pose of the end-effector in the base frame. This data is presented by the LWR4+ as a ROS pose message and consists of a position given by Euclidean x, y and z coordinates, and an orientation given in quaternions. Quaternions allow for a unique description of the orientation of a body in a frame, in contrast to Euler angles, where the same configuration can be described in multiple ways. A quaternion consists of four coordinates $q_x$, $q_y$, $q_z$ and $q_w$, each with a value between minus one and one, such that the quaternion has unit norm. An intuitive example of quaternions is given in Figure 4.2. Four key orientations are described with the value one in one of the coordinates and zero in the others. Any configuration between the four orientations is a 'blend' of them. In Figure 4.2e both $q_w$ and $q_y$ contribute equally to the orientation of the object; the value 0.7 (more precisely $1/\sqrt{2} \approx 0.707$) follows from the unit-norm constraint. In order to derive $H_e^b$, the quaternion representation has to be converted to a rotation matrix representation. A rotation matrix can be calculated from a quaternion using the following conversion:

$$R = \begin{bmatrix}
q_w^2 + q_x^2 - q_y^2 - q_z^2 & 2q_x q_y - 2q_w q_z & 2q_x q_z + 2q_w q_y \\
2q_x q_y + 2q_w q_z & q_w^2 - q_x^2 + q_y^2 - q_z^2 & 2q_y q_z - 2q_w q_x \\
2q_x q_z - 2q_w q_y & 2q_y q_z + 2q_w q_x & q_w^2 - q_x^2 - q_y^2 + q_z^2
\end{bmatrix} \qquad (4.9)$$

4.2 Autonomous scanning initialization

The goal of the autonomous scanning initialization is to identify the target area on the body of the patient. Because the target area and the breast size and shape differ per patient, flexible start point initialization and trajectory planning are required. Two approaches were considered: depth imaging using a depth sensing camera, and marker detection using a camera.

4.3 Depth sensing

The potential of point clouds for patient detection is investigated with this implementation. A depth sensing camera captures depth information of an environment, as was discussed in Chapter 3, and the depth values can be visualized as a point cloud. Processing of a point cloud includes, for instance, filtering and smoothing to optimize the point cloud for surface reconstruction. The point cloud can be used for patient detection by using the depth values to find the distance between the camera and the patient, and to compute the real-world distance between two points of the point cloud.

Figure 4.2: Intuitive example of quaternions, $q = [q_w, q_x, q_y, q_z]$. (a) $q = (1,0,0,0)$, (b) $q = (0,1,0,0)$, (c) $q = (0,0,1,0)$, (d) $q = (0,0,0,1)$, (e) $q = (0.7, 0, 0.7, 0)$.

A filter was used to remove outliers and isolated points caused by artefacts and noise. The filter uses point neighbourhood statistics to identify outlier data and was developed by Rusu et al. (2008). The algorithm iterates through the entire point cloud twice. During the first iteration it computes the average distance of each point to its nearest $k$ neighbours. Next, the mean $\mu$ and standard deviation $\sigma$ of all these distances are computed in order to determine a distance threshold $T_{distance}$:

$$T_{distance} = \mu + a \cdot \sigma \qquad (4.10)$$

where $a$ is a multiplier. During the second iteration the points are classified as inlier or outlier if their average neighbour distance is below or above this threshold, respectively. The points are still unordered, and in order to obtain a continuous surface the point cloud can be smoothed using the Moving Least Squares (MLS) algorithm. MLS calculates continuous functions from a set of unorganized point samples using weighted least squares fits in the region around every point. Figure 4.3 shows an intuitive representation of the MLS algorithm outcome. After smoothing, the surface can be reconstructed using the Poisson algorithm (Kazhdan et al., 2006), which is widely used in surface reconstruction of point clouds from 3D scans. The algorithms were implemented in C++ using the open source library PCL (Point Cloud Library); a sketch of this pipeline is shown below.

Figure 4.3: Example of a smooth surface obtained with MLS.
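A condensed sketch of this processing pipeline with PCL, assuming the cloud has already been captured. The parameter values ($k$, the multiplier $a$, the search radius and the octree depth) are illustrative and not the tuned values used in the experiments:

```cpp
#include <pcl/point_types.h>
#include <pcl/filters/statistical_outlier_removal.h>
#include <pcl/search/kdtree.h>
#include <pcl/surface/mls.h>
#include <pcl/surface/poisson.h>
#include <pcl/PolygonMesh.h>

pcl::PolygonMesh reconstructSurface(pcl::PointCloud<pcl::PointXYZ>::Ptr cloud) {
    // 1. Statistical outlier removal: threshold = mu + a * sigma, eq. (4.10).
    pcl::PointCloud<pcl::PointXYZ>::Ptr filtered(new pcl::PointCloud<pcl::PointXYZ>);
    pcl::StatisticalOutlierRemoval<pcl::PointXYZ> sor;
    sor.setInputCloud(cloud);
    sor.setMeanK(50);             // k nearest neighbours
    sor.setStddevMulThresh(1.0);  // multiplier a
    sor.filter(*filtered);

    // 2. Moving Least Squares smoothing; also estimates the normals that
    //    Poisson reconstruction requires.
    pcl::search::KdTree<pcl::PointXYZ>::Ptr tree(new pcl::search::KdTree<pcl::PointXYZ>);
    pcl::PointCloud<pcl::PointNormal>::Ptr smoothed(new pcl::PointCloud<pcl::PointNormal>);
    pcl::MovingLeastSquares<pcl::PointXYZ, pcl::PointNormal> mls;
    mls.setInputCloud(filtered);
    mls.setSearchMethod(tree);
    mls.setSearchRadius(0.03);    // local fitting region in metres
    mls.setComputeNormals(true);
    mls.process(*smoothed);

    // 3. Poisson surface reconstruction on the smoothed, oriented points.
    pcl::Poisson<pcl::PointNormal> poisson;
    poisson.setInputCloud(smoothed);
    poisson.setDepth(8);          // octree depth, controls mesh resolution
    pcl::PolygonMesh mesh;
    poisson.reconstruct(mesh);
    return mesh;
}
```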


4.4 Marker detection

This design allows the robot to move to a reference position with respect to the markers from any start position, given that all markers are visible in the camera image from that start position. This is referred to as visual servoing.

Feature detection from camera images was used in Mustafa et al. (2013) to identify the epigastric region on a subject; colour segmentation was used to detect both nipples and the umbilicum. The landmarks in MURAB are the circular edge of the Hologic table and the nipple. Extra landmarks can be added if required, for instance to define the target area. For the experiments in this thesis, three coloured markers in a triangular shape represent the table edge. The markers are of different sizes in order to be able to determine the orientation of the robot arm with respect to the markers (Figure 4.4a).

Figure 4.1 shows the camera mounted on the robot end-effector. It is assumed that the camera looks down at the breast phantom along the end-effector's z-axis. This way, the transformation between the reference marker locations and the current marker locations is an affine transformation without shear. The reference points for visual servoing are defined in pixel coordinates in the image (Figure 4.4b). To find the blue markers, the image is segmented in HSV space on the blue hue range (Figure 4.5); a sketch of this segmentation is given below. The advantage of the HSV colour system is that, when only the hue value is used, changes in lighting do not influence the segmentation. The segmented areas are sorted by descending size, with marker 1 having the largest area and marker 3 the smallest.

(a) Blue markers in a triangular shape; the markers are of different sizes to allow tracking of the orientation. (b) Reference points for visual servoing in the image.

Figure 4.4: Markers in the camera image and reference points in the binary image.

Figure 4.5: The HSV (Hue, Saturation and Value) colour space. The system is based on how humans perceive colour.
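A minimal sketch of the marker segmentation using OpenCV; the hue bounds for 'blue' and the minimum blob area are illustrative assumptions, not calibrated values:

```cpp
#include <opencv2/opencv.hpp>
#include <algorithm>
#include <vector>

// Find the blue markers in a BGR camera image and return their centroids,
// sorted by descending area (marker 1 = largest, marker 3 = smallest).
std::vector<cv::Point2f> findBlueMarkers(const cv::Mat& imageBgr) {
    cv::Mat hsv, mask;
    cv::cvtColor(imageBgr, hsv, cv::COLOR_BGR2HSV);

    // Segment on hue only (roughly 100-130 for blue in OpenCV's 0-179 range);
    // wide saturation/value bounds keep the result robust to lighting changes.
    cv::inRange(hsv, cv::Scalar(100, 50, 50), cv::Scalar(130, 255, 255), mask);

    std::vector<std::vector<cv::Point>> contours;
    cv::findContours(mask, contours, cv::RETR_EXTERNAL, cv::CHAIN_APPROX_SIMPLE);

    // Sort the segmented areas by descending size.
    std::sort(contours.begin(), contours.end(),
              [](const std::vector<cv::Point>& a, const std::vector<cv::Point>& b) {
                  return cv::contourArea(a) > cv::contourArea(b);
              });

    std::vector<cv::Point2f> centroids;
    for (const auto& c : contours) {
        if (cv::contourArea(c) < 100.0) break;  // ignore small noise blobs
        cv::Moments m = cv::moments(c);
        centroids.emplace_back(static_cast<float>(m.m10 / m.m00),
                               static_cast<float>(m.m01 / m.m00));
    }
    return centroids;
}
```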

The error between the reference pose and the marker pose is used as the input for the visual servoing control. The error consists of a translational error in x, y and z direction in pixels and a rotational error.
