Head mounted control system for robotic assisted surgery


Academic year: 2021

Abstract

Head mounted control system for robotic assisted thoracoscopic surgery

The purpose of this research is to make a proof-of-concept for a head mounted control system that will control an endoscopic robot. The TeleFlex project at the University of Twente is taken as an example of such a robot. This project will include an investigation of requirements, implementation of different control modes and testing using evaluation platforms. The focus of this project is software development for such a system.


CONTENTS HCSRATS, University of Twente

Contents

1 Introduction

2 Background
2.1 The Surgery
2.2 Control Theory
2.3 Sockets

3 Analysis
3.1 System Requirements
3.1.1 Medical Requirements
3.1.2 Technical Requirements
3.2 TeleFlex
3.3 System Validation

4 Design
4.1 System Overview
4.2 Programming Language
4.3 3D Environment
4.4 Module Description
4.4.1 Sensor
4.4.2 Control Module
4.4.3 Foot Switch
4.4.4 HMD
4.4.5 3D test environment
4.4.6 Prototype Robot
4.4.7 TeleFlex

5 Results
5.1 Sensor
5.2 Foot switch
5.3 Main Module
5.4 Virtual 3D Environment
5.4.1 Model
5.4.2 Performance
5.5 Prototype Robot
5.5.1 Robot
5.5.2 Camera
5.6 Performance
5.7 TeleFlex
5.7.1 Scope Characterization
5.7.2 Performance

6 Discussion
6.1 Time Management
6.2 Video Lag
6.3 Discipline Synergy

7 Conclusion and Suggestions

8 Appendices
8.1 Measurement results characterisation TeleFlex + coloscope
8.2 Report of visit at VATS mini-Maze operation
8.3 Source Code
8.4 SolidWorks Dynamixel Webcam Mount


1 Introduction

Robotics and mechatronics are becoming more reliable and, with that, better suited for fields of application where robots were not welcome before. One of these fields is the medical environment.

In this report, a study is done on making a proof of concept for a head mounted control system for robotic assisted thoracoscopic surgery. This project uses a video-assisted thoracoscopic minimal-incision operation (VATS mini-Maze) as example case. With a VATS mini-Maze operation, a patient is cured of atrial fibrillation. In practice, the surgeon needs an assistant who holds a rigid scope and manoeuvres it on the surgeon's instructions. The camera feed of the scope is displayed on an LCD screen near the operating table. This set-up has the following drawbacks:

1. Clear communication between surgeon and assistant is of great importance. Especially during long operations, this tends to become less efficient.

2. The assistant has to keep the scope in practically the same position for hours. This can lead to exhaustion and health issues.

3. Although surgeons are trained on hand-eye misalignment, controlling tools while looking at a screen located in another direction is not ideal. This also leads to an extended learning curve, increasing costs for health care.

4. The working space at the entrance of the body during a minimal-incision surgery is limited. Using a flexible scope instead of a rigid thoracoscope saves space.

With the system to be developed, a surgeon can control a flexible scope by moving his head and gets the video feed of this scope presented on his virtual reality glasses. This will give the surgeon better control over the surgery, and it also gives the assistant time to focus on more important tasks.

The focus of the project will lie on the implementation of combining a head mounted device (HMD) with an endoscope. The HMD will be provided by the company Cinoptics and the endoscope will be delivered by the TeleFlex project (University of Twente). In this research, a controller will be designed that translates the movement of the head of a surgeon to the movement of the endoscope. At first, the system will have 2 DOF (yaw and pitch, see figure 1) and can be extended with more degrees of freedom (roll). To feed the video of the endoscope to the HMD, the OpenCV image processing framework will be used to process and control the video feed. To give the surgeon the ability to activate, deactivate or reset modules of the user interface, a foot switch will be introduced.


Figure 1: Yaw, Pitch and Roll

This assignment is a follow-up of other projects. One of these projects examined how surgeons prefer to control a robot via a head mounted device. Another investigated the state-of-the-art systems that have already been built [1, 2]. In line with these studies, different control algorithms will be implemented.

The goal of the author is to apply the knowledge gained in his Bachelor Advanced Technology; the project will also give him more experience with programming in C and Python, which is not included in the Advanced Technology program. As a future student of Embedded Systems, these skills are useful to have.

Because the system to be developed is a "Head-mounted control system for robotic assisted thoracoscopic surgery", a name for this project could be an abbreviation of the title: HCSRATS.

Figure 1 source: http://www.mdpi.com/sensors


2 Background

This section provides information that is necessary to understand the coming chapters. The basics of control theory and endoscopes will be further described.

2.1 The Surgery

The end product of this project will give the surgeon the ability to control an endoscope.

Endoscopy means 'looking inside' and refers in this case to a camera inside a human body. There are multiple types of endoscopes, for example the gastroscope, colonoscope or thoracoscope. Each scope is specially designed for inspection of a different part of the human body. An example of an endoscope can be seen in figure 2a. The TeleFlex robot can control the tip of an endoscope by mounting the endoscope's handle in a specially designed mount. In this way, different endoscopes can be controlled with the same robot.

The current set-up uses a rigid scope that is held by an assistant. In this project, the implementation of a flexible thoracoscope controlled by a robot will be tested.

(a) A typical endoscope. On the left, the handle with which the tip of the scope can be steered. This handle can be connected to the TeleFlex.

(b) The tip of the endoscope can be set in different positions over two DOF

Figure 2: The endoscope explained

The VATS mini-Maze operation is an operation in which a patient is cured of atrial fibrillation. Instead of opening the chest, minimal-incision surgery is applied. This means that instruments are inserted into the body via only a limited number of holes, which has the advantage that the chance of infection is reduced.


2.2 Control Theory

Control theory is a discipline in engineering and mathematics that deals with the behaviour of a dynamical system and how it responds to feedback. A controller steers output devices based on knowledge of the total system and on its inputs (sensors). The working of a control loop is best explained by figure 3.

Imagine a cart that should automatically move to a desired location. This cart has a motor and a sensor to measure its current position. The desired location is the reference in figure 3. At that point, the current location of the cart is measured by the sensor. The difference between the current location and the desired location is referred to as the error. Depending on this error, the controller starts to power the motor. If the cart gets close to the desired location, the error goes to zero and the power that the controller gives to the motor should decrease to arrive precisely at the desired position.
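As an illustration, this cart example can be simulated in a few lines of Python with a proportional controller; the gain, time step and set point below are chosen arbitrarily for demonstration:

```python
# Minimal sketch of the cart example: a proportional (P) controller
# drives the cart toward a desired location.

def simulate_cart(setpoint, position=0.0, gain=0.5, dt=0.1, steps=100):
    """Move a 1-DOF cart toward `setpoint` using proportional control."""
    for _ in range(steps):
        error = setpoint - position   # difference between desired and measured location
        velocity = gain * error       # controller output shrinks as the error shrinks
        position += velocity * dt     # integrate the velocity over one time step
    return position

final = simulate_cart(setpoint=1.0)   # converges close to the set point
```

Because the motor power is simply the gain times the error, it automatically decreases as the cart approaches the set point, exactly as described above.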

Figure 3: Controller loop: the output is determined by the desired set point and measuring the current state, source: [3]

If one would like to control the tip of an endoscope in the same way as described in the example of the cart, the endoscope should have a sensor that measures the position of its tip. This is not the case in the TeleFlex project. Therefore a so-called human-in-the-loop system will be implemented: a human takes the place of the sensor block and in that way controls the tip of the endoscope. Since the system is a camera steering mechanism and not a precise surgical tool, the precision of a human in the loop is sufficient.

2.3 Sockets

In software development, sockets are a way to let one program communicate with another. These programs can be on the same computer, but can also run on separate computers. In the latter case, the socket can communicate over a network. In this project, communication between different programs will be done by sockets.

Sockets are like phone conversations. A program A can "call" another program B if the right "phone number" is dialled, and ask it something. Program B can respond, after which the connection is closed. In software development, making a "call" is referred to as a request, and the phone number is the combination of an IP address and a port number.
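To make the analogy concrete, the sketch below implements such a request-response exchange with Python's standard socket module; the message contents are arbitrary, and a real socket framework adds abstraction layers on top of this:

```python
import socket
import threading

# Program B: open the "phone line" (an IP address plus a port number).
srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))        # port 0 lets the OS pick a free port
srv.listen(1)
address = srv.getsockname()       # the full "phone number" to dial

def answer():
    conn, _ = srv.accept()                    # pick up when program A calls
    with conn:
        request = conn.recv(1024)             # hear the question
        conn.sendall(b"pong: " + request)     # respond, then hang up

threading.Thread(target=answer).start()

# Program A: dial the number, make a request and read the response.
with socket.socket(socket.AF_INET, socket.SOCK_STREAM) as cli:
    cli.connect(address)
    cli.sendall(b"ping")
    reply = cli.recv(1024)        # b"pong: ping"
srv.close()
```

Both "programs" run here as threads in one process, but the same code works unchanged across two computers once a real IP address is used.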


3 Analysis

This chapter discusses the problem definition in a more elaborate way. It covers the project planning, the requirements that the HCSRATS has to meet and background information on the technical limitations.

3.1 System Requirements

To evaluate which requirements the HCSRATS has to meet, the requirements are categorized into a medical requirements part and a technical requirements part.

3.1.1 Medical Requirements

The HCSRATS has to meet certain requirements in order to work optimally in the operating room:

1. Robustness. The system should have multiple fall-back systems. An operation is in some cases a matter of life and death. A robot as such needs a decent backup plan and should always double-check the system's decisions.

2. Reliability. The HCSRATS will be used intensively and for long periods of time. Problems that occur should be easy to resolve, i.e. hardware should be easily replaceable and software safe and easy to reset.

3. Suitable for medical cleaning procedures. Only medical-grade equipment can be used in the OR, so parts near the patient have to be sterilizable.

4. Intuitive in use. An operation can take up to 6 hours or even longer. During these long operations, the system should be controlled in a way that is natural to the surgeon. In addition, any new technology should be transparent to the user and give a familiar user experience which does not disturb the conventional work flow.

5. Compact. The space at the operation table is limited. Therefore, the system should occupy as little space as possible near the patient.

6. Lightweight. The HMD should be light and use as few cables as possible. With this, the surgeon has maximum manoeuvrability and wearing the HMD will not result in neck muscle fatigue.

3.1.2 Technical Requirements

Besides medical requirements, there are also technical requirements that should be met in order to let the system operate at a level that is convenient for the surgeon. The HMD has to have a sensor system that monitors the movement of the head of the surgeon. This sensor should have:

• At least two DOF output, preferably three or more

• Accuracy of at least 1 degree. As a rule of thumb, 1 degree precision is taken as a maximum error of the sensor. Systems with a larger error will not be precise enough to control an endoscope.

• A sample rate of at least 50 Hz. This update frequency is the lower limit for a smooth control system; 100 Hz is preferred.


• Readout on Linux. Since open-source software is preferable at this stage, the operating system that will be used is a Linux distribution.

• Give the surgeon maximum manoeuvrability. This means that the sensor system should obstruct the surgeon as little as possible.

An evaluation of the available sensors will be done in section 4.4.1.

The computer that runs the software will have to have

• USB ports for, among other things, the sensor and prototype robot connection.

• Firewire for the endoscope camera feed.

• Ethernet for connection to the TeleFlex.

• The possibility to run low latency (almost real time) software.

3.2 TeleFlex

The ultimate goal is to design a system that can control an endoscopic robot, in this case the TeleFlex. The TeleFlex consists of a standard endoscope set-up plus an endoscope driver and mount (see figure 6, parts a and b). Furthermore, it is known that the scopes that will be attached to the TeleFlex suffer from hysteresis. This means that the output of the TeleFlex system, that is, the location of the tip of the endoscope, depends on time and on previous states of the system. A general example of hysteresis can be seen in figure 4.

Figure 4: Hysteresis; the output can only be determined by knowing its previous states. Therefore, there is no linear relation between input and output. Source: [4].

At the moment this report was written, research was being done on how the hysteresis can be overcome. For the duration of the HCSRATS project, position control of the endoscope will not be precise due to hysteresis in the endoscopes. However, a human-in-the-loop control system can still be used to control the endoscope. In a normal system, a controller (e.g. PID) can be used to steer the tip of the scope to a set point. In this setup, however, there is no sensor that measures the tip of the scope. Knowing that the system will have hysteresis, the experiment conducted in section 5.7.1 is designed to expect this kind of distortion.

By using representative temporary substitutes for the TeleFlex, a head mounted control system that supports position control can still be developed. The system only cannot be fully tested on the TeleFlex. If hysteresis in the endoscope can be compensated in the future, position control will then be possible.


3.3 System Validation

The ultimate goal is to control an endoscopic robot, i.e. the TeleFlex. These robots are delicate and expensive. Therefore, two substitute systems will be developed that give the ability to validate the HCSRATS with minimal influence of external factors, like hysteresis.

Firstly, a virtual 3D environment will be used to test the performance of the HCSRATS. It will simulate the movements of an endoscope inside a human body. The advantage of a virtual test environment is that it is cheap, safe, portable and can easily be utilized for later research. In the next step, a more elaborate mechanical model is constructed. This model consists of a prototype robot that has 2 DOF and has a camera attached to it. In this model, the effects of controllers and the inertia of the robot make it a more realistic validation platform compared to the first, virtual model. A flow diagram of this project can be seen in figure 5.

Figure 5: Project Process: the three output stages of the HCSRATS.


Figure 6: TeleFlex system. (a) Portable endoscope mount, (b) stationary unit, (c) motors stationary unit, (d) motor controllers, (e) communication board, (f) cable pulleys, (g) pretension mechanism cables, (h) emergency stop, (i) Bowden cables, (j) power supply. Source: [5].


4 Design

In this chapter, the design choices for the problem definition discussed in the previous chapters are presented. Using a system overview diagram, each module of the HCSRATS is discussed, and all preparations to build the HCSRATS system are shown. The construction and results will be presented in the next chapter.

4.1 System Overview

As discussed in section 3.3, the HCSRATS behaviour will be evaluated using models. The first and most simple validation platform is a virtual 3D environment. An overview of this evaluation set-up can be seen in figure 7a. In the next step, to have a more realistic test, a robotic mechanism using modular components will be assembled. This prototype robot can control the yaw and pitch of a camera, and the control system will be the same as in the virtual 3D environment. This model will serve as a physical model for the TeleFlex. An overview of this system can be seen in figure 7b. Each module will be built as a separate system; the modules communicate via a standard protocol.

4.2 Programming Language

The system should have a low latency, meaning the response of the system should be immediate. This will avoid harming healthy or vital tissue. Low latency can be realized by using a low-level programming language. Furthermore, since modular evaluation platforms will be used, a modular software design is preferred. Object-oriented programming is best suited for modular design. Two languages that have these properties are C++ and Python [6, 7]. For this project, Python is chosen as programming language. C++ has higher performance, but development in Python is much easier [8]. Also, if Python falls short on performance, C++ bindings can be used to speed up the calculations; OpenCV, for example, uses this trick. Another reason for using Python is that the TeleFlex is programmed in Python.

4.3 3D Environment

There are multiple virtual reality engines available for Python. The engine should be easy to implement, stable and able to load medical models. An evaluation of some of the most popular frameworks can be found in table 1. In this examination, −− indicates not suited for a virtual model and ++ indicates well suited. Please note that a survey like this cannot be fully objective.

From this survey it is concluded that Panda3D will be most suitable for serving as virtual model. A large factor in this decision is the integration in Python code.

4.4 Module Description

The evaluation platforms use modules that can be reused. Each module of an evaluation platform can be seen in figure 7. In this section, the design choices for each module will be discussed.


(a) System with Virtual 3D environment; modules: Inertia Sensor, Sensors API, Socket Framework, Foot switch, Control module, Main module, Virtual 3D environment, HMD (LCD)

(b) System with a Robot; modules: Inertia Sensor, Sensors API, Socket Framework, Foot switch, Control module, Main module, Video processing, Robot, HMD (LCD)

Figure 7: System overview


                                          PyGame  PySoy  Panda3D  Ogre3D
Easy to implement in Python                  +      -       ++       -
Custom Models                                0      0       +        ++
Complexity match (underkill vs overkill)     -      +       ++       0
Stability                                    +      +       +        +
Community and Documentation                  ++     0       0        -

Table 1: Popular Python game engines compared [9, 10, 11]

4.4.1 Sensor

4.4.1.1 Sensor Types

There are different ways to detect the position of the head of the surgeon. The two most suitable for the HCSRATS are video analysis and inertial sensors. Video analysis can be done using infra-red reflecting markers or using face and object recognition. Both methods require a complex camera array inside an operating room. Moreover, these methods are not reliable enough, because such systems sometimes lose the infra-red markers or fail to recognize an object for some time.

An inertial sensor is more portable than a video-based tracking system. It uses the inertia of a mass and its displacement to calculate the (angular) acceleration. The (angular) position can be calculated from the acceleration by integrating twice. However, noise in the signal causes the integration steps to produce sensor drift. Drift is (partially) compensated using the magnetic field of the earth and the gravitational pull. Because the HCSRATS will not be controlled continuously and the system is controlled in a differential manner, a small drift is not a problem per se.

Because a video-based tracking system would be too hard to implement and not portable enough, it is decided that an inertial sensor is more suitable for this project. For the most reliable solution, a combination of inertial sensors and video tracking can be used; this combination is used in other HMD systems, where the video system only compensates for the drift of the inertial sensor [12]. Implementing both systems is beyond the scope of this project. For this project, solely an inertial sensor will be used.

4.4.1.2 Available Sensors

To select the proper inertial sensor, a survey was conducted. Four sensor candidates are shown in table 2. Recalling the requirements of section 3.1.2, a suitable sensor can be chosen.

The Xsens MTx-28A53G25 is chosen, mainly because this was the only sensor that was available. If the end result does not perform at a sufficient level, a more precise sensor can be ordered. However, since the MTx-28A53G25 meets the requirements, it should work sufficiently well for a proof-of-concept.

The MTx-28A53G25 has a C++ readout API. Because the main program will be written in Python, a connection between the sensor and the main program has to be made. There are two options: a Python extension or TCP sockets. The first option makes it possible to call C++ functions from a Python script. This method can be faster than sockets, and low latency is a system requirement, but it is more complex to implement. Sockets have higher latency, but give the ability to add abstraction layers to the data transport. The latter advantage comes in handy if, for example, the HMD and sensor have to be made wireless and the data has to be transported over Ethernet. The significance of the latency of a socket will determine whether a socket connection suffices.

For testing the latency of a socket connection, the socket framework ZeroMQ will be used, which makes implementing a socket easier and more reliable. This framework will hook up to the C++ sensor API and transport the data from this API to the main module of the HCSRATS.

The total latency of a ZeroMQ socket connection between two computers in a set-up with a 1 Gbps TCP infrastructure, a quad-core CPU and 8 GB RAM is 40 µs, of which 15 µs is caused by ZeroMQ [13, 14]. The other 25 µs is due to the network stack used. Since for the proof of concept the data will remain on the same computer, the performance should be equal to or better than 40 µs. The sensor update rate is approximately 100 Hz, so one cycle takes 10 ms. It would be wise to let the controller module run at the same frequency. In a controller cycle, a readout request is sent over ZeroMQ to the C++ sensor API and a response is sent back. So two messages are sent, which, assuming a C++ handling time of 20 µs, should take around 40 µs + 40 µs + 20 µs = 100 µs. This leaves 9.9 ms for other computations in the controller module.

With this result, it is concluded that sockets perform sufficiently quick to use in the HCSRATS.
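This kind of round-trip estimate can also be checked experimentally. The sketch below measures the mean request-response latency of a local TCP socket using the Python standard library rather than ZeroMQ, so it gives only a rough impression of what a socket round trip on one machine costs:

```python
import socket
import threading
import time

def echo_server(srv):
    # Stand-in for the sensor API: answer every readout request it receives.
    conn, _ = srv.accept()
    with conn:
        while True:
            data = conn.recv(64)
            if not data:
                break
            conn.sendall(data)

srv = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
srv.bind(("127.0.0.1", 0))                  # let the OS pick a free port
srv.listen(1)
threading.Thread(target=echo_server, args=(srv,), daemon=True).start()

cli = socket.socket(socket.AF_INET, socket.SOCK_STREAM)
cli.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)  # disable send batching
cli.connect(srv.getsockname())
cli.sendall(b"readout"); cli.recv(64)       # warm up the connection

n = 1000
start = time.perf_counter()
for _ in range(n):
    cli.sendall(b"readout")                 # request, as the controller would send it
    cli.recv(64)                            # response carrying the sensor data
rtt_us = (time.perf_counter() - start) / n * 1e6   # mean round trip in microseconds
cli.close()
```

On a modern machine this typically reports some tens of microseconds, consistent with the order of magnitude used in the calculation above.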


Xsens MTi-30 AHRS
  Accuracy roll: static typ. 0.2 / max 0.4 deg, dynamic typ. 0.5 / max 2.0 deg
  Accuracy pitch: same as roll; accuracy yaw: 1.0 deg
  Interfaces: RS232/RS485/422/UART/USB; output frequency: up to 2 kHz
  Typical std. dev.: 1 sigma RMS; max rotation speed: 450 deg/s

Xsens MTi-300 AHRS
  Accuracy roll: static typ. 0.2 / max 0.25 deg, dynamic typ. 0.3 / max 1.0 deg
  Accuracy pitch: same as roll; accuracy yaw: 1.0 deg
  Interfaces: RS232/RS485/422/UART/USB; output frequency: up to 2 kHz
  Typical std. dev.: 1 sigma RMS; max rotation speed: 450 deg/s

Xsens MTx-28A53G25
  Accuracy roll: static <0.5 deg/s, dynamic 2 deg RMS; angular resolution: 0.05 deg
  Accuracy pitch: static <1 deg, dynamic 2 deg RMS; accuracy yaw: same as roll
  Interfaces: USB/RS-232; output frequency: max 512 Hz
  Max rotation speed: 1200 deg/s

Intersense InertiaCube 4
  Accuracy roll: 0.25 deg; angular resolution: 0.01 deg RMS
  Accuracy pitch: 0.25 deg; accuracy yaw: 1 deg
  Interfaces: RS232/USB; output frequency: 200 Hz
  Typical std. dev.: unknown; max rotation speed: 2000 deg/s

Inertia Technology
  Accuracy roll: unknown; angular resolution: 0.06 deg/s
  Accuracy pitch: unknown; accuracy yaw: unknown
  Interfaces: WiFi, USB; output frequency: 200 Hz
  Typical std. dev.: unknown; max rotation speed: 2000 deg/s

Table 2: Inertial Sensor Comparison


4.4.1.3 Kalman Filter

Because all sensors have noise, in almost all set-ups a filter is used to improve the measurement data.

A popular filter algorithm is the Kalman filter. This filter uses the probability density functions (pdfs) of the measurements and predictions to determine the best estimate of the output. A Kalman filter can be used if the noise in a signal can be modelled as a Gaussian distribution. The prediction stage generates a Gaussian-distributed pdf for the current time step (t = 1 s) based on the physical limitations of the system and the previous time step (t = 0 s). The measurement stage also generates a pdf, but based on the output of the sensor at the current time step (t = 1 s). By combining both pdfs, a new pdf is obtained from which the solution with the highest probability can be determined [15].

To make this clearer, think of the following example. Imagine one DOF of the sensor as the position of a kart, see figure 8a. At t = 0 s, the state of the system can be described by a Gaussian probability density function (pdf) with reasonable accuracy. This pdf can be seen in figure 8b as the red Gaussian distribution. At the next time step (t = 1 s), the location can be estimated using physics and known limitations, such as the maximum possible acceleration and velocity. This can be described as

x_t = x_{t−1} + ẋ_{t−1}·Δt + ẍ_{t−1}·(Δt)²/2   (1)

ẋ_t = ẋ_{t−1} + ẍ_{t−1}·Δt   (2)

Formulas 1 and 2 are called the prediction stage. In these formulas, x_t is the predicted position, ẋ_{t−1} and ẍ_{t−1} are the predicted velocity and acceleration, respectively, at the previous time step, and Δt is the time step [15].

In figure 8d, the blue curve shows the pdf of the sensor. Most sensors are characterised, so the standard deviation of the noise (σ) is known. Using the unfiltered sensor output as the mean (µ), a pdf of the sensor data can be made. By multiplying both pdfs, figure 8e, a new pdf is calculated that describes the best estimate of the sensor output. This output is used as the "real" measured value of the sensor system.

By using more external factors, such as the magnetic field of the earth and the gravitational pull, the sensor output can be corrected even further. From these factors, the position or velocity can be deduced as well. By describing these as pdfs, they can be taken into account in the pdf multiplication (figure 8e) [15]. The Xsens sensors come with setting templates and can be tweaked at will.
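For a single DOF, the prediction and measurement stages above reduce to a few lines of code. The sketch below is a minimal one-dimensional Kalman filter for a near-static angle; the process and measurement noise variances are illustrative values, not characterised Xsens data:

```python
import random
import statistics

def kalman_1d(measurements, q=0.01, r=0.25):
    """Minimal 1-D Kalman filter.

    q: process noise variance (how much the true angle may move per step)
    r: measurement noise variance of the sensor (sigma squared)
    """
    x, p = measurements[0], 1.0      # initial estimate and its variance
    estimates = []
    for z in measurements:
        p = p + q                    # prediction stage: uncertainty grows
        k = p / (p + r)              # Kalman gain: how much to trust the sensor
        x = x + k * (z - x)          # measurement stage: combine both pdfs
        p = (1.0 - k) * p            # the combined pdf is narrower than either input
        estimates.append(x)
    return estimates

# A constant 30-degree angle observed through noise with sigma = 0.5 degrees.
random.seed(1)
noisy = [30.0 + random.gauss(0.0, 0.5) for _ in range(200)]
filtered = kalman_1d(noisy)
```

The filtered signal stays close to the true angle while its scatter is clearly smaller than that of the raw measurements, which is exactly the effect of multiplying the prediction and measurement pdfs.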

4.4.2 Control Module

The control module is fed with the data from the sensor API. There are multiple ways to control the virtual 3D environment or the robot. Each control mode has its own benefits and is typically suited for certain situations. Some control modes have a higher accuracy or make the end user less likely to experience sea-sickness symptoms. During the implementation it should be kept in mind that the control modes should be modular, so that other control modes can easily be added.

Figure 8: Kalman basic principles visualized. (a) One DOF kart with sensor relative to left wall. (b) Pdf of the kart at t = 0 s. (c) In red, the predicted pdf that describes the position of the kart at t = 1 s, based on the position of the kart at t = 0 s and the known physics of the kart. (d) In blue, the sensor output as pdf at t = 1 s. (e) In green, the pdf resulting from the multiplication of the pdfs of the prediction stage and the measurement stage. Source: [15]

The first control mode moves the camera with a constant velocity in the same direction as the sensor. If the sensor registers an angle in the positive or negative direction, the camera will move with a constant velocity in that direction. This is described using difference equations in equation 3. The sensor output is ŝ(t), Δt is the time-step size and c is a predefined gain constant that can be altered at will. The controller output f̂(t) is a vector containing the three DOF positions (yaw, pitch, roll) and Î is a unit vector. So, in words, the current position of the camera is the position of the camera at the previous time step plus a positive or negative constant. The sign of this constant depends on the sign of the sensor output.

f̂(t) = f̂(t−1) + c·Δt·sgn(ŝ(t))·Î   (3)

Controller 2 is similar to controller 1, but the angle of the sensor now determines the angular velocity. This is mathematically described in equation 4.

f̂(t) = f̂(t−1) + ŝ(t)·c·Δt   (4)

Controller 3 maps the position of the sensor to the position of the camera, given that the control gain constant is 1. This controller is the most natural way of controlling the camera position and would therefore be recommended if the surgeon gets sea-sick often. If the gain is larger than 1, for example 2, the camera will be at twice the angle of the sensor. With this, the surgeon only has to rotate his head 45° if the tip of the scope should be at 90°. This is mathematically described in equation 5.

f̂(t) = c·ŝ(t)   (5)
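Written out in code, the three control modes above differ only in how the sensor angle enters the difference equation. The sketch below applies one update step per DOF; the function names, gains and time step are illustrative:

```python
def sign(v):
    # sgn(): -1, 0 or +1 depending on the sign of the sensor angle
    return (v > 0) - (v < 0)

def controller_1(f_prev, s, c, dt):
    # Equation 3: constant velocity in the direction of the sensor angle.
    return [f + c * dt * sign(si) for f, si in zip(f_prev, s)]

def controller_2(f_prev, s, c, dt):
    # Equation 4: the sensor angle determines the angular velocity.
    return [f + si * c * dt for f, si in zip(f_prev, s)]

def controller_3(f_prev, s, c, dt):
    # Equation 5: the sensor angle maps directly to the camera angle.
    return [c * si for si in s]

# One update step with the head turned 10 degrees in yaw; s = (yaw, pitch, roll).
f0 = [0.0, 0.0, 0.0]
s = [10.0, 0.0, 0.0]
print(controller_1(f0, s, c=30.0, dt=0.01))   # yaw creeps by a constant step
print(controller_2(f0, s, c=1.0, dt=0.01))    # yaw speed proportional to the angle
print(controller_3(f0, s, c=2.0, dt=0.01))    # yaw jumps to twice the head angle
```

Controller 4 follows from these building blocks by applying equation 5 to yaw and pitch and an integrating mode to roll.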

Controller 4 combines controller 3 with controller 2. So far, the controllers were defined universally for all DOFs. However, sometimes it can be useful to control two DOFs with one control algorithm and the third with a different algorithm. A practical situation in which this can happen is if the surgeon does not want to alter the horizon constantly and wants position control for the other DOFs. A mathematical description of this can be found in equation 6.

f̂(t)_{yaw,pitch} = c·ŝ(t)_{yaw,pitch}
f̂(t)_{roll} = f̂(t−1)_{roll} + s(t)_{roll}·c·sgn(s(t)_{roll})·Δt   (6)

4.4.3 Foot Switch

To control the HCSRATS system, a foot pedal will be used that can activate or deactivate user parameters of the HCSRATS. A single pedal will be used which has three modes: one-click, double-click and hold. These inputs correspond to the commands lock yaw & pitch, lock roll and reset to home position, respectively. The lock yaw & pitch and lock roll commands are useful when the surgeon wants to move his head but leave the endoscope tip in the position it is currently in. The reset command will be used when the surgeon wants another position as the initial position.
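The three pedal inputs can be distinguished from raw press/release timestamps. The sketch below shows one possible classification with hypothetical timing thresholds: a press longer than 0.8 s counts as a hold, and two presses less than 0.3 s apart count as a double-click.

```python
HOLD_TIME = 0.8     # seconds: a press longer than this counts as "hold"
DOUBLE_GAP = 0.3    # seconds: two presses within this gap count as "double-click"

def classify_presses(events):
    """Classify pedal events given as (press_time, release_time) tuples.

    Returns a list of commands: 'reset' (hold), 'lock_roll' (double-click)
    or 'lock_yaw_pitch' (one-click). Thresholds are illustrative.
    """
    commands = []
    i = 0
    while i < len(events):
        press, release = events[i]
        if release - press >= HOLD_TIME:
            commands.append("reset")              # hold -> reset to home position
            i += 1
        elif i + 1 < len(events) and events[i + 1][0] - release <= DOUBLE_GAP:
            commands.append("lock_roll")          # double-click -> lock roll
            i += 2
        else:
            commands.append("lock_yaw_pitch")     # one-click -> lock yaw & pitch
            i += 1
    return commands

# A hold, then a double-click, then a single click:
events = [(0.0, 1.0), (2.0, 2.1), (2.25, 2.35), (3.0, 3.1)]
print(classify_presses(events))   # ['reset', 'lock_roll', 'lock_yaw_pitch']
```

The returned commands then drive the state machine shown in figure 9.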

4.4.4 HMD

As stated in the introduction, the HMD will be delivered by Cinoptics. The HMDs of Cinoptics behave as an external monitor to the Linux OS. This makes debugging easier, because an external LCD screen behaves in the same way as the HMD. A surgeon prefers to be able to see his surroundings without detaching the HMD; therefore, a see-through HMD is best suited. The state-of-the-art see-through glasses of Cinoptics are called the Airo II, see figure 10.


Figure 9: The behaviour of the foot switch (inputs: hold, one-click, double-click; states: all locked, roll active, yaw-pitch active, all active, reset). By default, the robot is locked. By pressing the foot switch, the state of the control module can be changed.

Figure 10: Cinoptics’ Airo II, source: [16]

4.4.5 3D test environment

As it was concluded in section 4.3, Panda3D is the most suitable virtual 3d framework. The basic usage of Panda3d is shown in listing 1. For the full Python code see appendix 8.3.

 1
 2
 3   def LoadTerrain(self):
 4       self.counter = loader.loadModel('models/body5')
 5       self.counter.reparentTo(render)
 6       base.setBackgroundColor(0, 0.0, 0.0)
 7   def LoadLight(self):
 8
 9       plight = AmbientLight('my plight')
10       plight.setColor(VBase4(0.12, 0.12, 0.12, 1))
11       plnp = render.attachNewNode(plight)
12       render.setLight(plnp)
13
14   def updatePosition(self):
15       self.camera_yaw, self.camera_pitch, self.camera_roll = self.controller.getPosition()
16
17   def spinCameraTask(self, task):
18       self.updatePosition()
19       base.camera.setHpr(self.camera_pitch, self.camera_yaw, self.camera_roll)
20
21       return Task.cont

Listing 1: Panda3D Python basic setup

Lines 3 to 6 define the model to be loaded. Lines 7 to 12 set up the lighting. On line 14, the function that gets the position from the control module is defined; it stores the result in member variables so that it can be used later by other functions. On line 17, the function spinCameraTask is defined. This function is called every time a frame is generated, and in turn calls the updatePosition() function.

With this set-up, the frame rate does not influence the iteration speed of the controller.

4.4.6 Prototype Robot

For the prototype robot, Dynamixel AX-12+ modules are chosen as building blocks. The modules have their own PID controller and can be controlled via a TTL connection. In the set-up of the HCSRATS, a USB-to-TTL converter is used to control the Dynamixel modules. The AX-12+ can be daisy-chained, so multiple actuators can be connected via one USB connection.

The AX-12+ has a maximum stall torque (peak torque) of 0.0153 N·m and approximately 0.003 N·m continuous torque. This is sufficient to let the lower module support an additional motor plus the weight of a USB camera [17].

The Dynamixel AX-12+ modules have built-in PID controllers. The parameters of the controller can be altered via the development kit of Dynamixel. As can be seen in figure 11b, the AX-12+ has a valid working range of 300°. The home position of the AX-12+ is at 150°. The AX-12+ is controlled by setting a set point measured in steps; the home position corresponds to step 512. Equation 7 describes the conversion from angle (α, relative to the home position) to Dynamixel step location:

S = 512 (1 + α/150)        (7)
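A minimal sketch of this conversion (the function name and the range check are ours):

```python
def angle_to_step(alpha_deg):
    """Convert a joint angle in degrees (relative to the 150-degree home
    position) to an AX-12+ step set point, per equation 7:
    S = 512 * (1 + alpha / 150). Angles of -150..+150 degrees map onto
    steps 0..1024 (as the equation is given in the report)."""
    if not -150.0 <= alpha_deg <= 150.0:
        raise ValueError("angle outside the AX-12+ working range")
    return round(512 * (1 + alpha_deg / 150.0))
```

For example, the home position (α = 0°) yields step 512, and α = 75° yields step 768.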

4.4.7 Teleflex

As discussed in section 3.2 the TeleFlex has a mount for an endoscope to control the position of

the tip of the scope. The control module can be seen in figure 6 page 14. The mount is indecated

with ”a”. The TeleFlex has stepper motors build in. Those motors have a PID controller and

setpoints can be given in motorsteps, just like the Dynamixel modules. To control the TeleFlex

the relation between motorsteps and angle of the tip of the scope should be determined. A model

of the gears and gyrators is shown in figure 12. In an ideal situation, the rotation of the stepper

motor in figure 12 linearly alters the position of the tip. In reality, effects such as slip-stick,

non-linearity, flexibility and friction let the system behave in a different way. However, modelling

these effects is beyond the scope of this project.


Figure 11: Dynamixel AX-12+, source: [17]. (a) Position Limitations. (b) Stepper Motor.

4.4.7.1 Scope Characterisation Setup

The relation between rotation and translation of a belt pulley can be described by

d = α π r / 180        (8)

Combining all pulleys and gears, the linear relation between the rotation of the stepper motor and the position of the tip can be described by

α_tip = α_motor · C_trans · (r_pul / r_pul1) · C_trans · (r_pul2 / r_scopetip)        (9)

Then, by inserting the relation between stepper motor steps and the angle of the axis of the motor, the relation between the angle of the tip and the TeleFlex input is found.

α_tip = step · C_angle/step · C_trans · (r_pul / r_pul1) · C_trans · (r_pul2 / r_scopetip)        (10)

To find the relation described by equation 10, an experiment with an Olympus CF-H180AL colonoscope will be done. Since there was no flexible thoracoscope at hand, a colonoscope is used as a reference. A colonoscope is a type of endoscope that is relatively thin and therefore behaves in a similar way to a thoracoscope. In the experiment, the relation between input step and output angle will be determined. Equation 10 then simplifies to one with only a single constant between angle and stepper motor step.

Figure 12: 20 SIM model of the TeleFlex and endoscope steering system


5 Results

In this section, the final results of the HCSRATS project are presented. It uses the measurement plans and designs from the previous chapters to construct the HCSRATS. The performance of each part is also presented. The next chapter will draw conclusions from these results.

5.1 Sensor

The C++ API for the sensor was used to send the output over sockets to the main module. Its performance will be discussed with the virtual 3D evaluation.

The Kalman filter present in the embedded chip of the sensor has a preset for working with human head movement. This model was tweaked by trial and error to improve drift compensation; however, this had little to no effect.

The sensor runs at a refresh rate of 100 Hz. At this speed, a thread under Ubuntu is loaded at about 80% to 90%. This does not happen on Windows. Since the laptop in this set-up has 8 threads available, this does not influence the performance, but it could be a problem when this read-out has to be done on embedded chips.

5.2 Foot switch

To lock the position of the HCSRATS, the foot switch can be used. This foot switch behaves like a normal push-button; an image of it can be seen in figure 13. The switch is connected via an Arduino Uno to the computer on which the HCSRATS software is running. Via a serial connection, the Arduino can send over three states: one-click, double-click and long-press.

The status of these parameters can be seen on the HMD, see figure 16c.

Figure 13: The foot switch used in the HCSRATS project.

5.3 Main Module

When the program is started, its parameters can be chosen via a graphical user interface (GUI). These parameters are: the desired controller, the output mode (virtual 3D, prototype robot, TeleFlex), the controller gain and the dead-zone correction. This can be seen in figure 14.

When the parameters are chosen, the program can be run. When that happens, a thread is created for each time-independent module. With this, the controller, sensor read-out and graphical processing are done in separate loops that can run at different speeds. This makes it easier to use timers or time-dependent equations and prevents blocking.

Figure 14: The begin GUI to select the control module, output type, controller gain constant and dead zone adjustment.
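The thread-per-module idea can be sketched as follows. The loop rates and the stand-in callbacks are our own illustration; in the real system these would be the sensor read-out, controller step and graphics update.

```python
import threading
import time

def run_periodic(fn, hz, stop):
    """Call fn repeatedly at roughly `hz` iterations per second until the
    stop event is set: one sketch of a time-independent module loop."""
    period = 1.0 / hz
    while not stop.is_set():
        fn()
        time.sleep(period)

# Hypothetical stand-ins for the real module callbacks.
counts = {'sensor': 0, 'control': 0}

def read_sensor():
    counts['sensor'] += 1

def control_step():
    counts['control'] += 1

stop = threading.Event()
threads = [threading.Thread(target=run_periodic, args=(read_sensor, 100, stop)),
           threading.Thread(target=run_periodic, args=(control_step, 50, stop))]
for t in threads:
    t.start()
time.sleep(0.25)   # let both loops run briefly
stop.set()
for t in threads:
    t.join()
```

Because each module sleeps for its own period, a slow loop (such as rendering) cannot block a fast one (such as the 100 Hz sensor read-out).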

5.4 Virtual 3D Environment

5.4.1 Model

In the 3D modelling program Blender, a human body was modelled with a camera placed in the thorax. This model is used to evaluate the behaviour of the HCSRATS system and is not medically accurate. In this model, the right lung is left out so that the heart, liver and left lung are visible. Figure 15a shows the construction in Blender. After construction, the model is exported to Panda3D. Using the HMD, one is able to look around in the thorax model. A screenshot is shown in figure 15b.

5.4.2 Performance

The performance of the virtual 3D environment meets the requirements. The response is quick and the model is clearly visible. This makes it a proper evaluation tool for the HCSRATS.

When the camera is moved quickly, a vertical misalignment can be seen. The exact cause of this could not be found. A possible explanation is that the frame rate of the engine is too low, or that the monitor refresh rate does not match that of the Panda3D engine.

In this model, drift of the sensor can be noticed. This drift is only noticeable when the HMD is laid perfectly still on, for example, a table. The sensor corrects itself again after the HMD is moved; when the sensor is in use, the drift is not noticeable. The performance of this system is sufficient, so development of the next model was started.

Figure 15: Pictures of the human 3D model. (a) Construction of the model in Blender. (b) Live view of the Panda3D output.

5.5 Prototype Robot

5.5.1 Robot

The constructed prototype robot can be seen in figure 16b. The AX-12+ modules were connected to each other and mounted on top of a mass. This holds the modules in place, also at higher angular velocities. Because the modules can draw up to 900 mA at 12 V, a power supply feeds both Dynamixel modules.

5.5.2 Camera

The camera chosen for the prototype robot is a USB webcam that can be mounted on the Dynamixel modules. The webcam feed is processed by OpenCV to generate an image similar to the output of an endoscope. OpenCV also adds the third degree of freedom: roll. Using the angle calculated in the control module, OpenCV rotates the camera output. This can also be used by the TeleFlex.
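The roll rotation amounts to applying an affine rotation about the image centre. The sketch below builds the same 2×3 matrix that OpenCV's cv2.getRotationMatrix2D returns (using only the standard library, so it is self-contained); in the project, cv2.warpAffine would then apply it to each frame. The function name is ours.

```python
import math

def roll_rotation_matrix(center, angle_deg, scale=1.0):
    """Build the 2x3 affine matrix for rotating an image by angle_deg
    (counter-clockwise) around `center`, matching the matrix layout of
    cv2.getRotationMatrix2D(center, angle, scale)."""
    cx, cy = center
    a = scale * math.cos(math.radians(angle_deg))
    b = scale * math.sin(math.radians(angle_deg))
    # cv2 convention: [[a, b, (1-a)*cx - b*cy], [-b, a, b*cx + (1-a)*cy]]
    return [[a, b, (1 - a) * cx - b * cy],
            [-b, a, b * cx + (1 - a) * cy]]
```

In the real pipeline one would pass the control module's roll angle for each frame, e.g. `cv2.warpAffine(frame, M, (w, h))` with `M = roll_rotation_matrix((w/2, h/2), roll_deg)`.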

5.6 Performance

Compared to the virtual model, two main differences can be noticed by the end user. First, the robot has inertia and therefore responds more slowly to the control module's output. Second, the camera has latency, which also contributes to the latency of the whole system. Despite the increase in latency, the system still behaves according to the specifications.

An important result of this model was the awareness of the presence of a valid working range. That the AX-12+ modules have a working range of 300° was known. However, if one would cross from −150° through −180° to +150°, the camera would move over its entire valid range to the other maximum. This means that the system runs at full speed to get to its other extreme. This action is not a command of the surgeon and can harm the patient. Therefore, a protection mechanism should be built in to prevent this from happening.
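Such a protection could be as simple as clamping every set point to the actuator's valid range, so a sweep through the dead zone can never be commanded. This is a sketch of one possible mechanism (the limits follow from the AX-12+'s 300° range; the function name is ours):

```python
def protect_setpoint(angle_deg, lo=-150.0, hi=150.0):
    """Clamp a requested camera angle to the actuator's valid range so a
    wrap-around from -150 to +150 degrees (through the dead zone) is never
    commanded: a sketch of the proposed protection mechanism."""
    return max(lo, min(hi, angle_deg))
```

A request just outside the range is held at the nearest limit instead of triggering a full-range sweep.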


5.7 TeleFlex

The connection between the HCSRATS system and the TeleFlex runs over sockets. For these sockets, the ZeroMQ framework is used again. The ZeroMQ framework makes it easy to connect the HCSRATS to the TeleFlex system. This also makes it possible to run the TeleFlex system apart from the HCSRATS system, by letting the two systems communicate via a LAN set-up.
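The project itself uses ZeroMQ for this link; as a self-contained stand-in, the sketch below passes one set-point message over a local UDP socket using only Python's standard library. The JSON message format is our own assumption, not the project's wire format.

```python
import json
import socket

# A UDP socket pair on localhost stands in for the ZeroMQ link between the
# HCSRATS control module and the TeleFlex (message format is hypothetical).
recv_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
recv_sock.bind(('127.0.0.1', 0))          # let the OS pick a free port
recv_sock.settimeout(2.0)
addr = recv_sock.getsockname()

send_sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
setpoint = {'yaw': 12.5, 'pitch': -3.0, 'roll': 0.0}
send_sock.sendto(json.dumps(setpoint).encode(), addr)

data, _ = recv_sock.recvfrom(1024)
received = json.loads(data.decode())
send_sock.close()
recv_sock.close()
```

Because the two endpoints only share a network address, the same structure lets the TeleFlex side run on a separate machine over the LAN, as described above.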

5.7.1 Scope Characterisation

The Olympus CF-H180AL colonoscope is taken as an example to demonstrate the working of the HCSRATS. Equation 10 showed the relation between stepper motor steps and the angle of the tip. By taking all constants in equation 10 together as one constant, the relation simplifies to:

α_tip = step · C_total        (11)

The colonoscope that is used has an operating range from −110° to +110°. This corresponds to stepper motor counts of −140000 and +140000. Using increments of 20000, the experiment was conducted over the whole valid range four times. The measurement data can be found in appendix 8.1. The relations for the horizontal and the vertical movement are given in equations 12a and 12b respectively. These relations were found by fitting a linear model through the measurement data of the experiment. The data shows deviation, which is thought to be due to hysteresis.
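Because equation 11 has no intercept, the fit reduces to a through-origin least-squares estimate, which can be sketched in a few lines (the function name and the sample data are ours, not the report's measurements):

```python
def fit_through_origin(xs, ys):
    """Least-squares slope c of the model y = c * x (no intercept), as
    used to fit the step-to-angle relation: c = sum(x*y) / sum(x*x)."""
    return sum(x * y for x, y in zip(xs, ys)) / sum(x * x for x in xs)
```

Applied to the table in appendix 8.1, xs would be the stepper motor counts and ys the measured tip angles of all four runs combined.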

α_tip = step / 1211        (sideways)        (12a)
α_tip = step / 1204        (up/down)         (12b)
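Using the fitted constants, roughly 1211 and 1204 motor steps per degree, the angle-to-step conversion for the TeleFlex could be sketched as follows (constant names and the range check are ours):

```python
# Fitted constants from the characterisation experiment: motor steps per
# degree of tip deflection, for the two bending directions.
STEPS_PER_DEG = {'sideways': 1211.0, 'updown': 1204.0}

def tip_angle_to_steps(angle_deg, axis):
    """Convert a desired tip angle to a TeleFlex stepper motor count using
    the fitted linear relation (equation 12); valid for -110..+110 deg."""
    if not -110.0 <= angle_deg <= 110.0:
        raise ValueError("angle outside the scope's operating range")
    return round(angle_deg * STEPS_PER_DEG[axis])
```

For example, a sideways tip angle of 110° corresponds to 133210 steps, close to the ±140000 end stops used in the experiment.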

5.7.2 Performance

The socket connection between the TeleFlex and HCSRATS worked according to plan; it is reliable and fast. In the endoscope model, only ideal components are used. However, the scope is quite elastic, so a set point will not be reached as quickly as with the prototype robot. Since this lag only occurs due to the elasticity of the scope, it behaves as a surgeon would expect.

Using equation 12, the angle calculated by the control module is converted into stepper motor counts for the TeleFlex.

The video feed of the endoscope camera can be sent to the HCSRATS via FireWire. However, the latency of this feed is in the order of 1 s, which is too large to work with. By adjusting the buffer size this latency can be lowered, but the connection becomes less stable. An optimum was found that has a workable latency and provides enough stability. Alternatively, the HMD can be connected directly to the endoscope set-up using a direct HDMI output; however, one would then lose the OpenCV rendering and with that the third DOF.


Figure 16: Prototype robot result. (a) AX-12+ 2-DOF robot. (b) Prototype robot with camera. (c) Video output after OpenCV processing.


6 Discussion

This chapter reflects on both the process and the contents of this research. Matters that could have been improved, or that have to be taken into account in future projects, are listed in the following sections.

6.1 Time Management

Since a lot of hardware was involved in this project, quite some time was spent on getting all the devices to work. Especially compiling software libraries and packages took time; in particular, the lack of documentation or open-source support contributed to this. During this project, a contribution to the Xsens sensor software was made. The Makefile for the Linux C++ example files contained invalid compiler options. By analysing the Makefile of Xsens, the bug was found: it turned out that the order of the parameters passed to the gcc compiler matters. By putting the link libraries at the end of the compiler command, the Xsens C++ software did compile.

6.2 Video Lag

For the HCSRATS system to behave optimally, all parts of the system should perform optimally; each module that has latency decreases the overall performance. As was shown with the virtual 3D environment, there is no significant latency noticeable in the sensor read-out, main module or control module.

In the prototype robot and also in the TeleFlex implementation, the camera latency is the main source of latency, but it is sufficiently low for this proof of concept. By using dedicated hardware for video processing, such as an FPGA or a GPU capable of parallel computing, this latency could possibly be lowered. Implementing this would be a future project.

6.3 Discipline Synergy

This project is a synergy of different disciplines: modelling in Blender and SolidWorks, software development and medical knowledge are all needed to construct such a product. Getting familiar with these tools takes up a lot of time. Future participants of the HCSRATS project should realize that this is a multidisciplinary project; working on such a project is challenging.


7 Conclusion and Suggestions

In this project it was shown that a surgeon is able to use an HMD to control an endoscopic robot. However, before this robot enters an operating room, further research has to be done. A proof of concept was made and the basics for a working system were laid. The HCSRATS is able to control the TeleFlex and steer the tip of the endoscope. The design of the HCSRATS is modular and can be easily expanded. Using a clear, object-oriented structure, future work can be implemented easily. Choosing the right control module is a personal preference, and writing a custom control module is therefore made easy.

The sensor used in this project satisfies the criteria stated in the problem definition. However, its drift makes it hard to control the system perfectly. The end user is advised to control the system and, when done, lock the position. When the lock is released again, the drift of the sensor is automatically compensated by looking at what the position was when the system was locked. This way, drift occurs in the background but is not noticeable by the end user.
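This lock-based drift handling can be sketched as a small re-referencing class: whatever the sensor reading changed while the system was locked is booked as drift offset, not as a command. The class and method names are ours; a full implementation would track all three DOFs.

```python
class DriftCompensator:
    """Re-reference the sensor on unlock so drift accumulated while the
    system was locked stays invisible to the user (single-axis sketch)."""

    def __init__(self):
        self.offset = 0.0
        self.locked_at = None

    def lock(self, sensor_angle):
        """Remember the raw sensor reading at the moment of locking."""
        self.locked_at = sensor_angle

    def unlock(self, sensor_angle):
        """Anything that 'moved' while locked is treated as drift."""
        self.offset += sensor_angle - self.locked_at
        self.locked_at = None

    def corrected(self, sensor_angle):
        """Drift-compensated angle fed to the control module."""
        return sensor_angle - self.offset


comp = DriftCompensator()
comp.lock(10.0)      # surgeon locks at 10 degrees
comp.unlock(12.5)    # 2.5 degrees of drift accumulated while locked
```

After the unlock, the corrected reading at the raw angle 12.5° is again 10.0°, so the camera does not jump when control resumes.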

Future work includes hysteresis compensation and reducing the video latency. As mentioned before, at the time of writing this report, research is being done on minimizing the hysteresis in the endoscope. Decreasing video latency can be done in multiple ways, for example with a dedicated FPGA or a GPU. The OpenCV project supports GPU calculations, but this requires knowledge of parallel processing.

Besides improving the HCSRATS, the following ideas could contribute to better overall technical support for the medical staff.

• The virtual environment with the virtual reality glasses can be used to train medical staff. Via this easy-to-access tool, a medical student can get familiar with working with an endoscope and interpreting video from these scopes. For this, a more medically accurate model should be generated.

• During a visit to a VATS mini-Maze surgery, it was noticed that the operating room is full of cables and wires. Part of these cables are digital, part are analogue, and the rest are tubes that supply substances to the patient's body. There is a real risk of tripping over these wires. Analogue or digital signal cables could be replaced by standardized cables, for example an Ethernet implementation. The HCSRATS uses sockets and is fully compatible with such a solution. For a full report of the VATS mini-Maze surgery visit, see appendix 8.2.

• During a discussion with Stefano Stramigioli, additional control modes were thought of. A mode in which the first 30 degrees would be position control, and after that constant velocity control, would give the surgeon the ability to look around with the endoscope with less head movement. This mode should be implemented as future work.
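One iteration of this suggested mode could look like the sketch below. The 30-degree threshold comes from the discussion above; the gain, maximum velocity and function name are our own assumptions.

```python
def hybrid_step(head_deg, cam_deg, dt, k=1.0, vmax=20.0, threshold=30.0):
    """One controller iteration of the suggested mode: within +/-30 degrees
    of head rotation the camera angle follows the head (position control);
    beyond that it keeps moving at a constant velocity vmax [deg/s].
    Gains k and vmax are hypothetical, not from the report."""
    if abs(head_deg) <= threshold:
        return k * head_deg                      # position control
    direction = 1.0 if head_deg > 0 else -1.0
    return cam_deg + direction * vmax * dt       # constant-velocity control
```

With a small head rotation the camera tracks it directly; holding the head past 30° makes the camera sweep on at vmax, so large endoscope excursions need only a modest head pose.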


References

[1] Gart de Bruin. Endoscope control by head movements applied to minimally invasive surgery. April 2010.

[2] J.E.N. Jaspers. The use of a head-mounted display during video-assisted thoracoscopic surgery. June 2014.

[3] Wikipedia. Control theory. http://en.wikipedia.org/wiki/Control_theory. [Online; accessed 19-5-2015].

[4] Wikipedia. Hysteresis. http://en.wikipedia.org/wiki/Hysteresis. [Online; accessed 8-5-2015].

[5] Jeroen Gerard Ruiter. Robotic flexible endoscope, proefontwerp. September 2013.

[6] Python Software Foundation. About. https://www.python.org/about/, 2015. [Online; accessed 25-4-2015].

[7] Quora. What are the advantages of Python over C++? http://www.quora.com/What-are-the-advantages-of-Python-over-C++, April 15, 2012. [Online; accessed 25-4-2015].

[8] Debian Foundation. Benchmark C++ vs Python. http://benchmarksgame.alioth.debian.org/u32q/benchmark.php?test=all&lang=python3&lang2=gpp&data=u32q, April 2015. [Online; accessed 28-4-2015].

[9] The Monkey Project. 3D engine roundup. https://themonkeyproject.wordpress.com/2011/05/11/3d-engine-roundup/. [Online; accessed 24-4-2015].

[10] mansu. Panda3D vs Ogre3D. http://mansu.livejournal.com/23554.html, January 3, 2006. [Online; accessed 24-4-2015].

[11] Reddit. Pygame, or something else? http://www.reddit.com/r/Python/comments/15lz1m/pygame_pyglet_something_else_entirely/. [Online; accessed 24-4-2015].

[12] Oculus Rift. DK2. https://www.oculus.com/dk2/. [Online; accessed 29-4-2015].

[13] ZeroMQ. ØMQ (version 0.3) tests. http://zeromq.org/results:0mq-tests-v03. [Online; accessed 4-5-2015].

[14] ZeroMQ. InfiniBand tests (version 2.0.6). http://zeromq.org/results:ib-tests-v206. [Online; accessed 4-5-2015].

[15] Ramsey Faragher. Understanding the basis of the Kalman filter. Lecture notes, Systems Advanced Technology Centre, United Kingdom, September 2012.

[16] Cinoptics. Airo II 3D render. http://cinoptics.com/product/airo-ii/. [Online; accessed 19-5-2015].

[17] Dynamixel. Dynamixel AX-12A robot actuator. http://www.trossenrobotics.com/dynamixel-ax-12-robot-actuator.aspx. [Online; accessed 4-5-2015].


8 Appendices

8.1 Measurement results of the TeleFlex + colonoscope characterisation

input [steps]    sideways angle [deg]        up/down angle [deg]
                 A     B     C     D         A     B     C     D
-140000         -100  -100  -110  -110      -110  -110  -100  -100
-120000         -100   -80  -100   -90      -110   -90  -100   -80
-100000          -90   -60   -95   -70      -100   -70   -90   -55
 -80000          -70   -40   -80   -40       -90   -55   -70   -35
 -60000          -60   -20   -60   -25       -70   -30   -50   -20
 -40000          -30    -5   -30   -10       -50   -15   -30    -5
 -20000          -20     0   -20     0       -25    -3   -15     5
      0            0    10     0    10         0    10     0    20
  20000            5    25     5    30         3    20     7    30
  40000           20    45    15    45        15    35    25    50
  60000           40    60    30    70        35    60    40    60
  80000           60    90    60    90        50    85    60    90
 100000           80   100    80   100        70   100    80   105
 120000          100   105   100   105        90   110    90   110
 140000          110   110   110   110       110   110   110   110

Table 3: Measurement results of the TeleFlex + Olympus CF-H180AL colonoscope (runs A-D)


Figure 17: Linear fits through the measurement results of the TeleFlex + Olympus CF-H180AL colonoscope. Two panels plot angle [deg] against stepper motor count: movement sideways and movement up/down, corresponding to equations 12a and 12b.


8.2 Report of a visit to a VATS mini-Maze operation

author: Rob Kers year: February 2015

On the 10th of February, my daily supervisor Foad Farimani and I visited a VATS mini-Maze operation. In this medical operation, the patient is treated for a heart condition called "atrial fibrillation". With this condition, signals coming from the veins disturb the natural sinus rhythm, so the heart contracts in a disordered way and becomes less efficient. The condition can be treated by making scar tissue on the part where the veins coming from the lungs enter the heart.

The reason for this visit is the fact that I will do my Bachelor assignment on using a flexible endoscope in combination with augmented reality glasses. In this report I describe what I saw and noticed.

Busyness: The very first thing we saw when the patient came in is that the number of people inside the operating room is high. Their ways of working and their presence do not interfere with each other per se, but one really has to know what one's function is.

Cables: Cables were all around the patient. Most of the cables attached to a sensor were connected to the equipment via a cable tree located underneath the operating table. There were also tubes used to feed the patient oxygen or medicine, which do not run via a cable tree; these run directly from the equipment to the patient. The tubes are impractical, because people can trip over them. This actually happened during the operation.

LCD screen: In this type of operation, two LCD screens are needed: one for the surgeon and assistant and one for the crew. These LCD screens hang from the ceiling. In stressful situations the presence of a monitor was sometimes forgotten, and one could easily bump one's head against the screen.

In short, I have to think about the following problems regarding the design of my system:

• The chest and the heart of the patient are moving. The assistant that holds the thoracoscope compensates for this. Using a table-supported mount for the endoscope will not compensate for this; a flexible endoscope might need less compensation. The movement of the chest can be compensated by using a patient-supported endoscope mount.

• In case of a power outage, the device that has to be made should remain online or restore its original state very quickly.

• Lots of time is wasted on cleaning the endoscope from moisture and other material on the lens of the camera.

• Although the surgeon is trained on hand-eye misalignment, using an HMD would make operating much easier.

• Most of the devices used are in Dutch. This could be because, in a stressful moment, communicating in a language other than your native language can be hard.

I noticed the following things that are off topic:

• Shaving of the patient's chest hair is done in the operating room and not beforehand.

• The crew of this operation was really interested in fusing detailed non-real-time images with less detailed real-time images. This might be interesting for other projects.


8.3 Source Code

Since more than a thousand lines of code were written for this project, the source code is not included on paper. The full code can be downloaded from the RAM git repository at https://git.ce.utwente.nl/hcsrats. The source code is located in the hcsrats main folder. The structure of the code follows the structure defined in 7. The main.py file contains the main program. All separate modules are located in the lib folder.

The main program can be run with "sudo python main.py". Sudo is needed since, by default, a normal user does not have access to all cameras and input devices. To reconfigure the Dynamixel modules, one can run "sudo python main.py -c"; the "-c" option starts the Dynamixel configuration wizard.

The sensor read-out program can be found in the Sensor folder and is called main.cpp. The compiled version is also supplied and can be run using "sudo ./main".


8.4 SolidWorks Dynamixel Webcam Mount

This mount would have been used to attach the webcam to the Dynamixel robot. However, during this project a better camera became available, so this mount was no longer necessary. It can be used in future work.

Figure 18: SolidWorks webcam mount (technical drawing "webcammount1extrude"; dimensions in millimetres, scale 1:2, A4, sheet 1 of 1)
