
Faculty of Electrical Engineering, Mathematics & Computer Science

Design of haptic feedback

in pedal based UGV teleoperation to enhance situation awareness

Nils Rublein

B.Sc. Thesis Creative Technology July 2019

Supervisors:

dr. ir. D. Dresscher dr. ir. E. C. Dertien

Robotics and Mechatronics Group

Faculty of Electrical Engineering,

Mathematics and Computer Science

University of Twente

P.O. Box 217

7500 AE Enschede

The Netherlands


This assignment was carried out in cooperation with i-Botics, a research center for telerobotics at the University of Twente. The goal of i-Botics is to enhance robotic sensing and remote perception for telerobotics. Telerobotics is the control of a robot in a remote environment, combining the cognitive abilities of a human with the physical abilities of a robot. This study set out to investigate the effect of haptic feedback based on obstacle presence in remote UGV control on the situation awareness (SA) of the remote operator. Situation awareness is a key factor in telerobotics, as it greatly influences the performance of the human operator and the overall success of the mission. Research has shown that haptic feedback can complement visual information in the context of obstacle avoidance systems and improve the SA of the operator by providing haptic cues indicating the distance and direction to obstacles in the remote environment.

The system used in this assignment comprises two main parts: a robotic platform equipped with mecanum wheels and a robotic arm, and a cockpit that provides audio, video and haptic feedback to guide the operator. The cockpit is furthermore equipped with a pair of pedals that steer the platform and generate haptic feedback based on the distance to an obstacle. The goal of this assignment was to integrate the two systems, to provide haptic feedback based on the presence of obstacles around the robotic platform to guide the operator’s decision making, and to evaluate the effect of haptic feedback in the context of situation awareness.

In the analysis, a conceptual foundation for the implementation of the system and for answering the main research question is presented. First, the state of the art of situation awareness is discussed and suitable techniques for evaluating situation awareness in the context of telerobotics are proposed. Second, the current state of the system is described and evaluated, providing a basis for the final realization of the system and possible improvements to individual system components. Third, the theoretical design of an obstacle detection & processing system is discussed, which serves as the foundation for generating haptic feedback. Fourth, based on the previous sections, the necessary steps towards a fully integrated system that can be used to evaluate the effect of pedal-based haptic feedback on SA are discussed.

In the design & implementation chapter, the implementation of the finalized system is discussed. The final system comprises two main parts: a robotic platform that scans the remote environment for obstacles, and a cockpit that provides haptic feedback via a pair of pedals which also steer the platform. Communication between these two main components is discussed first; afterwards, the platform and the cockpit are examined individually.

After the system is fully integrated, its functionality is evaluated and a user test is conducted. First, the haptic feedback controller is evaluated for stability & robustness, and a suitable set of spring constants for the user tests is sought. Second, the obstacle detection & processing system is evaluated, as it serves as the foundation of the haptic feedback. Third, user tests are conducted to assess the SA of an operator steering a robotic platform in a remote environment guided by the haptic feedback. Due to time constraints and technical issues, only one spring constant is explored. The tests are therefore carried out with a between-subjects approach, evaluating a control group without haptic feedback and an experimental group with haptic feedback. The situation awareness of the participants is evaluated by means of the SART questionnaire as well as an additional set of metrics based on the categories navigation, perception and manipulation.

The results of the user tests show no statistically significant difference in the effect of haptic feedback on the situation awareness of the operator between the experimental group and the control group. However, participants of the control group completed the user tests in significantly less time and also showed on average higher scores in the SART questionnaire, indicating a higher level of situation awareness.

In conclusion, despite the efforts made to understand the influence of haptic feedback in pedal-based UGV control on the SA of a human operator, more research is needed to understand how well haptic feedback guides the decision making of the operator and how it should be designed in order to optimally enhance SA. The implementation of haptic feedback for the particular telerobotic system of this study does indicate a higher level of SA, but should be explored in more depth by revising the user studies as described in the analysis in order to obtain more meaningful and significant data.


Summary iii

1 Introduction 1

1.1 Context . . . . 1

1.2 Project goal . . . . 1

1.3 Related work . . . . 2

1.4 Report organization . . . . 3

2 Analysis 5

2.1 Situation awareness . . . . 5

2.1.1 Definition of Situation Awareness . . . . 5

2.1.2 Potential harmful factors for Situation Awareness . . . . 6

2.1.3 Metrics for Situation Awareness in telerobotics . . . . 6

2.1.4 Evaluation techniques for Situation Awareness . . . . 7

2.2 Current state of the system . . . . 8

2.2.1 Hardware . . . . 9

2.2.2 Software . . . . 10

2.3 Obstacle localization . . . . 16

2.3.1 Area of interest . . . . 16

2.3.2 Sensor configuration and coordinate transformation . . . . 17

2.3.3 Collision area . . . . 19

2.3.4 Path length calculation . . . . 20

2.4 Evaluations and Considerations . . . . 26

2.4.1 Haptic Feedback . . . . 26

2.4.2 Obstacle detection & processing . . . . 26

2.4.3 User testings . . . . 27

3 Design and Implementation 29

3.1 Communication . . . . 29

3.2 Platform . . . . 31

3.3 Cockpit . . . . 32

3.3.1 PC . . . . 32

3.3.2 RaMstix . . . . 33


3.3.3 Pedals . . . . 33

4 Results & Discussion 35

4.1 Controller . . . . 35

4.1.1 Goal . . . . 35

4.1.2 Design . . . . 35

4.1.3 Results . . . . 36

4.1.4 Discussion . . . . 39

4.2 Obstacle Detection & Processing . . . . 40

4.2.1 Goal . . . . 40

4.2.2 Design . . . . 40

4.2.3 Results . . . . 41

4.2.4 Discussion . . . . 42

4.3 Situation Awareness . . . . 43

4.3.1 Goal . . . . 43

4.3.2 Design . . . . 43

4.3.3 Results . . . . 47

4.3.4 Discussion . . . . 49

5 Conclusion 51

6 Recommendations 55

6.1 Controller . . . . 55

6.2 Obstacle detection & processing . . . . 55

6.3 User testing . . . . 55

6.4 Usability issues . . . . 56

References 57

Appendices

A Appendix 61

A.1 PD controller tests by Meijer . . . . 61


1 Introduction

1.1 Context

This assignment is a continuation of the work of Sierd Meijer [1] and Fabian van Hummel [2] and is carried out in cooperation with i-Botics, a joint research and development center founded by TNO and the University of Twente that focuses on telerobotics. Telerobotics is the remote control of a robot, combining the cognitive abilities of a human with the physical abilities of a robot; it is used in many domains such as the chemical industry, healthcare or defense, and has many applications on land, at sea and in space. i-Botics’ goal is to enhance robotic sensing and situation awareness (SA) for telerobotics by researching alternatives to classical visual information technology in modern robotics, such as VR or haptic feedback. SA is an important factor in telerobotics, as the human operator is physically isolated from the robotic vehicle. A lack of SA may lead to deteriorated decision making and performance of the human operator, as relying exclusively on visual information to simultaneously perform the primary task (e.g. search and rescue) and stay aware of possible impediments to the operation is very challenging [3]. Research has shown that haptic feedback can complement visual information in the context of obstacle avoidance systems and improve the SA of the operator by providing additional haptic cues indicating the distance and direction to the obstacle [4].

1.2 Project goal

The goal of this assignment is to design and implement a system that provides haptic feedback based on the presence of obstacles around a remote robotic platform, in order to guide the operator’s decision making and enhance their SA and overall performance. The system comprises two main parts: first, haptic feedback generated by the pedals that steer the robotic platform, and second, an obstacle localization system that serves as the basis for the feedback. The main research question of this assignment is:

“How to design haptic feedback based on an obstacle detection system to improve the situation awareness of the human operator in remote UGV control?”


In addition, the following sub-questions will be investigated:

• What is the state of the art of the current system? Which elements are already implemented and which are missing, in order to have a fully integrated system that can be used to study the effect of haptic feedback on SA?

• What obstacles are relevant for the scope of this assignment and how can they be detected?

• How can haptic feedback be generated based on obstacle location and how can it be used to help avoid relevant obstacles?

• How should the SA of the operator be evaluated? How do user tests need to be designed to obtain reliable and meaningful data?

1.3 Related work

Relating one’s own work to existing work in the field is important, as it indicates relevance and also gives an opportunity to compare results and draw corresponding conclusions.

Using haptic feedback to guide the decisions of a UGV operator and improve their SA has already been done in a number of studies. For example, Luz et al. [3] compare the use of a tactile tablet, a traction cylinder and a vibrotactile glove to improve traction awareness of a UGV operator. The study shows that two of the haptic devices significantly increase the SA of the operator with regard to the traction state of the UGV when compared to the exclusive use of visual information. The study conducted by Corujeira et al. [5] investigates the use of directional force feedback with a gaming controller that exerts haptic feedback based on simulation-based obstacle detection and processing. The study shows that directional force feedback improves the time needed to complete the navigation tasks and significantly decreases the duration of collisions in comparison to the control group.

When looking at pedal-based haptic feedback, there are plenty of studies focusing on automobiles, for example to improve risk-predictive driving [6], [7] or to encourage eco-friendly driving [8], [9], but there is little to almost no research on haptic feedback pedals in telerobotic systems. A study that does use haptic feedback pedals in remote robotic systems was conducted by Kim et al. [10], who investigated applying haptic feedback to a set of pedals that can be manipulated via translation and rotation. Haptic feedback in their case is also based on obstacle detection; however, the difference to this thesis is that their pedals are meant to capture the movement of the user and simulate walking in a remote environment. Furthermore, that study only tested the functional requirements of the pedals and not how effective they are in guiding a user based on obstacle information.

To conclude, haptic feedback in general is well researched, but there is little existing work in the field of remote telerobotic systems that use pedal-based haptic feedback. Studies that investigate the effect of obstacle-location-based haptic feedback, such as [3] and [5], may be used for comparison with the results of this study when assessing how effective the system is in improving the SA of the operator. The study of [10] may be used to compare the functionality of the pedals with regard to how effective they are in steering the robotic platform. The results of this study can then be used on the one hand to fine-tune the individual components of this particular system, and on the other hand to serve as a guideline for designing haptic feedback to improve the SA of the operator.

1.4 Report organization

The remainder of the report is structured as follows: The analysis (Chapter 2) includes the theoretical knowledge that is necessary to answer the main research question, covering research about SA, the current state of the existing system and obstacle localization, and discusses the necessary steps towards a fully integrated system that will serve as a basis to evaluate the effect of pedal-based haptic feedback on SA. Chapter 3 contains the conceptual design and implementation of the system based on the analysis, including a detailed description of hardware and software. Then, in Chapter 4 (Results & Discussion), the system is tested against the project goals and subsequently evaluated. Finally, in Chapter 5 a conclusion is presented based on the observations of this report, as well as recommendations for future work.


2 Analysis

In this chapter, a conceptual foundation for the implementation of the system is presented, as well as the theoretical knowledge necessary for answering the main research question and the respective sub-questions. First, the state of the art of SA is discussed in order to obtain a general understanding of what SA is, what common pitfalls exist when designing human-robot interfaces, and how to assess and evaluate the SA of a user. Second, the current state of the system is described and evaluated, providing a basis for the final realization of the system and possible improvements to individual components of the setup. Third, in order to generate haptic feedback based on the presence of obstacles around the platform, the theoretical design of an obstacle detection & processing system is discussed. Lastly, based on the previous sections, the necessary steps towards a fully integrated system that can be used to evaluate the effect of pedal-based haptic feedback on SA are discussed.

2.1 Situation awareness

In order to design haptic feedback with a focus on improving the SA of an operator, a general understanding of SA and how to evaluate it is needed. Thus, first a general definition of SA is given. Second, potentially harmful factors for SA, with attention to telerobotic systems, are discussed in order to improve the current design of the system. Once a device is realized, a relevant set of metrics needs to be determined in order to evaluate the SA of the operator; thus, suitable metrics for evaluating telerobotic systems in the context of obstacle-location-based haptic feedback are presented in the third part of this section. Finally, the fourth part of this section discusses several techniques that can be used to evaluate SA once a device has been realized and a suitable set of metrics has been determined.

2.1.1 Definition of Situation Awareness

SA was defined by Endsley in 1995 as “perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future” [11]. Endsley divides SA into 3 levels: the first and lowest level is perception of the environment. The second level is comprehension of relevant data and applying it in the context of the mission. The third and highest level is projection of the environment and its possible system states.

2.1.2 Potential harmful factors for Situation Awareness

When designing interfaces or systems, there are several factors that can potentially harm the SA of the operator. Endsley defined a set of general factors called the “8 demons”, which include, for example, data overload, attentional narrowing, workload, fatigue and other stressors [12].

Additional factors in the context of telerobotics that are relevant for the scope of this assignment include delays and scale & spatial ambiguities when perceiving a remote environment.

Multiple studies have found that time delays may lead to decreased operator performance, degraded driving and tracking, as well as reduced telepresence and over-actuation when the delay is variable [13]–[18]. In addition, Tittle et al. [19] state that, in contrast to a directly and naturally perceived environment, humans can experience complications perceiving the scale of a remote environment in relation to their own capabilities for movement, as well as tracking their own spatial location based on visual information in that environment. Feedback via the vestibular system of the human body, which indicates the acceleration of one’s own body and provides a natural scaling for distances in an environment, is lost, as the human perceptual processor is decoupled from the environment that is being explored. Tittle et al. suggest to “identify and implement perceptual cues that can augment the remote video stream and allow the human perceiver to compensate for the absence of the complex combination of naturally occurring information (e.g., muscular and vestibular feedback) that would exist if he were actually investigating the environment.” As an example, multiple studies have shown that haptic feedback can complement visual data in telerobotic systems by providing additional perceptual cues about the environment, such as attitude perception, traction awareness or distance to obstacles, and thus provide spatial and situational awareness that would otherwise be missing [3], [5], [20].

To conclude, general potentially harmful factors for SA include data overload, attentional narrowing, workload and fatigue. Factors with a focus on telerobotics that are relevant for the scope of this assignment include delays as well as scale and spatial ambiguities. Therefore, the haptic feedback design of this system should minimize delays to mitigate possible impediments to the SA of the operator. Furthermore, the design of the feedback should complement the visual information that is displayed on the screen of the cockpit, to provide the user with a beneficial spatial sense regarding obstacles around the platform and “unburden the visual channel by using other human senses to improve SA” [3].

2.1.3 Metrics for Situation Awareness in telerobotics

Once the device is designed, a relevant set of metrics needs to be determined in order to facilitate an objective evaluation of the SA of the operator. According to Steinfeld et al. [21], key metrics in human-robot interaction can be assigned to 5 categories: navigation, perception, management, manipulation and social. In the context of designing haptic feedback based on obstacle localization, the remainder of this section will focus on navigation, perception and manipulation. Note that Steinfeld et al. describe manipulation as controlling and steering the robot, whereas navigation is described as path planning and dealing with environmental impediments encountered on the way.

Metrics for navigation include effectiveness measures, such as the number of obstacles that were successfully avoided or deviations from a planned route [22]; efficiency measures, such as the time needed to complete the task or the average time needed for obstacle extraction [23]; and non-planned navigation effort measures, including the number of operator interventions [24] and the ratio of operator to robot time to successfully initialize and execute a task of the robot [25].

Metrics for perception can be divided into passive perception, the interpretation of received sensor data, and active perception, the fusing of multiple sensor readings to make an inference about the environment [26]. Measures for passive perception include detection measures (e.g. signal detection, detection by object orientation), recognition measures (e.g. classification accuracy, confusion matrices), judgment-of-extent measures (accuracy of quantitative judgments about the environment) and judgment-of-motion measures (accuracy with which movement of objects in the environment is judged) [21]. Active perception measures include active identification (performance on recognition tasks involving mobility) and active search (performance on search tasks involving mobility) [21].

Finally, manipulation metrics include the degree of mental computation needed to execute a task, such as mental rotation or mental short- and long-term memory [27]. In addition, the number of collisions and the type of contact error (hard/soft, glancing, etc.) are key metrics in manipulation tasks, especially for systems that involve obstacle avoidance, as they are highly indicative of the situation awareness and performance of the operator [21].

2.1.4 Evaluation techniques for Situation Awareness

Finally, when the interface is designed and suitable metrics for evaluation have been determined, the system can be evaluated via various evaluation techniques that focus on SA. Hjelmfelt and Pokrant [28] state that these methods can be divided into 3 categories: 1) subjective, where subjects rate their own SA; 2) implicit performance, measuring subjects’ task performance under the assumption that it is related to their SA; and 3) explicit performance, where subjects’ SA is directly evaluated during short interruptions of the experiment. Two of the most popular evaluation techniques for SA are the Situation Awareness Global Assessment Technique (SAGAT) and the Situation Awareness Rating Technique (SART).

SAGAT, developed by Endsley [29], is an objective, explicit performance evaluation technique that randomly freezes mission or task simulations and asks subjects queries about their SA while the displays and interfaces are blanked. SART, in contrast to SAGAT, is a subjective form of evaluating the subject’s SA based on 10 criteria, administered post trial [29]. These criteria include, for example, complexity of the situation, focusing of attention, information quantity and spare mental capacity. Subjects are asked to rate each criterion on a scale from 1 to 7, where 1 is the lowest rating and 7 the highest. The ratings are afterwards combined to determine the subject’s SA.
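Combining the ratings into one score is conventionally done as SA = Understanding − (Demand − Supply), where each of the three terms is the sum of its 1–7 item ratings. The sketch below assumes that convention and the usual 10-D SART item grouping; the item names and their grouping are assumptions, not taken from this thesis:

```python
# Sketch of combining 10-D SART ratings into a single SA score,
# assuming the common convention SA = U - (D - S). The item names
# and their assignment to the three groups are assumptions.

DEMAND = ["instability", "complexity", "variability"]
SUPPLY = ["arousal", "concentration", "division_of_attention", "spare_capacity"]
UNDERSTANDING = ["information_quantity", "information_quality", "familiarity"]

def sart_score(ratings: dict) -> int:
    """ratings maps each SART item name to an integer rating from 1 to 7."""
    d = sum(ratings[i] for i in DEMAND)         # attentional demand
    s = sum(ratings[i] for i in SUPPLY)         # attentional supply
    u = sum(ratings[i] for i in UNDERSTANDING)  # understanding
    return u - (d - s)

# A participant rating every item neutrally (4):
example = {i: 4 for i in DEMAND + SUPPLY + UNDERSTANDING}
print(sart_score(example))  # 12 - (12 - 16) = 16
```

Note that because the supply group contains four items while the other two contain three, even all-neutral ratings do not yield a zero score; the score is only meaningful when compared between conditions.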


Both SAGAT and SART have advantages and disadvantages that have to be considered when evaluating SA. Advantages of SART are that it is quick and easy to execute, while being applicable to various domains and non-intrusive to task performance, as it is administered post trial [30], [31]. On the other hand, Endsley [32] states that SART is highly correlated with task performance: a participant who performs well will give themselves a good SA rating, while a participant who performs poorly will give themselves a bad rating.

In addition, it appears that participants’ memory of detailed information about past events is relatively poor, and that post-trial questionnaires only record the actual SA of the participant at the end of the task. In contrast, SAGAT avoids the issues of collecting data post trial by querying the subjects directly during the task [33]. In addition, it provides diagnostic data on how well the system assists the SA of the subject [32] and has demonstrated reliability and validity in various domains [30]. However, Salmon et al. [30] also point out that the interruptions diminish the flow of the task and are rarely executable in non-simulated situations. Furthermore, the analysis requires extensive preparation, and the outcome of a SAGAT evaluation culminates in multiple variables rather than a single SA value; combining these values to obtain a truthful representation can be challenging [33].

For the scope of this assignment and the time available, SART will be used as the indicator for SA, as it requires less time and fewer resources to execute than SAGAT. However, SART does have some critical points which will influence the results of the user experiments and therefore need to be accounted for. SART is based on the self-assessment of the participants and can thus introduce bias through its subjective nature. To account for this bias, a set of metrics based on navigation, perception and manipulation will be chosen in section 4.3.2 to facilitate an objective evaluation of SA. In addition, since participants can forget detailed information throughout the experiment, the experiments will be designed to last only a relatively short amount of time in order to record the true SA of the participant.

2.2 Current state of the system

In this section, the state of the current system is described and evaluated. The UGV is a robotic platform that uses a Segway RMP Omni 50 as its base and is furthermore equipped with a KUKA LWR 4+ arm and a ReFlex TakkTile gripper, as depicted in figure 2.1. At the operator’s side, the Leo Universal Cockpit (LUC) provides audio, video and haptic feedback to guide the operator. A pair of pedals can be used to navigate the platform and generate haptic feedback based on the location of obstacles. The pedals themselves have been designed by Martijn de Roo [34], while the hardware and software for the haptic feedback have been designed and implemented by Sierd Meijer [1]. An overview of the current system can be found in figure 2.2, which shows that obstacle detection & processing is currently implemented only in simulation.


Figure 2.1: LUC cockpit and robotic platform [1].

Figure 2.2: Overview schematic of the current system.

The following subsections, hardware and software, recapitulate and discuss the design and implementation of the haptic feedback by Meijer.

2.2.1 Hardware

The pedals are mounted to the base of the LUC via a BOIKON profile [35] and can be moved up and down in order to move either side of the robotic platform in two directions. The pedals have a rotational movement range of 30.2 degrees and an offset of 21.86 degrees. In order to generate haptic feedback via the pedals, a set of accelerometers, motors, optical rotary encoders and Elmo Whistles is used, controlled via a RaMstix, an FPGA board developed by RaM. An overview of the hardware setup can be found in figure 2.3.

The system uses MMA7260Q accelerometers [36] and HEDS5540 optical rotary encoders [37] as sensors to determine the position, velocity and acceleration of the pedals. The accelerometers have a ratio of 800 mV/g at 1.5 g sensitivity, and the encoders have a resolution of 1.85E-3 degrees per encoder step due to the rotational movement range of the pedals. The output signals of these sensors are used via the RaMstix to control two Maxon RE 50 motors [38] by sending an analog voltage signal to the Elmo Whistles, which regulate the current going to the motors. The motors then actuate the pedals via a pulley belt system, providing haptic feedback to the operator.

The signal of the accelerometers is filtered, integrated and compensated for gravity to obtain the velocity of the pedals. The acceleration signal from the accelerometers is likely to contain an offset and high frequencies, which cause the integrated signal to drift. Drift typically has low frequencies and is in the current system partially dealt with via gravity compensation and digital high-pass filters. For this project, new ADXL 335 accelerometers by SparkFun [39] have been placed under the pedals and need to be calibrated accordingly.

Figure 2.3: Schematic overview of the old hardware layout.

2.2.2 Software

The software that controls the pedals and provides haptic feedback has been implemented with the Robot Operating System (ROS) framework and consists of four nodes: a pedal driver, pedal interpretation, controller and obstacle interpretation node. An overview of the software architecture can be found in figure 2.4:

Figure 2.4: Schematic overview of the software architecture including the signal flows [1].


2.2.2.1 Pedal driver node

The pedal driver node requests the sensor values from the RaMstix and converts them into SI units, which are then used by the other ROS nodes. Gravity compensation of the acceleration, as well as the differentiation, integration and filtering for estimating the velocity of the pedals, is done in this node as well.

2.2.2.2 Obstacle interpretation node

The obstacle interpretation node outputs the distance to the obstacle and the force distribution based on the relative angle to the obstacle. This node receives two vectors containing the location of the obstacle with respect to the orientation of the platform: X_o, which is parallel, and Y_o, which is perpendicular to the direction of the platform, as depicted in figure 2.5:

Figure 2.5: Calculation of the obstacle distance based on the input vectors [1].

The distance to the obstacle l_o is calculated via Pythagoras’ theorem as the magnitude of the vectors X_o and Y_o, as described in equation 2.1:

l_o = √(X_o² + Y_o²)    (2.1)

In order to calculate the force distribution, the angle to the obstacle needs to be calculated first:

θ_o = arctan(X_o / Y_o)    (2.2)

where θ_o is the angle to the obstacle in radians. Equations 2.3 to 2.6 are used to calculate the force distribution for the left pedal U_L on a scale from 0 to 1:

For θ_o between 0 and 0.5π:

U_L = 0.5 · θ_o / (0.5π)    (2.3)

For θ_o between 0.5π and π:

U_L = 0.5 · θ_o / (0.5π)    (2.4)

For θ_o between π and 1.5π:

U_L = 0.5 + 0.5 · (0.5π − |θ_o|) / (0.5π)    (2.5)

For θ_o between 1.5π and 2π:

U_L = 1 − 0.5 · (0.5π − θ_o) / (0.5π)    (2.6)

The force distribution of the right pedal U_R can be calculated via equation 2.7:

U_R = 1 − U_L    (2.7)
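The geometry of the obstacle interpretation can be sketched as follows. This is a sketch under assumptions: `atan2` is used instead of the plain arctan of equation 2.2 so that all four quadrants covered by equations 2.3–2.6 are resolved, and the piecewise interpolation of 2.3–2.6 itself is not reproduced, only the complementary split of equation 2.7:

```python
import math

def obstacle_distance_angle(x_o: float, y_o: float):
    """Distance (eq. 2.1) and angle (eq. 2.2) to the obstacle.

    atan2 is used here as an assumption, so that all four quadrants
    used by eqs. 2.3-2.6 are distinguishable; the angle is wrapped
    to [0, 2*pi) to match those ranges.
    """
    l_o = math.hypot(x_o, y_o)                       # sqrt(X_o^2 + Y_o^2)
    theta_o = math.atan2(x_o, y_o) % (2.0 * math.pi)
    return l_o, theta_o

def right_pedal_share(u_l: float) -> float:
    """Eq. 2.7: the pedal shares are complementary."""
    return 1.0 - u_l

l_o, theta_o = obstacle_distance_angle(3.0, 4.0)
print(l_o)                      # 5.0
print(right_pedal_share(0.3))   # 0.7
```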

However, it is important to note that this approach does not consider whether the obstacle is actually in the path of motion and thus relevant, nor does it account for multiple obstacles and determine which obstacle feedback should be given on. In addition, the calculation of the force distribution is more related to the generation of haptic feedback than to interpreting the location of an obstacle and should thus be moved to the controller node.

2.2.2.3 Pedal interpretation node

The pedal interpretation node subscribes to the pedal driver node to obtain the position of the pedals and converts the pedal orientation into a twist with two degrees of freedom. A twist expresses the velocity of a rigid body in terms of its angular and linear velocity components. The pedal positions are first converted to the displacement in percent with respect to the origin, as described in equation 2.8:

P = 100 · (θ_p − θ_min) / (θ_max − θ_min) − 50    (2.8)

where P is the displacement percentage between ±50% of one pedal, θ_p the current pedal position, and θ_max and θ_min the maximum and minimum positions of the pedal in radians.

The linear and angular velocities are calculated via the sum or the difference between both displacement percentages as shown in equations 2.9 and 2.10:

v_l = v_max · (P_L + P_R) / 100    (2.9)

v_r = v_max · (P_L − P_R) / 100    (2.10)

where v_l is the linear velocity, v_r the angular velocity, v_max the maximum velocity of the robotic platform, and P_L and P_R the displacement percentages of the pedals. Once the twists are calculated, the pedal interpretation node sends the twist to the control node, which then uses this data to drive the robotic platform.
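As a concrete illustration, the conversion of equations 2.8 to 2.10 can be sketched in Python as follows. The pedal range, maximum velocity and sample values below are arbitrary placeholders, not the platform's actual parameters:

```python
def pedal_displacement(theta_p, theta_min, theta_max):
    """Pedal angle (rad) to displacement percentage in [-50, +50] around
    the pedal's midpoint (equation 2.8)."""
    return 100.0 * (theta_p - theta_min) / (theta_max - theta_min) - 50.0

def pedals_to_twist(theta_left, theta_right, theta_min, theta_max, v_max):
    """Map both pedal angles to a (linear, angular) velocity pair
    (equations 2.9 and 2.10)."""
    p_l = pedal_displacement(theta_left, theta_min, theta_max)
    p_r = pedal_displacement(theta_right, theta_min, theta_max)
    v_lin = v_max * (p_l + p_r) / 100.0   # both pedals down: drive forward
    v_ang = v_max * (p_l - p_r) / 100.0   # opposing pedals: rotate in place
    return v_lin, v_ang

# Both pedals fully pressed: pure forward motion at the assumed v_max.
print(pedals_to_twist(1.0, 1.0, 0.0, 1.0, 0.5))
```

Pressing both pedals equally yields only linear velocity, while opposing displacements cancel the linear term and produce pure rotation, matching the sum/difference construction of equations 2.9 and 2.10.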


2.2.2.4 Controller node

The controller node receives the pedals' position and velocity from the pedal driver node and the distance to the obstacle and the force distribution between the pedals from the obstacle interpretation node, and calculates the force that the pedals will exert. Force-stiffness feedback has been chosen as the basis for the design of the feedback as it adequately indicates the distance to an obstacle while at the same time providing guidance based on the objects around the platform, as well as reactive feedback, that is, how the system acts on the input from the user [1], [40]–[43]. The force-stiffness feedback in this system makes use of a spring constant that is based on the distance to an obstacle as well as the displacement of the pedal. Small displacements of the pedals lead to small exerted forces which may not provide sufficient haptic information to the operator; therefore a force offset is added to compensate for small displacements. An adaptation of Hooke's law, incorporating force-stiffness feedback, can be found in equation 2.11:

F_spring = K · ((x_rest + x_offset) − x)    (2.11)

where F_spring is the output force, K the varying spring constant, x_rest the origin position, x_offset the change in origin position and x the current position.

The controller is designed in such a way that the pedals will behave like a damped harmonic oscillator, where a virtual spring is being used to bring the pedals back to their origin position.

This is described by equation 2.12,

F = F_s + F_d = −K·θ_p − D·ω_p    (2.12)

where the spring force F_s is given by the spring constant K and the displacement of the spring θ_p, and the virtual damper F_d, acting as friction, is given by the damping constant D and the velocity of the pedal ω_p. The spring force is negative as the exerted force should be in the opposite direction of the displacement of the spring. The damping ratio ζ is used to tune the pedal and is given by equation 2.13:

ζ = D / (2√(I·K))    (2.13)

where ζ is the damping ratio, D the damping constant, I the inertia of the pedal and K the spring constant. Setting ζ to 1 gives the pedals critically damped behaviour, returning them to their origin position as quickly as possible and preventing possible unwanted oscillations. The damping constant D can then be rewritten via equation 2.14:

D = 2√(I·K)    (2.14)

Consequently, equation 2.12 can then be rewritten as equation 2.15:

F = −K·θ_p − 2·ω_p·√(I·K)    (2.15)
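To illustrate that the choice D = 2√(IK) of equation 2.14 indeed returns the pedal to its origin without oscillation, the spring-damper of equation 2.12 can be simulated numerically. The inertia, spring constant and initial displacement below are illustrative assumptions, not the pedal's measured values:

```python
import math

# Illustrative parameters (assumed, not the thesis' actual pedal values).
I = 0.05                     # pedal inertia
K = 10.0                     # spring constant
D = 2.0 * math.sqrt(I * K)   # critical damping, equation 2.14

theta, omega = 0.4, 0.0      # pedal released from a 0.4 rad displacement
dt = 1e-4
for _ in range(50000):       # simulate 5 s with semi-implicit Euler
    omega += (-K * theta - D * omega) / I * dt   # torque of equation 2.12
    theta += omega * dt

print(f"position after 5 s: {theta:.6f} rad")
```

With this damping constant the simulated pedal settles at the origin without ringing; choosing D smaller than 2√(IK) in the same sketch reproduces the underdamped oscillations the controller is designed to avoid.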


Next, in order to operate the pedals with and without haptic feedback, the spring constant K is split into the base spring constant K_b and the spring increase constant K_i, as shown in equation 2.16:

K = K_b + K_i    (2.16)

Hence, in order to operate the pedals without haptic feedback, K_i is set to 0, while K_b should be set large enough to provide the minimum force that is required to return the pedals to their origin position. In addition, to counter gravity resulting from the weight of the pedals, the force offset F_off is added to the output force of the controller. Thus, when no haptic feedback is used, the new controller function is given by equation 2.17:

F = −K_b·θ_p − 2·ω_p·√(I·K_b) + F_off    (2.17)

When using haptic feedback, K_i is set unequal to 0 and is based on the distance and location of an obstacle, the platform's velocity V and the force distribution U that is obtained from the obstacle interpretation node, as depicted in equation 2.18:

K_i = K_l · U · V    (2.18)

K_l in this case is the increase in the spring constant based on the distance l_o to the obstacle, with tunable parameters m, n and p, as depicted in equation 2.19:

K_l = m / (l_o + n) + p    (2.19)

The bigger the distance to the obstacle, the smaller the increase will be and vice versa. If obstacles are outside of the maximum range of the sensors, K_i should be 0 and consequently no feedback should be given; hence p is used as a negative offset to obtain a value of 0 for K_i when l_o equals the maximum range of the sensors being used. The parameters n and m are used to limit the amount of feedback that is given, e.g. if l_o were 0, K_i should not approach infinity as the user should still be able to operate the pedals. Next, the velocity V of the platform is given by equation 2.20:

V = 0.5 · |v_r| / v_max + 0.5    (2.20)

where |v_r| is the magnitude of the current velocity and v_max the maximum velocity of the platform. As the feedback of the spring increase needs to be noticeable even at low platform velocities, an offset of 0.5 is added; that way the velocity of the platform is converted to a scale from 0.5 to 1. The final controller equation that is used to generate haptic feedback via the pedals is given by equation 2.21:

F_out = −(K_b + K_i)·θ_p − 2·ω_p·√(I·(K_b + K_i)) + F_off    (2.21)

where F_out is the output force, K_b the base spring constant, K_i the spring constant increase, θ_p the pedal position, ω_p the pedal velocity, I the pedal inertia and F_off the force offset. A complete overview of the controller with its respective in- and outputs can be found in figure 2.6:

Figure 2.6: Schematic overview of the controller [1].
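The chain from obstacle distance to output force (equations 2.16 to 2.21) can be sketched as follows. All numeric defaults are illustrative assumptions rather than the system's tuned values, and the clamp on K_l to non-negative values is an added safeguard, not taken from the thesis:

```python
import math

def spring_increase(l_o, m, n, p):
    """Distance-based spring increase K_l (equation 2.19); p offsets K_l
    to 0 when l_o equals the maximum sensor range."""
    return m / (l_o + n) + p

def controller_force(theta_p, omega_p, l_o, U, v_r,
                     K_b=6.0, I=0.05, F_off=0.0,
                     v_max=1.0, m=2.0, n=0.5, p=-0.4):
    """Haptic controller output force (equation 2.21). All numeric defaults
    are illustrative assumptions, not the system's tuned values."""
    V = 0.5 * abs(v_r) / v_max + 0.5               # velocity scaling, eq. 2.20
    K_l = max(spring_increase(l_o, m, n, p), 0.0)  # clamp: no negative feedback
    K_i = K_l * U * V                              # spring increase, eq. 2.18
    K = K_b + K_i                                  # total spring constant, eq. 2.16
    return -K * theta_p - 2.0 * omega_p * math.sqrt(I * K) + F_off

# Obstacle far outside sensor range: K_i = 0, plain spring-damper behaviour.
print(controller_force(theta_p=0.1, omega_p=0.0, l_o=10.0, U=0.5, v_r=0.2))
```

A nearby obstacle with a high force distribution yields a stiffer spring and thus a larger restoring force for the same displacement, which is exactly the cue the operator feels through the pedal.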

Using the combined velocity signal from the encoder and the accelerometer, the controller is able to maintain a constant damping ratio by altering the damping constant based on the spring constant. However, as the output force of the controller pushes the pedals back to their origin position, a distinct steady-state error is produced because the controller does not make use of an integrator. An integrator would actively work against the operator the harder he presses against the pedals and is therefore not used.

Furthermore, it has been observed that increasing the spring constant led to a decreased steady-state error, but at the same time the system shows increasingly underdamped behaviour.

It is expected that most of these problems are caused by delay introduced by the architecture of the system, as sensor values read by the RaMstix first pass through the driver node to be converted to SI units and are then sent to the controller node. In addition, the same connection is used to send the output force of the controller to the motors of the pedals, creating a significant delay between measurement and actuation. Delay leads to phase shift, which influences the phase margin. The phase margin is the amount of phase that can be varied until the signal reaches a phase shift of -180 degrees while having a gain of 0 dB, as depicted in figure 2.7:


Figure 2.7: Phase and gain margin for an arbitrary system.

At this point the system is unstable as the open-loop transfer function H(s)G(s) equals −1, meaning that the closed-loop function G(s)/(1 + H(s)G(s)) will approach infinity. It is expected that redesigning the system architecture by moving the controller to the RaMstix will reduce delays in the system and lead to less underdamped and more stable behavior of the pedals.

2.3 Obstacle localization

The obstacle detection & processing approach for this system builds on the research of Fabian van Hummel [2]. In order to determine the distance to the most relevant obstacle as input to the haptic controller, the general area of interest around the platform needs to be determined first, which then gives a basis for a possible sensor configuration. Next, a collision area needs to be derived, as haptic feedback should only be given on obstacles that are in the path of motion. The last step is then to determine the distance to the closest obstacle.

2.3.1 Area of interest

The area of interest is dependent on the motion capabilities of the platform and the minimum travel time. The platform is meant to be driven differentially, meaning it can move forwards and backwards, but also rotate around its axis, as depicted in figure 2.8. Any combination of these movements will result in a circular path of motion with a certain radius. The minimum travel time is the minimum time frame in which the operator should experience haptic feedback prior to a collision. This has been defined as 5 seconds. Based on the motion capabilities of the platform and the minimum travel time, the area of interest has been determined as shown in figure 2.9:


Figure 2.8: Motion capabilities of the platform [2].

Figure 2.9: Area of interest given a 5 second minimum travel time [2].

2.3.2 Sensor configuration and coordinate transformation

Based on the area of interest, a configuration for the sensors can be determined. The sensors that will be used in this assignment are URG-04LX-UG01 LIDAR sensors. These sensors illuminate their environment with pulsed laser light and measure the reflected pulses to determine the distance to an obstacle. Each sensor scans an area of up to 240° and has a maximum range of up to 5.6 meters, with 1 mm resolution. The angular resolution is 0.36° and the maximum divergence of the laser beam is 4 cm from a 4 m measuring distance onwards. This allows it to measure the distance and the angle to an obstacle very precisely in two dimensions. As there are only 2 sensors available at this moment, the sensors will be placed at the top corners as depicted in figure 2.10. This way almost all of the area of interest to the front, left and right can be measured.


Figure 2.10: URG-04LX-UG01 sensor configuration.

The next step is to express each received sensor value in a base frame as depicted in figure 2.11, as the sensors measure distances with respect to their own location and orientation. Representing the sensor readings in a uniform frame will make further obstacle processing easier. The base frame orientation shown in figure 2.11 has been chosen because the IMU of the platform uses the same reference frame. This is important as the IMU provides the orientation and acceleration of the platform, and thus also its velocity.

Figure 2.11: Base frame of the platform (note that the actual radius of the sensors is larger than depicted here).

In order to express the sensor readings in the base frame, the homogeneous transformation matrix H will be used, as described in equation 2.22:

             | cos(θ)   sin(θ)   x |
H(x, y, θ) = | −sin(θ)  cos(θ)   y |    (2.22)
             | 0        0        1 |

Where x, y represent the coordinate of the origin of the original frame expressed in the new frame and θ represents the offset of the angle of the original frame to the new frame. This way a point p that is measured in the sensor frame s can be expressed as a point in the base frame b as described in equation 2.23 and 2.24:

b_P = H_s^b · s_P    (2.23)

| b_x |   | cos(θ)   sin(θ)   x |   | s_x |
| b_y | = | −sin(θ)  cos(θ)   y | · | s_y |    (2.24)
| 1   |   | 0        0        1 |   | 1   |

The last step is to determine the angle offset θ from the sensor frames to the base frame, as well as the Cartesian coordinates of the sensors with respect to the origin of the base frame, which is the location of the IMU. Given that a counterclockwise rotation results in a positive angle, sensor 1 has an angle offset of 30° and sensor 2 an angle offset of 150°, while the Cartesian coordinates for sensor 1 are (0.29, 0.28) and for sensor 2 (0.29, -0.28) in meters.
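As a sketch, the point conversion of equation 2.24 with the mounting parameters of sensor 1 (30° offset, mounted at (0.29, 0.28) m) might look as follows; the measured point is an arbitrary example:

```python
import math

def sensor_to_base(sx, sy, x_off, y_off, theta_off):
    """Express a point (sx, sy) measured in a sensor frame in the base
    frame, following the matrix convention of equation 2.24."""
    c, s = math.cos(theta_off), math.sin(theta_off)
    bx = c * sx + s * sy + x_off    # first matrix row plus translation
    by = -s * sx + c * sy + y_off   # second matrix row plus translation
    return bx, by

# A point 1 m along sensor 1's own x-axis, expressed in the base frame.
bx, by = sensor_to_base(1.0, 0.0, 0.29, 0.28, math.radians(30))
print(f"({bx:.3f}, {by:.3f})")
```

Applying the same function with sensor 2's parameters (150°, (0.29, -0.28)) expresses both sensors' readings in the single base frame used by the obstacle processing step.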

2.3.3 Collision area

The collision area is the path of motion of the platform given a certain minimum travel time. Any movement of the platform can be described as a motion over an arc with a certain radius and angle; the collision area is therefore defined by the radius and angle range for a certain movement, visualized in figure 2.12 as the red area:

Figure 2.12: Collision area [2].


In order to calculate the radius and angle ranges, r_cor (the radius from the origin of the base frame to the center of rotation P0) as well as the dimensions and the velocity of the platform are used. The radius boundaries are determined via equations 2.25 and 2.26:

r_P1 = r_min = r_cor − w/2    (2.25)

r_P2 = r_max = √((r_cor + w/2)² + h_2²)    (2.26)

where w is the width of the platform, h_1 the upper height and h_2 the lower height of the platform. Next, the angle boundaries are determined. These are dependent on the quadrant in which the platform is moving, as shown in equations 2.27 and 2.28:

Quadrants 1 and 4:

θ_min = 2π − sin⁻¹(h_2 / r_max),    θ_max = (|v_x| · t) / r_cor    (2.27)

Quadrants 2 and 3:

θ_min = −2π + sin⁻¹(h_1 / r_p3),    θ_max = −(|v_x| · t) / r_cor    (2.28)

where h_2 is the lower height of the platform, v_x the forward velocity of the platform obtained from the IMU, t the minimum travel time, and r_p3 is given by equation 2.29:

r_p3 = √((r_cor + w/2)² + h_1²)    (2.29)

2.3.4 Path length calculation

The next step is to check which obstacle in the collision area is closest, as this is the most relevant obstacle that needs to be avoided. The path length is defined as the length of an arc from the platform to the obstacle. Ideally, this arc has a radius starting from the center of rotation and ending at the location of the obstacle, and an angle starting from the colliding point on the platform and ending at the location of the obstacle. The path length has here been simplified by locating the colliding point on the y-axis, as depicted in figure 2.13:


Figure 2.13: Path length calculation [2].

Depending on the quadrant in which the platform is moving and the quadrant in which the obstacle is located, the radius and angle can be calculated as defined in equations 2.30 and 2.31:

Radius:

√(x² + (r_cor − |y|)²) = a
√(x² + (|y| − r_cor)²) = b
√(x² + (|y| + r_cor)²) = c    (2.30)

Angle:

tan⁻¹(|x| / (r_cor − |y|)) = d
tan⁻¹(|x| / (|y| − r_cor)) = e
tan⁻¹(|x| / (|y| + r_cor)) = f    (2.31)

The relation between the different quadrants with respect to the angle and radius can be found in tables 2.1 and 2.2:

                       r_cor > |y|        r_cor < |y|
obstacle quadrant      1   2   3   4      1   2   3   4
quadrant of motion 1   a   a   c   c      b   b   c   c
quadrant of motion 2   a   a   c   c      b   b   c   c
quadrant of motion 3   c   c   a   a      c   c   b   b
quadrant of motion 4   c   c   a   a      c   c   b   b

Table 2.1: Radius calculations used in the quadrant system [2].

                       r_cor > |y|                     r_cor < |y|
obstacle quadrant      1      2     3     4            1      2     3     4
quadrant of motion 1   d      2π−d  2π−f  f            π−e    π+e   2π−f  f
quadrant of motion 2   −2π+d  −d    −f    −2π+f        −π−e   −π+e  −f    −2π+f
quadrant of motion 3   −2π+f  −f    −d    −2π+d        −2π+f  −f    −π+e  −π−e
quadrant of motion 4   f      2π−f  2π−d  d            f      2π−f  π+e   π−e

Table 2.2: Angle calculations used in the quadrant system [2].

Once the radius and the angle have been determined, the path length can be calculated as follows:

l = r · θ (2.32)
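A minimal sketch of this quadrant-based calculation, here for the case of motion in quadrant 1 with an obstacle in quadrant 1 and r_cor > |y| (which, per tables 2.1 and 2.2, selects radius a and angle d), could look as follows. The obstacle coordinates and r_cor are arbitrary examples, and atan2 replaces the plain arctangent to avoid division by zero:

```python
import math

def candidate_radii(x, y, r_cor):
    """The three radius candidates a, b, c of equation 2.30."""
    a = math.hypot(x, r_cor - abs(y))
    b = math.hypot(x, abs(y) - r_cor)
    c = math.hypot(x, abs(y) + r_cor)
    return a, b, c

def candidate_angles(x, y, r_cor):
    """The three angle candidates d, e, f of equation 2.31."""
    d = math.atan2(abs(x), r_cor - abs(y))
    e = math.atan2(abs(x), abs(y) - r_cor)
    f = math.atan2(abs(x), abs(y) + r_cor)
    return d, e, f

# Motion quadrant 1, obstacle quadrant 1, r_cor > |y|: tables 2.1 and 2.2
# select radius a and angle d; the path length follows equation 2.32.
x, y, r_cor = 1.0, 0.2, 1.0
a, _, _ = candidate_radii(x, y, r_cor)
d, _, _ = candidate_angles(x, y, r_cor)
print(f"path length l = {a * d:.3f} m")
```

The full node would look up the appropriate letter for each motion/obstacle quadrant combination from the tables before multiplying radius and angle.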

However, the design approach proposed by F. van Hummel has some disadvantages. Ultrasonic sensors were used in his iteration of the system to detect obstacles; due to the wide beam width of 2.4 meters of these sensors, considerable uncertainties in projecting the obstacles in the collision area are present, as depicted in figure 2.14. For example, obstacles could be projected in the collision area although they are outside of it, and vice versa. However, this will not be a problem with the new sensors, as these have a maximum beam width of 4 cm, thus allowing far more precise measurements.

Figure 2.14: Path length calculation [2].

In addition, significant path length errors have been observed when an obstacle was in the swerving area. For example, in figure 2.15, the calculated path length clearly does not correspond to the actual path length.


Figure 2.15: Swerving area of the platform and path length error for obstacles in the swerv- ing area [2].

Thus a new approach to calculate the path length is proposed: an artificial boundary around the platform can be created based on the swerving area. The path length can then be calculated based on the radius of the swerving area and basic trigonometry. This way obstacles can be detected before they are in the swerving area, which allows for only very small path length errors. In order to use this approach, the angle at which the obstacle collides with the circumference of the swerving area needs to be known, as depicted in figure 2.16:

Figure 2.16: New path length calculation.

In order to obtain this angle, a triangle that connects the center of the swerving area, the center of rotation and the obstacle can be created, as depicted in figure 2.17:


Figure 2.17: Triangle connecting the center of rotation, the obstacle and the origin of the swerving area.

Next, the triangle side from the center of rotation to the obstacle is kept constant while the angle at the center of rotation is decreased until the opposite side of the triangle, a, equals the radius of the swerving area, as depicted in figure 2.18. The length of the opposite triangle side can be calculated via the law of cosines, which allows calculating the third side of a non-right triangle if two sides and the angle between them are known.

Figure 2.18: Determining the angle of the triangle at which the opposite side to the angle has the same length as the swerving radius.

In order to calculate a, the length from the origin of the swerving area to the center of rotation, b, and the angle from the origin of the swerving area to the obstacle, θ_obs∗, need to be known first, as depicted in figure 2.19:


Figure 2.19: Intermediate right triangle to calculate b and θ obs∗ .

In this case, r_cor is known, while h_3, the height from the origin of the swerving area to the origin of the base frame, can be calculated as follows:

h_3 = h_2 − h/2    (2.33)

where h is the total height of the platform and h_2 the lower height of the platform. b can then be calculated via the law of cosines as described in equation 2.34:

b = √(r_cor² + h_3² − 2 · r_cor · h_3 · cos(π/2))    (2.34)

The next step is to calculate the angle from the origin of the swerving area to the origin of the base frame, θ_h3. This is calculated via equation 2.35:

θ_h3 = sin⁻¹(h_3 / b)    (2.35)

This angle is then added to θ, the original angle of the path length calculation, in order to obtain θ_obs∗, as depicted in equation 2.36:

θ_obs∗ = θ_h3 + θ    (2.36)

Once θ_obs∗ and b are known, a can be calculated via the law of cosines as follows:

a = √(r_cor² + b² − 2 · r_cor · b · cos(θ_obs∗))    (2.37)

The next step is to decrease θ_obs∗ until a equals the radius of the swerving area. At this point, the obstacle is colliding with the circumference of the swerving area, as depicted in figure 2.18. This angle is then subtracted from θ_obs∗ to obtain the angle from the circumference of the swerving area to the obstacle, as depicted in equation 2.38:


θ_f = θ_obs∗ − θ_obs∗∗    (2.38)

where θ_f is the final angle to the obstacle and θ_obs∗∗ is the angle at which a equals r_swv. The final step is then to calculate the arc length from the obstacle to the colliding point at the swerving area of the platform, as depicted in equation 2.39:

l = r_swv · θ_f    (2.39)

where l is the final path length and r_swv the radius of the swerving area. For multiple obstacles in the collision area, the shortest path length will be chosen to indicate the most relevant obstacle. The path length and the angle to the obstacle will then be sent to the controller node as discussed in section 2.2.2 in order to alter the spring constant.
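The iterative scheme of equations 2.37 to 2.39 can be sketched as follows. The geometry values and the angular step size are illustrative assumptions, not the platform's real dimensions:

```python
import math

def law_of_cosines(s1, s2, angle):
    """Third side of a triangle given two sides and the enclosed angle."""
    return math.sqrt(s1 * s1 + s2 * s2 - 2.0 * s1 * s2 * math.cos(angle))

def swerving_path_length(r_cor, b, theta_obs, r_swv, step=1e-3):
    """Shrink the angle at the center of rotation until the opposite
    triangle side a (equation 2.37) equals the swerving radius, then
    return the arc length of equation 2.39. Returns None if the obstacle
    never reaches the swerving boundary."""
    theta = theta_obs
    while theta > 0.0:
        if law_of_cosines(r_cor, b, theta) <= r_swv:
            return r_swv * (theta_obs - theta)   # eq. 2.38 and 2.39
        theta -= step
    return None

# Illustrative numbers only (not from the platform's real geometry).
l = swerving_path_length(r_cor=1.5, b=1.2,
                         theta_obs=math.radians(120), r_swv=0.6)
print(f"path length: {l:.3f} m" if l is not None else "no collision")
```

The step size trades accuracy against computation time; a closed-form solution of the law of cosines for the boundary angle would avoid the iteration entirely.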

2.4 Evaluations and Considerations

The previous sections discussed the state of the art on SA, the current state of the system in regards to haptic feedback, and the design of an obstacle detection & processing system. Based on these sections, this section will discuss the necessary steps towards a fully integrated system that can be used to evaluate the effect of pedal-based haptic feedback on the SA of an operator in telerobotics. First, the current implementation of haptic feedback will be discussed, then obstacle detection & processing, and lastly the execution of the user tests.

2.4.1 Haptic Feedback

First, delays can lead to reduced operator performance, degraded driving and tracking behavior, and ultimately lower SA, as described in section 2.1.2. Once the final system is integrated, there will be additional delay in the system created by the network connection from the platform to the cockpit. Therefore, the delay produced by the software architecture on the cockpit itself must be reduced by moving the loop controller onto the RaMstix. Second, new accelerometers have been placed under the pedals and should be calibrated. The new V/g ratio needs to be determined, as well as the new cutoff frequency that is needed to remove the drift that is created when integrating the acceleration of the sensors. It is also possible to only use the velocity obtained by differentiating the position of the pedals, which is obtained via the encoders; however, the obtained velocity will then be less accurate.

Third, the steady-state error that is produced by the controller due to the lack of an integrator will not be revised, as the average steady-state error of the controller is only 1.51° for the differentiated velocity and 1.16° for the combined velocity. These values are not large enough to significantly reduce the user experience and limit the SA of the operator.

2.4.2 Obstacle detection & processing

The haptic feedback of this system is based on the distance of the platform to an obstacle. Obstacle detection & processing is currently done only in simulation and must therefore be implemented with actual sensors and integrated with the rest of the system in order to have a fully functioning setup. Furthermore, path length errors are present for obstacles in the swerving area that can influence the effect of the haptic feedback. A new approach has been proposed to create a boundary based on the swerving area and detect obstacles before they enter this area. The new approach should first be tested and subsequently implemented if it performs better than the original approach.

2.4.3 User testing

Lastly, SART has been chosen as the evaluation technique for SA as it requires less time and resources to execute than SAGAT. However, SART does have some critical aspects which will influence the results of the user experiments and therefore need to be accounted for. Metrics will be designed to counterbalance the bias that is introduced by the subjectiveness of SART. In addition, experiments will be designed to last only a short period of time in order to record an accurate representation of the participant's SA.

Based on the MoSCoW principle, an overview of the discussed points is given in table 2.3:

Haptic feedback. Must: migrating the controller. Should: calibrating the accelerometers. Would: revising the steady-state error.
Obstacle processing. Must: implementation in real life. Could: new path length approach.
User testing. Must: system must be functional and integrated. Should: account for critical points of SART.

Table 2.3: MoSCoW table.


Design and Implementation

In this chapter the design and implementation of the fully integrated system will be discussed. The final system is comprised of two main parts, the platform and the cockpit. First, communication between these will be discussed; afterwards each part will be examined individually. A simplified overview of the final system can be found in figure 3.1:

Figure 3.1: Simplified system overview.

3.1 Communication

Communication between the platform and the cockpit is done via a transparent bridge for wireless data transfer and ZeroMQ, an asynchronous messaging library. The platform has a router, while the cockpit has only one access point (AP) and one network card that is connected to the RaMstix via an Ethernet cable. Most routers are by default in Network Address Translation (NAT) mode, which allows private IP networks that use unregistered IP addresses to connect to the internet [45]. This is done by converting the private (not globally unique) IP addresses of the network to valid IP addresses before sending them to another network, as depicted in figure 3.2. This means that sending data back and forth between the platform and cockpit will not be possible via NAT routing as they are on two different networks, as depicted in figure 3.3.


Figure 3.2: Wireless communication to the internet via NAT routing.

Figure 3.3: Wireless communication via NAT routing. In this case there is no communica- tion between the systems as they are on two different networks.

In order to create a wireless network without NAT routing, a transparent bridge has been created by enabling the Wireless Distribution Service (WDS) mode on the access point of the cockpit PC as well as on the router of the platform. In this configuration, every device is part of the same subnet and can communicate with all other devices, as depicted in figure 3.4:


Figure 3.4: Wireless communication via WDS enabled transparent bridging.

This way wireless communication is possible between the platform, the cockpit and the RaMstix. For future projects, a switch can be connected to the network card of the cockpit, making it possible to connect multiple devices to the cockpit, such as an Omega 7 or a Panda by Franka Emika.
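The request-reply exchange between cockpit and platform can be illustrated as below. The real system uses ZeroMQ REQ/REP sockets; this dependency-free sketch emulates the same pattern with a plain socket pair, and the message fields and values are assumed examples, not the actual message format:

```python
import json
import socket
import threading

def platform_server(sock):
    """Platform side: receive the pedal twist, reply with the platform state.
    (The real system uses a ZeroMQ REQ/REP pair; a plain socket pair is
    used here so the sketch runs without external dependencies.)"""
    twist = json.loads(sock.recv(4096).decode())   # pedal twist from cockpit
    state = {"velocity": 0.4, "path_length": 1.2, "angle": 0.3}  # assumed values
    sock.sendall(json.dumps(state).encode())       # platform state back

cockpit_sock, platform_sock = socket.socketpair()
t = threading.Thread(target=platform_server, args=(platform_sock,))
t.start()

# Cockpit side: send the pedal twist, then read the platform's reply.
cockpit_sock.sendall(json.dumps({"linear": 0.2, "angular": 0.0}).encode())
state = json.loads(cockpit_sock.recv(4096).decode())
t.join()
print(state["path_length"], state["velocity"])
```

In the integrated system the same strict send-then-receive rhythm applies: the cockpit requests the platform state each cycle and answers with the current pedal twist.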

3.2 Platform

The PC on the platform sends the velocity of the platform as well as the path length and angle to the closest obstacle in the collision area to the Cockpit and receives in return the twists of the pedals. A schematic showing the architecture of the ROS nodes used for this process can be found in figure 3.5:

Figure 3.5: ROS architecture of the platform.

The Hokuyo driver node returns an array with distance values obtained by the Hokuyo URG-04LX-UG01 LIDAR sensors, where each position in the array corresponds to a certain angle that can be determined via the angular resolution of the sensors. Sensor mountings for these sensors have been designed in SolidWorks, as depicted in figure 3.6:

Figure 3.6: Sensor mounting for the URG LIDAR sensors.

Next, the distance values are used by the coordinate transformation node, as discussed in section 2.3, to obtain the coordinates of the obstacles with respect to the base frame. The obstacle processing node then calculates the collision area based on the velocity of the platform, which is obtained by integrating the acceleration provided by the XSense driver node via the IMU integration node. Important to note is that the suggested improvement based on the swerving area radius has not been implemented due to time constraints. Based on the collision area and the coordinates of the obstacles around the platform, the path length and angle to the most relevant obstacle are calculated and sent to the ZeroMQ server node. This node serves as the main communication point from the platform to other devices and can be used for other projects as well. In this case it sends the velocity of the platform as well as the path length and angle to the most relevant obstacle, and receives in turn the twists of the pedals, which are then sent to the Segway node in order to steer the platform.

3.3 Cockpit

The cockpit is comprised of 3 main parts, which will be discussed in this section in the following order: first the PC, then the RaMstix and lastly the pedals.

3.3.1 PC

The PC serves as a client between the platform and the RaMstix. It requests the velocity of the platform as well as the path length and the angle to the closest obstacle from the ZeroMQ server of the platform, and responds with the twists that are obtained via the encoder positions by the RaMstix. The spring and damping constants for the haptic feedback are calculated on the PC, while the loop controller has been moved from the PC to the RaMstix in order to reduce the network delay, as discussed in section 2.2.2.4. The feedback constants are then sent via ZeroMQ to the RaMstix. Furthermore, the calculation of force distributions has been moved from the obstacle interpretation node to the force feedback node, as discussed in section 2.2.2.4.


A full overview over the ROS architecture of the cockpit can be found below in figure 3.7:

Figure 3.7: ROS architecture of the cockpit.

3.3.2 RaMstix

The RaMstix is responsible for reading the encoder values and converting these to SI units to obtain the position of the pedals in radians. The position is differentiated in order to obtain the velocity of the pedals, which is then filtered by a low-pass filter. Important to note is that only the differentiated velocity obtained from the encoder position has been used, as sufficient calibration of the new accelerometers was not possible due to time constraints.

Based on the feedback values received via the PC and the velocity of the pedals, the total force of the controller is obtained, as described in equation 2.21.
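A minimal sketch of this differentiate-and-filter step is shown below; the filter coefficient and sample rate are assumed tuning values, not the RaMstix's actual cutoff:

```python
def differentiate_and_filter(positions, dt, alpha=0.1):
    """Estimate pedal velocity by differentiating encoder positions and
    smoothing the result with a first-order low-pass filter. alpha is an
    assumed tuning value, not the RaMstix's actual cutoff."""
    filtered = 0.0
    velocities = []
    prev = positions[0]
    for p in positions[1:]:
        raw = (p - prev) / dt                  # finite-difference velocity
        filtered += alpha * (raw - filtered)   # exponential smoothing
        velocities.append(filtered)
        prev = p
    return velocities

# A constant slope of 1 rad/s sampled at 1 kHz: the filtered estimate
# converges toward 1.0 rad/s.
vel = differentiate_and_filter([i * 0.001 for i in range(100)], dt=0.001)
print(round(vel[-1], 2))
```

The low-pass stage suppresses the quantization noise that plain differentiation of encoder ticks would otherwise amplify, at the cost of a small lag in the velocity estimate.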

3.3.3 Pedals

Velcro straps have been attached to the pedals in order to secure the feet of the operator and allow him to pull the pedals up. This way the operator is able to steer the platform in all desired directions.

Figure 3.8: Velcro straps to secure the feet of the operator


Results & Discussion

At this point the system has been fully integrated. In this chapter the impact of pedal-based haptic feedback on the SA of an operator steering a robot in a remote area is investigated. First, a set of experiments is executed to test the design and implementation of haptic feedback and obstacle detection & processing. Afterwards, a set of user tests is conducted to evaluate how well the system guides the operator in the remote environment. Each section is structured as follows: first the goal of the experiment is explained, then the design is discussed and the results are presented. A discussion of the results concludes each section.

4.1 Controller

4.1.1 Goal

The goal of the experiment in this section is to obtain a suitable range of spring constants that can be used to study the effect of different feedback strengths on SA in a set of user experiments. To do so, the migrated loop controller will be tested on stability and consistency and compared with experiments previously conducted by Meijer. It is expected that the migrated controller will be more damped and stable due to the reduced network delay, as discussed in the analysis.

4.1.2 Design

The controller is designed such that the damping ratio is always constant while the spring and damping constants can be varied (see eq. 2.13 and 2.14). Figure 4.1 shows four different damping ratios ζ, where

- ζ > 1: overdamped
- ζ = 1: critically damped
- ζ < 1: underdamped
- ζ = 0: undamped


The desired damping ratio is ζ = 1 as this gives the fastest return to the origin position without oscillation, i.e. minimum settling time.

Figure 4.1: Visualization of four damping ratios [1].

In theory, the controller should show similar motions of the pedal for various spring constants. To test the stability and consistency of the controller, the pedal is maximally displaced and afterwards released. This procedure is done four times: first the pedal is pushed down two times and then pulled up two times, giving four deviations. The spring constants that are tested are K_1 = 6, K_2 = 8, ..., K_6 = 16, where K_1 corresponds to no feedback and K_6 to maximum feedback. For this experiment only the right pedal is tested, just as in the experiments of Meijer.

It is expected that the migrated controller will be more damped and stable than the original version due to the reduced network delay. In addition, it is expected that there will be a steady-state error when the pedal returns to its origin due to the lack of an integrator in the controller. The results of Meijer's experiments can be found in the appendix in figures A.1 and A.2.

4.1.3 Results

Figures 4.2 and 4.3 show the results of the experiment for the different spring constants. K1 is the only spring constant for which the pedal merely overshoots and does not start to oscillate after its release. As expected, all signals show a steady-state error when the pedal is released. Increasing the spring constant decreases the steady-state error on average for K1 to K4 and increases the underdamped behaviour from K1 to K6. The graph for K5 first shows the start-up of the pedal and then the response of the system when the pedal is pushed down. Afterwards the pedal is pulled up and released, causing it to overshoot the origin position down to the minimum range of the pedal. The pedal remains at the bottom and is no longer actuated. For K6 a similar but more extreme behaviour can be observed: at start-up the pedal immediately overshoots to its maximum possible range and is then pushed down to its minimum range. Afterwards the pedal is no longer actuated and remains at the bottom of the frame.


Figure 4.2: Output of the pedal position and differentiated velocity with linear displacement of the partially migrated controller for 1) K=6, 2) K=8, 3) K=10

Figure 4.3: Output of the pedal position and differentiated velocity with linear displacement

of the partially migrated controller for 1) K=12, 2) K=14, 3) K=16
