Faculty of Electrical Engineering, Mathematics & Computer Science
Design of haptic feedback
in pedal based UGV teleoperation to enhance situation awareness
Nils Rublein
B.Sc. Thesis Creative Technology July 2019
Supervisors:
dr. ir. D. Dresscher
dr. ir. E. C. Dertien
Robotics and Mechatronics Group
Faculty of Electrical Engineering,
Mathematics and Computer Science
University of Twente
P.O. Box 217
7500 AE Enschede
The Netherlands
This assignment is being carried out in cooperation with i-Botics, a research center for telerobotics at the University of Twente. The goal of i-Botics is to enhance robotic sensing and remote perception for telerobotics. Telerobotics is the control of a robot in a remote environment, combining the cognitive abilities of a human and the physical abilities of a robot. This study set out to investigate the effects of haptic feedback based on obstacle presence in remote UGV control on the situation awareness of the remote operator. Situation awareness is a key factor in telerobotics, as it greatly influences the performance of the human operator and the overall success of the mission. Research has shown that haptic feedback can complement visual information in the context of obstacle avoidance systems and improve the SA of the operator by providing haptic cues indicating the distance and direction to obstacles in the remote environment.
The system used in this assignment comprises two main parts: a robotic platform equipped with mecanum wheels and a robotic arm, and a cockpit that provides audio, video and haptic feedback to guide the operator. The cockpit is furthermore equipped with a pair of pedals that steer the platform and generate haptic feedback based on the distance to an obstacle. The goal of this assignment was to integrate the two systems, provide haptic feedback based on the presence of obstacles around the robotic platform to guide the operator’s decision making, and evaluate the effect of haptic feedback in the context of situation awareness.
The analysis presents a conceptual foundation for the implementation of the system and for answering the main research question. First, the state of the art of situation awareness is discussed and suitable techniques for evaluating situation awareness in the context of telerobotics are proposed. Second, the current state of the system is described and evaluated, which gives a basis for the final realization of the system and possible improvements for individual system components. Third, the theoretical design of an obstacle detection & processing system is discussed, which serves as foundation for generating haptic feedback. Fourth, based on the previous sections, the necessary steps towards a fully integrated system that can be used to evaluate the effect of pedal-based haptic feedback on SA are discussed.
The design & implementation chapter discusses the implementation of the finalized system. The final system comprises two main parts: a robotic platform that scans the remote environment for obstacles and a cockpit that provides haptic feedback via a pair of pedals with which the operator steers the platform. Communication between these two main components is discussed first; afterwards the platform and the cockpit are examined individually.
After the system is fully integrated, its functionality is evaluated and a user test is conducted. First, the haptic feedback controller is evaluated for stability & robustness and a suitable set of spring constants for the user tests is sought. Second, the obstacle detection & processing system is evaluated, as it serves as foundation for the haptic feedback. Third, user tests are conducted to assess the SA of an operator steering a robotic platform in a remote environment guided by the haptic feedback. Due to time constraints and technical issues, only one spring constant is explored. The tests are thus carried out with a between-subjects approach, evaluating a control group with no haptic feedback and an experimental group with haptic feedback. The situation awareness of the participants is evaluated by means of the SART questionnaire as well as an additional set of metrics based on the categories navigation, perception and manipulation.
The results of the user testing show no statistically significant difference in the effect of haptic feedback on the situation awareness of the operator between the experimental group and the control group. However, participants of the control group completed the user tests in significantly less time and also showed on average higher values in the SART questionnaire, indicating a higher level of situation awareness.
In conclusion, despite the efforts made to understand the influence of haptic feedback in pedal-based UGV control on the SA of a human operator, more research is needed to understand how well haptic feedback guides the decision making of the operator and how it should be designed in order to optimally enhance SA. The implementation of haptic feedback for the particular telerobotic system of this study does indicate a higher level of SA, but should be explored in more depth by revising the user studies as described in the analysis in order to obtain more meaningful and significant data.
Summary iii
1 Introduction 1
1.1 Context . . . . 1
1.2 Project goal . . . . 1
1.3 Related work . . . . 2
1.4 Report organization . . . . 3
2 Analysis 5
2.1 Situation awareness . . . . 5
2.1.1 Definition of Situation Awareness . . . . 5
2.1.2 Potential harmful factors for Situation Awareness . . . . 6
2.1.3 Metrics for Situation Awareness in telerobotics . . . . 6
2.1.4 Evaluation techniques for Situation Awareness . . . . 7
2.2 Current state of the system . . . . 8
2.2.1 Hardware . . . . 9
2.2.2 Software . . . . 10
2.3 Obstacle localization . . . . 16
2.3.1 Area of interest . . . . 16
2.3.2 Sensor configuration and coordinate transformation . . . . 17
2.3.3 Collision area . . . . 19
2.3.4 Path length calculation . . . . 20
2.4 Evaluations and Considerations . . . . 26
2.4.1 Haptic Feedback . . . . 26
2.4.2 Obstacle detection & processing . . . . 26
2.4.3 User testings . . . . 27
3 Design and Implementation 29
3.1 Communication . . . . 29
3.2 Platform . . . . 31
3.3 Cockpit . . . . 32
3.3.1 PC . . . . 32
3.3.2 RaMstix . . . . 33
3.3.3 Pedals . . . . 33
4 Results & Discussion 35
4.1 Controller . . . . 35
4.1.1 Goal . . . . 35
4.1.2 Design . . . . 35
4.1.3 Results . . . . 36
4.1.4 Discussion . . . . 39
4.2 Obstacle Detection & Processing . . . . 40
4.2.1 Goal . . . . 40
4.2.2 Design . . . . 40
4.2.3 Results . . . . 41
4.2.4 Discussion . . . . 42
4.3 Situation Awareness . . . . 43
4.3.1 Goal . . . . 43
4.3.2 Design . . . . 43
4.3.3 Results . . . . 47
4.3.4 Discussion . . . . 49
5 Conclusion 51
6 Recommendations 55
6.1 Controller . . . . 55
6.2 Obstacle detection & processing . . . . 55
6.3 User testing . . . . 55
6.4 Usability issues . . . . 56
References 57
Appendices
A Appendix 61
A.1 PD controller tests by Meijer . . . . 61
Introduction
1.1 Context
This assignment is a continuation of the work of Sierd Meijer [1] and Fabian van Hummel [2] and is carried out in cooperation with i-Botics, a joint research and development center founded by TNO and the University of Twente that focuses on telerobotics. Telerobotics is the remote control of a robot, combining the cognitive abilities of a human and the physical abilities of a robot. It is used in many domains, such as the chemical industry, healthcare or defense, and has many applications on land, at sea and in space. i-Botics’ goal is to enhance robotic sensing and situation awareness (SA) for telerobotics by researching alternatives to classical means of visual information technology in modern robotics, such as VR or haptic feedback. SA is an important factor in telerobotics, as the human operator is physically isolated from the robotic vehicle. A lack of SA may lead to deteriorated decision making and performance of the human operator, as exclusively using visual information to simultaneously perform the primary task (e.g. search and rescue) and stay aware of possible impediments to the operation is very challenging [3]. Research has shown that haptic feedback can complement visual information in the context of obstacle avoidance systems and improve the SA of the operator by providing additional haptic cues indicating the distance and direction to the obstacle [4].
1.2 Project goal
The goal of this assignment is to design and implement a system that provides haptic feedback based on the presence of obstacles around a remote robotic platform to guide the operator’s decision making and enhance their SA and overall performance. The system comprises two main parts: first, haptic feedback that is generated by the pedals that steer the robotic platform and, second, an obstacle localization system that serves as basis for the feedback. The main research question of this assignment is:
“How to design haptic feedback based on an obstacle detection system to improve the situation awareness of the human operator in remote UGV control?”
In addition, the following sub-questions will be investigated:
What is the state of the art of the current system? Which elements are already implemented and which are missing, in order to have a fully integrated system that can be used to study the effect of haptic feedback on SA?
What obstacles are relevant for the scope of this assignment and how can they be detected?
How can haptic feedback be generated based on obstacle location and how can it be used to help avoid relevant obstacles?
How should the SA of the operator be evaluated? How do user tests need to be designed to obtain reliable and meaningful data?
1.3 Related work
Relating one’s own work to existing work in the field is important, as it indicates relevance but also gives an opportunity to compare results and draw corresponding conclusions.
Using haptic feedback to guide the decisions of a UGV operator and improve their SA has already been done in a number of studies. For example, Luz et al. [3] compare the use of a tactile tablet, a traction cylinder and a vibrotactile glove to improve traction awareness of a UGV operator. The study shows that two of the haptic devices significantly increase the SA of the operator with regard to the traction state of the UGV when compared to the exclusive use of visual information. The study conducted by Corujeira et al. [5] investigates the use of directional force feedback with a gaming controller that exerts haptic feedback via simulation-based obstacle detection and processing. The study shows that directional force feedback significantly improves the time needed to complete the navigation tasks and decreases the duration of collisions in comparison to the control group.
When looking at pedal-based haptic feedback, there are plenty of studies focusing on automobiles, for example to improve risk-predictive driving [6], [7] or to encourage eco-friendly driving [8], [9], but there is little to almost no research on haptic feedback pedals with regard to telerobotic systems. A study that does use haptic feedback pedals in remote robotic systems was conducted by Kim et al. [10], who investigated applying haptic feedback to a set of pedals that can be manipulated via translation and rotation. Haptic feedback in this case is also based on obstacle detection; however, the difference to this thesis is that there the pedals are meant to capture the movement of the user and simulate walking in a remote environment. Furthermore, that study only tested the functional requirements of the pedals and not how effective they are in guiding a user based on obstacle information.
To conclude, haptic feedback in general is well researched, but there is little existing work in the field of remote telerobotic systems that uses pedal-based haptic feedback. Studies that investigate the effect of obstacle location based haptic feedback, like [3] and [5], may be used for comparison with the results of this study when assessing how effective the system is in improving the SA of the operator. The study of [10] may be used to compare the functionality of the pedals with regard to how effective they are in steering the robotic platform. The results of this study can then be used on the one hand to fine-tune the individual components of this particular system, and on the other hand to serve as a guideline for designing haptic feedback that improves the SA of the operator.
1.4 Report organization
The remainder of the report is structured as follows: The analysis (Chapter 2) includes the theoretical knowledge that is necessary to answer the main research question; it covers research about SA, the current state of the existing system and obstacle localization, and discusses the necessary steps towards a fully integrated system that serves as basis to evaluate the effect of pedal-based haptic feedback on SA. Chapter 3 contains the conceptual design and implementation of the system based on the analysis, including a detailed description of hardware and software. Then, in Chapter 4 (Results & Discussion), the system is tested based on the project goals and subsequently evaluated. Finally, Chapter 5 presents a conclusion based on the observations of this report, and Chapter 6 gives recommendations for future work.
Analysis
This chapter presents a conceptual foundation for the implementation of the system as well as the theoretical knowledge necessary for answering the main research question and the respective sub-questions. First, the state of the art of SA is discussed in order to obtain a general understanding of what SA is, what common pitfalls are when designing human-robot interfaces and how to assess and evaluate the SA of a user. Second, the current state of the system is described and evaluated, which gives a basis for the final realization of the system and possible improvements for individual components of the setup. Third, in order to generate haptic feedback based on the presence of obstacles around the platform, the theoretical design of an obstacle detection & processing system is discussed. Lastly, based on the previous sections, the necessary steps towards a fully integrated system that can be used to evaluate the effect of pedal-based haptic feedback on SA are discussed.
2.1 Situation awareness
In order to design haptic feedback with a focus on improving the SA of an operator, a general understanding of SA and how to evaluate it is needed. Thus, first a general definition of SA is given. Second, potentially harmful factors for SA, with attention to telerobotic systems, are discussed in order to improve the current design of the system. Once a device is realized, a relevant set of metrics needs to be determined in order to evaluate the SA of the operator; thus, suitable metrics for evaluating telerobotic systems in the context of obstacle location based haptic feedback are presented in the third part of this section. Finally, the fourth part of this section discusses several techniques that can be used to evaluate SA once a device has been realized and a suitable set of metrics has been determined.
2.1.1 Definition of Situation Awareness
SA was defined by Endsley in 1995 as the “perception of the elements in the environment within a volume of time and space, the comprehension of their meaning and the projection of their status in the near future” [11]. Endsley divides SA into three levels: the first and lowest level is perception of the environment. The second level is comprehension of relevant data and applying it in the context of the mission. The third and highest level is projection of the environment and its possible system states.
2.1.2 Potential harmful factors for Situation Awareness
When designing interfaces or systems there are several factors that can potentially harm the SA of the operator. Endsley defined a set of general factors called the “8 demons” that include for example data overload, attentional narrowing, workload, fatigue and other stressors [12].
Additional factors in the context of telerobotics that are relevant for the scope of this assignment include delays and scale & spatial ambiguities when perceiving a remote environment.
Multiple studies have found that time delays may lead to decreased operator performance, degraded driving and tracking, as well as reduced telepresence and over-actuation when the delay is variable [13]–[18]. In addition, Tittle et al. [19] state that, in contrast to a directly and naturally perceived environment, humans can experience complications perceiving the scale of a remote environment in relation to their own capabilities for movement, as well as tracking their own spatial location based on visual information in that environment. Feedback information via the vestibular system of the human body, which indicates the acceleration of one’s own body and provides a natural scaling for distances in an environment, is lost, as the human perceptual processor is decoupled from the environment that is being explored. Tittle et al. suggest to “identify and implement perceptual cues that can augment the remote video stream and allow the human perceiver to compensate for the absence of the complex combination of naturally occurring information (e.g., muscular and vestibular feedback) that would exist if he were actually investigating the environment.” As an example, multiple studies have shown that haptic feedback can complement visual data for telerobotic systems by providing additional perceptual cues about the environment, such as attitude perception, traction awareness or distance to obstacles, and thus provide spatial and situational awareness that would otherwise be missing [3], [5], [20].
To conclude, general potentially harmful factors for SA include data overload, attentional narrowing, workload and fatigue. Factors with a focus on telerobotics that are relevant for the scope of this assignment include delays as well as scale and spatial ambiguities. Therefore, the haptic feedback design of this system should minimize delays to mitigate possible impediments to the SA of the operator. Furthermore, the design of the feedback should complement the visual information that is displayed on the screen of the cockpit, to provide the user with a beneficial spatial sense regarding obstacles around the platform and to “unburden the visual channel by using other human senses to improve SA” [3].
2.1.3 Metrics for Situation Awareness in telerobotics
Once the device is designed, a relevant set of metrics needs to be determined in order to facilitate an objective evaluation of the SA of the operator. According to Steinfeld et al. [21], key metrics in human-robot interaction can be assigned to five categories: navigation, perception, management, manipulation and social. In the context of designing haptic feedback based on obstacle localization, the remainder of this section focuses on navigation, perception and manipulation. Note that Steinfeld et al. describe manipulation as controlling and steering the robot, whereas navigation is described as path planning and dealing with environmental impediments encountered on the way.
Metrics for navigation include effectiveness measures such as the number of obstacles that were successfully avoided or deviations from a planned route [22]; efficiency measures such as the time needed to complete the task or the average time needed for obstacle extraction [23]; and non-planned navigation effort measures, including the number of operator interventions [24] and the ratio of operator to robot time to successfully initialize and execute a task of the robot [25].
Metrics for perception can be divided into passive perception, the interpretation of received sensor data, and active perception, the fusing of multiple sensor readings to make an inference about the environment [26]. Measures for passive perception include detection measures (e.g. signal detection, detection by object orientation), recognition measures (e.g. classification accuracy, confusion matrices), judgment of extent measures (accuracy of quantitative judgments about the environment) and judgment of motion measures (accuracy with which movement of objects in the environment is judged) [21]. Active perception measures include active identification (performance on recognition tasks involving mobility) and active search (performance on search tasks involving mobility) [21].
Finally, manipulation metrics include the degree of mental computation needed to execute a task, such as mental rotation or mental short- and long-term memory [27]. In addition, the number of collisions and the type of contact error (hard/soft, glancing, etc.) are key metrics in manipulation tasks, especially for systems that involve obstacle avoidance, as they are highly indicative of the situation awareness and performance of the operator [21].
2.1.4 Evaluation techniques for Situation Awareness
Finally, when the interface is designed and suitable metrics for evaluation have been determined, the system can be evaluated via various evaluation techniques that focus on SA. Hjelmfelt and Pokrant [28] state that these methods can be divided into three categories: 1) subjective, where subjects rate their own SA; 2) implicit performance, measuring subjects’ task performance under the assumption that it is related to their SA; and 3) explicit performance, where subjects’ SA is directly evaluated during short interruptions of the experiment. Two of the most popular evaluation techniques for SA are the Situation Awareness Global Assessment Technique (SAGAT) and the Situation Awareness Rating Technique (SART). SAGAT, developed by Endsley [29], is an objective, explicit performance evaluation technique that randomly freezes mission or task simulations and asks subjects queries about their SA while the displays and interfaces are blanked. SART, in contrast to SAGAT, is a subjective form of evaluating the subject’s SA post trial, based on 10 criteria [29]. These criteria include for example complexity of the situation, focusing of attention, information quantity or spare mental capacity. The subjects are asked to rate each criterion on a scale from 1 to 7, where 1 is the lowest rating and 7 the highest. The ratings are afterwards combined to determine the subject’s SA.
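The combination step can be sketched with the conventional grouping of the 10 SART criteria into three dimensions, where the score is computed as SA = U − (D − S), i.e. understanding minus the difference between attentional demand and attentional supply. The dimension names below follow the standard SART questionnaire and are an illustration, not necessarily the exact grouping used in this study:

```python
# Sketch of combining 10-D SART ratings (1-7 each) into a single score via
# SA = U - (D - S). The criterion names and grouping follow the conventional
# SART dimensions and are assumed here for illustration.

DEMAND = ["instability", "complexity", "variability"]                            # D
SUPPLY = ["arousal", "concentration", "attention_division", "spare_capacity"]    # S
UNDERSTANDING = ["information_quantity", "information_quality", "familiarity"]   # U

def sart_score(ratings: dict) -> int:
    """ratings maps each of the 10 criteria to an integer from 1 to 7."""
    d = sum(ratings[k] for k in DEMAND)
    s = sum(ratings[k] for k in SUPPLY)
    u = sum(ratings[k] for k in UNDERSTANDING)
    return u - (d - s)

# Neutral ratings of 4 everywhere give 12 - (12 - 16) = 16:
neutral = {k: 4 for k in DEMAND + SUPPLY + UNDERSTANDING}
print(sart_score(neutral))  # 16
```

Note that, because supply has one more criterion than demand and understanding, even uniform ratings yield a non-zero score; only relative comparisons between conditions are meaningful.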
Both SAGAT and SART have advantages and disadvantages that have to be considered when evaluating SA. Advantages of SART are that it is quick and easy to execute, while being applicable to various domains and non-intrusive to task performance, as it is administered post trial [30], [31]. On the other hand, Endsley [32] states that SART is highly correlated with task performance: a participant that performs well would give himself a good SA rating, while a participant that performs badly would give himself a bad rating.
In addition, it seems that the memory of participants regarding detailed information about past events is relatively poor and that post-trial questionnaires only record the actual SA of the participant at the end of the task. In contrast, SAGAT avoids issues with collecting data post trial by asking the subjects directly [33]. In addition, it provides diagnostic data on how well the system assists the SA of the subject [32] and has demonstrated reliability and validity in various domains [30]. However, Salmon et al. [30] also point out that the interruptions diminish the flow of the task and are rarely executable in non-simulated situations. Furthermore, the analysis requires extensive preparation and the outcome of the SAGAT evaluation culminates in multiple variables rather than one single SA value; combining these values to obtain a truthful representation can be challenging [33].
For the scope of this assignment and the time that is available, SART will be used as indicator for SA, as it requires less time and resources to execute than SAGAT. However, SART does show some critical points which will influence the results of the user experiments and therefore need to be accounted for. SART is based on the self-assessment of the participants and can thus introduce bias through its subjective nature. To account for this bias, a set of metrics based on navigation, perception and manipulation will be chosen in section 4.3.2 to facilitate an objective evaluation of SA. In addition, as stated above, participants can forget detailed information throughout the experiment; therefore the experiments will be designed to last only a relatively short amount of time in order to record the true SA of the participant.
2.2 Current state of the system
In this section the state of the current system is described and evaluated. The UGV is a robotic platform that uses a Segway RMP Omni 50 as base and is furthermore equipped with a KUKA LWR 4+ arm and a ReFlex TakkTile gripper, as depicted in figure 2.1. At the operator’s side, the Leo Universal Cockpit (LUC) provides audio, video and haptic feedback to guide the operator. A pair of pedals can be used to navigate the platform and generate haptic feedback to guide the operator based on the location of obstacles. The pedals themselves have been designed by Martijn de Roo [34], while the hardware and software for the haptic feedback have been designed and implemented by Sierd Meijer [1]. An overview of the current system can be found in figure 2.2, which shows that obstacle detection & processing is currently implemented only in a simulation.
Figure 2.1: LUC cockpit and robotic platform [1].
Figure 2.2: Overview schematic of the current system.
The following subsections on hardware and software recapitulate and discuss the design and implementation of the haptic feedback by Meijer.
2.2.1 Hardware
The pedals are mounted to the base of the LUC via a BOIKON profile [35] and can be moved downwards and upwards in order to move either side of the robotic platform in two directions. The pedals have a rotational movement range of 30.2 degrees and an offset of 21.86 degrees. In order to generate haptic feedback via the pedals, a set of accelerometers, motors, optical rotary encoders and Elmo Whistles is used, controlled via a RaMstix, an FPGA board developed by RaM. An overview of the hardware setup can be found in figure 2.3.
The system uses MMA7260Q accelerometers [36] and HEDS5540 optical rotary encoders [37] as sensors to determine the position, velocity and acceleration of the pedals. The accelerometers have a ratio of 800 mV/g at 1.5 g sensitivity and the encoders have a resolution of 1.85E-3 degrees per encoder step, given the rotational movement range of the pedals. The output signal of these sensors is used via the RaMstix to control two Maxon RE 50 motors [38] by sending an analog voltage signal to the Elmo Whistles that regulate the current going to the motors. The motors then actuate the pedals via a pulley belt system, providing haptic feedback to the operator.
The signal of the accelerometers is filtered, integrated and compensated for gravity to obtain the velocity of the pedals. The acceleration signal from the accelerometers is likely to contain an offset and high frequencies, which will cause the integrated signal to drift. Drift typically has low frequencies and is in the current system partially dealt with by gravity compensation and digital high-pass filters. For this project, new ADXL 335 accelerometers by Sparkfun [39] have been placed under the pedals and need to be calibrated accordingly.

Figure 2.3: Schematic overview of the old hardware layout.
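The drift problem described above can be illustrated with a minimal sketch: integrating a biased acceleration signal directly produces an ever-growing velocity estimate, whereas high-pass filtering the signal first keeps the estimate bounded. The sample rate and cutoff frequency below are assumed values, not taken from the actual implementation:

```python
import math

# Sketch: velocity estimation from a biased acceleration signal using a
# first-order digital high-pass filter followed by integration. FS and FC
# are illustrative assumptions, not the values used in the thesis.

FS = 100.0          # sample rate in Hz (assumption)
DT = 1.0 / FS
FC = 0.5            # high-pass cutoff in Hz (assumption)
ALPHA = 1.0 / (1.0 + 2.0 * math.pi * FC * DT)  # first-order high-pass coefficient

def estimate_velocity(accel_samples):
    """Integrate high-pass-filtered acceleration (m/s^2) to velocity (m/s)."""
    v = 0.0
    hp = 0.0
    prev_a = accel_samples[0]
    for a in accel_samples[1:]:
        hp = ALPHA * (hp + a - prev_a)   # suppresses the constant offset (drift source)
        prev_a = a
        v += hp * DT
    return v

# A constant 0.1 m/s^2 offset would integrate to 1.0 m/s after 10 s without
# filtering; the high-pass filter drives its contribution towards zero.
biased = [0.1] * int(10 * FS)
print(abs(estimate_velocity(biased)) < 0.01)  # True
```

The trade-off is that the high-pass filter also attenuates slow genuine pedal motion, which is why the system combines it with gravity compensation rather than relying on filtering alone.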
2.2.2 Software
The software that controls the pedals and provides haptic feedback has been implemented with the Robot Operating System (ROS) framework and consists of four nodes: a pedal driver, pedal interpretation, controller and obstacle interpretation node. An overview of the software architecture can be found in figure 2.4:
Figure 2.4: Schematic overview of the software architecture including the signal flows [1].
2.2.2.1 Pedal driver node
The pedal driver node requests the sensor values from the RaMstix and converts them into SI units, which are then used by the other ROS nodes. Gravity compensation of the acceleration, as well as the differentiation, integration and filtering needed to estimate the velocity of the pedals, is done in this node as well.
2.2.2.2 Obstacle interpretation node
The obstacle interpretation node outputs the distance to the obstacle and a force distribution based on the relative angle to the obstacle. This node receives two vectors containing the location of the obstacle with respect to the orientation of the platform: X_o, which is parallel, and Y_o, which is perpendicular to the direction of the platform, as depicted in figure 2.5:
Figure 2.5: Calculation of the obstacle distance based on the input vectors [1].
The distance to the obstacle l_o is calculated via Pythagoras’ theorem as the magnitude of the vectors X_o and Y_o, as described in equation 2.1:

l_o = \sqrt{X_o^2 + Y_o^2} \quad (2.1)
In order to calculate the force distribution, the angle to the obstacle needs to be calculated first:

\theta_o = \arctan\left(\frac{X_o}{Y_o}\right) \quad (2.2)

where \theta_o is the angle to the obstacle in radians. Equations 2.3 to 2.6 are used to calculate the force distribution for the left pedal U_L on a scale from 0 to 1.

For \theta_o between 0 and 0.5\pi:

U_L = \frac{0.5 \cdot \theta_o}{0.5\pi} \quad (2.3)

For \theta_o between 0.5\pi and \pi:

U_L = \frac{0.5 \cdot \theta_o}{0.5\pi} \quad (2.4)

For \theta_o between \pi and 1.5\pi:

U_L = 0.5 + \frac{0.5 \cdot (0.5\pi - |\theta_o|)}{0.5\pi} \quad (2.5)

For \theta_o between 1.5\pi and 2\pi:

U_L = 1 - \frac{0.5 \cdot (0.5\pi - \theta_o)}{0.5\pi} \quad (2.6)

The force distribution of the right pedal U_R can be calculated via equation 2.7:

U_R = 1 - U_L \quad (2.7)
However, it is important to note that this approach does not consider whether the obstacle is actually in the path of motion and thus relevant, nor does it account for multiple obstacles and determine on which obstacle it should give feedback. In addition, the calculation of the force distribution is more related to the generation of haptic feedback than to interpreting the location of an obstacle and should thus be moved to the controller node.
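The node’s computation can be sketched as a direct transcription of equations 2.1 to 2.7. All names are illustrative placeholders, not the actual ROS code; the piecewise branches follow the printed equations literally, with the result clamped to the 0 to 1 scale:

```python
import math

# Illustrative transcription of equations 2.1-2.7 for the obstacle
# interpretation node. Names are placeholders, not the actual implementation.

HALF_PI = 0.5 * math.pi

def obstacle_feedback(x_o, y_o):
    """Return (distance, U_L, U_R) from the obstacle vectors X_o and Y_o."""
    l_o = math.hypot(x_o, y_o)                       # eq. 2.1: distance to obstacle
    theta_o = math.atan2(x_o, y_o) % (2 * math.pi)   # eq. 2.2, wrapped to [0, 2*pi)
    if theta_o < math.pi:                            # eqs. 2.3 and 2.4
        u_l = 0.5 * theta_o / HALF_PI
    elif theta_o < 1.5 * math.pi:                    # eq. 2.5
        u_l = 0.5 + 0.5 * (HALF_PI - abs(theta_o)) / HALF_PI
    else:                                            # eq. 2.6
        u_l = 1.0 - 0.5 * (HALF_PI - theta_o) / HALF_PI
    u_l = min(max(u_l, 0.0), 1.0)                    # keep on the 0..1 scale
    u_r = 1.0 - u_l                                  # eq. 2.7
    return l_o, u_l, u_r
```

As the surrounding text notes, this mapping handles only a single obstacle; a multi-obstacle variant would first have to select which detection to report on.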
2.2.2.3 Pedal interpretation node
The pedal interpretation node subscribes to the pedal driver node to obtain the position of the pedals and converts the pedal orientation into a twist with two degrees of freedom. A twist expresses the velocity of a rigid body in terms of its angular and linear velocity components. The pedal positions are first converted to the displacement in percent with respect to the origin, as described in equation 2.8:

P = \frac{100 \cdot (\theta_p - \theta_{min})}{\theta_{max} - \theta_{min}} - 50 \quad (2.8)

where P is the displacement percentage between ±50% of one pedal, \theta_p the current pedal position, and \theta_{max} and \theta_{min} the maximum and minimum position of the pedal in radians.
The linear and angular velocities are calculated via the sum and the difference of both displacement percentages, as shown in equations 2.9 and 2.10:

v_l = \frac{v_{max} \cdot (P_L + P_R)}{100} \quad (2.9)

v_r = \frac{v_{max} \cdot (P_L - P_R)}{100} \quad (2.10)

where v_l is the linear velocity, v_r the angular velocity, v_{max} the maximum velocity of the robotic platform, and P_L and P_R the displacement percentages of the pedals. Once the twist is calculated, the pedal interpretation node sends it to the controller node, which then uses this data to drive the robotic platform.
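The conversion from pedal angles to a velocity command can be sketched as follows; the function names and the v_max value are illustrative assumptions, not taken from the actual node:

```python
# Sketch of equations 2.8-2.10: pedal angles to displacement percentages,
# then to linear and angular velocity commands. V_MAX is an assumed value.

V_MAX = 1.0  # maximum platform velocity (assumption)

def displacement_pct(theta_p, theta_min, theta_max):
    """Eq. 2.8: pedal angle (rad) to displacement percentage in [-50, +50]."""
    return 100.0 * (theta_p - theta_min) / (theta_max - theta_min) - 50.0

def twist(p_left, p_right, v_max=V_MAX):
    """Eqs. 2.9-2.10: displacement percentages to (linear, angular) velocity."""
    v_lin = v_max * (p_left + p_right) / 100.0
    v_ang = v_max * (p_left - p_right) / 100.0
    return v_lin, v_ang

# Both pedals fully forward: full linear velocity, no rotation.
print(twist(50.0, 50.0))  # (1.0, 0.0)
```

Pushing one pedal forward and the other backward gives a pure rotation, which matches the intuition of differential steering.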
2.2.2.4 Controller node
The controller node receives the pedals’ position and velocity from the pedal driver node and the distance to the obstacle and the force distribution between the pedals from the obstacle interpretation node and calculates the force that the pedals will exert. Force-stiffness feedback has been chosen as basis for the design of the feedback as it allows to adequately indicate the distance to an obstacle while at the same time providing guidance based on the objects around the platform as well as reactive feedback, that is how the system acts on the input from the user [1], [40]–[43]. The force-stiffness feedback in this system makes use of a spring constant that is based on the distance to an obstacle as well as the displacement of the pedal. Small displacements of the pedals will lead to small exerted forces which may not provide sufficient haptic information to the operator, therefore a force offset is being added to compensate for small displacements. An adaption of Hooke’s law, incorporating force stiffness feedback can be found in equation 2.11:
F_spring = K · ((x_rest + x_offset) − x)    (2.11)

where F_spring is the output force, K the varying spring constant, x_rest the origin position, x_offset the change in origin position and x the current position.
The controller is designed in such a way that the pedals behave like a damped harmonic oscillator, where a virtual spring is used to bring the pedals back to their origin position. This is described by equation 2.12:
F = F_s + F_d = −K·θ_p − D·ω_p    (2.12)
where the spring force F_s is given by the spring constant K and the displacement of the spring θ_p, and the virtual damper force F_d, acting as friction, is given by the damping constant D and the velocity of the pedal ω_p. The spring force is negative as the exerted force should act opposite to the displacement of the spring. The damping ratio ζ is used to tune the pedal and is given by equation 2.13:
ζ = D / (2·√(I·K))    (2.13)
where ζ is the damping ratio, D the damping constant, I the inertia of the pedal and K the spring constant. Setting ζ to 1 gives the pedals critically damped behaviour, returning them to their origin position as quickly as possible while preventing unwanted oscillations. The damping constant D can then be rewritten via equation 2.14:
D = 2·√(I·K)    (2.14)
Consequently, equation 2.12 can then be rewritten as equation 2.15:
F = −K·θ_p − 2·ω_p·√(I·K)    (2.15)
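The critically damped behaviour of equation 2.15 can be verified numerically; a minimal simulation sketch, where the inertia and spring values are arbitrary examples rather than the actual pedal parameters:

```python
import math

def simulate_pedal(theta0, inertia, spring_k, dt=1e-4, t_end=2.0):
    """Semi-implicit Euler simulation of I*a = -K*theta - 2*omega*sqrt(I*K) (eq. 2.15)."""
    damping = 2.0 * math.sqrt(inertia * spring_k)  # eq. 2.14, i.e. zeta = 1
    theta, omega = theta0, 0.0
    min_theta = theta0
    for _ in range(int(t_end / dt)):
        alpha = (-spring_k * theta - damping * omega) / inertia
        omega += alpha * dt
        theta += omega * dt
        min_theta = min(min_theta, theta)
    return theta, min_theta
```

With ζ = 1 the pedal returns to the origin without overshooting: the minimum position never drops noticeably below zero.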
Next, in order to operate the pedals with and without haptic feedback, the spring constant K is split up into the base spring constant K_b and the spring increase constant K_i, as shown in equation 2.16:

K = K_b + K_i    (2.16)
Hence, in order to operate the pedals without haptic feedback, K_i is set to 0, while K_b should be set large enough to provide the minimum force required to return the pedals to their origin position. In addition, to counter gravity as a result of the weight of the pedals, the force offset F_off is added to the output force of the controller. Thus, when no haptic feedback is used, the controller function is given by equation 2.17:
F = −K_b·θ_p − 2·ω_p·√(I·K_b) + F_off    (2.17)
When using haptic feedback, K_i is set to a non-zero value based on the distance and location of an obstacle, the platform's velocity V and the force distribution U that is obtained from the obstacle interpretation node, as depicted in equation 2.18:
K_i = K_l · U · V    (2.18)
K_l in this case is the increase in the spring constant based on the distance l_o to the obstacle, with tunable parameters m, n and p, as depicted in equation 2.19:

K_l = m / (l_o + n) + p    (2.19)
The bigger the distance to the obstacle, the smaller the increase, and vice versa. If obstacles are outside the maximum range of the sensors, K_i should be 0 and consequently no feedback should be given; hence p is used as a negative offset so that K_i becomes 0 when l_o equals the maximum range of the sensors being used. The parameters n and m are used to limit the amount of feedback that is given: e.g. if l_o were 0, K_i should not approach infinity, as the user should still be able to operate the pedals. Next, the velocity term V of the platform is given by equation 2.20:
V = 0.5 · |v_r| / v_max + 0.5    (2.20)
where |v_r| is the magnitude of the current velocity and v_max the maximum velocity of the platform. As the feedback of the spring increase needs to be noticeable even at low platform velocities, an offset of 0.5 is added; this way the velocity of the platform is mapped to a scale from 0.5 to 1. The final controller equation used to generate haptic feedback via the pedals is given by equation 2.21:
F_out = −(K_b + K_i)·θ_p − 2·ω_p·√(I·(K_b + K_i)) + F_off    (2.21)
where F_out is the output force, K_b the base spring constant, K_i the spring constant increase, θ_p the pedal position, ω_p the pedal velocity, I the pedal inertia and F_off the force offset. A complete overview of the controller with its respective in- and outputs can be found in figure 2.6:
Figure 2.6: Schematic overview of the controller [1].
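The controller computation of equations 2.16–2.21 can be collected into one function; a minimal sketch in which all names, and the clamping of K_i to non-negative values, are illustrative choices rather than details of the actual controller node:

```python
import math

def pedal_force(theta_p, omega_p, inertia, k_base,
                l_obstacle, u_dist, v_platform, v_max,
                m, n, p, f_offset, haptics=True):
    """Output force of the pedal controller (eqs. 2.16-2.21)."""
    if haptics:
        k_l = m / (l_obstacle + n) + p           # distance-based increase (eq. 2.19)
        v = 0.5 * abs(v_platform) / v_max + 0.5  # velocity scaled to [0.5, 1] (eq. 2.20)
        k_i = max(0.0, k_l * u_dist * v)         # eq. 2.18, clamped beyond sensor range
    else:
        k_i = 0.0                                # reduces to eq. 2.17
    k = k_base + k_i                             # eq. 2.16
    damping = 2.0 * math.sqrt(inertia * k)       # critical damping (eq. 2.14)
    return -k * theta_p - damping * omega_p + f_offset  # eq. 2.21
```

Choosing p = −m / (l_max + n) makes K_i vanish exactly at the maximum sensor range l_max, so the force there equals the feedback-free force of equation 2.17, while closer obstacles stiffen the pedal.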
Using the combined velocity signal from the encoder and the accelerometer, the controller is able to maintain a constant damping ratio by adjusting the damping constant based on the spring constant. However, as the output force of the controller pushes the pedals back to their origin position, a distinct steady-state error is produced, since the controller does not make use of an integrator. An integrator would actively work against the operator the harder they press against the pedals and is therefore not used.
Furthermore, it was observed that increasing the spring constant led to a decreased steady-state error, but at the same time the system showed increased underdamped behaviour.
It is expected that most of these problems are caused by delay introduced by the architecture of the system: sensor values read by the RaMstix first pass through the driver node, where they are converted to SI units, before being sent to the controller node. In addition, the same connection is used to send the output force of the controller to the motors of the pedals, creating a significant delay between measurement and actuation. Delay leads to phase shift, which reduces the phase margin. The phase margin is the additional phase shift that can be tolerated before the signal reaches a phase shift of −180 degrees at a gain of 0 dB, as depicted in figure 2.7:
Figure 2.7: Phase and gain margin for an arbitrary system.
At this point the system is unstable, as the open-loop transfer function H(s)G(s) equals −1, meaning that the closed-loop transfer function G(s)/(1 + H(s)G(s)) will approach infinity. It is expected that redesigning the system architecture by moving the controller to the RaMstix will reduce delays in the system and lead to less underdamped and more stable behaviour of the pedals.
2.3 Obstacle localization
The obstacle detection & processing approach for this system builds on the research by Fabian van Hummel [2]. In order to determine the distance to the most relevant obstacle as input to the haptic controller, the general area of interest around the platform needs to be determined first, which then gives a basis for a possible sensor configuration. Next, a collision area needs to be derived, as haptic feedback should only be given for obstacles that are in the path of motion. The last step is then to determine the distance to the closest obstacle.
2.3.1 Area of interest
The area of interest depends on the motion capabilities of the platform and the minimum travel time. The platform is meant to be driven differentially, which means it can move forwards and backwards, but also rotate around its axis, as depicted in figure 2.8. Any combination of these movements will result in a circular path of motion with a certain radius. The minimum travel time is the minimum time frame in which the operator should experience haptic feedback prior to a collision; this has been defined as 5 seconds. Based on the motion capabilities of the platform and the minimum travel time, the area of interest has been determined as shown in figure 2.9:
Figure 2.8: Motion capabilities of the platform [2].
Figure 2.9: Area of interest given a 5 second minimum travel time [2].
2.3.2 Sensor configuration and coordinate transformation
Based on the area of interest, a configuration for the sensors can be determined. The sensors that will be used in this assignment are URG-04LX-UG01 LIDAR sensors. These sensors illuminate their environment with pulsed laser light and measure the reflected pulses to determine the distance to an obstacle. The sensor scans an area of up to 240° and has a maximum range of up to 5.6 meters with 1 mm resolution. The angular resolution is 0.36° and the maximum divergence of the laser beam is 4 cm from a measuring distance of 4 m onwards. This allows it to measure the distance and the angle to an obstacle very precisely in two dimensions. As only two sensors are available at this moment, they will be placed at the top corners as depicted in figure 2.10. This way almost all of the area of interest to the front, left and right can be measured.
Figure 2.10: URG-04LX-UG01 sensor configuration.
The next step is to express each received sensor value in the base frame depicted in figure 2.11, as the sensors measure distances with respect to their own location and orientation. Representing the sensor readings in a uniform frame will make further obstacle processing easier. The base frame orientation shown in figure 2.11 has been chosen because the IMU of the platform uses the same reference frame. This is important as the IMU provides the orientation and acceleration of the platform, and thus also its velocity.
Figure 2.11: Base frame of the platform (note that the actual radius of the sensors is larger than depicted here).
In order to express the sensor readings in the base frame, the homogeneous transformation matrix H will be used, as described in equation 2.22:

H(x, y, θ) = ⎡ cos(θ)   sin(θ)   x ⎤
             ⎢ −sin(θ)  cos(θ)   y ⎥
             ⎣   0        0      1 ⎦    (2.22)
where x and y represent the coordinates of the origin of the original frame expressed in the new frame, and θ represents the angle offset of the original frame relative to the new frame. This way a point P measured in the sensor frame s can be expressed as a point in the base frame b, as described in equations 2.23 and 2.24:
ᵇP = Hₛᵇ · ˢP    (2.23)

⎡ b_x ⎤   ⎡ cos(θ)   sin(θ)   x ⎤ ⎡ s_x ⎤
⎢ b_y ⎥ = ⎢ −sin(θ)  cos(θ)   y ⎥ ⎢ s_y ⎥
⎣  1  ⎦   ⎣   0        0      1 ⎦ ⎣  1  ⎦    (2.24)
The last step is to determine the angle offset θ from each sensor frame to the base frame, as well as the Cartesian coordinates of the sensors with respect to the origin of the base frame, which is the location of the IMU. Given that a counterclockwise rotation results in a positive angle, sensor 1 has an angle offset of 30° and sensor 2 an angle offset of 150°, while the Cartesian coordinates are (0.29, 0.28) for sensor 1 and (0.29, −0.28) for sensor 2, in meters.
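Transforming a single LIDAR reading into the base frame then amounts to one matrix multiplication; a minimal sketch of equations 2.22–2.24 using the sensor offsets stated above, where the polar-to-Cartesian step is an assumption about the raw sensor output:

```python
import math

def to_base_frame(r, phi, x_s, y_s, theta_off):
    """Express a polar sensor reading (r, phi) taken in a sensor frame in the
    platform base frame via the homogeneous transform H (eqs. 2.22-2.24)."""
    # Point in sensor-frame Cartesian coordinates
    s_x, s_y = r * math.cos(phi), r * math.sin(phi)
    c, s = math.cos(theta_off), math.sin(theta_off)
    # [b_x, b_y, 1]^T = H(x, y, theta) * [s_x, s_y, 1]^T
    b_x = c * s_x + s * s_y + x_s
    b_y = -s * s_x + c * s_y + y_s
    return b_x, b_y

# Offsets from section 2.3.2: sensor 1 at (0.29, 0.28) m with a 30 degree
# offset, sensor 2 at (0.29, -0.28) m with a 150 degree offset
SENSOR_1 = (0.29, 0.28, math.radians(30))
SENSOR_2 = (0.29, -0.28, math.radians(150))
```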
2.3.3 Collision area
The collision area is the path of motion of the platform given a certain minimum travel time.
Any movement of the platform can be described as a motion over an arc with a certain radius and angle; the collision area is therefore defined by the radius and angle ranges for a certain movement, visualized as the red area in figure 2.12:
Figure 2.12: Collision area [2].
In order to calculate the radius and angle ranges, r_cor, the radius from the origin of the base frame to the center of rotation (P0), as well as the dimensions and the velocity of the platform are used. The radius boundaries are determined via equations 2.25 and 2.26:
r_P1 = r_min = r_cor − w/2    (2.25)

r_P2 = r_max = √((r_cor + w/2)² + h_2²)    (2.26)
where w is the width of the platform, h_2 the lower height and h_1 the upper height of the platform. Next, the angle boundaries are determined. These depend on the quadrant in which the platform is moving, as shown in equations 2.27 and 2.28:
Quadrants 1 and 4:
θ_min = 2π − sin⁻¹(h_2 / r_max),    θ_max = |v_x| · t / r_cor    (2.27)

Quadrants 2 and 3:
θ_min = −2π + sin⁻¹(h_1 / r_p3),    θ_max = −|v_x| · t / r_cor    (2.28)
where h_2 is the lower height of the platform, v_x the forward velocity of the platform obtained from the IMU, t the minimum travel time and r_p3 is given by equation 2.29:
r_p3 = √((r_cor + w/2)² + h_1²)    (2.29)
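Putting equations 2.25–2.29 together, the collision-area boundaries can be sketched as below; the mapping of the forward/backward cases onto the quadrant pairs is my reading of the equations and should be checked against the actual implementation:

```python
import math

def collision_area(r_cor, width, h_1, h_2, v_x, t, forward=True):
    """Radius and angle boundaries of the collision area (eqs. 2.25-2.29)."""
    r_min = r_cor - width / 2.0                               # eq. 2.25
    r_max = math.sqrt((r_cor + width / 2.0) ** 2 + h_2 ** 2)  # eq. 2.26
    r_p3 = math.sqrt((r_cor + width / 2.0) ** 2 + h_1 ** 2)   # eq. 2.29
    if forward:   # quadrants 1 and 4 (eq. 2.27)
        theta_min = 2.0 * math.pi - math.asin(h_2 / r_max)
        theta_max = abs(v_x) * t / r_cor
    else:         # quadrants 2 and 3 (eq. 2.28)
        theta_min = -2.0 * math.pi + math.asin(h_1 / r_p3)
        theta_max = -abs(v_x) * t / r_cor
    return r_min, r_max, theta_min, theta_max
```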
2.3.4 Path length calculation
The next step is to determine the closest obstacle in the collision area, as this is the most relevant obstacle that needs to be avoided. The path length is defined as the length of an arc from the platform to the obstacle. Ideally, this arc has a radius starting from the center of rotation and ending at the location of the obstacle, and an angle starting from the colliding point of the platform and ending at the location of the obstacle. The path length has been simplified here by locating the colliding point on the y axis, as depicted in figure 2.13:
Figure 2.13: Path length calculation [2].
Depending on the quadrant in which the platform is moving and the quadrant in which the obstacle is located, the radius and angle can be calculated as defined in equations 2.30 and 2.31:
Radius = ⎧ √(x² + (r_cor − |y|)²) = a
         ⎨ √(x² + (|y| − r_cor)²) = b
         ⎩ √(x² + (|y| + r_cor)²) = c    (2.30)

Angle = ⎧ tan⁻¹(|x| / (r_cor − |y|)) = d
        ⎨ tan⁻¹(|x| / (|y| − r_cor)) = e
        ⎩ tan⁻¹(|x| / (|y| + r_cor))
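For the case where radius a and angle d apply (with |y| < r_cor), the path length follows as arc length = radius × angle; a minimal sketch under that assumption:

```python
import math

def path_length(x, y, r_cor):
    """Arc length from the colliding point on the y axis to an obstacle at
    (x, y), using case a of eq. 2.30 and case d of eq. 2.31 (|y| < r_cor)."""
    radius = math.sqrt(x ** 2 + (r_cor - abs(y)) ** 2)  # case a
    angle = math.atan(abs(x) / (r_cor - abs(y)))        # case d
    return radius * angle  # arc length = radius * angle
```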