
The Twente Humanoid Head

R. Reilink*, L. C. Visser*, J. Bennik**, R. Carloni*, D. M. Brouwer** and S. Stramigioli*

Abstract— This video shows the results of the project on the mechatronic development of the Twente humanoid head. The mechanical structure consists of a neck with four degrees of freedom (DOFs) and two eyes (a stereo pair system) which tilt on a common axis and rotate sideways freely, providing three more DOFs. The motion control algorithm is designed to receive, as an input, the output of a biologically inspired vision processing algorithm and to exploit the redundancy of the joints for the realization of the movements. The expressions of the humanoid head are implemented by projecting light from the internal part of the translucent plastic cover.

I. INTRODUCTION

In recent years, research interest in humanoids has increased and, within that, so has the interest in developing humanoid heads. In the literature, there are two categories of robotic head systems, which differ mainly in the complexity of the mechanical design, i.e., the structure and number of degrees of freedom (DOFs), and in the movement speed they can achieve. For example, ASIMO [2] and Maveric [1] rely on 2 and 3 DOFs, respectively, and can realize fast movements while tracking objects. iCub [4] and QRIO [3] move slowly but have 3 and 4 DOFs, respectively, so as to interact with human beings by mimicking human-like motions.

In this video, we present the Twente humanoid head (i.e., a complete system comprising neck, head and two cameras), which has been designed and realized at the University of Twente in cooperation with an industrial partner. With the purpose of developing a research platform for human-machine interaction, the humanoid head should not only be able to focus on and track targets, but should also exhibit human-like motions, e.g., observing the environment by expressing interest/curiosity and interacting with people. This requires a high number of degrees of freedom, to be able to mimic human expressions, as well as a high bandwidth, to enable motions at speeds comparable to those of a human. The final design is shown in Fig. 1.

II. MECHANICAL DESIGN

The general mechanical design of the Twente humanoid head is presented in [5]; it had to be a trade-off between having few DOFs, enabling fast motions, and several DOFs, enabling complex, human-like motions/expressions. The final choice is a four-DOF head-neck structure and three DOFs for the vision system. The major contribution in the mechanical structure is the introduction of a differential drive which combines, in a small area, the lower tilt (a rotation around the y-axis) and the pan (around the z-axis) motions of the head. The other two degrees of freedom of the neck are the roll (around the x-axis) and the upper tilt motions. Finally, the cameras, mounted on a carrier, share an actuated tilt axis and can rotate sideways freely, realizing three more DOFs.

*{r.reilink,l.c.visser,r.carloni,s.stramigioli}@utwente.nl, IMPACT Institute, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, 7500 AE Enschede, The Netherlands.

**{j.bennik,d.m.brouwer}@utwente.nl, Faculty of Engineering Technology, University of Twente, 7500 AE Enschede, The Netherlands.

Fig. 1. The Twente humanoid head.
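The paper does not give the exact kinematics of the differential drive, but a common mapping for such a mechanism is that the common mode of the two motor angles produces the tilt and the differential mode produces the pan. A minimal sketch under that assumption:

```python
def differential_drive(theta_a, theta_b):
    """Map the two motor angles of a differential drive to the head's
    lower-tilt (about the y-axis) and pan (about the z-axis) angles.

    Hypothetical kinematics: the paper does not specify the mapping; a
    common choice is common-mode -> tilt, differential-mode -> pan.
    """
    tilt = 0.5 * (theta_a + theta_b)
    pan = 0.5 * (theta_a - theta_b)
    return tilt, pan

# Driving both motors together tilts the head; opposing motion pans it.
tilt, pan = differential_drive(0.4, 0.4)   # common mode: pure tilt
tilt, pan = differential_drive(0.2, -0.2)  # differential mode: pure pan
```

Combining both joints in one compact mechanism is what allows tilt and pan to share a small volume at the base of the neck.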

To minimize motor torque and energy loss, gravity compensation is applied to the roll and the lower tilt of the neck. The compensation for the lower tilt is realized by two linear springs. The gravity compensation reduces the torque the motor has to deliver by at least 75% in the roll direction and 83% in the lower-tilt direction. The acceleration, velocity and range specifications of the humanoid head are derived from biological data in the literature, as reported in Table I. To obtain a long range of motion in the tilt direction, the tilt range has been split into two equal contributions over the lower and the upper tilt.

TABLE I
BIOLOGICAL DATA OF A HUMAN HEAD

       Range (°)      Max Vel. (°/s)   Max Acc. (°/s²)
Tilt   −71 to +103    352              3300
Roll   ±63.5          352              3300
Pan    ±100           352              3300
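The effect of the spring-based gravity compensation can be sketched numerically. With an ideal zero-free-length spring arrangement, the spring torque is proportional to sin(θ), just like the gravity torque, so a fixed fraction of gravity can be cancelled at every tilt angle. All parameters below are illustrative; only the 83% lower-tilt reduction is taken from the text:

```python
import math

# Illustrative parameters (not from the paper): head mass, gravity,
# moment arm of the head's center of mass about the lower-tilt axis.
M, G, L = 2.0, 9.81, 0.05  # kg, m/s^2, m

def gravity_torque(theta):
    """Gravity torque about the lower-tilt axis at tilt angle theta."""
    return M * G * L * math.sin(theta)

def spring_torque(theta, k_eff=0.83 * M * G * L):
    """Torque from an idealized zero-free-length spring pair.

    Such a spring yields a torque proportional to sin(theta); k_eff is
    chosen to cancel 83% of gravity, matching the lower-tilt reduction
    reported in the text.
    """
    return k_eff * math.sin(theta)

def motor_torque(theta):
    """Residual torque the motor must deliver after compensation."""
    return gravity_torque(theta) - spring_torque(theta)

# At any angle the motor supplies only 17% of the uncompensated torque.
theta = math.radians(40.0)
ratio = motor_torque(theta) / gravity_torque(theta)
```

Because the cancellation holds across the whole range, the motor can be sized for dynamic loads rather than for holding the head against gravity.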

III. CONTROL

2009 IEEE International Conference on Robotics and Automation, Kobe International Conference Center, Kobe, Japan, May 12-17, 2009. 978-1-4244-2789-5/09/$25.00 ©2009 IEEE.

The motion control algorithm uses the kinematic properties of the model by exploiting the redundancy of the mechanical structure. In particular, the control algorithm processes the information from the vision system and, while the target position is changing in the image plane, the humanoid head can track it. Moreover, human-like motions/expressions can be performed while looking at the target. This means that, through the control algorithm we have implemented, we properly exploit the mechanical structure of the system so as to make the humanoid head move as described in biological studies. In particular, we aim to realize the behavior proposed in [6], according to which human beings use both their head and eyes to track targets: the gaze (i.e., the angle of the eyes with respect to a fixed reference) changes fast, because the fast and light-weight eyes move towards the target quickly, while the heavy head follows later and more slowly. The vision algorithm provides the motion control algorithm with the target coordinates x in the image space. The goal is to move the perceived target coordinates x to the center of the camera image. This is obtained by a proportional control law in the image space given by

ẋ = −Kp x

where Kp is the proportional gain. In order to apply this visual-servoing control law, it is required to invert the relation ẋ = F(q)q̇, which describes the dynamics of the target coordinates with respect to the head-neck joint dynamics. Since the system is redundant, the solution is given by

q̇ = F♯ẋ + (I7 − F♯F)z    (1)

where F♯ denotes the weighted generalized pseudoinverse of the map F and z is an arbitrary vector which is projected onto the null space of F. The first right-hand term of Eq. (1) is a minimum-norm solution, where the norm is defined by the matrix Mq:

‖q̇‖ = √(q̇ᵀ Mq q̇)    (2)
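The weighted generalized pseudoinverse admits the closed form F♯ = Mq⁻¹Fᵀ(F Mq⁻¹Fᵀ)⁻¹. A small numerical sketch (random placeholder matrices with the paper's dimensions, a 2-D image-space task and 7 joints, not the head's actual kinematics) illustrates the two properties used here: the null-space term of Eq. (1) does not disturb the image-space task, and the particular solution F♯ẋ has the smallest norm of Eq. (2):

```python
import numpy as np

rng = np.random.default_rng(0)
F = rng.standard_normal((2, 7))         # placeholder map: 2-D image task, 7 joints
Mq = np.diag(rng.uniform(0.5, 2.0, 7))  # positive-definite joint weighting

# Weighted generalized pseudoinverse: F# = Mq^-1 F^T (F Mq^-1 F^T)^-1.
Mq_inv = np.linalg.inv(Mq)
F_sharp = Mq_inv @ F.T @ np.linalg.inv(F @ Mq_inv @ F.T)

x_dot = np.array([0.1, -0.2])
z = rng.standard_normal(7)              # arbitrary null-space vector

q_dot_min = F_sharp @ x_dot                          # minimum-norm part
q_dot = q_dot_min + (np.eye(7) - F_sharp @ F) @ z    # Eq. (1)

# The null-space term does not disturb the image-space task ...
assert np.allclose(F @ q_dot, x_dot)
# ... and the particular solution has the smaller weighted norm of Eq. (2).
assert q_dot_min @ Mq @ q_dot_min <= q_dot @ Mq @ q_dot
```

The two terms are orthogonal in the Mq inner product, which is why null-space motion can be added freely without degrading the tracking task.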

In target tracking, we select the vector z in (1) as

z = W (q0 − q)    (3)

where W = diag(wi), i = 1, ..., 7, in which the first four elements refer to the neck and the remaining three refer to the eyes. This means that the vector z is a proportional control that, through motions in the null space, steers the joint configuration q to a desired neutral configuration q0, in which the head is in an upright position and the eyes look straight at the target.

The vector z in Eq. (1) is also used to achieve expression motions while the head is looking at a target, such as nodding in agreement, shaking in disagreement, moving the head backwards in surprise or moving the head towards the target in curiosity. These motions can be generated by applying an appropriate time-varying function to one or more of the joints. Fig. 2 presents the controller overview; see [8] for more details.
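A minimal simulation of the complete loop can sketch how Eqs. (1)-(3) combine tracking with null-space posture and expression motions. For illustration the map F is held constant and the weights are identity matrices, neither of which holds on the real head, where F(q) depends on the configuration:

```python
import numpy as np

def track(F, Mq, x_init, q_init, q_neutral=None, Kp=2.0, dt=0.01,
          steps=400, expression=None):
    """Simulate the control loop of Eqs. (1)-(3).

    The image-space error x is driven to zero while null-space motion
    steers the posture toward q_neutral; `expression` is an optional
    function of time whose output is added to z to superimpose an
    expression motion (e.g. a nod) without disturbing the tracking task.
    Illustrative sketch: F is held constant, unlike on the real head.
    """
    n = len(q_init)
    if q_neutral is None:
        q_neutral = np.zeros(n)
    Mq_inv = np.linalg.inv(Mq)
    F_sharp = Mq_inv @ F.T @ np.linalg.inv(F @ Mq_inv @ F.T)
    N = np.eye(n) - F_sharp @ F            # null-space projector
    W = np.eye(n)                          # posture gains of Eq. (3)
    x = np.array(x_init, dtype=float)
    q = np.array(q_init, dtype=float)
    for k in range(steps):
        x_dot = -Kp * x                    # proportional law in image space
        z = W @ (q_neutral - q)            # Eq. (3)
        if expression is not None:
            z = z + expression(k * dt)     # time-varying expression offset
        q_dot = F_sharp @ x_dot + N @ z    # Eq. (1)
        q += dt * q_dot
        x += dt * (F @ q_dot)              # induced target motion in the image
    return x, q
```

With any full-row-rank F the image error decays geometrically, regardless of the expression function, because the expression acts only in the null space of F.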

Fig. 2. Control scheme.

IV. VISION

The camera images can be used to calculate the focus of attention setpoint. The vision system and vision algorithm are presented in [7]. Two different algorithms have been implemented.

The first method is a biologically inspired saliency algorithm, which calculates the contrast between the foreground and the background of the image on certain channels, for example colour or intensity. The saliency map is used to determine the most interesting point, the focus of attention (FOA). This is done using a winner-take-all (WTA) network, which selects the pixel with the highest saliency value as the FOA. Two additional mechanisms influence the selection of the FOA: the inhibition-of-return (IOR) map and the WTA bias. The IOR map is used to prevent the FOA from remaining constant, by giving a negative bias to those regions of the saliency map that were attended recently. This IOR map is a first-order low-pass filter whose input is a Gaussian function positioned at the FOA. The WTA bias is a positive bias given to a region surrounding the previous FOA to create hysteresis. This prevents jumping between multiple targets of almost equal saliency.
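The FOA selection mechanism described above can be sketched as follows. All parameter values (bias strength, filter constant, Gaussian width) are illustrative choices, not those of the actual system:

```python
import numpy as np

def select_foa(saliency, ior, prev_foa, bias=0.2, ior_gain=1.0,
               alpha=0.9, sigma=2.0):
    """One step of FOA selection: WTA with an IOR map and a WTA bias.

    `saliency` is a 2-D saliency map, `ior` the low-pass IOR state and
    `prev_foa` the previous winner (row, col), or None on the first call.
    Parameter values are illustrative, not taken from the paper.
    """
    h, w = saliency.shape
    yy, xx = np.mgrid[0:h, 0:w]
    score = saliency - ior_gain * ior        # recently attended regions lose
    if prev_foa is not None:
        # Positive bias around the previous FOA creates hysteresis.
        d2 = (yy - prev_foa[0])**2 + (xx - prev_foa[1])**2
        score = score + bias * np.exp(-d2 / (2 * sigma**2))
    foa = np.unravel_index(np.argmax(score), score.shape)  # winner takes all
    # First-order low-pass IOR update driven by a Gaussian at the FOA.
    d2 = (yy - foa[0])**2 + (xx - foa[1])**2
    ior = alpha * ior + (1 - alpha) * np.exp(-d2 / (2 * sigma**2))
    return foa, ior

# Two peaks: the IOR gradually pushes attention from the stronger peak
# to the weaker one, so the gaze explores the scene instead of fixating.
sal = np.zeros((20, 20))
sal[5, 5], sal[15, 15] = 1.0, 0.9
ior, foa, history = np.zeros_like(sal), None, []
for _ in range(10):
    foa, ior = select_foa(sal, ior, foa)
    history.append(foa)
```

Repeated calls first select the strongest peak, then shift to the second one once the IOR has built up, which is exactly the exploratory behavior the text describes.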

Additionally, a Viola-Jones face-detection algorithm has been implemented [9]. This algorithm gives the head a very human-like behavior since it allows the vision system to recognize people.
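The speed of the Viola-Jones detector comes from evaluating Haar-like rectangle features on an integral image in constant time per feature. A minimal sketch of that core mechanism (the cascade of boosted classifiers described in [9] is omitted, and this is not the head's actual implementation):

```python
import numpy as np

def integral_image(img):
    """Summed-area table: ii[y, x] = sum of img[:y, :x]."""
    return np.pad(img, ((1, 0), (1, 0))).cumsum(axis=0).cumsum(axis=1)

def box_sum(ii, y, x, h, w):
    """Sum of img[y:y+h, x:x+w] in O(1) via four table lookups."""
    return ii[y + h, x + w] - ii[y, x + w] - ii[y + h, x] + ii[y, x]

def haar_two_rect_vertical(ii, y, x, h, w):
    """A two-rectangle Haar-like feature: top half minus bottom half.

    Viola-Jones evaluates thousands of such features per sub-window;
    the integral image makes each one a handful of additions.
    """
    top = box_sum(ii, y, x, h // 2, w)
    bottom = box_sum(ii, y + h // 2, x, h // 2, w)
    return top - bottom
```

Because every feature costs a fixed number of lookups regardless of its size, the detector can scan all scales and positions of a camera frame in real time, which is what makes it suitable for driving the head's gaze.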

V. EXPRESSIONS IMPLEMENTATION

The design has been completed by adding a plastic cover, which symbolizes the University of Twente through the introduction of the University logo in the ears. The cover is made of a translucent plastic, which allows the implementation of expressions. In particular, an LED system is mounted on the internal part of the cover and the light is projected from the inside for the realization of the eyebrows and the mouth, whose movements, together with the eyelids, are coupled with the head-neck movements according to human-machine interaction studies.

REFERENCES

[1] Maveric, http://www-humanoids.usc.edu/HH summary.html.

[2] Y. Sagakami, R. Watanabe, C. Aoyama, S. Matsunaga, N. Higaki and K. Fujimura, "The intelligent ASIMO: system overview and integration", Proc. IEEE Int. Conf. Intelligent Robots and Systems, 2002.

[3] L. Geppert, "QRIO, the robot that could", IEEE Spectrum, vol. 41, n. 5, 2004, pp. 34-37.

[4] R. Beira, M. Lopes, M. Praca, J. Santos-Victor, A. Bernardino, G. Metta, F. Becchi and R. Saltaren, "Design of the robot-cub (iCub) head", Proc. IEEE Int. Conf. Robotics and Automation, 2006.

[5] D. M. Brouwer, J. Bennik, J. Leideman, H. M. J. R. Soemers and S. Stramigioli, "Mechatronic design of a fast and long range 4 degrees of freedom humanoid neck", Proc. IEEE Int. Conf. Robotics and Automation, 2009.

[6] H. H. L. M. Goossens and A. J. van Opstal, "Human eye-head coordination in two dimensions under different sensorimotor conditions", Jour. Experimental Brain Research, vol. 114, n. 3, 1997, pp. 542-560.

[7] R. Reilink, "Realtime Stereo Vision Processing for a Humanoid", MSc-Report 019CE2008, University of Twente, 2008.

[8] L. C. Visser, "Motion Control of a Humanoid Head", MSc-Report 016CE2008, University of Twente, 2008.

[9] P. Viola and M. Jones, "Robust real-time face detection", Int. Jour. Computer Vision, 2004.

