Gaze Stabilization for Humanoid Robots: a Comprehensive Framework
Alessandro Roncone, Ugo Pattacini, Giorgio Metta, Lorenzo Natale
Abstract — Gaze stabilization is an important requisite for humanoid robots. Previous work on this topic has focused on the integration of inertial and visual information. Little attention has been given to a third component, which is the knowledge that the robot has about its own movement. In this work we propose a comprehensive framework for gaze stabilization in a humanoid robot. We focus on the problem of compensating for disturbances induced in the cameras by self-generated movements of the robot. We employ two separate signals for stabilization: (1) an anticipatory term obtained from the velocity commands sent to the joints while the robot moves autonomously; (2) a feedback term from the on-board gyroscope, which compensates unpredicted external disturbances. We first provide the mathematical formulation to derive the forward and the differential kinematics of the fixation point of the stereo system. Finally, we test our method on the iCub robot. We show that the stabilization consistently reduces the residual optical flow during the movement of the robot and in presence of external disturbances. We also demonstrate that proper integration of the neck DoF is crucial to achieve correct stabilization.

*This work was supported by the European Project KoroiBot (FP7-ICT-611909). A. Roncone, U. Pattacini, G. Metta and L. Natale are with the iCub Facility, Istituto Italiano di Tecnologia, Via Morego 30, 16163 Genova, Italy. {alessandro.roncone, ugo.pattacini, giorgio.metta, lorenzo.natale}@iit.it
I. INTRODUCTION

Efficient gaze stabilization in mammals is fundamental because it reduces the image blur elicited by the movement of the body during locomotion. The brain senses external motion through the vestibular system and the generated optical flow, and performs compensatory movements with the eyes and the head to maintain stable fixation. The effect of the absence of stabilization can be easily appreciated by taking a picture or shooting a video while walking or running.

Gaze stabilization is therefore a fundamental capability for a humanoid robot. Conventionally, algorithms and behaviors for visual stabilization have been designed drawing inspiration from biological systems. Thanks to their relative simplicity, the brain circuitries involved are relatively well understood [1]. Broadly speaking, compensatory movements are obtained with two main contributions. The vestibulo-ocular reflex (VOR) exploits the information about the head movement coming from the vestibular system. The whole control loop in this case involves a few synapses and is therefore very fast. The opto-kinetic reflex (OKR), on the other hand, uses retinal slip from the eyes to generate compensatory movements and maintain stable fixation. This pathway involves more complex computations, has larger latency and is less efficient. However, these contributions perform best at different frequencies and are therefore integrated for efficient stabilization.

Early work on oculomotor control in robotics has focused on replicating various types of eye movements like vergence, smooth pursuit and saccades [2], [3], [4], and gaze stabilization reflexes obtained using inertial and visual input [5], [6], [7]. Computation of the eye velocity command for proper stabilization depends on several parameters: the eye-head geometry, the relative distance between the fixation point and the head, but also non-linearities due to lens distortions and delays in the plant. If the eyes and the head do not rotate around the same axes, the compensation signal must take into account the translational velocity due to parallax. This can be done analytically [5] or with Feedback Error Learning [6], [7]. The advantage of the latter approach is that it can also optimally integrate visual and inertial information and compensate for delays in the plant.

Only in a few cases has attention been devoted to the problem of gaze stabilization during legged locomotion [8], [9]. In [8] the authors implement a controller based on an oscillator which is adapted to match the frequency and phase of the optical flow generated by the robot gait, under the assumption that the latter is periodic. In [9] the authors use genetic algorithms to evolve a central pattern generator that optimally reduces head shaking during locomotion of a quadruped. Previous work on gaze stabilization has focused on the control of the eyes and has ignored a third source of information useful for gaze stabilization, i.e. the motor signals issued to the robot during walking and generic whole-body movements. This information, however, provides important cues for stabilizing motion due to the robot's own movement.
With respect to inertial and visual signals, this information is predictive, in that it allows anticipating and planning compensatory movements in advance.

In this paper we solve the problem of gaze stabilization by integrating a feedback component coming from the sensory system with a feedforward component derived from the commands issued to the motors. We build upon the gaze controller implemented on the iCub [10] and extend it to stabilize gaze during active movements of the iCub [11]. The system uses all 6 DoF of the head and relies on two sources of information: i) the inertial information read from the IMU placed on the robot's head (feedback) and ii) an equivalent signal computed from the commands issued to the motors of the torso (feedforward). For both cues we compute the resulting perturbation of the fixation point and use the Jacobian of the iCub stereo system to compute the motor command that compensates the perturbation. Retinal slip (i.e. optical flow) is used to measure the performance of the system.

Fig. 1. Block diagram of the framework presented. The Gaze Stabilizer module (in green) is designed to operate both in presence of a kinematic feedforward (kFF) and an inertial feedback (iFB). In both cases, it estimates the motion of the fixation point and controls the head joints in order to compensate for that motion.

We show that the feedforward component allows for better compensation of the robot's own movements and, if properly integrated with inertial cues, may contribute to improve performance in presence of external perturbations. We also show that the DoF of the neck must be integrated in the control loop to achieve good stabilization performance.

The article is structured as follows. In Section II, the proposed framework is defined. The experimental protocol and the related experiments are presented in Section III, followed by Conclusions and Future Work (Section IV).

II. METHOD

We define the stabilization problem as the stabilization of the 3D position of the fixation point x_FP of the robot. It is achieved by controlling the cameras to keep the velocity ẋ_FP equal to zero. The velocity of the fixation point is 6-dimensional, composed of a translational component v_FP and a rotational part ω_FP.

A diagram of the proposed framework is presented in Fig. 1. As highlighted in Section I, the gaze stabilization module has been designed to operate in two (so far mutually exclusive) scenarios:
• a kinematic feed-forward (kFF) scenario, in which the robot produces self-generated disturbances due to its own motion; in this case motor commands predict the perturbation of the fixation point and can be used to stabilize the gaze;
• an inertial feed-back (iFB) scenario, in which perturbations are (partially) estimated by an Inertial Measurement Unit (IMU).

As a result, the Gaze Stabilizer is realized by the cascade of two main blocks: the first block estimates the 6D motion of the fixation point ẋ_FP by means of the forward kinematics, while the second exploits the inverse kinematics of the neck-eye plant in order to compute a suitable set of desired joint velocities q̇_NE able to compensate for that motion. The forward kinematics block is a scenario-dependent component, meaning that its implementation varies according to the type of input signal (i.e. feed-forward or feedback). Conversely, the inverse kinematics module has a unique realization.
Fig. 2. Kinematics of the iCub's torso and head. The upper body of the iCub is composed of a 3-DoF torso, a 3-DoF neck and a 3-DoF binocular system, for a total of 9 DoF. Each of these joints, depicted in red, is responsible for the motion of the fixation point. The Inertial Measurement Unit (IMU) is the green rectangle placed in the head; its motion is not affected by the eyes.
Crucial to this work is the computation of the position of the fixation point and its Jacobian. Section II-A provides a complete formulation of the kinematic problem occurring at the eyes, whereas Sections II-B and II-C analyze the forward and the inverse kinematics modules composing the Gaze Stabilizer.
A. Forward and Differential Kinematics of the iCub stereo system
To derive the Jacobian of the fixation point we start from the forward kinematic law of the eyes, as illustrated in Fig. 2. The position of the fixation point x_FP is computed in two steps. The first step computes the position of the frames of reference of the eyes; this uses a representation of the forward kinematics of the iCub head in standard Denavit-Hartenberg notation (the DH parameters of the iCub are reported in [10]). The second step computes x_FP as the intersection of the two rays joining the cameras' optical centers and the projection of the target on the camera planes.
1) Forward Kinematics: referring to Figure 2, the 3D Cartesian position of the fixation point x_FP can be intuitively defined as the intersection of the lines l(τ_l) and r(τ_r) that originate from the left and right camera planes and pass through the respective optical centers. In parametric form, they are defined as:

$$ l(\tau_l):\ o_l + \tau_l\, z_l, \qquad r(\tau_r):\ o_r + \tau_r\, z_r, \tag{1} $$

where o_l and o_r are the centers of the left and right camera planes respectively, and z_l and z_r are the axes perpendicular to these planes, as shown in Figure 2. To address the more general case of skew lines (i.e. l(τ_l) and r(τ_r) might not be coplanar due to mechanical misalignments of the image planes), the fixation point x_FP can be defined as the mid-point of the shortest segment between l(τ_l) and r(τ_r). From Eq. 1, it is possible to derive the points P_l and P_r that belong to each line and minimize the distance from the other line. They are given by:

$$ \tau_l^* = \frac{\left[z_l - (z_l \cdot z_r)\, z_r\right] \cdot \left[o_l - o_r\right]}{(z_l \cdot z_r)^2 - 1}, \qquad \tau_r^* = \frac{\left[(z_l \cdot z_r)\, z_l - z_r\right] \cdot \left[o_l - o_r\right]}{(z_l \cdot z_r)^2 - 1}. \tag{2} $$

Finally, the fixation point x_FP can be found as the mean point between P_l and P_r:

$$ x_{FP} = \frac{P_l + P_r}{2} = \frac{(o_l + \tau_l^*\, z_l) + (o_r + \tau_r^*\, z_r)}{2}. \tag{3} $$
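As a concrete illustration of Eqs. 1-3, the following minimal Python/NumPy sketch computes the fixation point from the two optical axes; the eye baseline and the target used in the example are made-up values chosen only to exercise the function.

```python
import numpy as np

def fixation_point(o_l, z_l, o_r, z_r):
    """Mid-point of the shortest segment between the two optical axes (Eqs. 1-3).

    o_l, o_r : 3D optical centers of the left/right cameras.
    z_l, z_r : unit vectors along the left/right optical axes.
    """
    c = np.dot(z_l, z_r)                       # cosine of the angle between the axes
    d = o_l - o_r
    denom = c ** 2 - 1.0                       # -> 0 when the axes are parallel (no vergence)
    tau_l = np.dot(z_l - c * z_r, d) / denom   # Eq. 2: closest point on l(tau_l)
    tau_r = np.dot(c * z_l - z_r, d) / denom   # Eq. 2: closest point on r(tau_r)
    P_l = o_l + tau_l * z_l
    P_r = o_r + tau_r * z_r
    return 0.5 * (P_l + P_r)                   # Eq. 3

# Example: cameras 68 mm apart, both verging on a point ~0.5 m ahead (made-up numbers)
o_l, o_r = np.array([-0.034, 0.0, 0.0]), np.array([0.034, 0.0, 0.0])
target = np.array([0.0, 0.0, 0.5])
z_l = (target - o_l) / np.linalg.norm(target - o_l)
z_r = (target - o_r) / np.linalg.norm(target - o_r)
print(fixation_point(o_l, z_l, o_r, z_r))      # ~[0, 0, 0.5]
```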
2) Differential Kinematics: the position of the fixation point in Cartesian space depends on the whole body configuration, namely the legs, the torso, the neck and the eyes: q = [q_L, q_T, q_N, q_E]^T. It is possible to profitably apply the standard DH notation to the kinematics of all the body parts with the exception of the eyes. On the iCub, indeed, three DoFs (the common tilt t_c, the version v_s and the vergence v_g) account for four coupled joints actuating the eyes (the tilt and pan of the left and right cameras, i.e. [t_l, p_l]^T and [t_r, p_r]^T respectively). In particular, q_E is given by:

$$ q_E = \begin{bmatrix} t_c \\ v_s \\ v_g \end{bmatrix} = \begin{bmatrix} t_l = t_r \\ (p_l + p_r)/2 \\ p_l - p_r \end{bmatrix}, \tag{4} $$

and this leads to the inverse relations:

$$ t_l = t_r = t_c, \qquad p_l = v_s + v_g/2, \qquad p_r = v_s - v_g/2. \tag{5} $$

Concerning the motion of the fixation point ẋ_FP, for the purposes of this work we are only interested in the relation between the joint velocities q̇_E and its translational component v_FP, as detailed in Section II-C. Under this assumption, the Jacobian matrix J_E that relates the motion of the fixation point x_FP to the eye joints q_E reduces to a 3 × 3 matrix. The standard analytical Jacobian matrix is defined as:

$$ J_E = \frac{\partial x_{FP}(q_E)}{\partial q_E} = \left[ \frac{\partial x_{FP}}{\partial t_c},\ \frac{\partial x_{FP}}{\partial v_s},\ \frac{\partial x_{FP}}{\partial v_g} \right]. \tag{6} $$

Using the chain rule together with Equations 3 and 5 leads to:

$$ \frac{\partial x_{FP}}{\partial t_c} = \frac{1}{2} \left( \frac{\partial P_l}{\partial t_c} + \frac{\partial P_r}{\partial t_c} \right) \tag{7a} $$

$$ \frac{\partial x_{FP}}{\partial v_s} = \frac{\partial x_{FP}}{\partial p_l} + \frac{\partial x_{FP}}{\partial p_r} = \frac{1}{2} \left( \frac{\partial P_l}{\partial p_l} + \frac{\partial P_l}{\partial p_r} + \frac{\partial P_r}{\partial p_l} + \frac{\partial P_r}{\partial p_r} \right) \tag{7b} $$

$$ \frac{\partial x_{FP}}{\partial v_g} = \frac{1}{2} \left( \frac{\partial x_{FP}}{\partial p_l} - \frac{\partial x_{FP}}{\partial p_r} \right) = \frac{1}{4} \left( \frac{\partial P_l}{\partial p_l} - \frac{\partial P_l}{\partial p_r} + \frac{\partial P_r}{\partial p_l} - \frac{\partial P_r}{\partial p_r} \right). \tag{7c} $$

The computation of the quantities in Equations 7a, 7b and 7c depends on Equations 2 and 3. For simplicity we derive only the first term of Eq. 7a; the derivation of the other components is omitted for brevity but proceeds similarly. ∂P_l/∂t_c is given by:

$$ \frac{\partial P_l}{\partial t_c} = \frac{\partial (o_l + \tau_l^*\, z_l)}{\partial t_c} = \frac{\partial o_l}{\partial t_c} + \frac{\partial \tau_l^*}{\partial t_c}\, z_l + \tau_l^*\, \frac{\partial z_l}{\partial t_c}. \tag{8} $$

∂o_l/∂t_c and ∂z_l/∂t_c are, respectively, the geometric Jacobian of the left eye and the analytical Jacobian of the z-axis of the left eye with respect to the tilt; they are described in Equation 11. The second term is instead more complex. Let us define:

$$ \xi_1 = z_l \cdot z_r, \quad \xi_2 = z_l - (z_l \cdot z_r)\, z_r = z_l - \xi_1\, z_r, \quad \xi_3 = o_l - o_r, \quad \xi_4 = (z_l \cdot z_r)^2 - 1 = \xi_1^2 - 1, \tag{9} $$

so that τ_l^* = (ξ_2 · ξ_3)/ξ_4 and ∂τ_l^*/∂t_c becomes:

$$ \frac{\partial \tau_l^*}{\partial t_c} = \frac{\partial}{\partial t_c} \left( \frac{\xi_2 \cdot \xi_3}{\xi_4} \right) = \frac{\xi_4\, \xi_3 \cdot \partial \xi_2 / \partial t_c + \xi_4\, \xi_2 \cdot \partial \xi_3 / \partial t_c - (\xi_2 \cdot \xi_3)\, \partial \xi_4 / \partial t_c}{\xi_4^2}. \tag{10} $$

Finally, ∂ξ_2/∂t_c, ∂ξ_3/∂t_c and ∂ξ_4/∂t_c can be derived from Equation 9 and are compositions of:

$$ \frac{\partial o_l}{\partial t_c} = J_{Gl}(t_c), \qquad \frac{\partial z_l}{\partial t_c} = J_{Al}(t_c), \qquad \frac{\partial o_r}{\partial t_c} = J_{Gr}(t_c), \qquad \frac{\partial z_r}{\partial t_c} = J_{Ar}(t_c), \tag{11} $$

where J_Gl(t_c) and J_Gr(t_c) are the geometric Jacobians of the left and right camera optical centers with respect to the common tilt, whereas J_Al(t_c) and J_Ar(t_c) are the analytical Jacobians of the left and right z-axis with respect to the tilt. Both J_A and J_G can be computed with standard kinematics libraries, as in [10].
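The coupling of Eqs. 4-5 and the reduced 3 × 3 Jacobian can be illustrated with a small numerical sketch that approximates J_E by central finite differences around a toy camera model. The eye placement and the optical-axis parameterization below are simplifying assumptions (not the iCub DH chain of [10]), and fixation_point() is the helper from the previous sketch.

```python
import numpy as np
# Toy model: each eye rotates about a fixed center, so o_l and o_r are constant and
# only the optical axes change with the joint angles.

EYE_L, EYE_R = np.array([-0.034, 0.0, 0.0]), np.array([0.034, 0.0, 0.0])

def optical_axis(tilt, pan):
    # forward axis (0, 0, 1) rotated by `pan` about y, then by `tilt` about x
    return np.array([np.sin(pan),
                     -np.cos(pan) * np.sin(tilt),
                     np.cos(pan) * np.cos(tilt)])

def x_fp(q_E):
    t_c, v_s, v_g = q_E
    p_l, p_r = v_s + v_g / 2.0, v_s - v_g / 2.0          # Eq. 5: per-camera pan angles
    return fixation_point(EYE_L, optical_axis(t_c, p_l),
                          EYE_R, optical_axis(t_c, p_r))

def eye_jacobian(q_E, eps=1e-6):
    """3x3 Jacobian d x_FP / d q_E by central differences
    (numerical counterpart of the analytical chain rule of Eqs. 6-11)."""
    J = np.zeros((3, 3))
    for i in range(3):
        dq = np.zeros(3)
        dq[i] = eps
        J[:, i] = (x_fp(q_E + dq) - x_fp(q_E - dq)) / (2.0 * eps)
    return J

q_E = np.array([0.0, 0.0, np.deg2rad(8.0)])              # gazing straight ahead, 8 deg vergence
print(eye_jacobian(q_E))
```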
B. Estimating the motion of the fixation point

As discussed in Sections I and II, in this work we exploit the gaze stabilization in two different scenarios, described in the following subsections.

1) Kinematic Feedforward: in the first scenario the robot moves its body autonomously and we estimate the motion of the fixation point using the kinematic model of the robot [10]. Under these assumptions, the task is completely defined: given the joint velocities that the robot is actuating at the motors, the fixation point moves according to the Jacobian of the kinematic chain under consideration. As an example, let us assume that the robot has fixed hips (i.e. no movement at the lower limbs) and is exerting a given set of velocities at the torso (q̇_T), neck (q̇_N) and eyes (q̇_E). At any given instant of time, the motion of the fixation point is given by:

$$ \dot{x}_{FP} = \begin{bmatrix} v_{FP} \\ \omega_{FP} \end{bmatrix} = J_{TNE} \cdot \begin{bmatrix} \dot{q}_T \\ \dot{q}_N \\ \dot{q}_E \end{bmatrix}, \tag{12} $$

where J_TNE is the 6 × 9 Jacobian of the forward kinematics map relative to the torso, the neck and the eyes.
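A minimal sketch of the feedforward estimate of Eq. 12 is given below; the Jacobian used here is a random placeholder standing in for the torso-neck-eyes Jacobian that, on the robot, is evaluated from the kinematic model [10].

```python
import numpy as np

# q_dot stacks the commanded velocities of torso, neck and eyes (9 joints);
# J_TNE is the 6x9 torso-neck-eyes Jacobian at the current configuration.
J_TNE = np.random.default_rng(0).normal(size=(6, 9))   # placeholder Jacobian
q_dot = np.concatenate([
    np.deg2rad([5.0, 0.0, 0.0]),   # torso yaw/pitch/roll command [rad/s]
    np.zeros(3),                   # neck at rest
    np.zeros(3),                   # eyes at rest
])

x_fp_dot = J_TNE @ q_dot           # Eq. 12
v_fp, omega_fp = x_fp_dot[:3], x_fp_dot[3:]
print(v_fp, omega_fp)
```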
2) IMU Feedback: in the second scenario, we exploit the measurements provided by the IMU device to estimate the motion occurring at the head. The iCub head is currently equipped with the MTx sensor from Xsens [12], whose location with respect to the robot kinematics is known [10]. Among the various sensing elements available from this device, the one of interest here is the gyroscope, which estimates the 3D rotational velocity ω_IMU of the sensor at any given instant of time. From this measurement, it is possible to derive the 6D velocity of the fixation point ẋ_FP:

$$ v_{FP} = \omega_{IMU} \times r, \qquad r = x_{FP} - x_{IMU}, \tag{13a} $$

$$ \omega_{FP} = \omega_{IMU}, \tag{13b} $$

where v_FP is the 3D translational velocity of the fixation point, ω_FP is its 3D rotational velocity, and r is the lever arm between the position of the fixation point x_FP and the position of the inertial sensor x_IMU. It is worth noticing that this is a sub-optimal case: since the inertial sensor measures only a 3D rotational velocity (i.e. ω_IMU), we do not have access to the 3D translational component v_IMU. In this scenario we can only compensate for the rotational velocity as it is measured by the sensor (Eq. 13b) and its effect on the translational component (Eq. 13a).
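The IMU-based estimate of Eqs. 13a-13b reduces to a cross product with the lever arm; a minimal sketch (with made-up numbers in the example) could look as follows.

```python
import numpy as np

def fixation_velocity_from_imu(omega_imu, x_fp, x_imu):
    """6D velocity of the fixation point induced by the measured head rotation
    (Eqs. 13a-13b); only the rotational part is observable from the gyroscope."""
    r = x_fp - x_imu                  # lever arm from the IMU to the fixation point
    v_fp = np.cross(omega_imu, r)     # Eq. 13a
    omega_fp = omega_imu              # Eq. 13b
    return v_fp, omega_fp

# Example (made-up numbers): head yawing at 0.2 rad/s, fixation point ~0.5 m in front of the IMU
v, w = fixation_velocity_from_imu(omega_imu=np.array([0.0, 0.2, 0.0]),
                                  x_fp=np.array([0.0, 0.0, 0.5]),
                                  x_imu=np.array([0.0, 0.1, 0.0]))
print(v, w)
```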
C. Gaze stabilization from the estimation of the fixation point motion
In the previous sections we illustrated how the feedforward and feedback terms produce an estimate of the velocity of the fixation point ẋ_FP = [v_FP, ω_FP]^T.
Using the inverse kinematics we derive the compensatory motor commands for the head (see Figure 1):

$$ \begin{bmatrix} \dot{q}_N \\ \dot{q}_E \end{bmatrix} = - J_{NE}^{\dagger} \cdot \begin{bmatrix} v_{FP} \\ \omega_{FP} \end{bmatrix}, \tag{14} $$

where J_NE^† is the 6 × 6 pseudo-inverse of the Jacobian of the forward kinematics map relative to the neck and the eyes, and q̇_N, q̇_E are the desired joint velocities at the neck and eyes respectively.

In this work, we chose to decouple the inverse kinematics problem into two sub-problems: instead of using the full 6-DoF chain of the neck and the eyes to stabilize the 6-DoF motion of the fixation point, we designed the controller such that the neck compensates the rotational component ω_FP, whilst the eyes counterbalance the translational part v_FP. The reason is twofold: 1) the neck and the eyes exhibit two different dynamics, the eyes being faster than the neck joints; 2) it is not physically possible for the neck joints alone to stabilize the translational motion v_FP and, similarly, the eye chain cannot compensate for the roll of the fixation point by mechanical design. Hence, Equation 14 has been split into:

$$ \begin{cases} \dot{q}_N = - J_N^{\dagger} \cdot \omega_{FP} \\ \dot{q}_E = - J_E^{\dagger} \cdot v_{FP}, \end{cases} \tag{15} $$

with J_N^† and J_E^† being the two independent 3 × 3 pseudo-inverse matrices of the neck and the eyes respectively. The computed joint velocities q̇_N, q̇_E are then used as reference signals by the joint-level PID controllers.

This decoupling is beneficial for the stability of the system and does not affect the final performance. The neck and the eyes are controlled to compensate two different components of the motion of the fixation point, but cooperate to achieve the task. The rotational motion that is not compensated by the neck in fact produces translational velocities of the fixation point that are compensated by the eyes.
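A minimal sketch of the decoupled control law of Eq. 15 is shown below; the two 3 × 3 Jacobians are random placeholders for the neck and eye Jacobians that, on the robot, are evaluated at the current configuration.

```python
import numpy as np

def stabilization_command(J_N, J_E, v_fp, omega_fp):
    """Decoupled stabilization of Eq. 15: the neck cancels the rotational component,
    the eyes the translational one, each through the pseudo-inverse of its 3x3 Jacobian."""
    q_dot_neck = -np.linalg.pinv(J_N) @ omega_fp
    q_dot_eyes = -np.linalg.pinv(J_E) @ v_fp
    return q_dot_neck, q_dot_eyes

# Placeholder Jacobians for illustration only.
rng = np.random.default_rng(1)
J_N, J_E = rng.normal(size=(3, 3)), rng.normal(size=(3, 3))
q_n, q_e = stabilization_command(J_N, J_E,
                                 v_fp=np.array([0.01, 0.0, 0.0]),
                                 omega_fp=np.array([0.0, 0.2, 0.0]))
print(q_n, q_e)       # joint velocity references for the neck and eye controllers
```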
III. EXPERIMENTAL RESULTS

To validate our work we set up two experiments:
• Exp. A: compensation of self-generated motion: we issue a predefined sequence of commands at the yaw, pitch and roll of the torso and test both the kFF and the iFB conditions, to provide a repeatable comparison between the two.
• Exp. B: compensation in presence of an external perturbation: the motion of the fixation point is caused by the experimenter, who physically moves the torso of the robot. In this case there is no feedforward signal available, and the robot uses only the iFB signal.

For each experiment, two different sessions have been conducted: in the first session the robot stabilizes the gaze only with the eyes, while in the second session it uses both the neck and the eyes. In both scenarios, a session without compensation has been performed and used as a baseline for comparison. It is worth noticing that Experiment A is obviously a more controlled scenario, and for this reason we have used it to obtain a quantitative analysis. In Experiment B, instead, the disturbances are generated manually and, as such, it provides only a qualitative assessment of the performance of the iFB modality.

For validation we use the dense optical flow measured from the cameras. This can be used as an external, unbiased measure because, as explained in Section I, it is not used in the stabilization loop. We used the OpenCV [13] implementation of the dense optical flow algorithm proposed by Farnebäck [14]. Given an input image at time t, the method finds the 2D optical flow vector of_t(u, v) for each pixel in the image.

Fig. 3. Optical flow computed from two subsequent image frames from the left camera, baseline experiment (no compensation). Blue 2D arrows represent the optical flow vector of_t(u, v) at each pixel. For clarity, optical flow vectors are reported only for a subset of the pixels (one pixel every five).

Fig. 4. Optical flow computed from two subsequent image frames from the left camera, iFB experiment (compensation using inertial feedback). Blue 2D arrows represent the optical flow vector of_t(u, v) at each pixel. For clarity, optical flow vectors are reported only for a subset of the pixels (one pixel every five).

We derive a measure of performance by averaging the norm of the motion vectors of_t(u, v) in the whole image, i.e.:

$$ \mathrm{optFl}(t) = \frac{1}{(W - 40) \times (H - 40)} \sum_{u=20}^{W-20} \sum_{v=20}^{H-20} \left\| of_t(u, v) \right\|, \tag{16} $$

in which we remove from the computation the optical flow vectors of the peripheral region of the image. The reason for this is to compute a performance index that is more appropriate for the task, given that the gaze stabilization is computed for the fixation point (in this work W = 320, H = 240).

The optical flow computed during an experimental session is shown in Figures 3 and 4 for two consecutive frames in the baseline experiment (no compensation) and the iFB experiment (stabilization with inertial feedback) respectively. This qualitative evaluation shows that the stabilization effectively reduces the motion in the images. In the following sections we provide a quantitative evaluation of our framework.
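A possible implementation of the performance index of Eq. 16 on top of the OpenCV Farnebäck flow is sketched below; the Farnebäck parameters and the synthetic test frames are illustrative choices, not the values used in the experiments.

```python
import cv2
import numpy as np

def avg_optical_flow(prev_gray, curr_gray, margin=20):
    """Performance index of Eq. 16: mean norm of the dense Farneback flow,
    ignoring a border of `margin` pixels around the image."""
    # positional args: prev, next, flow, pyr_scale, levels, winsize, iterations, poly_n, poly_sigma, flags
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    mag = np.linalg.norm(flow, axis=2)               # per-pixel flow magnitude
    return mag[margin:-margin, margin:-margin].mean()

# Example on two synthetic 320x240 frames (the camera resolution used in this work)
prev = np.zeros((240, 320), np.uint8)
curr = np.zeros_like(prev)
cv2.circle(prev, (160, 120), 20, 255, -1)
cv2.circle(curr, (165, 120), 20, 255, -1)            # the blob moved 5 px to the right
print(avg_optical_flow(prev, curr))
```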
A. Compensation in presence of predefined torso movements

In Experiment A we generate a set of predefined movements with the torso. We then compare the kFF and the iFB conditions with respect to the baseline. In all three cases we use the same sequence of velocity commands for the three torso joints (yaw, pitch and roll). Joints have been controlled with velocity commands of … deg/s, first independently and then simultaneously. As discussed in Section III, the controller has been tested in two cases: using only the 3 DoF of the eyes, and using all 6 DoF composed by the neck and the eyes.
Fig. 5. Average optical flow during Experiment A. In this case only the eyes are controlled. The baseline session is the dashed blue line, while the kFF and iFB conditions are the green and red lines respectively.
Fig. 6. Average optical flow during Experiment A. In this case stabilization uses all 6 DoF of the head. The baseline behavior is the dashed blue line, while the kFF and iFB conditions are the green and red lines respectively.

Figures 5 and 6 report the average optical flow optFl(t) in the two conditions respectively. The two plots show the improvement of the stabilization with respect to the baseline (… on average). As expected, the system performed better in the kFF condition than in the iFB case (… on average): this is because in the former case the system uses a feedforward command that anticipates and better compensates for the disturbances at the fixation point x_FP. Furthermore, a comparison between Figure 5 and Figure 6 confirms that by exploiting all 6 DoFs of the head, the performance of the system improves by … on average. This occurs in particular when, during the sequence, the robot performs a large movement along the roll of the torso (roughly between t = 6 s and t = 10 s, see also Figure 7). In this situation the optical flow in both the kFF and the iFB conditions has a peak, because the disturbance cannot be compensated with the eyes alone. Indeed, in this case the stabilization fails completely and actually produces unwanted motion (the optical flow is higher than the baseline). Notice by comparison with Figure 6 that the stabilization is more effective when the robot can exploit the additional DoFs of the neck.

B. Compensation of unknown disturbances
In Experiment B the motors of the joints have been deactivated to allow a human operator to produce disturbances by manually shaking the torso.

Fig. 7. The iCub compensating for the roll movement at the torso (Exp. A, kFF scenario). In this particular occurrence, the stabilization is possible only with respect to the rotational component ω_FP^roll, since it is not physically feasible for the eyes to compensate such a movement.
Fig. 8. Average optical flow during Experiment B. The blue dashed line represents the baseline. The green line is the optical flow when the stabilization uses only the eyes, while the red line is the optical flow when the stabilization uses all 6 DoF of the head.

This is by design a non-repeatable experiment, but it acts as a confirmation of the performance of the iFB modality. As for Experiment A, the improvement of the stabilization with respect to the baseline is remarkable (… on average), with a further improvement of … when the robot uses all 6 DoF of the head.

IV. CONCLUSIONS AND FUTURE WORK

In this paper we described a framework for gaze stabilization of a humanoid robot. With respect to previous work, we focus on the use of feedforward commands, derived from the knowledge of the motor commands issued to the robot, to improve stabilization when perturbations are generated by the robot's own movements (e.g. locomotion or generic whole-body motion). To compensate for external perturbations we also include a feedback component provided by the inertial unit mounted on the head of the robot. Our experiments demonstrate that the feedforward component is effective for stabilization when perturbations are due to the robot's own movement. We also demonstrate that proper integration of the DoFs of the neck in the control loop is crucial to achieve good stabilization.

In the experiments reported in this paper the robot compensated disturbances induced only by the motion of the upper body, and we did not integrate the feedback and feedforward components. In addition, optical flow was not used for the stabilization but only as a performance measure. This is therefore only a first step in the implementation of a full gaze stabilization system for a humanoid robot. As part of our future work we will investigate how to optimally integrate feedforward information with feedback coming from the inertial system and optical flow from the cameras. Furthermore, a natural extension of this framework is to integrate information from the whole body of the iCub, including feedforward commands for all motors, feedback from the inertial units, torque sensors at the arms and legs, as well as the tactile feedback from the skin.

REFERENCES
[1] R. H. S. Carpenter, Movements of the Eyes. London: Pion, 1988.
[2] …, in Computer Vision and Pattern Recognition, 1992. Proceedings CVPR '92, 1992 IEEE Computer Society Conference on, Jun 1992, pp. 23–28.
[3] L. Berthouze, S. Rougeaux, F. Chavand, and Y. Kuniyoshi, "Calibration of a foveated wide-angle lens on an active vision head," in Computer Vision and Pattern Recognition, 1996. Proceedings CVPR '96, 1996 IEEE Computer Society Conference on, Jun 1996, pp. 183–188.
[4] C. Capurro, F. Panerai, and G. Sandini, "Dynamic vergence using log-polar images," International Journal of Computer Vision, vol. 24, 1997.
[5] F. Panerai and G. Sandini, "Oculo-motor stabilization reflexes: integration of inertial and visual information," Neural Networks, vol. 11, no. 7–8, pp. 1191–1204, 1998.
[6] T. Shibata and S. Schaal, Biomimetic gaze stabilization. World Scientific, 2000, pp. 31–52.
[7] F. Panerai, G. Metta, and G. Sandini, "Learning visual stabilization reflexes in robots with moving eyes," Neurocomputing, vol. 48, no. 1–4, pp. 323–337, 2002.
[8] S. Gay, A. Ijspeert, and J. Santos-Victor, "Predictive gaze stabilization during periodic locomotion based on adaptive frequency oscillators," in Robotics and Automation (ICRA), 2012 IEEE International Conference on, May 2012, pp. 271–278.
[9] C. P. Santos, M. Oliveira, A. M. A. Rocha, and L. Costa, "Head motion stabilization during quadruped robot locomotion: Combining dynamical systems and a genetic algorithm," in Robotics and Automation, 2009. ICRA '09. IEEE International Conference on, May 2009, pp. 2294–2299.
[10] U. Pattacini, "Modular Cartesian controllers for humanoid robots: Design and implementation on the iCub," Ph.D. dissertation, Istituto Italiano di Tecnologia, Genova, Italy, 2011.
[11] G. Metta, L. Natale, F. Nori, G. Sandini, D. Vernon, L. Fadiga, C. von Hofsten, K. Rosander, M. Lopes, J. Santos-Victor, A. Bernardino, and L. Montesano, "The iCub humanoid robot: An open-systems platform for research in cognitive development," Neural Networks, vol. 23, no. 8–9, pp. 1125–1134, 2010.
[12] Xsens Technologies, MTx inertial sensor.
[13] G. Bradski, "The OpenCV Library," Dr. Dobb's Journal of Software Tools, 2000.
[14] G. Farnebäck, "Two-frame motion estimation based on polynomial expansion," in Image Analysis: Proceedings of the Scandinavian Conference on Image Analysis (SCIA), 2003, pp. 363–370.