Publication


Featured research published by Antonio Paolillo.


International Conference on Robotics and Automation | 2013

Vision-based corridor navigation for humanoid robots

Angela Faragasso; Giuseppe Oriolo; Antonio Paolillo; Marilena Vendittelli

We present a control-based approach for visual navigation of humanoid robots in office-like environments. In particular, the objective of the humanoid is to follow a maze of corridors, walking as close as possible to their center to maximize motion safety. Our control algorithm is inspired by a technique originally designed for unicycle robots and extended here to cope with the presence of turns and junctions. The feedback signals computed for the unicycle are transformed to inputs that are suited for the locomotion system of the humanoid, producing a natural, human-like behavior. Experimental results for the humanoid robot NAO are presented to show the validity of the approach, and in particular the successful extension of the controller to turns and junctions.
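
As a rough illustration of the kind of unicycle-inspired feedback the abstract describes, the sketch below (Python) assumes the vision pipeline already supplies the robot's lateral offset from the corridor centerline and its heading error with respect to the corridor bisector; the gains, the constant forward speed, and the mapping to step commands are illustrative assumptions, not the authors' implementation.

# Unicycle-style corridor-following law, as a hedged sketch: the vision
# pipeline is assumed to provide the lateral offset from the corridor
# centerline [m] and the heading error w.r.t. the corridor bisector [rad].
K_OFFSET = 0.8    # illustrative gain on the lateral offset
K_HEADING = 1.5   # illustrative gain on the heading error
V_FORWARD = 0.1   # constant forward speed of the equivalent unicycle [m/s]

def corridor_following(offset, heading_error):
    """Return (v, omega) unicycle commands driving offset and heading to zero."""
    omega = -K_OFFSET * offset - K_HEADING * heading_error
    return V_FORWARD, omega

def unicycle_to_gait(v, omega, step_duration=0.5):
    """Map unicycle inputs to per-step humanoid commands: sagittal step
    length and footstep-frame rotation accumulated over one step."""
    return v * step_duration, omega * step_duration

# Example: robot 0.2 m left of the centerline, heading 0.1 rad off-axis.
v, omega = corridor_following(offset=0.2, heading_error=0.1)
step_length, step_rotation = unicycle_to_gait(v, omega)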


Autonomous Robots | 2016

Humanoid odometric localization integrating kinematic, inertial and visual information

Giuseppe Oriolo; Antonio Paolillo; Lorenzo Rosa; Marilena Vendittelli

We present a method for odometric localization of humanoid robots using standard sensing equipment, i.e., a monocular camera, an inertial measurement unit (IMU), joint encoders and foot pressure sensors. Data from all these sources are integrated using the prediction-correction paradigm of the Extended Kalman Filter. Position and orientation of the torso, defined as the representative body of the robot, are predicted through kinematic computations based on joint encoder readings; an asynchronous mechanism triggered by the pressure sensors is used to update the placement of the support foot. The correction step of the filter uses as measurements the torso orientation, provided by the IMU, and the head pose, reconstructed by a VSLAM algorithm. The proposed method is validated on the humanoid NAO through two sets of experiments: open-loop motions aimed at assessing the accuracy of localization with respect to a ground truth, and closed-loop motions where the humanoid pose estimates are used in real-time as feedback signals for trajectory control.
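
The prediction-correction structure described above can be summarized with a generic Extended Kalman Filter skeleton; the sketch below (Python/NumPy) is not the paper's implementation: the state layout, the kinematic model f, the measurement models h, and the noise matrices are all placeholders to be supplied.

import numpy as np

class TorsoEKF:
    """Generic EKF skeleton mirroring the prediction-correction scheme:
    prediction from leg kinematics, correction from IMU and V-SLAM data."""

    def __init__(self, x0, P0):
        self.x = np.asarray(x0, dtype=float)   # torso pose estimate
        self.P = np.asarray(P0, dtype=float)   # estimate covariance

    def predict(self, f, F, Q):
        """Propagate the torso pose through the support-leg kinematics f
        (driven by joint encoder readings), with Jacobian F and noise Q."""
        self.x = f(self.x)
        self.P = F @ self.P @ F.T + Q

    def correct(self, z, h, H, R):
        """Fuse a measurement z (e.g., torso orientation from the IMU or
        head pose from V-SLAM) with measurement model h, Jacobian H, noise R."""
        y = z - h(self.x)                      # innovation
        S = H @ self.P @ H.T + R               # innovation covariance
        K = self.P @ H.T @ np.linalg.inv(S)    # Kalman gain
        self.x = self.x + K @ y
        self.P = (np.eye(len(self.x)) - K @ H) @ self.P

In the scheme the abstract outlines, predict() would run at every control cycle from encoder data, correct() would run whenever an IMU or V-SLAM measurement arrives, and the support-foot placement used by the kinematic prediction would be updated asynchronously on pressure-sensor events.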


IEEE-RAS International Conference on Humanoid Robots | 2012

Vision-based Odometric Localization for humanoids using a kinematic EKF

Giuseppe Oriolo; Antonio Paolillo; Lorenzo Rosa; Marilena Vendittelli

We propose an odometric system for localizing a walking humanoid robot using standard sensory equipment, i.e., a camera, an Inertial Measurement Unit, joint encoders and foot pressure sensors. Our method has the prediction-correction structure of an Extended Kalman Filter. At each sampling instant, position and orientation of the torso are predicted on the basis of the differential kinematic map from the support foot to the torso, using encoder data from the support joints. The actual measurements coming from the camera (head position and orientation reconstructed by a V-SLAM algorithm) and the Inertial Measurement Unit (torso orientation) are then compared with their predicted values to correct the estimate. The filter is made aware of the current placement of the support foot by an asynchronous update mechanism triggered by the pressure sensors. An experimental validation on the humanoid NAO shows the satisfactory performance of the proposed method.


International Conference on Robotics and Automation | 2011

Walking motion generation with online foot position adaptation based on ℓ1- and ℓ∞-norm penalty formulations

Dimitar Dimitrov; Antonio Paolillo; Pierre-Brice Wieber

The article presents an improved formulation of an existing model predictive control scheme used to generate online “stable” walking motions for a humanoid robot. We introduce: (i) a change of variable that simplifies the optimization problem to be solved; (ii) a simply bounded formulation in the case when the positions of the feet are predetermined; (iii) a formulation allowing foot repositioning (when the system is perturbed) based on ℓ1- and ℓ∞-norm minimization; (iv) a formulation that accounts for (approximate) double support constraints when foot repositioning occurs.
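
In generic terms (the symbols below are not the paper's notation), a norm penalty on the foot displacements Δf can be appended to a standard walking MPC cost as

\[
\min_{u,\,\Delta f}\;\; \tfrac{1}{2}\,\|u\|_2^2 \;+\; \tfrac{\beta}{2}\,\big\|z(u,\Delta f)-z^{\mathrm{ref}}\big\|_2^2 \;+\; \gamma\,\|\Delta f\|_p,
\qquad p\in\{1,\infty\},
\]

subject to ZMP support constraints and bounds on Δf, where u is the jerk input and z the resulting ZMP trajectory. The ℓ1 penalty promotes sparse repositioning (the feet move only when the nominal plan becomes infeasible), while the ℓ∞ penalty penalizes the largest displacement; both can be rewritten with slack variables and linear constraints, so the problem remains a quadratic program.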


IEEE-RAS International Conference on Humanoid Robots | 2013

Vision-based trajectory control for humanoid navigation

Giuseppe Oriolo; Antonio Paolillo; Lorenzo Rosa; Marilena Vendittelli

We address the problem of robustly tracking a desired workspace trajectory with a humanoid robot. The proposed solution is based on the suitable definition of a controlled output, which represents an averaged motion of the torso after cancellation of the sway oscillation. In particular, two different techniques are presented for extracting the averaged motion. For control design purposes, a unicycle-like model is associated to the evolution of this output. The feedback loop is then closed using a vision-based odometric localization method to estimate the torso motion. The proposed approach is validated through comparative experiments on the humanoid robot NAO.
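
One simple way to extract an averaged torso motion of the kind mentioned above is to filter the measured torso trajectory with a moving average spanning one gait cycle, so that the lateral sway (one oscillation per cycle) cancels out. The abstract does not specify which two techniques are used, so the sketch below (Python) is only one plausible instance, with an assumed sampling rate and cycle duration.

from collections import deque
import numpy as np

class SwayAveragingFilter:
    """Moving average of the torso position over one gait cycle, so the
    periodic sway oscillation averages out of the controlled output."""

    def __init__(self, samples_per_gait_cycle):
        self.window = deque(maxlen=samples_per_gait_cycle)

    def update(self, torso_xy):
        self.window.append(np.asarray(torso_xy, dtype=float))
        return np.mean(self.window, axis=0)

# Example: torso estimates at 100 Hz and a 0.8 s gait cycle -> 80 samples.
averaged_torso = SwayAveragingFilter(samples_per_gait_cycle=80)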


International Conference on Robotics and Automation | 2014

Manual guidance of humanoid robots without force sensors: Preliminary experiments with NAO

Marco Bellaccini; Leonardo Lanari; Antonio Paolillo; Marilena Vendittelli

In this paper we propose a method to perform manual guidance with humanoid robots. Manual guidance is a general model of physical interaction: here we focus on guiding a humanoid by its hands. The proposed technique can, however, also be used for joint object transportation and other tasks involving human-humanoid physical interaction. Using a measure of the Instantaneous Capture Point, we develop an equilibrium-based interaction technique that does not require force/torque or vision sensors. It is, therefore, particularly suitable for low-cost humanoids and toys. The proposed method has been experimentally validated on the small humanoid NAO.
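
For reference, the Instantaneous Capture Point of the linear inverted pendulum model is ξ = c + ċ/ω₀ with ω₀ = √(g/z_c), where c is the center-of-mass position and z_c its height. The abstract does not detail how ICP measurements are turned into guidance commands, so the deadband rule in the sketch below (Python) is purely illustrative.

import math
import numpy as np

G = 9.81  # gravity [m/s^2]

def capture_point(com_xy, com_vel_xy, com_height):
    """Instantaneous Capture Point of the linear inverted pendulum model."""
    omega0 = math.sqrt(G / com_height)
    return np.asarray(com_xy, dtype=float) + np.asarray(com_vel_xy, dtype=float) / omega0

def guidance_intent(icp_xy, support_center_xy, deadband=0.02):
    """Illustrative rule: an ICP displacement beyond a small deadband is read
    as the direction in which the human is pushing or pulling the hands."""
    delta = np.asarray(icp_xy, dtype=float) - np.asarray(support_center_xy, dtype=float)
    return delta if np.linalg.norm(delta) > deadband else np.zeros(2)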


Autonomous Robots | 2017

Vision-based maze navigation for humanoid robots

Antonio Paolillo; Angela Faragasso; Giuseppe Oriolo; Marilena Vendittelli

We present a vision-based approach for navigation of humanoid robots in networks of corridors connected through curves and junctions. The objective of the humanoid is to follow the corridors, walking as close as possible to their center to maximize motion safety, and to turn at curves and junctions. Our control algorithm is inspired by a technique originally designed for unicycle robots that we have adapted to humanoid navigation and extended to cope with the presence of turns and junctions. In addition, we prove here that the corridor following control law provides asymptotic convergence of robot heading and position to the corridor bisector even when the corridor walls are not parallel. A state transition system is designed to allow navigation in mazes of corridors, curves and T-junctions. Extensive experimental validation proves the validity and robustness of the approach.
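
The state transition system mentioned above can be pictured as a small finite-state machine switching between corridor following, turning, and branch selection; the states, events, and transitions in the sketch below (Python) are illustrative assumptions rather than the paper's actual design.

# Toy navigation state machine; names and transitions are assumptions.
TRANSITIONS = {
    ("FOLLOW_CORRIDOR", "curve_detected"):    "TURN",
    ("FOLLOW_CORRIDOR", "junction_detected"): "CHOOSE_BRANCH",
    ("CHOOSE_BRANCH",   "branch_selected"):   "TURN",
    ("TURN",            "corridor_detected"): "FOLLOW_CORRIDOR",
}

def next_state(state, event):
    """Return the next navigation state, staying put on unhandled events."""
    return TRANSITIONS.get((state, event), state)

state = "FOLLOW_CORRIDOR"
for event in ("junction_detected", "branch_selected", "corridor_detected"):
    state = next_state(state, event)   # CHOOSE_BRANCH -> TURN -> FOLLOW_CORRIDOR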


Journal of Field Robotics | 2018

Autonomous car driving by a humanoid robot

Antonio Paolillo; Pierre Gergondet; Andrea Cherubini; Marilena Vendittelli; Abderrahmane Kheddar

Enabling a humanoid robot to drive a car requires the development of a set of basic primitive actions. These include walking to the vehicle, manually controlling its commands (e.g., ignition, gas pedal, and steering) and moving with the whole body to ingress/egress the car. We present a sensor-based reactive framework for realizing the central part of the complete task, consisting of driving the car along unknown roads. The proposed framework provides three driving strategies by which a human supervisor can teleoperate the car or give the robot full or partial control of the car. A visual servoing scheme uses features of the road image to provide the reference angle for the steering wheel to drive the car at the center of the road. Simultaneously, a Kalman filter merges optical flow and accelerometer measurements to estimate the car's linear velocity and correspondingly compute the gas pedal command for driving at a desired speed. The steering wheel and gas pedal references are sent to the robot controller to achieve the driving task with the humanoid. We present results from a driving experiment with a real car and the humanoid robot HRP-2Kai. Part of the framework has been used to perform the driving task at the DARPA Robotics Challenge.
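
The speed-estimation part lends itself to a very small sketch: a scalar Kalman filter whose prediction integrates the accelerometer and whose correction uses the optical-flow speed, feeding a proportional gas-pedal command. Noise values and the pedal gain below are illustrative assumptions, not the paper's tuning (Python).

class ForwardSpeedKF:
    """Scalar Kalman filter: accelerometer-driven prediction, optical-flow
    speed measurement as correction."""

    def __init__(self, q=0.05, r=0.5):
        self.v, self.p = 0.0, 1.0   # speed estimate [m/s] and its variance
        self.q, self.r = q, r       # process / measurement noise variances

    def predict(self, accel, dt):
        self.v += accel * dt        # integrate the measured acceleration
        self.p += self.q

    def correct(self, v_optical_flow):
        k = self.p / (self.p + self.r)            # Kalman gain
        self.v += k * (v_optical_flow - self.v)
        self.p *= (1.0 - k)
        return self.v

def gas_pedal_command(v_est, v_des, k_p=0.3):
    """Proportional pedal command toward the desired speed, clamped to [0, 1]."""
    return max(0.0, min(1.0, k_p * (v_des - v_est)))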


IEEE-RAS International Conference on Humanoid Robots | 2016

Residual-based contacts estimation for humanoid robots

Fabrizio Flacco; Antonio Paolillo; Abderrahmane Kheddar

The residual method for detecting contacts is a promising approach to enabling physical interaction tasks with humanoid robots. Nevertheless, the classical formulation, developed for fixed-base robots, cannot be directly applied to floating-base systems. This paper presents a novel formulation of the residual based on the floating-base dynamics of humanoids, which leads to the definition of an internal and an external residual. The former estimates the joint efforts due to external perturbations acting on the robot; the latter estimates the external forces acting on the robot's floating base. The potential of the method is shown by proposing a simple reaction strategy based on the internal residual, together with a procedure for estimating the contact point that combines both residuals.
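
For context, the classical fixed-base momentum residual that the paper generalizes can be written (in standard notation, not necessarily the paper's) as

\[
r(t) \;=\; K\left[\,p(t)\;-\;\int_0^{t}\big(\tau + C^{\top}(q,\dot q)\,\dot q - g(q) + r\big)\,\mathrm{d}s\;-\;p(0)\right],
\qquad p = M(q)\,\dot q,
\]

so that \(\dot r = K(\tau_{\mathrm{ext}} - r)\), i.e., r tracks the external joint torques with first-order dynamics set by the gain K. The paper rewrites this construction on the floating-base dynamics, splitting it into the internal residual (joint-level effect of the perturbation) and the external residual (wrench acting on the floating base).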


IEEE-RAS International Conference on Humanoid Robots | 2014

Toward autonomous car driving by a humanoid robot: A sensor-based framework

Antonio Paolillo; Andrea Cherubini; François Keith; Abderrahmane Kheddar; Marilena Vendittelli

To achieve the complete car driving task with a humanoid robot, it is necessary to develop a set of basic action primitives, including: walking to the vehicle, manually controlling its commands (ignition, accelerator and steering), and moving with the whole body for car ingress/egress. In this paper, we propose an approach for realizing the central part of the complete task, consisting of driving the car along a road. The proposed method is composed of two main parts. First, a vision-based controller uses image features of the road to provide the reference angle for the steering wheel. Second, an admittance controller allows the humanoid to safely rotate the steering wheel with its hands and realize the desired steering command. We present results from a car driving experiment, performed by the humanoid robot HRP-4 within a video game setup.
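
Below is a minimal single-axis admittance law of the general form mentioned above, M·ẍ + D·ẋ + K·x = f_measured, integrated at each control step to obtain the compliant offset added to the nominal hand reference; the parameters and the one-axis simplification are assumptions for illustration (Python).

class Admittance1D:
    """One-axis admittance filter: the measured hand force drives a virtual
    mass-damper-spring whose state is the compliant offset of the hand."""

    def __init__(self, mass=1.0, damping=20.0, stiffness=50.0):
        self.m, self.d, self.k = mass, damping, stiffness
        self.x, self.x_dot = 0.0, 0.0   # offset [m] and its rate [m/s]

    def update(self, force, dt):
        """Advance the admittance dynamics by one step of duration dt and
        return the compliant offset to add to the nominal hand reference."""
        x_ddot = (force - self.d * self.x_dot - self.k * self.x) / self.m
        self.x_dot += x_ddot * dt
        self.x += self.x_dot * dt
        return self.x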

Collaboration


Dive into Antonio Paolillo's collaborations.

Top Co-Authors

Abderrahmane Kheddar (National Institute of Advanced Industrial Science and Technology)

Giuseppe Oriolo (Sapienza University of Rome)

Lorenzo Rosa (Sapienza University of Rome)