Publication


Featured research published by Alexandre Bernardino.


Neural Networks | 2010

The iCub humanoid robot: An open-systems platform for research in cognitive development

Giorgio Metta; Lorenzo Natale; Francesco Nori; Giulio Sandini; David Vernon; Luciano Fadiga; Claes von Hofsten; Kerstin Rosander; Manuel Lopes; José Santos-Victor; Alexandre Bernardino; Luis Montesano

We describe a humanoid robot platform, the iCub, which was designed to support collaborative research in cognitive development through autonomous exploration and social interaction. The motivation for this effort is the conviction that significantly greater impact can be leveraged by adopting an open-systems policy for software and hardware development. This creates the need for a robust humanoid robot that offers rich perceptuo-motor capabilities with many degrees of freedom, a cognitive capacity for learning and development, a software architecture that encourages reuse and easy integration, and a support infrastructure that fosters collaboration and sharing of resources. The iCub satisfies all of these needs in the guise of an open-system platform which is freely available and which has attracted a growing community of users and developers. To date, twenty iCubs, each comprising approximately 5000 mechanical and electrical parts, have been delivered to several research labs in Europe and to one in the USA.


IEEE Transactions on Robotics | 2008

Learning Object Affordances: From Sensory--Motor Coordination to Imitation

Luis Montesano; Manuel Lopes; Alexandre Bernardino; José Santos-Victor

Affordances encode relationships between actions, objects, and effects. They play an important role in basic cognitive capabilities such as prediction and planning. We address the problem of learning affordances through the interaction of a robot with the environment, a key step to understand the world properties and develop social skills. We present a general model for learning object affordances using Bayesian networks integrated within a general developmental architecture for social robots. Since learning is based on a probabilistic model, the approach is able to deal with uncertainty, redundancy, and irrelevant information. We demonstrate successful learning in the real world with a humanoid robot interacting with objects. We illustrate the benefits of the acquired knowledge in imitation games.
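The core idea of the affordance model, estimating the probability of an effect given an action and an object property from interaction data, can be illustrated with a toy discrete Bayesian network. This is a minimal sketch, not the paper's actual network or data: the variables, trial records, and add-one smoothing are illustrative assumptions.

```python
import numpy as np

# toy affordance network: Action -> Effect <- ObjectShape
actions = ["tap", "grasp"]
shapes = ["ball", "box"]
effects = ["rolls", "stays"]

# hypothetical interaction trials: (action, object shape, observed effect)
trials = [
    ("tap", "ball", "rolls"), ("tap", "ball", "rolls"), ("tap", "ball", "stays"),
    ("tap", "box", "stays"), ("tap", "box", "stays"),
    ("grasp", "ball", "stays"), ("grasp", "box", "stays"),
]

# conditional probability table P(effect | action, shape),
# estimated from counts with add-one smoothing
counts = np.ones((2, 2, 2))
for a, s, e in trials:
    counts[actions.index(a), shapes.index(s), effects.index(e)] += 1
cpt = counts / counts.sum(axis=2, keepdims=True)

# prediction: which effect is most likely when tapping a ball?
p = cpt[actions.index("tap"), shapes.index("ball")]
print(effects[int(p.argmax())])  # "rolls"
```

The same table can be queried in the other direction (which action most likely produces a desired effect), which is the inference step that supports the imitation games described in the abstract.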


international conference on robotics and automation | 2006

Design of the robot-cub (iCub) head

Ricardo Beira; Manuel Lopes; M. Praga; José Santos-Victor; Alexandre Bernardino; Giorgio Metta; Francesco Becchi; Roque Saltaren

This paper describes the design of a robot head, developed in the framework of the RobotCub project. The goal of this project is the design and construction of a humanoid robotic platform, the iCub, for studying human cognition. The final platform will be approximately 90 cm tall, weigh 23 kg, and have a total of 53 degrees of freedom. For its size, the iCub is the most complete humanoid robot currently being designed in terms of kinematic complexity. The eyes can also move, as opposed to similarly sized humanoid platforms. Specifications are made based on biological anatomical and behavioral data, as well as task constraints. Different concepts for the neck design (flexible, parallel, and serial solutions) are analyzed and compared with respect to the specifications. The eye structure and the proprioceptive sensors are presented, together with some discussion of preliminary work on the face design.


IEEE Journal of Oceanic Engineering | 2003

Mosaic-based navigation for autonomous underwater vehicles

Nuno Gracias; S. van der Zwaan; Alexandre Bernardino; José Santos-Victor

We propose an approach for vision-based navigation of underwater robots that relies on the use of video mosaics of the sea bottom as environmental representations for navigation. We present a methodology for building high-quality video mosaics of the sea bottom in a fully automatic manner, which ensures global spatial coherency. During navigation, a set of efficient visual routines is used for the fast and accurate localization of the underwater vehicle with respect to the mosaic. These visual routines were developed taking into account the operating requirements of real-time position sensing, error bounding, and computational load. A visual servoing controller, based on the vehicle's kinematics, is used to drive the vehicle along a computed trajectory, specified in the mosaic, while maintaining constant altitude. The trajectory toward a goal point is generated online to avoid undefined areas in the mosaic. We have conducted a large set of sea trials under realistic operating conditions. This paper demonstrates that, without resorting to additional sensors, visual information can be used to create environment representations of the sea bottom (mosaics) and support long runs of navigation in a robust manner.


IEEE Transactions on Intelligent Transportation Systems | 2006

Detection and classification of highway lanes using vehicle motion trajectories

José Melo; Andrew Naftel; Alexandre Bernardino; José Santos-Victor

Intelligent vision-based traffic surveillance systems are assuming an increasingly important role in highway monitoring and road management schemes. This paper describes a low-level object tracking system that produces accurate vehicle motion trajectories that can be further analyzed to detect lane centers and classify lane types. Accompanying techniques for indexing and retrieval of anomalous trajectories are also derived. The predictive trajectory merge-and-split algorithm is used to detect partial or complete occlusions during object motion and incorporates a Kalman filter that is used to perform vehicle tracking. The resulting motion trajectories are modeled using variable low-degree polynomials. A K-means clustering technique on the coefficient space can be used to obtain approximate lane centers. Estimation bias due to vehicle lane changes can be removed using robust estimation techniques based on Random Sample Consensus (RANSAC). Through the use of nonmetric distance functions and a simple directional indicator, highway lanes can be classified into one of the following categories: entry, exit, primary, or secondary. Experimental results are presented to show the real-time application of this approach to multiple views obtained by an uncalibrated pan-tilt-zoom traffic camera monitoring the junction of two busy intersecting highways.
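The lane-detection step described above, fitting low-degree polynomials to trajectories and clustering in coefficient space, can be sketched as follows. This is an illustrative reconstruction, not the paper's implementation: the synthetic trajectories, the plain K-means routine, and the deterministic initialization are all assumptions made for the example.

```python
import numpy as np

rng = np.random.default_rng(0)

def trajectory_coeffs(traj, degree=2):
    """Model a vehicle trajectory (x_i, y_i) as y = poly(x); the low-degree
    coefficient vector is the feature used for lane clustering."""
    x, y = traj[:, 0], traj[:, 1]
    return np.polyfit(x, y, degree)

def kmeans(X, k, iters=50):
    """Plain K-means on coefficient space; returns cluster centers and labels."""
    centers = X[[0, len(X) // 2]].copy()  # simple deterministic initialization
    for _ in range(iters):
        d = np.linalg.norm(X[:, None] - centers[None], axis=2)
        labels = d.argmin(axis=1)
        centers = np.array([X[labels == j].mean(axis=0) for j in range(k)])
    return centers, labels

# synthetic trajectories on two straight lanes centered at y = 10 and y = 20
trajs = []
for lane_y in (10.0, 20.0):
    for _ in range(20):
        x = np.linspace(0.0, 100.0, 30)
        y = lane_y + rng.normal(0.0, 0.3, size=x.size)
        trajs.append(np.column_stack([x, y]))

X = np.array([trajectory_coeffs(t) for t in trajs])
centers, labels = kmeans(X, k=2)
# the constant term of each cluster center approximates a lane center position
print(sorted(np.round(centers[:, -1]).tolist()))  # roughly [10.0, 20.0]
```

The paper's additional machinery (Kalman tracking, RANSAC to discard lane-change segments, nonmetric distances for lane-type classification) would operate before and after this clustering step.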


international conference on robotics and automation | 2008

Multimodal saliency-based bottom-up attention: a framework for the humanoid robot iCub

Jonas Ruesch; Manuel Lopes; Alexandre Bernardino; Jonas Hörnstein; José Santos-Victor; Rolf Pfeifer

This work presents a multimodal bottom-up attention system for the humanoid robot iCub, where the robot's decisions to move eyes and neck are based on visual and acoustic saliency maps. We introduce a modular and distributed software architecture which is capable of fusing visual and acoustic saliency maps into one egocentric frame of reference. This system endows the iCub with an emergent exploratory behavior reacting to combined visual and auditory saliency. The developed software modules provide a flexible foundation for the open iCub platform and for further experiments and developments, including higher levels of attention and representation of the peripersonal space.
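The fusion step, normalizing per-modality saliency maps on a shared egocentric grid and combining them before selecting a gaze target, can be sketched as below. This is a minimal illustration under assumed conventions (a 10-degree azimuth/elevation grid, Gaussian stimuli, equal modality weights), not the iCub modules themselves.

```python
import numpy as np

def normalize(m):
    """Scale a saliency map to [0, 1]; flat maps become all-zero."""
    span = m.max() - m.min()
    return (m - m.min()) / span if span > 0 else np.zeros_like(m)

def fuse(visual, acoustic, w_v=0.5, w_a=0.5):
    """Fuse maps defined on a common egocentric (azimuth x elevation) grid."""
    return w_v * normalize(visual) + w_a * normalize(acoustic)

# egocentric grid: 10-degree bins in azimuth and elevation
az = np.linspace(-175.0, 175.0, 36)
el = np.linspace(-85.0, 85.0, 18)

# hypothetical stimuli: a visual blob at 30 deg azimuth, a sound at 40 deg
visual = np.exp(-((az[:, None] - 30.0) ** 2 + el[None, :] ** 2) / 400.0)
acoustic = np.exp(-((az[:, None] - 40.0) ** 2 + el[None, :] ** 2) / 400.0)

s = fuse(visual, acoustic)
i, j = np.unravel_index(s.argmax(), s.shape)
print(az[i], el[j])  # gaze target lies between the two stimuli in azimuth
```

Because both maps live in one egocentric frame, a winner-take-all over the fused map directly yields the gaze direction for the eye/neck controller, which is the emergent exploratory behavior the abstract describes.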


Robotics and Autonomous Systems | 2010

A review of log-polar imaging for visual perception in robotics

V. Javier Traver; Alexandre Bernardino

Log-polar imaging encompasses a family of methods that represent visual information with a space-variant resolution inspired by the visual system of mammals. It has been studied for about three decades and has surpassed conventional approaches in robotics applications, mainly those where real-time constraints make it necessary to utilize resource-economic image representations and processing methodologies. This paper surveys the application of log-polar imaging in robotic vision, particularly in visual attention, target tracking, egomotion estimation, and 3D perception. The concise yet comprehensive review offered in this paper is intended to provide novel and experienced roboticists with a quick and gentle overview of log-polar vision and to motivate vision researchers to investigate the many open problems that still need solving. To help readers identify promising research directions, a possible research agenda is outlined. Finally, since log-polar vision is not restricted to robotics, a couple of other areas of application are discussed.
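The space-variant sampling at the heart of log-polar imaging can be sketched with a simple nearest-neighbour resampler: ring radii grow exponentially, so the fovea is sampled densely and the periphery coarsely. This is an illustrative sketch with assumed parameters (ring/wedge counts, minimum radius), not a specific sensor model from the survey.

```python
import numpy as np

def logpolar_sample_grid(h, w, n_rings=32, n_wedges=64, r_min=1.0):
    """Build (row, col) coordinates mapping a log-polar grid onto an
    h x w Cartesian image centred at the image midpoint."""
    cy, cx = (h - 1) / 2.0, (w - 1) / 2.0
    r_max = min(cy, cx)
    # ring radii grow exponentially: fine resolution near the fovea,
    # coarser towards the periphery (space-variant sampling)
    radii = r_min * (r_max / r_min) ** (np.arange(n_rings) / (n_rings - 1))
    angles = np.linspace(0.0, 2 * np.pi, n_wedges, endpoint=False)
    rr = cy + radii[:, None] * np.sin(angles)[None, :]
    cc = cx + radii[:, None] * np.cos(angles)[None, :]
    return rr, cc

def to_logpolar(img, **kw):
    """Nearest-neighbour resampling of a grayscale image into log-polar space."""
    rr, cc = logpolar_sample_grid(*img.shape, **kw)
    ri = np.clip(np.rint(rr).astype(int), 0, img.shape[0] - 1)
    ci = np.clip(np.rint(cc).astype(int), 0, img.shape[1] - 1)
    return img[ri, ci]

img = np.arange(128 * 128, dtype=float).reshape(128, 128)
lp = to_logpolar(img, n_rings=32, n_wedges=64)
print(lp.shape)  # (32, 64): 2048 samples summarize a 16384-pixel image
```

The roughly eightfold data reduction in this toy example is what makes log-polar representations attractive under the real-time constraints the survey emphasizes; a further property is that camera rotation and scaling about the fovea become shifts in the log-polar plane.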


international conference on robotics and automation | 1999

Binocular tracking: integrating perception and control

Alexandre Bernardino; José Santos-Victor

Presents an active binocular tracking system using log-polar images with contributions in both the perceptual and control aspects. The control part is based on the visual servoing framework, including kinematics and dynamics. We introduce a fixation constraint that simplifies the tracking problem by decoupling the visual kinematics and allowing us to express system dynamics in image coordinates. Simple dynamic controllers are designed for each degree of freedom directly from image features. In the perceptual part, we use a space variant sensor that emphasizes the center of the visual field (log-polar geometry). We present a disparity estimation algorithm for log-polar images and provide a theoretical analysis to illustrate the advantages of using space variant images. The overall system is implemented in the Medusa binocular head without any specific processing hardware. The use of log-polar images allows real-time performance (50 Hz). Tracking experiments are presented to illustrate system performance with different control strategies and objects of different shapes and motions.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Matrix Completion for Weakly-Supervised Multi-Label Image Classification

Ricardo Silveira Cabral; Fernando De la Torre; João Paulo Costeira; Alexandre Bernardino

In the last few years, image classification has become an incredibly active research topic, with widespread applications. Most methods for visual recognition are fully supervised, as they make use of bounding boxes or pixelwise segmentations to locate objects of interest. However, this type of manual labeling is time-consuming and error-prone, and it has been shown that manual segmentations are not necessarily the optimal spatial enclosure for object classifiers. This paper proposes a weakly-supervised system for multi-label image classification. In this setting, training images are annotated with a set of keywords describing their contents, but the visual concepts are not explicitly segmented in the images. We formulate weakly-supervised image classification as a low-rank matrix completion problem. Compared to previous work, our proposed framework has three advantages: (1) Unlike existing solutions based on multiple-instance learning methods, our model is convex. We propose two alternative algorithms for matrix completion specifically tailored to visual data, and prove their convergence. (2) Unlike existing discriminative methods, our algorithm is robust to labeling errors, background noise and partial occlusions. (3) Our method can potentially be used for semantic segmentation. Experimental validation on several data sets shows that our method outperforms state-of-the-art classification algorithms, while effectively capturing each class's appearance.
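To give a feel for low-rank matrix completion, here is a sketch of the generic SoftImpute-style scheme (iterative singular-value soft-thresholding). This is not one of the paper's two tailored algorithms; the synthetic data, the regularization weight, and the iteration count are all assumptions for illustration.

```python
import numpy as np

def soft_impute(M, mask, lam=0.5, iters=200):
    """Low-rank matrix completion by iterative singular-value soft-thresholding:
    observed entries are kept fixed, missing entries are filled with the
    current low-rank estimate."""
    Z = np.where(mask, M, 0.0)
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(np.where(mask, M, Z), full_matrices=False)
        s = np.maximum(s - lam, 0.0)  # shrink singular values (nuclear-norm prox)
        Z = (U * s) @ Vt
    return Z

rng = np.random.default_rng(1)
# ground truth: a rank-2 matrix standing in for feature/label scores
A = rng.normal(size=(30, 2)) @ rng.normal(size=(2, 20))
mask = rng.random(A.shape) < 0.6  # 60% of entries observed
Z = soft_impute(A, mask)
err = np.abs(Z[~mask] - A[~mask]).mean()
print(err)  # small reconstruction error on the unobserved entries
```

In the weakly-supervised setting of the paper, the matrix stacks image features together with (noisy, incomplete) label rows, so completing the missing label entries amounts to classifying the images; convexity of the nuclear-norm objective is what guarantees a global optimum.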


Robotics and Autonomous Systems | 2002

Visual station keeping for floating robots in unstructured environments

Sjoerd van der Zwaan; Alexandre Bernardino; José Santos-Victor

This paper describes the use of vision for navigation of mobile robots floating in 3D space. The problem addressed is that of automatic station keeping relative to some naturally textured environmental region. Due to the motion disturbances in the environment (currents), these tasks are important to keep the vehicle stabilized relative to an external reference frame. Assuming short-range regions in the environment, vision can be used for local navigation, so that no global positioning methods are required. A planar environmental region is selected as a visual landmark and tracked throughout a monocular video sequence. For a camera moving in 3D space, the observed deformations of the tracked image region follow planar projective transformations and reveal information about the robot's relative position and orientation w.r.t. the landmark. This information is then used in a visual feedback loop so as to realize station keeping. Both the tracking system and the control design are discussed. Two robotic platforms are used for experimental validation, namely an indoor aerial blimp and a remotely operated underwater vehicle. Results obtained from these experiments are described.

Collaboration


Dive into Alexandre Bernardino's collaborations.

Top Co-Authors

Plinio Moreno (Instituto Superior Técnico)

Lorenzo Jamone (Instituto Superior Técnico)

Ricardo Ferreira (Instituto Superior Técnico)

Rui Figueiredo (Instituto Superior Técnico)

Matteo Taiana (Instituto Superior Técnico)

Nicola Greggio (Instituto Superior Técnico)

Dario Figueira (Instituto Superior Técnico)