

Publications


Featured research published by José Santos-Victor.


International Conference on Robotics and Automation | 2000

Vision-based navigation and environmental representations with an omnidirectional camera

José António Gaspar; Niall Winters; José Santos-Victor

Proposes a method for the vision-based navigation of a mobile robot in indoor environments, using a single omnidirectional (catadioptric) camera. The geometry of the catadioptric sensor and the method used to obtain a bird's-eye (orthographic) view of the ground plane are presented. This representation significantly simplifies the solution to navigation problems, by eliminating any perspective effects. The nature of each navigation task is taken into account when designing the required navigation skills and environmental representations. We propose two main navigation modalities: topological navigation and visual path following. Topological navigation is used for traveling long distances and does not require knowledge of the exact position of the robot but rather a qualitative position on the topological map. The navigation process combines appearance-based methods and visual servoing upon some environmental features. Visual path following is required for local, very precise navigation, e.g., door traversal or docking. The robot is controlled to follow a prespecified path accurately, by tracking visual landmarks in bird's-eye views of the ground plane. By clearly separating the nature of these navigation tasks, a simple and yet powerful navigation system is obtained.
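The bird's-eye view step amounts to warping the camera image with a fixed homography of the ground plane. A minimal sketch of that warp, assuming the 3x3 ground-plane homography H has already been obtained offline from the catadioptric calibration (the matrix values and file name below are placeholders):

```python
import cv2
import numpy as np

# Placeholder ground-plane homography; in practice H comes from the
# calibration of the catadioptric sensor described in the paper.
H = np.array([[1.2, 0.1,   -40.0],
              [0.0, 1.5,   -25.0],
              [0.0, 0.002,   1.0]])

def birds_eye_view(image, out_size=(400, 400)):
    """Warp a camera image into a scaled orthographic (bird's-eye)
    view of the ground plane, eliminating perspective effects."""
    return cv2.warpPerspective(image, H, out_size)

img = cv2.imread("omni_frame.png")   # one camera frame (placeholder name)
top_down = birds_eye_view(img)       # ground-plane landmarks keep their shape
```

Tracking landmarks in this orthographic view reduces path following to planar geometry, which is what makes the visual path following modality simple and precise.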


Neural Networks | 2010

The iCub humanoid robot: An open-systems platform for research in cognitive development

Giorgio Metta; Lorenzo Natale; Francesco Nori; Giulio Sandini; David Vernon; Luciano Fadiga; Claes von Hofsten; Kerstin Rosander; Manuel Lopes; José Santos-Victor; Alexandre Bernardino; Luis Montesano

We describe a humanoid robot platform, the iCub, which was designed to support collaborative research in cognitive development through autonomous exploration and social interaction. The motivation for this effort is the conviction that significantly greater impact can be leveraged by adopting an open-systems policy for software and hardware development. This creates the need for a robust humanoid robot that offers rich perceptuo-motor capabilities with many degrees of freedom, a cognitive capacity for learning and development, a software architecture that encourages reuse and easy integration, and a support infrastructure that fosters collaboration and sharing of resources. The iCub satisfies all of these needs in the guise of an open-system platform which is freely available and which has attracted a growing community of users and developers. To date, twenty iCubs, each comprising approximately 5000 mechanical and electrical parts, have been delivered to several research labs in Europe and to one in the USA.


IEEE Transactions on Robotics | 2008

Learning Object Affordances: From Sensory-Motor Coordination to Imitation

Luis Montesano; Manuel Lopes; Alexandre Bernardino; José Santos-Victor

Affordances encode relationships between actions, objects, and effects. They play an important role in basic cognitive capabilities such as prediction and planning. We address the problem of learning affordances through the interaction of a robot with the environment, a key step to understanding world properties and developing social skills. We present a general model for learning object affordances using Bayesian networks, integrated within a general developmental architecture for social robots. Since learning is based on a probabilistic model, the approach is able to deal with uncertainty, redundancy, and irrelevant information. We demonstrate successful learning in the real world by having a humanoid robot interact with objects. We illustrate the benefits of the acquired knowledge in imitation games.
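The paper learns affordances with Bayesian networks from the robot's own interactions. A toy illustration of the underlying idea, estimating a discrete conditional distribution P(effect | action, object shape) from interaction counts and using it for prediction; the variable names and data below are invented for the example, not taken from the paper:

```python
from collections import Counter, defaultdict

# Each interaction: (action, object_shape, observed_effect).
# Invented toy data standing in for the robot's sensorimotor experience.
interactions = [
    ("tap", "ball", "rolls"), ("tap", "ball", "rolls"),
    ("tap", "box", "slides"), ("grasp", "ball", "lifted"),
    ("grasp", "box", "lifted"), ("tap", "box", "stays"),
]

counts = defaultdict(Counter)
for action, shape, effect in interactions:
    counts[(action, shape)][effect] += 1

def p_effect(action, shape):
    """P(effect | action, shape) estimated from interaction frequencies."""
    c = counts[(action, shape)]
    total = sum(c.values())
    if total == 0:
        return {}                 # no experience with this combination yet
    return {e: n / total for e, n in c.items()}

print(p_effect("tap", "ball"))    # {'rolls': 1.0}
print(p_effect("tap", "box"))     # {'slides': 0.5, 'stays': 0.5}
```

A full Bayesian network additionally learns which variables are (ir)relevant and supports inference in every direction, e.g. inferring the action that most likely produced an observed effect, which is what the imitation games exploit.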


Proceedings of the IEEE Workshop on Omnidirectional Vision | 2000

Omni-directional vision for robot navigation

Niall Winters; José António Gaspar; Gerard Lacey; José Santos-Victor

We describe a method for vision-based robot navigation with a single omni-directional (catadioptric) camera. We show how omni-directional images can be used to generate the representations needed for two main navigation modalities: topological navigation and visual path following. Topological navigation relies on the robot's qualitative global position, estimated from a set of omni-directional images obtained during a training stage (compressed using PCA). To deal with illumination changes, an eigenspace approximation to the Hausdorff measure is exploited. We present a method to transform omni-directional images into bird's-eye views that correspond to scaled orthographic views of the ground plane. These images are used to locally control the orientation of the robot, through visual servoing. Visual path following is used to accurately control the robot along a prescribed trajectory, by using bird's-eye views to track landmarks on the ground plane. Due to the simplified geometry of these images, the robot's pose can be estimated easily and used for accurate trajectory following. Omni-directional images facilitate landmark-based navigation, since landmarks remain visible in all images, as opposed to a standard camera with a small field of view. Also, omni-directional images provide the means of having adequate representations to support both accurate and qualitative navigation. Results are described in the paper.
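A compact sketch of the appearance-based localization idea: training omnidirectional views are compressed with PCA, and a query view is assigned the topological position of its nearest neighbour in the eigenspace. Image acquisition and the Hausdorff-based robustification are omitted, and the array shapes are assumptions:

```python
import numpy as np

def build_eigenspace(train_imgs, k=10):
    """train_imgs: (n_views, h*w) flattened omnidirectional images.
    Returns the mean image, top-k principal directions, and the
    low-dimensional coordinates of every training view."""
    mean = train_imgs.mean(axis=0)
    centered = train_imgs - mean
    # SVD of the centered data yields the principal components.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    basis = vt[:k]                      # (k, h*w)
    coords = centered @ basis.T         # (n_views, k)
    return mean, basis, coords

def localize(query, mean, basis, coords):
    """Index of the topologically closest training view for one
    flattened query image."""
    q = (query - mean) @ basis.T
    return int(np.argmin(np.linalg.norm(coords - q, axis=1)))
```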


International Journal of Computer Vision | 1995

Divergent stereo in autonomous navigation: from bees to robots

José Santos-Victor; Giulio Sandini; Francesca Curotto; Stefano Garibaldi

This article presents experiments with a real-time navigation system driven by two cameras pointing laterally to the navigation direction (divergent stereo). Similarly to what has been proposed in (Franceschini et al. 1991; Coombs and Roberts 1992), our approach (Sandini et al. 1992; Santos-Victor et al. 1993) assumes that, for navigation purposes, the driving information is not distance (as obtainable from a stereo setup) but motion, and more precisely qualitative optical-flow information computed over nonoverlapping areas of the visual field of two cameras. Following this idea, a mobile vehicle has been equipped with a pair of cameras looking laterally (much like honeybees) and a controller based on fast, real-time computation of optical flow has been implemented. The control of the mobile robot (Robee) is based on the comparison between the apparent image velocity of the left and the right cameras. The solution adopted is derived from recent studies (Srinivasan 1991) describing the behavior of freely flying honeybees and the mechanisms they use to perceive range. This qualitative information (no explicit measure of depth is performed) is used in many experiments to show the robustness of the approach, and a detailed description of the control structure is presented to demonstrate the feasibility of the approach in driving the mobile robot within a cluttered environment. A discussion of the potentialities of the approach and the implications in terms of sensor structure is also presented.
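The honeybee-inspired control law reduces to balancing the average apparent image motion seen by the two lateral cameras. A rough sketch using dense optical flow; the gain and the grayscale frame sources are assumptions, and the original system used a dedicated real-time flow implementation rather than this one:

```python
import cv2
import numpy as np

K_STEER = 0.5   # assumed proportional gain

def mean_flow_magnitude(prev_gray, curr_gray):
    """Average optical-flow magnitude over one lateral view."""
    flow = cv2.calcOpticalFlowFarneback(prev_gray, curr_gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    return float(np.linalg.norm(flow, axis=2).mean())

def steering_command(prev_l, curr_l, prev_r, curr_r):
    """Steer away from the side with faster apparent motion (the nearer
    wall); no explicit depth is ever computed, as in the paper."""
    left = mean_flow_magnitude(prev_l, curr_l)
    right = mean_flow_magnitude(prev_r, curr_r)
    return K_STEER * (left - right)   # positive: turn right, away from the left wall
```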


Computer Vision and Pattern Recognition | 2000

Underwater Video Mosaics as Visual Navigation Maps

Nuno Gracias; José Santos-Victor

This paper presents a set of algorithms for the creation of underwater mosaics and illustrates their use as visual maps for underwater vehicle navigation. First, we describe the automatic creation of video mosaics, which deals with the problem of image motion estimation in a robust and automatic way. The motion estimation is based on an initial matching of corresponding areas over pairs of images, followed by the use of a robust matching technique, which can cope with a high percentage of incorrect matches. Several motion models, established under the projective geometry framework, allow for the creation of high-quality mosaics where no assumptions are made about the camera motion. Several tests were run on underwater image sequences, testifying to the good performance of the implemented matching and registration methods. Next, we deal with the issue of determining the 3D position and orientation of a vehicle from new views of a previously created mosaic. The problem of pose estimation is tackled using the available information on the camera intrinsic parameters. This information ranges from full knowledge of the parameters to the case where they are estimated using a self-calibration technique based on the analysis of an image sequence captured under pure rotation. The performance of the 3D positioning algorithms is evaluated using images for which accurate ground truth is available.
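The robust pairwise registration can be sketched with standard tools: match features between consecutive frames and estimate a projective homography with RANSAC so that a high fraction of incorrect matches is tolerated. The feature type and thresholds below are assumptions, not the paper's original matcher:

```python
import cv2
import numpy as np

def register_pair(img1, img2):
    """Estimate the projective homography mapping img2 into img1,
    robust to many incorrect matches via RANSAC."""
    orb = cv2.ORB_create(2000)
    k1, d1 = orb.detectAndCompute(img1, None)
    k2, d2 = orb.detectAndCompute(img2, None)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d2, d1)
    src = np.float32([k2[m.queryIdx].pt for m in matches]).reshape(-1, 1, 2)
    dst = np.float32([k1[m.trainIdx].pt for m in matches]).reshape(-1, 1, 2)
    H, _inliers = cv2.findHomography(src, dst, cv2.RANSAC, 3.0)
    return H

# Chaining the pairwise homographies places every frame in the reference
# frame of the first image; blending the warped frames yields the mosaic.
```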


International Conference on Robotics and Automation | 2006

Design of the robot-cub (iCub) head

Ricardo Beira; Manuel Lopes; M. Praga; José Santos-Victor; Alexandre Bernardino; Giorgio Metta; Francesco Becchi; Roque Saltaren

This paper describes the design of a robot head, developed in the framework of the RobotCub project. The project's goal is the design and construction of a humanoid robotic platform, the iCub, for studying human cognition. The final platform will be approximately 90 cm tall, weigh 23 kg, and have a total of 53 degrees of freedom. For its size, the iCub is the most complete humanoid robot currently being designed, in terms of kinematic complexity. The eyes can also move, as opposed to similarly sized humanoid platforms. Specifications are based on biological, anatomical, and behavioral data, as well as task constraints. Different concepts for the neck design (flexible, parallel, and serial solutions) are analyzed and compared with respect to the specifications. The eye structure and the proprioceptive sensors are presented, together with some discussion of preliminary work on the face design.


IEEE Journal of Oceanic Engineering | 2003

Mosaic-based navigation for autonomous underwater vehicles

Nuno Gracias; S. van der Zwaan; Alexandre Bernardino; José Santos-Victor

We propose an approach for vision-based navigation of underwater robots that relies on the use of video mosaics of the sea bottom as environmental representations for navigation. We present a methodology for building high-quality video mosaics of the sea bottom in a fully automatic manner, which ensures global spatial coherency. During navigation, a set of efficient visual routines is used for the fast and accurate localization of the underwater vehicle with respect to the mosaic. These visual routines were developed taking into account the operating requirements of real-time position sensing, error bounding, and computational load. A visual servoing controller, based on the vehicle's kinematics, is used to drive the vehicle along a computed trajectory, specified in the mosaic, while maintaining constant altitude. The trajectory toward a goal point is generated online to avoid undefined areas in the mosaic. We have conducted a large set of sea trials, under realistic operating conditions. This paper demonstrates that, without resorting to additional sensors, visual information can be used to create environment representations of the sea bottom (mosaics) and support long runs of navigation in a robust manner.
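The servoing loop can be pictured as a kinematic proportional controller in the mosaic frame: the visual routines provide the vehicle pose with respect to the mosaic, and the controller heads for the next trajectory waypoint while altitude is held constant. A hedged sketch; the gains and the pose interface are assumptions, not the paper's controller:

```python
import math

K_SURGE, K_YAW = 0.8, 1.5   # assumed controller gains

def servo_step(pose, waypoint):
    """pose: (x, y, heading) from mosaic-based localization;
    waypoint: (x, y) on the trajectory drawn in the mosaic.
    Returns (forward_speed, yaw_rate); altitude is regulated separately."""
    dx, dy = waypoint[0] - pose[0], waypoint[1] - pose[1]
    heading_err = math.atan2(dy, dx) - pose[2]
    # Wrap the heading error to (-pi, pi].
    heading_err = math.atan2(math.sin(heading_err), math.cos(heading_err))
    surge = K_SURGE * math.hypot(dx, dy) * math.cos(heading_err)
    yaw_rate = K_YAW * heading_err
    return surge, yaw_rate
```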


IEEE Transactions on Intelligent Transportation Systems | 2006

Detection and classification of highway lanes using vehicle motion trajectories

José Melo; Andrew Naftel; Alexandre Bernardino; José Santos-Victor

Intelligent vision-based traffic surveillance systems are assuming an increasingly important role in highway monitoring and road management schemes. This paper describes a low-level object tracking system that produces accurate vehicle motion trajectories that can be further analyzed to detect lane centers and classify lane types. Accompanying techniques for indexing and retrieval of anomalous trajectories are also derived. The predictive trajectory merge-and-split algorithm is used to detect partial or complete occlusions during object motion and incorporates a Kalman filter that is used to perform vehicle tracking. The resulting motion trajectories are modeled using variable low-degree polynomials. A K-means clustering technique on the coefficient space can be used to obtain approximate lane centers. Estimation bias due to vehicle lane changes can be removed using robust estimation techniques based on Random Sample Consensus (RANSAC). Through the use of nonmetric distance functions and a simple directional indicator, highway lanes can be classified into one of the following categories: entry, exit, primary, or secondary. Experimental results are presented to show the real-time application of this approach to multiple views obtained by an uncalibrated pan-tilt-zoom traffic camera monitoring the junction of two busy intersecting highways.
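The lane-detection core, fitting each vehicle trajectory with a low-degree polynomial and clustering the coefficient vectors with K-means, can be sketched as follows; the degree, lane count, and data layout are assumptions, and the RANSAC de-biasing step is omitted:

```python
import numpy as np
from sklearn.cluster import KMeans

def lane_centers(trajectories, degree=2, n_lanes=4):
    """trajectories: list of (n_points, 2) arrays of image coordinates,
    one per tracked vehicle. Fits y = p(x) to each trajectory and
    clusters the coefficient vectors; each cluster centre approximates
    one lane-centre polynomial."""
    coeffs = np.array([np.polyfit(t[:, 0], t[:, 1], degree)
                       for t in trajectories])
    km = KMeans(n_clusters=n_lanes, n_init=10).fit(coeffs)
    return km.cluster_centers_      # one coefficient vector per lane
```

Trajectories that include a lane change fit poorly to any single lane polynomial; a robust estimator such as RANSAC, as in the paper, discards them before the cluster centres are taken as lane centres.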


International Conference on Robotics and Automation | 2008

Multimodal saliency-based bottom-up attention: a framework for the humanoid robot iCub

Jonas Ruesch; Manuel Lopes; Alexandre Bernardino; Jonas Hörnstein; José Santos-Victor; Rolf Pfeifer

This work presents a multimodal bottom-up attention system for the humanoid robot iCub where the robot's decisions to move eyes and neck are based on visual and acoustic saliency maps. We introduce a modular and distributed software architecture which is capable of fusing visual and acoustic saliency maps into one egocentric frame of reference. This system endows the iCub with an emergent exploratory behavior reacting to combined visual and auditory saliency. The developed software modules provide a flexible foundation for the open iCub platform and for further experiments and developments, including higher levels of attention and representation of the peripersonal space.
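The fusion step can be pictured as normalizing both saliency maps onto a shared egocentric (azimuth, elevation) grid, combining them, and gazing at the strongest location. A toy sketch, where the prior registration of the maps and the fusion weights are assumptions:

```python
import numpy as np

def fuse_and_select(visual_map, acoustic_map, w_vis=0.5, w_aud=0.5):
    """Both maps are (elevation, azimuth) grids already registered in the
    same egocentric frame. Returns the gaze target as (el, az) indices."""
    def norm(m):
        m = m - m.min()
        return m / m.max() if m.max() > 0 else m
    fused = w_vis * norm(visual_map) + w_aud * norm(acoustic_map)
    return np.unravel_index(np.argmax(fused), fused.shape)
```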

Collaboration


Dive into José Santos-Victor's collaborations.

Top Co-Authors

Plinio Moreno, Instituto Superior Técnico
Lorenzo Jamone, Instituto Superior Técnico
Giulio Sandini, Istituto Italiano di Tecnologia
Nicola Greggio, Instituto Superior Técnico
Rui Figueiredo, Instituto Superior Técnico