Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where José António Gaspar is active.

Publication


Featured research published by José António Gaspar.


International Conference on Robotics and Automation | 2000

Vision-based navigation and environmental representations with an omnidirectional camera

José António Gaspar; Niall Winters; José Santos-Victor

Proposes a method for the vision-based navigation of a mobile robot in indoor environments, using a single omnidirectional (catadioptric) camera. The geometry of the catadioptric sensor and the method used to obtain a bird's-eye (orthographic) view of the ground plane are presented. This representation significantly simplifies the solution to navigation problems by eliminating any perspective effects. The nature of each navigation task is taken into account when designing the required navigation skills and environmental representations. We propose two main navigation modalities: topological navigation and visual path following. Topological navigation is used for traveling long distances and does not require knowledge of the exact position of the robot, but rather a qualitative position on the topological map. The navigation process combines appearance-based methods and visual servoing upon some environmental features. Visual path following is required for local, very precise navigation, e.g., door traversal or docking. The robot is controlled to follow a prespecified path accurately, by tracking visual landmarks in bird's-eye views of the ground plane. By clearly separating the nature of these navigation tasks, a simple and yet powerful navigation system is obtained.
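The central geometric step, remapping the omnidirectional image to a bird's-eye view of the ground plane, can be sketched as a planar warp. The paper's catadioptric dewarping is more general (it needs the mirror geometry); a plain 3x3 homography, as used with perspective cameras, stands in here to illustrate the idea, and the matrix and file name below are hypothetical placeholders:

```python
import cv2
import numpy as np

# Hypothetical homography taking image pixels to a metric grid on the
# ground plane; the paper derives the actual mapping from the calibrated
# catadioptric geometry.
H = np.array([[1.2, 0.1, -40.0],
              [0.0, 1.3, -25.0],
              [0.0, 0.0, 1.0]])

def birds_eye_view(omni_image, H, out_size=(400, 400)):
    """Warp the ground plane to a scaled orthographic (bird's-eye) view."""
    return cv2.warpPerspective(omni_image, H, out_size)

img = cv2.imread("omni_frame.png")   # one camera frame (hypothetical file)
top_down = birds_eye_view(img, H)
```

In the resulting view, perspective effects on the floor are gone, so distances to tracked ground-plane landmarks can be read off directly, which is what keeps the visual path following loop simple.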


Proceedings IEEE Workshop on Omnidirectional Vision (Cat. No.PR00704) | 2000

Omni-directional vision for robot navigation

Niall Winters; José António Gaspar; Gerard Lacey; José Santos-Victor

We describe a method for vision-based robot navigation with a single omni-directional (catadioptric) camera. We show how omni-directional images can be used to generate the representations needed for two main navigation modalities: Topological Navigation and Visual Path Following. Topological Navigation relies on the robot's qualitative global position, estimated from a set of omni-directional images obtained during a training stage (compressed using PCA). To deal with illumination changes, an eigenspace approximation to the Hausdorff measure is exploited. We present a method to transform omni-directional images into bird's-eye views that correspond to scaled orthographic views of the ground plane. These images are used to locally control the orientation of the robot through visual servoing. Visual Path Following is used to accurately control the robot along a prescribed trajectory, by using bird's-eye views to track landmarks on the ground plane. Due to the simplified geometry of these images, the robot's pose can be estimated easily and used for accurate trajectory following. Omni-directional images facilitate landmark-based navigation, since landmarks remain visible in all images, as opposed to a small field-of-view standard camera. Omni-directional images also provide the means of having adequate representations to support both accurate and qualitative navigation. Results are described in the paper.
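The appearance-based localization step can be illustrated in a few lines: compress the training views with PCA and localize a query image by nearest neighbour in the eigenspace. A minimal sketch assuming hypothetical data files; the paper's eigenspace approximation of the Hausdorff measure for illumination robustness is omitted here:

```python
import numpy as np
from sklearn.decomposition import PCA

# Reference omnidirectional images taken along the route, flattened to
# vectors, and the topological node each one belongs to (hypothetical files).
train_imgs = np.load("train_views.npy").reshape(200, -1)
nodes = np.load("train_nodes.npy")

pca = PCA(n_components=15)                  # compress to a low-dim eigenspace
train_codes = pca.fit_transform(train_imgs)

def localize(query_img):
    """Return the topological node of the closest training view."""
    code = pca.transform(query_img.reshape(1, -1))
    dists = np.linalg.norm(train_codes - code, axis=1)
    return nodes[np.argmin(dists)]
```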


Proceedings of the IEEE Workshop on Omnidirectional Vision 2002. Held in conjunction with ECCV'02 | 2002

Constant resolution omnidirectional cameras

José António Gaspar; Cláudia Deccó; Jun Okamoto; José Santos-Victor

In this paper we present a general methodology for designing mirrors of catadioptric omnidirectional sensors with linear projection properties, the so-called constant-resolution cameras. The linearity is stated between 3D distances (or angles) and pixel coordinates. We include three practical cases of linear constraints, for both standard (Cartesian pixel distribution) and log-polar cameras: constant vertical, horizontal and angular resolution. Finally, the formulation is applied to designing a camera combining some of the presented practical cases. The resulting images show that the design was successful, as the desired linear properties were obtained.
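The linearity constraint itself is easy to state and check numerically. A small sketch for the constant vertical resolution case: points on a vertical line at a fixed distance from the camera axis should project to image radii that are linear in their height. The project() function is a hypothetical stand-in for a calibrated sensor model; a real check would trace rays through the designed mirror:

```python
import numpy as np

def project(point_3d):
    """Hypothetical calibrated sensor model mapping a 3D point to a radial
    image coordinate; a stand-in with an exactly linear profile."""
    x, y, z = point_3d
    return 100.0 + 5.0 * z

# Sample a vertical line at a fixed horizontal distance D from the axis.
D = 2.0
heights = np.linspace(-1.0, 1.0, 50)
radii = np.array([project((D, 0.0, z)) for z in heights])

# Fit rho(z) = a*z + b; near-zero residuals mean constant vertical
# resolution: equal height steps map to equal pixel steps.
coeffs = np.polyfit(heights, radii, 1)
residual = np.max(np.abs(np.polyval(coeffs, heights) - radii))
print(f"slope = {coeffs[0]:.3f} px/m, max residual = {residual:.2e}")
```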


Intelligent Robots and Systems | 2005

Cooperative localization by fusing vision-based bearing measurements and motion

Luis Montesano; José António Gaspar; José Santos-Victor; Luis Montano

This paper presents a method to cooperatively localize pairs of robots by fusing bearing-only information provided by cameras with the motion of the vehicles. The algorithm uses the robots as landmarks to estimate their relative location. Bearings are the simplest measurements directly obtained from the cameras, as opposed to depths, which would require knowledge or reconstruction of the world structure. We present the general recursive Bayes estimator and three different implementations based on an extended Kalman filter, a particle filter, and a combination of both techniques. We have compared the performance of the different implementations using both simulated data and real data acquired with two platforms equipped with omnidirectional cameras.
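The extended Kalman filter variant can be sketched for the simplest setting: robot A estimates the relative planar position of robot B from observed bearings, shifting the estimate by the odometry-derived relative motion between updates. A minimal sketch only; the paper's formulation also handles relative orientation and the particle-filter implementations:

```python
import numpy as np

# State: relative position [x, y] of robot B in robot A's frame.
x = np.array([2.0, 1.0])                  # initial guess
P = np.eye(2) * 1.0                       # state covariance
R = np.array([[np.deg2rad(2.0) ** 2]])    # bearing noise (rad^2)

def predict(x, P, delta, Q):
    """Shift the relative position by the odometry-derived motion."""
    return x + delta, P + Q

def update(x, P, z_bearing):
    """Fuse one bearing measurement z = atan2(y, x) + noise."""
    px, py = x
    h = np.arctan2(py, px)
    r2 = px**2 + py**2
    H = np.array([[-py / r2, px / r2]])            # Jacobian of atan2(y, x)
    innov = np.array([(z_bearing - h + np.pi) % (2 * np.pi) - np.pi])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    return x + K @ innov, (np.eye(2) - K @ H) @ P

x, P = predict(x, P, delta=np.array([-0.1, 0.0]), Q=np.eye(2) * 0.01)
x, P = update(x, P, z_bearing=0.5)
```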


Intelligent Robots and Systems | 2009

ISROBOTNET: A testbed for sensor and robot network systems

Marco Barbosa; Alexandre Bernardino; Dario Figueira; José António Gaspar; Nelson Gonçalves; Pedro U. Lima; Plinio Moreno; Abdolkarim Pahliani; José Santos-Victor; Matthijs T. J. Spaan; João Sequeira

This paper introduces a testbed for sensor and robot network systems, currently composed of 10 cameras and 5 mobile wheeled robots equipped with sensors for self-localization and obstacle avoidance, vision cameras, and wireless communications. The testbed includes a service-oriented middleware to enable fast prototyping and implementation of algorithms previously tested in simulation, as well as to simplify the integration of subsystems developed by different partners. We survey an integrated approach to human-robot interaction that has been developed on the testbed under a European research project. The application integrates innovative methods and algorithms for people tracking and waving detection, cooperative perception among static and mobile cameras to improve people-tracking accuracy, and decision-theoretic approaches to sensor selection and task allocation within the sensor network.


Robotics and Autonomous Systems | 2010

Tracking objects with generic calibrated sensors: An algorithm based on color and 3D shape features

Matteo Taiana; João Santos; José António Gaspar; Jacinto C. Nascimento; Alexandre Bernardino; Pedro U. Lima

We present a color- and shape-based 3D tracking system suited to a large class of vision sensors. The method is applicable, in principle, to any known calibrated projection model. The tracking architecture is based on particle filtering methods where each particle represents the 3D state of the object, rather than its state in the image, therefore overcoming the nonlinearity caused by the projection model. This allows the use of realistic 3D motion models and easy incorporation of self-motion measurements. All nonlinearities are concentrated in the observation model, so that each particle projects a few tens of special points onto the image, on (and around) the 3D object's surface. The likelihood of each state is then evaluated by comparing the color distributions inside and outside the object's occluding contour. Since only pixel-access operations are required, the method does not need image processing routines like edge/feature extraction, color segmentation or 3D reconstruction, which can be sensitive to the motion blur and optical distortions typical in applications of omnidirectional sensors to robotics. We show tracking applications considering different objects (balls, boxes), several projection models (catadioptric, dioptric, perspective) and several challenging scenarios (clutter, occlusion, illumination changes, motion and optical blur). We compare our methodology against a state-of-the-art alternative, both on realistic tracking sequences and on data with generated ground truth.
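The heart of the observation model, scoring a pose hypothesis by how different the colors are inside versus outside the projected contour, can be sketched briefly. The histogram granularity is an assumption, and obtaining the projected sample points requires the calibrated projection model of the specific sensor:

```python
import numpy as np

def color_histogram(image, points, bins=8):
    """Coarse RGB histogram of the pixels at the given (u, v) locations."""
    pts = points[(points[:, 0] >= 0) & (points[:, 1] >= 0)
                 & (points[:, 0] < image.shape[1])
                 & (points[:, 1] < image.shape[0])]
    colors = image[pts[:, 1].astype(int),
                   pts[:, 0].astype(int)].astype(np.int64) // (256 // bins)
    idx = colors[:, 0] * bins * bins + colors[:, 1] * bins + colors[:, 2]
    hist = np.bincount(idx, minlength=bins ** 3).astype(float)
    return hist / max(hist.sum(), 1.0)

def likelihood(image, inner_pts, outer_pts):
    """High when colors inside and outside the projected contour differ,
    which is the cue the paper's observation model is built on."""
    h_in = color_histogram(image, inner_pts)
    h_out = color_histogram(image, outer_pts)
    return 1.0 - np.sum(np.sqrt(h_in * h_out))   # 1 - Bhattacharyya coeff.

# inner_pts/outer_pts would come from projecting sample points on (and
# around) the object's 3D surface through the calibrated sensor model;
# they are hypothetical inputs here.
```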


Computer Vision and Image Understanding | 2010

Discrete camera calibration from pixel streams

Etienne Grossmann; José António Gaspar; Francesco Orabona

We consider the problem of estimating the relative orientation of a number of individual photocells - or pixels - that hold fixed relative positions. The photocells measure the intensity of light traveling on a pencil of lines. We assume that the light field thus sampled is changing, e.g., as the result of motion of the sensors, and use the obtained measurements to estimate the orientations of the photocells. Our approach is based on correlation and information-theoretic dissimilarity measures. Experiments with real-world data show that the dissimilarity measures are strongly related to the angular separation between the photocells, and that the relation can be modeled quantitatively. In particular, we show that this model allows the angular separation to be estimated from the dissimilarity. Although the resulting estimators are not very accurate, they maintain their performance throughout different visual environments, suggesting that the model encodes a very general property of our visual world. Finally, leveraging this method to estimate angles from signal pairs, we show how distance geometry techniques allow the complete sensor geometry to be recovered.
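A compact sketch of the pipeline under stated assumptions: correlation between pixel streams gives a dissimilarity matrix; a stand-in affine model (the paper fits this relation from data) converts dissimilarity to angular separation; and classical multidimensional scaling on the chord distances recovers directions on the unit sphere up to a global rotation:

```python
import numpy as np

# T intensity samples for each of N rigidly mounted photocells, recorded
# while the sensor moves through a scene (hypothetical data file).
signals = np.load("pixel_streams.npy")        # shape (N, T)
N = signals.shape[0]

# Correlation-based dissimilarity between every pair of pixel streams.
D = 1.0 - np.corrcoef(signals)                # 0 for identical streams

# The paper fits a quantitative dissimilarity-to-angle model; a simple
# affine map stands in for that learned model here.
angles = np.clip(0.8 * D, 0.0, np.pi)

# Classical MDS on chord distances, in the spirit of the paper's
# distance-geometry step.
chord = 2.0 * np.sin(angles / 2.0)
J = np.eye(N) - np.ones((N, N)) / N
B = -0.5 * J @ (chord ** 2) @ J               # double-centered Gram matrix
w, V = np.linalg.eigh(B)
dirs = V[:, -3:] * np.sqrt(np.maximum(w[-3:], 0.0))
dirs /= np.linalg.norm(dirs, axis=1, keepdims=True)  # directions on sphere
```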


Intelligent Robots and Systems | 2009

Calibrating an outdoor distributed camera network using Laser Range Finder data

Agustin Ortega; Bruno Dias; Ernesto H. Teniente; Alexandre Bernardino; José António Gaspar; Juan Andrade-Cetto

Outdoor camera networks are becoming ubiquitous in critical urban areas of large cities around the world. Although current applications of camera networks are mostly limited to video surveillance, recent research projects are exploiting advances in outdoor robotics technology to develop systems that combine networks of cameras and mobile robots in people-assisting tasks. Such systems require robot navigation in urban areas and a precise calibration of the distributed camera network. Although camera calibration has been an extensively studied topic, the calibration (intrinsic and extrinsic) of large outdoor camera networks with non-overlapping fields of view, which are likely to require frequent recalibration, poses novel challenges in the development of practical methods for user-assisted calibration that minimize intervention times and maximize precision. In this paper we propose using laser range finder (LRF) data covering the area of the camera network to support the calibration process, and we develop a semi-automated methodology allowing quick and precise calibration of large camera networks. The proposed methods have been tested in a real urban environment and have been applied to create direct mappings (homographies) between image coordinates and world points in the ground plane (walking areas) to support person and robot detection and localization algorithms.
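The end product of the calibration, a homography from image coordinates to ground-plane coordinates, is straightforward to set up once correspondences between image points and LRF-mapped world points are available. A minimal sketch with hypothetical correspondences, using OpenCV:

```python
import cv2
import numpy as np

# Image points and their matching ground-plane coordinates taken from the
# laser map (hypothetical correspondences, in pixels and meters); four or
# more non-collinear pairs are needed.
img_pts = np.array([[102, 310], [480, 295], [260, 150], [390, 205]], np.float32)
gnd_pts = np.array([[0.0, 0.0], [5.1, 0.2], [2.0, 7.5], [4.3, 4.0]], np.float32)

H, _ = cv2.findHomography(img_pts, gnd_pts, method=0)

def pixel_to_ground(u, v):
    """Map an image detection to metric ground-plane coordinates."""
    p = H @ np.array([u, v, 1.0])
    return p[:2] / p[2]

print(pixel_to_ground(300, 280))   # e.g., a detected person's foot point
```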


British Machine Vision Conference | 2008

Sample-Based 3D Tracking of Colored Objects: A Flexible Architecture

Matteo Taiana; Jacinto C. Nascimento; José António Gaspar; Alexandre Bernardino

This paper presents a method for 3D model-based tracking of colored objects using a sampling methodology. The problem is formulated in a Monte Carlo filtering approach, whereby the state of an object is represented by a set of hypotheses. The main originality of this work is an observation model consisting of the comparison of the color information at some sampling points around the target's hypothetical edges. In contrast to existing approaches, the method does not need to explicitly compute edges in the video stream, thus dealing well with optical or motion blur. The method does not require the projection of the full 3D object onto the image, but just of some selected points around the target's boundaries. This allows a flexible and modular architecture, illustrated by experiments performed with different objects (balls and boxes), camera models (perspective, catadioptric, dioptric) and tracking methodologies (particle and Kalman filtering).
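The edge-free observation model can be sketched as direct pixel comparisons across each hypothesized boundary point: sample a pixel just inside and just outside along the contour normal and reward color contrast. The helper producing the projected contour points and normals is hypothetical; it would use whichever calibrated camera model is in play:

```python
import numpy as np

def edge_score(image, contour_pts, normals, offset=3):
    """Score a pose hypothesis by the color contrast across the hypothesized
    edge, sampled at a few points only (no explicit edge detection)."""
    h, w = image.shape[:2]
    score = 0.0
    for (u, v), (nx, ny) in zip(contour_pts, normals):
        ui, vi = int(round(u - offset * nx)), int(round(v - offset * ny))
        uo, vo = int(round(u + offset * nx)), int(round(v + offset * ny))
        if not (0 <= ui < w and 0 <= uo < w and 0 <= vi < h and 0 <= vo < h):
            continue                      # skip samples that fall off-image
        inside = image[vi, ui].astype(float)
        outside = image[vo, uo].astype(float)
        score += np.linalg.norm(inside - outside)
    return score / max(len(contour_pts), 1)

# contour_pts and normals would come from projecting selected 3D boundary
# points of the model through the chosen camera model (perspective,
# catadioptric or dioptric); they are hypothetical inputs here.
```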


Intelligent Robots and Systems | 2012

Online calibration of a humanoid robot head from relative encoders, IMU readings and visual data

Nuno Moutinho; Martim Brandao; Ricardo Ferreira; José António Gaspar; Alexandre Bernardino; Atsuo Takanishi; José Santos-Victor

Humanoid robots are complex sensorimotor systems where the existence of internal models is of utmost importance, both for control purposes and for predicting the changes in the world arising from the system's own actions. This so-called expected perception relies on the existence of accurate internal models of the robot's sensorimotor chains.

Collaboration


Dive into José António Gaspar's collaborations.

Top Co-Authors

Ricardo Ferreira, Instituto Superior Técnico
Ricardo Galego, Technical University of Lisbon
Nuno Moutinho, Technical University of Lisbon
Pedro U. Lima, Instituto Superior Técnico