
Publication


Featured research published by Gonzalo López-Nicolás.


IEEE Transactions on Systems, Man, and Cybernetics, Part B | 2010

Homography-Based Control Scheme for Mobile Robots With Nonholonomic and Field-of-View Constraints

Gonzalo López-Nicolás; Nicholas R. Gans; Sourabh Bhattacharya; Carlos Sagüés; Josechu J. Guerrero; Seth Hutchinson

In this paper, we present a visual servo controller that effects optimal paths for a nonholonomic differential drive robot with field-of-view constraints imposed by the vision system. The control scheme relies on the computation of homographies between current and goal images, but unlike previous homography-based methods, it does not use the homography to compute estimates of pose parameters. Instead, the control laws are directly expressed in terms of individual entries in the homography matrix. In particular, we develop individual control laws for the three path classes that define the language of optimal paths: rotations, straight-line segments, and logarithmic spirals. These control laws, as well as the switching conditions that define how to sequence path segments, are defined in terms of the entries of homography matrices. The selection of the corresponding control law requires the homography decomposition before starting the navigation. We provide a controllability and stability analysis for our system and give experimental results.
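The abstract above builds its control laws from individual entries of the homography matrix rather than from a recovered pose. As a rough illustration of why those entries carry pose information, the sketch below evaluates the standard Euclidean homography H = R + t·nᵀ/d induced by a plane, for a planar robot motion. This is a minimal sketch of the underlying geometry only; the chosen plane normal, distance, and the entries singled out are illustrative assumptions, not the paper's switching controller.

```python
import math

def planar_homography(theta, tx, tz, d=1.0):
    """Euclidean homography H = R + t*n^T / d induced by a plane with
    normal n = (0, 0, 1) at distance d, for a robot that rotates by
    theta about the vertical (y) axis and translates by (tx, 0, tz)."""
    R = [[math.cos(theta), 0.0, math.sin(theta)],
         [0.0,             1.0, 0.0],
         [-math.sin(theta), 0.0, math.cos(theta)]]
    t = [tx, 0.0, tz]
    n = [0.0, 0.0, 1.0]
    return [[R[i][j] + t[i] * n[j] / d for j in range(3)] for i in range(3)]

# At the goal pose (no rotation, no translation) H is the identity, so a
# controller can regulate selected entries of H toward 0 or 1 directly.
H_goal = planar_homography(0.0, 0.0, 0.0)
H_away = planar_homography(0.3, 0.5, -0.2)
# h13 mixes rotation and lateral translation; h31 depends only on rotation.
h13, h31 = H_away[0][2], H_away[2][0]
```

A controller in this spirit drives entries such as h13 and h31 to their goal values (0 here) without ever decomposing H into R and t; the actual laws for the three optimal path classes (rotations, straight lines, logarithmic spirals) are given in the paper.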


IEEE Transactions on Robotics | 2011

A Sliding-Mode-Control Law for Mobile Robots Based on Epipolar Visual Servoing From Three Views

Hector M. Becerra; Gonzalo López-Nicolás; C. Sagüés

Driving mobile robots to precise locations is of recognized interest, and using vision sensors in this context provides many advantages. We propose a novel control law based on sliding-mode theory to drive mobile robots to a target location, which is specified by a previously acquired reference image. The control scheme exploits the piecewise epipolar geometry of three views within an image-based visual servoing framework, so that no 3-D scene information is required. The contribution of the paper is a new control law that achieves convergence to the target with no auxiliary images and without switching to any approach other than epipolar-based control. Additionally, the use of sliding-mode control handles singularities, allowing the robot to move directly toward the target and avoiding the need for precise camera calibration. The effectiveness of our approach is tested with simulations and real-world experiments.
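Sliding-mode control, used above to cope with singularities and calibration errors, works by driving a sliding variable s to zero with a switching term that dominates any bounded disturbance. The fragment below is a generic one-dimensional reaching-law simulation, not the paper's epipolar controller; the gain, disturbance level, and discretization step are illustrative assumptions.

```python
def sgn(x):
    """Sign function used by the switching control term."""
    return 1.0 if x > 0 else (-1.0 if x < 0 else 0.0)

def simulate_sliding_mode(s0, k=1.0, disturbance=0.3, dt=0.01, steps=600):
    """Integrate s' = d - k*sgn(s): as long as the bounded disturbance
    satisfies |d| < k, the switching term -k*sgn(s) forces s into a
    small neighborhood of the sliding surface s = 0."""
    s = s0
    for _ in range(steps):
        s += dt * (disturbance - k * sgn(s))
    return s

# Starting far from the surface, the state is driven to (and held near) s = 0
# despite the constant unmodeled disturbance.
s_final = simulate_sliding_mode(2.0)
```

The small chattering band around s = 0 seen in discrete simulation is a well-known property of sliding-mode control; its width shrinks with the integration step.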


Robotics and Autonomous Systems | 2008

Switching visual control based on epipoles for mobile robots

Gonzalo López-Nicolás; Carlos Sagüés; José Jesús Guerrero; Danica Kragic; Patric Jensfelt

In this paper, we present a visual control approach consisting of a switching control scheme based on epipolar geometry. The method follows a classical teach-by-showing approach in which a reference image is used to drive the robot to the desired pose (position and orientation). With the proposed scheme, the mobile robot carries out a smooth trajectory toward the target, and the epipolar geometry model is used throughout the motion. The control scheme considers the motion constraints of the mobile platform within an epipolar-geometry framework that relies on neither artificial markers nor specific models of the environment. The proposed method is designed to cope with the degenerate estimation of the epipolar geometry at short baseline. Experimental evaluation has been performed in realistic indoor and outdoor settings.


Robotics and Autonomous Systems | 2010

Omnidirectional visual control of mobile robots based on the 1D trifocal tensor

Hector M. Becerra; Gonzalo López-Nicolás; Carlos Sagüés

The precise positioning of robotic systems is of great interest, particularly for mobile robots. In this context, the use of omnidirectional vision provides many advantages thanks to its wide field of view. This paper presents an image-based visual control scheme that drives a mobile robot to a desired location, which is specified by a previously acquired target image. It exploits the properties of omnidirectional images to preserve bearing information by using a 1D trifocal tensor. The main contribution of the paper is that the elements of the tensor are introduced directly into the control law, so that neither a priori knowledge of the scene nor any auxiliary image is required. Our approach can be applied with any visual sensor that approximately obeys a central projection model, is robust to image noise, and avoids the short-baseline problem by exploiting the information of three views. A sliding-mode control law in a square system ensures stability and robustness of the closed loop. The good performance of the control system is demonstrated via simulations and real-world experiments with a hypercatadioptric imaging system.


Robotics and Autonomous Systems | 2010

Visual control through the trifocal tensor for nonholonomic robots

Gonzalo López-Nicolás; José Jesús Guerrero; Carlos Sagüés

We present a new vision-based control approach that autonomously drives a nonholonomic vehicle to a target location. The vision system is a camera fixed on the vehicle, and the target location is defined by an image previously taken at that location. The control scheme is based on the trifocal tensor model, which is computed from feature correspondences in the calibrated retina across three views: the initial, current, and target images. The contribution is a trifocal-based control law defined by an exact input-output linearization of the trifocal tensor model. The desired evolution of the system toward the target is defined directly in terms of the trifocal tensor elements by means of sinusoidal functions, without needing metric or additional information about the environment. The trifocal tensor presents important advantages for visual control purposes: it is more robust than two-view geometry because it includes the information of a third view, and, contrary to epipolar geometry, a short baseline is not a problem. Simulations show the performance of the approach, which has been tested with image noise and calibration errors.


International Conference on Robotics and Automation (ICRA) | 2006

Nonholonomic epipolar visual servoing

Gonzalo López-Nicolás; Carlos Sagüés; José Jesús Guerrero; Danica Kragic; Patric Jensfelt

A significant amount of work has been reported in the area of visual servoing during the last decade. However, most of the contributions apply to holonomic robots. More recently, the use of visual feedback for control of nonholonomic vehicles has been reported; examples include docking and parallel-parking maneuvers of cars, and vision-based stabilization of a mobile manipulator to a desired pose with respect to a target of interest. Still, many of these approaches focus on the control part of the visual servoing loop while considering very simple vision algorithms based on artificial markers. In this paper, we present an approach for nonholonomic visual servoing based on epipolar geometry. The method follows a classical teach-by-showing approach in which a reference image is used to define the desired pose (position and orientation) of the robot. The major contribution of the paper is the design of a control law that considers the nonholonomic constraints of the robot, together with a robust feature detection and matching process based on scale- and rotation-invariant image features. An extensive experimental evaluation has been performed in a realistic indoor setting, and the results are summarized in the paper.


IEEE Systems Journal | 2016

Navigation Assistance for the Visually Impaired Using RGB-D Sensor With Range Expansion

Aitor Aladren; Gonzalo López-Nicolás; Luis Puig; Josechu J. Guerrero

Navigation assistance for visually impaired (NAVI) refers to systems that are able to assist or guide people with vision loss, ranging from partially sighted to totally blind, by means of sound commands. In this paper, a new system for NAVI is presented based on visual and range information. Instead of using several sensors, we choose a single device, a consumer RGB-D camera, and take advantage of both range and visual information. In particular, the main contribution is the combination of depth information with image intensities, resulting in the robust expansion of the range-based floor segmentation. On the one hand, depth information, which is reliable but limited to a short range, is enhanced with the long-range visual information. On the other hand, the difficult and error-prone image processing is eased and improved with depth information. The proposed system detects and classifies the main structural elements of the scene, providing the user with obstacle-free paths in order to navigate safely across unknown scenarios. The system has been tested on a wide variety of scenarios and data sets, giving successful results and showing that it is robust and works in challenging indoor environments.
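The core idea described above — seeding a floor mask from reliable short-range depth and growing it into the long-range region using image intensities — can be caricatured as a flood fill. The toy sketch below runs on a small plain-Python grid; the tolerance, 4-connectivity, and data layout are illustrative assumptions, not the paper's segmentation pipeline.

```python
from collections import deque

def expand_floor(intensity, depth_floor, tol=10):
    """Grow a depth-based floor mask into regions where depth is
    unavailable, accepting a neighbor whose intensity is close to an
    already-accepted floor pixel (4-connected flood fill)."""
    rows, cols = len(intensity), len(intensity[0])
    mask = [[bool(depth_floor[r][c]) for c in range(cols)] for r in range(rows)]
    queue = deque((r, c) for r in range(rows) for c in range(cols) if mask[r][c])
    while queue:
        r, c = queue.popleft()
        for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
            nr, nc = r + dr, c + dc
            if (0 <= nr < rows and 0 <= nc < cols and not mask[nr][nc]
                    and abs(intensity[nr][nc] - intensity[r][c]) <= tol):
                mask[nr][nc] = True
                queue.append((nr, nc))
    return mask

# The bottom-left pixel is floor according to depth; the bright 200-valued
# block is an obstacle that the intensity test refuses to absorb.
intensity = [[100, 100, 200, 200],
             [100, 100, 200, 200],
             [100, 100, 100, 100]]
depth_floor = [[0, 0, 0, 0],
               [0, 0, 0, 0],
               [1, 0, 0, 0]]
mask = expand_floor(intensity, depth_floor)
```

The single depth seed spreads across all similar-intensity floor pixels while the bright obstacle block stays excluded, mirroring how short-range depth labels can be extended through long-range image data.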


International Conference on Robotics and Automation (ICRA) | 2007

Switched Homography-Based Visual Control of Differential Drive Vehicles with Field-of-View Constraints

Gonzalo López-Nicolás; Sourabh Bhattacharya; José Jesús Guerrero; Carlos Sagüés; Seth Hutchinson

This paper presents a switched homography-based visual control for differential drive vehicles. The goal is defined by an image taken at the desired position, which is the only previous information needed from the scene. The control takes into account the field-of-view constraints of the vision system through the specific design of the paths with optimality criteria. The optimal paths consist of straight lines and curves that saturate the sensor viewing angle. We present the controls that move the robot along these paths based on the convergence of the elements of the homography matrix. Our contribution is the design of the switched homography-based control, following optimal paths guaranteeing the visibility of the target.


International Conference on Robotics and Automation (ICRA) | 2007

Homography-Based Visual Control of Nonholonomic Vehicles

Gonzalo López-Nicolás; Carlos Sagüés; José Jesús Guerrero

This paper presents a new visual control approach based on homography. The method is intended for nonholonomic vehicles with a fixed monocular system on board. The idea of visual control used here is the usual approach where the desired position of the robot is given by a target image taken at that position. This target image is the only previous information needed by the control law to perform the navigation from the initial position to the target. The control law is designed by the input-output linearization of the system using elements of the homography as output. The contribution is a controller that deals with the nonholonomic constraints of the mobile platform needing neither decomposition of the homography nor depth estimation to the target.


IEEE Transactions on Systems, Man, and Cybernetics, Part B | 2012

Visual Control for Multirobot Organized Rendezvous

Gonzalo López-Nicolás; Miguel Aranda; Youcef Mezouar; Carlos Sagüés

This paper addresses the problem of visual control of a set of mobile robots. In our framework, the perception system consists of an uncalibrated flying camera performing an unknown general motion. The robots are assumed to undergo planar motion under nonholonomic constraints. The goal of the control task is to drive the multirobot system to a desired rendezvous configuration relying solely on the visual information given by the flying camera. The desired multirobot configuration is defined by an image of the set of robots in that configuration, without any additional information. We propose a homography-based framework that relies on the homography induced by the multirobot system to obtain a desired homography defining the reference target, together with a new image-based control law that drives the robots to the desired configuration by imposing a rigidity constraint. This paper extends our previous work; the main contributions are that the motion constraints on the flying camera are removed, the control law is improved by reducing the number of required steps, the stability of the new control law is proved, and real experiments are provided to validate the proposal.

Collaboration


Dive into Gonzalo López-Nicolás's collaborations.

Top Co-Authors


Youcef Mezouar

Centre national de la recherche scientifique


D. Paesa

University of Zaragoza
