Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Gabriel J. Garcia is active.

Publication


Featured research published by Gabriel J. Garcia.


Sensors | 2014

A survey on FPGA-based sensor systems: towards intelligent and reconfigurable low-power sensors for computer vision, control and signal processing.

Gabriel J. Garcia; Carlos A. Jara; Jorge Pomares; Aiman Alabdo; Lucas M. Poggi; Fernando Torres

The current trend in the evolution of sensor systems seeks ways to provide more accuracy and resolution while decreasing size and power consumption. Field Programmable Gate Arrays (FPGAs) provide reprogrammable hardware technology that can be properly exploited to obtain a reconfigurable sensor system. This adaptation capability enables the implementation of complex applications using partial reconfiguration at very low power consumption. For highly demanding tasks, FPGAs have been favored due to the high efficiency provided by their architectural flexibility (parallelism, on-chip memory, etc.), reconfigurability and superb performance in the development of algorithms. FPGAs have improved the performance of sensor systems and have triggered a clear increase in their use in new fields of application. A new generation of smarter, reconfigurable and lower-power sensors is being developed in Spain based on FPGAs. In this paper, a review of these developments is presented, describing the FPGA technologies employed by the different research groups and providing an overview of future research in this field.


Sensors | 2009

Survey of visual and force/tactile control of robots for physical interaction in Spain.

Gabriel J. Garcia; Juan A. Corrales; Jorge Pomares; Fernando Torres

Sensors provide robotic systems with the information required to perceive the changes that happen in unstructured environments and to modify their actions accordingly. The robotic controllers which process and analyze this sensory information are usually based on three types of sensors (visual, force/torque and tactile), which correspond to the most widespread robotic control strategies: visual servoing control, force control and tactile control. This paper presents a detailed review of the sensor architectures, algorithmic techniques and applications developed by Spanish researchers to implement these mono-sensor and multi-sensor controllers.
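
As a concrete illustration of the first strategy, the classical image-based visual servoing law drives the camera with a velocity proportional to the image-feature error through the pseudo-inverse of the interaction matrix. The following is a minimal sketch of that textbook formulation, not taken from any of the surveyed systems; the point-feature interaction matrix assumes normalized image coordinates and a known depth Z:

```python
import numpy as np

def ibvs_velocity(L, s, s_star, lam=0.5):
    """Classic image-based visual servoing law: v = -lambda * L^+ * (s - s*).

    L      : (2k x 6) image interaction matrix for k point features
    s      : current image features, shape (2k,)
    s_star : desired image features, shape (2k,)
    Returns the 6-DOF camera velocity screw (vx, vy, vz, wx, wy, wz).
    """
    e = s - s_star                       # image-space error
    return -lam * np.linalg.pinv(L) @ e  # Moore-Penrose pseudo-inverse

# Toy example: one point feature at normalized coordinates (x, y), depth Z = 1
x, y, Z = 0.1, -0.2, 1.0
L = np.array([[-1/Z,  0,    x/Z,  x*y,      -(1 + x**2),  y],
              [ 0,   -1/Z,  y/Z,  1 + y**2, -x*y,        -x]])
v = ibvs_velocity(L, np.array([x, y]), np.array([0.0, 0.0]))
```

The law makes the image error decay exponentially when the interaction matrix is exact: the induced feature velocity L @ v points opposite to the error.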


Journal of Intelligent and Robotic Systems | 2007

A Robust Approach to Control Robot Manipulators by Fusing Visual and Force Information

Jorge Pomares; Gabriel J. Garcia; Fernando Torres

In this paper, a method to combine visual and force information using the information obtained from a movement flow-based visual servoing system is proposed. This method allows not only achieving a given desired position but also specifying the trajectory that the robot follows from its initial position to its final one in 3D space. The paper also extends the visual servoing system to increase robustness when errors appear in the camera calibration parameters. Experiments using an eye-in-hand robotic system demonstrate correct behaviour even when significant errors exist in the camera intrinsic parameters. After describing this strategy, we then describe its application to an insertion task performed by the robotic system, in which the joint use of visual and force information is required. To combine both sensorial systems, a position-based impedance-control system is implemented, which modifies the trajectory generated by the visual system depending on the robot's interaction with its environment. This modification is performed without knowledge of the exact camera calibration parameters. Furthermore, the visual-force approach based on impedance control does not require previous knowledge of the contact geometry.
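
The position-based impedance idea admits a compact sketch: a virtual mass-damper-spring system converts measured contact forces into a position offset that deflects the visually generated reference trajectory. The 1-DOF illustration below uses illustrative gains and is not the paper's actual controller:

```python
import numpy as np

def impedance_offset(f_ext, dt, state, M=1.0, B=20.0, K=100.0):
    """One semi-implicit Euler step of a 1-DOF position-based impedance model
    M*x'' + B*x' + K*x = f_ext.

    The offset x is added to the visually generated reference position, so
    contact forces deflect the trajectory instead of being fought by the robot.
    Gains M, B, K are illustrative (here critically damped).
    """
    x, xd = state
    xdd = (f_ext - B * xd - K * x) / M  # virtual dynamics
    xd += xdd * dt                      # update velocity first (stable)
    x += xd * dt                        # then position
    return x, xd

# A constant 5 N contact force deflects the reference toward f/K = 0.05 m
state = (0.0, 0.0)
for _ in range(5000):                   # 5 s at dt = 1 ms
    state = impedance_offset(5.0, 1e-3, state)
```

Choosing K sets the steady-state compliance (offset = force / K), while B and M shape the transient; the critically damped choice above avoids oscillation around the contact.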


Emerging Technologies and Factory Automation | 2007

A new time-independent image path tracker to guide robots using visual servoing

Gabriel J. Garcia; Jorge Pomares; Fernando Torres

In this paper, a new method to track image trajectories by visual servoing is proposed. This method solves the problems of previously proposed time-independent tracking systems based on visual servoing. With the proposed method, the robot can track a previously generated trajectory, achieving correct tracking not only in the image but also in 3D space. This new method presents several improvements over previous ones, such as the possibility of specifying the desired tracking velocity, less oscillatory behavior, and correct tracking in 3D space when high velocities are used. To demonstrate the correct behavior of the visual servoing system, an eye-in-hand camera system is used.


International Journal of Advanced Robotic Systems | 2011

Direct Visual Servoing to Track Trajectories in Human-Robot Cooperation

Jorge Pomares; Juan A. Corrales; Gabriel J. Garcia; Fernando Torres

This paper describes a dynamic image-based control system to guide two coupled robots. The first robot is a Mitsubishi PA-10 robotic manipulator which has a second mini-robot with 3 degrees of freedom (DOF) attached at its end-effector. The vision system used for the guidance of both robots is composed of a camera at the end-effector of the mini-robot. The paper presents a new method to guide the mini-robot using dynamic control to track a previously generated image trajectory. The mini-robot performs the tracking in a workspace shared with a human operator. Therefore, the proposed visual control is combined with virtual visual servoing to guarantee safe behavior.


The International Journal of Advanced Manufacturing Technology | 2009

A cooperative robotic system based on multiple sensors to construct metallic structures.

Pablo Gil; Jorge Pomares; Santiago T. Puente; Francisco A. Candelas; Gabriel J. Garcia; Juan A. Corrales; Fernando Torres

This paper describes a multisensorial robotic system to automatically construct metallic structures. Two robots must work cooperatively in the same workspace to perform the task. The robots are automatically guided using visual and force sensor information. A new time-independent visual-force control system which guarantees adequate robot behaviour during the construction of the structure is described. During the construction, a human operator works cooperatively with the robots to perform tasks which cannot be carried out automatically by the robots. To do so, a new human-robot cooperation approach is described in order to guarantee human safety. The correct behaviour of the different subsystems proposed in the paper is demonstrated in Section 6 by the construction of a real structure composed of several metallic tubes and different types of pieces to join them.


Image and Vision Computing | 2008

Improving detection of surface discontinuities in visual-force control systems

Jorge Pomares; Pablo Gil; Gabriel J. Garcia; José M. Sebastián; Fernando Torres

In this paper, a new approach to detecting surface discontinuities in a visual-force control task is described. A task which consists of tracking a surface using visual-force information is presented. In this task, the surface discontinuities must be determined in order to reposition the robot tool with respect to the surface. This paper describes a new method to detect surface discontinuities employing sensorial information obtained from a force sensor, a camera and structured light. This method has proved to be more robust than previous systems, even in situations where high friction occurs.


IFAC Proceedings Volumes | 2007

Robot Guidance by Estimating the Force-Image Interaction Matrix

Gabriel J. Garcia; Jorge Pomares; Fernando Torres

This paper describes an uncalibrated visual-force control system which does not require any kinematic calibration to perform the task. An important aspect of these kinds of control systems is the need to maintain coherence between the control actions obtained from each sensorial system. To do so, the paper proposes to modify the image trajectory using the information obtained from the force sensor, through the concept of a force-image interaction matrix. This matrix relates changes in the image space to changes in the interaction forces. To estimate the value of this matrix, this paper suggests the use of a Gauss-Newton method.
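
The estimation step can be illustrated with a least-squares fit: for a linear residual like this one, the Gauss-Newton iteration converges in a single step, which is the normal-equation solution. A minimal sketch with synthetic data follows; the dimensions and names are illustrative and not taken from the paper:

```python
import numpy as np

def estimate_force_image_matrix(dF, dS):
    """Least-squares fit of a matrix J such that dS ≈ J @ dF.

    dF : (n, m) stacked force increments, one sample per row
    dS : (n, p) corresponding image-feature increments
    For this linear residual, Gauss-Newton reduces to the single
    least-squares solve below. Returns J with shape (p, m).
    """
    # Solve dF @ J.T ≈ dS for J.T in the least-squares sense, then transpose
    Jt, *_ = np.linalg.lstsq(dF, dS, rcond=None)
    return Jt.T

# Synthetic check: recover a known 2x3 matrix from noise-free samples
rng = np.random.default_rng(0)
J_true = np.array([[0.5, -0.1,  0.0],
                   [0.2,  0.3, -0.4]])
dF = rng.normal(size=(50, 3))   # 50 simulated force increments
dS = dF @ J_true.T              # corresponding image increments
J_est = estimate_force_image_matrix(dF, dS)
```

With noisy measurements the same solve gives the maximum-likelihood estimate under Gaussian noise; online use would typically update the estimate recursively rather than refit from scratch.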


Frontiers in Education Conference | 2014

Computer networks virtualization with GNS3: Evaluating a solution to optimize resources and achieve a distance learning

Pablo Gil; Gabriel J. Garcia; A. Delgado; Rosa M. Medina; Antonio Calderon; Patricia Marti

Designing educational resources allows students to modify their learning process. In particular, online and downloadable educational resources have been successfully used in engineering education in recent years [1]. Usually, these resources are free and accessible from the web. In addition, they are designed and developed by lecturers and used by their students, but they are rarely developed by students for use by other students. In this work-in-progress, lecturers and students are working together to implement educational resources which can be used by students to improve the learning process of the computer networks subject in engineering studies. In particular, network topologies modelling LANs (Local Area Networks) and MANs (Metropolitan Area Networks) are virtualized in order to simulate the behavior of links and nodes when they are interconnected with different physical and logical designs.


Sensors | 2011

A Multi-Sensorial Hybrid Control for Robotic Manipulation in Human-Robot Workspaces

Jorge Pomares; Iván Perea; Gabriel J. Garcia; Carlos A. Jara; Juan A. Corrales; Fernando Torres

Autonomous manipulation in semi-structured environments where human operators can interact is an increasingly common task in robotic applications. This paper describes an intelligent multi-sensorial approach that solves this issue by providing a multi-robotic platform with a high degree of autonomy and the capability to perform complex tasks. The proposed sensorial system is composed of a hybrid visual servo control to efficiently guide the robot towards the object to be manipulated, an inertial motion capture system and an indoor localization system to avoid possible collisions between human operators and robots working in the same workspace, and a tactile sensor algorithm to correctly manipulate the object. The proposed controller employs the whole multi-sensorial system and combines the measurements of each of the sensors used during two different phases of the robot task: a first phase in which the robot approaches the object to be grasped, and a second phase of manipulation of the object. In both phases, the unexpected presence of humans is taken into account. This paper also presents the successful results obtained in several experimental setups, which verify the validity of the proposed approach.

Collaboration


Dive into Gabriel J. Garcia's collaboration.

Top Co-Authors


Pablo Gil

University of Alicante
