Raquel Frizera Vassallo
Universidade Federal do Espírito Santo
Publications
Featured research published by Raquel Frizera Vassallo.
Proceedings of the IEEE Workshop on Omnidirectional Vision 2002. Held in conjunction with ECCV'02 | 2002
Raquel Frizera Vassallo; José Santos-Victor; Hans Jörg Andreas Schneebeli
Computing a camera's ego-motion from an image sequence is easier when a spherical retina is used instead of a standard retinal plane. On a spherical field of view both the focus of expansion and the focus of contraction are visible, whereas for a planar retina that is not necessarily the case. Recent research has shown that omnidirectional systems can emulate spherical retinas by mapping image velocity vectors from the omnidirectional image to the spherical retina. This has been done using the Jacobian of the transformation between the image formation model and the spherical coordinate system. As a consequence, the Jacobian matrix must be derived for each specific omnidirectional camera, to account for the different mirror shapes. Instead, in this paper we derive the Jacobian matrix from a general projection model that can describe all single-projection-center cameras through suitable parameterization. Hence, both the back-projection of an image point to the unit sphere and the mapping of velocities through the transformation Jacobian remain general for all cameras with a single center of projection. We have conducted a series of experimental tests that illustrate the validity of our approach and led to encouraging results.
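The back-projection step mentioned above can be sketched with the unified single-viewpoint model; this is a minimal illustration (the mirror parameter xi and the round-trip check are our own choices for the sketch, not taken from the paper):

```python
import numpy as np

def back_project_to_sphere(x, y, xi):
    """Lift a normalized image point (x, y) to the unit sphere under the
    unified single-viewpoint projection model with mirror parameter xi.
    xi = 0 gives a perspective camera, xi = 1 a parabolic catadioptric one."""
    r2 = x * x + y * y
    eta = (xi + np.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    return np.array([eta * x, eta * y, eta - xi])

def project_from_sphere(p, xi):
    """Forward projection of a unit-sphere point p to normalized image coords."""
    return np.array([p[0] / (p[2] + xi), p[1] / (p[2] + xi)])

# round trip: sphere -> image -> sphere
xi = 0.8
p = np.array([0.3, -0.4, np.sqrt(1 - 0.3**2 - 0.4**2)])  # a point on the unit sphere
x, y = project_from_sphere(p, xi)
p_back = back_project_to_sphere(x, y, xi)
print(np.allclose(p, p_back))  # True
```

Because the lifted point always lands on the unit sphere, velocity mapping through the Jacobian of this transformation stays valid for any single-viewpoint camera, which is the generality the abstract refers to.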
Robotics and Autonomous Systems | 2000
Raquel Frizera Vassallo; Hans Jörg Andreas Schneebeli; José Santos-Victor
Abstract We address the problem of vision-based navigation of a mobile robot in indoor environments. The robot control system is based on a single camera that provides the required visual feedback. The control strategy merges two distinct paradigms that appeared recently in the technical literature, in order to provide the robustness and computation speed needed for closed-loop control. On one hand, we servo on the vanishing point defined by the intersection of the corridor guidelines. This mode is used for heading control and ensures that the vehicle moves along corridors. On the other hand, we use appearance-based processes to monitor the robot position along the path and to launch different navigation tasks (e.g. turn left, enter door, etc.). The combination of visual servoing techniques, which provide stable control loops for specific local tasks, and appearance-based methods, which embed a representation of the environment at a larger scale, results in extended autonomy even with modest computational resources. Preliminary tests have shown encouraging results, as discussed in the paper.
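As a rough sketch of the vanishing-point servoing idea: the corridor guidelines can be intersected in homogeneous coordinates, and a proportional heading command derived from the horizontal offset of the vanishing point. The gain k and the line representation here are hypothetical, not the paper's actual controller:

```python
import numpy as np

def vanishing_point(l1, l2):
    """Intersection of two image lines in homogeneous form (a, b, c),
    i.e. a*u + b*v + c = 0; the corridor guidelines meet at the vanishing point."""
    vp = np.cross(l1, l2)
    return vp[:2] / vp[2]

def heading_command(vp_u, image_center_u, k=0.005):
    """P-control on the horizontal vanishing-point offset: the (made-up) gain k
    maps a pixel error to an angular-velocity command in rad/s."""
    return -k * (vp_u - image_center_u)

# two guidelines constructed to meet at pixel (320, 200)
l_left  = np.cross([320.0, 200.0, 1.0], [0.0, 480.0, 1.0])
l_right = np.cross([320.0, 200.0, 1.0], [640.0, 480.0, 1.0])
u, v = vanishing_point(l_left, l_right)
print(round(u), round(v))  # 320 200
omega = heading_command(u, 320.0)  # zero command: robot already centered
```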
International Conference on Computer Vision Systems | 1999
José Santos-Victor; Raquel Frizera Vassallo; Hans Jörg Andreas Schneebeli
We address the problem of vision-based indoor navigation based on a single camera that provides the required visual feedback information. The usual approach relies on a map to localize the robot with respect to the environment. Once the robot position and orientation are known, a suitable trajectory is defined according to the mission goals and the structure of the environment. However, one could argue that it should be possible to perform most missions without precise knowledge of the robot's position and orientation. This is indeed the case for many living beings when they navigate in complex environments. We propose to represent the environment as a topological map that is tightly related to the system's perceptual and motion capabilities. The map should contain environmental information that can easily be extracted by the system, and the mission should be described in terms of a set of available behaviors or primitive actions. We present results that merge visual servoing and appearance-based methods. Servoing is used locally when a continuous stream of visual information is available. Appearance-based methods offer a means of providing a topological description of the environment without using odometry information or any absolute localization method. Preliminary tests are presented and discussed.
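A minimal sketch of the appearance-based idea, assuming stored reference views per place and a plain sum-of-squared-differences comparison (the paper's actual matching scheme may well differ):

```python
import numpy as np

def best_matching_place(current, reference_views):
    """Appearance-based localization: return the index of the stored reference
    view closest to the current image under sum-of-squared-differences."""
    ssd = [np.sum((current - ref) ** 2) for ref in reference_views]
    return int(np.argmin(ssd))

rng = np.random.default_rng(0)
refs = [rng.random((32, 32)) for _ in range(4)]     # stored views of 4 places
query = refs[2] + rng.normal(0, 0.05, (32, 32))     # noisy view of place 2
print(best_matching_place(query, refs))  # 2
```

Note that no odometry or metric localization is involved: recognizing which stored view best matches the current image is enough to place the robot on a topological map.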
SBA: Controle & Automação Sociedade Brasileira de Automática | 2008
Andre Ferreira; Flávio Garcia Pereira; Raquel Frizera Vassallo; Teodiano Freire Bastos Filho; Mario Sarcinelli Filho
An approach to guide a mobile robot from an initial position to a goal position while avoiding any obstacle in its path, when navigating in a semi-structured environment, is proposed in this paper. Such an approach, hereinafter referred to as tangential escape, consists in changing the current robot orientation through a suitable combination of the angular and linear velocities (the control actions) whenever an obstacle is detected close to the robot. The robot then navigates parallel to the tangent to the obstacle boundary at the point the robot's sensing system identifies as the closest one. The stability of the control system designed according to this approach is proven, showing that the robot reaches any reachable goal, with or without a prescribed final orientation. For experimental validation, the control system is programmed onboard a mobile platform whose sensing system is a laser scanner providing 181 range measurements. The results obtained are presented and discussed, leading to the conclusion that the tangential escape approach is able to guide the robot along trajectories that reduce traveling time, thus saving battery and reducing motor wear.
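The tangential escape idea can be sketched as follows. This is a simplified version, not the paper's proven control law: the safety distance d_safe and the rule for choosing which tangent direction to follow are assumptions made for the sketch.

```python
import math

def tangential_escape(goal_angle, obstacle_angle, obstacle_dist, d_safe=1.0):
    """If the closest obstacle (angle in the robot frame, distance in meters)
    is inside the safety zone, rotate the desired heading so the robot moves
    parallel to the obstacle boundary's tangent; otherwise head for the goal."""
    if obstacle_dist >= d_safe:
        return goal_angle
    def ang_diff(a, b):
        return abs(math.atan2(math.sin(a - b), math.cos(a - b)))
    # tangent directions are perpendicular to the obstacle direction;
    # pick the one that deviates least from the goal heading
    t1 = obstacle_angle + math.pi / 2
    t2 = obstacle_angle - math.pi / 2
    return t1 if ang_diff(t1, goal_angle) < ang_diff(t2, goal_angle) else t2

# far obstacle: heading unchanged; close obstacle straight ahead: escape sideways
assert tangential_escape(0.3, 0.0, 2.0) == 0.3
h = tangential_escape(goal_angle=0.0, obstacle_angle=0.0, obstacle_dist=0.4)
print(round(abs(h), 3))  # 1.571, i.e. a 90-degree deviation
```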
International Conference on Robotics and Automation | 2007
Christiano Couto Gava; Raquel Frizera Vassallo; Flavio Roberti; Ricardo Carelli; Teodiano Bastos-Filho
In this work a robot cooperation strategy based on omnidirectional vision is presented. The strategy is applied to a mobile robot team formed by small, simple robots and a bigger leader robot with more computational power. The leader, which must control the team formation, has an omnidirectional camera and sees the other robots. Color segmentation and Kalman filtering are used to obtain the pose of the followers. This information is then used by a nonlinear, stable controller to manage team formation. Simulations and some preliminary experiments were run; the current results are encouraging and motivate the next steps.
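A constant-velocity Kalman filter of the kind that could smooth a follower's position measured by color segmentation might look like the sketch below; the state layout, noise levels and the simulated follower trajectory are all hypothetical, not taken from the paper:

```python
import numpy as np

def kalman_step(x, P, z, dt, q=0.01, r=0.05):
    """One predict/update cycle of a constant-velocity Kalman filter tracking a
    follower's planar position. State x = [px, py, vx, vy]; z is the (px, py)
    measurement from color segmentation in the omnidirectional image."""
    F = np.eye(4)
    F[0, 2] = F[1, 3] = dt                 # constant-velocity motion model
    H = np.zeros((2, 4))
    H[0, 0] = H[1, 1] = 1.0                # only position is observed
    x = F @ x                              # predict
    P = F @ P @ F.T + q * np.eye(4)
    S = H @ P @ H.T + r * np.eye(2)        # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = np.zeros(4), np.eye(4)
for k in range(50):                        # follower moving at 0.1 m/s along x
    z = np.array([0.01 * k, 0.0])
    x, P = kalman_step(x, P, z, dt=0.1)
print(f"estimated position {x[0]:.2f} m, velocity {x[2]:.2f} m/s")
```

The velocity state is what makes the filter useful for formation control: it lets the leader predict where each follower will be between camera frames.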
Sensors | 2014
Mariana Rampinelli; Vitor Buback Covre; Felippe Mendonça de Queiroz; Raquel Frizera Vassallo; Teodiano Bastos-Filho; Manuel Mazo
This paper describes an intelligent space, whose objective is to localize and control robots or robotic wheelchairs to help people. Such an intelligent space has 11 cameras distributed in two laboratories and a corridor. The cameras are fixed in the environment, and image capturing is done synchronously. The system was programmed as a client/server with TCP/IP connections, and a communication protocol was defined. The client coordinates the activities inside the intelligent space, and the servers provide the information needed for that. Since the cameras are used for localization, they have to be properly calibrated. Therefore, a calibration method for a multi-camera network is also proposed in this paper. A robot is used to move a calibration pattern throughout the field of view of the cameras. Then, the captured images and the robot odometry are used for calibration. As a result, the proposed algorithm provides a solution for multi-camera calibration and robot localization at the same time. The intelligent space and the calibration method were evaluated under different scenarios using computer simulations and real experiments. The results demonstrate the proper functioning of the intelligent space and validate the multi-camera calibration method, which also improves robot localization.
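As an illustration of calibrating a camera from point correspondences, a Direct Linear Transform (DLT) sketch is shown below, with the pattern's world positions assumed to come from the robot's odometry. This is a generic textbook method, not the paper's specific multi-camera algorithm:

```python
import numpy as np

def dlt_projection_matrix(world_pts, image_pts):
    """Direct Linear Transform: estimate a 3x4 projection matrix from world
    points (e.g. pattern positions reported by the robot's odometry) and their
    pixel observations. Needs at least 6 non-degenerate correspondences."""
    A = []
    for (X, Y, Z), (u, v) in zip(world_pts, image_pts):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, Vt = np.linalg.svd(np.asarray(A))
    return Vt[-1].reshape(3, 4)            # null-space vector = flattened P

def project(P, X):
    x = P @ np.append(X, 1.0)
    return x[:2] / x[2]

# synthetic check with a known camera placed 2 m behind the working volume
K = np.array([[500.0, 0.0, 320.0], [0.0, 500.0, 240.0], [0.0, 0.0, 1.0]])
P_true = K @ np.hstack([np.eye(3), [[0.0], [0.0], [2.0]]])
rng = np.random.default_rng(1)
pts = rng.uniform([-1, -1, 0], [1, 1, 1], (8, 3))   # pattern poses along the path
obs = [project(P_true, X) for X in pts]
P_est = dlt_projection_matrix(pts, obs)
err = max(np.linalg.norm(project(P_est, X) - u) for X, u in zip(pts, obs))
print(err < 1e-4)  # True: exact synthetic data, so reprojection error is tiny
```

With real odometry the world points are noisy, which is why the paper solves calibration and robot localization jointly rather than one camera at a time.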
Intelligent Robots and Systems | 2002
Raquel Frizera Vassallo; José Santos-Victor; Hans Jörg Andreas Schneebeli
We propose the use of a motor vocabulary, which expresses a robot's specific motor capabilities, for topological map building and navigation. First, the motor vocabulary is created automatically through an imitation behaviour in which the robot learns about its own motor repertoire by following a tutor and associating its own motion perception to motor words. The learnt motor representation is then used for building the topological map. The robot is guided through the environment, automatically captures relevant (omnidirectional) images and associates motor words to links between places in the topological map. Finally, the created map is used for navigation, by invoking sequences of motor words that represent the actions for reaching a desired goal. In addition, a reflex-type behaviour based on optical flow extracted from the omnidirectional images is used to avoid lateral collisions during navigation. The relation between motor vocabulary and imitation is supported by recent findings in neurophysiology of visuomotor (mirror) neurons, which may constitute an internal motor representation related to an animal's capacity for imitation. This approach provides a natural adaptation between the robot's motion capabilities, the environment representations (maps) and the navigation processes. Encouraging results are presented and discussed.
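The map-plus-motor-word idea can be sketched as a labeled graph searched with breadth-first search; the place names and motor words below are hypothetical examples, not the paper's learnt vocabulary:

```python
from collections import deque

# topological map: nodes are places (each backed by an omnidirectional reference
# image); edges are labeled with motor words from the learnt vocabulary
topo_map = {
    "corridor": {"go_straight": "hall", "turn_left": "lab"},
    "hall": {"turn_right": "office"},
    "lab": {},
    "office": {},
}

def plan_motor_words(start, goal):
    """Breadth-first search over the topological map; returns the sequence of
    motor words that takes the robot from start to goal, or None if unreachable."""
    queue = deque([(start, [])])
    visited = {start}
    while queue:
        place, words = queue.popleft()
        if place == goal:
            return words
        for word, nxt in topo_map[place].items():
            if nxt not in visited:
                visited.add(nxt)
                queue.append((nxt, words + [word]))
    return None

print(plan_motor_words("corridor", "office"))  # ['go_straight', 'turn_right']
```

Navigation then amounts to executing the returned motor words in order, with the appearance-based place recognition confirming each node transition.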
Sensors | 2017
Rafael Vivacqua; Raquel Frizera Vassallo; Felipe N. Martins
Autonomous driving on public roads requires precise localization, within the range of a few centimeters. Even the best current precise localization systems based on the Global Navigation Satellite System (GNSS) cannot always reach this level of precision, especially in urban environments, where the signal is disturbed by surrounding buildings and artifacts. Laser range finders and stereo vision have been successfully used for obstacle detection, mapping and localization to solve the autonomous driving problem. Unfortunately, Light Detection and Ranging (LIDAR) sensors are very expensive, and stereo vision requires powerful dedicated hardware to process the cameras' information. In this context, this article presents a low-cost sensor architecture and data fusion algorithm capable of autonomous driving on narrow two-way roads. Our approach combines a short-range visual lane marking detector with a dead reckoning system to build a long and precise perception of the lane markings behind the vehicle. This information is used to localize the vehicle in a map that also contains the reference trajectory for autonomous driving. Experimental results show the successful application of the proposed system in a real autonomous driving situation.
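A minimal sketch of the dead-reckoning part: unicycle odometry integration plus a vehicle-to-world transform that accumulates lane-marking detections behind the car. The speeds, rates and detection offsets below are made up for illustration:

```python
import math

def dead_reckoning_step(pose, v, omega, dt):
    """Integrate unicycle odometry; pose = (x, y, theta)."""
    x, y, th = pose
    return (x + v * math.cos(th) * dt, y + v * math.sin(th) * dt, th + omega * dt)

def to_world(pose, pt_local):
    """Transform a lane-marking detection from the vehicle frame to the world
    frame, so past detections accumulate into a long lane model behind the car."""
    x, y, th = pose
    lx, ly = pt_local
    return (x + lx * math.cos(th) - ly * math.sin(th),
            y + lx * math.sin(th) + ly * math.cos(th))

pose = (0.0, 0.0, 0.0)
lane = []
for _ in range(10):                           # drive straight at 5 m/s, 10 Hz
    lane.append(to_world(pose, (2.0, 1.5)))   # marking seen 2 m ahead, 1.5 m left
    pose = dead_reckoning_step(pose, v=5.0, omega=0.0, dt=0.1)
print(lane[0], lane[-1])  # (2.0, 1.5) (6.5, 1.5)
```

Matching this accumulated lane model against the stored map is what lets the vehicle recover centimeter-level lateral localization without expensive sensors.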
International Conference on Industrial Technology | 2010
Flávio Garcia Pereira; Fabricio Bortolini de Sá; Daniel Bozi Ferreira; Raquel Frizera Vassallo
This paper addresses the human-robot cooperation problem in object transportation tasks, particularly when a human grasps one side of the object while the other extremity is carried by the robot. The robot is equipped with a pair of infrared sensors used to obtain the load's status and use it as feedback to accomplish a specific task. A nonlinear controller is proposed to allow the robot to perform coordinated movements and thus help a human carry an object from an initial position to a final goal with a defined position and orientation. The controller is proved to be asymptotically stable at the equilibrium point, which guarantees the accomplishment of the task. In contrast to other methods, the approach presented in this paper uses neither force nor visual information to estimate the status of the robot and the object. Some experiments are presented to validate the proposed method.
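As a loose illustration of using an infrared pair as feedback: the difference between the two readings can indicate whether the human is pushing the object forward or holding it back, and drive the robot's speed accordingly. The sensor geometry, gain k and saturation v_max below are entirely hypothetical; the paper's nonlinear controller is more elaborate:

```python
def carry_controller(d_front, d_rear, v_max=0.3, k=2.0):
    """Map two infrared range readings (meters) on the gripper to a linear
    velocity command (m/s), saturated at +/- v_max. All constants are invented
    for this sketch."""
    err = d_front - d_rear            # > 0: speed up; < 0: slow down / back up
    return max(-v_max, min(v_max, k * err))

print(round(carry_controller(0.12, 0.10), 3))  # 0.04
print(carry_controller(0.50, 0.10))            # 0.3 (saturated)
```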
International Journal of Advanced Robotic Systems | 2009
Flavio Roberti; Juan Marcos Toibero; Carlos Soria; Raquel Frizera Vassallo; Ricardo Carelli
This paper presents the use of a hybrid collaborative stereo vision system (3D distributed visual sensing using different kinds of cameras) for the autonomous navigation of a wheeled robot team. A triangulation-based method is proposed for computing the 3D posture of an unknown object using the collaborative hybrid stereo vision system, and in this way steering the robot team to a desired position relative to the object while maintaining a desired robot formation. Experimental results with real mobile robots are included to validate the proposed vision system.
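The triangulation step can be sketched with the classic midpoint method for two viewing rays; this is a generic technique assumed for illustration, not necessarily the paper's exact formulation:

```python
import numpy as np

def triangulate_midpoint(c1, d1, c2, d2):
    """Midpoint triangulation: given two camera centers c1, c2 and unit viewing
    rays d1, d2 toward the same object, return the 3D point halfway between the
    closest points of the two rays."""
    # solve for scalars s, t minimizing ||(c1 + s*d1) - (c2 + t*d2)||
    A = np.column_stack([d1, -d2])
    s, t = np.linalg.lstsq(A, c2 - c1, rcond=None)[0]
    return 0.5 * ((c1 + s * d1) + (c2 + t * d2))

# two cameras 2 m apart, both looking at the point (1, 0, 5)
target = np.array([1.0, 0.0, 5.0])
c1, c2 = np.array([0.0, 0.0, 0.0]), np.array([2.0, 0.0, 0.0])
d1 = (target - c1) / np.linalg.norm(target - c1)
d2 = (target - c2) / np.linalg.norm(target - c2)
print(np.allclose(triangulate_midpoint(c1, d1, c2, d2), target))  # True
```

With heterogeneous cameras, each robot contributes one ray in a common frame, so the same midpoint computation applies regardless of the camera type that produced each ray.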