Hans Jörg Andreas Schneebeli
Universidade Federal do Espírito Santo
Publications
Featured research published by Hans Jörg Andreas Schneebeli.
Proceedings of the IEEE Workshop on Omnidirectional Vision 2002. Held in conjunction with ECCV'02 | 2002
Raquel Frizera Vassallo; José Santos-Victor; Hans Jörg Andreas Schneebeli
Computing a camera's ego-motion from an image sequence is easier to accomplish when a spherical retina is used, as opposed to a standard retinal plane. In a spherical field of view, both the foci of expansion and contraction are visible, whereas for a planar retina that is not necessarily the case. Recent research has shown that omnidirectional systems can be used to emulate spherical retinas by mapping image velocity vectors from the omnidirectional image to the spherical retina. That has been done by using the Jacobian of the transformation between the image formation model and the spherical coordinate system. As a consequence, the Jacobian matrix must be derived for each specific omnidirectional camera, to account for the different mirror shapes. Instead, in this paper we derive the Jacobian matrix using a general projection model that can describe all single-projection-center cameras by suitable parameterization. Hence, both the back-projection of an image point to the unit sphere and the mapping of velocities through the transformation Jacobian remain general for all cameras with a single center of projection. We have conducted a series of experimental tests to illustrate the validity of our approach, which led to encouraging results.
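The back-projection step described above can be sketched with the unified central projection model, in which a single mirror parameter (often written xi) covers the pinhole case and catadioptric cameras alike. This is a minimal illustration, not the authors' implementation; the function name and the assumption of already-normalized image coordinates are ours.

```python
import math

def lift_to_sphere(x, y, xi):
    """Back-project normalized image coordinates (x, y) onto the unit
    sphere under a unified central projection model with mirror
    parameter xi (xi = 0 reduces to an ordinary pinhole camera)."""
    r2 = x * x + y * y
    eta = (xi + math.sqrt(1.0 + (1.0 - xi * xi) * r2)) / (r2 + 1.0)
    # The lifted point (eta*x, eta*y, eta - xi) has unit norm by construction.
    return (eta * x, eta * y, eta - xi)

# For a pinhole camera (xi = 0) the image center lifts to the forward pole.
print(lift_to_sphere(0.0, 0.0, 0.0))  # -> (0.0, 0.0, 1.0)
```

Because the lifting is written in terms of the single parameter xi, the same code (and its Jacobian) applies to any camera with a single center of projection, which is the generality the abstract refers to.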
Robotics and Autonomous Systems | 2000
Raquel Frizera Vassallo; Hans Jörg Andreas Schneebeli; José Santos-Victor
Abstract We address the problem of visual-based navigation of a mobile robot in indoor environments. The robot control system is based on a single camera to provide the required visual feedback information. The control strategy merges two distinct paradigms that appeared recently in the technical literature, in order to provide the robustness and computation speed needed for closed-loop control. On one hand, we servo on the vanishing point defined by the intersection of the corridor guidelines. This mode is used for heading control and ensures that the vehicle moves along corridors. On the other hand, we use appearance-based processes to monitor the robot position along the path and to launch different navigation tasks (e.g. turn left, enter door, etc.). The combination of visual servoing techniques, which provide stable control loops for specific local tasks, and appearance-based methods, which embed a representation of the environment at a larger scale, results in extended autonomy even with modest computational resources. Preliminary tests have shown encouraging results, as discussed in the paper.
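The vanishing-point servoing described above reduces to intersecting the two corridor guidelines in the image. A standard way to do this is with homogeneous coordinates, where the cross product of two line vectors gives their intersection. This is a hedged sketch of that geometric step, not the paper's code; the function name and point-pair line representation are our own.

```python
import numpy as np

def vanishing_point(line_a, line_b):
    """Intersect two image lines, each given as a pair of points
    ((x1, y1), (x2, y2)). The cross product of two homogeneous points
    gives the line through them; the cross product of two homogeneous
    lines gives their intersection. For corridor guidelines, that
    intersection is the vanishing point used for heading control."""
    def to_line(p, q):
        return np.cross([p[0], p[1], 1.0], [q[0], q[1], 1.0])
    v = np.cross(to_line(*line_a), to_line(*line_b))
    return v[:2] / v[2]  # back to inhomogeneous image coordinates

# Two guidelines converging at image point (100, 50):
vp = vanishing_point(((0.0, 150.0), (100.0, 50.0)),
                     ((200.0, 150.0), (100.0, 50.0)))
```

A heading controller could then steer on the horizontal offset between the vanishing point and the image center, which is zero when the robot is aligned with the corridor.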
international conference on computer vision systems | 1999
José Santos-Victor; Raquel Frizera Vassallo; Hans Jörg Andreas Schneebeli
We address the problem of visual-based indoor navigation based on a single camera that provides the required visual feedback information. The usual approach relies on a map to relocate the robot with respect to the environment. Once the robot position and orientation are known, a suitable trajectory is defined according to the mission goals and the structure of the environment. However, one could argue that it should be possible to perform most missions without precise knowledge of the robot position and orientation. This is indeed the case for many living beings when they navigate in complex environments. We propose to represent the environment as a topological map that is tightly related to the system's perceptual and motion capabilities. The map should contain environmental information that can easily be extracted by the system, and the mission should be described in terms of a set of available behaviors or primitive actions. We present results that merge visual servoing and appearance-based methods. Servoing is used locally when a continuous stream of visual information is available. Appearance-based methods offer a means of providing a topological description of the environment, without using odometry information or any absolute localization method. Preliminary tests are presented and discussed.
intelligent robots and systems | 2002
Raquel Frizera Vassallo; José Santos-Victor; Hans Jörg Andreas Schneebeli
We propose the use of a motor vocabulary, which expresses a robot's specific motor capabilities, for topological map building and navigation. First, the motor vocabulary is created automatically through an imitation behaviour in which the robot learns about its own motor repertoire by following a tutor and associating its own motion perception to motor words. The learnt motor representation is then used for building the topological map. The robot is guided through the environment and automatically captures relevant (omnidirectional) images and associates motor words to links between places in the topological map. Finally, the created map is used for navigation, by invoking sequences of motor words that represent the actions for reaching a desired goal. In addition, a reflex-type behaviour based on optical flow extracted from omnidirectional images is used to avoid lateral collisions during navigation. The relation between motor vocabulary and imitation is supported by recent findings in neurophysiology of visuomotor (mirror) neurons, which may constitute an internal motor representation related to the animal's capacity for imitation. This approach provides a natural adaptation between the robot's motion capabilities, the environment representations (maps) and the navigation processes. Encouraging results are presented and discussed.
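The navigation step above — invoking a sequence of motor words to reach a goal — can be pictured as a shortest-path search over a topological map whose edges carry motor words. The following is an illustrative sketch under our own assumptions (node names, motor words, and the dictionary-based graph encoding are all hypothetical), not the paper's system.

```python
from collections import deque

def motor_word_path(graph, start, goal):
    """Breadth-first search over a topological map whose edges are
    labeled with motor words. `graph` maps node -> {neighbor: motor_word}.
    Returns the sequence of motor words driving the robot from start
    to goal, or None if the goal is unreachable."""
    queue = deque([(start, [])])
    seen = {start}
    while queue:
        node, words = queue.popleft()
        if node == goal:
            return words
        for nxt, word in graph[node].items():
            if nxt not in seen:
                seen.add(nxt)
                queue.append((nxt, words + [word]))
    return None

# Hypothetical three-place map: lab -> corridor -> hall.
topo_map = {
    "lab": {"corridor": "go_straight"},
    "corridor": {"hall": "turn_left", "lab": "turn_back"},
    "hall": {},
}
```

Calling `motor_word_path(topo_map, "lab", "hall")` yields `["go_straight", "turn_left"]`: the plan is expressed directly in the robot's own motor repertoire, which is the point of grounding the map in a learnt motor vocabulary.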
IFAC Proceedings Volumes | 2004
Mario Sarcinelli-Filho; Hans Jörg Andreas Schneebeli; Eliete Maria de Oliveira Caldeira; Bruno Moreira Silva
Abstract The use of optical flow to provide the sensorial information a mobile robot needs to navigate is addressed. An optical flow-based sensing subsystem is proposed to continuously detect obstacles, and a control subsystem is implemented to update the heading angle of the robot accordingly. All the computations are performed onboard the robot, due to the low memory and processing time demanded by the algorithms programmed, thus guaranteeing the robot full autonomy. An experiment using such a system is discussed, whose results validate the proposed sensing subsystem.
Sba: Controle & Automação Sociedade Brasileira de Automatica | 2007
Eliete Maria de Oliveira Caldeira; Hans Jörg Andreas Schneebeli; Mario Sarcinelli-Filho
This work discusses the use of optical flow to generate the sensorial information a mobile robot needs to react to the presence of obstacles when navigating in a non-structured environment. A sensing system based on optical flow and time-to-collision calculation is proposed and evaluated here, which meets two important requirements. The first is that all computations are performed onboard the robot, in spite of the limited computational capability available. The second is that the algorithms for optical flow and time-to-collision calculation are fast enough to give the mobile robot the capability of reacting to any environmental change in real time. Results of real experiments are presented in which the proposed sensing system is used as the only source of sensorial data to guide a mobile robot in avoiding obstacles while wandering around, and the analysis of these results validates the proposed sensing system.
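The time-to-collision quantity used above has a well-known, calibration-free form: under pure translation toward a surface, the time to contact equals a feature's image distance from the focus of expansion divided by the rate at which that distance grows. The sketch below illustrates only that scalar relation, under our own naming, and is not the authors' onboard implementation.

```python
def time_to_collision(r, r_dot):
    """Estimate time to collision (seconds) from the image distance r
    (pixels) of a tracked feature to the focus of expansion and its
    expansion rate r_dot (pixels/second), both measured from optical
    flow. Under pure translation, tau = r / r_dot; no camera
    calibration or metric depth is required."""
    if r_dot <= 0.0:
        # Feature is not expanding: no collision along this direction.
        return float("inf")
    return r / r_dot

# A feature 50 px from the focus of expansion, receding at 25 px/s,
# gives 2 seconds until contact.
tau = time_to_collision(50.0, 25.0)
```

A reactive controller can then steer away from, or brake for, the image region with the smallest tau, which matches the obstacle-avoidance behaviour the abstract describes.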
Sba: Controle & Automação Sociedade Brasileira de Automatica | 2007
Raquel Frizera Vassallo; José Santos-Victor; Hans Jörg Andreas Schneebeli
We propose an approach that allows a robot to learn a task and represent/adapt it to its own motor repertoire. First, the robot creates a sensorimotor map to convert sensorial information into motor data. Then learning happens through imitation, using motor representations. By imitating other agents, the robot learns a set of elementary motions that forms a motor vocabulary. That vocabulary can eventually be used to compose more complex actions, by combining basic actions, for each specific task domain. We illustrate the approach in a mobile robotics task: topological mapping and navigation. Egomotion estimation is used as a visuomotor map and allows the robot to learn a motor vocabulary converting optical flow measurements from omnidirectional images into motor information. Then the learnt vocabulary is used for topological mapping and navigation. The approach can be extended to different robots and applications. Encouraging results are presented and discussed.
IFAC Proceedings Volumes | 2004
Raquel Frizera Vassallo; José Santos-Victor; Hans Jörg Andreas Schneebeli
Abstract We propose an approach that allows a robot to learn a task through imitation, using motor representations, as suggested by recent findings in neuroscience. The robot relies on a visuomotor map to convert visual information into motor data. Then, by observing and imitating other agents, the robot can learn a set of elementary motions (a motor vocabulary) that will eventually be used to compose more complex actions for each specific task domain. We illustrate the approach in a mobile robotics task. Egomotion estimation is used as a visuomotor map that allows the robot to learn a motor vocabulary for topological mapping and navigation. The approach can be extended to different robots and applications. Encouraging results are presented and discussed.
IFAC Proceedings Volumes | 1998
Teodiano Freire Bastos Filho; Mario Sarcinelli Filho; Eduardo Oliveira Freire; Roger Alex de Castro Freitas; Hans Jörg Andreas Schneebeli
Abstract Object recognition is an important task associated with mobile robots that transport pieces between cells in a flexible production system or between different sections in an industrial plant. Upon detecting any obstacle, the recognition system must be able to inform which obstacle is in the robot path. Thus, the control system should be able to change the current robot behavior, in order to deviate from the detected obstacle or to follow it. However, it is normally necessary to recognize just a few obstacles that are commonly present in the robot operation environment. In this paper, a system is proposed to recognize some objects that can appear in the path of a mobile robot. This system is based on information coming from ultrasonic transducers and a digital monochromatic camera.
Journal of The Optical Society of America A-optics Image Science and Vision | 2015
Josemar Simão; Hans Jörg Andreas Schneebeli; Raquel Frizera Vassallo
Color constancy is the ability to perceive the color of a surface as invariant even under changing illumination. In outdoor applications, such as mobile robot navigation or surveillance, the lack of this ability harms segmentation, tracking, and object recognition tasks. The main approaches to color constancy are generally targeted at static images and aim to estimate the scene illuminant color from the images. We present an iterative color constancy method with temporal filtering applied to image sequences, in which reference colors are estimated from previously corrected images. Furthermore, two strategies for sampling colors from the images are tested. The proposed method has been tested using image sequences with no relative movement between the scene and the camera. It has also been compared with known color constancy algorithms such as gray-world, max-RGB, and gray-edge. In most cases, the iterative color constancy method achieved better results than the other approaches.
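Of the baselines named above, gray-world is the simplest to state: assume the average scene color is achromatic, and rescale each channel so its mean matches the overall mean intensity. The sketch below shows that baseline only (not the paper's iterative temporal method), with our own function name and a float image convention as assumptions.

```python
import numpy as np

def gray_world(image):
    """Gray-world color constancy baseline: assume the average scene
    color is achromatic, then apply per-channel (von Kries-style)
    gains so each channel's mean equals the overall mean intensity.
    `image` is an (H, W, 3) float array with values in [0, 1]."""
    means = image.reshape(-1, 3).mean(axis=0)  # per-channel means
    gains = means.mean() / means               # diagonal correction
    return np.clip(image * gains, 0.0, 1.0)

# A uniform bluish image is pulled back toward gray.
img = np.zeros((4, 4, 3))
img[..., 0], img[..., 1], img[..., 2] = 0.2, 0.4, 0.6
corrected = gray_world(img)
```

The iterative method in the paper differs in that it estimates reference colors from previously corrected frames rather than from the single-image achromatic assumption, but the per-channel gain structure is the common ingredient.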