
Publication


Featured research published by Libor Spacek.


Robotics and Autonomous Systems | 2007

Instantaneous robot self-localization and motion estimation with omnidirectional vision

Libor Spacek; Christopher Burbridge

This paper presents two related methods for autonomous visual guidance of robots: localization by trilateration, and interframe motion estimation. Both methods use coaxial omnidirectional stereopsis (omnistereo), which returns the range r to objects or guiding points detected in the images. The trilateration method achieves self-localization using r from the three nearest objects at known positions. The interframe motion estimation is more general, being able to use any features in an unknown environment. The guiding points are detected automatically on the basis of their perceptual significance, and thus need neither special markings nor placement at known locations. The interframe motion estimation does not require previous motion history, making it well suited for detecting acceleration (within a 20th of a second) and thus supporting dynamic models of robot motion, which will gain in importance when autonomous robots achieve useful speeds. An initial estimate of the robot's rotation ω (the visual compass) is obtained from the angular optic flow in an omnidirectional image. A new non-iterative optic flow method has been developed for this purpose. Adding ω to all observed (robot-relative) bearings θ gives true bearings towards objects (relative to a fixed coordinate frame). The rotation ω and the (r, θ) coordinates obtained at two frames for a single fixed point at an unknown location are sufficient to estimate the translation of the robot. However, a large number of guiding points are typically detected and matched in most real images. Each such point provides a solution for the robot's translation. The solutions are combined by a robust clustering algorithm, Clumat, which reduces rotation and translation errors. Simulator experiments are included for all the presented methods. Real images obtained from an autonomously moving Scitos G5 robot were used to test the interframe rotation and to show that the presented vision methods are applicable to real images in real robotics scenarios.
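
The trilateration step reduces to a small linear system once ranges to three landmarks at known positions are available. The sketch below illustrates the idea in Python; the function name, the linearisation via subtracted range equations, and the example coordinates are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

def trilaterate_2d(landmarks, ranges):
    """Estimate the robot's (x, y) position from ranges to three
    landmarks at known positions (2D trilateration).

    landmarks : (3, 2) array of known landmark positions
    ranges    : (3,) array of measured ranges r to those landmarks

    Subtracting the first range equation from the other two yields a
    linear system in (x, y), which is solved directly.
    """
    (x1, y1), (x2, y2), (x3, y3) = landmarks
    r1, r2, r3 = ranges

    A = np.array([[2 * (x2 - x1), 2 * (y2 - y1)],
                  [2 * (x3 - x1), 2 * (y3 - y1)]])
    b = np.array([r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2,
                  r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2])
    return np.linalg.solve(A, b)

# Example: a robot at (1.0, 2.0) and three landmarks at assumed positions.
landmarks = np.array([[0.0, 0.0], [4.0, 0.0], [0.0, 5.0]])
truth = np.array([1.0, 2.0])
ranges = np.linalg.norm(landmarks - truth, axis=1)
print(trilaterate_2d(landmarks, ranges))   # -> approximately [1.0, 2.0]
```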


Robotics and Autonomous Systems | 2005

A catadioptric sensor with multiple viewpoints

Libor Spacek

Conventional cameras with a limited field of view often lose sight of objects when their bearings change suddenly due to a significant turn of the observer (robot), the object, or both. Catadioptric omnidirectional sensors, consisting of a camera and a mirror, can track objects and estimate their distances more robustly. The shapes of mirrors used by such sensors have differing merits. This paper discusses several advantages of the conical mirror over other shapes of mirrors in current use. A perspective projection unwarping of the conical mirror images is developed and demonstrated. This has hitherto been considered impossible for mirrors with multiple viewpoints. Estimation of distance (range) over a large surrounding area is crucial in mobile robotics. A solution is proposed here in the form of an omnidirectional stereo apparatus with two catadioptric sensors in a vertical coaxial arrangement. The coaxial stereo requires very simple matching, since the epipolar lines are the radial lines of identical orientations in both omnidirectional images. The radial matching is supported by a novel polar edge finder which uses the discrete cosine transform and returns image gradients expressed in polar coordinates.
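
To illustrate why the coaxial arrangement makes ranging simple, here is a minimal sketch that triangulates horizontal range from the two elevation angles observed along a shared radial (epipolar) line. It assumes, as a simplification, that each catadioptric sensor can be treated as a single effective viewpoint on the common vertical axis and that image radius has already been converted to an elevation angle; the names and numbers are illustrative, not taken from the paper.

```python
import math

def omnistereo_range(phi_lower, phi_upper, baseline):
    """Horizontal range to a point seen by two coaxial omnidirectional
    sensors separated vertically by `baseline` (metres).

    phi_lower, phi_upper : elevation angles (radians) of the matched
        feature as seen from the lower and upper viewpoints.  In the
        coaxial arrangement both observations share the same azimuth,
        so matching reduces to a 1D search along the radial line.
    """
    disparity = math.tan(phi_lower) - math.tan(phi_upper)
    if abs(disparity) < 1e-9:
        raise ValueError("zero disparity: point too distant for this baseline")
    return baseline / disparity

# Example: a point 3 m away, 0.5 m above the lower viewpoint, 0.3 m baseline.
d, dz, b = 3.0, 0.5, 0.3
phi_lo = math.atan2(dz, d)          # elevation from the lower sensor
phi_hi = math.atan2(dz - b, d)      # elevation from the upper sensor
print(omnistereo_range(phi_lo, phi_hi, b))   # -> approximately 3.0
```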


International Conference on Advanced Intelligent Mechatronics | 2003

Learning fuzzy logic controller for reactive robot behaviours

Dongbing Gu; Huosheng Hu; Libor Spacek

Fuzzy logic plays an important role in the design of reactive robot behaviours. This paper presents a learning approach to the development of a fuzzy logic controller based on delayed rewards from the real world. The delayed rewards are apportioned to the individual fuzzy rules by using reinforcement Q-learning. Efficient exploration of the solution space is one of the key issues in reinforcement learning. A specific genetic algorithm is developed in this paper to trade off the exploration of learning spaces against the exploitation of learned experience. The proposed approach is evaluated on reactive behaviours of football-playing robots.
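
A minimal sketch of the fuzzy Q-learning idea follows: each rule keeps Q-values for its candidate consequents, and the temporal-difference error from a delayed reward is apportioned to the rules in proportion to their firing strengths. The rule and action counts, the hyperparameters, and the epsilon-greedy exploration (a simple stand-in for the paper's genetic-algorithm exploration) are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: N_RULES fuzzy rules, each choosing among N_ACTIONS
# discrete consequents; q[i, a] is the quality of action a under rule i.
N_RULES, N_ACTIONS = 4, 3
ALPHA, GAMMA, EPSILON = 0.1, 0.9, 0.2
q = np.zeros((N_RULES, N_ACTIONS))

def choose_actions(phi):
    """Epsilon-greedy choice of a consequent action for each rule."""
    greedy = q.argmax(axis=1)
    explore = rng.integers(N_ACTIONS, size=N_RULES)
    mask = rng.random(N_RULES) < EPSILON
    return np.where(mask, explore, greedy)

def fuzzy_q_step(phi, phi_next, reward, action_per_rule):
    """One fuzzy Q-learning update.

    phi, phi_next   : firing strengths of each rule before/after acting
                      (normalised so they sum to 1).
    reward          : delayed scalar reward from the environment.
    action_per_rule : action index chosen for each rule on this step.

    The temporal-difference error is apportioned to the individual rules
    in proportion to how strongly each one fired.
    """
    q_taken = np.sum(phi * q[np.arange(N_RULES), action_per_rule])
    q_best_next = np.sum(phi_next * q.max(axis=1))
    td_error = reward + GAMMA * q_best_next - q_taken
    q[np.arange(N_RULES), action_per_rule] += ALPHA * phi * td_error
```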


European Conference on Computer Vision | 2004

Coaxial Omnidirectional Stereopsis

Libor Spacek

Catadioptric omnidirectional sensors, consisting of a camera and a mirror, can track objects even when their bearings change suddenly, usually due to the observer making a significant turn. There has been much debate concerning the relative merits of several possible shapes of mirrors to be used by such sensors.


British Machine Vision Conference | 1994

Constructing Coherent Boundaries.

Tajje-eddine Rachidi; Libor Spacek

This paper presents a new approach to the problem of gap bridging and junction detection. Perceptual groupings and evidence from existing boundaries are combined to produce joins which are consistent with the initial image structure. In particular, a new definition of co-curvilinearity, which distinguishes between true and false co-curvilinearity, is given. Structural quantities are computed for all potential joins. A local non-iterative algorithm selects the best joins and/or junctions which satisfy specific structural conditions. No assumption, domain restriction, or model is needed. The current implementation is fully presented together with the results obtained.
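
As a rough illustration of the kind of structural quantity involved, the sketch below scores a candidate join between two boundary endpoints by how well both tangents agree with the join direction, penalising long gaps. This is a generic co-curvilinearity measure for illustration only; it is not the paper's definition, which additionally distinguishes true from false co-curvilinearity.

```python
import math

def cocurvilinearity_score(p1, t1, p2, t2):
    """A generic co-curvilinearity score between two boundary endpoints.

    p1, p2 : (x, y) endpoints of the two boundary fragments.
    t1, t2 : tangent directions (radians) of the fragments at those endpoints.

    The score favours pairs whose tangents both agree with the direction
    of the candidate join and penalises long gaps; higher is better.
    """
    dx, dy = p2[0] - p1[0], p2[1] - p1[1]
    gap = math.hypot(dx, dy)
    join_dir = math.atan2(dy, dx)

    def angdiff(a, b):
        """Smallest absolute difference between two directions (mod pi)."""
        d = abs(a - b) % math.pi
        return min(d, math.pi - d)

    misalignment = angdiff(t1, join_dir) + angdiff(t2, join_dir)
    return math.exp(-misalignment) / (1.0 + gap)

# Example: two nearly collinear horizontal fragments with a 2-pixel gap.
print(cocurvilinearity_score((10, 5), 0.0, (12, 5), 0.05))
```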


Applied Artificial Intelligence | 1994

Face Recognition through Learned Boundary Characteristics

Libor Spacek; Miroslav Kubat; Doris Flotzinger

This paper presents a new approach to face recognition, combining the techniques of computer vision and machine learning. A steady improvement in recognition performance is demonstrated. It is achieved by learning individual faces in terms of the local shapes of image boundaries. High-level facial features, such as the nose, are not explicitly used in this scheme. Several machine learning methods are tested and compared. The overall objectives are formulated as follows: classify the different tasks of "face recognition" and suggest an orderly terminology to distinguish between them; design a set of easily and reliably obtainable descriptors and their automatic extraction from the images; compare plausible machine learning methods and tailor them to this domain; design experiments that best reflect the needs of real-world applications, and suggest a general methodology for further research; perform the experiments and compare the performance.
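
As a rough sketch of what learning faces from local boundary shapes can look like in practice, the code below builds a grid of boundary-orientation histograms from a greyscale face image and classifies by nearest neighbour. The descriptor, grid size, and classifier are illustrative stand-ins, not the descriptors or learning methods compared in the paper.

```python
import numpy as np

def boundary_orientation_descriptor(image, n_bins=12, grid=4):
    """Describe a greyscale face image by histograms of local boundary
    orientation, computed over a coarse spatial grid.

    A generic stand-in for 'local shapes of image boundaries'; it is not
    the descriptor set defined in the paper.
    """
    gy, gx = np.gradient(image.astype(float))
    mag = np.hypot(gx, gy)                            # boundary strength
    ang = np.mod(np.arctan2(gy, gx), np.pi)           # orientation mod pi
    h, w = image.shape
    desc = []
    for i in range(grid):
        for j in range(grid):
            cell = (slice(i * h // grid, (i + 1) * h // grid),
                    slice(j * w // grid, (j + 1) * w // grid))
            hist, _ = np.histogram(ang[cell], bins=n_bins,
                                   range=(0, np.pi), weights=mag[cell])
            desc.append(hist / (hist.sum() + 1e-9))
    return np.concatenate(desc)

def nearest_neighbour_identity(query, gallery_descs, gallery_labels):
    """Classify a query descriptor by its nearest gallery descriptor."""
    dists = [np.linalg.norm(query - g) for g in gallery_descs]
    return gallery_labels[int(np.argmin(dists))]
```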


British Machine Vision Conference | 1997

Distinctive Descriptions for Face Processing.

Darryl Hond; Libor Spacek


Archive | 1985

The Detection of Contours and their Visual Motion

Libor Spacek


Archive | 2005

Omnidirectional Vision Simulation and Robot Localisation

Christopher Burbridge; Libor Spacek


Applied Informatics | 2003

An adaptive color segmentation algorithm for Sony legged robots

Bo Li; Huosheng Hu; Libor Spacek

Collaboration


Dive into Libor Spacek's collaborations.

Top Co-Authors

Bo Li

Tsinghua University
