Jan-Olof Eklundh
Royal Institute of Technology
Publications
Featured research published by Jan-Olof Eklundh.
European Conference on Computer Vision | 2006
Gareth Loy; Jan-Olof Eklundh
A novel and efficient method is presented for grouping feature points on the basis of their underlying symmetry and characterising the symmetries present in an image. We show how symmetric pairs of features can be efficiently detected, how the symmetry bonding each pair is extracted and evaluated, and how these can be grouped into symmetric constellations that specify the dominant symmetries present in the image. Symmetries over all orientations and radii are considered simultaneously, and the method is able to detect local or global symmetries, locate symmetric figures in complex backgrounds, detect bilateral or rotational symmetry, and detect multiple incidences of symmetry.
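To make the grouping idea concrete, here is a minimal sketch in the spirit of the method: features are matched against a horizontally flipped copy of the image as a stand-in for descriptor mirroring, and each matched pair votes for a bilateral symmetry axis through its midpoint. The SIFT features, the flipping shortcut, and the voting details are illustrative assumptions, not the paper's exact formulation.

```python
import cv2
import numpy as np

def symmetry_axis_votes(gray):
    """Collect (midpoint, angle) votes for bilateral symmetry axes; peaks in
    this vote space indicate the dominant symmetries in the image."""
    sift = cv2.SIFT_create()
    kp, desc = sift.detectAndCompute(gray, None)
    # Detect features in the horizontally flipped image as a stand-in for
    # matching against mirrored descriptors.
    kp_m, desc_m = sift.detectAndCompute(cv2.flip(gray, 1), None)
    if desc is None or desc_m is None:
        return []
    w = gray.shape[1]
    # Map flipped keypoint coordinates back into the original frame.
    pts_m = np.array([(w - 1 - k.pt[0], k.pt[1]) for k in kp_m])
    matches = cv2.BFMatcher(cv2.NORM_L2).knnMatch(desc, desc_m, k=1)
    votes = []
    for m in (mm[0] for mm in matches if mm):
        p = np.array(kp[m.queryIdx].pt)
        q = pts_m[m.trainIdx]
        if np.linalg.norm(p - q) < 1e-6:
            continue  # feature lies on the axis itself; angle undefined
        mid = 0.5 * (p + q)
        # The axis passes through the midpoint, perpendicular to the pair.
        angle = np.arctan2(q[1] - p[1], q[0] - p[0]) + np.pi / 2
        votes.append((mid, angle))
    return votes
```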
IEEE Transactions on Pattern Analysis and Machine Intelligence | 1991
Ann Bengtsson; Jan-Olof Eklundh
An approach is presented for deriving qualitative descriptions of contours containing structures at different (unknown) scales. The descriptions are in terms of straight arcs, curved arcs with sign of curvature, corners, and points delimiting the arcs: inflexion points and transitions from straight to curved. Furthermore, the tangents at these points are derived. The approach is based on the construction of a hierarchic family of polygons, having the scale-space property of causality; structure can only disappear as scale goes from fine to coarse. Using the principle that structures that are stable over scale represent significant properties, the features of the descriptive representations are then derived.
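As a rough illustration of the stability-over-scale idea, the sketch below builds a family of polygonal approximations at increasing tolerances and keeps the coarse-scale vertices that persist at every finer scale. The Douglas-Peucker approximation via cv2.approxPolyDP is an assumption of this sketch, not the paper's own hierarchic polygon construction.

```python
import cv2
import numpy as np

def stable_vertices(contour, tolerances=(1, 2, 4, 8, 16), radius=3.0):
    """Return coarse-scale polygon vertices that persist over a range of
    approximation scales; stability over scale is taken as significance."""
    polys = [cv2.approxPolyDP(contour, eps, True).reshape(-1, 2).astype(float)
             for eps in tolerances]
    stable = []
    for v in polys[-1]:  # vertices of the coarsest polygon
        # Keep v if every finer polygon has a vertex within `radius` pixels.
        if all(np.min(np.linalg.norm(p - v, axis=1)) <= radius
               for p in polys[:-1]):
            stable.append(v)
    return np.array(stable)
```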
Robotics and Autonomous Systems | 2005
Danica Kragic; Mårten Björkman; Henrik I. Christensen; Jan-Olof Eklundh
In this paper, we present a vision system for robotic object manipulation tasks in natural, domestic environments. Given complex fetch-and-carry robot tasks, the issues related to the whole detect-approach-grasp loop are considered. Our vision system integrates a number of algorithms using monocular and binocular cues to achieve robustness in realistic settings. The cues are considered and used in connection with both foveal and peripheral vision to provide depth information, segmentation of the object(s) of interest, object recognition, tracking, and pose estimation. One important property of the system is that the step from object recognition to pose estimation is completely automatic, combining both appearance and geometric models. Experimental evaluation is performed in a realistic indoor environment with occlusions, clutter, and changing lighting and background conditions.
European Conference on Computer Vision | 1990
Stefan Carlsson; Jan-Olof Eklundh
When a visual observer moves forward, the projections of the objects in the scene will move over the visual image. If an object extends vertically from the ground, its image will move differently from the immediate background. This difference is called motion parallax [1, 2]. Much work in automatic visual navigation and obstacle detection has been concerned with computing motion fields or more or less complete 3-D information about the scene [3–5]. These approaches, in general, assume a very unconstrained environment and motion. If the environment is constrained, for example, motion occurs on a planar road, then this information can be exploited to give more direct solutions to, for example, obstacle detection [6]. Figure 6.1 shows the superposed images from two successive time instants for an observer translating relative to a planar road. The arrows show the displacement field, that is, the transformation of the image points between the successive time instants.
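For reference, a standard formulation of this planar-surface constraint (not necessarily the chapter's own notation): for a camera with intrinsics $K$ moving with rotation $R$ and translation $t$ relative to a plane with unit normal $n$ at distance $d$, image points on the plane map between the two frames by the planar homography

$$x' \sim H\,x, \qquad H = K\left(R + \frac{t\,n^{\top}}{d}\right)K^{-1}.$$

A point extending above the road deviates from this predicted displacement, and that residual is the motion parallax that signals an obstacle.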
CVGIP: Image Understanding | 1992
Kourosh Pahlavan; Jan-Olof Eklundh
In this paper it is argued that if the goal of research in computational vision is to understand seeing systems, then we need to develop systems with anthropomorphic features. The empirical nature of such research also requires systems by which one can perform experiments in real time and in interaction with the environment. By an analysis based on observations of the human visual system we describe a design of such a system. Details of its implementation and performance are also provided. To demonstrate that the system meets the requirements derived from our analysis, we finally describe a set of experiments with our system involving gaze control and the integration of primary ocular processes.
Image and Vision Computing | 2010
Barbara Caputo; Eric Hayman; Mario Fritz; Jan-Olof Eklundh
Classifying materials from their appearance is challenging. Impressive results have been obtained under varying illumination and pose conditions, but the effect of scale variations and the possibility of generalising across different material samples remain largely unexplored. This paper (a preliminary version of this work was presented in Hayman et al. [E. Hayman, B. Caputo, M.J. Fritz, J.-O. Eklundh, On the significance of real world conditions for material classification, in: Proceedings of the ECCV, Lecture Notes in Computer Science, vol. 4, Springer, Prague, 2004, pp. 253–266]) addresses these issues, proposing a pure learning approach based on support vector machines. We study the effect of scale variations first on the artificially scaled CUReT database, showing how performance depends on the amount of scale information available during training. Since the CUReT database contains little scale variation and only one sample per material, we introduce a new database containing 10 CUReT materials at different distances, poses and illuminations. This database provides scale variations while allowing us to evaluate generalisation capabilities: does training on the CUReT database enable recognition of another piece of sandpaper? Our results demonstrate that this is not yet possible, and that material classification is far from being solved in scenarios of practical interest.
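A minimal sketch of a pure learning approach of this kind, assuming grey-level co-occurrence statistics (via scikit-image) as the texture descriptor and an RBF kernel; the features and kernels actually used in the paper may well differ.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops
from sklearn.svm import SVC

def texture_features(patch):
    """patch: uint8 grey-level image. Co-occurrence statistics at a few
    offsets serve as a simple, scale-sensitive texture descriptor."""
    glcm = graycomatrix(patch, distances=[1, 2, 4],
                        angles=[0.0, np.pi / 2], levels=256, normed=True)
    return np.hstack([graycoprops(glcm, p).ravel()
                      for p in ("contrast", "homogeneity", "energy")])

def train_material_classifier(patches, labels):
    """patches: list of uint8 images; labels: material names or ids."""
    X = np.array([texture_features(p) for p in patches])
    clf = SVC(kernel="rbf", C=10.0, gamma="scale")
    clf.fit(X, labels)
    return clf
```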
Computer Vision and Image Understanding | 2000
Atsuto Maki; Peter Nordlund; Jan-Olof Eklundh
We present an approach to attention in active computer vision. The notion of attention plays an important role in biological vision. In recent years, and especially with the emerging interest in active vision, computer vision researchers have been increasingly concerned with attentional mechanisms as well. The basic principles behind these efforts are greatly influenced by psychophysical research. That is also the case in the work presented here, which adapts the model of Treisman (1985, Comput. Vision Graphics Image Process. Image Understanding 31, 156–177), with an early parallel stage with preattentive cues followed by a later serial stage where the cues are integrated. The contributions of our approach are (i) the incorporation of depth information from stereopsis, (ii) the simple implementation of low-level modules such as disparity and flow by local phase, and (iii) the cue integration along pursuit and saccade modes that allows proper target selection based on nearness and motion. We demonstrate the technique by experiments in which a moving observer selectively masks out different moving objects in real scenes.
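A schematic sketch of the depth-based cue integration: nearness (from disparity) and motion (from flow magnitude) are combined into a single saliency map whose peak selects the target for pursuit or saccades. The linear combination and normalisation below are illustrative assumptions, not the paper's integration scheme.

```python
import numpy as np

def select_target(disparity, flow_mag, w_near=0.5, w_motion=0.5):
    """disparity, flow_mag: HxW float maps; larger disparity means nearer."""
    def norm01(x):
        rng = np.ptp(x)
        return (x - x.min()) / rng if rng > 0 else np.zeros_like(x)
    # Weighted combination of the nearness and motion cues.
    saliency = w_near * norm01(disparity) + w_motion * norm01(flow_mag)
    y, x = np.unravel_index(np.argmax(saliency), saliency.shape)
    return (x, y), saliency  # target pixel for pursuit/saccade, plus the map
```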
International Conference on Pattern Recognition | 1996
Atsuto Maki; Peter Nordlund; Jan-Olof Eklundh
We present a computational model for attention. It consists of an early parallel stage with preattentive cues followed by a later serial stage, where the cues are integrated. We base the model on disparity, image flow, and motion. As one of several possibilities, we choose a depth-based criterion to integrate these cues, in such a way that attention is maintained on the closest moving object. We demonstrate the technique by experiments in which a moving observer selectively masks out different moving objects in real scenes.
Computer Vision and Pattern Recognition | 2001
Peter Nillius; Jan-Olof Eklundh
We present a fully automatic algorithm for estimating the projected light source direction from a single image. The requirement is that the image contains a segment of an occluding contour of an object with locally Lambertian surface reflectance. The algorithm consists of three stages. First, a heuristic algorithm picks out potential occluding contours using color and edge information. Second, for each contour, the light source direction is estimated using a shading model. In the final stage, the results from the estimations are fused in a Bayesian network to arrive at the most likely light source direction. The probabilistic model takes into account that the contours from the first stage might not be occluding contours; using the same framework, the contours are also classified as occluding or not. Experiments test the second stage, estimating the light source direction from an occluding contour, as well as the full algorithm.
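The shading cue at the heart of the second stage can be sketched as follows: along an occluding contour of a Lambertian surface the normal lies in the image plane, so contour intensities constrain the projected light direction via I ≈ ρ max(0, n·l). The simple least-squares fit below stands in for the paper's estimator and omits the Bayesian fusion stage entirely.

```python
import numpy as np

def light_direction_from_contour(normals, intensities):
    """normals: Nx2 unit normals along an occluding contour (they lie in the
    image plane there); intensities: N observed intensities.
    Returns a unit 2-vector estimate of the projected light direction."""
    N = np.asarray(normals, dtype=float)
    I = np.asarray(intensities, dtype=float)
    # Solve N @ v ~= I in the least-squares sense; v = rho * l absorbs albedo.
    v, *_ = np.linalg.lstsq(N, I, rcond=None)
    # Refit once using only points actually lit under this estimate
    # (the max(0, .) in the Lambertian model).
    lit = N @ v > 0
    if lit.sum() >= 2:
        v, *_ = np.linalg.lstsq(N[lit], I[lit], rcond=None)
    return v / np.linalg.norm(v)
```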
European Conference on Computer Vision | 1992
Kourosh Pahlavan; Tomas Uhlin; Jan-Olof Eklundh
The study of active vision using binocular head-eye systems requires answers to some fundamental questions in the control of attention. This paper presents a cooperative solution to resolving the ambiguities generated by the processes engaged in fixation. We suggest an approach based on integration of these processes, resulting in cooperatively extracted unique solutions. The discussion begins by looking at biological vision. Based on this discussion, a model of integration for machine vision is suggested. The implementation of the model on the KTH-head, a head-eye system simulating the essential degrees of freedom of mammals, is explained, and in this context the primary processes in the head-eye system are briefly described. The major stress is put on the idea that the rivalling processes in vision in general, and the head's behavioural processes in particular, result in a reliable outcome. As an experiment, the ambiguities raised by fixation on repetitive patterns are tested; the cooperative approach proves to handle the problem correctly and finds a unique solution for the fixation point dynamically and in real time.