John Folkesson
Royal Institute of Technology
Publications
Featured research published by John Folkesson.
international conference on robotics and automation | 2004
John Folkesson; Henrik I. Christensen
We describe an approach to simultaneous localization and mapping (SLAM). This approach has the highly desirable property of robustness to data association errors. Another important advantage of our algorithm is that non-linearities are computed exactly, so that global constraints can be imposed even if they result in large shifts to the map. We represent the map as a graph and use the graph to find an efficient map update algorithm. We also show how topological consistency, such as closing a loop, can be imposed on the map. The algorithm has been implemented on an outdoor robot and we have experimental validation of our ideas. We also explain how the graph can be simplified, leading to linear approximations of sections of the map. This reduction gives us a natural way to connect local map patches into a much larger global map.
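The idea of representing the map as a graph and imposing a loop-closing constraint can be illustrated with a deliberately tiny sketch (a hypothetical 1-D toy, not the paper's implementation): poses are nodes, relative odometry measurements are edges, and a loop closure is simply one more edge whose constraint redistributes the accumulated drift.

```python
# Toy 1-D pose graph: minimize squared residuals of relative measurements
# by gradient descent. Pose 0 is anchored at the origin.

def optimize(num_poses, edges, iters=5000, step=0.1):
    """edges: (i, j, z) meaning the measurement x[j] - x[i] should equal z."""
    x = [0.0] * num_poses
    for _ in range(iters):
        grad = [0.0] * num_poses
        for i, j, z in edges:
            r = (x[j] - x[i]) - z   # residual of this edge
            grad[j] += r
            grad[i] -= r
        for k in range(1, num_poses):  # keep pose 0 fixed
            x[k] -= step * grad[k]
    return x

# Three odometry steps of length 1, then a loop closure saying the robot
# is back at its start. The inconsistency is spread evenly over all edges.
edges = [(0, 1, 1.0), (1, 2, 1.0), (2, 3, 1.0), (3, 0, 0.0)]
poses = optimize(4, edges)
print(poses)
```

With equal edge weights the optimum places the poses at 0, 0.25, 0.5, 0.75, i.e. each of the four edges absorbs a quarter of the drift.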
international conference on robotics and automation | 2006
Patric Jensfelt; Danica Kragic; John Folkesson; Mårten Björkman
This paper presents a framework for 3D vision-based bearing-only SLAM using a single camera, an interesting setup for many real applications due to its low cost. The focus is on the management of the features to achieve real-time performance in extraction, matching, and loop detection. For matching image features to map landmarks, a modified, rotationally variant SIFT descriptor is used in combination with a Harris-Laplace detector. To reduce the complexity of the map estimation while maintaining matching performance, only a few high-quality image features are used as map landmarks; the rest of the features are used for matching. The framework has been combined with an EKF implementation for SLAM. Experiments performed in indoor environments are presented. These experiments demonstrate the validity and effectiveness of the approach. In particular, they show how the robot is able to successfully match current image features to the map when revisiting an area.
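The descriptor matching step can be sketched generically as nearest-neighbour search with a ratio test, a standard way to reject ambiguous matches (the paper's modified rotationally variant SIFT descriptor is not reproduced here; the 2-D "descriptors" below are purely illustrative):

```python
# Generic nearest-neighbour matcher with Lowe-style ratio test: a feature
# matches a landmark only if its best match is clearly better than its
# second-best, which suppresses ambiguous associations.

def match(features, landmarks, ratio=0.8):
    """Return (feature_idx, landmark_idx) pairs passing the ratio test."""
    def dist2(a, b):
        return sum((u - v) ** 2 for u, v in zip(a, b))
    pairs = []
    for i, f in enumerate(features):
        ranked = sorted((dist2(f, l), j) for j, l in enumerate(landmarks))
        best, second = ranked[0], ranked[1]
        if best[0] < (ratio ** 2) * second[0]:  # unambiguous nearest neighbour
            pairs.append((i, best[1]))
    return pairs

landmarks = [(0.0, 0.0), (1.0, 0.0), (0.0, 1.0)]
features = [(0.05, 0.02), (0.9, 0.1)]
print(match(features, landmarks))  # -> [(0, 0), (1, 1)]
```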
international conference on robotics and automation | 2005
John Folkesson; Patric Jensfelt; Henrik I. Christensen
In this paper we describe an approach to feature representation for simultaneous localization and mapping (SLAM). It is a general representation for features that addresses symmetries and constraints in the feature coordinates. Furthermore, the representation allows features to be added to the map with partial initialization. This is an important property when using oriented vision features, where angle information can be used before their full pose is known. The number of dimensions of a feature can grow with time as more information is acquired. While the special properties of each type of feature are accounted for, the commonalities of all map features are also exploited, so that SLAM algorithms can be interchanged independently of the choice of sensors and features. In other words, the SLAM implementation need not be changed at all when changing sensors and features, and vice versa. Experimental results with vision data, range data, and combinations thereof are presented.
international conference on robotics and automation | 2011
Alper Aydemir; Kristoffer Sjöö; John Folkesson; Andrzej Pronobis; Patric Jensfelt
Objects are integral to a robot's understanding of space. Various tasks such as semantic mapping, pick-and-carry missions, or manipulation involve interaction with objects. Previous work in the field largely builds on the assumption that the object in question starts out within the ready sensory reach of the robot. In this work we aim to relax this assumption by providing the means to perform robust and large-scale active visual object search. We present spatial relations that describe topological relationships between objects, and then show how to use these to create potential search actions. We introduce a method for efficiently selecting search strategies given probabilities for those relations. Finally, we perform experiments to verify the feasibility of our approach.
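Selecting among search actions given relation-derived probabilities can be sketched as a greedy utility rule (a hypothetical simplification, with made-up action names and probabilities; the paper's selection method is more involved): prefer the action with the highest probability of finding the object per unit search cost.

```python
# Greedy search-action selection: utility = P(find | relations) / cost.
# Action names, probabilities, and costs below are illustrative only.

def best_action(actions):
    """actions: list of (name, p_found, cost); return name maximizing p/cost."""
    return max(actions, key=lambda a: a[1] / a[2])[0]

candidates = [
    ("search_desk",  0.7, 2.0),  # "cup ON desk" relation makes the desk likely
    ("search_shelf", 0.3, 1.0),
    ("search_floor", 0.1, 1.0),
]
print(best_action(candidates))  # -> search_desk
```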
IEEE Transactions on Robotics | 2007
John Folkesson; Henrik I. Christensen
The problem of simultaneous localization and mapping (SLAM) is addressed using a graphical method. The main contributions are a computational complexity that scales well with the size of the environment, the elimination of most of the linearization inaccuracies, and a more flexible and robust data association. We also present a detection criterion for closing loops. We show how multiple topological constraints can be imposed on the graphical solution by a process of coarse fitting followed by fine tuning. The coarse fitting is performed using an approximate system. This approximate system can be shown to possess all the local symmetries. Observations made during the SLAM process often contain symmetries, that is to say, directions of change to the state space that do not affect the observed quantities. It is important that these directions do not shift as we approximate the system by, for example, linearization. The approximate system is both linear and block diagonal. This makes it a very simple system to work with, especially when imposing global topological constraints on the solution. These global constraints are nonlinear. We show how these constraints can be discovered automatically. We develop a method of testing multiple hypotheses for data matching using the graph. This method is derived from statistical theory and only requires simple counting of observations. The central insight is to examine the probability of not observing the same features on a return to a region. We present results with data from an outdoor scenario using a SICK laser scanner.
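The counting insight can be illustrated with a toy detection model (an assumed independence model for illustration, not the paper's exact statistic): if n previously mapped features should each re-detect with probability p on a genuine revisit, then matching none of them has probability (1 - p)^n, which quickly becomes strong evidence against the "same place" hypothesis.

```python
# Probability of re-observing none of n features on a revisit, assuming
# independent per-feature detection with probability p (illustrative model).

def p_no_matches(n_features, p_detect):
    return (1.0 - p_detect) ** n_features

# With 10 mapped features and a 0.5 per-feature detection rate, failing to
# match any of them happens less than 0.1% of the time on a true revisit.
print(p_no_matches(10, 0.5))  # -> 0.0009765625
```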
intelligent robots and systems | 2005
John Folkesson; Patric Jensfelt; Henrik I. Christensen
In this paper we combine a graphical approach for simultaneous localization and mapping (SLAM) with a feature representation that addresses symmetries and constraints in the feature coordinates: the measurement subspace, or M-space. The graphical method has the advantages of delayed linearization and soft commitment to feature measurement matching. It also allows large maps to be built up as a network of small local patches, called star nodes. This local map network is then easier to work with. The formation of the star nodes is explicitly stable and invariant under all the symmetries of the original measurements. All linearization errors are kept small by using a local frame. The construction of this invariant star is made clearer by the M-space feature representation. The M-space allows the symmetries and constraints of the measurements to be explicitly represented. We present results using both vision and laser sensors.
IEEE Transactions on Robotics | 2007
John Folkesson; Patric Jensfelt; Henrik I. Christensen
In this paper, a new feature representation for simultaneous localization and mapping (SLAM) is discussed. The representation addresses feature symmetries and constraints explicitly to make the basic model numerically robust. In previous SLAM work, complete initialization of features is typically performed prior to introduction of a new feature into the map. This results in delayed use of new data. To allow early use of sensory data, the new feature representation addresses the use of features that initially have been only partially observed. This is achieved by explicitly modelling the subspace of a feature that has been observed. In addition to accounting for the special properties of each feature type, the commonalities can be exploited in the new representation to create a feature framework that allows for interchanging of SLAM algorithms, sensors, and features. Experimental results are presented using a low-cost Web-cam, a laser range scanner, and combinations thereof.
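Partial initialization via an explicitly modelled observed subspace can be sketched very roughly (an illustrative stand-in, not the paper's M-space machinery): a line-like landmark whose bearing is usable immediately, while its range dimension joins the state only once it has actually been observed.

```python
# Illustrative partially initialised feature: dimensions enter the state
# only after they have been observed, so early bearing-only information
# can be used before the feature's full pose is known.

class PartialFeature:
    def __init__(self):
        self.state = {"angle": None, "range": None}

    def observed_dims(self):
        """The currently observed subspace of the feature."""
        return [k for k, v in self.state.items() if v is not None]

    def update(self, **measurements):
        self.state.update(measurements)

f = PartialFeature()
f.update(angle=1.2)               # a bearing-only view initialises orientation
print(f.observed_dims())          # -> ['angle']
f.update(range=3.5)               # a later view completes the feature
print(sorted(f.observed_dims()))  # -> ['angle', 'range']
```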
IEEE Journal of Oceanic Engineering | 2013
Maurice Fallon; John Folkesson; Hunter McClelland; John J. Leonard
This paper describes a system for reacquiring features of interest in a shallow-water ocean environment, using autonomous underwater vehicles (AUVs) equipped with low-cost sonar and navigation sensors. In performing mine countermeasures, it is critical to enable AUVs to navigate accurately to previously mapped objects of interest in the water column or on the seabed, for further assessment or remediation. An important aspect of the overall system design is to keep the size and cost of the reacquisition vehicle as low as possible, as it may potentially be destroyed in the reacquisition mission. This low-cost requirement prevents the use of sophisticated AUV navigation sensors, such as a Doppler velocity log (DVL) or an inertial navigation system (INS). Our system instead uses the Proviewer 900-kHz imaging sonar from Blueview Technologies, which produces forward-looking sonar (FLS) images at ranges up to 40 m at approximately 4 Hz. In large volumes, it is hoped that this sensor can be manufactured at low cost. Our approach uses a novel simultaneous localization and mapping (SLAM) algorithm that detects and tracks features in the FLS images to renavigate to a previously mapped target. This feature-based navigation (FBN) system incorporates a number of recent advances in pose graph optimization algorithms for SLAM. The system has undergone extensive field testing over a period of more than four years, demonstrating the potential of this new approach for feature reacquisition. In this report, we review the methodologies and components of the FBN system, describe the system's technological features, review the performance of the system in a series of extensive in-water field tests, and highlight issues for future research.
international conference on robotics and automation | 2003
John Folkesson; Henrik I. Christensen
In this paper we describe the use of automatic exploration for autonomous mapping of outdoor scenes. We describe a real-time SLAM implementation along with an autonomous exploration algorithm. We have ...
intelligent robots and systems | 2014
Rares Ambrus; Nils Bore; John Folkesson; Patric Jensfelt
We present a novel method for re-creating the static structure of cluttered office environments, which we define as the "meta-room", from multiple observations collected by an autonomous robot equipped with an RGB-D camera over extended periods of time. Our method works directly with point clusters by identifying what has changed from one observation to the next, removing the dynamic elements and at the same time adding previously occluded objects, to reconstruct the underlying static structure as accurately as possible. The process of constructing the meta-rooms is iterative and is designed to incorporate new data as it becomes available, as well as to be robust to environment changes. The latest estimate of the meta-room is used to differentiate and extract clusters of dynamic objects from observations. In addition, we present a method for re-identifying the extracted dynamic objects across observations, thus mapping their spatial behaviour over extended periods of time.
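The core differencing step can be sketched with a crude voxel-grid stand-in (a hypothetical simplification; the paper operates on full point-cloud observations with more careful reasoning about occlusion): points of a new observation that fall in cells already occupied by the static model are treated as static, the rest as candidate dynamic elements.

```python
# Toy static/dynamic split for one observation against a static model,
# using a uniform voxel grid as the occupancy test (illustrative only).

def voxel(p, size=0.5):
    """Quantise a 3-D point to its voxel-grid cell."""
    return tuple(int(c // size) for c in p)

def split_dynamic(static_model, observation, size=0.5):
    """Return (static_hits, dynamic_points) for a new observation."""
    occupied = {voxel(p, size) for p in static_model}
    static_hits = [p for p in observation if voxel(p, size) in occupied]
    dynamic = [p for p in observation if voxel(p, size) not in occupied]
    return static_hits, dynamic

wall = [(0.0, 0.0, 0.0), (0.0, 0.4, 0.0), (0.0, 0.8, 0.0)]   # static structure
person = [(2.0, 2.0, 0.0)]                                    # transient object
hits, dyn = split_dynamic(wall, wall + person)
print(len(hits), len(dyn))  # -> 3 1
```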