
Publication


Featured research published by Patric Jensfelt.


International Joint Conference on Artificial Intelligence | 2001

Active global localization for a mobile robot using multiple hypothesis tracking

Patric Jensfelt; Steen Kristensen

We present a probabilistic approach for mobile robot localization using an incomplete topological world model. The method, called the multi-hypothesis localization (MHL), uses multi-hypothesis Kalman filter based pose tracking combined with a probabilistic formulation of hypothesis correctness to generate and track Gaussian pose hypotheses online. Apart from a lower computational complexity, this approach has the advantage over traditional grid based methods that incomplete and topological world model information can be utilized. Furthermore, the method generates movement commands for the platform to enhance the gathering of information for the pose estimation process. Extensive experiments are presented from two different environments, a typical office environment and an old hospital building.
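The hypothesis cycle described above (predict with odometry, reweight by observation likelihood, renormalize, prune) can be sketched in a few lines. This is a deliberately simplified toy with scalar uncertainty; the class and function names, noise values, and the scalar treatment of the innovation variance are assumptions for illustration, far simpler than the paper's method.

```python
import math
from dataclasses import dataclass

@dataclass
class Hypothesis:
    """One Gaussian pose hypothesis: mean (x, y, theta), a scalar
    variance stand-in, and a probability of being the correct pose."""
    mean: tuple
    var: float
    prob: float

def predict(h, dx, dy, dtheta, motion_noise):
    """Prediction step: shift the mean by odometry, inflate uncertainty."""
    x, y, th = h.mean
    return Hypothesis((x + dx, y + dy, th + dtheta), h.var + motion_noise, h.prob)

def update(hyps, observed_range, expected_range, meas_var, prune_below=0.01):
    """Reweight each hypothesis by the Gaussian likelihood of a range
    observation, renormalize, and prune improbable hypotheses."""
    weighted = []
    for h in hyps:
        innovation = observed_range - expected_range(h.mean)
        s = h.var + meas_var  # innovation variance (scalar simplification)
        lik = math.exp(-0.5 * innovation ** 2 / s) / math.sqrt(2 * math.pi * s)
        weighted.append(Hypothesis(h.mean, h.var, h.prob * lik))
    total = sum(h.prob for h in weighted) or 1.0
    return [Hypothesis(h.mean, h.var, h.prob / total)
            for h in weighted if h.prob / total >= prune_below]
```

With two equally likely hypotheses and a range measurement consistent with only one of them, a single update concentrates nearly all probability mass on the consistent hypothesis and prunes the other.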


Robotics and Autonomous Systems | 2008

Conceptual spatial representations for indoor mobile robots

Hendrik Zender; O. Martinez Mozos; Patric Jensfelt; Geert-Jan M. Kruijff; Wolfram Burgard

We present an approach for creating conceptual representations of human-made indoor environments using mobile robots. The concepts refer to spatial and functional properties of typical indoor environments. Following different findings in spatial cognition, our model is composed of layers representing maps at different levels of abstraction. The complete system is integrated in a mobile robot endowed with laser and vision sensors for place and object recognition. The system also incorporates a linguistic framework that actively supports the map acquisition process, and which is used for situated dialogue. Finally, we discuss the capabilities of the integrated system.
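The layered idea above (geometry at the bottom, human-level concepts at the top, each layer abstracting the one below) can be sketched as a minimal data structure. The layer names, fields, and method names here are invented for illustration and do not reproduce the paper's actual representation.

```python
class LayeredMap:
    """Toy multi-layer spatial map: each layer abstracts the one below it."""
    def __init__(self):
        self.metric_lines = []  # raw geometry, e.g. line segments from laser
        self.nav_nodes = []     # (x, y) free-space nodes the robot can reach
        self.areas = {}         # area_id -> list of node ids (topological layer)
        self.concepts = {}      # area_id -> human-level label (conceptual layer)

    def add_nav_node(self, x, y):
        self.nav_nodes.append((x, y))
        return len(self.nav_nodes) - 1

    def group_area(self, area_id, node_ids):
        """Group navigation nodes into a topological area (e.g. one room)."""
        self.areas[area_id] = list(node_ids)

    def label_area(self, area_id, concept):
        """Attach a human concept (e.g. acquired in dialogue) to an area."""
        self.concepts[area_id] = concept

    def concept_at(self, node_id):
        """Look up the concept for the area containing a navigation node."""
        for area_id, nodes in self.areas.items():
            if node_id in nodes:
                return self.concepts.get(area_id)
        return None
```

Labeling one area "kitchen" then makes every navigation node inside that area resolvable to the concept, which is the kind of grounding situated dialogue needs.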


International Conference on Robotics and Automation | 2012

Large-scale semantic mapping and reasoning with heterogeneous modalities

Andrzej Pronobis; Patric Jensfelt

This paper presents a probabilistic framework for semantic mapping that combines heterogeneous, uncertain information such as object observations, the shape, size, and appearance of rooms, and human input. It abstracts multi-modal sensory information and integrates it with conceptual common-sense knowledge in a fully probabilistic fashion. It relies on the concept of spatial properties, which make the semantic map more descriptive and the system more scalable and better adapted for human interaction. A probabilistic graphical model, a chain graph, is used to represent the conceptual information and perform spatial reasoning. Experimental results from online system tests in a large unstructured office environment highlight the system's ability to infer semantic room categories, predict the existence of objects and the values of other spatial properties, and reason about unexplored space.
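At its simplest, the kind of conceptual inference described above amounts to combining room-category priors with object-observation likelihoods. The toy below does exact Bayesian inference over two room categories; the categories, objects, and all probability values are invented, and the paper's chain graph handles far richer dependencies.

```python
# Invented priors and likelihoods for a two-category toy example.
PRIORS = {"kitchen": 0.3, "office": 0.7}
P_OBJ_GIVEN_ROOM = {
    ("cereal_box", "kitchen"): 0.6, ("cereal_box", "office"): 0.05,
    ("monitor", "kitchen"): 0.05, ("monitor", "office"): 0.7,
}

def posterior(observed_objects):
    """Bayes rule over room categories given independent object observations."""
    scores = {}
    for room, prior in PRIORS.items():
        p = prior
        for obj in observed_objects:
            p *= P_OBJ_GIVEN_ROOM[(obj, room)]
        scores[room] = p
    z = sum(scores.values())
    return {room: p / z for room, p in scores.items()}
```

Observing a cereal box overturns the prior in favour of "kitchen", illustrating how object evidence drives room categorization.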


The International Journal of Robotics Research | 2010

Multi-modal Semantic Place Classification

Andrzej Pronobis; O. Martinez Mozos; Barbara Caputo; Patric Jensfelt

The ability to represent knowledge about space and its own position therein is crucial for a mobile robot. To this end, topological and semantic descriptions are gaining popularity for augmenting purely metric space representations. In this paper we present a multi-modal place classification system that allows a mobile robot to identify places and recognize semantic categories in an indoor environment. The system effectively utilizes information from different robotic sensors by fusing multiple visual cues and laser range data. This is achieved using a high-level cue integration scheme based on a Support Vector Machine (SVM) that learns how to optimally combine and weight each cue. Our multi-modal place classification approach can be used to obtain a real-time semantic space labeling system which integrates information over time and space. We perform an extensive experimental evaluation of the method for two different platforms and environments, on a realistic off-line database and in a live experiment on an autonomous robot. The results clearly demonstrate the effectiveness of our cue integration scheme and its value for robust place classification under varying conditions.
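The core of such a cue integration scheme is a learned weighting of per-cue classifier outputs. The sketch below learns the weights with simple perceptron updates rather than the SVM the paper uses, so it only illustrates the idea of weighting cues by their discriminative value; all names and the data format are assumptions.

```python
def fuse(margins, weights):
    """Weighted sum of per-cue signed margins; the sign is the fused class."""
    return sum(w * m for w, m in zip(weights, margins))

def learn_weights(validation_set, n_cues, epochs=50, lr=0.1):
    """Learn cue weights from (margins, label) pairs with perceptron updates.
    The paper learns the combination with an SVM; this simpler stand-in only
    illustrates weighting each cue by how useful its output is."""
    w = [1.0] * n_cues
    for _ in range(epochs):
        for margins, label in validation_set:  # label is -1 or +1
            if label * fuse(margins, w) <= 0:  # misclassified: adjust weights
                for i in range(n_cues):
                    w[i] += lr * label * margins[i]
    return w
```

On a toy validation set where the first cue is informative and the second is not, the learned weights favour the informative cue and classify every example correctly.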


IEEE Transactions on Robotics | 2008

Attentional Landmarks and Active Gaze Control for Visual SLAM

Simone Frintrop; Patric Jensfelt

This paper is centered around landmark detection, tracking, and matching for visual simultaneous localization and mapping using a monocular vision system with active gaze control. We present a system that specializes in creating and maintaining a sparse set of landmarks based on a biologically motivated feature-selection strategy. A visual attention system detects salient features that are highly discriminative and ideal candidates for visual landmarks that are easy to redetect. Features are tracked over several frames to determine stable landmarks and to estimate their 3-D position in the environment. Matching of current landmarks to database entries enables loop closing. Active gaze control allows us to overcome some of the limitations of using a monocular vision system with a relatively small field of view. It supports 1) the tracking of landmarks that enable a better pose estimation, 2) the exploration of regions without landmarks to obtain a better distribution of landmarks in the environment, and 3) the active redetection of landmarks to enable loop closing in situations in which a fixed camera fails to close the loop. Several real-world experiments show that accurate pose estimation is obtained with the presented system and that active camera control outperforms the passive approach.
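A recurring ingredient above is selecting a sparse, well-distributed set of salient landmarks. The greedy sketch below picks the most salient candidates subject to a minimum spatial separation; it is a hedged stand-in for the paper's biologically motivated attention system, and the data format and thresholds are invented.

```python
def select_landmarks(candidates, k, min_dist):
    """Greedily pick up to k of the most salient candidates while enforcing
    a minimum spatial separation, so landmarks cover the scene instead of
    clustering on one salient region. candidates: list of (saliency, (x, y))."""
    chosen = []
    for sal, pos in sorted(candidates, reverse=True):
        far_enough = all((pos[0] - p[0]) ** 2 + (pos[1] - p[1]) ** 2
                         >= min_dist ** 2 for _, p in chosen)
        if far_enough:
            chosen.append((sal, pos))
        if len(chosen) == k:
            break
    return chosen
```

A highly salient candidate right next to an already chosen one is skipped in favour of a slightly less salient but better separated candidate.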


Intelligent Robots and Systems | 2006

A Discriminative Approach to Robust Visual Place Recognition

Andrzej Pronobis; Barbara Caputo; Patric Jensfelt; Henrik I. Christensen

An important competence for a mobile robot system is the ability to localize and perform context interpretation, which is required for basic navigation and for location-specific services. Localization is usually performed with a purely geometric model; using vision and place recognition opens up a number of opportunities in terms of flexibility and in associating semantics with the model. To achieve this, we present an appearance-based method for place recognition. The method is based on a large-margin classifier in combination with a rich global image descriptor, and it is robust to variations in illumination and to minor scene changes. It is evaluated across several different cameras and across changes in time of day and weather conditions. The results clearly demonstrate the value of the approach.
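The appearance-based pipeline above reduces to two steps: compute a global descriptor of the whole image, then classify it. The sketch below uses a plain intensity histogram and a nearest-prototype decision, much weaker stand-ins for the paper's rich descriptor and large-margin classifier; all names and data formats are assumptions.

```python
def global_descriptor(image):
    """Toy global descriptor: normalized 8-bin intensity histogram over the
    whole image (image = list of rows of pixel values in 0..255)."""
    bins = [0] * 8
    pixels = [px for row in image for px in row]
    for px in pixels:
        bins[min(px * 8 // 256, 7)] += 1
    return [b / len(pixels) for b in bins]

def classify_place(image, prototypes):
    """Nearest-prototype decision over stored place descriptors. The paper
    uses a large-margin classifier; nearest-neighbour keeps the sketch small."""
    d = global_descriptor(image)
    def dist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(prototypes, key=lambda name: dist(d, prototypes[name]))
```

A dark test view lands closest to the prototype built from dark training views, so it is labeled with that place.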


International Conference on Robotics and Automation | 2006

A framework for vision based bearing only 3D SLAM

Patric Jensfelt; Danica Kragic; John Folkesson; Mårten Björkman

This paper presents a framework for 3D vision-based bearing-only SLAM using a single camera, an interesting setup for many real applications due to its low cost. The focus is on the management of the features to achieve real-time performance in extraction, matching, and loop detection. For matching image features to map landmarks, a modified, rotationally variant SIFT descriptor is used in combination with a Harris-Laplace detector. To reduce the complexity of the map estimation while maintaining matching performance, only a few high-quality image features are used as map landmarks; the rest of the features are used for matching. The framework has been combined with an EKF implementation for SLAM. Experiments performed in indoor environments are presented. They demonstrate the validity and effectiveness of the approach and, in particular, show how the robot is able to successfully match current image features to the map when revisiting an area.
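The feature-management strategy described above (a few high-quality features in the filter state, the rest kept only for matching) can be sketched as a simple partition. The dict fields `id` and `quality` and the parameter names are invented for the sketch.

```python
def partition_features(features, max_landmarks, quality_threshold):
    """Split extracted image features into a small set of high-quality map
    landmarks (kept in the filter state, so the state stays small) and a
    larger pool used only for matching."""
    ranked = sorted(features, key=lambda f: f["quality"], reverse=True)
    landmarks = [f for f in ranked if f["quality"] >= quality_threshold]
    landmarks = landmarks[:max_landmarks]
    landmark_ids = {f["id"] for f in landmarks}
    matching_pool = [f for f in features if f["id"] not in landmark_ids]
    return landmarks, matching_pool
```

Keeping `max_landmarks` small bounds the cost of the EKF update, which is what makes real-time operation feasible.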


International Conference on Robotics and Automation | 2001

Pose tracking using laser scanning and minimalistic environmental models

Patric Jensfelt; Henrik I. Christensen

Keeping track of the position and orientation over time using sensor data, i.e., pose tracking, is a central component in many mobile robot systems. In this paper, we present a Kalman filter-based approach utilizing a minimalistic environmental model. By continuously updating the pose, matching the sensor data to the model is straightforward, and outliers can be filtered out effectively by validation gates. The minimalistic model paves the way for a low-complexity algorithm with a high degree of robustness and accuracy. Robustness here refers both to being able to track the pose for a long time and to handling changes and clutter in the environment; it is gained by the minimalistic model capturing only the stable, large-scale features of the environment. The effectiveness of the pose tracking is demonstrated through a number of experiments, including a 90-minute run, which clearly establishes the robustness of the method.
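The validation-gate idea mentioned above is small enough to show directly: a measurement is accepted only if its innovation, normalized by the innovation variance, falls inside a chi-square-style gate. This scalar sketch uses invented values; the paper applies gates inside a full Kalman filter.

```python
def filter_outliers(measurements, predicted, innovation_var, gate=9.0):
    """Validation gate: keep a scalar measurement only if its squared,
    normalized innovation is below the gate threshold (9.0 corresponds to
    roughly a 3-sigma gate for a one-dimensional measurement)."""
    return [z for z in measurements
            if (z - predicted) ** 2 / innovation_var <= gate]
```

A range reading far outside the gate, such as one caused by a person walking through the scan, is simply dropped before the filter update.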


International Journal of Advanced Robotic Systems | 2007

Situated Dialogue and Spatial Organization: What, Where and Why?

Geert-Jan M. Kruijff; Hendrik Zender; Patric Jensfelt; Henrik I. Christensen

The paper presents an HRI architecture for human-augmented mapping, which has been implemented and tested on an autonomous mobile robotic platform. Through interaction with a human, the robot can augment its autonomously acquired metric map with qualitative information about locations and objects in the environment. The system implements various interaction strategies observed in independently performed Wizard-of-Oz studies. The paper discusses an ontology-based approach to multi-layered conceptual spatial mapping that provides a common ground for human-robot dialogue. This is achieved by combining acquired knowledge with innate conceptual common-sense knowledge in order to infer new knowledge. The architecture bridges the gap between the rich semantic representations of the meaning expressed by verbal utterances on the one hand and the robot's internal sensor-based world representation on the other. It is thus possible to establish references to spatial areas in a situated dialogue between a human and a robot about their environment. The resulting conceptual descriptions represent qualitative knowledge about locations in the environment that can serve as a basis for achieving a notion of situational awareness.


International Conference on Robotics and Automation | 2005

Vision SLAM in the Measurement Subspace

John Folkesson; Patric Jensfelt; Henrik I. Christensen

In this paper we describe an approach to feature representation for simultaneous localization and mapping (SLAM). It is a general representation for features that addresses symmetries and constraints in the feature coordinates. Furthermore, the representation allows features to be added to the map with partial initialization. This is an important property when using oriented vision features, where angle information can be used before the full pose is known. The number of dimensions of a feature can grow over time as more information is acquired. While the special properties of each type of feature are accounted for, the commonalities of all map features are also exploited, allowing SLAM algorithms to be interchanged, as well as the choice of sensors and features; in other words, the SLAM implementation need not be changed when changing sensors and features, and vice versa. Experimental results with vision and range data, and combinations thereof, are presented.
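The partial-initialization idea above can be sketched as a feature whose state grows dimension by dimension as information becomes observable. The class, method names, and dimension names are invented for illustration and do not reproduce the paper's measurement-subspace formulation.

```python
from dataclasses import dataclass, field

@dataclass
class MapFeature:
    """A map feature whose parameterization grows as information arrives:
    an oriented visual feature can contribute its angle to the map before
    its full position is observable."""
    feature_id: int
    dims: dict = field(default_factory=dict)  # dimension name -> estimate

    def add_dimension(self, name, value):
        """Extend the feature's state once a new dimension becomes observable."""
        self.dims[name] = value

    def is_fully_initialized(self, required=("angle", "x", "y")):
        return all(name in self.dims for name in required)
```

A feature created from a single view carries only its angle; once later views pin down its position, the remaining dimensions are added and it becomes a fully initialized landmark.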

Collaboration


Dive into Patric Jensfelt's collaborations.

Top Co-Authors

John Folkesson, Royal Institute of Technology
Henrik I. Christensen, Georgia Institute of Technology
Alper Aydemir, Jet Propulsion Laboratory
Kristoffer Sjöö, Royal Institute of Technology
Danica Kragic, Royal Institute of Technology
Nick Hawes, University of Birmingham
Rares Ambrus, Royal Institute of Technology
Nils Bore, Royal Institute of Technology