Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Nils Bore is active.

Publication


Featured research published by Nils Bore.


Intelligent Robots and Systems | 2014

Meta-rooms: Building and Maintaining Long Term Spatial Models in a Dynamic World

Rares Ambrus; Nils Bore; John Folkesson; Patric Jensfelt

We present a novel method for re-creating the static structure of cluttered office environments - which we define as the “meta-room” - from multiple observations collected by an autonomous robot equipped with an RGB-D depth camera over extended periods of time. Our method works directly with point clusters by identifying what has changed from one observation to the next, removing the dynamic elements and at the same time adding previously occluded objects to reconstruct the underlying static structure as accurately as possible. The process of constructing the meta-rooms is iterative and it is designed to incorporate new data as it becomes available, as well as to be robust to environment changes. The latest estimate of the meta-room is used to differentiate and extract clusters of dynamic objects from observations. In addition, we present a method for re-identifying the extracted dynamic objects across observations, thus mapping their spatial behaviour over extended periods of time.
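
The update loop described above can be pictured as a nearest-neighbour differencing step between the current meta-room estimate and each new registered observation. Below is a minimal Python sketch of such a step, assuming already-registered Nx3 point arrays; the function names, the support radius, and the naive handling of occlusion are illustrative assumptions, not the paper's implementation.

```python
# Minimal sketch of a meta-room-style update, assuming registered Nx3 point clouds
# and a plain nearest-neighbour support test (the real system additionally reasons
# about occlusion and free space; names and thresholds here are illustrative).
import numpy as np
from scipy.spatial import cKDTree

def nn_supported(points: np.ndarray, reference: np.ndarray, radius: float) -> np.ndarray:
    """Boolean mask: which points have a neighbour in `reference` within `radius`."""
    dist, _ = cKDTree(reference).query(points, k=1)
    return dist <= radius

def update_meta_room(meta_room: np.ndarray, observation: np.ndarray, radius: float = 0.05):
    """One iteration of refining the static model from a new observation."""
    # Old points that are re-observed are kept as static structure; old points with
    # no support in the new view are treated as removed, i.e. they were dynamic.
    kept = meta_room[nn_supported(meta_room, observation, radius)]
    # New points with no counterpart in the old model are either previously occluded
    # static structure or dynamic objects; a real implementation disambiguates the
    # two (e.g. with ray casting) before adding them to the meta-room.
    novel = observation[~nn_supported(observation, meta_room, radius)]
    return np.vstack([kept, novel]), novel

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    static = rng.uniform(0.0, 5.0, size=(2000, 3))      # stand-in static structure
    clutter = rng.uniform(0.0, 5.0, size=(100, 3))      # stand-in dynamic clutter
    observation = np.vstack([static + rng.normal(0.0, 0.005, static.shape), clutter])
    meta_room, novel = update_meta_room(static, observation)
    print(meta_room.shape, novel.shape)
```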


IEEE Robotics & Automation Magazine | 2017

The STRANDS Project: Long-Term Autonomy in Everyday Environments

Nick Hawes; Christopher Burbridge; Ferdian Jovan; Lars Kunze; Bruno Lacerda; Lenka Mudrová; Jay Young; Jeremy L. Wyatt; Denise Hebesberger; Tobias Körtner; Rares Ambrus; Nils Bore; John Folkesson; Patric Jensfelt; Lucas Beyer; Alexander Hermans; Bastian Leibe; Aitor Aldoma; Thomas Faulhammer; Michael Zillich; Markus Vincze; Eris Chinellato; Muhannad Al-Omari; Paul Duckworth; Yiannis Gatsoulis; David C. Hogg; Anthony G. Cohn; Christian Dondrup; Jaime Pulido Fentanes; Tomas Krajnik

Thanks to the efforts of the robotics and autonomous systems community, the myriad applications and capacities of robots are ever increasing. There is increasing demand from end users for autonomous service robots that can operate in real environments for extended periods. In the Spatiotemporal Representations and Activities for Cognitive Control in Long-Term Scenarios (STRANDS) project (http://strandsproject.eu), we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots and deploying these systems for long-term installations in security and care environments. Our robots have been operational for a combined duration of 104 days over four deployments, autonomously performing end-user-defined tasks and traversing 116 km in the process. In this article, we describe the approach we used to enable long-term autonomous operation in everyday environments and how our robots are able to use their long run times to improve their own performance.


Robotics and Autonomous Systems | 2017

Efficient retrieval of arbitrary objects from long-term robot observations

Nils Bore; Rares Ambrus; Patric Jensfelt; John Folkesson

We present a novel method for efficient querying and retrieval of arbitrarily shaped objects from large amounts of unstructured 3D point cloud data. Our approach first performs a convex segmentation of the data, after which local features are extracted and stored in a feature dictionary. We show that the representation allows efficient and reliable querying of the data. To handle arbitrarily shaped objects, we propose a scheme which allows incremental matching of segments based on similarity to the query object. Further, we adjust the feature metric based on the quality of the query results to improve results in a second round of querying. We perform extensive qualitative and quantitative experiments on two datasets for both segmentation and retrieval, validating the results using ground truth data. Comparison with other state-of-the-art methods further reinforces the validity of the proposed method. Finally, we also investigate how the density and distribution of the local features within the point clouds influence the quality of the results.
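
As a rough illustration of the feature-dictionary idea, the sketch below quantises local descriptors into a learned vocabulary and ranks pre-segmented entries by histogram similarity. The vocabulary size, scoring and class names are assumptions made for illustration; the paper's actual segmentation, features and metric adjustment are not reproduced here.

```python
# Sketch of feature-dictionary retrieval over pre-segmented data, assuming each
# segment is represented by a set of local descriptors (rows of an array). The
# vocabulary size, scoring and names are illustrative, not the paper's code.
import numpy as np
from sklearn.cluster import KMeans

class SegmentIndex:
    def __init__(self, n_words: int = 64, seed: int = 0):
        self.n_words = n_words
        self.kmeans = KMeans(n_clusters=n_words, n_init=10, random_state=seed)
        self.histograms = []          # one normalised word histogram per indexed segment

    def _histogram(self, descriptors: np.ndarray) -> np.ndarray:
        words = self.kmeans.predict(descriptors)
        hist = np.bincount(words, minlength=self.n_words).astype(float)
        return hist / (np.linalg.norm(hist) + 1e-12)

    def fit(self, segments: list) -> "SegmentIndex":
        self.kmeans.fit(np.vstack(segments))             # learn the feature dictionary
        self.histograms = [self._histogram(s) for s in segments]
        return self

    def query(self, descriptors: np.ndarray, top_k: int = 5):
        q = self._histogram(descriptors)
        scores = np.array([h @ q for h in self.histograms])   # cosine similarity
        ranked = np.argsort(-scores)[:top_k]
        return list(zip(ranked.tolist(), scores[ranked].tolist()))

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    # Fake local descriptors for 20 segments drawn from a few different "shapes".
    segments = [rng.normal(loc=i % 4, scale=1.0, size=(200, 32)) for i in range(20)]
    index = SegmentIndex().fit(segments)
    print(index.query(segments[3]))                       # segment 3 should rank highly
```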


Intelligent Robots and Systems | 2017

Autonomous meshing, texturing and recognition of object models with a mobile robot

Rares Ambrus; Nils Bore; John Folkesson; Patric Jensfelt

We present a system for creating object models from RGB-D views acquired autonomously by a mobile robot. We create high-quality textured meshes of the objects by approximating the underlying geometry with a Poisson surface. Our system employs two optimization steps, first registering the views spatially based on image features, and second aligning the RGB images to maximize photometric consistency with respect to the reconstructed mesh. We show that the resulting models can be used robustly for recognition by training a Convolutional Neural Network (CNN) on images rendered from the reconstructed meshes. We perform experiments on data collected autonomously by a mobile robot both in controlled and uncontrolled scenarios. We compare quantitatively and qualitatively to previous work to validate our approach.
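
The meshing step can be approximated with off-the-shelf Poisson surface reconstruction. The sketch below uses Open3D as a stand-in, assuming a fused point cloud of the spatially registered views is available as a hypothetical file object_views.pcd; view registration, texturing and CNN training are not shown.

```python
# Sketch of the meshing step only, using Open3D's Poisson reconstruction as a
# stand-in; "object_views.pcd" is a hypothetical fused cloud of the registered
# views, and the parameters are illustrative.
import numpy as np
import open3d as o3d

def poisson_mesh(pcd: o3d.geometry.PointCloud, depth: int = 9) -> o3d.geometry.TriangleMesh:
    """Approximate the underlying object geometry with a Poisson surface."""
    pcd.estimate_normals(
        search_param=o3d.geometry.KDTreeSearchParamHybrid(radius=0.02, max_nn=30))
    mesh, densities = o3d.geometry.TriangleMesh.create_from_point_cloud_poisson(
        pcd, depth=depth)
    # Drop low-density vertices, which usually correspond to poorly observed regions.
    dens = np.asarray(densities)
    mesh.remove_vertices_by_mask(dens < np.quantile(dens, 0.05))
    return mesh

if __name__ == "__main__":
    cloud = o3d.io.read_point_cloud("object_views.pcd")
    o3d.io.write_triangle_mesh("object_mesh.ply", poisson_mesh(cloud))
```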


International Conference on Computer Vision Systems | 2015

Querying 3D Data by Adjacency Graphs

Nils Bore; Patric Jensfelt; John Folkesson

The need for robots to search the 3D data they have saved is becoming more apparent. We present an approach for finding structures in 3D models such as those built by robots of their environment. The method extracts geometric primitives from point cloud data. An attributed graph over these primitives forms our representation of the surface structures. Recurring substructures are found with frequent graph mining techniques. We investigate whether a model that is invariant to changes in size and reflection, and that uses only the geometric information of and between primitives, can be discriminative enough for practical use. Experiments confirm that it can be used to support queries of 3D models.
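
A toy version of the attributed-graph representation is sketched below: nodes are geometric primitives, edges carry the (reflection-invariant) angle between their normals, and a query is answered with subgraph matching as a simple stand-in for frequent graph mining. The attributes and the tiny example scene are invented for illustration.

```python
# Toy attributed adjacency graph over geometric primitives, queried with subgraph
# matching as a simple stand-in for frequent-graph-mining based search. Attributes
# and the example scene are invented for illustration.
import math
import networkx as nx
from networkx.algorithms import isomorphism

def build_primitive_graph(primitives, adjacencies):
    """primitives: dicts with a 'type' and a unit 'normal'; adjacencies: index pairs."""
    g = nx.Graph()
    for i, p in enumerate(primitives):
        g.add_node(i, type=p["type"])
    for i, j in adjacencies:
        dot = abs(sum(a * b for a, b in zip(primitives[i]["normal"], primitives[j]["normal"])))
        # abs() makes the edge attribute invariant to reflections of the normals.
        g.add_edge(i, j, angle=math.degrees(math.acos(min(1.0, dot))))
    return g

def query(scene: nx.Graph, pattern: nx.Graph, angle_tol: float = 10.0):
    gm = isomorphism.GraphMatcher(
        scene, pattern,
        node_match=lambda a, b: a["type"] == b["type"],
        edge_match=lambda a, b: abs(a["angle"] - b["angle"]) <= angle_tol)
    return list(gm.subgraph_isomorphisms_iter())

if __name__ == "__main__":
    # A toy "scene": a floor plane adjacent to two mutually perpendicular wall planes.
    prims = [{"type": "plane", "normal": (0, 0, 1)},
             {"type": "plane", "normal": (1, 0, 0)},
             {"type": "plane", "normal": (0, 1, 0)}]
    scene = build_primitive_graph(prims, [(0, 1), (0, 2), (1, 2)])
    pattern = build_primitive_graph(prims[:2], [(0, 1)])   # query: two perpendicular planes
    print(query(scene, pattern))
```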


European Conference on Mobile Robots | 2015

Retrieval of arbitrary 3D objects from robot observations

Nils Bore; Patric Jensfelt; John Folkesson

We have studied the problem of retrieval of arbitrary object instances from a large point cloud data set. The context is autonomous robots operating for long periods of time, weeks up to months, and regularly saving point cloud data. The ever-growing collection of data is stored in a way that allows ranking candidate examples of any query object, given in the form of a single view point cloud, without the need to access the original data. The top-ranked ones can then be compared in a second phase using the point clouds themselves. Our method does not assume that the point clouds are segmented or that the objects to be queried are known ahead of time. This means that we are able to represent the entire environment, but it also poses problems for retrieval. To overcome this, our approach learns from each actual query to improve search results in terms of the ranking. This learning is automatic and based only on the queries. We demonstrate our system on data collected autonomously by a robot operating over 13 days in our building. Comparisons with other techniques and several variations of our method are shown.
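
The query-driven learning can be illustrated with a simple pseudo-relevance-feedback loop that re-weights feature dimensions after each query. The weighting rule and names below are assumptions, meant only to show how a ranking metric can adapt from queries alone, not the paper's exact method.

```python
# Sketch of query-driven metric adaptation via pseudo-relevance feedback (a
# Rocchio-style re-weighting); the weighting rule and names are illustrative
# assumptions, not the paper's exact method.
import numpy as np

def rank(query: np.ndarray, database: np.ndarray, weights: np.ndarray) -> np.ndarray:
    """Rank database rows by weighted squared distance to the query (best first)."""
    return np.argsort(((database - query) ** 2 * weights).sum(axis=1))

def update_weights(database, weights, ranking, top_k=5, lr=0.5):
    """Emphasise dimensions on which the top-ranked results (assumed relevant) agree,
    so the next round of querying uses a metric adapted to this kind of object."""
    spread = database[ranking[:top_k]].var(axis=0) + 1e-6
    new_w = (1.0 - lr) * weights + lr / spread
    return new_w / new_w.sum() * new_w.size      # keep the overall scale fixed

if __name__ == "__main__":
    rng = np.random.default_rng(2)
    db = rng.normal(size=(500, 16))              # stand-in per-object feature vectors
    q = db[42] + rng.normal(0.0, 0.1, 16)        # noisy single-view query
    w = np.ones(16)
    first = rank(q, db, w)
    w = update_weights(db, w, first)             # adapt the metric from the first round
    second = rank(q, db, w)
    print(first[:5], second[:5])
```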


International Joint Conference on Artificial Intelligence | 2017

Grounding of Human Environments and Activities for Autonomous Robots

Muhannad Al-Omari; Paul Duckworth; Nils Bore; Majd Hawasly; David C. Hogg; Anthony G. Cohn

With the recent proliferation of robotic applications in domestic and industrial scenarios, it is vital for robots to continually learn about their environments and about the humans they share their environments with. In this paper, we present a framework for autonomous, unsupervised learning from various sensory sources of useful human ‘concepts’, including colours, people's names, usable objects and simple activities. This is achieved by integrating state-of-the-art object segmentation, pose estimation, activity analysis and language grounding into a continual learning framework. Learned concepts are grounded to natural language if commentary is available, allowing the robot to communicate in a human-understandable way. We show, using a challenging, real-world dataset of human activities, that our framework is able to extract useful concepts, ground natural language descriptions to them, and, as a proof-of-concept, to generate simple sentences from templates to describe people and activities.
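
As a small illustration of the proof-of-concept sentence generation mentioned above, the sketch below fills fixed templates with grounded concept labels; the concept schema and templates are invented for this example.

```python
# Toy template-based sentence generation from grounded concepts, illustrating the
# proof-of-concept output described above; the concept schema and templates are
# invented for this example.
from dataclasses import dataclass

@dataclass
class GroundedObservation:
    person: str     # person name grounded from commentary, e.g. "Anna"
    activity: str   # learned activity concept, e.g. "making coffee"
    obj: str        # usable object involved in the activity
    colour: str     # grounded colour word for the object

TEMPLATES = [
    "{person} is {activity} using the {colour} {obj}.",
    "The {colour} {obj} is being used by {person}, who is {activity}.",
]

def describe(obs: GroundedObservation) -> list:
    slots = dict(person=obs.person, activity=obs.activity, obj=obs.obj, colour=obs.colour)
    return [template.format(**slots) for template in TEMPLATES]

if __name__ == "__main__":
    for sentence in describe(GroundedObservation("Anna", "making coffee", "mug", "red")):
        print(sentence)
```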


arXiv: Robotics | 2017

Unsupervised Object Discovery and Segmentation of RGBD-images.

Johan Ekekrantz; Nils Bore; Rares Ambrus; John Folkesson; Patric Jensfelt


arXiv: Robotics | 2017

Detection and Tracking of General Movable Objects in Large 3D Maps.

Nils Bore; Johan Ekekrantz; Patric Jensfelt; John Folkesson


arXiv: Robotics | 2018

Multiple Object Detection, Tracking and Long-Term Dynamics Learning in Large 3D Maps.

Nils Bore; Patric Jensfelt; John Folkesson

Collaboration


Dive into Nils Bore's collaborations.

Top Co-Authors

John Folkesson
Royal Institute of Technology

Patric Jensfelt
Royal Institute of Technology

Rares Ambrus
Royal Institute of Technology

Johan Ekekrantz
Royal Institute of Technology

Bruno Lacerda
University of Birmingham