Rares Ambrus
Royal Institute of Technology
Publications
Featured research published by Rares Ambrus.
Intelligent Robots and Systems | 2014
Rares Ambrus; Nils Bore; John Folkesson; Patric Jensfelt
We present a novel method for re-creating the static structure of cluttered office environments - which we define as the “meta-room” - from multiple observations collected by an autonomous robot equipped with an RGB-D depth camera over extended periods of time. Our method works directly with point clusters by identifying what has changed from one observation to the next, removing the dynamic elements and at the same time adding previously occluded objects to reconstruct the underlying static structure as accurately as possible. The process of constructing the meta-rooms is iterative and is designed to incorporate new data as it becomes available, as well as to be robust to environment changes. The latest estimate of the meta-room is used to differentiate and extract clusters of dynamic objects from observations. In addition, we present a method for re-identifying the extracted dynamic objects across observations, thus mapping their spatial behaviour over extended periods of time.
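The change-detection step described above can be sketched as a nearest-neighbour comparison between two registered point clouds: observation points with no close neighbour in the current meta-room estimate are treated as dynamic. This is a minimal toy illustration with NumPy, assuming pre-registered clouds; the threshold value and function names are assumptions, and the paper's occlusion reasoning is omitted here.

```python
import numpy as np

def detect_changes(reference, observation, threshold=0.05):
    """Label points in `observation` as dynamic if they have no close
    neighbour in `reference` (both are (N, 3) arrays of registered points).

    Returns a boolean mask: True where the point is dynamic (changed).
    """
    # Brute-force nearest-neighbour distances; a k-d tree would be used in practice.
    diffs = observation[:, None, :] - reference[None, :, :]
    dists = np.sqrt((diffs ** 2).sum(axis=2)).min(axis=1)
    return dists > threshold

def split_observation(meta_room, observation, threshold=0.05):
    """Split an observation into points matching the current static estimate
    and points that changed (candidate dynamic objects)."""
    dynamic = detect_changes(meta_room, observation, threshold)
    return observation[~dynamic], observation[dynamic]

# Toy example: a "wall" that persists across observations and a box that appeared.
wall = np.array([[0.0, y, 0.0] for y in np.linspace(0, 1, 20)])
box = np.array([[0.5, 0.5, 0.5]])
obs = np.vstack([wall, box])
static, dynamic = split_observation(wall, obs)
```

Iterating this over successive observations, and merging the static parts, is the essence of the meta-room update loop.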
Intelligent Robots and Systems | 2008
Yashodhan Nevatia; Todor Stoyanov; Ravi Rathnam; Max Pfingsthorn; Stefan Markov; Rares Ambrus; Andreas Birk
Exploration of unknown environments remains one of the fundamental problems of mobile robotics. It is also a prime example for a task that can benefit significantly from multi-robot teams. We present an integrated system for semi-autonomous cooperative exploration, augmented by an intuitive user interface for efficient human supervision and control. In this preliminary study we demonstrate the effectiveness of the system as a whole and the intuitive interface in particular. Congruent with previous findings, results confirm that having a human in the loop improves task performance, especially with larger numbers of robots. Specific to our interface, we find that even untrained operators can efficiently manage a decently sized team of robots.
IEEE Robotics & Automation Magazine | 2017
Nick Hawes; Christopher Burbridge; Ferdian Jovan; Lars Kunze; Bruno Lacerda; Lenka Mudrová; Jay Young; Jeremy L. Wyatt; Denise Hebesberger; Tobias Körtner; Rares Ambrus; Nils Bore; John Folkesson; Patric Jensfelt; Lucas Beyer; Alexander Hermans; Bastian Leibe; Aitor Aldoma; Thomas Faulhammer; Michael Zillich; Markus Vincze; Eris Chinellato; Muhannad Al-Omari; Paul Duckworth; Yiannis Gatsoulis; David C. Hogg; Anthony G. Cohn; Christian Dondrup; Jaime Pulido Fentanes; Tomas Krajnik
Thanks to the efforts of the robotics and autonomous systems community, the myriad applications and capacities of robots are ever increasing. There is increasing demand from end users for autonomous service robots that can operate in real environments for extended periods. In the Spatiotemporal Representations and Activities for Cognitive Control in Long-Term Scenarios (STRANDS) project (http://strandsproject.eu), we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots and deploying these systems for long-term installations in security and care environments. Our robots have been operational for a combined duration of 104 days over four deployments, autonomously performing end-user-defined tasks and traversing 116 km in the process. In this article, we describe the approach we used to enable long-term autonomous operation in everyday environments and how our robots are able to use their long run times to improve their own performance.
International Conference on Robotics and Automation | 2017
Thomas Faulhammer; Rares Ambrus; Christopher Burbridge; Michael Zillich; John Folkesson; Nick Hawes; Patric Jensfelt; Markus Vincze
In this article, we present and evaluate a system that allows a mobile robot to autonomously detect, model, and re-recognize objects in everyday environments. While other systems have demonstrated one of these elements, to our knowledge ours is the first system capable of doing all of them, without human interaction, in normal indoor scenes. Our system detects objects to learn by modeling the static part of the environment and extracting dynamic elements. It then creates and executes a view plan around a dynamic element to gather additional views for learning. Finally, these views are fused to create an object model. The performance of the system is evaluated on publicly available datasets as well as on data collected by the robot in both controlled and uncontrolled scenarios.
International Conference on Robotics and Automation | 2015
Tomas Krajnik; Miroslav Kulich; Lenka Mudrová; Rares Ambrus; Tom Duckett
We present a novel approach to mobile robot search for non-stationary objects in partially known environments. We formulate the search as a path planning problem in an environment where the probability of object occurrences at particular locations is a function of time. We propose to explicitly model the dynamics of the object occurrences by their frequency spectra. Using this spectral model, our path planning algorithm can construct plans that reflect the likelihoods of object locations at the time the search is performed. Three datasets collected over several months containing person and object occurrences in residential and office environments were chosen to evaluate the approach. Several types of spatio-temporal models were created for each of these datasets and the efficiency of the search method was assessed by measuring the time it took to locate a particular object. The results indicate that modeling the dynamics of object occurrences reduces the search time by 25% to 65% compared to maps that neglect these dynamics.
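The spectral modelling idea can be illustrated by fitting the dominant frequency components of a binary occurrence sequence and using them to predict the occurrence probability at a chosen time. This is a generic sketch of the frequency-spectrum approach, not the authors' implementation; the function names and component count are assumptions.

```python
import numpy as np

def fit_spectral_model(occurrences, n_components=2):
    """Fit the dominant frequency components of a binary occurrence series.

    occurrences: 1D array of 0/1 samples taken at regular intervals.
    Returns the mean occupancy plus the `n_components` strongest
    (frequency, complex coefficient) pairs.
    """
    n = len(occurrences)
    spectrum = np.fft.rfft(occurrences) / n
    mean = spectrum[0].real
    mags = np.abs(spectrum[1:])                      # skip the DC term
    top = np.argsort(mags)[::-1][:n_components] + 1  # strongest frequencies
    return mean, [(k / n, 2 * spectrum[k]) for k in top]

def predict(model, t):
    """Predicted occurrence probability at (continuous) time t, clipped to [0, 1]."""
    mean, comps = model
    p = mean + sum((c * np.exp(2j * np.pi * f * t)).real for f, c in comps)
    return float(np.clip(p, 0.0, 1.0))

# Toy example: an object present during the first half of each 24-sample "day",
# observed for one week.
day = [1] * 12 + [0] * 12
series = np.array(day * 7)
model = fit_spectral_model(series)
```

A search planner can then rank candidate locations by `predict(model, t)` for the time the search will actually take place, which is what gives the time-aware plans their advantage over static maps.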
Intelligent Robots and Systems | 2014
Zhan Wang; Rares Ambrus; Patric Jensfelt; John Folkesson
This paper presents a novel approach to modeling the motion patterns of dynamic objects, such as people and vehicles, in the environment using the occupancy grid map representation. To reflect the ever-changing nature of the motion patterns of dynamic objects, we model each occupancy grid cell with an IOHMM, an inhomogeneous variant of the HMM. This distinguishes our work from existing methods that use the conventional HMM, which assumes motion evolving according to a stationary process. By introducing observations of neighboring cells in the previous time step as input to the IOHMM, the transition probabilities in our model depend on the occurrence of events in the cell's neighborhood. This enables our method to model the spatial correlation of dynamics across cells. A sequence processing example is used to illustrate the advantage of our model over conventional HMM-based methods. Results from experiments in an office corridor environment demonstrate that our method is capable of capturing the dynamics of such human living environments.
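The difference between a stationary HMM and the input-conditioned variant can be illustrated with a single grid cell whose transition matrix is selected by whether a neighbouring cell was occupied at the previous step. This is a toy sketch of the idea only; the probability values are invented for illustration and are not from the paper.

```python
import numpy as np

# Hidden states of one grid cell: index 0 = free, 1 = occupied.
# The transition matrix depends on an input signal: here, whether a
# neighbouring cell was occupied at the previous time step.
TRANSITIONS = {
    0: np.array([[0.95, 0.05],    # neighbour free: occupancy rarely appears
                 [0.30, 0.70]]),
    1: np.array([[0.60, 0.40],    # neighbour occupied: motion tends to spread in
                 [0.10, 0.90]]),
}

# Observation model: rows = hidden state, cols = measurement (0 or 1).
EMISSIONS = np.array([[0.9, 0.1],
                      [0.2, 0.8]])

def forward_step(belief, neighbour_input, measurement):
    """One step of the forward algorithm with input-dependent transitions."""
    predicted = belief @ TRANSITIONS[neighbour_input]
    updated = predicted * EMISSIONS[:, measurement]
    return updated / updated.sum()

belief = np.array([0.5, 0.5])
# The same measurement yields different posteriors depending on neighbour
# activity, which a single stationary transition matrix cannot express.
b_active = forward_step(belief, neighbour_input=1, measurement=1)
b_quiet = forward_step(belief, neighbour_input=0, measurement=1)
```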
Intelligent Robots and Systems | 2015
Rares Ambrus; Johan Ekekrantz; John Folkesson; Patric Jensfelt
We present a novel method for clustering segmented dynamic parts of indoor RGB-D scenes across repeated observations by performing an analysis of their spatial-temporal distributions. We segment areas of interest in the scene using scene differencing for change detection. We extend the Meta-Room method and evaluate the performance on a complex dataset acquired autonomously by a mobile robot over a period of 30 days. We use an initial clustering method to group the segmented parts based on appearance and shape, and we further combine the clusters we obtain by analyzing their spatial-temporal behaviors. We show that using the spatial-temporal information further increases the matching accuracy.
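The idea of refining appearance-based clusters with temporal behaviour can be sketched as merging clusters whose binary occurrence patterns across repeated observations are strongly correlated: two appearance clusters that always appear and disappear together likely belong to the same object. The threshold and correlation measure below are assumptions for illustration, not the paper's method.

```python
import numpy as np

def merge_by_temporal_behaviour(clusters, occurrence, threshold=0.8):
    """Greedily merge clusters whose occurrence patterns across repeated
    observations are highly correlated.

    clusters: list of cluster names.
    occurrence: dict name -> binary vector, 1 if the cluster was observed
                on that visit.
    Returns a list of merged groups (lists of names).
    """
    groups = [[c] for c in clusters]
    merged = True
    while merged:
        merged = False
        for i in range(len(groups)):
            for j in range(i + 1, len(groups)):
                a = np.mean([occurrence[c] for c in groups[i]], axis=0)
                b = np.mean([occurrence[c] for c in groups[j]], axis=0)
                if np.corrcoef(a, b)[0, 1] > threshold:
                    groups[i] += groups.pop(j)
                    merged = True
                    break
            if merged:
                break
    return groups

# Toy data: two appearance clusters of the same cup (split e.g. by lighting)
# co-occur across 8 visits; the chair follows a different pattern.
occ = {
    "cup_a": np.array([1, 1, 0, 0, 1, 1, 0, 0]),
    "cup_b": np.array([1, 1, 0, 0, 1, 1, 0, 0]),
    "chair": np.array([0, 0, 1, 1, 0, 0, 1, 1]),
}
groups = merge_by_temporal_behaviour(list(occ), occ)
```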
International Conference on Control, Automation, Robotics and Vision | 2014
Akshaya Thippur; Rares Ambrus; Gaurav Agrawal; Adria Gallart del Burgo; Janardhan Haryadi Ramesh; Mayank Kumar Jha; Malepati Bala Siva Sai Akhil; Nishan Bhavanishankar Shetty; John Folkesson; Patric Jensfelt
Long-term autonomous learning of human environments entails modelling and generalizing over distinct variations in object instances across scenes, and across scenes themselves with respect to space and time. It is crucial for the robot to recognize the structure and context of spatial arrangements and exploit these to learn models which capture the essence of these distinct variations. Table-tops possess a typical structure repeatedly seen in human environments: they are personal spaces of diverse functionality that change dynamically through human interaction. In this paper, we present a 3D dataset of 20 office table-tops manually observed and scanned 3 times a day as regularly as possible over 19 days (461 scenes) and subsequently manually annotated with 18 different object classes, including multiple instances. We analyse the dataset to discover spatial structures and patterns in their variations. The dataset can, for example, be used to study the spatial relations between objects and long-term environment models for applications such as activity recognition, context and functionality estimation, and anomaly detection.
International Conference on Robotics and Automation | 2017
Rares Ambrus; Sebastian Claici; Axel Wendt
We present an automatic approach for the task of reconstructing a 2-D floor plan from unstructured point clouds of building interiors. Our approach emphasizes accurate and robust detection of building structural elements and, unlike previous approaches, does not require prior knowledge of scanning device poses. The reconstruction task is formulated as a multiclass labeling problem that we approach using energy minimization. We use intuitive priors to define the costs for the energy minimization problem and rely on accurate wall and opening detection algorithms to ensure robustness. We provide detailed experimental evaluation results, both qualitative and quantitative, against state-of-the-art methods and labeled ground-truth data.
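The multiclass-labeling formulation can be illustrated on a toy 1-D strip of cells, each taking one of a few labels, with a unary data cost plus a pairwise Potts smoothness prior, minimized here by iterated conditional modes. This is a generic sketch of the energy-minimization pattern; the labels, costs, and solver are illustrative assumptions, not the paper's exact formulation (which operates on 2-D floor-plan cells with wall- and opening-detection priors).

```python
import numpy as np

LABELS = ["outside", "wall", "inside"]

def energy(labels, unary, smoothness=1.0):
    """Total energy: per-cell data cost plus a Potts penalty for each
    pair of neighbouring cells with different labels."""
    data = sum(unary[i][l] for i, l in enumerate(labels))
    pairwise = smoothness * sum(a != b for a, b in zip(labels, labels[1:]))
    return data + pairwise

def icm(unary, smoothness=1.0, iterations=10):
    """Iterated conditional modes: greedily relabel each cell with the
    choice that lowers the total energy, until no cell changes."""
    labels = [int(np.argmin(u)) for u in unary]  # data-term-only initialization
    for _ in range(iterations):
        changed = False
        for i in range(len(labels)):
            best = min(range(len(LABELS)),
                       key=lambda l: energy(labels[:i] + [l] + labels[i + 1:],
                                            unary, smoothness))
            if best != labels[i]:
                labels[i], changed = best, True
        if not changed:
            break
    return labels

# Toy unary costs (low = likely). The noisy cell at index 2 prefers "wall"
# on its own but is smoothed into the surrounding "outside" run by the prior.
unary = np.array([
    [0.1, 2.0, 2.0],
    [0.1, 2.0, 2.0],
    [0.9, 0.8, 2.0],
    [0.1, 2.0, 2.0],
    [2.0, 0.1, 2.0],
    [2.0, 2.0, 0.1],
])
result = icm(unary)
```

The same pattern scales to the 2-D case with graph-cut or message-passing solvers in place of ICM.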
Robotics and Autonomous Systems | 2017
Nils Bore; Rares Ambrus; Patric Jensfelt; John Folkesson
We present a novel method for efficient querying and retrieval of arbitrarily shaped objects from large amounts of unstructured 3D point cloud data. Our approach first performs a convex segmentation of the data, after which local features are extracted and stored in a feature dictionary. We show that the representation allows efficient and reliable querying of the data. To handle arbitrarily shaped objects, we propose a scheme which allows incremental matching of segments based on similarity to the query object. Further, we adjust the feature metric based on the quality of the query results to improve results in a second round of querying. We perform extensive qualitative and quantitative experiments on two datasets for both segmentation and retrieval, validating the results using ground truth data. Comparison with other state-of-the-art methods further reinforces the validity of the proposed method. Finally, we also investigate how the density and distribution of the local features within the point clouds influence the quality of the results.
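The dictionary-based querying step can be illustrated with one descriptor per segment stored in a matrix and ranked by cosine similarity to a query descriptor. This is a minimal sketch under simplifying assumptions: real systems use local 3D features (e.g. FPFH or SHOT histograms) with an inverted index rather than a single global descriptor, and the class and method names here are invented.

```python
import numpy as np

class FeatureDictionary:
    """Store one descriptor per segment and retrieve the segments most
    similar to a query descriptor by cosine similarity."""

    def __init__(self):
        self.ids, self.features = [], []

    def add(self, segment_id, descriptor):
        self.ids.append(segment_id)
        self.features.append(np.asarray(descriptor, dtype=float))

    def query(self, descriptor, k=3):
        q = np.asarray(descriptor, dtype=float)
        feats = np.vstack(self.features)
        sims = feats @ q / (np.linalg.norm(feats, axis=1) * np.linalg.norm(q))
        order = np.argsort(sims)[::-1][:k]
        return [(self.ids[i], float(sims[i])) for i in order]

# Toy 4-bin shape histograms for three stored segments.
d = FeatureDictionary()
d.add("chair_1", [0.9, 0.1, 0.0, 0.0])
d.add("mug_1",   [0.0, 0.1, 0.9, 0.0])
d.add("chair_2", [0.8, 0.2, 0.0, 0.0])

hits = d.query([1.0, 0.1, 0.0, 0.0], k=2)
```

Re-weighting the feature metric after inspecting the first-round hits, as described above, would correspond to scaling the descriptor dimensions before the second `query` call.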