
Publication


Featured research published by Andreas Richtsfeld.


Intelligent Robots and Systems | 2012

Segmentation of unknown objects in indoor environments

Andreas Richtsfeld; Thomas Mörwald; Johann Prankl; Michael Zillich; Markus Vincze

We present a framework for segmenting unknown objects in RGB-D images suitable for robotics tasks such as object search, grasping and manipulation. While handling single objects on a table is solved, handling complex scenes poses considerable problems due to clutter and occlusion. After pre-segmentation of the input image based on surface normals, surface patches are estimated using a mixture of planes and NURBS (non-uniform rational B-splines) and model selection is employed to find the best representation for the given data. We then construct a graph from surface patches and relations between pairs of patches and perform graph cut to arrive at object hypotheses segmented from the scene. The energy terms for patch relations are learned from user annotated training data, where support vector machines (SVM) are trained to classify a relation as being indicative of two patches belonging to the same object. We show evaluation of the relations and results on a database of different test sets, demonstrating that the approach can segment objects of various shapes in cluttered table top scenes.
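The grouping step above can be sketched at a high level. In this hypothetical Python sketch, a fixed linear scorer stands in for the trained SVM, and thresholded connected components (union-find) stand in for the graph cut; patch indices, feature tuples and weights are all illustrative assumptions, not the paper's actual model.

```python
# Hypothetical sketch: group pre-segmented surface patches into object
# hypotheses from pairwise "same object" scores. The paper learns these
# scores with an SVM and partitions with graph cut; here a fixed linear
# scorer and thresholded connectivity stand in for both.

def same_object_score(features, weights, bias=0.0):
    """Linear decision value, standing in for a trained SVM."""
    return sum(w * f for w, f in zip(weights, features)) + bias

def group_patches(n_patches, pairwise_features, weights, threshold=0.0):
    """Union-find over patch pairs whose relation score exceeds the threshold."""
    parent = list(range(n_patches))

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i

    for (i, j), feats in pairwise_features.items():
        if same_object_score(feats, weights) > threshold:
            parent[find(i)] = find(j)  # merge the two hypotheses

    # Collect patches by root: one object hypothesis per component.
    objects = {}
    for p in range(n_patches):
        objects.setdefault(find(p), []).append(p)
    return sorted(objects.values())

# Toy scene: 4 patches; features = (colour similarity, surface continuity).
feats = {(0, 1): (0.9, 0.8),   # patches 0 and 1 look alike
         (1, 2): (0.1, 0.0),   # weak relation across an object boundary
         (2, 3): (0.7, 0.9)}   # patches 2 and 3 belong together
result = group_patches(4, feats, weights=(1.0, 1.0), threshold=1.0)
print(result)  # -> [[0, 1], [2, 3]]
```

The thresholded merge is a simplification: a real graph cut trades off all relation energies globally rather than deciding each edge independently.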


Journal of Visual Communication and Image Representation | 2014

Learning of perceptual grouping for object segmentation on RGB-D data

Andreas Richtsfeld; Thomas Mörwald; Johann Prankl; Michael Zillich; Markus Vincze

Highlights:
• Segmentation of unknown objects in cluttered scenes.
• Abstraction of raw RGB-D data into parametric surface patches.
• Learning of perceptual grouping between surfaces with SVMs.
• Global decision making for segmentation using Graph-Cut.


International Conference on Robotics and Automation | 2014

Attention-driven object detection and segmentation of cluttered table scenes using 2.5D symmetry

Ekaterina Potapova; Karthik Mahesh Varadarajan; Andreas Richtsfeld; Michael Zillich; Markus Vincze

The task of searching for and grasping objects in cluttered scenes, typical of robotic applications in domestic environments, requires fast object detection and segmentation. Attentional mechanisms provide a means to detect and prioritize processing of objects of interest. In this work, we combine a saliency operator based on symmetry with a segmentation method based on clustering locally planar surface patches, both operating on 2.5D point clouds (RGB-D images) as input data, to yield a novel approach to table-top scene segmentation. Evaluation on indoor table-top scenes containing man-made objects clustered in piles and dumped in a box shows that our approach to selection of attention points significantly improves performance of state-of-the-art attention-based segmentation methods.
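To illustrate the general idea of symmetry-based saliency (not the paper's actual operator), a minimal NumPy sketch that scores each pixel of a dense depth image by how well depth values mirror around it horizontally; the window radius and the toy depth image are assumptions for illustration:

```python
import numpy as np

def symmetry_saliency(depth, radius=3):
    """Score each pixel by how mirror-symmetric the depth values to its
    left and right are; a high score marks a likely centre of a symmetric object."""
    h, w = depth.shape
    sal = np.zeros_like(depth, dtype=float)
    for r in range(1, radius + 1):
        left = depth[:, : w - 2 * r]
        right = depth[:, 2 * r:]
        diff = np.abs(left - right)
        sal[:, r: w - r] += np.exp(-diff)  # similar mirrored depths -> high score
    return sal / radius

def attention_points(saliency, n=1):
    """Return the n most salient pixel coordinates (row, col)."""
    flat = np.argsort(saliency, axis=None)[::-1][:n]
    return [tuple(np.unravel_index(i, saliency.shape)) for i in flat]

# Toy depth image: flat background at 1.0 with a symmetric bump centred
# on column 4 (closer to the camera means a smaller depth value).
depth = np.ones((5, 9))
for off, z in [(0, 0.5), (1, 0.6), (2, 0.7)]:
    depth[:, 4 - off] = z
    depth[:, 4 + off] = z

best = attention_points(symmetry_saliency(depth), n=1)[0]
print(best)  # column index is 4, the bump's axis of symmetry
```

A real 2.5D symmetry operator would search over orientations and scales; this fixed horizontal window is only meant to show why mirrored depth profiles yield a peak at the object centre.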


International Conference on Robotics and Automation | 2013

Geometric data abstraction using B-splines for range image segmentation

Thomas Mörwald; Andreas Richtsfeld; Johann Prankl; Michael Zillich; Markus Vincze

With the availability of cheap and powerful RGB-D sensors, interest in 3D point cloud based methods has drastically increased. One common prerequisite of these methods is to abstract away from raw point cloud data, e.g. to planar patches, to reduce the amount of data and to handle noise and clutter. We present a novel method to abstract RGB-D sensor data to parametric surface models described by B-spline surfaces and associated boundaries. Data is first pre-segmented into smooth patches before B-spline surfaces are fitted. The best surface representations of these patches are selected in a merging procedure. Furthermore, we show how curve fitting estimates smooth boundaries and improves the given sensor information compared to hand-labelled ground truth annotation when using colour in addition to depth information. All parts of the framework are open-source and are evaluated on the object segmentation database (OSD), also available online, showing the accuracy and usability of the proposed methods.
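A minimal sketch of the surface-fitting step, using SciPy's smoothing bicubic spline in place of the paper's full NURBS fitting and model-selection pipeline; the grid size, noise level and smoothing factor below are illustrative assumptions:

```python
import numpy as np
from scipy.interpolate import RectBivariateSpline

# Noisy depth patch over a regular grid (a gently curved surface).
rng = np.random.default_rng(0)
x = np.linspace(0.0, 1.0, 20)
y = np.linspace(0.0, 1.0, 20)
X, Y = np.meshgrid(x, y, indexing="ij")
depth = 0.5 + 0.1 * X**2 + 0.05 * np.sin(3 * Y)            # true surface
noisy = depth + rng.normal(scale=0.005, size=depth.shape)  # simulated sensor noise

# Fit a smoothing bicubic B-spline surface; `s` trades fidelity vs smoothness
# (a common heuristic: s ~ number of points * noise variance).
surface = RectBivariateSpline(x, y, noisy, kx=3, ky=3,
                              s=len(x) * len(y) * 0.005**2)

fitted = surface(x, y)  # evaluate the fitted surface on the grid
rms = np.sqrt(np.mean((fitted - depth) ** 2))
print(f"RMS error to true surface: {rms:.4f}")
```

Because the spline smooths rather than interpolates, the fitted surface typically ends up closer to the underlying geometry than the raw noisy samples are, which is the point of abstracting sensor data into parametric patches.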


International Conference on Advanced Robotics | 2011

Visual information abstraction for interactive robot learning

Kai Zhou; Andreas Richtsfeld; Michael Zillich; Markus Vincze; Alen Vrečko; Danijel Skočaj

Semantic visual perception for knowledge acquisition plays an important role in human cognition, as well as in the learning process of any cognitive robot. In this paper, we present a visual information abstraction mechanism designed for continuously learning robotic systems. We generate spatial information in the scene by considering plane estimation and stereo line detection coherently within a unified probabilistic framework, and show how spaces of interest (SOIs) are generated and segmented using the spatial information. We also demonstrate how the existence of SOIs is validated in the long-term learning process. The proposed mechanism facilitates robust visual information abstraction which is a requirement for continuous interactive learning. Experiments demonstrate that with the refined spatial information, our approach provides accurate and plausible representation of visual objects.
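Plane estimation is the backbone of the SOI generation described above. As a hedged stand-in for the paper's joint probabilistic framework, a plain RANSAC plane fit on a synthetic table-plus-object cloud; the point counts, noise levels and threshold are assumptions for illustration:

```python
import numpy as np

def ransac_plane(points, iters=200, thresh=0.01, rng=None):
    """Fit a dominant plane (unit normal n and offset d with n.p + d = 0)
    to an (N, 3) point set by random sampling."""
    rng = np.random.default_rng(0) if rng is None else rng
    best_inliers, best_model = None, None
    for _ in range(iters):
        p0, p1, p2 = points[rng.choice(len(points), 3, replace=False)]
        n = np.cross(p1 - p0, p2 - p0)
        norm = np.linalg.norm(n)
        if norm < 1e-9:
            continue  # degenerate (near-collinear) sample
        n = n / norm
        d = -n.dot(p0)
        inliers = np.abs(points @ n + d) < thresh
        if best_inliers is None or inliers.sum() > best_inliers.sum():
            best_inliers, best_model = inliers, (n, d)
    return best_model, best_inliers

# Toy scene: a table plane near z = 0 plus a small cluster hovering above it.
rng = np.random.default_rng(1)
table = np.column_stack([rng.uniform(0, 1, 300), rng.uniform(0, 1, 300),
                         rng.normal(0, 0.002, 300)])
blob = rng.normal([0.5, 0.5, 0.1], 0.01, size=(40, 3))
cloud = np.vstack([table, blob])

(n, d), inliers = ransac_plane(cloud)
soi = cloud[~inliers]  # points off the plane: a candidate space of interest
print(len(soi), "points in the SOI candidate")
```

The paper couples this kind of plane estimate with stereo line detection in one probabilistic model; the sketch only shows the simpler intuition that points not explained by the supporting surface seed the spaces of interest.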


Intelligent Robots and Systems | 2011

Coherent spatial abstraction and stereo line detection for robotic visual attention

Kai Zhou; Andreas Richtsfeld; Michael Zillich; Markus Vincze

Attention operators based on 2D image cues (such as color and texture) are well known and discussed extensively in the vision literature, but are not ideally suited for robotic applications. In such contexts it is the 3D structure of scene elements that makes them interesting or not. We show how a bottom-up exploration mechanism that fuses 2D saliency-based conspicuity with spatial abstraction resulting from coherent plane estimation and stereo line detection is well suited for typical indoor robotics tasks. This spatial abstraction is performed by a joint probabilistic model which takes the interaction of stereo line detection and 3D supporting plane estimation into consideration. By maximizing the probability of the joint model, our method reduces false-positive stereo line detections and simultaneously refines the estimate of the supporting surface. Experiments demonstrate that our approach provides more accurate and plausible attention.


Archive | 2009

3D Shape Detection for Mobile Robot Learning

Andreas Richtsfeld; Markus Vincze

If a robot is to learn from visual data, the task is greatly simplified if pixel data is abstracted into basic shapes, or Gestalts. This paper introduces a method of processing images to abstract basic features into higher-level Gestalts. Grouping is formulated as an incremental problem to avoid grouping parameters and to obtain anytime processing characteristics. The proposed system allows detection of 3D shapes such as cubes, cones and cylinders for robot affordance learning.


Advanced Concepts for Intelligent Vision Systems | 2011

Combining plane estimation with shape detection for holistic scene understanding

Kai Zhou; Andreas Richtsfeld; Karthik Mahesh Varadarajan; Michael Zillich; Markus Vincze

Structural scene understanding is an interconnected process wherein modules for object detection and supporting structure detection need to co-operate in order to extract cross-correlated information, thereby utilizing the maximum possible information rendered by the scene data. Such an inter-linked framework provides a holistic approach to scene understanding, while obtaining the best possible detection rates. Motivated by recent research in coherent geometrical contextual reasoning and object recognition, this paper proposes a unified framework for robust 3D supporting plane estimation using a joint probabilistic model which uses results from object shape detection and 3D plane estimation. Maximization of the joint probabilistic model leads to robust 3D surface estimation while reducing false perceptual grouping. We present results on both synthetic and real data obtained from an indoor mobile robot to demonstrate the benefits of our unified detection framework.


Künstliche Intelligenz | 2015

Object Detection for Robotic Applications Using Perceptual Organization in 3D

Andreas Richtsfeld; Michael Zillich; Markus Vincze

Object segmentation of unknown objects with arbitrary shape in cluttered scenes is still a challenging task in computer vision. A framework is introduced to segment RGB-D images where data is processed in a hierarchical fashion. After pre-segmentation and parametrization of surface patches, support vector machines are used to learn the importance of relations between these patches. The relations are derived from perceptual grouping principles. The proposed framework is able to segment objects, even if they are stacked or jumbled in cluttered scenes. Furthermore, the problem of segmenting partially occluded objects is tackled.


IEEE-RAS International Conference on Humanoid Robots | 2014

Incremental attention-driven object segmentation

Ekaterina Potapova; Andreas Richtsfeld; Michael Zillich; Markus Vincze

Segmentation of highly cluttered indoor scenes is a challenging task and should be solved in real time to be of practical use in applications such as robotics. Traditional segmentation methods are often overwhelmed by the complexity of the scene and require significant processing time. To tackle this problem we propose incremental attention-driven segmentation, where attention mechanisms are used to prioritize the parts of the scene to be handled first. Our method outputs object hypotheses composed of parametric surface models. We evaluate our approach on two publicly available datasets of cluttered indoor scenes and show that the proposed method outperforms existing attention-driven segmentation methods in terms of both segmentation quality and computational performance.
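The incremental, attention-prioritized control flow can be sketched abstractly. In this hypothetical sketch, `segment_fn`, the saliency values and the unit costs are placeholders, not the paper's actual components; the point is only the scheduling: most salient region first, stop when the budget is spent, skip regions already explained by an earlier hypothesis.

```python
import heapq

def incremental_segmentation(saliency, segment_fn, budget):
    """Process attention points in decreasing saliency order, segmenting
    one object hypothesis at a time until the processing budget is spent."""
    # Max-heap of (negative saliency, attention point id).
    queue = [(-s, p) for p, s in saliency.items()]
    heapq.heapify(queue)
    hypotheses, spent = [], 0
    claimed = set()  # regions already assigned to an object hypothesis
    while queue and spent < budget:
        _, point = heapq.heappop(queue)
        if point in claimed:
            continue  # already explained by an earlier hypothesis
        region, cost = segment_fn(point)
        claimed.update(region)
        hypotheses.append(region)
        spent += cost
    return hypotheses

# Toy setup: three attention points; segmenting one costs 1 unit of budget
# and may claim neighbouring points as part of the same object.
neighbours = {"a": {"a", "b"}, "b": {"b"}, "c": {"c"}}
sal = {"a": 0.9, "b": 0.8, "c": 0.3}
result = incremental_segmentation(sal, lambda p: (neighbours[p], 1), budget=2)
print(result)  # -> [{'a', 'b'}, {'c'}]
```

Point "b" is never segmented on its own because the hypothesis seeded at "a" already claims it, which is how attention-driven ordering avoids redundant work on cluttered scenes.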

Collaboration

Top co-authors of Andreas Richtsfeld (all at Vienna University of Technology):

Michael Zillich
Kai Zhou
Johann Prankl
Thomas Mörwald
Ekaterina Potapova
George Todoran
Markus Bader
Markus Suchi