Michael Zillich
Vienna University of Technology
Publications
Featured research published by Michael Zillich.
Intelligent Robots and Systems | 2012
Andreas Richtsfeld; Thomas Mörwald; Johann Prankl; Michael Zillich; Markus Vincze
We present a framework for segmenting unknown objects in RGB-D images suitable for robotics tasks such as object search, grasping and manipulation. While handling single objects on a table is solved, handling complex scenes poses considerable problems due to clutter and occlusion. After pre-segmentation of the input image based on surface normals, surface patches are estimated using a mixture of planes and NURBS (non-uniform rational B-splines), and model selection is employed to find the best representation for the given data. We then construct a graph from surface patches and relations between pairs of patches, and perform graph cut to arrive at object hypotheses segmented from the scene. The energy terms for patch relations are learned from user-annotated training data, where support vector machines (SVMs) are trained to classify a relation as being indicative of two patches belonging to the same object. We evaluate the learned relations and present results on a database of different test sets, demonstrating that the approach can segment objects of various shapes in cluttered tabletop scenes.
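To make the relation-learning stage concrete, here is a minimal Python sketch that trains an SVM on hypothetical patch-pair features and greedily merges patches whose "same object" probability exceeds 0.5. The union-find grouping is a simplified stand-in for the paper's graph-cut optimization, and all features and labels below are placeholders.

```python
# A minimal sketch of the relation-learning and grouping stage, assuming
# hypothetical patch-pair features; the paper's actual energy terms and
# graph-cut optimization are replaced by a greedy union-find grouping.
import numpy as np
from sklearn.svm import SVC

# Placeholder training data: one feature vector per patch pair (e.g. color
# similarity, normal continuity), labeled 1 if both patches belong to the
# same object, else 0.
rng = np.random.default_rng(0)
X_train = rng.random((200, 3))
y_train = (X_train[:, 0] > 0.5).astype(int)
svm = SVC(probability=True).fit(X_train, y_train)

def group_patches(n_patches, pair_features):
    """Merge patches whose 'same object' probability exceeds 0.5.
    pair_features maps a patch-index pair (i, j) to its feature vector."""
    parent = list(range(n_patches))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i
    for (i, j), feat in pair_features.items():
        if svm.predict_proba([feat])[0, 1] > 0.5:
            parent[find(i)] = find(j)
    return [find(i) for i in range(n_patches)]

# Three patches, two of which are predicted to belong together:
print(group_patches(3, {(0, 1): [0.9, 0.2, 0.4], (1, 2): [0.1, 0.5, 0.5]}))
```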
International Conference on Computer Vision Systems | 2008
Michael Stark; Philipp Lies; Michael Zillich; Jeremy L. Wyatt; Bernt Schiele
Current approaches to visual object class detection mainly focus on the recognition of basic level categories, such as cars, motorbikes, mugs and bottles. Although these approaches have demonstrated impressive recognition performance, their restriction to these categories seems inadequate in the context of embodied, cognitive agents. Here, distinguishing objects according to functional aspects based on object affordances is important in order to enable manipulation of, and interaction between, physical objects and the cognitive agent. In this paper, we propose a system for the detection of functional object classes, based on a representation of visually distinct hints on object affordances (affordance cues). It spans the complete range from tutor-driven acquisition of affordance cues, through learning of corresponding object models, to the detection of novel instances of functional object classes in real images.
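The sketch below illustrates the general idea of declaring functional classes in terms of affordance cues rather than basic-level identity. The cue detectors here are dummy placeholders, not the learned models from the paper, and all names are made up.

```python
# An illustrative sketch only: hypothetical cue detectors stand in for the
# learned affordance-cue models; function and cue names are made up.
from typing import Callable, Dict, List
import numpy as np

CueDetector = Callable[[np.ndarray], float]   # image -> cue confidence

def detect_affordance_cues(image: np.ndarray,
                           cues: Dict[str, CueDetector],
                           threshold: float = 0.5) -> List[str]:
    """Return the names of affordance cues whose detectors fire."""
    return [name for name, det in cues.items() if det(image) > threshold]

# A functional class such as "handle-graspable" would be declared in terms
# of the cues it requires rather than a basic-level category:
cues = {"handle": lambda img: 0.8, "sidewall": lambda img: 0.3}
print(detect_affordance_cues(np.zeros((64, 64)), cues))   # ['handle']
```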
Journal of Visual Communication and Image Representation | 2014
Andreas Richtsfeld; Thomas Mörwald; Johann Prankl; Michael Zillich; Markus Vincze
Highlights
• Segmentation of unknown objects in cluttered scenes.
• Abstraction of raw RGB-D data into parametric surface patches.
• Learning of perceptual grouping between surfaces with SVMs.
• Global decision making for segmentation using graph cut.
International Conference on Computer Vision Systems | 2011
Ekaterina Potapova; Michael Zillich; Markus Vincze
In this paper we address the problem of obtaining meaningful saliency measures that tie in coherently with other methods and modalities within larger robotic systems. We learn probabilistic models of various saliency cues from labeled training data and fuse these into probability maps which, while appearing qualitatively similar to traditional saliency maps, represent actual probabilities of detecting salient features. We show that these maps are better suited to pick up task-relevant structures in robotic applications. Moreover, having true probabilities rather than arbitrarily scaled saliency measures allows for deeper, semantically meaningful integration with other parts of the overall system.
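To make the fusion idea concrete, here is a minimal naive-Bayes fusion of per-cue probability maps. It assumes each cue has already been calibrated into per-pixel probabilities (which the paper learns from labeled data) and that cues are conditionally independent; both are simplifying assumptions of this sketch.

```python
# A minimal sketch of the fusion step, assuming each cue has already been
# calibrated into a per-pixel probability map P(salient | cue) and that
# cues are conditionally independent (naive-Bayes fusion).
import numpy as np

def fuse_probability_maps(maps, prior=0.1):
    """Fuse per-cue probability maps into a single posterior map."""
    odds = prior / (1.0 - prior)              # prior odds of saliency
    for p in maps:
        p = np.clip(p, 1e-6, 1.0 - 1e-6)
        # multiply in each cue's likelihood ratio
        odds = odds * (p / (1.0 - p)) * ((1.0 - prior) / prior)
    return odds / (1.0 + odds)

# Two dummy 4x4 cue maps that mildly agree on saliency:
m1, m2 = np.full((4, 4), 0.7), np.full((4, 4), 0.6)
print(fuse_probability_maps([m1, m2])[0, 0])
```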
IEEE Transactions on Autonomous Mental Development | 2010
Jeremy L. Wyatt; Alper Aydemir; Michael Brenner; Marc Hanheide; Nick Hawes; Patric Jensfelt; Matej Kristan; Geert-Jan M. Kruijff; Pierre Lison; Andrzej Pronobis; Kristoffer Sjöö; Alen Vrečko; Hendrik Zender; Michael Zillich; Danijel Skočaj
There are many different approaches to building a system that can engage in autonomous mental development. In this paper, we present an approach based on what we term self-understanding, by which we mean the explicit representation of and reasoning about what a system does and does not know, and how that knowledge changes under action. We present an architecture and a set of representations used in two robot systems that exhibit a limited degree of autonomous mental development, which we term self-extension. The contributions include: representations of gaps and uncertainty for specific kinds of knowledge, and a goal management and planning system for setting and achieving learning goals.
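A toy sketch of the self-understanding idea follows: knowledge gaps are represented explicitly and turned into learning goals for the planner. The data structures and goal strings are illustrative only, not the architecture's actual representations.

```python
# A toy sketch of "self-understanding": represent what the system does not
# know and turn those gaps into learning goals. Class and goal names are
# illustrative, not the architecture's actual representations.
from dataclasses import dataclass
from typing import List, Optional

@dataclass
class Belief:
    attribute: str              # e.g. "color" of some object
    value: Optional[str]        # None marks an explicit knowledge gap
    confidence: float = 0.0

def learning_goals(beliefs: List[Belief], threshold: float = 0.7) -> List[str]:
    """Propose a goal for every gap or low-confidence belief."""
    goals = []
    for b in beliefs:
        if b.value is None:
            goals.append(f"find out the {b.attribute}")
        elif b.confidence < threshold:
            goals.append(f"verify that the {b.attribute} is {b.value}")
    return goals

print(learning_goals([Belief("color", None), Belief("shape", "box", 0.5)]))
```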
The International Journal of Robotics Research | 2001
Markus Vincze; Minu Ayromlou; Wolfgang Ponweiser; Michael Zillich
A real-world limitation of visual servoing approaches is the sensitivity of visual tracking to varying ambient conditions and background clutter. The authors present a model-based vision framework to improve the robustness of edge-based feature tracking. Lines and ellipses are tracked using edge-projected integration of cues (EPIC). EPIC uses cues in regions delineated by edges that are defined by observed edgels and a priori knowledge from a wire-frame model of the object. The edgels are then used for a robust fit of the feature geometry, but at times this results in multiple feature candidates. A final validation step uses the model topology to select the most likely feature candidates. EPIC is suited for real-time operation. Experiments demonstrate operation at frame rate. Navigating a walking robot through an industrial environment shows the robustness to varying lighting conditions. Tracking objects over varying backgrounds indicates robustness to clutter.
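The following sketch shows one plausible instantiation of the robust-fitting step: a RANSAC line fit to tracked edgels. EPIC's edge-projected cue integration and the topology-based validation of competing feature candidates are omitted.

```python
# A compact stand-in for the robust feature fit: a RANSAC line fit to
# tracked edgels. EPIC's cue integration and the topology-based validation
# of competing candidates are omitted.
import numpy as np

def ransac_line(edgels, iters=100, tol=1.5, seed=0):
    """Fit a 2D line to noisy edgels; return the largest inlier set."""
    rng = np.random.default_rng(seed)
    best = np.zeros(len(edgels), dtype=bool)
    for _ in range(iters):
        p, q = edgels[rng.choice(len(edgels), 2, replace=False)]
        d = q - p
        n = np.array([-d[1], d[0]])                # line normal
        if np.linalg.norm(n) < 1e-9:
            continue
        n /= np.linalg.norm(n)
        inliers = np.abs((edgels - p) @ n) < tol   # point-to-line distance
        if inliers.sum() > best.sum():
            best = inliers
    return edgels[best]

# 30 noisy edgels along y = 2x, plus an outlier:
pts = np.array([(t, 2.0 * t) for t in range(30)], dtype=float)
pts += np.random.default_rng(1).normal(0.0, 0.3, pts.shape)
print(len(ransac_line(np.vstack([pts, [[5.0, 40.0]]]))))
```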
Robot and Human Interactive Communication | 2007
Nick Hawes; Michael Zillich; Jeremy L. Wyatt
In this paper we present a toolkit for implementing architectures for intelligent robotic systems. This toolkit is based on an architecture schema (a set of architecture design rules). The purpose of both the schema and the toolkit is to facilitate research into information-processing architectures for state-of-the-art intelligent robots, whilst providing engineering solutions for the development of such systems. A robotic system implemented using the toolkit is presented to demonstrate its key features.
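The toy Python below captures the schema's central idea as described above: components coupled only through a shared working memory with change notifications. This is a sketch of the concept, not the toolkit's actual API.

```python
# A toy sketch of the schema's central idea: components coupled only
# through a shared working memory with change notifications.
# This is not the toolkit's actual API.
class WorkingMemory:
    def __init__(self):
        self.entries, self.listeners = {}, []
    def subscribe(self, callback):
        self.listeners.append(callback)
    def write(self, key, value):
        self.entries[key] = value
        for notify in self.listeners:     # push change events to components
            notify(key, value)

wm = WorkingMemory()
wm.subscribe(lambda k, v: print(f"planner saw {k} = {v}"))
wm.write("object_hypothesis", {"label": "mug", "conf": 0.9})
```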
Intelligent Robots and Systems | 2011
Danijel Skočaj; Matej Kristan; Alen Vrečko; Marko Mahnič; Miroslav Janíček; Geert-Jan M. Kruijff; Marc Hanheide; Nick Hawes; Thomas Keller; Michael Zillich; Kai Zhou
In this paper we present representations and mechanisms that facilitate continuous learning of visual concepts in dialogue with a tutor, and we show the implemented robot system. We present how beliefs about the world are created by processing visual and linguistic information and show how they are used for planning system behaviour with the aim of satisfying the system's internal drive: to extend its knowledge. The system facilitates different kinds of learning initiated by the human tutor or by the system itself. We demonstrate these principles in the case of learning about object colours and basic shapes.
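A minimal sketch of tutor-driven incremental learning of a color concept follows, using one running Gaussian per concept updated with Welford's algorithm; the implemented system's probabilistic models are more sophisticated than this.

```python
# A minimal sketch of tutor-driven incremental concept learning: one
# running Gaussian per color concept, updated with Welford's algorithm.
# The implemented system's probabilistic models are more sophisticated.
import numpy as np

class ColorConcept:
    def __init__(self, dim=3):
        self.n, self.mean, self.m2 = 0, np.zeros(dim), np.zeros(dim)
    def update(self, sample):
        self.n += 1
        delta = sample - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (sample - self.mean)
    def var(self):
        return self.m2 / max(self.n - 1, 1)

concepts = {}
def tutor_says(label, rgb):               # e.g. tutor: "this is red"
    concepts.setdefault(label, ColorConcept()).update(np.asarray(rgb, float))

tutor_says("red", [0.90, 0.10, 0.10])
tutor_says("red", [0.85, 0.15, 0.12])
print(concepts["red"].mean)
```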
Computer Vision and Pattern Recognition | 2001
Danny Roobaert; Michael Zillich; Jan-Olof Eklundh
Pursuing the goal of absolute simplicity of a detection/recognition system, a pure learning approach to background-invariance and visual 3D object detection/recognition is proposed. The approach relies on learning from examples only, and does not encode any domain knowledge (e.g. in the form of intermediate representations, or by solving segmentation or correspondence problems). To make the pure learning approach practically feasible, we propose the BW training method for teaching an object recognition system background-invariance. The method consists of pedagogically training the system, once with a black background and once with a white background. The method is formulated within the framework of support vector learning. Evaluation is performed with the Columbia Object Image Library (COIL) database, extended with different classes of cluttered backgrounds. Using this pure learning approach, a system is proposed that is able to perform 3D object detection/recognition successfully in real-world scenes, with varying illuminations and backgrounds. The system is able to perform this task in real time.
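The following sketch reproduces the BW training recipe on toy data: each training object is composited once onto a black and once onto a white background before training a support vector machine, so the background carries no class information. The images, masks, and labels below are random placeholders standing in for the COIL data.

```python
# A sketch of the BW training recipe on toy data: each object is composited
# once onto a black and once onto a white background before training the
# support vector machine, so the background carries no class information.
import numpy as np
from sklearn.svm import SVC

def composite(obj, mask, bg_value):
    """Paste the masked object onto a uniform background."""
    return np.where(mask, obj, bg_value)

rng = np.random.default_rng(0)
objs = rng.random((20, 8, 8))             # 20 toy "object" images
masks = rng.random((20, 8, 8)) > 0.5      # toy object/background masks
labels = np.arange(20) % 4                # 4 object classes

X, y = [], []
for obj, mask, label in zip(objs, masks, labels):
    for bg in (0.0, 1.0):                 # black and white backgrounds
        X.append(composite(obj, mask, bg).ravel())
        y.append(label)

clf = SVC(kernel="linear").fit(X, y)
print(clf.predict([X[0]]))                # sanity check on a training image
```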
IEEE Robotics & Automation Magazine | 2017
Nick Hawes; Christopher Burbridge; Ferdian Jovan; Lars Kunze; Bruno Lacerda; Lenka Mudrová; Jay Young; Jeremy L. Wyatt; Denise Hebesberger; Tobias Körtner; Rares Ambrus; Nils Bore; John Folkesson; Patric Jensfelt; Lucas Beyer; Alexander Hermans; Bastian Leibe; Aitor Aldoma; Thomas Faulhammer; Michael Zillich; Markus Vincze; Eris Chinellato; Muhannad Al-Omari; Paul Duckworth; Yiannis Gatsoulis; David C. Hogg; Anthony G. Cohn; Christian Dondrup; Jaime Pulido Fentanes; Tomas Krajnik
Thanks to the efforts of the robotics and autonomous systems community, the myriad applications and capacities of robots are ever increasing. There is growing demand from end users for autonomous service robots that can operate in real environments for extended periods. In the Spatiotemporal Representations and Activities for Cognitive Control in Long-Term Scenarios (STRANDS) project (http://strandsproject.eu), we are tackling this demand head-on by integrating state-of-the-art artificial intelligence and robotics research into mobile service robots and deploying these systems for long-term installations in security and care environments. Our robots have been operational for a combined duration of 104 days over four deployments, autonomously performing end-user-defined tasks and traversing 116 km in the process. In this article, we describe the approach we used to enable long-term autonomous operation in everyday environments and how our robots are able to use their long run times to improve their own performance.