Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Horst-Michael Gross is active.

Publications


Featured research published by Horst-Michael Gross.


Intelligent Robots and Systems | 2009

TOOMAS: Interactive Shopping Guide robots in everyday use - final implementation and experiences from long-term field trials

Horst-Michael Gross; Hans-Joachim Boehme; Ch. Schroeter; S. Mueller; Alexander Koenig; Erik Einhorn; Ch. Martin; Matthias Merten; Andreas Bley

The paper gives a comprehensive overview of our Shopping Guide project, which aims at the development of interactive mobile shopping companion robots for everyday use in challenging operating environments such as home improvement stores. It spans an arc from the expectations and requirements of store owners and customers, via the challenges of the shopping scenario and the operating environment and the implemented functionality of the shopping guide robots, to the results of long-term field trials. The field trials, which started in April 2008 and are still ongoing, aim to study whether and how a group of interactive mobile shopping guide robots can operate completely autonomously in such everyday environments and how they are accepted by uninstructed customers. In these field trials, in which nine robotic shopping guides together traveled 2,187 kilometers in three different home improvement stores in Germany, more than 8,600 customers were successfully guided to the locations of their products of choice. With the successful development of these shopping guide robots, a further important step towards assistive robotics for daily use has been taken.


Systems, Man and Cybernetics | 2008

ShopBot: Progress in developing an interactive mobile shopping assistant for everyday use

Horst-Michael Gross; Hans-Joachim Boehme; Christof Schroeter; S. Mueller; Alexander Koenig; Ch. Martin; Matthias Merten; Andreas Bley

The paper describes progress achieved in our long-term research project ShopBot, which aims at the development of an intelligent and interactive mobile shopping assistant for everyday use in shopping centers or home improvement stores. It focuses on recent progress concerning two important methodological aspects: (i) the on-line building of maps of the operation area by means of advanced Rao-Blackwellized SLAM approaches, using both sonar-based grid maps and vision-based graph maps as representations, and (ii) a probabilistic approach to multi-modal user detection and tracking during the guidance tour. Experimental results on both the map building characteristics and the person tracking behavior, achieved in an ordinary home improvement store, demonstrate the reliability of both approaches. Moreover, we present first, very encouraging results of long-term field trials that have been running with three robotic shopping assistants in another home improvement store in Bavaria since March 2008. In this field test, the robots demonstrated their suitability for this challenging real-world application as well as the necessary user acceptance.
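
As a rough illustration of the Rao-Blackwellized SLAM idea mentioned above, the following Python sketch shows the core particle filter loop in which every particle carries its own pose hypothesis and map. The helper functions sample_motion, measurement_likelihood, and update_map are hypothetical placeholders for the motion model, sensor model, and map update; this is a conceptual sketch, not the ShopBot implementation.

```python
import numpy as np

def rbpf_slam_step(particles, odometry, scan,
                   sample_motion, measurement_likelihood, update_map):
    """One Rao-Blackwellized particle filter SLAM update (illustrative sketch).

    Each particle is a dict {'pose': np.ndarray, 'map': <grid or graph map>,
    'weight': float}. sample_motion, measurement_likelihood and update_map are
    hypothetical placeholders for the motion model, sensor model and map update.
    """
    # 1) Propagate every particle's pose through the noisy motion model.
    for p in particles:
        p['pose'] = sample_motion(p['pose'], odometry)

    # 2) Weight each particle by how well the current scan fits its own map.
    for p in particles:
        p['weight'] *= measurement_likelihood(scan, p['pose'], p['map'])
    total = sum(p['weight'] for p in particles)
    for p in particles:
        p['weight'] /= total

    # 3) Update each particle's individual map with the scan at its pose.
    for p in particles:
        p['map'] = update_map(p['map'], p['pose'], scan)

    # 4) Resample only when the effective sample size drops too low.
    n_eff = 1.0 / sum(p['weight'] ** 2 for p in particles)
    if n_eff < len(particles) / 2:
        weights = [p['weight'] for p in particles]
        idx = np.random.choice(len(particles), size=len(particles), p=weights)
        # NOTE: in a real implementation the maps must be copied here, too.
        particles = [dict(pose=np.copy(particles[i]['pose']),
                          map=particles[i]['map'],
                          weight=1.0 / len(particles)) for i in idx]
    return particles
```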


Neural Networks | 1999

Generative character of perception: a neural architecture for sensorimotor anticipation

Horst-Michael Gross; Andrea Heinze; Torsten Seiler; Volker Stephan

The basic idea of our anticipatory approach to perception is to avoid the common separation of perception and generation of behavior and to fuse both aspects into a consistent neural process. Our approach tries to explain the phenomenon of perception, in particular perception at the level of sensorimotor intelligence, from a behavior-oriented point of view. Perception is assumed to be a generative process of anticipating the course of events resulting from alternative sequences of hypothetically executed actions. By means of this sensorimotor anticipation, it is possible to characterize a visual scene immediately in categories of behavior, i.e. by a set of actions which describe possible ways of interacting with the objects in the environment. Thus, the competence to perceive a complex situation can be understood as the capability to anticipate the course of events caused by different action sequences. Starting from an abstract description of anticipatory perception and the essential biological evidence for internal simulation, we present two biologically motivated computational models that are able to anticipate and evaluate hypothetical sensorimotor sequences. Both models consider functional aspects of those cortical and subcortical systems that are assumed to be involved in the process of sensory prediction and sensorimotor control. Our first approach, the Model for Anticipation based on Sensory IMagination (MASIM), realizes a sequential search in sensorimotor space using a simple model of the lateral cerebellum as sensory predictor. We demonstrate the efficiency of this approach in the context of visually guided local navigation behaviors of a mobile system. The second approach, the Model for Anticipation based on Cortical Representations (MACOR), is currently still at a conceptual level of realization. We postulate that this model allows a completely parallel search at the neocortical level, using assemblies of spiking neurons for grouping, separation, and selection of sensorimotor sequences. Both models are intended as general schemes for anticipation-based perception at the level of sensorimotor intelligence.
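
The sequential search performed by MASIM can be pictured with a small Python sketch: candidate action sequences are rolled out through a forward model (the sensory predictor), the anticipated outcomes are evaluated, and only the first action of the best sequence is executed. The functions predict and evaluate are hypothetical stand-ins for the learned models; the sketch conveys only the scheme, not the paper's neural realization.

```python
from itertools import product

def anticipate_and_act(state, actions, predict, evaluate, horizon=3):
    """Choose an action by internally simulating action sequences (sketch).

    predict(state, action) -> predicted next sensory state (forward model)
    evaluate(state)        -> scalar desirability of a predicted state
    Both are hypothetical placeholders for learned models.
    """
    best_value, best_first_action = float('-inf'), None
    # Sequentially search the space of hypothetical action sequences.
    for sequence in product(actions, repeat=horizon):
        simulated = state
        value = 0.0
        for action in sequence:
            simulated = predict(simulated, action)   # sensory anticipation
            value += evaluate(simulated)             # judge anticipated outcome
        if value > best_value:
            best_value, best_first_action = value, sequence[0]
    # Execute only the first action; re-plan once new sensory input arrives.
    return best_first_action
```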


Intelligent Robots and Systems | 2011

Progress in developing a socially assistive mobile home robot companion for the elderly with mild cognitive impairment

Horst-Michael Gross; Ch. Schroeter; S. Mueller; Michael Volkhardt; Erik Einhorn; Andreas Bley; Ch. Martin; T. Langner; Matthias Merten

The paper addresses several aspects of our work as part of the European FP7 project “CompanionAble” and gives an overview of the progress in developing a socially assistive home robot companion for elderly people with mild cognitive impairment (MCI) living alone at home. The spectrum of required assistive functionalities and services specified by the different end-user target groups of such a robot companion (the elderly, relatives, caregivers) is manifold. It ranges from situation-specific, intelligent reminders (e.g. to take medication or to drink) and cognitive stimulation, via mobile videophony with relatives or caregivers, to the autonomous detection of dangerous situations, such as falls, and their assessment by authorized persons via mobile telepresence. From the beginning, our approach has focused on long-term and everyday suitability and low-cost producibility as important prerequisites for the marketability of the robot companion. Against this background, the paper presents the main system requirements derived from user studies, the consequences for the hardware design and functionality of the robot companion, its system architecture, a key technology for HRI in home environments (autonomous user tracking and searching), and the results of already conducted and ongoing functionality tests as well as upcoming user studies.


International Conference on Robotics and Automation | 2013

Realization and user evaluation of a companion robot for people with mild cognitive impairments

Ch. Schroeter; S. Mueller; Michael Volkhardt; Erik Einhorn; C. Huijnen; H. van den Heuvel; A. van Berlo; Andreas Bley; Horst-Michael Gross

This paper presents results of user evaluations with a socially assistive robot companion for older people suffering from mild cognitive impairment (MCI) and living (alone) at home. Within the European FP7 project “CompanionAble” (2008-2012) [1], we developed assistive technologies combining a mobile robot and a smart environment with the aim of supporting these people and assisting them in living in their familiar home environment. For a final evaluation, user experience studies were conducted with volunteer users who were invited to a test home, where they lived and freely used the robot and the integrated system over a period of two days. Services provided by the companion robot include reminders of appointments (pre-defined or added by the users themselves or their informal carer) as well as frequent recommendations of specific activities, listed e.g. by their family carers. Furthermore, video contact with relatives and friends, a cognitive stimulation game designed especially to counter the progression of cognitive impairments, and the possibility to store personal items with the robot are offered. Recognition of the user entering or leaving the home triggers situation-specific reminders such as agenda items due during the (expected) absence, missed calls, or items not to be forgotten. Continuing our previous work published in [2], this paper presents a detailed description of the implemented assistive functions and results of user studies conducted during April and May 2012 in the smart house of the Dutch project partner Smart Homes in Eindhoven, The Netherlands.


Intelligent Robots and Systems | 2002

Vision-based Monte Carlo self-localization for a mobile service robot acting as shopping assistant in a home store

Horst-Michael Gross; Alexander Koenig; Hans-Joachim Boehme; Ch. Schroeter

We present a novel omnivision-based robot localization approach which utilizes Monte Carlo Localization (MCL), a Bayesian filtering technique based on a density representation by means of particles. The capability of this method to approximate arbitrary likelihood densities is a crucial property for dealing with highly ambiguous localization hypotheses, as they are typical of real-world environments. We show how omnidirectional imaging can be combined with the MCL algorithm to globally localize and track a mobile robot, given a taught graph-based representation of the operation area. In contrast to other approaches, the nodes of our graph are labeled with both visual feature vectors extracted from the omnidirectional image and odometric data about the pose of the robot at the moment of node insertion (position and heading direction). To demonstrate the reliability of our approach, we present first experimental results in the context of a challenging robotics application: the self-localization of a mobile service robot acting as a shopping assistant in a very regularly structured, maze-like, and crowded environment, a home store.
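
Below is a minimal Python sketch of the described omnivision-based MCL cycle, assuming the taught graph is available as arrays of node poses and reference feature vectors. The Gaussian similarity measure and noise parameters are illustrative choices, not the values used in the paper.

```python
import numpy as np

def mcl_update(particles, weights, odometry_delta, observed_features,
               node_poses, node_features, motion_noise=(0.05, 0.05, 0.02)):
    """One Monte Carlo Localization step against a graph-based appearance map.

    Illustrative sketch only: particles are (x, y, theta) rows; node_poses and
    node_features hold the taught graph nodes and their reference image
    feature vectors. The similarity measure is a simple placeholder.
    """
    n = len(particles)
    # Prediction: move particles by the odometry increment plus noise.
    noise = np.random.normal(0.0, motion_noise, size=(n, 3))
    particles = particles + odometry_delta + noise

    # Correction: weight each particle by the appearance similarity between
    # the current omnidirectional features and those of the nearest map node.
    for i, (x, y, _theta) in enumerate(particles):
        nearest = np.argmin(np.linalg.norm(node_poses[:, :2] - [x, y], axis=1))
        diff = np.linalg.norm(observed_features - node_features[nearest])
        weights[i] *= np.exp(-0.5 * diff ** 2)
    weights /= weights.sum()

    # Resampling: draw particles proportionally to their weights.
    idx = np.random.choice(n, size=n, p=weights)
    return particles[idx], np.full(n, 1.0 / n)
```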


Intelligent Robots and Systems | 2003

Omnivision-based probabilistic self-localization for a mobile shopping assistant continued

Horst-Michael Gross; Alexander Koenig; Christof Schroeter; Hans-Joachim Boehme

The basic idea of our omniview-based MCL approach and preliminary experimental results were presented in our previous paper [Proc. IROS 2002, pp. 256-262]. Continuing that work, this paper describes a number of methodical and technical improvements addressing challenges arising from the characteristics of our real-world application, the vision-based self-localization of a mobile robot that acts as a shopping assistant in the maze-like environment of a home store. To cope with highly variable illumination conditions, we present a reference-based correction approach that realizes robust, automatic luminance stabilization and color adaptation already at the level of image formation. To deal with severe occlusions or disturbances of the omnidirectional image caused by, e.g., people standing near the robot or local illumination artifacts, we introduce a novel selective observation comparison method as a prerequisite for a robust particle filter update. Further studies investigate the impact of the utilized observation model on the localization accuracy. The results of a series of localization experiments carried out in the home store confirm the robustness and superiority of our advanced, real-time approach.
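
The idea of a selective observation comparison can be sketched as follows, assuming the omnidirectional observation is split into per-sector feature vectors: the worst-matching sectors (e.g. those occluded by nearby customers) are discarded before the aggregate similarity enters the particle weight. The sector representation and the keep_fraction parameter are hypothetical, for illustration only.

```python
import numpy as np

def selective_similarity(observed_sectors, reference_sectors, keep_fraction=0.75):
    """Compare an omnidirectional observation sector-wise and ignore outliers.

    Sketch of a selective observation comparison: both inputs are arrays of
    per-sector feature vectors; the worst-matching sectors are discarded
    before aggregation. keep_fraction is a hypothetical tuning parameter.
    """
    # Per-sector distances between the current view and the map reference.
    distances = np.linalg.norm(observed_sectors - reference_sectors, axis=1)
    # Keep only the best-matching fraction of sectors.
    n_keep = max(1, int(keep_fraction * len(distances)))
    kept = np.sort(distances)[:n_keep]
    # Turn the robust aggregate distance into a likelihood-like weight.
    return float(np.exp(-0.5 * kept.mean() ** 2))
```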


International Symposium on Neural Networks | 2000

Camera-based gesture recognition for robot control

Andrea Corradini; Horst-Michael Gross

Several systems for automatic gesture recognition have been developed using different strategies and approaches. In these systems the recognition engine is mainly based on three algorithms: dynamic pattern matching, statistical classification, and neural networks (NN). In this paper we present four architectures for gesture-based interaction between a human being and an autonomous mobile robot using the above mentioned techniques or a hybrid combination of them. Each of our gesture recognition architectures consists of a preprocessor and a decoder. Three different hybrid stochastic/connectionist architectures are considered. In addition, a template matching problem is dealt with by making use of dynamic programming techniques; the strategy is to find the minimal distance between a continuous input feature sequence and the class templates. Preliminary experiments with our baseline system achieved a recognition accuracy of up to 92%. All systems use input from a monocular color video camera and are user-independent, but they do not yet run in real time.
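
The dynamic-programming template matching described above corresponds to classic dynamic time warping. The following Python sketch finds the class whose templates have the smallest warping distance to a continuous input feature sequence; the data layout (one list of template sequences per class label) is an assumption for illustration.

```python
import numpy as np

def dtw_distance(sequence, template):
    """Dynamic time warping distance between two feature sequences (sketch)."""
    n, m = len(sequence), len(template)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = np.linalg.norm(sequence[i - 1] - template[j - 1])
            # Allow match, insertion and deletion, as in classic DTW.
            cost[i, j] = d + min(cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1])
    return cost[n, m]

def classify_gesture(sequence, class_templates):
    """Assign the input to the gesture class with the smallest DTW distance.

    class_templates is assumed to map each class label to a list of template
    feature sequences (hypothetical data layout).
    """
    return min(class_templates,
               key=lambda label: min(dtw_distance(sequence, t)
                                     for t in class_templates[label]))
```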


Robotics and Autonomous Systems | 2004

A Multi-Modal System for Tracking and Analyzing Faces on a Mobile Robot

Torsten Wilhelm; Hans-Joachim Böhme; Horst-Michael Gross

This paper describes a user detection system which employs a saliency system working on an omnidirectional camera, delivering a rough and fast estimate of the position of a potential user. It consists of a vision-based (skin color) and a sonar-based component, which are combined to make the estimate more reliable. To make the skin color detection robust under varying illumination conditions, it is supplied with an automatic white balance algorithm. The active vision head continuously looks in the direction of the salient region. Thus, a high-resolution image can be grabbed and analyzed with a face detector.
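
A minimal sketch of how automatic white balance and skin color segmentation can be combined, assuming RGB input images, a simple gray-world correction, and thresholds in normalized rg chromaticity space; the threshold ranges are illustrative only and not those of the described system.

```python
import numpy as np

def gray_world_white_balance(image):
    """Scale each channel so its mean matches the overall gray mean (sketch)."""
    image = image.astype(np.float32)
    channel_means = image.reshape(-1, 3).mean(axis=0)
    gains = channel_means.mean() / (channel_means + 1e-6)
    return np.clip(image * gains, 0, 255)

def skin_mask(image, r_range=(0.36, 0.47), g_range=(0.28, 0.35)):
    """Threshold normalized rg chromaticities; ranges are illustrative only.

    image is assumed to be an (H, W, 3) array in RGB channel order.
    """
    balanced = gray_world_white_balance(image)
    total = balanced.sum(axis=2) + 1e-6
    r = balanced[..., 0] / total
    g = balanced[..., 1] / total
    return ((r > r_range[0]) & (r < r_range[1]) &
            (g > g_range[0]) & (g < g_range[1]))
```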


Intelligent Robots and Systems | 2012

MIRA - middleware for robotic applications

Erik Einhorn; T. Langner; Ronny Stricker; Christian Märtin; Horst-Michael Gross

In this paper, we present MIRA, a new middleware for robotic applications. It is designed for use in real-world applications as well as for research and teaching. In comparison to many other existing middlewares, MIRA employs novel techniques for communication, which are described in this paper. Moreover, we present benchmarks that analyze the performance of the most commonly used middlewares: ROS, YARP, LCM, Player, Urbi, and MOOS. Using these benchmarks, we show that MIRA outperforms the other middlewares in terms of latency and computation time.
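
MIRA's actual API is not shown here; as a hedged illustration of the kind of measurement such middleware comparisons rest on, the following self-contained Python sketch measures round-trip message latency between two threads connected by simple in-process queues.

```python
import queue
import threading
import time

def echo_worker(inbox, outbox):
    """Echo every message back, simulating a subscriber that replies."""
    while True:
        msg = inbox.get()
        if msg is None:          # shutdown sentinel
            break
        outbox.put(msg)

def measure_roundtrip_latency(n_messages=10000, payload=b'x' * 64):
    """Round-trip latency of an in-process message queue (illustrative only).

    This is NOT MIRA's API; it only shows the shape of a latency benchmark
    that middleware comparisons are typically based on.
    """
    to_worker, from_worker = queue.Queue(), queue.Queue()
    worker = threading.Thread(target=echo_worker, args=(to_worker, from_worker))
    worker.start()

    latencies = []
    for _ in range(n_messages):
        start = time.perf_counter()
        to_worker.put(payload)
        from_worker.get()
        latencies.append(time.perf_counter() - start)

    to_worker.put(None)
    worker.join()
    latencies.sort()
    return {'median_us': latencies[len(latencies) // 2] * 1e6,
            'p99_us': latencies[int(len(latencies) * 0.99)] * 1e6}
```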

Collaboration


Dive into Horst-Michael Gross's collaborations.

Top Co-Authors

Hans-Joachim Boehme, Technische Universität Ilmenau
Andrea Scheidig, Technische Universität Ilmenau
Hans-Joachim Böhme, Technische Universität Ilmenau
Klaus Debes, Technische Universität Ilmenau
Volker Stephan, Technische Universität Ilmenau
Christof Schroeter, Technische Universität Ilmenau
Steffen Müller, Technische Universität Ilmenau
Markus Eisenbach, Technische Universität Ilmenau