
Publications


Featured research published by Andrea Corradini.


international symposium on neural networks | 2000

Camera-based gesture recognition for robot control

Andrea Corradini; Horst-Michael Gross

Several systems for automatic gesture recognition have been developed using different strategies and approaches. In these systems the recognition engine is mainly based on three algorithms: dynamic pattern matching, statistical classification, and neural networks (NN). In this paper we present four architectures for gesture-based interaction between a human being and an autonomous mobile robot using the above-mentioned techniques or a hybrid combination of them. Each of our gesture recognition architectures consists of a preprocessor and a decoder. Three different hybrid stochastic/connectionist architectures are considered. The remaining system treats recognition as a template matching problem solved with dynamic programming techniques; the strategy is to find the minimal distance between a continuous input feature sequence and the class templates. Preliminary experiments with our baseline system achieved a recognition accuracy of up to 92%. All systems use input from a monocular color video camera and are user-independent, but they do not yet run in real time.
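The template-matching strategy described in this abstract can be illustrated with a minimal dynamic-time-warping sketch. This is not the authors' code; the feature vectors, distance measure, and class labels are illustrative assumptions.

```python
# Minimal DTW sketch: minimal cumulative distance between a continuous
# input feature sequence and per-class templates, via dynamic programming.

def dtw_distance(seq, template):
    """Cumulative DTW distance between two sequences of feature vectors."""
    n, m = len(seq), len(template)
    INF = float("inf")
    # D[i][j]: minimal cost of aligning seq[:i] with template[:j]
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            # Euclidean distance between the two feature vectors
            d = sum((a - b) ** 2 for a, b in zip(seq[i - 1], template[j - 1])) ** 0.5
            D[i][j] = d + min(D[i - 1][j],      # insertion
                              D[i][j - 1],      # deletion
                              D[i - 1][j - 1])  # match
    return D[n][m]

def classify(seq, templates):
    """Assign the input sequence to the class with minimal DTW distance."""
    return min(templates, key=lambda label: dtw_distance(seq, templates[label]))
```

The quadratic table makes the alignment invariant to local differences in gesture speed, which is why DTW suits continuous feature sequences of varying duration.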


ieee international conference on automatic face and gesture recognition | 1998

User localisation for visually-based human-machine-interaction

Hans-Joachim Boehme; Ulf-Dietrich Braumann; Anja Brakensiek; Andrea Corradini; Markus Krabbes; Horst-Michael Gross

Recently there has been increasing interest in video-based interface techniques, which allow more natural interaction between users and systems than common interface devices do. We present a neural architecture for user localisation, embedded within a complex system for visually based human-machine interaction (HMI). User localisation is an absolute prerequisite for video-based HMI. Because our main objective is the greatest possible robustness of the localisation, and of the whole visual interface, under highly varying environmental conditions, we propose a multiple-cue approach. It combines facial structure, head-shoulder contour, skin color, and motion cues within a multiscale representation. The image region most likely to contain a potential user is then selected via a winner-take-all (WTA) process within the multiscale representation. Preliminary results show the reliability of the multiple-cue approach.
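The cue-combination idea can be sketched as follows. The cue names, weights, and map sizes here are illustrative assumptions, not the paper's values, and the real system operates on a multiscale pyramid rather than a single grid.

```python
# Sketch of multiple-cue fusion followed by winner-take-all selection:
# several saliency maps (e.g. skin color, motion) are combined, and the
# position with the strongest fused response is chosen as the user region.

def fuse_cues(cue_maps, weights):
    """Weighted sum of equally sized 2D saliency maps."""
    rows, cols = len(cue_maps[0]), len(cue_maps[0][0])
    fused = [[0.0] * cols for _ in range(rows)]
    for w, cue in zip(weights, cue_maps):
        for r in range(rows):
            for c in range(cols):
                fused[r][c] += w * cue[r][c]
    return fused

def winner_take_all(saliency):
    """Return the (row, col) position of the strongest response."""
    return max(
        ((r, c) for r in range(len(saliency)) for c in range(len(saliency[0]))),
        key=lambda rc: saliency[rc[0]][rc[1]],
    )
```

Combining independent cues this way means no single failing cue (e.g. motion in a static scene) can by itself suppress a true user position.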


international conference on acoustics, speech, and signal processing | 2000

Implementation and comparison of three architectures for gesture recognition

Andrea Corradini; Horst-Michael Gross

Several systems for automatic gesture recognition have been developed using different strategies and approaches. In these systems the recognition engine is mainly based on three algorithms: dynamic pattern matching, statistical classification, and neural networks (NN). In this paper, three architectures for the recognition of dynamic gestures using the above-mentioned techniques or a hybrid combination of them are presented and compared. For all architectures, a common preprocessor receives a sequence of color images as input and produces a sequence of continuous-valued feature vectors as output. The first two systems are hybrid architectures combining neural networks and hidden Markov models (HMM): NNs classify single feature vectors while HMMs model sequences of them, with the aim of exploiting the strengths of both tools. More precisely, in the first system a Kohonen feature map (SOM) clusters the input space; each codebook vector is then mapped to a symbol from a discrete alphabet and fed into a discrete HMM for classification. In the second approach a radial basis function (RBF) network directly computes the HMM state observation probabilities. The last system employs only dynamic programming techniques: an input sequence of feature vectors is matched against predefined templates using the dynamic time warping (DTW) algorithm. Preliminary experiments with our baseline systems achieved a recognition accuracy of up to 92%. All systems use input from a monocular color video camera and are user-independent, but they do not yet run in real time.
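The SOM-as-quantizer step in the first architecture can be sketched briefly. The codebook below is hand-picked for illustration; in the paper it is learned by Kohonen self-organization.

```python
# Sketch of vector quantization with a learned codebook: each continuous
# feature vector is replaced by the index of its nearest codebook vector,
# turning the sequence into discrete symbols suitable for a discrete HMM.

def nearest_codebook(vector, codebook):
    """Index of the closest codebook vector (squared Euclidean distance)."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    return min(range(len(codebook)), key=lambda i: sqdist(vector, codebook[i]))

def quantize_sequence(features, codebook):
    """Map a sequence of feature vectors to a sequence of discrete symbols."""
    return [nearest_codebook(v, codebook) for v in features]
```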


International Journal on Artificial Intelligence Tools | 2000

A Hybrid Stochastic-Connectionist Approach to Gesture Recognition

Andrea Corradini; Hans-Joachim Boehme; Horst-Michael Gross

In this paper a person-specific saliency system and, subsequently, two architectures for the recognition of dynamic gestures are described. The systems are designed to take a sequence of images and assign it to one of a number of discrete classes, each corresponding to a gesture from a predefined small vocabulary. Since we regard localization of the user as essential for any further step in the recognition and interpretation of gestures in human-computer interaction, we begin by describing our saliency system dedicated to person localization in cluttered environments. The gesture recognition process itself is then broken down into an initial preprocessing stage followed by a mapping from the preprocessed input variables to an output variable representing the class label. We utilize two different classifiers for mapping the ordered sequence of feature vectors to one gesture category. The first utilizes a hybrid combination of a Kohonen self-organizing map (SOM) and discrete hidden Markov models (DHMM); the second is a system of continuous hidden Markov models (CHMM). Preliminary experiments with our baseline systems are demonstrated.
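What distinguishes a continuous HMM from the discrete variant is that emissions are densities over feature vectors rather than a finite symbol alphabet. A hedged sketch, assuming one diagonal-covariance Gaussian per state (the paper does not specify the emission model in this abstract):

```python
import math

def gaussian_log_density(x, mean, var):
    """Log density of a diagonal-covariance Gaussian at feature vector x."""
    logp = 0.0
    for xi, mi, vi in zip(x, mean, var):
        logp += -0.5 * (math.log(2.0 * math.pi * vi) + (xi - mi) ** 2 / vi)
    return logp

def best_state(x, state_means, state_vars):
    """State whose Gaussian assigns the highest log density to x."""
    return max(range(len(state_means)),
               key=lambda s: gaussian_log_density(x, state_means[s], state_vars[s]))
```

This avoids the quantization loss of the SOM/DHMM route at the cost of estimating per-state density parameters.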


Proceedings 1999 International Conference on Information Intelligence and Systems (Cat. No.PR00446) | 1999

A hybrid stochastic-connectionist architecture for gesture recognition

Andrea Corradini; Horst-Michael Gross

An architecture for the recognition of dynamic gestures is described. The system is designed to take a sequence of images and assign it to one of a number of discrete classes, each corresponding to a gesture from a predefined vocabulary. The classification task is broken down into an initial preprocessing stage followed by a mapping from the preprocessed input variables to an output variable representing the class label. The preprocessing stage extracts one translation- and scale-invariant feature vector from each image of the sequence. We then utilize a hybrid combination of a Kohonen self-organizing map (SOM) and discrete hidden Markov models (DHMM) for mapping an ordered sequence of feature vectors to one gesture category, creating one DHMM for each movement to be detected. In the learning phase the SOM clusters the feature vector space; after the self-organizing process each codebook vector is quantized into a symbol. Every symbol sequence underlying a given movement is then used to train the corresponding Markov model with the non-discriminative Baum-Welch algorithm, maximizing the probability of the samples given the model at hand. In the recognition phase the SOM transforms any input image sequence into a symbol sequence, which is subsequently fed into the system of DHMMs. The gesture associated with the model that best matches the observed symbol sequence is chosen as the recognized movement.
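The recognition phase described here, scoring a symbol sequence against one HMM per gesture and picking the best, can be sketched with the standard forward algorithm. The model parameters below are illustrative, not trained values.

```python
# Forward algorithm for a discrete HMM, plus the "best matching model wins"
# decision rule used in the recognition phase.

def forward_probability(symbols, pi, A, B):
    """P(symbols | model). pi: initial state probabilities,
    A[i][j]: transition probs, B[i][k]: prob of symbol k in state i."""
    n_states = len(pi)
    alpha = [pi[i] * B[i][symbols[0]] for i in range(n_states)]
    for sym in symbols[1:]:
        alpha = [
            sum(alpha[i] * A[i][j] for i in range(n_states)) * B[j][sym]
            for j in range(n_states)
        ]
    return sum(alpha)

def recognize(symbols, models):
    """Gesture whose HMM best explains the observed symbol sequence."""
    return max(models, key=lambda g: forward_probability(symbols, *models[g]))
```

A production version would work in log space (or with scaling) to avoid underflow on long sequences; the sketch omits this for clarity.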


GW '99 Proceedings of the International Gesture Workshop on Gesture-Based Communication in Human-Computer Interaction | 1999

Person Localization and Posture Recognition for Human-Robot Interaction

Hans-Joachim Böhme; Ulf-Dietrich Braumann; Andrea Corradini; Horst-Michael Gross

The development of a hybrid system for (mainly) gesture-based human-robot interaction is presented, describing the progress made since the work shown at the last gesture workshop (see [2]). The system makes use of standard image processing techniques as well as neural information processing. Our architecture detects a person as a potential user in an indoor environment and then recognizes her gestural instructions. In this paper, we concentrate on two major mechanisms: (i) contour-based person localization via a combination of steerable filters and three-dimensional dynamic neural fields, and (ii) our first experiences with the recognition of different instructional postures via a combination of statistical moments and neural classifiers.
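Dynamic neural fields of the kind used in (i) are typically of the Amari type: local excitation plus global inhibition lets a single winning region of activity stabilize over a noisy stimulus. A minimal one-dimensional sketch follows; the paper uses three-dimensional fields, and every parameter here is an illustrative assumption.

```python
# One-dimensional dynamic neural field (Amari-type) relaxation sketch:
# a localized stimulus peak drives one self-sustaining active region.

def field_step(u, stimulus, dt=0.1, tau=1.0, h=-0.5,
               w_exc=1.0, w_inh=0.7, radius=1):
    f = [1.0 if ui > 0 else 0.0 for ui in u]       # step-function firing rate
    total = sum(f)
    new_u = []
    for i in range(len(u)):
        # local excitation from neighbours within `radius`, global inhibition
        exc = sum(f[j] for j in range(max(0, i - radius),
                                      min(len(u), i + radius + 1)))
        interaction = w_exc * exc - w_inh * total
        new_u.append(u[i] + dt / tau * (-u[i] + h + stimulus[i] + interaction))
    return new_u

def relax(stimulus, steps=200):
    """Iterate the field dynamics to (approximate) equilibrium."""
    u = [0.0] * len(stimulus)
    for _ in range(steps):
        u = field_step(u, stimulus)
    return u
```

The competitive dynamics act as a temporal filter: a transient false detection decays, while a persistent contour response keeps its field region above threshold.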


Archive | 1999

Visual Person Localization with Dynamic Neural Fields: Towards a Gesture Recognition System

Andrea Corradini; Ulf-Dietrich Braumann; Anja Brakensiek; Markus Krabbes; Hans Joachim Boehme; Horst-Michael Gross

For any visually based interaction between persons and acting systems in a real-world environment, localization of the user by the system is a necessary condition. The presented work deals with this visual localization problem for the autonomous mobile robot system MILVA of our department. Since the system operates under real-world conditions, the localization in particular requires techniques with adequate robustness. In our opinion, this calls for combining several saliency components into a multi-cue approach consisting of structure- and color-based features [2].


Archive | 1999

Gesture Recognition using Hybrid SOM/DHMM

Andrea Corradini; Horst-Michael Gross

This paper describes a method for the recognition of dynamic gestures using a combined neural network / discrete hidden Markov model approach. The work deals with four topics. First, a reliable and robust person localization task is presented. We then focus on the view-based recognition of the user's static gestural instructions from a predefined vocabulary, based on both a skin color model and statistically normalized moment invariants. The postures are segmented by means of the skin color model based on the Mahalanobis metric. From the resulting binary image, containing only regions classified as skin candidates, we extract translation- and scale-invariant moments. A Kohonen self-organizing map (SOM) is then used to cluster the feature space. After the self-organizing process we modify the SOM weight vectors with the learning vector quantization (LVQ) method, causing the weights to approach the decision boundaries, and quantize each of them into a symbol. Finally, the symbol sequence extracted from the time-sequential images is used as input for a system of discrete hidden Markov models (DHMMs).
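The two front-end steps, Mahalanobis-based skin segmentation and invariant moments from the binary mask, can be sketched as follows. The color model, threshold, and two-channel color space are illustrative assumptions, not the paper's trained values.

```python
# (1) Pixel-wise skin classification by Mahalanobis distance to a skin
#     color model; (2) translation- and scale-invariant normalized central
#     moments computed from the resulting binary mask.

def mahalanobis_sq(pixel, mean, inv_cov):
    """Squared Mahalanobis distance for a 2-channel color pixel."""
    d0, d1 = pixel[0] - mean[0], pixel[1] - mean[1]
    return (d0 * (inv_cov[0][0] * d0 + inv_cov[0][1] * d1)
            + d1 * (inv_cov[1][0] * d0 + inv_cov[1][1] * d1))

def skin_mask(image, mean, inv_cov, threshold=9.0):
    """Binary mask of pixels within `threshold` of the skin model."""
    return [[1 if mahalanobis_sq(px, mean, inv_cov) <= threshold else 0
             for px in row] for row in image]

def normalized_central_moment(mask, p, q):
    """eta_pq: translation- and scale-invariant moment of a binary mask."""
    m00 = sum(sum(row) for row in mask)
    if m00 == 0:
        return 0.0
    xc = sum(x * v for y, row in enumerate(mask) for x, v in enumerate(row)) / m00
    yc = sum(y * v for y, row in enumerate(mask) for x, v in enumerate(row)) / m00
    mu = sum(((x - xc) ** p) * ((y - yc) ** q) * v
             for y, row in enumerate(mask) for x, v in enumerate(row))
    return mu / (m00 ** (1 + (p + q) / 2.0))
```

Centering on the centroid gives translation invariance; dividing by a power of the mask area gives scale invariance, so the same posture yields similar features regardless of where the hand appears or how close the user stands.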


Ninth Workshop on Virtual Intelligence/Dynamic Neural Networks: Neural Networks Fuzzy Systems, Evolutionary Systems and Virtual Re | 1999

3D neural fields and steerable filters for contour-based person localization

Andrea Corradini; Ulf-Dietrich Braumann; Hans-Joachim Boehme; Horst-Michael Gross

This paper introduces a way to locate persons in visual images of cluttered scenes using a shape-of-contour approach. The contour which we refer to is that of the upper body of frontally aligned persons.


Mustererkennung 1998, 20. DAGM-Symposium | 1998

Contour-based person localization using three-dimensional neural fields and steerable filters

Ulf-Dietrich Braumann; Andrea Corradini; Hans-Joachim Böhme; Horst-Michael Gross

This paper presents a method for localizing persons in unprepared visual scenes based on typical contours. It focuses on the outer contour of frontally oriented persons in the head-and-shoulder region. This contour is approximated by a spatially distributed arrangement of steerable filters, applied to a number of pyramidally graded resolutions of an image.
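The property that makes steerable filters attractive for contour matching can be shown in a few lines: for first-derivative-of-Gaussian filters, the response at an arbitrary orientation is an exact linear combination of just two basis responses, so orientation can be probed cheaply at every pixel. A hedged sketch (the basis responses are assumed given):

```python
import math

def steer_response(r_x, r_y, theta):
    """Response of a first-derivative Gaussian filter steered to `theta`,
    given the responses r_x, r_y of the 0-degree and 90-degree basis filters."""
    return math.cos(theta) * r_x + math.sin(theta) * r_y

def dominant_orientation(r_x, r_y):
    """Orientation maximizing the steered response at one pixel."""
    return math.atan2(r_y, r_x)
```

Only the two basis convolutions are computed per image; every other orientation comes from this closed-form interpolation.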

Collaboration


Andrea Corradini's top co-authors.

Top Co-Authors

Horst-Michael Gross, Technische Universität Ilmenau
Hans-Joachim Böhme, Technische Universität Ilmenau
Anja Brakensiek, Technische Universität Ilmenau
Markus Krabbes, Otto-von-Guericke University Magdeburg