Publication


Featured research published by Axel Panning.


International Conference on Signal and Image Processing Applications | 2011

A color based approach for eye blink detection in image sequences

Axel Panning; Ayoub Al-Hamadi; Bernd Michaelis

The human eye gives clues about a person's mental state in the context of applications such as stress, confusion, and drowsiness detection. In this work we present a new algorithm for eye blink detection that employs color information from the region surrounding the eye. It is capable of detecting both complete eye blinks and relative eyelid movement. The algorithm handles moderate head rotations, as long as the eye remains reliably detectable. Our approach is efficient yet simple, since no segmentation of discrete eye features such as lids or iris is required. The results show good performance on non-acted, real human-computer interaction videos in which subjects were allowed to behave freely. Furthermore, we compare our results to state-of-the-art methods.
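The core idea can be sketched as follows: when the eyelid closes, the eye region shows a larger fraction of skin-colored pixels, so a rising skin ratio signals a blink. This is a minimal illustration, not the paper's implementation; the RGB thresholds and the ratio threshold are hypothetical.

```python
import numpy as np

def skin_ratio(eye_roi, r_min=120, g_max=110):
    """Fraction of pixels in the eye region classified as skin by a crude
    RGB threshold (hypothetical thresholds, for illustration only)."""
    r, g = eye_roi[..., 0], eye_roi[..., 1]
    skin = (r > r_min) & (g < g_max)
    return float(skin.mean())

def detect_blinks(ratios, threshold=0.6):
    """Flag a blink whenever the per-frame skin ratio crosses the
    threshold from below (eyelid covering the eye exposes more skin)."""
    blinks = []
    for t in range(1, len(ratios)):
        if ratios[t - 1] < threshold <= ratios[t]:
            blinks.append(t)
    return blinks
```

Because only a pixel ratio per frame is needed, no eyelid or iris segmentation is required, which matches the simplicity the abstract emphasizes.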


Journal of Multimedia | 2010

Emotion Recognition based on 2D-3D Facial Feature Extraction from Color Image Sequences

Robert Niese; Ayoub Al-Hamadi; Axel Panning; Bernd Michaelis

In modern human-computer interaction systems, emotion recognition from video is becoming an essential feature. In this work we propose a new method for automatic recognition of facial expressions related to the categories of basic emotions from image data. Our method incorporates a series of image processing, low-level 3D computer vision, and pattern recognition techniques. For image feature extraction, color and gradient information is used. For 3D processing, camera models are applied along with an initial registration step in which person-specific face models are automatically built from stereo. Based on these face models, geometric feature measures are computed and normalized using photogrammetric techniques. This normalization minimizes mixing between the emotion classes, which are determined with an artificial neural network classifier. Our framework achieves robust and superior classification results, also across a variety of head poses with the resulting perspective foreshortening and changing face size. Results are presented for in-house and publicly available databases.
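Normalized geometric feature measures of the kind described above can be sketched as landmark distances divided by a reference distance, so that face size (and thus camera distance) cancels out. This is a minimal sketch under assumed landmark names; the paper's actual measures and photogrammetric normalization are more involved.

```python
import numpy as np

def geometric_features(landmarks):
    """landmarks: dict of 2D points (hypothetical names). Distances are
    normalized by the inter-ocular distance to remove scale effects."""
    def dist(a, b):
        return float(np.linalg.norm(np.asarray(landmarks[a]) - np.asarray(landmarks[b])))
    iod = dist("eye_l", "eye_r")  # inter-ocular distance as reference
    return {
        "mouth_width": dist("mouth_l", "mouth_r") / iod,
        "mouth_open":  dist("lip_top", "lip_bot") / iod,
    }
```

Scaling every landmark by the same factor leaves the features unchanged, which is what lets a single classifier work across changing face sizes.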


MPRSS'12 Proceedings of the First international conference on Multimodal Pattern Recognition of Social Signals in Human-Computer-Interaction | 2012

Fusion of fragmentary classifier decisions for affective state recognition

Gerald Krell; Michael Glodek; Axel Panning; Ingo Siegert; Bernd Michaelis; Andreas Wendemuth; Friedhelm Schwenker

Real human-computer interaction systems based on different modalities face the problem that not all information channels are available at every time step. Nevertheless, an estimate of the current user state is required at any time so that the system can respond instantaneously based on the modalities that are available. A novel approach to decision fusion of such fragmentary classifications is therefore proposed and empirically evaluated on audio and video signals from a corpus of non-acted user behavior. It is shown that visual and prosodic analysis successfully complement each other, leading to outstanding performance of the fusion architecture.
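The availability problem described above can be sketched with a fusion rule that simply skips unavailable channels and renormalizes the remaining class probabilities. This is an illustrative simplification, not the fusion architecture of the paper.

```python
def fuse_decisions(decisions):
    """decisions: dict mapping modality name to a class-probability list,
    or None when that channel delivered no result for this time step.
    Missing channels are skipped; the mean of the remaining probability
    vectors is renormalized to sum to one."""
    available = [p for p in decisions.values() if p is not None]
    if not available:
        return None  # no modality observed: no estimate possible
    n = len(available[0])
    fused = [sum(p[i] for p in available) / len(available) for i in range(n)]
    total = sum(fused)
    return [f / total for f in fused]
```

A fused estimate thus exists whenever at least one modality reports, which is the behavior a real-time interaction system needs.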


Pattern Recognition and Image Analysis | 2008

Facial expression recognition based on Haar-like feature detection

Axel Panning; Ayoub Al-Hamadi; Robert Niese; Bernd Michaelis

In this paper we propose a novel approach for facial feature detection in color image sequences using Haar-like classifiers. The feature extraction is initialized without manual input and meets real-time requirements. For facial expression recognition, we use geometric measurements and simple texture analysis of facial regions based on the previously detected facial feature points. For expression classification we use a three-layer feed-forward artificial neural network. The efficiency of the suggested approach is demonstrated under real-world conditions.
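Haar-like features, the building block named above, are differences of rectangle sums that can be evaluated in constant time via an integral image; that constant-time evaluation is what makes real-time detection feasible. A minimal sketch of a two-rectangle feature (not the paper's trained classifier):

```python
import numpy as np

def integral_image(img):
    """Cumulative sums over rows and columns of a grayscale image."""
    return img.cumsum(axis=0).cumsum(axis=1)

def rect_sum(ii, r0, c0, r1, c1):
    """Sum of img[r0:r1, c0:c1] in O(1) using the integral image."""
    total = ii[r1 - 1, c1 - 1]
    if r0 > 0:
        total -= ii[r0 - 1, c1 - 1]
    if c0 > 0:
        total -= ii[r1 - 1, c0 - 1]
    if r0 > 0 and c0 > 0:
        total += ii[r0 - 1, c0 - 1]
    return total

def haar_two_rect(ii, r0, c0, h, w):
    """Two-rectangle Haar-like feature: left half minus right half."""
    left = rect_sum(ii, r0, c0, r0 + h, c0 + w // 2)
    right = rect_sum(ii, r0, c0 + w // 2, r0 + h, c0 + w)
    return left - right
```

A trained detector evaluates thousands of such features per window; each still costs only a handful of lookups.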


International Conference on Signal Processing | 2012

Multimodal affect recognition in spontaneous HCI environment

Axel Panning; Ingo Siegert; Ayoub Al-Hamadi; Andreas Wendemuth; Dietmar F. Rösner; Jörg Frommer; Gerald Krell; Bernd Michaelis

Human-computer interaction (HCI) is known to be a multimodal process. In this paper we show results of affect recognition experiments with non-acted, affective multimodal data from the new Last Minute Corpus (LMC). This corpus is closer to real HCI applications than other known datasets, in which affective behavior is elicited in ways untypical of HCI. We utilize features from three modalities: facial expressions, prosody, and gesture. The results show that even simple fusion architectures can reach respectable results compared to other approaches. Furthermore, we show that not all features and modalities contribute substantially to the classification process; prosody and eye blink frequency seem to contribute the most in the analyzed dataset.


International Conference on Computational Science and Its Applications | 2007

Real-time capable method for facial expression recognition in color and stereo vision

Robert Niese; Ayoub Al-Hamadi; Axel Panning; Bernd Michaelis

In this paper we present a user-independent, real-time capable automatic method for recognizing facial expressions related to basic emotions from stereo image sequences. The method automatically detects faces in unconstrained pose based on depth and color information. To overcome difficulties caused by changes in pose, lighting transitions, or complicated backgrounds, we introduce a face normalization algorithm based on the Iterative Closest Point algorithm. In the normalized face images we define a set of physiologically motivated face regions, related to a subset of facial muscles, that are suited to automatically detecting the six well-known basic emotions. Visual facial expression analysis is performed by optical-flow-based feature extraction and nearest-neighbor classification using a distance measure, i.e., the current flow vector pattern is matched against empirically determined ground-truth data.
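The final classification step can be sketched as a nearest-neighbor match of the current flow pattern against per-emotion reference patterns. A minimal illustration under assumed prototype names; the paper's distance measure and ground-truth patterns are its own.

```python
import numpy as np

def classify_flow(pattern, prototypes):
    """Match a flow-vector pattern against reference patterns
    (label -> prototype vector) by Euclidean distance; return the
    label of the closest prototype."""
    best_label, best_dist = None, float("inf")
    for label, ref in prototypes.items():
        d = float(np.linalg.norm(np.asarray(pattern) - np.asarray(ref)))
        if d < best_dist:
            best_label, best_dist = label, d
    return best_label
```

Because the face images are normalized beforehand, the same prototypes can serve across users and head poses.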


2013 IEEE International Conference on Cybernetics (CYBCO) | 2013

Using speaker group dependent modelling to improve fusion of fragmentary classifier decisions

Ingo Siegert; Michael Glodek; Axel Panning; Gerald Krell; Friedhelm Schwenker; Ayoub Al-Hamadi; Andreas Wendemuth

Current speech-controlled human-computer interaction is based purely on spoken information. For a successful interaction, additional information such as the individual skills, preferences, and actual affective state of the user is often mandatory. The most challenging of these additional inputs is the affective state, since affective cues are in general expressed very sparsely. The problem can be addressed in two ways. On the one hand, recognition can be enhanced by making use of already available individual information. On the other hand, recognition is aggravated by the fact that research is often limited to a single modality, which is critical in real-life applications since recognition may fail when sensors do not perceive a signal. We address the problem by enhancing the acoustic recognition of the affective state through partitioning the users into groups. The assignment of a user to a group is performed at the beginning of the interaction, so that a specialized classifier model is used from then on. Furthermore, we make use of several modalities: acoustics, facial expressions, and gesture information. The combination of decisions from these multiple modalities that are not affected by sensor failures is achieved by a Markov Fusion Network. The proposed approach is studied empirically using the LAST MINUTE corpus. We show that, compared to previous studies, a significant improvement of the recognition rate can be obtained.
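The speaker-group-dependent modelling step can be sketched as a one-off group assignment followed by dispatch to that group's specialized classifier. This is an illustrative reduction, assuming a single hypothetical grouping feature (mean pitch); the paper's grouping and the Markov Fusion Network are considerably richer.

```python
def assign_group(pitch_mean, group_refs):
    """Assign a speaker to the group whose reference pitch value is
    closest (hypothetical one-feature grouping, for illustration)."""
    return min(group_refs, key=lambda g: abs(group_refs[g] - pitch_mean))

def recognize(features, pitch_mean, group_models, group_refs):
    """Select the specialized classifier for the speaker's group once,
    then use it for the affect estimate."""
    group = assign_group(pitch_mean, group_refs)
    return group_models[group](features)
```

Fixing the group at the start of the interaction means the per-group model sees only speakers it was trained for, which is where the reported gain comes from.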


Systems, Man and Cybernetics | 2012

Facial feature point detection using simplified gabor wavelets and confidence-based grouping

Axel Panning; Ayoub Al-Hamadi; Bernd Michaelis

One of the first steps in most facial expression and facial analysis systems is the localization of prominent facial feature points. In this paper we present a novel approach for facial feature point detection using Simplified Gabor Wavelets (SGW). The classifier is built in cascades, where each stage of the cascade is a Gentle-AdaBoost-trained classifier. In addition, we suggest confidence-based weighted grouping of multiply detected feature points to enhance accuracy. We trained and tested our algorithm on a shuffled mix of four available labeled databases with more than 700 individuals. Our experiments achieve approximately an 82% detection rate on average, a considerable result given that the databases contain not only frontal faces.
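The confidence-based grouping can be sketched as a confidence-weighted mean of all candidate detections for one feature point, so that stronger detections pull the final location toward themselves. A minimal sketch, not the paper's exact weighting scheme.

```python
def group_detections(candidates):
    """candidates: list of (x, y, confidence) for one facial feature.
    Returns the confidence-weighted mean location."""
    total_conf = sum(c for _, _, c in candidates)
    x = sum(px * c for px, _, c in candidates) / total_conf
    y = sum(py * c for _, py, c in candidates) / total_conf
    return x, y
```

With equal confidences this reduces to a plain centroid; unequal confidences bias the result toward the most reliable detection.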


International Symposium on Visual Computing | 2010

Colored and anchored Active Shape Models for tracking and form description of the facial features under image-specific disturbances

Axel Panning; Ayoub Al-Hamadi; Bernd Michaelis; Heiko Neumann

In this work a robust method is introduced that addresses the problem of automatic extraction and tracking of facial features in color image sequences. The automatic extraction proceeds in two steps: first the features are roughly localized, then they are exactly segmented using Active Shape Models (ASM). In contrast to the standard ASM, this work pursues an approach with two modifications of the ASM that lead to greater robustness. Facial feature tracking in video sequences is realized by determining correspondences of individual-specific support points, which are used to anchor the feature models during movement. This leads to more stable and reliable tracking and form description of the facial features under image-specific disturbances.
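The anchoring idea can be sketched as re-positioning the shape model by the mean displacement of the tracked support points between frames, before the ASM fit refines it. A minimal sketch (translation only); the paper's anchoring handles more than a rigid shift.

```python
import numpy as np

def anchor_shift(shape, anchors_prev, anchors_cur):
    """Translate an ASM shape (N x 2 points) by the mean displacement of
    the tracked support (anchor) points, re-anchoring the model so the
    subsequent local fit starts near the moved face."""
    prev = np.asarray(anchors_prev, dtype=float)
    cur = np.asarray(anchors_cur, dtype=float)
    shift = (cur - prev).mean(axis=0)
    return np.asarray(shape, dtype=float) + shift
```

Starting the fit from the anchored position is what keeps the model from drifting under image-specific disturbances such as partial occlusion.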


International Conference on Image Processing | 2010

Variable block-size image authentication with localization and self-recovery

Ammar M. Hassan; Ayoub Al-Hamadi; Yassin M. Y. Hasan; Mohamed A. A. Wahab; Axel Panning; Bernd Michaelis

In this paper, a self-recovery image authentication technique using randomly sized blocks is proposed. To divide an image into randomly sized blocks, it undergoes recursive, arbitrarily asymmetric quad-tree partitioning. Multiple description coding (MDC), which enhances the reliability of altered-block recovery, is used to generate two descriptions per block. We propose the use of a chain with multiple links for embedding several signature copies and the two descriptions of each block in arbitrarily distant blocks. The experimental results demonstrate that the proposed technique successfully both localizes and compensates for alterations. Furthermore, it is robust against the vector quantization (VQ) attack.
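The partitioning step can be sketched as a recursion that splits a region into four unequal quadrants until a minimum size is reached. A minimal sketch: the `split` callback stands in for the paper's key-driven choice of asymmetric split positions.

```python
def quadtree_blocks(r0, c0, h, w, min_size, split):
    """Recursively partition an h x w region into variable-sized blocks
    (top-left corner r0, c0). `split(r, c)` returns asymmetric split
    fractions in (0, 1); in the paper these would be derived from a
    secret key rather than supplied directly."""
    if h <= min_size or w <= min_size:
        return [(r0, c0, h, w)]
    fr, fc = split(r0, c0)
    rh = min(h - 1, max(1, int(h * fr)))  # clamp so both halves are non-empty
    cw = min(w - 1, max(1, int(w * fc)))
    blocks = []
    for rr, hh in ((r0, rh), (r0 + rh, h - rh)):
        for cc, ww in ((c0, cw), (c0 + cw, w - cw)):
            blocks += quadtree_blocks(rr, cc, hh, ww, min_size, split)
    return blocks
```

Because the split positions are unpredictable to an attacker, block boundaries cannot be guessed, which is part of what defeats the VQ attack.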

Collaboration

Axel Panning's top co-authors:

Bernd Michaelis (Otto-von-Guericke University Magdeburg)
Ayoub Al-Hamadi (Otto-von-Guericke University Magdeburg)
Robert Niese (Otto-von-Guericke University Magdeburg)
Andreas Wendemuth (Otto-von-Guericke University Magdeburg)
Ingo Siegert (Otto-von-Guericke University Magdeburg)
Gerald Krell (Otto-von-Guericke University Magdeburg)
Ammar M. Hassan (Otto-von-Guericke University Magdeburg)
D. Brammen (Otto-von-Guericke University Magdeburg)