Ayoub Al-Hamadi
Otto-von-Guericke University Magdeburg
Publications
Featured research published by Ayoub Al-Hamadi.
International Conference on Pattern Recognition | 2008
Mahmoud Elmezain; Ayoub Al-Hamadi; Jörg Appenrodt; Bernd Michaelis
In this paper, we propose an automatic system that recognizes both isolated and continuous gestures for Arabic numbers (0-9) in real time based on Hidden Markov Models (HMMs). To handle isolated gestures, HMMs with ergodic, left-right (LR), and left-right banded (LRB) topologies, with the number of states ranging from 3 to 10, are applied. Orientation dynamic features are obtained from the spatio-temporal trajectories and then quantized to generate codewords. Continuous gestures are recognized by our novel idea of zero-codeword detection with static velocity motion. The LRB topology in conjunction with the forward algorithm gives the best performance, achieving average recognition rates of 98.94% and 95.7% for isolated and continuous gestures, respectively.
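As a rough illustration of the feature pipeline described above, the sketch below quantizes the orientation between consecutive trajectory points into discrete codewords for a discrete-observation HMM; the bin count of 18 is an assumption for illustration, not a value taken from the paper.

```python
import numpy as np

def trajectory_codewords(points, n_bins=18):
    """Quantize the orientation between consecutive trajectory points
    into discrete codewords for a discrete-observation HMM.
    points: (T, 2) array of hand-centroid positions.
    n_bins: number of orientation bins (18 bins of 20 degrees each is
    an illustrative assumption, not a value from the paper)."""
    points = np.asarray(points, dtype=float)
    d = np.diff(points, axis=0)                               # displacement vectors
    theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)   # angles in [0, 2*pi)
    return np.floor(theta / (2 * np.pi / n_bins)).astype(int)

# Example: a short right-then-up movement of the hand centroid
path = [(0, 0), (1, 0), (2, 0), (2, 1), (2, 2)]
print(trajectory_codewords(path))   # [0 0 4 4]
```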
International Conference on Image Processing | 2009
Mahmoud Elmezain; Ayoub Al-Hamadi; Bernd Michaelis
In this paper, we propose an automatic system that executes hand gesture spotting and recognition simultaneously, without any time delay, based on Hidden Markov Models (HMMs). Our system consists of three main stages: preprocessing, feature extraction, and classification. In the preprocessing stage, color and a 3D depth map are used to detect the hands; the hand trajectory is then obtained using the mean-shift algorithm and a Kalman filter. In the second stage, orientation dynamic features are obtained from the spatio-temporal trajectories and quantized to generate codewords. In the final stage, gestures are segmented by finding the start and end points of meaningful gestures embedded in the input stream and are then recognized with the Viterbi algorithm. Experimental results demonstrate that our system can successfully recognize spotted hand gestures with a 95.87% recognition rate for Arabic numbers from 0 to 9.
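The tracking stage couples mean-shift detection with a Kalman filter. A minimal constant-velocity Kalman filter for smoothing a 2D hand centroid might look like the sketch below; the noise parameters are illustrative assumptions, not the paper's settings.

```python
import numpy as np

class CentroidKalman:
    """Constant-velocity Kalman filter over the state [x, y, vx, vy],
    a simplified stand-in for the mean-shift + Kalman tracking stage."""

    def __init__(self, dt=1.0, q=1e-2, r=1.0):
        self.F = np.array([[1, 0, dt, 0], [0, 1, 0, dt],
                           [0, 0, 1, 0], [0, 0, 0, 1]], float)   # state transition
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], float)   # we observe x, y
        self.Q, self.R = q * np.eye(4), r * np.eye(2)  # process/measurement noise
        self.x, self.P = np.zeros(4), np.eye(4)

    def step(self, z):
        # Predict forward one frame.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Correct with the measured centroid z = (x, y).
        y = np.asarray(z, float) - self.H @ self.x
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ y
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]   # filtered position

kf = CentroidKalman()
for z in [(10, 5), (11, 6), (13, 8), (14, 9)]:
    print(kf.step(z))
```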
Journal of Multimedia | 2007
Robert Niese; Ayoub Al-Hamadi; Bernd Michaelis
When automatically analyzing images of human faces, either for recognition in biometry applications or facial expression analysis in human-machine interaction, one has to cope with challenges caused by differing head pose, illumination, and expression. In this article we propose a new stereo-based method for effectively solving the pose problem through 3D face detection and normalization. The proposed method applies model-based matching and is especially intended for the study of facial features and the description of their dynamic changes in image sequences, under the assumption of non-cooperative persons. In our work, we are currently implementing a new application to observe and analyze individual faces of post-operative patients. In the proposed method, face detection is based on color-driven clustering of 3D points derived from stereo. A mesh model is matched with the post-processed face cluster using a variant of the Iterative Closest Point (ICP) algorithm. Pose is derived from correspondence. Pose and model information are then used for the synthesis of the face normalization. Results show that stereo and color are powerful cues for finding the face and its pose under a wide range of poses, illuminations, and expressions (PIE). Head orientation may vary in out-of-plane rotations up to ±45°. Keywords: Image and Video Processing, ICP-Matching, Computer Vision, 3D Face Detection, Normalization
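The pose step hinges on ICP matching of a mesh model to the stereo point cloud. Below is a minimal sketch of one standard ICP iteration (nearest neighbours plus SVD-based Kabsch alignment); the paper's ICP variant adds refinements this does not reproduce.

```python
import numpy as np
from scipy.spatial import cKDTree

def icp_step(model, cloud):
    """One generic ICP iteration: pair each model vertex (N, 3) with its
    nearest point in the face cloud (M, 3), then solve the rigid
    transform with the SVD-based Kabsch method."""
    _, idx = cKDTree(cloud).query(model)    # closest cloud point per vertex
    target = cloud[idx]
    mu_m, mu_t = model.mean(0), target.mean(0)
    H = (model - mu_m).T @ (target - mu_t)  # 3x3 cross-covariance
    U, _, Vt = np.linalg.svd(H)
    R = Vt.T @ U.T
    if np.linalg.det(R) < 0:                # guard against reflections
        Vt[-1] *= -1
        R = Vt.T @ U.T
    t = mu_t - R @ mu_m
    return model @ R.T + t, R, t            # moved model and pose (R, t)

# Iterating icp_step until the residual stops shrinking yields the head
# pose (R, t), from which the face can be normalized.
```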
British Machine Vision Conference | 2013
Philipp Werner; Ayoub Al-Hamadi; Robert Niese; Steffen Walter; Sascha Gruss; Harald C. Traue
Pain is what the patient says it is. But what about those who cannot speak? Automatic pain monitoring opens up prospects for better treatment, but accurate assessment of pain is challenging due to its subjective nature. To facilitate advances, we contribute a new dataset, the BioVid Heat Pain Database, which contains videos and physiological data of 90 persons subjected to well-defined pain stimuli of 4 intensities. We propose a fully automatic recognition system utilizing facial expression, head pose information, and their dynamics. The approach is evaluated on the task of pain detection on the new dataset, also outlining open challenges for pain monitoring in general. Additionally, we analyze the relevance of head pose information for pain recognition and compare person-specific and general classification models.
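For the person-specific versus general comparison, a plausible evaluation protocol is sketched below; the SVC classifier and the within-subject split are assumptions for illustration, not the paper's exact setup.

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import SVC

def compare_models(X, y, subjects):
    """Sketch of a person-specific vs. general model comparison.
    X: (N, D) features, y: pain/no-pain labels, subjects: subject id
    per sample (each subject is assumed to have both classes)."""
    # General model: leave-one-subject-out cross-validation.
    general = cross_val_score(SVC(), X, y, groups=subjects,
                              cv=LeaveOneGroupOut())
    # Person-specific models: train and test within each subject.
    specific = [cross_val_score(SVC(), X[subjects == s],
                                y[subjects == s], cv=2).mean()
                for s in np.unique(subjects)]
    return general.mean(), float(np.mean(specific))
```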
International Conference on Intelligent Computing | 2009
Omer Rashid; Ayoub Al-Hamadi; Bernd Michaelis
For a successful real-time vision-based HCI system, inference from natural visual cues is crucial. In this paper, we aim to provide interaction through gesture and posture recognition for alphabet characters and numbers. In addition, data fusion is carried out to integrate these systems and extract multiple meanings at the same time. 3D information is exploited for segmentation and detection of the face and hands using a normal (Gaussian) distribution and depth information. For gestures, the orientation between two consecutive hand centroid points is computed and then quantized to generate codewords. The HMM is trained with the Baum-Welch algorithm, and classification is performed with the Viterbi algorithm. In posture recognition, American Sign Language (ASL) static alphabet characters and numbers are recognized. Feature vectors are computed from statistical and geometrical properties of the hand and are used to train an SVM for classification and recognition. Moreover, curvature analysis is carried out for alphabet characters to avoid misclassifications. Experimental results show that the proposed framework successfully integrates both systems at decision-level fusion, with the gesture system achieving a recognition rate of 98% (for alphabet characters and numbers) and the posture recognition system achieving recognition rates of 98.65% and 98.6% for ASL alphabet characters and numbers, respectively.
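As a toy illustration of decision-level fusion, the sketch below combines per-class posteriors from the two recognizers with a weighted sum; the equal weighting is an assumption, not the paper's fusion rule.

```python
import numpy as np

def fuse_decisions(gesture_probs, posture_probs, w=0.5):
    """Decision-level fusion: the gesture (HMM) and posture (SVM)
    classifiers each output a posterior over the same label set, and
    the weighted sum picks the final class."""
    fused = w * np.asarray(gesture_probs) + (1 - w) * np.asarray(posture_probs)
    return int(np.argmax(fused))

# Example over three hypothetical labels ('A', 'B', '5'):
print(fuse_decisions([0.7, 0.2, 0.1], [0.5, 0.4, 0.1]))  # -> 0, i.e. 'A'
```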
Advances in Human-Computer Interaction | 2014
Anwar Saeed; Ayoub Al-Hamadi; Robert Niese; Moftah Elzobi
To make human-computer interaction (HCI) as natural as human-human interaction, an efficient approach to human emotion recognition is required. These emotions can be inferred from several modalities such as facial expressions, hand gestures, acoustic data, and biophysiological data. In this paper, we address frame-based perception of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; such knowledge is gained through human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. From an evaluation on two public benchmark datasets, we find that using eight facial points we can achieve the state-of-the-art recognition rate, whereas the previous state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The expression recognition rate using geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
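The localization-error analysis can be imitated by perturbing the landmarks with Gaussian noise before re-extracting geometric features, as in the sketch below; the noise model and the landmark indices are assumptions for illustration.

```python
import numpy as np

def perturb_landmarks(points, sigma, seed=None):
    """Simulate facial point localization error by adding isotropic
    Gaussian noise (std sigma, in pixels) to each landmark."""
    rng = np.random.default_rng(seed)
    return np.asarray(points, float) + rng.normal(0.0, sigma, np.shape(points))

def mouth_features(pts):
    """Two example geometric features from assumed landmark indices
    (eyes at 0 and 1, nose tip at 2, mouth corners at 4 and 5), both
    normalized by the inter-ocular distance."""
    iod = np.linalg.norm(pts[0] - pts[1])
    width = np.linalg.norm(pts[4] - pts[5]) / iod   # mouth width
    lift = np.linalg.norm(pts[4] - pts[2]) / iod    # corner-to-nose distance
    return np.array([width, lift])

# Sweeping sigma and re-measuring the recognition rate shows how fast
# geometry-based features degrade under localization error.
```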
International Conference on Pattern Recognition | 2010
Mahmoud Elmezain; Ayoub Al-Hamadi; Bernd Michaelis
This paper proposes a forward-spotting method that handles hand gesture segmentation and recognition simultaneously, without time delay. To spot meaningful gestures of numbers (0-9) accurately, a stochastic method for designing a non-gesture model using Conditional Random Fields (CRFs) is proposed that requires no training data. The non-gesture model provides a confidence measure that is used as an adaptive threshold to find the start and end points of meaningful gestures. Experimental results show that the proposed method successfully recognizes isolated gestures with 96.51% reliability and meaningful gestures with 90.49% reliability.
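The adaptive-threshold idea can be sketched as follows: a frame belongs to a meaningful gesture whenever the best gesture model's log-likelihood exceeds that of the non-gesture model. This is only a schematic and omits the paper's forward-spotting details.

```python
import numpy as np

def spot_gestures(gesture_ll, nongesture_ll):
    """Per-frame spotting sketch: the non-gesture model's log-likelihood
    acts as an adaptive threshold; contiguous frames where a gesture
    model beats it form one spotted gesture. Returns inclusive
    (start, end) frame index pairs."""
    active = np.asarray(gesture_ll) > np.asarray(nongesture_ll)
    padded = np.r_[False, active, False]
    edges = np.flatnonzero(np.diff(padded.astype(int)))
    return list(zip(edges[::2], edges[1::2] - 1))

# Example: frames 2..4 form one meaningful gesture.
g = [-9, -8, -3, -2, -3, -9]    # best gesture model per frame
ng = [-5, -5, -5, -5, -5, -5]   # non-gesture model (threshold)
print(spot_gestures(g, ng))     # [(2, 4)]
```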
International Symposium on Signal Processing and Information Technology | 2010
Mahmoud Elmezain; Ayoub Al-Hamadi; Samy Sadek; Bernd Michaelis
This paper proposes an automatic method that handles hand gesture spotting and recognition simultaneously. To spot meaningful gestures of numbers (0-9) accurately, a stochastic method for designing a non-gesture model with Hidden Markov Models (HMMs) versus Conditional Random Fields (CRFs) is proposed without training data. The non-gesture model provides a confidence measure that is used as an adaptive threshold to find the start and end points of meaningful gestures embedded in the input video stream. To reduce the number of states of the HMM non-gesture model, states with similar probability distributions are merged based on a relative entropy measure. Additionally, the weights of self-transition feature functions are increased for short gestures to further improve the accuracy of gesture spotting and recognition with CRFs. Experimental results show that the proposed method successfully spots and recognizes meaningful gestures with 93.31% and 90.49% reliability for HMMs and CRFs, respectively. In addition, model inference with HMMs is faster, saving 66.42% of the time when relative entropy is used. The reliability of the CRF method improves from 86.12% to 90.49% with the short-gesture detector.
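The state-reduction step can be sketched with a symmetrized relative entropy (KL divergence) test over the HMM emission distributions; the merge threshold below is an illustrative assumption, and the paper's merging criterion may differ in detail.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """Relative entropy D(p || q) between two discrete distributions."""
    p = np.asarray(p, float) + eps
    q = np.asarray(q, float) + eps
    p, q = p / p.sum(), q / q.sum()
    return float(np.sum(p * np.log(p / q)))

def mergeable_states(B, tau=0.1):
    """Flag HMM state pairs whose emission rows in B (n_states x
    n_symbols) are closer than tau in symmetrized KL divergence."""
    return [(i, j)
            for i in range(len(B)) for j in range(i + 1, len(B))
            if 0.5 * (kl(B[i], B[j]) + kl(B[j], B[i])) < tau]

B = np.array([[0.50, 0.30, 0.20],
              [0.48, 0.32, 0.20],   # nearly identical to state 0
              [0.10, 0.10, 0.80]])
print(mergeable_states(B))          # [(0, 1)]
```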
International Conference on Future Generation Information Technology | 2009
Jörg Appenrodt; Ayoub Al-Hamadi; Mahmoud Elmezain; Bernd Michaelis
In this paper, we present our results in building an automatic gesture recognition system using different types of cameras, comparing their suitability for segmentation. Typically, images from a monocular color camera are used as input data in gesture recognition research. In comparison, the analysis results of a stereo color camera and a thermal camera system are used to determine the advantages and disadvantages of these camera systems. On this basis, a real-time gesture recognition system is built that classifies alphabet characters (A-Z) and numbers (0-9) with an average recognition rate of 98% using Hidden Markov Models (HMMs).
International Symposium on Signal Processing and Information Technology | 2007
Mahmoud Elmezain; Ayoub Al-Hamadi
This paper describes a method to recognize alphabet characters from a single hand motion using Hidden Markov Models (HMMs). In our method, gesture recognition for alphabet characters is based on three main stages: preprocessing, feature extraction, and classification. In the preprocessing stage, color and depth information are used to detect both hands and the face in connection with morphological operations. After hand detection, tracking is performed to determine the motion trajectory, the so-called gesture path. In the second stage, feature extraction smooths the gesture path into a pure path and determines the orientation between the center of gravity and each point of that path. The orientation is then quantized to give a discrete vector that is used as input to the HMM. In the final stage, the characters are recognized using a left-right banded (LRB) model in conjunction with the Baum-Welch (BW) algorithm for training the HMM parameters; the best path is then obtained with the Viterbi algorithm over a gesture database. In our experiment, 520 gestures are used for training and 260 for testing. Our method recognizes the letters from A to Z and achieves an average recognition rate of 92.3%.
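For the classification stage, a log-domain Viterbi decoder for a discrete-observation HMM is sketched below; the toy left-right banded parameters are illustrative, not trained values from the paper.

```python
import numpy as np

def viterbi(obs, logA, logB, logpi):
    """Most likely state path and its log-probability for a codeword
    sequence `obs` under a discrete HMM given in log domain."""
    T, N = len(obs), len(logpi)
    delta = np.full((T, N), -np.inf)   # best log-prob ending in each state
    psi = np.zeros((T, N), dtype=int)  # best predecessor per state
    delta[0] = logpi + logB[:, obs[0]]
    for t in range(1, T):
        scores = delta[t - 1][:, None] + logA       # scores[i, j]: i -> j
        psi[t] = scores.argmax(0)
        delta[t] = scores.max(0) + logB[:, obs[t]]
    path = [int(delta[-1].argmax())]
    for t in range(T - 1, 0, -1):                   # backtrack
        path.append(int(psi[t][path[-1]]))
    return path[::-1], float(delta[-1].max())

# Toy 3-state LRB model over 4 codewords: only self- and next-state
# transitions are allowed (all other log entries are -inf).
with np.errstate(divide="ignore"):
    A = np.log([[0.6, 0.4, 0.0], [0.0, 0.6, 0.4], [0.0, 0.0, 1.0]])
    B = np.log([[0.7, 0.1, 0.1, 0.1],
                [0.1, 0.7, 0.1, 0.1],
                [0.1, 0.1, 0.7, 0.1]])
    pi = np.log([1.0, 0.0, 0.0])
print(viterbi([0, 0, 1, 2, 2], A, B, pi))
```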