
Publications

Featured research published by Anwar Saeed.


Advances in Human-Computer Interaction | 2014

Frame-based facial expression recognition using geometrical features

Anwar Saeed; Ayoub Al-Hamadi; Robert Niese; Moftah Elzobi

To make human-computer interaction (HCI) as natural as human-human interaction, an efficient approach to human emotion recognition is required. These emotions could be inferred from several modalities, such as facial expression, hand gesture, acoustic data, and biophysiological data. In this paper, we address the frame-based recognition of the universal human facial expressions (happiness, surprise, anger, disgust, fear, and sadness) with the help of several geometrical features. Unlike many other geometry-based approaches, the frame-based method does not rely on prior knowledge of a person-specific neutral expression; such knowledge requires human intervention and is not available in real scenarios. Additionally, we provide a method to investigate the performance of geometry-based approaches under various facial point localization errors. In an evaluation on two public benchmark datasets, we found that using only eight facial points we can match the state-of-the-art recognition rate, even though the state-of-the-art geometry-based approach exploits features derived from 68 facial points and requires prior knowledge of the person-specific neutral expression. The recognition rate of geometrical features is adversely affected by errors in facial point localization, especially for expressions with subtle facial deformations.
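
To make the flavor of such a pipeline concrete, here is a minimal Python sketch (not the authors' implementation): it turns eight landmarks into scale-normalized pairwise distances, so no person-specific neutral frame is needed, and trains an SVM on them. The landmark layout, the normalization choice, and the placeholder training data are all assumptions.

```python
import numpy as np
from sklearn.svm import SVC

EXPRESSIONS = ["happiness", "surprise", "anger", "disgust", "fear", "sadness"]

def geometric_features(points):
    """Map 8 (x, y) facial points to a scale-invariant feature vector.

    All 28 pairwise distances are divided by the inter-ocular distance
    (points 0 and 1 assumed to be the eyes), so no neutral frame is needed.
    """
    points = np.asarray(points, dtype=float)              # shape (8, 2)
    inter_ocular = np.linalg.norm(points[0] - points[1])
    i, j = np.triu_indices(len(points), k=1)
    dists = np.linalg.norm(points[i] - points[j], axis=1)
    return dists / inter_ocular

# Placeholder data: random landmark sets standing in for annotated frames.
rng = np.random.default_rng(0)
X = np.array([geometric_features(rng.uniform(0, 100, (8, 2)))
              for _ in range(60)])
y = rng.integers(0, len(EXPRESSIONS), size=60)

clf = SVC(kernel="rbf").fit(X, y)
print(EXPRESSIONS[clf.predict(X[:1])[0]])
```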


Sensors | 2015

Head Pose Estimation on Top of Haar-Like Face Detection: A Study Using the Kinect Sensor

Anwar Saeed; Ayoub Al-Hamadi; Ahmed Ghoneim

Head pose estimation is a crucial initial task for human face analysis and is employed in several computer vision systems, such as facial expression recognition, head gesture recognition, and yawn detection. In this work, we propose a frame-based approach to estimate the head pose on top of the Viola and Jones (VJ) Haar-like face detector. Several appearance- and depth-based feature types are employed for the pose estimation, and comparisons between them in terms of accuracy and speed are presented. We show that using the depth data improves the accuracy of the head pose estimation. Additionally, we can spot positive detections (faces in profile views detected by the frontal model) that are wrongly cropped due to background disturbances. We introduce a new depth-based feature descriptor that provides competitive estimation results at a lower computation time. Evaluation on a benchmark Kinect database shows that the histogram of oriented gradients and the developed depth-based features are the most distinctive for head pose estimation, comparing favorably to current state-of-the-art approaches. Using a concatenation of the aforementioned feature types, we achieve a head pose estimation with average errors not exceeding 5.1°, 4.6°, and 4.2° for the pitch, yaw, and roll angles, respectively.
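
A rough sketch of the pipeline shape described here, assuming OpenCV's stock frontal Haar cascade, a 64x64 HOG layout, and one support vector regressor per angle; the training data below is a random placeholder, not a Kinect benchmark.

```python
import cv2
import numpy as np
from sklearn.svm import SVR

# Frontal Viola-Jones detector shipped with OpenCV.
cascade = cv2.CascadeClassifier(
    cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

# HOG over a 64x64 face crop: 16x16 blocks, 8x8 cells and stride, 9 bins.
hog = cv2.HOGDescriptor((64, 64), (16, 16), (8, 8), (8, 8), 9)

def face_hog(gray):
    """Detect the largest face and return the HOG descriptor of its crop."""
    faces = cascade.detectMultiScale(gray, 1.1, 5)
    if len(faces) == 0:
        return None
    x, y, w, h = max(faces, key=lambda f: f[2] * f[3])
    crop = cv2.resize(gray[y:y + h, x:x + w], (64, 64))
    return hog.compute(crop).ravel()                  # length 1764

# One regressor per angle; real training pairs would come from a labeled
# head pose database (random placeholders keep the sketch runnable).
rng = np.random.default_rng(1)
X_train = rng.random((50, 1764)).astype(np.float32)
y_train = rng.uniform(-60, 60, (50, 3))               # pitch, yaw, roll (deg)
regressors = [SVR().fit(X_train, y_train[:, k]) for k in range(3)]

def estimate_pose(gray):
    """Return (pitch, yaw, roll) estimates for the largest detected face."""
    feats = face_hog(gray)
    if feats is None:
        return None
    return [float(r.predict(feats[None, :])[0]) for r in regressors]
```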


International Conference on Image Processing | 2015

Boosted human head pose estimation using Kinect camera

Anwar Saeed; Ayoub Al-Hamadi

Head pose estimation is essential for several computer vision applications; for example, it has been employed in facial expression recognition, head gesture detection, and driver monitoring systems. In this work, we present a boosted method to estimate the head pose using a Kinect camera. The estimation is performed cooperatively with the help of RGB and depth images. The human face is located in the RGB image using frontal and profile Viola-Jones (VJ) face detectors, where the depth information is used to confine the size and location of the search window. Appearance features, extracted from the detected face patch in the RGB image and its counterpart in the depth image, are passed to Support Vector Machine (SVM) regressors to infer the head pose. Evaluation on two public benchmark databases demonstrates that our proposed approach compares favorably to state-of-the-art approaches.
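
One concrete reading of "the depth information is used to confine the size of the search window" is a pinhole-model bound on the expected face size in pixels. The sketch below uses assumed values for the Kinect focal length and a typical physical face width; both are illustrative, not taken from the paper.

```python
# Assumed constants (illustrative, not from the paper): approximate
# Kinect v1 RGB focal length in pixels and a typical physical face width.
FOCAL_PX = 525.0
FACE_WIDTH_M = 0.16

def face_size_bounds(depth_m, tolerance=0.25):
    """Expected face width in pixels at a given depth (pinhole model),
    widened by a tolerance, to confine the VJ detector's window size."""
    expected_px = FOCAL_PX * FACE_WIDTH_M / depth_m
    lo = int(expected_px * (1.0 - tolerance))
    hi = int(expected_px * (1.0 + tolerance))
    return (lo, lo), (hi, hi)

# A person at 1.2 m yields roughly a 52..87 px window, which would be
# passed to detectMultiScale as minSize/maxSize to prune the search.
min_size, max_size = face_size_bounds(1.2)
print(min_size, max_size)
```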


IP&C | 2014

Gabor Wavelet Recognition Approach for Off-Line Handwritten Arabic Using Explicit Segmentation

Moftah Elzobi; Ayoub Al-Hamadi; Zaher Al Aghbari; Laslo Dings; Anwar Saeed

This article proposes an unconstrained recognition approach for handwritten Arabic script. The approach starts by explicitly segmenting each word image into its constituent letters; then a filter bank of Gabor wavelet transforms is used to extract feature vectors corresponding to different scales and orientations in the segmented image. Classification is carried out by a support vector machine algorithm, with the IESK-arDB and IFN/ENIT databases used for testing and evaluation of the proposed approach, respectively. A leave-one-out estimation strategy is followed to assess performance, and the results confirm the efficiency of the approach.
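
The following sketch shows what a Gabor filter-bank feature extractor of this kind can look like in Python with OpenCV; the kernel size, the three wavelengths, the eight orientations, and the pooled mean/std statistics are assumptions, and the training data is a random stand-in for segmented letter images.

```python
import cv2
import numpy as np
from sklearn.svm import SVC

def gabor_bank(wavelengths=(10.0, 15.0, 20.0), orientations=8):
    """Build a bank of Gabor kernels over several scales and angles."""
    kernels = []
    for lambd in wavelengths:
        for k in range(orientations):
            theta = k * np.pi / orientations
            kernels.append(cv2.getGaborKernel(
                (31, 31), 4.0, theta, lambd, 0.5, 0, ktype=cv2.CV_32F))
    return kernels

def gabor_features(letter_img, kernels):
    """Mean and std of each filter response on one segmented letter image."""
    img = cv2.resize(letter_img, (64, 64)).astype(np.float32)
    feats = []
    for kern in kernels:
        resp = cv2.filter2D(img, cv2.CV_32F, kern)
        feats += [resp.mean(), resp.std()]
    return np.array(feats)

# Placeholder "segmented letters" and labels keep the sketch runnable; in
# the paper, segments come from the explicit segmentation stage and labels
# from the IESK-arDB / IFN/ENIT annotations.
rng = np.random.default_rng(2)
kernels = gabor_bank()
X = np.array([gabor_features(rng.integers(0, 256, (48, 48), dtype=np.uint8),
                             kernels) for _ in range(40)])
y = rng.integers(0, 28, size=40)     # 28 Arabic letter classes
clf = SVC().fit(X, y)
```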


IEEE International Conference on Cybernetics (CYBCO) | 2013

The effectiveness of using geometrical features for facial expression recognition

Anwar Saeed; Ayoub Al-Hamadi; Robert Niese

Facial expressions play an important role in diverse disciplines, ranging from entertainment (video games) to medical applications and affective computing. Various approaches to expression recognition have been proposed over the last two decades, falling primarily into two types: geometry-based and appearance-based. In this paper, we address the geometry-based approaches to recognizing the six basic facial expressions (happiness, surprise, anger, fear, disgust, and sadness). We provide answers to three major questions regarding geometrical features: 1. What is the minimum number of facial points that provides a satisfactory recognition rate? 2. How is this rate affected by prior knowledge of the person-specific neutral expression? 3. How accurate must a facial point detector be to achieve an acceptable recognition rate? To assess the reliability of our approach, we evaluated it on two public databases. The results show that a good recognition rate can be achieved using just eight facial points. Moreover, the lack of prior knowledge of the person-specific neutral state causes a drop of more than 7% in the recognition rate. Finally, the recognition rate is adversely affected by facial point localization error.
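
Question 3 suggests a simple simulation protocol: perturb ground-truth points with Gaussian noise of growing magnitude and re-measure the recognition rate. The sketch below shows that protocol; since the landmarks and labels are random placeholders, its absolute numbers are meaningless, and only the procedure matters.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def features(points):
    """Pairwise distances of 8 points, normalized by the first pair."""
    i, j = np.triu_indices(len(points), k=1)
    d = np.linalg.norm(points[i] - points[j], axis=1)
    return d / d[0]

rng = np.random.default_rng(3)
landmarks = rng.uniform(0, 100, (120, 8, 2))   # placeholder annotated points
labels = rng.integers(0, 6, size=120)          # six basic expressions

# Perturb the points with growing localization error and watch the rate.
for sigma in (0.0, 1.0, 2.0, 4.0):
    noisy = landmarks + rng.normal(0.0, sigma, landmarks.shape)
    X = np.array([features(p) for p in noisy])
    rate = cross_val_score(SVC(), X, labels, cv=5).mean()
    print(f"sigma={sigma}: recognition rate {rate:.2f}")
```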


Intelligent Systems Design and Applications | 2012

Neutral-independent geometric features for facial expression recognition

Anwar Saeed; Ayoub Al-Hamadi; Robert Niese

Improving human-computer interaction (HCI) necessitates building an efficient human emotion recognition approach involving various modalities, such as facial expressions, hand gestures, acoustic data, and biophysiological data. In this paper, we address the recognition of the universal human emotions (happiness, surprise, anger, disgust, fear, and sadness) from facial expressions. In our companion-based assistant system, facial expressions are considered complementary to hand gestures. Unlike many other approaches, we do not rely on prior knowledge of the neutral state to infer the emotion, because annotating the neutral state usually involves human intervention. We use features extracted from just eight fiducial facial points. We evaluate our approach on two databases; the results are in good agreement with those of a state-of-the-art approach that exploits features derived from 68 facial points and requires prior knowledge of the neutral state. Finally, we investigate the influence of facial point detection error on our emotion recognition approach.
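
The contrast between neutral-dependent and neutral-independent geometric features can be made concrete as follows; this is a hypothetical illustration, not the paper's feature definitions. Displacement features need an annotated neutral frame, whereas within-frame distance ratios do not.

```python
import numpy as np

def displacement_features(points, neutral_points):
    """Neutral-DEPENDENT: per-point displacement from the person-specific
    neutral frame, which must be annotated beforehand."""
    return (np.asarray(points, float)
            - np.asarray(neutral_points, float)).ravel()

def neutral_independent_features(points):
    """Neutral-INDEPENDENT: ratios of pairwise distances within a single
    frame, so no neutral reference is required."""
    p = np.asarray(points, dtype=float)
    i, j = np.triu_indices(len(p), k=1)
    d = np.linalg.norm(p[i] - p[j], axis=1)
    return d / d.max()

pts = np.random.default_rng(7).uniform(0, 100, (8, 2))
print(neutral_independent_features(pts).shape)   # (28,) from 8 points
```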


International Conference on Document Analysis and Recognition | 2013

A Hidden Markov Model-Based Approach with an Adaptive Threshold Model for Off-Line Arabic Handwriting Recognition

Moftah Elzobi; Ayoub Al-Hamadi; Laslo Dings; Mahmoud Elmezain; Anwar Saeed

In contrast to mainstream HMM-based approaches to the recognition of offline handwritten Arabic, this paper proposes an HMM-based approach built upon an explicit segmentation module. Shape-representative features, rather than sliding-window-based features, are extracted and used to build a reference model as well as a confirmation model for each letter in each handwritten form. Additionally, we construct an HMM-based threshold model by ergodically connecting all letter models, in order to detect false segmentations as well as non-letter segments. The IESK-arDB and IFN/ENIT databases are used for testing and evaluation of the proposed approach, respectively, and satisfactory results are achieved.
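
A small sketch of the verification idea with the hmmlearn library, under heavy simplifying assumptions: toy Gaussian feature sequences stand in for real letter shapes, each letter model has three states, and the threshold model is built by pooling all letter states into one fully (ergodically) connected HMM with uniform transitions. A segment is accepted only if its best letter model outscores the threshold model.

```python
import numpy as np
from hmmlearn.hmm import GaussianHMM

rng = np.random.default_rng(4)

def toy_sequences(center, n=5, length=12, dim=6):
    """Placeholder feature sequences standing in for one letter class."""
    return [center + rng.normal(0.0, 0.3, (length, dim)) for _ in range(n)]

# Train one small HMM per (illustrative) letter class.
letters = {"alif": 0.0, "ba": 2.0, "ta": 4.0}
models = {}
for name, c in letters.items():
    seqs = toy_sequences(np.full(6, c))
    models[name] = GaussianHMM(n_components=3, covariance_type="diag",
                               n_iter=20).fit(np.concatenate(seqs),
                                              [len(s) for s in seqs])

# Threshold model: pool every state of every letter model and connect
# them ergodically with uniform start/transition probabilities.
means = np.concatenate([m.means_ for m in models.values()])
covars = np.concatenate([np.diagonal(m.covars_, axis1=1, axis2=2)
                         for m in models.values()])
n = len(means)
thresh = GaussianHMM(n_components=n, covariance_type="diag")
thresh.startprob_ = np.full(n, 1.0 / n)
thresh.transmat_ = np.full((n, n), 1.0 / n)
thresh.means_ = means
thresh.covars_ = covars

def classify_segment(seq):
    """Return the best letter, or None for a likely non-letter segment."""
    best, best_ll = max(((name, m.score(seq)) for name, m in models.items()),
                        key=lambda t: t[1])
    return best if best_ll > thresh.score(seq) else None

print(classify_segment(toy_sequences(np.full(6, 2.0), n=1)[0]))
```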


International Conference on Image and Signal Processing | 2012

Speaker tracking using multi-modal fusion framework

Anwar Saeed; Ayoub Al-Hamadi; Michael Heuer

This paper introduces a framework by which multi-modal sensory data can be efficiently and meaningfully combined in the application of speaker tracking. The framework fuses four different observation types taken from multi-modal sensors. The advantage of this fusion is that weak sensory data from any modality can be reinforced and the influence of noise can be reduced. We propose a method of combining these modalities by employing a particle filter; this method offers satisfactory real-time performance.
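
The fusion idea can be sketched with a toy particle filter in which each modality's likelihood multiplies into the particle weights, so an imprecise cue (large sigma) contributes a flat likelihood while a precise one dominates. Everything below (the 1-D state, the Gaussian likelihoods, the two example modalities) is an illustrative assumption; the paper fuses four observation types.

```python
import numpy as np

rng = np.random.default_rng(5)
N = 500                                    # number of particles
particles = rng.uniform(0.0, 10.0, N)      # 1-D speaker position hypotheses
weights = np.full(N, 1.0 / N)

def step(particles, weights, observations):
    """One predict/update cycle with multiplicative multi-modal fusion.

    `observations` maps modality name -> (measurement, noise sigma).
    """
    particles = particles + rng.normal(0.0, 0.2, len(particles))  # motion
    for z, sigma in observations.values():
        weights = weights * np.exp(-0.5 * ((z - particles) / sigma) ** 2)
    weights /= weights.sum()
    # Systematic resampling keeps the particle set from degenerating.
    idx = np.searchsorted(np.cumsum(weights),
                          (np.arange(N) + rng.random()) / N)
    return particles[idx], np.full(N, 1.0 / N)

# Two example modalities: a noisy acoustic bearing and a sharper visual cue.
obs = {"audio": (4.2, 1.5), "video": (4.0, 0.4)}
particles, weights = step(particles, weights, obs)
print("fused position estimate:", round(particles.mean(), 2))
```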


International Conference on Signal and Image Processing Applications | 2011

Coping with hand-hand overlapping in bimanual movements

Anwar Saeed; Robert Niese; Ayoub Al-Hamadi; Bernd Michaelis

Hand gesture recognition plays a major role in human-computer interaction (HCI), since gestures can be employed as a communication language in many applications. Nearly all hand gestures used in HCI are created with one hand; applications that use gestures produced by two hands require that the hands do not cross each other. This constraint on bimanual gestures is due to the difficulty of reacquiring both hands correctly at the end of a hand-hand occlusion, which is the problem addressed in this paper. Two policies, analogous to the greedy and Hungarian algorithms, are used to solve the data association issue at the end of the hand-hand occlusion period, given that the appearance of both hands does not change entirely during the overlapping period.
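
The two association policies can be sketched as follows, with hypothetical descriptors and costs rather than the paper's exact appearance model: a cost matrix of appearance distances between the pre-occlusion hand templates and the post-occlusion blobs is solved either greedily or with the Hungarian algorithm via scipy.

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

def appearance_cost(templates, candidates):
    """Cost matrix of descriptor distances between the hand templates
    saved before occlusion and the blobs found after it."""
    return np.array([[np.linalg.norm(t - c) for c in candidates]
                     for t in templates])

def greedy_assign(cost):
    """Greedy policy: repeatedly take the cheapest remaining pair."""
    cost = cost.copy()
    pairs = {}
    for _ in range(min(cost.shape)):
        r, c = np.unravel_index(np.argmin(cost), cost.shape)
        pairs[r] = c
        cost[r, :] = np.inf
        cost[:, c] = np.inf
    return pairs

# Placeholder appearance descriptors (e.g. normalized color histograms).
rng = np.random.default_rng(6)
left_tpl, right_tpl = rng.random(16), rng.random(16)
blob_a = left_tpl + rng.normal(0, 0.05, 16)    # blob resembling the left hand
blob_b = right_tpl + rng.normal(0, 0.05, 16)

cost = appearance_cost([left_tpl, right_tpl], [blob_a, blob_b])
rows, cols = linear_sum_assignment(cost)       # Hungarian policy
print("hungarian:", dict(zip(rows, cols)))
print("greedy:   ", greedy_assign(cost))
```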


IP&C | 2011

Solving the Hand-Hand Overlapping for Gesture Application

Anwar Saeed; Robert Niese; Ayoub Al-Hamadi; Bernd Michaelis

Most hand gestures used in human-computer interaction (HCI) are generated either by one hand or by two hands, on the condition that the hands do not pass each other. This constraint on two-handed gestures is due to the difficulty of reacquiring both hands correctly at the end of a hand-hand occlusion. In this paper, we provide a solution to this issue based on the assumption that the appearance of both hands does not change entirely during the occlusion period.

Collaboration

Dive into Anwar Saeed's collaborations.

Top Co-Authors

Ayoub Al-Hamadi (Otto-von-Guericke University Magdeburg)
Robert Niese (Otto-von-Guericke University Magdeburg)
Moftah Elzobi (Otto-von-Guericke University Magdeburg)
Laslo Dings (Otto-von-Guericke University Magdeburg)
Bernd Michaelis (Otto-von-Guericke University Magdeburg)
Michael Heuer (Otto-von-Guericke University Magdeburg)
Sebastian Handrich (Otto-von-Guericke University Magdeburg)
Mahmoud Elmezain (Otto-von-Guericke University Magdeburg)