Hussein Haggag
Deakin University
Publications
Featured research published by Hussein Haggag.
International Conference on Computer Modelling and Simulation | 2013
Hussein Haggag; Mohammed Hossny; Saeid Nahavandi; Douglas C. Creighton
Ergonomic assessments assist in preventing work-related musculoskeletal injuries and provide safer workplace environments. Conventional 3D motion capture environments are not suitable for many workplaces due to space, cost and calibration limitations. The Kinect sensor, introduced with the Xbox game console, provides a low-cost, portable motion capture system. This sensor is capable of capturing and tracking the 3D coordinates of a moving target with an accuracy comparable to state-of-the-art commercial systems. This paper investigates the application of the Kinect to real-time rapid upper limb assessment (RULA) to aid ergonomic analysis of assembly operations in industrial environments. A framework to integrate various motion capture technologies as well as different assessment methods is presented.
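As a rough illustration of the kind of computation involved in RULA scoring from skeleton data (not the authors' actual framework), the sketch below derives an upper-arm flexion angle from three hypothetical 3D joint positions and maps it to the standard RULA upper-arm score bands; the joint coordinates and thresholds are assumptions taken from the published RULA worksheet, not from this paper.

```python
import numpy as np

def angle_between(v1, v2):
    """Return the angle in degrees between two 3D vectors."""
    cos = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cos, -1.0, 1.0)))

def rula_upper_arm_score(shoulder, elbow, hip):
    """Map upper-arm flexion (angle between trunk and upper arm) to a
    RULA upper-arm score band (1-4), ignoring adjustments such as
    shoulder raise or abduction."""
    trunk = hip - shoulder          # downward trunk direction
    upper_arm = elbow - shoulder    # shoulder-to-elbow direction
    flexion = angle_between(trunk, upper_arm)
    if flexion <= 20:
        return 1
    elif flexion <= 45:
        return 2
    elif flexion <= 90:
        return 3
    return 4

# Hypothetical joint coordinates (metres) as a Kinect SDK might report them.
shoulder = np.array([0.0, 1.4, 2.0])
elbow    = np.array([0.1, 1.1, 1.9])
hip      = np.array([0.0, 0.9, 2.0])
print(rula_upper_arm_score(shoulder, elbow, hip))
```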
International Conference on Signal Processing and Communication Systems | 2013
Hussein Haggag; Mohammed Hossny; Despina Filippidis; Douglas C. Creighton; Saeid Nahavandi; Vinod Puri
This paper presents a comparison between the Microsoft Kinect and the Asus Xtion depth sensors for computer vision applications. Depth sensors, also known as RGB-D cameras, project an infrared pattern and calculate depth from the reflected light using an infrared-sensitive camera. In this research, we compare the depth-sensing capabilities of the two sensors under various conditions. The purpose is to give the reader a background on whether to use the Microsoft Kinect or the Asus Xtion sensor to solve a specific computer vision problem. The properties of the two depth sensors were investigated through a series of experiments evaluating their accuracy under various conditions, highlighting the advantages and disadvantages of both the Microsoft Kinect and Asus Xtion sensors.
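As a hedged sketch of how such an accuracy comparison could be scored (the exact protocol of the paper is not given here), the snippet below computes per-sensor error statistics from repeated depth readings of a target at a known ground-truth distance; the readings and ground truth are made-up example values.

```python
import numpy as np

def depth_error_stats(readings_mm, ground_truth_mm):
    """Summarise depth accuracy as mean absolute error and standard
    deviation (noise) of repeated readings of a flat target."""
    readings = np.asarray(readings_mm, dtype=float)
    return {
        "mae_mm": float(np.mean(np.abs(readings - ground_truth_mm))),
        "std_mm": float(np.std(readings)),
    }

# Hypothetical repeated depth readings of a wall placed 2000 mm away.
kinect_readings = [1995, 2003, 1998, 2006, 1992]
xtion_readings  = [1989, 2011, 1984, 2015, 1990]

print("Kinect:", depth_error_stats(kinect_readings, 2000))
print("Xtion: ", depth_error_stats(xtion_readings, 2000))
```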
Digital Image Computing: Techniques and Applications | 2015
Hussein Haggag; Mohammed Hossny; Saeid Nahavandi; Sherif Haggag; Douglas C. Creighton
People detection is essential in many different systems. Many applications nowadays require people detection to achieve certain tasks. These applications span many disciplines, such as robotics, ergonomics, biomechanics, gaming and the automotive industry. This wide range of applications makes human body detection an active area of research. With the release of depth sensors, or RGB-D cameras, such as the Microsoft Kinect, this area of research became even more active, especially given their affordable price. Human body detection must cope with many scenarios and situations. Conditions such as occlusions, background clutter and props attached to the human body require training on custom-built datasets. In this paper we present an approach to prepare training datasets for detecting and tracking the human body with attached props. The proposed approach uses rigid-body physics simulation to create and animate different props attached to the human body. Three scenarios are implemented. In the first scenario the prop is closely attached to the human body, such as a person carrying a backpack. In the second scenario the prop is loosely attached to the human body, such as a person carrying a briefcase. In the third scenario the prop is not attached to the human body, such as a person dragging a trolley bag. Our approach achieves 93% accuracy in identifying both the human body parts and the attached prop across all three scenarios.
Service Oriented Software Engineering | 2014
Hussein Haggag; Mohammed Hossny; Sherif Haggag; Saeid Nahavandi; Douglas C. Creighton
This paper presents a comparison of different clustering algorithms applied to a point cloud constructed from the depth maps captured by an RGB-D camera such as the Microsoft Kinect. The depth sensor returns images in which each pixel represents the distance to its corresponding point rather than RGB data. This is the real novelty of the RGB-D camera in computer vision compared to common video-based and stereo-based products. Depth sensors capture depth data without using markers, 2D-to-3D transitions or feature point detection. The 3D points of the captured depth map are then clustered to determine the different limbs of the human body. The 3D point clustering is achieved with different clustering techniques. Our experiments show good performance and results in using clustering to determine the different human body limbs.
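The pipeline described above, from depth map to 3D point cloud to clustered limbs, can be sketched roughly as follows. This is not the authors' implementation; the pinhole camera intrinsics, the choice of k-means as the clustering technique, and the number of clusters are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import KMeans

def depth_to_point_cloud(depth_m, fx=525.0, fy=525.0, cx=319.5, cy=239.5):
    """Back-project a depth image (metres) into an N x 3 point cloud using
    a pinhole camera model with assumed Kinect-like intrinsics."""
    h, w = depth_m.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    z = depth_m
    x = (u - cx) * z / fx
    y = (v - cy) * z / fy
    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    return points[points[:, 2] > 0]       # drop invalid (zero-depth) pixels

def cluster_limbs(points, n_clusters=6):
    """Group foreground 3D points into candidate body segments with k-means."""
    return KMeans(n_clusters=n_clusters, n_init=10).fit_predict(points)

# Toy example: a random foreground depth patch instead of a real Kinect frame.
depth = np.random.uniform(1.0, 2.0, size=(480, 640))
cloud = depth_to_point_cloud(depth)
labels = cluster_limbs(cloud[::100])      # subsample for speed
print(labels[:20])
```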
Systems, Man and Cybernetics | 2016
Hussein Haggag; Ahmed Abobakr; Mohammed Hossny; Saeid Nahavandi
Although marker-less human pose estimation and tracking is important in various systems, many applications nowadays need to detect animals while performing a certain task. These applications are multidisciplinary, including robotics, computer vision, safety, and animal healthcare. The appearance of RGB-D sensors such as the Microsoft Kinect and their successful application to tracking and recognition made this area of research more active, especially given their affordable price. In this paper, a data synthesis approach for generating a realistic and highly varied animal corpus is presented. The generated dataset is used to train a machine learning model to semantically segment animal body parts. In the proposed framework, foreground extraction is applied to segment the animal, dense representations are obtained using the depth comparison feature extractor, and these are used to train a supervised random decision forest. An accurate pixel-wise classification of the parts allows accurate joint localisation and hence pose estimation. Our approach achieves a classification accuracy of 93% in identifying the different body parts of an animal from RGB-D images.
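The depth comparison feature mentioned above (commonly attributed to Shotton et al.'s body-part recognition work) together with a random decision forest can be sketched as follows; the offsets, forest parameters, and synthetic depth frames and labels are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def depth_comparison_features(depth, pixels, offsets, background=10.0):
    """For each pixel x compute f(x) = d(x + u/d(x)) - d(x + v/d(x)) for every
    offset pair (u, v); dividing the offsets by d(x) makes the feature roughly
    depth invariant."""
    h, w = depth.shape
    feats = np.empty((len(pixels), len(offsets)))
    for i, (py, px) in enumerate(pixels):
        d = depth[py, px]
        for j, (u, v) in enumerate(offsets):
            def probe(off):
                y = int(round(py + off[0] / d))
                x = int(round(px + off[1] / d))
                if 0 <= y < h and 0 <= x < w:
                    return depth[y, x]
                return background          # off-image probes read as far away
            feats[i, j] = probe(u) - probe(v)
    return feats

# Toy data standing in for synthesised depth frames with per-pixel part labels.
rng = np.random.default_rng(0)
depth = rng.uniform(1.0, 3.0, size=(120, 160))
pixels = rng.integers(0, [120, 160], size=(500, 2))
labels = rng.integers(0, 4, size=500)      # 4 hypothetical body parts
offsets = [((rng.uniform(-30, 30), rng.uniform(-30, 30)),
            (rng.uniform(-30, 30), rng.uniform(-30, 30))) for _ in range(20)]

X = depth_comparison_features(depth, pixels, offsets)
forest = RandomForestClassifier(n_estimators=10).fit(X, labels)
print(forest.score(X, labels))
```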
Systems, Man and Cybernetics | 2014
Hussein Haggag; Mohammed Hossny; Sherif Haggag; Saeid Nahavandi; Douglas C. Creighton
The Microsoft Kinect sensor was introduced with the Xbox gaming console. It provides a simple and portable motion capture system. The Kinect is nowadays a point of interest in many fields of study and areas of research, given its affordable price compared to its capabilities. The sensor is able to capture and track detected 3D objects with an accuracy comparable to that of state-of-the-art commercial systems. Human safety is one of the highest concerns, especially nowadays when machines and robots are widely used. In this paper we present the use of Kinect technology for enhancing the safety of equipment and operations in seven different applications. These applications include 1) positioning of a child's car seat to optimise the child's position with respect to front and side airbags; 2) a board positioning system to improve the teacher's arm-reach posture; 3) gas station safety, preventing children from accessing the gas pump; 4) indoor pool safety, preventing children from accessing the deep pool area; 5) robot safety emergency stop; 6) workplace safety; and 7) fall prediction for older adults.
Systems, Man and Cybernetics | 2016
Hussein Haggag; Mohammed Hossny; Saeid Nahavandi; Omar Haggag
One of the biggest challenges of RGB-D posture tracking is separating appendages such as briefcases, trolleys, and backpacks from the human body. Markerless motion tracking relies on segmenting each depth frame into a finite set of body parts. This is achieved via supervised learning by assigning each pixel to a certain body part. The training image set for the supervised learning is usually synthesised using popular motion capture databases and an ensemble of 3D models covering a wide range of anthropometric characteristics. In this paper, we propose a novel method for generating training data of human postures with attached objects. The results show a significant increase in body-part classification accuracy for subjects with props, from 60% to 94%, using the generated image set.
Service Oriented Software Engineering | 2014
Sherif Haggag; Shady M. K. Mohamed; Asim Bhatti; Hussein Haggag; Saeid Nahavandi
Neural spikes underpin human brain function. Accurate extraction of spike features leads to a better understanding of brain functionality. The main challenge of feature extraction is to mitigate the effect of strong background noise. To address this problem, we introduce a new feature representation for neural spikes based on the cepstrum of multichannel recordings. Simulation results indicate that the proposed method is more robust than the existing Haar wavelet method.
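A minimal sketch of computing a cepstral feature vector from a spike waveform is shown below; it uses a plain FFT-based real cepstrum, and the synthetic spike and number of retained coefficients are assumptions rather than the paper's exact feature pipeline.

```python
import numpy as np

def real_cepstrum(signal, n_coeffs=12, eps=1e-12):
    """Real cepstrum: inverse FFT of the log magnitude spectrum.
    Returns the first n_coeffs quefrency coefficients as the feature vector."""
    spectrum = np.fft.fft(signal)
    log_mag = np.log(np.abs(spectrum) + eps)   # eps avoids log(0)
    cepstrum = np.fft.ifft(log_mag).real
    return cepstrum[:n_coeffs]

# Synthetic spike-like waveform: a damped oscillation plus background noise.
t = np.arange(64)
spike = np.exp(-t / 10.0) * np.sin(2 * np.pi * t / 8.0)
noisy = spike + 0.1 * np.random.randn(64)

print(real_cepstrum(noisy))
```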
Service Oriented Software Engineering | 2014
Sherif Haggag; Shady M. K. Mohamed; Hussein Haggag; Saeid Nahavandi
In neuroscience, the extracellular action potentials of neurons, known as spikes, are the most important signals. However, a single extracellular electrode can capture spikes from more than one neuron. Spike sorting is therefore an important task for analysing neural activity: the better we understand neurons, the better we can treat neural diseases. Sorting these spikes typically involves several steps: detection, feature extraction and clustering. In this paper we propose to use Mel-frequency cepstral coefficients (MFCC) to extract spike features, combined with a Hidden Markov model (HMM) in the clustering step. Our results show that MFCC features differentiate between spikes more clearly than other feature extraction methods, and that using an HMM as the clustering algorithm also yields better sorting accuracy.
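A rough sketch of an MFCC-plus-HMM pipeline of this kind is shown below, using librosa for MFCC extraction and hmmlearn for the HMM; the sampling rate, window sizes, numbers of coefficients and states, and synthetic spikes are all illustrative assumptions, and the sketch only fits a single HMM to the feature sequences rather than reproducing the paper's full spike-to-neuron clustering.

```python
import numpy as np
import librosa
from hmmlearn import hmm

FS = 24000          # assumed sampling rate of the extracellular recording (Hz)

def spike_mfcc(spike):
    """Extract a short MFCC sequence (frames x coefficients) from one spike.
    Window sizes are chosen for ~2.7 ms snippets, not taken from the paper."""
    feats = librosa.feature.mfcc(y=spike.astype(float), sr=FS,
                                 n_mfcc=6, n_fft=32, hop_length=8, n_mels=8)
    return feats.T                       # hmmlearn expects (n_samples, n_features)

# Synthetic spike snippets standing in for detected spikes from the electrode.
rng = np.random.default_rng(1)
spikes = [np.exp(-np.arange(64) / 12.0) * np.sin(np.arange(64) / 3.0)
          + 0.05 * rng.standard_normal(64) for _ in range(30)]

# Concatenate the per-spike MFCC sequences and remember their lengths,
# which is how hmmlearn handles multiple observation sequences.
sequences = [spike_mfcc(s) for s in spikes]
X = np.vstack(sequences)
lengths = [len(s) for s in sequences]

model = hmm.GaussianHMM(n_components=3, covariance_type="diag", n_iter=50)
model.fit(X, lengths)
print(model.predict(sequences[0]))       # hidden-state labels for one spike
```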
Systems, Man and Cybernetics | 2015
Sherif Haggag; Shady M. K. Mohamed; Hussein Haggag; Saeid Nahavandi
Brain Computer Interfaces (BCIs) play a very important role in human-machine communication. Recent communication systems rely on brain signals: users deliberately modulate their brain activity, rather than using motor movements, to generate signals that can be used to command and control communication devices, robots or computers. In this paper, the aim was to estimate the performance of a BCI system in detecting prosthetic motor imagery tasks using only a single channel of electroencephalography (EEG). The participant is asked to imagine moving his arm up or down, and our system detects the intended movement from the participant's brain signal. Features are extracted from the brain signal using Mel-frequency cepstral coefficients, and based on these features a Hidden Markov model is used to decide whether the participant imagined moving up or down. The major advantage of our method is that only one channel is needed to make the decision. Moreover, the method is online, meaning it can give the decision as soon as the signal is presented to the system. One hundred signals were used for testing; on average, 89% of the up/down prosthetic motor imagery tasks were detected correctly. Thanks to its speed and accuracy, this method can be used in many different applications, such as moving artificial prosthetic limbs and wheelchairs.
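A hedged sketch of one way to implement the decision step, one HMM per imagined movement with assignment by log-likelihood, is given below; the feature dimensionality, HMM sizes, and synthetic EEG feature sequences are assumptions, not the authors' configuration.

```python
import numpy as np
from hmmlearn import hmm

rng = np.random.default_rng(2)

def make_sequences(offset, n_seq=20, length=40, dim=6):
    """Synthetic MFCC-like feature sequences standing in for single-channel
    EEG features of one imagery class (shifted Gaussian noise)."""
    return [offset + rng.standard_normal((length, dim)) for _ in range(n_seq)]

def fit_class_hmm(sequences, n_states=3):
    """Fit one Gaussian HMM to all training sequences of a class."""
    X = np.vstack(sequences)
    lengths = [len(s) for s in sequences]
    model = hmm.GaussianHMM(n_components=n_states, covariance_type="diag",
                            n_iter=50)
    model.fit(X, lengths)
    return model

train_up, train_down = make_sequences(0.5), make_sequences(-0.5)
hmm_up, hmm_down = fit_class_hmm(train_up), fit_class_hmm(train_down)

def classify(sequence):
    """Assign the imagery class whose HMM gives the higher log-likelihood."""
    return "up" if hmm_up.score(sequence) > hmm_down.score(sequence) else "down"

test = make_sequences(0.5, n_seq=5)
print([classify(s) for s in test])       # expected to be mostly "up"
```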