Mohammed Hossny
Deakin University
Publication
Featured research published by Mohammed Hossny.
IEEE Transactions on Human-Machine Systems | 2014
Hailing Zhou; Ajmal S. Mian; Lei Wei; Douglas C. Creighton; Mohammed Hossny; Saeid Nahavandi
Face recognition systems perform well in controlled environments but degrade with variations in illumination, facial expression, and pose. Efforts have been made to explore alternative face modalities, such as infrared (IR) and 3-D, for face recognition. Studies also demonstrate that fusing multiple face modalities improves performance compared with single-modal face recognition. This paper categorizes these algorithms into single-modal and multimodal face recognition and evaluates the methods within each category through detailed descriptions of representative work and summaries in tables. The advantages and disadvantages of each modality for face recognition are analyzed. In addition, face databases and system evaluations are also covered.
international conference on computer modelling and simulation | 2013
Hussein Haggag; Mohammed Hossny; Saeid Nahavandi; Douglas C. Creighton
Ergonomic assessments assist in preventing work-related musculoskeletal injuries and provide safer workplace environments. Conventional 3D motion capture environments are unsuitable for many workplaces due to space, cost, and calibration limitations. The Kinect sensor, introduced with the Xbox game console, provides a low-cost, portable motion capture system. This sensor is capable of capturing and tracking the 3D coordinates of a moving target with an accuracy comparable to state-of-the-art commercial systems. This paper investigates the application of Kinect for real-time rapid upper limb assessment (RULA) to aid in ergonomic analysis of assembly operations in industrial environments. A framework to integrate various motion capture technologies as well as different assessment methods is presented.
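The core computation behind a Kinect-based RULA tool is turning tracked 3D joint positions into joint angles and then into RULA score bands. The sketch below is illustrative, not the paper's implementation: the joint coordinates are hypothetical, and the upper-arm scoring table is a simplified version of the standard RULA bands.

```python
import numpy as np

def joint_angle(a, b, c):
    """Angle at joint b (degrees) formed by the points a-b-c."""
    v1 = np.asarray(a, float) - np.asarray(b, float)
    v2 = np.asarray(c, float) - np.asarray(b, float)
    cosang = np.dot(v1, v2) / (np.linalg.norm(v1) * np.linalg.norm(v2))
    return np.degrees(np.arccos(np.clip(cosang, -1.0, 1.0)))

def rula_upper_arm_score(flexion_deg):
    """Simplified RULA upper-arm band: 1 (near neutral) up to 4 (>90 deg)."""
    if flexion_deg <= 20:
        return 1
    if flexion_deg <= 45:
        return 2
    if flexion_deg <= 90:
        return 3
    return 4

# Hypothetical Kinect joint positions in metres: hip, shoulder, elbow.
hip, shoulder, elbow = (0.0, 0.9, 0.0), (0.0, 1.4, 0.0), (0.1, 1.1, 0.3)
# Upper-arm flexion approximated as the arm's deviation from the trunk line.
flexion = joint_angle(hip, shoulder, elbow)
print(round(flexion, 1), rula_upper_arm_score(flexion))
```

A full RULA assessment would repeat this for the lower arm, wrist, neck, and trunk, and combine the bands through the RULA lookup tables.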
international conference on signal processing and communication systems | 2013
Hussein Haggag; Mohammed Hossny; Despina Filippidis; Douglas C. Creighton; Saeid Nahavandi; Vinod Puri
This paper presents a comparison between the Microsoft Kinect depth sensor and the Asus Xtion for computer vision applications. Depth sensors, known as RGBD cameras, project an infrared pattern and calculate depth from the reflected light using an infrared-sensitive camera. In this research, we compare the depth-sensing capabilities of the two sensors under various conditions. The purpose is to give the reader background on whether to use the Microsoft Kinect or the Asus Xtion sensor for a specific computer vision problem. The properties of the two depth sensors were investigated through a series of experiments evaluating their accuracy under various conditions, highlighting the advantages and disadvantages of both sensors.
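A typical accuracy experiment of this kind images a flat target at a known distance and summarises each sensor's error as a bias (systematic offset) and an RMSE (offset plus noise). The sketch below uses synthetic noise models with made-up parameters purely to illustrate the bookkeeping; the real error characteristics of the two sensors are what such experiments measure.

```python
import numpy as np

def depth_error_stats(depth_map_mm, true_dist_mm):
    """Bias (mean error) and RMSE of a depth map against a known flat target."""
    err = depth_map_mm.astype(float) - true_dist_mm
    return err.mean(), np.sqrt((err ** 2).mean())

rng = np.random.default_rng(0)
true_dist = 2000.0  # flat wall at 2 m, in millimetres
# Hypothetical noise models standing in for real sensor readings.
kinect = true_dist + rng.normal(5.0, 10.0, size=(480, 640))
xtion = true_dist + rng.normal(-3.0, 12.0, size=(480, 640))

for name, dmap in [("Kinect", kinect), ("Xtion", xtion)]:
    bias, rmse = depth_error_stats(dmap, true_dist)
    print(f"{name}: bias={bias:.1f} mm, RMSE={rmse:.1f} mm")
```

Repeating this at several distances and incidence angles yields the accuracy-versus-condition curves that such comparisons report.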
digital image computing techniques and applications | 2015
Hussein Haggag; Mohammed Hossny; Saeid Nahavandi; Sherif Haggag; Douglas C. Creighton
People detection is essential in many different systems. Many applications nowadays require people detection to achieve certain tasks. These applications span many disciplines, such as robotics, ergonomics, biomechanics, gaming, and the automotive industry. This wide range of applications makes human body detection an active area of research. With the release of depth sensors, or RGB-D cameras, such as the Microsoft Kinect, this area of research became even more active, especially given their affordable price. Human body detection must cope with many scenarios and situations. Conditions such as occlusions, background clutter, and props attached to the human body require training on custom-built datasets. In this paper we present an approach to prepare training datasets for detecting and tracking a human body with attached props. The proposed approach uses rigid-body physics simulation to create and animate different props attached to the human body. Three scenarios are implemented. In the first scenario the prop is closely attached to the human body, such as a person carrying a backpack. In the second scenario the prop is loosely attached to the human body, such as a person carrying a briefcase. In the third scenario the prop is not attached to the human body, such as a person dragging a trolley bag. Our approach achieves 93% accuracy in identifying both the human body parts and the attached prop in all three scenarios.
international conference on control, automation, robotics and vision | 2010
Mohammed Hossny; Saeid Nahavandi; Douglas C. Creighton; Asim Bhatti
Mobile robots provide great assistance operating in hazardous environments such as nuclear cores, battlefields, natural disasters, and even at the nano-level of human cells. These robots are usually equipped with a wide variety of sensors in order to collect data and guide their navigation. Whether a single robot operates all sensors or a swarm of cooperating robots operate their specialised sensors, the captured data can be too large to transfer across limited resources (e.g. bandwidth, battery, processing, and response time) in hazardous environments. Therefore, local computations have to be carried out on board the swarming robots to assess the worthiness of captured data and the capacity of fused information in a certain spatial dimension, as well as to select the proper combination of fusion algorithms and metrics. This paper introduces the concepts of Type-I and Type-II fusion errors, fusion capacity, and fusion worthiness. Together, these concepts form the ladder leading to autonomous fusion systems.
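One plausible reading of the Type-I/Type-II distinction, by analogy with statistical hypothesis testing, is: a Type-I fusion error is source information lost in fusion, and a Type-II fusion error is an artifact present in the fused result but in no source. The set-based toy below is our illustration of that reading, with hypothetical feature labels; it is not the paper's formal definition.

```python
def fusion_errors(source_feats, fused_feats):
    """Set-based toy: Type-I = source info lost, Type-II = artifacts introduced."""
    all_source = set().union(*source_feats)
    type1 = all_source - fused_feats  # present in a source, missing from fusion
    type2 = fused_feats - all_source  # in the fusion, absent from every source
    return type1, type2

# Hypothetical feature inventories for two sensor images and their fusion.
src_a = {"edge_1", "texture_3", "gradient_7"}
src_b = {"edge_1", "edge_2", "texture_5"}
fused = {"edge_1", "edge_2", "texture_3", "halo_artifact"}
t1, t2 = fusion_errors([src_a, src_b], fused)
print(sorted(t1), sorted(t2))
```

An on-board fusion-worthiness check could then compare the sizes of these two error sets against resource budgets before committing bandwidth to transmitting the fused result.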
international conference on industrial informatics | 2007
Mohammed Hossny; Saeid Nahavandi; Douglas C. Creighton
In this paper a new method to compute the saliency of source images is presented. This work is an extension of the universal quality index introduced by Wang and Bovik and improved by Piella. It defines saliency according to the change in topology of the quad-tree decomposition between the source images and the fused image. The saliency function gives higher weight to tree nodes that differ more, in terms of topology, in the fused image. Quad-tree decomposition provides an easy and systematic way to add a saliency factor based on the segmented regions in the images.
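A quad-tree decomposition recursively splits an image into four quadrants until each block is near-homogeneous, so the tree's leaf structure reflects where the image has detail. The sketch below shows only that decomposition step, with an arbitrary variance threshold; comparing the resulting leaf structures between a source image and the fused image is how a topology-based saliency weight could then be derived.

```python
import numpy as np

def quadtree_leaves(img, thresh, x=0, y=0, size=None):
    """Recursively split a square image into quadrants until a block's
    standard deviation falls below thresh; return leaf blocks (x, y, size)."""
    if size is None:
        size = img.shape[0]
    block = img[y:y + size, x:x + size]
    if size <= 2 or block.std() <= thresh:
        return [(x, y, size)]
    h = size // 2
    leaves = []
    for dx, dy in [(0, 0), (h, 0), (0, h), (h, h)]:
        leaves += quadtree_leaves(img, thresh, x + dx, y + dy, h)
    return leaves

img = np.zeros((8, 8))
img[:4, :4] = 1.0  # one distinct quadrant; the rest is uniform
print(len(quadtree_leaves(img, thresh=0.1)))
```

A block whose subtree differs between a source and the fused image signals a region where fusion altered the local structure, and would receive a higher saliency weight.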
service oriented software engineering | 2014
Hussein Haggag; Mohammed Hossny; Sherif Haggag; Saeid Nahavandi; Douglas C. Creighton
This paper presents a comparison of different clustering algorithms applied to a point cloud constructed from the depth maps captured by an RGBD camera such as the Microsoft Kinect. The depth sensor returns images in which each pixel represents the distance to its corresponding point rather than RGB data. This is the real novelty of the RGBD camera in computer vision compared with common video-based and stereo-based products. Depth sensors capture depth data without using markers, 2D-to-3D transition, or feature-point determination. The 3D points in the captured depth map are then clustered, using different clustering techniques, to determine the different limbs of the human body. Our experiments show good performance in using clustering to identify the different human-body limbs.
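As one concrete instance of the clustering step, the sketch below runs a plain k-means on an (N, 3) point cloud. The two "limbs" are synthetic Gaussian blobs standing in for Kinect depth points, and k-means is just one of the clustering techniques such a comparison would cover.

```python
import numpy as np

def kmeans(points, k, iters=20, seed=0):
    """Plain k-means on an (N, 3) point cloud; returns labels and centroids."""
    rng = np.random.default_rng(seed)
    centroids = points[rng.choice(len(points), k, replace=False)]
    for _ in range(iters):
        # Distance of every point to every centroid, then nearest assignment.
        d = np.linalg.norm(points[:, None] - centroids[None], axis=2)
        labels = d.argmin(axis=1)
        for j in range(k):
            if (labels == j).any():
                centroids[j] = points[labels == j].mean(axis=0)
    return labels, centroids

rng = np.random.default_rng(1)
# Two hypothetical "limbs": tight blobs of 3D points around separate centres.
arm = rng.normal((0.5, 1.2, 2.0), 0.05, size=(100, 3))
leg = rng.normal((0.3, 0.5, 2.1), 0.05, size=(100, 3))
cloud = np.vstack([arm, leg])
labels, _ = kmeans(cloud, k=2)
print(np.bincount(labels))
```

Swapping in mean-shift, DBSCAN, or hierarchical clustering at the same interface is how the algorithms can be compared on identical point clouds.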
systems, man and cybernetics | 2016
Hussein Haggag; Ahmed Abobakr; Mohammed Hossny; Saeid Nahavandi
Marker-less pose estimation and tracking is important in various systems, and nowadays many applications need to detect animals while performing a certain task. These applications are multidisciplinary, spanning robotics, computer vision, safety, and animal healthcare. The appearance of RGB-D sensors such as the Microsoft Kinect, and their successful application to tracking and recognition, has made this area of research more active, especially given their affordable price. In this paper, a data synthesis approach for generating a realistic and highly varied animal corpus is presented. The generated dataset is used to train a machine learning model to semantically segment animal body parts. In the proposed framework, foreground extraction is applied to segment the animal, dense representations are obtained using the depth comparison feature extractor, and these are used to train a supervised random decision forest. Accurate pixel-wise classification of the parts allows accurate joint localization and hence pose estimation. Our approach records a classification accuracy of 93% in identifying the different body parts of an animal using RGB-D images.
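The depth comparison feature mentioned above (popularised by Shotton et al. for Kinect body-part recognition) differences the depth at two probe pixels whose offsets are normalised by the depth at the reference pixel, making the feature roughly depth-invariant. The sketch below is a minimal single-feature version on a synthetic depth map; probe offsets, the background value, and the scene are illustrative assumptions.

```python
import numpy as np

def depth_comparison_feature(depth, px, u, v, bg=10.0):
    """Shotton-style feature: difference of two depth probes whose pixel
    offsets are divided by the depth at the reference pixel px."""
    y, x = px
    d = depth[y, x]

    def probe(off):
        oy, ox = int(round(y + off[0] / d)), int(round(x + off[1] / d))
        if 0 <= oy < depth.shape[0] and 0 <= ox < depth.shape[1]:
            return depth[oy, ox]
        return bg  # out-of-bounds probes read as a large background depth

    return probe(u) - probe(v)

depth = np.full((40, 40), 4.0)   # background at 4 m
depth[10:30, 15:25] = 2.0        # a body-like region at 2 m
f = depth_comparison_feature(depth, (20, 20), u=(0.0, 20.0), v=(0.0, -5.0))
print(f)
```

A random decision forest thresholds thousands of such features, with randomly drawn offset pairs, to produce the per-pixel body-part labels.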
systems, man and cybernetics | 2014
Hussein Haggag; Mohammed Hossny; Sherif Haggag; Saeid Nahavandi; Douglas C. Creighton
The Microsoft Kinect sensor was introduced with the Xbox gaming console. It offers a simple and portable motion capture system. Kinect is now a point of interest in many fields of study and areas of research, given its affordable price relative to its capabilities. The Kinect sensor can capture and track detected 3D objects with accuracy comparable to that of state-of-the-art commercial systems. Human safety is one of the highest concerns, especially nowadays, when machines and robots are widely used. In this paper we present the use of Kinect technology to enhance the safety of equipment and operations in seven different applications: 1) positioning a child's car seat to optimise the child's position with respect to front and side air-bags; 2) a board positioning system to improve the teacher's arm-reach posture; 3) gas station safety, preventing children from accessing the gas pump; 4) indoor pool safety, keeping children away from the deep pool area; 5) robot-safety emergency stop; 6) workplace safety; and 7) fall prediction for older adults.
international conference on intelligent robotics and applications | 2008
Mohammed Hossny; Saeid Nahavandi; Douglas C. Creighton
Image fusion quality metrics have evolved from image processing quality metrics. They measure the quality of fused images by estimating how much localized information has been transferred from the source images into the fused image. However, this technique assumes that it is actually possible to fuse two images into one without any loss. In practice, some features must be sacrificed or relaxed in both source images. The relaxed features might be very important, such as edges, gradients, and texture elements. The importance of a given feature is application dependent. This paper presents a new method for image fusion quality assessment that depends on estimating how much valuable information has not been transferred.
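To make the "non-transferred information" idea concrete, the toy below inverts the usual transfer-based view: it scores each source against the fused image with the Wang-Bovik universal quality index (UQI) and reports the shortfall of the better match. This complement formulation is our illustrative stand-in, not the paper's actual metric, and the images are random test data.

```python
import numpy as np

def uqi(a, b, eps=1e-12):
    """Wang-Bovik universal quality index between two equal-size images."""
    a, b = a.astype(float).ravel(), b.astype(float).ravel()
    ma, mb = a.mean(), b.mean()
    va, vb = a.var(), b.var()
    cov = ((a - ma) * (b - mb)).mean()
    return 4 * cov * ma * mb / ((va + vb) * (ma ** 2 + mb ** 2) + eps)

def information_loss(src_a, src_b, fused):
    """Toy complement metric: 1 minus the better source-to-fused UQI,
    i.e. how far even the best-preserved source falls short of the fusion."""
    return 1.0 - max(uqi(src_a, fused), uqi(src_b, fused))

rng = np.random.default_rng(0)
a = rng.uniform(50, 200, (16, 16))
b = rng.uniform(50, 200, (16, 16))
fused = 0.5 * (a + b)  # naive average fusion
print(round(information_loss(a, b, fused), 3))
```

A practical version would evaluate the loss over local windows and weight each window by a saliency term, so that losing important features (edges, gradients, texture) costs more than losing flat regions.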