Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Moataz M. Abdelwahab is active.

Publication


Featured research published by Moataz M. Abdelwahab.


International Midwest Symposium on Circuits and Systems | 2016

Real time algorithm for efficient HCI employing features obtained from MYO sensor

Ehab H. El-Shazly; Moataz M. Abdelwahab; Atsushi Shimada; Rin-ichiro Taniguchi

This paper presents a new gesture recognition algorithm that uses different features obtained from the MYO sensor. To preserve the spatial and temporal alignment of the different features of each movement, Two Dimensional Principal Component Analysis (2DPCA) is employed to obtain the dominant features by processing the data in its 2D form. Canonical Correlation Analysis (CCA) is used to find a space in which the projections of similar training/testing pairs become highly correlated. Each testing sequence is matched to the training set that gives the maximum correlation in the new space obtained by CCA. Two new data sets for common HCI applications (gaming and air writing) were collected at the LIMU lab, Kyushu University, and used to verify the efficiency of the proposed algorithm. Low processing complexity, low storage requirements, high accuracy, and fast decisions are factors that promote our algorithm for real-time implementation.
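
For readers unfamiliar with 2DPCA, the sketch below shows the core projection step under one reading of the abstract: the image covariance matrix is built directly from the 2D feature matrices (time steps by channels) and the data is projected onto its leading eigenvectors. The array shapes and preprocessing are illustrative assumptions in NumPy, not the authors' exact pipeline.

```python
# Minimal 2DPCA sketch (NumPy only). The MYO feature matrices and their
# shape (time steps x channels) are hypothetical placeholders.
import numpy as np

def two_d_pca(samples, k):
    """samples: array of shape (n, rows, cols); returns a (cols, k) projection matrix."""
    mean = samples.mean(axis=0)                      # mean "image" over all samples
    centered = samples - mean
    # Image covariance matrix G = (1/n) * sum_i (A_i - mean)^T (A_i - mean)
    G = np.einsum('nij,nik->jk', centered, centered) / len(samples)
    eigvals, eigvecs = np.linalg.eigh(G)             # eigenvalues in ascending order
    return eigvecs[:, ::-1][:, :k]                   # top-k eigenvectors as columns

# Example: 40 recordings, each 100 time steps x 8 EMG channels (illustrative sizes)
rng = np.random.default_rng(0)
recordings = rng.standard_normal((40, 100, 8))
X = two_d_pca(recordings, k=4)
projected = recordings @ X                           # (40, 100, 4) compact descriptors
print(projected.shape)
```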


Signal-Image Technology and Internet-Based Systems | 2015

Efficient Facial and Facial Expression Recognition Using Canonical Correlation Analysis for Transform Domain Features Fusion and Classification

Ehab H. El-Shazly; Moataz M. Abdelwahab; Rin-ichiro Taniguchi

In this paper, an efficient facial and facial expression recognition algorithm employing Canonical Correlation Analysis (CCA) for feature fusion and classification is presented. Multiple features are extracted, transformed to different transform domains, and fused together. Two Dimensional Principal Component Analysis (2DPCA) is used to retain only the principal features representing different faces. 2DPCA also maintains the spatial relation between adjacent pixels, improving the overall recognition accuracy. CCA is used for feature fusion as well as classification. Experimental results on four different data sets showed that our algorithm outperforms the most recently published state-of-the-art techniques and reaches 100% recognition accuracy on most of the data sets.
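
As a rough illustration of CCA-based fusion and correlation matching, the sketch below projects two feature views into a shared CCA space, concatenates the projections, and matches each test sample to the training sample with the highest correlation. The synthetic "transform-domain" features and dimensions are assumptions; the paper's actual descriptors and classification details are not reproduced.

```python
# Hedged sketch of CCA fusion of two feature views plus nearest-neighbour
# matching by correlation in the shared space.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(1)
n_train, n_test, d1, d2 = 60, 10, 32, 48

# Two feature views per face image (e.g. two transform domains), illustrative only
train_view1 = rng.standard_normal((n_train, d1))
train_view2 = train_view1 @ rng.standard_normal((d1, d2)) + 0.1 * rng.standard_normal((n_train, d2))
test_view1 = train_view1[:n_test] + 0.05 * rng.standard_normal((n_test, d1))
test_view2 = train_view2[:n_test] + 0.05 * rng.standard_normal((n_test, d2))

cca = CCA(n_components=8).fit(train_view1, train_view2)
train_fused = np.hstack(cca.transform(train_view1, train_view2))  # fused descriptor
test_fused = np.hstack(cca.transform(test_view1, test_view2))

# Classify each test sample by the training sample with maximum correlation
corr = np.corrcoef(test_fused, train_fused)[:n_test, n_test:]
matches = corr.argmax(axis=1)
print(matches)  # ideally 0..9, since the test samples are noisy copies of the first 10
```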


International Symposium on Signal Processing and Information Technology | 2015

Image encryption using Camellia and Chaotic maps

Marwa S. Elpeltagy; Moataz M. Abdelwahab; Mohammed S. Sayed

Multimedia information security plays a vital role in communication systems, and encryption is one of its key aspects. A new image encryption approach that combines the Camellia block cipher with the logistic chaotic map is proposed in this paper. The cat map is used to scramble the image, and the output of the S-function is masked for additional security. The key schedule algorithm generates the pre-whitening key and the round keys, while the logistic map generates the round mask and the post-whitening key. Experimental results show that the suggested approach is highly resistant to statistical attacks and sensitive to slight changes in the key. It has a large key space and a short encryption time. The proposed approach was tested on both grayscale and color images.
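
The two chaotic ingredients mentioned above can be sketched as follows: an Arnold cat map that scrambles pixel positions and a logistic map that produces a byte mask applied by XOR. The Camellia rounds and key schedule are omitted, and the parameter values are illustrative assumptions, not the paper's.

```python
# Sketch of the scrambling and masking steps only.
import numpy as np

def arnold_cat_map(img, iterations=1):
    """Scramble a square image by repeatedly applying the Arnold cat map."""
    n = img.shape[0]
    out = img.copy()
    for _ in range(iterations):
        scrambled = np.empty_like(out)
        for x in range(n):
            for y in range(n):
                scrambled[(x + y) % n, (x + 2 * y) % n] = out[x, y]
        out = scrambled
    return out

def logistic_keystream(x0, r, length):
    """Generate a byte keystream from the logistic map x_{k+1} = r * x_k * (1 - x_k)."""
    x = x0
    stream = np.empty(length, dtype=np.uint8)
    for i in range(length):
        x = r * x * (1.0 - x)
        stream[i] = int(x * 256) % 256
    return stream

rng = np.random.default_rng(2)
image = rng.integers(0, 256, size=(64, 64), dtype=np.uint8)   # stand-in grayscale image
scrambled = arnold_cat_map(image, iterations=3)
mask = logistic_keystream(x0=0.3579, r=3.99, length=image.size).reshape(image.shape)
cipher = scrambled ^ mask                                      # XOR masking step
print(cipher.shape, cipher.dtype)
```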


International Joint Conference on Computer Vision, Imaging and Computer Graphics Theory and Applications | 2018

Wearable RGB Camera-based Navigation System for the Visually Impaired

Reham Abobeah; Mohamed E. Hussein; Moataz M. Abdelwahab; Amin Shoukry

This paper proposes a wearable RGB camera-based system through which visually impaired people can easily and independently navigate their surrounding environment. The system uses a single head- or chest-mounted RGB camera to capture visual information from the user's current path, and an auditory interface to inform the user of the right direction to follow. This information is obtained through a novel alignment technique that takes as input a visual snippet from the user's current path and returns the corresponding location on the training path. Then, assuming that the wearable camera pose reflects the user's pose, the system corrects the current pose to align with the corresponding pose at the training location. As a result, the user periodically receives an acoustic instruction that assists them in reaching their destination safely. Experiments conducted on various collected indoor and outdoor paths show that the system satisfies its design specifications: it correctly generates the instructions for guiding the visually impaired along these paths, and it is able to detect and correct deviations from the predefined paths.
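
A minimal stand-in for the localisation step is sketched below: a query frame from the current walk is matched against stored training-path frames with ORB features, and the best-matching index is taken as the approximate location on the path. ORB matching is an assumption for illustration, not the authors' alignment technique, and the pose-correction and audio-feedback stages are not shown.

```python
# Illustrative localisation step: expects 8-bit grayscale frames.
import cv2
import numpy as np

def best_path_index(query_frame, path_frames, min_matches=10):
    """Return the index of the training-path frame best matching the query, or -1."""
    orb = cv2.ORB_create(nfeatures=500)
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    q_kp, q_des = orb.detectAndCompute(query_frame, None)
    best_idx, best_score = -1, 0
    for idx, frame in enumerate(path_frames):
        kp, des = orb.detectAndCompute(frame, None)
        if des is None or q_des is None:
            continue
        matches = matcher.match(q_des, des)
        if len(matches) >= min_matches and len(matches) > best_score:
            best_idx, best_score = idx, len(matches)
    return best_idx  # -1 means the query could not be localised on the path
```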


IET Computer Vision | 2018

Multi-modality-based Arabic sign language recognition

Marwa S. Elpeltagy; Moataz M. Abdelwahab; Mohamed E. Hussein; Amin Shoukry; Asmaa Shoala; Moustafa Galal

With the increase in the number of deaf-mute people in the Arab world and the lack of Arabic sign language (ArSL) recognition benchmark data sets, there is a pressing need for publishing a large-volume and realistic ArSL data set. This study presents such a data set, which consists of 150 isolated ArSL signs. The data set is challenging due to the great similarity among hand shapes and motions in the collected signs. Along with the data set, a sign language recognition algorithm is presented. The authors’ proposed method consists of three major stages: hand segmentation, hand shape sequence and body motion description, and sign classification. The hand shape segmentation is based on the depth and position of the hand joints. Histograms of oriented gradients and principal component analysis are applied on the segmented hand shapes to obtain the hand shape sequence descriptor. The covariance of the three-dimensional joints of the upper half of the skeleton in addition to the hand states and face properties are adopted for motion sequence description. The canonical correlation analysis and random forest classifiers are used for classification. The achieved accuracy is 55.57% over 150 ArSL signs, which is considered promising.
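
A simplified sketch of the hand-shape branch is given below: HOG descriptors on segmented hand crops, PCA compression, mean pooling over the sequence, and a random forest classifier. The mean pooling, data shapes, and parameter choices are assumptions; the covariance-of-joints motion descriptor and the CCA stage are not reproduced.

```python
# Hand-shape branch sketch with stand-in data (not the ArSL data set).
import numpy as np
from skimage.feature import hog
from sklearn.decomposition import PCA
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(3)

# Stand-in data: 20 signs x 5 sequences x 15 frames of 64x64 segmented hand crops
n_signs, n_seq, n_frames = 20, 5, 15
videos = rng.random((n_signs * n_seq, n_frames, 64, 64))
labels = np.repeat(np.arange(n_signs), n_seq)

# HOG descriptor per frame
frame_feats = np.array([
    hog(f, pixels_per_cell=(8, 8), cells_per_block=(2, 2))
    for f in videos.reshape(-1, 64, 64)
])

# PCA to compress per-frame descriptors, then mean-pool each sequence
pca = PCA(n_components=40).fit(frame_feats)
X = pca.transform(frame_feats).reshape(len(videos), n_frames, -1).mean(axis=1)

# Random forest over the pooled hand-shape sequence descriptor
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, labels)
print(clf.score(X, labels))
```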


International Midwest Symposium on Circuits and Systems | 2017

Four layers image representation for prediction of lung cancer genetic mutations based on 2DPCA

Moataz M. Abdelwahab; Shimaa A. Abdelrahman

Genetic mutations are the first warning of the onset of lung cancer, and the ability to predict these mutations early could open the door to targeted treatment options for lung cancer patients. Three top candidate genes previously reported to have the highest frequency of lung cancer mutations are considered, with each gene encoded as a symbolic sequence of four letters. A novel method for gene representation is introduced in this paper, in which each letter in the gene sequence is represented by a separate image layer. The resulting four layers are combined with Two Dimensional Principal Component Analysis (2DPCA) to build an algorithm for the prediction of lung cancer. Furthermore, the algorithm is capable of identifying the substitution type in somatic mutations of lung cancer with high accuracy. The high dimensionality and computational complexity of the prediction are reduced by employing 2DPCA, which allows a high-dimensional space to be represented in a low-dimensional one. Experimental results confirm that the proposed algorithm achieves an accuracy of 98.55% in the early prediction of lung cancer and 88.18% in the identification of the substitution type in the gene sequence.
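
One plausible reading of the four-layer encoding is sketched below: each nucleotide letter gets its own binary layer, and the sequence is reshaped into a square so the layers form a small image stack ready for 2DPCA. The reshape-to-square step and the padding are assumptions about how the 1D sequence becomes 2D layers.

```python
# Encode an ACGT string as four binary image layers.
import numpy as np

def sequence_to_layers(seq, side):
    """Return a (4, side, side) stack: one binary layer per nucleotide A, C, G, T."""
    seq = seq.upper().ljust(side * side, 'N')[: side * side]   # pad/truncate (assumption)
    arr = np.frombuffer(seq.encode(), dtype=np.uint8).reshape(side, side)
    return np.stack([(arr == ord(base)).astype(np.uint8) for base in 'ACGT'])

layers = sequence_to_layers("ACGTACGTTTGACCGTA", side=5)
print(layers.shape)        # (4, 5, 5): one layer per nucleotide
print(layers.sum(axis=0))  # 1 where a real base is present, 0 on padding
```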


International Conference on Image Analysis and Recognition | 2016

Video Object Segmentation Based on Superpixel Trajectories

Mohamed A. Abdelwahab; Moataz M. Abdelwahab; Hideaki Uchiyama; Atsushi Shimada; Rin-ichiro Taniguchi

In this paper, a video object segmentation method utilizing the motion of superpixel centroids is proposed. Our method achieves the same advantages as methods based on clustering point trajectories; furthermore, obtaining dense clustering labels from sparse ones becomes very easy: for each superpixel, the label of its centroid is simply propagated to all of its pixels. In addition to the motion of superpixel centroids, a histogram of oriented optical flow (HOOF) extracted from each superpixel is used as a second feature. After segmenting each object, we distinguish between foreground objects and the background using the obtained clustering results.
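
The per-superpixel features described above can be sketched as follows: SLIC superpixels, the centroid of each superpixel, and a magnitude-weighted histogram of flow orientations (a simplified HOOF) over its pixels. The flow field is assumed to come from any dense optical-flow estimator; random data keeps the example self-contained, and the trajectory clustering itself is not shown.

```python
# Per-superpixel centroid and simplified HOOF features.
import numpy as np
from skimage.segmentation import slic

rng = np.random.default_rng(4)
frame = rng.random((120, 160, 3))                 # stand-in RGB frame
flow = rng.standard_normal((120, 160, 2))         # stand-in dense flow (dx, dy)

labels = slic(frame, n_segments=100, start_label=0)

def superpixel_features(labels, flow, n_bins=8):
    feats = []
    angles = np.arctan2(flow[..., 1], flow[..., 0])
    magnitudes = np.linalg.norm(flow, axis=-1)
    for sp in np.unique(labels):
        ys, xs = np.nonzero(labels == sp)
        centroid = (ys.mean(), xs.mean())
        hoof, _ = np.histogram(angles[ys, xs], bins=n_bins,
                               range=(-np.pi, np.pi), weights=magnitudes[ys, xs])
        hoof = hoof / (hoof.sum() + 1e-8)          # normalised flow-orientation histogram
        feats.append((sp, centroid, hoof))
    return feats

features = superpixel_features(labels, flow)
print(len(features), features[0][1], features[0][2].shape)
```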


Journal of Reliable Intelligent Environments | 2016

Real time vision/sensor based features processing for efficient HCI employing canonical correlation analysis

Ehab H. El-Shazly; Moataz M. Abdelwahab; Atsushi Shimada; Rin-ichiro Taniguchi

In this paper, a global algorithm for facial and gesture recognition is presented. The algorithm consists of three modules: feature sensing and processing, dominant feature selection, and feature matching. Depending on the type of data used (vision- or sensor-based), the proposed algorithm exploits multiple features, employing 2DPCA to efficiently compact the feature descriptors while maintaining the spatial and temporal alignment of their components. Canonical Correlation Analysis (CCA) is employed to fuse different features from different descriptors or different performers. CCA also transforms the training and testing feature sets into a new space in which similar pairs become highly correlated. Different experiments were conducted using well-known data sets, in addition to our newly collected data sets, to verify the efficiency of the proposed algorithm. Excellent recognition accuracy and fast performance are factors that promote the proposed algorithm for real-time implementation.
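
One way to read the final matching module is sketched below: for each candidate class, fit a CCA between the class's training feature matrix and the test feature matrix, and keep the class with the highest first canonical correlation. The per-class template layout and the feature dimensions are assumptions, not the paper's configuration.

```python
# Matching by maximum first canonical correlation, with stand-in data.
import numpy as np
from sklearn.cross_decomposition import CCA

def first_canonical_correlation(A, B):
    """A, B: (n_samples, n_features) views of matched samples."""
    cca = CCA(n_components=1).fit(A, B)
    a, b = cca.transform(A, B)
    return np.corrcoef(a[:, 0], b[:, 0])[0, 1]

rng = np.random.default_rng(5)
n, d = 100, 12
class_templates = {c: rng.standard_normal((n, d)) for c in range(5)}   # per-class training features
test_seq = class_templates[3] + 0.1 * rng.standard_normal((n, d))      # noisy copy of class 3

scores = {c: first_canonical_correlation(tmpl, test_seq)
          for c, tmpl in class_templates.items()}
print(max(scores, key=scores.get))   # expected: 3
```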


International Symposium on Multimedia | 2015

A Novel Algorithm for Vehicle Detection and Tracking in Airborne Videos

Mohamed A. Abdelwahab; Moataz M. Abdelwahab

Real-time detection and tracking of multiple vehicles in airborne videos is still a challenging problem due to camera motion and low resolution. In this paper, a real-time technique for simultaneously detecting, tracking, and counting vehicles in airborne and stationary camera videos is proposed. First, feature points are extracted and tracked through the video frames. A new strategy is used for removing non-stationary background points by measuring the changes over time in the histogram of the pixels around each feature point. The obtained foreground features are clustered and grouped into separate trackable vehicles based on their motion properties. Experimental results on videos from airborne and fixed cameras confirm the excellent properties of the proposed algorithm.
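
The first stage (feature extraction and tracking) can be sketched with standard OpenCV calls: Shi-Tomasi corners tracked by pyramidal Lucas-Kanade optical flow. The histogram-based background removal and the motion-based grouping into vehicles are not reproduced here.

```python
# Feature detection and frame-to-frame tracking; expects 8-bit grayscale frames.
import cv2
import numpy as np

def track_features(prev_gray, next_gray, max_corners=300):
    """Return (old_points, new_points) for features successfully tracked between frames."""
    pts = cv2.goodFeaturesToTrack(prev_gray, maxCorners=max_corners,
                                  qualityLevel=0.01, minDistance=7)
    if pts is None:
        return np.empty((0, 2)), np.empty((0, 2))
    new_pts, status, _ = cv2.calcOpticalFlowPyrLK(prev_gray, next_gray, pts, None)
    ok = status.ravel() == 1
    return pts[ok].reshape(-1, 2), new_pts[ok].reshape(-1, 2)
```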


International Midwest Symposium on Circuits and Systems | 2015

A global human action, facial and gestures recognition algorithm employing multiple features extraction and CCA

Ehab H. El-Shazly; Moataz M. Abdelwahab

In this paper, a global algorithm for human action, facial, and gesture recognition is presented. The proposed algorithm relies on the extraction of multiple transform domain features and on Canonical Correlation Analysis (CCA) for feature fusion and classification. The proposed algorithm achieved the best reported results for facial and facial expression recognition, and excellent, comparable results for human action and gesture recognition.
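
As an example of a single transform-domain feature of the kind mentioned above, the sketch below keeps the low-frequency corner of a 2D DCT as a compact descriptor. Which transforms and how many coefficients the paper actually uses are not specified here; the 16x16 block is an illustrative assumption.

```python
# Low-frequency DCT coefficients as a compact transform-domain descriptor.
import numpy as np
from scipy.fft import dctn

def dct_descriptor(image, keep=16):
    """Flattened low-frequency 2D DCT coefficients of a grayscale image."""
    coeffs = dctn(image, norm='ortho')
    return coeffs[:keep, :keep].ravel()

rng = np.random.default_rng(6)
face = rng.random((64, 64))              # stand-in grayscale face crop
print(dct_descriptor(face).shape)        # (256,)
```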

Collaboration


Dive into Moataz M. Abdelwahab's collaborations.

Top Co-Authors

Ehab H. El-Shazly
Egypt-Japan University of Science and Technology

Amin Shoukry
Egypt-Japan University of Science and Technology

Marwa S. Elpeltagy
Egypt-Japan University of Science and Technology

Mohamed A. Abdelwahab
Egypt-Japan University of Science and Technology

Mohammed S. Sayed
Egypt-Japan University of Science and Technology

Reham Abobeah
Egypt-Japan University of Science and Technology