Khaled Assaleh
American University of Sharjah
Publications
Featured research published by Khaled Assaleh.
IEEE Transactions on Biomedical Engineering | 2005
Khaled Assaleh; Hasan Al-Nashash
In this paper, we propose a novel technique for extracting the fetal electrocardiogram (FECG) from a thoracic ECG recording and an abdominal ECG recording of a pregnant woman. The polynomial networks technique is used to nonlinearly map the thoracic ECG signal to the abdominal ECG signal. The FECG is then extracted by subtracting the mapped thoracic ECG from the abdominal ECG signal. Visual test results obtained from real ECG signals show that the proposed algorithm is capable of reliably extracting the FECG from only two leads. The visual quality of the FECG extracted by the proposed technique is found to meet or exceed that of published results using other techniques such as independent component analysis.
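The core of the method is a nonlinear regression from the thoracic lead to the abdominal lead, with the residual taken as the fetal component. The sketch below illustrates that idea on synthetic signals using a simple second-order polynomial expansion with a few lags; the sampling rate, lag count, and signal models are assumptions for illustration, not the authors' configuration.

```python
# Minimal sketch of the polynomial-mapping idea on synthetic signals
# (not the authors' exact network structure or data).
import numpy as np

fs = 500                       # assumed sampling rate (Hz)
t = np.arange(0, 10, 1 / fs)

# Hypothetical stand-ins for the recorded leads.
maternal = np.sin(2 * np.pi * 1.2 * t)                  # maternal heartbeat ~72 bpm
thoracic = maternal + 0.05 * np.random.randn(t.size)    # thoracic lead: maternal only
fetal = 0.2 * np.sin(2 * np.pi * 2.3 * t)               # fetal heartbeat ~138 bpm
abdominal = 0.8 * maternal + 0.3 * maternal**2 + fetal  # abdominal lead: nonlinear mix

# Second-order polynomial expansion of the thoracic lead with a few lags.
lags = 3
X = np.column_stack([np.roll(thoracic, k) for k in range(lags)])
Phi = np.column_stack([X, X**2, np.ones(t.size)])       # monomials up to order 2

# Least-squares fit maps thoracic -> abdominal; the residual approximates the FECG.
w, *_ = np.linalg.lstsq(Phi, abdominal, rcond=None)
fecg_estimate = abdominal - Phi @ w
print("residual RMS:", np.sqrt(np.mean(fecg_estimate**2)))
```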
IEEE Transactions on Systems, Man, and Cybernetics, Part B | 2007
Tamer Shanableh; Khaled Assaleh; Mohammad Al-Rousan
This paper presents various spatio-temporal feature-extraction techniques with applications to online and offline recognition of isolated Arabic Sign Language gestures. The temporal features of a video-based gesture are extracted through forward, backward, and bidirectional predictions. The prediction errors are thresholded and accumulated into one image that represents the motion of the sequence. The motion representation is then followed by spatial-domain feature extraction. As such, the temporal dependencies are eliminated and the whole video sequence is represented by a few coefficients. The linear separability of the extracted features is assessed, and its suitability for both parametric and nonparametric classification techniques is elaborated upon. The proposed feature-extraction scheme was complemented by simple classification techniques, namely, K-nearest-neighbor (KNN) and Bayesian, i.e., likelihood ratio, classifiers. Experimental results showed classification performance ranging from 97% to 100% recognition rates. To validate our proposed technique, we have conducted a series of experiments using the classical way of classifying data with temporal dependencies, namely, hidden Markov models (HMMs). Experimental results revealed that the proposed feature-extraction scheme combined with simple KNN or Bayesian classification yields results comparable to the classical HMM-based scheme. Moreover, since the proposed scheme compresses the motion information of an image sequence into a single image, it allows for using simple classification techniques where the temporal dimension is eliminated. This is advantageous for both the computational and storage requirements of the classifier.
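As a rough illustration of the accumulation step, the sketch below thresholds successive frame differences (a simple stand-in for the forward/backward prediction errors), accumulates them into a single motion image, and extracts low-frequency 2-D DCT coefficients as spatial features; the threshold, frame size, and DCT cutoff are assumed values.

```python
# Minimal sketch of the accumulated-prediction-error idea on synthetic frames
# (assumed threshold and frame size; not the authors' exact pipeline).
import numpy as np
from scipy.fftpack import dct

def motion_image(frames, threshold=20):
    """Threshold successive frame differences and accumulate them into one image."""
    acc = np.zeros_like(frames[0], dtype=float)
    for prev, curr in zip(frames[:-1], frames[1:]):
        err = np.abs(curr.astype(float) - prev.astype(float))
        acc += (err > threshold).astype(float)
    return acc

def zonal_dct_features(image, cutoff=8):
    """2-D DCT of the motion image; keep the top-left cutoff x cutoff block."""
    coeffs = dct(dct(image, axis=0, norm='ortho'), axis=1, norm='ortho')
    return coeffs[:cutoff, :cutoff].ravel()

# Hypothetical gesture: a stack of random 64x64 grayscale frames.
frames = [np.random.randint(0, 256, (64, 64), dtype=np.uint8) for _ in range(30)]
features = zonal_dct_features(motion_image(frames))
print(features.shape)   # (64,) feature vector representing the whole sequence
```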
Applied Soft Computing | 2009
Mohammad Al-Rousan; Khaled Assaleh; A. Tala'a
Sign language in the Arab world has only recently been recognized and documented. There have been no serious attempts to develop a recognition system that can be used as a communication means between hearing-impaired and other people. This paper introduces the first automatic Arabic sign language (ArSL) recognition system based on hidden Markov models (HMMs). A large set of samples has been used to recognize 30 isolated words from the Standard Arabic sign language. The system operates in different modes, including offline, online, signer-dependent, and signer-independent modes. Experimental results using real ArSL data collected from deaf people demonstrate that the proposed system achieves high recognition rates in all modes. In the signer-dependent case, the system obtains word recognition rates of 98.13% on the training data in offline mode, 96.74% on the test data in offline mode, and 93.8% on the test data in online mode. In the signer-independent case, the system obtains word recognition rates of 94.2% and 90.6% for offline and online modes, respectively. The system does not rely on the use of data gloves or other means as input devices, and it allows the deaf signers to perform gestures freely and naturally.
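A common way to realize isolated-word recognition with HMMs is to train one model per word and pick the model with the highest log-likelihood at test time. The sketch below shows that pattern with the hmmlearn library on toy Gaussian feature sequences; the library choice, state count, and feature dimensions are assumptions and do not reflect the authors' implementation.

```python
# Minimal sketch of isolated-word HMM classification with hmmlearn
# (toy data; the paper's system recognizes 30 words from real ArSL data).
import numpy as np
from hmmlearn import hmm

n_words, n_states, feat_dim = 3, 4, 10   # toy sizes for illustration

rng = np.random.default_rng(0)
models = []
for w in range(n_words):
    # Hypothetical training data: a few observation sequences per word.
    seqs = [rng.normal(loc=w, size=(20, feat_dim)) for _ in range(5)]
    X = np.vstack(seqs)
    lengths = [len(s) for s in seqs]
    m = hmm.GaussianHMM(n_components=n_states, covariance_type='diag', n_iter=50)
    m.fit(X, lengths)                     # one HMM per sign word
    models.append(m)

# Recognition: pick the word model with the highest log-likelihood.
test = rng.normal(loc=1, size=(20, feat_dim))
scores = [m.score(test) for m in models]
print("recognized word:", int(np.argmax(scores)))
```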
EURASIP Journal on Advances in Signal Processing | 2005
Khaled Assaleh; Mohammad Al-Rousan
Building an accurate automatic sign language recognition system is of great importance in facilitating efficient communication with deaf people. In this paper, we propose the use of polynomial classifiers as a classification engine for the recognition of Arabic sign language (ArSL) alphabet. Polynomial classifiers have several advantages over other classifiers in that they do not require iterative training, and that they are highly computationally scalable with the number of classes. Based on polynomial classifiers, we have built an ArSL system and measured its performance using real ArSL data collected from deaf people. We show that the proposed system provides superior recognition results when compared with previously published results using ANFIS-based classification on the same dataset and feature extraction methodology. The comparison is shown in terms of the number of misclassified test patterns. The reduction in the rate of misclassified patterns was very significant. In particular, we have achieved a 36% reduction of misclassifications on the training data and 57% on the test data.
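The closed-form training that the abstract refers to can be illustrated as follows: expand the features polynomially, then solve a single regularized least-squares problem against one-hot class targets, so no iterative training is needed. The sketch uses the scikit-learn digits dataset as a stand-in; the expansion order and regularization constant are assumed values.

```python
# Minimal sketch of a polynomial classifier trained in closed form
# (toy data; expansion order and regularization are assumptions).
import numpy as np
from sklearn.preprocessing import PolynomialFeatures
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

poly = PolynomialFeatures(degree=2, include_bias=True)
P_tr = poly.fit_transform(X_tr)
P_te = poly.transform(X_te)

# One-hot targets; weights come from a single regularized least-squares solve.
T = np.eye(10)[y_tr]
lam = 1e-3
W = np.linalg.solve(P_tr.T @ P_tr + lam * np.eye(P_tr.shape[1]), P_tr.T @ T)
pred = np.argmax(P_te @ W, axis=1)
print("accuracy:", (pred == y_te).mean())
```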
Information Sciences, Signal Processing and Their Applications | 2007
Khaled Assaleh; Wajdi Ahmad
In this paper, we present a novel approach for speech signal modeling using fractional calculus. This approach is contrasted with the celebrated Linear Predictive Coding (LPC) approach which is based on integer order models. It is demonstrated via numerical simulations that by using a few integrals of fractional orders as basis functions, the speech signal can be modeled accurately. The new approach has the merit of requiring a smaller number of model parameters, and is demonstrated to be superior to the LPC approach in capturing the details of the modeled signal.
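One way to picture the idea is to build a small basis of Grünwald-Letnikov fractional integrals of the (delayed) signal and fit the signal by least squares. The sketch below does that on a toy frame; the chosen fractional orders and the model structure are assumptions and are not taken from the paper.

```python
# Minimal sketch of Grunwald-Letnikov fractional integration used as a
# modeling basis (orders and setup are assumptions, not the paper's model).
import numpy as np

def gl_fractional_integral(x, alpha, h=1.0):
    """Grunwald-Letnikov fractional integral of order alpha > 0."""
    n = len(x)
    w = np.ones(n)
    for k in range(1, n):
        w[k] = w[k - 1] * (1 - (-alpha + 1) / k)   # GL weights for order -alpha
    return (h ** alpha) * np.convolve(x, w)[:n]     # causal part only

# Toy "speech" frame: a sum of two damped sinusoids.
n = np.arange(240)
s = np.exp(-0.01 * n) * np.sin(0.2 * n) + 0.5 * np.sin(0.05 * n)

# Basis: a few fractional integrals of the one-sample-delayed signal.
orders = [0.3, 0.7, 1.1]
delayed = np.concatenate(([0.0], s[:-1]))
B = np.column_stack([gl_fractional_integral(delayed, a) for a in orders])

coef, *_ = np.linalg.lstsq(B, s, rcond=None)
err = s - B @ coef
print("orders:", orders, "residual RMS:", np.sqrt(np.mean(err**2)))
```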
Computers & Industrial Engineering | 2005
Khaled Assaleh; Yousef Al-Assaf
Obtaining adequate features is a critical step in classifying causable patterns in control charts. Various methods were developed to extract features that maximize the inter-class variability while minimizing the intra-class variations. Most of these methods are based on either time- or frequency-domain analysis. As a multiresolution analysis approach, the wavelet transform was considered to exploit the joint time-frequency characteristics of the patterns. However, the effectiveness of the features obtained by multi-resolution wavelet analysis (MRWA) suffers from the frequency leakage among the different spectral bands. This phenomenon is inherent in wavelet analysis regardless of the choice of the mother wavelet. Cross-band frequency leakage smears the band-specific information, which may yield less distinguishing features, especially for short-time observation patterns. In this work, we introduce a multi-resolution analysis approach based on the discrete cosine transform (DCT) that overcomes the problems associated with MRWA. We also verify that the classification rates of shift, trend, and cyclic causable patterns using multi-resolution DCT (MRDCT) features are higher than those obtained using MRWA features. Furthermore, the computational requirements for MRDCT are notably lower than those needed for MRWA. An artificial neural network (ANN) classifier was used with both feature extraction methods.
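A simple way to mimic a multi-resolution DCT decomposition is to partition the DCT coefficients into dyadic bands and use per-band energies as features, as sketched below on synthetic trend and cyclic patterns; the band layout and energy features are assumptions about MRDCT rather than the authors' exact formulation.

```python
# Minimal sketch of band-partitioned DCT features for control-chart patterns
# (dyadic band layout and energy features are assumed).
import numpy as np
from scipy.fftpack import dct

def mrdct_features(x, levels=4):
    """Split DCT coefficients into levels + 1 dyadic bands; return band energies."""
    c = dct(x, norm='ortho')
    edges = [0] + [len(c) // 2 ** k for k in range(levels, 0, -1)] + [len(c)]
    return np.array([np.sum(c[a:b] ** 2) for a, b in zip(edges[:-1], edges[1:])])

# Hypothetical patterns: a linear trend and a cyclic disturbance.
n = np.arange(64)
trend = 0.05 * n + 0.2 * np.random.randn(64)
cyclic = np.sin(2 * np.pi * n / 8) + 0.2 * np.random.randn(64)
print("trend features :", mrdct_features(trend))
print("cyclic features:", mrdct_features(cyclic))
```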
Neurocomputing | 2010
Tamer Shanableh; Khaled Assaleh
In polynomial networks, feature vectors are mapped to a higher dimensional space through a polynomial function. The expanded vectors are then passed to a single-layer network to compute the model parameters. However, as the dimensionality of the feature vectors grows with polynomial expansion, polynomial training and classification become impractical due to the prohibitive number of expanded variables. This problem is more prominent in vision-based systems where high-dimensionality feature vectors are extracted from digital images and/or video. In this paper, we propose to reduce the dimensionality of the expanded vector through the use of stepwise regression. We compare our work to the reduced-model multinomial networks, where the dimensionality of the expanded feature vectors grows linearly whilst preserving the classification ability. We also compare the proposed work to standard polynomial classifiers and to established techniques of polynomial classifiers with dimensionality reduction. Two application scenarios are used to test the proposed solution, namely, image-based hand recognition and video-based recognition of isolated sign language gestures. Various datasets from the UCI machine learning repository are also used for testing. Experimental results illustrate the effectiveness of the proposed dimensionality reduction technique in comparison to published methods.
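The reduction step can be approximated with forward sequential feature selection applied after polynomial expansion, which is what the sketch below does on a small scikit-learn dataset; the selector, dataset, and term counts are stand-ins for the paper's stepwise-regression procedure.

```python
# Minimal sketch: polynomial expansion followed by forward feature selection
# as a stand-in for stepwise-regression reduction (settings are assumptions).
from sklearn.datasets import load_iris
from sklearn.preprocessing import PolynomialFeatures
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import cross_val_score

X, y = load_iris(return_X_y=True)
model = make_pipeline(
    PolynomialFeatures(degree=3, include_bias=False),    # expanded feature vector
    SequentialFeatureSelector(                            # keep only a few terms
        LogisticRegression(max_iter=1000),
        n_features_to_select=8, direction='forward'),
    LogisticRegression(max_iter=1000),                     # single-layer classifier
)
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())
```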
IEEE Transactions on Human-Machine Systems | 2015
Noor Ali Tubaiz; Tamer Shanableh; Khaled Assaleh
In this paper, we propose a glove-based Arabic sign language recognition system using a novel technique for sequential data classification. We compile a sensor-based dataset of 40 sentences using an 80-word lexicon. In the dataset, hand movements are captured using two DG5-VHand data gloves. Data labeling is performed using a camera to synchronize hand movements with their corresponding sign language words. Low-complexity preprocessing and feature extraction techniques are applied to capture and emphasize the temporal dependence of the data. Subsequently, a Modified k-Nearest Neighbor (MKNN) approach is used for classification. The proposed MKNN makes use of the context of feature vectors for the purpose of accurate classification. The proposed solution achieved a sentence recognition rate of 98.9%. The results are compared against an existing vision-based approach that uses the same set of sentences. The proposed solution is superior in terms of classification rates while eliminating restrictions of vision-based systems.
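The published MKNN details are not reproduced here, but the sketch below shows one way to add temporal context to frame-wise kNN labeling: classify each feature vector independently, then replace each label by the majority vote within a surrounding window. The window size and synthetic glove streams are assumptions.

```python
# Minimal sketch of frame-wise kNN with a temporal-context vote, as a rough
# stand-in for the paper's Modified kNN (window size and data are assumed).
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)
# Hypothetical glove feature streams: 3 word classes, 30 frames each.
X_train = np.vstack([rng.normal(c, 1.0, size=(30, 10)) for c in range(3)])
y_train = np.repeat(np.arange(3), 30)
X_test = np.vstack([rng.normal(c, 1.0, size=(30, 10)) for c in [0, 2, 1]])

knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
raw = knn.predict(X_test)                      # per-frame labels

# Context step: replace each frame label by the majority label in a window.
w = 7
smoothed = np.array([np.bincount(raw[max(0, i - w): i + w + 1]).argmax()
                     for i in range(len(raw))])
print(smoothed)
```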
IEEE Transactions on Dielectrics and Electrical Insulation | 2012
Refat Atef Ghunem; Khaled Assaleh; Ayman H. El-Hag
In this paper, a prediction model is proposed for estimating the furan content in transformer oil using oil quality parameters and dissolved gases as inputs. Multilayer perceptron feed-forward neural networks were used to model the relationships between various transformer oil parameters and furan content. Seven transformer oil parameters (breakdown voltage, water content, acidity, total combustible hydrocarbon gases and hydrogen, total combustible gases, and carbon monoxide and carbon dioxide concentrations) are proposed as predictors of furan content in transformer oil. The predictors were chosen based on the physical nature of oil/paper insulation degradation under transformer operating conditions. Moreover, stepwise regression was used to further tune the prediction model by selecting the most significant predictors. The proposed model has been tested on in-service power transformers, and a prediction accuracy of 90% for furan content in transformer oil has been achieved.
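A minimal version of such a predictor is a small feed-forward network regressing furan content on the seven inputs. The sketch below trains scikit-learn's MLPRegressor on synthetic data whose columns merely stand in for the listed oil parameters; the network size and data are assumptions, not the paper's model.

```python
# Minimal sketch of an MLP regressor mapping oil-quality/gas inputs to furan
# content (synthetic data and network size are assumed, not the paper's).
import numpy as np
from sklearn.neural_network import MLPRegressor
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Columns stand in for: breakdown voltage, water content, acidity, TCHG + H2,
# total combustible gases, CO, CO2 (hypothetical values).
X = rng.uniform(0, 1, size=(300, 7))
furan = 2.0 * X[:, 1] + 1.5 * X[:, 5] + 0.5 * rng.normal(size=300)  # toy target

X_tr, X_te, y_tr, y_te = train_test_split(X, furan, random_state=0)
model = make_pipeline(StandardScaler(),
                      MLPRegressor(hidden_layer_sizes=(10,), max_iter=2000,
                                   random_state=0))
model.fit(X_tr, y_tr)
print("R^2 on held-out data:", model.score(X_te, y_te))
```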
Digital Signal Processing | 2011
Tamer Shanableh; Khaled Assaleh
This paper presents a solution for user-independent recognition of isolated Arabic sign language gestures. The video-based gestures are preprocessed to segment out the hands of the signer based on color segmentation of the colored gloves. The prediction errors of consecutive segmented images are then accumulated into two images according to the directionality of the motion. Different accumulation weights are employed to further help preserve the directionality of the projected motion. Normally, a gesture is represented by hand movements; however, additional user-dependent head and body movements might be present. In the user-independent mode we seek to filter out such user-dependent information. This is realized by encapsulating the movements of the segmented hands in a bounding box. The encapsulated images of the projected motion are then transformed into the frequency domain using the Discrete Cosine Transform (DCT). Feature vectors are formed by applying zonal coding to the DCT coefficients with varying cutoff values. Classification techniques such as KNN and polynomial classifiers are used to assess the validity of the proposed user-independent feature extraction schemes. An average classification rate of 87% is reported.
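The bounding-box encapsulation and zonal DCT coding steps can be sketched as below: crop the accumulated-motion image to its active region, take a 2-D DCT, and keep a triangular block of low-frequency coefficients. The cutoff value and the synthetic image are assumptions, and classification is omitted.

```python
# Minimal sketch of the bounding-box crop plus zonal DCT coding step
# (cutoff value and image are assumed; classification is omitted).
import numpy as np
from scipy.fftpack import dct

def bounding_box_crop(motion_img):
    """Crop the motion image to the smallest box containing nonzero pixels."""
    rows = np.any(motion_img > 0, axis=1)
    cols = np.any(motion_img > 0, axis=0)
    r0, r1 = np.where(rows)[0][[0, -1]]
    c0, c1 = np.where(cols)[0][[0, -1]]
    return motion_img[r0:r1 + 1, c0:c1 + 1]

def zonal_coding(img, cutoff=10):
    """2-D DCT followed by a triangular low-frequency (zonal) mask."""
    c = dct(dct(img, axis=0, norm='ortho'), axis=1, norm='ortho')
    return np.array([c[i, j] for i in range(cutoff) for j in range(cutoff - i)])

# Hypothetical accumulated-motion image with activity in its center.
img = np.zeros((120, 160))
img[40:80, 60:110] = np.random.rand(40, 50)
features = zonal_coding(bounding_box_crop(img))
print(features.shape)   # cutoff * (cutoff + 1) / 2 = 55 coefficients
```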