
Publication


Featured research published by M. P. Paulraj.


International Colloquium on Signal Processing and Its Applications | 2010

A phoneme based sign language recognition system using skin color segmentation

M. P. Paulraj; Sazali Yaacob; Mohd Shuhanaz bin Zanar Azalan; Rajkumar Palaniappan

A sign language is a language which, instead of acoustically conveyed sound patterns, uses visually transmitted sign patterns. Sign languages are commonly developed within deaf communities, which can include interpreters, friends and families of deaf people as well as people who are deaf or hard of hearing themselves. Developing a sign language recognition system will help the hearing impaired communicate more fluently with hearing people. This paper presents a simple sign language recognition system developed using skin color segmentation and an Artificial Neural Network. Moment invariant features extracted from the right- and left-hand gesture images are used to develop a network model. The system has been implemented and tested for its validity. Experimental results show that the average recognition rate is 92.85%.
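
To make the described pipeline concrete, a minimal sketch (not the authors' code) of skin-colour segmentation followed by Hu moment-invariant extraction is shown below; it assumes OpenCV and NumPy, and the YCrCb thresholds and input file name are illustrative assumptions.

```python
# Minimal sketch: skin-colour segmentation in YCrCb followed by Hu moment
# invariants, which would then feed an artificial neural network.
import cv2
import numpy as np

def hand_features(bgr_image):
    """Segment skin pixels and return the 7 Hu moment invariants."""
    ycrcb = cv2.cvtColor(bgr_image, cv2.COLOR_BGR2YCrCb)
    # Commonly used Cr/Cb skin thresholds; tune for the capture setup (assumption).
    lower = np.array([0, 133, 77], dtype=np.uint8)
    upper = np.array([255, 173, 127], dtype=np.uint8)
    mask = cv2.inRange(ycrcb, lower, upper)
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))
    hu = cv2.HuMoments(cv2.moments(mask)).flatten()
    # Log-scale the invariants so they span a comparable range for the network.
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)

# features = hand_features(cv2.imread("right_hand.png"))  # hypothetical file
```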


International Conference on Electronic Design | 2008

Extraction of head and hand gesture features for recognition of sign language

M. P. Paulraj; Sazali Yaacob; Hazry Desa; C.R. Hema; Wan Mohd Ridzuan Wan Ab Majid

Sign language is the primary communication method that hearing-impaired people use in their daily lives. Sign language recognition has recently gained a lot of attention from researchers in computer vision. Sign language recognition systems in general require knowledge of the hands' position, shape, motion and orientation as well as facial expression. In this paper we present a simple method for converting sign language into voice signals, using features obtained from head and hand gestures, which a hearing-impaired person can use to communicate with an ordinary person. A simple feature extraction method based on the area of the objects in a binary image and the Discrete Cosine Transform (DCT) is proposed for extracting features from the sign language video. A simple neural network model is developed to recognize the gestures using the features computed from the video stream. An audio system plays the particular word corresponding to each gesture. Experimental results demonstrate that the recognition rate of the proposed neural network model is about 91%.
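
As an illustration only, a sketch of the area-plus-DCT feature idea is given below, assuming OpenCV and SciPy; the video path, binarisation threshold and number of retained coefficients are assumptions, not values from the paper.

```python
# Illustrative sketch: per-frame object area of a binarised gesture clip,
# compressed with a Discrete Cosine Transform into a fixed-length vector.
import cv2
import numpy as np
from scipy.fft import dct

def area_dct_features(video_path, n_coeffs=16, threshold=128):
    cap = cv2.VideoCapture(video_path)
    areas = []
    while True:
        ok, frame = cap.read()
        if not ok:
            break
        gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
        _, binary = cv2.threshold(gray, threshold, 255, cv2.THRESH_BINARY)
        areas.append(np.count_nonzero(binary))   # object area per frame
    cap.release()
    # Low-order DCT coefficients summarise how the area evolves over time.
    coeffs = dct(np.asarray(areas, dtype=float), norm="ortho")
    return coeffs[:n_coeffs]
```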


IEEE International Conference on Control System, Computing and Engineering | 2011

Malaysian English accents identification using LPC and formant analysis

M. A. Yusnita; M. P. Paulraj; Sazali Yaacob; Shahriman Abu Bakar; A. Saidatul

In Malaysia, most people speak one of several varieties of English known as Malaysian English (MalE), and there is no uniform version because of the multi-ethnic population. It is a common scenario that Malaysians speak with a particular local Malay, Chinese or Indian English accent. As most commercial speech recognizers have been developed using a standard English language, achieving highly efficient performance is challenging when accented speech is presented to such a system. Accent identification (AccID) can serve as a subsystem of a speaker-independent automatic speech recognition (SI-ASR) system so that this performance degradation can be tackled. In this paper, the most important speech features of three ethnic groups of MalE speakers are extracted using Linear Predictive Coding (LPC), formant and log-energy feature vectors. In the subsequent stage, the accent identity of a speaker is predicted using a K-Nearest Neighbors (KNN) classifier based on the extracted information. Beforehand, the preprocessing parameters and LPC order are investigated to properly extract the speech features. This study is conducted on a small speech corpus developed as a pilot study to determine the feasibility of automatic AccID of MalE speakers, which has not been reported before. The experimental results indicate a highly promising recognition accuracy of 94.2% for fused feature sets of LPC, formants and log energy.
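
A hedged sketch of LPC, formant and log-energy features feeding a KNN classifier follows; it assumes librosa and scikit-learn, and the LPC order, window and neighbour count are illustrative choices rather than the paper's settings.

```python
# Illustrative feature extraction for accent classification: LPC coefficients,
# crude formant estimates from the LPC polynomial roots, and log energy.
import numpy as np
import librosa
from sklearn.neighbors import KNeighborsClassifier

def lpc_formant_features(y, sr, order=12):
    y = y * np.hamming(len(y))                    # window the voiced segment
    a = librosa.lpc(y, order=order)               # [1, a1, ..., a_order]
    roots = np.roots(a)
    roots = roots[np.imag(roots) > 0]             # keep one of each conjugate pair
    # Crude F1-F3 estimates from the root angles (in Hz).
    formants = np.sort(np.angle(roots) * sr / (2 * np.pi))[:3]
    log_energy = np.log(np.sum(y ** 2) + 1e-12)
    return np.concatenate([a[1:], formants, [log_energy]])

# Hypothetical usage: X rows are such vectors, labels are the accent groups.
# knn = KNeighborsClassifier(n_neighbors=5).fit(X_train, y_train)
# predicted_accent = knn.predict(X_test)
```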


International Colloquium on Signal Processing and Its Applications | 2010

Moving vehicle noise classification using backpropagation algorithm

Norasmadi Abdul Rahim; M. P. Paulraj; Abdul Hamid Adom; Sathishkumar Sundararaj

Hearing-impaired people can be apprehensive about walking along a street or living alone, since it is difficult for them to hear and judge sound information, and they often encounter risky situations outdoors. Profoundly deaf people cannot readily assess the sound produced by a moving vehicle, nor can they distinguish the type and distance of a vehicle approaching from behind. Profoundly deaf people generally do not use a hearing aid, as it provides them no benefit. In this paper, a simple system that identifies the type and distance of a moving vehicle using an artificial neural network is proposed. The noise emanating from vehicles moving along the roadside was recorded along with the type and distance of each vehicle. A simple feature extraction algorithm based on a frequency-analysis approach is used to extract features from the recorded vehicle noise. A one-third-octave filter bank is used to obtain the important signatures from the emanated noise. The extracted features are associated with the type and distance of the moving vehicle, and a simple neural network model is developed. The developed neural network model is tested for its validity.
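
The one-third-octave band-energy idea can be sketched as follows (not the paper's implementation); the starting band, filter order and band count are assumptions, and SciPy is assumed available.

```python
# Sketch: one-third-octave band energies of a vehicle-noise recording,
# computed with Butterworth band-pass filters.
import numpy as np
from scipy.signal import butter, sosfiltfilt

def third_octave_energies(x, fs, f_low=100.0, n_bands=15):
    """Return log energy in successive one-third-octave bands starting at f_low."""
    energies = []
    fc = f_low
    for _ in range(n_bands):
        lo, hi = fc / 2 ** (1 / 6), fc * 2 ** (1 / 6)     # band edges
        if hi >= fs / 2:
            break
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, x)
        energies.append(np.log(np.sum(band ** 2) + 1e-12))
        fc *= 2 ** (1 / 3)                                 # next band centre
    return np.asarray(energies)
```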


Intelligent Information Hiding and Multimedia Signal Processing | 2007

Fuzzy Based Classification of EEG Mental Tasks for a Brain Machine Interface

C.R. Hema; M. P. Paulraj; R. Nagarajan; Sazali Yaacob; Abdul Hamid Adom

Patients with neurodegenerative diseases lose all motor movements, including speech, leaving them totally locked in. One possible option for rehabilitating such patients is a brain machine interface (BMI), which uses their active cognitive capabilities to control external devices and their environment. BMIs are designed using the electrical activity of the brain detected by scalp EEG electrodes. Classifying EEG signals recorded during mental tasks is one technique for designing a BMI. In this paper, five different mental tasks from five subjects were studied; for classification, combinations of two tasks are examined for each subject. A fuzzy-based classification method is proposed for classifying the EEG mental-task signals. The power of the EEG spectral frequency bands is used as the feature set for training and testing the fuzzy classifier. Classification accuracies ranged from 65% to 100% for the different combinations of mental tasks. The results validate the performance of the proposed algorithm for mental-task classification.
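
The spectral band-power features described here can be sketched as below using Welch's method; the fuzzy rule base itself is not reproduced, and the band edges and segment length are common choices rather than the paper's.

```python
# Illustrative band-power feature extraction: average power in the classical
# EEG bands, computed with Welch's method; these features would then be
# passed to the fuzzy classifier described in the abstract.
import numpy as np
from scipy.signal import welch

EEG_BANDS = {"delta": (0.5, 4), "theta": (4, 8),
             "alpha": (8, 13), "beta": (13, 30)}

def band_powers(eeg, fs):
    f, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), int(2 * fs)))
    feats = []
    for lo, hi in EEG_BANDS.values():
        mask = (f >= lo) & (f < hi)
        feats.append(np.trapz(psd[mask], f[mask]))   # integrated band power
    return np.asarray(feats)
```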


IEEE International Conference on Control System, Computing and Engineering | 2011

Analysis of EEG signals during relaxation and mental stress condition using AR modeling techniques

A. Saidatul; M. P. Paulraj; Sazali Yaacob; M. A. Yusnita

Electroencephalography (EEG) is the most important tool for studying brain behavior. This paper presents an integrated system for detecting brain changes between relaxed and mentally stressed conditions. In most studies that use quantitative EEG analysis, the properties of the measured EEG are computed by applying power spectral density (PSD) estimation to selected representative EEG samples, where the sample for which the PSD is calculated is assumed to be stationary. This work presents a comparative study of the PSD obtained from EEG signals recorded under resting and mental-stress conditions. The power density spectra were calculated using the fast Fourier transform (FFT) with Welch's method and the autoregressive (AR) Yule-Walker and Burg methods. Finally, a neural network classifier is used to distinguish the two conditions. A maximum classification accuracy of 91.17% was obtained with the Burg method, compared to the Yule-Walker and Welch methods.
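
For illustration, a minimal comparison of a Welch PSD and a Yule-Walker AR spectrum is sketched below (Burg's recursion is omitted); it assumes SciPy and statsmodels, and the model order and frequency grid are assumptions.

```python
# Two PSD estimators compared in this line of work: Welch's method and an
# AR spectrum fitted with the Yule-Walker equations.
import numpy as np
from scipy.signal import welch
from statsmodels.regression.linear_model import yule_walker

def welch_psd(x, fs):
    return welch(x, fs=fs, nperseg=256)

def ar_psd(x, fs, order=10, n_freqs=256):
    rho, sigma = yule_walker(x, order=order)      # AR coefficients, noise std
    freqs = np.linspace(0, fs / 2, n_freqs)
    w = 2 * np.pi * freqs / fs
    # AR(p) spectrum (up to a scale factor):
    # sigma^2 / |1 - sum_k rho_k e^{-j w k}|^2
    denom = np.abs(1 - np.exp(-1j * np.outer(w, np.arange(1, order + 1))) @ rho) ** 2
    return freqs, sigma ** 2 / denom
```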


International Conference on Signal and Image Processing Applications | 2009

Identification of vocal fold pathology based on Mel Frequency Band Energy Coefficients and singular value decomposition

M. Hariharan; M. P. Paulraj; Sazali Yaacob

Many approaches have been developed to detect vocal fold pathology. Among them, analysis of speech has proved to be an excellent tool for vocal fold pathology detection. This paper presents a feature extraction method based on Mel Frequency Band Energy Coefficients (MFBECs) combined with singular value decomposition (SVD) for classifying a voice as pathological or normal. SVD is used to extract the most relevant information from the original MFBEC feature dataset. For the analysis, speech samples of pathological and healthy subjects from the Massachusetts Eye and Ear Infirmary (MEEI) database are used. Simple k-nearest neighbour (k-NN) and Linear Discriminant Analysis (LDA) classifiers are used to test the effectiveness of the MFBEC-SVD feature vector. The experimental results show that the proposed features give very promising classification accuracy and can be effectively used to detect pathological voices clinically.
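
A hedged sketch of MFBEC-style features (log Mel band energies), SVD-based reduction and k-NN classification follows; it assumes librosa and scikit-learn, and the file paths, Mel band count and component count are illustrative.

```python
# Log Mel band energies as MFBEC-style features, reduced with a truncated SVD
# and classified with k-NN.
import numpy as np
import librosa
from sklearn.decomposition import TruncatedSVD
from sklearn.neighbors import KNeighborsClassifier
from sklearn.pipeline import make_pipeline

def mel_band_energies(path, n_mels=24):
    y, sr = librosa.load(path, sr=None)
    mel = librosa.feature.melspectrogram(y=y, sr=sr, n_mels=n_mels)
    return np.log(mel + 1e-10).mean(axis=1)      # one averaged vector per utterance

# X: rows of mel_band_energies(...) per voice sample; y: normal/pathological.
# clf = make_pipeline(TruncatedSVD(n_components=10),
#                     KNeighborsClassifier(n_neighbors=3))
# clf.fit(X_train, y_train); accuracy = clf.score(X_test, y_test)
```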


International Conference on Electronic Design | 2008

Recognition of motor imagery of hand movements for a BMI using PCA features

C.R. Hema; M. P. Paulraj; Sazali Yaacob; Abdul Hamid Adom; R. Nagarajan

Motor imagery is the mental simulation of a motor act that includes preparation for movement and mental operations on motor representations, implicitly or explicitly. The ability of individuals to control their EEG through imagined mental tasks enables them to control devices through a brain machine interface (BMI). In other words, a BMI can be used to rehabilitate people suffering from neuromuscular disorders by serving as a means of communication or control. This paper presents a novel approach to the design of a four-state BMI using two electrodes. The BMI is designed using neural network classifiers, and its performance is evaluated using two network architectures. The proposed algorithm achieves an average classification efficiency of 93.5%.
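
Although the paper's exact network architectures are not reproduced here, the PCA-plus-feed-forward-classifier idea can be sketched with scikit-learn as below; the component count and hidden-layer size are assumptions.

```python
# PCA features from two-electrode EEG segments fed to a small feed-forward
# classifier for the four imagery states.
from sklearn.decomposition import PCA
from sklearn.neural_network import MLPClassifier
from sklearn.pipeline import make_pipeline

# X: (n_trials, n_samples_per_trial) EEG segments; y: one of four imagery states.
clf = make_pipeline(PCA(n_components=20),
                    MLPClassifier(hidden_layer_sizes=(30,), max_iter=1000))
# clf.fit(X_train, y_train); print(clf.score(X_test, y_test))
```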


International Journal of Biomedical Engineering and Technology | 2011

Detection of vocal fold paralysis and oedema using time-domain features and Probabilistic Neural Network

M. Hariharan; M. P. Paulraj; Sazali Yaacob

This paper proposes a feature extraction method based on time-domain energy variation for the detection of vocal fold pathology. In this work, two vocal fold problems (vocal fold paralysis and oedema) are considered, and in each case a two-class pattern recognition problem is investigated. Normal and pathological speech samples from the Massachusetts Eye and Ear Infirmary database are used. A Probabilistic Neural Network (PNN) is employed for classification. The experimental results show that the proposed features give a very promising classification accuracy of 90% and can be used to detect vocal fold paralysis and oedema clinically.
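
A minimal sketch of short-time energy-contour features and a basic Probabilistic Neural Network (a Gaussian Parzen-window classifier) is given below; frame sizes, the smoothing width sigma and the feature length are assumptions, not the paper's values.

```python
# Short-time energy contour as a time-domain feature vector, classified with a
# simple Probabilistic Neural Network (one Gaussian kernel per training pattern).
import numpy as np

def energy_contour(x, frame_len=256, hop=128, n_feats=20):
    frames = [x[i:i + frame_len] for i in range(0, len(x) - frame_len, hop)]
    energy = np.array([np.sum(f ** 2) for f in frames])
    energy = energy / (energy.max() + 1e-12)          # normalise the contour
    idx = np.linspace(0, len(energy) - 1, n_feats).astype(int)
    return energy[idx]                                 # fixed-length summary

class SimplePNN:
    def __init__(self, sigma=0.1):
        self.sigma = sigma
    def fit(self, X, y):
        self.X, self.y = np.asarray(X), np.asarray(y)
        self.classes = np.unique(y)
        return self
    def predict(self, X):
        out = []
        for x in np.asarray(X):
            d2 = np.sum((self.X - x) ** 2, axis=1)
            k = np.exp(-d2 / (2 * self.sigma ** 2))    # Gaussian kernel per pattern
            scores = [k[self.y == c].mean() for c in self.classes]
            out.append(self.classes[np.argmax(scores)])
        return np.array(out)
```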


International Colloquium on Signal Processing and Its Applications | 2009

Classification of vowel sounds using MFCC and feed forward Neural Network

M. P. Paulraj; Sazali Yaacob; Ahamad Nazri; Sathees Kumar

The English language as spoken by Malaysians varies from place to place and differs from one ethnic community and its sub-groups to another. Hence, it is necessary to develop a dedicated speech-to-text translation system for understanding English pronunciation as spoken by Malaysians. Speech translation involves both speech recognition and equivalent phoneme-to-word translation, where speech recognition is the process of identifying phonemes from a speech segment. In this paper, the initial step of speech recognition, identifying phoneme features, is proposed. To classify the phoneme features, Mel-frequency cepstral coefficients (MFCC) are computed. A simple feed-forward Neural Network (FFNN) trained by the backpropagation procedure is proposed for identifying the phoneme features. The extracted MFCC coefficients are used as inputs to the neural network classifier, which associates each segment with one of 11 classes.
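
To illustrate the MFCC-plus-FFNN pipeline, a short sketch assuming librosa and scikit-learn follows; the file list, coefficient count and hidden-layer size are hypothetical.

```python
# MFCC vectors per vowel utterance, classified by a feed-forward network
# trained with backpropagation; the 11 vowel classes are the labels in y.
import numpy as np
import librosa
from sklearn.neural_network import MLPClassifier

def mfcc_vector(path, n_mfcc=13):
    y, sr = librosa.load(path, sr=None)
    mfcc = librosa.feature.mfcc(y=y, sr=sr, n_mfcc=n_mfcc)
    return mfcc.mean(axis=1)                       # average over frames

# X = np.vstack([mfcc_vector(p) for p in wav_paths])   # hypothetical file list
# ffnn = MLPClassifier(hidden_layer_sizes=(25,), max_iter=2000)
# ffnn.fit(X_train, y_train); print(ffnn.score(X_test, y_test))
```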

Collaboration


Dive into M. P. Paulraj's collaborations.

Top Co-Authors

Sazali Yaacob (Universiti Malaysia Perlis)
Abdul Hamid Adom (Universiti Malaysia Perlis)
M. A. Yusnita (Universiti Teknologi MARA)
C. R. Hema (Universiti Malaysia Perlis)