Junaidi Abdullah
Multimedia University
Publications
Featured research published by Junaidi Abdullah.
International Conference on Computer Applications and Industrial Electronics | 2010
Ahmad Yahya Dawod; Junaidi Abdullah; Md. Jahangir Alam
Hand segmentation is often the first step in applications such as gesture recognition, hand tracking and recognition. We propose a new technique for hand segmentation of color images using an adaptive skin color model. Our method captures pixel values of a person's hand and converts them into the YCbCr color space. The technique then maps these values onto the CbCr plane to construct a clustered region of skin color for that person. Edge detection is applied to the cluster to create adaptive skin color boundaries for classification. Experimental results demonstrate successful detection over a variety of hand variations in color, position, scale, rotation and pose.
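The general idea of sampling a person's skin pixels, projecting them onto the CbCr plane, and classifying new pixels against the resulting region can be sketched as follows. This is a minimal illustration, not the authors' implementation: the BT.601 conversion formula is standard, but the bounding-box region is a simple stand-in for the edge-detected cluster boundary described in the paper, and all sample pixel values are invented for the example.

```python
def rgb_to_ycbcr(r, g, b):
    """Convert 8-bit RGB to YCbCr using the ITU-R BT.601 full-range formula."""
    y = 0.299 * r + 0.587 * g + 0.114 * b
    cb = 128 - 0.168736 * r - 0.331264 * g + 0.5 * b
    cr = 128 + 0.5 * r - 0.418688 * g - 0.081312 * b
    return y, cb, cr

def build_cbcr_model(skin_pixels):
    """Project sampled skin pixels onto the CbCr plane and return the
    bounding region of the cluster (a simplified stand-in for the
    edge-detected boundary used in the paper)."""
    cbcr = [rgb_to_ycbcr(*p)[1:] for p in skin_pixels]
    cbs, crs = zip(*cbcr)
    return (min(cbs), max(cbs), min(crs), max(crs))

def is_skin(pixel, model):
    """Classify an RGB pixel by testing its chroma against the model."""
    cb_lo, cb_hi, cr_lo, cr_hi = model
    _, cb, cr = rgb_to_ycbcr(*pixel)
    return cb_lo <= cb <= cb_hi and cr_lo <= cr <= cr_hi
```

Because luminance Y is discarded, the model tolerates brightness changes across the hand, which is the main appeal of CbCr-plane skin modeling.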
International Conference on Advanced Computer Theory and Engineering | 2010
Ahmad Yahya Dawod; Junaidi Abdullah; Md. Jahangir Alam
Accurate hand segmentation is a challenging task in computer vision applications. We propose a new method to segment the hand based on a free-form skin color model. The pixel values of a person's hand are captured and represented in the YCbCr color model. The CbCr components are mapped onto the CbCr plane to produce a clustered region of skin color. Then, instead of using an ellipse to model the skin color, edge detection is performed on the clustered region to construct a free-form skin color model. The hand segmentation results, tested on various complex backgrounds, are promising.
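Once the free-form boundary is traced, classifying a chroma value reduces to a point-in-region test. A minimal sketch, assuming the boundary has been approximated by a closed polygon of (Cb, Cr) vertices (the polygon here is invented for illustration; the paper derives the boundary via edge detection):

```python
def point_in_polygon(cb, cr, polygon):
    """Return True if (cb, cr) lies inside the closed polygon given as a
    list of (cb, cr) vertices, using the even-odd (ray casting) rule."""
    inside = False
    n = len(polygon)
    for i in range(n):
        x1, y1 = polygon[i]
        x2, y2 = polygon[(i + 1) % n]
        # Does this edge cross the horizontal ray extending right from the point?
        if (y1 > cr) != (y2 > cr):
            x_cross = x1 + (cr - y1) * (x2 - x1) / (y2 - y1)
            if x_cross > cb:
                inside = not inside
    return inside
```

Unlike an ellipse fit, the polygon can follow any concavities in the skin cluster, which is the advantage of the free-form model over parametric ones.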
International Journal of Parallel Programming | 2014
Sreeramula Sankaraiah; Lam Hai Shuan; Chikkanan Eswaran; Junaidi Abdullah
High definition video applications often require heavy computation, high bandwidth and large memory, which makes their real-time implementation difficult. Multi-core architectures with parallelism provide new solutions for implementing complex multimedia applications in real time. It is well known that the speed of the H.264 encoder can be increased on a multi-core architecture using parallelism. Most of the parallelization methods proposed earlier for this purpose suffer from limited scalability and data dependency. In this paper, we present results obtained using data-level parallelism at the Group-Of-Pictures (GOP) level for the video encoder. In the proposed technique, each GOP is encoded independently; the scheme is implemented on JM 18.0 using advanced data structures and OpenMP programming techniques. The performance of the parallelized video encoder is evaluated for various resolutions based on parameters such as encoding speed, bit rate, memory requirements and PSNR. The results show that with GOP-level parallelism, very high speed-up values can be achieved without much degradation in video quality.
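The structure of GOP-level data parallelism can be sketched in miniature: the frame sequence is partitioned into groups of pictures and each group is handed to a separate worker, with no cross-GOP dependency. This sketch is only an illustration of the scheme, not the JM/OpenMP implementation; `encode_gop` is a placeholder for the real encoder, and thread-based workers stand in for OpenMP threads.

```python
from concurrent.futures import ThreadPoolExecutor

def split_into_gops(frames, gop_size):
    """Partition the frame list into consecutive groups of pictures."""
    return [frames[i:i + gop_size] for i in range(0, len(frames), gop_size)]

def encode_gop(gop):
    # Placeholder for real encoding work. Each GOP depends on no other
    # GOP, which is what makes this scheme scale with core count.
    return sum(gop)  # stand-in for "bits produced for this GOP"

def encode_parallel(frames, gop_size, workers=4):
    """Encode all GOPs concurrently and return per-GOP results in order."""
    gops = split_into_gops(frames, gop_size)
    with ThreadPoolExecutor(max_workers=workers) as pool:
        return list(pool.map(encode_gop, gops))
```

Because only whole GOPs are distributed, no motion-compensation data crosses a worker boundary; the trade-off, noted in the paper, is the memory needed to hold several GOPs in flight at once.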
International Conference on Signal and Image Processing Applications | 2009
Hu Ng; Wooi-Haw Tan; Hau-Lee Tong; Junaidi Abdullah; Ryoichi Komiya
In this paper, a new approach is proposed for extracting human gait features from a walking human based on the silhouette image. The approach consists of five stages: clearing the background noise of the image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying the morphological skeleton operator to obtain the body skeleton; and applying the Hough transform to obtain the joint angles from the body segment skeletons. The joint angles, together with the height and width of the human silhouette, are collected and used for gait analysis. From the experiments conducted, it can be observed that the proposed system is feasible, as satisfactory results have been achieved.
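The width/height measurement stage can be sketched as a scan over a binary silhouette mask. This is a hypothetical helper for illustration only; the mask values and function name are assumptions, not the paper's code.

```python
def silhouette_bounds(mask):
    """Return (width, height) of the bounding box of all foreground (1)
    pixels in a binary silhouette mask given as a list of rows."""
    rows = [r for r, row in enumerate(mask) if any(row)]
    cols = [c for row in mask for c, v in enumerate(row) if v]
    if not rows:
        return 0, 0
    return max(cols) - min(cols) + 1, max(rows) - min(rows) + 1
```

These two scalars, tracked over a gait cycle, already oscillate with the stride, which is why the paper feeds them into the gait feature vector alongside the joint angles.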
Pattern Recognition Letters | 2013
Chiung Ching Ho; Hu Ng; Wooi-Haw Tan; Kok-Why Ng; Hau-Lee Tong; Timothy Tzen Vun Yap; Pei-Fen Chong; Chikkannan Eswaran; Junaidi Abdullah
This paper describes the baseline corpus of a new multimodal biometric database, the MMU GASPFA (Gait-Speech-Face) database. The corpus in GASPFA is acquired using commercial off-the-shelf (COTS) equipment, including digital video cameras, a digital voice recorder, a digital camera, a Kinect camera and accelerometer-equipped smartphones. The corpus consists of frontal face images from the digital camera, speech utterances recorded using the digital voice recorder, gait videos with their associated data recorded using both the digital video cameras and the Kinect camera simultaneously, as well as accelerometer readings from the smartphones. A total of 82 participants had their biometric data recorded. MMU GASPFA is able to support both multimodal biometric authentication and gait action recognition. This paper describes the acquisition setup and protocols used in MMU GASPFA, as well as the content of the corpus. Baseline results from a subset of the participants are presented for validation purposes.
Information Sciences, Signal Processing and Their Applications | 2010
Hu Ng; Hau-Lee Tong; Wooi-Haw Tan; Junaidi Abdullah
In this paper, we propose a new approach for the classification of human gait features under different apparel and various walking speeds. The approach consists of two parts: extraction of human gait features from the enhanced human silhouette, and classification of the extracted features using fuzzy k-nearest neighbours (KNN). The joint angles, together with the height, width and crotch height of the human silhouette, are collected and used for gait analysis. The training and testing sets are disjoint, with no overlap between them. Both sets involve nine different types of apparel and three walking speeds. From the experiments conducted, it can be observed that the proposed system is feasible, as satisfactory results have been achieved.
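Fuzzy KNN differs from crisp KNN in that neighbours contribute class memberships weighted by their distance to the query. A minimal sketch of the idea, with crisp training labels and the usual inverse-distance weighting (the feature vectors, labels, and the small epsilon guard are illustrative assumptions, not the paper's data or exact formulation):

```python
import math

def fuzzy_knn(train, query, k=3, m=2):
    """train: list of (feature_vector, label) pairs.
    Returns the label with the highest fuzzy membership for the query,
    where each of the k nearest neighbours votes with weight
    1 / d^(2/(m-1)), so nearer neighbours count more."""
    nearest = sorted((math.dist(x, query), label) for x, label in train)[:k]
    votes = {}
    for d, label in nearest:
        w = 1.0 / (d ** (2 / (m - 1)) + 1e-9)  # epsilon avoids division by zero
        votes[label] = votes.get(label, 0.0) + w
    return max(votes, key=votes.get)
```

The fuzzifier m controls how sharply the weighting decays with distance; m = 2 gives the familiar inverse-square weighting.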
International Visual Informatics Conference | 2009
Hu Ng; Wooi-Haw Tan; Hau-Lee Tong; Junaidi Abdullah; Ryoichi Komiya
In this paper, a new approach is proposed for extracting human gait features from a walking human based on the silhouette images. The approach consists of six stages: clearing the background noise of the image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying the morphological skeleton operator to obtain the body skeleton; applying the Hough transform to obtain the joint angles from the body segment skeletons; and measuring the distance between the bottoms of the right and left legs from the body segment skeletons. The joint angles and step size, together with the height and width of the human silhouette, are collected and used for gait analysis. The experimental results demonstrate that the proposed system is feasible and achieves satisfactory results.
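The Hough-transform stage recovers a limb's orientation by voting in (theta, rho) space over the skeleton points of a body segment. A compact sketch of that voting step (the discretisation of theta and the rounding of rho are assumptions made for the example, not the paper's parameters):

```python
import math

def dominant_angle(points, theta_steps=180):
    """Vote in (theta, rho) space and return the theta (in degrees) of the
    strongest straight line through the given skeleton points, using the
    normal form rho = x*cos(theta) + y*sin(theta)."""
    acc = {}
    for theta_idx in range(theta_steps):
        theta = math.pi * theta_idx / theta_steps
        for x, y in points:
            rho = round(x * math.cos(theta) + y * math.sin(theta))
            acc[(theta_idx, rho)] = acc.get((theta_idx, rho), 0) + 1
    (best_theta, _), _ = max(acc.items(), key=lambda kv: kv[1])
    return 180 * best_theta / theta_steps
```

The returned theta is the angle of the line's normal; the joint angle then follows from the orientations of adjacent segment lines. Note that with a coarse rho quantisation, several neighbouring theta bins can tie, so results are accurate only to a few degrees.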
International Conference on Digital Signal Processing | 2014
Chikkannan Eswaran; Marwan D. Saleh; Junaidi Abdullah
The detection and analysis of spot lesions associated with retinal diseases, such as exudates, microaneurysms and hemorrhages, play an important role in the screening of retinal diseases. This paper presents an algorithm for automated segmentation of exudates from color fundus images. The proposed algorithm comprises two major stages, namely pre-processing and segmentation. A novel pre-processing method is employed for background removal through contrast enhancement and noise removal. In the second stage, the pre-processed image is sliced horizontally and vertically into a number of slices, and the corresponding projection values are obtained in order to select an appropriate threshold value for each image slice. Finally, the optic disc is removed to facilitate the correct identification of exudates and to decrease false positives. The DIARETDB1 database is used to measure the accuracy of the proposed method. Based on experiments conducted on a per-pixel basis, it is found that the proposed algorithm achieves better results than known algorithms, with average values of 71.2%, 72.77%, 99.98%, 97.72%, 99.74% and 83.28% in terms of overlap, sensitivity, specificity, PPV, accuracy and kappa coefficient, respectively.
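The slice-and-threshold stage can be sketched as cutting the image into horizontal strips and binarising each strip with its own threshold. In this illustration the per-strip threshold is simply the strip mean; the paper derives it from projection values, so this rule is an assumption made to keep the sketch self-contained.

```python
def threshold_slices(image, n_slices):
    """image: list of rows of grey levels. Returns a binary image in which
    each horizontal slice is thresholded independently at its own mean,
    so bright exudate pixels stand out against the local background."""
    h = len(image)
    out = []
    step = max(1, h // n_slices)
    for start in range(0, h, step):
        strip = image[start:start + step]
        pixels = [v for row in strip for v in row]
        t = sum(pixels) / len(pixels)
        out.extend([[1 if v > t else 0 for v in row] for row in strip])
    return out
```

Thresholding each slice separately compensates for the uneven illumination typical of fundus images, which a single global threshold cannot handle.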
Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization | 2018
N. D. Salih; Marwan D. Saleh; Chikkannan Eswaran; Junaidi Abdullah
The analysis of retinal features, such as blood vessels, the optic disc and the fovea, plays an important role in the detection of several diseases. This paper presents a method for automated optic disc segmentation from colour fundus images. The proposed method comprises three major stages, namely optic disc localisation, preprocessing and segmentation. Localisation is performed using fast Fourier transform-based template matching to obtain a seed point located on the optic disc, which is then used as input to the region growing technique for segmentation. Three sets of fundus images, namely DRIVE, MESSIDOR and a LOCAL database, are used to measure the accuracy of the proposed method. From the experimental results, it is found that the proposed localisation method achieves success rates of 100, 98.91 and 97.56% for these databases, respectively, which are comparable to other known methods. The proposed segmentation method is compared with several known segmentation methods using the DRIVE database. Based on the results, it is found that the proposed method achieves values of 87.16, 91.27, 99.81, 90.56, 98.68 and 89.71% in terms of overlap, sensitivity, specificity, positive predictive value, accuracy and kappa coefficient, respectively, which are higher than the results achieved by other known methods. Furthermore, the average processing time required for optic disc localisation is 0.22 s, while the average processing time required for all three stages is 1.03 s.
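The segmentation stage grows a region outward from the localised seed point, absorbing neighbours whose intensity stays close to the seed's. A minimal sketch of region growing under that assumption (the grid, seed, tolerance, and 4-connectivity are illustrative choices, not the paper's settings):

```python
from collections import deque

def region_grow(image, seed, tol):
    """image: list of rows of grey levels; seed: (row, col).
    Returns the set of (row, col) pixels whose intensity lies within
    tol of the seed pixel and that are 4-connected to the seed."""
    h, w = len(image), len(image[0])
    sr, sc = seed
    base = image[sr][sc]
    region, queue = {seed}, deque([seed])
    while queue:
        r, c = queue.popleft()
        for nr, nc in ((r - 1, c), (r + 1, c), (r, c - 1), (r, c + 1)):
            if (0 <= nr < h and 0 <= nc < w and (nr, nc) not in region
                    and abs(image[nr][nc] - base) <= tol):
                region.add((nr, nc))
                queue.append((nr, nc))
    return region
```

Because the optic disc is the brightest compact structure in a fundus image, a seed placed on it by template matching lets the grown region trace the disc boundary without any global threshold.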
The Journal of Object Technology | 2010
Fathi Taibi; Md. Jahangir Alam; Junaidi Abdullah
Requirements specification is a collaborative activity in which several developers specify the requirements elicited from several stakeholders. Operation-based merging combines specifications using information about their state as well as their evolution or change, leading to more precise, accurate and efficient merging. Differencing specifications is a tedious and complicated, yet crucial, process needed for operation-based merging of specifications resulting from collaboration. An approach for differencing object-oriented formal specifications is proposed in this paper. The difference is modeled as a set of primitive operations and is produced based on the matching results of the specifications' elements. These matchings are calculated using the elements' syntactic and structural similarities. The proposed differencing approach is empirically validated.
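The matching step, pairing elements of two specification versions by similarity before deriving primitive operations, can be sketched with a string-similarity measure. This sketch uses difflib's ratio as a stand-in for the paper's combined syntactic-and-structural metric; the element names and the 0.6 threshold are invented for the example.

```python
from difflib import SequenceMatcher

def match_elements(old, new, threshold=0.6):
    """Greedily pair element names from two specification versions whose
    string similarity meets the threshold. Unmatched names in `old` become
    candidate deletions; unmatched names in `new` candidate additions."""
    pairs, unused = [], set(new)
    for a in old:
        best = max(unused,
                   key=lambda b: SequenceMatcher(None, a, b).ratio(),
                   default=None)
        if best is not None and SequenceMatcher(None, a, best).ratio() >= threshold:
            pairs.append((a, best))
            unused.discard(best)
    return pairs, unused
```

From such a matching, the difference follows directly: each pair with changed content yields update operations, while unmatched elements yield add or delete operations, the primitive operations the merging works with.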