Hu Ng
Multimedia University
Publication
Featured research published by Hu Ng.
International Conference on Signal and Image Processing Applications | 2009
Hu Ng; Wooi-Haw Tan; Hau-Lee Tong; Junaidi Abdullah; Ryoichi Komiya
In this paper, a new approach is proposed for extracting human gait features from a walking human based on the silhouette image. The approach consists of five stages: clearing the background noise of the image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying morphological skeleton to obtain the body skeleton; and applying the Hough transform to obtain the joint angles from the body segment skeletons. The joint angles, together with the height and width of the human silhouette, are collected and used for gait analysis. The experiment conducted shows that the proposed system is feasible, as satisfactory results have been achieved.
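A minimal sketch of the five-stage pipeline described in the abstract is shown below, assuming a binary silhouette image as input. The segment boundaries are generic anatomical proportions chosen for illustration, not the exact values used by the authors.

```python
import cv2
import numpy as np
from skimage.morphology import skeletonize

def extract_gait_features(silhouette):
    # Stage 1: remove background noise with morphological opening
    kernel = np.ones((3, 3), np.uint8)
    clean = cv2.morphologyEx(silhouette, cv2.MORPH_OPEN, kernel)

    # Stage 2: measure the width and height of the silhouette
    ys, xs = np.nonzero(clean)
    height = ys.max() - ys.min() + 1
    width = xs.max() - xs.min() + 1

    # Stage 3: split into six body segments using anatomical proportions
    # (fractions of body height, top to bottom; illustrative values only)
    cuts = [0.0, 0.13, 0.35, 0.53, 0.72, 0.88, 1.0]
    top = ys.min()
    segments = [clean[top + int(a * height): top + int(b * height), :]
                for a, b in zip(cuts[:-1], cuts[1:])]

    # Stages 4-5: skeletonize each segment and estimate a joint angle
    # from the dominant Hough line of its skeleton
    angles = []
    for seg in segments:
        skel = (skeletonize(seg > 0) * 255).astype(np.uint8)
        lines = cv2.HoughLinesP(skel, 1, np.pi / 180, threshold=10,
                                minLineLength=5, maxLineGap=3)
        if lines is not None:
            x1, y1, x2, y2 = lines[0][0]
            angles.append(np.degrees(np.arctan2(y2 - y1, x2 - x1)))
        else:
            angles.append(None)

    return {"height": height, "width": width, "joint_angles": angles}
```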
Pattern Recognition Letters | 2013
Chiung Ching Ho; Hu Ng; Wooi-Haw Tan; Kok-Why Ng; Hau-Lee Tong; Timothy Tzen Vun Yap; Pei-Fen Chong; Chikkannan Eswaran; Junaidi Abdullah
This paper describes the baseline corpus of a new multimodal biometric database, the MMU GASPFA (Gait-Speech-Face) database. The corpus in GASPFA is acquired using commercial off-the-shelf (COTS) equipment, including digital video cameras, a digital voice recorder, a digital camera, a Kinect camera and accelerometer-equipped smartphones. The corpus consists of frontal face images from the digital camera, speech utterances recorded using the digital voice recorder, gait videos with their associated data recorded using both the digital video cameras and the Kinect camera simultaneously, as well as accelerometer readings from the smartphones. A total of 82 participants had their biometric data recorded. MMU GASPFA is able to support both multimodal biometric authentication and gait action recognition. This paper describes the acquisition setup and protocols used in MMU GASPFA, as well as the content of the corpus. Baseline results from a subset of the participants are presented for validation purposes.
Information Sciences, Signal Processing and Their Applications | 2010
Hu Ng; Hau-Lee Tong; Wooi-Haw Tan; Junaidi Abdullah
In this paper, we propose a new approach for the classification of human gait features with different apparel and various walking speeds. The approach consists of two parts: extraction of human gait features from the enhanced human silhouette and classification of the extracted features using fuzzy k-nearest neighbours (KNN). The joint angles, together with the height, width and crotch height of the human silhouette, are collected and used for gait analysis. The training and testing sets are separate, without any overlap. Both sets involve nine different types of apparel and three walking speeds. The experiment conducted shows that the proposed system is feasible, as satisfactory results have been achieved.
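A minimal fuzzy k-NN classifier sketch is given below, since the abstract names fuzzy KNN as the classifier. The membership weighting follows the common Keller-style formulation with crisp neighbour labels; the parameter values (k, m) are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x_query, k=5, m=2):
    classes = np.unique(y_train)
    # Euclidean distances from the query to every training sample
    d = np.linalg.norm(X_train - x_query, axis=1)
    idx = np.argsort(d)[:k]
    # Inverse-distance weights; guard against division by zero for exact matches
    w = 1.0 / np.maximum(d[idx], 1e-12) ** (2.0 / (m - 1))
    # Class membership = weighted vote of the k nearest neighbours
    memberships = np.array([w[y_train[idx] == c].sum() for c in classes])
    memberships /= memberships.sum()
    return classes[np.argmax(memberships)], memberships
```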
International Visual Informatics Conference | 2009
Hu Ng; Wooi-Haw Tan; Hau-Lee Tong; Junaidi Abdullah; Ryoichi Komiya
In this paper, a new approach is proposed for extracting human gait features from a walking human based on the silhouette images. The approach consists of six stages: clearing the background noise of the image by morphological opening; measuring the width and height of the human silhouette; dividing the enhanced human silhouette into six body segments based on anatomical knowledge; applying morphological skeleton to obtain the body skeleton; applying the Hough transform to obtain the joint angles from the body segment skeletons; and measuring the distance between the bottoms of the right and left legs from the body segment skeletons. The joint angles and step-size, together with the height and width of the human silhouette, are collected and used for gait analysis. The experimental results have demonstrated that the proposed system is feasible and achieves satisfactory results.
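The sixth stage added in this work is the step-size measurement. A minimal sketch is shown below, assuming the lower part of the silhouette has already been split into left- and right-leg skeleton regions; that split and the distance definition are simplifying assumptions for illustration.

```python
import numpy as np

def step_size(left_leg_skel, right_leg_skel):
    # Lowest (maximum-y) skeleton pixel of each leg
    ly, lx = np.nonzero(left_leg_skel)
    ry, rx = np.nonzero(right_leg_skel)
    left_foot = np.array([lx[np.argmax(ly)], ly.max()])
    right_foot = np.array([rx[np.argmax(ry)], ry.max()])
    # Step-size as the distance between the two foot points
    return np.linalg.norm(left_foot - right_foot)
```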
The Scientific World Journal | 2014
Hu Ng; Wooi-Haw Tan; Junaidi Abdullah; Hau-Lee Tong
This paper describes the acquisition setup and development of a new gait database, MMUGait. This database consists of 82 subjects walking under normal conditions and 19 subjects walking with 11 covariate factors, captured under two views. This paper also proposes a multiview model-based gait recognition system with a joint detection approach that performs well under different walking trajectories and covariate factors, including self-occluded or externally occluded silhouettes. In the proposed system, the process begins by enhancing the human silhouette to remove artifacts. Next, the width and height of the body are obtained. Subsequently, the joint angular trajectories are determined once the body joints are automatically detected. Lastly, the crotch height and step-size of the walking subject are determined. The extracted features are smoothed by a Gaussian filter to eliminate the effect of outliers, then normalized with linear scaling, followed by feature selection prior to the classification process. The classification experiments carried out on the MMUGait database were benchmarked against the SOTON Small DB from the University of Southampton. Results showed correct classification rates above 90% for all the databases. The proposed approach is found to outperform other approaches on the SOTON Small DB in most cases.
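A short sketch of the post-processing steps named in the abstract (Gaussian smoothing of each feature trajectory, followed by linear scaling) is given below. The sigma value and the per-frame feature matrix layout are assumptions for illustration.

```python
import numpy as np
from scipy.ndimage import gaussian_filter1d

def smooth_and_scale(features, sigma=2.0):
    # features: (n_frames, n_features) array of per-frame gait measurements
    smoothed = gaussian_filter1d(features, sigma=sigma, axis=0)
    # Linear scaling of each feature to the [0, 1] range
    lo = smoothed.min(axis=0)
    hi = smoothed.max(axis=0)
    return (smoothed - lo) / np.maximum(hi - lo, 1e-12)
```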
International Visual Informatics Conference | 2011
Tze-Wei Yeoh; Wooi-Haw Tan; Hu Ng; Hau-Lee Tong; Chee-Pun Ooi
Gait recognition is an unobtrusive biometric, which allows identification of people from a distance by the manner in which they walk. In this paper, a new approach is proposed for extracting human gait features based on body joint identification from human silhouette images. In the proposed approach, the human silhouette image is first enhanced to remove artifacts before it is divided into eight segments according to a priori knowledge of human body proportions. Next, the body joints which act as the pivot points in human gait are automatically identified and the joint trajectories are computed. To assess the performance of the extracted gait features, the fuzzy k-nearest neighbour classification technique is used to identify subjects from the SOTON covariate database. The experimental results have shown that the gait features extracted using the proposed approach are effective, as the recognition rate has been improved.
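A simplified sketch of the segmentation step is shown below: the enhanced silhouette is split into eight horizontal bands using a priori body proportions. The proportion values are standard anatomical approximations, not the exact ones used in the paper, and the joint identification within each band is omitted.

```python
import numpy as np

# Fractions of body height (top to bottom) delimiting the eight bands;
# illustrative values only.
PROPORTIONS = [0.0, 0.13, 0.25, 0.38, 0.50, 0.62, 0.75, 0.88, 1.0]

def split_into_segments(silhouette):
    ys, _ = np.nonzero(silhouette)
    top, height = ys.min(), ys.max() - ys.min() + 1
    bounds = [top + int(p * height) for p in PROPORTIONS]
    return [silhouette[a:b, :] for a, b in zip(bounds[:-1], bounds[1:])]
```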
International Conference on Computer Applications and Industrial Electronics | 2010
Hu Ng; Hau-Lee Tong; Wooi-Haw Tan; Tzen-Vun Yap; Junaidi Abdullah
Gait as a biometric has received great attention nowadays, as it can offer human identification at a distance without any contact with the feature-capturing device. This is motivated by the increasing number of synchronised closed-circuit television (CCTV) cameras which have been installed in many major towns in order to monitor and prevent crime. This paper proposes a new approach for gait classification with twelve different covariate factors. The proposed approach consists of two parts: extraction of human gait features from the enhanced human silhouette and classification of the extracted features using fuzzy k-nearest neighbours (KNN). The joint trajectories, together with the height, width and crotch height of the human silhouette, are collected and used for gait analysis. To improve the recognition rate, two of these features are smoothed before the classification process in order to alleviate the effect of outliers. Experimental results on a dataset involving nine walking subjects have demonstrated the effectiveness of the proposed approach.
Archive | 2019
Jia Juang Koh; Timothy Tzen Vun Yap; Hu Ng; Vik Tor Goh; Hau Lee Tong; Chiung Ching Ho; Thiam Yong Kuek
This research work explores the possibility of using deep learning to produce an autonomous system for detecting potholes in video to assist in road monitoring and maintenance. Video data of roads was collected using a GoPro camera mounted on a car. A Region-based Fully Convolutional Network (R-FCN) was employed to produce the model to detect potholes from images, and it was validated on the collected videos. The R-FCN model is able to achieve a Mean Average Precision (MAP) of 89% and a True Positive Rate (TPR) of 89% with no false positives.
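The sketch below shows one common way detections can be scored against ground-truth boxes to obtain metrics such as the true positive rate reported above. The IoU threshold of 0.5 is a widely used convention and an assumption here, not taken from the paper.

```python
def iou(a, b):
    # Boxes given as (x1, y1, x2, y2)
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)
    union = ((a[2] - a[0]) * (a[3] - a[1]) +
             (b[2] - b[0]) * (b[3] - b[1]) - inter)
    return inter / union if union > 0 else 0.0

def score_frame(detections, ground_truth, thresh=0.5):
    matched = set()
    tp = fp = 0
    for det in detections:
        hits = [g for g, gt in enumerate(ground_truth)
                if g not in matched and iou(det, gt) >= thresh]
        if hits:
            matched.add(hits[0])
            tp += 1
        else:
            fp += 1
    fn = len(ground_truth) - len(matched)
    return tp, fp, fn  # TPR = tp / (tp + fn), accumulated over all frames
```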
Archive | 2019
Kain Hoe Tai; Vik Tor Goh; Timothy Tzen Vun Yap; Hu Ng
This paper focuses on the design of a WiFi-based tracking and monitoring system that can detect people's movements in a residential neighbourhood. The proposed system uses WiFi access points as scanners that detect signals transmitted by the WiFi-enabled smartphones carried by most people. Our proposed system is able to track these people as they move through the neighbourhood. We implement our WiFi-based tracking system in a prototype and demonstrate that it is able to detect all WiFi devices in the vicinity of the scanners. We describe the implementation details of our system and discuss some of the results obtained.
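A minimal sketch of the scanner side of such a system is shown below: capturing 802.11 probe requests with Scapy and logging the transmitting MAC address and signal strength. The interface name and monitor-mode setup are assumptions, and this is not the authors' implementation, which runs on access-point hardware.

```python
from scapy.all import sniff
from scapy.layers.dot11 import Dot11ProbeReq

def handle(pkt):
    if pkt.haslayer(Dot11ProbeReq):
        mac = pkt.addr2                              # transmitter MAC address
        rssi = getattr(pkt, "dBm_AntSignal", None)   # signal strength, if present
        print(f"probe request from {mac}, RSSI={rssi}")

# Requires a wireless interface already in monitor mode (e.g. "wlan0mon")
sniff(iface="wlan0mon", prn=handle, store=False)
```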
Archive | 2019
Zi Hau Chin; Hu Ng; Timothy Tzen Vun Yap; Hau Lee Tong; Chiung Ching Ho; Vik Tor Goh
This study classifies human motion data captured by a wrist-worn accelerometer. The classification is based on the various daily activities of a normal person. The dataset is obtained from Human Motion Primitives Detection [1]. There is a total of 839 trials from 14 activities performed by 16 volunteers (11 males and 5 females) aged between 19 and 91 years. A wrist-worn tri-axial accelerometer was used to acquire the acceleration data along the X, Y and Z axes during each trial. For feature extraction, nine statistical parameters together with the energy spectral density and the correlation between the accelerometer readings are employed to extract 63 features from the raw acceleration data. Particle Swarm Optimization, Tabu Search and Ranker are applied to rank and select the most significant features for the subsequent classification process. Classification is implemented using Support Vector Machine, k-Nearest Neighbours and Random Forest. From the experimental results, the proposed model achieved the highest correct classification rate of 91.5% with the Support Vector Machine using a radial basis function kernel.
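A short sketch of the feature extraction and classification steps is given below: per-trial statistics from the tri-axial signal followed by an RBF-kernel SVM. The statistics shown are only a representative subset of the 63 features and, like the scikit-learn default hyperparameters, are assumptions for illustration.

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

def trial_features(acc):
    # acc: (n_samples, 3) array of X, Y, Z accelerometer readings
    feats = []
    for axis in range(3):
        s = acc[:, axis]
        feats += [s.mean(), s.std(), s.min(), s.max(),
                  np.median(s), np.sum(s ** 2) / len(s)]  # last entry: signal energy
    # Pairwise correlations between the three axes
    c = np.corrcoef(acc.T)
    feats += [c[0, 1], c[0, 2], c[1, 2]]
    return np.array(feats)

def train_classifier(trials, labels):
    X = np.vstack([trial_features(t) for t in trials])
    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
    clf.fit(X, labels)
    return clf
```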