Gim Guan Chua
Agency for Science, Technology and Research
Publication
Featured research published by Gim Guan Chua.
IEEE International Conference on Automatic Face & Gesture Recognition | 2008
Corey Manders; Farzam Farbiz; Jyh Herng Chong; Ka Yin Tang; Gim Guan Chua; Mei Hwan Loke; Miaolong Yuan
One of the long-term goals in human-computer interaction is to support the more intuitive and natural communication methods, such as speech and hand gestures, that a user would employ with other people. In this paper, we present a robust method of hand tracking using a probability map computed from a joint probability function that combines depth information and skin-tone information. The depth information is provided by a commercially available stereo camera, and the skin-tone information is obtained from calibrated and linearized color data. We demonstrate the effectiveness of this technique in terms of both the quality of the results and the speed at which the computations can be performed. Owing to the linearization of the color information and the use of stereo vision data, the technique is shown to be largely invariant to illumination changes.
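The abstract does not give the exact form of the joint probability function. As a minimal sketch of the idea, the following fuses a per-pixel depth likelihood with a crude hue/saturation skin-tone likelihood under an independence assumption; all thresholds and ranges here are hypothetical, not the paper's values.

```python
import numpy as np

def hand_probability_map(depth_mm, hue, sat,
                         depth_range=(400.0, 900.0),
                         hue_max=25, sat_min=40):
    """Per-pixel hand probability map (illustrative sketch).

    depth_mm, hue, sat: 2-D arrays of per-pixel depth (millimetres),
    hue, and saturation. The hand is assumed to lie within depth_range
    of the camera; the skin model is a simple box in hue/saturation.
    """
    d_lo, d_hi = depth_range
    centre, half_width = (d_lo + d_hi) / 2.0, (d_hi - d_lo) / 2.0
    # Depth likelihood: peaks at the centre of the expected range,
    # falls linearly to zero at its edges.
    p_depth = np.clip(1.0 - np.abs(depth_mm - centre) / half_width, 0.0, 1.0)

    # Skin-tone likelihood: 1 inside the hue/saturation box, 0 outside.
    p_skin = ((hue <= hue_max) & (sat >= sat_min)).astype(float)

    # Joint probability, treating the two cues as independent.
    return p_depth * p_skin
```

The hand estimate is then the peak (or centroid) of the map, e.g. `y, x = np.unravel_index(np.argmax(p_map), p_map.shape)`.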
International Conference on Image and Graphics | 2009
Shuhong Xu; Peng Song; Ching Ling Chin; Gim Guan Chua; Zhiyong Huang; Susanto Rahardja
This paper reports the design and implementation of an interactive and immersive environment (IIE) for tennis simulation. The presented layout, named Tennis Space, provides the necessary immersive experience without overly restricting the player. To address the instability of real-time tracking of fast-moving objects, a hybrid tracking solution is developed that integrates optical tracking with ultrasound-inertial tracking technologies. An L-shaped IIE has been implemented for tennis simulation and has received positive feedback from users.
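The abstract does not say how the optical and ultrasound-inertial streams are combined; a complementary filter is one common way to blend an accurate but dropout-prone optical fix with smooth but drifting inertial dead reckoning. The sketch below illustrates that general idea only, with hypothetical parameters, and is not the paper's algorithm.

```python
import numpy as np

class HybridTracker:
    """Toy 3-D position fuser (illustrative; not the paper's method)."""

    def __init__(self, blend=0.8):
        self.blend = blend        # weight given to the optical fix when present
        self.pos = np.zeros(3)
        self.vel = np.zeros(3)

    def update(self, accel, dt, optical_pos=None):
        # Inertial dead reckoning: smooth and fast, but drifts over time.
        self.vel += accel * dt
        self.pos += self.vel * dt
        # Complementary correction whenever the optical marker is visible.
        if optical_pos is not None:
            self.pos = self.blend * np.asarray(optical_pos) + (1 - self.blend) * self.pos
            self.vel *= (1 - self.blend)   # damp accumulated velocity drift
        return self.pos
```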
Database Systems for Advanced Applications | 2014
Narayanan Amudha; Gim Guan Chua; Eric Siew Khuan Foo; Shen Tat Goh; Shuqiao Guo; Paul Min Chim Lim; Mun-Thye Mak; Muhammad Cassim Munshi; See-Kiong Ng; Wee Siong Ng; Huayu Wu
We introduce the A*STAR Data Analytics and Exchange Platform (“A*DAX”), the backbone data platform for the programs and projects under the Urban Systems Initiative launched by the Agency for Science, Technology and Research in Singapore. A*DAX aims to provide a centralized system through which the public and private sectors can manage and share data, while also offering basic data analytics and visualization functions for authorized parties to consume data. A*DAX further serves as a channel for developers to build innovative applications on real data to improve urban services.
International Symposium on Computer and Information Sciences | 2011
Miaolong Yuan; Gim Guan Chua; Farzam Farbiz; Susanto Rahardja
Eye contact with a virtual character can provide a realistic illusion to a user in an immersive virtual reality (VR) environment, allowing more believable eye communication between a user and a computer-generated virtual character. In this paper, we present an effective eye contact mechanism that combines a vision-based head tracker with an eye animation method. A robust color classification method is developed to track the user’s face, extracting the 3D position of the tracked face from a stereo camera without the need for any handheld devices or special sensors attached to the user. Through the vision-based head tracker, the virtual character is aware of the user’s movements and can react in a believable and pleasing manner: moving its head and eyes to gaze at the user when the user is looking at the virtual character, turning away when the user is not looking, and remaining in an idle mode when the user is not around. The proposed eye contact method has been successfully applied in a virtual reality interactive game.
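The reactive behavior described above maps naturally onto a small state machine. The sketch below is one hypothetical reading of that logic, not the paper's implementation; the state names and inputs are assumptions.

```python
from enum import Enum

class GazeState(Enum):
    IDLE = 0       # no user detected: play idle animation
    ENGAGED = 1    # user is looking at the character: return the gaze
    AVERTED = 2    # user present but looking away: turn away

def update_gaze(user_present: bool, user_looking: bool) -> GazeState:
    """Choose the character's gaze behavior from head-tracker output."""
    if not user_present:
        return GazeState.IDLE
    return GazeState.ENGAGED if user_looking else GazeState.AVERTED
```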
AI Communications | 2017
Yu Lu; Zeng Zeng; Huayu Wu; Gim Guan Chua; Jingjing Zhang
Rapid advances in sensor data acquisition and vehicle telematics make it practical to collect data from taxis and thus to build a system that monitors and analyzes the citywide taxi service. In this paper, we present a novel and practical system for taxi service analytics and visualization. Using both real-time and historical taxi data, the system estimates region-based passenger wait times for taxis, with a recurrent neural network (RNN) and deep learning algorithms used to build a predictive model. The RNN-based predictive model achieves 73.3% overall accuracy, significantly higher than other classic models. The system also analyzes taxi pickup hotspots and trip distributions: experimental results show that around 97% of trips are accurately identified and more than 200 hotspots in the city are successfully detected. Moreover, a novel three-dimensional (3D) visualization, together with an informative user interface, is designed and implemented to ease information access and to help system users understand the characteristics of, and gain insight into, the taxi service.
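The paper's exact features, architecture, and wait-time buckets are not reproduced here; as a hedged sketch, a region's recent observations could feed an LSTM classifier over discretized wait-time classes, as below (all sizes are placeholder assumptions).

```python
import torch
import torch.nn as nn

class WaitTimeRNN(nn.Module):
    """Minimal LSTM classifier sketch for region-based wait-time
    prediction; feature count, hidden size, and class count are
    assumptions, not the paper's configuration."""

    def __init__(self, n_features=8, hidden=64, n_classes=4):
        super().__init__()
        self.lstm = nn.LSTM(n_features, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_classes)   # logits over wait-time buckets

    def forward(self, x):              # x: (batch, time_steps, n_features)
        _, (h, _) = self.lstm(x)       # final hidden state summarizes the sequence
        return self.head(h[-1])

# e.g. logits = WaitTimeRNN()(torch.randn(32, 12, 8))  # 12 recent time slots per region
```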
Machine Vision Applications | 2009
Corey Manders; Farzam Farbiz; Ishtiaq Rasool Khan; Miaolong Yuan; Ka Yin Tang; Mei Hwan Loke; Gim Guan Chua
This work outlines a system in which a stereo camera effectively tracks a user’s face and hands in three dimensions, along with a method for using this information to control objects in three dimensions. The system begins by detecting faces; if more than one face is found in the image, the algorithm uses depth information to isolate the face closest to the camera. The algorithm then gathers information about the user’s skin tone by examining the content of the detected face. For much of the processing, only the hue and saturation components are used, after applying an RGB to HSV transformation to the camera output. The skin-tone information, in tandem with depth, is then used to isolate the user’s hands and track them in three dimensions. To serve as an effective interface, the system interprets the positions of the two hands relative to the user’s face. For example, to move an object up, the user simply positions both hands above his or her face; similar commands let the user apply translations in three dimensions, as well as yaw and roll, as desired.
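The hands-relative-to-face convention lends itself to a simple rule-based mapping. The sketch below is a hypothetical illustration of such a mapping; the coordinate convention and thresholds are assumptions, not the paper's code.

```python
def gesture_command(face_y, left_y, right_y, margin=0.05):
    """Map tracked vertical positions (y grows upward, in metres)
    of the face and both hands to an object-control command."""
    if left_y > face_y + margin and right_y > face_y + margin:
        return "translate_up"            # both hands above the face
    if left_y < face_y - margin and right_y < face_y - margin:
        return "translate_down"          # both hands below the face
    if abs(left_y - right_y) > margin:   # height difference drives roll
        return "roll_cw" if left_y > right_y else "roll_ccw"
    return "idle"
```

Analogous rules on the depth and horizontal axes would give the remaining translations and yaw.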
Archive | 2009
Corey Manson Manders; Farzam Farbiz; Ka Yin Christina Tang; Gim Guan Chua
International Joint Conference on Artificial Intelligence | 2016
Yu Lu; Gim Guan Chua; Huayu Wu; Clement Shi Qi Ong
Mobile Data Management | 2014
Manoranjan Dash; Gim Guan Chua; Hai Long Nguyen; Ghim Eng Yap; Cao Hong; Xiaoli Li; Shonali Krishnaswamy; James Decraene; Amy Shi Nash
Virtual Reality Continuum and Its Applications in Industry | 2009
Mei Hwan Loke; Ka Yin Tang; Gim Guan Chua; Yiling Odelia Tan; Farzam Farbiz