Publications


Featured research published by Teck Wee Chua.


Computer Vision and Pattern Recognition | 2015

Shadow optimization from structured deep edge detection

Li Shen; Teck Wee Chua; Karianto Leman

Local structures of shadow boundaries, as well as complex interactions of image regions, remain largely unexploited by previous shadow detection approaches. In this paper, we present a novel learning-based framework for shadow region recovery from a single image. We exploit local structures of shadow edges by using a structured CNN learning framework. We show that using structured label information in classification can improve local consistency over pixel labels and avoid spurious labelling. We further propose and formulate shadow and bright measures to model complex interactions among image regions. The shadow and bright measures of each patch are computed from the shadow edges detected by the proposed CNN. Using the global interaction constraints on patches, we formulate a least-squares optimization problem for shadow recovery that can be solved efficiently. Our shadow recovery method achieves state-of-the-art results on major shadow benchmark databases collected under various conditions.
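
The optimization step is only summarized above; as a rough illustration of the patch-level least-squares idea, the sketch below recovers a per-patch shadow factor from hypothetical shadow/bright measures and a simple neighbor-smoothness constraint. The measures, adjacency structure, and weighting are assumptions for illustration, not the paper's actual formulation.

```python
# Hypothetical sketch: recover a per-patch shadow factor x by least squares.
# The 'shadow'/'bright' measures, patch adjacency, and weights are made up
# for illustration; the paper's actual formulation is not reproduced here.
import numpy as np

def recover_shadow_factors(shadow_measure, bright_measure, edges, lam=1.0):
    """shadow_measure, bright_measure: (P,) arrays in [0, 1] per patch.
    edges: list of (i, j) index pairs for neighboring patches.
    Returns a per-patch factor x in [0, 1], 1 = fully lit, 0 = deep shadow."""
    P = shadow_measure.shape[0]
    rows, targets = [], []

    # Data term: push x_i towards the brightness evidence of patch i.
    for i in range(P):
        r = np.zeros(P)
        r[i] = 1.0
        rows.append(r)
        targets.append(bright_measure[i] /
                       (bright_measure[i] + shadow_measure[i] + 1e-6))

    # Smoothness term: neighboring patches should have similar factors.
    for i, j in edges:
        r = np.zeros(P)
        r[i], r[j] = lam, -lam
        rows.append(r)
        targets.append(0.0)

    A = np.vstack(rows)
    b = np.asarray(targets)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.clip(x, 0.0, 1.0)
```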


International Conference on Image Processing | 2012

Adaptive texture-color based background subtraction for video surveillance

Teck Wee Chua; Yue Wang; Karianto Leman

Texture and color are two primitive forms of features that can be used to describe a scene. While conventional local binary pattern (LBP) texture-based background subtraction performs well in texture-rich regions, it fails to detect uniform foreground objects against a large uniform background. As such, color information can be used to complement the texture feature. In this study, we propose to incorporate a local color feature based on the Improved Hue, Luminance, and Saturation (IHLS) color space and introduce an adaptive scheme that automatically adjusts the weight between texture and color similarities based on each pixel's local properties: texture uniformity and color saturation. Experiments on eight challenging sequences demonstrate the effectiveness of the proposed method compared to state-of-the-art algorithms.
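
The abstract does not spell out the weighting scheme; the sketch below shows one plausible way to blend an LBP-based texture similarity with a color similarity using a weight driven by local texture uniformity and color saturation. The heuristics and thresholds are assumptions, not the paper's exact IHLS-based formulation.

```python
# Illustrative sketch only: blend texture and color similarities with an
# adaptive weight. The uniformity/saturation heuristics are assumptions,
# not the paper's exact scheme.
import numpy as np

def adaptive_similarity(tex_sim, col_sim, texture_uniformity, saturation):
    """tex_sim, col_sim: similarities in [0, 1] between the current region
    and the background model. texture_uniformity, saturation: local cues in [0, 1]."""
    # Uniform (texture-poor) but well-saturated regions -> trust color more.
    w_col = np.clip(0.5 * texture_uniformity + 0.5 * saturation, 0.0, 1.0)
    w_tex = 1.0 - w_col
    return w_tex * tex_sim + w_col * col_sim

def is_foreground(tex_sim, col_sim, texture_uniformity, saturation, thresh=0.6):
    # A pixel deviates from the background model when the blended similarity is low.
    return adaptive_similarity(tex_sim, col_sim, texture_uniformity, saturation) < thresh
```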


Conference on Multimedia Modeling | 2011

Combination of local and global features for near-duplicate detection

Yue Wang; Zujun Hou; Karianto Leman; Nam Trung Pham; Teck Wee Chua; Richard Chang

This paper presents a new method to combine local and global features for near-duplicate image detection. It mainly consists of three steps. Firstly, the keypoints of the images are extracted and preliminarily matched. Secondly, the matched keypoints vote for the estimation of an affine transform, which reduces false keypoint matches. Finally, to further confirm the matching, the Local Binary Pattern (LBP) and color histograms of the areas formed by matched keypoints in the two images are compared. This method has the advantage of handling the case when there are only a few matched keypoints. The proposed algorithm has been tested on the Columbia dataset and compared quantitatively with the RANdom SAmple Consensus (RANSAC) and the Scale-Rotation Invariant Pattern Entropy (SR-PE) methods. The results show that the proposed method compares favorably with the state of the art.
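
As a rough stand-in for the three-step pipeline, the sketch below uses off-the-shelf OpenCV components: ORB keypoints, an affine estimate to prune bad matches, and a color-histogram check over the matched regions. The paper's own keypoint voting and LBP comparison are not reproduced; the function choices and thresholds here are assumptions.

```python
# Rough pipeline sketch with OpenCV stand-ins; not the paper's exact method.
import cv2
import numpy as np

def near_duplicate_score(img1, img2, min_matches=8):
    """img1, img2: BGR images. Returns a similarity score in roughly [-1, 1]."""
    orb = cv2.ORB_create(nfeatures=1000)
    g1 = cv2.cvtColor(img1, cv2.COLOR_BGR2GRAY)
    g2 = cv2.cvtColor(img2, cv2.COLOR_BGR2GRAY)
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    if d1 is None or d2 is None:
        return 0.0

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = matcher.match(d1, d2)
    if len(matches) < min_matches:
        return 0.0

    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    # Affine estimate (RANSAC here as a stand-in for the paper's voting step).
    _, inliers = cv2.estimateAffine2D(src, dst, method=cv2.RANSAC)
    if inliers is None or inliers.sum() < min_matches:
        return 0.0

    # Compare color histograms over the bounding boxes of the inlier keypoints.
    def region_hist(img, pts):
        x, y, w, h = cv2.boundingRect(pts.astype(np.int32))
        patch = img[y:y + h, x:x + w]
        hist = cv2.calcHist([patch], [0, 1, 2], None, [8, 8, 8],
                            [0, 256, 0, 256, 0, 256])
        return cv2.normalize(hist, hist).flatten()

    mask = inliers.ravel().astype(bool)
    h1 = region_hist(img1, src[mask])
    h2 = region_hist(img2, dst[mask])
    return float(cv2.compareHist(h1, h2, cv2.HISTCMP_CORREL))
```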


IEEE International Conference on Fuzzy Systems | 2012

Fuzzy rule-based system for dynamic texture and color based background subtraction

Teck Wee Chua; Karianto Leman; Yue Wang

Background subtraction is an essential technique for automatic video analysis. The main idea is to construct and update a model of the background scene. Foreground pixels are detected if they deviate from the background model to a certain extent. The model can consist of color, texture and gradient information [1]. In this paper, we focus on both color and texture information. The proposed texture feature is based on the local binary pattern (LBP), while the color feature is represented by the local color pattern (LCP). LBP is known to work well on texture-rich regions and is invariant to subtle illumination variations, but it is ineffective on uniform regions. In view of this, color information can be incorporated to complement the texture feature. On the other hand, when the scene contrast or video quality is poor, color information may be unreliable and should be assigned lower priority than texture information. We propose a fuzzy rule-based system that adaptively adjusts the weights of the texture and color features based on each pixel's local properties. Experimental results on real scenes demonstrate the robustness of the proposed method.
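
A minimal sketch of such a fuzzy rule base is shown below: local texture uniformity and color saturation are fuzzified, a few hand-written rules fire, and a Sugeno-style weighted average gives the color weight. The membership functions and rules are illustrative assumptions, not the ones used in the paper.

```python
# Minimal sketch of a fuzzy rule base for weighting texture vs. color.
# Membership functions and rules are illustrative assumptions.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function peaking at b."""
    return np.maximum(np.minimum((x - a) / (b - a + 1e-9),
                                 (c - x) / (c - b + 1e-9)), 0.0)

def color_weight(texture_uniformity, saturation):
    """Both inputs in [0, 1]. Returns the weight given to the color feature;
    the texture feature gets 1 - weight."""
    # Fuzzify the inputs.
    unif_high = tri(texture_uniformity, 0.5, 1.0, 1.5)
    unif_low = tri(texture_uniformity, -0.5, 0.0, 0.5)
    sat_high = tri(saturation, 0.5, 1.0, 1.5)
    sat_low = tri(saturation, -0.5, 0.0, 0.5)

    # Rules (firing strength -> consequent color weight):
    #  IF region is uniform AND color is saturated THEN rely on color (0.8)
    #  IF region is textured                       THEN rely on texture (0.2)
    #  IF color is unsaturated                     THEN rely on texture (0.1)
    strengths = np.array([min(unif_high, sat_high), unif_low, sat_low])
    consequents = np.array([0.8, 0.2, 0.1])

    # Weighted-average (Sugeno-style) defuzzification.
    return float((strengths * consequents).sum() / (strengths.sum() + 1e-9))
```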


IEEE International Conference on Fuzzy Systems | 2011

Human action recognition via sum-rule fusion of fuzzy K-Nearest Neighbor classifiers

Teck Wee Chua; Karianto Leman; Nam Trung Pham

Shape and motion are the two most distinctive cues observed from human actions. Traditionally, a K-Nearest Neighbor (K-NN) classifier is used to compute crisp votes from multiple cues separately. The votes are then combined using a linear weighting scheme, where the weights are usually determined in a brute-force or trial-and-error manner. In this study, we propose a new classification framework based on sum-rule fusion of fuzzy K-NN classifiers. A fuzzy K-NN classifier is capable of producing soft votes, also known as fuzzy membership values. Based on Bayes' theorem, we show that the fuzzy membership values produced by the classifiers can be combined using the sum rule. In our experiments, the proposed framework consistently outperforms its conventional counterpart (K-NN with majority voting) on both the Weizmann and KTH datasets. The improvement may be attributed to the ability of the proposed framework to handle data ambiguity caused by similar poses appearing in different action classes. We also show that the performance of our method compares favorably with the state of the art.
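
The sketch below illustrates the general idea with a Keller-style fuzzy K-NN producing soft class memberships for each cue, followed by the sum rule. Using crisp neighbor labels as memberships and the specific distance weighting are simplifying assumptions, not the paper's exact derivation.

```python
# Sketch of sum-rule fusion of two fuzzy K-NN classifiers.
import numpy as np

def fuzzy_knn_memberships(X_train, y_train, x, n_classes, k=5, m=2.0):
    """Return a soft class-membership vector for query feature x."""
    d = np.linalg.norm(X_train - x, axis=1) + 1e-9
    nn = np.argsort(d)[:k]
    w = 1.0 / d[nn] ** (2.0 / (m - 1.0))      # distance-based weights
    u = np.zeros(n_classes)
    for idx, wi in zip(nn, w):
        u[y_train[idx]] += wi                  # crisp neighbor labels as memberships
    return u / u.sum()

def classify_by_sum_rule(shape_train, motion_train, y_train,
                         shape_query, motion_query, n_classes, k=5):
    u_shape = fuzzy_knn_memberships(shape_train, y_train, shape_query, n_classes, k)
    u_motion = fuzzy_knn_memberships(motion_train, y_train, motion_query, n_classes, k)
    return int(np.argmax(u_shape + u_motion))  # sum rule over the two cues
```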


International Conference on Distributed Smart Cameras | 2013

Automatic cooperative camera system for real-time bag detection in visual surveillance

Richard Chang; Teck Wee Chua; Karianto Leman; Hee Lin Wang; Jie Zhang

Visual surveillance systems use more and more cameras in order to cover wider areas and reduce blind spots. Camera placement and configuration usually depend on the area to be monitored and the size of objects in the scene. Video analytics systems also require a minimum object size to extract detailed features of objects or people. Most vision-based surveillance systems focus on detection and tracking of people or objects in the scene. However, it is often more meaningful to describe people with high-level information such as hair style, carried bags, or other attributes, and performing this kind of detection requires a close view. In this paper, a collaborative camera-pair system tackles this problem and retrieves detailed features in a wide scene following the master-slave approach. A PTZ (Pan-Tilt-Zoom) camera is defined as the slave and zooms in on targets detected by the wide-coverage master camera. We introduce an automatic method to estimate the internal camera parameters, giving efficient control of the camera pair, combined with a novel real-time bag detection algorithm. Targets are first identified in the master camera, and the slave camera then zooms in on them to detect different types of bags. Experimental results on real data are shown for each step of the approach.
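
The hand-off from master to slave can be illustrated with the rough sketch below, which maps a target pixel in the wide view to pan/tilt angles for the PTZ camera. The homography H and the slave intrinsics are assumed to be known here; the paper's automatic estimation of these parameters and the bag detector itself are not reproduced.

```python
# Very rough sketch of the master/slave hand-off.
import numpy as np

def master_to_slave_pan_tilt(target_xy, H, fx, fy, cx, cy):
    """target_xy: (x, y) pixel of the target in the master view.
    H: 3x3 homography mapping master pixels to slave home-position pixels.
    fx, fy, cx, cy: slave focal lengths and principal point (pixels).
    Returns (pan, tilt) in degrees relative to the slave home position."""
    p = H @ np.array([target_xy[0], target_xy[1], 1.0])
    u, v = p[0] / p[2], p[1] / p[2]
    pan = np.degrees(np.arctan2(u - cx, fx))   # horizontal offset -> pan
    tilt = np.degrees(np.arctan2(v - cy, fy))  # vertical offset -> tilt
    return float(pan), float(tilt)
```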


Conference on Multimedia Modeling | 2014

Hierarchical Audio-Visual Surveillance for Passenger Elevators

Teck Wee Chua; Karianto Leman; Feng Gao

Modern elevators are equipped with closed-circuit television (CCTV) cameras to record videos for post-incident investigation rather than to provide proactive event monitoring. While there have been some attempts at automated video surveillance, events such as urinating, vandalism, and crimes that involve vulnerable targets may not exhibit significant visual cues. On the contrary, such events are more discernible from audio cues. In this work, we propose a hierarchical audio-visual surveillance framework for elevators. An audio analytics module acts as the front-line detector that monitors for such events; that is, the audio cue is the main source used to infer that an event has occurred. The secondary inference process then queries the visual analytics module to build up the evidence leading to event detection. We validate the performance of our system at a residential trial site, and the initial results are promising.
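
The hierarchical decision flow might be sketched as below: the audio detector acts as a gate, and visual evidence is queried only to confirm. The detector interfaces, scores, and thresholds are assumptions for illustration, not the framework's actual components.

```python
# Illustrative sketch of the hierarchical (audio-first) decision flow.
from dataclasses import dataclass

@dataclass
class Event:
    label: str
    audio_score: float
    visual_score: float

def hierarchical_detect(audio_clip, video_clip, audio_detector, visual_detector,
                        audio_thresh=0.7, fused_thresh=1.2):
    """audio_detector / visual_detector: callables returning (label, score)."""
    label, a_score = audio_detector(audio_clip)
    if a_score < audio_thresh:
        return None                       # audio is the front-line gate
    _, v_score = visual_detector(video_clip, query=label)
    if a_score + v_score < fused_thresh:  # build up evidence before alerting
        return None
    return Event(label, a_score, v_score)
```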


Conference on Multimedia Modeling | 2014

A Novel Human Action Representation via Convolution of Shape-Motion Histograms

Teck Wee Chua; Karianto Leman

Robust solutions to vision-based human action recognition require effective representations of body shapes and their dynamics. Combining multiple cues in the input space can improve the recognition task. Although a conventional method such as concatenation of feature vectors is straightforward, it may not sufficiently encapsulate the characteristics of an action. Inspired by the success of convolution-based reverb in digital signal processing, we propose a novel method to synergistically combine shape and motion histograms via a convolution operation. The objective is to synthesize an output (the action representation) that carries the characteristics of both source inputs (shape and motion). Analysis and experimental results on the Weizmann and KTH datasets show that the resulting feature is more effective than other hybrid features. Compared to other recent works, the feature we use has a much lower dimension. In addition, our method avoids the need to determine weights manually during feature concatenation.
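
A minimal sketch of combining the two histograms by convolution, as opposed to concatenation, is given below; the histogram sizes and normalization are illustrative choices rather than the paper's settings.

```python
# Minimal sketch: fuse shape and motion histograms by convolution.
import numpy as np

def shape_motion_descriptor(shape_hist, motion_hist):
    """shape_hist, motion_hist: 1-D histograms of an action sequence.
    Returns their full linear convolution, L1-normalized."""
    desc = np.convolve(shape_hist, motion_hist, mode="full")
    return desc / (np.abs(desc).sum() + 1e-9)

# Example: two 16-bin histograms give a 31-dimensional descriptor.
rng = np.random.default_rng(0)
d = shape_motion_descriptor(rng.random(16), rng.random(16))
print(d.shape)  # (31,)
```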


Intelligent Robots and Systems | 2013

Sling bag and backpack detection for human appearance semantic in vision system

Teck Wee Chua; Karianto Leman; Hee Lin Wang; Nam Trung Pham; Richard Chang; Dinh Duy Nguyen; Jie Zhang

In many intelligent surveillance systems there is a requirement to search for people of interest through archived semantic labels. Beyond typical appearance attributes such as clothing color and body height, information such as whether a person carries a bag is valuable for more relevant targeted search. We propose two novel and fast algorithms for sling bag and backpack detection based on the geometrical properties of bags. The advantage of the proposed algorithms is that they do not require shape information from human silhouettes and can therefore work under crowded conditions. In addition, the absence of background subtraction makes the algorithms suitable for mobile platforms such as robots. The system was tested on a low-resolution surveillance video dataset. Experimental results demonstrate that our method is promising.


International Conference on Acoustics, Speech, and Signal Processing | 2011

Random finite set for data association in multiple camera tracking

Nam Trung Pham; Richard Chang; Karianto Leman; Teck Wee Chua; Yue Wang

Most methods for multiple-camera tracking rely on accurate calibration to associate data from multiple cameras. However, accurate calibration is often hard to obtain in real applications for practical reasons, and inaccurate calibration can lead to wrong data association of objects between cameras. In this paper, we propose a method to handle the data association of objects in multiple cameras under an inaccurate ground-plane homography by using a random finite set (RFS) Bayes filter. Our method is based on modeling the measurements from the cameras as a random finite set that includes the primary measurement from the object, extraneous measurements of the object, and clutter. Experimental results show the efficiency and robustness of our method in challenging cases such as occlusions and merged or split persons.
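
As a toy illustration of the random-finite-set measurement model, the sketch below computes a Bernoulli-style pseudo-likelihood for one object whose measurement set may contain its own detection, extraneous measurements, and clutter. The detection probability, clutter density, and Gaussian measurement model are assumptions; the paper's full RFS Bayes filter is not reproduced.

```python
# Toy sketch of an RFS-style measurement pseudo-likelihood for one object.
import numpy as np

def rfs_pseudo_likelihood(meas_set, predicted_pos, pD=0.9,
                          clutter_density=1e-4, sigma=10.0):
    """meas_set: (N, 2) array of ground-plane measurements from all cameras.
    predicted_pos: (2,) predicted position of the tracked object.
    Combines the 'missed detection' and 'detected among clutter' hypotheses."""
    if len(meas_set) == 0:
        return 1.0 - pD                       # no measurement: missed detection
    d2 = np.sum((meas_set - predicted_pos) ** 2, axis=1)
    g = np.exp(-0.5 * d2 / sigma ** 2) / (2.0 * np.pi * sigma ** 2)
    # Sum over the hypotheses that each measurement is the object's detection,
    # with the rest treated as clutter of the given density.
    return (1.0 - pD) + pD * np.sum(g / clutter_density)
```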
