Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Karianto Leman is active.

Publication


Featured research published by Karianto Leman.


Computer Vision and Pattern Recognition | 2015

Shadow optimization from structured deep edge detection

Li Shen; Teck Wee Chua; Karianto Leman

Local structures of shadow boundaries, as well as complex interactions among image regions, remain largely unexploited by previous shadow detection approaches. In this paper, we present a novel learning-based framework for shadow region recovery from a single image. We exploit the local structure of shadow edges using a structured CNN learning framework, and show that using structured label information in classification improves local consistency over pixel labels and avoids spurious labelling. We further propose and formulate a shadow/bright measure to model complex interactions among image regions. The shadow and bright measures of each patch are computed from the shadow edges detected by the proposed CNN. Using global interaction constraints on patches, we formulate a least-squares optimization problem for shadow recovery that can be solved efficiently. Our shadow recovery method achieves state-of-the-art results on major shadow benchmark databases collected under various conditions.
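
The abstract does not spell out the optimization; as a loose illustration of recovering a patch-wise shadow confidence map by least squares under pairwise interaction constraints, here is a minimal NumPy sketch (the data term, the smoothness term, and recover_shadow_map itself are hypothetical stand-ins, not the paper's formulation):

```python
import numpy as np

def recover_shadow_map(shadow_measure, bright_measure, smooth_weight=1.0):
    # shadow_measure / bright_measure: H x W arrays of per-patch evidence
    # (hypothetical form; the paper derives these from detected shadow edges).
    H, W = shadow_measure.shape
    n = H * W
    idx = lambda i, j: i * W + j
    eqs, rhs = [], []
    # Data term: each patch's confidence should match its local evidence.
    for i in range(H):
        for j in range(W):
            row = np.zeros(n)
            row[idx(i, j)] = 1.0
            eqs.append(row)
            rhs.append(shadow_measure[i, j] /
                       (shadow_measure[i, j] + bright_measure[i, j] + 1e-6))
    # Interaction term: 4-connected neighbouring patches should agree.
    for i in range(H):
        for j in range(W):
            for di, dj in ((0, 1), (1, 0)):
                if i + di < H and j + dj < W:
                    row = np.zeros(n)
                    row[idx(i, j)] = smooth_weight
                    row[idx(i + di, j + dj)] = -smooth_weight
                    eqs.append(row)
                    rhs.append(0.0)
    x, *_ = np.linalg.lstsq(np.vstack(eqs), np.array(rhs), rcond=None)
    return x.reshape(H, W)
```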


International Conference on Image Processing | 2012

Adaptive texture-color based background subtraction for video surveillance

Teck Wee Chua; Yue Wang; Karianto Leman

Texture and color are two primitive forms of features that can be used to describe a scene. While conventional local binary pattern (LBP) texture-based background subtraction performs well on texture-rich regions, it fails to detect uniform foreground objects against a large uniform background. Color information can therefore be used to complement the texture feature. In this study, we propose to incorporate a local color feature based on the Improved Hue, Luminance, and Saturation (IHLS) color space, and we introduce an adaptive scheme that automatically adjusts the weight between texture and color similarities based on each pixel's local properties: texture uniformity and color saturation. Experiments on eight challenging sequences demonstrate the effectiveness of the proposed method compared to state-of-the-art algorithms.
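
A minimal sketch of the adaptive weighting idea, assuming the texture/color similarities and the two local properties are already computed and normalized to [0, 1] (the blending rule below is illustrative, not the paper's):

```python
import numpy as np

def fused_similarity(tex_sim, col_sim, uniformity, saturation):
    # All four inputs assumed in [0, 1]. Colour gets more weight where the
    # patch is texturally uniform and well saturated; texture dominates
    # elsewhere. The paper's exact rule and IHLS features are not reproduced.
    w_color = np.clip(0.5 * uniformity + 0.5 * saturation, 0.0, 1.0)
    return (1.0 - w_color) * tex_sim + w_color * col_sim
```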


International Conference on Acoustics, Speech, and Signal Processing | 2011

Keypoint-based near-duplicate images detection using affine invariant feature and color matching

Yue Wang; Zujun Hou; Karianto Leman

This paper presents a new keypoint-based approach to near-duplicate image detection. It consists of three steps. First, the keypoints of the images are extracted and matched. Second, the matched keypoints vote for the estimation of an affine transform, based on an affine-invariant ratio of normalized lengths. Finally, to further confirm the matching, the color histograms of the areas formed by the matched keypoints in the two images are compared. The method has the advantage of handling the case where there are only a few matched keypoints. The proposed algorithm has been tested on the Columbia dataset and compared quantitatively with the RANdom SAmple Consensus (RANSAC) algorithm and the Scale-Rotation Invariant Pattern Entropy (SR-PE) algorithm. The experimental results show that the proposed method compares favorably with the state of the art.
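
A rough Python/OpenCV sketch of the three-step pipeline, with stand-in components: ORB features instead of the paper's affine-invariant feature, OpenCV's robust affine estimator instead of the length-ratio voting, and a whole-image hue-histogram check instead of one restricted to the matched areas:

```python
import cv2
import numpy as np

def near_duplicate_score(img1, img2, min_matches=4):
    # Step 1: extract and match keypoints (ORB as a stand-in feature).
    g1, g2 = (cv2.cvtColor(i, cv2.COLOR_BGR2GRAY) for i in (img1, img2))
    orb = cv2.ORB_create()
    k1, d1 = orb.detectAndCompute(g1, None)
    k2, d2 = orb.detectAndCompute(g2, None)
    if d1 is None or d2 is None:
        return 0.0
    matches = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True).match(d1, d2)
    if len(matches) < min_matches:
        return 0.0
    # Step 2: verify matches against a single affine transform.
    src = np.float32([k1[m.queryIdx].pt for m in matches])
    dst = np.float32([k2[m.trainIdx].pt for m in matches])
    M, inliers = cv2.estimateAffinePartial2D(src, dst)
    if M is None:
        return 0.0
    # Step 3: colour verification via hue histograms (the paper restricts
    # this to areas spanned by the matched keypoints; we use whole images).
    hists = []
    for img in (img1, img2):
        h = cv2.calcHist([cv2.cvtColor(img, cv2.COLOR_BGR2HSV)],
                         [0], None, [32], [0, 180])
        hists.append(cv2.normalize(h, h))
    color_sim = max(cv2.compareHist(hists[0], hists[1], cv2.HISTCMP_CORREL), 0.0)
    return 0.5 * float(inliers.sum()) / len(matches) + 0.5 * color_sim
```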


Conference on Multimedia Modeling | 2011

Combination of local and global features for near-duplicate detection

Yue Wang; Zujun Hou; Karianto Leman; Nam Trung Pham; Teck Wee Chua; Richard Chang

This paper presents a new method for combining local and global features for near-duplicate image detection. It consists of three main steps. First, the keypoints of the images are extracted and preliminarily matched. Second, the matched keypoints vote for the estimation of an affine transform, which reduces false matches. Finally, to further confirm the matching, the Local Binary Pattern (LBP) and color histograms of the areas formed by the matched keypoints in the two images are compared. The method has the advantage of handling the case where there are only a few matched keypoints. The proposed algorithm has been tested on the Columbia dataset and compared quantitatively with the RANdom SAmple Consensus (RANSAC) and Scale-Rotation Invariant Pattern Entropy (SR-PE) methods. The results show that the proposed method compares favorably with the state of the art.
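
The new ingredient relative to the previous paper is the LBP texture check; a minimal 8-neighbour LBP histogram in NumPy might look like this (a generic LBP, not necessarily the authors' exact variant):

```python
import numpy as np

def lbp_histogram(gray):
    # Basic 8-neighbour LBP over a grayscale uint8 image, returned as a
    # normalised 256-bin histogram for comparison between matched areas.
    g = gray.astype(np.int32)
    h, w = g.shape
    center = g[1:-1, 1:-1]
    code = np.zeros_like(center)
    neighbours = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
                  (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (di, dj) in enumerate(neighbours):
        nb = g[1 + di:h - 1 + di, 1 + dj:w - 1 + dj]
        code |= (nb >= center).astype(np.int32) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / max(hist.sum(), 1)
```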


International Journal of Software Engineering and Knowledge Engineering | 2005

PDA based human motion recognition system

Karianto Leman; Goel Ankit; T. Tan

This paper describes the design and implementation of autonomous, real-time motion recognition on a Personal Digital Assistant (PDA). Previous such applications have been non-real-time and required user interaction. The motivation for using a PDA is to test the viability of performing complex video processing on an embedded platform. The application uses a representation and recognition technique that identifies motion patterns by their Hu moments. The approach is based on temporal templates (Motion Energy and Motion History Images) and their matching over time. The implementation uses Intel Integrated Performance Primitives functions to reduce the complexity of the application. Tests were conducted on five different motion actions: arm waving, walking from the left and from the right of the camera, head tilting, and bending forward. Suggestions are also made on how to improve the performance of the system and on possible applications.
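
A compact OpenCV/NumPy re-expression of the temporal-template idea, assuming grayscale uint8 frames (the threshold and decay values are illustrative; the original system ran on Intel IPP, not OpenCV):

```python
import cv2
import numpy as np

def mhi_hu_features(frames, tau=15, diff_thresh=25):
    # frames: list of grayscale uint8 frames of equal size.
    mhi = np.zeros(frames[0].shape, np.float32)
    prev = frames[0]
    for f in frames[1:]:
        motion = cv2.absdiff(f, prev) > diff_thresh
        prev = f
        # Stamp new motion at full strength, decay old motion by one step.
        mhi = np.where(motion, float(tau), np.maximum(mhi - 1.0, 0.0))
    # Hu moments of the template, log-scaled for comparability as is
    # conventional; recognition would nearest-neighbour match these vectors.
    hu = cv2.HuMoments(cv2.moments(mhi)).flatten()
    return -np.sign(hu) * np.log10(np.abs(hu) + 1e-30)
```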


International Conference on Image Processing | 2014

Learning deep features for multiple object tracking by using a multi-task learning strategy

Li Wang; Nam Trung Pham; Tian-Tsong Ng; Gang Wang; Kap Luk Chan; Karianto Leman

Model-free object tracking remains challenging because of limited prior knowledge and unexpected variations of the target object. In this paper, we propose a feature learning algorithm for model-free multiple object tracking. First, we pre-learn generic features invariant to diverse motion transformations from auxiliary video data using a deep auto-encoder network. Then, we adapt the pre-learned features to each target object in a multi-task learning manner, treating the feature adaptation for each target object as a single task. We simultaneously learn the common feature shared by all target objects and the individual feature of each object. Experimental results demonstrate that our feature learning algorithm can significantly improve multiple object tracking performance.
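
A toy PyTorch sketch of the shared-plus-individual decomposition (layer sizes, the sigmoid head, and the class name are illustrative inventions; the paper's auto-encoder pre-training on auxiliary video is omitted):

```python
import torch
import torch.nn as nn

class SharedPlusIndividual(nn.Module):
    def __init__(self, in_dim=1024, feat_dim=256, n_targets=4):
        super().__init__()
        # Feature extractor; in the paper this would be initialised from
        # auto-encoder pre-training on auxiliary video (omitted here).
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        self.shared = nn.Linear(feat_dim, 1)   # component common to all targets
        self.individual = nn.ModuleList(       # per-target component
            [nn.Linear(feat_dim, 1) for _ in range(n_targets)])

    def forward(self, patch, target_id):
        z = self.encoder(patch)
        # Score = shared task output + this target's individual output.
        return torch.sigmoid(self.shared(z) + self.individual[target_id](z))
```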


IEEE International Conference on Fuzzy Systems | 2012

Fuzzy rule-based system for dynamic texture and color based background subtraction

Teck Wee Chua; Karianto Leman; Yue Wang

Background subtraction is an essential technique for automatic video analysis. The main idea is to construct and update a model of the background scene; foreground pixels are detected when they deviate from the background model beyond a certain extent. The model can consist of color, texture, and gradient information [1]. In this paper, we focus on color and texture. The proposed texture feature is based on the local binary pattern (LBP), while the color feature is represented by a local color pattern (LCP). LBP is known to work well on texture-rich regions and is invariant to subtle illumination variations, but it is ineffective on uniform regions. Color information can therefore be incorporated to complement the texture feature. On the other hand, when the scene contrast or video quality is poor, color information may be unreliable and should be given lower priority than texture information. We propose a fuzzy rule-based system that adaptively adjusts the weights of the texture and color features based on each pixel's local properties. Experimental results on real scenes demonstrate the robustness of the proposed method.
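
A two-rule sketch of the fuzzy weighting, with made-up membership functions (the paper's rule base and defuzzification are not reproduced):

```python
def fuzzy_weighted_similarity(tex_sim, col_sim, uniformity, saturation):
    # Two illustrative rules over scalar inputs in [0, 1]:
    #   R1: IF region is uniform AND well saturated THEN trust colour.
    #   R2: OTHERWISE trust texture.
    fire_color = min(uniformity, saturation)   # fuzzy AND via minimum
    fire_texture = 1.0 - fire_color
    # Weighted-average defuzzification of the two rule consequents
    # (the firing strengths sum to one by construction here).
    return fire_color * col_sim + fire_texture * tex_sim
```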


Conference on Multimedia Modeling | 2014

Fusing Appearance and Spatio-temporal Features for Multiple Camera Tracking

Nam Trung Pham; Karianto Leman; Richard Chang; Jie Zhang; Hee Lin Wang

Multiple camera tracking is a challenging task for many surveillance systems. Its objective is to maintain the trajectories of objects across the camera network. Due to ambiguities in the appearance of objects, it is difficult to re-identify them when they re-appear in other cameras. Most prior work associates objects using appearance features alone. In this work, we fuse appearance and spatio-temporal features for person re-identification. Our framework consists of two steps: preprocessing to reduce the number of association candidates, and associating objects using the probabilistic relative distance. We set up an experimental environment with 10 cameras and achieve better performance than using appearance features only.
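
A minimal sketch of fusing the two cues, assuming an appearance distance and a per-camera-pair Gaussian model of inter-camera travel time; the paper's probabilistic relative distance is not reproduced here:

```python
import numpy as np

def association_score(app_dist, travel_time, mean_t, std_t, alpha=0.5):
    # Appearance term: smaller feature distance -> higher likelihood.
    p_app = np.exp(-app_dist)
    # Spatio-temporal term: Gaussian on the inter-camera travel time,
    # with mean_t/std_t learned per camera pair from training data.
    p_st = np.exp(-0.5 * ((travel_time - mean_t) / std_t) ** 2)
    return alpha * np.log(p_app + 1e-12) + (1 - alpha) * np.log(p_st + 1e-12)
```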


IEEE International Conference on Fuzzy Systems | 2011

Human action recognition via sum-rule fusion of fuzzy K-Nearest Neighbor classifiers

Teck Wee Chua; Karianto Leman; Nam Trung Pham

Shape and motion are the two most distinctive cues observed in human actions. Traditionally, a K-Nearest Neighbor (K-NN) classifier is used to compute crisp votes from each cue separately, and the votes are then combined using a linear weighting scheme whose weights are usually determined by brute force or trial and error. In this study, we propose a new classification framework based on sum-rule fusion of fuzzy K-NN classifiers. A fuzzy K-NN classifier produces soft votes, also known as fuzzy membership values. Based on Bayes' theorem, we show that the fuzzy membership values produced by the classifiers can be combined using the sum rule. In our experiments, the proposed framework consistently outperforms the conventional counterpart (K-NN with majority voting) on both the Weizmann and KTH datasets. The improvement may be attributed to the ability of the proposed framework to handle data ambiguity caused by similar poses appearing in different action classes. We also show that the performance of our method compares favorably with the state of the art.
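
A small NumPy sketch of the two ingredients: fuzzy K-NN memberships in the spirit of Keller et al.'s classic formulation, and their sum-rule combination across cues (parameter choices are illustrative):

```python
import numpy as np

def fuzzy_knn_membership(dists, labels, n_classes, k=5, m=2.0):
    # Memberships are inverse-distance-weighted votes of the k nearest
    # neighbours; dists/labels describe the training set relative to one
    # query sample, m is the usual fuzzifier.
    order = np.argsort(dists)[:k]
    w = 1.0 / (dists[order] ** (2.0 / (m - 1.0)) + 1e-12)
    mem = np.zeros(n_classes)
    for weight, lab in zip(w, labels[order]):
        mem[lab] += weight
    return mem / mem.sum()

def sum_rule(memberships):
    # Add the soft votes from each cue's classifier (e.g. shape and
    # motion), then pick the class with the largest total membership.
    return int(np.argmax(np.sum(memberships, axis=0)))
```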


International Conference on Distributed Smart Cameras | 2013

Automatic cooperative camera system for real-time bag detection in visual surveillance

Richard Chang; Teck Wee Chua; Karianto Leman; Hee Lin Wang; Jie Zhang

Visual surveillance systems use more and more cameras in order to cover wider areas and reduce blind spots. Camera placement and configuration usually depend on the area to be monitored and the size of objects in the scene. Video analytics systems also require a minimum object size to extract detailed features of objects or people. Most vision-based surveillance systems focus on detecting and tracking people or objects in the scene; however, it is often more meaningful to describe people with high-level attributes such as hair style or whether they carry a bag, and this kind of detection requires a close-up view. In this paper, a collaborative camera-pair system tackles this problem and retrieves detailed features in a wide scene, following the master-slave approach: a PTZ (pan-tilt-zoom) camera acts as the slave and zooms in on targets detected by the wide-coverage master camera. We introduce an automatic method for estimating the internal camera parameters, enabling efficient control of the camera pair, combined with a novel real-time bag detection algorithm. Targets are first identified in the master camera, and the slave camera then zooms in on them to detect different types of bags. Experimental results on real data are shown for each step of the approach.
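
A minimal sketch of the master-to-slave handoff under a simplifying assumption: a pre-calibrated 3x3 mapping H (a hypothetical stand-in; the paper instead estimates the internal PTZ parameters automatically) takes a master-image pixel to pan/tilt offsets:

```python
import numpy as np

def master_to_pan_tilt(u, v, H, pan0=0.0, tilt0=0.0):
    # H: hypothetical pre-calibrated homography-like mapping from master
    # pixel coordinates to slave (pan, tilt) offsets in degrees.
    p = H @ np.array([u, v, 1.0])
    return pan0 + p[0] / p[2], tilt0 + p[1] / p[2]
```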

Collaboration


Dive into Karianto Leman's collaborations.

Top Co-Authors

Gang Wang

Nanyang Technological University


Goel Ankit

National University of Singapore
