Publication


Featured research published by Kia-Fock Loe.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2006

A dynamic conditional random field model for foreground and shadow segmentation

Yang Wang; Kia-Fock Loe; Jian-Kang Wu

This paper proposes a dynamic conditional random field (DCRF) model for foreground object and moving shadow segmentation in indoor video scenes. Given an image sequence, temporal dependencies of consecutive segmentation fields and spatial dependencies within each segmentation field are unified by a dynamic probabilistic framework based on the conditional random field (CRF). An efficient approximate filtering algorithm is derived for the DCRF model to recursively estimate the segmentation field from the history of observed images. The foreground and shadow segmentation method integrates both intensity and gradient features. Moreover, models of background, shadow, and gradient information are updated adaptively for nonstationary background processes. Experimental results show that the proposed approach can accurately detect moving objects and their cast shadows even in monocular grayscale video sequences.
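At a high level, the recursive estimation such a filter performs has the standard Bayesian form below; the notation (s_t for the segmentation field at time t, I_{1:t} for the images observed up to time t) is ours, and the paper's specific factorization and approximation are not reproduced here:

\[
p(s_t \mid I_{1:t}) \;\propto\; p(I_t \mid s_t) \sum_{s_{t-1}} p(s_t \mid s_{t-1})\, p(s_{t-1} \mid I_{1:t-1})
\]

Because the sum over all previous segmentation fields is intractable for image-sized label fields, a filter of this kind must approximate it; the particular approximation derived in the paper is not shown here.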


Pattern Recognition | 2002

Robust vision-based features and classification schemes for off-line handwritten digit recognition

Loo-Nin Teow; Kia-Fock Loe

We use well-established results in biological vision to construct a model for handwritten digit recognition. We show empirically that the features extracted by our model are linearly separable over a large training set (MNIST). Using only a linear discriminant system on these features, our model is relatively simple yet outperforms other models on the same data set. In particular, the best result is obtained by applying triowise linear support vector machines with soft voting on vision-based features extracted from deslanted images.
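As a rough illustration of the classification stage only (a minimal sketch, not the paper's method): the snippet below fits linear SVMs to precomputed feature vectors. The feature extraction, the deslanting step, and the triowise soft-voting scheme are not reproduced; a standard one-vs-one linear SVM stands in for them.

```python
# Minimal sketch: linear classification of precomputed vision-based features.
# Assumptions: X_train/X_test are feature vectors produced elsewhere (not
# computed here), and scikit-learn's one-vs-one linear SVM stands in for the
# paper's triowise SVMs with soft voting.
import numpy as np
from sklearn.multiclass import OneVsOneClassifier
from sklearn.svm import LinearSVC

def train_linear_digit_classifier(X_train, y_train):
    """Fit one linear SVM per pair of digit classes."""
    clf = OneVsOneClassifier(LinearSVC(C=1.0))
    return clf.fit(X_train, y_train)

def accuracy(clf, X_test, y_test):
    """Fraction of test digits classified correctly."""
    return float(np.mean(clf.predict(X_test) == y_test))
```

If the extracted features are indeed linearly separable, a purely linear classifier of this form can in principle reach zero training error, which is the point the paper makes empirically.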


IEEE Transactions on Image Processing | 2005

Spatiotemporal video segmentation based on graphical models

Yang Wang; Kia-Fock Loe; Tele Tan; Jian-Kang Wu

This paper proposes a probabilistic framework for spatiotemporal segmentation of video sequences. Motion information, boundary information from intensity segmentation, and spatial connectivity of segmentation are unified in the video segmentation process by means of graphical models. A Bayesian network is presented to model interactions among the motion vector field, the intensity segmentation field, and the video segmentation field. The notion of the Markov random field is used to encourage the formation of continuous regions. Given consecutive frames, the conditional joint probability density of the three fields is maximized in an iterative way. To effectively utilize boundary information from the intensity segmentation, a distance transformation is employed in the local objective functions. Experimental results show that the method is robust and generates spatiotemporally coherent segmentation results. Moreover, the proposed video segmentation approach can be viewed as a compromise between previous motion-based approaches and region-merging approaches.
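For the distance-transformation step specifically, a minimal sketch is given below; it only shows how a per-pixel distance to the nearest intensity-segmentation boundary could be computed, and the way such a map enters the paper's local objective functions is not reproduced.

```python
# Minimal sketch: distance map to intensity-segmentation boundaries.
# Assumption: intensity_labels is a 2D integer label image from an
# intensity-based segmentation; how the distances are weighted in the
# objective functions is not shown.
import numpy as np
from scipy import ndimage

def boundary_distance_map(intensity_labels):
    """Distance (in pixels) from each pixel to the nearest region boundary."""
    lbl = intensity_labels
    # A boundary pixel is one whose label differs from a 4-neighbour.
    boundary = np.zeros(lbl.shape, dtype=bool)
    boundary[:-1, :] |= lbl[:-1, :] != lbl[1:, :]
    boundary[:, :-1] |= lbl[:, :-1] != lbl[:, 1:]
    # distance_transform_edt measures distance to the nearest zero element,
    # so pass the inverse of the boundary mask.
    return ndimage.distance_transform_edt(~boundary)
```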


Pattern Recognition | 2005

A probabilistic approach for foreground and shadow segmentation in monocular image sequences

Yang Wang; Tele Tan; Kia-Fock Loe; Jian-Kang Wu

This paper presents a novel method of foreground and shadow segmentation in monocular indoor image sequences. The models of background, edge information, and shadow are set up and adaptively updated. A Bayesian network is proposed to describe the relationships among the segmentation label, background, intensity, and edge information. A maximum a posteriori Markov random field (MAP-MRF) estimation is used to boost the spatial connectivity of the segmented regions.
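The MAP-MRF estimate referred to here has the generic form below; the symbols (s for the segmentation field, I for the observed intensity and edge information, N for the pixel neighbourhood system, V for a pairwise smoothness potential) are ours, and the specific conditional densities encoded by the paper's Bayesian network are not reproduced:

\[
\hat{s} \;=\; \arg\max_{s}\; p(s \mid I) \;\propto\; p(I \mid s)\, p(s),
\qquad
p(s) \;\propto\; \exp\!\Big(-\sum_{(i,j)\in\mathcal{N}} V(s_i, s_j)\Big)
\]

where the MRF prior p(s) penalizes neighbouring pixels that carry different labels and thereby boosts the spatial connectivity of the segmented regions.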


International Conference on Image Processing | 2003

A probabilistic method for foreground and shadow segmentation

Yang Wang; Tele Tan; Kia-Fock Loe

This paper presents a probabilistic method for foreground segmentation that distinguishes moving objects from their cast shadows in monocular indoor image sequences. The models of background, shadow, and edge information are set up and adaptively updated. A Bayesian framework is proposed to describe the relationships among the segmentation label, background, intensity, and edge information. A Markov random field is used to boost the spatial connectivity of the segmented regions. The solution is obtained by maximizing the posterior probability density of the segmentation field.


The Visual Computer | 1996

αB-spline: a linear singular blending B-spline

Kia-Fock Loe

A linear singular blending (LSB) technique can enhance the shape-control capability of the B-spline. This capability is derived from the blending parameters defined at the B-spline control vertices and blends LSB line segments or bilinear surface patches with the B-spline curve or surface. Varying the blending parameters between zero and unity applies tension for reshaping. The reshaped curve or surface retains the same smoothness properties as the original B-spline; it possesses the same strict parametric continuities. This is different from the β-spline, which introduces additional control to the B-spline by imposing geometrical continuities at the joints of curve segments or surface patches. For applications in which strict parametric continuities cannot be compromised, LSB provides an intuitive way to introduce tension to the B-spline.
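A minimal sketch of the blending idea for a uniform cubic B-spline curve is given below; it assumes a single global blending parameter alpha in place of the paper's per-vertex blending parameters, and takes the chord between the two middle control vertices of each segment as the blended line segment, so it illustrates the tension effect rather than reproducing the αB-spline construction.

```python
# Minimal sketch: blending a uniform cubic B-spline segment with a line
# segment to apply tension. Assumptions: a single global alpha replaces the
# per-vertex blending parameters, and the chord P[i+1]P[i+2] stands in for
# the paper's LSB line segments.
import numpy as np

def cubic_bspline_point(P, i, t):
    """Uniform cubic B-spline segment i (control points P[i..i+3]) at local t in [0, 1]."""
    b = np.array([(1 - t) ** 3,
                  3 * t ** 3 - 6 * t ** 2 + 4,
                  -3 * t ** 3 + 3 * t ** 2 + 3 * t + 1,
                  t ** 3]) / 6.0
    return b @ P[i:i + 4]

def alpha_bspline_point(P, i, t, alpha):
    """Blend the B-spline point with the chord joining the two middle control vertices."""
    spline_pt = cubic_bspline_point(P, i, t)
    chord_pt = (1 - t) * P[i + 1] + t * P[i + 2]
    return (1 - alpha) * spline_pt + alpha * chord_pt
```

At alpha = 0 the ordinary B-spline segment is recovered; increasing alpha toward 1 pulls the curve toward the control polygon, which is the tension effect described above.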


Computer Vision and Pattern Recognition | 2003

S-AdaBoost and pattern detection in complex environment

Jimmy Liu Jiang; Kia-Fock Loe

S-AdaBoost is a new variant of AdaBoost that is more effective than conventional AdaBoost in handling outliers in pattern detection and classification in real-world complex environments. Utilizing the divide-and-conquer principle, S-AdaBoost divides the input space into a few sub-spaces and uses dedicated classifiers to classify patterns in the sub-spaces. The final classification result is the combination of the outputs of the dedicated classifiers. The S-AdaBoost system is made up of an AdaBoost divider, an AdaBoost classifier, a dedicated classifier for outliers, and a non-linear combiner. In addition to presenting face detection test results in a complex airport environment, we have also conducted experiments on a number of benchmark databases to test the algorithm. The experimental results clearly show S-AdaBoost's effectiveness in pattern detection and classification.
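A minimal structural sketch of the divider / dedicated-classifier idea is given below for a binary detection task; using the boosted ensemble's own margin as the divider, a k-NN model as the dedicated outlier classifier, and a simple hand-off in place of the non-linear combiner are all assumptions for illustration, not the paper's design.

```python
# Minimal sketch of a divide-and-conquer boosting setup (binary detection).
# Assumptions: the AdaBoost margin acts as the divider, k-NN stands in for
# the dedicated outlier classifier, and routing replaces the paper's
# non-linear combiner.
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.neighbors import KNeighborsClassifier

class SAdaBoostSketch:
    def __init__(self, margin_threshold=0.1):
        self.divider = AdaBoostClassifier(n_estimators=100)
        self.outlier_clf = KNeighborsClassifier(n_neighbors=3)
        self.margin_threshold = margin_threshold
        self.has_outlier_model = False

    def fit(self, X, y):
        self.divider.fit(X, y)
        # Samples the boosted ensemble is least confident about are routed
        # to the "outlier" sub-space and handled by the dedicated classifier.
        hard = np.abs(self.divider.decision_function(X)) < self.margin_threshold
        self.has_outlier_model = int(hard.sum()) >= 3
        if self.has_outlier_model:
            self.outlier_clf.fit(X[hard], y[hard])
        return self

    def predict(self, X):
        pred = self.divider.predict(X)
        hard = np.abs(self.divider.decision_function(X)) < self.margin_threshold
        if self.has_outlier_model and hard.any():
            pred[hard] = self.outlier_clf.predict(X[hard])
        return pred
```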


Medical Imaging 2004: Physiology, Function, and Structure from Medical Images | 2004

Rapid and automatic detection of brain tumors in MR images

Zhengjia Wang; Qingmao Hu; Kia-Fock Loe; Aamer Aziz; Wieslaw L. Nowinski

An algorithm to automatically detect brain tumors in MR images is presented. The key concern is speed, in order to process large brain image databases efficiently and provide quick outcomes in a clinical setting. The method is based on the study of brain asymmetry: tumors cause asymmetry of the brain, so we detect brain tumors in 3D MR images using symmetry analysis of image grey levels with respect to the midsagittal plane (MSP). The MSP, separating the brain into two hemispheres, is extracted using our previously developed algorithm. After removing the background pixels, normalized grey-level histograms are calculated for both hemispheres. The similarity between these two histograms reflects the symmetry of the brain and is quantified using four symmetry measures: correlation coefficient, root mean square error, integral of absolute difference (IAD), and integral of normalized absolute difference (INAD). A quantitative analysis of brain normality based on 42 patients with tumors and 55 normal subjects is presented. The sensitivity and specificity were 83.3% and 89.1% for IAD, and 85.7% and 83.6% for INAD, respectively. The running time for each symmetry measure on a 3D 8-bit MR data set was between 0.1 and 0.3 seconds on a PC with a 2.4 GHz CPU.
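A minimal sketch of the histogram comparison step is given below; it assumes the hemispheres have already been separated by the extracted MSP and the background removed, and the IAD/INAD normalisations used here are illustrative rather than the paper's exact definitions.

```python
# Minimal sketch: grey-level histogram symmetry measures for two hemispheres.
# Assumptions: left/right voxel arrays come from hemispheres already split by
# the midsagittal plane with background removed; the IAD/INAD normalisations
# below are illustrative, not the paper's exact definitions.
import numpy as np

def grey_level_histogram(voxels, bins=256):
    """Normalised grey-level histogram of an 8-bit hemisphere."""
    hist, _ = np.histogram(voxels, bins=bins, range=(0, 256))
    return hist / max(hist.sum(), 1)

def symmetry_measures(left_voxels, right_voxels):
    """Four similarity measures between the two hemisphere histograms."""
    hl = grey_level_histogram(left_voxels)
    hr = grey_level_histogram(right_voxels)
    diff = hl - hr
    return {
        "correlation": float(np.corrcoef(hl, hr)[0, 1]),
        "rmse": float(np.sqrt(np.mean(diff ** 2))),
        "iad": float(np.sum(np.abs(diff))),
        "inad": float(np.sum(np.abs(diff) / (hl + hr + 1e-12))),
    }
```

A strongly asymmetric pair of histograms (low correlation, high IAD/INAD) would then flag the scan as potentially containing a tumor.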


Workshop on Applications of Computer Vision | 2005

A Dynamic Hidden Markov Random Field Model for Foreground and Shadow Segmentation

Yang Wang; Kia-Fock Loe; Tele Tan; Jian-Kang Wu

This paper proposes a dynamic hidden Markov random field (DHMRF) model for foreground object and moving shadow segmentation in indoor video scenes. Given an image sequence, temporal dependencies of consecutive segmentation fields and spatial dependencies within each segmentation field are unified in a novel dynamic probabilistic model that combines the hidden Markov model (HMM) and the Markov random field (MRF). An efficient approximate filtering algorithm is derived for the DHMRF model to recursively estimate the segmentation field from the history of observed images. The foreground and shadow segmentation method integrates both intensity and edge information. Moreover, models of background, shadow, and edge information are updated adaptively for nonstationary background processes. Experimental results show that the proposed approach can accurately detect moving objects and their cast shadows even in monocular grayscale video sequences.


Computer Vision and Pattern Recognition | 2000

Handwritten digit recognition with a novel vision model that extracts linearly separable features

Loo-Nin Teow; Kia-Fock Loe

We use well-established results in biological vision to construct a novel vision model for handwritten digit recognition. We show empirically that the features extracted by our model are linearly separable over a large training set (MNIST). Using only a linear classifier on these features, our model is relatively simple yet outperforms other models on the same data set.

Collaboration


Dive into Kia-Fock Loe's collaborations.

Top Co-Authors

Chiew-Lan Tai
Hong Kong University of Science and Technology

Yang Wang
National University of Singapore

Loo-Nin Teow
National University of Singapore

Loke-Soo Hsu
National University of Singapore

Sing-Chai Chan
National University of Singapore

Hoon-Heng Teh
National University of Singapore

Jimmy Liu Jiang
National University of Singapore