Publication


Featured research published by Terence Sim.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2003

The CMU pose, illumination, and expression database

Terence Sim; Simon Baker; Maan Bsat

In the Fall of 2000, we collected a database of more than 40,000 facial images of 68 people. Using the Carnegie Mellon University 3D Room, we imaged each person across 13 different poses, under 43 different illumination conditions, and with four different expressions. We call this the CMU pose, illumination, and expression (PIE) database. We describe the imaging hardware, the collection procedure, the organization of the images, several possible uses, and how to obtain the database.


IEEE International Conference on Automatic Face and Gesture Recognition | 2002

The CMU Pose, Illumination, and Expression (PIE) database

Terence Sim; Simon Baker; Maan Bsat

Between October 2000 and December 2000, we collected a database of over 40,000 facial images of 68 people. Using the CMU (Carnegie Mellon University) 3D Room, we imaged each person across 13 different poses, under 43 different illumination conditions, and with four different expressions. We call this database the CMU Pose, Illumination and Expression (PIE) database. In this paper, we describe the imaging hardware, the collection procedure, the organization of the database, several potential uses of the database, and how to obtain the database.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007

Continuous Verification Using Multimodal Biometrics

Terence Sim; Sheng Zhang; Rajkumar Janakiraman; Sandeep S. Kumar

Conventional verification systems, such as those controlling access to a secure room, do not usually require the user to reauthenticate himself for continued access to the protected resource. This may not be sufficient for high-security environments in which the protected resource needs to be continuously monitored for unauthorized use. In such cases, continuous verification is needed. In this paper, we present the theory, architecture, implementation, and performance of a multimodal biometrics verification system that continuously verifies the presence of a logged-in user. Two modalities are currently used - face and fingerprint - but our theory can be readily extended to include more modalities. We show that continuous verification imposes additional requirements on multimodal fusion when compared to conventional verification systems. We also argue that the usual performance metrics of false accept and false reject rates are insufficient yardsticks for continuous verification, and we propose new metrics against which we benchmark our system.
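A minimal sketch of the continuous-verification idea (an illustrative fusion rule, not the paper's exact scheme): each modality's most recent match score is discounted as it ages, and the discounted scores are fused into a running trust value that must stay above a threshold for the session to remain verified. The class name, weights, and half-life below are hypothetical.

```python
import time
import math

class ContinuousVerifier:
    """Illustrative score fusion with temporal decay (hypothetical parameters)."""

    def __init__(self, half_life_s=30.0, threshold=0.6, weights=None):
        self.half_life_s = half_life_s          # how quickly old observations lose weight
        self.threshold = threshold              # minimum fused trust to stay logged in
        self.weights = weights or {"face": 0.5, "fingerprint": 0.5}
        self.last = {}                          # modality -> (score in [0, 1], timestamp)

    def observe(self, modality, score, t=None):
        self.last[modality] = (score, t if t is not None else time.time())

    def trust(self, t=None):
        t = t if t is not None else time.time()
        fused, total_w = 0.0, 0.0
        for modality, w in self.weights.items():
            if modality not in self.last:
                continue
            score, ts = self.last[modality]
            decay = math.exp(-math.log(2) * (t - ts) / self.half_life_s)
            fused += w * score * decay
            total_w += w
        return fused / total_w if total_w else 0.0

    def is_verified(self, t=None):
        return self.trust(t) >= self.threshold
```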


Pattern Recognition | 2011

Defocus map estimation from a single image

Shaojie Zhuo; Terence Sim

In this paper, we address the challenging problem of recovering the defocus map from a single image. We present a simple yet effective approach to estimate the amount of spatially varying defocus blur at edge locations. The input defocused image is re-blurred using a Gaussian kernel and the defocus blur amount can be obtained from the ratio between the gradients of input and re-blurred images. By propagating the blur amount at edge locations to the entire image, a full defocus map can be obtained. Experimental results on synthetic and real images demonstrate the effectiveness of our method in providing a reliable estimation of the defocus map.
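A minimal sketch of the gradient-ratio step, assuming a grayscale image in [0, 1] and the step-edge relation sigma = sigma0 / sqrt(r^2 - 1), where r is the ratio of input to re-blurred gradient magnitude and sigma0 is the re-blur scale; the propagation of the sparse estimates to a full defocus map is omitted, and the edge detector and thresholds are placeholders.

```python
import numpy as np
from scipy import ndimage

def sparse_defocus_map(img, sigma0=1.0, edge_thresh=0.05):
    """Estimate defocus blur (sigma) at edge pixels from the gradient ratio
    between the input image and a re-blurred copy. Returns a map that is
    NaN away from edges; propagation to non-edge pixels is not shown."""
    img = img.astype(np.float64)
    reblurred = ndimage.gaussian_filter(img, sigma0)

    # Gradient magnitudes of the input and the re-blurred image.
    gx, gy = np.gradient(img)
    rgx, rgy = np.gradient(reblurred)
    g = np.hypot(gx, gy)
    rg = np.hypot(rgx, rgy)

    edges = g > edge_thresh                     # crude edge mask; Canny would be typical
    ratio = np.where(rg > 1e-8, g / np.maximum(rg, 1e-8), 1.0)

    sigma = np.full(img.shape, np.nan)
    valid = edges & (ratio > 1.0)               # ratio must exceed 1 for a real solution
    sigma[valid] = sigma0 / np.sqrt(ratio[valid] ** 2 - 1.0)
    return sigma
```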


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2007

Discriminant Subspace Analysis: A Fukunaga-Koontz Approach

Sheng Zhang; Terence Sim

The Fisher linear discriminant (FLD) is commonly used in pattern recognition. It finds a linear subspace that maximally separates class patterns according to the Fisher criterion. Several methods of computing the FLD have been proposed in the literature, most of which require the calculation of the so-called scatter matrices. In this paper, we bring a fresh perspective to FLD via the Fukunaga-Koontz transform (FKT). We do this by decomposing the whole data space into four subspaces with different discriminabilities, as measured by eigenvalue ratios. By connecting the eigenvalue ratio with the generalized eigenvalue, we show where the Fisher Criterion is maximally satisfied. We prove the relationship between FLD and FKT analytically and propose a unified framework to understanding some existing work. Furthermore, we extend our theory to the multiple discriminant analysis (MDA). This is done by transforming the data into intraclass and extraclass spaces, followed by maximizing the Bhattacharyya distance. Based on our FKT analysis, we identify the discriminant subspaces of MDA/FKT and propose an efficient algorithm, which works even when the scatter matrices are singular or too large to be formed. Our method is general and may be applied to different pattern recognition problems. We validate our method by experimenting on synthetic and real data.
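A minimal numerical sketch of the two-class Fukunaga-Koontz transform (illustrative; the paper's four-subspace analysis and MDA extension go further): whitening the summed scatter makes the two class scatter matrices share eigenvectors, with eigenvalues that sum to one, so the eigenvalue ratio directly measures discriminability.

```python
import numpy as np

def fukunaga_koontz(X1, X2, eps=1e-10):
    """X1, X2: (n_samples, n_features) data for two classes.
    Returns the shared eigenvectors (columns, in the original space) and the
    eigenvalues of class 1's whitened scatter; class 2's are 1 minus these."""
    S1 = np.cov(X1, rowvar=False)
    S2 = np.cov(X2, rowvar=False)

    # Whiten the summed scatter: P.T @ (S1 + S2) @ P = I on its range.
    vals, vecs = np.linalg.eigh(S1 + S2)
    keep = vals > eps * vals.max()
    P = vecs[:, keep] / np.sqrt(vals[keep])            # whitening transform

    # In the whitened space, S1 and S2 commute and share eigenvectors.
    S1w = P.T @ S1 @ P
    lam1, U = np.linalg.eigh(S1w)                      # eigenvalues of S2w are 1 - lam1

    W = P @ U                                          # map back to the original space
    return W, lam1

# Directions with lam1 near 1 (or near 0) are the most discriminative:
# class 1 (respectively class 2) dominates the variance along them.
```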


International Conference on Computer Vision | 2013

Multi-channel Correlation Filters

Hamed Kiani Galoogahi; Terence Sim; Simon Lucey

Modern descriptors like HOG and SIFT are now commonly used in vision for pattern detection within image and video. From a signal processing perspective, this detection process can be efficiently posed as a correlation/convolution between a multi-channel image and a multi-channel detector/filter, which results in a single-channel response map indicating where the pattern (e.g., object) has occurred. In this paper, we propose a novel framework for learning a multi-channel detector/filter efficiently in the frequency domain, both in terms of training time and memory footprint, which we refer to as a multi-channel correlation filter. To demonstrate the effectiveness of our strategy, we evaluate it across a number of visual detection/localization tasks where we: (i) exhibit superior performance to current state-of-the-art correlation filters, and (ii) achieve superior computational and memory efficiency compared to state-of-the-art spatial detectors.
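A single-training-image sketch of frequency-domain learning (an illustrative ridge-regression formulation, not the paper's exact objective or constraints): the problem decouples per frequency into a small K x K linear system, where K is the number of channels, which is what makes training cheap in time and memory.

```python
import numpy as np

def train_mccf(x, y, lam=1e-2):
    """x: (K, H, W) multi-channel image (e.g. HOG channels),
    y: (H, W) desired response (e.g. a Gaussian peak at the target).
    Returns the learned filter as a (K, H, W) array."""
    K, H, W = x.shape
    X = np.fft.fft2(x, axes=(-2, -1))                  # per-channel spectra
    Y = np.fft.fft2(y)

    # Arrange spectra as (H*W, K): each row is one frequency's channel vector.
    Xf = X.reshape(K, -1).T
    Yf = Y.reshape(-1)

    Hf = np.empty_like(Xf)
    I = lam * np.eye(K)
    for f in range(Xf.shape[0]):
        xf = Xf[f][:, None]                            # (K, 1)
        A = xf @ xf.conj().T + I                       # per-frequency K x K system
        Hf[f] = np.linalg.solve(A, (xf * np.conj(Yf[f])).ravel())
    return np.fft.ifft2(Hf.T.reshape(K, H, W), axes=(-2, -1)).real

def detect(h, z):
    """Correlate the learned filter h with a test image z, both (K, H, W)."""
    Hh = np.fft.fft2(h, axes=(-2, -1))
    Z = np.fft.fft2(z, axes=(-2, -1))
    R = np.sum(np.conj(Hh) * Z, axis=0)                # sum responses over channels
    return np.fft.ifft2(R).real                        # response map; peak = detection
```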


Computer Vision and Pattern Recognition | 2009

Digital face makeup by example

Dong Guo; Terence Sim

This paper introduces an approach to creating face makeup upon a face image, using another image as the style example. Our approach is analogous to physical makeup, as we modify the color and skin detail while preserving the face structure. More precisely, we first decompose the two images into three layers: face structure layer, skin detail layer, and color layer. Thereafter, we transfer information from each layer of one image to the corresponding layer of the other image. One major advantage of the proposed method is that only one example image is required, which renders face makeup by example very convenient and practical. It also enables additional interesting applications, such as applying makeup from a portrait. The experimental results demonstrate the effectiveness of the proposed approach in faithfully transferring makeup.
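A heavily simplified sketch of the layer decomposition and transfer, assuming the example face has already been warped into alignment with the subject; the paper's edge-preserving decomposition, lip handling, and blending are replaced here by a bilateral filter in CIELAB with illustrative parameters.

```python
import cv2
import numpy as np

def transfer_makeup(subject_bgr, example_bgr, detail_gain=1.0, color_blend=0.8):
    """subject_bgr, example_bgr: aligned uint8 BGR face images of the same size."""

    def decompose(bgr):
        # Lightness splits into a large-scale structure layer and a skin-detail
        # residual; the a/b chroma channels form the color layer.
        lab = cv2.cvtColor(bgr, cv2.COLOR_BGR2LAB).astype(np.float32)
        L, a, b = cv2.split(lab)
        structure = cv2.bilateralFilter(L, 9, 30, 9)   # edge-preserving smoothing
        detail = L - structure
        return structure, detail, a, b

    s_struct, _, s_a, s_b = decompose(subject_bgr)
    _, e_detail, e_a, e_b = decompose(example_bgr)

    # Keep the subject's face structure; take skin detail and color from the example.
    L_out = np.clip(s_struct + detail_gain * e_detail, 0, 255)
    a_out = (1 - color_blend) * s_a + color_blend * e_a
    b_out = (1 - color_blend) * s_b + color_blend * e_b

    lab_out = cv2.merge([L_out, a_out, b_out]).astype(np.uint8)
    return cv2.cvtColor(lab_out, cv2.COLOR_LAB2BGR)
```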


Computer Vision and Pattern Recognition | 2008

Enhancing photographs with Near Infra-Red images

Xiaopeng Zhang; Terence Sim; Xiaoping Miao

Near infra-red (NIR) images of natural scenes usually have better contrast and contain rich texture details that may not be perceived in visible light photographs (VIS). In this paper, we propose a novel method to enhance a photograph by using the contrast and texture information of its corresponding NIR image. More precisely, we first decompose the NIR/VIS pair into average and detail wavelet subbands. We then transfer the contrast in the average subband and transfer texture in the detail subbands. We built a special camera mount that optically aligns two consumer-grade digital cameras, one of which was modified to capture NIR. Our results exhibit higher visual quality than tone-mapped HDR images, showing that NIR imaging is useful for computational photography.
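A simplified sketch of the wavelet transfer, assuming a registered NIR/VIS pair as grayscale float arrays (uses the PyWavelets package); the paper's specific contrast- and texture-transfer rules are replaced here by blending the approximation bands and keeping the larger-magnitude coefficient in each detail subband.

```python
import numpy as np
import pywt

def enhance_with_nir(vis, nir, wavelet="db4", level=3, contrast_blend=0.5):
    """vis, nir: registered grayscale images (float, same shape)."""
    vis_c = pywt.wavedec2(vis, wavelet, level=level)
    nir_c = pywt.wavedec2(nir, wavelet, level=level)

    # Approximation (average) subband: blend in NIR contrast.
    fused = [contrast_blend * nir_c[0] + (1 - contrast_blend) * vis_c[0]]

    for (vH, vV, vD), (nH, nV, nD) in zip(vis_c[1:], nir_c[1:]):
        # Detail subbands: keep whichever coefficient has larger magnitude,
        # so NIR texture replaces washed-out visible-light detail.
        fused.append(tuple(np.where(np.abs(n) > np.abs(v), n, v)
                           for v, n in ((vH, nH), (vV, nV), (vD, nD))))
    return pywt.waverec2(fused, wavelet)
```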


IEEE International Conference on Automatic Face and Gesture Recognition | 2000

Memory-based face recognition for visitor identification

Terence Sim; Rahul Sukthankar; Matthew D. Mullin; Shumeet Baluja

We show that a simple, memory-based technique for appearance-based face recognition, motivated by the real-world task of visitor identification, can outperform more sophisticated algorithms that use principal components analysis (PCA) and neural networks. This technique is closely related to correlation templates; however, we show that the use of novel similarity measures greatly improves performance. We also show that augmenting the memory base with additional, synthetic face images results in further improvements in performance. Results of extensive empirical testing on two standard face recognition datasets are presented, and direct comparisons with published work show that our algorithm achieves comparable (or superior) results. Our system is incorporated into an automated visitor identification system that has been operating successfully in an outdoor environment since January 1999.
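A minimal sketch of a memory-based recognizer using normalized correlation as the similarity measure (illustrative only; the paper studies several similarity measures and synthetic-image augmentation):

```python
import numpy as np

def normalized_correlation(a, b):
    """Similarity between two images, invariant to brightness offset and gain."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    return float(a @ b / denom) if denom > 0 else 0.0

class MemoryBasedRecognizer:
    """Stores labeled face images and classifies by the most similar stored image."""

    def __init__(self):
        self.images, self.labels = [], []

    def add(self, image, label):
        self.images.append(np.asarray(image, dtype=np.float64))
        self.labels.append(label)

    def identify(self, probe):
        probe = np.asarray(probe, dtype=np.float64)
        sims = [normalized_correlation(probe, img) for img in self.images]
        best = int(np.argmax(sims))
        return self.labels[best], sims[best]
```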


Computer Vision and Pattern Recognition | 2010

Robust flash deblurring

Shaojie Zhuo; Dong Guo; Terence Sim

Motion blur due to camera shake is an annoying yet common problem in low-light photography. In this paper, we propose a novel method to recover a sharp image from a pair of motion blurred and flash images, consecutively captured using a hand-held camera. We first introduce a robust flash gradient constraint by exploiting the correlation between a sharp image and its corresponding flash image. Then we formulate our flash deblurring as solving a maximum-a-posteriori problem under the flash gradient constraint. We solve the problem by performing kernel estimation and non-blind deconvolution iteratively, leading to an accurate blur kernel and a reconstructed image with fine image details. Experiments on both synthetic and real images show the superiority of our method compared with existing methods.
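A sketch of only the non-blind deconvolution step with a flash-gradient constraint, assuming the blur kernel is already known; the paper alternates kernel estimation and deconvolution, whereas this is a simplified closed-form least-squares version solved in the Fourier domain, with an illustrative weight alpha.

```python
import numpy as np

def psf_to_otf(kernel, shape):
    """Zero-pad the kernel to the image size and center it at the origin."""
    otf = np.zeros(shape)
    kh, kw = kernel.shape
    otf[:kh, :kw] = kernel
    otf = np.roll(otf, (-(kh // 2), -(kw // 2)), axis=(0, 1))
    return np.fft.fft2(otf)

def flash_guided_deconv(blurred, flash, kernel, alpha=0.05):
    """Minimize ||k*x - blurred||^2 + alpha*||grad(x) - grad(flash)||^2 in closed form."""
    K = psf_to_otf(kernel, blurred.shape)
    Dx = psf_to_otf(np.array([[1.0, -1.0]]), blurred.shape)    # horizontal derivative
    Dy = psf_to_otf(np.array([[1.0], [-1.0]]), blurred.shape)  # vertical derivative

    B = np.fft.fft2(blurred)
    F = np.fft.fft2(flash)
    grad_energy = np.abs(Dx) ** 2 + np.abs(Dy) ** 2

    numer = np.conj(K) * B + alpha * grad_energy * F
    denom = np.abs(K) ** 2 + alpha * grad_energy
    return np.real(np.fft.ifft2(numer / denom))
```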

Collaboration


Dive into Terence Sim's collaborations.

Top Co-Authors

Shuicheng Yan, National University of Singapore
Yu Zhang, National University of Singapore
Jiashi Feng, National University of Singapore
Chew Lim Tan, National University of Singapore
Jianshu Li, National University of Singapore
Dong Guo, National University of Singapore
Li Zhang, National University of Singapore
Rajkumar Janakiraman, National University of Singapore
Sheng Zhang, University of California