Publication


Featured research published by Yee-Hui Oh.


PLOS ONE | 2015

Efficient spatio-temporal local binary patterns for spontaneous facial micro-expression recognition

Yandan Wang; John See; Raphael C.-W. Phan; Yee-Hui Oh

Micro-expression recognition is still at a preliminary stage, owing much to the numerous difficulties faced in the development of datasets. Since micro-expressions are an important affective clue for clinical diagnosis and deceit analysis, much effort has gone into the creation of these datasets for research purposes. There are currently two publicly available spontaneous micro-expression datasets, SMIC and CASME II, both with baseline results released using the widely used dynamic texture descriptor LBP-TOP for feature extraction. Although LBP-TOP is popular and widely used, it is still not compact enough. In this paper, we draw further inspiration from the LBP-TOP concept of three orthogonal planes and propose two efficient approaches for feature extraction. The compact, robust LBP-Six Intersection Points (SIP) and the super-compact LBP-Three Mean Orthogonal Planes (MOP) not only preserve the essential patterns but also reduce the redundancy that affects the discriminability of the encoded features. Through a comprehensive set of experiments, we demonstrate the strengths of our approaches in terms of recognition accuracy and efficiency.
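To make the shared building block of these descriptors concrete, here is a minimal sketch of the basic 8-neighbour LBP code on a single 2-D plane, in pure Python and purely for illustration (not the authors' implementation). LBP-TOP computes such codes on the XY, XT, and YT planes of a video volume and concatenates the three histograms:

```python
def lbp_code(plane, cy, cx):
    """Basic 8-neighbour LBP code for the pixel at (cy, cx).

    Each neighbour contributes one bit: 1 if its intensity is greater
    than or equal to the centre pixel's, 0 otherwise.
    """
    center = plane[cy][cx]
    # 8 neighbours at radius 1, sampled clockwise from the top-left.
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    code = 0
    for bit, (dy, dx) in enumerate(offsets):
        if plane[cy + dy][cx + dx] >= center:
            code |= 1 << bit
    return code

frame = [[10, 20, 30],
         [40, 25, 10],
         [ 5, 30, 50]]
print(lbp_code(frame, 1, 1))  # bits set for the 4 neighbours >= 25
```

A per-block histogram of these codes, computed on each of the three orthogonal planes, yields the LBP-TOP feature that LBP-SIP and LBP-MOP then compress.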


Asian Conference on Computer Vision | 2014

LBP with Six Intersection Points: Reducing Redundant Information in LBP-TOP for Micro-expression Recognition

Yandan Wang; John See; Raphael C.-W. Phan; Yee-Hui Oh

Facial micro-expression recognition is an upcoming area in computer vision research. Up until the recent emergence of the extensive CASME II spontaneous micro-expression database, there were numerous obstacles in the elicitation and labeling of data involving facial micro-expressions. In this paper, we propose the Local Binary Patterns with Six Intersection Points (LBP-SIP) volumetric descriptor, based on the three intersecting lines crossing over the center point. LBP-SIP reduces the redundancy in LBP-TOP patterns, providing a more compact and lightweight representation with lower computational complexity. Furthermore, we incorporate a Gaussian multi-resolution pyramid into our approach by concatenating the patterns across all pyramid levels. Using an SVM classifier with leave-one-sample-out cross-validation, we achieve a best recognition accuracy of 67.21%, surpassing the baseline performance with further computational efficiency.
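The six intersection points can be read as the six axis neighbours of the centre voxel in the video volume (the two crossings of each of the three orthogonal lines). The 6-bit code below is our illustrative reading of that idea, not the authors' code:

```python
def lbp_sip_code(volume, t, y, x):
    """Illustrative 6-bit LBP-SIP-style code: compare the centre voxel
    against the six neighbours along the x, y and t axes through it.
    volume is indexed as volume[t][y][x].
    """
    center = volume[t][y][x]
    neighbours = [
        volume[t][y][x - 1], volume[t][y][x + 1],  # spatial x line
        volume[t][y - 1][x], volume[t][y + 1][x],  # spatial y line
        volume[t - 1][y][x], volume[t + 1][y][x],  # temporal t line
    ]
    code = 0
    for bit, n in enumerate(neighbours):
        if n >= center:
            code |= 1 << bit
    return code
```

With six comparison bits instead of three 8-bit codes, the pattern space shrinks from 3 x 256 to 64 — a rough illustration of why the representation is lighter than LBP-TOP.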


Asian Conference on Computer Vision | 2014

Subtle Expression Recognition Using Optical Strain Weighted Features

Sze-Teng Liong; John See; Raphael C.-W. Phan; Anh Cat Le Ngo; Yee-Hui Oh; KokSheik Wong

Optical strain characterizes the relative amount of displacement of a moving object within a time interval. Its ability to capture small muscular movements on the face makes it advantageous for subtle expression research. This paper proposes a novel optical strain weighted feature extraction scheme for subtle facial micro-expression recognition. Motion information is derived from optical strain magnitudes, which are then pooled spatio-temporally to obtain block-wise weights for the spatial image plane. Through a simple product with these weights, the resulting feature histograms are intuitively scaled to reflect the importance of block regions. Experiments conducted on two recent spontaneous micro-expression databases, CASME II and SMIC, demonstrate promising improvement over the baseline results.
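The strain magnitudes that drive the weights can be sketched from a dense optical-flow field. The sketch below is pure Python and illustrative only (the flow field itself and any smoothing are assumed to come from elsewhere); it uses one common definition, the symmetric gradient of the flow:

```python
import math

def strain_magnitude(u, v, y, x):
    """Optical strain magnitude at pixel (y, x) of a dense flow field.

    u[y][x] and v[y][x] are the horizontal and vertical flow components;
    spatial derivatives are approximated with central differences.
    """
    du_dx = (u[y][x + 1] - u[y][x - 1]) / 2.0
    du_dy = (u[y + 1][x] - u[y - 1][x]) / 2.0
    dv_dx = (v[y][x + 1] - v[y][x - 1]) / 2.0
    dv_dy = (v[y + 1][x] - v[y - 1][x]) / 2.0
    # Components of the symmetric strain tensor e = (J + J^T) / 2.
    e_xx, e_yy = du_dx, dv_dy
    e_xy = 0.5 * (du_dy + dv_dx)
    # Frobenius-style magnitude of the tensor.
    return math.sqrt(e_xx ** 2 + e_yy ** 2 + 2.0 * e_xy ** 2)
```

Pooling these magnitudes per block and multiplying each block's feature histogram by its pooled weight gives the weighted features described above.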


International Symposium on Intelligent Signal Processing and Communication Systems | 2014

Optical strain based recognition of subtle emotions

Sze-Teng Liong; Raphael C.-W. Phan; John See; Yee-Hui Oh; KokSheik Wong

This paper presents a novel method to recognize subtle emotions based on optical strain magnitude feature extraction from the temporal point of view. Subtle emotions are commonly exhibited in the form of visually observed micro-expressions, which usually occur only over a brief period of time. Optical strain allows small deformations on the face to be computed between successive frames, even when these subtle changes are minute. We perform temporal sum pooling, collapsing the strain maps of all frames in a video into a single strain map that summarizes the features over time. To reduce the dimensionality of the input space, the strain maps are then resized to a pre-defined resolution for consistency across the database. Experiments were conducted on the SMIC (Spontaneous Micro-expression) database, which was established in 2013. A best three-class recognition accuracy of 53.56% is achieved, with the proposed method outperforming the baseline reported in the original work by almost 5%. This is the first known optical strain based classification of micro-expressions; the closest related work employed optical strain to spot micro-expressions but did not investigate its potential for determining the specific type of micro-expression.
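The temporal sum pooling step can be sketched directly: per-frame strain maps (however they were computed) are summed element-wise into one map. Pure Python, illustrative only:

```python
def temporal_sum_pool(strain_maps):
    """Collapse a list of per-frame strain maps (2-D lists of equal size)
    into a single map by summing each pixel over time.
    """
    h, w = len(strain_maps[0]), len(strain_maps[0][0])
    pooled = [[0.0] * w for _ in range(h)]
    for smap in strain_maps:
        for y in range(h):
            for x in range(w):
                pooled[y][x] += smap[y][x]
    return pooled
```

The pooled map would then be resized to the pre-defined resolution before classification; that resizing step is omitted here.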


International Conference on Digital Signal Processing | 2015

Monogenic Riesz wavelet representation for micro-expression recognition

Yee-Hui Oh; Anh Cat Le Ngo; John See; Sze-Teng Liong; Raphael C.-W. Phan; Huo-Chong Ling

A monogenic signal is a two-dimensional analytical signal that provides local information on magnitude, phase, and orientation. While it has been applied in the fields of face and expression recognition [1], [2], [3], it has not previously been applied to subtle facial micro-expressions. In this paper, we propose a feature representation method that succinctly captures these three low-level components at multiple scales. The Riesz wavelet transform is employed to obtain multi-scale monogenic wavelets, which are formulated in a quaternion representation. Instead of summing up the multi-scale monogenic representations, we consider the monogenic representations at each scale as individual features. For classification, two schemes are applied to integrate these multiple feature representations: a fusion-based method, which combines the features efficiently and discriminatively using the ultra-fast, optimized Multiple Kernel Learning (UFO-MKL) algorithm; and a concatenation-based method, where the features are combined into a single feature vector and classified by a linear SVM. Experiments carried out on a recent spontaneous micro-expression database demonstrate the capability of the proposed method to outperform the state-of-the-art monogenic signal approach to the micro-expression recognition problem.
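The three low-level components follow directly from the monogenic signal's definition. Assuming the band-pass image value f and the two Riesz-transform components r1, r2 at a pixel are already available (their computation is omitted here), the features are:

```python
import math

def monogenic_features(f, r1, r2):
    """Local magnitude, phase and orientation of a monogenic signal,
    given the image value f and Riesz components r1, r2 at one pixel.
    """
    magnitude = math.sqrt(f * f + r1 * r1 + r2 * r2)
    phase = math.atan2(math.hypot(r1, r2), f)  # in [0, pi]
    orientation = math.atan2(r2, r1)           # local orientation angle
    return magnitude, phase, orientation
```

Collecting these triples at each scale of the Riesz wavelet transform, rather than summing across scales, yields the multi-scale features that are then fused (UFO-MKL) or concatenated (linear SVM).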


Signal Processing: Image Communication | 2016

Spontaneous subtle expression detection and recognition based on facial strain

Sze-Teng Liong; John See; Raphael C.-W. Phan; Yee-Hui Oh; Anh Cat Le Ngo; KokSheik Wong; Su-Wei Tan

Optical strain is an extension of optical flow that is capable of quantifying subtle changes on faces and representing minute facial motion intensities at the pixel level. This is computationally essential for the relatively new field of spontaneous micro-expressions, where subtle expressions can be technically challenging to pinpoint. In this paper, we present a novel method for detecting and recognizing micro-expressions by utilizing facial optical strain magnitudes to construct optical strain features and optical strain weighted features. The two sets of features are then concatenated to form the resultant feature histogram. Experiments were performed on the CASME II and SMIC databases. We demonstrate on both databases the usefulness of optical strain information and, more importantly, that our best approaches outperform the original baseline results for both detection and recognition tasks. A comparison of the proposed method with other existing spatio-temporal feature extraction approaches is also presented.

Highlights: The proposed method combines two optical strain derived features; optical strain magnitudes are employed to describe fine subtle facial movements; evaluation is performed on both the detection and recognition tasks; promising performance is obtained on two micro-expression databases.


Multimedia Tools and Applications | 2017

Effective recognition of facial micro-expressions with video motion magnification

Yandan Wang; John See; Yee-Hui Oh; Raphael C.-W. Phan; Yogachandran Rahulamathavan; Huo-Chong Ling; Su-Wei Tan; Xujie Li

Facial expression recognition has been intensively studied for decades, notably by the psychology community and more recently the pattern recognition community. More challenging, and the subject of more recent research, is the problem of recognizing subtle emotions exhibited by so-called micro-expressions. Recognizing a micro-expression is substantially harder than conventional expression recognition because micro-expressions are exhibited for only a fraction of a second and involve minute spatial changes. Work in this field remains at a nascent stage, with only a few existing micro-expression databases and methods. In this article, we propose a new micro-expression recognition approach based on the Eulerian motion magnification technique, which can reveal hidden information and accentuate the subtle changes in micro-expression motion. Validation was performed on the recently proposed CASME II dataset in comparison with baseline and state-of-the-art methods. We achieve a good recognition accuracy of up to 75.30% using a leave-one-out cross-validation protocol. Extensive experiments on the various factors at play further demonstrate the effectiveness of our proposed approach.


International Conference on Acoustics, Speech, and Signal Processing | 2016

Eulerian emotion magnification for subtle expression recognition

Anh Cat Le Ngo; Yee-Hui Oh; Raphael C.-W. Phan; John See

Subtle emotions are expressed through tiny and brief movements of facial muscles, called micro-expressions; thus, recognition of these hidden expressions is as challenging as inspection of microscopic worlds without microscopes. In this paper, we show that through motion magnification, subtle expressions can be realistically exaggerated and become more easily recognisable. We magnify motions of facial expressions in the Eulerian perspective by manipulating their amplitudes or phases. To evaluate effects of exaggerating facial expressions, we use a common framework (LBP-TOP features and SVM classifiers) to perform 5-class subtle emotion recognition on the CASME II corpus, a spontaneous subtle emotion database. According to experimental results, significant improvements in recognition rates of magnified micro-expressions over normal ones are confirmed and measured. Furthermore, we estimate upper bounds of effective magnification factors and empirically corroborate these theoretical calculations with experimental data.
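The amplitude-magnification idea can be shown on a toy 1-D temporal signal: deviations of a pixel's intensity from its temporal mean stand in for the band-passed variation and are scaled by a factor alpha. (Real Eulerian magnification band-pass filters a spatially decomposed video; all of that is omitted in this illustrative sketch.)

```python
def magnify_amplitude(signal, alpha):
    """Toy Eulerian-style magnification of a per-pixel intensity signal
    over time: amplify deviations from the temporal mean by alpha.
    """
    mean = sum(signal) / len(signal)
    return [mean + (1.0 + alpha) * (s - mean) for s in signal]

# A barely visible fluctuation becomes pronounced after magnification.
print(magnify_amplitude([100.0, 100.2, 100.1, 99.9], alpha=10))
```

The upper bound on useful magnification factors mentioned above corresponds to the point where amplified noise and artifacts start to overwhelm the genuine motion signal.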


International Conference on Acoustics, Speech, and Signal Processing | 2016

Intrinsic two-dimensional local structures for micro-expression recognition

Yee-Hui Oh; Anh Cat Le Ngo; Raphael C.-W. Phan; John See; Huo-Chong Ling

A facial expression of emotion involves changes in the facial contour due to motions (such as contraction or stretching) of the facial muscles located around the eyes, nose, and lips. Thus, important information such as the corners of facial contours located in various regions of the face is crucial to the recognition of facial expressions, and even more so for micro-expressions. In this paper, we propose the first known use of intrinsic two-dimensional (i2D) local structures to represent these features for micro-expression recognition. To retrieve i2D local structures such as phase and orientation, higher-order Riesz transforms are employed by means of monogenic curvature tensors. Experiments performed on micro-expression datasets show the effectiveness of i2D local structures in recognizing micro-expressions.


Asian Conference on Pattern Recognition | 2015

Automatic apex frame spotting in micro-expression database

Sze-Teng Liong; John See; KokSheik Wong; Anh Cat Le Ngo; Yee-Hui Oh; Raphael C.-W. Phan

Micro-expressions usually occur in high-stakes situations and may provide useful information in the field of behavioral psychology for better interpretation and analysis. Unfortunately, it is technically challenging to detect and recognize micro-expressions due to their brief duration and subtle facial distortions. The apex frame, the instant indicating the most expressive emotional state in a video, is effective for classifying the emotion in that particular frame. In this work, we present a novel method to spot the apex frame of a spontaneous micro-expression video sequence. A binary search approach is employed to locate the index of the frame in which the peak facial changes occur. Features from specific facial regions are extracted to better represent and describe the expression details. The facial regions are selected based on the action units and landmark coordinates of the subject, and these processes are automated. We consider three distinct feature descriptors to evaluate the reliability of the proposed approach. Improvements of at least 20% are achieved compared to the baselines.
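The halving idea behind such a binary search can be sketched as follows. Here `scores` stands in for a per-frame expressiveness score (e.g. the feature difference of each frame from the onset frame); the paper's exact features and comparison rule are not reproduced, so this is an illustrative simplification:

```python
def spot_apex(scores):
    """Binary-search-style apex spotting: repeatedly halve the frame
    range, keeping whichever half carries the larger total score, until
    a single frame index remains.
    """
    lo, hi = 0, len(scores) - 1
    while lo < hi:
        mid = (lo + hi) // 2
        left = sum(scores[lo:mid + 1])
        right = sum(scores[mid + 1:hi + 1])
        if left >= right:
            hi = mid
        else:
            lo = mid + 1
    return lo

# On a roughly unimodal score sequence the search converges to the peak.
print(spot_apex([0, 1, 2, 5, 2, 1, 0]))  # frame index 3
```

For a roughly unimodal score sequence the halving homes in on the peak in a logarithmic number of splits, which is what makes the approach cheap compared to scoring every candidate pair of frames.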

Collaboration


Yee-Hui Oh's top collaborators.

Top Co-Authors


John See

Multimedia University


KokSheik Wong

Monash University Malaysia Campus
