
Publication


Featured research published by Kha Gia Quach.


Computer Vision and Pattern Recognition | 2015

Beyond Principal Components: Deep Boltzmann Machines for face modeling

Chi Nhan Duong; Khoa Luu; Kha Gia Quach; Tien D. Bui

The “interpretation through synthesis” approach, i.e., the Active Appearance Models (AAMs) method, has received considerable attention over the past decades. It aims at “explaining” face images by synthesizing them via a parameterized model of appearance. This is quite challenging due to appearance variations in human face images, e.g., facial poses, occlusions, lighting, low resolution, etc. Since these variations are mostly non-linear, they cannot be adequately represented by a linear model such as Principal Component Analysis (PCA). This paper presents a novel Deep Appearance Models (DAMs) approach, an efficient replacement for AAMs, to accurately capture both the shape and texture of face images under large variations. In this approach, three crucial components represented in hierarchical layers are modeled using Deep Boltzmann Machines (DBM) to robustly capture the variations of facial shapes and appearances. DAMs are therefore superior to AAMs in inferring a representation for new face images under various challenging conditions. In addition, DAMs have the ability to generate a compact set of parameters in a higher-level representation that can be used for classification, e.g., face recognition and facial age estimation. The proposed approach is evaluated on facial image reconstruction and facial super-resolution using two databases, i.e., LFPW and Helen, and on the FG-NET database for the problem of age estimation.
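
This is not the authors' implementation, only a minimal sketch of the layered idea, using scikit-learn's BernoulliRBM as a rough stand-in for the true Deep Boltzmann Machines; the data, layer sizes, and hyperparameters below are all illustrative assumptions.

```python
# Sketch of a DAM-like hierarchy using stacked RBMs.
# NOTE: BernoulliRBM is only a stand-in for the paper's DBMs;
# shapes, textures, and layer sizes are hypothetical.
import numpy as np
from sklearn.neural_network import BernoulliRBM

rng = np.random.RandomState(0)
shapes   = rng.rand(200, 136)    # hypothetical: 68 landmarks -> 136 coords in [0, 1]
textures = rng.rand(200, 1024)   # hypothetical: 32x32 warped face textures in [0, 1]

# One RBM per modality captures shape / texture variations separately.
shape_rbm   = BernoulliRBM(n_components=64,  learning_rate=0.05, n_iter=20,
                           random_state=0).fit(shapes)
texture_rbm = BernoulliRBM(n_components=256, learning_rate=0.05, n_iter=20,
                           random_state=0).fit(textures)

# A joint RBM on top of the concatenated hidden activations ties the two
# modalities together, mirroring the hierarchical layers in the abstract.
hidden = np.hstack([shape_rbm.transform(shapes), texture_rbm.transform(textures)])
joint_rbm = BernoulliRBM(n_components=128, learning_rate=0.05, n_iter=20,
                         random_state=0).fit(hidden)

# The top-layer activations play the role of the compact parameter set
# the abstract mentions for recognition or age estimation.
params = joint_rbm.transform(hidden)
print(params.shape)  # (200, 128)
```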


Canadian Journal of Remote Sensing | 2014

Denoising Hyperspectral Imagery Using Principal Component Analysis and Block-Matching 4D Filtering

Guangyi Chen; Tien D. Bui; Kha Gia Quach; Shen-En Qian

In this article, we propose a new method for denoising hyperspectral imagery. Hyperspectral imagery normally contains a small amount of noise, which can hardly be seen by the human eye because of its relatively high signal-to-noise ratio. However, in many remote sensing applications, this amount of noise is still troublesome. In this study, we first apply principal component analysis (PCA) to the hyperspectral data cube to be denoised in order to separate the fine features from the noise. Because the first few PCA output channels contain the majority of the information in the hyperspectral data cube, we do not denoise these PCA output channel images. We use block-matching 4D (BM4D) filtering to reduce the noise in the remaining low-energy noisy PCA output channel images. Finally, an inverse PCA transform is performed to obtain the denoised hyperspectral data cube. Experimental results show that our proposed method is very competitive when compared with existing methods for hyperspectral imagery denoising.
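
The pipeline in the abstract is concrete enough to sketch. Below is a hedged outline of the PCA-then-denoise-then-inverse-PCA flow; `denoise_channel` is a hypothetical placeholder (a Gaussian filter standing in for BM4D, which the sketch does not reimplement), and the choice of how many channels to keep is an assumption.

```python
# Sketch of the PCA -> selective denoising -> inverse PCA pipeline.
import numpy as np
from scipy.ndimage import gaussian_filter

def denoise_channel(img):
    # Placeholder for BM4D filtering; any 2D denoiser could be substituted.
    return gaussian_filter(img, sigma=1.0)

def denoise_hsi(cube, n_keep=3):
    """cube: (H, W, B) hyperspectral data cube; n_keep: number of
    high-energy PCA channels left untouched (they hold most of the signal)."""
    H, W, B = cube.shape
    X = cube.reshape(-1, B).astype(np.float64)
    mean = X.mean(axis=0)
    Xc = X - mean
    # PCA over spectral bands via SVD; rows of Vt are principal directions.
    U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
    chan = (Xc @ Vt.T).reshape(H, W, B)   # PCA output channel images
    for b in range(n_keep, B):            # denoise only low-energy channels
        chan[..., b] = denoise_channel(chan[..., b])
    # Inverse PCA transform back to the original band space.
    Xd = chan.reshape(-1, B) @ Vt + mean
    return Xd.reshape(H, W, B)
```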


Computer Vision and Pattern Recognition | 2016

Longitudinal Face Modeling via Temporal Deep Restricted Boltzmann Machines

Chi Nhan Duong; Khoa Luu; Kha Gia Quach; Tien D. Bui

Modeling the face aging process is a challenging task due to the large and non-linear variations present in different stages of face development. This paper presents a deep-model approach to face age progression that can efficiently capture the non-linear aging process and automatically synthesize a series of age-progressed faces in various age ranges. In this approach, we first decompose the long-term age progression into a sequence of short-term changes and model it as a face sequence. A Temporal Deep Restricted Boltzmann Machines based age-progression model, together with prototype faces, is then constructed to learn the aging transformation between faces in the sequence. In addition, to enhance the wrinkles of faces in the later age ranges, wrinkle models are further constructed using Restricted Boltzmann Machines to capture their variations in different facial regions. Geometry constraints are also taken into account in the last step for more consistent age-progressed results. The proposed approach is evaluated on various face aging databases, i.e., FG-NET, the Cross-Age Celebrity Dataset (CACD) and MORPH, as well as our collected large-scale aging database named AginG Faces in the Wild (AGFW). In addition, when the ground-truth age is not available for an input image, the proposed system automatically estimates the age of the input face before the aging process is applied.
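
The core decomposition idea (long-term progression as a chain of short-term transformations) can be shown schematically. In this sketch, `short_term_step` is a purely hypothetical stand-in for the learned temporal-model transition, and the 5-year step size is an assumption; only the control flow reflects the abstract.

```python
# Schematic of long-term age progression as repeated short-term steps.
def short_term_step(face, age):
    # Placeholder for the learned temporal transition (face at `age`
    # -> face at `age + step`); returns the input unchanged so the
    # loop below is runnable.
    return face

def progress_face(face, current_age, target_age, step_years=5):
    """Apply the short-term model repeatedly to span a long age gap."""
    age = current_age
    while age < target_age:
        face = short_term_step(face, age)
        age += step_years
    return face
```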


International Journal of Computer Vision | 2018

Deep Appearance Models: A Deep Boltzmann Machine Approach for Face Modeling

Chi Nhan Duong; Khoa Luu; Kha Gia Quach; Tien D. Bui

The “interpretation through synthesis” approach to analyzing face images, particularly the Active Appearance Models (AAMs) method, has become one of the most successful face modeling approaches over the last two decades. AAMs are able to represent face images through synthesis using a controllable, parameterized Principal Component Analysis (PCA) model. However, the accuracy and robustness of the faces synthesized by AAMs depend heavily on the training sets and, inherently, on the generalizability of the PCA subspaces. This paper presents a novel Deep Appearance Models (DAMs) approach, an efficient replacement for AAMs, to accurately capture both the shape and texture of face images under large variations. In this approach, three crucial components represented in hierarchical layers are modeled using Deep Boltzmann Machines (DBM) to robustly capture the variations of facial shapes and appearances. DAMs are therefore superior to AAMs in inferring a representation for new face images under various challenging conditions. The proposed approach is evaluated on various applications to demonstrate its robustness and capabilities, i.e., facial super-resolution reconstruction, facial off-angle reconstruction (face frontalization), facial occlusion removal, and age estimation, using challenging face databases, i.e., Labeled Face Parts in the Wild, Helen, and FG-NET. Compared with AAMs and other deep-learning-based approaches, the proposed DAMs achieve competitive results in these applications, demonstrating their advantages in handling occlusions, facial representation, and reconstruction.
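
One downstream use named in this line of work is feeding the compact higher-level parameters to a standard predictor, e.g., for age estimation. The sketch below assumes such features already exist; the random `dam_params`, ages, and the SVR choice are all illustrative stand-ins, not the paper's setup.

```python
# Sketch: compact model parameters as features for age estimation.
import numpy as np
from sklearn.svm import SVR

rng = np.random.RandomState(0)
dam_params = rng.rand(300, 128)        # hypothetical top-layer DAM parameters
ages = rng.uniform(0, 70, size=300)    # hypothetical ground-truth ages

# Any off-the-shelf regressor can consume the compact representation.
regressor = SVR(kernel="rbf", C=10.0).fit(dam_params[:250], ages[:250])
predicted = regressor.predict(dam_params[250:])
mae = np.abs(predicted - ages[250:]).mean()
print(f"MAE on held-out synthetic samples: {mae:.1f} years")
```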


International Conference on Pattern Recognition | 2014

Sparse Representation and Low-Rank Approximation for Robust Face Recognition

Kha Gia Quach; Chi Nhan Duong; Tien D. Bui

Face recognition under varying conditions such as illumination, pose, expression, and occlusion has been one of the most challenging problems in computer vision. Over the last few years, significant attention has been paid to low-rank approximation (LRA) and sparse representation (SR) techniques. Applications of these techniques have appeared in many different areas, ranging from handwritten character recognition to multi-factor face recognition. In this paper, we review some of the most recent works using LRA and SR for the multi-factor face recognition problem, and present a novel framework to improve their performance in recognizing faces under various affecting conditions. Our results are comparable to or better than the state of the art in this area.
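
A minimal sketch of sparse-representation classification (SRC), the classic technique in the family this paper reviews, not the paper's own framework: represent a probe face as a sparse combination of training faces and classify by class-wise residual. The random data and the Lasso solver/penalty are illustrative assumptions.

```python
# Minimal sparse-representation classification (SRC) sketch.
import numpy as np
from sklearn.linear_model import Lasso

rng = np.random.RandomState(0)
n_classes, per_class, dim = 5, 10, 64
train = rng.rand(dim, n_classes * per_class)      # columns = training faces
labels = np.repeat(np.arange(n_classes), per_class)
probe = train[:, 7] + 0.05 * rng.randn(dim)       # noisy sample of class 0

# L1-regularized regression recovers a sparse coefficient vector over
# the training dictionary.
coef = Lasso(alpha=0.01, max_iter=10000).fit(train, probe).coef_

# Classify by which class's atoms reconstruct the probe best.
residuals = [np.linalg.norm(probe - train[:, labels == c] @ coef[labels == c])
             for c in range(n_classes)]
print("predicted class:", int(np.argmin(residuals)))  # expect 0
```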


Computer Vision and Pattern Recognition | 2017

Robust Hand Detection and Classification in Vehicles and in the Wild

T. Hoang Ngan Le; Kha Gia Quach; Chenchen Zhu; Chi Nhan Duong; Khoa Luu; Marios Savvides

Robust hand detection and classification is one of the most crucial pre-processing steps for human-computer interaction, driver behavior monitoring, virtual reality, etc. This problem, however, is very challenging due to the numerous variations of hand images in real-world scenarios. This work presents a novel approach named Multiple Scale Region-based Fully Convolutional Networks (MS-RFCN) to robustly detect and classify human hand regions under various challenging conditions, e.g., occlusions, illumination, and low resolution. In this approach, the whole image is passed through the proposed fully convolutional network to compute score maps. These score maps, with their position-sensitive properties, help to efficiently address the dilemma between translation invariance in classification and translation variance in detection. The method is evaluated on challenging hand databases, i.e., the Vision for Intelligent Vehicles and Applications (VIVA) Challenge and the Oxford hand dataset, and compared against various recent hand detection methods. The experimental results show that the proposed MS-RFCN approach consistently achieves state-of-the-art hand detection results, i.e., Average Precision (AP) / Average Recall (AR) of 95.1% / 94.5% at level 1 and 86.0% / 83.4% at level 2 on the VIVA challenge. In addition, the proposed method achieves state-of-the-art results for left/right-hand and driver/passenger classification on the VIVA database, with significant AP/AR improvements of ~7% and ~13% on the two classification tasks, respectively. The hand detection performance of MS-RFCN reaches 75.1% AP and 77.8% AR on the Oxford database.
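
The position-sensitive score-map mechanism comes from R-FCN, which MS-RFCN builds on. Below is a single-class, single-scale sketch of position-sensitive ROI pooling only; the grid size, map shapes, and example ROI are assumptions, and the multi-scale fusion of MS-RFCN is omitted.

```python
# Sketch of position-sensitive ROI pooling (the R-FCN mechanism).
import numpy as np

def ps_roi_pool(score_maps, roi, k=3):
    """score_maps: (k*k, H, W) maps for ONE class; roi: (x0, y0, x1, y1).
    Each k x k grid cell pools only from its own dedicated map, so the
    pooled score encodes *where* inside the ROI the evidence appears."""
    x0, y0, x1, y1 = roi
    cell_w, cell_h = (x1 - x0) / k, (y1 - y0) / k
    votes = np.empty((k, k))
    for i in range(k):          # grid row
        for j in range(k):      # grid column (degenerate cells not handled)
            ys = slice(int(y0 + i * cell_h), int(y0 + (i + 1) * cell_h))
            xs = slice(int(x0 + j * cell_w), int(x0 + (j + 1) * cell_w))
            votes[i, j] = score_maps[i * k + j, ys, xs].mean()
    return votes.mean()         # averaged votes give the ROI's class score

maps = np.random.rand(9, 60, 80)   # hypothetical 3x3 position-sensitive maps
print(ps_roi_pool(maps, roi=(10, 10, 40, 40)))
```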


International Conference on Pattern Recognition | 2016

Robust Deep Appearance Models

Kha Gia Quach; Chi Nhan Duong; Khoa Luu; Tien D. Bui

This paper presents a novel Robust Deep Appearance Models (RDAMs) approach to learn the non-linear correlation between the shape and texture of face images. In this approach, the two crucial components of face images, i.e., shape and texture, are represented by Deep Boltzmann Machines and Robust Deep Boltzmann Machines (RDBM), respectively. The RDBM, an alternative form of Robust Boltzmann Machines, can separate corrupted/occluded pixels in the texture modeling to achieve better reconstruction results. The two models are connected by Restricted Boltzmann Machines at the top layer to jointly learn and capture the variations of both facial shapes and appearances. This paper also introduces new fitting algorithms with occlusion awareness through the mask obtained from the RDBM reconstruction. The proposed approach is evaluated on various applications using challenging face datasets, i.e., the Labeled Face Parts in the Wild (LFPW), Helen, EURECOM and AR databases, to demonstrate its robustness and capabilities.
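
The occlusion-mask idea can be sketched without the RDBM itself: pixels the model reconstructs poorly are flagged as corrupted and excluded from fitting. In this toy version the "reconstruction" is a synthetic stand-in and the threshold is an assumed heuristic.

```python
# Sketch: occlusion mask from per-pixel reconstruction error.
import numpy as np

rng = np.random.RandomState(0)
texture = rng.rand(32, 32)               # hypothetical clean face texture
occluded = texture.copy()
occluded[10:20, 10:20] = 0.0             # simulate an occluding block

reconstruction = texture                 # stand-in for the RDBM's clean estimate
error = np.abs(occluded - reconstruction)

# Threshold the per-pixel error to obtain the occlusion mask that makes
# the fitting step occlusion-aware (threshold is an assumed heuristic).
mask = error > 3.0 * error.mean()
print("flagged pixels:", int(mask.sum()))  # ~ the 10x10 occluded block
```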


2012 IEEE RIVF International Conference on Computing & Communication Technologies, Research, Innovation, and Vision for the Future | 2012

Gabor Wavelet-Based Appearance Models

Kha Gia Quach; Chi Nhan Duong; Khoa Luu; Hoai Bac Le

There has been considerable research in the last several years based on the principles of Active Appearance Models (AAMs). AAM is a robust methodology for general image (object) description that incorporates shape and texture information. In this work, we extend the basic AAMs by developing a new method of texture description for the application of human facial modeling and synthesis. The premise is to develop a better texture-based model for the face that incorporates specific facial information such as wrinkling, micro-features (e.g., moles, scars, freckles, etc.), and aging features (e.g., sagging, hollowing of the cheeks, weight gain, etc.). This paper proposes a new improvement in texture synthesis: the Gabor Wavelet-based Appearance Model. The experimental results demonstrate the potential of this approach for face-based applications.
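
A minimal Gabor filter bank in the spirit of the texture model described above; the orientations, wavelengths, and kernel size are illustrative choices, not the paper's parameters.

```python
# Minimal Gabor filter bank for texture description.
import numpy as np
from scipy.signal import convolve2d

def gabor_kernel(theta, lam, sigma=4.0, size=15):
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)     # rotate coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2 * sigma**2))  # Gaussian envelope
    carrier = np.cos(2 * np.pi * xr / lam)                # sinusoidal carrier
    return envelope * carrier

face = np.random.rand(64, 64)                      # stand-in for a face texture
bank = [gabor_kernel(theta, lam)
        for theta in np.linspace(0, np.pi, 4, endpoint=False)
        for lam in (4.0, 8.0)]                     # 4 orientations x 2 scales

# Concatenated filter responses form the Gabor texture descriptor.
features = np.stack([convolve2d(face, k, mode="same") for k in bank])
print(features.shape)  # (8, 64, 64)
```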


International Conference on Pattern Recognition | 2016

Depth-Based 3D Hand Pose Tracking

Kha Gia Quach; Chi Nhan Duong; Khoa Luu; Tien D. Bui

In this paper, we propose two new approaches using Convolutional Neural Networks (CNN) and Recurrent Neural Networks (RNN) for tracking 3D hand poses. The first approach is a detection-based algorithm, while the second is a data-driven method. Our first contribution is a new tracking-by-detection strategy that extends a CNN-based single-frame detection method to a multiple-frame tracking approach by taking prediction history into account using an RNN. Our second contribution is the use of an RNN to simulate the fitting of a 3D model to the input data, which relaxes the need for a carefully designed fitting function and optimization algorithm. With these strategies, we show that our tracking frameworks can automatically correct failed detections made in previous frames due to occlusions. The proposed method is evaluated on two public hand datasets, i.e., NYU and ICVL, and compared against other recent hand tracking methods. Experimental results show that our approaches achieve state-of-the-art accuracy and efficiency on the challenging problem of 3D hand pose estimation.
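
A schematic of the CNN-plus-RNN tracking-by-detection idea: per-frame CNN features are fed through an RNN so each frame's pose prediction can draw on the detection history. The layer sizes, the 14-joint output, and the depth-map input shape are illustrative assumptions, not the paper's architecture.

```python
# Schematic CNN + RNN hand pose tracker (PyTorch).
import torch
import torch.nn as nn

class HandPoseTracker(nn.Module):
    def __init__(self, n_joints=14, hidden=128):
        super().__init__()
        self.cnn = nn.Sequential(                 # per-frame depth-map encoder
            nn.Conv2d(1, 16, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2, padding=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(4), nn.Flatten(),
        )
        self.rnn = nn.LSTM(32 * 4 * 4, hidden, batch_first=True)
        self.head = nn.Linear(hidden, n_joints * 3)  # 3D joint coordinates

    def forward(self, frames):                    # frames: (B, T, 1, H, W)
        B, T = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(B, T, -1)
        out, _ = self.rnn(feats)                  # history-aware per-frame features
        return self.head(out).view(B, T, -1, 3)

poses = HandPoseTracker()(torch.randn(2, 8, 1, 96, 96))
print(poses.shape)  # torch.Size([2, 8, 14, 3])
```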


International Conference on Computer Vision | 2017

Temporal Non-volume Preserving Approach to Facial Age-Progression and Age-Invariant Face Recognition

Chi Nhan Duong; Kha Gia Quach; Khoa Luu; T. Hoang Ngan Le; Marios Savvides

Collaboration


Dive into Kha Gia Quach's collaborations.

Top Co-Authors

Khoa Luu
Carnegie Mellon University

Marios Savvides
Carnegie Mellon University

T. Hoang Ngan Le
Carnegie Mellon University

Chenchen Zhu
Carnegie Mellon University