Publications


Featured research published by Kui Jia.


IEEE Transactions on Image Processing | 2009

A Study on Gait-Based Gender Classification

Shiqi Yu; Tieniu Tan; Kaiqi Huang; Kui Jia; Xinyu Wu

Gender is an important cue in social activities. In this correspondence, we present a study and analysis of gender classification based on human gait. Psychological experiments were carried out. These experiments showed that humans can recognize gender based on gait information, and that the contributions of different body components vary. The prior knowledge extracted from the psychological experiments can be combined with an automatic method to further improve classification accuracy. The proposed method, which combines human knowledge, achieves higher performance than some other methods and is even more accurate than human observers. We also present a numerical analysis of the contributions of different human components, which shows that head and hair, back, chest and thigh are more discriminative than other components. We also carried out challenging cross-race experiments that used Asian gait data to classify the gender of Europeans, and vice versa. Encouraging results were obtained. All of the above show that gait-based gender classification is feasible in controlled environments. In real applications, it still suffers from many difficulties, such as view variation, changes of clothing and shoes, or carrying objects. We analyze these difficulties and suggest some possible solutions.
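
The component-level findings above suggest a straightforward weighted-feature classifier. Below is a minimal sketch, not the paper's pipeline: it assumes per-component gait features (e.g., regions of an averaged silhouette such as head/hair, back, chest and thigh) have already been extracted, and the component weights and linear SVM are illustrative stand-ins for the psychologically derived prior and the classifier evaluated in the study.

```python
# Minimal sketch: gender classification from component-wise gait features.
# Each sample is a dict of per-component feature vectors; the weights encode
# the prior that some components are more discriminative, as suggested by the
# paper's psychological experiments. All names and values are illustrative.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

COMPONENT_WEIGHTS = {          # hypothetical weights, larger = more discriminative
    "head_hair": 1.5,
    "back": 1.3,
    "chest": 1.2,
    "thigh": 1.2,
    "other": 0.8,
}

def build_feature(sample: dict) -> np.ndarray:
    """Concatenate per-component features, scaled by their prior weight."""
    parts = [COMPONENT_WEIGHTS[name] * np.asarray(vec, dtype=float)
             for name, vec in sorted(sample.items())]
    return np.concatenate(parts)

def evaluate(samples: list, labels: np.ndarray) -> float:
    """5-fold cross-validated accuracy of a linear SVM on weighted gait features."""
    X = np.stack([build_feature(s) for s in samples])
    clf = SVC(kernel="linear", C=1.0)
    return cross_val_score(clf, X, labels, cv=5).mean()
```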


international conference on computer vision | 2005

Multi-modal tensor face for simultaneous super-resolution and recognition

Kui Jia; Shaogang Gong

Face images of non-frontal views under poor illumination with low resolution dramatically reduce face recognition accuracy. This is most compellingly evident in the very low recognition rate of all existing face recognition systems when applied to live CCTV camera input. In this paper, we present a Bayesian framework to perform multi-modal (such as variations in viewpoint and illumination) face image super-resolution for recognition in tensor space. Given a single modal low-resolution face image, we benefit from the multiple factor interactions of the training tensor and super-resolve its high-resolution reconstructions across different modalities for face recognition. Instead of performing pixel-domain super-resolution and recognition independently as two separate sequential processes, we integrate the tasks of super-resolution and recognition by directly computing a maximum likelihood identity parameter vector in high-resolution tensor space for recognition. We show results from multi-modal super-resolution and face recognition experiments across different imaging modalities, using low-resolution images as testing inputs, and demonstrate improved recognition rates over standard tensorface and eigenface representations.
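
The core computation described, recovering a maximum likelihood identity parameter vector in a high-resolution training tensor and reconstructing across modalities, can be sketched as follows. This is a simplified stand-in for the paper's Bayesian tensor formulation: the training data are arranged as an (identity x modality x pixel) array, block averaging plays the role of the imaging model, and plain least squares replaces the probabilistic inference; all shapes and names are illustrative.

```python
# Minimal sketch: identity-parameter recovery in an (identity x modality x pixel)
# training tensor, followed by cross-modality high-resolution reconstruction.
# Block-average downsampling and plain least squares stand in for the paper's
# Bayesian/tensor machinery; shapes and names are illustrative.
import numpy as np

def downsample(img_hr: np.ndarray, factor: int) -> np.ndarray:
    """Block-average downsampling as a stand-in for the imaging model."""
    h, w = img_hr.shape
    return img_hr.reshape(h // factor, factor, w // factor, factor).mean(axis=(1, 3))

def super_resolve(train_hr: np.ndarray, probe_lr: np.ndarray, modality: int, factor: int):
    """
    train_hr : (n_ids, n_modalities, H, W) high-resolution training faces
    probe_lr : (H//factor, W//factor) low-resolution probe of known `modality`
    Returns identity coefficients and high-res reconstructions for all modalities.
    """
    n_ids = train_hr.shape[0]
    # Basis of downsampled training faces for the probe's modality: (n_ids, h*w)
    basis_lr = np.stack([downsample(train_hr[i, modality], factor).ravel()
                         for i in range(n_ids)])
    # Maximum-likelihood (least-squares) identity coefficients alpha: probe ~ alpha @ basis_lr
    alpha, *_ = np.linalg.lstsq(basis_lr.T, probe_lr.ravel(), rcond=None)
    # Reconstruct high-resolution faces in every modality from the same alpha.
    recon = np.einsum("i,imhw->mhw", alpha, train_hr)
    return alpha, recon
```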


IEEE Transactions on Image Processing | 2008

Generalized Face Super-Resolution

Kui Jia; Shaogang Gong

Existing learning-based face super-resolution (hallucination) techniques generate high-resolution images of a single facial modality (i.e., at a fixed expression, pose and illumination) given one or a set of low-resolution face images as probe. Here, we present a generalized approach based on a hierarchical tensor (multilinear) space representation for hallucinating high-resolution face images across multiple modalities, achieving generalization to variations in expression and pose. In particular, we formulate a unified tensor which can be reduced to two parts: a global image-based tensor for modeling the mappings among different facial modalities, and a local patch-based multiresolution tensor for incorporating high-resolution image details. For realistic hallucination of unregistered low-resolution faces contained in raw images, we develop an automatic face alignment algorithm capable of pixel-wise alignment by iteratively warping the probing face to its projection in the space of training face images. Our experiments show not only performance superiority over existing benchmark face super-resolution techniques on single-modal face hallucination, but also the novelty of our approach in coping with multimodal hallucination and its robustness in automatic alignment under practical imaging conditions.
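
The automatic alignment step, warping the probe toward its projection in the space of training face images, can be illustrated with a deliberately simplified sketch: here the "space of training faces" is reduced to a PCA subspace and the warp to integer translations, whereas the paper's alignment is pixel-wise and far more general.

```python
# Minimal sketch: align a probe face by repeatedly warping it toward its
# projection onto a PCA subspace built from aligned training faces. Only
# integer translations are searched; names and parameters are illustrative.
import numpy as np

def pca_basis(train_faces: np.ndarray, k: int):
    """train_faces: (n, H, W). Returns the mean face and the top-k basis (k, H*W)."""
    X = train_faces.reshape(len(train_faces), -1).astype(float)
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:k]

def project(face: np.ndarray, mean: np.ndarray, basis: np.ndarray) -> np.ndarray:
    """Reconstruct the face from its coefficients in the training subspace."""
    coeffs = (face.ravel() - mean) @ basis.T
    return (mean + coeffs @ basis).reshape(face.shape)

def align(probe: np.ndarray, mean: np.ndarray, basis: np.ndarray,
          max_shift: int = 3, iters: int = 5) -> np.ndarray:
    """Iteratively shift the probe toward its subspace projection."""
    aligned = probe.astype(float).copy()
    for _ in range(iters):
        target = project(aligned, mean, basis)
        best_shift, best_err = (0, 0), np.inf
        for dy in range(-max_shift, max_shift + 1):
            for dx in range(-max_shift, max_shift + 1):
                shifted = np.roll(np.roll(aligned, dy, axis=0), dx, axis=1)
                err = float(((shifted - target) ** 2).sum())
                if err < best_err:
                    best_shift, best_err = (dy, dx), err
        if best_shift == (0, 0):
            break
        aligned = np.roll(np.roll(aligned, best_shift[0], axis=0), best_shift[1], axis=1)
    return aligned
```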


Pattern Recognition Letters | 2006

Hallucinating multiple occluded face images of different resolutions

Kui Jia; Shaogang Gong

Learning-based super-resolution has recently been proposed for enhancing human face images, known as face hallucination. In this paper, we propose a novel algorithm to super-resolve face images given multiple partially occluded inputs at different lower resolutions. By integrating hierarchical patch-wise alignment and inter-frame constraints into a Bayesian framework, we can probabilistically align multiple input images at different resolutions and recursively infer the high-resolution face image. We address the problem of fusing partial imagery information through multiple frames and discuss the new algorithm's effectiveness when encountering occluded low-resolution face images. We show promising results compared to those of existing face hallucination methods on both a simulated facial database and live video sequences.
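
The multi-frame fusion idea can be sketched as a regularized least-squares problem, standing in for the paper's Bayesian recursive inference. The assumptions here are illustrative: each low-resolution input is modelled as block averaging of the same high-resolution face, occlusion is expressed as a per-pixel visibility mask, and a simple Tikhonov term replaces the paper's prior.

```python
# Minimal sketch: fuse several partially occluded low-resolution observations of
# the same face into one high-resolution estimate by regularized least squares.
# Observation k is modelled as M_k * D_k(x): D_k block-averages at that frame's
# resolution, M_k is a visibility mask (0 where occluded). Sketch-sized dense
# matrices are used for clarity; all parameters are illustrative.
import numpy as np

def downsample_matrix(H: int, W: int, f: int) -> np.ndarray:
    """Dense operator averaging f x f blocks of an H x W image."""
    h, w = H // f, W // f
    D = np.zeros((h * w, H * W))
    for i in range(h):
        for j in range(w):
            for di in range(f):
                for dj in range(f):
                    D[i * w + j, (i * f + di) * W + (j * f + dj)] = 1.0 / (f * f)
    return D

def fuse(observations, H: int, W: int, lam: float = 0.1) -> np.ndarray:
    """
    observations: list of (lr_image, visibility_mask, factor); mask is 0 where occluded.
    Returns the fused high-resolution estimate of shape (H, W).
    """
    rows, rhs = [], []
    for lr, mask, f in observations:
        D = downsample_matrix(H, W, f)
        m = mask.ravel().astype(float)
        rows.append(D * m[:, None])              # drop equations for occluded pixels
        rhs.append(lr.ravel() * m)
    rows.append(lam * np.eye(H * W))             # simple Tikhonov prior
    rhs.append(np.zeros(H * W))
    A, b = np.vstack(rows), np.concatenate(rhs)
    x, *_ = np.linalg.lstsq(A, b, rcond=None)
    return x.reshape(H, W)
```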


computer vision and pattern recognition | 2006

Multi-Resolution Patch Tensor for Facial Expression Hallucination

Kui Jia; Shaogang Gong

In this paper, we propose a sequential approach to hallucinate/synthesize high-resolution images of multiple facial expressions. We propose the idea of a multi-resolution tensor for super-resolution, and decompose facial expression images into small local patches. We build a multi-resolution patch tensor across different facial expressions. By unifying the identity parameters and learning the subspace mappings across different resolutions and expressions, we simplify facial expression hallucination to a problem of parameter recovery in a patch tensor space. We further add a high-frequency component residue using nonparametric patch learning from high-resolution training data. We integrate the sequential statistical modelling into a Bayesian framework, so that given any low-resolution facial image of a single expression, we are able to synthesize multiple facial expression images in high resolution. We show promising experimental results from both a facial expression database and live video sequences.
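
The high-frequency residue step, nonparametric patch learning from high-resolution training data, can be sketched as a nearest-neighbour patch lookup. This is a simplified illustration: non-overlapping patches and a brute-force L2 search stand in for whatever patch layout and search strategy the paper actually uses.

```python
# Minimal sketch: add a high-frequency residue to a smooth super-resolved face
# by nonparametric patch lookup. For every non-overlapping patch of the
# reconstruction, the closest low-frequency training patch is found and its
# high-frequency residue is pasted back. Overlap and blending are omitted.
import numpy as np
from scipy.ndimage import gaussian_filter

def split_patches(img: np.ndarray, p: int):
    """Yield (row, col, patch) for non-overlapping p x p patches."""
    H, W = img.shape
    return [(i, j, img[i:i + p, j:j + p])
            for i in range(0, H - p + 1, p)
            for j in range(0, W - p + 1, p)]

def add_high_frequency(recon: np.ndarray, train_hr_faces: np.ndarray,
                       p: int = 8, sigma: float = 2.0) -> np.ndarray:
    """recon: smooth reconstruction (H, W); train_hr_faces: (n, H, W) high-res faces."""
    # Dictionary of (low-frequency patch, high-frequency residue) pairs.
    low_patches, residues = [], []
    for face in train_hr_faces:
        low = gaussian_filter(face.astype(float), sigma)
        for i, j, lp in split_patches(low, p):
            low_patches.append(lp.ravel())
            residues.append((face[i:i + p, j:j + p] - lp).ravel())
    low_patches = np.stack(low_patches)
    residues = np.stack(residues)

    out = recon.astype(float).copy()
    for i, j, patch in split_patches(recon.astype(float), p):
        # Brute-force nearest neighbour in L2 over the low-frequency dictionary.
        idx = np.argmin(((low_patches - patch.ravel()) ** 2).sum(axis=1))
        out[i:i + p, j:j + p] += residues[idx].reshape(p, p)
    return out
```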


british machine vision conference | 2006

Coupling Face Registration and Super-Resolution.

Kui Jia; Shaogang Gong; Alex Po Leung

Existing approaches to learning-based face image super-resolution require low-resolution testing inputs manually registered to pre-aligned high-resolution training models [9, 12, 13, 5]. This restricts automatic applications to live images and video. In this paper, we propose a multi-resolution patch tensor based model to automatically super-resolve and register low-resolution testing face images. Face candidates are triggered first by a face detector, giving subwindows with coarse initial positions and scales in a large image frame. This initialises a combined registration and super-resolution process. Rather than manually aligning each coarsely detected face subwindow to some predefined template based on its position and scale, we scan all the potential face subwindows across different positions and scales, and obtain registration and super-resolution in a simultaneous process. The super-resolution result that is optimally correlated to its original low-resolution face subwindow is also guaranteed to be the best super-resolved reconstruction. We verify our approach by experimenting on the MIT+CMU face detection dataset; the promising results demonstrate the robustness of our approach to learning-based face super-resolution on real images.
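
The selection criterion described, keeping the super-resolution result most correlated with its own low-resolution subwindow, can be sketched as below. The sketch scans only integer translations around the detector output (the paper also scans scales) and treats the learned super-resolver as a black box; all names and parameters are illustrative.

```python
# Minimal sketch: couple registration and super-resolution by scanning candidate
# subwindows around a coarse face detection, super-resolving each, and keeping
# the candidate whose downsampled reconstruction best matches its own
# low-resolution subwindow. `super_resolve` is any learned face super-resolver.
import numpy as np

def block_downsample(img: np.ndarray, f: int) -> np.ndarray:
    """Average f x f blocks; assumes dimensions divisible by f."""
    h, w = img.shape
    return img.reshape(h // f, f, w // f, f).mean(axis=(1, 3))

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    """Normalized cross-correlation between two equally sized images."""
    a = a.ravel() - a.mean()
    b = b.ravel() - b.mean()
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-8))

def register_and_super_resolve(frame, det, super_resolve, factor=4, shifts=range(-4, 5)):
    """
    frame : full grayscale image containing the face
    det   : (x, y, size) coarse subwindow from a face detector
    super_resolve : callable mapping a (size, size) patch to (size*factor, size*factor)
    Returns the best-registered subwindow position and its super-resolved face.
    """
    x0, y0, size = det
    best = None, None, -np.inf
    for dy in shifts:
        for dx in shifts:
            x, y = x0 + dx, y0 + dy
            sub = frame[y:y + size, x:x + size].astype(float)
            if sub.shape != (size, size):
                continue                          # candidate falls outside the frame
            hr = super_resolve(sub)               # black-box learned super-resolver
            # Keep the candidate whose reconstruction explains its own input best.
            score = ncc(block_downsample(hr, factor), sub)
            if score > best[2]:
                best = (x, y), hr, score
    return best[0], best[1]
```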


advanced video and signal based surveillance | 2005

Multi-modal face image super-resolutions in tensor space

Kui Jia; Shaogang Gong

Face images of non-frontal views under poor illumination with low resolution dramatically reduce face recognition accuracy. To overcome these problems, super-resolution techniques can be exploited. In this paper, we present a Bayesian framework to perform multi-modal (such as variations in viewpoint and illumination) face image super-resolution in tensor space. Given a single modal low-resolution face image, we benefit from the multiple factor interactions of the training tensor, and super-resolve its high-resolution reconstructions across different modalities. Instead of performing pixel-domain super-resolution, we reconstruct the high-resolution face images by computing a maximum likelihood identity parameter vector in high-resolution tensor space. Experiments show promising results of multi-view and multi-illumination face image super-resolution, respectively.


advanced video and signal based surveillance | 2005

Face super-resolution using multiple occluded images of different resolutions

Kui Jia; Shaogang Gong

In this paper, we present a novel learning-based algorithm to super-resolve multiple partially occluded low-resolution face images. By integrating hierarchical patch-wise alignment and inter-frame constraints into a Bayesian framework, we can probabilistically align multiple input images at different resolutions and recursively infer the high-resolution face image. We address the problem of fusing partial imagery information through multiple frames and discuss the new algorithm's effectiveness when encountering occluded low-resolution face images. We show promising results compared to those of existing face hallucination methods.


british machine vision conference | 2005

Multi-Modal Face Image Super-Resolutions in Tensor Space.

Kui Jia; Shaogang Gong

Face images of non-frontal views under poor illumination with low resolution dramatically reduce face recognition accuracy. To overcome these problems, super-resolution techniques can be exploited. In this paper, we present a Bayesian framework to perform multi-modal (such as variations in viewpoint and illumination) face image super-resolution in tensor space. Given a single modal low-resolution face image, we benefit from the multiple factor interactions of the training tensor, and super-resolve its high-resolution reconstructions across different modalities. Instead of performing pixel-domain super-resolution, we reconstruct the high-resolution face images by computing a maximum likelihood identity parameter vector in high-resolution tensor space. Experiments show promising results of multi-view and multi-illumination face image super-resolution, respectively.


The IEE International Symposium on Imaging for Crime Detection and Prevention (ICDP 2005) | 2005

CCTV face hallucination under occlusion with motion blur

Kui Jia; Shaogang Gong

Collaboration


Dive into Kui Jia's collaborations.

Top Co-Authors

Shaogang Gong (Queen Mary University of London)
Alex Po Leung (Queen Mary University of London)
Kaiqi Huang (Chinese Academy of Sciences)
Shiqi Yu (Chinese Academy of Sciences)
Tieniu Tan (Chinese Academy of Sciences)
Xinyu Wu (The Chinese University of Hong Kong)