Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shiyang Cheng is active.

Publication


Featured research published by Shiyang Cheng.


Computer Vision and Pattern Recognition | 2013

Robust Discriminative Response Map Fitting with Constrained Local Models

Akshay Asthana; Stefanos Zafeiriou; Shiyang Cheng; Maja Pantic

We present a novel discriminative regression based approach for the Constrained Local Models (CLMs) framework, referred to as the Discriminative Response Map Fitting (DRMF) method, which shows impressive performance in the generic face fitting scenario. The motivation behind this approach is that, unlike the holistic texture based features used in the discriminative AAM approaches, the response map can be represented by a small set of parameters and these parameters can be very efficiently used for reconstructing unseen response maps. Furthermore, we show that by adopting very simple off-the-shelf regression techniques, it is possible to learn robust functions from response maps to the shape parameter updates. The experiments, conducted on the Multi-PIE, XM2VTS and LFPW databases, show that the proposed DRMF method outperforms state-of-the-art algorithms for the task of generic face fitting. Moreover, the DRMF method is computationally very efficient and is real-time capable. The current MATLAB implementation takes 1 second per image. To facilitate future comparisons, we release the MATLAB code and the pre-trained models for research purposes.
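
The core DRMF idea can be illustrated with a short sketch: compress each landmark's response map into a few parameters, then learn a simple regressor from those parameters to shape-parameter updates. The sketch below is a minimal numpy illustration under assumed sizes and synthetic data, not the authors' released MATLAB implementation; all names (n_landmarks, basis, predict_update, etc.) are hypothetical.

```python
# Minimal sketch (not the authors' released MATLAB code) of the core DRMF idea:
# compress per-landmark response maps into a few parameters, then learn a simple
# regressor from those parameters to shape-parameter updates.
import numpy as np

rng = np.random.default_rng(0)

n_landmarks, patch = 68, 11          # hypothetical sizes
n_train, n_shape = 500, 24           # synthetic training set / shape parameters

# Synthetic stand-ins for response maps (n_train x n_landmarks x patch*patch)
# and the ground-truth shape-parameter updates they should predict.
responses = rng.normal(size=(n_train, n_landmarks, patch * patch))
dp_true = rng.normal(size=(n_train, n_shape))

# 1) Compress each landmark's response map with PCA (a small set of parameters).
k = 5                                 # parameters per response map
flat = responses.reshape(-1, patch * patch)
mean = flat.mean(axis=0)
_, _, Vt = np.linalg.svd(flat - mean, full_matrices=False)
basis = Vt[:k]                        # k x (patch*patch)
coeffs = (responses - mean) @ basis.T # n_train x n_landmarks x k
X = coeffs.reshape(n_train, -1)       # concatenated response-map parameters

# 2) "Off-the-shelf" regression: ridge-regularised least squares from
#    response-map parameters to shape-parameter updates.
lam = 1e-2
A = np.hstack([X, np.ones((n_train, 1))])            # add bias term
W = np.linalg.solve(A.T @ A + lam * np.eye(A.shape[1]), A.T @ dp_true)

# 3) At fitting time: compress the current response maps and predict the update.
def predict_update(resp_maps):
    c = ((resp_maps - mean) @ basis.T).reshape(1, -1)
    return np.hstack([c, np.ones((1, 1))]) @ W        # 1 x n_shape update

print(predict_update(responses[0]).shape)             # (1, 24)
```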


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

From Pixels to Response Maps: Discriminative Image Filtering for Face Alignment in the Wild

Akshay Asthana; Stefanos Zafeiriou; Georgios Tzimiropoulos; Shiyang Cheng; Maja Pantic

We propose a face alignment framework that relies on the texture model generated by the responses of discriminatively trained part-based filters. Unlike standard texture models built from pixel intensities or responses generated by generic filters (e.g. Gabor), our framework has two important advantages. First, by virtue of discriminative training, invariance to external variations (like identity, pose, illumination and expression) is achieved. Second, we show that the responses generated by discriminatively trained filters (or patch-experts) are sparse and can be modeled using a very small number of parameters. As a result, the optimization methods based on the proposed texture model can better cope with unseen variations. We illustrate this point by formulating both part-based and holistic approaches for generic face alignment and show that our framework outperforms the state-of-the-art on multiple "wild" databases. The code and dataset annotations are available for research purposes from http://ibug.doc.ic.ac.uk/resources.
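
The claim that discriminatively trained filter responses are peaked and compactly parameterisable can be pictured as follows: correlate a patch expert over a search region and summarise the normalised response by the mean and covariance of a fitted 2-D Gaussian, i.e. five parameters instead of a full map. This is my own numpy illustration with a random stand-in filter, not the released code.

```python
# Minimal sketch (my own illustration, not the released code): a response map
# from a patch filter is summarised by a handful of parameters, here the mean
# and covariance of a 2-D Gaussian fitted to the normalised response.
import numpy as np

rng = np.random.default_rng(1)

def cross_correlate(image, filt):
    """Valid-mode 2-D cross-correlation with a learned patch filter."""
    fh, fw = filt.shape
    h, w = image.shape[0] - fh + 1, image.shape[1] - fw + 1
    out = np.empty((h, w))
    for y in range(h):
        for x in range(w):
            out[y, x] = np.sum(image[y:y + fh, x:x + fw] * filt)
    return out

def response_to_gaussian(resp):
    """Summarise a response map by the mean/covariance of its probability map."""
    prob = np.exp(resp - resp.max())
    prob /= prob.sum()
    ys, xs = np.mgrid[:resp.shape[0], :resp.shape[1]]
    mu = np.array([np.sum(prob * xs), np.sum(prob * ys)])
    dx, dy = xs - mu[0], ys - mu[1]
    cov = np.array([[np.sum(prob * dx * dx), np.sum(prob * dx * dy)],
                    [np.sum(prob * dx * dy), np.sum(prob * dy * dy)]])
    return mu, cov                     # 5 parameters instead of a full map

# Stand-ins: a search region around a landmark and a "trained" patch expert.
search_region = rng.normal(size=(31, 31))
patch_expert = rng.normal(size=(11, 11))   # in practice learned discriminatively

resp = cross_correlate(search_region, patch_expert)
mu, cov = response_to_gaussian(resp)
print(mu, cov.shape)                        # peak location and spread
```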


International Conference on Image Processing | 2014

3D facial geometric features for constrained local model

Shiyang Cheng; Stefanos Zafeiriou; Akshay Asthana; Maja Pantic

We propose a 3D Constrained Local Model framework for deformable face alignment in depth images. Our framework exploits the intrinsic 3D geometric information in depth data by utilizing robust histogram-based 3D geometric features that are based on normal vectors. In addition, we demonstrate that fusing intensity data with the 3D features further improves facial landmark localization accuracy. The experiments are conducted on the publicly available FRGC database. The results show that our 3D feature-based CLM clearly outperforms the raw depth-feature-based CLM in terms of fitting accuracy and robustness, and that the fusion of intensity and 3D depth features further improves performance. Another benefit is that the proposed 3D features do not require any pre-processing of the data.
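
One way to realise a "histogram-based 3D geometric feature based on normal vectors" is sketched below: estimate per-pixel surface normals from depth gradients and histogram their azimuth/elevation angles. The bin layout and helper names are my own assumptions, not necessarily the paper's exact feature.

```python
# Minimal sketch, under my own assumptions, of a histogram-of-normals feature
# for a depth patch: estimate per-pixel surface normals from depth gradients,
# then histogram their azimuth/elevation angles. The bin layout is illustrative.
import numpy as np

def depth_normals(depth):
    """Per-pixel unit normals of a depth map (z = depth(row, col))."""
    dz_dy, dz_dx = np.gradient(depth.astype(float))
    normals = np.dstack([-dz_dx, -dz_dy, np.ones_like(depth, dtype=float)])
    normals /= np.linalg.norm(normals, axis=2, keepdims=True)
    return normals

def histogram_of_normals(depth_patch, az_bins=8, el_bins=4):
    """Concatenate a 2-D histogram of normal azimuth/elevation into a feature."""
    n = depth_normals(depth_patch).reshape(-1, 3)
    azimuth = np.arctan2(n[:, 1], n[:, 0])               # [-pi, pi]
    elevation = np.arcsin(np.clip(n[:, 2], -1.0, 1.0))   # [-pi/2, pi/2]
    hist, _, _ = np.histogram2d(
        azimuth, elevation, bins=[az_bins, el_bins],
        range=[[-np.pi, np.pi], [-np.pi / 2, np.pi / 2]])
    feat = hist.ravel()
    return feat / (feat.sum() + 1e-8)                     # normalised descriptor

# Example on a synthetic depth patch (a tilted plane plus noise).
yy, xx = np.mgrid[:21, :21]
patch = 0.05 * xx + 0.02 * yy + 0.01 * np.random.default_rng(2).normal(size=(21, 21))
print(histogram_of_normals(patch).shape)                   # (32,)
```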


ACM SIGMM Conference on Multimedia Systems | 2014

Real-time generic face tracking in the wild with CUDA

Shiyang Cheng; Akshay Asthana; Stefanos Zafeiriou; Jie Shen; Maja Pantic

We present a robust real-time face tracking system based on the Constrained Local Models framework by adopting the novel regression-based Discriminative Response Map Fitting (DRMF) method. By exploiting the algorithm's potential for parallelism, we present a hybrid CPU-GPU implementation capable of achieving real-time performance at 30 to 45 FPS on ordinary consumer-grade computers. We have made the software publicly available for research purposes.
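
The parallelism being exploited can be pictured without CUDA: for linear patch experts, all landmarks' response maps over all candidate positions reduce to one batched matrix product, which is exactly the kind of computation that maps onto independent GPU threads. The numpy sketch below only illustrates that structure; it is not the paper's hybrid CPU-GPU implementation, and all sizes are hypothetical.

```python
# Not the actual CUDA code, just a small numpy sketch of the data parallelism
# a GPU implementation can exploit: responses of all linear patch experts over
# all candidate positions as one batched matrix product (conceptually, one
# thread per landmark/position pair).
import numpy as np

rng = np.random.default_rng(3)

n_landmarks, patch, search = 66, 11, 21          # hypothetical sizes
n_pos = (search - patch + 1) ** 2                # candidate positions per landmark

# Trained linear patch experts, one per landmark (random stand-ins here).
experts = rng.normal(size=(n_landmarks, patch * patch))

def extract_candidates(search_regions):
    """Vectorise every patch-sized window of every landmark's search region."""
    windows = np.lib.stride_tricks.sliding_window_view(
        search_regions, (patch, patch), axis=(1, 2))   # (L, s, s, patch, patch)
    return windows.reshape(n_landmarks, n_pos, patch * patch)

search_regions = rng.normal(size=(n_landmarks, search, search))
candidates = extract_candidates(search_regions)        # (L, n_pos, patch*patch)

# One batched multiply yields every landmark's full response map at once.
responses = np.einsum('lpf,lf->lp', candidates, experts)
print(responses.shape)                                  # (66, 121)
```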


Image and Vision Computing | 2017

Statistical non-rigid ICP algorithm and its application to 3D face alignment

Shiyang Cheng; Ioannis Marras; Stefanos Zafeiriou; Maja Pantic

The problem of fitting a 3D facial model to a 3D mesh has received a lot of attention over the past 15-20 years. The majority of the techniques fit a general model consisting of a simple parameterisable surface or a mean 3D facial shape. The drawback of this approach is that it is rather difficult to describe the non-rigid aspects of the face using just a single facial model. One way to capture the 3D facial deformations is by means of a statistical 3D model of the face or its parts. This is particularly evident when we want to capture the deformations of the mouth region. Even though statistical models of the face are generally applied for modelling facial intensity, there are few approaches that fit a statistical model of 3D faces. In this paper, in order to capture and describe the non-rigid nature of facial surfaces, we build a part-based statistical model of the 3D facial surface and combine it with non-rigid iterative closest point algorithms. We show that the proposed algorithm largely outperforms state-of-the-art algorithms for 3D face fitting and alignment, especially when it comes to the description of the mouth region.

Highlights: A statistical non-rigid ICP method for 3D face alignment is proposed. Local fitting in a dynamic subdivision framework helps capture subtle facial features. 2D point-driven mesh deformation in the pre-processing step helps improve performance.
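
The alternation underlying a statistical non-rigid ICP can be sketched in a few lines: repeatedly (1) find the closest target points for the current model vertices and (2) solve a regularised least-squares problem for the statistical-model coefficients that best explain those correspondences. The sketch below is a deliberately simplified, single-part version with synthetic data and hypothetical names, not the paper's part-based formulation.

```python
# Schematic sketch (my own simplification, not the paper's part-based method)
# of the alternation at the heart of a statistical non-rigid ICP:
# (1) find closest target points for the current model vertices, then
# (2) solve for the statistical-model coefficients that best explain them.
import numpy as np

rng = np.random.default_rng(4)

n_verts, n_modes = 200, 10
mean_shape = rng.normal(size=(n_verts, 3))                   # stand-in mean face
modes = rng.normal(size=(n_modes, n_verts * 3)) * 0.1        # stand-in PCA modes
target = mean_shape + 0.05 * rng.normal(size=(n_verts, 3))   # noisy target scan

def reconstruct(coeffs):
    """Shape instance from statistical-model coefficients."""
    return (mean_shape.ravel() + coeffs @ modes).reshape(n_verts, 3)

def closest_points(source, target_pts):
    """Brute-force nearest neighbours (fine for this toy size)."""
    d = np.linalg.norm(source[:, None, :] - target_pts[None, :, :], axis=2)
    return target_pts[d.argmin(axis=1)]

coeffs = np.zeros(n_modes)
lam = 1e-1                                                   # regularisation weight
for _ in range(10):
    current = reconstruct(coeffs)
    corr = closest_points(current, target)                   # step 1: correspondences
    # Step 2: regularised least squares for the model coefficients.
    A, b = modes.T, (corr - mean_shape).ravel()
    coeffs = np.linalg.solve(A.T @ A + lam * np.eye(n_modes), A.T @ b)

print(np.linalg.norm(reconstruct(coeffs) - target))           # residual after fitting
```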


IEEE International Conference on Automatic Face and Gesture Recognition | 2015

Active nonrigid ICP algorithm

Shiyang Cheng; Ioannis Marras; Stefanos Zafeiriou; Maja Pantic

The problem of fitting a 3D facial model to a 3D mesh has received a lot of attention over the past 15-20 years. The majority of the techniques fit a general model consisting of a simple parameterisable surface or a mean 3D facial shape. The drawback of this approach is that it is rather difficult to describe the non-rigid aspects of the face using just a single facial model. One way to capture the 3D facial deformations is by means of a statistical 3D model of the face or its parts. This is particularly evident when we want to capture the deformations of the mouth region. Even though statistical models of the face are generally applied for modelling facial intensity, there are few approaches that fit a statistical model of 3D faces. In this paper, in order to capture and describe the non-rigid nature of facial surfaces, we build a part-based statistical model of the 3D facial surface and combine it with non-rigid iterative closest point algorithms. We show that the proposed algorithm largely outperforms state-of-the-art algorithms for 3D face fitting and alignment, especially when it comes to the description of the mouth region.


Computer Vision and Pattern Recognition | 2014

Incremental Face Alignment in the Wild

Akshay Asthana; Stefanos Zafeiriou; Shiyang Cheng; Maja Pantic


Computer Vision and Pattern Recognition | 2018

UV-GAN: Adversarial Facial UV Map Completion for Pose-Invariant Face Recognition

Jiankang Deng; Shiyang Cheng; Niannan Xue; Yuxiang Zhou; Stefanos Zafeiriou


Affective Computing and Intelligent Interaction | 2015

Sentiment apprehension in human-robot interaction with NAO

Jie Shen; Ognjen Rudovic; Shiyang Cheng; Maja Pantic


Computer Vision and Pattern Recognition | 2017

4DFAB: A Large Scale 4D Facial Expression Database for Biometric Applications

Shiyang Cheng; Irene Kotsia; Maja Pantic; Stefanos Zafeiriou

Collaboration


Dive into Shiyang Cheng's collaborations.

Top Co-Authors

Maja Pantic (Imperial College London)
Jie Shen (Imperial College London)
Irene Kotsia (Queen Mary University of London)
Niannan Xue (Imperial College London)
Yuxiang Zhou (Imperial College London)
Ioannis Marras (Aristotle University of Thessaloniki)