Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sifei Liu is active.

Publication


Featured research published by Sifei Liu.


Computer Vision and Pattern Recognition | 2013

Structured Face Hallucination

Chih-Yuan Yang; Sifei Liu; Ming-Hsuan Yang

The goal of face hallucination is to generate high-resolution images with fidelity from low-resolution ones. In contrast to existing methods based on patch similarity or holistic constraints in the image space, we propose to exploit local image structures for face hallucination. Each face image is represented in terms of facial components, contours and smooth regions. The image structure is maintained via matching gradients in the reconstructed high-resolution output. For facial components, we align input images to generate accurate exemplars and transfer the high-frequency details for preserving structural consistency. For contours, we learn statistical priors to generate salient structures in the high-resolution images. A patch matching method is utilized on the smooth regions where the image gradients are preserved. Experimental results demonstrate that the proposed algorithm generates hallucinated face images with favorable quality and adaptability.
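For intuition, here is a minimal numpy sketch of the gradient-matching step (an illustrative toy, not the authors' implementation; the upsampled input and the target gradient maps assembled from the component, contour, and smooth-region priors are assumed to be given):

```python
# Toy gradient-guided reconstruction: keep the estimate close to the upsampled
# input while its forward-difference gradients match target gradient maps.
import numpy as np

def reconstruct_from_gradients(upsampled, target_gx, target_gy,
                               lam=0.5, steps=200, lr=0.1):
    x = upsampled.astype(np.float64).copy()
    for _ in range(steps):
        # forward differences of the current estimate (last row/column padded)
        gx = np.diff(x, axis=1, append=x[:, -1:])
        gy = np.diff(x, axis=0, append=x[-1:, :])
        data_res = x - upsampled                   # data-fidelity residual
        rx, ry = gx - target_gx, gy - target_gy    # gradient-matching residuals
        # gradient of the matching term (transpose of the difference operator)
        div = np.zeros_like(x)
        div[:, :-1] -= rx[:, :-1]; div[:, 1:] += rx[:, :-1]
        div[:-1, :] -= ry[:-1, :]; div[1:, :] += ry[:-1, :]
        x -= lr * (data_res + lam * div)           # gradient-descent step
    return x
```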


Computer Vision and Pattern Recognition | 2017

Generative Face Completion

Yijun Li; Sifei Liu; Jimei Yang; Ming-Hsuan Yang

In this paper, we propose an effective face completion algorithm using a deep generative model. Different from well-studied background completion, the face completion task is more challenging as it often requires generating semantically new pixels for the missing key components (e.g., eyes and mouths) that contain large appearance variations. Unlike existing nonparametric algorithms that search for patches to synthesize, our algorithm directly generates content for missing regions based on a neural network. The model is trained with a combination of a reconstruction loss, two adversarial losses, and a semantic parsing loss, which ensures pixel faithfulness and local-global content consistency. With extensive experimental results, we demonstrate qualitatively and quantitatively that our model is able to deal with a large area of missing pixels in arbitrary shapes and generate realistic face completion results.
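As a rough illustration, the combined training objective might be written as follows in PyTorch (a hedged sketch, not the authors' code; the generator, the two discriminators, and the parsing network are assumed to produce the tensors passed in, and the loss weights are made up):

```python
import torch
import torch.nn.functional as F

def completion_loss(completed, target, mask,
                    d_global_logits, d_local_logits,
                    parsing_logits, parsing_labels,
                    w_adv=0.002, w_parse=0.05):
    # reconstruction loss on the masked (missing) region
    l_rec = F.l1_loss(completed * mask, target * mask)
    # two adversarial losses: fool the global and the local discriminator
    l_adv_g = F.binary_cross_entropy_with_logits(
        d_global_logits, torch.ones_like(d_global_logits))
    l_adv_l = F.binary_cross_entropy_with_logits(
        d_local_logits, torch.ones_like(d_local_logits))
    # semantic parsing loss: (N, C, H, W) logits vs. (N, H, W) class labels
    l_parse = F.cross_entropy(parsing_logits, parsing_labels)
    return l_rec + w_adv * (l_adv_g + l_adv_l) + w_parse * l_parse
```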


Computer Vision and Pattern Recognition | 2015

Multi-objective convolutional learning for face labeling

Sifei Liu; Jimei Yang; Chang Huang; Ming-Hsuan Yang

This paper formulates face labeling as a conditional random field with unary and pairwise classifiers. We develop a novel multi-objective learning method that optimizes a single unified deep convolutional network with two distinct non-structured loss functions: one encoding the unary label likelihoods and the other encoding the pairwise label dependencies. Moreover, we regularize the network by using a nonparametric prior as new input channels in addition to the RGB image, and show that significant performance improvements can be achieved with a much smaller network size. Experiments on both the LFW and Helen datasets demonstrate that the proposed algorithm achieves state-of-the-art results and produces accurate labeling on challenging images for real-world applications.
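To make the two-loss idea concrete, here is a small PyTorch sketch (layer sizes, channel counts, and names such as MultiObjectiveFaceLabeler are assumptions, not the published network): a shared backbone feeds a unary head trained with cross-entropy and a pairwise head trained to predict whether neighbouring pixels share a label:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class MultiObjectiveFaceLabeler(nn.Module):
    def __init__(self, in_ch=4, num_classes=11):   # RGB + a nonparametric-prior channel
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(in_ch, 32, 3, padding=1), nn.ReLU(),
            nn.Conv2d(32, 32, 3, padding=1), nn.ReLU())
        self.unary = nn.Conv2d(32, num_classes, 1)  # per-pixel label likelihoods
        self.pairwise = nn.Conv2d(32, 2, 1)         # horizontal / vertical agreement

    def forward(self, x):
        f = self.backbone(x)
        return self.unary(f), self.pairwise(f)

def multi_objective_loss(unary, pairwise, labels):
    """unary: (N,C,H,W), pairwise: (N,2,H,W), labels: (N,H,W) long."""
    l_unary = F.cross_entropy(unary, labels)
    # pairwise targets: does a pixel share the label of its right / lower neighbour?
    same_h = (labels[:, :, :-1] == labels[:, :, 1:]).float()
    same_v = (labels[:, :-1, :] == labels[:, 1:, :]).float()
    l_pair = (F.binary_cross_entropy_with_logits(pairwise[:, 0, :, :-1], same_h)
              + F.binary_cross_entropy_with_logits(pairwise[:, 1, :-1, :], same_v))
    return l_unary + l_pair
```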


European Conference on Computer Vision | 2016

Deep Cascaded Bi-Network for Face Hallucination

Shizhan Zhu; Sifei Liu; Chen Change Loy; Xiaoou Tang

We present a novel framework for hallucinating faces with unconstrained poses and very low resolution (face size as small as 5 pixels of interocular distance). In contrast to existing studies that mostly ignore the spatial configuration of the face or assume it is pre-aligned (e.g., via facial landmark localization or a dense correspondence field), we alternately optimize two complementary tasks, namely face hallucination and dense correspondence field estimation, in a unified framework. In addition, we propose a new gated deep bi-network that contains two functionality-specialized branches to recover different levels of texture detail. Extensive experiments demonstrate that this formulation achieves exceptional hallucination quality on in-the-wild low-resolution faces with significant pose and illumination variations.
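The gated bi-network idea can be illustrated with a few lines of PyTorch (a toy with assumed layer sizes, not the published cascade): a learned per-pixel gate blends two functionality-specialized branches:

```python
import torch
import torch.nn as nn

class GatedBiBranch(nn.Module):
    def __init__(self, ch=3):
        super().__init__()
        self.shared = nn.Sequential(nn.Conv2d(ch, 32, 3, padding=1), nn.ReLU())
        self.branch_a = nn.Conv2d(32, ch, 3, padding=1)  # e.g. common face structure
        self.branch_b = nn.Conv2d(32, ch, 3, padding=1)  # e.g. high-frequency detail
        self.gate = nn.Sequential(nn.Conv2d(32, ch, 3, padding=1), nn.Sigmoid())

    def forward(self, x):
        f = self.shared(x)
        g = self.gate(f)
        # per-pixel gated blend of the two branches' outputs
        return g * self.branch_a(f) + (1 - g) * self.branch_b(f)
```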


European Conference on Computer Vision | 2016

Learning Recursive Filters for Low-Level Vision via a Hybrid Neural Network

Sifei Liu; Jinshan Pan; Ming-Hsuan Yang

In this paper, we consider numerous low-level vision problems (e.g., edge-preserving filtering and denoising) as recursive image filtering via a hybrid neural network. The network contains several spatially variant recurrent neural networks (RNNs), acting as a group of distinct recursive filters for each pixel, and a deep convolutional neural network (CNN) that learns the weights of the RNNs. The deep CNN learns to regulate the recurrent propagation for various tasks and effectively guides it over the entire image. The proposed model needs neither a large number of convolutional channels nor large kernels to learn features for low-level vision filters, making it significantly smaller and faster than deep CNN-based image filters. Experimental results show that many low-level vision tasks can be effectively learned and carried out in real time by the proposed algorithm.
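One spatially variant recursive pass can be sketched in numpy as follows (illustrative only; in the paper the per-pixel weights are predicted by the deep CNN, whereas here they are simply an input array):

```python
import numpy as np

def recursive_filter_lr(image, weights):
    """Left-to-right pass: out[:, j] = (1 - w[:, j]) * x[:, j] + w[:, j] * out[:, j-1]."""
    out = image.astype(np.float64).copy()
    for j in range(1, image.shape[1]):
        w = weights[:, j]
        out[:, j] = (1.0 - w) * image[:, j] + w * out[:, j - 1]
    return out

# A complete filter would run four such passes (left-right, right-left,
# top-down, bottom-up) and combine their outputs.
```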


International Conference on Image Processing | 2014

Compressed face hallucination

Sifei Liu; Ming-Hsuan Yang

In this paper, we propose an algorithm to hallucinate faces in the JPEG compressed domain, a problem that has not been well addressed in the literature. The proposed approach hallucinates compressed face images through an exemplar-based framework and solves two main problems. First, image noise introduced by JPEG compression is exacerbated by the super-resolution process. We present a novel formulation for face hallucination that uses the JPEG quantization intervals as constraints to recover the feasible intensity values for each image patch of a low-resolution input. Second, existing face hallucination methods are sensitive to noise contained in the compressed images. We regularize the compression noise caused by block discrete cosine transform coding and reconstruct high-resolution images with the proposed gradient-guided total variation. Numerous experimental results show that the proposed algorithm generates more favorable results than combinations of state-of-the-art face hallucination and denoising algorithms.
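The quantization-interval constraint can be illustrated with a simple projection (hypothetical variable names; a decoded JPEG DCT coefficient q*k implies the original coefficient lay in [q*(k - 0.5), q*(k + 0.5)]):

```python
import numpy as np

def project_to_quantization_interval(estimated_dct, decoded_dct, quant_table):
    """Clip estimated 8x8 DCT coefficients back into the JPEG-feasible interval."""
    lower = decoded_dct - 0.5 * quant_table
    upper = decoded_dct + 0.5 * quant_table
    return np.clip(estimated_dct, lower, upper)
```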


European Conference on Computer Vision | 2018

Rendering Portraitures from Monocular Camera and Beyond

Xiangyu Xu; Deqing Sun; Sifei Liu; Wenqi Ren; Yu-Jin Zhang; Ming-Hsuan Yang; Jian Sun

Shallow depth-of-field (DoF) is a desirable effect in photography that renders artistic photos. Usually, it requires single-lens reflex cameras and certain photography skills to achieve. Recently, dual-lens cellphone cameras have been used to estimate scene depth and simulate DoF effects for portrait shots. However, this technique cannot be applied to photos already taken and does not work well for whole-body scenes where the subject is at a distance from the camera. In this work, we introduce an automatic system that achieves portrait DoF rendering for monocular cameras. Specifically, we first exploit convolutional neural networks to estimate the relative depth and portrait segmentation maps from a single input image. Since these initial estimates are usually coarse and lack fine details, we further learn pixel affinities to refine them. With the refined estimates, we apply depth- and segmentation-aware blur rendering to the input image with a conditional random field and image matting. In addition, we train a spatially variant recursive neural network to learn and accelerate this rendering process. We show that the proposed algorithm can effectively generate portraitures with realistic DoF effects from a single input. Experimental results also demonstrate that our depth and segmentation estimation modules perform favorably against state-of-the-art methods both quantitatively and qualitatively.
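For intuition, here is a rough Python sketch of depth- and segmentation-aware blur rendering (a simplified stand-in built from a few Gaussian blur levels, not the paper's CRF-and-matting renderer; the depth map, subject mask, and focal depth are assumed to be given):

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def render_shallow_dof(image, depth, subject_mask, focal_depth, max_sigma=8.0):
    """image: HxWx3 float; depth, subject_mask: HxW floats in [0, 1]."""
    # desired blur grows with distance from the focal plane
    sigma_map = max_sigma * np.abs(depth - focal_depth)
    levels = np.linspace(0.0, max_sigma, 5)
    blurred = [gaussian_filter(image, sigma=(s, s, 0)) for s in levels]
    # pick, per pixel, the pre-blurred level closest to the desired blur
    idx = np.argmin(np.abs(sigma_map[..., None] - levels[None, None, :]), axis=-1)
    out = np.zeros_like(image)
    for k in range(len(levels)):
        out[idx == k] = blurred[k][idx == k]
    # keep the segmented subject sharp
    m = subject_mask[..., None]
    return m * image + (1.0 - m) * out
```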


International Journal of Computer Vision | 2018

Hallucinating Compressed Face Images

Chih-Yuan Yang; Sifei Liu; Ming-Hsuan Yang

A face hallucination algorithm is proposed to generate high-resolution images from JPEG compressed low-resolution inputs by decomposing a deblocked face image into structural regions such as facial components and non-structural regions like the background. For structural regions, landmarks are used to retrieve adequate high-resolution component exemplars in a large dataset based on the estimated head pose and illumination condition. For non-structural regions, an efficient generic super resolution algorithm is applied to generate high-resolution counterparts. Two sets of gradient maps extracted from these two regions are combined to guide an optimization process of generating the hallucination image. Numerous experimental results demonstrate that the proposed algorithm performs favorably against the state-of-the-art hallucination methods on JPEG compressed face images with different poses, expressions, and illumination conditions.
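The gradient-map combination step can be sketched in a couple of lines of numpy (hypothetical inputs; the mask marks the structural facial-component regions):

```python
import numpy as np

def combine_gradient_maps(grad_structural, grad_generic, structural_mask):
    """Blend gradients: exemplar-based inside facial components, generic SR elsewhere."""
    m = structural_mask.astype(np.float64)
    return m * grad_structural + (1.0 - m) * grad_generic
```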


International Conference on Computer Vision | 2017

Unsupervised Domain Adaptation for Face Recognition in Unlabeled Videos

Kihyuk Sohn; Sifei Liu; Guangyu Zhong; Xiang Yu; Ming-Hsuan Yang; Manmohan Chandraker


Neural Information Processing Systems | 2017

Learning Affinity via Spatial Propagation Networks

Sifei Liu; Shalini De Mello; Jinwei Gu; Guangyu Zhong; Ming-Hsuan Yang; Jan Kautz

Collaboration


Dive into Sifei Liu's collaborations.

Top Co-Authors

Guangyu Zhong (Dalian University of Technology)
Jan Kautz (University College London)
Chih-Yuan Yang (University of California)
Jinshan Pan (University of California)
Yi-Hsuan Tsai (University of California)