Publication


Featured research published by Fang Zhao.


IEEE Transactions on Image Processing | 2017

Deep Edge Guided Recurrent Residual Learning for Image Super-Resolution

Wenhan Yang; Jiashi Feng; Jianchao Yang; Fang Zhao; Jiaying Liu; Zongming Guo; Shuicheng Yan

In this paper, we consider the image super-resolution (SR) problem. The main challenge of image SR is to recover the high-frequency details of a low-resolution (LR) image that are important for human perception. To address this essentially ill-posed problem, we introduce a Deep Edge Guided REcurrent rEsidual (DEGREE) network to progressively recover the high-frequency details. Different from most existing methods, which aim at predicting high-resolution (HR) images directly, DEGREE takes an alternative route and recovers the difference between a pair of LR and HR images by recurrent residual learning. DEGREE further augments the SR process with edge-preserving capability: the LR image and its edge map jointly infer the sharp edge details of the HR image during the recurrent recovery process. To speed up training convergence, by-pass connections are constructed across the multiple layers of DEGREE. In addition, we offer an interpretation of DEGREE from the viewpoint of sub-band frequency decomposition of the image signal, and experimentally demonstrate how DEGREE recovers different frequency bands separately. Extensive experiments on three benchmark data sets clearly demonstrate the superiority of DEGREE over well-established baselines, on which it also sets new state-of-the-art results. We also present additional experiments on JPEG artifact reduction to demonstrate the generality and flexibility of the proposed DEGREE network for other image processing tasks.
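The core recurrence is easy to sketch: start from the interpolated LR image, and at each step let a shared block (with a by-pass connection) refine features computed from the LR image and its edge map, emitting a residual that is added to the running estimate. Below is a minimal PyTorch sketch of that idea; the layer widths, number of recurrent steps, and single-channel input are illustrative assumptions, not the paper's actual architecture.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class RecurrentResidualSR(nn.Module):
    """Sketch of edge-guided recurrent residual SR (all sizes illustrative)."""

    def __init__(self, channels=64, steps=5):
        super().__init__()
        self.steps = steps
        # Joint encoder for the LR image (1 channel) and its edge map (1 channel).
        self.encode = nn.Conv2d(2, channels, 3, padding=1)
        # One shared block reused across steps keeps the parameter count small.
        self.block = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1), nn.ReLU(inplace=True))
        self.to_residual = nn.Conv2d(channels, 1, 3, padding=1)

    def forward(self, lr, edge):
        # lr, edge: (batch, 1, H, W), already upsampled to the HR grid.
        h = F.relu(self.encode(torch.cat([lr, edge], dim=1)))
        sr = lr  # start from the interpolated LR image
        for _ in range(self.steps):
            h = self.block(h) + h          # by-pass (skip) connection
            sr = sr + self.to_residual(h)  # recurrent residual refinement
        return sr
```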


ACM Multimedia | 2016

Robust Face Recognition with Deep Multi-View Representation Learning

Jianshu Li; Jian Zhao; Fang Zhao; Hao Liu; Jing Li; Shengmei Shen; Jiashi Feng; Terence Sim

This paper describes our proposed method targeting the MSR Image Recognition Challenge MS-Celeb-1M. The challenge is to recognize one million celebrities from their face images captured in the real world. It provides a large-scale dataset crawled from the Web, containing a large number of celebrities with many images per subject. Given a new test image, the challenge requires an identity for the image along with a corresponding confidence score. To complete the challenge, we propose a two-stage approach consisting of data cleaning and multi-view deep representation learning. The data cleaning effectively reduces the noise level of the training data and thus improves the performance of deep learning based face recognition models. The multi-view representation learning enables the learned face representations to be more specific and discriminative, substantially relieving the difficulty of recognizing faces among a huge number of subjects. Our proposed method achieves a coverage of 46.1% at 95% precision on the random set and a coverage of 33.0% at 95% precision on the hard set of this challenge.
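As a rough illustration of the multi-view fusion step, one common baseline is to L2-normalize the embedding from each view-specific model, concatenate them, and match the fused descriptor against gallery centroids with cosine similarity serving as the confidence score. The sketch below assumes that baseline; the challenge system's exact fusion, embedding sizes, and scoring are not specified in the abstract, so all names and dimensions here are hypothetical.

```python
import numpy as np

def fuse_multiview(view_embeddings):
    """L2-normalize each view's embedding and concatenate (a common fusion
    baseline; hypothetical, not the authors' exact procedure)."""
    normed = [e / (np.linalg.norm(e, axis=-1, keepdims=True) + 1e-8)
              for e in view_embeddings]
    return np.concatenate(normed, axis=-1)

def identify(probe, gallery, labels):
    """Nearest-centroid match; cosine similarity doubles as the confidence."""
    sims = gallery @ probe / (
        np.linalg.norm(gallery, axis=1) * np.linalg.norm(probe) + 1e-8)
    best = int(np.argmax(sims))
    return labels[best], float(sims[best])

# Hypothetical usage: three 256-d views for one probe face, 1000 identities.
probe = fuse_multiview([np.random.randn(256) for _ in range(3)])
gallery = np.random.randn(1000, 768)  # one fused centroid per identity
print(identify(probe, gallery, list(range(1000))))
```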


IEEE Transactions on Image Processing | 2018

Robust LSTM-Autoencoders for Face De-Occlusion in the Wild

Fang Zhao; Jiashi Feng; Jian Zhao; Wenhan Yang; Shuicheng Yan

Face recognition techniques have developed significantly in recent years. However, recognizing faces with partial occlusion remains challenging for existing face recognizers, yet is heavily demanded in real-world applications concerning surveillance and security. Although much research effort has been devoted to developing face de-occlusion methods, most of them work well only under constrained conditions, such as when all faces come from a pre-defined, closed set of subjects. In this paper, we propose a robust LSTM-Autoencoders (RLA) model to effectively restore partially occluded faces even in the wild. The RLA model consists of two LSTM components, which aim at occlusion-robust face encoding and recurrent occlusion removal, respectively. The first, a multi-scale spatial LSTM encoder, reads facial patches of various scales sequentially and outputs a latent representation; occlusion-robustness is achieved because occlusion influences only some of the patches. Receiving the representation learned by the encoder, an LSTM decoder with a dual-channel architecture reconstructs the overall face and detects occlusion simultaneously; by virtue of the LSTM, the decoder breaks the task of face de-occlusion down into restoring the occluded part step by step. Moreover, to minimize identity information loss and guarantee face recognition accuracy on recovered faces, we introduce an identity-preserving adversarial training scheme to further improve RLA. Extensive experiments on both synthetic and real data sets of occluded faces clearly demonstrate the effectiveness of the proposed RLA in removing different types of facial occlusion at various locations. The proposed method also provides a significantly larger performance gain than other de-occlusion methods in improving recognition performance on partially occluded faces.
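The encoder/decoder split is the key structural idea: one LSTM summarizes a sequence of facial patches into a latent code, and a second LSTM decodes that code through two output heads, one for the restored appearance and one for the occlusion mask. A minimal PyTorch sketch follows; the patch dimension, hidden size, and the use of flattened patches as the decoder input are illustrative assumptions rather than the paper's exact design.

```python
import torch
import torch.nn as nn

class LSTMDeocclusionSketch(nn.Module):
    """Sketch of an LSTM autoencoder with a dual-channel decoder
    (appearance + occlusion mask); all sizes are illustrative."""

    def __init__(self, patch_dim=256, hidden=512):
        super().__init__()
        self.encoder = nn.LSTM(patch_dim, hidden, batch_first=True)
        self.decoder = nn.LSTM(patch_dim, hidden, batch_first=True)
        self.to_face = nn.Linear(hidden, patch_dim)  # appearance channel
        self.to_mask = nn.Linear(hidden, patch_dim)  # occlusion channel

    def forward(self, patches):
        # patches: (batch, num_patches, patch_dim), e.g. multi-scale crops
        _, state = self.encoder(patches)           # latent face representation
        dec_out, _ = self.decoder(patches, state)  # step-by-step restoration
        face = torch.sigmoid(self.to_face(dec_out))  # restored patch values
        mask = torch.sigmoid(self.to_mask(dec_out))  # per-pixel occlusion score
        return face, mask
```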


European Conference on Computer Vision | 2018

Dynamic Conditional Networks for Few-Shot Learning

Fang Zhao; Jian Zhao; Shuicheng Yan; Jiashi Feng

This paper proposes a novel Dynamic Conditional Convolutional Network (DCCN) to handle conditional few-shot learning, i.e., settings where only a few training samples are available for each condition. DCCN consists of dual subnets: DyConvNet contains a dynamic convolutional layer with a bank of basis filters; CondiNet predicts a set of adaptive weights from conditional inputs to linearly combine the basis filters. In this manner, a specific convolutional kernel can be dynamically obtained for each conditional input. The filter bank is shared among all conditions, so only a low-dimensional weight vector needs to be learned per condition. This significantly facilitates parameter learning across different conditions when training data are limited. We evaluate DCCN on four tasks that can be formulated as conditional model learning: specific object counting, multi-modal image classification, phrase grounding, and identity-based face generation. Extensive experiments demonstrate the superiority of the proposed model in the conditional few-shot learning setting.
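The dynamic convolution itself is compact enough to sketch: a small predictor maps the conditional input to mixing weights, and the per-sample kernel is the weighted sum of a shared bank of basis filters. The PyTorch sketch below follows that description; the module and parameter names, the linear weight predictor, and all sizes are illustrative assumptions, not the authors' released code.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DynamicConditionalConv2d(nn.Module):
    """Conv layer whose kernel is a per-sample linear combination of shared
    basis filters, with mixing weights predicted from a conditional input."""

    def __init__(self, in_ch, out_ch, kernel_size, num_bases, cond_dim):
        super().__init__()
        # Shared filter bank (the DyConvNet side).
        self.bases = nn.Parameter(
            0.01 * torch.randn(num_bases, out_ch, in_ch, kernel_size, kernel_size))
        # Predictor from condition to mixing weights (the CondiNet side).
        self.weight_net = nn.Linear(cond_dim, num_bases)

    def forward(self, x, cond):
        # x: (batch, in_ch, H, W); cond: (batch, cond_dim)
        alphas = self.weight_net(cond)  # (batch, num_bases)
        outs = []
        for i in range(x.size(0)):
            # Per-sample kernel: weighted sum over the basis filters.
            kernel = (alphas[i].view(-1, 1, 1, 1, 1) * self.bases).sum(dim=0)
            outs.append(F.conv2d(x[i:i + 1], kernel,
                                 padding=self.bases.size(-1) // 2))
        return torch.cat(outs, dim=0)

# Hypothetical usage: 4 samples, 128-d conditions, 3x3 kernels from 8 bases.
layer = DynamicConditionalConv2d(64, 64, 3, num_bases=8, cond_dim=128)
y = layer(torch.randn(4, 64, 32, 32), torch.randn(4, 128))
print(y.shape)  # torch.Size([4, 64, 32, 32])
```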


Neural Information Processing Systems | 2017

Dual-Agent GANs for Photorealistic and Identity Preserving Profile Face Synthesis

Jian Zhao; Lin Xiong; Karlekar Jayashree; Jianshu Li; Fang Zhao; Zhecan Wang; Sugiri Pranata; Shengmei Shen; Shuicheng Yan; Jiashi Feng


Computer Vision and Pattern Recognition | 2017

Self-Supervised Neural Aggregation Networks for Human Parsing

Jian Zhao; Jianshu Li; Xuecheng Nie; Fang Zhao; Yunpeng Chen; Zhecan Wang; Jiashi Feng; Shuicheng Yan


Computer Vision and Pattern Recognition | 2018

Towards Pose Invariant Face Recognition in the Wild

Jian Zhao; Yu Cheng; Yan Xu; Lin Xiong; Jianshu Li; Fang Zhao; Karlekar Jayashree; Sugiri Pranata; Shengmei Shen; Junliang Xing; Shuicheng Yan; Jiashi Feng


Computer Vision and Pattern Recognition | 2018

Weakly Supervised Phrase Localization With Multi-Scale Anchored Transformer Network

Fang Zhao; Jianshu Li; Jian Zhao; Jiashi Feng


British Machine Vision Conference | 2017

Marginalized CNN: Learning Deep Invariant Representations

Jian Zhao; Jianshu Li; Fang Zhao; Xuecheng Nie; Yunpeng Chen; Shuicheng Yan; Jiashi Feng


arXiv: Computer Vision and Pattern Recognition | 2018

Look Across Elapse: Disentangled Representation Learning and Photorealistic Cross-Age Face Synthesis for Age-Invariant Face Recognition

Jian Zhao; Yu Cheng; Yi Cheng; Yang Yang; Haochong Lan; Fang Zhao; Lin Xiong; Yan Xu; Jianshu Li; Sugiri Pranata; Shengmei Shen; Junliang Xing; Hengzhu Liu; Shuicheng Yan; Jiashi Feng

Collaboration


An overview of Fang Zhao's collaborations and frequent co-authors.

Top Co-Authors

Jiashi Feng (National University of Singapore)
Jian Zhao (National University of Singapore)
Jianshu Li (National University of Singapore)
Shuicheng Yan (National University of Singapore)
Terence Sim (National University of Singapore)
Xuecheng Nie (National University of Singapore)
Yu Cheng (National University of Singapore)
Yunpeng Chen (National University of Singapore)