
Publication


Featured research published by Yibing Song.


European Conference on Computer Vision | 2014

Real-Time Exemplar-Based Face Sketch Synthesis

Yibing Song; Linchao Bao; Qingxiong Yang; Ming-Hsuan Yang

This paper proposes a simple yet effective face sketch synthesis method. Like existing exemplar-based methods, it requires a training dataset of photo-sketch pairs, and a K-NN photo patch search is performed between a test photo and every training exemplar to select sketch patches. Instead of using a Markov Random Field to optimize global sketch patch selection, this paper formulates face sketch synthesis as an image denoising problem that can be solved efficiently with the proposed method. Real-time performance is obtained on a state-of-the-art GPU. Meanwhile, quantitative evaluations on face sketch recognition and a user study demonstrate the effectiveness of the proposed method. In addition, the method extends directly to the temporal domain for consistent video sketch synthesis, which is of great importance in digital entertainment.
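The patch search at the heart of this pipeline is straightforward to sketch. Below is a minimal NumPy illustration of the K-NN photo patch search, assuming aligned photo/sketch patch arrays; the function name, patch size, and K are illustrative choices, not the authors' implementation.

```python
# Minimal K-NN photo patch search for exemplar-based sketch synthesis.
# Shapes, patch size, and k are assumptions for illustration only.
import numpy as np

def knn_sketch_patches(test_patch, photo_patches, sketch_patches, k=5):
    """Return the k sketch patches whose photo patches are closest
    (in squared L2 distance) to the given test photo patch.

    test_patch:     (p, p) grayscale test photo patch
    photo_patches:  (n, p, p) training photo patches
    sketch_patches: (n, p, p) training sketch patches, aligned 1:1
    """
    # Squared L2 distance between the test patch and every training patch.
    diffs = photo_patches - test_patch[None, :, :]
    dists = np.einsum('npq,npq->n', diffs, diffs)
    # Indices of the k closest photo patches (unordered among themselves).
    idx = np.argpartition(dists, k)[:k]
    return sketch_patches[idx], dists[idx]

# Toy usage with random data standing in for a photo-sketch training set.
rng = np.random.default_rng(0)
photos = rng.random((1000, 8, 8))
sketches = rng.random((1000, 8, 8))
cands, d = knn_sketch_patches(photos[3] + 0.01 * rng.random((8, 8)),
                              photos, sketches)
print(cands.shape, d.min())
```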


IEEE Transactions on Image Processing | 2014

Tree Filtering: Efficient Structure-Preserving Smoothing With a Minimum Spanning Tree

Linchao Bao; Yibing Song; Qingxiong Yang; Hao Yuan; Gang Wang

We present a new efficient edge-preserving filter, the "tree filter," to achieve strong image smoothing. The proposed filter can smooth out high-contrast details while preserving major edges, which bilateral-filter-like techniques cannot achieve. The tree filter is a weighted-average filter whose kernel is derived by viewing pixel affinity in a probabilistic framework that simultaneously considers pixel spatial distance, color/intensity difference, and connectedness. Pixel connectedness is obtained by treating pixels as nodes in a minimum spanning tree (MST) extracted from the image. Because an MST connects all image pixels through the tree, the filter gains the power to smooth out high-contrast, fine-scale details while preserving major image structures: pixels in a small isolated region are closely connected through the tree to the surrounding majority of pixels, while pixels inside a large homogeneous region are automatically pulled away from pixels outside the region. The tree filter can be decomposed into two other filters, both of which turn out to have fast algorithms. We also propose an efficient linear-time MST extraction algorithm to further improve the overall filtering speed. These algorithms give the tree filter low computational complexity (linear in the number of image pixels) and high speed: it can process a 1-megapixel 8-bit image in about 0.25 s on an Intel 3.4 GHz Core i7 CPU, including the construction of the MST. The proposed tree filter is demonstrated on a variety of applications.
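To make the kernel concrete, here is a brute-force toy version of the tree-filter idea, assuming SciPy for the MST and a tiny grayscale image. It is O(n²) and only illustrates the affinity definition; the paper's linear-time separable algorithm is substantially different.

```python
# Toy tree filter: build an MST over the 4-connected pixel grid (edge weight
# = intensity difference), define pixel affinity as exp(-tree_distance/sigma),
# and take a weighted average. Brute force, for illustration only.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, shortest_path

def tree_filter_toy(img, sigma=0.1):
    h, w = img.shape
    n = h * w
    g = lil_matrix((n, n))
    for y in range(h):
        for x in range(w):
            i = y * w + x
            # Small epsilon keeps zero-weight edges in the sparse graph.
            if x + 1 < w:   # right neighbor
                g[i, i + 1] = abs(img[y, x] - img[y, x + 1]) + 1e-6
            if y + 1 < h:   # bottom neighbor
                g[i, i + w] = abs(img[y, x] - img[y + 1, x]) + 1e-6
    mst = minimum_spanning_tree(g.tocsr())
    # Distance between any two pixels, measured along the tree.
    dist = shortest_path(mst, directed=False)
    kernel = np.exp(-dist / sigma)
    kernel /= kernel.sum(axis=1, keepdims=True)  # normalize weights per pixel
    return (kernel @ img.ravel()).reshape(h, w)

# Tiny noisy step edge: smoothing removes the noise but keeps the edge,
# because tree distances across the edge are large.
img = np.hstack([np.zeros((16, 8)), np.ones((16, 8))])
img += 0.05 * np.random.default_rng(0).standard_normal(img.shape)
print(tree_filter_toy(img, sigma=0.2).round(2))
```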


Workshop on Applications of Computer Vision | 2014

Real-time video decolorization using bilateral filtering

Yibing Song; Linchao Bao; Qingxiong Yang

This paper presents a real-time decolorization method. Given the human visual system's preference for luminance information, luminance should be preserved as much as possible during decolorization. Accordingly, the proposed method measures the amount of color contrast/detail lost when converting color to luminance. The detail loss is estimated by computing the difference between two intermediate images: one obtained by applying a bilateral filter to the original color image, and the other obtained by applying a joint bilateral filter to the original color image with its luminance as the guidance image. The estimated detail loss is then mapped to a grayscale image, named the residual image, by minimizing the difference between the image gradients of the input color image and those of the objective grayscale image, which is the sum of the residual image and the luminance. By construction, the residual image is all zero (that is, the two intermediate images are identical) only when no visual detail is missing from the luminance. Unlike most previous methods, the proposed decolorization method preserves both the contrast in the color image and the luminance. Quantitative evaluation shows that it is the top performer on the standard test suite. It is also robust enough to convert videos directly while maintaining temporal coherence; specifically, it can convert a high-resolution video (1280 × 720) in real time (about 28 Hz) on a 3.4 GHz i7 CPU.
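The detail-loss estimate can be sketched with OpenCV's bilateral filters. The snippet below assumes opencv-contrib-python (for cv2.ximgproc.jointBilateralFilter) and uses illustrative filter parameters and a placeholder filename, not the paper's settings.

```python
# Estimate detail loss as the difference between a bilateral filter of the
# color image and a joint bilateral filter guided by its own luminance.
# Requires opencv-contrib-python; d and the sigmas are illustrative.
import cv2
import numpy as np

def detail_loss(bgr):
    # Luminance channel (Y of YUV) used as the guidance image.
    luma = cv2.cvtColor(bgr, cv2.COLOR_BGR2YUV)[:, :, 0]
    # Bilateral filter: edge stops come from color differences.
    bf = cv2.bilateralFilter(bgr, d=9, sigmaColor=25, sigmaSpace=9)
    # Joint bilateral filter: same kernel shape, edge stops from luminance.
    jbf = cv2.ximgproc.jointBilateralFilter(luma, bgr, d=9,
                                            sigmaColor=25, sigmaSpace=9)
    # Wherever the two results differ, luminance alone missed color contrast.
    return cv2.absdiff(bf, jbf)

img = cv2.imread('input.png')   # any BGR test image
loss = detail_loss(img)
print(float(loss.mean()))       # 0 means luminance preserved all detail
```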


International Joint Conference on Artificial Intelligence | 2017

Learning to Hallucinate Face Images via Component Generation and Enhancement

Qingxiong Yang; Yibing Song; Jiawei Zhang; Shengfeng He; Linchao Bao

We propose a two-stage method for face hallucination. First, we generate facial components of the input image using CNNs; these components represent the basic facial structures. Second, we synthesize fine-grained facial structures from high-resolution training images, and the details of these structures are transferred onto the facial components for enhancement. In other words, the first stage generates facial components that approximate the ground-truth global appearance, and the second stage enhances them by recovering details. Experiments demonstrate that our method performs favorably against state-of-the-art methods.
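As a rough illustration of the second-stage idea, a band-pass detail transfer (not the authors' CNN pipeline) might look like the sketch below, assuming aligned float images in [0, 1]; the kernel size, sigma, and alpha are hypothetical parameters.

```python
# Transfer high-frequency detail from a high-resolution exemplar onto a
# coarse, CNN-generated component. A hedged band-pass approximation only.
import cv2
import numpy as np

def transfer_detail(component, exemplar, ksize=(9, 9), sigma=2.0, alpha=1.0):
    """Add the exemplar's high-frequency band to the base component.

    component: coarse facial component (float32 in [0, 1]), e.g. a CNN output
    exemplar:  aligned high-resolution patch of the same facial component
    """
    # High frequencies = exemplar minus its Gaussian-smoothed version.
    detail = exemplar - cv2.GaussianBlur(exemplar, ksize, sigma)
    return np.clip(component + alpha * detail, 0.0, 1.0)
```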


Computer Vision and Image Understanding | 2017

Stylizing face images via multiple exemplars

Yibing Song; Linchao Bao; Shengfeng He; Qingxiong Yang; Ming-Hsuan Yang

We address the problem of transferring the style of a headshot photo to face images. Existing methods that use a single exemplar produce inaccurate results when the exemplar does not contain sufficient stylized facial components for a given photo. In this work, we propose an algorithm to stylize face images using multiple exemplars that contain different subjects in the same style. Patch correspondences between an input photo and the multiple exemplars are established using a Markov Random Field (MRF), which enables accurate local energy transfer via Laplacian stacks. Because image patches from multiple exemplars are used, the boundaries of facial components on the target image are inevitably inconsistent; these artifacts are removed by a post-processing step using an edge-preserving filter. Experimental results show that the proposed algorithm consistently produces visually pleasing results.
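A Laplacian stack, the decomposition used for the energy transfer, is easy to build: a series of band-pass layers formed from differences of progressively blurred images, with no downsampling. The sketch below uses illustrative sigmas and a placeholder filename; it is a generic stack builder, not the paper's code.

```python
# Build a Laplacian stack: band-pass layers plus a low-frequency residual.
# Summing all returned layers reconstructs the input exactly (telescoping).
import cv2
import numpy as np

def laplacian_stack(img, levels=4, sigma0=2.0):
    """Decompose img into `levels` band-pass layers plus a residual."""
    stack, prev = [], img.astype(np.float32)
    for i in range(levels):
        # Blur at geometrically increasing scale; ksize (0, 0) lets OpenCV
        # derive the kernel size from sigma.
        blurred = cv2.GaussianBlur(img.astype(np.float32), (0, 0),
                                   sigma0 * (2 ** i))
        stack.append(prev - blurred)   # band between two blur scales
        prev = blurred
    stack.append(prev)                 # low-frequency residual
    return stack

img = cv2.imread('photo.png', cv2.IMREAD_GRAYSCALE)
layers = laplacian_stack(img)
# ~0 up to floating-point error: the stack is a perfect decomposition.
print(np.abs(sum(layers) - img.astype(np.float32)).max())
```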


International Joint Conference on Artificial Intelligence | 2017

Fast Preprocessing for Robust Face Sketch Synthesis

Yibing Song; Jiawei Zhang; Linchao Bao; Qingxiong Yang

Exemplar-based face sketch synthesis methods commonly face the problem that input photos are captured under lighting conditions different from those of the training photos. The critical step that causes failure is the search for similar patch candidates for an input photo patch. Conventional illumination-invariant patch distances are adopted instead of relying directly on pixel intensity differences, but they fail when the local contrast within a patch changes. In this paper, we propose a fast preprocessing method named Bidirectional Luminance Remapping (BLR), which interactively adjusts the lighting of the training and input photos. Our method can be directly integrated into state-of-the-art exemplar-based methods to improve their robustness at negligible computational cost.
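BLR itself is the paper's contribution, but the general idea of remapping luminance between photos can be sketched by matching luminance statistics. The function below is a hedged approximation, not BLR; the YUV color space and the mean/variance matching are assumptions for illustration.

```python
# Match the source photo's luminance statistics to a reference photo,
# leaving chrominance untouched. A generic stand-in, not the paper's BLR.
import cv2
import numpy as np

def remap_luminance(src_bgr, ref_bgr):
    """Shift/scale the source's luminance (Y of YUV) to match the
    reference's mean and standard deviation."""
    src = cv2.cvtColor(src_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    ref = cv2.cvtColor(ref_bgr, cv2.COLOR_BGR2YUV).astype(np.float32)
    y_src, y_ref = src[:, :, 0], ref[:, :, 0]
    src[:, :, 0] = ((y_src - y_src.mean()) / (y_src.std() + 1e-6)
                    * y_ref.std() + y_ref.mean())
    out = np.clip(src, 0, 255).astype(np.uint8)
    return cv2.cvtColor(out, cv2.COLOR_YUV2BGR)
```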


International Conference on Computer Vision | 2017

CREST: Convolutional Residual Learning for Visual Tracking

Yibing Song; Chao Ma; Lijun Gong; Jiawei Zhang; Rynson W. H. Lau; Ming-Hsuan Yang


International Conference on Computer Graphics and Interactive Techniques | 2013

Decolorization: is rgb2gray() out?

Yibing Song; Linchao Bao; Xiaobin Xu; Qingxiong Yang


International Conference on Pattern Recognition | 2012

An edge-preserving filtering framework for visibility restoration

Linchao Bao; Yibing Song; Qingxiong Yang; Narendra Ahuja


Computer Vision and Pattern Recognition | 2018

VITAL: VIsual Tracking via Adversarial Learning

Yibing Song; Chao Ma; Xiaohe Wu; Lijun Gong; Linchao Bao; Wangmeng Zuo; Chunhua Shen; Rynson W. H. Lau; Ming-Hsuan Yang

Collaboration


Dive into Yibing Song's collaborations.

Top Co-Authors

Linchao Bao
City University of Hong Kong

Qingxiong Yang
City University of Hong Kong

Rynson W. H. Lau
City University of Hong Kong

Jiawei Zhang
City University of Hong Kong

Lijun Gong
City University of Hong Kong

Shengfeng He
South China University of Technology

Chao Ma
University of Adelaide

Ke Xu
City University of Hong Kong

Wenxi Liu
City University of Hong Kong