
Publication


Featured research published by Qiong Yan.


Computer Vision and Pattern Recognition | 2013

Hierarchical Saliency Detection

Qiong Yan; Li Xu; Jianping Shi; Jiaya Jia

When dealing with objects of complex structure, saliency detection faces a critical problem: detection accuracy can be adversely affected when the salient foreground or the background of an image contains small-scale high-contrast patterns. This issue is common in natural images and forms a fundamental challenge for prior methods. We tackle it from a scale point of view and propose a multi-layer approach to analyze saliency cues. The final saliency map is produced by a hierarchical model. Instead of varying patch sizes or downsizing images, our scale-based region handling finds saliency values optimally in a tree model. Our approach improves saliency detection on many images that traditional methods cannot handle well. A new dataset is also constructed.
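The idea of combining saliency cues computed at multiple scales can be illustrated with a toy sketch. This is not the paper's method: its region-based scales and tree-model inference are replaced here by simple box-filter contrast at a few hypothetical window sizes, purely to show multi-layer cue combination.

```python
import numpy as np

def box_blur(img, k):
    """Box blur with window size k (odd) via a summed-area table."""
    pad = k // 2
    p = np.pad(img, pad, mode="edge")
    c = np.cumsum(np.cumsum(p, axis=0), axis=1)
    c = np.pad(c, ((1, 0), (1, 0)))  # leading zero row/col for the table
    h, w = img.shape
    s = c[k:k + h, k:k + w] - c[:h, k:k + w] - c[k:k + h, :w] + c[:h, :w]
    return s / (k * k)

def multi_scale_saliency(gray, scales=(3, 9, 27)):
    """Toy multi-layer saliency: contrast against the local mean at
    several window sizes, averaged and normalized. The paper instead
    infers the final map by hierarchical inference over region scales."""
    cues = [np.abs(gray - box_blur(gray, k)) for k in scales]
    s = np.mean(cues, axis=0)
    return s / (s.max() + 1e-8)
```

A single small bright dot on a dark background scores high at every scale, while large flat regions score near zero, which is the multi-scale behavior the cue combination is meant to capture.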


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016

Hierarchical Image Saliency Detection on Extended CSSD

Jianping Shi; Qiong Yan; Li Xu; Jiaya Jia

Complex structures commonly exist in natural images. When an image contains small-scale high-contrast patterns in either the background or the foreground, saliency detection can be adversely affected, resulting in erroneous and non-uniform saliency assignment. This issue forms a fundamental challenge for prior methods. We tackle it from a scale point of view and propose a multi-layer approach to analyze saliency cues. Instead of varying patch sizes or downsizing images, we measure region-based scales. The final saliency values are inferred by optimally combining the saliency cues at different scales using hierarchical inference. Through our inference model, single-scale information is selected to obtain the saliency map. Our method improves detection quality on many images that traditional methods cannot handle well. We also construct an extended Complex Scene Saliency Dataset (ECSSD) of complex but general natural images.


International Conference on Computer Graphics and Interactive Techniques | 2013

A sparse control model for image and video editing

Li Xu; Qiong Yan; Jiaya Jia

It is common for users to draw strokes, as control samples, to modify the color, structure, or tone of a picture. We identify an inherent limitation of existing methods, namely their implicit requirements on where and how the strokes are drawn, and present a new system built on the principle of minimizing the amount of work put into user interaction. Our method automatically determines the influence of edit samples across the whole image by jointly considering spatial distance, sample location, and appearance. It greatly reduces the number of samples needed, while allowing a decent level of global and local manipulation of the resulting effects and reducing propagation ambiguity. Our method broadly benefits applications that adjust visual content.
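How sparse edit samples can influence the whole image through joint spatial and appearance affinity can be sketched as a radial-basis blend. This is an illustration only: the paper formulates propagation as a global optimization, while the function below (with hypothetical names and Gaussian falloffs) just shows one way a weight can combine spatial distance with appearance similarity.

```python
import numpy as np

def propagate_edits(image, sample_xy, sample_vals, sigma_s=20.0, sigma_a=0.1):
    """Toy edit propagation on a grayscale image: each pixel takes a
    weighted average of user-edited samples, with weights falling off
    jointly in spatial distance and appearance (intensity) distance.
    sample_xy holds (x, y) stroke positions; sample_vals the edits."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    out = np.zeros((h, w))
    wsum = np.zeros((h, w))
    for (sx, sy), v in zip(sample_xy, sample_vals):
        d_sp = ((xs - sx) ** 2 + (ys - sy) ** 2) / (2 * sigma_s ** 2)
        d_ap = (image - image[sy, sx]) ** 2 / (2 * sigma_a ** 2)
        wgt = np.exp(-(d_sp + d_ap))  # joint spatial/appearance affinity
        out += wgt * v
        wsum += wgt
    return out / (wsum + 1e-12)
```

Because the appearance term lets a sample reach all similar-looking pixels, far fewer strokes are needed than with purely spatial interpolation, which is the intuition behind sparse control.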


Computer Vision and Pattern Recognition | 2017

Accurate Single Stage Detector Using Recurrent Rolling Convolution

Jimmy S. J. Ren; Xiaohao Chen; Jianbo Liu; Wenxiu Sun; Jiahao Pang; Qiong Yan; Yu-Wing Tai; Li Xu

Most recent successful methods for accurate object detection and localization use variants of R-CNN-style two-stage Convolutional Neural Networks (CNNs), in which plausible regions are proposed in a first stage and then refined by a second stage. Despite their simplicity of training and efficiency in deployment, single-stage detection methods have not been as competitive when evaluated on benchmarks that consider mAP at high IoU thresholds. In this paper, we propose a novel single-stage, end-to-end trainable object detection network to overcome this limitation. We achieve this by introducing a Recurrent Rolling Convolution (RRC) architecture over multi-scale feature maps to construct object classifiers and bounding box regressors which are deep in context. We evaluate our method on the challenging KITTI dataset, which measures methods at an IoU threshold of 0.7. We show that with RRC, a single reduced VGG-16 based model already significantly outperforms all previously published results. At the time this paper was written, our models ranked first in KITTI car detection (hard level), first in cyclist detection, and second in pedestrian detection, results not reached by previous single-stage methods. The code is publicly available.
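The IoU criterion the KITTI benchmark applies (0.7 for cars at the hard level) is the standard intersection-over-union of axis-aligned boxes; a minimal reference implementation:

```python
def iou(box_a, box_b):
    """Intersection-over-Union of two boxes given as (x1, y1, x2, y2).
    A detection counts as correct only when its IoU with a ground-truth
    box reaches the benchmark threshold (0.7 for KITTI cars)."""
    x1 = max(box_a[0], box_b[0]); y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2]); y2 = min(box_a[3], box_b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    return inter / (area_a + area_b - inter)
```

A high threshold like 0.7 demands tight localization, which is why context-rich regressors matter more there than at the looser 0.5 setting.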


International Conference on Computer Vision | 2013

Cross-Field Joint Image Restoration via Scale Map

Qiong Yan; Xiaoyong Shen; Li Xu; Shaojie Zhuo; Xiaopeng Zhang; Liang Shen; Jiaya Jia

Color, infrared, and flash images captured in different fields can be employed to effectively eliminate noise and other visual artifacts. We propose a two-image restoration framework that considers input images from different fields, for example, one noisy color image and one dark-flashed near-infrared image. The major issue in such a framework is to handle structure divergence and find commonly usable edges and smooth transitions for visually compelling image reconstruction. We introduce a scale map as a competent representation to explicitly model derivative-level confidence, and propose new functions and a numerical solver to effectively infer it following new structural observations. Our method is general and shows a principled way to perform cross-field restoration.
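The derivative-level relation a scale map encodes can be sketched with a per-pixel least-squares ratio of gradients. This closed-form ratio stands in for the paper's regularized inference and is only meant to illustrate what "scale" means here: grad(target) ≈ s · grad(guide), with small or negative s flagging structures that diverge between the two fields.

```python
import numpy as np

def scale_map(target, guide, eps=1e-4):
    """Per-pixel least-squares scale s relating the guidance gradient
    to the target gradient: grad(target) ~ s * grad(guide).  Where the
    guide edge is usable, |s| is meaningful; where structures diverge,
    s collapses toward zero or flips sign.  This is a data-term
    illustration, not the paper's regularized solver."""
    gy, gx = np.gradient(guide)
    ty, tx = np.gradient(target)
    num = gx * tx + gy * ty
    den = gx * gx + gy * gy + eps
    return num / den
```

On regions where one image is simply a scaled copy of the other, the map is constant, and deviations from that constant indicate where the two fields disagree.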


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2015

Multispectral Joint Image Restoration via Optimizing a Scale Map

Xiaoyong Shen; Qiong Yan; Li Xu; Lizhuang Ma; Jiaya Jia

Color, infrared and flash images captured in different fields can be employed to effectively eliminate noise and other visual artifacts. We propose a two-image restoration framework considering input images from different fields, for example, one noisy color image and one dark-flashed near-infrared image. The major issue in such a framework is to handle all structure divergence and find commonly usable edges and smooth transitions for visually plausible image reconstruction. We introduce a novel scale map as a competent representation to explicitly model derivative-level confidence and propose new functions and a numerical solver to effectively infer it following our important structural observations. Multispectral shadow detection is also used to make our system more robust. Our method is general and shows a principled way to solve multispectral restoration problems.


International Conference on Computer Graphics and Interactive Techniques | 2013

Dense scattering layer removal

Qiong Yan; Li Xu; Jiaya Jia

We propose a new model, together with advanced optimization, to separate a thick scattering-media layer from a single natural image. It is able to handle challenging underwater scenes and images taken in fog and sandstorms, both of which suffer significantly reduced visibility. Our method addresses a critical issue, namely that originally unnoticeable impurities are greatly magnified after the scattering media layer is removed, with transmission-aware optimization. We introduce non-local structure-aware regularization to properly constrain transmission estimation without introducing halo artifacts. A selective-neighbor criterion is presented to convert the unconventional constrained optimization problem into an unconstrained one, which can then be solved efficiently.
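Assuming the standard single-scattering formation model I = J·t + A·(1 − t), with transmission t and airlight A, removing the layer amounts to inverting this equation, and the division by t shows why tiny impurities blow up where the medium is thick. The sketch below illustrates only the model inversion with a naive clamp, not the paper's transmission-aware optimization, non-local regularization, or selective-neighbor solver.

```python
import numpy as np

def remove_scattering_layer(I, t, A, t_min=0.1):
    """Invert I = J*t + A*(1 - t) to recover scene radiance J, given an
    estimated transmission map t (h, w) and airlight A (3,) for an RGB
    image I (h, w, 3).  Dividing by t amplifies everything, noise
    included, by 1/t; clamping t crudely limits that amplification."""
    t = np.maximum(t, t_min)
    return (I - A) / t[..., None] + A
```

The 1/t amplification is exactly the magnified-impurities problem the abstract describes; the paper constrains t itself during estimation instead of relying on a hard clamp.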


International Conference on Computer Graphics and Interactive Techniques | 2012

Structure extraction from texture via relative total variation

Li Xu; Qiong Yan; Yang Xia; Jiaya Jia
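Relative total variation (RTV) separates structure from texture by comparing two windowed gradient statistics: texture accumulates large total variation but its signed gradients cancel, while a real edge keeps both large. As a rough one-direction illustration (not the paper's full Gaussian-weighted, two-direction objective, which uses this measure inside an iterative optimization), a minimal sketch of the measure:

```python
import numpy as np

def rtv_measure(img, k=5, eps=1e-3):
    """Per-pixel relative total variation in the x direction: windowed
    total variation D (sum of |gradient| over a k-wide window) divided
    by windowed inherent variation L (|sum of gradient|).  Oscillating
    texture gives large D but small L, so D/(L+eps) is high on texture
    and near 1 on clean structure.  Window sums wrap at the borders,
    so only interior pixels are meaningful in this sketch."""
    gx = np.diff(img, axis=1, append=img[:, -1:])
    pad = k // 2
    window_sum = lambda a: np.stack(
        [np.roll(a, s, axis=1) for s in range(-pad, pad + 1)]).sum(0)
    D = window_sum(np.abs(gx))
    L = np.abs(window_sum(gx))
    return D / (L + eps)
```

Thresholding this ratio (or penalizing it in an objective) suppresses texture while preserving the main structural edges, which is the extraction behavior the title refers to.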


International Conference on Machine Learning | 2015

Deep Edge-Aware Filters

Li Xu; Jimmy S. J. Ren; Qiong Yan; Renjie Liao; Jiaya Jia


Neural Information Processing Systems | 2015

Shepard convolutional neural networks

Jimmy S. J. Ren; Li Xu; Qiong Yan; Wenxiu Sun

Collaboration


Dive into Qiong Yan's collaboration.

Top Co-Authors

Li Xu
The Chinese University of Hong Kong

Jiaya Jia
The Chinese University of Hong Kong

Jimmy S. J. Ren
City University of Hong Kong

Jianping Shi
The Chinese University of Hong Kong

Xiaoyong Shen
The Chinese University of Hong Kong

Chuan Wang
University of Hong Kong

Jiahao Pang
Hong Kong University of Science and Technology

Renjie Liao
The Chinese University of Hong Kong

Wenxiu Sun
Hong Kong University of Science and Technology