Network

Xiaoyun Yan's latest external collaborations at the country level.

Hotspot

Research topics in which Xiaoyun Yan is active.

Publication


Featured research published by Xiaoyun Yan.


Journal of Visual Communication and Image Representation | 2014

Moving cast shadow detection using online sub-scene shadow modeling and object inner-edges analysis

Jun Wang; Yuehuan Wang; Man Jiang; Xiaoyun Yan; Mengmeng Song

In this paper, we propose an adaptive and accurate moving cast shadow detection method employing online sub-scene shadow modeling and object inner-edges analysis for applications of static-camera video surveillance. To describe shadow appearance more accurately, the proposed method builds adaptive online shadow models for sub-scenes with different conditions of irradiance and reflectance. The online shadow models are learned by utilizing Gaussian functions to fit the significant peaks of accumulating histograms, which are calculated from Hue, Saturation and Intensity (HSI) difference of moving objects between background and foreground. Additionally, object inner-edges analysis is adopted to reject camouflages, which are misclassified foreground regions that are highly similar to shadows. Finally, the main shadow regions are expanded to recycle the misclassified shadow pixels based on local color constancy. The proposed algorithm can adaptively handle the shadow appearance changes and camouflages without prior information about illuminations and scenarios. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods.
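
As a rough illustration of the histogram-fitting step described in this abstract, the short Python sketch below fits a Gaussian to the dominant peak of a foreground/background intensity-ratio histogram. It is a minimal sketch, not the authors' implementation: the sub-scene partitioning, the separate HSI channels and the online update scheme are omitted, and the array names (`fg`, `bg`) and the SciPy dependency are assumptions for the example.

```python
# Minimal sketch: fit a Gaussian to the main peak of an accumulated
# foreground/background intensity-ratio histogram (one ingredient of the
# shadow model; sub-scenes, HSI channels and online updates are omitted).
import numpy as np
from scipy.optimize import curve_fit

def gaussian(x, amp, mu, sigma):
    return amp * np.exp(-0.5 * ((x - mu) / sigma) ** 2)

def fit_shadow_model(fg, bg, bins=64):
    """Fit a Gaussian to the dominant peak of the fg/bg intensity-ratio histogram."""
    ratio = fg.astype(np.float64) / (bg.astype(np.float64) + 1e-6)
    hist, edges = np.histogram(ratio, bins=bins, range=(0.0, 1.5))
    centers = 0.5 * (edges[:-1] + edges[1:])
    peak = np.argmax(hist)
    p0 = (hist[peak], centers[peak], 0.1)            # initial guess near the peak
    params, _ = curve_fit(gaussian, centers, hist, p0=p0, maxfev=5000)
    return params                                    # (amplitude, mean, std)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    bg = rng.integers(100, 200, size=(64, 64))
    fg = (bg * rng.normal(0.6, 0.05, size=bg.shape)).clip(0, 255)  # synthetic shadowed region
    print(fit_shadow_model(fg, bg))
```

Pixels whose fg/bg ratio falls within a few standard deviations of the fitted mean would then be labelled as candidate shadow.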


Journal of Visual Communication and Image Representation | 2017

Salient object detection via boosting object-level distinctiveness and saliency refinement

Xiaoyun Yan; Yuehuan Wang; Qiong Song; Kaiheng Dai

Many salient object detection approaches share the common drawback that they cannot uniformly highlight heterogeneous regions of salient objects, and thus, parts of the salient objects are not discriminated from background regions in a saliency map. In this paper, we focus on this drawback and accordingly propose a novel algorithm that more uniformly highlights the entire salient object as compared to many approaches. Our method consists of two stages: boosting the object-level distinctiveness and saliency refinement. In the first stage, a coarse object-level saliency map is generated based on boosting the distinctiveness of the object proposals in the test images, using a set of object-level features and the Modest AdaBoost algorithm. In the second stage, several saliency refinement steps are executed to obtain a final saliency map in which the boundaries of salient objects are preserved. Quantitative and qualitative comparisons with state-of-the-art approaches demonstrate the superior performance of our approach.
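
The first stage of this abstract lends itself to a compact sketch: score object proposals with a boosted classifier and accumulate the scores into a coarse object-level saliency map. The sketch below is hypothetical and simplified: Modest AdaBoost is not available in scikit-learn, so the standard AdaBoostClassifier stands in for it, the proposals and object-level features are random placeholders, and training and scoring on the same proposals is a shortcut the paper does not take.

```python
# Sketch of stage one only: boosted proposal scores accumulated into a
# coarse saliency map (AdaBoostClassifier used as a stand-in for Modest AdaBoost).
import numpy as np
from sklearn.ensemble import AdaBoostClassifier

def coarse_saliency(proposals, features, labels, image_shape):
    """proposals: list of (x, y, w, h); features: (N, D); labels: (N,) 0/1 placeholders."""
    clf = AdaBoostClassifier(n_estimators=100).fit(features, labels)
    scores = clf.predict_proba(features)[:, 1]        # object-level distinctiveness per proposal
    sal = np.zeros(image_shape, dtype=np.float64)
    for (x, y, w, h), s in zip(proposals, scores):
        sal[y:y + h, x:x + w] += s                    # accumulate proposal scores over their boxes
    return sal / (sal.max() + 1e-9)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    props = [(rng.integers(0, 40), rng.integers(0, 40), 20, 20) for _ in range(50)]
    feats = rng.normal(size=(50, 16))
    labels = rng.integers(0, 2, size=50)
    print(coarse_saliency(props, feats, labels, (64, 64)).shape)
```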


International Conference on Image Processing | 2014

Salient region detection via color spatial distribution determined global contrasts

Xiaoyun Yan; Yuehuan Wang; Man Jiang; Jun Wang

In this paper, we propose a novel salient region detection method via color spatial distribution determined global contrasts. First, the original image is preprocessed with a texture suppression approach and segmented into superpixels. After that, the color spatial distribution of all superpixels is computed. Then, based on the distribution values over the whole image and along the image boundaries, some superpixels are selected as foreground and background queries. Next, two global contrasts based on these queries are computed to produce two different saliency maps. Finally, the color spatial distribution and the two saliency maps are accumulated to generate the final saliency map. Our approach is evaluated on the MSRA-1000 dataset, and the experimental results demonstrate the superior performance of our method compared with eight state-of-the-art approaches.
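
A minimal sketch of the query-selection and global-contrast steps might look as follows, assuming the image has already been segmented into superpixels that are summarized by a mean color, a spatial-variance value and a boundary flag. The texture-suppression preprocessing and the exact accumulation rule of the paper are omitted; all names are placeholders.

```python
# Sketch: foreground/background queries selected from color spatial distribution,
# two global-contrast saliency maps computed and accumulated.
import numpy as np

def global_contrast(colors, query_idx):
    """Mean color distance of every superpixel to a set of query superpixels."""
    d = np.linalg.norm(colors[:, None, :] - colors[None, query_idx, :], axis=2)
    return d.mean(axis=1)

def saliency_from_queries(colors, spatial_variance, is_boundary, k=10):
    """colors: (N, 3) mean superpixel colors; spatial_variance, is_boundary: (N,) arrays."""
    fg_queries = np.argsort(spatial_variance)[:k]      # compactly distributed colors -> foreground
    bg_queries = np.flatnonzero(is_boundary)           # image-border superpixels -> background
    contrast_to_bg = global_contrast(colors, bg_queries)
    contrast_to_fg = global_contrast(colors, fg_queries)
    sal_fg = contrast_to_bg / (contrast_to_bg.max() + 1e-9)        # map 1: contrast to background queries
    sal_bg = 1.0 - contrast_to_fg / (contrast_to_fg.max() + 1e-9)  # map 2: similarity to foreground queries
    sal = sal_fg + sal_bg                                          # simple accumulation of the two maps
    return sal / (sal.max() + 1e-9)

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    n = 200
    colors = rng.uniform(0, 1, size=(n, 3))
    variance = rng.uniform(0, 1, size=n)
    boundary = rng.uniform(0, 1, size=n) < 0.2
    print(saliency_from_queries(colors, variance, boundary).shape)
```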


International Conference on Image Processing | 2016

Salient object detection by multi-level features learning determined sparse reconstruction

Xiaoyun Yan; Yuehuan Wang; Qiong Song; Kaiheng Dai

We propose a salient object detection algorithm via multi-level feature learning determined sparse reconstruction. There are three stages in our method. First, the test image is successively processed by segmentation and semantic information generation procedures. Second, three kinds of features are extracted at the semantic, global, and local levels for each superpixel to train a random forest regressor; the learned regression model is then used to generate an initial saliency map. Third, the ultimate detection result is produced using sparse reconstruction determined by the initial saliency map. Compared with most approaches, the proposed method has two obvious advantages. First, heterogeneous regions inside a salient object are often assigned similar saliency values in the saliency map. Second, there are far fewer false positives in our detection results. The superior performance of our method was demonstrated on four datasets against 12 state-of-the-art approaches.
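
The second stage (learning an initial saliency map with a random forest regressor) can be sketched directly with scikit-learn. The example below is a simplified illustration under assumed inputs: the semantic/global/local feature extraction and the final sparse-reconstruction stage are not reproduced, and the feature matrices are random placeholders.

```python
# Sketch of stage two: a random forest regressor maps per-superpixel features
# to an initial saliency value.
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def initial_saliency(train_feats, train_sal, test_feats):
    """train_feats: (M, D); train_sal: (M,) ground-truth saliency in [0, 1]; test_feats: (N, D)."""
    reg = RandomForestRegressor(n_estimators=200, random_state=0)
    reg.fit(train_feats, train_sal)
    return np.clip(reg.predict(test_feats), 0.0, 1.0)   # per-superpixel initial saliency

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    Xtr, ytr = rng.normal(size=(500, 24)), rng.uniform(size=500)
    Xte = rng.normal(size=(120, 24))
    print(initial_saliency(Xtr, ytr, Xte).shape)         # (120,)
```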


Remote Sensing | 2018

Remote Sensing Images Stripe Noise Removal by Double Sparse Regulation and Region Separation

Qiong Song; Yuehuan Wang; Xiaoyun Yan; Haiguo Gu

Stripe noise removal continues to be an active field of research in remote sensing image processing. Most existing approaches are prone to generating artifacts in extreme areas and removing stripe-like details. In this paper, a weighted double sparsity unidirectional variation (WDSUV) model is constructed to reduce this phenomenon. The WDSUV takes advantage of the sparsity of stripe noise in both the spatial domain and the gradient domain, and processes heavy-stripe areas, extreme areas and regular noise-corrupted areas with different strategies. The proposed model consists of two variation terms and two sparsity terms that can well exploit the intrinsic properties of stripe noise. Then, the alternating direction method of multipliers (ADMM) is employed to solve the optimization model in an alternating minimization scheme. Compared with state-of-the-art approaches, experimental results on both synthetic and real remote sensing data demonstrate that the proposed model achieves better destriping performance in terms of small-detail preservation, stripe noise estimation and artifact reduction.
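
One building block that ADMM solvers for l1-regularized destriping models rely on is the soft-thresholding (shrinkage) proximal operator. The sketch below shows that operator together with a deliberately crude column-statistics destriping demo on synthetic vertical stripes; it illustrates the operator only and is not the WDSUV algorithm.

```python
# Soft-thresholding proximal operator for the l1 terms, as used inside each
# ADMM iteration, plus a toy column-statistics destriping demo.
import numpy as np

def soft_threshold(x, tau):
    """prox_{tau*||.||_1}(x): elementwise shrinkage toward zero."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    clean = rng.uniform(0.4, 0.6, size=(128, 128))
    stripe = np.tile(rng.normal(0.0, 0.2, size=(1, 128)), (128, 1))   # constant vertical stripes
    noisy = clean + stripe
    # Column means minus the global mean give a rough stripe estimate; shrinking
    # it keeps only the significant (sparse) stripe offsets.
    est = soft_threshold(noisy.mean(axis=0, keepdims=True) - noisy.mean(), 0.02)
    destriped = noisy - est
    print("mean residual error:", np.abs(destriped - clean).mean())
```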


Ninth International Symposium on Multispectral Image Processing and Pattern Recognition (MIPPR2015) | 2015

An independent sequential maximum likelihood approach to simultaneous track-to-track association and bias removal

Qiong Song; Yuehuan Wang; Xiaoyun Yan; Dang Liu

In this paper, we propose an independent sequential maximum likelihood approach to jointly address track-to-track association and bias removal in multi-sensor information fusion systems. First, we enumerate all possible association hypotheses and estimate a bias for each of them. Then, we calculate the likelihood of each association after bias compensation. Finally, we choose the association with the maximum likelihood as the association result, and the corresponding bias estimate as the registration result. Considering the high false-alarm rate and interference, we adopt independent sequential association to calculate the likelihood. Simulation results show that the proposed method produces the correct association results and simultaneously estimates the bias precisely for a small number of targets in a multi-sensor fusion system.
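
The enumerate, estimate, evaluate loop described here can be illustrated with a small toy example for two sensors reporting 2-D track positions: for each candidate one-to-one association the bias is estimated as the mean offset between matched tracks, and a Gaussian likelihood is evaluated after compensation. The sequential processing and false-alarm handling of the paper are omitted, and the noise model and parameter values are assumptions.

```python
# Toy joint association and bias estimation: enumerate assignments, estimate a
# bias per assignment, keep the maximum-likelihood one.
import itertools
import numpy as np

def associate_with_bias(tracks_a, tracks_b, sigma=1.0):
    """tracks_a, tracks_b: (N, 2) track positions from two sensors. Returns (assignment, bias)."""
    n = len(tracks_a)
    best = (None, None, -np.inf)
    for perm in itertools.permutations(range(n)):
        matched_b = tracks_b[list(perm)]
        bias = (matched_b - tracks_a).mean(axis=0)            # bias estimate for this association
        residual = matched_b - tracks_a - bias                # residuals after bias compensation
        loglik = -0.5 * np.sum(residual ** 2) / sigma ** 2     # Gaussian log-likelihood (up to a constant)
        if loglik > best[2]:
            best = (perm, bias, loglik)
    return best[0], best[1]

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a = rng.uniform(0, 100, size=(5, 2))
    true_bias = np.array([3.0, -2.0])
    b = (a + true_bias + rng.normal(0, 0.3, size=a.shape))[rng.permutation(5)]
    perm, bias = associate_with_bias(a, b)
    print("estimated bias:", bias)
```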


International Conference on Image Processing | 2014

A sub-scene modeling framework for moving cast shadow detection

Jun Wang; Yuehuan Wang; Man Jiang; Xiaoyun Yan

In this paper, we propose an adaptive and accurate online sub-scene modeling framework for moving cast shadow detection in applications of static-camera video surveillance. To describe shadow appearance more accurately, the proposed method builds adaptive online shadow models for sub-scenes with different conditions of irradiance and reflectance. Additionally, in the correction process, object inner-edges analysis and shadow region expanding are adopted to reject shadow camouflages and recycle the misclassified shadow pixels respectively. The proposed algorithm can adaptively handle the shadow appearance changes and camouflages in both outdoor and indoor scenes without prior information about illuminations and scenarios. Experimental results demonstrate that the proposed method outperforms state-of-the-art methods.
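
As a rough reading of the inner-edges idea, genuine cast shadows preserve the background's texture, so a candidate shadow region containing many strong edges that are absent from the background is more likely a camouflaged foreground object. The sketch below encodes that heuristic only; OpenCV is an assumed dependency, the thresholds are arbitrary placeholders, and this is not the authors' correction process.

```python
# Heuristic camouflage check: count edges inside a candidate shadow region that
# do not already exist in the background model.
import cv2
import numpy as np

def looks_like_camouflage(frame_gray, bg_gray, region_mask, edge_ratio_thresh=0.05):
    """frame_gray, bg_gray: uint8 grayscale images; region_mask: boolean mask of one candidate region."""
    fg_edges = cv2.Canny(frame_gray, 50, 150) > 0
    bg_edges = cv2.Canny(bg_gray, 50, 150) > 0
    bg_dilated = cv2.dilate(bg_edges.astype(np.uint8), np.ones((3, 3), np.uint8)).astype(bool)
    inner = region_mask & ~bg_dilated                      # region pixels away from background edges
    new_edge_ratio = (fg_edges & inner).sum() / max(inner.sum(), 1)
    return new_edge_ratio > edge_ratio_thresh              # many new inner edges -> reject as camouflage

if __name__ == "__main__":
    bg = np.full((80, 80), 128, dtype=np.uint8)
    frame = bg.copy()
    frame[20:60, 20:60] = 60                               # darkened patch (shadow-like)
    frame[30:50, 30:50] = 200                              # textured object inside it
    mask = np.zeros((80, 80), dtype=bool)
    mask[20:60, 20:60] = True
    print(looks_like_camouflage(frame, bg, mask))
```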


Asian Conference on Pattern Recognition | 2013

Saliency Detection Using Color Spatial Variance Weighted Graph Model

Xiaoyun Yan; Yuehuan Wang; Mengmeng Song; Man Jiang

Saliency detection, a recently active research field in computer vision, has a wide range of applications, such as pattern recognition, image retrieval, adaptive compression and target detection. In this paper, we propose a saliency detection method based on a color spatial variance weighted graph model, which is designed to rely on a background prior. First, the original image is partitioned into small patches; we then apply the mean-shift clustering algorithm to these patches to obtain clustering centers that represent the main colors of the whole image. In the modeling stage, all patches and the clustering centers are denoted as nodes of a specific graph model. The saliency of each patch is defined as the weighted sum of the weights on the shortest paths from the patch to all clustering centers, with each shortest path weighted according to color spatial variance. Our saliency detection method is computationally efficient and outperforms state-of-the-art methods with higher precision and better recall rates when evaluated on the popular MSRA-1000 database.
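
A compact sketch of the graph model might look as follows, assuming the patches lie on a regular grid, that mean-shift has already produced a few dominant color centers, and that SciPy's shortest-path routine stands in for the paper's graph machinery. Edge weights are color differences between adjacent patches, and each center's contribution is weighted by the spatial variance of the patches assigned to it; the exact weighting used in the paper may differ.

```python
# Sketch: shortest-path distances on a patch grid graph, combined per dominant
# color with a spatial-variance weight, give a per-patch saliency score.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.csgraph import dijkstra

def patch_graph_saliency(patch_colors, grid_shape, centers):
    """patch_colors: (H*W, 3) mean colors of grid patches; centers: (K, 3) dominant colors."""
    h, w = grid_shape
    n = h * w
    adj = lil_matrix((n, n))
    for i in range(h):
        for j in range(w):
            u = i * w + j
            for di, dj in ((0, 1), (1, 0)):                # 4-connected grid neighbors
                if i + di < h and j + dj < w:
                    v = (i + di) * w + (j + dj)
                    wgt = np.linalg.norm(patch_colors[u] - patch_colors[v]) + 1e-6
                    adj[u, v] = adj[v, u] = wgt
    dist = dijkstra(adj.tocsr(), directed=False)           # all-pairs shortest paths

    labels = np.argmin(np.linalg.norm(patch_colors[:, None] - centers[None], axis=2), axis=1)
    ys, xs = np.divmod(np.arange(n), w)
    sal = np.zeros(n)
    for k in range(len(centers)):
        members = np.flatnonzero(labels == k)
        if members.size == 0:
            continue
        spread = np.var(ys[members]) + np.var(xs[members])  # color spatial variance weight
        sal += spread * dist[:, members].min(axis=1)        # shortest path to that dominant color
    return sal / (sal.max() + 1e-9)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    colors = rng.uniform(0, 1, size=(12 * 16, 3))
    centers = colors[rng.choice(12 * 16, size=4, replace=False)]
    print(patch_graph_saliency(colors, (12, 16), centers).shape)
```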


International Conference on Image Processing | 2018

Fusion of Template Matching and Foreground Detection for Robust Visual Tracking

Kaiheng Dai; Yuehuan Wang; Xiaoyun Yan; Yang Huo


International Conference on Image Processing | 2018

Soft Mask Correlation Filter for Visual Object Tracking

Yang Huo; Yuehuan Wang; Xiaoyun Yan; Kaiheng Dai

Collaboration


An overview of Xiaoyun Yan's collaborations.

Top Co-Authors

Yuehuan Wang, Huazhong University of Science and Technology
Kaiheng Dai, Huazhong University of Science and Technology
Man Jiang, Huazhong University of Science and Technology
Jun Wang, Huazhong University of Science and Technology
Qiong Song, Huazhong University of Science and Technology
Mengmeng Song, Huazhong University of Science and Technology
Dang Liu, Huazhong University of Science and Technology