Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Chunxia Xiao is active.

Publication


Featured research published by Chunxia Xiao.


The Visual Computer | 2012

Fast image dehazing using guided joint bilateral filter

Chunxia Xiao; Jiajia Gan

In this paper, we propose a fast single-image dehazing method based on filtering. The basic idea is to compute an accurate atmospheric veil that is not only smooth but also respects the depth information of the underlying image. We first obtain an initial estimate of the atmospheric scattering light through median filtering, then refine it by guided joint bilateral filtering to generate a new atmospheric veil that removes redundant texture information and recovers depth edge information. Finally, we solve for the scene radiance using the atmospheric attenuation model. Compared with existing state-of-the-art dehazing methods, our method achieves a better dehazing effect in distant scenes and in places where depth changes abruptly. Our method is fast, with complexity linear in the number of pixels of the input image; furthermore, since it can be performed in parallel, it can be further accelerated on the GPU, which makes it applicable to real-time requirements.
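The pipeline described above — estimate an atmospheric veil, smooth it, then invert the attenuation model I = J·t + A·(1 − t) — can be sketched minimally in Python. This is an illustrative simplification, not the authors' implementation: a plain median filter stands in for the guided joint bilateral refinement, and `A`, `omega`, `t_min`, and the window size are assumed parameters.

```python
import numpy as np

def dehaze(img, A=0.95, omega=0.9, t_min=0.1, win=7):
    """Minimal single-image dehazing sketch.

    img: float RGB image in [0, 1], shape (H, W, 3).
    A:   assumed global atmospheric light (hypothetical value).
    """
    # Rough atmospheric veil: per-pixel minimum over channels, then a
    # local median to suppress texture while keeping depth edges coarse.
    dark = img.min(axis=2)
    pad = win // 2
    padded = np.pad(dark, pad, mode='edge')
    H, W = dark.shape
    veil = np.empty_like(dark)
    for i in range(H):
        for j in range(W):
            veil[i, j] = np.median(padded[i:i + win, j:j + win])
    # Transmission from the attenuation model I = J*t + A*(1 - t).
    t = np.clip(1.0 - omega * veil / A, t_min, 1.0)
    # Invert the model to recover scene radiance J.
    return np.clip((img - A) / t[..., None] + A, 0.0, 1.0)
```

The double loop is O(N·win²) in pure Python; in practice the median would come from a vectorized filter, and the refinement step would use the guided joint bilateral filter the paper proposes.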


IEEE Transactions on Geoscience and Remote Sensing | 2014

Cloud Detection of RGB Color Aerial Photographs by Progressive Refinement Scheme

Qing Zhang; Chunxia Xiao

In this paper, we propose an automatic and effective cloud detection algorithm for color aerial photographs. Based on properties derived from observations and statistical results on a large number of color aerial photographs with cloud layers, we present a novel progressive refinement scheme for detecting clouds in color aerial photographs. We first construct a significance map that highlights the difference between cloud and noncloud regions. Based on the significance map and the proposed optimal threshold setting, we obtain a coarse cloud detection result which classifies the input aerial photograph into candidate cloud regions and noncloud regions. To accurately detect the cloud regions among the candidates, we then construct a robust detail map, derived from a multiscale bilateral decomposition, to guide the removal of noncloud regions from the candidate cloud regions. Finally, we perform guided feathering to achieve the final cloud detection result, which detects semitransparent cloud pixels around the boundaries of cloud regions. The proposed method is evaluated through both visual and quantitative comparisons, and the results show that it works well for cloud detection in color aerial photographs.
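The coarse detection stage can be illustrated with a toy significance map. The map below (brightness minus saturation, exploiting the fact that clouds tend to be bright and desaturated) and the mean-plus-one-standard-deviation threshold are assumptions for illustration; the paper's significance map and optimal threshold setting are more elaborate.

```python
import numpy as np

def coarse_cloud_mask(rgb):
    """Coarse cloud-candidate mask from a crude significance map.

    rgb: float image in [0, 1], shape (H, W, 3).
    """
    # Brightness (HSV value) and a simple saturation proxy.
    v = rgb.max(axis=2)
    s = v - rgb.min(axis=2)
    # Bright, desaturated pixels score high.
    sig = v - s
    # A simple global threshold stands in for the paper's optimal setting.
    thresh = sig.mean() + sig.std()
    return sig > thresh
```

On a synthetic image with a white patch over a saturated background, only the white (cloud-like) pixels exceed the threshold.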


IEEE Transactions on Visualization and Computer Graphics | 2011

Fast Exact Nearest Patch Matching for Patch-Based Image Editing and Processing

Chunxia Xiao; Meng Liu; Nie Yongwei; Zhao Dong

This paper presents an efficient exact nearest patch matching algorithm which accurately finds the most similar patch pairs between a source and a target image. Traditional patch matching algorithms treat each pixel/patch as an independent sample and build a hierarchical data structure, such as a kd-tree, to accelerate nearest patch finding. However, most of these approaches can only find approximate nearest patches and do not exploit the sequential overlap between patches. Hence, they are neither accurate in quality nor optimal in speed. By eliminating redundant similarity computation over the sequential overlap between patches, our method finds the exact nearest patch in brute-force style but reduces its running time complexity to linear in the patch size. Furthermore, relying on recent multicore graphics hardware, our method can be further accelerated by at least an order of magnitude (≥10×). This greatly improves performance and ensures that our method can be efficiently applied in an interactive editing framework for moderate-sized images and even video. To our knowledge, this is the fastest exact nearest patch matching method for high-dimensional patches, and its extra memory requirement is minimal. Comparisons with popular nearest patch matching methods in the experimental results demonstrate the merits of our algorithm.
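The core idea — reusing computation shared by overlapping candidate patches while remaining exact — can be hinted at with a small sketch. Expanding the SSD as ‖s‖² + ‖t‖² − 2⟨s, t⟩, the squared-norm term ‖s‖² is updated in O(1) as the window slides one pixel; the cross term is still computed directly here, so this is only a partial illustration and the paper's algorithm eliminates far more redundancy.

```python
import numpy as np

def exact_nearest_patch(src, tgt):
    """Exact nearest-patch search over a grayscale image (sketch).

    src: 2-D array; tgt: p x p patch. Returns the top-left (row, col)
    of the exactly closest patch under SSD.
    """
    p = tgt.shape[0]
    H, W = src.shape
    t2 = (tgt ** 2).sum()
    best, best_pos = np.inf, (0, 0)
    for i in range(H - p + 1):
        # Column sums of squares for all windows in this row band.
        col = (src[i:i + p, :] ** 2).sum(axis=0)
        s2 = col[:p].sum()          # norm of the leftmost window
        for j in range(W - p + 1):
            cross = (src[i:i + p, j:j + p] * tgt).sum()
            d = s2 + t2 - 2.0 * cross
            if d < best:
                best, best_pos = d, (i, j)
            if j + p < W:
                # Slide right: drop the leaving column, add the entering one.
                s2 += col[j + p] - col[j]
    return best_pos
```

Planting the target patch inside the source image recovers its exact location, since the search is exhaustive rather than approximate.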


IEEE Transactions on Visualization and Computer Graphics | 2013

Compact Video Synopsis via Global Spatiotemporal Optimization

Yongwei Nie; Chunxia Xiao; Hanqiu Sun; Ping Li

Video synopsis aims at providing condensed representations of the video data sets that can nowadays be easily captured by digital cameras, especially daily surveillance videos. Previous work in video synopsis usually moves active objects along the time axis, which inevitably causes collisions among the moving objects when the video is compressed heavily. In this paper, we propose a novel approach for compact video synopsis using a unified spatiotemporal optimization. Our approach globally shifts moving objects in both the spatial and temporal domains, shifting objects temporally to reduce the length of the video and shifting colliding objects spatially to avoid visible collision artifacts. Furthermore, using a multilevel patch relocation (MPR) method, the moving space of the original video is expanded into a compact background based on environmental content to fit the shifted objects. The shifted objects are finally composited with the expanded moving space to obtain a high-quality video synopsis, which is more condensed while remaining free of collision artifacts. Our experimental results show that the compact video synopsis we produce can be browsed quickly, preserves relative spatiotemporal relationships, and avoids motion collisions.


IEEE Transactions on Image Processing | 2015

Shadow Remover: Image Shadow Removal Based on Illumination Recovering Optimization

Ling Zhang; Qing Zhang; Chunxia Xiao

In this paper, we present a novel shadow removal system for single natural images as well as color aerial images using an illumination recovering optimization method. We first adaptively decompose the input image into overlapped patches according to the shadow distribution. Then, by building correspondences between shadow patches and lit patches based on texture similarity, we construct an optimized illumination recovering operator, which effectively removes the shadows and recovers the texture detail under the shadow patches. Based on coherent optimization processing among neighboring patches, we finally produce high-quality shadow-free results with consistent illumination. Our shadow removal system is simple and effective, and can process shadow images with rich texture types and nonuniform shadows. The illumination of the shadow-free results is consistent with that of the surrounding environment. We further present several shadow editing applications to illustrate the versatility of the proposed method.
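The patch-level illumination recovery can be caricatured as a mean/variance transfer from a texture-similar lit patch to a shadow patch. This is a minimal stand-in for the paper's optimized illumination recovering operator, which additionally enforces coherence among neighboring patches.

```python
import numpy as np

def recover_illumination(shadow_patch, lit_patch):
    """Match a shadow patch's mean and variance to a lit patch (sketch).

    Both arguments are float luminance patches; the pairing of shadow
    and lit patches by texture similarity is assumed done elsewhere.
    """
    mu_s, sd_s = shadow_patch.mean(), shadow_patch.std() + 1e-8
    mu_l, sd_l = lit_patch.mean(), lit_patch.std()
    # Normalize the shadow patch, then rescale to the lit statistics.
    return (shadow_patch - mu_s) * (sd_l / sd_s) + mu_l
```

After the transfer, the recovered patch has the lit patch's first- and second-order statistics while keeping the shadow patch's texture detail.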


IEEE Transactions on Visualization and Computer Graphics | 2011

Efficient Edit Propagation Using Hierarchical Data Structure

Chunxia Xiao; Yongwei Nie; Feng Tang

This paper presents a novel unified hierarchical structure for scalable edit propagation. Our method is based on the key observation that in edit propagation, appearance varies very smoothly in those regions where the appearance differs from the user-specified pixels. Uniform sampling in these regions leads to redundant computation. We propose a quadtree-based adaptive subdivision method such that more samples are selected in regions similar to the user-specified regions and fewer in those that differ. As a result, both the computation and the memory requirement are significantly reduced. In edit propagation, an edge-preserving propagation function is first built, and the full solution for all pixels can be computed by interpolating from the solution obtained on the adaptively subdivided domain. Furthermore, our approach can easily be extended to accelerate video edit propagation using an adaptive octree structure. To improve user interaction, we introduce several new Gaussian Mixture Model (GMM) brushes to find pixels similar to the user-specified regions. Compared with previous methods, our approach requires significantly less time and memory while achieving visually identical results. Experimental results demonstrate the efficiency and effectiveness of our approach on high-resolution photographs and videos.
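The adaptive subdivision idea can be sketched as a small recursive function: blocks whose appearance is already smooth contribute a single sample, while varied blocks are split into four children. The variance test and the `tol` threshold are illustrative assumptions, not the paper's actual subdivision criterion.

```python
import numpy as np

def adaptive_samples(img, x0, y0, size, tol=1e-3, min_size=2):
    """Quadtree-style adaptive sampling of a grayscale image (sketch).

    Returns (x, y) sample positions: one per smooth block, four
    recursive subdivisions per varied block.
    """
    block = img[y0:y0 + size, x0:x0 + size]
    if size <= min_size or block.var() < tol:
        # Smooth (or minimal) block: a single representative sample.
        return [(x0 + size // 2, y0 + size // 2)]
    h = size // 2
    pts = []
    for dy in (0, h):
        for dx in (0, h):
            pts += adaptive_samples(img, x0 + dx, y0 + dy, h, tol, min_size)
    return pts
```

A constant image yields one sample for the whole domain, while a textured image is subdivided — which is exactly the effect that reduces computation and memory in the propagation solve.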


Computer-aided Design | 2013

Efficient Feature-preserving Local Projection Operator for Geometry Reconstruction

Bin Liao; Chunxia Xiao; Liqiang Jin; Hongbo Fu

This paper proposes an efficient and Feature-preserving Locally Optimal Projection operator (FLOP) for geometry reconstruction. Our operator is bilateral weighted, taking both spatial and geometric feature information into consideration for feature-preserving approximation. We then present an accelerated FLOP operator based on the random sampling of the Kernel Density Estimate (KDE), which produces reconstruction results close to those generated using the complete point set data, to within a given accuracy. Additionally, we extend our approach to time-varying data reconstruction, called the Spatial–Temporal Locally Optimal Projection operator (STLOP), which efficiently generates temporally coherent and stable feature-preserving results. The experimental results show that the proposed algorithms are efficient and robust for feature-preserving geometry reconstruction on both static models and time-varying data sets.


IEEE Transactions on Circuits and Systems for Video Technology | 2014

Fast Closed-Form Matting Using a Hierarchical Data Structure

Chunxia Xiao; Meng Liu; Donglin Xiao; Zhao Dong; Kwan-Liu Ma

Image/video matting is one of the key operations in many image/video editing applications. Although previous methods can generate high-quality matting results, their high computational cost in processing high-resolution image and video data often limits their usability. In this paper, we present a unified acceleration method for closed-form image and video matting using a hierarchical data structure, which achieves an excellent compromise between quality and speed. We first apply a Gaussian KD tree to adaptively cluster the input high-dimensional image and video feature space into a low-dimensional feature space. Then, we solve the affinity-weighted Laplacian alpha matting in the reduced feature space. The final matting results are derived using detail-aware alpha interpolation. Our algorithm can be fully parallelized by exploiting advanced graphics hardware, which can further accelerate the matting computation. Our method accelerates existing methods by at least an order of magnitude with good quality, and also greatly reduces the memory consumption. This acceleration strategy is also extended to support other affinity-based matting approaches, which makes it a more general accelerating framework for a variety of matting methods. Finally, we apply the presented method to accelerate image and video dehazing, and image shadow detection and removal.
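The general acceleration pattern — cluster the high-dimensional feature space, solve only on cluster representatives, then scatter the result back to every pixel — can be sketched as follows. Plain grid quantization stands in for the Gaussian KD-tree, and `solve_fn` is a placeholder for the actual affinity-weighted matting solve; both names are hypothetical.

```python
import numpy as np

def downsample_solve_upsample(features, solve_fn, bins=16):
    """Cluster features, solve per cluster, interpolate back (sketch).

    features: (N, D) array with values in [0, 1).
    solve_fn: maps (K, D) cluster representatives to K scalars.
    """
    # Quantize each feature vector to a coarse grid cell (the paper
    # uses a Gaussian KD-tree for this clustering step).
    keys = np.floor(features * bins).astype(int)
    uniq, inverse = np.unique(keys, axis=0, return_inverse=True)
    # Solve only once per cluster, on the cell representatives.
    reps = uniq.astype(float) / bins
    coarse = solve_fn(reps)
    # Scatter the per-cluster results back to every original sample.
    return coarse[inverse]
```

The cost of the solve drops from N samples to the (much smaller) number of occupied clusters, at the price of a quantization error bounded by the cell size.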


Computer Graphics Forum | 2013

Efficient Shadow Removal Using Subregion Matching Illumination Transfer

Chunxia Xiao; Donglin Xiao; Ling Zhang; Lin Chen

This paper proposes a new shadow removal approach for a single natural input image using subregion matching illumination transfer. We first propose an effective and automatic shadow detection algorithm incorporating a global successive thresholding scheme and local boundary refinement. We then present a novel shadow removal algorithm that performs illumination transfer on matched subregion pairs between the shadow and non-shadow regions; this method can process complex images with different kinds of shadowed texture regions and illumination conditions. In addition, we develop an efficient shadow boundary processing method using alpha matte interpolation, which produces a seamless transition between the shadow and non-shadow regions. Experimental results demonstrate the capabilities of our algorithm in both shadow removal quality and performance.


Computer Graphics Forum | 2013

Fast Shadow Removal Using Adaptive Multi-Scale Illumination Transfer

Chunxia Xiao; Ruiyun She; Donglin Xiao; Kwan-Liu Ma

In this paper, we present a new method for removing shadows from images. First, shadows are detected by interactive brushing assisted with a Gaussian Mixture Model. Second, the detected shadows are removed using an adaptive illumination transfer approach that accounts for the reflectance variation of the image texture. The contrast and noise levels of the result are then improved with a multi-scale illumination transfer technique. Finally, any visible shadow boundaries in the image can be eliminated based on our Bayesian framework. We also extend our method to video data and achieve temporally consistent shadow-free results.

Collaboration


Dive into Chunxia Xiao's collaborations.

Top Co-Authors

Kwan-Liu Ma

University of California


Hanqiu Sun

The Chinese University of Hong Kong
