Publication


Featured research published by Ruiqin Xiong.


IEEE Transactions on Circuits and Systems for Video Technology | 2014

Image Restoration Using Joint Statistical Modeling in a Space-Transform Domain

Jian Zhang; Debin Zhao; Ruiqin Xiong; Siwei Ma; Wen Gao

This paper presents a novel strategy for high-fidelity image restoration by characterizing both local smoothness and nonlocal self-similarity of natural images in a unified statistical manner. The main contributions are three-fold. First, from the perspective of image statistics, a joint statistical modeling (JSM) in an adaptive hybrid space-transform domain is established, which offers a powerful mechanism for combining local smoothness and nonlocal self-similarity simultaneously to ensure a more reliable and robust estimation. Second, a new form of minimization functional for solving the image inverse problem is formulated using JSM under a regularization-based framework. Finally, in order to make JSM tractable and robust, a new Split Bregman-based algorithm is developed to efficiently solve the resulting severely underdetermined inverse problem, together with a theoretical proof of convergence. Extensive experiments on image inpainting, image deblurring, and mixed Gaussian plus salt-and-pepper noise removal verify the effectiveness of the proposed algorithm.
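
The Split Bregman machinery mentioned in the abstract can be illustrated with a generic transform-domain sparsity prior. Below is a minimal sketch, not the authors' JSM solver: the operators H/Ht, the fixed 2-D DCT standing in for the adaptive hybrid space-transform domain, and all step sizes are illustrative assumptions.

```python
import numpy as np
from scipy.fft import dctn, idctn

def soft_threshold(v, t):
    """Element-wise soft shrinkage, the proximal step for an L1 prior."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def split_bregman_restore(y, H, Ht, lam=0.05, mu=1.0, step=0.1, n_iter=50):
    """Approximately solve min_x ||H(x) - y||^2 + lam * ||W x||_1.

    H, Ht : callables applying a linear degradation operator and its adjoint
            (e.g. a blur); W is fixed here to an orthonormal 2-D DCT.
    """
    x = Ht(y)                      # initial estimate
    d = np.zeros_like(x)           # auxiliary variable, d ~ W x
    b = np.zeros_like(x)           # Bregman (dual) variable
    for _ in range(n_iter):
        # x-subproblem: a few gradient steps on the quadratic functional
        for _ in range(5):
            grad = Ht(H(x) - y) + mu * idctn(dctn(x, norm='ortho') - d + b, norm='ortho')
            x = x - step * grad
        # d-subproblem: shrinkage (sparsity prior) in the transform domain
        wx = dctn(x, norm='ortho')
        d = soft_threshold(wx + b, lam / mu)
        # Bregman update of the dual variable
        b = b + wx - d
    return x
```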


IEEE Journal on Emerging and Selected Topics in Circuits and Systems | 2012

Image Compressive Sensing Recovery via Collaborative Sparsity

Jian Zhang; Debin Zhao; Chen Zhao; Ruiqin Xiong; Siwei Ma; Wen Gao

Compressive sensing (CS) has drawn considerable attention as a joint sampling and compression approach. Its theory shows that when a signal is sparse enough in some domain, it can be decoded from far fewer measurements than Nyquist sampling theory suggests. One of the most challenging problems in CS is therefore to seek a domain in which a signal exhibits a high degree of sparsity and hence can be recovered faithfully. Most conventional CS recovery approaches, however, exploit a set of fixed bases (e.g., DCT, wavelet, and gradient domain) for the entirety of a signal, ignoring the nonstationarity of natural signals; they cannot achieve a sufficiently high degree of sparsity and thus yield poor rate-distortion performance. In this paper, we propose a new framework for image compressive sensing recovery via collaborative sparsity, which enforces local 2-D sparsity and nonlocal 3-D sparsity simultaneously in an adaptive hybrid space-transform domain, thereby substantially utilizing the intrinsic sparsity of natural images and greatly confining the CS solution space. In addition, an efficient augmented Lagrangian-based technique is developed to solve the resulting optimization problem. Experimental results on a wide range of natural images demonstrate the efficacy of the new CS recovery strategy.
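
As a rough illustration of the recovery loop (not the paper's augmented Lagrangian solver), the sketch below uses plain iterative shrinkage with a fixed 2-D DCT as a stand-in for the adaptive local 2-D plus nonlocal 3-D collaborative prior; the sensing matrix, block size, and parameters are arbitrary.

```python
import numpy as np
from scipy.fft import dctn, idctn

def cs_recover_block(y, Phi, shape, lam=0.02, step=0.3, n_iter=200):
    """Recover a flattened image block x from measurements y = Phi @ x."""
    x = Phi.T @ y                                   # back-projection initialization
    for _ in range(n_iter):
        # gradient step on the data-fidelity term ||Phi x - y||^2
        x = x - step * (Phi.T @ (Phi @ x - y))
        # enforce sparsity in the 2-D DCT domain (proxy for the collaborative prior)
        c = dctn(x.reshape(shape), norm='ortho')
        c = np.sign(c) * np.maximum(np.abs(c) - lam, 0.0)
        x = idctn(c, norm='ortho').ravel()
    return x.reshape(shape)

# toy usage: a 32x32 block sensed at a 25% subrate with a Gaussian matrix
rng = np.random.default_rng(0)
block = rng.random((32, 32))
Phi = rng.standard_normal((256, 1024)) / np.sqrt(1024)
recovered = cs_recover_block(Phi @ block.ravel(), Phi, (32, 32))
```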


IEEE Transactions on Image Processing | 2013

Compression Artifact Reduction by Overlapped-Block Transform Coefficient Estimation With Block Similarity

Xinfeng Zhang; Ruiqin Xiong; Xiaopeng Fan; Siwei Ma; Wen Gao

Block transform coded images usually suffer from annoying artifacts at low bit rates, caused by the coarse quantization of transform coefficients. In this paper, we propose a new method to reduce compression artifacts by overlapped-block transform coefficient estimation from nonlocal blocks. In the proposed method, the discrete cosine transform coefficients of each block are estimated by adaptively fusing two prediction values based on their reliabilities. One prediction is the quantized values of the coefficients decoded from the compressed bitstream, whose reliability is determined by the quantization steps. The other prediction is the weighted average of the coefficients in nonlocal blocks, whose reliability depends on the variance of the coefficients in these blocks. The weights are used to distinguish how effectively the coefficients in nonlocal blocks predict the original coefficients and are determined by block similarity in the transform domain. To solve the optimization problem, the overlapped blocks are divided into several subsets. Each subset contains nonoverlapped blocks covering the whole image and is optimized independently. Therefore, the overall optimization is reduced to a set of sub-optimization problems, which can be easily solved. Finally, we provide a strategy for parameter selection based on the compression level. Experimental results show that the proposed method can remarkably reduce compression artifacts and significantly improve both the subjective and objective quality of block transform coded images.
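
The fusion step can be read as inverse-variance weighting of two coefficient predictions. The sketch below is an illustration under standard assumptions (uniform-quantization noise variance q^2/12, an exponential similarity kernel); the paper's exact reliability model and weighting rules may differ.

```python
import numpy as np

def nonlocal_prediction(blocks_dct, ref_dct, h=10.0):
    """Weighted average of candidate blocks' DCT coefficients, weighted by
    transform-domain similarity to the current (reference) block."""
    stack = np.stack(blocks_dct)                            # (K, 8, 8)
    dists = np.array([np.sum((b - ref_dct) ** 2) for b in blocks_dct])
    w = np.exp(-dists / (h ** 2))
    w /= w.sum()
    pred = np.tensordot(w, stack, axes=1)                   # weighted mean
    var = np.tensordot(w, (stack - pred) ** 2, axes=1)      # weighted variance
    return pred, var

def fuse_coefficients(c_decoded, q_step, c_nonlocal, var_nonlocal):
    """Merge the dequantized coefficients with the nonlocal prediction,
    each weighted by the inverse of its (assumed) error variance."""
    var_decoded = (q_step ** 2) / 12.0          # uniform-quantizer noise variance
    w = var_nonlocal / (var_nonlocal + var_decoded + 1e-12)
    return w * c_decoded + (1.0 - w) * c_nonlocal
```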


IEEE Transactions on Image Processing | 2011

Image Interpolation Via Regularized Local Linear Regression

Xianming Liu; Debin Zhao; Ruiqin Xiong; Siwei Ma; Wen Gao; Huifang Sun

In this paper, we present an efficient image interpolation scheme based on regularized local linear regression (RLLR). On the one hand, we introduce a robust estimator of local image structure based on moving least squares, which handles statistical outliers more effectively than ordinary least squares based methods. On the other hand, motivated by recent progress on manifold-based semi-supervised learning, the intrinsic manifold structure is explicitly considered by making use of both measured and unmeasured data points. In particular, the geometric structure of the marginal probability distribution induced by unmeasured samples is incorporated as an additional locality-preserving constraint. The optimal model parameters can be obtained in closed form by solving a convex optimization problem. Experimental results demonstrate that our method outperforms existing methods in both objective and subjective visual quality over a wide range of test images.
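
A stripped-down analogue of the regression step (the manifold/locality-preserving term built from unmeasured samples is omitted): fit a weighted, regularized local linear model to measured neighbors and evaluate it at the missing pixel. The affine local model and kernel weights below are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def local_linear_predict(coords, values, query, weights, lam=1e-2):
    """coords  : (N, 2) positions of measured (low-resolution) neighbors
    values  : (N,)  their intensities
    query   : (2,)  position of the pixel to interpolate
    weights : (N,)  spatial weights, e.g. a Gaussian kernel as in moving LS
    """
    A = np.hstack([coords, np.ones((coords.shape[0], 1))])   # affine model [x, y, 1]
    W = np.diag(weights)
    # closed-form solution of the regularized weighted least-squares problem
    theta = np.linalg.solve(A.T @ W @ A + lam * np.eye(3), A.T @ W @ values)
    return np.array([query[0], query[1], 1.0]) @ theta
```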


International Symposium on Circuits and Systems | 2013

Improved total variation based image compressive sensing recovery by nonlocal regularization

Jian Zhang; Shaohui Liu; Ruiqin Xiong; Siwei Ma; Debin Zhao

Recently, total variation (TV) based minimization algorithms have achieved great success in compressive sensing (CS) recovery for natural images owing to the edge-preserving property of TV. However, TV is not able to recover fine details and textures, and often suffers from undesirable staircase artifacts. To reduce these effects, this paper presents an improved TV-based image CS recovery algorithm that introduces a new nonlocal regularization constraint into the CS optimization problem. The nonlocal regularization is built on the well-known nonlocal means (NLM) filter and takes advantage of self-similarity in images, which helps to suppress the staircase effect and restore fine details. Furthermore, an efficient augmented Lagrangian-based algorithm is developed to solve the combined TV and nonlocal regularization constrained problem. Experimental results demonstrate that the proposed algorithm achieves significant performance improvements over the state-of-the-art TV-based algorithm in both PSNR and visual perception.
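
For intuition only, a plug-and-play-style stand-in for the combined objective (not the paper's augmented Lagrangian algorithm): alternate a data-fidelity gradient step with off-the-shelf TV and nonlocal-means denoising steps. All parameter values are illustrative.

```python
import numpy as np
from skimage.restoration import denoise_tv_chambolle, denoise_nl_means

def tv_nlm_cs_recover(y, Phi, shape, step=0.3, n_iter=100):
    """y = Phi @ x.ravel(); recover x with alternating TV and NLM steps."""
    x = (Phi.T @ y).reshape(shape)
    for _ in range(n_iter):
        # gradient step on the data-fidelity term ||Phi x - y||^2
        x = x - step * (Phi.T @ (Phi @ x.ravel() - y)).reshape(shape)
        # TV step: remove noise while preserving edges
        x = denoise_tv_chambolle(x, weight=0.05)
        # nonlocal-means step: exploit self-similarity to suppress staircase
        # effects and restore fine detail
        x = denoise_nl_means(x, h=0.02, patch_size=5, patch_distance=6)
    return x
```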


IEEE Transactions on Circuits and Systems for Video Technology | 2012

Multiple Hypotheses Bayesian Frame Rate Up-Conversion by Adaptive Fusion of Motion-Compensated Interpolations

Hongbin Liu; Ruiqin Xiong; Debin Zhao; Siwei Ma; Wen Gao

Frame rate up-conversion (FRUC) improves the viewing experience of a video because the motion in a FRUC-constructed high frame-rate video looks smoother and more continuous. This paper proposes a multiple-hypothesis Bayesian FRUC scheme that estimates the intermediate frame with maximum a posteriori probability, in which both a temporal motion model and a spatial image model are incorporated into the optimization criterion. The image model describes the spatial structure of neighboring pixels, while the motion model describes the temporal correlation of pixels along motion trajectories. Instead of employing a single, uniquely optimal motion, multiple “optimal” motion trajectories are utilized to form a group of motion hypotheses. To obtain accurate estimates for the pixels in the missing intermediate frames, the motion-compensated interpolations generated by all these motion hypotheses are adaptively fused according to the reliability of each hypothesis. Numerical analysis reveals that this reliability (i.e., the variance of interpolation errors along the hypothesized motion trajectory) can be measured by the variation of the reference pixels along the motion trajectory. To obtain the multiple motion fields, a set of block-matching sizes is used and the motion fields are estimated by progressively reducing the size of the matching block. Experimental results show that the proposed method significantly improves both the objective and the subjective quality of the constructed high frame-rate video.
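
The fusion rule itself is easy to sketch (motion estimation with the progressively shrinking block-matching sizes is omitted): each hypothesis contributes its motion-compensated interpolation, weighted per pixel by the inverse of its reliability measure. A minimal sketch under those assumptions:

```python
import numpy as np

def fuse_hypotheses(interpolations, variances, eps=1e-6):
    """interpolations : list of HxW motion-compensated interpolations,
                     one per motion hypothesis
    variances      : list of HxW reliability maps, the variation of the
                     reference pixels along each hypothesized trajectory
                     (smaller = more reliable)
    """
    interp = np.stack(interpolations)             # (K, H, W)
    w = 1.0 / (np.stack(variances) + eps)         # inverse-variance weights
    w /= w.sum(axis=0, keepdims=True)             # normalize per pixel
    return (w * interp).sum(axis=0)               # fused intermediate frame
```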


Visual Communications and Image Processing | 2004

Exploiting temporal correlation with adaptive block-size motion alignment for 3D wavelet coding

Ruiqin Xiong; Feng Wu; Shipeng Li; Zixiang Xiong; Ya-Qin Zhang

This paper proposes an adaptive block-size motion alignment technique for 3D wavelet coding to further exploit temporal correlation across pictures. Similar to B pictures in traditional video coding, each macroblock can be motion-aligned in the forward and/or backward direction for the temporal wavelet decomposition. In each direction, a macroblock may select its partition from one of seven modes - 16x16, 8x16, 16x8, 8x8, 8x4, 4x8 and 4x4 - to allow accurate motion alignment. Furthermore, rate-distortion optimization criteria are proposed to select the motion mode, motion vectors and partition mode. Although the proposed technique greatly improves the accuracy of motion alignment, it does not directly bring a coding efficiency gain because of the smaller block sizes and additional block boundaries. Therefore, an overlapped block motion alignment is further proposed to cope with block boundaries and to suppress spatial high-frequency components. Experimental results show that the proposed adaptive block-size motion alignment with the overlapped block motion alignment can achieve up to a 1.0 dB gain in 3D wavelet video coding. Our 3D wavelet coder outperforms MC-EZBC on most sequences by 1-2 dB and is up to 1.5 dB better than H.264.
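
A toy illustration of the rate-distortion mode decision (the paper's actual criteria, lambda, and the way distortion and rate are measured are not reproduced here): pick the partition minimizing J = D + lambda * R.

```python
def select_partition(candidates, lam):
    """candidates : dict mapping a partition mode (e.g. '16x16', '8x8') to a
    (distortion, rate_bits) pair measured for that mode."""
    return min(candidates, key=lambda m: candidates[m][0] + lam * candidates[m][1])

# hypothetical numbers, for illustration only
best = select_partition({'16x16': (1200.0, 18), '16x8': (1100.0, 30),
                         '8x8': (950.0, 52)}, lam=4.0)
```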


International Symposium on Circuits and Systems | 2012

Image super-resolution via dual-dictionary learning and sparse representation

Jian Zhang; Chen Zhao; Ruiqin Xiong; Siwei Ma; Debin Zhao

Learning-based image super-resolution aims to reconstruct high-frequency (HF) details from a prior model trained on a set of high- and low-resolution image patches. In this paper, the HF component to be estimated is considered a combination of two parts: main high-frequency (MHF) and residual high-frequency (RHF), and we propose a novel image super-resolution method via dual-dictionary learning and sparse representation, which consists of main dictionary learning and residual dictionary learning to recover MHF and RHF, respectively. Extensive experimental results on test images validate that by employing the proposed two-layer progressive scheme, more image details can be recovered and much better results can be achieved than with state-of-the-art algorithms in terms of both PSNR and visual perception.
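
The two-layer recovery can be sketched per patch as below, assuming the coupled low/high-resolution dictionaries are already trained (training is not shown) and using orthogonal matching pursuit as the sparse coder; the paper's exact feature extraction and coding details may differ.

```python
import numpy as np
from sklearn.linear_model import OrthogonalMatchingPursuit

def sr_patch(lr_feature, D_low_main, D_high_main, D_low_res, D_high_res, k=8):
    """lr_feature         : feature vector extracted from a low-resolution patch
    D_low_* / D_high_* : coupled dictionaries (columns are atoms), assumed trained
    """
    omp = OrthogonalMatchingPursuit(n_nonzero_coefs=k, fit_intercept=False)
    # layer 1: code the LR feature and synthesize the main high-frequency part
    a1 = omp.fit(D_low_main, lr_feature).coef_
    mhf = D_high_main @ a1
    # layer 2: code what layer 1 missed and synthesize the residual high-frequency part
    a2 = omp.fit(D_low_res, lr_feature - D_low_main @ a1).coef_
    rhf = D_high_res @ a2
    return mhf + rhf        # estimated high-frequency detail for this patch
```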


Visual Communications and Image Processing | 2012

WaveCast: Wavelet based wireless video broadcast using lossy transmission

Xiaopeng Fan; Ruiqin Xiong; Feng Wu; Debin Zhao

Wireless video broadcasting is a popular application of mobile networks. However, traditional approaches offer limited support for accommodating users with diverse channel conditions. The recently proposed Softcast approach provides smooth multicast performance but is not very efficient at inter-frame compression. In this work, we propose a new video multicast approach: WaveCast. Unlike Softcast, WaveCast utilizes motion-compensated temporal filtering (MCTF) to exploit inter-frame redundancy and uses a conventional framework to transmit the motion information so that the motion vectors (MVs) can be reconstructed losslessly. Meanwhile, WaveCast transmits the transform coefficients in lossy mode and performs gracefully in multicast. In experiments, WaveCast outperforms Softcast by 2 dB in video PSNR at low channel SNR, and outperforms an H.264-based framework by up to 8 dB in broadcast.
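
Only the graceful-degradation idea shared with Softcast is sketched here, under heavy simplification: transform coefficients are sent as analog symbols over an AWGN channel and reconstructed with a linear MMSE scale at the receiver. MCTF, power allocation, and the lossless motion-vector path are omitted, and none of the values below come from the paper.

```python
import numpy as np

def analog_transmit(coeffs, channel_snr_db, rng=None):
    """Send transform coefficients as analog symbols over an AWGN channel and
    return their linear-MMSE reconstruction; quality degrades gracefully with SNR."""
    rng = np.random.default_rng(0) if rng is None else rng
    power = np.mean(coeffs ** 2)
    noise_var = power / (10.0 ** (channel_snr_db / 10.0))
    received = coeffs + rng.normal(0.0, np.sqrt(noise_var), coeffs.shape)
    return received * power / (power + noise_var)
```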


IEEE Transactions on Image Processing | 2011

Interpolation-Dependent Image Downsampling

Yongbing Zhang; Debin Zhao; Jian Zhang; Ruiqin Xiong; Wen Gao

Traditional methods for image downsampling aim to remove aliasing artifacts. However, their influence on the quality of the image later interpolated from the downsampled one is usually neglected. To tackle this problem, we propose interpolation-dependent image downsampling (IDID), in which interpolation is hinged to downsampling. Given an interpolation method, the goal of IDID is to obtain a downsampled image that minimizes the sum of squared errors between the input image and the one interpolated from the corresponding downsampled image. Using a least-squares formulation, the solution of IDID is derived as the inverse operator of upsampling. We also devise a content-dependent IDID for interpolation methods with varying interpolation coefficients. Numerous experimental results demonstrate the viability and efficiency of the proposed IDID.
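
The least-squares statement translates directly into code: for a linear interpolation (upsampling) operator U, the IDID result is the d minimizing ||x - U d||^2. The dense-matrix formulation below is a small-scale sketch; practical implementations would exploit the structure of U rather than building it explicitly.

```python
import numpy as np

def idid_downsample(x, U):
    """x : original image, flattened into a vector
    U : matrix applying the chosen interpolation to a flattened downsampled image
    Returns the downsampled image d minimizing ||x - U d||^2, i.e. the
    least-squares (pseudo-)inverse of the upsampling operator applied to x."""
    d, *_ = np.linalg.lstsq(U, x, rcond=None)
    return d

# toy 1-D example: U upsamples by a factor of 2 with linear interpolation
n = 4
U = np.zeros((2 * n, n))
for i in range(n):
    U[2 * i, i] = 1.0
    U[2 * i + 1, i] = 0.5
    U[2 * i + 1, (i + 1) % n] = 0.5      # periodic boundary, for simplicity
x = np.arange(2 * n, dtype=float)
d = idid_downsample(x, U)
```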

Collaboration


Dive into Ruiqin Xiong's collaborations.

Top Co-Authors

Feng Wu
University of Science and Technology of China

Xiaopeng Fan
Harbin Institute of Technology

Debin Zhao
Harbin Institute of Technology

Xinfeng Zhang
Nanyang Technological University