
Publication


Featured research published by Wenrui Hu.


Neurocomputing | 2015

Global Coupled Learning and Local Consistencies Ensuring for sparse-based tracking

Yehui Yang; Yuan Xie; Wensheng Zhang; Wenrui Hu; Yuanhua Tan

This paper presents a robust tracking algorithm by sparsely representing the object at both global and local levels. Accordingly, the algorithm is constructed from two complementary parts: a Global Coupled Learning (GCL) part and a Local Consistencies Ensuring (LCE) part. The global part is a discriminative model that exploits holistic features of the object via an over-complete global dictionary and a classifier, where the dictionary and classifier are learned jointly to construct an adaptive GCL part. In the LCE part, we explore the object's local features by sparsely coding object patches via a local dictionary; both temporal and spatial consistencies of the local patches are then enforced to refine the tracking results. Moreover, the GCL and LCE parts are integrated into a Bayesian framework to construct the final tracker. Experiments on fifteen challenging benchmark sequences demonstrate that the proposed algorithm is more effective and robust than ten alternative state-of-the-art trackers. Highlights: (1) we sparsely represent the object at both global and local levels for tracking, exploring the object's holistic and local information, respectively; (2) the global dictionary and classifier are learned jointly in the global part; (3) we define temporal and spatial consistencies among the object patches and refine the tracking result by enforcing these consistencies.
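The sparse representation at the heart of both parts can be illustrated with a minimal sketch. This is not the authors' solver; it assumes a generic ISTA iteration (the names `soft_threshold` and `sparse_code` are ours) for the standard lasso problem that dictionary-based representations rely on:

```python
import numpy as np

def soft_threshold(x, t):
    """Element-wise soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

def sparse_code(D, y, lam=0.05, n_iter=200):
    """Solve min_c 0.5*||D c - y||^2 + lam*||c||_1 with ISTA."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    c = np.zeros(D.shape[1])
    for _ in range(n_iter):
        grad = D.T @ (D @ c - y)
        c = soft_threshold(c - grad / L, lam / L)
    return c

rng = np.random.default_rng(0)
D = rng.standard_normal((20, 50))
D /= np.linalg.norm(D, axis=0)             # unit-norm dictionary atoms
c_true = np.zeros(50)
c_true[[3, 17, 41]] = [1.0, -2.0, 1.5]     # the signal uses only three atoms
y = D @ c_true
c_hat = sparse_code(D, y)
```

In a tracker, `y` would be a candidate (or template) feature vector and `D` the learned dictionary; the soft-thresholding step is what produces the sparsity of the code.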


Neurocomputing | 2014

Single image super-resolution using combined total variation regularization by split Bregman Iteration

Lin Li; Yuan Xie; Wenrui Hu; Wensheng Zhang

This paper addresses the problem of generating a high-resolution (HR) image from a single degraded low-resolution (LR) input image without any external training set. Due to the ill-posed nature of this problem, effective prior knowledge is needed to make it well posed. For this purpose, we propose a novel super-resolution (SR) method based on combined total variation regularization. First, we propose a new regularization term called steering kernel regression total variation (SKRTV), which exploits the local structural regularity of natural images. Second, another regularization term called non-local total variation (NLTV) is employed as a complementary term, which makes the most of the redundancy of similar patches in natural images. By combining the two complementary regularization terms, we propose a maximum a posteriori probability framework for SR reconstruction. Furthermore, split Bregman iteration is applied to implement the proposed model. Extensive experiments demonstrate the effectiveness of the proposed method.
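Split Bregman iteration can be sketched on a toy 1-D total variation denoising problem. The paper's model combines SKRTV and NLTV in 2-D; here plain anisotropic TV stands in, and the helper `split_bregman_tv1d` and its parameters `mu`, `lam` are illustrative assumptions, not the paper's settings:

```python
import numpy as np

def split_bregman_tv1d(f, mu=10.0, lam=5.0, n_iter=50):
    """1-D TV denoising, min_u (mu/2)||u - f||^2 + ||Du||_1, via split Bregman."""
    n = len(f)
    D = np.diff(np.eye(n), axis=0)          # forward-difference operator, (n-1) x n
    A = mu * np.eye(n) + lam * D.T @ D      # normal equations for the u-subproblem
    d = np.zeros(n - 1)                     # auxiliary variable, d ~ Du
    b = np.zeros(n - 1)                     # Bregman variable
    u = f.copy()
    for _ in range(n_iter):
        # u-step: quadratic solve; d-step: shrinkage; b-step: Bregman update
        u = np.linalg.solve(A, mu * f + lam * D.T @ (d - b))
        Du = D @ u
        d = np.sign(Du + b) * np.maximum(np.abs(Du + b) - 1.0 / lam, 0.0)
        b = b + Du - d
    return u

rng = np.random.default_rng(1)
clean = np.concatenate([np.zeros(40), np.ones(40), 0.3 * np.ones(40)])
noisy = clean + 0.1 * rng.standard_normal(120)
denoised = split_bregman_tv1d(noisy)
tv = lambda x: np.sum(np.abs(np.diff(x)))
```

The splitting decouples the non-smooth TV term (handled by cheap shrinkage) from the quadratic data term (a linear solve), which is the same mechanism the paper uses at image scale.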


IEEE Transactions on Systems, Man, and Cybernetics | 2017

Temporal Restricted Visual Tracking Via Reverse-Low-Rank Sparse Learning

Yehui Yang; Wenrui Hu; Yuan Xie; Wensheng Zhang; Tianzhu Zhang

An effective representation model, which aims to mine the most meaningful information in the data, plays an important role in visual tracking. Some recent particle-filter-based trackers achieve promising results by introducing the low-rank assumption into the representation model. However, their assumed low-rank structure of candidates limits robustness under severe challenges such as abrupt motion. To avoid this limitation, we propose a temporal restricted reverse-low-rank learning algorithm for visual tracking with the following advantages: 1) the reverse-low-rank model jointly represents target and background templates via candidates, which exploits the low-rank structure among consecutive target observations and enforces the temporal consistency of the target at a global level; 2) the appearance consistency may be broken when the target suffers sudden changes, so we propose a local constraint via the l1,2 mixed norm, which not only ensures the local consistency of target appearance but also tolerates sudden changes between two adjacent frames; and 3) to alleviate the influence of unreasonable representation values due to outlier candidates, an adaptive weighting scheme is designed to improve the robustness of the tracker. Evaluations on 26 challenging video sequences show the effectiveness and favorable performance of the proposed algorithm against 12 state-of-the-art visual trackers.
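Low-rank models of this kind typically rely on the proximal operator of the nuclear norm, i.e. singular value thresholding. The sketch below is generic, not the paper's algorithm; the function `svt` and the synthetic template stack are our illustrative assumptions:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the proximal operator of tau * ||.||_*.
    Shrinks each singular value by tau, zeroing the small ones."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s_shrunk = np.maximum(s - tau, 0.0)
    return U @ np.diag(s_shrunk) @ Vt

rng = np.random.default_rng(2)
# Noisy observations of a rank-2 matrix, standing in for a stack of
# consecutive target observations that share a low-rank structure.
L_true = rng.standard_normal((30, 2)) @ rng.standard_normal((2, 40))
M = L_true + 0.05 * rng.standard_normal((30, 40))
L_hat = svt(M, tau=1.0)
```

With the threshold above the noise level, the shrinkage suppresses the noise directions and recovers the shared low-rank component, which is what enforcing low rank buys a tracker.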


IEEE Transactions on Neural Networks | 2017

The Twist Tensor Nuclear Norm for Video Completion

Wenrui Hu; Dacheng Tao; Wensheng Zhang; Yuan Xie; Yehui Yang

In this paper, we propose a new low-rank tensor model based on the circulant algebra, namely, the twist tensor nuclear norm (t-TNN). The twist tensor denotes a three-way tensor representation that laterally stores 2-D data slices in order. On one hand, t-TNN convexly relaxes the tensor multirank of the twist tensor in the Fourier domain, which allows efficient computation using the fast Fourier transform. On the other hand, t-TNN equals the nuclear norm of the block circulant matricization of the twist tensor in the original domain, which extends the traditional matrix nuclear norm in a block circulant way. We test the t-TNN model on a video completion application that aims to fill in missing values, and the experimental results validate its effectiveness, especially for video recorded by a nonstationary panning camera. The block circulant matricization of the twist tensor can be transformed into a circulant block representation with nuclear norm invariance. This representation exploits the horizontal translation relationship between the frames in a video and endows the t-TNN model with a more powerful ability to reconstruct panning videos than existing state-of-the-art low-rank models.
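The equivalence between the Fourier-domain relaxation and the block circulant matricization can be checked numerically. The sketch below uses generic frontal slices and the unnormalized TNN convention (some definitions divide by the number of slices); `bcirc` and `tnn` are illustrative helper names:

```python
import numpy as np

def bcirc(X):
    """Block circulant matricization: block (i, j) is frontal slice (i-j) mod n3."""
    n1, n2, n3 = X.shape
    M = np.zeros((n1 * n3, n2 * n3))
    for i in range(n3):
        for j in range(n3):
            M[i * n1:(i + 1) * n1, j * n2:(j + 1) * n2] = X[:, :, (i - j) % n3]
    return M

def tnn(X):
    """Tensor nuclear norm: sum of nuclear norms of the Fourier-domain slices.
    The DFT block-diagonalizes the block circulant matrix, so this sum equals
    the nuclear norm of bcirc(X)."""
    Xf = np.fft.fft(X, axis=2)
    return sum(np.linalg.norm(Xf[:, :, k], 'nuc') for k in range(X.shape[2]))

rng = np.random.default_rng(5)
X = rng.standard_normal((5, 4, 3))
```

The Fourier route costs one FFT plus small per-slice SVDs instead of one SVD of a large block circulant matrix, which is the efficiency the abstract refers to.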


IEEE Transactions on Image Processing | 2017

Moving Object Detection Using Tensor-Based Low-Rank and Saliently Fused-Sparse Decomposition

Wenrui Hu; Yehui Yang; Wensheng Zhang; Yuan Xie

In this paper, we propose a new low-rank and sparse representation model for moving object detection. The model preserves the natural space-time structure of video sequences by representing them as three-way tensors and then performs the low-rank background and sparse foreground decomposition in the tensor framework. On one hand, we use the tensor nuclear norm to exploit the spatio-temporal redundancy of the background based on the circulant algebra. On the other hand, we use the newly designed saliently fused-sparse regularizer (SFS) to adaptively constrain the foreground with spatio-temporal smoothness. To refine existing foreground smoothness regularizers, the SFS incorporates local spatio-temporal geometric structure information into the tensor total variation by using the 3D locally adaptive regression kernel (3D-LARK). Moreover, the SFS uses the 3D-LARK to compute the space-time motion saliency of the foreground, which is combined with the l1 norm and improves the robustness of foreground extraction. Finally, we solve the proposed model with a guarantee of global optimality. Extensive experiments on challenging, well-known data sets demonstrate that our method significantly outperforms state-of-the-art approaches and works effectively on a wide range of complex scenarios.
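The tensor total variation that the SFS builds on can be sketched as plain anisotropic 3-D TV, i.e. without the 3D-LARK weighting and saliency that are the paper's refinement (`tv3d` is an illustrative helper, not the paper's regularizer):

```python
import numpy as np

def tv3d(X):
    """Anisotropic 3-D total variation: the l1 norm of forward differences
    along the height, width, and time axes of a video tensor."""
    return (np.abs(np.diff(X, axis=0)).sum()
            + np.abs(np.diff(X, axis=1)).sum()
            + np.abs(np.diff(X, axis=2)).sum())

rng = np.random.default_rng(3)
smooth = np.ones((8, 8, 5))                        # spatio-temporally smooth foreground
noisy = smooth + 0.1 * rng.standard_normal((8, 8, 5))
```

Penalizing this quantity favors foreground masks that are smooth in both space and time; the SFS additionally weights the differences with local structure and saliency information.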


IEEE Transactions on Image Processing | 2016

Removing Turbulence Effect via Hybrid Total Variation and Deformation-Guided Kernel Regression

Yuan Xie; Wensheng Zhang; Dacheng Tao; Wenrui Hu; Yanyun Qu; Hanzi Wang

It remains a challenge to simultaneously remove geometric distortion and space-time-varying blur in frames captured through a turbulent atmospheric medium. To remove, or at least reduce, these effects, we propose a new scheme to recover a latent image from observed frames by integrating a new hybrid total variation model and deformation-guided spatial-temporal kernel regression. The proposed scheme first constructs a high-quality reference image from the observed frames using low-rank decomposition. Then, to generate an improved registered sequence, the reference image is iteratively optimized using a variational model containing the combined regularization of local and non-local total variations. The proposed optimization algorithm solves this model efficiently with a convergence guarantee. Next, to reduce blur variation, deformation-guided spatial-temporal kernel regression is carried out to fuse the registered sequence into one image by introducing the concept of the near-stationary patch. Applying a blind deconvolution algorithm to the fused image produces the final output. Extensive experimental testing shows, both qualitatively and quantitatively, that the proposed method can effectively alleviate distortion and blur and recover details of the original scene compared with state-of-the-art methods.
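The reference-image step can be sketched with a truncated SVD over vectorized frames: the static scene is the dominant low-rank component of the frame stack, while per-frame turbulent perturbations average out. This only illustrates the idea, not the paper's decomposition; `reference_image` and the synthetic "turbulence" are our assumptions:

```python
import numpy as np

def reference_image(frames, rank=1):
    """Estimate a reference image as a low-rank approximation of the frame
    stack: each column of the data matrix is one vectorized frame."""
    h, w, t = frames.shape
    A = frames.reshape(h * w, t)
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    A_lr = U[:, :rank] @ np.diag(s[:rank]) @ Vt[:rank, :]
    # Average the low-rank columns back into a single reference image.
    return A_lr.mean(axis=1).reshape(h, w)

rng = np.random.default_rng(4)
scene = rng.random((16, 16))
# Frames = static scene plus small per-frame perturbations ("turbulence").
frames = scene[:, :, None] + 0.05 * rng.standard_normal((16, 16, 10))
ref = reference_image(frames, rank=1)
```

The resulting reference is closer to the underlying scene than any single observed frame, which is what makes it usable as a registration target.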


IEEE Transactions on Circuits and Systems for Video Technology | 2017

Discriminative Reverse Sparse Tracking via Weighted Multitask Learning

Yehui Yang; Wenrui Hu; Wensheng Zhang; Tianzhu Zhang; Yuan Xie

Multitask learning has shown great potential for visual tracking under a particle filter framework. However, recent multitask trackers, which exploit the similarity between all candidates by imposing group sparsity on the candidate representations, have limited robustness due to the diverse sampling of candidates. To deal with this issue, we propose a discriminative reverse sparse tracker via weighted multitask learning. Our positive and negative templates are retained from the target observations and the background, respectively. Here, the templates are reversely represented via the candidates, and the representation of each positive template is viewed as a single task. Compared with existing multitask trackers, the proposed algorithm has the following advantages. First, we regularize the target representations with the l2,1-norm to exploit the similarity shared by the positive templates, which is reasonable because of the target appearance consistency in the tracking process. Second, the valuable prior relationship between the candidates and the templates is introduced into the representation model by a weighted multitask learning scheme. Third, both target and background information are integrated to generate discriminative scores that enhance the proposed tracker. The experimental results on challenging sequences show that the proposed algorithm is effective and performs favorably against 12 state-of-the-art trackers.


international conference on internet multimedia computing and service | 2014

Image Denoising via Nonlocally Sparse Coding and Tensor Decomposition

Wenrui Hu; Yuan Xie; Wensheng Zhang; Limin Zhu; Yanyun Qu; Yuanhua Tan

Nonlocally sparse coding and collaborative filtering have proved very effective in image denoising, yielding state-of-the-art performance. In this paper, the two approaches are adaptively embedded into a Bayesian framework to perform denoising based on split Bregman iteration. In the proposed framework, a noise-free structure part of the latent image and a refined observation with less noise than the original observation are combined as constraints to remove noise iteration by iteration. To reconstruct the structure part, we use a sparse coding method based on the proposed nonlocally orthogonal matching pursuit (NLOMP) algorithm, which improves the robustness and accuracy of sparse coding in the presence of noise. To obtain the refined observation, collaborative filtering is applied based on Tucker tensor decomposition, which takes full advantage of multilinear data analysis. Experiments illustrate that the proposed denoising algorithm is highly competitive with leading algorithms such as BM3D and NCSR.


arXiv: Computer Vision and Pattern Recognition | 2015

A New Low-Rank Tensor Model for Video Completion

Wenrui Hu; Dacheng Tao; Wensheng Zhang; Yuan Xie; Yehui Yang


International Journal of Automation and Computing | 2015

A TV-l1 based nonrigid image registration by coupling parametric and non-parametric transformation

Wenrui Hu; Yuan Xie; Lin Li; Wensheng Zhang

Collaboration

Top co-authors of Wenrui Hu:

Wensheng Zhang, Chinese Academy of Sciences
Yuan Xie, Chinese Academy of Sciences
Yehui Yang, Chinese Academy of Sciences
Lin Li, Chinese Academy of Sciences
Tianzhu Zhang, Chinese Academy of Sciences
Limin Zhu, Chinese Academy of Sciences