
Publication


Featured research published by Lei He.


Journal of Visual Communication and Image Representation | 2014

Collaborative object tracking model with local sparse representation

Chengjun Xie; Jieqing Tan; Peng Chen; Jie Zhang; Lei He

Many existing visual tracking methods are based on sparse representation models, but most are either purely generative or purely discriminative, which makes tracking difficult when objects undergo large pose changes, illumination variation, or partial occlusion. To address this issue, in this paper we propose a collaborative object tracking model with local sparse representation. The key idea of our method is to develop a local sparse representation-based discriminative model (SRDM) and a local sparse representation-based generative model (SRGM). In the SRDM module, the appearance of a target is modeled by local sparse codes that serve as training data for a linear classifier to discriminate the target from the background. In the SRGM module, the appearance of the target is represented by a sparse coding histogram, and a sparse coding-based similarity measure is applied to compute the distance between the histograms of a target candidate and the target template. Finally, a collaborative similarity measure is proposed to fuse the two models, and the corresponding likelihood of each target candidate is fed into a particle filter framework to estimate the target state sequentially over time. Experiments on publicly available benchmark video sequences show that the proposed tracker is robust and effective.
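The collaborative fusion described in this abstract can be sketched as follows. This is a minimal toy illustration, not the paper's exact formulation: the function names (`discriminative_score`, `generative_similarity`, `collaborative_likelihood`), the histogram-intersection similarity, and the multiplicative fusion with a Gaussian likelihood are all illustrative assumptions.

```python
import math

def discriminative_score(weights, sparse_code, bias=0.0):
    """Toy SRDM stand-in: linear classifier confidence on a local sparse code."""
    return sum(w * c for w, c in zip(weights, sparse_code)) + bias

def generative_similarity(hist_candidate, hist_template):
    """Toy SRGM stand-in: histogram intersection between sparse-coding histograms."""
    return sum(min(a, b) for a, b in zip(hist_candidate, hist_template))

def collaborative_likelihood(weights, code, hist_c, hist_t, sigma=1.0):
    """Fuse both cues into a particle-filter likelihood (multiplicative fusion
    is an assumption; the paper defines its own collaborative measure)."""
    d = 1.0 / (1.0 + math.exp(-discriminative_score(weights, code)))  # squash to (0, 1)
    g = generative_similarity(hist_c, hist_t)
    return math.exp(-((1.0 - d * g) ** 2) / (2.0 * sigma ** 2))
```

In a particle filter, each candidate state would be weighted by such a likelihood before resampling.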


IET Computer Vision | 2013

Multiple instance learning tracking method with local sparse representation

Chengjun Xie; Jieqing Tan; Peng Chen; Jie Zhang; Lei He

When objects undergo large pose changes, illumination variation, or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and may even fail to track them. To address this issue, in this study the authors propose an online algorithm that combines multiple instance learning (MIL) with local sparse representation for tracking an object in a video. The key idea is to model the appearance of an object by local sparse codes that serve as training data for the MIL framework. First, local image patches of a target object are represented as sparse codes with an overcomplete dictionary, where the adaptive representation helps overcome partial occlusion. Then MIL trains a classifier on the sparse codes to discriminate the target from the background. Finally, the outputs of the trained classifier are fed into a particle filter framework to sequentially estimate the target state over time. In addition, to reduce the visual drift caused by accumulated errors when updating the dictionary and classifier, a two-step tracking method combining a static MIL classifier with a dynamic MIL classifier is proposed. Experiments on publicly available benchmark video sequences show that the proposed tracker is more robust and effective than competing trackers.
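The MIL step can be sketched with the noisy-OR bag model commonly used in online MIL tracking. This is a minimal sketch under that assumption, not the authors' exact classifier; the instances stand for local sparse codes of image patches, and a bag is positive if at least one instance is the target.

```python
import math

def instance_prob(w, x, b=0.0):
    """Probability that one instance (e.g. a local sparse code) is the target."""
    s = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1.0 / (1.0 + math.exp(-s))  # logistic model

def bag_prob(w, bag, b=0.0):
    """Noisy-OR pooling: a bag of instances is positive if at least one
    instance is positive, which tolerates poorly cropped training samples."""
    p_neg = 1.0
    for x in bag:
        p_neg *= 1.0 - instance_prob(w, x, b)
    return 1.0 - p_neg
```

During tracking, the candidate whose bag (or instance) probability is highest would be taken as the new target location.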


Signal Processing: Image Communication | 2015

Super-resolution by polar Newton-Thiele's rational kernel in centralized sparsity paradigm

Lei He; Jieqing Tan; Zhuo Su; Xiaonan Luo; Chengjun Xie

Many super-resolution reconstruction approaches use rectangular windows, which are ill-suited to arc-shaped regions of images. In view of this, a novel reconstruction algorithm is proposed in this paper, based on Newton-Thiele's rational interpolation by continued fractions in polar coordinates. To obtain better reconstructed results, we also present a model in which the Newton-Thiele rational interpolation scheme used to magnify images/videos is combined with a sparse representation scheme used to refine the reconstructed results. Extensive experiments on images and video sequences demonstrate that the new method produces high-quality resolution enhancement compared with state-of-the-art methods, in terms of both visual effect and PSNR.

Highlights: (1) A Newton-Thiele rational interpolation function in polar coordinates is proposed. (2) The nonlinear interpolation in polar coordinates is applied to image and video super-resolution reconstruction. (3) A novel super-resolution model using the polar Newton-Thiele rational kernel in a centralized sparsity paradigm is presented.
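The continued-fraction interpolation at the heart of this line of work can be illustrated in one dimension. The paper uses a bivariate Newton-Thiele kernel in polar coordinates; the sketch below is only the classical univariate Thiele rational interpolant built from inverse differences, which is the building block that scheme generalizes.

```python
def thiele_interpolate(xs, ys, t):
    """Evaluate Thiele's continued-fraction rational interpolant at t.

    Builds the inverse-difference table phi, then evaluates
    R(t) = phi[0][0] + (t - xs[0]) / (phi[1][1] + (t - xs[1]) / (phi[2][2] + ...)).
    """
    n = len(xs)
    phi = [[0.0] * n for _ in range(n)]
    phi[0] = list(ys)
    for j in range(1, n):
        for i in range(j, n):
            # j-th order inverse difference phi_j(x_i)
            phi[j][i] = (xs[i] - xs[j - 1]) / (phi[j - 1][i] - phi[j - 1][j - 1])
    # evaluate the continued fraction from the bottom up
    val = phi[n - 1][n - 1]
    for j in range(n - 2, -1, -1):
        val = phi[j][j] + (t - xs[j]) / val
    return val
```

For example, interpolating f(x) = 1/(1+x) at the nodes 0, 1, 2 reproduces the function exactly at unseen points, since a rational interpolant matches rational data where a polynomial would not.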


Machine Vision and Applications | 2014

Multi-scale patch-based sparse appearance model for robust object tracking

Chengjun Xie; Jieqing Tan; Peng Chen; Jie Zhang; Lei He

When objects undergo large pose changes, illumination variation, or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and may even fail to track them. To address this issue, in this paper we propose a multi-scale patch-based appearance model with sparse representation and provide an efficient scheme for collaboration between multi-scale patches encoded by sparse coefficients. The key idea of our method is to model the appearance of an object by patches at different scales, represented by sparse coefficients over dictionaries of matching scales. The model exploits both partial and spatial information of targets through the multi-scale patches. A similarity score for each candidate target is then fed into a particle filter framework to estimate the target state sequentially over time. Additionally, to reduce the visual drift caused by frequent model updates, we present a novel two-step tracking method that exploits both the ground-truth information of the target labeled in the first frame and the target obtained online with multi-scale patch information. Experiments on publicly available benchmark video sequences show that the similarity measure involving complementary information locates targets more accurately and that the proposed tracker is more robust and effective than others.
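The multi-scale patch decomposition can be sketched as below. This is a generic sliding-window extraction, assuming non-overlapping patches and a plain 2-D list as the image; the paper's actual patch layout, scales, and dictionaries may differ.

```python
def extract_patches(image, patch_size, step):
    """Slide a patch_size x patch_size window over a 2-D list with the given step."""
    h, w = len(image), len(image[0])
    patches = []
    for r in range(0, h - patch_size + 1, step):
        for c in range(0, w - patch_size + 1, step):
            patches.append([row[c:c + patch_size] for row in image[r:r + patch_size]])
    return patches

def multi_scale_patches(image, scales=(2, 4)):
    """Collect patches at several scales, as in a multi-scale appearance model;
    each scale's patches would then be encoded over a dictionary of that scale."""
    return {s: extract_patches(image, s, s) for s in scales}
```

Small-scale patches capture partial (occlusion-robust) detail, while large-scale patches keep the spatial layout of the target, which is the complementarity the abstract refers to.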


Discrete Dynamics in Nature and Society | 2018

An Adaptive Image Inpainting Method Based on Continued Fractions Interpolation

Lei He; Yan Xing; Kangxiong Xia; Jieqing Tan

In view of the drawback that texture is not prominent in most image inpainting algorithms, an adaptive inpainting algorithm based on continued fractions is proposed in this paper. To restore each damaged point, the information of the known pixels around it is used to interpolate its intensity. The proposed method consists of two steps: first, Thiele's rational interpolation combined with the mask image is used to adaptively interpolate the intensities of damaged points, giving an initial repaired image; then Newton-Thiele's rational interpolation is used to refine the initial repaired image into the final result. To demonstrate the superiority of the proposed algorithm, extensive experiments were conducted on damaged images. Both subjective evaluation and objective evaluation (comparison of Peak Signal-to-Noise Ratios, PSNRs) were used to assess the quality of the repaired images. The experimental results show that the proposed algorithm achieves better visual effect and higher PSNR than state-of-the-art methods.


Multimedia Tools and Applications | 2017

A novel super-resolution image and video reconstruction approach based on Newton-Thiele's rational kernel in sparse principal component analysis

Lei He; Jieqing Tan; Xing Huo; Chengjun Xie

In this paper, we propose a new approach for reconstructing images and video sequences, in which Newton-Thiele's vector-valued rational interpolation is combined with sparse principal component analysis. Based on the degradation model, the reconstruction proceeds in two steps. First, sparse principal component analysis and linear minimum mean square-error (LMMSE) estimation are used to remove noise from the degraded image. Then, Newton-Thiele's vector-valued rational interpolation is used to magnify the denoised result, which preserves details and texture regions well. With this reconstruction model based on the Newton-Thiele rational kernel in sparse principal component analysis, the final reconstructed results have both good visual quality and rich texture detail. To demonstrate the effectiveness and robustness of the proposed method, we conducted extensive experiments on images and video sequences; the results show that it produces better high-quality resolution results than state-of-the-art methods.
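The LMMSE denoising step, applied per transform (e.g. PCA) coefficient, reduces in its simplest scalar Gaussian form to the Wiener-style shrinkage below. This is a sketch under those assumptions, not the paper's full sparse-PCA pipeline; the function name and signature are illustrative.

```python
def lmmse_shrink(y, signal_var, noise_var, mean=0.0):
    """Scalar LMMSE estimate of a clean coefficient from its noisy observation y:
    x_hat = mean + signal_var / (signal_var + noise_var) * (y - mean).
    Applied coefficient-wise in a PCA basis, this shrinks noisy coefficients
    toward the mean in proportion to the local signal-to-noise ratio."""
    gain = signal_var / (signal_var + noise_var)
    return mean + gain * (y - mean)
```

When the noise variance dominates, the gain approaches zero and the coefficient is suppressed; when it is negligible, the observation passes through almost unchanged.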


ICDH '14 Proceedings of the 2014 5th International Conference on Digital Home | 2014

A Novel Two-Step Approach for the Super-resolution Reconstruction of Video Sequences

Lei He; Jieqing Tan; Chengjun Xie; Min Hu

In this paper, we propose a two-step approach for the super-resolution reconstruction of video sequences based on the degradation model. First, we use sparse principal component analysis and linear minimum mean square-error estimation to remove noise from the degraded video sequences. Second, we adopt Newton-Thiele's vector-valued rational interpolation, a nonlinear interpolation method, to magnify the results of the previous step. Our method is effective not only for gray video sequences but also for color video sequences. Experimental results on a series of video sequences demonstrate that our method achieves better visual effects than the methods presented in [11] and [16], especially in the details.


2012 Fourth International Conference on Digital Home | 2012

A Joint Object Tracking Framework with Incremental and Multiple Instance Learning

Chengjun Xie; Jieqing Tan; Linli Zhou; Lei He; Jie Zhang; Yingqiao Bu

When objects undergo large pose changes, illumination variation, or partial occlusion, most existing visual tracking algorithms tend to drift away from targets and may even fail to track them. To address this issue, in this paper we propose an online algorithm that combines Incremental Learning (IL) and Multiple Instance Learning (MIL) based on local sparse representation for tracking an object in a video. First, the target location is estimated using the online-updated IL model. Then, to reduce the visual drift due to errors accumulated while updating the IL subspace with the first-step results, a two-step tracking method combining a static IL model with a dynamic MIL model is proposed. We use the information in the static IL model, including the singular values and the eigen-template, to avoid visual drift when the tracked object shows no significant appearance change; otherwise, we use the dynamic MIL model to discriminate the target from the background. Experiments on publicly available benchmark video sequences show that the proposed tracker is more robust and effective than others.


Signal Processing: Image Communication | 2015

Corrigendum to "Super-resolution by polar Newton-Thiele's rational kernel in centralized sparsity paradigm"

Lei He; Jieqing Tan; Zhuo Su; Xiaonan Luo; Chengjun Xie


Journal of Electronic Imaging | 2018

Super-resolution reconstruction based on continued fractions interpolation kernel in the polar coordinates

Lei He; Jieqing Tan; Yan Xing; Min Hu; Chengjun Xie

Collaboration

Lei He's top co-authors:

Jieqing Tan, Hefei University of Technology
Chengjun Xie, Hefei University of Technology
Min Hu, Hefei University of Technology
Jie Zhang, Chinese Academy of Sciences
Xiaonan Luo, Sun Yat-sen University
Yan Xing, Hefei University of Technology
Zhuo Su, Sun Yat-sen University
Kangxiong Xia, Beijing Institute of Technology
Xing Huo, Hefei University of Technology