Publication


Featured research published by Jianru Xue.


International Conference on Computer Vision | 2011

Automatic salient object extraction with contextual cue

Le Wang; Jianru Xue; Nanning Zheng; Gang Hua

We present a method for automatically extracting a salient object from a single image, cast in an energy minimization framework. Unlike most previous methods, which leverage only appearance cues, we employ an auto-context cue as a complementary data term. Benefiting from a generic saliency model for bootstrapping, the segmentation of the salient object and the learning of the auto-context model are performed iteratively without any user intervention. Upon convergence, we obtain not only a clear separation of the salient object, but also an auto-context classifier that can be used to recognize the same type of object in other images. Our experiments on four benchmarks demonstrate the efficacy of the added contextual cue and show that our method compares favorably with state-of-the-art methods, some of which even rely on user interaction.
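
As a rough structural sketch of the alternation the abstract describes (not the authors' implementation), the snippet below bootstraps a binary segmentation from a generic saliency map, trains a stand-in context classifier (a hypothetical per-pixel logistic regression instead of the auto-context model), and feeds its predictions back into the data term; the pairwise smoothness term of the actual energy is omitted.

```python
# Structural sketch only: logistic regression stands in for the auto-context
# model, and simple thresholding stands in for energy minimization.
import numpy as np
from sklearn.linear_model import LogisticRegression

def extract_salient_object(features, saliency, n_rounds=5, thresh=0.5):
    """features: (n_pixels, d) per-pixel descriptors; saliency: (n_pixels,) bootstrap map in [0, 1]."""
    labels = (saliency > thresh).astype(int)          # bootstrap segmentation
    clf = LogisticRegression(max_iter=200)
    for _ in range(n_rounds):
        if labels.min() == labels.max():              # degenerate bootstrap, nothing to learn
            break
        clf.fit(features, labels)                     # learn the stand-in context model
        prob = clf.predict_proba(features)[:, 1]
        # fuse the appearance cue (saliency) with the context cue; the real
        # method minimizes an energy that also has a pairwise smoothness term
        new_labels = (0.5 * saliency + 0.5 * prob > thresh).astype(int)
        if np.array_equal(new_labels, labels):        # converged
            break
        labels = new_labels
    return labels, clf
```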


Journal of Visual Communication and Image Representation | 2010

Scaling iterative closest point algorithm for registration of m-D point sets

Shaoyi Du; Nanning Zheng; Lei Xiong; Shihui Ying; Jianru Xue

Point set registration is important for the calibration of multiple cameras, 3D reconstruction and recognition, etc. The iterative closest point (ICP) algorithm is accurate and fast for registering point sets at the same scale, but it cannot handle the case of different scales. This paper instead introduces a novel approach, the scaling iterative closest point (SICP) algorithm, which integrates a bounded scale matrix into the original ICP algorithm for scaling registration. At each iteration, the algorithm sets up correspondences between the two m-D point sets and then uses a simple, fast iterative procedure, combining the singular value decomposition (SVD) method with the properties of a parabola, to compute the scale, rotation and translation transformations. The SICP algorithm is proved to converge monotonically to a local minimum from any initial parameters. Hence, to reach the desired global minimum, good initial parameters are required; these are estimated in this paper by analyzing the covariance matrices of the point sets. The SICP algorithm is independent of shape representation and feature extraction, and is therefore general for scaling registration of m-D point sets. Experimental results demonstrate its efficiency and accuracy compared with the standard ICP algorithm.
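
The paper's bounded-scale SVD/parabola update is not reproduced here; as a minimal sketch of a scaling-ICP-style loop, the snippet below alternates nearest-neighbour correspondences with a closed-form similarity transform (Umeyama's method, used as a stand-in), which still shows the monotone error decrease the abstract mentions.

```python
# Minimal sketch of a scaling-ICP-style loop (not the authors' exact SICP).
import numpy as np
from scipy.spatial import cKDTree

def umeyama(src, dst):
    """Closed-form scale s, rotation R, translation t minimising ||dst - (s R src + t)||^2."""
    mu_s, mu_d = src.mean(0), dst.mean(0)
    xs, xd = src - mu_s, dst - mu_d
    cov = xd.T @ xs / len(src)
    U, D, Vt = np.linalg.svd(cov)
    S = np.eye(src.shape[1])
    if np.linalg.det(U @ Vt) < 0:          # keep a proper rotation
        S[-1, -1] = -1
    R = U @ S @ Vt
    s = np.trace(np.diag(D) @ S) / xs.var(axis=0).sum()
    t = mu_d - s * R @ mu_s
    return s, R, t

def scaling_icp(src, dst, n_iter=50, tol=1e-8):
    tree = cKDTree(dst)
    cur = src.copy()
    prev_err = np.inf
    for _ in range(n_iter):
        dist, idx = tree.query(cur)        # step 1: correspondences
        s, R, t = umeyama(cur, dst[idx])   # step 2: similarity transform
        cur = s * cur @ R.T + t            # step 3: apply it
        err = np.mean(dist ** 2)
        if abs(prev_err - err) < tol:      # monotone decrease -> local minimum
            break
        prev_err = err
    return cur
```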


European Conference on Computer Vision | 2017

Video Object Discovery and Co-Segmentation with Extremely Weak Supervision

Le Wang; Gang Hua; Rahul Sukthankar; Jianru Xue; Zhenxing Niu; Nanning Zheng

We present a spatio-temporal energy minimization formulation for simultaneous video object discovery and co-segmentation across multiple videos containing irrelevant frames. Our approach overcomes a limitation of most existing video co-segmentation methods, namely that they perform poorly on practical videos in which the target objects are absent from many frames. Our formulation incorporates a spatio-temporal auto-context model, which is combined with appearance modeling for superpixel labeling. The superpixel-level labels are propagated to the frame level through a multiple instance boosting algorithm with spatial reasoning, based on which frames containing the target object are identified. Our method only needs to be bootstrapped with frame-level labels for a few video frames (usually 1 to 3) indicating whether they contain the target object. Extensive experiments on four datasets validate the efficacy of the proposed method: 1) object segmentation from a single video on the SegTrack dataset, 2) object co-segmentation from multiple videos on a video co-segmentation dataset, and 3) joint object discovery and co-segmentation from multiple videos containing irrelevant frames on the MOViCS dataset and on XJTU-Stevens, a new dataset introduced in this paper. The proposed method compares favorably with the state of the art in all of these experiments.


IEEE Transactions on Systems, Man, and Cybernetics, Part B | 2008

Tracking Multiple Visual Targets via Particle-Based Belief Propagation

Jianru Xue; Nanning Zheng; Jason Geng; Xiaopin Zhong

Multiple-target tracking in video (MTTV) presents a technical challenge in video surveillance applications. In this paper, we formulate the MTTV problem using dynamic Markov network (DMN) techniques. Our model consists of three coupled Markov random fields: 1) a field for the joint state of the multiple targets; 2) a binary random process for the existence of each individual target; and 3) a binary random process for the occlusion between each pair of adjacent targets. To make inference tractable, we introduce two robust functions that eliminate the two binary processes. We then propose a novel belief propagation (BP) algorithm, called particle-based BP, and embed it into a Markov chain Monte Carlo approach to obtain the maximum a posteriori estimate in the DMN. With a stratified sampler, we incorporate information from a learned bottom-up detector (e.g., a support-vector-machine-based classifier) and the motion model of the target into the message propagation. Other low-level visual cues, such as motion and shape, can easily be incorporated into our framework to obtain better tracking results. We have performed extensive experimental verification, and the results suggest that our method is comparable to state-of-the-art multitarget tracking methods in all the cases we tested.
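
The full dynamic Markov network with particle-based belief propagation is too involved for a short sketch; the toy bootstrap particle filter below, for a single one-dimensional target, only illustrates how a belief is carried by weighted particles and re-weighted by an observation likelihood, which is the representation the paper's messages build on. All model parameters here are made up for illustration.

```python
# Toy bootstrap particle filter step (stand-in illustration, not the paper's
# particle-based BP): random-walk motion model, Gaussian observation model,
# and resampling when the effective sample size collapses.
import numpy as np

def particle_filter_step(particles, weights, observation,
                         motion_std=1.0, obs_std=2.0, rng=np.random):
    # propagate particles through a random-walk motion model
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    # re-weight by a Gaussian observation likelihood
    weights = weights * np.exp(-0.5 * ((observation - particles) / obs_std) ** 2)
    weights = weights / weights.sum()
    # resample when the effective sample size drops below half the particle count
    if 1.0 / np.sum(weights ** 2) < 0.5 * len(particles):
        idx = rng.choice(len(particles), size=len(particles), p=weights)
        particles = particles[idx]
        weights = np.full(len(particles), 1.0 / len(particles))
    return particles, weights
```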


IEEE Transactions on Information Forensics and Security | 2015

A Visual Model-Based Perceptual Image Hash for Content Authentication

Xiaofeng Wang; Kemu Pang; Xiaorui Zhou; Yang Zhou; Lu Li; Jianru Xue

Perceptual image hashing has been widely investigated in an attempt to solve the problems of image content authentication and content-based image retrieval. In this paper, we combine statistical analysis methods and visual perception theory to develop a truly perceptual image hash method for content authentication. To achieve perceptual robustness and perceptual sensitivity, the proposed method uses Watson's visual model to extract visually sensitive features that play an important role in how humans perceive image content. We then generate a robust perceptual hash code by combining image-block-based features and key-point-based features. The proposed method achieves a tradeoff between perceptual robustness, to tolerate content-preserving manipulations and a wide range of geometric distortions, and perceptual sensitivity, to detect malicious tampering. Furthermore, it can detect compromised image regions. Compared with state-of-the-art schemes, the proposed method obtains better overall performance in content-based image tampering detection and localization.
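
Feature extraction with Watson's visual model and the block/key-point features is not reproduced here; the sketch below only illustrates the final authentication decision, under the common convention of comparing two binary perceptual hashes by normalised Hamming distance, with a made-up threshold.

```python
# Sketch of the authentication decision only: content-preserving operations
# should keep the hash distance small, malicious tampering should not.
import numpy as np

def hamming_distance(h1, h2):
    h1, h2 = np.asarray(h1, bool), np.asarray(h2, bool)
    return np.count_nonzero(h1 ^ h2) / h1.size

def is_authentic(hash_received, hash_reference, threshold=0.15):
    # threshold is an assumed value, chosen per application
    return hamming_distance(hash_received, hash_reference) <= threshold
```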


Signal Processing: Image Communication | 2011

An integrated visual saliency-based watermarking approach for synchronous image authentication and copyright protection

Lihua Tian; Nanning Zheng; Jianru Xue; Ce Li; Xiaofeng Wang

This paper proposes an integrated visual saliency-based watermarking approach, which can be used for both synchronous image authentication and copyright protection. First, regions of interest (ROIs), which are not of a fixed size and capture the most important information in an image, are extracted automatically using a proto-object-based saliency attention model. Second, to resist common signal processing attacks, an improved quantization method is employed to embed the copyright information into the DCT coefficients of each ROI. Finally, the edge map of an ROI is chosen as the fragile watermark and is embedded into the DWT domain of the watermarked image to further resist tampering attacks. Using ROI-based visual saliency as a bridge, the proposed method achieves image authentication and copyright protection synchronously while preserving much more robust information. Experimental results on standard benchmarks demonstrate that, compared with state-of-the-art watermarking schemes, the proposed method is more robust to white noise, filtering, and JPEG compression attacks. Furthermore, it can effectively detect tampering and locate forgeries.
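
The paper's improved quantization method, ROI extraction, and DWT fragile watermark are not shown; as a stand-in, the sketch below embeds and extracts a single bit in a DCT coefficient with plain quantization index modulation (QIM), with an assumed step size.

```python
# Plain QIM embedding/extraction of one bit in a single DCT coefficient,
# used here only as a stand-in for the paper's improved quantization scheme.
import numpy as np

def qim_embed(coeff, bit, delta=8.0):
    # snap the coefficient to the quantiser lattice selected by the bit
    offset = delta / 2 if bit else 0.0
    return np.round((coeff - offset) / delta) * delta + offset

def qim_extract(coeff, delta=8.0):
    # decide which lattice the (possibly attacked) coefficient is closer to
    d0 = abs(coeff - np.round(coeff / delta) * delta)
    d1 = abs(coeff - (np.round((coeff - delta / 2) / delta) * delta + delta / 2))
    return int(d1 < d0)
```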


IEEE Transactions on Intelligent Transportation Systems | 2015

Efficient Sampling-Based Motion Planning for On-Road Autonomous Driving

Liang Ma; Jianru Xue; Kuniaki Kawabata; Jihua Zhu; Chao Ma; Nanning Zheng

This paper introduces an efficient motion planning method for on-road driving of autonomous vehicles, based on the rapidly exploring random tree (RRT) algorithm. RRT is an incremental sampling-based algorithm widely used to solve planning problems for mobile robots. However, due to meandering paths, inaccurate terminal states, and slow exploration, it is often inefficient in applications such as autonomous driving. To address these issues, and considering the realistic context of on-road autonomous driving, we propose a fast RRT algorithm that introduces a rule-template set based on traffic scenes and an aggressive extension strategy for the search tree. Both improvements lead to a faster and more accurate RRT toward the goal state compared with the basic RRT algorithm. Meanwhile, a model-based prediction post-processing step is adopted, by which the generated trajectory can be further smoothed and a feasible control sequence for the vehicle obtained. Furthermore, in environments with dynamic obstacles, the fast RRT algorithm can be integrated with the configuration-time space to improve the quality of the planned trajectory and of replanning. Extensive experimental results illustrate that our method is fast and efficient in solving planning queries for on-road autonomous driving and demonstrate its superior performance over previous approaches.
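
For orientation only, the snippet below is a plain goal-biased 2-D RRT, without the paper's rule-template set, aggressive extension strategy, or model-based post-processing; the obstacle check is a stub to keep the sketch self-contained.

```python
# Basic goal-biased RRT in 2-D (illustrative baseline, not the paper's fast RRT).
import random, math

class Node:
    def __init__(self, x, y, parent=None):
        self.x, self.y, self.parent = x, y, parent

def collision_free(a, b):
    return True  # placeholder: check the segment a->b against obstacles

def rrt(start, goal, x_max, y_max, step=1.0, goal_bias=0.1, max_iter=5000):
    nodes = [Node(*start)]
    for _ in range(max_iter):
        # sample, occasionally steering towards the goal
        if random.random() < goal_bias:
            qx, qy = goal
        else:
            qx, qy = random.uniform(0, x_max), random.uniform(0, y_max)
        # extend from the nearest node in the tree
        near = min(nodes, key=lambda n: (n.x - qx) ** 2 + (n.y - qy) ** 2)
        theta = math.atan2(qy - near.y, qx - near.x)
        new = Node(near.x + step * math.cos(theta),
                   near.y + step * math.sin(theta), parent=near)
        if not collision_free((near.x, near.y), (new.x, new.y)):
            continue
        nodes.append(new)
        if math.hypot(new.x - goal[0], new.y - goal[1]) < step:
            path = []
            while new:                 # walk parents back to the start
                path.append((new.x, new.y))
                new = new.parent
            return path[::-1]
    return None
```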


IEEE Transactions on Image Processing | 2012

PSF Estimation via Gradient Domain Correlation

Wei Hu; Jianru Xue; Nanning Zheng

This paper proposes an efficient method for estimating the point spread function (PSF) of a blurred image using the spatial correlation of image gradients. A patch-based image degradation model is proposed for estimating the sample covariance matrix of the gradient-domain image. Based on the fact that the gradients of clean natural images are approximately uncorrelated with each other, we estimate the autocorrelation function of the PSF from the covariance matrix of the blurred image's gradients using the proposed patch-based degradation model. The PSF is then computed with a phase retrieval technique to remove the ambiguity introduced by the missing phase. Experimental results show that the proposed method significantly reduces the computational burden of PSF estimation compared with existing methods, while yielding comparable blur kernels.
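
A minimal sketch of the core observation, under the assumption that clean-image gradients are approximately white: the (circular, FFT-based) autocorrelation of the blurred image's gradients then approximates the autocorrelation of the PSF up to scale. The phase-retrieval step that recovers the PSF itself is omitted.

```python
# Estimate (roughly) the PSF autocorrelation from the blurred image's gradients.
import numpy as np

def gradient_autocorrelation(blurred, patch=64):
    gx, gy = np.gradient(blurred.astype(float))
    acf = np.zeros((patch, patch))
    for g in (gx, gy):
        G = np.fft.fft2(g)
        # Wiener-Khinchin: autocorrelation = inverse FFT of the power spectrum
        r = np.real(np.fft.ifft2(np.abs(G) ** 2))
        r = np.fft.fftshift(r)
        c0, c1 = np.array(r.shape) // 2
        acf += r[c0 - patch // 2: c0 + patch // 2,
                 c1 - patch // 2: c1 + patch // 2]
    return acf / acf.max()   # normalised estimate of the PSF autocorrelation
```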


Pattern Recognition | 2013

Automatic salient object extraction with contextual cue and its applications to recognition and alpha matting

Jianru Xue; Le Wang; Nanning Zheng; Gang Hua

A method for automatically extracting a salient object from a single image is presented in this paper. The proposed method is cast in an energy minimization framework. Unlike most previous methods, which leverage only appearance cues, an auto-context cue is used as a complementary data term. Benefiting from a generic saliency model for bootstrapping, the segmentation of the salient object and the learning of the auto-context model are performed iteratively without any user intervention. Upon convergence, the method outputs not only a clear separation of the salient object, but also an auto-context classifier that can be used to recognize the same type of object in other images. Our experiments on four benchmarks demonstrate the efficacy of the added contextual cue and show that our method compares favorably with state-of-the-art methods, some of which even rely on user interaction. Furthermore, we present initial recognition results from the induced auto-context model and show that the segmentation produced by our approach can serve as a good initialization for alpha matting.


IEEE Transactions on Image Processing | 2011

Proto-Object Based Rate Control for JPEG2000: An Approach to Content-Based Scalability

Jianru Xue; Ce Li; Nanning Zheng

The JPEG2000 system provides scalability with respect to quality, resolution, and color component in the transfer of images; however, scalability with respect to semantic content is still lacking. We propose a biologically plausible, salient-region-based bit allocation mechanism within the JPEG2000 codec to add scalability with respect to semantic content. First, an input image is segmented into several salient proto-objects (regions that likely contain a semantically meaningful physical object) and background regions (regions that contain no object of interest) by modeling visual focus of attention on salient proto-objects. Then, a novel rate control scheme distributes a target bit rate to each individual region according to its saliency and constructs quality layers for the proto-objects to allow more precise truncation, comparable to the original quality layers in the standard. Empirical results show that the suggested approach adds content scalability to the JPEG2000 system, as well as the ability to selectively encode, decode, and manipulate each individual proto-object in an image, with only slight modifications to the JPEG2000 standard. Furthermore, the proposed rate control approach reduces computational complexity and memory usage while maintaining image quality at a level comparable to the conventional post-compression rate-distortion (PCRD) optimal truncation algorithm for JPEG2000.
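
As a toy illustration of saliency-weighted bit allocation (not the actual JPEG2000 codec integration), the sketch below splits a target bit budget across proto-object and background regions in proportion to their saliency and size; all numbers are made up.

```python
# Toy saliency-weighted bit allocation across image regions.
def allocate_bits(total_bits, regions):
    """regions: list of (name, saliency, n_pixels); returns bits per region."""
    weights = [sal * n for _, sal, n in regions]
    total_w = sum(weights)
    return {name: total_bits * w / total_w
            for (name, _, _), w in zip(regions, weights)}

# Example: a salient proto-object gets a much higher bit rate per pixel
# than the background, even though it covers fewer pixels.
print(allocate_bits(100_000, [("proto_object", 0.9, 20_000),
                              ("background", 0.1, 80_000)]))
```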

Collaboration


Dive into Jianru Xue's collaborations.

Top Co-Authors

Nanning Zheng (Xi'an Jiaotong University)
Xuguang Lan (Xi'an Jiaotong University)
Shaoyi Du (Xi'an Jiaotong University)
Ce Li (Lanzhou University of Technology)
Shanmin Pang (Xi'an Jiaotong University)
Jihua Zhu (Xi'an Jiaotong University)
Liang Ma (Xi'an Jiaotong University)
Zhiqiang Tian (Xi'an Jiaotong University)
Xiaopin Zhong (Xi'an Jiaotong University)