
Publication


Featured research published by Jianbing Shen.


Computer Vision and Pattern Recognition | 2015

Saliency-aware geodesic video object segmentation

Wenguan Wang; Jianbing Shen; Fatih Porikli

We introduce an unsupervised, geodesic distance based, salient video object segmentation method. Unlike traditional methods, our method incorporates saliency as a prior for the object via the computation of robust geodesic measurements. We consider two discriminative visual features, spatial edges and temporal motion boundaries, as indicators of foreground object locations. We first generate frame-wise spatiotemporal saliency maps using geodesic distance from these indicators. Building on the observation that foreground areas are surrounded by regions with high spatiotemporal edge values, the geodesic distance provides an initial estimate of foreground and background. Then, high-quality saliency results are produced via the geodesic distances to background regions in the subsequent frames. From the resulting saliency maps, we build global appearance models for foreground and background. By imposing motion continuity, we establish a dynamic location model for each frame. Finally, the spatiotemporal saliency maps, appearance models, and dynamic location models are combined into an energy minimization framework to attain both spatially and temporally coherent object segmentation. Extensive quantitative and qualitative experiments on benchmark video datasets demonstrate the superiority of the proposed method over state-of-the-art algorithms.
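
As a rough illustration of this idea (not the authors' implementation), the sketch below scores each pixel by its geodesic distance, through an edge-strength cost field, to the frame border, which the paper treats as likely background; the function name and parameters are assumptions.

```python
# Illustrative sketch: geodesic-distance saliency from an edge-strength cost field.
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import dijkstra

def geodesic_saliency(edge_map):
    """edge_map: (H, W) spatiotemporal edge strength; returns an (H, W) saliency map."""
    H, W = edge_map.shape
    idx = np.arange(H * W).reshape(H, W)
    rows, cols, costs = [], [], []
    # 4-connected pixel graph; crossing a strong edge is expensive, so regions enclosed
    # by high edge values end up geodesically far from the border (hence salient).
    for dr, dc in ((0, 1), (1, 0)):
        a = idx[: H - dr, : W - dc].ravel()
        b = idx[dr:, dc:].ravel()
        w = 0.5 * (edge_map[: H - dr, : W - dc].ravel() + edge_map[dr:, dc:].ravel()) + 1e-6
        rows += [a, b]; cols += [b, a]; costs += [w, w]
    g = coo_matrix((np.concatenate(costs), (np.concatenate(rows), np.concatenate(cols))),
                   shape=(H * W, H * W)).tocsr()
    border = np.concatenate([idx[0], idx[-1], idx[:, 0], idx[:, -1]])
    dist = dijkstra(g, indices=border, min_only=True)   # distance to the nearest border pixel
    sal = dist.reshape(H, W)
    return sal / (sal.max() + 1e-12)

# Toy usage: a ring of strong edges makes the enclosed region the most salient area.
edges = np.zeros((40, 40))
edges[10:30, 10] = edges[10:30, 29] = 1.0
edges[10, 10:30] = edges[29, 10:30] = 1.0
print(geodesic_saliency(edges).max())
```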


IEEE Transactions on Image Processing | 2014

Lazy Random Walks for Superpixel Segmentation

Jianbing Shen; Yunfan Du; Wenguan Wang; Xuelong Li

In this paper, we present a novel image superpixel segmentation approach using the proposed lazy random walk (LRW) algorithm. Our method begins by initializing the seed positions and runs the LRW algorithm on the input image to obtain the probability of each pixel. Then, the boundaries of the initial superpixels are obtained according to these probabilities and the commute time. The initial superpixels are iteratively optimized by a new energy function defined on the commute time and a texture measurement. Thanks to the new global probability maps and the commute-time strategy, our LRW algorithm with self-loops segments weak boundaries and complicated texture regions very well. Superpixel quality is further improved by relocating the superpixel centers and dividing large superpixels into smaller ones with the proposed optimization algorithm. Experimental results demonstrate that our method achieves better performance than previous superpixel approaches.
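
The following sketch only illustrates the flavor of a lazy random walk with self-loops, labeling each graph node by the seed with the highest discounted-visit score; the exact energy, commute-time terms, and superpixel optimization of the paper are omitted, and all names and parameters are illustrative.

```python
# Illustrative sketch: lazy random walk (stay in place with probability alpha) on an
# affinity graph, with seed assignment by expected discounted visits to each seed set.
import numpy as np

def lazy_rw_labels(W, seeds, alpha=0.9, beta=0.95):
    """W: (n, n) symmetric affinities; seeds: dict {node_index: label}."""
    n = W.shape[0]
    D_inv = np.diag(1.0 / W.sum(axis=1))
    P = alpha * np.eye(n) + (1.0 - alpha) * D_inv @ W      # lazy transition matrix
    labels = sorted(set(seeds.values()))
    scores = np.zeros((n, len(labels)))
    for j, lab in enumerate(labels):
        b = np.array([1.0 if seeds.get(i) == lab else 0.0 for i in range(n)])
        # Resolvent (I - beta * P)^{-1} b = expected discounted number of visits to
        # this label's seeds when starting from each node.
        scores[:, j] = np.linalg.solve(np.eye(n) - beta * P, b)
    return np.array(labels)[scores.argmax(axis=1)]

# Toy 4-node chain: seeds at the two ends pull the middle nodes to the nearer label.
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(lazy_rw_labels(W, {0: 0, 3: 1}))   # -> [0 0 1 1]
```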


IEEE Transactions on Image Processing | 2015

Consistent Video Saliency Using Local Gradient Flow Optimization and Global Refinement

Wenguan Wang; Jianbing Shen; Ling Shao

We present a novel spatiotemporal saliency detection method to estimate salient regions in videos based on a gradient flow field and energy optimization. The proposed gradient flow field incorporates two distinctive cues, 1) intra-frame boundary information and 2) inter-frame motion information, to indicate the salient regions. By effectively utilizing both intra-frame and inter-frame information in the gradient flow field, our algorithm is robust enough to estimate the object and background in complex scenes with various motion patterns and appearances. Then, we introduce local as well as global contrast saliency measures using the foreground and background information estimated from the gradient flow field. These enhanced contrast saliency cues uniformly highlight an entire object. We further propose a new energy function to encourage the spatiotemporal consistency of the output saliency maps, which is seldom explored in previous video saliency methods. The experimental results show that the proposed algorithm outperforms state-of-the-art video saliency detection methods.
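
A minimal, assumption-laden sketch of building such a combined field from one pair of frames is given below: spatial gradient magnitude stands in for the intra-frame boundary cue and a simple temporal difference for the inter-frame motion cue; the fusion weight and function name are illustrative.

```python
# Illustrative sketch: fuse intra-frame boundary strength with inter-frame motion strength.
import numpy as np

def gradient_flow_field(frame_t, frame_t1, beta=0.5):
    """frame_t, frame_t1: consecutive grayscale frames (H, W) in [0, 1]."""
    gy, gx = np.gradient(frame_t)
    spatial = np.hypot(gx, gy)                 # intra-frame boundary information
    temporal = np.abs(frame_t1 - frame_t)      # crude inter-frame motion information
    field = (beta * spatial / (spatial.max() + 1e-12)
             + (1 - beta) * temporal / (temporal.max() + 1e-12))
    return field

# Toy usage: a bright square that moves two pixels to the right between frames.
f0 = np.zeros((32, 32)); f0[8:20, 8:20] = 1.0
f1 = np.roll(f0, 2, axis=1)
print(gradient_flow_field(f0, f1).max())
```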


IEEE Transactions on Image Processing | 2015

Robust Video Object Cosegmentation

Wenguan Wang; Jianbing Shen; Xuelong Li; Fatih Porikli

With ever-increasing volumes of video data, automatic extraction of salient object regions has become even more significant for visual analytic solutions. This surge has also opened up opportunities for taking advantage of collective cues encapsulated in multiple videos in a cooperative manner. However, it also brings up major challenges, such as handling drastic appearance, motion-pattern, and pose variations of foreground objects as well as indiscriminate backgrounds. Here, we present a cosegmentation framework to discover and segment common object regions across multiple frames and multiple videos in a joint fashion. We incorporate three types of cues, i.e., intraframe saliency, interframe consistency, and across-video similarity, into an energy optimization framework that does not make restrictive assumptions on foreground appearance and motion model, and does not require objects to be visible in all frames. We also introduce a spatio-temporal scale-invariant feature transform (SIFT) flow descriptor that integrates across-video correspondence from conventional SIFT flow with interframe motion flow from optical flow. This novel spatio-temporal SIFT flow generates reliable estimates of common foregrounds over the entire video data set. Experimental results show that our method outperforms the state of the art on a new extensive data set (ViCoSeg).
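
The sketch below only illustrates the generic operation of chaining two dense correspondence fields, which is the flavor of combining inter-frame motion flow with across-video SIFT flow; it is not the paper's descriptor, and all names are hypothetical.

```python
# Illustrative sketch: compose two displacement fields so a pixel can be followed
# through time within one video and then across to another video.
import numpy as np

def compose_flows(flow_ab, flow_bc):
    """flow_ab, flow_bc: (H, W, 2) displacement fields (dy, dx); returns the A -> C field."""
    H, W, _ = flow_ab.shape
    ys, xs = np.mgrid[0:H, 0:W]
    # Where each pixel of A lands in B (nearest-neighbor rounding keeps the sketch short).
    by = np.clip(np.round(ys + flow_ab[..., 0]).astype(int), 0, H - 1)
    bx = np.clip(np.round(xs + flow_ab[..., 1]).astype(int), 0, W - 1)
    return flow_ab + flow_bc[by, bx]           # total displacement A -> C

# Toy usage: (+1, 0) within the video followed by (0, +2) across videos gives (+1, +2).
f1 = np.zeros((8, 8, 2)); f1[..., 0] = 1
f2 = np.zeros((8, 8, 2)); f2[..., 1] = 2
print(compose_flows(f1, f2)[0, 0])             # [1. 2.]
```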


Computer Vision and Pattern Recognition | 2011

Intrinsic images using optimization

Jianbing Shen; Xiaoshan Yang; Yunde Jia; Xuelong Li

In this paper, we present a novel intrinsic image recovery approach using optimization. Our approach is based on assumptions about color characteristics within a local window of natural images. Our method adopts the premise that neighboring pixels in a local window of a single image that have similar intensity values should have similar reflectance values. Thus, the intrinsic image decomposition is formulated as optimizing an energy function with a weighting constraint added to the local image properties. In order to improve the intrinsic image extraction results, we specify local constraint cues by integrating user strokes into our energy formulation, including constant-reflectance, constant-illumination, and fixed-illumination brushes. Our experimental results demonstrate that our approach achieves a better recovery of intrinsic reflectance and illumination components than previous approaches.
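
As a loose illustration of that premise (not the paper's full energy or its user-stroke constraints), the sketch below solves a small sparse system in the log domain in which shading is smooth everywhere and reflectance is smooth only between neighbors of similar intensity; the weights and parameters are assumptions.

```python
# Illustrative sketch: log(image) = reflectance + shading; ask shading to be smooth on all
# 4-connected edges and reflectance to be smooth only where neighboring intensities agree.
import numpy as np
from scipy.sparse import lil_matrix, identity
from scipy.sparse.linalg import spsolve

def decompose(gray, sigma=0.05, eps=1e-6):
    """gray: (H, W) image in (0, 1]; returns (reflectance, shading) in the log domain."""
    H, W = gray.shape
    log_i = np.log(gray + 1e-4).ravel()
    n = H * W
    A = lil_matrix((n, n)); b = np.zeros(n)
    idx = np.arange(n).reshape(H, W)
    for dr, dc in ((0, 1), (1, 0)):
        p = idx[: H - dr, : W - dc].ravel()
        q = idx[dr:, dc:].ravel()
        w = np.exp(-((log_i[p] - log_i[q]) ** 2) / sigma)   # similar intensity -> similar reflectance
        for i, j, wij in zip(p, q, w):
            a = 1.0 + wij          # shading smoothness (weight 1) + reflectance smoothness (weight wij)
            A[i, i] += a; A[j, j] += a; A[i, j] -= a; A[j, i] -= a
            b[i] += wij * (log_i[i] - log_i[j]); b[j] += wij * (log_i[j] - log_i[i])
    s = spsolve((A + eps * identity(n)).tocsr(), b)          # eps pins the free constant
    r = log_i - s
    return r.reshape(H, W), s.reshape(H, W)

# Toy usage: a smooth illumination gradient over a two-tone reflectance pattern.
x = np.linspace(0.3, 1.0, 16)
img = np.outer(np.ones(16), x) * np.where(np.arange(16)[:, None] < 8, 0.4, 0.9)
r, s = decompose(img)
print(r.std(), s.std())
```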


IEEE Transactions on Image Processing | 2016

Sub-Markov Random Walk for Image Segmentation

Xingping Dong; Jianbing Shen; Ling Shao; Luc Van Gool

A novel sub-Markov random walk (subRW) algorithm with a label prior is proposed for seeded image segmentation, which can be interpreted as a traditional random walker on a graph with added auxiliary nodes. Under this interpretation, we unify the proposed subRW and other popular random walk (RW) algorithms. This unifying view makes it possible to transfer intrinsic findings between different RW algorithms and offers new ideas for designing novel RW algorithms by adding or changing auxiliary nodes. To verify the second benefit, we design a new subRW algorithm with a label prior to solve the segmentation problem of objects with thin and elongated parts. Experimental results on both synthetic and natural images with twigs demonstrate that the proposed subRW method outperforms previous RW algorithms for seeded image segmentation.
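
The sketch below illustrates the absorbing-random-walk view in general terms (it is not the paper's exact subRW construction): each label gets an auxiliary absorbing node, seeds and an optional label prior feed into it, and a pixel takes the label most likely to absorb its walk. All names and weights are illustrative.

```python
# Illustrative sketch: absorption probabilities on a graph augmented with one auxiliary
# absorbing node per label.
import numpy as np

def absorption_labels(W, seeds, prior=None, prior_weight=0.05):
    """W: (n, n) pixel affinities; seeds: {pixel: label}; prior: optional (n, L) label prior."""
    n = W.shape[0]
    labels = sorted(set(seeds.values()))
    L = len(labels)
    aux = np.zeros((n, L))                       # edge weights from pixels to auxiliary nodes
    for p, lab in seeds.items():
        aux[p, labels.index(lab)] = W.sum()      # strong pull: seeds are (nearly) absorbed
    if prior is not None:
        aux += prior_weight * prior              # soft label prior via auxiliary edges
    deg = W.sum(axis=1) + aux.sum(axis=1)
    Q = W / deg[:, None]                         # pixel -> pixel transition probabilities
    R = aux / deg[:, None]                       # pixel -> auxiliary-node probabilities
    B = np.linalg.solve(np.eye(n) - Q, R)        # absorption probabilities, shape (n, L)
    return np.array(labels)[B.argmax(axis=1)]

# Toy 4-node chain with seeds at the two ends.
W = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
print(absorption_labels(W, {0: 'fg', 3: 'bg'}))  # -> ['fg' 'fg' 'bg' 'bg']
```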


IEEE Transactions on Multimedia | 2015

Visual Tracking Using Strong Classifier and Structural Local Sparse Descriptors

Bo Ma; Jianbing Shen; Yangbiao Liu; Hongwei Hu; Ling Shao; Xuelong Li

Sparse coding methods have achieved great success in visual tracking; in this paper, we present a strong classifier and structural local sparse descriptors for robust visual tracking. Since holistic features built from the sparse codes are sensitive to occlusion and other interfering factors, we extract local sparse descriptors from a fraction of all patches by performing a pooling operation. The collection of local sparse descriptors is combined into a boosting-based strong classifier for robust visual tracking using a discriminative appearance model. Furthermore, a structural-reconstruction-error-based weight computation method is proposed to adjust the classification score of each candidate for more precise tracking results. To handle appearance changes during tracking, we present an occlusion-aware template update scheme. Comprehensive experimental comparisons with state-of-the-art algorithms demonstrate the better performance of the proposed method.
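
As an illustration of the pooling idea only, the sketch below codes overlapping patches of a candidate region against a template dictionary and pools the codes into one descriptor; non-negative least squares stands in for the actual sparse coding, the boosting classifier is omitted, and all names and sizes are hypothetical.

```python
# Illustrative sketch: pooled local descriptor from per-patch codes against a dictionary.
import numpy as np
from scipy.optimize import nnls

def pooled_descriptor(candidate, dictionary, patch=8, stride=4):
    """candidate: (H, W) grayscale region; dictionary: (patch*patch, k) template patches."""
    H, W = candidate.shape
    codes = []
    for y in range(0, H - patch + 1, stride):
        for x in range(0, W - patch + 1, stride):
            p = candidate[y:y + patch, x:x + patch].ravel()
            code, _ = nnls(dictionary, p)        # stand-in for the local sparse code
            codes.append(code)
    return np.mean(codes, axis=0)                # pooling over patches -> one descriptor

# Toy usage: 10 random template patches of size 8x8 and a random candidate region.
rng = np.random.default_rng(0)
D = np.abs(rng.normal(size=(64, 10)))
region = rng.random((32, 32))
print(pooled_descriptor(region, D).shape)        # (10,)
```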


IEEE Transactions on Circuits and Systems for Video Technology | 2014

Interactive Segmentation Using Constrained Laplacian Optimization

Jianbing Shen; Yunfan Du; Xuelong Li

We present a novel interactive image segmentation approach with user scribbles using constrained Laplacian graph optimization. A novel energy framework is developed by adding a smoothing term to the cost function of the Laplacian graph energy. To the best of our knowledge, our approach is the first to incorporate the normalized cuts and graph cuts algorithms into a unified energy optimization framework. The proposed approach is further accelerated by running the optimization on a band region when segmenting large images. This acceleration strategy enables our approach to segment large images efficiently, yielding about a 20-80 times speedup. The proposed approach is evaluated on both publicly available data sets and our own data set with large images. The benefits of the proposed unified framework are demonstrated both qualitatively and quantitatively. The experimental results show that our segmentation method achieves better performance in both boundary recall and error rate than existing state-of-the-art approaches.
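
For orientation, the sketch below solves the generic scribble-constrained Laplacian system that such methods reduce to; the paper's actual energy, which couples normalized-cuts and graph-cuts terms, and its banded acceleration are not reproduced, and all names and weights are assumptions.

```python
# Illustrative sketch: minimize x^T L x + lam * sum_{i in scribbles} (x_i - y_i)^2,
# where L is the graph Laplacian of pixel affinities and y holds the scribble labels.
import numpy as np

def scribble_segment(W, fg, bg, lam=10.0):
    """W: (n, n) affinities; fg/bg: lists of scribbled node indices; returns soft labels."""
    n = W.shape[0]
    L = np.diag(W.sum(axis=1)) - W             # combinatorial graph Laplacian
    y = np.zeros(n); y[list(fg)] = 1.0
    d = np.zeros(n); d[list(fg) + list(bg)] = 1.0
    # Normal equations of the quadratic energy above.
    x = np.linalg.solve(L + lam * np.diag(d), lam * d * y)
    return x                                    # threshold at 0.5 for a hard segmentation

# Toy usage: the weak link between nodes 1 and 2 splits the graph into two labels.
W = np.array([[0, 1, 0, 0], [1, 0, .1, 0], [0, .1, 0, 1], [0, 0, 1, 0]], float)
print(scribble_segment(W, fg=[0], bg=[3]).round(2))
```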


IEEE Transactions on Image Processing | 2018

Video Salient Object Detection via Fully Convolutional Networks

Wenguan Wang; Jianbing Shen; Ling Shao

This paper proposes a deep learning model to efficiently detect salient regions in videos. It addresses two important issues: 1) training a deep video saliency model in the absence of sufficiently large, pixel-wise annotated video data and 2) fast video saliency training and detection. The proposed deep video saliency network consists of two modules for capturing the spatial and temporal saliency information, respectively. The dynamic saliency model, explicitly incorporating saliency estimates from the static saliency model, directly produces spatiotemporal saliency inference without time-consuming optical flow computation. We further propose a novel data augmentation technique that simulates video training data from existing annotated image data sets, which enables our network to learn diverse saliency information and prevents overfitting with the limited number of training videos. Leveraging our synthetic video data (150K video sequences) and real videos, our deep video saliency model successfully learns both spatial and temporal saliency cues, thus producing accurate spatiotemporal saliency estimates. We advance the state of the art on the densely annotated video segmentation data set (MAE of .06) and the Freiburg-Berkeley Motion Segmentation data set (MAE of .07), and do so with much improved speed (2 fps with all steps).
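
The sketch below shows one plausible form of such augmentation under stated assumptions, not the paper's exact procedure: a single annotated still is turned into a short pseudo-video by translating the annotated object over the static background; the function name and motion step are illustrative.

```python
# Illustrative sketch: synthesize a short pseudo-video clip from one annotated still image.
import numpy as np

def synthesize_clip(image, mask, n_frames=4, step=(0, 3)):
    """image: (H, W, 3) float; mask: (H, W) bool saliency mask; returns frames and masks."""
    H, W = mask.shape
    frames, masks = [image.copy()], [mask.copy()]
    for t in range(1, n_frames):
        dy, dx = t * step[0], t * step[1]
        frame, m = image.copy(), np.zeros_like(mask)
        ys, xs = np.nonzero(mask)
        ys2, xs2 = np.clip(ys + dy, 0, H - 1), np.clip(xs + dx, 0, W - 1)
        frame[ys2, xs2] = image[ys, xs]          # paste object pixels at the shifted position
        m[ys2, xs2] = True
        frames.append(frame); masks.append(m)
    return np.stack(frames), np.stack(masks)

# Toy usage: a square "object" drifting three pixels per frame to the right.
img = np.random.rand(32, 32, 3)
msk = np.zeros((32, 32), bool); msk[10:20, 5:15] = True
clip, clip_masks = synthesize_clip(img, msk)
print(clip.shape, clip_masks.shape)              # (4, 32, 32, 3) (4, 32, 32)
```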


IEEE Transactions on Image Processing | 2016

Correspondence Driven Saliency Transfer

Wenguan Wang; Jianbing Shen; Ling Shao; Fatih Porikli

In this paper, we show that large annotated data sets have great potential to provide strong priors for saliency estimation rather than merely serving for benchmark evaluations. To this end, we present a novel image saliency detection method called saliency transfer. Given an input image, we first retrieve a support set of best matches from a large database of saliency-annotated images. Then, we assign transitional saliency scores by warping the support set annotations onto the input image according to computed dense correspondences. To incorporate context, we employ two complementary correspondence strategies: a global matching scheme based on scene-level analysis and a local matching scheme based on patch-level inference. We then introduce two refinement measures to further refine the saliency maps and apply random walk with restart, exploring the global saliency structure, to estimate the affinity between foreground and background assignments. Extensive experimental results on four publicly available benchmark data sets demonstrate that the proposed saliency algorithm consistently outperforms the current state-of-the-art methods.
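
A minimal sketch of the warping step, under assumptions, is shown below: given a dense correspondence (flow) field from the input image to one retrieved support image, the support image's annotation is pulled back onto the input's pixel grid; the function name and nearest-neighbor sampling are illustrative.

```python
# Illustrative sketch: transfer a support image's saliency annotation via a dense flow field.
import numpy as np

def warp_annotation(support_saliency, flow):
    """support_saliency: (H, W); flow: (H, W, 2) with (dy, dx) per input pixel."""
    H, W = support_saliency.shape
    ys, xs = np.mgrid[0:H, 0:W]
    src_y = np.clip(np.round(ys + flow[..., 0]).astype(int), 0, H - 1)
    src_x = np.clip(np.round(xs + flow[..., 1]).astype(int), 0, W - 1)
    return support_saliency[src_y, src_x]        # transferred (transitional) saliency scores

# Toy usage: a constant two-pixel rightward correspondence shifts the annotation left by two.
ann = np.zeros((8, 8)); ann[:, 4:6] = 1.0
flow = np.zeros((8, 8, 2)); flow[..., 1] = 2
print(warp_annotation(ann, flow)[0])
```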

Collaboration


Dive into Jianbing Shen's collaboration.

Top Co-Authors

Wenguan Wang (Beijing Institute of Technology)
Ling Shao (University of East Anglia)
Fatih Porikli (Australian National University)
Xuelong Li (Chinese Academy of Sciences)
Hanqiu Sun (The Chinese University of Hong Kong)
Xingping Dong (Beijing Institute of Technology)
Bo Ma (Beijing Institute of Technology)
Xiaoyang Mao (University of Yamanashi)