Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Shengfeng He is active.

Publication


Featured research published by Shengfeng He.


Computer Vision and Pattern Recognition | 2013

Visual Tracking via Locality Sensitive Histograms

Shengfeng He; Qingxiong Yang; Rynson W. H. Lau; Jiang Wang; Ming-Hsuan Yang

This paper presents a novel locality sensitive histogram algorithm for visual tracking. Unlike the conventional image histogram that counts the frequency of occurrences of each intensity value by adding ones to the corresponding bin, a locality sensitive histogram is computed at each pixel location and a floating-point value is added to the corresponding bin for each occurrence of an intensity value. The floating-point value declines exponentially with respect to the distance to the pixel location where the histogram is computed, thus every pixel is considered but those that are far away can be neglected due to the very small weights assigned. An efficient algorithm is proposed that enables the locality sensitive histograms to be computed in time linear in the image size and the number of bins. A robust tracking framework based on the locality sensitive histograms is proposed, which consists of two main components: a new feature for tracking that is robust to illumination changes and a novel multi-region tracking algorithm that runs in real time even with hundreds of regions. Extensive experiments demonstrate that the proposed tracking framework outperforms the state-of-the-art methods in challenging scenarios, especially when the illumination changes dramatically.
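
As a concrete illustration of the histogram described above, the sketch below computes a locality sensitive histogram for a 1D signal using a forward/backward exponential recursion that yields the stated linear-time complexity. The bin count, decay parameter alpha, and quantization are illustrative assumptions, not the authors' reference implementation.

```python
import numpy as np

def locality_sensitive_histogram_1d(intensities, num_bins=16, alpha=0.9):
    """Per-pixel histogram for a 1D signal: each pixel q contributes
    alpha**|p - q| to bin(I_q) of the histogram at pixel p.

    Computed in O(N * num_bins) with a forward and a backward recursion,
    mirroring the linear-time scheme described in the abstract.
    (Illustrative sketch; parameter names and values are our own.)
    """
    n = len(intensities)
    bins = np.clip((intensities * num_bins).astype(int), 0, num_bins - 1)

    onehot = np.zeros((n, num_bins))
    onehot[np.arange(n), bins] = 1.0

    left = onehot.copy()    # contributions from pixels <= p
    right = onehot.copy()   # contributions from pixels >= p
    for p in range(1, n):
        left[p] += alpha * left[p - 1]
    for p in range(n - 2, -1, -1):
        right[p] += alpha * right[p + 1]

    # The pixel's own contribution is counted in both passes; subtract once.
    return left + right - onehot

if __name__ == "__main__":
    signal = np.random.rand(100)            # intensities in [0, 1]
    hist = locality_sensitive_histogram_1d(signal)
    print(hist.shape)                       # (100, 16): one histogram per pixel
```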


Computer Vision and Pattern Recognition | 2016

Real-Time Salient Object Detection with a Minimum Spanning Tree

Wei-Chih Tu; Shengfeng He; Qingxiong Yang; Shao-Yi Chien

In this paper, we present a real-time salient object detection system based on the minimum spanning tree. Due to the fact that background regions are typically connected to the image boundaries, salient objects can be extracted by computing the distances to the boundaries. However, measuring the image boundary connectivity efficiently is a challenging problem. Existing methods either rely on superpixel representation to reduce the processing units or approximate the distance transform. Instead, we propose an exact and iteration-free solution on a minimum spanning tree. The minimum spanning tree representation of an image inherently reveals the object geometry information in a scene. Meanwhile, it largely reduces the search space of shortest paths, resulting in an efficient and high-quality distance transform algorithm. We further introduce a boundary dissimilarity measure to complement the shortcomings of the distance transform for salient object detection. Extensive evaluations show that the proposed algorithm achieves leading performance compared to the state-of-the-art methods in terms of efficiency and accuracy.
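
The boundary-connectivity idea above can be sketched with off-the-shelf tools: build a 4-connected grid graph weighted by intensity differences, extract a minimum spanning tree, and measure each pixel's tree distance to the image boundary. The sketch below does that with scipy; it illustrates the idea only and is not the exact, iteration-free two-pass algorithm of the paper.

```python
import numpy as np
from scipy.sparse import coo_matrix
from scipy.sparse.csgraph import minimum_spanning_tree, dijkstra

def boundary_distance_on_mst(gray):
    """Distance from each pixel to the image boundary, measured along a
    minimum spanning tree of the 4-connected grid graph whose edge weights
    are absolute intensity differences. (Sketch of the boundary-connectivity
    idea; the paper computes this with a dedicated linear-time scheme.)"""
    h, w = gray.shape
    idx = np.arange(h * w).reshape(h, w)

    # Horizontal and vertical edges with intensity-difference weights.
    rows = [idx[:, :-1].ravel(), idx[:-1, :].ravel()]
    cols = [idx[:, 1:].ravel(), idx[1:, :].ravel()]
    weights = [np.abs(gray[:, :-1] - gray[:, 1:]).ravel(),
               np.abs(gray[:-1, :] - gray[1:, :]).ravel()]
    graph = coo_matrix(
        (np.concatenate(weights) + 1e-6,   # keep zero-weight edges in the tree
         (np.concatenate(rows), np.concatenate(cols))),
        shape=(h * w, h * w))

    mst = minimum_spanning_tree(graph)

    # Shortest path along the tree from any boundary pixel (multi-source).
    boundary = np.unique(np.concatenate(
        [idx[0], idx[-1], idx[:, 0], idx[:, -1]]))
    dist = dijkstra(mst, directed=False, indices=boundary).min(axis=0)
    return dist.reshape(h, w)

if __name__ == "__main__":
    img = np.zeros((64, 64))
    img[20:44, 20:44] = 1.0                  # bright square = "object"
    d = boundary_distance_on_mst(img)
    print(d.max(), d[32, 32])                # object pixels lie far from the boundary on the tree
```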


International Journal of Computer Vision | 2015

SuperCNN: A Superpixelwise Convolutional Neural Network for Salient Object Detection

Shengfeng He; Rynson W. H. Lau; Wenxi Liu; Zhe Huang; Qingxiong Yang

Existing computational models for salient object detection primarily rely on hand-crafted features, which are only able to capture low-level contrast information. In this paper, we learn the hierarchical contrast features by formulating salient object detection as a binary labeling problem using deep learning techniques. A novel superpixelwise convolutional neural network approach, called SuperCNN, is proposed to learn the internal representations of saliency in an efficient manner. In contrast to the classical convolutional networks, SuperCNN has four main properties. First, the proposed method is able to learn the hierarchical contrast features, as it is fed by two meaningful superpixel sequences, which is much more effective for detecting salient regions than feeding raw image pixels. Second, as SuperCNN recovers the contextual information among superpixels, it enables large context to be involved in the analysis efficiently. Third, benefiting from the superpixelwise mechanism, the required number of predictions for a densely labeled map is hugely reduced. Fourth, saliency can be detected independent of region size by utilizing a multiscale network structure. Experiments show that SuperCNN can robustly detect salient objects and outperforms the state-of-the-art methods on three benchmark datasets.
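
To make the notion of a superpixel sequence more concrete, the sketch below shows one plausible way to turn per-superpixel mean colors and centroids into fixed-length contrast sequences that a 1D convolutional network could consume. The specific features, ordering, and parameters are our assumptions, not the paper's exact input construction.

```python
import numpy as np

def superpixel_contrast_sequences(mean_colors, centroids, sigma_s=0.25):
    """For every superpixel, build a sequence of (color distance,
    spatially weighted color distance) pairs against all other superpixels,
    ordered by spatial distance, so that a 1D network sees contrast context
    instead of raw pixels. (Illustrative construction only.)

    mean_colors : (n, 3) mean color per superpixel.
    centroids   : (n, 2) normalized (x, y) centroid per superpixel.
    Returns     : (n, n - 1, 2) one sequence per superpixel.
    """
    n = len(mean_colors)
    color_d = np.linalg.norm(mean_colors[:, None] - mean_colors[None, :], axis=-1)
    spatial_d = np.linalg.norm(centroids[:, None] - centroids[None, :], axis=-1)
    spatial_w = np.exp(-spatial_d ** 2 / (2 * sigma_s ** 2))

    sequences = np.empty((n, n - 1, 2))
    for i in range(n):
        order = np.argsort(spatial_d[i])[1:]          # nearest first, skip self
        sequences[i, :, 0] = color_d[i, order]
        sequences[i, :, 1] = (spatial_w[i] * color_d[i])[order]
    return sequences

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    seqs = superpixel_contrast_sequences(rng.random((50, 3)), rng.random((50, 2)))
    print(seqs.shape)   # (50, 49, 2): a 1D sequence per superpixel, ready for a 1D CNN
```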


European Conference on Computer Vision | 2014

Saliency Detection with Flash and No-flash Image Pairs

Shengfeng He; Rynson W. H. Lau

In this paper, we propose a new saliency detection method using a pair of flash and no-flash images. Our approach is inspired by two observations. First, only the foreground objects are significantly brightened by the flash, as they are relatively nearer to the camera than the background. Second, the brightness variations introduced by the flash provide hints to surface orientation changes. Accordingly, the first observation is used to form a background prior that eliminates background distraction. The second observation provides a new orientation cue to compute surface orientation contrast. These photometric cues from the two observations are independent of visual attributes like color, and they provide new and robust distinctiveness to support salient object detection. The second observation further leads to the introduction of new spatial priors to constrain the regions rendered salient to be compact both in the image plane and in 3D space. We have constructed a new flash/no-flash image dataset. Experiments on this dataset show that the proposed method successfully identifies salient objects in various challenging scenes where the state-of-the-art methods usually fail.
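
The first observation above translates into a simple per-pixel cue: pixels strongly brightened by the flash are likely near the camera. The sketch below computes such a foreground likelihood from a flash/no-flash pair; the normalization and clipping are illustrative assumptions rather than the paper's exact background prior.

```python
import numpy as np

def flash_foreground_prior(flash, no_flash, eps=1e-6):
    """Foreground likelihood from a flash/no-flash pair.

    Pixels that are strongly brightened by the flash are likely near the
    camera (foreground); pixels that barely change are likely background.
    Inputs are float arrays in [0, 1] of identical shape (grayscale or
    luminance). This is a sketch of the cue, not the paper's full prior.
    """
    brightening = np.clip(flash - no_flash, 0.0, None)
    # Normalize to [0, 1] so the map can be combined with other saliency cues.
    return brightening / (brightening.max() + eps)

if __name__ == "__main__":
    no_flash = np.full((4, 4), 0.3)
    flash = no_flash.copy()
    flash[1:3, 1:3] = 0.9          # nearby object lit up by the flash
    print(flash_foreground_prior(flash, no_flash).round(2))
```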


IEEE Transactions on Image Processing | 2017

RGBD Salient Object Detection via Deep Fusion

Liangqiong Qu; Shengfeng He; Jiawei Zhang; Jiandong Tian; Yandong Tang; Qingxiong Yang

Numerous efforts have been made to design various low-level saliency cues for RGBD saliency detection, such as color and depth contrast features as well as background and color compactness priors. However, how these low-level saliency cues interact with each other and how they can be effectively incorporated to generate a master saliency map remain challenging problems. In this paper, we design a new convolutional neural network (CNN) to automatically learn the interaction mechanism for RGBD salient object detection. In contrast to existing works, in which raw image pixels are fed directly to the CNN, the proposed method takes advantage of the knowledge obtained in traditional saliency detection by adopting various flexible and interpretable saliency feature vectors as inputs. This guides the CNN to learn a combination of existing features to predict saliency more effectively, which presents a less complex problem than operating on the pixels directly. We then integrate a superpixel-based Laplacian propagation framework with the trained CNN to extract a spatially consistent saliency map by exploiting the intrinsic structure of the input image. Extensive quantitative and qualitative experimental evaluations on three data sets demonstrate that the proposed method consistently outperforms the state-of-the-art methods.
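
The superpixel-based Laplacian propagation step mentioned above is closely related to generic graph-based label propagation. The sketch below refines an initial per-superpixel saliency vector by solving a small Laplacian system over a superpixel affinity graph; the affinity definition and the fidelity parameter mu are assumptions, not the paper's exact formulation.

```python
import numpy as np

def propagate_saliency(initial, affinity, mu=0.1):
    """Spatially consistent saliency via graph propagation.

    initial  : (n,) initial saliency score per superpixel (e.g. CNN output).
    affinity : (n, n) symmetric non-negative affinity between superpixels.
    mu       : fidelity weight; larger values stay closer to `initial`.

    Solves (mu * I + L) s = mu * initial, where L = D - W is the graph
    Laplacian, which smooths scores across strongly connected superpixels.
    (A generic Laplacian-propagation sketch, not the paper's exact system.)
    """
    n = len(initial)
    degree = np.diag(affinity.sum(axis=1))
    laplacian = degree - affinity
    return np.linalg.solve(mu * np.eye(n) + laplacian, mu * initial)

if __name__ == "__main__":
    # Three superpixels: two strongly connected, one weakly connected.
    W = np.array([[0.0, 0.9, 0.0],
                  [0.9, 0.0, 0.1],
                  [0.0, 0.1, 0.0]])
    s0 = np.array([1.0, 0.2, 0.0])    # noisy initial scores
    print(propagate_saliency(s0, W))  # the first two scores move closer together
```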


IEEE Transactions on Image Processing | 2015

Saliency-Guided Color-to-Gray Conversion Using Region-Based Optimization

Hao Du; Shengfeng He; Bin Sheng; Lizhuang Ma; Rynson W. H. Lau

Image decolorization is a fundamental problem for many real-world applications, including monochrome printing and photograph rendering. In this paper, we propose a new color-to-gray conversion method that is based on a region-based saliency model. First, we construct a parametric color-to-gray mapping function based on global color information as well as local contrast. Second, we propose a region-based saliency model that computes visual contrast among pixel regions. Third, we minimize the salience difference between the original color image and the output grayscale image in order to preserve contrast discrimination. To evaluate the performance of the proposed method in preserving contrast in complex scenarios, we have constructed a new decolorization data set with 22 images, each of which contains abundant colors and patterns. Extensive experimental evaluations on the existing and the new data sets show that the proposed method outperforms the state-of-the-art methods quantitatively and qualitatively.
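
To make the contrast-preservation objective concrete, the sketch below fits a purely global, linear color-to-gray mapping by least squares so that grayscale differences approximate signed color contrasts over sampled pixel pairs. The paper's mapping is parametric, region based, and saliency guided, so the linear form, the pair sampling, and the loss here are all illustrative assumptions.

```python
import numpy as np

def fit_gray_weights(image, num_pairs=5000, seed=0):
    """Fit global channel weights w so that grayscale differences
    w . (c_i - c_j) approximate signed color contrasts ||c_i - c_j||
    over randomly sampled pixel pairs (sign taken from luminance order).

    A linear, global stand-in for the paper's parametric, region- and
    saliency-based mapping; purely illustrative.
    """
    rng = np.random.default_rng(seed)
    pixels = image.reshape(-1, 3)
    i = rng.integers(0, len(pixels), num_pairs)
    j = rng.integers(0, len(pixels), num_pairs)

    diff = pixels[i] - pixels[j]                               # (num_pairs, 3)
    luminance = pixels @ np.array([0.299, 0.587, 0.114])
    target = np.sign(luminance[i] - luminance[j]) * np.linalg.norm(diff, axis=1)

    # Least-squares fit of the channel weights, then normalize to sum to 1.
    w, *_ = np.linalg.lstsq(diff, target, rcond=None)
    w = np.clip(w, 0.0, None)
    return w / (w.sum() + 1e-12)

if __name__ == "__main__":
    rgb = np.random.rand(64, 64, 3)
    w = fit_gray_weights(rgb)
    gray = rgb @ w                      # contrast-aware grayscale image
    print(w, gray.shape)
```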


International Conference on Computer Vision | 2015

Oriented Object Proposals

Shengfeng He; Rynson W. H. Lau

In this paper, we propose a new approach to generate oriented object proposals (OOPs) to reduce the detection error caused by various orientations of the object. To this end, we propose to efficiently locate object regions according to pixelwise object probability, rather than measuring the objectness from a set of sampled windows. We formulate the proposal generation problem as a generative probabilistic model such that object proposals of different shapes (i.e., sizes and orientations) can be produced by locating the local maximum likelihoods. The new approach has three main advantages. First, it helps the object detector handle objects of different orientations. Second, as the shapes of the proposals may vary to fit the objects, the resulting proposals are tighter than sampling windows with fixed sizes. Third, it avoids massive window sampling, thereby reducing the number of proposals while maintaining a high recall. Experiments on the PASCAL VOC 2007 dataset show that the proposed OOP outperforms the state-of-the-art fast methods. Further experiments show that the rotation invariant property helps a class-specific object detector achieve better performance than the state-of-the-art proposal generation methods in either object rotation scenarios or general scenarios. Generating OOPs is very fast, taking only 0.5 s per image.


Computer Animation and Virtual Worlds | 2011

Real-time smoke simulation with improved turbulence by spatial adaptive vorticity confinement

Shengfeng He; Hon-Cheng Wong; Wai-Man Pang; Un-Hong Wong

Turbulence modeling has recently drawn much attention in fluid animation as a way to generate small-scale rolling features. As one of the widely adopted approaches, the vorticity confinement method re-injects energy lost to dissipation back into the flow. However, previous works suffer from instability when a large vorticity coefficient ε is used, because a constant ε is applied over the entire simulated domain. In this paper, we propose a novel approach to enhance the visual effect by employing an adaptive vorticity confinement whose strength varies with the helicity instead of being a user-defined constant. To further improve fine details in turbulent flows, we apply the proposed vorticity confinement not only to the low-resolution grid, but also to a finer grid to generate sub-grid-level turbulence. Since the incompressible Navier–Stokes equations are solved only on the low-resolution grid, this saves a significant amount of computation. Several experiments demonstrate that our method can produce realistic smoke animation with enhanced turbulence effects in real time.
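
The core of vorticity confinement, and the idea of a spatially varying coefficient, can be sketched on a 2D grid: compute the vorticity, find where its magnitude grows, and inject a force that re-amplifies existing swirls with a per-cell strength rather than a single global ε. The adaptive weight below simply scales with local vorticity magnitude as a stand-in for the helicity-based rule in the paper; grid handling and constants are assumptions.

```python
import numpy as np

def vorticity_confinement_force(u, v, dx=1.0, eps_base=0.5):
    """2D vorticity confinement force with a spatially varying coefficient.

    u, v : (h, w) velocity components on a collocated grid.
    Returns (fx, fy) body-force fields that re-amplify existing vortices.

    The per-cell coefficient scales with local vorticity magnitude as a
    stand-in for the helicity-based adaptation described in the abstract;
    boundary handling via np.gradient is also an assumption.
    """
    dvdx = np.gradient(v, dx, axis=1)
    dudy = np.gradient(u, dx, axis=0)
    omega = dvdx - dudy                       # scalar vorticity (z-component)

    mag = np.abs(omega)
    grad_y, grad_x = np.gradient(mag, dx)
    norm = np.sqrt(grad_x**2 + grad_y**2) + 1e-12
    nx, ny = grad_x / norm, grad_y / norm     # unit vector toward stronger vorticity

    # Spatially adaptive strength instead of a single global epsilon.
    eps = eps_base * mag / (mag.max() + 1e-12)

    fx = eps * dx * (ny * omega)              # f = eps * h * (N x omega), 2D form
    fy = eps * dx * (-nx * omega)
    return fx, fy

if __name__ == "__main__":
    h = w = 64
    y, x = np.mgrid[0:h, 0:w]
    u = -(y - h / 2.0) / h                    # a single large vortex
    v = (x - w / 2.0) / w
    fx, fy = vorticity_confinement_force(u, v)
    print(fx.shape, float(np.abs(fx).max()))
```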


IEEE Transactions on Circuits and Systems for Video Technology | 2016

Fast Weighted Histograms for Bilateral Filtering and Nearest Neighbor Searching

Shengfeng He; Qingxiong Yang; Rynson W. H. Lau; Ming-Hsuan Yang

The locality sensitive histogram (LSH) injects spatial information into the local histogram in an efficient manner, and has been demonstrated to be very effective for visual tracking. In this paper, we explore the application of this efficient histogram in two important problems. We first extend the LSH to linear time bilateral filtering, and then propose a new type of histogram for efficiently computing edge-preserving nearest neighbor fields (NNFs). While the existing histogram-based bilateral filtering methods are the state of the art for efficient grayscale image processing, they are limited to box spatial filter kernels only. In our first application, we address this limitation by expressing the bilateral filter as a simple ratio of linear functions of the LSH, which is able to extend the box spatial kernel to an exponential kernel. The computational complexity of the proposed bilateral filter is linear in the number of image pixels. In our second application, we derive a new bilateral weighted histogram (BWH) for NNF. The new histogram maintains the efficiency of LSH, which allows approximate NNF to be computed independent of patch size. In addition, BWH takes both spatial and color information into account, and thus provides higher accuracy for histogram-based matching, especially around color edges.
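
A minimal version of the "ratio of linear functions of the LSH" formulation above: once a per-pixel, spatially weighted histogram is available, the bilateral filter output is a range-weighted mean read off that histogram. The 1D sketch below combines an exponential spatial kernel (via the LSH recursion) with a Gaussian range kernel; the bin count and parameters are illustrative assumptions.

```python
import numpy as np

def bilateral_filter_1d(signal, num_bins=32, alpha=0.9, sigma_r=0.1):
    """Grayscale bilateral filter for a 1D signal, expressed through a
    locality sensitive histogram: the output at p is a ratio of two linear
    functions of the histogram (range-weighted sum of bin centers over
    range-weighted mass). Exponential spatial kernel via `alpha`,
    Gaussian range kernel via `sigma_r`. Illustrative sketch only.
    """
    n = len(signal)
    bins = np.clip((signal * num_bins).astype(int), 0, num_bins - 1)
    onehot = np.zeros((n, num_bins))
    onehot[np.arange(n), bins] = 1.0

    # Locality sensitive histogram: forward + backward exponential recursion.
    left, right = onehot.copy(), onehot.copy()
    for p in range(1, n):
        left[p] += alpha * left[p - 1]
    for p in range(n - 2, -1, -1):
        right[p] += alpha * right[p + 1]
    hist = left + right - onehot              # (n, num_bins)

    centers = (np.arange(num_bins) + 0.5) / num_bins
    # Gaussian range weights between each pixel's value and every bin center.
    range_w = np.exp(-((signal[:, None] - centers[None, :]) ** 2) / (2 * sigma_r**2))

    weighted = hist * range_w
    numerator = (weighted * centers[None, :]).sum(axis=1)
    denominator = weighted.sum(axis=1) + 1e-12
    return numerator / denominator

if __name__ == "__main__":
    noisy_step = np.concatenate([np.zeros(50), np.ones(50)]) + 0.05 * np.random.randn(100)
    smoothed = bilateral_filter_1d(np.clip(noisy_step, 0, 1))
    print(smoothed[:3].round(2), smoothed[-3:].round(2))  # the edge at index 50 is preserved
```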


Computer Graphics Forum | 2013

Synthetic Controllable Turbulence using Robust Second Vorticity Confinement

Shengfeng He; Rynson W. H. Lau

Capturing fine details of turbulence on a coarse grid is one of the main tasks in real-time fluid simulation. Existing methods for doing this have various limitations. In this paper, we propose a new turbulence method that uses a refined second vorticity confinement method, referred to as robust second vorticity confinement, and a synthesis scheme to create highly turbulent effects from a coarse grid. The new technique is sufficiently stable to efficiently produce highly turbulent flows, while allowing intuitive control of vortical structures. Second vorticity confinement captures and defines the vortical features of turbulence on a coarse grid. However, due to the stability problem, it cannot be used to produce highly turbulent flows. In this work, we propose a robust formulation that addresses the stability problem by making the positive diffusion term vary adaptively with helicity. In addition, we employ our new method to procedurally synthesize high-resolution flow fields. As shown in our results, this approach produces stable high-resolution turbulence very efficiently.

Collaboration


Dive into Shengfeng He's collaborations.

Top Co-Authors

Rynson W. H. Lau (City University of Hong Kong)
Qingxiong Yang (City University of Hong Kong)
Jiandong Tian (Chinese Academy of Sciences)
Liangqiong Qu (Chinese Academy of Sciences)
Shao Huang (Chinese Academy of Sciences)
Weiqiang Wang (Chinese Academy of Sciences)
Xiaodan Zhang (Chinese Academy of Sciences)
Yandong Tang (Chinese Academy of Sciences)