Fanghui Liu
Shanghai Jiao Tong University
Publications
Featured research published by Fanghui Liu.
Signal Processing: Image Communication | 2016
Chenjie Ge; Keren Fu; Fanghui Liu; Li Bai; Jie Yang
The goal of salient object detection from an image is to extract the regions which capture the attention of the human visual system more than other regions of the image. In this paper, a novel method is presented for detecting salient objects from a set of images, known as co-saliency detection. We treat co-saliency detection as a two-stage saliency propagation problem. The first, inter-saliency propagation stage utilizes the similarity between a pair of images to discover common properties of the images with the help of a single-image saliency map. With the pairwise co-salient foreground cue maps obtained, the second, intra-saliency propagation stage refines pairwise saliency detection using a graph-based method combining both foreground and background cues. A new fusion strategy is then used to obtain the co-saliency detection results. Finally, an integrated multi-scale scheme is employed to obtain pixel-level co-saliency maps. The proposed method makes use of existing saliency detection models for co-saliency detection and is not overly sensitive to the initial saliency model selected. Extensive experiments on three benchmark databases show the superiority of the proposed co-saliency model against state-of-the-art methods both subjectively and objectively.
Highlights:
- Inter-saliency propagation method to transmit saliency values.
- Intra-saliency propagation method to pop out co-salient objects.
- A new fusion strategy to adaptively combine results.
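The graph-based propagation the abstract describes can be illustrated with a generic manifold-ranking step: seed saliency values are diffused over a region graph whose edges encode feature similarity. This is a minimal sketch of that general technique, not the authors' exact formulation; the Gaussian affinity, `alpha`, and `sigma` are assumed illustrative choices.

```python
import numpy as np

def propagate_saliency(features, seed_saliency, alpha=0.99, sigma=0.1):
    """Propagate seed saliency values over a fully connected region graph
    via manifold ranking: f = (I - alpha*S)^-1 y, with S the symmetrically
    normalized affinity matrix built from region features."""
    n = features.shape[0]
    # Pairwise affinities from feature distances (Gaussian kernel).
    d2 = np.sum((features[:, None, :] - features[None, :, :]) ** 2, axis=-1)
    W = np.exp(-d2 / (2 * sigma ** 2))
    np.fill_diagonal(W, 0.0)                      # no self-loops
    deg = W.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(deg + 1e-12))
    S = D_inv_sqrt @ W @ D_inv_sqrt               # normalized affinity
    f = np.linalg.solve(np.eye(n) - alpha * S, seed_saliency)
    # Rescale to [0, 1] for use as a region-level saliency map.
    return (f - f.min()) / (f.max() - f.min() + 1e-12)
```

Regions similar to the seeded region inherit high saliency; dissimilar regions stay near zero, which is the behavior the inter-/intra-saliency propagation stages rely on.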
Neurocomputing | 2017
Tao Zhou; Harish Bhaskar; Fanghui Liu; Jie Yang; Ping Cai
Visual tracking is highly challenged by factors such as occlusion, background clutter, abrupt target motion, illumination variation, and changes in scale and orientation. In this paper, an integrated framework for online learning of fused temporal appearance and spatial constraint models for robust and accurate visual target tracking is proposed. The temporal appearance model aims to encapsulate historical appearance information of the target in order to cope with variations due to illumination changes and motion dynamics. The spatial constraint model, on the other hand, exploits the relationships between the target and its neighbors to handle occlusion and deal with a cluttered background. To reduce the computational complexity of the state estimation algorithm and to emphasize the importance of the different basis vectors, a K-nearest Local Smooth Algorithm (KLSA) is used to describe the spatial state model. Further, a customized Accelerated Proximal Gradient (APG) method is implemented to iteratively obtain an optimal solution using KLSA. Finally, the optimal state estimate is obtained by using weighted samples within a particle filtering framework. Experimental results on large-scale benchmark sequences show that the proposed tracker achieves favorable performance compared to state-of-the-art methods.
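The accelerated proximal gradient solver mentioned in the abstract follows a standard pattern: a gradient step on the smooth term, a proximal (soft-thresholding) step on the sparse penalty, and a momentum extrapolation. As a sketch of that generic scheme (FISTA-style APG for a plain L1 problem, not the paper's customized KLSA objective):

```python
import numpy as np

def soft_threshold(v, t):
    """Proximal operator of t*||.||_1 (elementwise shrinkage)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def apg_lasso(D, y, lam=0.1, n_iter=200):
    """Accelerated proximal gradient for min_x 0.5*||y - D x||^2 + lam*||x||_1."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    z, t = x.copy(), 1.0
    for _ in range(n_iter):
        grad = D.T @ (D @ z - y)           # gradient of the smooth data term
        x_new = soft_threshold(z - grad / L, lam / L)
        t_new = (1 + np.sqrt(1 + 4 * t * t)) / 2
        z = x_new + ((t - 1) / t_new) * (x_new - x)   # momentum extrapolation
        x, t = x_new, t_new
    return x
```

The momentum step is what distinguishes APG from plain proximal gradient descent, improving the convergence rate from O(1/k) to O(1/k^2).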
Neurocomputing | 2016
Fanghui Liu; Tao Zhou; Jie Yang
Correlation filters achieve promising performance at high speed in visual tracking. However, conventional correlation filter based trackers cannot tackle affine transformation issues such as scale variation, rotation, and skew. To address this problem, in this paper we propose a part-based representation tracker via kernelized correlation filters (KCF) for visual tracking. A Spatial-Temporal Angle Matrix (STAM), which serves as a confidence metric, is proposed to select reliable patches from parts via multiple correlation filters. These stable patches are used to estimate a 2D affine transformation matrix of the target with a geometric method. Specifically, a full combination scheme over these stable patches is proposed to exploit the sampling space and obtain numerous affine matrices and their corresponding candidates. These diverse candidates help to seek the optimal candidate that represents the object's accurate affine transformation with higher probability. Both qualitative and quantitative evaluations on the VOT2014 challenge and the Object Tracking Benchmark (OTB) show that the proposed tracking method achieves favorable performance compared with other state-of-the-art methods.
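Estimating a 2D affine transformation from the tracked patch centers is a linear least-squares problem. A minimal sketch of that geometric step (generic affine fitting from point correspondences, not the paper's STAM-based selection):

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform mapping src points to dst points.
    src, dst: (n, 2) arrays with n >= 3 non-collinear correspondences.
    Returns a 2x3 matrix A such that dst_i ~ A @ [x_i, y_i, 1]."""
    n = src.shape[0]
    X = np.hstack([src, np.ones((n, 1))])      # homogeneous source coords
    # Solve X @ P = dst for P (3x2), then transpose to the usual 2x3 form.
    P, *_ = np.linalg.lstsq(X, dst, rcond=None)
    return P.T
```

With three or more reliable patch centers, this recovers scale, rotation, and skew jointly, which is exactly what a translation-only correlation filter cannot express.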
Pattern Recognition Letters | 2017
Fanghui Liu; Tao Zhou; Keren Fu; Jie Yang
Highlights:
- A Kernelized Temporal Locality Learning model is proposed.
- The temporal smoothness constraint of a local dictionary is considered.
- The kernel method is incorporated into the LLC method for nonlinear representation.
- Our tracker achieves promising performance on the benchmark.
Linear representation-based methods play an important role in the development of target appearance modeling in visual tracking. However, such a linear representation scheme cannot accurately depict the nonlinearly distributed appearance variations of the target, which often leads to unreliable tracking results. To fix this issue, we introduce the kernel method into the locality-constrained linear coding (LLC) algorithm to comprehensively exploit its nonlinear representation ability. Further, to fully consider the temporal correlation between neighboring frames, we develop a point-to-set distance metric with the L2,1 norm as the temporal smoothness constraint, which aims to guarantee that the object in two consecutive frames is represented by similar dictionaries temporally. Experimental results on the Object Tracking Benchmark show that the proposed tracker achieves promising performance compared with other state-of-the-art methods.
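As a rough illustration of the L2,1 machinery (the norm itself, not the paper's exact point-to-set objective): the L2,1 norm of a matrix sums the L2 norms of its columns, so applying it to the residuals between a point and every element of a set gives one simple point-to-set measure. The `point_to_set_distance` helper here is an assumed, hypothetical reading for illustration only.

```python
import numpy as np

def l21_norm(M):
    """L2,1 norm: sum of the L2 norms of the columns of M."""
    return np.sqrt((M ** 2).sum(axis=0)).sum()

def point_to_set_distance(x, S):
    """Illustrative point-to-set measure: L2,1 norm of the residuals
    between point x (shape (d,)) and each column of the set S (shape (d, n))."""
    return l21_norm(S - x[:, None])
```

Because each column contributes its unsquared L2 norm, a single large residual is penalized less aggressively than under the Frobenius norm, which is why L2,1 penalties are favored for robustness.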
International Conference on Neural Information Processing | 2016
Fanghui Liu; Mingna Liu; Tao Zhou; Yu Qiao; Jie Yang
Nonnegative Matrix Factorization (NMF) has received considerable attention in visual tracking. However, noises and outliers are not tackled well due to the Frobenius norm in NMF's objective function. To address this issue, in this paper, NMF with an L2,1 norm loss function (robust NMF) is introduced into appearance modelling in visual tracking. Compared to standard NMF, robust NMF not only handles noises and outliers but also provides a sparsity property. In our visual tracking framework, the basis matrix from robust NMF is used for appearance modelling with additional L2,1 …
Pattern Recognition Letters | 2016
Fanghui Liu; Tao Zhou; Keren Fu; Jie Yang
International Conference on Computer Vision | 2015
Fanghui Liu; Tao Zhou; Jie Yang; Irene Yu-Hua Gu
IEEE Transactions on Multimedia | 2017
Fanghui Liu; Chen Gong; Tao Zhou; Keren Fu; Xiangjian He; Jie Yang
International Conference on Acoustics, Speech, and Signal Processing | 2016
Fanghui Liu; Tao Zhou; Keren Fu; Irene Yu-Hua Gu; Jie Yang
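The L2,1-norm robust NMF referred to in the ICONIP 2016 abstract can be sketched with multiplicative updates in the style of Kong et al.'s L2,1-NMF; this is a generic illustration under that assumption, not the paper's tracking-specific formulation.

```python
import numpy as np

def robust_nmf(X, k, n_iter=300, eps=1e-10):
    """Multiplicative updates minimizing the L2,1 loss
    sum_i ||X_:i - (W H)_:i||_2 with W, H >= 0, instead of the
    Frobenius-norm loss of standard NMF."""
    m, n = X.shape
    rng = np.random.default_rng(0)
    W = rng.random((m, k))
    H = rng.random((k, n))
    for _ in range(n_iter):
        R = X - W @ H
        # Diagonal reweighting: columns with large residuals get small
        # weights, which makes the loss robust to outlier columns.
        d = 1.0 / (np.sqrt((R ** 2).sum(axis=0)) + eps)
        XD, HD = X * d, H * d              # right-multiplication by diag(d)
        W *= (XD @ H.T) / (W @ (H @ HD.T) + eps)
        H *= (W.T @ XD) / ((W.T @ W) @ HD + eps)
    return W, H
```

Setting all weights `d` to 1 recovers the classical Frobenius-norm multiplicative updates; the per-column reweighting is the only change needed for the L2,1 loss.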
IEEE Transactions on Image Processing | 2018
Fanghui Liu; Chen Gong; Xiaolin Huang; Tao Zhou; Jie Yang; Dacheng Tao