
Publication


Featured research published by Huazhu Fu.


Computer Vision and Pattern Recognition | 2015

Diversity-induced Multi-view Subspace Clustering

Xiaochun Cao; Changqing Zhang; Huazhu Fu; Si Liu; Hua Zhang

In this paper, we focus on boosting multi-view clustering by exploring the complementary information among multi-view features. A multi-view clustering framework, called Diversity-induced Multi-view Subspace Clustering (DiMSC), is proposed for this task. Our method extends existing subspace clustering into the multi-view domain and utilizes the Hilbert-Schmidt Independence Criterion (HSIC) as a diversity term to explore the complementarity of multi-view representations; the resulting problem can be solved efficiently by alternating minimization. Compared to other multi-view clustering methods, the enhanced complementarity reduces the redundancy between the multi-view representations and improves the accuracy of the clustering results. Experiments on both image and video face clustering demonstrate that the proposed method outperforms the state-of-the-art methods.
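As a rough illustration of the diversity term, the empirical HSIC can be computed from centered kernel matrices. The sketch below is not the paper's code; the function name and the RBF kernel choice are mine. It returns a larger value for statistically dependent view representations than for independent ones:

```python
import numpy as np

def hsic(X, Y, sigma=1.0):
    """Empirical HSIC between two view representations (rows = samples)."""
    n = X.shape[0]
    def rbf_gram(Z):
        sq = np.sum(Z ** 2, axis=1)
        d2 = sq[:, None] + sq[None, :] - 2 * Z @ Z.T
        return np.exp(-d2 / (2 * sigma ** 2))
    K, L = rbf_gram(X), rbf_gram(Y)
    H = np.eye(n) - np.ones((n, n)) / n   # centering matrix
    return np.trace(K @ H @ L @ H) / (n - 1) ** 2

# Dependent views score higher than independent ones.
rng = np.random.default_rng(0)
A = rng.normal(size=(50, 3))
dependent = hsic(A, A + 0.01 * rng.normal(size=A.shape))
independent = hsic(A, rng.normal(size=(50, 3)))
```

In DiMSC this quantity is minimized across pairs of view-specific representations, pushing the views toward complementary rather than redundant subspaces.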


IEEE Transactions on Image Processing | 2014

Self-Adaptively Weighted Co-Saliency Detection via Rank Constraint

Xiaochun Cao; Zhiqiang Tao; Bao Zhang; Huazhu Fu; Wei Feng

Co-saliency detection aims at discovering the common salient objects existing in multiple images. Most existing methods combine multiple saliency cues with fixed weights and ignore the intrinsic relationship of these cues. In this paper, we provide a general saliency map fusion framework that exploits the relationship among multiple saliency cues and obtains self-adaptive weights to generate the final saliency/co-saliency map. Given a group of images with similar objects, our method first utilizes several saliency detection algorithms to generate a group of saliency maps for all the images. The feature representations of the co-salient regions should be both similar and consistent, so the matrix formed by stacking their feature histograms is approximately low rank. We formalize this general consistency criterion as a rank constraint and propose two consistency energies to describe it, based on low-rank matrix approximation and low-rank matrix recovery, respectively. By calculating self-adaptive weights from the consistency energy, we highlight the common salient regions. Our method is valid for more than two input images and also works well for single-image saliency detection. Experimental results on a variety of benchmark data sets demonstrate that the proposed method outperforms the state-of-the-art methods.
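The rank-constraint idea can be sketched as follows: stack the per-map foreground feature histograms into a matrix, measure each map's distance to a low-rank approximation of that matrix, and turn the resulting consistency energy into a fusion weight. This is a simplified stand-in for the paper's two energies; all names and the exponential weighting are illustrative:

```python
import numpy as np

def self_adaptive_weights(histograms, rank=1):
    """Self-adaptive fusion weights from a low-rank consistency criterion.

    histograms: (m, d) array, one foreground feature histogram per saliency
    map. Consistent maps make the stacked matrix close to rank-`rank`, so a
    map's weight decays with its distance to the low-rank approximation.
    """
    H = np.asarray(histograms, dtype=float)
    U, s, Vt = np.linalg.svd(H, full_matrices=False)
    low_rank = (U[:, :rank] * s[:rank]) @ Vt[:rank]
    energy = np.linalg.norm(H - low_rank, axis=1)  # per-map consistency energy
    w = np.exp(-energy)
    return w / w.sum()
```

A saliency map whose foreground histogram disagrees with the others sits far from the shared low-rank structure and therefore receives a small weight in the fused map.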


Computer Vision and Pattern Recognition | 2014

Object-Based Multiple Foreground Video Co-segmentation

Huazhu Fu; Dong Xu; Bao Zhang; Stephen Lin

We present a video co-segmentation method that uses category-independent object proposals as its basic element and can extract multiple foreground objects in a video set. The use of object elements overcomes limitations of low-level feature representations in separating complex foregrounds and backgrounds. We formulate object-based co-segmentation as a co-selection graph in which regions with foreground-like characteristics are favored while also accounting for intra-video and inter-video foreground coherence. To handle multiple foreground objects, we expand the co-selection graph into a proposed multi-state selection graph model (MSG) that optimizes the segmentations of different objects jointly. This extension applies not only to our co-selection graph but to any standard graph model, turning it into a multi-state selection formulation that can be optimized directly by existing energy minimization techniques. Our experiments show that our object-based multiple foreground video co-segmentation method (ObMiC) compares well with related techniques on both single and multiple foreground cases.
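Restricted to a single object in a single video, the co-selection idea reduces to choosing one proposal per frame under unary (foreground-likeness) and pairwise (temporal coherence) costs, which a Viterbi-style dynamic program solves exactly. The paper's MSG optimization is far more general; this minimal sketch with illustrative names only conveys the graph-selection flavor:

```python
import numpy as np

def select_proposals(unary, pairwise):
    """Pick one proposal per frame by minimizing unary + pairwise costs.

    unary[t][i]: cost of choosing proposal i in frame t (lower = more
    foreground-like). pairwise[t][i][j]: coherence cost between proposal i
    in frame t and proposal j in frame t+1. Returns the optimal index path.
    """
    T = len(unary)
    cost = [np.asarray(unary[0], dtype=float)]
    back = []
    for t in range(1, T):
        trans = cost[-1][:, None] + np.asarray(pairwise[t - 1], dtype=float)
        back.append(np.argmin(trans, axis=0))          # best predecessor
        cost.append(trans.min(axis=0) + np.asarray(unary[t], dtype=float))
    path = [int(np.argmin(cost[-1]))]
    for t in range(T - 2, -1, -1):                     # backtrack
        path.append(int(back[t][path[-1]]))
    return path[::-1]
```

With zero pairwise costs the selection follows the per-frame best proposal; a large switching penalty instead forces a temporally coherent choice, mirroring how the coherence terms act in the co-selection graph.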


International Conference on Computer Vision | 2015

Low-Rank Tensor Constrained Multiview Subspace Clustering

Changqing Zhang; Huazhu Fu; Si Liu; Guangcan Liu; Xiaochun Cao

In this paper, we explore the problem of multiview subspace clustering. We introduce a low-rank tensor constraint to explore the complementary information from multiple views and, accordingly, establish a novel method called Low-rank Tensor constrained Multiview Subspace Clustering (LT-MSC). Our method regards the subspace representation matrices of different views as a tensor, which captures the high-order correlations underlying multiview data. The tensor is then equipped with a low-rank constraint, which models the cross-view information, effectively reduces the redundancy of the learned subspace representations, and improves clustering accuracy as well. The inference of the affinity matrix for clustering is formulated as a tensor nuclear norm minimization problem, constrained by an additional L2,1-norm regularizer and some linear equalities. The minimization problem is convex and thus can be solved efficiently by an Augmented Lagrangian Alternating Direction Minimization (AL-ADM) method. Extensive experimental results on four benchmark datasets show the effectiveness of our proposed LT-MSC method.
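The computational primitive behind nuclear-norm minimization in AL-ADM-style solvers is singular value thresholding, applied to matricized (unfolded) views of the tensor. A minimal sketch of these two pieces, with illustrative function names:

```python
import numpy as np

def svt(M, tau):
    """Singular value thresholding: the prox operator of tau * nuclear norm."""
    U, s, Vt = np.linalg.svd(M, full_matrices=False)
    s = np.maximum(s - tau, 0.0)     # shrink the singular values
    return (U * s) @ Vt

def unfold(T, mode):
    """Matricize tensor T along the given mode (mode fibers become columns)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)
```

In an AL-ADM iteration, each mode unfolding of the stacked representation tensor would be passed through `svt`, shrinking the spectrum and thereby suppressing redundant cross-view structure.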


International Symposium on Biomedical Imaging | 2016

Retinal vessel segmentation via deep learning network and fully-connected conditional random fields

Huazhu Fu; Yanwu Xu; Damon Wing Kee Wong; Jiang Liu

Vessel segmentation is a key step for various medical applications. This paper introduces a deep learning architecture to improve the performance of retinal vessel segmentation. Deep learning architectures have been demonstrated to have a powerful ability to automatically learn rich hierarchical representations. In this paper, we formulate vessel segmentation as a boundary detection problem and utilize fully convolutional neural networks (CNNs) to generate a vessel probability map. Our vessel probability map distinguishes vessels from background in regions of inadequate contrast and is robust to pathological regions in the fundus image. Moreover, a fully-connected Conditional Random Field (CRF) is employed to combine the discriminative vessel probability map with long-range interactions between pixels. Finally, a binary vessel segmentation result is obtained by our method. We show that our proposed method achieves state-of-the-art vessel segmentation performance on the DRIVE and STARE datasets.
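A heavily simplified sketch of the refinement step: the example below replaces the fully-connected CRF with a local mean-field smoothing of the probability map before thresholding. It only conveys the idea of combining unary beliefs with pairwise agreement and is much weaker than the paper's model with long-range interactions; all names and parameters are mine:

```python
import numpy as np

def smooth_and_binarize(prob, iters=5, pairwise=1.0, thresh=0.5):
    """Refine a vessel probability map with a local mean-field step.

    Each pixel's foreground belief combines its original log-odds (unary)
    with a vote from its 4-neighborhood (pairwise), then the refined map
    is thresholded into a binary vessel mask.
    """
    q = np.asarray(prob, dtype=float)
    unary = np.log(np.clip(prob, 1e-6, 1.0) / np.clip(1 - prob, 1e-6, 1.0))
    for _ in range(iters):
        nb = np.zeros_like(q)                      # sum of neighbor beliefs
        nb[1:] += q[:-1]; nb[:-1] += q[1:]
        nb[:, 1:] += q[:, :-1]; nb[:, :-1] += q[:, 1:]
        logit = unary + pairwise * (2 * nb / 4 - 1)  # neighbors vote +/-1
        q = 1.0 / (1.0 + np.exp(-logit))
    return q > thresh
```

On a synthetic map, this keeps a contiguous vessel-like structure while suppressing an isolated noisy pixel, which is the qualitative effect the CRF stage provides.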


IEEE Transactions on Image Processing | 2015

Object-Based Multiple Foreground Video Co-Segmentation via Multi-State Selection Graph

Huazhu Fu; Dong Xu; Bao Zhang; Stephen Lin; Rabab K. Ward

We present a technique for multiple foreground video co-segmentation in a set of videos. This technique is based on category-independent object proposals. To identify the foreground objects in each frame, we examine the properties of the various regions that reflect the characteristics of foregrounds, considering the intra-video coherence of the foreground as well as the foreground consistency among the different videos in the set. Multiple foregrounds are handled via a multi-state selection graph in which a node representing a video frame can take multiple labels that correspond to different objects. In addition, our method incorporates an indicator matrix that for the first time allows accurate handling of cases with common foreground objects missing in some videos, thus preventing irrelevant regions from being misclassified as foreground objects. An iterative procedure is proposed to optimize our new objective function. As demonstrated through comprehensive experiments, this object-based multiple foreground video co-segmentation method compares well with related techniques that co-segment multiple foregrounds.


IEEE Transactions on Image Processing | 2015

Constrained Multi-View Video Face Clustering

Xiaochun Cao; Changqing Zhang; Chengju Zhou; Huazhu Fu; Hassan Foroosh

In this paper, we focus on face clustering in videos. To improve video face clustering using multiple intrinsic cues, i.e., pairwise constraints and multiple views, we propose a constrained multi-view video face clustering method under a unified graph-based model. First, unlike most existing video face clustering methods, which employ constraints only in the clustering step, we enforce the pairwise constraints throughout the whole framework, in both the sparse subspace representation and the spectral clustering. In the constrained sparse subspace representation, the sparse representation is forced to explore unknown relationships. In the constrained spectral clustering, the constraints guide the learning of more reasonable new representations. Second, our method considers the video face pairwise constraints and the multi-view consistency simultaneously. In particular, the graph regularization enforces the pairwise constraints to be respected, and the co-regularization penalizes disagreement among the graphs of different views. Experiments on three real-world video benchmark data sets demonstrate significant improvements of our method over the state-of-the-art methods.
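The constraint-injection idea can be sketched as editing the affinity graph before computing the spectral embedding: must-link pairs receive maximal affinity and cannot-link pairs zero. This hard-editing is a simplification of the paper's soft graph regularization and co-regularization; all names are illustrative:

```python
import numpy as np

def constrained_affinity(W, must_link=(), cannot_link=()):
    """Inject pairwise constraints into a symmetric affinity matrix."""
    W = np.array(W, dtype=float)
    for i, j in must_link:
        W[i, j] = W[j, i] = 1.0    # force the pair together
    for i, j in cannot_link:
        W[i, j] = W[j, i] = 0.0    # force the pair apart
    return W

def spectral_embedding(W, k):
    """Rows of the first k eigenvectors of the normalized Laplacian."""
    d = W.sum(axis=1)
    Dinv = np.diag(1.0 / np.sqrt(np.maximum(d, 1e-12)))
    L = np.eye(len(W)) - Dinv @ W @ Dinv
    vals, vecs = np.linalg.eigh(L)   # ascending eigenvalues
    return vecs[:, :k]
```

Points in the same (constraint-respecting) block of the graph receive nearly identical embedding rows, so a standard clustering step on the embedding recovers the constrained groups.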


IEEE Transactions on Medical Imaging | 2018

Joint Optic Disc and Cup Segmentation Based on Multi-Label Deep Network and Polar Transformation

Huazhu Fu; Jun Cheng; Yanwu Xu; Damon Wing Kee Wong; Jiang Liu; Xiaochun Cao

Glaucoma is a chronic eye disease that leads to irreversible vision loss. The cup to disc ratio (CDR) plays an important role in the screening and diagnosis of glaucoma. Thus, the accurate and automatic segmentation of the optic disc (OD) and optic cup (OC) from fundus images is a fundamental task. Most existing methods segment them separately and rely on hand-crafted visual features from fundus images. In this paper, we propose a deep learning architecture, named M-Net, which solves OD and OC segmentation jointly in a one-stage multi-label system. The proposed M-Net mainly consists of a multi-scale input layer, a U-shape convolutional network, a side-output layer, and a multi-label loss function. The multi-scale input layer constructs an image pyramid to achieve multiple receptive field sizes. The U-shape convolutional network is employed as the main body network to learn a rich hierarchical representation, while the side-output layer acts as an early classifier that produces companion local prediction maps for different scale layers. Finally, a multi-label loss function is proposed to generate the final segmentation map. To further improve segmentation performance, we also introduce the polar transformation, which provides a representation of the original image in the polar coordinate system. The experiments show that our M-Net system achieves state-of-the-art OD and OC segmentation results on the ORIGA dataset. The proposed method also obtains satisfactory glaucoma screening performance with the calculated CDR values on both the ORIGA and SCES datasets.
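The polar transformation step can be sketched directly: resample the fundus image on a radius-by-angle grid around the disc center, so the roughly circular disc and cup become roughly horizontal bands. The function name, grid sizes, and nearest-neighbour sampling are my illustrative choices, not the paper's implementation:

```python
import numpy as np

def polar_transform(img, center, radius, n_r=64, n_theta=128):
    """Resample a 2-D image around `center` into polar coordinates.

    Rows index radius (0..radius), columns index angle (0..2*pi), using
    nearest-neighbour sampling clipped to the image bounds.
    """
    cy, cx = center
    r = np.linspace(0, radius, n_r)
    theta = np.linspace(0, 2 * np.pi, n_theta, endpoint=False)
    R, T = np.meshgrid(r, theta, indexing="ij")
    ys = np.clip(np.round(cy + R * np.sin(T)).astype(int), 0, img.shape[0] - 1)
    xs = np.clip(np.round(cx + R * np.cos(T)).astype(int), 0, img.shape[1] - 1)
    return img[ys, xs]
```

In the polar view, a centered circular region maps to a horizontal band of rows, which equalizes the disc/cup proportions and turns the roughly radial layer structure into bands that are easier for a segmentation network to capture.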


IEEE Transactions on Biomedical Engineering | 2015

Automatic Optic Disc Detection in OCT Slices via Low-Rank Reconstruction

Huazhu Fu; Dong Xu; Stephen Lin; Damon Wing Kee Wong; Jiang Liu

Optic disc measurements provide useful diagnostic information as they correlate with certain eye diseases. In this paper, we present an automatic method for detecting the optic disc in a single OCT slice. Our method is developed from the observation that the retinal pigment epithelium (RPE), which bounds the optic disc, has a low-rank appearance structure that differs from areas within the disc. To detect the disc, our method acquires from the OCT image an RPE appearance model that is specific to the individual and imaging conditions, by learning a low-rank dictionary from image areas known to be part of the RPE according to priors on ocular anatomy. The edge of the RPE, where the optic disc is located, is then found by traversing the retinal layer containing the RPE, reconstructing local appearance with the low-rank model, and detecting the point at which appearance starts to deviate (i.e., increased reconstruction error). To aid in this detection, we also introduce a geometrical constraint called the distance bias that accounts for the smooth shape of the RPE. Experiments demonstrate that our method outperforms other OCT techniques in localizing the optic disc and estimating disc width. Moreover, we also show the potential use of our method for optic disc area detection in 3-D OCT volumes.
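A rough sketch of the detection pipeline: learn a low-rank appearance basis from patches known to lie on the RPE (here via a plain SVD rather than the paper's dictionary learning), score patches along the layer by their reconstruction error, and flag the first point where the error jumps. All function names and the threshold rule are illustrative:

```python
import numpy as np

def learn_basis(patches, rank=2):
    """Low-rank appearance basis from patches known to lie on the RPE."""
    X = np.asarray(patches, dtype=float)
    mean = X.mean(axis=0)
    U, s, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:rank]

def reconstruction_error(patch, mean, basis):
    """Distance of a patch from the learned RPE appearance subspace."""
    v = np.asarray(patch, dtype=float) - mean
    return float(np.linalg.norm(v - basis.T @ (basis @ v)))

def first_deviation(errors, factor=3.0):
    """Index where the error first jumps above factor * median (disc edge)."""
    e = np.asarray(errors, dtype=float)
    thresh = factor * np.median(e) + 1e-9
    hits = np.nonzero(e > thresh)[0]
    return int(hits[0]) if hits.size else -1
```

Traversing the RPE layer and applying `first_deviation` to the running errors mimics locating the disc boundary as the point where local appearance stops being explained by the low-rank model; the paper additionally regularizes this with the distance-bias constraint.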


ACM Multimedia | 2014

Co-Saliency Detection via Base Reconstruction

Xiaochun Cao; Yupeng Cheng; Zhiqiang Tao; Huazhu Fu

Co-saliency detection aims at finding the common saliency in a series of images, which is useful for a variety of multimedia applications. In this paper, we formulate co-saliency detection as a reconstruction problem: the foreground can be well reconstructed using reconstruction bases that are extracted from each image and have similar appearances in the feature space. We first obtain a candidate set by measuring the saliency prior of each image. Relevance information among the multiple images is then utilized to remove inaccurate reconstruction bases. Finally, with the updated reconstruction bases, we rebuild the images and treat the reconstruction error as a negatively correlated value in the co-saliency measurement. Satisfactory quantitative and qualitative experimental results on two benchmark datasets demonstrate the efficiency and effectiveness of our method.

Collaboration


Dive into Huazhu Fu's collaborations.

Top Co-Authors

Xiaochun Cao
Chinese Academy of Sciences

Jiang Liu
Chinese Academy of Sciences

Tin Aung
Tan Tock Seng Hospital

Dong Xu
University of Sydney