Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Anan Liu is active.

Publication


Featured research published by Anan Liu.


IEEE Transactions on Systems, Man, and Cybernetics | 2015

Multiple/Single-View Human Action Recognition via Part-Induced Multitask Structural Learning

Anan Liu; Yuting Su; Pingping Jia; Zan Gao; Tong Hao; Zhaoxuan Yang

This paper proposes a unified framework for multiple/single-view human action recognition. First, we propose a hierarchical partwise bag-of-words representation that encodes both local and global visual saliency based on the body-structure cue. Then, we formulate multiple/single-view human action recognition as a part-regularized multitask structural learning (MTSL) problem, which has two advantages for both model learning and feature selection: 1) it preserves the consistency between body-based and part-based action classification using the complementary information among different action categories and multiple views, and 2) it discovers both action-specific and action-shared feature subspaces to strengthen the generalization ability of the learned model. Moreover, we contribute two novel human action recognition datasets, TJU (a single-view multimodal dataset) and MV-TJU (a multiview multimodal dataset). The proposed method is validated on three kinds of challenging datasets: two single-view RGB datasets (KTH and TJU), two well-known depth datasets (MSR Action 3D and MSR Daily Activity 3D), and one novel multiview multimodal dataset (MV-TJU). Extensive experimental results show that this method outperforms popular 2-D/3-D part-model-based methods and several other competing methods for multiple/single-view human action recognition in both RGB and depth modalities. To our knowledge, this paper is the first to demonstrate the applicability of MTSL with part-based regularization to multiple/single-view human action recognition in both RGB and depth modalities.
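
The shared/specific decomposition behind multitask learning of this kind can be illustrated with a small sketch. The following is a minimal sketch, not the paper's exact MTSL objective: each task t (say, one body part in one view) learns weights w_t = w_shared + v_t, so tasks share an "action-shared" component while keeping "action-specific" ones. The function name, the plain squared loss, and the gradient updates are illustrative assumptions.

```python
# Minimal sketch of multi-task learning with shared + task-specific weights.
# Not the paper's MTSL formulation; loss and updates are simplifications.
import numpy as np

def fit_mtsl(Xs, ys, lam_shared=0.1, lam_specific=1.0, lr=0.01, epochs=200):
    """Xs, ys: lists of per-task feature matrices and +-1 label vectors."""
    d = Xs[0].shape[1]
    T = len(Xs)
    w0 = np.zeros(d)                 # shared ("action-shared") component
    V = np.zeros((T, d))             # task-specific ("action-specific") parts
    for _ in range(epochs):
        g0 = lam_shared * w0
        for t, (X, y) in enumerate(zip(Xs, ys)):
            r = X @ (w0 + V[t]) - y          # squared-loss residual
            g = X.T @ r / len(y)
            g0 += g                          # shared weights see every task
            V[t] -= lr * (g + lam_specific * V[t])
        w0 -= lr * g0
    return w0, V

# Toy usage on synthetic tasks that share a discriminative dimension.
rng = np.random.default_rng(0)
Xs = [rng.normal(size=(50, 20)) for _ in range(3)]
ys = [np.sign(X[:, 0] + 0.1 * rng.normal(size=50)) for X in Xs]
w0, V = fit_mtsl(Xs, ys)
```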


IEEE Transactions on Image Processing | 2016

Multi-Modal Clique-Graph Matching for View-Based 3D Model Retrieval

Anan Liu; Weizhi Nie; Yue Gao; Yuting Su

Multi-view matching is an important but challenging task in view-based 3D model retrieval. To address this challenge, we propose an original multi-modal clique graph (MCG) matching method. We systematically present a method for MCG generation; the graph is composed of cliques, which consist of neighboring nodes in the multi-modal feature space, and hyper-edges, which link pairwise cliques. Moreover, we propose an image-set-based clique/edgewise similarity measure to address the set-to-set distance measure, which is the core problem in MCG matching. The proposed MCG provides the following benefits: 1) it preserves the local and global attributes of a graph with the designed structure; 2) it eliminates redundant and noisy information by strengthening inliers while suppressing outliers; and 3) it avoids the difficulty of defining high-order attributes and solving hyper-graph matching. We validate MCG-based 3D model retrieval on three popular single-modal datasets and one novel multi-modal dataset. Extensive experiments show the superiority of the proposed method through comparisons. Moreover, we contribute a novel real-world 3D object dataset, the multi-view RGB-D object dataset. To the best of our knowledge, it is the largest real-world 3D object dataset containing multi-modal and multi-view information.
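
The two ingredients the abstract names, clique generation and a set-to-set similarity, can be sketched under simplifying assumptions: cliques are formed by grouping each view with its k nearest neighbors in feature space, and clique-to-clique similarity averages the best pairwise matches between the two sets. The paper's actual MCG construction and measure are richer; the names here are illustrative.

```python
# Minimal sketch: k-NN cliques in feature space and a simple set-to-set
# similarity between two cliques. A stand-in, not the paper's MCG method.
import numpy as np

def build_cliques(features, k=3):
    """features: (n_views, d) array; returns a list of index sets (cliques)."""
    d2 = ((features[:, None, :] - features[None, :, :]) ** 2).sum(-1)
    order = np.argsort(d2, axis=1)
    return [set(order[i, :k + 1]) for i in range(len(features))]  # self + k NNs

def clique_similarity(fa, fb, clique_a, clique_b):
    """Set-to-set similarity: average of the best cosine matches each way."""
    A = fa[list(clique_a)]
    B = fb[list(clique_b)]
    A = A / np.linalg.norm(A, axis=1, keepdims=True)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    S = A @ B.T
    return 0.5 * (S.max(axis=1).mean() + S.max(axis=0).mean())
```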


IEEE Transactions on Medical Imaging | 2012

A Semi-Markov Model for Mitosis Segmentation in Time-Lapse Phase Contrast Microscopy Image Sequences of Stem Cell Populations

Anan Liu; Kang Li; Takeo Kanade

We propose a semi-Markov model trained in a max-margin learning framework for mitosis event segmentation in large-scale time-lapse phase contrast microscopy image sequences of stem cell populations. Our method consists of three steps. First, we apply a constrained-optimization-based microscopy image segmentation method that exploits phase contrast optics to extract candidate subsequences of the input image sequence that contain mitosis events. Then, we apply a max-margin hidden conditional random field (MM-HCRF) classifier, learned from human-annotated mitotic and nonmitotic sequences, to classify each candidate subsequence as mitotic or not. Finally, a max-margin semi-Markov model (MM-SMM) trained on manually segmented mitotic sequences is used to reinforce the mitosis classification results and to further segment each mitosis into four predefined temporal stages. The proposed method outperforms the event-detection CRF model recently reported by Huh, as well as several other competing methods, on very challenging image sequences of multipolar-shaped C3H10T1/2 mesenchymal stem cells. For mitosis detection, an overall precision of 95.8% and a recall of 88.1% were achieved. For mitosis segmentation, the mean and standard deviation of the localization errors of the start and end points of all mitosis stages were well below 1 and 2 frames, respectively. In particular, an overall temporal localization error of 0.73 ± 1.29 frames was achieved for locating daughter-cell birth events.
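
The core inference a semi-Markov model needs, scoring whole segments rather than single frames, can be sketched with segment-level Viterbi decoding: dynamic programming over segment end points and durations picks the best labeled segmentation. The per-frame scoring function below is a made-up stand-in for the paper's learned potentials.

```python
# Minimal sketch of semi-Markov (segment-level) Viterbi decoding.
# A simplified stand-in for MM-SMM inference, not the trained model itself.
import numpy as np

def semi_markov_decode(frame_scores, n_labels, max_dur):
    """frame_scores: (T, n_labels) per-frame label scores.
    Returns the best segmentation as (start, end, label) triples."""
    T = frame_scores.shape[0]
    cum = np.vstack([np.zeros(n_labels), np.cumsum(frame_scores, axis=0)])
    best = np.full(T + 1, -np.inf)
    best[0] = 0.0
    back = [None] * (T + 1)
    for t in range(1, T + 1):
        for d in range(1, min(max_dur, t) + 1):
            seg = cum[t] - cum[t - d]        # score of each label on [t-d, t)
            y = int(np.argmax(seg))
            s = best[t - d] + seg[y]
            if s > best[t]:
                best[t], back[t] = s, (t - d, y)
    segs, t = [], T
    while t > 0:                             # trace the best segmentation back
        start, y = back[t]
        segs.append((start, t, y))
        t = start
    return segs[::-1]
```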


International Symposium on Biomedical Imaging | 2010

Mitosis sequence detection using hidden conditional random fields

Anan Liu; Kang Li; Takeo Kanade

We propose a fully automated mitosis event detector using hidden conditional random fields for cell populations imaged with time-lapse phase contrast microscopy. The method consists of two stages that jointly optimize recall and precision. First, we apply model-based microscopy image preconditioning and volumetric segmentation to identify candidate spatiotemporal subregions of the input image sequence where mitosis potentially occurred. Then, we apply a learned hidden conditional random field classifier to classify each candidate sequence as mitotic or not. The proposed detection method achieved 95% precision and 85% recall on very challenging image sequences of multipolar-shaped C3H10T1/2 mesenchymal stem cells. The superiority of the method was further demonstrated by comparisons with conditional random field and support vector machine classifiers. Moreover, the proposed method does not depend on empirical parameters, ad hoc image processing, or cell tracking, and can be straightforwardly adapted to different cell types.
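
How a hidden-state sequence classifier of the HCRF family can score a candidate sequence can be sketched as follows: for each class, take the best hidden-state path under class-specific emission and transition weights (max inference, a simplification of the marginalization a true HCRF performs). All parameters below are random placeholders rather than learned ones.

```python
# Minimal sketch of hidden-state sequence classification, HCRF-style but using
# max inference instead of marginalization. Parameters are placeholders.
import numpy as np

def class_score(obs, emit_w, trans_w):
    """obs: (T, d) features; emit_w: (H, d); trans_w: (H, H)."""
    e = obs @ emit_w.T                       # (T, H) emission scores
    v = e[0].copy()
    for t in range(1, len(obs)):             # Viterbi-style max-sum recursion
        v = e[t] + (v[:, None] + trans_w).max(axis=0)
    return v.max()

def classify(obs, params_per_class):
    """Pick the class whose best hidden-state path scores highest."""
    scores = [class_score(obs, ew, tw) for ew, tw in params_per_class]
    return int(np.argmax(scores)), scores

# Toy usage: two classes, 4 hidden states, 8-dim observations.
rng = np.random.default_rng(1)
params = [(rng.normal(size=(4, 8)), rng.normal(size=(4, 4))) for _ in range(2)]
label, scores = classify(rng.normal(size=(20, 8)), params)
```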


Computer Vision and Pattern Recognition | 2015

Clique-graph matching by preserving global & local structure

Weizhi Nie; Anan Liu; Zan Gao; Yuting Su

This paper introduces the clique-graph and presents a clique-graph matching method that preserves both global and local structure. Specifically, we formulate the objective function of clique-graph matching with respect to two latent variables: the clique information in the original graph and the pairwise clique correspondence constrained by one-to-one matching. Since the objective function is not jointly convex in both latent variables, we decompose the optimization into two consecutive steps: 1) clique-to-clique similarity measurement, which preserves local unary and pairwise correspondences, and 2) graph-to-graph similarity measurement, which preserves the global clique-to-clique correspondence. Extensive experiments on synthetic data and real images show that the proposed method outperforms representative methods, especially when both noise and outliers exist.
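
The two-step decomposition can be sketched concretely: first compute a clique-to-clique similarity matrix, then solve the one-to-one clique correspondence as a linear assignment problem. This is a simplified stand-in for the paper's optimization; the feature-based similarity below is an illustrative assumption.

```python
# Minimal sketch of two-step clique-graph matching: similarity matrix, then a
# one-to-one assignment. Not the paper's exact objective or solver.
import numpy as np
from scipy.optimize import linear_sum_assignment

def match_clique_graphs(cliques_a, cliques_b):
    """cliques_*: lists of (n_i, d) arrays, one feature matrix per clique."""
    S = np.zeros((len(cliques_a), len(cliques_b)))
    for i, A in enumerate(cliques_a):
        for j, B in enumerate(cliques_b):
            # Step 1: clique-to-clique similarity from best pairwise matches.
            D = A @ B.T
            S[i, j] = 0.5 * (D.max(axis=1).mean() + D.max(axis=0).mean())
    # Step 2: one-to-one clique correspondence maximizing total similarity.
    rows, cols = linear_sum_assignment(-S)
    return list(zip(rows, cols)), S[rows, cols].sum()
```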


Neurocomputing | 2015

Single/multi-view human action recognition via regularized multi-task learning

Anan Liu; Ning Xu; Yuting Su; Hong Lin; Tong Hao; Zhaoxuan Yang

This paper proposes a unified single/multi-view human action recognition method via regularized multi-task learning. First, we propose the pyramid partwise bag-of-words (PPBoW) representation, which implicitly encodes both local visual characteristics and human body structure. Furthermore, we formulate single/multi-view human action recognition as a part-induced multi-task learning problem penalized by graph structure and sparsity, which discovers the latent correlation among multiple views and body parts and consequently boosts performance. Experiments show that this method significantly improves performance over the standard BoW+SVM method. Moreover, the proposed method achieves competitive performance against state-of-the-art methods, using only the low-dimensional PPBoW representation, for human action recognition on KTH and MV-TJU, a new multi-view action dataset with RGB, depth, and skeleton data prepared by our group.
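
A partwise, pyramid-style bag of words can be sketched under assumptions: local descriptors come with vertical positions, "parts" are horizontal bands of the person bounding box (a crude stand-in for detected body parts), and each pyramid level splits the box into more bands. Codeword assignment is plain nearest-centroid quantization; the exact PPBoW construction in the paper may differ.

```python
# Minimal sketch of a pyramid partwise bag-of-words histogram.
# Bands-as-parts and the level structure are illustrative assumptions.
import numpy as np

def ppbow(descriptors, positions, box_h, codebook, levels=(1, 3)):
    """descriptors: (n, d); positions: (n,) vertical coords in [0, box_h)."""
    d2 = ((descriptors[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    words = d2.argmin(axis=1)                  # quantize to nearest codeword
    k = len(codebook)
    hists = []
    for bands in levels:                       # pyramid over body bands
        for b in range(bands):
            lo, hi = b * box_h / bands, (b + 1) * box_h / bands
            sel = words[(positions >= lo) & (positions < hi)]
            h = np.bincount(sel, minlength=k).astype(float)
            hists.append(h / max(h.sum(), 1.0))  # per-band normalization
    return np.concatenate(hists)
```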


Signal Processing | 2015

Coupled hidden conditional random fields for RGB-D human action recognition

Anan Liu; Weizhi Nie; Yuting Su; Li Ma; Tong Hao; Zhaoxuan Yang

This paper proposes a human action recognition method based on a coupled hidden conditional random field model that fuses RGB and depth sequential information. The coupled model extends the standard hidden-state conditional random field, which has a single chain-structured sequential observation, to multiple chain-structured sequential observations: synchronized sequences captured in multiple modalities. For model formulation, we propose a specific graph structure for the interaction among multiple modalities and design the corresponding potential functions. We then propose model learning and inference methods to discover the latent correlation between RGB and depth data and to model temporal context within each individual modality. Extensive experiments show that the proposed model boosts human action recognition performance by taking advantage of the complementary characteristics of the RGB and depth modalities.

Highlights: We propose cHCRF to learn sequence-specific and sequence-shared temporal structure. We contribute a novel RGB-D human action dataset containing 1200 samples. Experiments on 3 popular datasets show the superiority of the proposed method.
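
The kinds of potentials a coupled two-chain model combines can be sketched: per-frame emission scores in each modality, within-chain transition potentials, and a cross-chain coupling potential tying the two hidden states at each time step. The sketch below only scores one joint hidden-state assignment; learning and inference are omitted, and all weights are illustrative placeholders, not the paper's cHCRF.

```python
# Minimal sketch: scoring a joint hidden-state assignment over two coupled
# chains (RGB and depth). Weights are placeholders; inference is omitted.
import numpy as np

def coupled_chain_score(h_rgb, h_dep, e_rgb, e_dep, trans, couple):
    """h_*: (T,) hidden-state index arrays; e_*: (T, H) emission scores;
    trans: (H, H) within-chain potentials; couple: (H, H) cross-chain."""
    T = len(h_rgb)
    s = e_rgb[np.arange(T), h_rgb].sum() + e_dep[np.arange(T), h_dep].sum()
    s += trans[h_rgb[:-1], h_rgb[1:]].sum()       # RGB temporal context
    s += trans[h_dep[:-1], h_dep[1:]].sum()       # depth temporal context
    s += couple[h_rgb, h_dep].sum()               # per-frame RGB-depth coupling
    return s
```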


Information Sciences | 2015

Graph-based characteristic view set extraction and matching for 3D model retrieval

Anan Liu; Zhongyang Wang; Weizhi Nie; Yuting Su

In recent years, multi-view representation of 3D models has led to extensive research on view-based methods for 3D model retrieval. However, most approaches focus on feature extraction from 2D images while ignoring the spatial information of the 3D model. To improve the effectiveness of view-based 3D model retrieval, this paper proposes a novel method for characteristic view extraction and similarity measurement. First, a graph clustering method is used for view grouping, and a random-walk algorithm is applied to adaptively update the weight of each view. The spatial information of the 3D object is used to construct a view-graph model, enabling each characteristic view to represent a discriminative visual feature in a specific spatial context. Next, by treating the view set as a graph model, the similarity measurement between two models can be converted into a graph matching problem, which we solve by mathematically formulating it as Rayleigh quotient maximization with affinity constraints. Extensive comparison experiments were conducted on the popular ETH, NTU, PSB, and MV-RED 3D model datasets. The results demonstrate the superiority of the proposed method.
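
Graph matching as Rayleigh quotient maximization can be sketched in the spirit of spectral matching: build a pairwise affinity matrix over candidate view correspondences, take its leading eigenvector (which maximizes x^T M x subject to ||x|| = 1), and greedily discretize under one-to-one constraints. The affinity construction and names below are illustrative, not the paper's exact formulation.

```python
# Minimal sketch of spectral matching: the leading eigenvector of a symmetric
# correspondence affinity matrix maximizes the Rayleigh quotient.
import numpy as np

def spectral_match(M, n_a, n_b):
    """M: (n_a*n_b, n_a*n_b) symmetric affinity over correspondences (i, j)."""
    vals, vecs = np.linalg.eigh(M)
    x = np.abs(vecs[:, -1])                  # leading eigenvector
    used_a, used_b, matches = set(), set(), []
    for idx in np.argsort(-x):               # greedy one-to-one discretization
        i, j = divmod(int(idx), n_b)
        if i not in used_a and j not in used_b and x[idx] > 0:
            matches.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return matches
```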


Annual ACIS International Conference on Computer and Information Science | 2008

A Novel Image Text Extraction Method Based on K-Means Clustering

Yan Song; Anan Liu; Lin Pang; Shouxun Lin; Yongdong Zhang; Sheng Tang

Text in web pages, images, and videos contains important clues for information indexing and retrieval. Most existing text extraction methods depend on the language type and text appearance. In this paper, a novel and universal image text extraction method is proposed. A coarse-to-fine text localization method is implemented: first, a multi-scale approach is adopted to locate text with different font sizes; second, projection profiles are used in a location refinement step. Color-based k-means clustering is adopted for text segmentation. Compared with the grayscale images used in most existing methods, color images are better suited to clustering-based segmentation. The method treats corner points, edge points, and other points equally, which allows it to handle multilingual text. Experimental results demonstrate that the best performance is obtained when k is 3. Comparative experiments on a large number of images show that our method is accurate and robust under various conditions.
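
Color-based k-means text segmentation can be sketched directly: cluster the RGB pixels of a located text region into k = 3 groups (roughly text, background, and transition pixels) and keep one cluster as a binary text mask. Which cluster is text is decided here by a smallest-cluster heuristic, an assumption rather than the paper's rule.

```python
# Minimal sketch of color k-means segmentation of a detected text region.
# The text-cluster selection heuristic is an illustrative assumption.
import numpy as np
from sklearn.cluster import KMeans

def segment_text_region(region, k=3):
    """region: (h, w, 3) uint8 color image of a located text line."""
    h, w, _ = region.shape
    pixels = region.reshape(-1, 3).astype(float)
    labels = KMeans(n_clusters=k, n_init=10, random_state=0).fit_predict(pixels)
    counts = np.bincount(labels, minlength=k)
    text_cluster = int(np.argmin(counts))   # heuristic: text is the smallest cluster
    return (labels == text_cluster).reshape(h, w)
```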


Neurocomputing | 2014

Single/cross-camera multiple-person tracking by graph matching

Weizhi Nie; Anan Liu; Yuting Su; Huanbo Luan; Zhaoxuan Yang; Liujuan Cao; Rongrong Ji

Single- and cross-camera multiple-person tracking under unconstrained conditions is an extremely challenging task in computer vision. To address the main difficulty of each setting, occlusion in the single-camera scenario and transitions in the cross-camera scenario, we propose a unified framework that formulates both tracking tasks as graph matching with affinity constraints. To our knowledge, our work is the first to unify the two kinds of tracking problems in the same graph matching framework. The proposed method consists of two steps: tracklet generation and tracklet association. First, we implement a modified part-based human detector and the Tracking-Modeling-Detection (TMD) method for tracklet generation. Then we associate tracklets by graph matching, mathematically formulated as Rayleigh quotient maximization. Comparison experiments show that the proposed method produces results competitive with state-of-the-art methods.
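
The tracklet association step can be sketched under assumptions: an affinity between tracklet pairs is built from appearance similarity plus simple temporal and motion gates, and a greedy one-to-one selection stands in for the paper's Rayleigh quotient maximization. The tracklet fields, thresholds, and weights below are all illustrative.

```python
# Minimal sketch of tracklet association: pairwise affinities with temporal
# and motion gating, then greedy one-to-one selection. A simplified stand-in
# for the paper's graph matching formulation.
import numpy as np

def tracklet_affinity(tracklets_a, tracklets_b, max_gap=30.0):
    """tracklets_*: lists of dicts with 'feat' (d,), 'start_t'/'end_t',
    and 'start_pos'/'end_pos' (2,) image positions."""
    A = np.zeros((len(tracklets_a), len(tracklets_b)))
    for i, ta in enumerate(tracklets_a):
        for j, tb in enumerate(tracklets_b):
            gap = tb["start_t"] - ta["end_t"]
            if not (0 < gap <= max_gap):
                continue                          # temporal gate
            app = float(ta["feat"] @ tb["feat"])  # appearance similarity
            dist = np.linalg.norm(tb["start_pos"] - ta["end_pos"])
            A[i, j] = app * np.exp(-dist / (10.0 * gap))  # motion-smoothness gate
    return A

def associate(A):
    """Greedy one-to-one association over positive affinities."""
    pairs, used_a, used_b = [], set(), set()
    for idx in np.argsort(-A, axis=None):
        i, j = divmod(int(idx), A.shape[1])
        if A[i, j] <= 0:
            break
        if i not in used_a and j not in used_b:
            pairs.append((i, j))
            used_a.add(i)
            used_b.add(j)
    return pairs
```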

Collaboration


Dive into Anan Liu's collaborations.

Top Co-Authors

Yongdong Zhang, Chinese Academy of Sciences
Sheng Tang, Chinese Academy of Sciences
Zan Gao, Tianjin University of Technology
Jintao Li, Chinese Academy of Sciences
Yan Song, Chinese Academy of Sciences