
Publication


Featured research published by Meizhu Liu.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2012

Shape Retrieval Using Hierarchical Total Bregman Soft Clustering

Meizhu Liu; Baba C. Vemuri; Shun-ichi Amari; Frank Nielsen

In this paper, we consider the family of total Bregman divergences (tBDs) as an efficient and robust “distance” measure to quantify the dissimilarity between shapes. We use the tBD-based l1-norm center as the representative of a set of shapes, and call it the t-center. First, we briefly present and analyze the properties of the tBDs and t-centers following our previous work in [1]. Then, we prove that for any tBD, there exists a distribution which belongs to the lifted exponential family (lEF) of statistical distributions. Further, we show that finding the maximum a posteriori (MAP) estimate of the parameters of the lifted exponential family distribution is equivalent to minimizing the tBD to find the t-centers. This leads to a new clustering technique, namely, the total Bregman soft clustering algorithm. We evaluate the tBD, t-center, and the soft clustering algorithm on shape retrieval applications. Our shape retrieval framework is composed of three steps: 1) extraction of the shape boundary points, 2) affine alignment of the shapes and use of a Gaussian mixture model (GMM) [2], [3], [4] to represent the aligned boundaries, and 3) comparison of the GMMs using tBD to find the best matches given a query shape. To further speed up the shape retrieval algorithm, we perform hierarchical clustering of the shapes using our total Bregman soft clustering algorithm. This enables us to compare the query with a small subset of shapes which are chosen to be the cluster t-centers. We evaluate our method on various public domain 2D and 3D databases, and demonstrate comparable or better results than state-of-the-art retrieval techniques.
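The scaling that makes the tBD robust can be sketched for the simplest convex generator, f(x) = ||x||^2, where the ordinary Bregman divergence is the squared Euclidean distance. A minimal illustration, assuming this special-case generator (this is not the paper's implementation):

```python
import numpy as np

def bregman_sq(x, y):
    """Ordinary Bregman divergence for f(x) = ||x||^2: the ordinate
    distance, which for this generator is simply ||x - y||^2."""
    d = x - y
    return float(d @ d)

def total_bregman_sq(x, y):
    """Total Bregman divergence for the same generator: the ordinary
    divergence scaled by 1 / sqrt(1 + ||grad f(y)||^2), turning the
    ordinate distance into an orthogonal distance to the tangent at y."""
    grad = 2.0 * y  # grad f(y) for f = ||.||^2
    return bregman_sq(x, y) / float(np.sqrt(1.0 + grad @ grad))

x, y = np.array([1.0, 2.0]), np.array([0.0, 0.0])
# At y = 0 the tangent is flat, so the two divergences coincide.
print(bregman_sq(x, y), total_bregman_sq(x, y))
```

The denominator grows with the gradient at the second argument, which is what damps the influence of extreme points and yields the robustness claimed above.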


IEEE Transactions on Medical Imaging | 2011

Total Bregman Divergence and Its Applications to DTI Analysis

Baba C. Vemuri; Meizhu Liu; Shun-ichi Amari; Frank Nielsen

Divergence measures provide a means to measure the pairwise dissimilarity between “objects,” e.g., vectors and probability density functions (pdfs). Kullback-Leibler (KL) divergence and the square loss (SL) function are two examples of commonly used dissimilarity measures which along with others belong to the family of Bregman divergences (BD). In this paper, we present a novel divergence dubbed the Total Bregman divergence (TBD), which is intrinsically robust to outliers, a very desirable property in many applications. Further, we derive the TBD center, called the t-center (using the l1-norm), for a population of positive definite matrices in closed form and show that it is invariant to transformation from the special linear group. This t-center, which is also robust to outliers, is then used in tensor interpolation as well as in an active contour based piecewise constant segmentation of a diffusion tensor magnetic resonance image (DT-MRI). Additionally, we derive the piecewise smooth active contour model for segmentation of DT-MRI using the TBD and present several comparative results on real data.


Computer Vision and Pattern Recognition | 2010

Total Bregman divergence and its applications to shape retrieval

Meizhu Liu; Baba C. Vemuri; Shun-ichi Amari; Frank Nielsen

Shape database search is ubiquitous in the world of biometric systems, CAD systems, etc. Shape data in these domains is experiencing explosive growth and usually requires searching whole shape databases to retrieve the best matches with accuracy and efficiency for a variety of tasks. In this paper, we present a novel divergence measure between any two given points in R^n or two distribution functions. This divergence measures the orthogonal distance between the tangent to the convex function (used in the definition of the divergence) at one of its input arguments and its second argument. This is in contrast to the ordinate distance taken in the usual definition of the Bregman class of divergences [4]. We use this orthogonal distance to redefine the Bregman class of divergences and develop a new theory for estimating the center of a set of vectors as well as probability distribution functions. The new class of divergences is dubbed the total Bregman divergence (TBD). We present the l1-norm based TBD center, dubbed the t-center, which is then used as the cluster center of a class of shapes. The t-center is a weighted mean, and this weight is small for noise and outliers. We present a shape retrieval scheme using TBD and the t-center for representing the classes of shapes from the MPEG-7 database and compare the results with other state-of-the-art methods in the literature.
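The weighted-mean behavior of the t-center can be sketched for the total squared loss, where (taking the divergence's first argument as the variable) the minimizer reduces to a weighted average whose weights shrink for large-norm points. The data and the specific weighting below are illustrative assumptions, not the paper's code:

```python
import numpy as np

def t_center(points):
    """Sketch of an l1-norm t-center under the total squared loss:
    a weighted mean with weight 1 / sqrt(1 + ||grad f(x_i)||^2) per
    point, for f = ||.||^2, so outliers receive small weight."""
    points = np.asarray(points, dtype=float)
    grads = 2.0 * points  # grad f(x_i) for f = ||.||^2
    w = 1.0 / np.sqrt(1.0 + np.sum(grads ** 2, axis=1))
    return (w[:, None] * points).sum(axis=0) / w.sum()

cluster = [[0.9, 1.0], [1.1, 1.0], [1.0, 0.9]]
outlier = [[50.0, 50.0]]
plain = np.mean(cluster + outlier, axis=0)   # dragged toward the outlier
robust = t_center(cluster + outlier)         # stays near the cluster
```

The outlier's large gradient norm gives it a tiny weight, so the t-center stays close to the true cluster location while the plain mean is pulled far away.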


Computer Vision and Pattern Recognition | 2011

AdaBoost on low-rank PSD matrices for metric learning

Jinbo Bi; Dijia Wu; Le Lu; Meizhu Liu; Yimo Tao; Matthias Wolf

The problem of learning a proper distance or similarity metric arises in many applications such as content-based image retrieval. In this work, we propose a boosting algorithm, MetricBoost, to learn a distance metric that preserves the proximity relationships among object triplets: object i is more similar to object j than to object k. MetricBoost constructs a positive semi-definite (PSD) matrix that parameterizes the distance metric by combining rank-one PSD matrices. Different options of weak models and combination coefficients are derived. Unlike existing proximity preserving metric learning, which is generally not scalable, MetricBoost employs a bipartite strategy to dramatically reduce computation cost by decomposing proximity relationships over triplets into pair-wise constraints. MetricBoost outperforms the state-of-the-art on two real-world medical problems: 1. identifying and quantifying diffuse lung diseases; 2. colorectal polyp matching between different views, as well as on other benchmark datasets.
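The rank-one construction of the PSD metric and a triplet check can be sketched as follows. The basis vectors and combination coefficients here are hypothetical placeholders, not coefficients learned by MetricBoost:

```python
import numpy as np

def mahalanobis_sq(M, x, y):
    """Squared Mahalanobis distance d_M(x, y) = (x - y)^T M (x - y)."""
    d = x - y
    return float(d @ M @ d)

# A metric built as a nonnegative combination of rank-one PSD matrices
# u u^T; any such combination is automatically PSD (illustrative values).
us = [np.array([1.0, 0.0]), np.array([1.0, 1.0]) / np.sqrt(2.0)]
alphas = [0.7, 0.3]
M = sum(a * np.outer(u, u) for a, u in zip(alphas, us))

# Triplet constraint: object i should be closer to j than to k under M.
xi, xj, xk = np.array([0.0, 0.0]), np.array([0.2, 0.1]), np.array([2.0, 0.0])
print(mahalanobis_sq(M, xi, xj) < mahalanobis_sq(M, xi, xk))
```

Because each weak model contributes one rank-one term, PSD-ness of the final metric comes for free from nonnegative coefficients, with no projection step needed.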


European Conference on Computer Vision | 2012

A robust and efficient doubly regularized metric learning approach

Meizhu Liu; Baba C. Vemuri

A proper distance metric is fundamental in many computer vision and pattern recognition applications such as classification, image retrieval, face recognition, and so on. However, it is usually not clear what metric is appropriate for a specific application, so it is more reliable to learn a task-oriented metric. Over the years, many metric learning approaches have been reported in the literature. A typical one is to learn a Mahalanobis distance parameterized by a positive semidefinite (PSD) matrix M. An efficient method of estimating M is to treat M as a linear combination of rank-one matrices that can be learned using a boosting-type approach. However, such approaches have two main drawbacks. First, the weight change across the training samples may be non-smooth. Second, the learned rank-one matrices might be redundant. In this paper, we propose a doubly regularized metric learning algorithm, termed DRMetric, which imposes two regularizations on the conventional metric learning method. First, a regularization is applied to the weights of the training examples, which prevents unstable changes of the weights and also prevents outlier examples from being weighted too much. Second, a regularization is applied to the rank-one matrices to make them independent. This greatly reduces the redundancy of the rank-one matrices. We present experiments depicting the performance of the proposed method on a variety of datasets for various applications.


NeuroImage | 2013

A robust variational approach for simultaneous smoothing and estimation of DTI.

Meizhu Liu; Baba C. Vemuri; Rachid Deriche

Estimating diffusion tensors is an essential step in many applications, such as diffusion tensor image (DTI) registration, segmentation, and fiber tractography. Most of the methods proposed in the literature for this task are not simultaneously statistically robust and feature-preserving. In this paper, we propose a novel and robust variational framework for simultaneous smoothing and estimation of diffusion tensors from diffusion MRI. Our variational principle makes use of a recently introduced total Kullback-Leibler (tKL) divergence for DTI regularization. tKL is a statistically robust dissimilarity measure for diffusion tensors, and regularization using tKL ensures the symmetric positive definiteness of the tensors automatically. Further, the regularization is weighted by a non-local factor adapted from conventional non-local means filters. Finally, for the data fidelity, we use the nonlinear least-squares term derived from the Stejskal-Tanner model. We present experimental results depicting the positive performance of our method in comparison to competing methods on synthetic and real data examples.


Medical Image Computing and Computer Assisted Intervention | 2011

Robust large scale prone-supine polyp matching using local features: a metric learning approach

Meizhu Liu; Le Lu; Jinbo Bi; Vikas C. Raykar; Matthias Wolf; Marcos Salganicoff

Computer aided detection (CAD) systems have emerged as noninvasive and effective tools, using 3D CT Colonography (CTC) for early detection of colonic polyps. In this paper, we propose a robust and automatic polyp prone-supine view matching method to facilitate the regular CTC workflow, where radiologists need to manually match the CAD findings in prone and supine CT scans for validation. Unlike previous colon registration approaches based on global geometric information, this paper presents a feature selection and distance metric learning approach to build a pairwise matching function (where true pairs of polyp detections have smaller distances than false pairs), learned using local polyp classification features. Our process can thus seamlessly handle collapsed colon segments or other severe structural artifacts, which often exist in CTC, since only local features are used, whereas global-geometry-dependent methods may become invalid for collapsed segmentation cases. Our automatic approach is extensively evaluated using a large multi-site dataset of 195 patient cases for training and 223 cases for testing. No external examination of the correctness of colon segmentation topology is needed. The results show that we achieve significantly superior matching accuracy compared to previous methods, on CTC datasets at least an order of magnitude larger.


arXiv: Computer Vision and Pattern Recognition | 2013

Semantic Context Forests for Learning-Based Knee Cartilage Segmentation in 3D MR Images

Quan Wang; Dijia Wu; Le Lu; Meizhu Liu; Kim L. Boyer; Shaohua Kevin Zhou

The automatic segmentation of human knee cartilage from 3D MR images is a useful yet challenging task due to the thin sheet structure of the cartilage, with diffuse boundaries and inhomogeneous intensities. In this paper, we present an iterative multi-class learning method to segment the femoral, tibial, and patellar cartilage simultaneously, which effectively exploits the spatial contextual constraints between bone and cartilage, and also between different cartilages. First, based on the fact that the cartilage grows only in certain areas of the corresponding bone surface, we extract distance features not only to the surface of the bone but, more informatively, to densely registered anatomical landmarks on the bone surface. Second, we introduce a set of iterative discriminative classifiers in which, at each iteration, probability comparison features are constructed from the class confidence maps derived by previously learned classifiers. These features automatically embed the semantic context information between the different cartilages of interest. Validated on a total of 176 volumes from the Osteoarthritis Initiative (OAI) dataset, the proposed approach demonstrates high robustness and accuracy of segmentation in comparison with existing state-of-the-art MR cartilage segmentation methods.


International Symposium on Biomedical Imaging | 2012

Unsupervised automatic white matter fiber clustering using a Gaussian mixture model

Meizhu Liu; Baba C. Vemuri; Rachid Deriche

Fiber tracking from diffusion tensor images is an essential step in numerous clinical applications. There is a growing demand for an accurate and efficient framework to perform quantitative analysis of white matter fiber bundles. In this paper, we propose a robust framework for fiber clustering. This framework is composed of two parts: an accessible fiber representation and a statistically robust divergence measure for comparing fibers. Each fiber is represented using a Gaussian mixture model (GMM), a linear combination of Gaussian distributions. The dissimilarity between two fibers is measured using the total square loss function between their corresponding GMMs (which is statistically robust). Finally, we perform the hierarchical total Bregman soft clustering algorithm on the GMMs, yielding clustered fiber bundles. Further, our method is able to determine the number of clusters automatically. We present experimental results depicting the favorable performance of our method on both synthetic and real data examples.
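The square-loss comparison of two GMMs admits a closed form because products of Gaussians integrate analytically. Below is a 1-D sketch of the plain (untotalized) square loss between mixtures, with illustrative parameters; the total square loss used in the paper additionally rescales this quantity for robustness:

```python
import numpy as np

def gauss_product_integral(m1, v1, m2, v2):
    """Closed-form integral of the product of two 1-D Gaussian densities:
    int N(x; m1, v1) N(x; m2, v2) dx = N(m1; m2, v1 + v2)."""
    v = v1 + v2
    return np.exp(-0.5 * (m1 - m2) ** 2 / v) / np.sqrt(2.0 * np.pi * v)

def square_loss_gmm(w1, mu1, var1, w2, mu2, var2):
    """Squared L2 distance int (p - q)^2 dx between two 1-D Gaussian
    mixtures p and q, expanded into pairwise cross terms."""
    def cross(wa, ma, va, wb, mb, vb):
        return sum(x * y * gauss_product_integral(m, s, n, t)
                   for x, m, s in zip(wa, ma, va)
                   for y, n, t in zip(wb, mb, vb))
    return (cross(w1, mu1, var1, w1, mu1, var1)
            - 2.0 * cross(w1, mu1, var1, w2, mu2, var2)
            + cross(w2, mu2, var2, w2, mu2, var2))

w, mu, var = [0.5, 0.5], [0.0, 1.0], [0.1, 0.1]  # a two-component mixture
print(square_loss_gmm(w, mu, var, w, mu, var))        # identical: 0
print(square_loss_gmm(w, mu, var, w, [0.5, 1.5], var))  # shifted: > 0
```

Since every term reduces to a Gaussian evaluation, comparing two K-component fiber GMMs costs only O(K^2) closed-form evaluations, with no sampling of the fiber curves.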


Computer Vision and Pattern Recognition | 2011

Robust and efficient regularized boosting using total Bregman divergence

Meizhu Liu; Baba C. Vemuri

Boosting is a well-known machine learning technique used to improve the performance of weak learners and has been successfully applied to computer vision, medical image analysis, computational biology, and other fields. A critical step in boosting algorithms involves updating the data sample distribution; however, most existing boosting algorithms use updating mechanisms that lead to overfitting and instabilities during evolution of the distribution, which in turn result in classification inaccuracies. Regularized boosting has been proposed in the literature as a means to overcome these difficulties. In this paper, we propose a novel total Bregman divergence (tBD) regularized LPBoost, termed tBRLPBoost. tBD is a recently proposed divergence that is statistically robust, and we prove that tBRLPBoost requires only a constant number of iterations to learn a strong classifier and hence is computationally more efficient than other regularized boosting algorithms in the literature. Also, unlike other boosting methods that are effective only on a handful of datasets, tBRLPBoost works well on a variety of datasets. We present results of testing our algorithm on many public domain databases along with comparisons to several other state-of-the-art methods. Numerical results depict much improvement in efficiency and accuracy over competing methods.
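The kind of smoothed distribution update that regularized boosting performs can be sketched generically. This is an assumed illustrative rule (exponential reweighting blended toward the uniform distribution), not the specific tBRLPBoost update:

```python
import numpy as np

def regularized_update(w, margins, step=1.0, eta=0.2):
    """Generic sketch of a regularized boosting weight update (NOT the
    tBRLPBoost rule): exponentially upweight samples with negative
    margin (misclassified), then blend toward the uniform distribution
    with assumed strength `eta` to damp instabilities in the evolving
    sample distribution."""
    w = np.asarray(w, dtype=float) * np.exp(-step * np.asarray(margins))
    w /= w.sum()
    uniform = np.full_like(w, 1.0 / w.size)
    return (1.0 - eta) * w + eta * uniform  # still a valid distribution

# One misclassified sample (negative margin) gains weight, but the
# uniform blend keeps the distribution from collapsing onto it.
w = regularized_update([0.25] * 4, [1.0, 1.0, -1.0, 1.0])
print(w)
```

The unregularized limit eta = 0 recovers the standard exponential update, which is exactly the mechanism the abstract identifies as prone to overfitting.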

Collaboration


Dive into Meizhu Liu's collaborations.

Top Co-Authors

Le Lu (National Institutes of Health)

Shun-ichi Amari (RIKEN Brain Science Institute)