Publication


Featured research published by Tianmin Xu.


IEEE Transactions on Biomedical Engineering | 2012

Personalized Tooth Shape Estimation From Radiograph and Cast

Yuru Pei; Fuhao Shi; Hua Chen; Jia Wei; Hongbin Zha; Ruoping Jiang; Tianmin Xu

Three-dimensional geometric information of teeth is usually needed for pre- and postoperative diagnosis in orthodontic dentistry. Computed tomography (CT) can provide comprehensive 3-D tooth geometries; however, its use as a routine procedure in orthodontics remains debated because of the radiation dose involved. Moreover, CT is of no help when a dentist needs to extract 3-D structures from old archive files that contain only radiographs and casts, since the patient's teeth have changed in the meantime. In this paper, we propose a reconstruction framework for patient-specific teeth based on an integration of 2-D radiographs and digitized casts. The reconstruction follows a template-fitting framework: the shape and orientation of tooth templates are tuned in accordance with the patient's radiographs. Specifically, the tooth root morphology is controlled by 2-D contours in the radiographs. Using ray tracing and a contour-plane assumption, 2-D root contours in the radiographs are projected back into 3-D space and guide the tooth root deformations. In addition, the template's crown is deformed nonrigidly to fit the digitized casts that carry the patient's crown details. The system allows 3-D tooth reconstruction with patient-specific geometric details from just casts and 2-D radiographs.
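
The back-projection step can be illustrated with a short geometric sketch. This is a hypothetical toy example, not the paper's implementation: the source position, detector geometry, and contour plane are assumed. Each 2-D root contour point defines a ray from the X-ray source through the detector, and the 3-D point is recovered as the intersection of that ray with the assumed contour plane.

```python
import numpy as np

def back_project_contour(source, detector_points, plane_point, plane_normal):
    """Intersect source->detector rays with an assumed contour plane.

    source          : (3,) X-ray source position (assumed geometry).
    detector_points : (N, 3) contour points lifted onto the detector plane.
    plane_point     : (3,) a point on the assumed contour plane.
    plane_normal    : (3,) unit normal of the assumed contour plane.
    Returns (N, 3) 3-D contour points lying on the plane.
    """
    directions = detector_points - source                  # ray directions
    denom = directions @ plane_normal                      # ray/plane alignment
    t = ((plane_point - source) @ plane_normal) / denom    # ray parameters
    return source + t[:, None] * directions

# Toy usage with made-up geometry: source on the +x axis, detector at x = -100,
# contour plane at x = 0 (a mid-sagittal-like plane).
source = np.array([500.0, 0.0, 0.0])
detector_points = np.array([[-100.0, 10.0, 5.0],
                            [-100.0, 12.0, 8.0]])
roots_3d = back_project_contour(source, detector_points,
                                plane_point=np.array([0.0, 0.0, 0.0]),
                                plane_normal=np.array([1.0, 0.0, 0.0]))
print(roots_3d)
```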


IEEE Transactions on Biomedical Engineering | 2017

Superimposition of Cone-Beam Computed Tomography Images by Joint Embedding

Yuru Pei; Gengyu Ma; Gui Chen; Xiaoyun Zhang; Tianmin Xu; Hongbin Zha

Objective: The superimposition of cone-beam computed tomography (CBCT) images is an essential step for evaluating shape variations between pre- and postorthodontic operations in the presence of pose variations and bony growth. The aim of this paper is to present and discuss the latest accomplishments in voxel-based craniofacial CBCT superimposition along with structure discrimination. Methods: We propose a CBCT superimposition method based on joint embedding of subsets extracted from CBCT images. The subsets are defined at local extrema of the first-order difference of Gaussian-smoothed volume images to reduce the data involved in the computation. A rotation-invariant integral operator is proposed as the context-aware textural descriptor of the subsets. Subset correspondences are handled by joint embedding with matching identification in manifolds, which takes the structure of the subsets into account as a whole to avoid mapping ambiguities. Given the subset correspondences, the rigid transformations, as well as the superimposition of the volume images, are obtained. Our system allows users to specify the structure of interest based on a semisupervised label-propagation technique. Results: The performance of the proposed method is evaluated on ten pairs of pre- and postoperative CBCT images of adult patients and ten pairs from growing patients. The experiments demonstrate that craniofacial CBCT superimposition can be performed effectively and outperforms state-of-the-art methods. Conclusion: The integration of sparse subsets with context-aware spherical intensity integral descriptors and correspondence establishment by joint embedding enables reliable and efficient CBCT superimposition. Significance: The potential of the CBCT superimposition techniques discussed in this paper is highlighted and the related challenges are addressed.
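
Once subset correspondences are established, the rigid transformation can be recovered in closed form. The sketch below is not the paper's pipeline, only the standard Kabsch/Procrustes step that such a superimposition relies on, assuming matched 3-D point sets P and Q (e.g., corresponding subset centers in the two CBCT images).

```python
import numpy as np

def rigid_transform(P, Q):
    """Least-squares rigid transform (R, t) mapping points P onto Q.

    P, Q : (N, 3) matched point sets.
    Returns rotation R (3x3) and translation t (3,) with Q ~= P @ R.T + t.
    """
    cP, cQ = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cP).T @ (Q - cQ)                  # cross-covariance matrix
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])  # avoid reflection
    R = Vt.T @ D @ U.T
    t = cQ - cP @ R.T
    return R, t

# Toy check: rotate and translate a random point set, then recover the transform.
rng = np.random.default_rng(0)
P = rng.normal(size=(20, 3))
angle = np.deg2rad(30.0)
R_true = np.array([[np.cos(angle), -np.sin(angle), 0.0],
                   [np.sin(angle),  np.cos(angle), 0.0],
                   [0.0,            0.0,           1.0]])
Q = P @ R_true.T + np.array([1.0, -2.0, 0.5])
R, t = rigid_transform(P, Q)
print(np.allclose(P @ R.T + t, Q))  # True
```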


DLMIA/ML-CDS@MICCAI | 2017

Non-rigid Craniofacial 2D-3D Registration Using CNN-Based Regression

Yuru Pei; Yungeng Zhang; Haifang Qin; Gengyu Ma; Yuke Guo; Tianmin Xu; Hongbin Zha

2D-3D registration is a cornerstone for aligning inter-treatment X-ray images with available volumetric images. In this paper, we propose a CNN-regression-based non-rigid 2D-3D registration method. An iterative refinement scheme is introduced to update the reference volumetric image and the digitally reconstructed radiograph (DRR) for convergence to the target X-ray image. The CNN-based regressor represents the mapping between an image pair and the in-between deformation parameters. In particular, short residual connections inside the convolution blocks and long jump connections for multi-scale feature-map fusion facilitate information propagation when training the regressor. The proposed method has been applied to the 2D-3D registration of synthetic X-ray and clinically captured CBCT images. Experimental results demonstrate that the proposed method achieves accurate and efficient 2D-3D registration of craniofacial images.
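
The regressor's structure can be sketched in PyTorch. This is an illustrative toy, not the paper's architecture: the layer sizes, the number of deformation parameters, and the two-channel input formed by stacking the DRR and the target X-ray are all assumptions. It only shows the two ingredients named above, a short residual connection inside a convolution block and a long jump connection fusing multi-scale features before the parameter head.

```python
import torch
import torch.nn as nn

class ResBlock(nn.Module):
    """Convolution block with a short residual connection."""
    def __init__(self, channels):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, 3, padding=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.body(x))      # short residual connection

class PairRegressor(nn.Module):
    """Maps a (DRR, target X-ray) image pair to deformation parameters."""
    def __init__(self, n_params=32):
        super().__init__()
        self.stem = nn.Conv2d(2, 16, 3, padding=1)       # stacked image pair
        self.block1 = ResBlock(16)
        self.down = nn.Conv2d(16, 32, 3, stride=2, padding=1)
        self.block2 = ResBlock(32)
        self.pool = nn.AdaptiveAvgPool2d(1)
        # long jump connection: fuse features from both scales before the head
        self.head = nn.Linear(16 + 32, n_params)

    def forward(self, drr, xray):
        x = self.stem(torch.cat([drr, xray], dim=1))
        f1 = self.block1(x)
        f2 = self.block2(self.down(f1))
        fused = torch.cat([self.pool(f1).flatten(1),
                           self.pool(f2).flatten(1)], dim=1)
        return self.head(fused)                 # deformation parameters

# Toy forward pass on random 64x64 images.
model = PairRegressor(n_params=32)
params = model(torch.randn(1, 1, 64, 64), torch.randn(1, 1, 64, 64))
print(params.shape)  # torch.Size([1, 32])
```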


International Conference on Image Processing | 2016

Volumetric reconstruction of craniofacial structures from 2D lateral cephalograms by regression forest

Yuru Pei; Fanfan Dai; Tianmin Xu; Hongbin Zha; Gengyu Ma

3D reconstruction is an essential step in measuring craniofacial morphological changes from historical growth databases that contain only 2D cephalograms. In this paper, we propose a novel regression-forest-based method to estimate volumetric intensity images from a lateral cephalogram. The regression forest produces a prediction of the volumetric craniofacial structure as a mixture of Gaussians by a weighted aggregation of the distributions from the trees. The dense anatomical structure can be reconstructed without time-consuming digitally reconstructed radiographs (DRRs) in the online testing process. The experiments demonstrate that the proposed method can effectively reconstruct volumetric intensity images from lateral cephalograms.
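
The aggregation step can be made concrete with a minimal sketch. Assuming each tree reports a Gaussian (mean, variance) prediction for a single voxel intensity together with a tree weight, the snippet below computes the moments of the resulting weighted mixture; the numbers are invented and this is not the paper's implementation.

```python
import numpy as np

def mixture_moments(means, variances, weights):
    """Mean and variance of a weighted Gaussian mixture.

    means, variances, weights : (T,) per-tree Gaussian predictions and weights,
    e.g., one prediction of a voxel intensity from each tree in the forest.
    """
    m = np.asarray(means, float)
    v = np.asarray(variances, float)
    w = np.asarray(weights, float)
    w = w / w.sum()                                       # normalize tree weights
    mix_mean = (w * m).sum()
    mix_var = (w * (v + m ** 2)).sum() - mix_mean ** 2    # law of total variance
    return mix_mean, mix_var

# Toy example: three trees predict a voxel intensity with different confidence.
print(mixture_moments(means=[120.0, 130.0, 110.0],
                      variances=[4.0, 16.0, 8.0],
                      weights=[1.0, 0.5, 1.0]))
```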


Journal of Craniofacial Surgery | 2016

Correlation Between Cephalometric Measures and End-of-Treatment Facial Attractiveness.

Xiao-nan Yu; Ding Bai; Xue Feng; Yue-hua Liu; Wen-jing Chen; Song Li; Guang-li Han; Ruo-ping Jiang; Tianmin Xu

Sixty-nine experienced Chinese orthodontists evaluated the facial attractiveness of 108 Chinese patients from sets of photographs (frontal, lateral, and frontal smiling) taken at the end of orthodontic treatment. The 108 patients, comprising equal numbers of Class I, II, and III malocclusion cases, were randomly selected from 6 orthodontic treatment centers throughout China. Spearman rank-order correlation coefficients (rs) were computed to examine agreement in ranking between all judge pairs. Pearson correlation and multivariate regression were performed to examine the correlation between cephalometric measures and the end-of-treatment Photo Attractiveness Rank. 96.68% of judge pairs showed moderately correlated (+0.4 ⩽ rs < +0.7) subjective rankings. Cephalometric measures significantly correlated with the end-of-treatment Photo Attractiveness Rank included the interincisal angle (r = 0.330, P < 0.05), L1/MP° (r = 0.386, P < 0.05), L1-NB mm (r = 0.451, P < 0.01), L1/NB° (r = 0.374, P < 0.05), and profile angle (r = 0.353, P < 0.05) in Class I patients, with an explained variance of 32.8%, and the ANB angle (r = 0.432, P < 0.01), angle of convexity (r = 0.448, P < 0.01), profile angle (r = 0.488, P < 0.01), Li to E-line (r = 0.374, P < 0.05), Li to B-line (r = 0.543, P < 0.01), and Z angle (r = 0.543, P < 0.01) in Class II patients, with an explained variance of 43.3%. There was less association than expected between objective measurements on the lateral cephalograms and clinicians' rankings of facial attractiveness on clinical photographs in Chinese patients. Upright lower incisors were desirable for the facial attractiveness of Class I malocclusion, while the sagittal relationship and lip prominence influenced the esthetics of Class II malocclusion in the Chinese population.
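
The statistical machinery in this study is standard and can be sketched with SciPy. The arrays below are synthetic placeholders rather than the study's data: Spearman rank correlation is computed for every pair of judges, and Pearson correlation relates one (made-up) cephalometric measure to the mean attractiveness rank.

```python
import numpy as np
from itertools import combinations
from scipy.stats import spearmanr, pearsonr

rng = np.random.default_rng(1)
n_judges, n_patients = 5, 30          # toy sizes, not the study's 69 x 108
rankings = np.argsort(rng.normal(size=(n_judges, n_patients)), axis=1)

# Spearman rank-order correlation for every judge pair.
pair_rs = [spearmanr(rankings[i], rankings[j])[0]
           for i, j in combinations(range(n_judges), 2)]
moderate = np.mean([(0.4 <= r < 0.7) for r in pair_rs])
print(f"share of moderately correlated judge pairs: {moderate:.2%}")

# Pearson correlation of a synthetic cephalometric measure with the mean rank.
measure = rng.normal(loc=130.0, scale=8.0, size=n_patients)   # e.g., an angle
mean_rank = rankings.mean(axis=0)
r, p = pearsonr(measure, mean_rank)
print(f"r = {r:.3f}, p = {p:.3f}")
```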


Medical Image Computing and Computer Assisted Intervention | 2017

Mixed Metric Random Forest for Dense Correspondence of Cone-Beam Computed Tomography Images

Yuru Pei; Yunai Yi; Gengyu Ma; Yuke Guo; Gui Chen; Tianmin Xu; Hongbin Zha

Efficient dense correspondence and registration of CBCT images is an essential yet challenging task for inter-treatment evaluations of structural variations. In this paper, we propose an unsupervised mixed metric random forest (MMRF) for dense correspondence of CBCT images. The weak labeling resulting from a clustering forest is utilized to discriminate badly clustered supervoxels and the related classes, which are favored in the subsequent fine-tuning of the MMRF by penalized weighting in both the classification and clustering entropy estimation. An iterative scheme is introduced to reinforce the forest by minimizing inconsistent supervoxel labeling across CBCT images. In order to screen out inconsistent matching pairs and to regularize the dense correspondence defined by the forest-based metric, we evaluate the consistency of candidate matching pairs by means of isometric constraints. The proposed correspondence method has been tested on 150 clinically captured CBCT images and outperforms state-of-the-art methods in terms of matching accuracy while being computationally efficient.
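
The isometric-constraint screening can be illustrated with a generic sketch; the point coordinates and tolerance below are assumptions, not the paper's settings. The idea is that a correct set of matches between two rigidly related images should preserve pairwise distances, so a match that distorts many of them is likely spurious.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def isometric_consistency(P, Q, tol=2.0):
    """Per-match consistency score under an isometry assumption.

    P, Q : (N, 3) centers of candidate matched supervoxels in two CBCT images.
    tol  : distance discrepancy (in mm/voxels) still treated as consistent.
    Returns, for each match, the fraction of other matches whose pairwise
    distance it preserves; low scores flag likely spurious matches.
    """
    dP = squareform(pdist(P))
    dQ = squareform(pdist(Q))
    consistent = np.abs(dP - dQ) <= tol
    np.fill_diagonal(consistent, False)
    return consistent.sum(axis=1) / (len(P) - 1)

# Toy usage: one deliberately wrong match among otherwise translated points.
rng = np.random.default_rng(2)
P = rng.uniform(0, 100, size=(10, 3))
Q = P + np.array([5.0, -3.0, 1.0])        # pure translation preserves distances
Q[4] += np.array([30.0, 0.0, 0.0])        # corrupt one correspondence
print(isometric_consistency(P, Q).round(2))   # match 4 gets a low score
```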


International Workshop on Machine Learning in Medical Imaging | 2017

Finding Dense Supervoxel Correspondence of Cone-Beam Computed Tomography Images

Yuru Pei; Yunai Yi; Gengyu Ma; Yuke Guo; Gui Chen; Tianmin Xu; Hongbin Zha

Dense correspondence establishment of cone-beam computed tomography (CBCT) images is a crucial step for attribute transfer and morphological variation assessment in clinical orthodontics. However, registration by traditional large-scale nonlinear optimization is time-consuming for craniofacial CBCT images. The supervised random forest is known for its fast online performance, though limited training data impair its generalization capacity. In this paper, we propose an unsupervised random-forest-based approach for the supervoxel-wise correspondence of CBCT images. In particular, we present a theoretical complexity analysis with a data-dependent learning guarantee for the clustering hypotheses of the unsupervised random forest. A novel tree-pruning algorithm is proposed to refine the forest by removing locally trivial and inconsistent leaf nodes, where the learning bound serves as guidance for an optimal selection of tree structures. The proposed method has been tested on the label propagation of clinically captured CBCT images. Experiments demonstrate that the proposed method yields performance improvements over variants of both supervised and unsupervised random-forest-based methods.
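
To make the forest-based correspondence idea concrete, here is a minimal sketch using scikit-learn's RandomTreesEmbedding as a stand-in for the paper's unsupervised random forest; the supervoxel descriptors are synthetic and the complexity analysis and tree pruning are omitted. Two supervoxels are considered similar in proportion to how many trees route them to the same leaf, and each supervoxel of one image is matched to its most similar supervoxel in the other.

```python
import numpy as np
from sklearn.ensemble import RandomTreesEmbedding

rng = np.random.default_rng(3)
# Synthetic supervoxel descriptors (e.g., intensity/texture features).
features_a = rng.normal(size=(200, 16))                       # image A
features_b = features_a + 0.05 * rng.normal(size=(200, 16))   # perturbed image B

forest = RandomTreesEmbedding(n_estimators=50, max_depth=6, random_state=0)
forest.fit(np.vstack([features_a, features_b]))

# One-hot leaf codes; the dot product counts trees where two samples share a leaf.
leaves_a = forest.transform(features_a)          # sparse (200, n_leaves)
leaves_b = forest.transform(features_b)
affinity = (leaves_a @ leaves_b.T).toarray() / forest.n_estimators

# Forest-metric correspondence: each supervoxel in A picks its best match in B.
matches = affinity.argmax(axis=1)
accuracy = (matches == np.arange(200)).mean()
print(f"fraction matched to their perturbed counterpart: {accuracy:.2f}")
```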


International Workshop on Machine Learning in Medical Imaging | 2017

Multi-scale Volumetric ConvNet with Nested Residual Connections for Segmentation of Anterior Cranial Base.

Yuru Pei; Haifang Qin; Gengyu Ma; Yuke Guo; Gui Chen; Tianmin Xu; Hongbin Zha

The anterior cranial base (ACB) is known as a growth-stable structure. Automatic segmentation of the ACB is a prerequisite for superimposing orthodontic inter-treatment cone-beam computed tomography (CBCT) images. Automatic ACB segmentation is still a challenging task because of the ambiguous intensity distributions around fine-grained structures and the artifacts caused by the limited radiation dose. We propose a fully automatic segmentation of the ACB from CBCT images by a volumetric convolutional network with nested residual connections (NRN). The multi-scale feature fusion in the NRN not only promotes information flow, but also introduces supervision at multiple intermediate layers to speed up convergence. The multi-level shortcut connections augment the feature maps in the decompression pathway and enable end-to-end voxel-wise label prediction. The proposed NRN has been applied to ACB segmentation from clinically captured CBCT images. Quantitative assessment against practitioner-annotated ground truths demonstrates that the proposed method improves on state-of-the-art methods.
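
The nested-residual idea can be illustrated with a small PyTorch sketch; the channel counts, the single encoder/decoder level, and the two-class output are assumptions, not the NRN architecture. It shows a short residual connection inside each 3-D convolution block plus a long shortcut that carries encoder features into the decompression pathway before voxel-wise prediction.

```python
import torch
import torch.nn as nn

class ResBlock3d(nn.Module):
    """3-D convolution block with a short (inner) residual connection."""
    def __init__(self, channels):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv3d(channels, channels, 3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv3d(channels, channels, 3, padding=1),
        )
        self.act = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.act(x + self.conv(x))

class TinyNestedResNet(nn.Module):
    """One encoder/decoder level with a long shortcut into the decoder."""
    def __init__(self, in_ch=1, n_classes=2, ch=8):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(in_ch, ch, 3, padding=1), ResBlock3d(ch))
        self.down = nn.Conv3d(ch, 2 * ch, 3, stride=2, padding=1)
        self.mid = ResBlock3d(2 * ch)
        self.up = nn.ConvTranspose3d(2 * ch, ch, 2, stride=2)
        self.dec = ResBlock3d(2 * ch)                # upsampled + shortcut features
        self.head = nn.Conv3d(2 * ch, n_classes, 1)  # voxel-wise label prediction

    def forward(self, x):
        e = self.enc(x)
        m = self.mid(self.down(e))
        d = torch.cat([self.up(m), e], dim=1)        # long shortcut connection
        return self.head(self.dec(d))

model = TinyNestedResNet()
logits = model(torch.randn(1, 1, 32, 32, 32))
print(logits.shape)  # torch.Size([1, 2, 32, 32, 32])
```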


Medical Physics | 2016

3D exemplar‐based random walks for tooth segmentation from cone‐beam computed tomography images

Yuru Pei; Xingsheng Ai; Hongbin Zha; Tianmin Xu; Gengyu Ma

PURPOSE Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task given the comparatively low image quality caused by the limited radiation dose, as well as the structural ambiguities arising from intercuspation and nearby alveolar bone. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. METHODS The authors propose a 3D exemplar-based random walk method for tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. First, the pure random walk method is used to obtain an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors apply regularization by 3D exemplar registration, together with label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of each iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by the shape-based foreground dentine probability acquired from the exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume of interest (VOI) are updated by random walks with the soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, this iterative refinement process achieves a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints. RESULTS The proposed method was applied to tooth segmentation of twenty clinically captured CBCT images. Three metrics, the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth (incisors and canines), premolars, and molars. The segmentation of the anterior teeth achieved a DSC of up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. CONCLUSIONS The proposed technique enables efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative use of dental morphologies in maxillofacial and orthodontic treatments.
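
The initial label-propagation stage can be illustrated with scikit-image's random_walker, which implements the classical random-walk segmentation used as the starting point here. The synthetic volume and seed placement below are made up, and the exemplar-based soft constraints and SVM term of the full method are not included in this sketch.

```python
import numpy as np
from skimage.segmentation import random_walker

# Synthetic "tooth-like" volume: a bright sphere in a noisy background.
rng = np.random.default_rng(4)
shape = (40, 40, 40)
zz, yy, xx = np.mgrid[:shape[0], :shape[1], :shape[2]]
sphere = ((zz - 20) ** 2 + (yy - 20) ** 2 + (xx - 20) ** 2) < 10 ** 2
volume = sphere * 1.0 + 0.3 * rng.normal(size=shape)

# Seed labels: 1 = tooth (foreground), 2 = background, 0 = unlabeled.
labels = np.zeros(shape, dtype=np.uint8)
labels[20, 20, 20] = 1                  # a user click inside the tooth
labels[2, 2, 2] = 2                     # a user click in the background

# Random-walk label propagation over the volume of interest.
segmentation = random_walker(volume, labels, beta=50, mode='cg')
dice = 2 * np.logical_and(segmentation == 1, sphere).sum() / (
    (segmentation == 1).sum() + sphere.sum())
print(f"Dice vs. the synthetic ground truth: {dice:.3f}")
```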

Collaboration


Dive into Tianmin Xu's collaborations.

Top Co-Authors

Yuke Guo

Luoyang Institute of Science and Technology
