Publications


Featured research published by Gengyu Ma.


IEEE Transactions on Biomedical Engineering | 2017

Superimposition of Cone-Beam Computed Tomography Images by Joint Embedding

Yuru Pei; Gengyu Ma; Gui Chen; Xiaoyun Zhang; Tianmin Xu; Hongbin Zha

Objective: The superimposition of cone-beam computed tomography (CBCT) images is an essential step in evaluating shape variations between pre- and post-orthodontic operations, given pose variations and bony growth. The aim of this paper is to present and discuss the latest accomplishments in voxel-based craniofacial CBCT superimposition along with structure discrimination. Methods: We propose a CBCT superimposition method based on joint embedding of subsets extracted from CBCT images. The subset is defined at local extrema of the first-order difference of Gaussian-smoothed volume images to reduce the data involved in the computation. A rotation-invariant integral operator is proposed as a context-aware textural descriptor of subsets. We handle subset correspondences by joint embedding with matching identifications in manifolds, which takes into account the structure of subsets as a whole to avoid mapping ambiguities. Given subset correspondences, the rigid transformations, as well as the superimposition of volume images, are obtained. Our system allows users to specify structures of interest based on a semisupervised label-propagation technique. Results: The performance of the proposed method is evaluated on ten pairs of pre- and postoperative CBCT images of adult patients and ten pairs from growing patients. The experiments demonstrate that craniofacial CBCT superimposition can be performed effectively and outperforms the state of the art. Conclusion: The integration of sparse subsets with context-aware spherical intensity integral descriptors and correspondence establishment by joint embedding enables reliable and efficient CBCT superimposition. Significance: The potential of the CBCT superimposition techniques discussed in this paper is highlighted and related challenges are addressed.
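Once subset correspondences are established, the rigid transform between the two volumes follows in closed form. As an illustrative sketch only (the standard Kabsch/Procrustes solution for matched 3D points, not the paper's implementation):

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src points onto dst.

    src, dst: (N, 3) arrays of matched 3D points.
    Kabsch algorithm: center both sets, take the SVD of the
    cross-covariance, and correct for a possible reflection.
    """
    src_c = src - src.mean(axis=0)
    dst_c = dst - dst.mean(axis=0)
    U, _, Vt = np.linalg.svd(src_c.T @ dst_c)
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    D = np.diag([1.0, 1.0, d])           # avoid an improper rotation
    R = Vt.T @ D @ U.T
    t = dst.mean(axis=0) - R @ src.mean(axis=0)
    return R, t
```

The same closed-form step applies to any correspondence set, which is why reliable subset matching is the hard part of the pipeline.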


DLMIA/ML-CDS@MICCAI | 2017

Non-rigid Craniofacial 2D-3D Registration Using CNN-Based Regression

Yuru Pei; Yungeng Zhang; Haifang Qin; Gengyu Ma; Yuke Guo; Tianmin Xu; Hongbin Zha

2D-3D registration is a cornerstone for aligning inter-treatment X-ray images with available volumetric images. In this paper, we propose a CNN-regression-based non-rigid 2D-3D registration method. An iterative refinement scheme is introduced to update the reference volumetric image and the digitally reconstructed radiograph (DRR) for convergence to the target X-ray image. The CNN-based regressor represents the mapping between an image pair and the in-between deformation parameters. In particular, short residual connections in the convolution blocks and long jump connections for multi-scale feature-map fusion facilitate information propagation when training the regressor. The proposed method has been applied to 2D-3D registration of synthetic X-ray and clinically captured CBCT images. Experimental results demonstrate that the proposed method achieves accurate and efficient 2D-3D registration of craniofacial images.


International Conference on Image Processing | 2016

Volumetric reconstruction of craniofacial structures from 2D lateral cephalograms by regression forest

Yuru Pei; Fanfan Dai; Tianmin Xu; Hongbin Zha; Gengyu Ma

3D reconstruction is an essential step in measuring craniofacial morphological changes from historical growth databases that contain only 2D cephalograms. In this paper, we propose a novel regression-forest-based method to estimate volumetric intensity images from a lateral cephalogram. The regression forest produces a prediction of the volumetric craniofacial structure as a mixture of Gaussians by weighted aggregation of the distributions from individual trees. The dense anatomical structure can be reconstructed without time-consuming digitally reconstructed radiograph (DRR) generation in the online testing process. The experiments demonstrate that the proposed method can reconstruct volumetric intensity images from lateral cephalograms effectively.
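Aggregating per-tree Gaussian predictions into a single forest estimate can be sketched generically as follows. This is an illustrative mixture-of-Gaussians combination under assumed per-tree weights, not the paper's code:

```python
import numpy as np

def aggregate_trees(means, variances, weights):
    """Combine per-tree Gaussian predictions N(mean_i, var_i) with the given
    mixture weights into the mean and variance of the resulting mixture.

    means, variances, weights: 1-D arrays of equal length (one entry per tree).
    Returns (mixture_mean, mixture_variance).
    """
    w = np.asarray(weights, dtype=float)
    w = w / w.sum()                      # normalize mixture weights
    mu = np.asarray(means, dtype=float)
    var = np.asarray(variances, dtype=float)
    mix_mean = np.sum(w * mu)
    # law of total variance: E[var_i] + Var[mean_i]
    mix_var = np.sum(w * (var + mu**2)) - mix_mean**2
    return mix_mean, mix_var
```

The mixture variance here also serves as a per-voxel confidence estimate: trees that disagree produce a wider mixture.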


Medical Image Computing and Computer Assisted Intervention | 2017

Mixed Metric Random Forest for Dense Correspondence of Cone-Beam Computed Tomography Images

Yuru Pei; Yunai Yi; Gengyu Ma; Yuke Guo; Gui Chen; Tianmin Xu; Hongbin Zha

Efficient dense correspondence and registration of CBCT images is an essential yet challenging task for inter-treatment evaluation of structural variations. In this paper, we propose an unsupervised mixed metric random forest (MMRF) for dense correspondence of CBCT images. The weak labeling resulting from a clustering forest is utilized to discriminate badly clustered supervoxels and related classes, which are favored in the subsequent fine-tuning of the MMRF by penalized weighting in both classification and clustering entropy estimation. An iterative scheme is introduced for forest reinforcement to minimize inconsistent supervoxel labeling across CBCT images. In order to screen out inconsistent matching pairs and to regularize the dense correspondence defined by the forest-based metric, we evaluate the consistency of candidate matching pairs by virtue of isometric constraints. The proposed correspondence method has been tested on 150 clinically captured CBCT images and outperforms state-of-the-art methods in terms of matching accuracy while being computationally efficient.
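An isometric constraint amounts to requiring that matched pairs preserve pairwise distances. A minimal, generic sketch of such a screening step (the tolerance `tol` and the median criterion are assumptions for illustration, not the paper's implementation):

```python
import numpy as np

def isometric_filter(src_pts, dst_pts, tol=0.1):
    """Keep candidate matches (src_pts[i] <-> dst_pts[i]) whose pairwise
    distances to the other matches are preserved within relative tolerance.

    src_pts, dst_pts: (N, 3) arrays of matched point coordinates.
    Returns a boolean mask over the N candidate matches.
    """
    d_src = np.linalg.norm(src_pts[:, None] - src_pts[None, :], axis=-1)
    d_dst = np.linalg.norm(dst_pts[:, None] - dst_pts[None, :], axis=-1)
    # relative distortion of each pairwise distance (guard zero distances)
    distortion = np.abs(d_src - d_dst) / np.where(d_src > 0, d_src, 1.0)
    np.fill_diagonal(distortion, 0.0)
    # a match is consistent if its median distortion w.r.t. the others is small
    return np.median(distortion, axis=1) <= tol
```

A single mismatched point distorts its distances to every other match, so the median statistic isolates the outlier without discarding the consistent majority.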


International Workshop on Machine Learning in Medical Imaging | 2017

Finding Dense Supervoxel Correspondence of Cone-Beam Computed Tomography Images

Yuru Pei; Yunai Yi; Gengyu Ma; Yuke Guo; Gui Chen; Tianmin Xu; Hongbin Zha

Dense correspondence establishment for cone-beam computed tomography (CBCT) images is a crucial step for attribute transfer and morphological variation assessment in clinical orthodontics. However, registration by traditional large-scale nonlinear optimization is time-consuming for craniofacial CBCT images. The supervised random forest is known for its fast online performance, though limited training data impair its generalization capacity. In this paper, we propose an unsupervised random-forest-based approach for supervoxel-wise correspondence of CBCT images. In particular, we present a theoretical complexity analysis with a data-dependent learning guarantee for the clustering hypotheses of the unsupervised random forest. A novel tree-pruning algorithm is proposed to refine the forest by removing locally trivial and inconsistent leaf nodes, where the learning bound serves as guidance for an optimal selection of tree structures. The proposed method has been tested on label propagation of clinically captured CBCT images. Experiments demonstrate that the proposed method yields performance improvements over variants of both supervised and unsupervised random-forest-based methods.


International Workshop on Machine Learning in Medical Imaging | 2017

Multi-scale Volumetric ConvNet with Nested Residual Connections for Segmentation of Anterior Cranial Base.

Yuru Pei; Haifang Qin; Gengyu Ma; Yuke Guo; Gui Chen; Tianmin Xu; Hongbin Zha

The anterior cranial base (ACB) is known as a growth-stable structure. Automatic segmentation of the ACB is a prerequisite for superimposing orthodontic inter-treatment cone-beam computed tomography (CBCT) images. Automatic ACB segmentation is still a challenging task because of ambiguous intensity distributions around fine-grained structures and artifacts due to the limited radiation dose. We propose a fully automatic segmentation of the ACB from CBCT images by a volumetric convolutional network with nested residual connections (NRN). The multi-scale feature fusion in the NRN not only promotes information flow, but also introduces supervision at multiple intermediate layers to speed up convergence. The multi-level shortcut connections augment the feature maps in the decompression pathway and the end-to-end voxel-wise label prediction. The proposed NRN has been applied to ACB segmentation from clinically captured CBCT images. Quantitative assessment against practitioner-annotated ground truths demonstrates that the proposed method improves on state-of-the-art methods.


Medical Physics | 2016

3D exemplar‐based random walks for tooth segmentation from cone‐beam computed tomography images

Yuru Pei; Xingsheng Ai; Hongbin Zha; Tianmin Xu; Gengyu Ma

PURPOSE: Tooth segmentation is an essential step in acquiring patient-specific dental geometries from cone-beam computed tomography (CBCT) images. Tooth segmentation from CBCT images is still a challenging task, considering the comparatively low image quality caused by the limited radiation dose, as well as structural ambiguities from intercuspation and nearby alveolar bones. The goal of this paper is to present and discuss the latest accomplishments in semisupervised tooth segmentation with adaptive 3D shape constraints. METHODS: The authors propose a 3D exemplar-based random walk method for tooth segmentation from CBCT images. The proposed method integrates semisupervised label propagation and regularization by 3D exemplar registration. First, the pure random walk method produces an initial segmentation of the teeth, which tends to be erroneous because of the structural ambiguity of CBCT images. Then, as an iterative refinement, the authors conduct regularization by 3D exemplar registration, as well as label propagation by random walks with soft constraints, to improve the tooth segmentation. In the first stage of each iteration, 3D exemplars with well-defined topologies are adapted to fit the tooth contours obtained from the random-walk-based segmentation. The soft constraints on voxel labeling are defined by the shape-based foreground dentine probability acquired by exemplar registration, as well as the appearance-based probability from a support vector machine (SVM) classifier. In the second stage, the labels of the volume of interest (VOI) are updated by random walks with the soft constraints. The two stages are optimized iteratively. Instead of one-shot label propagation in the VOI, this iterative refinement process achieves a reliable tooth segmentation by virtue of exemplar-based random walks with adaptive soft constraints.
RESULTS: The proposed method was applied to tooth segmentation of twenty clinically captured CBCT images. Three metrics, the Dice similarity coefficient (DSC), the Jaccard similarity coefficient (JSC), and the mean surface deviation (MSD), were used to quantitatively analyze the segmentation of anterior teeth (incisors and canines), premolars, and molars. The segmentation of the anterior teeth achieved a DSC up to 98%, a JSC of 97%, and an MSD of 0.11 mm compared with manual segmentation. For the premolars, the average values of DSC, JSC, and MSD were 98%, 96%, and 0.12 mm, respectively. The proposed method yielded a DSC of 95%, a JSC of 89%, and an MSD of 0.26 mm for molars. Aside from the interactive definition of label priors by the user, automatic tooth segmentation can be achieved in an average of 1.18 min. CONCLUSIONS: The proposed technique enables efficient and reliable tooth segmentation from CBCT images. This study makes it clinically practical to segment teeth from CBCT images, thus facilitating pre- and interoperative uses of dental morphologies in maxillofacial and orthodontic treatments.
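The DSC and JSC overlap metrics reported above are standard. As a generic illustration (not the authors' evaluation code), they can be computed for two binary masks as:

```python
import numpy as np

def overlap_metrics(pred, truth):
    """Dice (DSC) and Jaccard (JSC) similarity coefficients for binary masks.

    pred, truth: boolean arrays of the same shape.
    DSC = 2|A ∩ B| / (|A| + |B|);  JSC = |A ∩ B| / |A ∪ B|.
    """
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    inter = np.logical_and(pred, truth).sum()
    union = np.logical_or(pred, truth).sum()
    dsc = 2.0 * inter / (pred.sum() + truth.sum())
    jsc = inter / union
    return dsc, jsc
```

Note that DSC = 2·JSC / (1 + JSC), so the two metrics rank segmentations identically; DSC is simply more forgiving of small boundary errors.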


MLMI@MICCAI | 2018

Masseter Segmentation from Computed Tomography Using Feature-Enhanced Nested Residual Neural Network

Haifang Qin; Yuru Pei; Yuke Guo; Gengyu Ma; Tianmin Xu; Hongbin Zha

Masticatory muscles are of significant aesthetic and functional importance in craniofacial development. Automatic segmentation is a crucial step for shape and functional analysis of muscles. In this paper, we propose an automatic masseter segmentation framework using a deep neural network with coupled feature-learning and label-prediction pathways. The volumetric features are learned using an unsupervised convolutional auto-encoder and integrated with multi-level features in the label-prediction pathway to augment features for segmentation. The label-prediction pathway is built upon a nested residual network, which facilitates information propagation and fast convergence. The proposed method realizes voxel-wise label inference of masseter muscles from clinically captured computed tomography (CT) images. In the experiments, the proposed method outperforms compared state-of-the-art methods, achieving a mean Dice similarity coefficient (DSC) of 93 ± 1.2% for the segmentation of masseter muscles.


MLMI@MICCAI | 2018

Temporal Consistent 2D-3D Registration of Lateral Cephalograms and Cone-Beam Computed Tomography Images

Yungeng Zhang; Yuru Pei; Haifang Qin; Yuke Guo; Gengyu Ma; Tianmin Xu; Hongbin Zha

Craniofacial growth and development play an important role in treatment planning for orthopedics and orthodontics. Traditional growth studies rely mainly on longitudinal growth datasets of 2D lateral cephalometric radiographs (LCR). In this paper, we propose a temporally consistent 2D-3D registration technique enabling 3D growth measurements of craniofacial structures. We initialize the independent 2D-3D registration with a convolutional neural network (CNN)-based regression, which produces the dense displacement field of the cone-beam computed tomography (CBCT) image given the LCR. The temporal constraints of growth-stable structures are then used to refine the 2D-3D registration. Instead of traditional independent 2D-3D registration, we jointly solve for the nonrigid displacement fields of a series of input LCRs captured at different ages. A hierarchical pyramid of digitally reconstructed radiographs (DRR) is introduced to speed up convergence. The proposed method has been applied to a growth dataset in clinical orthodontics. The resulting 2D-3D registration is consistent with both the input LCRs in terms of structural contours and the 3D volumetric images in terms of growth-stable structures.

Collaboration


Dive into Gengyu Ma's collaborations.

Top Co-Authors


Yuke Guo

Luoyang Institute of Science and Technology
