Publications


Featured research published by Zhen Tang.


International Journal of Oral and Maxillofacial Surgery | 2015

Algorithm for planning a double-jaw orthognathic surgery using a computer-aided surgical simulation (CASS) protocol. Part 1: planning sequence

James J. Xia; Jaime Gateno; John F. Teichgraeber; Peng Yuan; Ken Chung Chen; Jianfu Li; Xiaoyan Zhang; Zhen Tang; D.M. Alfi

The success of craniomaxillofacial (CMF) surgery depends not only on the surgical techniques, but also on an accurate surgical plan. The adoption of computer-aided surgical simulation (CASS) has created a paradigm shift in surgical planning. However, planning an orthognathic operation using CASS differs fundamentally from planning using traditional methods. With this in mind, the Surgical Planning Laboratory of Houston Methodist Research Institute has developed a CASS protocol designed specifically for orthognathic surgery. The purpose of this article is to present an algorithm using virtual tools for planning a double-jaw orthognathic operation. This paper will serve as an operation manual for surgeons wanting to incorporate CASS into their clinical practice.


Annals of Biomedical Engineering | 2016

An eFace-Template Method for Efficiently Generating Patient-Specific Anatomically-Detailed Facial Soft Tissue FE Models for Craniomaxillofacial Surgery Simulation

Xiaoyan Zhang; Zhen Tang; Michael A. K. Liebschner; Daeseung Kim; Shunyao Shen; Chien-Ming Chang; Peng Yuan; Guangming Zhang; Jaime Gateno; Xiaobo Zhou; Shao-Xiang Zhang; James J. Xia

Accurate surgical planning and prediction of craniomaxillofacial surgery outcome require simulation of soft-tissue changes following osteotomy. This can only be accomplished on an anatomically detailed facial soft tissue model. However, current anatomically detailed facial soft tissue model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. This paper presents a novel semi-automatic approach, named the eFace-template method, for efficiently and accurately generating a patient-specific facial soft tissue model. Our approach is based on the volumetric deformation of an anatomically detailed template fitted to the shape of each individual patient. The adaptation of the template is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by a thin-plate spline interpolation. This methodology was validated using 4 visible human datasets (regarded as gold standards) and 30 patient models. The results indicated that our approach can accurately preserve the internal anatomical correspondence (i.e., muscles) for finite element modeling. Additionally, our hybrid approach achieved an optimal balance among patient shape fitting accuracy, anatomical correspondence, and mesh quality. Furthermore, statistical analysis showed that our hybrid approach was superior to two previously published methods: mesh-matching and landmark-based transformation. Ultimately, our eFace-template method can be used directly and effectively in clinical settings to simulate facial soft tissue changes.
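The template adaptation described above finishes with a thin-plate spline (TPS) interpolation that propagates sparse landmark correspondences into a smooth warp. As a rough illustration of the TPS step (not the authors' implementation; the landmark coordinates here are made up), SciPy's `RBFInterpolator` supports a thin-plate-spline kernel:

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Matched landmark pairs: template (source) and patient (target) positions.
# Coordinates are hypothetical; a real case would use digitized landmarks.
src = np.array([[0.0, 0.0, 0.0],
                [1.0, 0.0, 0.0],
                [0.0, 1.0, 0.0],
                [0.0, 0.0, 1.0],
                [1.0, 1.0, 1.0]])
dst = src + np.array([0.1, 0.0, 0.0])  # a pure shift, for a checkable example

# Fit a TPS map from source landmarks to target landmarks.
tps = RBFInterpolator(src, dst, kernel="thin_plate_spline")

# Warp arbitrary template vertices toward the patient shape.
verts = np.array([[0.5, 0.5, 0.5],
                  [0.2, 0.8, 0.1]])
warped = tps(verts)
```

Because the landmark displacement in this toy example is a pure translation, the TPS reproduces it exactly; with real, nonuniformly displaced landmarks the warp bends smoothly between the constraint points.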


IEEE Transactions on Biomedical Engineering | 2016

Automatic Craniomaxillofacial Landmark Digitization via Segmentation-Guided Partially-Joint Regression Forest Model and Multiscale Statistical Features

Jun Zhang; Yaozong Gao; Li Wang; Zhen Tang; James J. Xia; Dinggang Shen

Objective: The goal of this paper is to automatically digitize craniomaxillofacial (CMF) landmarks efficiently and accurately from cone-beam computed tomography (CBCT) images, by addressing the challenge caused by large morphological variations across patients and image artifacts of CBCT images. Methods: We propose a segmentation-guided partially-joint regression forest (S-PRF) model to automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize each landmark by aggregating evidence from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, CBCT image segmentation is utilized to remove uninformative voxels caused by morphological variations across patients. Third, a partially-joint model is further proposed to separately localize landmarks based on the coherence of landmark positions to improve digitization reliability. In addition, we propose a fast vector quantization method to extract high-level multiscale statistical features to describe a voxel's appearance, which has low dimensionality, high efficiency, and is also invariant to the local inhomogeneity caused by artifacts. Results: Mean digitization errors for 15 landmarks, in comparison to the ground truth, are all less than 2 mm. Conclusion: Our model has addressed the challenges of both interpatient morphological variations and imaging artifacts. Experiments on a CBCT dataset show that our approach achieves clinically acceptable accuracy for landmark digitization. Significance: Our automatic landmark digitization method can be used clinically to reduce labor cost and also improve digitization consistency.


Medical Image Computing and Computer-Assisted Intervention | 2015

Automatic Craniomaxillofacial Landmark Digitization via Segmentation-Guided Partially-Joint Regression Forest Model

Jun Zhang; Yaozong Gao; Li Wang; Zhen Tang; James J. Xia; Dinggang Shen

Craniomaxillofacial (CMF) deformities involve congenital and acquired deformities of the head and face. Landmark digitization is a critical step in quantifying CMF deformities. In current clinical practice, CMF landmarks have to be manually digitized on 3D models, which is time-consuming. To date, there is no clinically acceptable method that allows automatic landmark digitization, due to morphological variations among different patients and artifacts of cone-beam computed tomography (CBCT) images. To address these challenges, we propose a segmentation-guided partially-joint regression forest model that can automatically digitize CMF landmarks. In this model, a regression voting strategy is first adopted to localize landmarks by aggregating evidence from context locations, thus potentially relieving the problem caused by image artifacts near the landmark. Second, segmentation is also utilized to resolve inconsistent landmark appearances that are caused by morphological variations among different patients, especially on the teeth. Third, a partially-joint model is proposed to separately localize landmarks based on the coherence of landmark positions to improve digitization reliability. The experimental results show that the accuracy of automatically digitized landmarks using our approach is clinically acceptable.


Medical Image Computing and Computer-Assisted Intervention | 2017

Joint craniomaxillofacial bone segmentation and landmark digitization by context-guided fully convolutional networks

Jun Zhang; Mingxia Liu; Li Wang; Si Chen; Peng Yuan; Jianfu Li; Steve Guofang Shen; Zhen Tang; Ken Chung Chen; James J. Xia; Dinggang Shen

Generating accurate 3D models from cone-beam computed tomography (CBCT) images is an important step in developing treatment plans for patients with craniomaxillofacial (CMF) deformities. This process often involves bone segmentation and landmark digitization. Since anatomical landmarks generally lie on the boundaries of segmented bone regions, the tasks of bone segmentation and landmark digitization could be highly correlated. However, most existing methods simply treat them as two standalone tasks, without considering their inherent association. In addition, these methods usually ignore the spatial context information (i.e., displacements from voxels to landmarks) in CBCT images. To this end, we propose a context-guided fully convolutional network (FCN) for joint bone segmentation and landmark digitization. Specifically, we first train an FCN to learn displacement maps that capture the spatial context information in CBCT images. Using the learned displacement maps as guidance information, we further develop a multi-task FCN to jointly perform bone segmentation and landmark digitization. Our method has been evaluated on 107 subjects from two centers, and the experimental results show that it is superior to state-of-the-art methods in both bone segmentation and landmark digitization.


Biomechanics and Modeling in Mechanobiology | 2018

An eFTD-VP framework for efficiently generating patient-specific anatomically detailed facial soft tissue FE mesh for craniomaxillofacial surgery simulation

Xiaoyan Zhang; Daeseung Kim; Shunyao Shen; Peng Yuan; Siting Liu; Zhen Tang; Guangming Zhang; Xiaobo Zhou; Jaime Gateno; Michael A. K. Liebschner; James J. Xia

Accurate surgical planning and prediction of craniomaxillofacial surgery outcome require simulation of soft tissue changes following osteotomy. This can only be achieved by using an anatomically detailed facial soft tissue model. The current state of the art in model generation is not appropriate for clinical applications due to the time-intensive nature of manual segmentation and volumetric mesh generation. Conventional patient-specific finite element (FE) mesh generation methods deform a template FE mesh to match the shape of a patient based on registration. However, these methods commonly produce element distortion. Additionally, the mesh density for patients depends on that of the template model and cannot be adjusted to conduct mesh density sensitivity analysis. In this study, we propose a new framework for patient-specific facial soft tissue FE mesh generation. The goal of the developed method is to efficiently generate a high-quality patient-specific hexahedral FE mesh with adjustable mesh density while preserving the accuracy of anatomical structure correspondence. Our FE mesh is generated by eFace template deformation followed by volumetric parametrization. First, the patient-specific anatomically detailed facial soft tissue model (including skin, mucosa, and muscles) is generated by deforming an eFace template model. The adaptation of the eFace template model is achieved by using a hybrid landmark-based morphing and dense surface fitting approach followed by a thin-plate spline interpolation. Then, a high-quality hexahedral mesh is constructed by using volumetric parameterization. The user can control the resolution of the hexahedral mesh to best reflect clinicians' needs. Our approach was validated using 30 patient models and 4 visible human datasets. The generated patient-specific FE meshes showed high surface matching accuracy, element quality, and internal structure matching accuracy. They can be directly and effectively used for clinical simulation of facial soft tissue change.


Medical Image Computing and Computer-Assisted Intervention | 2015

Automated Three-Piece Digital Dental Articulation

Jianfu Li; Flávio Wellington da Silva Ferraz; Shunyao Shen; Yi Fang Lo; Xiaoyan Zhang; Peng Yuan; Zhen Tang; Ken Chung Chen; Jaime Gateno; Xiaobo Zhou; James J. Xia

In craniomaxillofacial (CMF) surgery, a critical step is to reestablish dental occlusion. Digitally establishing new dental occlusion is extremely difficult, especially when the maxilla is segmentalized into three pieces, a common procedure in CMF surgery. In this paper, we present a novel midline-guided occlusal optimization (MGO) approach to automatically and efficiently reestablish dental occlusion for three-piece maxillary surgery. Our MGO approach consists of two main steps. The anterior segment of the maxilla is first aligned to the intact mandible using our ergodic midline-match algorithm. The right and left posterior segments are then aligned to the mandible in sequence using an improved iterative surface-based minimum distance mapping algorithm. Our method has been validated using 15 sets of digital dental models. The results showed that our algorithm-generated three-piece articulation is more efficient and effective than the current standard-of-care method, demonstrating our approach's significant clinical impact and technical contributions.


Medical Image Computing and Computer-Assisted Intervention | 2015

Automated Segmentation of CBCT Image with Prior-Guided Sequential Random Forest

Li Wang; Yaozong Gao; Feng Shi; Gang Li; Ken Chung Chen; Zhen Tang; James J. Xia; Dinggang Shen

A major limitation of CBCT scans is widespread image artifacts such as noise, beam hardening, and inhomogeneity, which cause great difficulty for accurate segmentation of bony structures from soft tissues, as well as for separation of the mandible from the maxilla. In this paper, we present a novel fully automated method for CBCT image segmentation. Specifically, we first employ majority voting to estimate the initial probability maps of the mandible and maxilla. We then extract both appearance features from the CBCT and context features from the initial probability maps to train the first-layer classifier. Based on this trained classifier, the probability maps are updated and employed to train the next layer of the classifier. By iteratively training subsequent classifiers with the updated segmentation probability maps, we derive a sequence of classifiers. Experimental results on 30 CBCTs show that the proposed method achieves state-of-the-art performance.


Medical Physics | 2015

Automated segmentation of dental CBCT image with prior-guided sequential random forests

Li Wang; Yaozong Gao; Feng Shi; Gang Li; Ken Chung Chen; Zhen Tang; James J. Xia; Dinggang Shen


Medical Physics | 2015

Estimating patient‐specific and anatomically correct reference model for craniomaxillofacial deformity via sparse representation

Li Wang; Yi Ren; Yaozong Gao; Zhen Tang; Ken Chung Chen; Jianfu Li; Steve Guofang Shen; Jin Yan; Philip K. M. Lee; Ben Chow; James J. Xia; Dinggang Shen
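The prior-guided CBCT segmentation work above relies on an iterative, layered classification scheme: each layer's classifier consumes appearance features plus the probability maps produced by the previous layer (an auto-context pattern). A minimal sketch of that loop, using synthetic per-voxel features and scikit-learn random forests as stand-ins (illustrative only, not the authors' implementation):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Synthetic stand-ins for per-voxel appearance features and binary bone labels.
n_voxels, n_feats = 2000, 8
X_app = rng.normal(size=(n_voxels, n_feats))
y = (X_app[:, 0] + 0.5 * X_app[:, 1] > 0).astype(int)

# Layer 0: an uninformative prior probability map, standing in for the
# majority-voting initialization used in the paper.
prob = np.full((n_voxels, 1), 0.5)

classifiers = []
for layer in range(3):
    # Context features = the current probability map, appended to appearance.
    X = np.hstack([X_app, prob])
    clf = RandomForestClassifier(n_estimators=50, random_state=layer)
    clf.fit(X, y)
    prob = clf.predict_proba(X)[:, [1]]  # updated map feeds the next layer
    classifiers.append(clf)

# Training-set agreement of the final layer (a toy sanity check, not an
# evaluation on held-out data).
acc = float(((prob[:, 0] > 0.5).astype(int) == y).mean())
```

Each pass refines the probability map that the next classifier sees; the actual method operates on 3D CBCT voxels with image-derived appearance features rather than this toy feature matrix.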

Collaboration


Dive into Zhen Tang's collaborations. Top co-authors:

James J. Xia (Houston Methodist Hospital)
Ken Chung Chen (Houston Methodist Hospital)
Dinggang Shen (University of North Carolina at Chapel Hill)
Li Wang (University of North Carolina at Chapel Hill)
Yaozong Gao (University of North Carolina at Chapel Hill)
Jaime Gateno (Houston Methodist Hospital)
Jianfu Li (Houston Methodist Hospital)
Peng Yuan (Houston Methodist Hospital)
Shunyao Shen (Shanghai Jiao Tong University)