Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where J Lian is active.

Publication


Featured research published by J Lian.


IEEE Transactions on Medical Imaging | 2013

Sparse Patch-Based Label Propagation for Accurate Prostate Localization in CT Images

Shu Liao; Yaozong Gao; J Lian; Dinggang Shen

In this paper, we propose a new prostate computed tomography (CT) segmentation method for image-guided radiation therapy. The main contributions of our method lie in the following aspects. 1) Instead of using voxel intensity information alone, a patch-based representation in a discriminative feature space learned with logistic sparse LASSO is used as an anatomical signature to deal with the low-contrast problem in prostate CT images. 2) Based on the proposed patch-based signature, a new multi-atlas label fusion method formulated under a sparse representation framework is designed to segment the prostate in new treatment images, with guidance from the previously segmented images of the same patient. This method estimates the prostate likelihood of each voxel in the new treatment image from its nearby candidate voxels in the previously segmented images, based on the nonlocal-mean principle and a sparsity constraint. 3) A hierarchical labeling strategy is further designed to perform label fusion, where voxels with high confidence are labeled first to provide useful context information for labeling the remaining voxels in the same image. 4) An online update mechanism is finally adopted to progressively collect more patient-specific information from newly segmented treatment images of the same patient, for adaptive and more accurate segmentation. The proposed method has been extensively evaluated on a prostate CT image database of 24 patients, each with more than 10 treatment images, and compared with several state-of-the-art prostate CT segmentation algorithms using various evaluation metrics. Experimental results demonstrate that the proposed method consistently achieves higher segmentation accuracy than all other methods under comparison.
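
The core of step 2 can be illustrated compactly: a voxel's prostate likelihood is a label vote over atlas patches, weighted by a sparse reconstruction of the target patch. Below is a minimal sketch of that idea, assuming raw patch feature vectors and using scikit-learn's Lasso in place of the paper's logistic sparse LASSO; all names are illustrative.

```python
# Minimal sketch of sparse patch-based label propagation (hypothetical
# helper names; the paper's feature learning and hierarchy are omitted).
import numpy as np
from sklearn.linear_model import Lasso

def voxel_prostate_likelihood(target_patch, atlas_patches, atlas_labels, alpha=0.01):
    """Estimate the prostate likelihood of one voxel.

    target_patch  : (d,) feature vector of the patch around the voxel.
    atlas_patches : (n, d) patches from nearby candidate voxels in
                    previously segmented images of the same patient.
    atlas_labels  : (n,) binary prostate labels of those atlas voxels.
    """
    # Sparse, non-negative reconstruction of the target patch from the
    # atlas dictionary (the paper's sparsity constraint).
    lasso = Lasso(alpha=alpha, positive=True, max_iter=5000)
    lasso.fit(atlas_patches.T, target_patch)
    w = lasso.coef_
    if w.sum() == 0:
        return 0.0
    # Propagate labels with the sparse weights (nonlocal-mean principle).
    return float(w @ atlas_labels / w.sum())
```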


Medical Physics | 2013

Modeling the dosimetry of organ-at-risk in head and neck IMRT planning: An intertechnique and interinstitutional study

J Lian; L Yuan; Y. Ge; Bhishamjit S. Chera; David P. Yoo; Sha Chang; Fang-Fang Yin; Q. Jackie Wu

PURPOSE: To build a statistical model that quantitatively correlates the anatomical features of structures with the corresponding dose-volume histograms (DVH) of head-and-neck (HN) Tomotherapy (Tomo) plans; to study whether a model built on one intensity-modulated radiation therapy (IMRT) technique (such as conventional linac) can predict the anticipated organ-at-risk (OAR) DVHs of patients treated with a different IMRT technique (such as Tomo); and to study whether a model built on one institution's clinical experience can aid IMRT planning at another institution.

METHODS: Forty-four Tomotherapy IMRT plans of HN cases (Tomo-IMRT) from Institution A were included in the study. A different group of 53 HN fixed-gantry IMRT (FG-IMRT) plans was selected from Institution B. The analyzed OARs included the parotid, larynx, spinal cord, brainstem, and submandibular gland. Two major groups of anatomical features were considered: volumetric information and spatial information. The volumetric information includes the volumes of the target, the OAR, and their overlap. The spatial information of OARs relative to PTVs was represented by the distance-to-target histogram (DTH). Important anatomical and dosimetric features were extracted from the DTH and DVH by principal component analysis. Two regression models, one for Tomotherapy plans and one for FG-IMRT plans, were built independently. The accuracy of intra-treatment-modality model prediction was validated by leave-one-out cross-validation. The inter-technique and inter-institution validations were performed by using the FG-IMRT model to predict the OAR dosimetry of Tomo-IMRT plans. The dosimetry of OARs, under the same and different institutional preferences, was analyzed to examine the correlation between model prediction and planning protocol.

RESULTS: Significant patient anatomical factors contributing to OAR dose sparing in HN Tomotherapy plans were identified. For all OARs, the discrepancies in dose indices between the model-predicted values and the actual plan values were within 2.1%. Similar results were obtained from the modeling of FG-IMRT plans. The parotid gland was spared in a comparable fashion in the treatment planning of the two institutions. The model based on FG-IMRT plans predicted the median parotid dose of Tomotherapy plans well, with a mean error of 2.6%. Predictions from the FG-IMRT model suggested that the median dose of the larynx, the median dose of the brainstem, and D2 of the brainstem could be reduced by 10.5%, 12.8%, and 20.4%, respectively, in the Tomo-IMRT plans; this correlated with institutional differences in OAR constraint settings. Replanning of six Tomotherapy patients confirmed that the optimization improvements predicted by the FG-IMRT model were achievable.

CONCLUSIONS: The authors established a mathematical model correlating the anatomical features and dosimetric indices of OARs in HN Tomotherapy plans. The model can be used to set patient-specific OAR dose-sparing goals and for quality control of planning results. Because institutional clinical experience is incorporated into the model, a model from one institution can generate a reference plan for another institution, or for another IMRT technique.
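
The modeling pipeline (PCA on DTH and volume features, regression onto DVH principal-component scores) can be sketched as follows. This is a minimal illustration assuming plain linear regression and a handful of components; the paper's exact feature selection and regression form may differ, and all names and shapes are illustrative.

```python
# Minimal sketch of DVH prediction from anatomy: PCA compresses the
# distance-to-target histograms (DTH) and the DVHs, and a regressor maps
# anatomical scores plus volumes to dosimetric scores.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LinearRegression

def fit_dvh_model(dth, volumes, dvh, n_dth_pc=3, n_dvh_pc=3):
    """dth: (n_plans, n_bins) DTHs; volumes: (n_plans, 3) target/OAR/
    overlap volumes; dvh: (n_plans, n_bins) OAR DVHs of clinical plans."""
    dth_pca = PCA(n_components=n_dth_pc).fit(dth)
    dvh_pca = PCA(n_components=n_dvh_pc).fit(dvh)
    X = np.hstack([dth_pca.transform(dth), volumes])  # anatomical features
    y = dvh_pca.transform(dvh)                        # dosimetric scores
    reg = LinearRegression().fit(X, y)
    return dth_pca, dvh_pca, reg

def predict_dvh(model, dth_new, volumes_new):
    """Predict the anticipated OAR DVHs for new patients."""
    dth_pca, dvh_pca, reg = model
    X = np.hstack([dth_pca.transform(dth_new), volumes_new])
    return dvh_pca.inverse_transform(reg.predict(X))
```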


Medical Image Computing and Computer Assisted Intervention (MICCAI) | 2017

Medical Image Synthesis with Context-Aware Generative Adversarial Networks

Dong Nie; Roger Trullo; J Lian; Caroline Petitjean; Su Ruan; Qian Wang; Dinggang Shen

Computed tomography (CT) is critical for various clinical applications, e.g., radiation treatment planning and PET attenuation correction in MRI/PET scanners. However, CT acquisition exposes patients to radiation, which may cause side effects. Compared with CT, magnetic resonance imaging (MRI) is much safer and does not involve ionizing radiation. Therefore, researchers have recently been motivated to estimate a CT image from the corresponding MR image of the same subject for radiation planning. In this paper, we propose a data-driven approach to this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate CT given the MR image. To better model the nonlinear mapping from MRI to CT and produce more realistic images, we propose to use an adversarial training strategy to train the FCN. Moreover, we propose an image-gradient-difference-based loss function to alleviate the blurriness of the generated CT. We further apply the Auto-Context Model (ACM) to implement a context-aware generative adversarial network. Experimental results show that our method is accurate and robust for predicting CT images from MR images, and also outperforms three state-of-the-art methods under comparison.
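
The image-gradient-difference loss penalizes mismatch between the spatial gradients of the synthesized and real CT, which discourages overly smooth output. A minimal PyTorch sketch follows; the 5-D tensor layout and the loss weights in the comment are assumptions, not values from the paper.

```python
# Minimal sketch of an image-gradient-difference loss for sharpening
# synthesized CT (PyTorch; 3D volumes assumed as (B, C, D, H, W)).
import torch

def gradient_difference_loss(fake_ct, real_ct):
    loss = 0.0
    for dim in (2, 3, 4):  # finite differences along each spatial axis
        g_fake = torch.abs(torch.diff(fake_ct, dim=dim))
        g_real = torch.abs(torch.diff(real_ct, dim=dim))
        loss = loss + torch.mean((g_real - g_fake) ** 2)
    return loss

# Combined generator objective (illustrative weighting, not the paper's):
# g_loss = adv_loss + mse_loss(fake_ct, real_ct) \
#          + gradient_difference_loss(fake_ct, real_ct)
```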


IEEE Transactions on Medical Imaging | 2016

Estimating CT Image From MRI Data Using Structured Random Forest and Auto-Context Model

Tri Huynh; Yaozong Gao; Jiayin Kang; Li Wang; Pei Zhang; J Lian; Dinggang Shen

Computed tomography (CT) imaging is an essential tool in various clinical diagnoses and in radiotherapy treatment planning. Since CT image intensities are directly related to positron emission tomography (PET) attenuation coefficients, they are indispensable for attenuation correction (AC) of PET images. However, due to the relatively high radiation dose of a CT scan, it is advisable to limit the acquisition of CT images. In addition, in new combined PET and magnetic resonance (MR) imaging scanners, only MR images are available, and they are unfortunately not directly applicable to AC. These issues greatly motivate the development of methods for reliably estimating a CT image from the corresponding MR image of the same subject. In this paper, we propose a learning-based method to tackle this challenging problem. Specifically, we first partition a given MR image into a set of patches. Then, for each patch, we use a structured random forest to directly predict a CT patch as a structured output, with a new ensemble model used to ensure robust prediction. Image features are crafted to achieve multi-level sensitivity, with spatial information integrated through only rigid-body alignment, which helps avoid error-prone inter-subject deformable registration. Moreover, we use an auto-context model to iteratively refine the prediction. Finally, we combine all of the predicted CT patches to obtain the final prediction for the given MR image. We demonstrate the efficacy of our method on two datasets: human brain and prostate images. Experimental results show that our method can accurately predict CT images in various scenarios, even for images undergoing large shape variation, and also outperforms two state-of-the-art methods.
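
The patch-wise prediction plus auto-context refinement can be sketched compactly. Below, scikit-learn's multi-output RandomForestRegressor stands in for the paper's structured random forest, and the tentative CT prediction is simply appended as extra context features at each iteration; patch extraction and the ensemble model are omitted, and all names are illustrative.

```python
# Minimal sketch of patch-wise CT prediction with a random forest and an
# auto-context refinement loop (illustrative, not the paper's exact model).
import numpy as np
from sklearn.ensemble import RandomForestRegressor

def train_auto_context(mr_feats, ct_patches, n_iters=2):
    """mr_feats: (n, d) MR patch features; ct_patches: (n, p) flattened
    target CT patches. Returns one forest per auto-context iteration."""
    forests, context = [], np.zeros_like(ct_patches)
    for _ in range(n_iters):
        X = np.hstack([mr_feats, context])   # append context features
        f = RandomForestRegressor(n_estimators=50).fit(X, ct_patches)
        forests.append(f)
        context = f.predict(X)               # tentative CT becomes context
    return forests
```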


IEEE Transactions on Medical Imaging | 2012

Hierarchical Patch-Based Sparse Representation—A New Approach for Resolution Enhancement of 4D-CT Lung Data

Yu Zhang; Guorong Wu; Pew Thian Yap; Qianjin Feng; J Lian; Wufan Chen; Dinggang Shen

Four-dimensional computed tomography (4D-CT) plays an important role in lung cancer treatment because of its capability to provide a comprehensive characterization of respiratory motion for high-precision radiation therapy. However, due to the inherent high-dose exposure associated with CT, dense sampling along the superior–inferior direction is often not practical, resulting in an inter-slice thickness that is much greater than the in-plane voxel resolution. As a consequence, artifacts such as lung vessel discontinuity and partial volume effects are often observed in 4D-CT images, which may mislead dose administration in radiation therapy. In this paper, we present a novel patch-based technique for resolution enhancement of 4D-CT images along the superior–inferior direction. Our working premise is that anatomical information that is missing in one particular phase can be recovered from other phases. Based on this assumption, we employ a hierarchical patch-based sparse representation mechanism to enhance the superior–inferior resolution of 4D-CT by reconstructing additional intermediate CT slices. Specifically, for each spatial location on an intermediate CT slice that we intend to reconstruct, we first agglomerate a dictionary of patches from images of all other phases in the 4D-CT. We then employ a sparse combination of patches from this dictionary, with guidance from neighboring (upper and lower) slices, to reconstruct a series of patches, which we progressively refine in a hierarchical fashion to reconstruct the final intermediate slices with significantly enhanced anatomical details. Our method was extensively evaluated using a public dataset. In all experiments, our method outperforms conventional linear and cubic-spline interpolation methods in preserving image details and in suppressing misleading artifacts, indicating that it can potentially be applied to better image-guided radiation therapy of lung cancer in the future.
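
The per-location reconstruction step can be illustrated with a single sparse-coding call: a dictionary of same-location patches from the other phases, guided by the neighboring upper and lower slices. The sketch below uses orthogonal matching pursuit as the sparse solver and a plain average of the neighbors as guidance; the paper's hierarchical refinement is omitted and all names are illustrative.

```python
# Minimal sketch of reconstructing one intermediate-slice patch as a
# sparse combination of patches drawn from the other respiratory phases.
import numpy as np
from sklearn.linear_model import orthogonal_mp

def reconstruct_patch(dictionary, upper_patch, lower_patch, n_nonzero=5):
    """dictionary: (p, k) columns are candidate patches from other phases;
    upper/lower_patch: (p,) patches at the same location on neighboring
    slices, used as guidance for the missing intermediate patch."""
    guide = 0.5 * (upper_patch + lower_patch)  # initial guess from neighbors
    w = orthogonal_mp(dictionary, guide, n_nonzero_coefs=n_nonzero)
    return dictionary @ w                      # reconstructed patch
```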


IEEE Transactions on Medical Imaging | 2016

Accurate Segmentation of CT Male Pelvic Organs via Regression-Based Deformable Models and Multi-Task Random Forests

Yaozong Gao; Yeqin Shao; J Lian; Andrew Z. Wang; Ronald C. Chen; Dinggang Shen

Segmenting male pelvic organs from CT images is a prerequisite for prostate cancer radiotherapy, and the efficacy of radiation treatment depends strongly on segmentation accuracy. However, accurate segmentation of the male pelvic organs is challenging due to the low tissue contrast of CT images, as well as large variations in the shape and appearance of the pelvic organs. Among existing segmentation methods, deformable models are the most popular, as a shape prior can easily be incorporated to regularize the segmentation. Nonetheless, their sensitivity to initialization often limits their performance, especially for segmenting organs with large shape variations. In this paper, we propose a novel approach to guide deformable models, making them robust against arbitrary initializations. Specifically, we learn a displacement regressor that predicts the 3D displacement from any image voxel to the target organ boundary based on local patch appearance. This regressor provides a non-local external force for each vertex of the deformable model, thus overcoming the initialization problem of traditional deformable models. To learn a reliable displacement regressor, two strategies are proposed: 1) a multi-task random forest learns the displacement regressor jointly with an organ classifier; 2) an auto-context model iteratively enforces structural information during voxel-wise prediction. Extensive experiments on 313 planning CT scans of 313 patients show that our method achieves better results than alternative classification- or regression-based methods, as well as several other existing methods for CT pelvic organ segmentation.
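
The regression-guided deformation loop can be sketched as follows: at each iteration, a trained forest predicts, from the patch around each model vertex, the 3D displacement to the organ boundary, and the vertices take a partial step along it. The feature extractor is an assumed helper, and the shape regularization of the deformable model is omitted.

```python
# Minimal sketch of the regression-guided deformation step; the predicted
# displacements act as a non-local external force on the model vertices.
import numpy as np

def deform(vertices, extract_patch_features, regressor, step=0.5, n_iters=10):
    """vertices: (m, 3) deformable-model vertices; extract_patch_features:
    assumed helper mapping (m, 3) positions to (m, d) appearance features;
    regressor: forest trained to output (m, 3) voxel-to-boundary
    displacements (e.g., a fitted multi-output RandomForestRegressor)."""
    for _ in range(n_iters):
        feats = np.asarray(extract_patch_features(vertices))
        disp = regressor.predict(feats)   # non-local external force
        vertices = vertices + step * disp # partial step; shape prior omitted
    return vertices
```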


Medical Image Computing and Computer Assisted Intervention (MICCAI) | 2011

Estimating the 4d respiratory lung motion by spatiotemporal registration and building super-resolution image

Guorong Wu; Qian Wang; J Lian; Dinggang Shen

The estimation of respiratory lung motion in 4D-CT is increasingly important for radiation therapy of lung cancer. Modern CT scanners can only scan a limited region of the body at each couch position, so motion artifacts due to the patient's free breathing during the scan are often observable in 4D-CT, which can undermine correspondence detection during registration. Another challenge of motion estimation in 4D-CT is keeping the lung motion consistent over time. Current approaches fail to meet this requirement because they usually register each phase image to a pre-defined phase image independently, without considering the temporal coherence in 4D-CT. To overcome these limitations, we present a unified approach that estimates the respiratory lung motion with two iterative steps. First, we propose a new spatiotemporal registration algorithm to align all phase images of the 4D-CT (in low resolution) onto a high-resolution group-mean image in the common space. Temporal consistency is preserved by introducing the concept of temporal fibers to delineate the spatiotemporal behavior of lung motion along the respiratory phase. Second, the idea of super-resolution is used to build the group-mean image with more detail, by integrating the highly redundant image information contained in the multiple respiratory phases. Accordingly, by establishing the correspondence of each phase image w.r.t. the high-resolution group-mean image, the difficulty of detecting correspondences between original phase images with missing structures is greatly alleviated, and more accurate registration results can be achieved. The performance of our proposed 4D motion estimation method has been extensively evaluated on a public lung dataset. In all experiments, our method achieves more accurate and consistent lung motion estimates than other state-of-the-art approaches.
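
The temporal-fiber constraint can be illustrated with a toy version: the trajectory of a single anatomical point across phases is forced to be smooth by fitting a low-order curve over the phase axis. This is only an illustration of the consistency idea, assuming a simple polynomial fit rather than the paper's actual fiber model.

```python
# Toy sketch of temporal-fiber smoothing: one point's per-phase positions
# are replaced by a smooth trajectory over the respiratory phase.
import numpy as np

def smooth_fiber(positions, degree=3):
    """positions: (n_phases, 3) estimated location of one anatomical point
    at each phase. Returns a temporally consistent trajectory."""
    phases = np.linspace(0.0, 1.0, len(positions))
    smoothed = np.empty_like(positions, dtype=float)
    for axis in range(3):  # fit each coordinate as a polynomial of phase
        coeffs = np.polyfit(phases, positions[:, axis], deg=degree)
        smoothed[:, axis] = np.polyval(coeffs, phases)
    return smoothed
```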


Medical Physics | 2012

Improving image-guided radiation therapy of lung cancer by reconstructing 4D-CT from a single free-breathing 3D-CT on the treatment day

Guorong Wu; J Lian; Dinggang Shen

PURPOSE: One of the major challenges of lung cancer radiation therapy is reducing the treatment-field margin while managing the geometric uncertainty from respiratory motion. To this end, 4D-CT imaging has been widely used for treatment planning, providing the full range of respiratory motion for both the tumor and normal structures. However, due to the considerable radiation dose and limited resources and time, typically only a free-breathing 3D-CT image is acquired on the treatment day for image-guided patient setup, which is often determined by fusing the treatment-day and planning-day free-breathing 3D-CT images. Since individual slices of two free-breathing 3D-CTs may be acquired at different phases, the two images often look different, which makes the image registration very challenging. This uncertainty in pretreatment patient setup requires a generous radiation-field margin to cover the tumor sufficiently during treatment. To solve this problem, our main idea is to reconstruct a 4D-CT (with the full range of tumor motion) from the single free-breathing 3D-CT acquired on the treatment day.

METHODS: We first build a super-resolution 4D-CT model from the low-resolution 4D-CT of the planning day, with temporal correspondences established across respiratory phases. Next, we propose a 4D-to-3D image registration method to warp the 4D-CT model onto the treatment-day 3D-CT while also accommodating the new motion detected in that image, so that the moving tumor can be localized more precisely on the treatment day. Specifically, since the free-breathing 3D-CT is a mixed-phase image in which different slices are often acquired at different respiratory phases, we first determine the optimal phase for each local image patch in the free-breathing 3D-CT, obtaining a sequence of partial 3D-CT images (with incomplete image data at each phase) for the treatment day. We then reconstruct a new 4D-CT for the treatment day by registering the planning-day 4D-CT (with complete information) to this sequence of partial 3D-CT images, under the guidance of the 4D-CT model built on the planning day.

RESULTS: We first evaluated the accuracy of our 4D-CT model on a set of lung 4D-CT images with manually labeled landmarks, where the maximum error in respiratory motion estimation was reduced from 6.08 mm with diffeomorphic Demons to 3.67 mm with our method. Next, we evaluated our 4D-CT reconstruction algorithm on both simulated and real free-breathing images. The 4D-CT reconstructed by our algorithm shows clinically acceptable accuracy and could guide a more accurate patient setup than the conventional method.

CONCLUSIONS: We have proposed a novel two-step method to reconstruct a new 4D-CT from a single free-breathing 3D-CT acquired on the treatment day. The promising reconstruction results suggest the possible application of this new algorithm in image-guided radiation therapy of lung cancer.
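
The patch-wise phase-assignment step can be sketched as a nearest-phase search: each patch of the free-breathing CT is assigned to the planning-day phase whose corresponding patch is most similar. The sketch below assumes patches have already been extracted at matching, rigidly aligned locations and uses sum-of-squared-differences as the similarity; both are simplifications.

```python
# Minimal sketch of assigning each free-breathing-CT patch to its
# best-matching respiratory phase of the planning-day 4D-CT.
import numpy as np

def best_phase_per_patch(fb_patches, planning_4dct_patches):
    """fb_patches: (n_patches, p) patches of the free-breathing CT;
    planning_4dct_patches: (n_phases, n_patches, p) patches at the same
    (rigidly aligned) locations in each planning phase."""
    ssd = ((planning_4dct_patches - fb_patches[None]) ** 2).sum(axis=2)
    return ssd.argmin(axis=0)  # (n_patches,) optimal phase indices
```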


Computer Vision and Pattern Recognition (CVPR) | 2012

Reconstruction of super-resolution lung 4D-CT using patch-based sparse representation

Yu Zhang; Guorong Wu; Pew Thian Yap; Qianjin Feng; J Lian; Wufan Chen; Dinggang Shen

4D-CT plays an important role in lung cancer treatment. However, due to the inherent high-dose exposure associated with CT, dense sampling along the superior-inferior direction is often not practical. As a result, artifacts such as lung vessel discontinuity and partial volume effects are typical in 4D-CT images and might mislead dose administration in radiation therapy. In this paper, we present a novel patch-based technique for super-resolution enhancement of 4D-CT images along the superior-inferior direction. Our working premise is that the anatomical information missing at one particular phase can be recovered from the other phases. Based on this assumption, we employ a patch-based mechanism for guided reconstruction of super-resolution axial slices. Specifically, to reconstruct each targeted super-resolution slice for a CT image at a particular phase, we agglomerate a dictionary of patches from images of all other phases in the 4D-CT sequence. We then compute a sparse combination of the patches in this dictionary to reconstruct details of a super-resolution patch, under a constraint of similarity to the corresponding patches in the neighboring slices. By iterating this procedure over all possible patch locations, a super-resolution 4D-CT image sequence with enhanced anatomical details is eventually reconstructed. Our method was extensively evaluated using a public dataset. In all experiments, our method outperforms conventional linear and cubic-spline interpolation methods in terms of preserving image details and suppressing misleading artifacts.
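
Since the reconstruction iterates over overlapping patch locations, the final slice is obtained by accumulating and averaging the overlapping reconstructed patches. A minimal sketch of this standard patch-aggregation step follows; the layout and names are illustrative.

```python
# Minimal sketch of assembling a super-resolution slice from overlapping
# reconstructed patches by accumulation and averaging.
import numpy as np

def assemble_slice(shape, patches, top_left, patch_size):
    """patches: list of (patch_size, patch_size) reconstructed patches;
    top_left: list of their (row, col) positions on the target slice."""
    acc = np.zeros(shape)
    cnt = np.zeros(shape)
    for patch, (r, c) in zip(patches, top_left):
        acc[r:r + patch_size, c:c + patch_size] += patch
        cnt[r:r + patch_size, c:c + patch_size] += 1
    return acc / np.maximum(cnt, 1)  # average where patches overlap
```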


Information Processing in Medical Imaging (IPMI) | 2011

Reconstruction of 4D-CT from a single free-breathing 3D-CT by spatial-temporal image registration

Guorong Wu; Qian Wang; J Lian; Dinggang Shen

In radiation therapy of lung cancer, a free-breathing 3D-CT image is usually acquired on the treatment day for image-guided patient setup, by registering it with the free-breathing 3D-CT image acquired on the planning day. In this way, the optimal dose plan computed on the planning day can be transferred to the treatment day for cancer radiotherapy. However, patient setup based on simple registration of the free-breathing 3D-CT images of the planning and treatment days may mislead the radiotherapy, since a free-breathing 3D-CT is actually a mixed-phase image, with different slices often acquired at different respiratory phases. Moreover, the 4D-CT that is generally acquired on the planning day to improve dose planning is often ignored when guiding patient setup on the treatment day. To overcome these limitations, we present a novel two-step method to reconstruct a 4D-CT from the single free-breathing 3D-CT of the treatment day, by utilizing the 4D-CT model built on the planning day. Specifically, in the first step, we propose a new spatial-temporal registration algorithm to align all phase images of the 4D-CT acquired on the planning day, building a 4D-CT model with temporal correspondences established among all respiratory phases. In the second step, we first determine the optimal phase for each slice of the free-breathing (mixed-phase) 3D-CT of the treatment day by comparing it with the 4D-CT of the planning day, thus obtaining a sequence of partial 3D-CT images for the treatment day, each with only incomplete image information in certain slices. We then reconstruct a complete 4D-CT for the treatment day by warping the 4D-CT of the planning day (with complete information) to this sequence of partial 3D-CT images, under the guidance of the 4D-CT model built on the planning day. We have comprehensively evaluated our 4D-CT model-building algorithm on a public lung image database, achieving better registration accuracy than other state-of-the-art methods. We have also validated our proposed 4D-CT reconstruction algorithm on simulated free-breathing data, obtaining very promising 4D-CT reconstruction results.
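
The per-slice phase determination can be sketched as a similarity search over phases: each slice of the mixed-phase free-breathing CT is assigned to the planning-day phase whose corresponding slice matches it best. The sketch assumes the planning 4D-CT has already been aligned to the treatment-day image and uses Pearson correlation as the similarity; both are simplifying assumptions.

```python
# Minimal sketch of per-slice phase selection for a mixed-phase
# free-breathing CT against an aligned planning-day 4D-CT.
import numpy as np

def slice_phases(fb_ct, planning_4dct):
    """fb_ct: (n_slices, h, w) free-breathing CT; planning_4dct:
    (n_phases, n_slices, h, w) aligned planning 4D-CT."""
    phases = []
    for z in range(fb_ct.shape[0]):
        s = fb_ct[z].ravel()
        corr = [np.corrcoef(s, planning_4dct[p, z].ravel())[0, 1]
                for p in range(planning_4dct.shape[0])]
        phases.append(int(np.argmax(corr)))
    return phases  # one optimal phase index per slice
```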

Collaboration


Dive into J Lian's collaborations.

Top Co-Authors

S Chang, University of North Carolina at Chapel Hill
Dinggang Shen, University of North Carolina at Chapel Hill
B.S. Chera, University of North Carolina at Chapel Hill
T Cullip, University of North Carolina at Chapel Hill
X Tang, University of North Carolina at Chapel Hill
K Deschesne, University of North Carolina at Chapel Hill
Lawrence B. Marks, University of North Carolina at Chapel Hill
Guorong Wu, University of North Carolina at Chapel Hill
L Potter, University of North Carolina at Chapel Hill
Mark Foskey, University of North Carolina at Chapel Hill