
Publication


Featured research published by Dong Nie.


Medical Image Computing and Computer Assisted Intervention | 2017

Medical Image Synthesis with Context-Aware Generative Adversarial Networks

Dong Nie; Roger Trullo; J Lian; Caroline Petitjean; Su Ruan; Qian Wang; Dinggang Shen

Computed tomography (CT) is critical for various clinical applications, e.g., radiation treatment planning and PET attenuation correction in MRI/PET scanners. However, CT acquisition exposes patients to radiation, which may cause side effects. Compared to CT, magnetic resonance imaging (MRI) is much safer and does not involve radiation. Therefore, researchers have recently been strongly motivated to estimate a CT image from the corresponding MR image of the same subject for radiation treatment planning. In this paper, we propose a data-driven approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a CT image given the MR image. To better model the nonlinear mapping from MRI to CT and produce more realistic images, we propose to use an adversarial training strategy to train the FCN. Moreover, we propose an image-gradient-difference-based loss function to alleviate the blurriness of the generated CT. We further apply the Auto-Context Model (ACM) to implement a context-aware generative adversarial network. Experimental results show that our method is accurate and robust in predicting CT images from MR images, and that it outperforms three state-of-the-art methods under comparison.
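As a rough illustration of the image-gradient-difference idea described above, the sketch below (plain NumPy, with a made-up function name and a toy formulation that may differ from the paper's exact loss) penalizes mismatch between the spatial gradients of the synthesized and real CT so that edges stay sharp.

```python
import numpy as np

def gradient_difference_loss(pred, target):
    """Toy gradient-difference term: compare absolute finite differences
    of the predicted and real images along every axis. The paper's exact
    formulation may differ; this only illustrates the idea."""
    loss = 0.0
    for axis in range(pred.ndim):
        grad_pred = np.abs(np.diff(pred, axis=axis))
        grad_true = np.abs(np.diff(target, axis=axis))
        loss += np.mean((grad_pred - grad_true) ** 2)
    return loss

# Toy usage on random 3D patches standing in for synthesized and real CT.
pred = np.random.rand(16, 16, 16)
target = np.random.rand(16, 16, 16)
print(gradient_difference_loss(pred, target))
```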


International Symposium on Biomedical Imaging | 2016

Fully convolutional networks for multi-modality isointense infant brain image segmentation

Dong Nie; Li Wang; Yaozong Gao; Dinggang Shen

The segmentation of infant brain tissue images into white matter (WM), gray matter (GM), and cerebrospinal fluid (CSF) plays an important role in studying early brain development. In the isointense phase (approximately 6–8 months of age), WM and GM exhibit similar intensity levels in both T1 and T2 MR images, resulting in extremely low tissue contrast and thus making tissue segmentation very challenging. Existing methods for tissue segmentation in this isointense phase usually employ patch-based sparse labeling on a single T1, T2, or fractional anisotropy (FA) modality, or on their simply stacked combinations, without fully exploring the multi-modality information. To address this challenge, in this paper we propose to use fully convolutional networks (FCNs) for the segmentation of isointense-phase brain MR images. Instead of simply stacking the three modalities, we train one network for each modality image and then fuse their high-layer features for the final segmentation. Specifically, we run a separate convolution-pooling stream for the T1, T2, and FA images, and then combine the streams in a high layer to generate the segmentation maps as the outputs. We compared the performance of our approach with that of commonly used segmentation methods on a set of manually segmented isointense-phase brain images. Results showed that our proposed model significantly outperformed previous methods in terms of accuracy. In addition, our results also indicated a better way of integrating multi-modality images, which leads to the performance improvement.
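A minimal sketch of the fusion idea, assuming PyTorch and a deliberately tiny 2D network (the authors' model is larger and its exact layer configuration is not reproduced here): one small convolutional stream per modality, with the high-level feature maps concatenated before a shared segmentation head.

```python
import torch
import torch.nn as nn

class ThreeStreamFCN(nn.Module):
    """Toy three-stream FCN: one stream per modality (T1, T2, FA),
    fused at a high layer. Not the authors' exact architecture."""
    def __init__(self, n_classes=4):  # WM, GM, CSF + background (assumed)
        super().__init__()
        def stream():
            return nn.Sequential(
                nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
                nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            )
        self.t1, self.t2, self.fa = stream(), stream(), stream()
        self.head = nn.Conv2d(96, n_classes, 1)  # fuse 3 x 32 channels

    def forward(self, t1, t2, fa):
        feats = torch.cat([self.t1(t1), self.t2(t2), self.fa(fa)], dim=1)
        return self.head(feats)

# Toy forward pass on random 2D patches, one per modality.
net = ThreeStreamFCN()
x = [torch.randn(1, 1, 64, 64) for _ in range(3)]
print(net(*x).shape)  # torch.Size([1, 4, 64, 64])
```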


Medical Image Computing and Computer Assisted Intervention | 2016

3D Deep Learning for Multi-modal Imaging-Guided Survival Time Prediction of Brain Tumor Patients

Dong Nie; Han Zhang; Ehsan Adeli; Luyan Liu; Dinggang Shen

High-grade glioma is the most aggressive and severe brain tumor, leading to the death of almost 50% of patients within 1-2 years. Thus, accurate prognosis for glioma patients would provide essential guidelines for their treatment planning. Conventional survival prediction generally utilizes clinical information and limited handcrafted features from magnetic resonance images (MRI), which is often time-consuming, laborious, and subjective. In this paper, we propose using deep learning frameworks to automatically extract features from multi-modal preoperative brain images (i.e., T1 MRI, fMRI, and DTI) of high-grade glioma patients. Specifically, we adopt 3D convolutional neural networks (CNNs) and also propose a new network architecture for using multi-channel data and learning supervised features. Along with the pivotal clinical features, we finally train a support vector machine to predict whether the patient has a long or short overall survival (OS) time. Experimental results demonstrate that our methods can achieve an accuracy as high as 89.9%. We also find that the learned features from fMRI and DTI play more important roles in accurately predicting the OS time, which provides valuable insights into functional neuro-oncological applications.
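The final classification stage could look roughly like the following sketch, using scikit-learn on random placeholder data; the feature dimensions, clinical variables, and train/test split are all assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.svm import SVC

# Hypothetical sketch of the last stage: deep features extracted by 3D
# CNNs from T1 MRI / fMRI / DTI are concatenated with clinical features
# and fed to an SVM that predicts long vs. short overall survival (OS).
rng = np.random.default_rng(0)
deep_feats = rng.normal(size=(60, 128))   # assumed 128-D learned features
clinical = rng.normal(size=(60, 5))       # assumed 5 clinical variables
X = np.hstack([deep_feats, clinical])
y = rng.integers(0, 2, size=60)           # 1 = long OS, 0 = short OS (toy labels)

clf = SVC(kernel="linear").fit(X[:40], y[:40])
print("toy accuracy:", clf.score(X[40:], y[40:]))
```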


Medical Image Computing and Computer Assisted Intervention | 2016

Estimating CT image from MRI data using 3D fully convolutional networks

Dong Nie; Xiaohuan Cao; Yaozong Gao; Li Wang; Dinggang Shen

Computed tomography (CT) is critical for various clinical applications, e.g., radiotherapy treatment planning and PET attenuation correction. However, CT imaging exposes patients to radiation, which may cause side effects. Compared to CT, magnetic resonance imaging (MRI) is much safer and does not involve any radiation. Therefore, researchers have recently been strongly motivated to estimate a CT image from the corresponding MR image of the same subject for radiotherapy planning. In this paper, we propose a 3D deep learning based method to address this challenging problem. Specifically, a 3D fully convolutional neural network (FCN) is adopted to learn an end-to-end nonlinear mapping from the MR image to the CT image. Compared to the conventional convolutional neural network (CNN), the FCN generates structured output and can better preserve the neighborhood information in the predicted CT image. We have validated our method on a real pelvic CT/MRI dataset. Experimental results show that our method is accurate and robust in predicting the CT image from the MR image, and also outperforms three state-of-the-art methods under comparison. In addition, the parameters, such as network depth and activation function, are extensively studied to give insight into deep learning based regression tasks in our application.
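A minimal sketch of a 3D fully convolutional regressor in PyTorch, kept deliberately tiny (the paper's network is deeper and its hyperparameters are not reproduced here): 3D convolutions map an MR patch to a CT patch of the same spatial size, which is what lets an FCN produce structured output and preserve neighborhood information.

```python
import torch
import torch.nn as nn

class Tiny3DFCN(nn.Module):
    """Toy 3D FCN regressor: same-size output, voxel-wise CT estimate."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
            nn.Conv3d(16, 1, 3, padding=1),
        )

    def forward(self, mr):
        return self.net(mr)

# Toy forward pass: an MR patch in, a CT patch of the same size out.
net = Tiny3DFCN()
mr_patch = torch.randn(1, 1, 32, 32, 32)
ct_pred = net(mr_patch)
print(ct_pred.shape)  # torch.Size([1, 1, 32, 32, 32])
```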


Human Brain Mapping | 2017

Multi-task diagnosis for autism spectrum disorders using multi-modality features: A multi-center study

Jun Wang; Qian Wang; Jialin Peng; Dong Nie; Feng Zhao; Minjeong Kim; Han Zhang; Chong Yaw Wee; Shitong Wang; Dinggang Shen

Autism spectrum disorder (ASD) is a neurodevelopmental disorder characterized by impairment of social interaction, language, behavior, and cognitive functions. To date, many imaging-based methods for ASD diagnosis have been developed. For example, one may extract abundant features from multi-modality images and then derive a discriminant function to map the selected features toward the disease label. Many recent works, however, are limited to a single imaging center. To this end, we propose a novel multi-modality multi-center classification (M3CC) method for ASD diagnosis. We treat the classification for each imaging center as one task. By introducing task-task and modality-modality regularizations, we solve the classification for all imaging centers simultaneously. Meanwhile, the optimal feature selection and the modeling of the discriminant functions can be conducted jointly for highly accurate diagnosis. We also present an efficient iterative optimization solution to our formulated problem and further investigate its convergence. Our comprehensive experiments on the ABIDE database show that our proposed method can significantly improve the performance of ASD diagnosis, compared to the existing methods. Hum Brain Mapp 38:3081–3097, 2017.
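A hedged NumPy sketch of the task-task coupling idea only (not the authors' actual M3CC objective, which also includes modality-modality regularization and joint feature selection): each imaging center gets its own linear classifier, and an extra penalty pulls the centers' weight vectors toward their mean so the tasks are solved jointly rather than independently.

```python
import numpy as np

# Toy data: three imaging centers, each with its own (X, y), labels in {-1, +1}.
rng = np.random.default_rng(0)
centers = [(rng.normal(size=(30, 10)), rng.integers(0, 2, 30) * 2 - 1)
           for _ in range(3)]
W = np.zeros((3, 10))          # one linear classifier per center
lr, lam = 0.05, 0.1            # assumed step size and coupling weight
for _ in range(200):
    for t, (X, y) in enumerate(centers):
        margin = y * (X @ W[t])
        grad = -(X * (y * (margin < 1))[:, None]).mean(0)  # hinge-loss subgradient
        grad += lam * (W[t] - W.mean(0))                    # task-task coupling term
        W[t] -= lr * grad

print("per-center training accuracy:",
      [float((np.sign(X @ W[t]) == y).mean()) for t, (X, y) in enumerate(centers)])
```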


Neurocomputing | 2017

Deep auto-context convolutional neural networks for standard-dose PET image estimation from low-dose PET/MRI

Lei Xiang; Yu Qiao; Dong Nie; Le An; Weili Lin; Qian Wang; Dinggang Shen

Positron emission tomography (PET) is an essential technique in many clinical applications such as tumor detection and brain disorder diagnosis. In order to obtain high-quality PET images, a standard-dose radioactive tracer is needed, which inevitably carries the risk of radiation exposure damage. To reduce the patients' exposure to radiation while maintaining the high quality of PET images, in this paper we propose a deep learning architecture to estimate the high-quality standard-dose PET (SPET) image from the combination of the low-quality low-dose PET (LPET) image and the accompanying T1-weighted acquisition from magnetic resonance imaging (MRI). Specifically, we adapt the convolutional neural network (CNN) to account for the two-channel inputs of LPET and T1, and directly learn the end-to-end mapping between the inputs and the SPET output. Then, we integrate multiple CNN modules following the auto-context strategy, such that the tentatively estimated SPET of an early CNN can be iteratively refined by subsequent CNNs. Validations on real human brain PET/MRI data show that our proposed method provides competitive estimation quality of the PET images, compared to state-of-the-art methods. Meanwhile, our method is highly efficient at test time on a new subject, e.g., spending ~2 seconds to estimate an entire SPET image, in contrast to ~16 minutes for the state-of-the-art method. These results demonstrate the potential of our method in real clinical applications.
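The auto-context cascade can be sketched roughly as below in PyTorch, with tiny placeholder CNNs: the first module sees the two input channels (LPET and T1), and each later module additionally sees the previous module's tentative SPET estimate.

```python
import torch
import torch.nn as nn

def make_cnn(in_ch):
    """A tiny stand-in CNN; the paper's modules are much larger."""
    return nn.Sequential(
        nn.Conv3d(in_ch, 8, 3, padding=1), nn.ReLU(),
        nn.Conv3d(8, 1, 3, padding=1),
    )

# Toy inputs standing in for a low-dose PET patch and a T1 MRI patch.
lpet = torch.randn(1, 1, 16, 16, 16)
t1 = torch.randn(1, 1, 16, 16, 16)

stage1 = make_cnn(in_ch=2)   # sees LPET + T1
stage2 = make_cnn(in_ch=3)   # sees LPET + T1 + tentative SPET (auto-context)

spet_0 = stage1(torch.cat([lpet, t1], dim=1))           # initial estimate
spet_1 = stage2(torch.cat([lpet, t1, spet_0], dim=1))   # refined estimate
print(spet_1.shape)
```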


Medical Image Analysis | 2018

Deep embedding convolutional neural network for synthesizing CT image from T1-Weighted MR image

Lei Xiang; Qian Wang; Dong Nie; Lichi Zhang; Xiyao Jin; Yu Qiao; Dinggang Shen

Highlights: We propose a very deep network architecture for estimating CT images from MR images directly. It learns an end-to-end mapping between different imaging modalities, without any patch-level pre- or post-processing. We present a novel embedding strategy that embeds the tentatively synthesized CT image into the feature maps and further transforms these feature maps forward for better estimating the final CT image. The experimental results show that our method can be flexibly adapted to different applications. Moreover, our method outperforms the state-of-the-art methods regarding both the accuracy of the estimated CT images and the speed of the synthesis process.

Abstract: Recently, more and more attention has been drawn to the field of medical image synthesis across modalities. Among these tasks, the synthesis of a computed tomography (CT) image from a T1-weighted magnetic resonance (MR) image is of great importance, although the mapping between them is highly complex due to the large appearance gap between the two modalities. In this work, we aim to tackle this MR-to-CT synthesis task with a novel deep embedding convolutional neural network (DECNN). Specifically, we generate feature maps from MR images and then transform these feature maps forward through convolutional layers in the network. We can further compute a tentative CT synthesis from the midway of the flow of feature maps, and then embed this tentative CT synthesis result back into the feature maps. This embedding operation results in better feature maps, which are further transformed forward in the DECNN. After repeating this embedding procedure several times in the network, we can eventually synthesize a final CT image at the end of the DECNN. We have validated our proposed method on both brain and prostate imaging datasets, also comparing with the state-of-the-art methods. Experimental results suggest that our DECNN (with repeated embedding operations) demonstrates superior performance, in terms of both the perceptive quality of the synthesized CT image and the run-time cost of synthesizing a CT image.
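A rough PyTorch sketch of the embedding operation, under the assumption of a tiny 2D network (the real DECNN is much deeper): a tentative CT is computed from the midway feature maps and concatenated back onto them before the flow of feature maps continues.

```python
import torch
import torch.nn as nn

# Toy layers standing in for the DECNN blocks (not the paper's configuration).
conv_in = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
to_ct = nn.Conv2d(16, 1, 1)                                 # tentative CT from features
embed = nn.Sequential(nn.Conv2d(17, 16, 3, padding=1), nn.ReLU())
conv_out = nn.Conv2d(16, 1, 3, padding=1)

mr = torch.randn(1, 1, 64, 64)
feats = conv_in(mr)
for _ in range(3):                                          # repeat the embedding block
    tentative_ct = to_ct(feats)                             # midway CT synthesis
    feats = embed(torch.cat([feats, tentative_ct], dim=1))  # embed it back into the features
print(conv_out(feats).shape)                                # final synthesized CT
```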


International Symposium on Biomedical Imaging | 2017

Segmentation of Organs at Risk in thoracic CT images using a SharpMask architecture and Conditional Random Fields

Roger Trullo; Caroline Petitjean; Su Ruan; Bernard Dubray; Dong Nie; Dinggang Shen

Cancer is one of the leading causes of death worldwide. Radiotherapy is a standard treatment for this condition, and the first step of the radiotherapy process is to identify the target volumes to be treated and the healthy organs at risk (OAR) to be protected. Unlike previous methods for automatic segmentation of OAR, which typically use local information and segment each OAR individually, in this paper we propose a deep learning framework for the joint segmentation of OAR in CT images of the thorax, specifically the heart, esophagus, trachea, and aorta. Making use of Fully Convolutional Networks (FCN), we present several extensions that improve the performance, including a new architecture that allows low-level features to be used together with high-level information, effectively combining local and global information to improve localization accuracy. Finally, by using Conditional Random Fields (specifically the CRF-as-Recurrent-Neural-Network model), we are able to account for relationships between the organs to further improve the segmentation results. Experiments demonstrate competitive performance on a dataset of 30 CT scans.
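The low-level/high-level feature combination can be illustrated with the small PyTorch sketch below; the channel counts and the five output classes (four organs plus background) are assumptions, not the paper's configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Toy SharpMask-style refinement: upsample coarse high-level features and
# merge them with low-level features so the segmentation keeps both global
# context and local detail. Not the authors' exact model.
low = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU())
high = nn.Sequential(nn.MaxPool2d(2), nn.Conv2d(16, 32, 3, padding=1), nn.ReLU())
refine = nn.Conv2d(48, 5, 1)   # 4 organs + background (assumed)

ct_slice = torch.randn(1, 1, 64, 64)
f_low = low(ct_slice)                              # fine, local features
f_high = F.interpolate(high(f_low), scale_factor=2,
                       mode="bilinear", align_corners=False)  # coarse, global features
print(refine(torch.cat([f_low, f_high], dim=1)).shape)        # per-pixel class scores
```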


IEEE Transactions on Biomedical Engineering | 2018

Medical Image Synthesis with Deep Convolutional Adversarial Networks

Dong Nie; Roger Trullo; J Lian; Li Wang; Caroline Petitjean; Su Ruan; Qian Wang; Dinggang Shen

Medical imaging plays a critical role in various clinical applications. However, due to multiple considerations such as cost and radiation dose, the acquisition of certain image modalities may be limited. Thus, medical image synthesis can be of great benefit by estimating a desired imaging modality without incurring an actual scan. In this paper, we propose a generative adversarial approach to address this challenging problem. Specifically, we train a fully convolutional network (FCN) to generate a target image given a source image. To better model the nonlinear mapping from source to target and to produce more realistic target images, we propose to use an adversarial learning strategy to train the FCN. Moreover, the FCN is designed to incorporate an image-gradient-difference-based loss function to avoid generating blurry target images. A long-term residual unit is also explored to help the training of the network. We further apply the Auto-Context Model to implement a context-aware deep convolutional adversarial network. Experimental results show that our method is accurate and robust for synthesizing target images from the corresponding source images. In particular, we evaluate our method on three datasets, addressing the tasks of generating CT from MRI and generating 7T MRI from 3T MRI images. Our method outperforms the state-of-the-art methods under comparison on all datasets and tasks.
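A hedged sketch of a conditional adversarial training loop of this general kind, in PyTorch with placeholder networks and loss weights (the paper's architectures, gradient-difference term, and long-term residual unit are omitted): the discriminator scores (source, target) pairs as real or synthesized, and the generator is trained with a reconstruction term plus the adversarial term.

```python
import torch
import torch.nn as nn

# Toy generator (source -> target) and discriminator (pair -> real/fake logit).
G = nn.Sequential(nn.Conv2d(1, 8, 3, padding=1), nn.ReLU(),
                  nn.Conv2d(8, 1, 3, padding=1))
D = nn.Sequential(nn.Conv2d(2, 8, 3, padding=1), nn.ReLU(),
                  nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                  nn.Linear(8, 1))
opt_g = torch.optim.Adam(G.parameters(), lr=1e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=1e-4)
bce = nn.BCEWithLogitsLoss()

src, tgt = torch.randn(4, 1, 32, 32), torch.randn(4, 1, 32, 32)
for _ in range(2):                       # toy iterations on random data
    fake = G(src)
    # Discriminator: real (source, target) pairs vs. synthesized pairs.
    d_loss = bce(D(torch.cat([src, tgt], 1)), torch.ones(4, 1)) + \
             bce(D(torch.cat([src, fake.detach()], 1)), torch.zeros(4, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()
    # Generator: L1 reconstruction plus adversarial term (weight 0.1 is assumed).
    g_loss = nn.functional.l1_loss(fake, tgt) + \
             0.1 * bce(D(torch.cat([src, fake], 1)), torch.ones(4, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
print(float(g_loss))
```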


Medical Image Computing and Computer Assisted Intervention | 2017

Deformable Image Registration Based on Similarity-Steered CNN Regression

Xiaohuan Cao; Jianhua Yang; Jun Zhang; Dong Nie; Minjeong Kim; Qian Wang; Dinggang Shen

Existing deformable registration methods require exhaustively iterative optimization, along with careful parameter tuning, to estimate the deformation field between images. Although some learning-based methods have been proposed for initiating deformation estimation, they are often template-specific and not flexible in practical use. In this paper, we propose a convolutional neural network (CNN) based regression model to directly learn the complex mapping from the input image pair (i.e., a pair of template and subject) to their corresponding deformation field. Specifically, our CNN architecture is designed in a patch-based manner to learn the complex mapping from the input patch pairs to their respective deformation field. First, the equalized active-points guided sampling strategy is introduced to facilitate accurate CNN model learning upon a limited image dataset. Then, the similarity-steered CNN architecture is designed, where we propose to add the auxiliary contextual cue, i.e., the similarity between input patches, to more directly guide the learning process. Experiments on different brain image datasets demonstrate promising registration performance based on our CNN model. Furthermore, it is found that the trained CNN model from one dataset can be successfully transferred to another dataset, although brain appearances across datasets are quite variable.
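A minimal sketch of the patch-wise regression setup in PyTorch, with an assumed tiny architecture (the similarity-steered cue and the active-points sampling strategy are not modeled here): the network takes a template/subject patch pair as a two-channel input and regresses a three-component displacement per voxel.

```python
import torch
import torch.nn as nn

# Toy patch-wise deformation regressor: 2-channel patch pair in,
# (dx, dy, dz) displacement field out. Not the authors' exact design.
net = nn.Sequential(
    nn.Conv3d(2, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 16, 3, padding=1), nn.ReLU(),
    nn.Conv3d(16, 3, 3, padding=1),
)

template = torch.randn(1, 1, 24, 24, 24)
subject = torch.randn(1, 1, 24, 24, 24)
field = net(torch.cat([template, subject], dim=1))
print(field.shape)  # torch.Size([1, 3, 24, 24, 24])
```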

Collaboration


Dive into Dong Nie's collaborations.

Top Co-Authors

Dinggang Shen
University of North Carolina at Chapel Hill

Li Wang
University of North Carolina at Chapel Hill

Qian Wang
Shanghai Jiao Tong University

Yaozong Gao
University of North Carolina at Chapel Hill

Xiaohuan Cao
Northwestern Polytechnical University

Han Zhang
University of North Carolina at Chapel Hill

Weili Lin
University of North Carolina at Chapel Hill

Ehsan Adeli
University of North Carolina at Chapel Hill

J Lian
University of North Carolina at Chapel Hill