Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Mingchen Gao is active.

Publication


Featured research published by Mingchen Gao.


IEEE Transactions on Medical Imaging | 2016

Deep Convolutional Neural Networks for Computer-Aided Detection: CNN Architectures, Dataset Characteristics and Transfer Learning

Hoo-Chang Shin; Holger R. Roth; Mingchen Gao; Le Lu; Ziyue Xu; Isabella Nogues; Jianhua Yao; Daniel J. Mollura; Ronald M. Summers

Remarkable progress has been made in image recognition, primarily due to the availability of large-scale annotated datasets and deep convolutional neural networks (CNNs). CNNs enable learning data-driven, highly representative, hierarchical image features from sufficient training data. However, obtaining datasets as comprehensively annotated as ImageNet in the medical imaging domain remains a challenge. There are currently three major techniques that successfully apply CNNs to medical image classification: training the CNN from scratch, using off-the-shelf pre-trained CNN features, and conducting unsupervised CNN pre-training with supervised fine-tuning. Another effective method is transfer learning, i.e., fine-tuning CNN models pre-trained on natural image datasets for medical image tasks. In this paper, we exploit three important, but previously understudied, factors of employing deep convolutional neural networks for computer-aided detection problems. We first explore and evaluate different CNN architectures. The studied models contain 5 thousand to 160 million parameters and vary in number of layers. We then evaluate the influence of dataset scale and spatial image context on performance. Finally, we examine when and why transfer learning from pre-trained ImageNet (via fine-tuning) can be useful. We study two specific computer-aided detection (CADe) problems, namely thoraco-abdominal lymph node (LN) detection and interstitial lung disease (ILD) classification. We achieve state-of-the-art performance on mediastinal LN detection, and report the first five-fold cross-validation classification results on predicting axial CT slices with ILD categories. Our extensive empirical evaluation, CNN model analysis and valuable insights can be extended to the design of high performance CAD systems for other medical imaging tasks.
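
As a rough illustration of the transfer-learning strategy discussed in this abstract, the sketch below fine-tunes an ImageNet-pre-trained CNN for a binary detection task in PyTorch. The backbone, class count, and hyperparameters are placeholders for illustration, not the configuration used in the paper.

```python
import torch
import torch.nn as nn
from torchvision import models

# Hypothetical sketch: fine-tune an ImageNet-pre-trained CNN for a
# two-class computer-aided detection task (e.g. candidate vs. background).
# This is not the authors' exact architecture or training protocol.

num_classes = 2  # placeholder: positive vs. negative candidates
model = models.alexnet(weights=models.AlexNet_Weights.IMAGENET1K_V1)

# Replace the final ImageNet classifier layer with a task-specific head.
model.classifier[6] = nn.Linear(model.classifier[6].in_features, num_classes)

# Fine-tune the whole network with a small learning rate, as is common
# when transferring from natural images to medical images.
optimizer = torch.optim.SGD(model.parameters(), lr=1e-4, momentum=0.9)
criterion = nn.CrossEntropyLoss()

def train_step(images, labels):
    """One optimization step on a batch of image patches (tensors)."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()
```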


Computer Vision and Pattern Recognition | 2011

Abnormal detection using interaction energy potentials

Xinyi Cui; Qingshan Liu; Mingchen Gao; Dimitris N. Metaxas

A new method is proposed to detect abnormal behaviors in human group activities. This approach effectively models group activities based on social behavior analysis. Different from previous work that uses independent local features, our method explores the relationships between the current behavior state of a subject and its actions. An interaction energy potential function is proposed to represent the current behavior state of a subject, and velocity is used as its actions. Our method does not depend on human detection or segmentation, so it is robust to detection errors. Instead, tracked spatio-temporal interest points provide a good basis for modeling group interaction. An SVM is used to find abnormal events. We evaluate our algorithm on two datasets, UMN and BEHAVE. Experimental results show its promising performance against state-of-the-art methods.
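
The classification stage can be pictured with a small sketch: a per-frame interaction-energy statistic computed from tracked points and their velocities, fed to an SVM. The energy formula below is illustrative only and is not the potential function defined in the paper.

```python
import numpy as np
from sklearn.svm import SVC

# Rough sketch: summarize each frame's tracked interest points by a simple
# pairwise "interaction energy" statistic, then classify frames as
# normal/abnormal with an SVM. The energy formula is illustrative, not the
# exact potential function defined in the paper.

def interaction_energy(positions, velocities):
    """positions, velocities: (N, 2) arrays for N tracked points in one frame."""
    n = len(positions)
    energy = 0.0
    for i in range(n):
        for j in range(i + 1, n):
            dist = np.linalg.norm(positions[i] - positions[j]) + 1e-6
            rel_speed = np.linalg.norm(velocities[i] - velocities[j])
            energy += rel_speed / dist  # close, fast-diverging pairs raise the energy
    return energy / max(n, 1)

# features: one energy value per frame; labels: 0 = normal, 1 = abnormal
clf = SVC(kernel="rbf")
# clf.fit(np.array(train_energies).reshape(-1, 1), train_labels)
# pred = clf.predict(np.array(test_energies).reshape(-1, 1))
```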


Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization | 2018

Holistic classification of CT attenuation patterns for interstitial lung diseases via deep convolutional neural networks

Mingchen Gao; Ulas Bagci; Le Lu; Aaron Wu; Mario Buty; Hoo-Chang Shin; Holger R. Roth; Georgios Z. Papadakis; Adrien Depeursinge; Ronald M. Summers; Ziyue Xu; Daniel J. Mollura

Interstitial lung diseases (ILD) involve several abnormal imaging patterns observed in computed tomography (CT) images. Accurate classification of these patterns plays a significant role in precise clinical decision making about the extent and nature of the disease, and is therefore important for developing automated pulmonary computer-aided detection systems. Conventionally, this task relies on experts’ manual identification of regions of interest (ROIs) as a prerequisite to diagnosing potential diseases. This protocol is time consuming and inhibits fully automatic assessment. In this paper, we present a new method to classify ILD imaging patterns on CT images. The key difference is that the proposed algorithm uses the entire image as a holistic input. By circumventing the prerequisite of manually delineated ROIs, our problem set-up is significantly more difficult than previous work but better matches the clinical workflow. Qualitative and quantitative results using a publicly available ILD database demonstrate state-of-the-art classification accuracy under the patch-based protocol and show the potential of predicting the ILD type from holistic images.
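
A minimal sketch of the holistic set-up, assuming a generic CNN backbone adapted to single-channel CT slices; the backbone and the number of output classes below are placeholders, not the paper's architecture or label set.

```python
import torch.nn as nn
from torchvision import models

# Sketch of holistic (whole-slice) classification: the network sees the
# entire axial CT slice rather than expert-selected ROIs. Backbone and
# class count are placeholders, not the paper's exact configuration.

NUM_ILD_CLASSES = 6  # hypothetical number of attenuation pattern classes

model = models.resnet18(weights=None)
model.conv1 = nn.Conv2d(1, 64, kernel_size=7, stride=2, padding=3, bias=False)  # 1-channel CT input
model.fc = nn.Linear(model.fc.in_features, NUM_ILD_CLASSES)

# A full CT slice can then be fed directly, e.g.:
# logits = model(slice_tensor.unsqueeze(0))   # slice_tensor: (1, 512, 512)
```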


Information Processing in Medical Imaging | 2013

Segmenting the papillary muscles and the trabeculae from high resolution cardiac CT through restoration of topological handles

Mingchen Gao; Chao Chen; Shaoting Zhang; Zhen Qian; Dimitris N. Metaxas; Leon Axel

We introduce a novel algorithm for segmenting high resolution CT images of the left ventricle (LV), particularly the papillary muscles and the trabeculae. High quality segmentations of these structures are necessary in order to better understand the anatomical function and geometrical properties of the LV. These fine structures, however, are extremely challenging to capture due to their delicate and complex nature in both geometry and topology. Our algorithm computes the potentially missing topological structures of a given initial segmentation. Using techniques from computational topology, e.g. persistent homology, our algorithm finds topological handles which are likely to be true signal. To further increase accuracy, these proposals are scored by the saliency and confidence of a trained classifier. Handles with high scores are restored in the final segmentation, leading to high quality segmentations of these complex structures.
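
To make the topological step concrete, the sketch below uses GUDHI's cubical complex to compute 1-dimensional persistence on an image-derived function and keep the most persistent candidate handles; the classifier-based scoring described in the abstract is omitted, and the persistence threshold is an assumption.

```python
import gudhi  # GUDHI's cubical complex computes persistent homology on grid data

# Sketch of the topological step only: compute 1-dimensional persistence
# (loops/handles) of a function defined on the image grid and keep the
# most persistent candidates. The saliency/classifier scoring from the
# paper is not shown.

def candidate_handles(volume, min_persistence=0.1):
    """volume: 3D numpy array (e.g., a signed distance map of the segmentation)."""
    cc = gudhi.CubicalComplex(top_dimensional_cells=volume)
    cc.persistence()
    pairs = cc.persistence_intervals_in_dimension(1)  # birth/death of 1-cycles
    # Long-lived 1-cycles are the candidate missing handles.
    return [(b, d) for (b, d) in pairs if d - b > min_persistence]
```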


Computer Vision and Image Understanding | 2013

3D anatomical shape atlas construction using mesh quality preserved deformable models

Shaoting Zhang; Yiqiang Zhan; Xinyi Cui; Mingchen Gao; Junzhou Huang; Dimitris N. Metaxas

3D anatomical shape atlas construction has been extensively studied in medical image analysis research, owing to its importance in model-based image segmentation, longitudinal studies and population statistical analysis, etc. Among the multiple steps of 3D shape atlas construction, establishing anatomical correspondences across subjects, i.e., surface registration, is probably the most critical but challenging one. Adaptive focus deformable model (AFDM) [1] was proposed to tackle this problem by exploiting cross-scale geometry characteristics of 3D anatomy surfaces. Although the effectiveness of AFDM has been proved in various studies, its performance is highly dependent on the quality of 3D surface meshes, which often degrades along with the iterations of deformable surface registration (the process of correspondence matching). In this paper, we propose a new framework for 3D anatomical shape atlas construction. Our method aims to robustly establish correspondences across different subjects and simultaneously generate high-quality surface meshes without removing shape details. Mathematically, a new energy term is embedded into the original energy function of AFDM to preserve surface mesh quality during deformable surface matching. More specifically, we employ the Laplacian representation to encode shape details and smoothness constraints. An expectation-maximization style algorithm is designed to optimize multiple energy terms alternately until convergence. We demonstrate the performance of our method via a set of diverse applications, including a population of sparse cardiac MRI slices with 2D labels, 3D high resolution CT cardiac images, and rodent brain MRIs with multiple structures. The constructed shape atlases exhibit good mesh quality, preserve fine shape details, and can further benefit other research topics such as segmentation and statistical analysis.
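
The Laplacian term can be sketched as follows: each vertex is encoded by its offset from the centroid of its neighbors, and an energy penalizes deviation from the reference deltas. This is an illustration of the representation only; the weighting and the EM-style optimization of the full energy are not shown.

```python
import numpy as np

# Sketch of the Laplacian representation used as a detail-preserving term:
# each vertex is encoded by its offset from the centroid of its neighbors
# (the "delta coordinates"). Keeping these deltas close to their reference
# values preserves shape detail while the surface deforms.

def delta_coordinates(vertices, neighbors):
    """vertices: (N, 3) array; neighbors: list of neighbor-index lists per vertex."""
    deltas = np.zeros_like(vertices)
    for i, nbrs in enumerate(neighbors):
        deltas[i] = vertices[i] - vertices[nbrs].mean(axis=0)
    return deltas

def laplacian_energy(current_vertices, neighbors, reference_deltas):
    """Penalize deviation of current delta coordinates from the reference shape."""
    return np.sum((delta_coordinates(current_vertices, neighbors) - reference_deltas) ** 2)
```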


International Conference on Functional Imaging and Modeling of the Heart | 2011

4D Cardiac Reconstruction Using High Resolution CT Images

Mingchen Gao; Junzhou Huang; Shaoting Zhang; Zhen Qian; Szilard Voros; Dimitris N. Metaxas; Leon Axel

Recent developments in 320 multi-detector CT technology have made volumetric acquisition of 4D high resolution cardiac images within a single heart beat possible. In this paper, we present a framework that uses these data to reconstruct the 4D motion of the endocardial surface of the left ventricle (LV) over a full cardiac cycle. This reconstruction framework captures, for the first time, the motion of the full 3D surfaces of complex anatomical features such as the papillary muscles and the ventricular trabeculae, which allows us to quantitatively investigate their possible functional significance in health and disease.


International Conference on Embedded Networked Sensor Systems | 2012

Towards robust device-free passive localization through automatic camera-assisted recalibration

Chenren Xu; Mingchen Gao; Bernhard Firner; Yanyong Zhang; Richard E. Howard; Jun Li

Device-free passive localization (DfP) techniques can localize human subjects without requiring them to wear a radio tag. Being convenient and private, DfP can find many applications in ubiquitous/pervasive computing. Unfortunately, DfP techniques need frequent manual recalibration of the radio signal values, which can be cumbersome and costly. We present SenCam, a sensor-camera collaboration solution that conducts automatic recalibration by leveraging existing surveillance camera(s). When the camera detects a subject, it can periodically trigger recalibration and update the radio signal data accordingly. This technique requires only occasional camera access, minimizing computational costs and reducing privacy concerns compared with localization techniques based solely on cameras. Through experiments in an open indoor space, we show that this scheme retains good localization results while avoiding manual recalibration.
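
A hedged sketch of the recalibration idea: whenever the camera reports a subject at a known location, the current radio readings are stored as fresh calibration samples for that location. The camera and radio interfaces (detect_subject, read_rssi_vector) are hypothetical stand-ins, not SenCam's actual API.

```python
import time

# Illustrative recalibration loop: the surveillance camera occasionally
# provides ground-truth locations, which are paired with the current RSSI
# readings to refresh the calibration database automatically.

calibration_db = {}  # location -> list of RSSI fingerprint vectors

def recalibrate_once(detect_subject, read_rssi_vector):
    location = detect_subject()  # e.g., grid cell reported by the camera
    if location is not None:
        calibration_db.setdefault(location, []).append(read_rssi_vector())

def recalibration_loop(detect_subject, read_rssi_vector, period_s=3600):
    """Trigger recalibration periodically instead of requiring manual site surveys."""
    while True:
        recalibrate_once(detect_subject, read_rssi_vector)
        time.sleep(period_s)
```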


International Symposium on Biomedical Imaging | 2016

Deep vessel tracking: A generalized probabilistic approach via deep learning

Aaron Wu; Ziyue Xu; Mingchen Gao; Mario Buty; Daniel J. Mollura

Analysis of vascular geometry is important in many medical imaging applications, such as retinal, pulmonary, and cardiac investigations. In order to make reliable judgments for clinical usage, accurate and robust segmentation methods are needed. Due to the high complexity of biological vasculature trees, manual identification is often too time-consuming and tedious to be used in practice. To design an automated and computerized method, a major challenge is that the appearance of vasculatures in medical images has great variance across modalities and subjects. Therefore, most existing approaches are specially designed for a particular task, lacking the flexibility to be adapted to other circumstances. In this paper, we present a generic approach for vascular structure identification from medical images, which can be used for multiple purposes robustly. The proposed method uses the state-of-the-art deep convolutional neural network (CNN) to learn the appearance features of the target. A Principal Component Analysis (PCA)-based nearest neighbor search is then utilized to estimate the local structure distribution, which is further incorporated within the generalized probabilistic tracking framework to extract the entire connected tree. Qualitative and quantitative results over retinal fundus data demonstrate that the proposed framework achieves comparable accuracy as compared with state-of-the-art methods, while efficiently producing more information regarding the candidate tree structure.
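
The PCA-based nearest-neighbor step can be sketched with scikit-learn as below; the CNN appearance features and the probabilistic tracker itself are assumed to exist elsewhere and are not shown, and the dimensionality and neighbor counts are illustrative.

```python
from sklearn.decomposition import PCA
from sklearn.neighbors import NearestNeighbors

# Sketch of the PCA + nearest-neighbor step only: local training patches
# (or their CNN probability maps) are projected into a low-dimensional PCA
# space; at tracking time, the nearest training examples approximate the
# local vessel-structure distribution.

def build_structure_index(train_patches, n_components=20, n_neighbors=10):
    """train_patches: (M, P) array of flattened local patches."""
    pca = PCA(n_components=n_components).fit(train_patches)
    index = NearestNeighbors(n_neighbors=n_neighbors).fit(pca.transform(train_patches))
    return pca, index

def local_structure_neighbors(pca, index, query_patch):
    """Return indices of the training patches most similar to the query patch."""
    q = pca.transform(query_patch.reshape(1, -1))
    _, idx = index.kneighbors(q)
    return idx[0]
```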


Medical Image Computing and Computer-Assisted Intervention | 2011

Using high resolution cardiac CT data to model and visualize patient-specific interactions between trabeculae and blood flow

Scott Kulp; Mingchen Gao; Shaoting Zhang; Zhen Qian; Szilard Voros; Dimitris N. Metaxas; Leon Axel

In this paper, we present a method to simulate and visualize blood flow through the human heart, using the reconstructed 4D motion of the endocardial surface of the left ventricle as boundary conditions. The reconstruction captures the motion of the full 3D surfaces of the complex features, such as the papillary muscles and the ventricular trabeculae. We use visualizations of the flow field to view the interactions between the blood and the trabeculae in far more detail than has been achieved previously, which promises to give a better understanding of cardiac flow. Finally, we use our simulation results to compare the blood flow within one healthy heart and two diseased hearts.
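
As a rough illustration of how reconstructed surface motion can drive a flow simulation, the sketch below interpolates mesh vertex positions between frames to obtain moving-wall positions and velocities; the actual CFD solver and coupling scheme used in the paper are not shown.

```python
import numpy as np

# Illustrative moving-wall boundary condition: vertex positions from
# consecutive reconstructed frames are interpolated in time, and their time
# derivative gives the wall velocity imposed on a flow solver.

def wall_boundary(frames, frame_times, t):
    """frames: list of (N, 3) vertex arrays; frame_times: matching time stamps."""
    k = np.searchsorted(frame_times, t) - 1
    k = int(np.clip(k, 0, len(frames) - 2))
    dt = frame_times[k + 1] - frame_times[k]
    alpha = (t - frame_times[k]) / dt
    position = (1 - alpha) * frames[k] + alpha * frames[k + 1]  # interpolated wall position
    velocity = (frames[k + 1] - frames[k]) / dt                 # per-vertex wall velocity
    return position, velocity
```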


International Symposium on Biomedical Imaging | 2016

Segmentation label propagation using deep convolutional neural networks and dense conditional random field

Mingchen Gao; Ziyue Xu; Le Lu; Aaron Wu; Isabella Nogues; Ronald M. Summers; Daniel J. Mollura

Availability and accessibility of large-scale annotated medical image datasets play an essential role in robust supervised learning for medical image analysis. Missing labels for regions of interest are a common issue in existing medical image datasets due to the labor intensive nature of the annotation task, which requires a high level of clinical proficiency. In this paper, we present a segmentation-based label propagation method, applied to a publicly available interstitial lung disease dataset [3], to address the missing-annotation challenge. Upon validation by an expert radiologist, the amount of available annotated training data is largely increased. Such a dataset expansion can potentially increase the accuracy of computer-aided detection (CAD) systems. The proposed constrained segmentation propagation algorithm combines cues from the initial annotations, deep convolutional neural networks and a dense fully-connected Conditional Random Field (CRF), and achieves high quantitative accuracy.
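
A sketch of the CRF refinement stage, assuming the pydensecrf package: CNN softmax probabilities serve as unary potentials, and Gaussian plus bilateral pairwise terms sharpen the propagated labels. The kernel parameters below are illustrative defaults, not the values tuned in the paper.

```python
import numpy as np
import pydensecrf.densecrf as dcrf
from pydensecrf.utils import unary_from_softmax

# Sketch of dense CRF refinement: CNN class probabilities become unary
# potentials; fully connected Gaussian and bilateral pairwise terms then
# refine the label map. Parameters are illustrative, not tuned values.

def refine_with_dense_crf(softmax_probs, image, n_iters=5):
    """softmax_probs: (C, H, W) CNN probabilities; image: (H, W, 3) uint8 rendering."""
    n_classes, h, w = softmax_probs.shape
    crf = dcrf.DenseCRF2D(w, h, n_classes)
    crf.setUnaryEnergy(unary_from_softmax(softmax_probs))
    crf.addPairwiseGaussian(sxy=3, compat=3)                          # spatial smoothness
    crf.addPairwiseBilateral(sxy=50, srgb=10, rgbim=image, compat=5)  # appearance-aware term
    q = crf.inference(n_iters)
    return np.argmax(np.array(q).reshape(n_classes, h, w), axis=0)
```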

Collaboration


Dive into Mingchen Gao's collaborations.

Top Co-Authors

Daniel J. Mollura (National Institutes of Health)
Ziyue Xu (National Institutes of Health)
Shaoting Zhang (University of North Carolina at Charlotte)
Le Lu (National Institutes of Health)
Ronald M. Summers (National Institutes of Health)
Aaron Wu (National Institutes of Health)
Junzhou Huang (University of Texas at Arlington)