
Publication


Featured research published by Jung W. Suh.


IEEE Transactions on Pattern Analysis and Machine Intelligence | 2013

Multi-Atlas Segmentation with Joint Label Fusion

Hongzhi Wang; Jung W. Suh; Sandhitsu R. Das; John Pluta; Caryne Craige; Paul A. Yushkevich

Multi-atlas segmentation is an effective approach for automatically labeling objects of interest in biomedical images. In this approach, multiple expert-segmented example images, called atlases, are registered to a target image, and deformed atlas segmentations are combined using label fusion. Among the proposed label fusion strategies, weighted voting with spatially varying weight distributions derived from atlas-target intensity similarity has been particularly successful. However, one limitation of these strategies is that the weights are computed independently for each atlas, without taking into account the fact that different atlases may produce similar label errors. To address this limitation, we propose a new solution for the label fusion problem in which weighted voting is formulated in terms of minimizing the total expectation of labeling error and in which pairwise dependency between atlases is explicitly modeled as the joint probability of two atlases making a segmentation error at a voxel. This probability is approximated using intensity similarity between a pair of atlases and the target image in the neighborhood of each voxel. We validate our method in two medical image segmentation problems: hippocampus segmentation and hippocampus subfield segmentation in magnetic resonance (MR) images. For both problems, we show consistent and significant improvement over label fusion strategies that assign atlas weights independently.
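A minimal numerical sketch of this formulation, assuming (as the abstract describes) that products of atlas-target intensity differences approximate the pairwise joint error at a voxel; the function names, patch shapes, and `eps` regularizer are illustrative, not the authors' implementation:

```python
import numpy as np

def joint_fusion_weights(atlas_patches, target_patch, eps=1e-6):
    """Closed-form label-fusion weights that model pairwise atlas errors.

    atlas_patches: (n_atlases, patch_len) intensities around a voxel
    target_patch:  (patch_len,) target intensities around the same voxel
    Returns weights summing to 1.
    """
    # Absolute intensity difference approximates each atlas's error signal.
    d = np.abs(atlas_patches - target_patch)            # (n, p)
    # Pairwise joint-error estimate from correlating the difference signals.
    M = d @ d.T / d.shape[1]                            # (n, n)
    M += eps * np.eye(M.shape[0])                       # keep M invertible
    ones = np.ones(M.shape[0])
    w = np.linalg.solve(M, ones)
    return w / w.sum()

def fuse_labels(atlas_labels, weights, n_labels):
    """Weighted voting at one voxel: pick the label with maximal total weight."""
    votes = np.zeros(n_labels)
    for lab, w in zip(atlas_labels, weights):
        votes[lab] += w
    return int(np.argmax(votes))
```

Note that the closed form constrains only the sum of the weights, so correlated atlases can receive reduced or even negative weight; the `eps` term is an added assumption to keep the matrix invertible when atlases are nearly identical.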


NeuroImage | 2011

A learning-based wrapper method to correct systematic errors in automatic image segmentation: Consistently improved performance in hippocampus, cortex and brain segmentation

Hongzhi Wang; Sandhitsu R. Das; Jung W. Suh; Murat Altinay; John Pluta; Caryne Craige; Brian B. Avants; Paul A. Yushkevich

We propose a simple but generally applicable approach to improving the accuracy of automatic image segmentation algorithms relative to manual segmentations. The approach is based on the hypothesis that a large fraction of the errors produced by automatic segmentation are systematic, i.e., occur consistently from subject to subject, and serves as a wrapper method around a given host segmentation method. The wrapper method attempts to learn the intensity, spatial and contextual patterns associated with systematic segmentation errors produced by the host method on training data for which manual segmentations are available. The method then attempts to correct such errors in segmentations produced by the host method on new images. One practical use of the proposed wrapper method is to adapt existing segmentation tools, without explicit modification, to imaging data and segmentation protocols that are different from those on which the tools were trained and tuned. An open-source implementation of the proposed wrapper method is provided, and can be applied to a wide range of image segmentation problems. The wrapper method is evaluated with four host brain MRI segmentation methods: hippocampus segmentation using FreeSurfer (Fischl et al., 2002); hippocampus segmentation using multi-atlas label fusion (Artaechevarria et al., 2009); brain extraction using BET (Smith, 2002); and brain tissue segmentation using FAST (Zhang et al., 2001). The wrapper method generates 72%, 14%, 29% and 21% fewer erroneously segmented voxels than the respective host segmentation methods. In the hippocampus segmentation experiment with multi-atlas label fusion as the host method, the average Dice overlap between reference segmentations and segmentations produced by the wrapper method is 0.908 for normal controls and 0.893 for patients with mild cognitive impairment. 
Average Dice overlaps of 0.964, 0.905 and 0.951 are obtained for brain extraction, white matter segmentation and gray matter segmentation, respectively.
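A drastically simplified sketch of the systematic-error hypothesis, not the paper's learning method (which learns intensity, spatial and contextual patterns): flag voxels where the host method disagrees with the manual segmentation in most training subjects, then flip those voxels in new host segmentations. All names and the threshold are hypothetical:

```python
import numpy as np

def learn_error_map(host_segs, manual_segs, threshold=0.5):
    """Mark voxels where the host method disagrees with the manual
    segmentation in more than `threshold` of the training subjects."""
    host = np.stack(host_segs).astype(bool)
    manual = np.stack(manual_segs).astype(bool)
    disagree_rate = (host != manual).mean(axis=0)
    return disagree_rate > threshold        # boolean "systematic error" mask

def correct(host_seg, error_map):
    """Flip the host labels at voxels flagged as systematically wrong."""
    out = host_seg.astype(bool).copy()
    out[error_map] = ~out[error_map]
    return out
```

This captures only location-based consistency; the actual wrapper method conditions on image features, which is what lets it generalize beyond a fixed voxel grid.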


Computer Vision and Pattern Recognition | 2011

Regression-based label fusion for multi-atlas segmentation

Hongzhi Wang; Jung W. Suh; Sandhitsu R. Das; John Pluta; Murat Altinay; Paul A. Yushkevich

Automatic segmentation using multi-atlas label fusion has been widely applied in medical image analysis. To simplify the label fusion problem, most methods implicitly make a strong assumption that the segmentation errors produced by different atlases are uncorrelated. We show that violating this assumption significantly reduces the efficiency of multi-atlas segmentation. To address this problem, we propose a regression-based approach for label fusion. Our experiments on segmenting the hippocampus in magnetic resonance images (MRI) show significant improvement over previous label fusion techniques.
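A minimal sketch of the regression idea, assuming the target patch is approximated as a ridge-regularized linear combination of atlas patches whose coefficients then serve as fusion weights; the names and the regularizer are illustrative, not the authors' exact formulation:

```python
import numpy as np

def regression_weights(atlas_patches, target_patch, ridge=1e-3):
    """Fit the target patch as a linear combination of atlas patches
    (ridge-regularized least squares); the coefficients act as fusion
    weights, so correlated atlases naturally share credit."""
    A = atlas_patches.T                      # (patch_len, n_atlases)
    n = A.shape[1]
    coef = np.linalg.solve(A.T @ A + ridge * np.eye(n), A.T @ target_patch)
    return coef
```

Because the atlases are fit jointly, two atlases making the same error cannot each receive full weight, which is the key difference from independent similarity-based weighting.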


Information Processing in Medical Imaging | 2011

Optimal weights for multi-atlas label fusion

Hongzhi Wang; Jung W. Suh; John Pluta; Murat Altinay; Paul A. Yushkevich

Multi-atlas based segmentation has been applied widely in medical image analysis. For label fusion, previous studies show that image similarity-based local weighting techniques produce the most accurate results. However, these methods ignore the correlations between results produced by different atlases. Furthermore, they rely on pre-selected weighting models and ad hoc methods to choose model parameters. We propose a novel label fusion method to address these limitations. Our formulation directly aims at reducing the expectation of the combined error and can be efficiently solved in a closed form. In our hippocampus segmentation experiment, our method significantly outperforms similarity-based local weighting. Using 20 atlases, we produce results with 0.898 ± 0.019 Dice overlap to manual labelings for controls.
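The closed-form solution follows from minimizing the expected squared combined error subject to the weights summing to one. Writing $\delta_i$ for the labeling error of atlas $i$ and $M_{ij} = E[\delta_i \delta_j]$, a sketch of the derivation:

```latex
\min_{w}\; E\Big[\Big(\sum_i w_i \delta_i\Big)^2\Big] = w^\top M w
\quad \text{subject to} \quad \mathbf{1}^\top w = 1 .

\mathcal{L}(w,\lambda) = w^\top M w - \lambda(\mathbf{1}^\top w - 1), \qquad
\nabla_w \mathcal{L} = 2 M w - \lambda \mathbf{1} = 0
\;\Rightarrow\; w = \tfrac{\lambda}{2}\, M^{-1}\mathbf{1} .

\text{Applying the constraint:}\qquad
w^{*} = \frac{M^{-1}\mathbf{1}}{\mathbf{1}^\top M^{-1}\mathbf{1}} .
```

When $M$ is diagonal (uncorrelated atlas errors), this reduces to weighting each atlas inversely by its own expected error, recovering independent similarity-based weighting as a special case.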


Journal of Computer Assisted Tomography | 2009

Deformable registration of supine and prone colons for computed tomographic colonography.

Jung W. Suh; Christopher L. Wyatt

Computed tomographic colonography is a minimally invasive technique for detecting colorectal polyps and colon cancer. Most computed tomographic colonography protocols acquire both prone and supine images to improve the visualization of the lumen wall, reduce false-positives, and improve sensitivity. Comparisons between the prone and supine images can be improved by registration between the scans. In this paper, we propose registering colon lumens, segmented from prone and supine images, using feature matching of the colon centerline and nonrigid registration of the lumen shapes represented as distance functions. Experimental registration results (n = 21 subjects) show a correspondence accuracy of 13.77 ± 6.20 mm for a range of polyp sizes. The overlap in the registered lumen segmentations shows an average Jaccard similarity coefficient of 0.915 ± 0.07.
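The overlap measure reported above, the Jaccard similarity coefficient, can be computed directly from two binary segmentation masks:

```python
import numpy as np

def jaccard(a, b):
    """Jaccard similarity between two binary masks: |A ∩ B| / |A ∪ B|."""
    a = np.asarray(a, dtype=bool)
    b = np.asarray(b, dtype=bool)
    union = np.logical_or(a, b).sum()
    if union == 0:
        return 1.0          # two empty masks agree perfectly
    return np.logical_and(a, b).sum() / union
```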


Frontiers in Neuroscience | 2012

Robust Automated Amygdala Segmentation via Multi-Atlas Diffeomorphic Registration

Jamie L. Hanson; Jung W. Suh; Brendon M. Nacewicz; Matthew J. Sutterer; Amelia A. Cayo; Diane E. Stodola; Cory A. Burghy; Hongzhi Wang; Brian B. Avants; Paul A. Yushkevich; Marilyn J. Essex; Seth D. Pollak; Richard J. Davidson

Here, we describe a novel method for volumetric segmentation of the amygdala from MR images collected from 35 human subjects. This approach is adapted from open-source techniques employed previously with the hippocampus (Suh et al., 2011; Wang et al., 2011a,b). Using multi-atlas segmentation and machine learning-based correction, we were able to produce automated amygdala segments with high Dice (Mean = 0.918 for the left amygdala; 0.916 for the right amygdala) and Jaccard coefficients (Mean = 0.850 for the left; 0.846 for the right) compared to rigorously hand-traced volumes. This automated routine also produced amygdala segments with high intra-class correlations (consistency = 0.830, absolute agreement = 0.819 for the left; consistency = 0.786, absolute agreement = 0.783 for the right) and high bivariate correlations (r = 0.831 for the left; r = 0.797 for the right) with hand-drawn amygdala volumes. Our results are discussed in relation to other cutting-edge segmentation techniques, as well as commonly available approaches to amygdala segmentation (e.g., Freesurfer). We believe this new technique has broad application to research with large sample sizes for which amygdala quantification might be needed.


Medical Image Analysis | 2011

A non-rigid registration method for serial lower extremity hybrid SPECT/CT imaging.

Jung W. Suh; Dustin Scheinost; Donald P. Dione; Lawrence W. Dobrucki; Albert J. Sinusas; Xenophon Papademetris

Small animal X-ray computed tomographic (microCT) imaging of the lower extremities permits evaluation of arterial growth in models of hindlimb ischemia, and when applied serially can provide quantitative information about disease progression and aid in the evaluation of therapeutic interventions. The quantification of changes in tissue perfusion and concentration of molecular markers concurrently obtained using nuclear imaging requires the ability to non-rigidly register the microCT images over time, a task made more challenging by the potentially large changes in the positions of the legs due to articulation. While non-rigid registration methods have been extensively used in the evaluation of individual organs, application in whole body imaging has been limited, primarily because the scale of possible displacements and deformations is large, resulting in poor convergence of most methods. In this paper we present a new method based on the extended demons algorithm that uses a level-set representation of the body contour and skeletal structure as an input. The proposed serial registration method reflects the natural physical composition of mouse movement, in which the movement of bones provides the framework for body movements and the movement of skin constrains the detailed movements of the specific segmented body regions. We applied our method to both the registration of serial microCT mouse images and the quantification of the microSPECT component of serial hybrid microCT-SPECT images, demonstrating improved performance as compared to existing registration techniques.
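The demons family of algorithms that the paper extends can be illustrated with the classic per-voxel force (Thirion's demons); this is a minimal sketch of a single update, not the paper's extended algorithm with level-set body-contour and skeleton inputs:

```python
import numpy as np

def demons_step(fixed, moving, eps=1e-9):
    """One iteration of the classic demons force: displacement along the
    fixed-image gradient, scaled by the intensity mismatch and damped where
    the gradient is weak. Returns one displacement array per image axis."""
    diff = moving - fixed
    grads = np.gradient(fixed)              # per-axis gradients
    if fixed.ndim == 1:
        grads = [grads]                     # np.gradient returns a bare array in 1D
    grad_sq = sum(g * g for g in grads)
    denom = grad_sq + diff * diff + eps
    return [-(diff * g) / denom for g in grads]
```

In practice the displacement field would be smoothed (e.g. Gaussian-regularized) and the step iterated to convergence; that machinery, and the paper's weighting by anatomical structure, is omitted here.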


Medical Physics | 2012

CT-PET weighted image fusion for separately scanned whole body rat.

Jung W. Suh; Oh-Kyu Kwon; Dustin Scheinost; Albert J. Sinusas; Gary W. Cline; Xenophon Papademetris

PURPOSE: The limited resolution and lack of spatial information in positron emission tomography (PET) images require complementary anatomic information from computed tomography (CT) and/or magnetic resonance imaging (MRI). Therefore, multimodality image fusion techniques such as PET/CT are critical in mapping the functional images to structural images and thus facilitate the interpretation of PET studies. In our experimental situation, the CT and PET images are acquired in separate scanners at different times, and the inherent differences in the imaging protocols produce significant nonrigid changes between the two acquisitions in addition to dissimilar image characteristics. The registration conditions are also poor because CT images have artifacts due to the limitations of current scanning settings, while PET images are very blurry (in transmission-PET) and have vague anatomical structure boundaries (in emission-PET).

METHODS: The authors present a new method for whole body small animal multimodal registration. In particular, the authors register whole body rat CT and PET images using a weighted demons algorithm. The authors use both the transmission-PET and the emission-PET images in the registration process, emphasizing particular regions of the moving transmission-PET image using the emission-PET image. After a rigid transformation and a histogram matching between the CT and the transmission-PET images, the authors deformably register the transmission-PET image to the CT image with weights based on the intensity-normalized emission-PET image. For the deformable registration process, the authors develop a weighted demons registration method that can give preference to particular regions of the input image using a weight image.

RESULTS: The authors validate the results with nine rat image sets using the M-Hausdorff distance (M-HD) similarity measure with different outlier-suppression parameters (OSP). In comparison with standard methods such as the regular demons and the normalized mutual information (NMI)-based nonrigid free-form deformation (FFD) registration, the proposed weighted demons registration method shows average M-HD errors of 3.99 ± 1.37 (OSP = 10), 5.04 ± 1.59 (OSP = 20) and 5.92 ± 1.61 (OSP = ∞), with statistical significance (p < 0.0003), while NMI-based nonrigid FFD has average M-HD errors of 5.74 ± 1.73 (OSP = 10), 7.40 ± 7.84 (OSP = 20) and 9.83 ± 4.13 (OSP = ∞), and the regular demons has average M-HD errors of 6.79 ± 0.83 (OSP = 10), 9.19 ± 2.39 (OSP = 20) and 11.63 ± 3.99 (OSP = ∞). In addition to the M-HD comparisons, visual comparisons of the faint-edged regions between the CT and the aligned PET images also show encouraging improvements over the other methods.

CONCLUSIONS: In whole body multimodal registration between CT and PET images, using both the transmission-PET and the emission-PET images in the registration process, by emphasizing particular regions of the transmission-PET image with an emission-PET image, is effective. This method holds promise for other image fusion applications where multiple (more than two) input images must be registered into a single informative image.
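As a rough illustration of the validation metric, a modified Hausdorff distance (Dubuisson and Jain's symmetric mean-of-nearest-distances form) with a simple clipping-style outlier-suppression parameter is sketched below; the paper's exact M-HD/OSP definition is not stated in the abstract, so treat the clipping interpretation as an assumption:

```python
import numpy as np

def mhd(points_a, points_b, osp=np.inf):
    """Modified Hausdorff distance between two point sets, with each
    nearest-neighbour distance clipped at the outlier-suppression
    parameter `osp` before averaging."""
    a = np.asarray(points_a, dtype=float)
    b = np.asarray(points_b, dtype=float)
    # pairwise Euclidean distances, shape (n_a, n_b)
    d = np.linalg.norm(a[:, None, :] - b[None, :, :], axis=-1)
    d_ab = np.minimum(d.min(axis=1), osp).mean()   # a -> b
    d_ba = np.minimum(d.min(axis=0), osp).mean()   # b -> a
    return max(d_ab, d_ba)
```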


IEEE Transactions on Biomedical Engineering | 2011

Registration Under Topological Change for CT Colonography

Jung W. Suh; Christopher L. Wyatt

Computed tomography (CT) colonography is a minimally invasive screening technique for colorectal polyps, in which X-ray CT images of the distended colon are acquired, usually in the prone and supine positions of a single patient. Registration of segmented colon images from both positions will be useful for computer-assisted polyp detection. We have previously presented algorithms for registration of the prone and supine colons when both are well distended and there is a single connected lumen. However, due to inadequate bowel preparation or peristalsis, there may be collapsed segments in one or both of the colon images resulting in a topological change in the images. Such changes make deformable registration of the colon images difficult, and at present, there are no registration algorithms that can accommodate them. In this paper, we present an algorithm that can perform volume registration of prone/supine colon images in the presence of a topological change. For this purpose, 3-D volume images are embedded as a manifold in a 4-D space, and the manifold is evolved for nonrigid registration. Experiments using data from 24 patients show that the proposed method achieves good registration results in both the shape alignment of topologically different colon images from a single patient and the polyp location estimation between supine and prone colon images.


International Symposium on Biomedical Imaging | 2010

Serial nonrigid vascular registration using weighted normalized mutual information

Jung W. Suh; Dustin Scheinost; Xiaoning Qian; Albert J. Sinusas; Christopher K. Breuer; Xenophon Papademetris

Vascular registration is a challenging problem with many potential applications. However, registering vessels accurately is difficult as they often occupy a small portion of the image and their relative motion/deformation is swamped by the displacements seen in large organs such as the heart and the liver. Our registration method uses a vessel detection algorithm to generate a vesselness image (the probability of a vessel at any given voxel), which is used to construct a weighting factor that modifies the intensity metric to give preference to vascular structures while maintaining the larger context. Therefore, our proposed method uses fully data-driven weights and needs no prior knowledge for the weight calculation. We applied our method to the registration of serial MRI lamb images obtained from studies on tissue engineered vascular grafts and demonstrate encouraging performance as compared to non-weighted registration methods.
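A minimal sketch of a weighted similarity metric of the kind described, assuming the weight enters as each voxel's contribution to the joint histogram; the function name, binning, and entropy form are illustrative, and the paper's exact weighting scheme may differ:

```python
import numpy as np

def weighted_nmi(img_a, img_b, weights, bins=32):
    """Normalized mutual information, (H(A) + H(B)) / H(A, B), computed from
    a joint histogram in which each voxel contributes its weight (e.g. a
    vesselness score) instead of a unit count."""
    a = np.ravel(img_a)
    b = np.ravel(img_b)
    w = np.ravel(weights)
    joint, _, _ = np.histogram2d(a, b, bins=bins, weights=w)
    p = joint / joint.sum()
    pa = p.sum(axis=1)
    pb = p.sum(axis=0)

    def entropy(q):
        q = q[q > 0]
        return -(q * np.log(q)).sum()

    return (entropy(pa) + entropy(pb)) / entropy(p.ravel())
```

For identical images under any positive weighting the measure attains its maximum of 2, while independent images drive it toward 1, so higher is better during optimization.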

Collaboration


Dive into Jung W. Suh's collaboration.

Top Co-Authors

John Pluta

University of Pennsylvania

Sandhitsu R. Das

University of Pennsylvania

Brian B. Avants

University of Pennsylvania

Murat Altinay

University of Pennsylvania