
Publication


Featured research published by Dong Hye Ye.


IEEE Transactions on Medical Imaging | 2015

The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS)

Bjoern H. Menze; András Jakab; Stefan Bauer; Jayashree Kalpathy-Cramer; Keyvan Farahani; Justin S. Kirby; Yuliya Burren; Nicole Porz; Johannes Slotboom; Roland Wiest; Levente Lanczi; Elizabeth R. Gerstner; Marc-André Weber; Tal Arbel; Brian B. Avants; Nicholas Ayache; Patricia Buendia; D. Louis Collins; Nicolas Cordier; Jason J. Corso; Antonio Criminisi; Tilak Das; Hervé Delingette; Çağatay Demiralp; Christopher R. Durst; Michel Dojat; Senan Doyle; Joana Festa; Florence Forbes; Ezequiel Geremia

In this paper we report the set-up and results of the Multimodal Brain Tumor Image Segmentation Benchmark (BRATS) organized in conjunction with the MICCAI 2012 and 2013 conferences. Twenty state-of-the-art tumor segmentation algorithms were applied to a set of 65 multi-contrast MR scans of low- and high-grade glioma patients - manually annotated by up to four raters - and to 65 comparable scans generated using tumor image simulation software. Quantitative evaluations revealed considerable disagreement between the human raters in segmenting various tumor sub-regions (Dice scores in the range 74%-85%), illustrating the difficulty of this task. We found that different algorithms worked best for different sub-regions (reaching performance comparable to human inter-rater variability), but that no single algorithm ranked in the top for all sub-regions simultaneously. Fusing several good algorithms using a hierarchical majority vote yielded segmentations that consistently ranked above all individual algorithms, indicating remaining opportunities for further methodological improvements. The BRATS image data and manual annotations continue to be publicly available through an online evaluation system as an ongoing benchmarking resource.
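The fusion step described above can be illustrated with a small sketch. This is a minimal per-voxel majority vote over several candidate segmentations; the hierarchical ranking of algorithms used in the actual benchmark is omitted, and the function name and interface are illustrative only.

```python
import numpy as np

def majority_vote_fusion(segmentations):
    """Fuse label maps from several algorithms by per-voxel majority vote.

    segmentations: list of integer label arrays with identical shape.
    Returns the label that the most algorithms agree on at each voxel.
    """
    stacked = np.stack(segmentations)          # (n_algorithms, *image_shape)
    labels = np.unique(stacked)
    # Count votes for each candidate label at every voxel.
    votes = np.stack([(stacked == lab).sum(axis=0) for lab in labels])
    return labels[np.argmax(votes, axis=0)]
```

For example, fusing three 2x2 label maps returns, at each voxel, the label chosen by at least two of the three inputs.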


Medical Image Analysis | 2010

GRAM: A framework for geodesic registration on anatomical manifolds.

Jihun Hamm; Dong Hye Ye; Ragini Verma; Christos Davatzikos

Medical image registration is a challenging problem, especially when there is large anatomical variation in the anatomies. Geodesic registration methods have been proposed to solve the large deformation registration problem. However, analytically defined geodesic paths may not coincide with biologically plausible paths of registration, since the manifold of diffeomorphisms is immensely broader than the manifold spanned by diffeomorphisms between real anatomies. In this paper, we propose a novel framework for large deformation registration using the learned manifold of anatomical variation in the data. In this framework, a large deformation between two images is decomposed into a series of small deformations along the shortest path on an empirical manifold that represents anatomical variation. Using a manifold learning technique, the major variation of the data can be visualized by a low-dimensional embedding, and the optimal group template is chosen as the geodesic mean on the manifold. We demonstrate the advantages of the proposed framework over direct registration with both simulated and real databases of brain images.
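The core idea of decomposing one large deformation into a chain of small ones along the empirical manifold can be sketched as a shortest-path search on a k-nearest-neighbour graph of pairwise deformation costs. This is only a schematic of the graph step, assuming a precomputed symmetric cost matrix; the paper's learned diffeomorphism distances and manifold embedding are not reproduced here.

```python
import heapq

def shortest_registration_path(dist, source, target, k=3):
    """Shortest path through an empirical manifold of anatomies.

    dist: symmetric matrix of pairwise deformation costs between images.
    Each image connects to its k nearest neighbours; Dijkstra then finds
    the sequence of small deformations linking source to target.
    """
    n = len(dist)
    # k-nearest-neighbour graph approximating the anatomical manifold.
    nbrs = [sorted(range(n), key=lambda j: dist[i][j])[1:k + 1] for i in range(n)]
    best = {source: 0.0}
    prev = {}
    heap = [(0.0, source)]
    while heap:
        d, i = heapq.heappop(heap)
        if i == target:
            break
        if d > best.get(i, float("inf")):
            continue
        for j in nbrs[i]:
            nd = d + dist[i][j]
            if nd < best.get(j, float("inf")):
                best[j] = nd
                prev[j] = i
                heapq.heappush(heap, (nd, j))
    # Reconstruct the path of intermediate anatomies.
    path, node = [target], target
    while node != source:
        node = prev[node]
        path.append(node)
    return path[::-1]
```

The returned index sequence names the intermediate anatomies through which the source image is registered toward the target.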


IEEE Transactions on Medical Imaging | 2014

Regional Manifold Learning for Disease Classification

Dong Hye Ye; Benoit Desjardins; Jihun Hamm; Harold I. Litt; Kilian M. Pohl

While manifold learning from images itself has become widely used in medical image analysis, the accuracy of existing implementations suffers from viewing each image as a single data point. To address this issue, we parcellate images into regions and then separately learn the manifold for each region. We use the regional manifolds as low-dimensional descriptors of high-dimensional morphological image features, which are then fed into a classifier to identify regions affected by disease. We produce a single ensemble decision for each scan by the weighted combination of these regional classification results. Each weight is determined by the regional accuracy of detecting the disease. When applied to cardiac magnetic resonance imaging of 50 normal controls and 50 patients with reconstructive surgery of Tetralogy of Fallot, our method achieves significantly better classification accuracy than approaches learning a single manifold across the entire image domain.
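The weighted combination of regional classifiers described above reduces to a simple weighted vote. The sketch below assumes signed per-region classifier scores and uses each region's detection accuracy as its weight; names and the sign convention are illustrative, not the paper's exact formulation.

```python
import numpy as np

def ensemble_decision(regional_scores, regional_accuracies):
    """Combine per-region classifier outputs into one decision per scan.

    regional_scores: array (n_regions,) of signed classifier scores
        (>0 means 'disease') for one scan, one entry per image region.
    regional_accuracies: array (n_regions,) of each region's accuracy
        at detecting the disease, used as combination weights.
    """
    w = np.asarray(regional_accuracies, dtype=float)
    w = w / w.sum()                      # normalise weights
    combined = float(np.dot(w, regional_scores))
    return combined > 0                  # True -> disease
```

Regions that are more reliable at detecting the disease thus pull the ensemble decision more strongly.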


IEEE Transactions on Computational Imaging | 2016

A Gaussian Mixture MRF for Model-Based Iterative Reconstruction With Applications to Low-Dose X-Ray CT

Ruoqiao Zhang; Dong Hye Ye; Debashish Pal; Jean-Baptiste Thibault; Ken D. Sauer; Charles A. Bouman

Markov random fields (MRFs) have been widely used as prior models in various inverse problems such as tomographic reconstruction. While MRFs provide a simple and often effective way to model the spatial dependencies in images, they suffer from the fact that parameter estimation is difficult. In practice, this means that MRFs typically have very simple structure that cannot completely capture the subtle characteristics of complex images. In this paper, we present a novel Gaussian mixture Markov random field model (GM-MRF) that can be used as a very expressive prior model for inverse problems such as denoising and reconstruction. The GM-MRF forms a global image model by merging together individual Gaussian-mixture models (GMMs) for image patches. In addition, we present a novel analytical framework for computing MAP estimates using the GM-MRF prior model through the construction of surrogate functions that result in a sequence of quadratic optimizations. We also introduce a simple but effective method to adjust the GM-MRF so as to control the sharpness in low- and high-contrast regions of the reconstruction separately. We demonstrate the value of the model with experiments including image denoising and low-dose CT reconstruction.
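The surrogate-function idea, holding the mixture responsibilities fixed to obtain a quadratic problem with a closed-form minimizer, can be shown in a heavily simplified setting: a pixelwise Gaussian-mixture prior rather than the paper's patch-based GM-MRF. All names and parameters below are illustrative.

```python
import numpy as np

def gmm_map_denoise(y, pi, mu, var_m, noise_var, n_iter=20):
    """MAP denoising of a signal y under a pixelwise Gaussian-mixture prior.

    Majorization-minimization: fixing the mixture responsibilities yields a
    quadratic surrogate whose minimizer has a closed form; iterating this
    converges to a local MAP estimate.
    """
    x = np.array(y, dtype=float)
    for _ in range(n_iter):
        # Responsibilities of each mixture component at every pixel.
        logp = (np.log(pi)[:, None]
                - 0.5 * np.log(2 * np.pi * var_m)[:, None]
                - (x[None, :] - mu[:, None]) ** 2 / (2 * var_m[:, None]))
        g = np.exp(logp - logp.max(axis=0))
        g /= g.sum(axis=0)
        # Quadratic surrogate: precision-weighted blend of data and means.
        prec = 1.0 / noise_var + (g / var_m[:, None]).sum(axis=0)
        x = (y / noise_var + (g * mu[:, None] / var_m[:, None]).sum(axis=0)) / prec
    return x
```

With tight mixture components, each noisy pixel is pulled toward its most responsible component mean, which mimics the edge-preserving behaviour the full model achieves on image patches.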


Electronic Imaging | 2016

A Supervised Learning Approach for Dynamic Sampling.

G. M. Dilshan Godaliyadda; Dong Hye Ye; Michael D. Uchic; Michael A. Groeber; Gregery T. Buzzard; Charles A. Bouman

Sparse sampling schemes have the potential to reduce image acquisition time by reconstructing a desired image from a sparse subset of measured pixels. Moreover, dynamic sparse sampling methods have the greatest potential because each new pixel is selected based on information obtained from previous samples. However, existing dynamic sampling methods tend to be computationally expensive and therefore too slow for practical application. In this paper, we present a supervised learning based algorithm for dynamic sampling (SLADS) that uses machine-learning techniques to select the location of each new pixel measurement. SLADS is fast enough for practical imaging applications because each new pixel location is selected using a simple regression algorithm. In addition, SLADS is accurate because the machine learning algorithm is trained using a total reduction in distortion metric that accounts for distortion in a neighborhood of the pixel being sampled. We present results on both computationally generated synthetic data and experimentally collected data that demonstrate substantial improvement relative to state-of-the-art static sampling methods.

Introduction

In conventional point-wise image acquisition, all pixels in a rectilinear grid are measured. However, in many imaging applications, a single high-fidelity pixel measurement can take up to 1 second. Examples of such methods include electron backscatter diffraction (EBSD) microscopy and Raman spectroscopy, which are of great importance in materials science and chemistry [1]. Acquiring a complete set of high-resolution measurements in these imaging applications therefore becomes impractical. Sparse sampling offers the potential to dramatically reduce the time required to acquire an image. In this approach, a sparse set of pixels is measured, and the full-resolution image is reconstructed from that set of sparse measurements.
In addition to speeding image acquisition, sparse sampling methods also hold the potential to reduce the exposure of the object being imaged to destructive radiation. This is of critical importance when imaging biological samples using X-rays, electrons, or even optical photons [2, 3]. Sparse sampling approaches fall into two main categories: static and dynamic. In static sampling, pixels are measured in a pre-defined order. Examples of static sparse sampling methods include random sampling strategies such as in [4] and low-discrepancy sampling [5]. As a result, some samples from these methods may not be very informative, because they do not take into account the object being scanned. There are also static sampling methods based on a priori knowledge of the object's geometry and sparsity, such as [6, 7]; however, a priori knowledge is not always available for general imaging applications. On the other hand, dynamic sampling (DS) methods adaptively determine new measurement locations based on the information obtained from previous measurements. This is a very powerful technique, since in real applications previous measurements can tell one a great deal about the object being scanned and about the best locations for future measurements. Therefore, dynamic sampling has the potential to dramatically reduce the total number of samples required to achieve a given level of distortion in the reconstructed image. An example of a dynamic sampling method was proposed by Kovačević et al. [8]. There, an object is initially measured with a sparse grid; then, if the intensity of a pixel is above a certain threshold, the vicinity of that pixel is measured at higher resolution. However, the threshold was chosen empirically for the specific scanner, so this method cannot be generalized to different imaging modalities.
For general applications, a set of DS methods has been proposed in the literature in which an objective function is designed and the measurements are chosen to optimize it. For instance, dynamic compressive sensing methods [9-11] find the next measurement that maximally reduces the differential entropy. However, dynamic compressive sensing methods use an unconstrained projection as a measurement and are therefore not suitable for point-wise measurements, where the measurement is constrained. Apart from these methods, application-specific DS methods that optimize an objective function to find the next measurement have been developed. One example is [12], where the authors modify the optimal experimental design framework [13] to incorporate dynamic measurement selection in a biochemical network. Seeger et al. [14] also find the measurement that most reduces the differential entropy, but now to select optimal K-space spiral and line measurements for magnetic resonance imaging (MRI). In addition, Batenburg et al. [15] propose a DS method for binary computed tomography in which the measurement that maximizes the information gain is selected. Even though these measurements are constrained, the methods are application specific and therefore not applicable to general point-wise measurements. In [16], Godaliyadda et al. propose a DS algorithm for general point-wise measurements. There, the authors use a Monte Carlo simulation method to approximate the conditional variance at every unmeasured location, given the previous measurements, and select the pixel with the largest conditional variance. However, Monte Carlo simulation methods such as the Metropolis-Hastings method are very slow, and this method is therefore infeasible for real-time applications.
Furthermore, the objective function in this method does not account for the change in conditional variance across the entire image caused by a new measurement. In this paper, we propose a new DS algorithm for point-wise measurements named the supervised learning approach for dynamic sampling (SLADS). The objective of SLADS is to select a new pixel so as to maximally reduce the conditional expectation of the reduction in distortion (ERD) in the entire reconstructed image. In SLADS, we compute the reduction in distortion for each pixel in a training data set and then learn the relationship between the ERD and a local feature vector through a regression algorithm. Because we use a supervised learning approach, we can very rapidly estimate the ERD at each pixel of an unknown testing image. Moreover, we introduce a measure that approximates the distortion reduction in the training dataset so that it accounts for the distortion reduction at the pixel and its neighbors. Since computing the exact distortion reduction for each pixel during training can be intractable, particularly for large images, this approximation is vital to making the training procedure feasible. Experimental results on sampling a computationally generated synthetic EBSD image and an experimentally collected image show that SLADS can compute new sample locations very quickly (in the range of 5-500 ms) and can achieve the same reconstruction distortion as static sampling methods with dramatically fewer samples (2-4 times fewer).

Dynamic Sampling Framework

The objective in sparse sampling is to measure a sparse set of pixels in an image and then reconstruct the full-resolution image from those sparse samples.
Moreover, with sparse dynamic sampling, the location of each new pixel to measure is informed by all previous pixel measurements. To formulate the problem, we denote the image we would like to measure as $X \in \mathbb{R}^N$, where $X_r$ is the pixel at location $r \in \Omega$. Furthermore, assume that $k$ pixels have been measured at the set of locations $S = \{s^{(1)}, \ldots, s^{(k)}\}$, and that the corresponding locations and measured values are collected in the $k \times 2$ matrix

$$Y^{(k)} = \begin{bmatrix} s^{(1)}, X_{s^{(1)}} \\ \vdots \\ s^{(k)}, X_{s^{(k)}} \end{bmatrix}.$$

From $Y^{(k)}$, we can reconstruct an image $\hat{X}^{(k)}$, which is our best estimate of $X$ given the first $k$ measurements. Now, if we select $X_s$ as our next pixel to measure, then presumably we can reconstruct a better estimate of the image, which we denote by $\hat{X}^{(k;s)}$: our best estimate of $X$ given both $Y^{(k)}$ and $X_s$. Our goal at this point is to select the next location $s^{(k+1)}$ that results in the greatest decrease in reconstruction distortion. To formulate this problem, let $D(X_r, \hat{X}_r)$ denote the distortion measure between a pixel $X_r$ and its estimate $\hat{X}_r$, and let

$$D(X, \hat{X}) = \sum_{r \in \Omega} D(X_r, \hat{X}_r) \quad (1)$$

denote the total distortion between the image $X$ and its estimate $\hat{X}$. Using this notation, we define $R_r^{(k;s)}$ as the local reduction in distortion at pixel $r$ that would result from the measurement of the pixel $X_s$:

$$R_r^{(k;s)} = D(X_r, \hat{X}_r^{(k)}) - D(X_r, \hat{X}_r^{(k;s)}). \quad (2)$$

Importantly, the measurement of the pixel $X_s$ does not only reduce distortion at that pixel; it also reduces the distortion at neighboring pixels. So in order to represent the total reduction in distortion, we must sum over all pixels $r \in \Omega$:

$$R^{(k;s)} = \sum_{r \in \Omega} R_r^{(k;s)} \quad (3)$$
$$= D(X, \hat{X}^{(k)}) - D(X, \hat{X}^{(k;s)}). \quad (4)$$

Of course, we do not know the value of $X_s$ until it is measured, so we also do not know the value of $R^{(k;s)}$. Therefore, we must make our selection of the next pixel based on the conditional expectation of the reduction in distortion, which we refer to as the ERD:

$$\bar{R}^{(k;s)} = E\left[ R^{(k;s)} \mid Y^{(k)} \right]. \quad (5)$$

With this notation, our goal is to efficiently compute the next pixel to sample, $s^{(k+1)}$, as the solution to the following optimization:

$$s^{(k+1)} = \arg\max_{s \in \Omega \setminus S} \bar{R}^{(k;s)}. \quad (6)$$

Once we measure the location $X_{s^{(k+1)}}$, we form the new measurement matrix

$$Y^{(k+1)} = \begin{bmatrix} Y^{(k)} \\ s^{(k+1)}, X_{s^{(k+1)}} \end{bmatrix}.$$
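A single selection step of this framework can be sketched in a few lines: estimate the ERD at every unmeasured pixel with a trained regressor, then take the argmax as in the optimization above. The feature extraction and the regressor itself are left abstract here; the interface below is an illustrative assumption, not the authors' implementation.

```python
import numpy as np

def slads_select_next(measured_mask, features, regressor):
    """One SLADS step: pick the unmeasured pixel with the largest estimated ERD.

    measured_mask: boolean array, True where pixels are already measured.
    features: array (n_pixels, n_features) of local descriptors computed
        from the current reconstruction around each pixel.
    regressor: object with .predict(features) -> estimated ERD per pixel,
        trained offline on (feature, reduction-in-distortion) pairs.
    """
    erd = np.asarray(regressor.predict(features), dtype=float)
    erd[measured_mask.ravel()] = -np.inf    # never re-measure a pixel
    return int(np.argmax(erd))              # flat index of next measurement
```

In a full loop, the returned pixel would be measured, appended to the measurement matrix, and the reconstruction and features updated before the next call.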


Journal of Synchrotron Radiation | 2017

Dynamic X-ray diffraction sampling for protein crystal positioning

Nicole M. Scarborough; G. M. Dilshan Godaliyadda; Dong Hye Ye; David J. Kissick; Shijie Zhang; Justin A. Newman; Michael J. Sheedlo; Azhad U. Chowdhury; Robert F. Fischetti; Chittaranjan Das; Gregery T. Buzzard; Charles A. Bouman; Garth J. Simpson

A sparse supervised learning approach for dynamic sampling (SLADS) is described for dose reduction in diffraction-based protein crystal positioning. Crystal centering is typically a prerequisite for macromolecular diffraction at synchrotron facilities, with X-ray diffraction mapping growing in popularity as a mechanism for localization. In X-ray raster scanning, diffraction is used to identify crystal positions based on the detection of Bragg-like peaks in the scattering patterns; however, this additional X-ray exposure may result in detectable damage to the crystal prior to data collection. Dynamic sampling, in which preceding measurements inform the next most information-rich location to probe for image reconstruction, significantly reduced the X-ray dose experienced by protein crystals during positioning by diffraction raster scanning. The SLADS algorithm implemented herein is designed for single-pixel measurements: in each step, it selects the pixel that, when measured, maximizes the expected reduction in distortion given the previous measurements. Ground-truth diffraction data were obtained for a 5 µm-diameter beam, and SLADS reconstructed the image by sampling 31% of the total volume and only 9% of the interior of the crystal, greatly reducing the X-ray dosage on the crystal. Using in situ two-photon-excited fluorescence microscopy measurements as a surrogate for diffraction imaging with a 1 µm-diameter beam, the SLADS algorithm enabled image reconstruction from a 7% sampling of the total volume and 12% sampling of the interior of the crystal. When implemented at the beamline at Argonne National Laboratory, without ground-truth images, an acceptable reconstruction was obtained with 3% of the image sampled and approximately 5% of the crystal.
The incorporation of SLADS into X-ray diffraction acquisitions has the potential to significantly minimize the impact of X-ray exposure on the crystal by limiting the dose and area exposed for image reconstruction and crystal positioning using data collection hardware present in most macromolecular crystallography end-stations.


International Conference on Image Processing | 2015

Joint metal artifact reduction and segmentation of CT images using dictionary-based image prior and continuous-relaxed Potts model

Pengchong Jin; Dong Hye Ye; Charles A. Bouman

Segmenting interesting objects from CT images has a wide range of applications. However, to achieve good results, it is often necessary to apply metal artifact reduction to raw CT images before segmentation. While there has been a great deal of research on metal artifact reduction and segmentation as individual tasks, there have been very few attempts to solve the two problems jointly. We present a novel approach to segmenting raw CT images with metal artifacts, without access to the raw CT data. Given an approximate metal artifact mask, the problem is formulated as a joint optimization over the restored image and the segmentation label, and the cost function includes a dictionary-based image prior to regularize the restored image and a continuous-relaxed Potts model for multi-class segmentation. An effective alternating method is used to solve the resulting optimization problem. The algorithm is applied to both simulated and real datasets, and results show that it is effective in reducing metal artifacts and generating better segmentations simultaneously.
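The alternating structure of such a joint formulation can be sketched in a toy form: alternate between a label step and an image-restoration step until both stabilize. The sketch below drops the spatial Potts coupling and substitutes a simple pull toward class means for the dictionary prior, so it illustrates only the alternation pattern, not the paper's actual cost function; all names and parameters are hypothetical.

```python
import numpy as np

def joint_restore_segment(y, class_means, beta=0.5, n_outer=10):
    """Toy alternating scheme: restore an image and segment it jointly.

    Alternates (a) segmentation by nearest class mean and (b) restoration
    that blends each pixel with its class mean (a stand-in for the
    dictionary prior). The spatial Potts term is omitted for brevity.
    """
    x = np.array(y, dtype=float)
    mu = np.asarray(class_means, dtype=float)
    for _ in range(n_outer):
        # (a) Label step: assign each pixel to the closest class mean.
        labels = np.argmin((x[..., None] - mu) ** 2, axis=-1)
        # (b) Image step: quadratic data term plus quadratic pull to the mean.
        x = (y + beta * mu[labels]) / (1.0 + beta)
    return x, labels
```

Each sub-problem here has a closed-form solution, which is what makes alternating minimization attractive when the full joint problem does not.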


Intelligent Robots and Systems | 2016

Multi-target detection and tracking from a single camera in Unmanned Aerial Vehicles (UAVs)

Jing Li; Dong Hye Ye; Timothy H. Chung; Mathias Kölsch; Juan P. Wachs; Charles A. Bouman

Despite recent flight control regulations, Unmanned Aerial Vehicles (UAVs) are gaining popularity in civilian and military applications, as well as for personal use. This emerging interest is pushing the development of effective collision avoidance systems, which play a critical role in UAV operations, especially in a crowded airspace. Because of the cost and weight limitations associated with UAV payloads, camera-based technologies are the de facto choice for collision avoidance navigation systems. This requires multi-target detection and tracking algorithms that can run efficiently on board from video. While there has been a great deal of research on object detection and tracking from a stationary camera, few have attempted to detect and track small UAVs from a moving camera. In this paper, we present a new approach to detect and track UAVs from a single camera mounted on a different UAV. We first estimate background motion via a perspective transformation model and then identify distinctive points in the background-subtracted image. We find spatio-temporal traits of each moving object through optical flow matching and then classify those candidate targets based on their motion patterns relative to the background. Performance is boosted through Kalman filter tracking, which enforces temporal consistency among the candidate detections. The algorithm was validated on video datasets taken from a UAV. Results show that our algorithm can effectively detect and track small UAVs with limited computing resources.
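The Kalman filter tracking stage mentioned above can be illustrated with a minimal constant-velocity filter on 2-D detections. This is a generic textbook filter, not the authors' implementation; the state model, noise parameters, and class name are all illustrative assumptions.

```python
import numpy as np

class ConstantVelocityKalman:
    """Minimal 2-D constant-velocity Kalman filter for smoothing detections.

    State [x, y, vx, vy]; only positions are observed. Predict-update
    cycles enforce temporal consistency among per-frame detections.
    """
    def __init__(self, x0, y0, q=1e-2, r=1.0):
        self.x = np.array([x0, y0, 0.0, 0.0])
        self.P = np.eye(4)
        self.F = np.array([[1, 0, 1, 0], [0, 1, 0, 1],
                           [0, 0, 1, 0], [0, 0, 0, 1]], dtype=float)
        self.H = np.array([[1, 0, 0, 0], [0, 1, 0, 0]], dtype=float)
        self.Q = q * np.eye(4)   # process noise
        self.R = r * np.eye(2)   # measurement noise

    def step(self, z):
        # Predict.
        self.x = self.F @ self.x
        self.P = self.F @ self.P @ self.F.T + self.Q
        # Update with measurement z = (x, y).
        S = self.H @ self.P @ self.H.T + self.R
        K = self.P @ self.H.T @ np.linalg.inv(S)
        self.x = self.x + K @ (np.asarray(z, dtype=float) - self.H @ self.x)
        self.P = (np.eye(4) - K @ self.H) @ self.P
        return self.x[:2]
```

Feeding it one detection per frame yields a smoothed trajectory; in a multi-target setting, one such filter would be maintained per candidate track.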


Analytical Chemistry | 2018

Dynamic Sparse Sampling for Confocal Raman Microscopy

Shijie Zhang; Zhengtian Song; G. M. Dilshan Godaliyadda; Dong Hye Ye; Azhad U. Chowdhury; Atanu Sengupta; Gregery T. Buzzard; Charles A. Bouman; Garth J. Simpson

The total number of data points required for image generation in Raman microscopy was greatly reduced using sparse sampling strategies, in which the preceding set of measurements informed the next most information-rich sampling location. Using this approach, chemical images of pharmaceutical materials were obtained with >99% accuracy from 15.8% sampling, representing an ∼6-fold reduction in measurement time relative to full field of view rastering with comparable image quality. This supervised learning approach to dynamic sampling (SLADS) has the distinct advantage of being directly compatible with standard confocal Raman instrumentation. Furthermore, SLADS is not limited to Raman imaging, potentially providing time-savings in image reconstruction whenever the single-pixel measurement time is the limiting factor in image generation.


Electronic Imaging | 2017

A Model Based Neuron Detection Approach using Sparse Location Priors.

Soumendu Majee; Dong Hye Ye; Gregery T. Buzzard; Charles A. Bouman

In order to accurately monitor neural activity in a living mouse brain, it is necessary to image each neuron at a high frame rate. Newly developed genetically encoded calcium indicators like GCaMP6 have fast kinetic responses and can be used to target specific cell types for long durations. This enables high-frame-rate neural activity imaging via fluorescence microscopy. In fluorescence microscopy, a laser scans the whole volume, and the imaging time is proportional to the volume of the brain scanned. Scanning the whole brain volume is time consuming and fails to fully exploit the fast kinetic response of new calcium indicators. One way to increase the frame rate is to image only the sparse set of voxels containing the neurons. However, doing so requires accurately detecting and localizing the position of each neuron during data acquisition. In this paper, we present a novel model-based neuron detection algorithm using sparse location priors. We formulate neuron detection as an image reconstruction problem in which we reconstruct an image that encodes the locations of the neuron centers. We use a sparsity-based prior model, since the neuron centers are sparsely distributed in the 3D volume. The information about the shape of the neurons is encoded in the forward model using the impulse response of a filter and is estimated from training data. Our method is robust to illumination variance and noise in the image. Furthermore, the cost function to minimize in our formulation is convex and hence does not depend on a good initialization. We test our method on GCaMP6 fluorescence neuron images and observe better performance than widely used methods.
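The convex reconstruction described above, a quadratic data term with a filter-based forward model plus a sparsity prior, can be sketched in 1-D with ISTA (iterative shrinkage-thresholding). This is a generic sparse-deconvolution sketch under a symmetric point-spread function, not the paper's 3-D formulation; the function name and parameters are illustrative.

```python
import numpy as np

def ista_sparse_centers(y, psf, lam=0.1, step=None, n_iter=200):
    """Recover a sparse center map from a blurred 1-D fluorescence trace.

    Solves min_x 0.5*||h * x - y||^2 + lam*||x||_1 by ISTA, where h is the
    neuron shape (impulse response) and x encodes center locations. The
    problem is convex, so the result does not depend on initialization.
    """
    h = np.asarray(psf, dtype=float)

    def conv(v):
        return np.convolve(v, h, mode="same")

    # Step size from a Lipschitz bound: ||h||_1^2 bounds the squared
    # operator norm of the convolution (Young's inequality).
    if step is None:
        step = 1.0 / (np.sum(np.abs(h)) ** 2)
    x = np.zeros_like(np.asarray(y, dtype=float))
    for _ in range(n_iter):
        grad = conv(conv(x) - y)   # adjoint = correlation; symmetric psf assumed
        z = x - step * grad
        x = np.sign(z) * np.maximum(np.abs(z) - step * lam, 0.0)  # soft threshold
    return x
```

A spike convolved with the neuron-shape filter is recovered as a sparse peak at the true center, which is exactly the behaviour the location-encoding image is meant to have.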

Collaboration


Dive into Dong Hye Ye's collaborations.

Top Co-Authors

Atanu Sengupta

Dr. Reddy's Laboratories

Ken D. Sauer

University of Notre Dame
