Publication


Featured research published by Won-Ki Jeong.


Shape Modeling International | 2003

Using growing cell structures for surface reconstruction

Ioannis P. Ivrissimtzis; Won-Ki Jeong; Hans-Peter Seidel

We study the use of neural network algorithms in surface reconstruction from an unorganized point cloud and in meshing of an implicit surface. We found that for such applications, the most suitable type of neural network is a modified version of the growing cell structure we propose here. The algorithm works by randomly sampling a target space, usually a point cloud or an implicit surface, and adjusting the neural network accordingly. The adjustment includes the connectivity of the network. In several experiments we found that the algorithm gives satisfactory results in some challenging situations involving sharp features and concavities. Another attractive feature of the algorithm is that its speed is virtually independent of the size of the input data, making it particularly suitable for the reconstruction of a surface from a very large point set.
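As a rough illustration, the competitive-learning core of such a growing network can be sketched as follows. This is a toy Python sketch under stated assumptions: the node counts, learning rates, growth schedule, and the fixed ring connectivity are illustrative stand-ins, since the paper's method also adapts mesh connectivity.

```python
import random

def growing_cells(points, n_nodes=50, steps=5000,
                  eps_b=0.1, eps_n=0.01, seed=0):
    """Toy sketch of the competitive-learning core of a growing cell
    structure: repeatedly sample the target point cloud and pull the
    best-matching node (and, here, its two ring neighbors) toward the
    sample.  The paper's version also adapts mesh connectivity, which
    this sketch omits."""
    rng = random.Random(seed)
    nodes = [list(rng.choice(points)) for _ in range(3)]   # start tiny
    error = [0.0] * len(nodes)
    for step in range(steps):
        p = rng.choice(points)
        # winner: node closest to the sample
        d = [sum((a - b) ** 2 for a, b in zip(n, p)) for n in nodes]
        w = d.index(min(d))
        error[w] += d[w]
        # move the winner strongly, its ring neighbors weakly
        for idx, eps in ((w, eps_b),
                         ((w - 1) % len(nodes), eps_n),
                         ((w + 1) % len(nodes), eps_n)):
            nodes[idx] = [a + eps * (b - a) for a, b in zip(nodes[idx], p)]
        # periodically grow: insert a node near the one with largest error
        if len(nodes) < n_nodes and (step + 1) % 100 == 0:
            q = error.index(max(error))
            r = (q + 1) % len(nodes)
            mid = [(a + b) / 2 for a, b in zip(nodes[q], nodes[r])]
            nodes.insert(r, mid)
            error.insert(r, 0.0)
            error[q] *= 0.5
    return nodes
```

Because every update is a convex combination of existing positions and sample points, the network stays inside the convex hull of the input, which is one reason this class of algorithms behaves robustly on large point sets.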


IEEE Computer Graphics and Applications | 2010

Ssecrett and NeuroTrace: Interactive Visualization and Analysis Tools for Large-Scale Neuroscience Data Sets

Won-Ki Jeong; Johanna Beyer; Markus Hadwiger; Rusty Blue; Amelio Vázquez-Reina; R. Clay Reid; Jeff W. Lichtman; Hanspeter Pfister

Data sets imaged with modern electron microscopes can range from tens of terabytes to about one petabyte. Two new tools, Ssecrett and NeuroTrace, support interactive exploration and analysis of large-scale optical- and electron-microscopy images to help scientists reconstruct complex neural circuits of the mammalian nervous system.


IEEE Transactions on Visualization and Computer Graphics | 2007

Interactive Visualization of Volumetric White Matter Connectivity in DT-MRI Using a Parallel-Hardware Hamilton-Jacobi Solver

Won-Ki Jeong; P.T. Fletcher; Ran Tao; Ross T. Whitaker

In this paper we present a method to compute and visualize volumetric white matter connectivity in diffusion tensor magnetic resonance imaging (DT-MRI) using a Hamilton-Jacobi (H-J) solver on the GPU (graphics processing unit). Paths through the volume are assigned costs that are lower if they are consistent with the preferred diffusion directions. The proposed method finds a set of voxels in the DTI volume that contain paths between two regions whose costs are within a threshold of the optimal path. The result is a volumetric optimal path analysis, which is driven by clinical and scientific questions relating to the connectivity between various known anatomical regions of the brain. To solve the minimal path problem quickly, we introduce a novel numerical algorithm for solving H-J equations, which we call the fast iterative method (FIM). This algorithm is well-adapted to parallel architectures, and we present a GPU-based implementation, which runs roughly 50-100 times faster than traditional CPU-based solvers for anisotropic H-J equations. The proposed system allows users to freely change the endpoints of interesting pathways and to visualize the optimal volumetric path between them at an interactive rate. We demonstrate the proposed method on some synthetic and real DT-MRI datasets and compare the performance with existing methods.
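The fast iterative method lends itself to a compact sketch: keep an active list of grid nodes, relax each with an upwind update, and retire converged nodes while activating any neighbor they could still improve. Below is a simplified CPU sketch for the 2D isotropic eikonal equation; the paper solves the anisotropic 3D case on the GPU, so the grid, speed function, and tolerance here are illustrative assumptions.

```python
import math

def fim_eikonal(speed, sources, h=1.0, tol=1e-6):
    """Fast iterative method for the 2D isotropic eikonal equation
    |grad T| = 1/F on a regular grid (simplified CPU sketch; the paper
    targets anisotropic 3D Hamilton-Jacobi equations on the GPU)."""
    ny, nx = len(speed), len(speed[0])
    INF = float("inf")
    T = [[INF] * nx for _ in range(ny)]
    for (i, j) in sources:
        T[i][j] = 0.0

    def solve(i, j):
        # Godunov upwind update from the smaller neighbor in each axis
        a = min(T[i - 1][j] if i > 0 else INF, T[i + 1][j] if i < ny - 1 else INF)
        b = min(T[i][j - 1] if j > 0 else INF, T[i][j + 1] if j < nx - 1 else INF)
        f = h / speed[i][j]
        if abs(a - b) >= f:
            return min(a, b) + f
        return 0.5 * (a + b + math.sqrt(2 * f * f - (a - b) ** 2))

    # seed the active list with the neighbors of the source nodes
    active = {(i + di, j + dj) for (i, j) in sources
              for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1))
              if 0 <= i + di < ny and 0 <= j + dj < nx}
    while active:
        nxt = set()
        for (i, j) in active:
            t = solve(i, j)
            if T[i][j] - t > tol:      # still improving: keep in active list
                T[i][j] = t
                nxt.add((i, j))
            else:                      # converged: wake neighbors it may improve
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    ni, nj = i + di, j + dj
                    if 0 <= ni < ny and 0 <= nj < nx and solve(ni, nj) < T[ni][nj] - tol:
                        nxt.add((ni, nj))
        active = nxt
    return T
```

The key design choice, mirrored from the paper, is that all nodes in the active list can be updated independently in one pass, which is what makes the method map well onto parallel hardware.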


IEEE Transactions on Visualization and Computer Graphics | 2012

Interactive Volume Exploration of Petascale Microscopy Data Streams Using a Visualization-Driven Virtual Memory Approach

Markus Hadwiger; Johanna Beyer; Won-Ki Jeong; Hanspeter Pfister

This paper presents the first volume visualization system that scales to petascale volumes imaged as a continuous stream of high-resolution electron microscopy images. Our architecture scales to dense, anisotropic petascale volumes because it: (1) decouples construction of the 3D multi-resolution representation required for visualization from data acquisition, and (2) decouples sample access time during ray-casting from the size of the multi-resolution hierarchy. Our system is designed around a scalable multi-resolution virtual memory architecture that handles missing data naturally, does not pre-compute any 3D multi-resolution representation such as an octree, and can accept a constant stream of 2D image tiles from the microscopes. A novelty of our system design is that it is visualization-driven: we restrict most computations to the visible volume data. Leveraging the virtual memory architecture, missing data are detected during volume ray-casting as cache misses, which are propagated backwards for on-demand out-of-core processing. 3D blocks of volume data are only constructed from 2D microscope image tiles when they have actually been accessed during ray-casting. We extensively evaluate our system design choices with respect to scalability and performance, compare to previous best-of-breed systems, and illustrate the effectiveness of our system for real microscopy data from neuroscience.
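The cache-miss-driven flow can be illustrated with a minimal sketch, assuming a hypothetical block-construction callback and a simple LRU page table; the class and method names below are illustrative, not the paper's actual API.

```python
from collections import OrderedDict

class VirtualVolume:
    """Minimal sketch of a visualization-driven virtual-memory scheme:
    ray-casting samples go through a page table, and a miss enqueues
    on-demand construction of a 3D block from 2D source tiles instead
    of precomputing the whole multi-resolution hierarchy."""
    def __init__(self, build_block, capacity=256, block=32):
        self.build_block = build_block   # callback: block coord -> voxel data
        self.block = block
        self.cache = OrderedDict()       # LRU page table: block coord -> data
        self.capacity = capacity
        self.miss_queue = []             # blocks to construct out-of-core

    def sample(self, x, y, z):
        b = (x // self.block, y // self.block, z // self.block)
        if b in self.cache:
            self.cache.move_to_end(b)    # LRU touch
            return self.cache[b]
        self.miss_queue.append(b)        # cache miss: report it, render a placeholder
        return None

    def process_misses(self):
        """Out-of-core pass: construct only the blocks rays actually touched."""
        for b in self.miss_queue:
            if b not in self.cache:
                self.cache[b] = self.build_block(b)
                if len(self.cache) > self.capacity:
                    self.cache.popitem(last=False)   # evict least recently used
        self.miss_queue.clear()
```

Restricting block construction to reported misses is what decouples working-set size from total volume size, which is the scalability argument the abstract makes.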


Nature | 2017

Whole-brain serial-section electron microscopy in larval zebrafish

David G. C. Hildebrand; Marcelo Cicconet; Russel Miguel Torres; Woohyuk Choi; Tran Minh Quan; Jungmin Moon; Arthur W. Wetzel; Andrew Champion; Brett J. Graham; Owen Randlett; George Scott Plummer; Ruben Portugues; Isaac H. Bianco; Stephan Saalfeld; Alexander D. Baden; Kunal Lillaney; Randal C. Burns; Joshua T. Vogelstein; Alexander F. Schier; Wei-Chung Allen Lee; Won-Ki Jeong; Jeff W. Lichtman; Florian Engert

High-resolution serial-section electron microscopy (ssEM) makes it possible to investigate the dense meshwork of axons, dendrites, and synapses that form neuronal circuits. However, the imaging scale required to comprehensively reconstruct these structures is more than ten orders of magnitude smaller than the spatial extents occupied by networks of interconnected neurons, some of which span nearly the entire brain. Difficulties in generating and handling data for large volumes at nanoscale resolution have thus restricted vertebrate studies to fragments of circuits. These efforts were recently transformed by advances in computing, sample handling, and imaging techniques, but high-resolution examination of entire brains remains a challenge. Here, we present ssEM data for the complete brain of a larval zebrafish (Danio rerio) at 5.5 days post-fertilization. Our approach utilizes multiple rounds of targeted imaging at different scales to reduce acquisition time and data management requirements. The resulting dataset can be analysed to reconstruct neuronal processes, permitting us to survey all myelinated axons (the projectome). These reconstructions enable precise investigations of neuronal morphology, which reveal remarkable bilateral symmetry in myelinated reticulospinal and lateral line afferent axons. We further set the stage for whole-brain structure–function comparisons by co-registering functional reference atlases and in vivo two-photon fluorescence microscopy data from the same specimen. All obtained images and reconstructions are provided as an open-access resource.


Volume Graphics | 2006

Interactive 3D seismic fault detection on the Graphics Hardware

Won-Ki Jeong; Ross T. Whitaker; Mark W. Dobin

This paper presents a 3D, volumetric, seismic fault detection system that relies on a novel set of nonlinear filters combined with a GPU (Graphics Processing Unit) implementation, which makes interactive nonlinear, volumetric processing feasible. The method uses a 3D structure tensor to robustly measure seismic orientations. These tensors guide an anisotropic diffusion, which reduces noise in the data while enhancing the fault discontinuity and coherency along seismic strata. A fault-likelihood volume is computed using a directional variance measure, and 3D fault voxels are then extracted through a non-maximal-suppression method. We also show how the proposed algorithms are efficiently implemented with a GPU programming model, and compare this to a CPU implementation to show the benefits of using the GPU for this computationally demanding problem.
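The structure-tensor step can be sketched as follows, in 2D for brevity and with simple box smoothing standing in for the usual Gaussian integration window; the paper computes full 3D tensors on the GPU, so everything here is an illustrative simplification.

```python
import numpy as np

def structure_tensor_2d(img, sigma=2.0):
    """Sketch of the structure-tensor step (2D here for brevity; the
    paper uses 3D volumes): outer products of image gradients, averaged
    over a neighborhood, whose eigenvectors estimate local orientation."""
    gy, gx = np.gradient(img.astype(float))
    k = int(sigma * 2) | 1              # odd window size derived from sigma

    def smooth(a):
        # box averaging as a stand-in for the Gaussian integration window
        pad = k // 2
        ap = np.pad(a, pad, mode="edge")
        out = np.zeros_like(a)
        for dy in range(k):
            for dx in range(k):
                out += ap[dy:dy + a.shape[0], dx:dx + a.shape[1]]
        return out / (k * k)

    Jxx, Jxy, Jyy = smooth(gx * gx), smooth(gx * gy), smooth(gy * gy)
    # orientation of the dominant eigenvector of [[Jxx, Jxy], [Jxy, Jyy]]
    theta = 0.5 * np.arctan2(2 * Jxy, Jxx - Jyy)
    return Jxx, Jxy, Jyy, theta
```

In the paper's pipeline the resulting orientation field steers the anisotropic diffusion so smoothing follows seismic strata rather than crossing faults.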


IEEE Transactions on Visualization and Computer Graphics | 2010

Interactive Histology of Large-Scale Biomedical Image Stacks

Won-Ki Jeong; Jens Schneider; Stephen G. Turney; Beverly E. Faulkner-Jones; D. Meyer; Rüdiger Westermann; R. Clay Reid; Jeff W. Lichtman; Hanspeter Pfister

Histology is the study of the structure of biological tissue using microscopy techniques. As digital imaging technology advances, high resolution microscopy of large tissue volumes is becoming feasible; however, new interactive tools are needed to explore and analyze the enormous datasets. In this paper we present a visualization framework that specifically targets interactive examination of arbitrarily large image stacks. Our framework is built upon two core techniques: display-aware processing and GPU-accelerated texture compression. With display-aware processing, only the currently visible image tiles are fetched and aligned on-the-fly, reducing memory bandwidth and minimizing the need for time-consuming global pre-processing. Our novel texture compression scheme for GPUs is tailored for quick browsing of image stacks. We evaluate the usability of our viewer for two histology applications: digital pathology and visualization of neural structure at nanoscale-resolution in serial electron micrographs.


IEEE Computer Graphics and Applications | 2013

Exploring the Connectome: Petascale Volume Visualization of Microscopy Data Streams

Johanna Beyer; Markus Hadwiger; Ali K. Al-Awami; Won-Ki Jeong; Narayanan Kasthuri; Jeff W. Lichtman; Hanspeter Pfister

Recent advances in high-resolution microscopy let neuroscientists acquire neural-tissue volume data of extremely large sizes. However, the tremendous resolution and the high complexity of neural structures present big challenges to storage, processing, and visualization at interactive rates. A proposed system provides interactive exploration of petascale (petavoxel) volumes resulting from high-throughput electron microscopy data streams. The system can concurrently handle multiple volumes and can support the simultaneous visualization of high-resolution voxel segmentation data. Its visualization-driven design restricts most computations to a small subset of the data. It employs a multiresolution virtual-memory architecture for better scalability than previous approaches and for handling incomplete data. Researchers have employed it for a 1-teravoxel mouse cortex volume, of which several hundred axons and dendrites as well as synapses have been segmented and labeled.


Medical Image Computing and Computer-Assisted Intervention | 2011

Neural process reconstruction from sparse user scribbles

Mike Roberts; Won-Ki Jeong; Amelio Vázquez-Reina; Markus Unger; Horst Bischof; Jeff W. Lichtman; Hanspeter Pfister

We present a novel semi-automatic method for segmenting neural processes in large, highly anisotropic EM (electron microscopy) image stacks. Our method takes advantage of sparse scribble annotations provided by the user to guide a 3D variational segmentation model, thereby allowing our method to globally optimally enforce 3D geometric constraints on the segmentation. Moreover, we leverage a novel algorithm for propagating segmentation constraints through the image stack via optimal volumetric pathways, thereby allowing our method to compute highly accurate 3D segmentations from very sparse user input. We evaluate our method by reconstructing 16 neural processes in a 1024 x 1024 x 50 nanometer-scale EM image stack of a mouse hippocampus. We demonstrate that, on average, our method is 68% more accurate than previous state-of-the-art semi-automatic methods.


International Symposium on 3D Data Processing, Visualization and Transmission | 2004

Neural mesh ensembles

Ioannis P. Ivrissimtzis; Yunjin Lee; Seungyong Lee; Won-Ki Jeong; Hans-Peter Seidel

This work proposes the use of neural network ensembles to boost the performance of a neural network based surface reconstruction algorithm. Ensemble is a very popular and powerful statistical technique based on the idea of averaging several outputs of a probabilistic algorithm. In the context of surface reconstruction, two main problems arise. The first is finding an efficient way to average meshes with different connectivity, and the second is tuning the parameters for surface reconstruction to maximize the performance of the ensemble. We solve the first problem by voxelizing all the meshes on the same regular grid and taking majority vote on each voxel. We tune the parameters experimentally, borrowing ideas from weak learning methods.
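The voxel-wise majority vote can be sketched directly; the set-of-occupied-voxels representation and the optional threshold parameter below are illustrative assumptions, not the paper's data structures.

```python
from collections import Counter

def ensemble_vote(voxelized, threshold=None):
    """Sketch of the ensemble step: each reconstruction run is voxelized
    on the same regular grid (represented here as a set of occupied
    (i, j, k) cells), and a voxel is kept when at least `threshold`
    runs (default: a strict majority) mark it occupied."""
    votes = Counter()
    for occupied in voxelized:
        votes.update(occupied)
    need = threshold if threshold is not None else len(voxelized) // 2 + 1
    return {v for v, count in votes.items() if count >= need}
```

Voting on a shared grid sidesteps the hard problem the abstract names, averaging meshes with different connectivity, because the grid gives every run a common discrete domain.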

Collaboration


Dive into Won-Ki Jeong's collaborations.

Top Co-Authors

Tran Minh Quan (Ulsan National Institute of Science and Technology)
Markus Hadwiger (King Abdullah University of Science and Technology)
Woohyuk Choi (Ulsan National Institute of Science and Technology)
Seungyong Lee (Pohang University of Science and Technology)
Sumin Hong (Ulsan National Institute of Science and Technology)