Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Tran Minh Quan is active.

Publication


Featured research published by Tran Minh Quan.


Nature | 2017

Whole-brain serial-section electron microscopy in larval zebrafish

David G. C. Hildebrand; Marcelo Cicconet; Russel Miguel Torres; Woohyuk Choi; Tran Minh Quan; Jungmin Moon; Arthur W. Wetzel; Andrew Champion; Brett J. Graham; Owen Randlett; George Scott Plummer; Ruben Portugues; Isaac H. Bianco; Stephan Saalfeld; Alexander D. Baden; Kunal Lillaney; Randal C. Burns; Joshua T. Vogelstein; Alexander F. Schier; Wei-Chung Allen Lee; Won-Ki Jeong; Jeff W. Lichtman; Florian Engert

High-resolution serial-section electron microscopy (ssEM) makes it possible to investigate the dense meshwork of axons, dendrites, and synapses that form neuronal circuits. However, the imaging scale required to comprehensively reconstruct these structures is more than ten orders of magnitude smaller than the spatial extents occupied by networks of interconnected neurons, some of which span nearly the entire brain. Difficulties in generating and handling data for large volumes at nanoscale resolution have thus restricted vertebrate studies to fragments of circuits. These efforts were recently transformed by advances in computing, sample handling, and imaging techniques, but high-resolution examination of entire brains remains a challenge. Here, we present ssEM data for the complete brain of a larval zebrafish (Danio rerio) at 5.5 days post-fertilization. Our approach utilizes multiple rounds of targeted imaging at different scales to reduce acquisition time and data management requirements. The resulting dataset can be analysed to reconstruct neuronal processes, permitting us to survey all myelinated axons (the projectome). These reconstructions enable precise investigations of neuronal morphology, which reveal remarkable bilateral symmetry in myelinated reticulospinal and lateral line afferent axons. We further set the stage for whole-brain structure–function comparisons by co-registering functional reference atlases and in vivo two-photon fluorescence microscopy data from the same specimen. All obtained images and reconstructions are provided as an open-access resource.


IEEE Transactions on Visualization and Computer Graphics | 2014

Vivaldi: A Domain-Specific Language for Volume Processing and Visualization on Distributed Heterogeneous Systems.

Hyungsuk Choi; Woohyuk Choi; Tran Minh Quan; David G. C. Hildebrand; Hanspeter Pfister; Won-Ki Jeong

As the size of image data from microscopes and telescopes increases, the need for high-throughput processing and visualization of large volumetric data has become more pressing. At the same time, many-core processors and GPU accelerators are commonplace, making high-performance distributed heterogeneous computing systems affordable. However, effectively utilizing GPU clusters is difficult for novice programmers, and even experienced programmers often fail to fully leverage the computing power of new parallel architectures due to their steep learning curve and programming complexity. In this paper, we propose Vivaldi, a new domain-specific language for volume processing and visualization on distributed heterogeneous computing systems. Vivaldi's Python-like grammar and parallel processing abstractions provide flexible programming tools for non-experts to easily write high-performance parallel computing code. Vivaldi provides commonly used functions and numerical operators for customized visualization and high-throughput image processing applications. We demonstrate the performance and usability of Vivaldi on several examples ranging from volume rendering to image segmentation.
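As a rough illustration of the kind of kernel such a DSL abstracts away from the user, the Python sketch below implements a minimal orthographic front-to-back volume-compositing loop. It is not Vivaldi code; the function name, the toy transfer function, and the opacity constant are purely illustrative assumptions.

```python
import numpy as np

def render_orthographic(volume, step=1):
    """Minimal front-to-back alpha compositing along the z axis of a 3D volume.
    A toy stand-in for the kind of kernel a volume-processing DSL abstracts."""
    vol = (volume - volume.min()) / (np.ptp(volume) + 1e-8)   # normalize to [0, 1]
    depth, height, width = vol.shape
    rgb = np.zeros((height, width, 3))
    acc_alpha = np.zeros((height, width))
    for z in range(0, depth, step):
        sample = vol[z]
        color = np.repeat(sample[..., None], 3, axis=-1)   # toy grayscale transfer function
        alpha = 0.05 * sample                               # small per-sample opacity
        weight = (1.0 - acc_alpha) * alpha
        rgb += weight[..., None] * color
        acc_alpha += weight
        if np.all(acc_alpha > 0.99):                        # early ray termination
            break
    return rgb

# Example: render a random 64x64x64 volume to a 64x64 image
image = render_orthographic(np.random.default_rng(0).random((64, 64, 64)))
```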


International Symposium on Biomedical Imaging | 2016

Compressed sensing reconstruction of dynamic contrast enhanced MRI using GPU-accelerated convolutional sparse coding

Tran Minh Quan; Won-Ki Jeong

In this paper, we propose a data-driven image reconstruction algorithm that specifically aims to reconstruct undersampled dynamic contrast-enhanced (DCE) MRI data. The proposed method is based on the convolutional sparse coding algorithm, which leverages the Fourier convolution theorem to accelerate the process of learning a collection of filters and iteratively refines the reconstruction result using the sparse codes found during the reconstruction process. We introduce a novel energy formulation based on learning over time-varying DCE-MRI images, and propose an extension of the Alternating Direction Method of Multipliers (ADMM) to solve the constrained optimization problem efficiently on the GPU. We assess the performance of the proposed method by comparing it with a state-of-the-art dictionary-based compressed sensing (CS) MRI method.
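The abstract does not spell out the exact formulation, so the following is only a minimal NumPy sketch of two ingredients it names: reconstructing an image as a sum of filter-code convolutions evaluated via the Fourier convolution theorem, and the soft-thresholding (l1 proximal) step that appears in ADMM-style sparse coding solvers. The filter sizes, threshold, and variable names are assumptions.

```python
import numpy as np

def soft_threshold(x, lam):
    """Proximal operator of the l1 norm, the shrinkage step in ADMM-style solvers."""
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

def csc_reconstruct(filters, codes):
    """Sum of filter * code convolutions evaluated in the Fourier domain
    (convolution theorem), the image model of convolutional sparse coding."""
    height, width = codes.shape[1:]
    acc = np.zeros((height, width), dtype=complex)
    for d, z in zip(filters, codes):
        D = np.fft.fft2(d, s=(height, width))   # zero-pad each small filter to image size
        acc += D * np.fft.fft2(z)               # pointwise product == circular convolution
    return np.real(np.fft.ifft2(acc))

# Toy example: 8 random 11x11 filters and sparsified random codes on a 128x128 grid
rng = np.random.default_rng(0)
filters = rng.standard_normal((8, 11, 11))
codes = soft_threshold(rng.standard_normal((8, 128, 128)), lam=1.5)
image = csc_reconstruct(filters, codes)
```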


IEEE Transactions on Medical Imaging | 2018

Compressed Sensing MRI Reconstruction Using a Generative Adversarial Network With a Cyclic Loss

Tran Minh Quan; Thanh Nguyen-Duc; Won-Ki Jeong

Compressed Sensing MRI (CS-MRI) has provided theoretical foundations upon which the time-consuming MRI acquisition process can be accelerated. However, it primarily relies on iterative numerical solvers, which still hinders its adoption in time-critical applications. In addition, recent advances in deep neural networks have shown their potential in computer vision and image processing, but their adoption for MRI reconstruction is still at an early stage. In this paper, we propose a novel deep learning-based generative adversarial model, RefineGAN, for fast and accurate CS-MRI reconstruction. The proposed model is a variant of a fully residual convolutional autoencoder combined with generative adversarial networks (GANs), specifically designed for the CS-MRI formulation; it employs deeper generator and discriminator networks with a cyclic data-consistency loss for faithful interpolation of the given under-sampled k-space data. In addition, our solution leverages a chained network to further enhance the reconstruction quality. RefineGAN is both fast and accurate: reconstruction is extremely rapid, as low as tens of milliseconds for a 256 x 256 image, because it requires only a single pass through a feedforward network, and image quality is superior even at extremely low sampling rates (as low as 10%) due to the data-driven nature of the method. We demonstrate that RefineGAN outperforms state-of-the-art CS-MRI methods by a large margin in terms of both running time and image quality via evaluation on several open-source MRI databases.
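As a rough sketch of the general idea behind a cyclic data-consistency term in CS-MRI (not the exact RefineGAN objective), the NumPy snippet below simulates under-sampled acquisition with a k-space mask and penalizes the mismatch between the reconstruction's re-undersampled k-space and the measured samples. The 10% random mask and the zero-filled baseline are assumptions made for the toy example.

```python
import numpy as np

def undersample(image, mask):
    """Simulate CS-MRI acquisition: transform to k-space and keep only sampled entries."""
    return np.fft.fft2(image) * mask

def data_consistency_loss(recon, measured_kspace, mask):
    """Penalize disagreement between the reconstruction's k-space and the actually
    measured samples -- the image -> k-space -> image 'cycle'."""
    resampled = undersample(recon, mask)
    return np.mean(np.abs(resampled - measured_kspace) ** 2)

# Toy example with a random image and a 10% random sampling mask
rng = np.random.default_rng(0)
x_true = rng.standard_normal((256, 256))
mask = (rng.random((256, 256)) < 0.10).astype(float)
y = undersample(x_true, mask)
x_zero_filled = np.real(np.fft.ifft2(y))        # naive zero-filled reconstruction
print(data_consistency_loss(x_zero_filled, y, mask))
```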


Medical Image Computing and Computer-Assisted Intervention | 2015

Multi-GPU Reconstruction of Dynamic Compressed Sensing MRI

Tran Minh Quan; SoHyun Han; HyungJoon Cho; Won-Ki Jeong

Magnetic resonance imaging (MRI) is a widely used in-vivo imaging technique that is essential to the diagnosis of disease, but its long acquisition time hinders its wide adoption in time-critical applications, such as emergency diagnosis. Recent advances in compressed sensing (CS) research have provided promising theoretical insights to accelerate the MRI acquisition process, but CS reconstruction also poses computational challenges that make MRI less practical. In this paper, we introduce a fast, scalable parallel CS-MRI reconstruction method that runs on graphics processing unit (GPU) cluster systems for dynamic contrast-enhanced (DCE) MRI. We propose a modified Split-Bregman iteration using a variable splitting method for CS-based DCE-MRI. We also propose a parallel GPU Split-Bregman solver that scales well across multiple GPUs to handle large data sizes. We demonstrate the validity of the proposed method on several synthetic and real DCE-MRI datasets and compare it with existing methods.
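For orientation only, here is a heavily simplified single-frame Split-Bregman iteration for TV-regularized CS-MRI in NumPy. The variable splitting d ~ grad(x), the shrinkage d-update, and the Bregman update are the recognizable pieces; the x-update is approximated with a few gradient steps rather than the usual FFT-diagonalized closed-form solve, and the parameters mu and lam are arbitrary. This is a sketch, not the paper's multi-GPU DCE-MRI solver.

```python
import numpy as np

def grad(x):
    """Forward differences in both directions (periodic boundary)."""
    return np.stack([np.roll(x, -1, 0) - x, np.roll(x, -1, 1) - x])

def div(p):
    """Discrete divergence, the negative adjoint of grad."""
    return (p[0] - np.roll(p[0], 1, 0)) + (p[1] - np.roll(p[1], 1, 1))

def shrink(v, t):
    """Isotropic shrinkage used in the Split-Bregman d-update."""
    mag = np.sqrt(np.sum(v ** 2, axis=0)) + 1e-12
    return v * np.maximum(mag - t, 0.0) / mag

def split_bregman_tv(y, mask, mu=1.0, lam=0.1, outer=20, inner=5):
    """Simplified single-frame Split-Bregman iteration for TV-regularized CS-MRI
    with variable splitting d ~ grad(x). Uses unitary FFTs; the x-update is a few
    gradient steps instead of the usual closed-form solve, to keep the sketch short."""
    x = np.real(np.fft.ifft2(y, norm="ortho"))              # zero-filled initial guess
    d = np.zeros((2,) + x.shape)
    b = np.zeros_like(d)
    for _ in range(outer):
        for _ in range(inner):                              # approximate x-update
            resid = mask * (np.fft.fft2(x, norm="ortho") - y)
            g_data = mu * np.real(np.fft.ifft2(resid, norm="ortho"))
            g_split = lam * div(d - grad(x) - b)
            x = x - 0.5 * (g_data + g_split)
        d = shrink(grad(x) + b, 1.0 / lam)                  # d-update (shrinkage)
        b = b + grad(x) - d                                 # Bregman update
    return x

# Toy usage: recover an image from 30% randomly sampled k-space
rng = np.random.default_rng(0)
truth = rng.random((64, 64))
mask = (rng.random((64, 64)) < 0.3).astype(float)
y = mask * np.fft.fft2(truth, norm="ortho")
recon = split_bregman_tv(y, mask)
```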


IEEE Transactions on Parallel and Distributed Systems | 2016

A Fast Discrete Wavelet Transform Using Hybrid Parallelism on GPUs

Tran Minh Quan; Won-Ki Jeong

The wavelet transform has been widely used in many signal and image processing applications. Due to its wide adoption in time-critical applications, such as streaming and real-time signal processing, many acceleration techniques have been developed over the past decade. Recently, the graphics processing unit (GPU) has gained much attention for accelerating computationally intensive problems, and many GPU-based discrete wavelet transform (DWT) implementations have been introduced, but most of them do not fully leverage the potential of the GPU. In this paper, we present state-of-the-art GPU optimization strategies for DWT implementation, such as leveraging shared memory, registers, warp shuffle instructions, and thread- and instruction-level parallelism (TLP, ILP), and then elaborate on our hybrid approach to further boost performance. In addition, we introduce a novel mixed-band memory layout for the Haar DWT, in which a multi-level transform can be carried out in a single fused kernel launch. As a result, unlike recent GPU DWT methods that focus mainly on maximizing ILP, we show that optimal GPU DWT performance is achieved by hybrid parallelism combining TLP and ILP in a mixed-band approach. We demonstrate the performance of the proposed method by comparing it with other CPU and GPU DWT methods.
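The GPU-specific optimizations cannot be shown meaningfully in a short snippet, but the mixed-band idea itself can: the sketch below contrasts a conventional single-level Haar step, which rearranges coefficients into separate approximation and detail halves, with a layout that leaves each coefficient at its original position so the next level runs in place with a doubled stride. This is a schematic NumPy illustration under assumed conventions, not the paper's exact memory layout or kernel.

```python
import numpy as np

def haar_level_standard(x):
    """One 1D Haar level, conventional layout: approximations first, details second.
    This requires rearranging coefficients into separate sub-bands."""
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)
    return np.concatenate([a, d])

def haar_level_mixed_band(x, stride=1):
    """Same transform, but each coefficient stays at the position of the samples it
    came from (approximations at even multiples of stride, details at odd multiples),
    so no coefficient shuffle is needed between levels: the next level simply
    re-runs this function with the stride doubled."""
    y = x.copy()
    even, odd = x[::2 * stride], x[stride::2 * stride]
    y[::2 * stride] = (even + odd) / np.sqrt(2.0)
    y[stride::2 * stride] = (even - odd) / np.sqrt(2.0)
    return y

signal = np.arange(8, dtype=float)
level1 = haar_level_mixed_band(signal)            # details sit at odd indices
level2 = haar_level_mixed_band(level1, stride=2)  # second level runs in place
```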


Medical Image Computing and Computer-Assisted Intervention | 2016

Compressed Sensing Dynamic MRI Reconstruction Using GPU-accelerated 3D Convolutional Sparse Coding

Tran Minh Quan; Won-Ki Jeong

In this paper, we introduce a fast alternating method for reconstructing highly undersampled dynamic MRI data using 3D convolutional sparse coding. The proposed solution leverages the Fourier convolution theorem to accelerate the process of learning a set of 3D filters and iteratively refines the MRI reconstruction based on the resulting sparse codes. In contrast to conventional CS methods, which exploit sparsity by applying universal transforms such as wavelets and total variation, our approach extracts and adapts the temporal information directly from the MRI data using compact shift-invariant 3D filters. We provide a highly parallel algorithm with GPU support for efficient computation; as a result, the reconstruction outperforms CPU implementations of state-of-the-art dictionary-learning-based approaches by up to two orders of magnitude.
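Complementing the 2D sketch given earlier, the snippet below shows what one ISTA-style update of 3D (space-time) sparse codes could look like for a fixed filter bank, again applying the filters through the FFT. The step size, threshold, filter shapes, and the proximal-gradient form of the update are assumptions; the paper's actual solver is a GPU-accelerated alternating scheme.

```python
import numpy as np

def fft_conv3(filt, vol_shape):
    """Zero-pad a small 3D filter to the volume size and return its 3D FFT."""
    return np.fft.fftn(filt, s=vol_shape)

def sparse_code_step(video, filters, codes, step=0.1, lam=0.05):
    """One proximal-gradient (ISTA-style) update of the 3D sparse codes for a fixed
    filter bank: gradient of the reconstruction error, then a soft threshold."""
    shape = video.shape
    F = np.array([fft_conv3(f, shape) for f in filters])          # (K, X, Y, T)
    Z = np.fft.fftn(codes, axes=(1, 2, 3))
    recon_hat = np.sum(F * Z, axis=0)                             # sum of circular convolutions
    resid_hat = recon_hat - np.fft.fftn(video)
    grad = np.real(np.fft.ifftn(np.conj(F) * resid_hat, axes=(1, 2, 3)))
    codes = codes - step * grad
    return np.sign(codes) * np.maximum(np.abs(codes) - step * lam, 0.0)

# Toy example: a 32x32x8 "dynamic" volume and four 5x5x3 space-time filters
rng = np.random.default_rng(0)
video = rng.standard_normal((32, 32, 8))
filters = rng.standard_normal((4, 5, 5, 3))
codes = np.zeros((4, 32, 32, 8))
codes = sparse_code_step(video, filters, codes)
```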


International Conference on Image Processing | 2014

A fast mixed-band lifting wavelet transform on the GPU

Tran Minh Quan; Won-Ki Jeong

The discrete wavelet transform (DWT) has been widely used in many image compression applications, such as JPEG2000 and compressive sensing MRI. Even though the lifting scheme [1] has been widely adopted to accelerate the DWT, only a handful of studies have addressed its efficient implementation on many-core accelerators, such as graphics processing units (GPUs). Moreover, we observe that rearranging the spatial locations of wavelet coefficients at every level of the DWT significantly impairs memory-transaction performance on the GPU. To address these problems, we propose a mixed-band lifting wavelet transform that reduces uncoalesced global memory access on the GPU and maximizes on-chip memory bandwidth by implementing in-place operations using registers. We assess the performance of the proposed method by comparing it with state-of-the-art DWT libraries, and show its usability in a compressive sensing (CS) MRI application.
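To make the lifting structure concrete, here is a minimal NumPy sketch of the forward and inverse LeGall 5/3 lifting steps on a 1D signal; the in-place predict/update pattern is what maps naturally onto GPU registers. The periodic boundary handling and the 5/3 filter choice are assumptions made for brevity, and no GPU-specific code is shown.

```python
import numpy as np

def lifting_53_forward(x):
    """Forward LeGall 5/3 lifting on a 1D signal of even length (periodic border).
    The predict/update steps overwrite values in place, which is what makes the
    lifting scheme attractive for register-level GPU implementations."""
    s, d = x[0::2].copy(), x[1::2].copy()          # split into even / odd samples
    d -= 0.5 * (s + np.roll(s, -1))                # predict: detail coefficients
    s += 0.25 * (d + np.roll(d, 1))                # update: approximation coefficients
    return s, d

def lifting_53_inverse(s, d):
    """Invert by undoing the lifting steps in reverse order."""
    s = s - 0.25 * (d + np.roll(d, 1))
    d = d + 0.5 * (s + np.roll(s, -1))
    x = np.empty(s.size + d.size)
    x[0::2], x[1::2] = s, d
    return x

x = np.random.default_rng(0).standard_normal(16)
s, d = lifting_53_forward(x)
assert np.allclose(lifting_53_inverse(s, d), x)    # perfect reconstruction
```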


IEEE Transactions on Visualization and Computer Graphics | 2018

An Intelligent System Approach for Probabilistic Volume Rendering Using Hierarchical 3D Convolutional Sparse Coding

Tran Minh Quan; Junyoung Choi; Haejin Jeong; Won-Ki Jeong

In this paper, we propose a novel machine learning-based voxel classification method for highly accurate volume rendering. Unlike conventional voxel classification methods that incorporate intensity-based features, the proposed method employs dictionary-based features learned directly from the input data using hierarchical multi-scale 3D convolutional sparse coding, a novel extension of the state-of-the-art learning-based sparse feature representation method. The proposed approach automatically generates high-dimensional feature vectors of up to 75 dimensions, which are then fed into an intelligent system built on a random forest classifier to accurately classify voxels from only a handful of selection scribbles made directly on the input data by the user. We apply a probabilistic transfer function to further customize and refine the rendered result. The proposed method is more intuitive to use and more robust to noise than conventional intensity-based classification methods. We evaluate the proposed method on several synthetic and real-world volume datasets, and demonstrate the method's usability through a user study.
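The classification stage described here can be sketched with scikit-learn: given per-voxel feature vectors (random stand-ins below for the learned sparse-coding features) and a small set of scribble-labelled voxels, a random forest predicts per-voxel class probabilities of the kind a probabilistic transfer function would consume. The feature dimensionality, class count, and all parameters are assumptions for the toy example.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

# Stand-in for per-voxel feature vectors; the paper derives these from
# hierarchical multi-scale 3D convolutional sparse coding (random values here).
n_voxels, n_features = 32 * 32 * 32, 75
features = rng.standard_normal((n_voxels, n_features))

# A handful of user "scribbles": indices of labelled voxels and their class labels.
scribble_idx = rng.choice(n_voxels, size=200, replace=False)
scribble_lbl = rng.integers(0, 3, size=200)        # e.g. background / material A / material B

# Train only on the scribbled voxels, then predict class probabilities for every voxel.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
clf.fit(features[scribble_idx], scribble_lbl)
probabilities = clf.predict_proba(features)        # shape: (n_voxels, n_classes)
```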


arXiv: Computer Vision and Pattern Recognition | 2016

FusionNet: A deep fully residual convolutional neural network for image segmentation in connectomics.

Tran Minh Quan; David G. C. Hildebrand; Won-Ki Jeong

Collaboration


Dive into Tran Minh Quan's collaborations.

Top Co-Authors

Won-Ki Jeong (Ulsan National Institute of Science and Technology)
Gyuhyun Lee (Ulsan National Institute of Science and Technology)
Thanh Nguyen-Duc (Ulsan National Institute of Science and Technology)
Woohyuk Choi (Ulsan National Institute of Science and Technology)
Haejin Jeong (Ulsan National Institute of Science and Technology)
HyungJoon Cho (Ulsan National Institute of Science and Technology)
Hyungsuk Choi (Ulsan National Institute of Science and Technology)
Jungmin Moon (Ulsan National Institute of Science and Technology)
Junyoung Choi (Ulsan National Institute of Science and Technology)