Heewon Kye
Hansung University
Publication
Featured research published by Heewon Kye.
Computers in Biology and Medicine | 2009
Taekhee Lee; Jeongjin Lee; Ho Lee; Heewon Kye; Yeong Gil Shin; Soo Hong Kim
Recent advances in graphics processing units (GPUs) have enabled direct volume rendering at interactive rates. However, although perspective volume rendering of opaque isosurfaces is fast with conventional GPU-based methods, perspective volume rendering of non-opaque volumes, such as translucency rendering, is still slow. In this paper, we propose an efficient GPU-based acceleration technique for fast perspective volume ray casting for translucency rendering in computed tomography (CT) colonography. The empty-space searching step is separated from the shading and compositing steps, and they are divided into separate processing passes on the GPU. Using this multi-pass acceleration, empty space leaping is performed exactly at the voxel level rather than at the block level, so its efficiency is maximized for colon data sets, which contain many curved or narrow regions. In addition, the numbers of shading and compositing steps are fixed, and additional empty space leaping between colon walls is performed to further increase computational efficiency near the haustral folds. Experiments were performed to compare the efficiency of the proposed scheme with the conventional GPU-based method, which had been known to be the fastest algorithm. The experimental results showed that the rendering speed of our method was 7.72 fps for translucency rendering of a 1024x1024 colonoscopy image, about 3.54 times faster than the conventional method. Since our method performs fully optimized empty space leaping for any colon inner shape, its frame-rate variation was about two times smaller than that of the conventional method, guaranteeing smooth navigation. The proposed method could be successfully applied to help diagnose colon cancer using translucency rendering in virtual colonoscopy.
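To make the idea of voxel-level empty space leaping concrete, the sketch below casts a single ray through a volume while skipping voxels flagged as empty. This is a minimal CPU illustration, not the paper's multi-pass GPU implementation; the empty mask, toy transfer function, and step size are assumptions.

```python
# Minimal sketch of per-voxel empty-space leaping for front-to-back
# volume ray casting, assuming a precomputed boolean "empty" mask.
import numpy as np

def cast_ray(volume, empty, origin, direction, step=1.0, max_steps=512):
    """Composite a single ray front-to-back, skipping empty voxels."""
    color, alpha = 0.0, 0.0
    pos = np.asarray(origin, dtype=float)
    d = np.asarray(direction, dtype=float)
    d /= np.linalg.norm(d)
    for _ in range(max_steps):
        i, j, k = np.floor(pos).astype(int)
        if not (0 <= i < volume.shape[0] and
                0 <= j < volume.shape[1] and
                0 <= k < volume.shape[2]):
            break
        if empty[i, j, k]:
            pos += d * step          # leap over empty space: no shading work
            continue
        sample = float(volume[i, j, k])
        a = min(1.0, sample)         # toy opacity transfer function (assumed)
        color += (1.0 - alpha) * a * sample
        alpha += (1.0 - alpha) * a
        if alpha > 0.95:             # early ray termination
            break
        pos += d * step
    return color
```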
Graphical Models / Graphical Models and Image Processing / Computer Vision, Graphics, and Image Processing | 2008
Heewon Kye; Byeong-Seok Shin; Yeong Gil Shin
The pre-integrated volume rendering technique is widely used for creating high-quality images. It produces good images even when the transfer function is nonlinear. Because the size of the pre-integration lookup table is proportional to the square of the data precision, the required storage and computation load increase steeply when rendering high-precision volume data. In this paper, we propose a method that approximates the pre-integration function with a table whose size is proportional to the data precision. By using the arithmetic mean instead of the geometric mean and storing opacity instead of extinction density, this technique reduces the size and update time of the pre-integration lookup table, so that high-precision volume data can be classified interactively. We demonstrate performance gains for typical renderings of volume datasets.
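The following sketch illustrates one way a pre-integrated slab lookup can be approximated from two 1D tables whose size grows only linearly with the data precision, roughly in the spirit of averaging stored opacities rather than keeping a full 2D table. The exact formulation in the paper may differ; the transfer-function interface and table names here are assumptions.

```python
# Sketch: replace the O(n^2) pre-integration table over (front, back)
# sample pairs with O(n)-sized 1D tables combined at render time by an
# arithmetic mean. Illustrative only.
import numpy as np

def build_1d_tables(transfer_function, n_levels):
    """Per-value opacity and opacity-weighted color, one entry per data level."""
    values = np.arange(n_levels)
    rgba = transfer_function(values)            # assumed shape (n, 4): r, g, b, alpha
    return rgba[:, :3] * rgba[:, 3:4], rgba[:, 3]

def slab_lookup(color_tab, alpha_tab, s_front, s_back):
    """Approximate a pre-integrated slab by averaging the endpoint entries."""
    color = 0.5 * (color_tab[s_front] + color_tab[s_back])
    alpha = 0.5 * (alpha_tab[s_front] + alpha_tab[s_back])
    return color, alpha
```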
Magnetic Resonance in Medicine | 2012
Jeongjin Lee; Kyoung Won Kim; Ho Lee; So Jung Lee; Sanghyun Choi; Woo Kyoung Jeong; Heewon Kye; Gi-Won Song; Shin Hwang; Sung-Gyu Lee
In this article, we determined the relative accuracy of semiautomated spleen volumetry with diffusion-weighted (DW) MR images compared to standard manual volumetry with DW-MR or CT images. Semiautomated spleen volumetry using simple thresholding followed by 3D and 2D connected component analysis was performed on DW-MR images. Manual spleen volumetry was performed on DW-MR and CT images. In this study, 35 potential live liver donor candidates were included. Semiautomated volumetry results were highly correlated with manual volumetry results using DW-MR (r = 0.99; P < 0.0001; mean percentage absolute difference, 1.43 ± 0.94) and CT (r = 0.99; P < 0.0001; 1.76 ± 1.07). The mean total processing time for semiautomated volumetry was significantly shorter than that of manual volumetry with DW-MR (P < 0.0001) and CT (P < 0.0001). In conclusion, semiautomated spleen volumetry with DW-MR images can be performed rapidly and accurately when compared with standard manual volumetry.
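A minimal sketch of the kind of pipeline described above, thresholding followed by 3D connected component labeling and voxel counting, is shown below. It uses SciPy for illustration; the threshold, voxel volume, and largest-component heuristic are assumptions rather than the authors' exact procedure.

```python
# Illustrative sketch (not the authors' exact pipeline): threshold a DW-MR
# volume, keep the largest 3D connected component as the spleen candidate,
# then sum voxel volumes.
import numpy as np
from scipy import ndimage

def estimate_spleen_volume(volume, threshold, voxel_volume_ml):
    mask = volume > threshold                      # simple intensity thresholding
    labels, n = ndimage.label(mask)                # 3D connected component analysis
    if n == 0:
        return 0.0
    sizes = ndimage.sum(mask, labels, range(1, n + 1))
    spleen = labels == (np.argmax(sizes) + 1)      # keep the largest component
    return spleen.sum() * voxel_volume_ml          # volume in millilitres
```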
Computerized Medical Imaging and Graphics | 2012
Heewon Kye; Bong-Soo Sohn; Jeongjin Lee
Maximum intensity projection (MIP) is an important visualization method that has been widely used for the diagnosis of enhanced vessels or bones by rotating or zooming MIP images. With the rapid spread of multidetector-row computed tomography (MDCT) scanners, MDCT scans of a patient generate a large data set. However, previous acceleration methods for MIP rendering of such data sets fail to generate MIP images at interactive rates. In this paper, we propose novel culling methods in both object and image space for interactive MIP rendering of large medical data sets. In object space, for the visibility test of a block, we propose an initial occluder derived from the preceding image to exploit temporal coherence and substantially increase the block culling ratio. In addition, we propose a hole-filling method using mesh generation and rendering to improve culling performance during the generation of the initial occluder. In image space, we observe that there is a trade-off between the block culling ratio in object space and the culling efficiency in image space. We therefore classify visible blocks into two types by their visibility and propose a balanced culling method that applies a different image-space culling algorithm to each type, exploiting this trade-off to improve rendering speed. Experimental results on twenty CT data sets showed that our method achieved a 3.85-fold speedup on average, without any loss of image quality, compared with the conventional bricking method. Using our visibility culling method, we achieved interactive GPU-based MIP rendering of large medical data sets.
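The sketch below shows block-level culling for an axis-aligned MIP: a brick whose precomputed maximum cannot exceed the values already in the MIP buffer under its footprint is skipped. It is a simplified CPU illustration; the paper's initial occluder from the preceding frame, hole filling, and image-space culling are omitted, and the brick size is an assumption.

```python
# Minimal sketch of brick culling for an orthographic MIP along the z axis.
import numpy as np

def mip_with_block_culling(volume, brick=16):
    nx, ny, nz = volume.shape
    mip = np.zeros((nx, ny), dtype=volume.dtype)
    for z0 in range(0, nz, brick):
        for x0 in range(0, nx, brick):
            for y0 in range(0, ny, brick):
                block = volume[x0:x0+brick, y0:y0+brick, z0:z0+brick]
                footprint = mip[x0:x0+brick, y0:y0+brick]
                if block.max() <= footprint.min():
                    continue                      # culled: cannot change the image
                np.maximum(footprint, block.max(axis=2), out=footprint)
    return mip
```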
Computer Animation and Virtual Worlds | 2005
Heewon Kye; Byeong-Seok Shin; Yeong Gil Shin; Helen Hong
Shear-warp volume rendering has the advantages of moderate image quality and fast rendering speed. However, when the opacity transfer function changes dynamically, the efficiency of memory access drops, as the method cannot exploit pre-classified volumes. In this paper, we propose an efficient algorithm that exploits the spatial locality of memory references for interactive classification. The algorithm inserts a rotation matrix when factorizing the viewing transformation, so that it can perform a scanline-based traversal in both object space and image space. In addition, we present solutions to some problems of the proposed method, namely inaccurate front-to-back composition, the occurrence of holes, and increased computation. Our method is noticeably faster than traditional shear-warp rendering methods because of improved utilization of cache memory.
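As a rough illustration of the idea, the classic shear-warp factorization and the schematic effect of inserting a rotation R between the warp and shear factors can be written as below; this is only a sketch, and the exact matrices and the choice of R are defined in the paper.

```latex
% Standard shear-warp factorization (Lacroute & Levoy), with P the
% permutation to the principal viewing axis:
%   M_view = M_warp * M_shear * P
% Inserting a rotation R (and its inverse) keeps the product unchanged
% while aligning scanlines in both object and image space:
\[
  M_{\mathrm{view}}
    = \bigl(M_{\mathrm{warp}}\,R^{-1}\bigr)\,\bigl(R\,M_{\mathrm{shear}}\bigr)\,P
\]
```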
Multimedia Tools and Applications | 2017
Heewon Kye; Se Hee Lee; Jeongjin Lee
The authors regret that acknowledgment of the financial support of the first author was omitted from the manuscript.
Journal of Korea Game Society | 2013
Heewon Kye
As volume rendering is applied to computer games, visualizing volume data together with surface data in one scene has become necessary. Although hybrid rendering of volume and surface data has been developed using GPGPU functionality, computer games that run on low-end hardware cannot easily perform such hybrid rendering. In this paper, we propose a new hybrid rendering method based on DirectX 9.0 and general-purpose hardware. We generate layered depth images from the surface data using a new method that reduces the depth complexity and generation time, and then perform hybrid rendering using those layered depth images. In the rendering process, we present a method to transform from the surface coordinate system to the volume coordinate system and propose an accelerated rendering technique. As a result, volume-surface hybrid rendering can be performed efficiently.
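The sketch below shows a straightforward surface-to-volume coordinate transform of the kind such a hybrid renderer needs when comparing layered-depth-image samples against ray positions. The origin, spacing, and axes parameters are assumptions, not the paper's exact formulation.

```python
# Minimal sketch: map a surface-space (world) point into continuous voxel
# coordinates of the volume, assuming an origin, per-axis spacing, and an
# orthonormal axes matrix describing the volume's orientation.
import numpy as np

def world_to_volume(p_world, origin, spacing, axes=np.eye(3)):
    """Return continuous voxel coordinates of a world-space point."""
    local = axes.T @ (np.asarray(p_world, dtype=float) - np.asarray(origin, dtype=float))
    return local / np.asarray(spacing, dtype=float)

# Example: a vertex at (120.0, 85.5, 40.0) mm in a volume with 0.5 mm spacing
# and origin at (0, 0, 0) maps to voxel coordinates (240, 171, 80).
```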
Journal of Korea Multimedia Society | 2012
Heewon Kye
Journal of Korea Multimedia Society | 2015
Jinhyun Nam; Heewon Kye
Journal of Korea Multimedia Society | 2012
Jeongjin Lee; Che-Hwan Seo; Ho Lee; Heewon Kye; Min-Sun Lee