Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Yeong Gil Shin is active.

Publication


Featured research published by Yeong Gil Shin.


Computer Methods and Programs in Biomedicine | 2007

Efficient liver segmentation using a level-set method with optimal detection of the initial liver boundary from level-set speed images

Jeongjin Lee; Namkug Kim; Ho Lee; Joon Beom Seo; Hyung Jin Won; Yong Moon Shin; Yeong Gil Shin; Soo-Hong Kim

Automatic liver segmentation is difficult because of the wide range of human variation in the shape of the liver. In addition, nearby organs and tissues have intensity distributions similar to the liver's, making the liver's boundaries ambiguous. In this study, we propose a fast and accurate liver segmentation method for contrast-enhanced computed tomography (CT) images. We apply a two-step seeded region growing (SRG) onto level-set speed images to define an approximate initial liver boundary. The first SRG efficiently divides a CT image into a set of discrete objects based on gradient information and connectivity. The second SRG detects the objects belonging to the liver based on 2.5-dimensional shape propagation, which models the segmented liver boundary of the slice immediately above or below the current slice using points in a narrow band around, or at local maxima of distance from, the boundary. With such an optimal estimate of the initial liver boundary, our method decreases the computation time by minimizing level-set propagation, which converges to the optimal position within a fixed number of iterations. We utilize level-set speed images, which have generally been used for level-set propagation, to detect the initial liver boundary with the additional help of computationally inexpensive steps, improving computational efficiency. Finally, a rolling-ball algorithm is applied to refine the liver boundary more accurately. Our method was validated on 20 sets of abdominal CT scans, and the results were compared with manually segmented results. The average absolute volume error was 1.25 ± 0.70%. The average processing time for segmenting one slice was 3.35 s, which is over 15 times faster than manual segmentation or a previously proposed technique. Our method could be used for liver transplantation planning, which requires fast and accurate measurement of liver volume.
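To make the region-growing step concrete, here is a minimal, self-contained sketch of ordinary 2-D seeded region growing in Python. It is only a stand-in for the first SRG pass described above: the function name, the intensity-tolerance criterion, and the synthetic test slice are illustrative assumptions, and the paper's actual method additionally uses gradient information, connectivity, and 2.5-dimensional shape propagation.

    import numpy as np
    from collections import deque

    def seeded_region_growing(image, seed, tolerance=30.0):
        """Grow a region from `seed` by adding 4-connected neighbors whose
        intensity stays within `tolerance` of the seed intensity.
        A simplified stand-in for one SRG pass, not the paper's method."""
        h, w = image.shape
        seed_value = float(image[seed])
        mask = np.zeros((h, w), dtype=bool)
        queue = deque([seed])
        mask[seed] = True
        while queue:
            y, x = queue.popleft()
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w and not mask[ny, nx]:
                    if abs(float(image[ny, nx]) - seed_value) <= tolerance:
                        mask[ny, nx] = True
                        queue.append((ny, nx))
        return mask

    # Usage on a synthetic slice: a bright "organ" on a dark background.
    slice_ct = np.zeros((64, 64), dtype=np.float32)
    slice_ct[20:44, 20:44] = 100.0
    organ_mask = seeded_region_growing(slice_ct, seed=(32, 32), tolerance=20.0)
    print(organ_mask.sum())  # number of pixels captured by the grown region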


Computer-aided Design | 1998

Fast 3D solid model reconstruction from orthographic views

Byeong-Seok Shin; Yeong Gil Shin

As the number of applications that use 3D solid models increases, there is a need for an efficient method of constructing solid models. One approach is reconstruction from orthographic projections, which makes the input of geometric information easy. However, it requires combinatorial searches and complicated geometric operations because semantic information is lost during projection. In this paper, we propose an efficient algorithm for reconstructing solid models using the geometric properties and topology of geometric primitives. Experimental results show that the algorithm can reconstruct 3D models much faster than previous algorithms.
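As a small illustration of why reconstruction from orthographic views involves combinatorial search, the sketch below generates candidate 3-D vertices whose projections appear in all three views (top, front, side). This is a generic wireframe-style generation step under assumed view conventions, not the pruning based on geometric properties and topology that the paper proposes; the function name and cube example are hypothetical.

    def candidate_vertices(top_xy, front_xz, side_yz):
        """Return all (x, y, z) whose projections appear in the top (x, y),
        front (x, z), and side (y, z) views. Later stages must prune the
        combinatorial set of candidates this step produces."""
        top, front, side = set(top_xy), set(front_xz), set(side_yz)
        xs = {x for x, _ in top} | {x for x, _ in front}
        ys = {y for _, y in top} | {y for y, _ in side}
        zs = {z for _, z in front} | {z for _, z in side}
        return [(x, y, z) for x in xs for y in ys for z in zs
                if (x, y) in top and (x, z) in front and (y, z) in side]

    # Usage: the eight corners of a unit cube are recovered from its three views.
    square = [(0, 0), (0, 1), (1, 0), (1, 1)]
    print(len(candidate_vertices(square, square, square)))  # 8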


pacific conference on computer graphics and applications | 1999

An efficient wavelet-based compression method for volume rendering

Taeyoung Kim; Yeong Gil Shin

Since volume rendering requires a large amount of computation time and memory, many approaches have been proposed to accelerate rendering or to reduce data size using compression techniques. However, little progress has been made on achieving both goals at once. This paper presents an efficient wavelet-based compression method that provides fast visualization of large volume data. The volume is divided into individual blocks of regular resolution; each wavelet-transformed block is run-length encoded in accordance with the reconstruction order, resulting in a fairly good compression ratio and fast reconstruction. A cache data structure is designed to speed up the reconstruction, and an adaptive compression scheme is proposed to produce higher-quality rendered images. The proposed compression method is combined with several accelerated volume rendering algorithms, such as brute-force volume rendering with a min-max table and Lacroute's shear-warp factorization. Experimental results show the space requirement to be about 1/27 of the original and the rendering time to be about 3 seconds for 512×512×512 data sets, while preserving image quality comparable to rendering the original data.
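The following sketch illustrates the general pipeline of a block-wise wavelet transform followed by run-length encoding of the coefficients. It uses an unnormalized one-level 2-D Haar transform and a simple thresholding step as assumptions for illustration; it is not the paper's codec, which encodes coefficients in reconstruction order and adapts the compression per block.

    import numpy as np

    def haar_2d(block):
        """One level of a 2-D Haar-style transform on a square block with even sides."""
        a = (block[0::2, :] + block[1::2, :]) / 2.0   # vertical averages
        d = (block[0::2, :] - block[1::2, :]) / 2.0   # vertical details
        rows = np.vstack([a, d])
        a2 = (rows[:, 0::2] + rows[:, 1::2]) / 2.0    # horizontal averages
        d2 = (rows[:, 0::2] - rows[:, 1::2]) / 2.0    # horizontal details
        return np.hstack([a2, d2])

    def run_length_encode(values):
        """Encode a 1-D array as (value, run_length) pairs."""
        runs, i = [], 0
        while i < len(values):
            j = i
            while j + 1 < len(values) and values[j + 1] == values[i]:
                j += 1
            runs.append((values[i], j - i + 1))
            i = j + 1
        return runs

    # Transform a block, zero out small coefficients, and run-length encode them.
    block = np.random.rand(8, 8).astype(np.float32)
    coeffs = haar_2d(block)
    coeffs[np.abs(coeffs) < 0.1] = 0.0            # lossy thresholding step
    encoded = run_length_encode(coeffs.ravel())
    print(len(encoded), "runs for", coeffs.size, "coefficients")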


Health | 2005

Hybrid lung segmentation in chest CT images for computer-aided diagnosis

Yeny Yim; Helen Hong; Yeong Gil Shin

We propose an automatic segmentation method for accurately identifying lung surfaces in chest CT images. Our method consists of three steps. First, the lungs and airways are extracted by inverse seeded region growing and connected component labeling. Second, the trachea and large airways are delineated from the lungs by three-dimensional region growing. Third, accurate lung region borders are obtained by subtracting the result of the second step from that of the first. The proposed method has been applied to 10 patient datasets with lung cancer or pulmonary embolism. Experimental results show that our segmentation method extracts lung surfaces automatically and accurately. Averaged over all volumes, the root mean square difference between the computer and manual analyses is 1.2 pixels.
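A rough sense of the first step can be given by a thresholding-plus-labeling sketch: keep the largest low-density connected components as a lung-plus-airway mask. The HU threshold, the choice of the two largest components, and the synthetic volume are assumptions for illustration; the paper's inverse seeded region growing and airway-removal steps are not reproduced here.

    import numpy as np
    from scipy import ndimage

    def extract_lungs(ct_volume, air_threshold=-400):
        """Keep the two largest connected low-density components as a rough
        lung-plus-airway mask. A simplified stand-in for the first step only."""
        air = ct_volume < air_threshold                 # air-like voxels (HU)
        labels, n = ndimage.label(air)
        if n == 0:
            return np.zeros_like(air)
        sizes = ndimage.sum(air, labels, index=range(1, n + 1))
        keep = np.argsort(sizes)[-2:] + 1               # two largest components
        return np.isin(labels, keep)

    # Usage on a synthetic volume: two air-filled "lungs" inside soft tissue.
    vol = np.full((40, 64, 64), 50, dtype=np.int16)     # soft tissue, ~50 HU
    vol[5:35, 10:30, 10:54] = -800                      # left lung
    vol[5:35, 34:54, 10:54] = -800                      # right lung
    mask = extract_lungs(vol)
    print(mask.sum(), "voxels labeled as lung")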


IEEE Transactions on Biomedical Engineering | 2011

Automatic Extraction of Inferior Alveolar Nerve Canal Using Feature-Enhancing Panoramic Volume Rendering

Gyehyun Kim; Jeongjin Lee; Ho Lee; Jinwook Seo; Yun-Mo Koo; Yeong Gil Shin; Bohyoung Kim

Dental implant surgery, in which a dental implant is surgically inserted into the jawbone as an artificial root, has become one of the most successful applications of computed tomography (CT) in dental implantology. For successful implant surgery, it is essential to identify vital anatomic structures such as the inferior alveolar nerve (IAN), which must be avoided during the surgical procedure. Due to the ambiguity of its structure, the IAN is very difficult to extract directly from dental CT images; as a result, most previous studies instead identify the IAN canal. This paper presents a novel method for automatically extracting the IAN canal. The mental and mandibular foramens, which are regarded as the ends of the IAN canal in the mandible, are detected automatically using 3-D panoramic volume rendering (VR) and texture analysis techniques. In the 3-D panoramic VR, novel color shading and compositing methods are proposed to emphasize the foramens and isolate them from other fine structures. Subsequently, the path of the IAN canal is computed using a line-tracking algorithm. Finally, the IAN canal is extracted by expanding the region around the path using a fast marching method with a new speed function that exploits anatomical information about the canal radius. In experiments on ten clinical datasets, the proposed method identified the IAN canal accurately, demonstrating that this approach can substantially assist dentists during dental implant surgery.
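To illustrate the path-computation idea, the sketch below tracks a minimum-cost path between two points on a 2-D cost image using Dijkstra's algorithm as a stand-in for the paper's line-tracking and fast marching steps. The cost image, the start and goal points, and the function name are illustrative assumptions.

    import heapq
    import numpy as np

    def track_path(cost, start, goal):
        """Dijkstra shortest path on a 2-D cost image: low cost ~ likely canal.
        A simple stand-in for tracking a canal path between two foramens."""
        h, w = cost.shape
        dist = np.full((h, w), np.inf)
        prev = {}
        dist[start] = 0.0
        heap = [(0.0, start)]
        while heap:
            d, (y, x) = heapq.heappop(heap)
            if (y, x) == goal:
                break
            if d > dist[y, x]:
                continue
            for dy, dx in ((-1, 0), (1, 0), (0, -1), (0, 1)):
                ny, nx = y + dy, x + dx
                if 0 <= ny < h and 0 <= nx < w:
                    nd = d + cost[ny, nx]
                    if nd < dist[ny, nx]:
                        dist[ny, nx] = nd
                        prev[(ny, nx)] = (y, x)
                        heapq.heappush(heap, (nd, (ny, nx)))
        # Walk back from goal to start to recover the path.
        path, node = [goal], goal
        while node != start:
            node = prev[node]
            path.append(node)
        return path[::-1]

    # Usage: a dark (low-cost) horizontal channel inside a bright background.
    cost = np.full((32, 64), 10.0)
    cost[16, :] = 1.0
    print(len(track_path(cost, start=(16, 0), goal=(16, 63))))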


Medical Physics | 2010

Deformable lung registration between exhale and inhale CT scans using active cells in a combined gradient force approach.

Yeny Yim; Helen Hong; Yeong Gil Shin

PURPOSE: This article proposes an accurate and fast deformable registration method between end-exhale and end-inhale CT scans that can handle large lung deformations and accelerate the registration process. METHODS: A density correction method is applied to reduce the density difference between the two CT scans due to respiration and gravity. The lungs are globally aligned by affine registration and nonlinearly deformed by a demons algorithm using a combined gradient force and active cells. The combined gradient force allows fast convergence in lung regions where the gradient of the target image is weak, by also taking into account the gradient of the source image. The active cells help accelerate the registration process and reduce the degree of deformation folding, because they avoid unnecessary computation of the displacement for well-matched lung regions. RESULTS: The proposed method was tested with end-exhale and end-inhale CT scans acquired from eight normal subjects. Its performance was evaluated by comparing methods that use a target gradient force or a combined gradient force, as well as methods with and without active cells. The proposed method with the combined gradient force was significantly more accurate than the method with the target gradient force. For the entire lung, the proposed method gave a mean landmark error of 2.8 ± 1.5 mm. For the lower 30% of the lungs, the Dice similarity coefficient and normalized cross correlation of the proposed method were higher than those of the original demons algorithm by 2.3% (p=0.0172) and 2.2% (p=0.0028), respectively. The proposed method with active cells produced fewer voxels with negative Jacobian values and a 55% decrease in processing time compared to the method without active cells. CONCLUSIONS: The results show that the proposed method can accurately register lungs with large deformations and can considerably reduce the processing time. The proposed deformable registration technique can be used for quantitative assessment of air trapping in obstructive lung disease and for tumor motion tracking during the planning of radiotherapy treatments.
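The notion of a combined gradient force can be sketched as a single demons-style displacement update in which the gradients of both images drive the deformation, so the update stays informative where the target-image gradient is weak. This is a generic symmetric-force demons update written as a sketch, not the article's full pipeline (no density correction, affine pre-alignment, or active cells); the epsilon regularizer and the test images are assumptions.

    import numpy as np

    def demons_update(fixed, moving, eps=1e-6):
        """One demons displacement update per voxel using a combined
        (fixed + moving) gradient force."""
        diff = moving - fixed
        gf = np.array(np.gradient(fixed))     # gradients of the fixed (target) image
        gm = np.array(np.gradient(moving))    # gradients of the moving (source) image
        g = gf + gm                           # combined gradient force
        denom = (g ** 2).sum(axis=0) + diff ** 2 + eps
        return -diff * g / denom              # one displacement component per axis

    # Usage on two shifted 2-D "slices".
    fixed = np.zeros((32, 32)); fixed[10:20, 10:20] = 1.0
    moving = np.zeros((32, 32)); moving[12:22, 10:20] = 1.0
    u = demons_update(fixed, moving)
    print(u.shape)   # (2, 32, 32): one displacement component per image axis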


pacific conference on computer graphics and applications | 1998

Efficient image-based rendering of volume data

Jae-Jeong Choi; Yeong Gil Shin

This paper presents an efficient image-based rendering algorithm for volume data. By using an intermediate image space instead of the image space, mapping becomes more efficient, and holes arising from point-to-point mapping can be removed. Mapping into the intermediate image space is easily performed by looking up a table indexed by the depth value of a source pixel. We also suggest a way of minimizing the space requirement for pre-acquired images. Experimental results show that the algorithm can generate 25-40 images per second without noticeable image degradation when producing 256² images from a 256³ voxel data set.
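A generic 3-D warping sketch can convey the depth-indexed lookup idea: for a purely horizontal change of viewpoint, the pixel offset depends only on depth, so it can be precomputed once per depth value and applied with a single table access per source pixel. The baseline and focal-length values, the translation-only camera model, and the function name are assumptions; this is not the paper's intermediate-image-space formulation.

    import numpy as np

    def warp_with_depth_lut(src, depth, baseline=0.1, focal=256.0):
        """Warp a source image to a horizontally shifted viewpoint by looking up
        a per-depth pixel offset table (rounded disparity)."""
        h, w = src.shape
        # Lookup table: depth value -> horizontal pixel offset.
        lut = {d: int(round(baseline * focal / d)) for d in np.unique(depth)}
        dst = np.zeros_like(src)
        for y in range(h):
            for x in range(w):
                nx = x + lut[depth[y, x]]
                if 0 <= nx < w:
                    dst[y, nx] = src[y, x]
        return dst

    # Usage: a near object (small depth) shifts more than the far background.
    src = np.zeros((8, 32)); src[:, 10:14] = 1.0
    depth = np.full((8, 32), 80.0); depth[:, 10:14] = 20.0
    print(int(warp_with_depth_lut(src, depth).sum()))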


Computers in Biology and Medicine | 2009

Fast perspective volume ray casting method using GPU-based acceleration techniques for translucency rendering in 3D endoluminal CT colonography

Taekhee Lee; Jeongjin Lee; Ho Lee; Heewon Kye; Yeong Gil Shin; Soo Hong Kim

Recent advances in graphics processing units (GPUs) have enabled direct volume rendering at interactive rates. However, although perspective volume rendering of opaque isosurfaces is fast with conventional GPU-based methods, perspective volume rendering of non-opaque volumes, such as translucency rendering, is still slow. In this paper, we propose an efficient GPU-based acceleration technique for fast perspective volume ray casting for translucency rendering in computed tomography (CT) colonography. The empty space searching step is separated from the shading and compositing steps, and they are divided into separate processing passes on the GPU. Using this multi-pass acceleration, empty space leaping is performed exactly at the voxel level rather than at the block level, so that the efficiency of empty space leaping is maximized for colon data sets, which have many curved or narrow regions. In addition, the numbers of shading and compositing steps are fixed, and additional empty space leaping between colon walls is performed to further increase computational efficiency near the haustral folds. Experiments were performed to illustrate the efficiency of the proposed scheme compared with the conventional GPU-based method, which has been regarded as the fastest algorithm. The experimental results show that the rendering speed of our method was 7.72 fps for translucency rendering of a 1024×1024 colonoscopy image, about 3.54 times faster than the conventional method. Since our method performs fully optimized empty space leaping for any colon inner shape, its frame-rate variations were about two times smaller than those of the conventional method, guaranteeing smooth navigation. The proposed method could be successfully applied to help diagnose colon cancer using translucency rendering in virtual colonoscopy.
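On the CPU side, the core idea of voxel-level empty space leaping combined with front-to-back compositing and early ray termination can be sketched for a single ray as below. The transfer-function dictionaries, the 0.99 opacity cutoff, and the sample sequence are illustrative assumptions; the paper implements these steps as separate GPU passes.

    def composite_ray(samples, opacity, colors):
        """Front-to-back compositing along one ray, leaping over empty samples
        (zero opacity) instead of shading them, with early ray termination."""
        color_acc, alpha_acc, i = 0.0, 0.0, 0
        while i < len(samples) and alpha_acc < 0.99:
            a = opacity[samples[i]]
            if a == 0.0:
                i += 1                      # empty space: skip shading/compositing
                continue
            color_acc += (1.0 - alpha_acc) * a * colors[samples[i]]
            alpha_acc += (1.0 - alpha_acc) * a
            i += 1
        return color_acc, alpha_acc

    # Usage: a ray crossing the empty colon lumen before a translucent wall.
    opacity = {0: 0.0, 1: 0.3, 2: 0.8}      # transfer function: value -> opacity
    colors = {0: 0.0, 1: 0.5, 2: 1.0}       # transfer function: value -> gray level
    print(composite_ray([0, 0, 0, 0, 1, 1, 2, 2], opacity, colors))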


Computers in Biology and Medicine | 2008

Robust feature-based registration using a Gaussian-weighted distance map and brain feature points for brain PET/CT images

Ho Lee; Jeongjin Lee; Namkug Kim; Sang Joon Kim; Yeong Gil Shin

Feature-based registration is an effective technique for clinical use because it can greatly reduce computational costs. However, this technique, which estimates the transformation using feature points extracted from two images, may cause misalignments, particularly in brain PET and CT images, where correspondence rates between features are low due to differences in image characteristics. To cope with this limitation, we propose a robust feature-based registration technique using a Gaussian-weighted distance map (GWDM) that finds the best alignment of feature points even when the features of the two images are mismatched. The GWDM is generated by propagating the values of a Gaussian-weighted mask from the feature points of the CT image, and it guides the feature points of the PET image toward an optimal location even when there is a localization error between the feature points extracted from the PET and CT images. Feature points are extracted from the two images by our automatic brain segmentation method. In our experiments, simulated and clinical data sets were used to compare our method with conventional methods, such as normalized mutual information (NMI)-based registration and chamfer matching, in terms of accuracy, robustness, and computation time. Experimental results showed that our method aligned the images robustly even in cases where the conventional methods failed to find optimal locations. In addition, the accuracy of our method was comparable to that of NMI-based registration.
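The GWDM idea can be sketched by smoothing a binary image of CT feature points with a Gaussian and then scoring a candidate transformation by summing the map values at the transformed PET feature points, so near-misses still contribute. The sigma value, the translation-only search, and both point sets are assumptions for illustration, not the paper's formulation or optimization scheme.

    import numpy as np
    from scipy.ndimage import gaussian_filter

    def gaussian_weighted_map(shape, points, sigma=3.0):
        """Spread a Gaussian-weighted mask from each CT feature point so that
        nearby (but not exactly matching) PET feature points still score well."""
        img = np.zeros(shape, dtype=np.float32)
        for y, x in points:
            img[y, x] = 1.0
        return gaussian_filter(img, sigma=sigma)

    def score_translation(gwdm, pet_points, shift):
        """Sum map values at the shifted PET feature points (higher is better)."""
        dy, dx = shift
        h, w = gwdm.shape
        total = 0.0
        for y, x in pet_points:
            yy, xx = y + dy, x + dx
            if 0 <= yy < h and 0 <= xx < w:
                total += gwdm[yy, xx]
        return total

    # Usage: the PET features sit 2 pixels off the CT features; the right shift wins.
    ct_points = [(20, 20), (20, 40), (40, 30)]
    pet_points = [(18, 20), (18, 40), (38, 30)]
    gwdm = gaussian_weighted_map((64, 64), ct_points)
    print(score_translation(gwdm, pet_points, (2, 0)) > score_translation(gwdm, pet_points, (0, 0)))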


pacific conference on computer graphics and applications | 1997

Template-based rendering of run-length encoded volumes

Cheol-Hi Lee; Yun-Mo Koo; Yeong Gil Shin

Template-based volume rendering is an acceleration technique for volume ray casting that does not trade off image quality for rendering speed. However, it still falls short of interactive manipulation of volume data, mainly due to its ray-by-ray volume access pattern and the long ray paths through transparent regions. In this paper, we present an object-order template-based volume rendering method that uses run-length encoding to enable skipping of highly transparent regions. We present three algorithms, one for each principal axis direction. By combining the advantages of object-order volume traversal and run-length encoded volumes, the algorithms achieve high-quality rendering in a much shorter time than the original template-based volume rendering.
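The run-length idea behind the skipping can be sketched as encoding each voxel scanline into alternating transparent and non-transparent runs, so an object-order traversal advances over a whole transparent run in one step. The opacity predicate and scanline below are illustrative assumptions; the template projection machinery itself is not shown.

    def rle_scanline(voxels, opacity):
        """Encode a scanline as (is_transparent, start, length) runs so a
        renderer can skip a transparent run in one step instead of voxel by voxel."""
        runs, i, n = [], 0, len(voxels)
        while i < n:
            transparent = opacity(voxels[i]) == 0.0
            j = i
            while j + 1 < n and (opacity(voxels[j + 1]) == 0.0) == transparent:
                j += 1
            runs.append((transparent, i, j - i + 1))
            i = j + 1
        return runs

    # Usage: only the two non-transparent runs would be traversed and shaded.
    opacity = lambda v: 0.0 if v < 10 else 0.5
    scanline = [0, 0, 0, 25, 30, 0, 0, 0, 0, 40, 0]
    for transparent, start, length in rle_scanline(scanline, opacity):
        print(("skip " if transparent else "shade") + f" {length} voxels at {start}")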

Collaboration


Dive into Yeong Gil Shin's collaboration.

Top Co-Authors

Helen Hong

Seoul Women's University

Ho Lee

Seoul National University

Bohyoung Kim

Seoul National University Bundang Hospital

Jinwook Seo

Seoul National University

Seongjin Park

Seoul National University
