Seyoun Park
Johns Hopkins University
Publications
Featured research published by Seyoun Park.
The Visual Computer | 2006
Seyoun Park; Xiaohu Guo; Hayong Shin; Hong Qin
In this paper, we present a new surface content completion system that can effectively repair both shape and appearance from scanned, incomplete point set inputs. First, geometric holes can be robustly identified from noisy and defective data sets without the need for any normal or orientation information. The geometry and texture information of the holes can then be determined either automatically from the models’ context, or interactively from users’ selection. We use local parameterizations to align patches in order to extract their curvature-driven digital signature. After identifying the patch that most resembles each hole region, the geometry and texture information can be completed by warping the candidate region and gluing it onto the hole area. The displacement vector field for the exact alignment process is computed by solving a Poisson equation with boundary conditions. Our experiments show that the unified framework, founded upon the techniques of deformable models, local parameterization, and PDE modeling, can provide a robust and elegant solution for content completion of defective, complex point surfaces.
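The displacement-field step above reduces to a Poisson solve. Below is a minimal, self-contained sketch (not the authors' code) of a 2D Poisson equation with Dirichlet boundary conditions via a finite-difference discretization; in the paper, such a system would be solved per component of the displacement vector field, with the hole-boundary alignment offsets supplying the boundary values. The function names are ours.

```python
# Minimal sketch: solve laplace(u) = f on a grid with Dirichlet boundary values,
# the kind of system used to compute the displacement field aligning a patch to a hole.
import numpy as np
from scipy.sparse import lil_matrix
from scipy.sparse.linalg import spsolve

def solve_poisson_2d(f, boundary):
    """Solve laplace(u) = f; `boundary` holds fixed values on the grid rim."""
    n, m = f.shape
    A = lil_matrix((n * m, n * m))
    b = np.zeros(n * m)
    idx = lambda i, j: i * m + j
    for i in range(n):
        for j in range(m):
            k = idx(i, j)
            if i in (0, n - 1) or j in (0, m - 1):    # Dirichlet boundary node
                A[k, k] = 1.0
                b[k] = boundary[i, j]
            else:                                     # 5-point Laplacian stencil
                A[k, k] = -4.0
                for di, dj in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    A[k, idx(i + di, j + dj)] = 1.0
                b[k] = f[i, j]
    return spsolve(A.tocsr(), b).reshape(n, m)

# One displacement component; boundary values would come from hole-boundary offsets.
u = solve_poisson_2d(np.zeros((32, 32)), boundary=np.ones((32, 32)))
```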
Computer-aided Design and Applications | 2004
Hayong Shin; Seyoun Park; Eonjin Park
Recently, point set models have been receiving increasing research attention in many geometric modeling application areas, including computer graphics and CAD/CAM. This paper presents a novel approach to directly slicing a point set model, with the focus on making a rapid prototyping (RP) part out of the point set model without constructing any mesh or surface. The main challenge in handling a point set model lies in how to interpret the inter-point empty space; an implicit quadric surfel is used for this purpose in this research. This paper also explains how to utilize the quadric surfel for slicing the point set so as to obtain contour curves for RP, and describes how to extract smooth curve(s) out of the 2D point cloud obtained by slicing the 3D point set model.
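To make the quadric-surfel idea concrete, here is an illustrative sketch, under the assumption of a least-squares height-field fit: a local quadric is fitted to a point neighborhood, and points where the surfel meets a slicing plane z = z0 would be chained into contour curves by a full slicer. The function names are ours, not the paper's.

```python
# Illustrative sketch, not the paper's implementation: fit a local quadric
# surfel z = a*x^2 + b*x*y + c*y^2 + d*x + e*y + g to a point neighborhood.
import numpy as np

def fit_quadric_surfel(pts):
    """pts: (k, 3) neighborhood points; returns quadric coefficients (a..g)."""
    x, y, z = pts[:, 0], pts[:, 1], pts[:, 2]
    A = np.column_stack([x * x, x * y, y * y, x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
    return coeffs

def surfel_height(coeffs, x, y):
    a, b, c, d, e, g = coeffs
    return a * x * x + b * x * y + c * y * y + d * x + e * y + g

# Contour points of a slice z = z0 satisfy surfel_height(x, y) == z0; a full
# slicer would chain these zero crossings from all surfels into smooth curves.
pts = np.random.rand(50, 3)
coeffs = fit_quadric_surfel(pts)
print(surfel_height(coeffs, 0.5, 0.5))
```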
Physics in Medicine and Biology | 2017
Seyoun Park; William Plishker; Harry Quon; John Wong; Raj Shekhar; Junghoon Lee
Cone-beam CT (CBCT) is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes suffer varying degrees of information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice location, and therefore impedes accurate registration between CT and CBCT. In this paper, we propose a method to accurately register CT and CBCT by iteratively matching local CT and CBCT intensities. We correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. The correction-registration steps are repeated in an alternating way until the resulting image converges. We integrate the intensity matching into three different deformable registration methods (B-spline, demons, and optical flow) that are widely used for CT-CBCT registration. All three registration methods were implemented on a graphics processing unit for efficient parallel computation. We tested the proposed methods on twenty-five head and neck cancer cases and compared the performance with state-of-the-art registration methods. Normalized cross correlation (NCC), structural similarity index (SSIM), and target registration error (TRE) were computed to evaluate the registration performance. Our method produced an overall NCC of 0.96, SSIM of 0.94, and TRE of 2.26–2.27 mm, outperforming existing methods by 9%, 12%, and 27%, respectively. Experimental results also show that our method performs consistently, is more accurate than existing algorithms, and is computationally efficient.
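The core of the method, slice-by-slice local histogram matching alternated with deformable registration, can be sketched as follows. The histogram matching uses scikit-image; deformable_register is a deliberate placeholder for any of the three DIR algorithms named above, and the loop structure is a simplification of the paper's convergence test.

```python
# Hedged sketch of the alternating correction-registration loop.
import numpy as np
from skimage.exposure import match_histograms

def correct_cbct_slicewise(cbct, ct):
    """Match each CBCT slice's histogram to the corresponding (registered) CT slice."""
    corrected = np.empty_like(cbct)
    for k in range(cbct.shape[0]):
        corrected[k] = match_histograms(cbct[k], ct[k])
    return corrected

def deformable_register(fixed, moving):
    """Placeholder: returns (warped_moving, deformation). Plug in B-spline,
    demons, or optical flow here."""
    return moving, None

def iterative_ct_cbct_registration(ct, cbct, n_iters=5):
    warped_ct = ct
    for _ in range(n_iters):                       # repeat until convergence
        cbct = correct_cbct_slicewise(cbct, warped_ct)
        warped_ct, deformation = deformable_register(cbct, ct)
    return deformation
```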
Proceedings of SPIE | 2015
Seyoun Park; William Plishker; Raj Shekhar; Harry Quon; John Wong; Junghoon Lee
In this paper, we propose a method to accurately register CT to cone-beam CT (CBCT) by iteratively correcting local CBCT intensity. CBCT is a widely used intra-operative imaging modality in image-guided radiotherapy and surgery. A short scan followed by a filtered-backprojection is typically used for CBCT reconstruction. While data on the mid-plane (plane of source-detector rotation) is complete, off-mid-planes suffer varying degrees of information deficiency and the computed reconstructions are approximate. This causes different reconstruction artifacts at off-mid-planes depending on slice location, and therefore impedes accurate registration between CT and CBCT. To address this issue, we correct CBCT intensities by matching local intensity histograms slice by slice in conjunction with intensity-based deformable registration. This correction-registration step is repeated until the resulting image converges. We tested the proposed method on eight head-and-neck cancer cases and compared its performance with state-of-the-art registration methods (B-spline, demons, and optical flow) that are widely used for CT-CBCT registration. Normalized mutual information (NMI), normalized cross-correlation (NCC), and structural similarity (SSIM) were computed as similarity measures for the performance evaluation. Our method produced an overall NMI of 0.59, NCC of 0.96, and SSIM of 0.93, outperforming existing methods by 3.6%, 2.4%, and 2.8% in terms of NMI, NCC, and SSIM scores, respectively. Experimental results show that our method is more consistent and robust than existing algorithms, and is also computationally efficient, with faster convergence.
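For reference, the similarity measures reported above can be computed as in this short sketch; the NCC formulation and the data_range value are our assumptions, and SSIM comes from scikit-image.

```python
# Sketch of the evaluation metrics: normalized cross-correlation (plain numpy)
# and SSIM (skimage). Random volumes stand in for a CT / registered-CBCT pair.
import numpy as np
from skimage.metrics import structural_similarity

def ncc(a, b):
    a = (a - a.mean()) / a.std()
    b = (b - b.mean()) / b.std()
    return float(np.mean(a * b))

ct = np.random.rand(64, 64, 64)
registered_cbct = np.random.rand(64, 64, 64)
print("NCC :", ncc(ct, registered_cbct))
print("SSIM:", structural_similarity(ct, registered_cbct, data_range=1.0))
```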
Computer-aided Design | 2015
Wonhyung Jung; Seyoun Park; Hayong Shin
Dental computer-aided design (CAD) systems have been intensively introduced into digital dentistry in recent years. As basic digital models, volumetric computed tomography (CT) images or optical surface scan data are used in most dental fields. In many fields, including orthodontics, complete teeth models are required for diagnosis, planning, and treatment purposes. In this research, we introduce a novel modeling approach combining dental CT images and an optically scanned surface to create complete individual teeth models. First, to classify crown and root regions in each data set, corresponding pairs between the two different data sets are determined based on their spatial relationship. The pairs are used to define the co-segmentation energy by introducing similarity and dissimilarity terms for each corresponding pair. Efficient global optimization can be performed by formulating a graph-cut problem to find the segmentation that minimizes the energy. After classifying crown and root regions in each data set, complete individual teeth are obtained by merging the two different data sets. The advancing front method was successfully applied for the merging by considering the signed distance from the crown boundary of the surface mesh to the root surface of the CT. Teeth models that have detailed geometries obtained from the optically scanned surface, with interstice regions recovered from the volumetric data, can be obtained using the proposed method. In addition, the suggested merging approach makes it possible to obtain complete teeth models from incomplete CT data with metal artifacts. Highlights: a novel teeth modeling framework combining optical scan data and dental CT images is introduced; co-segmentation between optical scan data and dental CT images is performed simultaneously by the graph-cut method; the proposed algorithm is automatic, time efficient, and shows high fidelity; teeth data with defects such as metal artifacts can be completed successfully.
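A hedged sketch of the graph-cut step, using the PyMaxflow library: the unary terms here, built from a signed distance that favors "crown" on one side and "root" on the other, merely stand in for the paper's similarity/dissimilarity energy, and the helper name is ours.

```python
# Illustrative binary crown/root labeling via graph cut (pip install PyMaxflow).
import numpy as np
import maxflow

def crown_root_graph_cut(signed_dist, smoothness=1.0):
    """signed_dist > 0 favors 'crown', < 0 favors 'root'; returns a bool mask."""
    g = maxflow.Graph[float]()
    nodeids = g.add_grid_nodes(signed_dist.shape)
    g.add_grid_edges(nodeids, smoothness)           # pairwise smoothness term
    g.add_grid_tedges(nodeids,                      # unary (data) terms
                      np.maximum(signed_dist, 0),   # penalty against 'root'
                      np.maximum(-signed_dist, 0))  # penalty against 'crown'
    g.maxflow()
    return g.get_grid_segments(nodeids)

mask = crown_root_graph_cut(np.random.randn(32, 32, 32))
```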
Medical Physics | 2016
Rana Farah; Seyoun Park; Steven M. Shea; Erik Tryggestad; John Wong; Russell K. Hales; Jin Soo Lee
PURPOSE The purpose of this study is to develop a method to track and examine the correlation between the 3D motion of a lung tumor and an external surrogate with dynamic MRI. METHODS Dynamic MRI was obtained from lung cancer patients. To examine the motion correlation between external surrogates and the tumor, we placed four fiducials on each patient's chest at different locations. We acquired a contiguous multi-slice 2D cine MRI (sagittal) to capture the lung and the whole tumor, followed by a two-slice 2D cine MRI (sagittal) to simultaneously track the tumor and the fiducials. To extract real-time motion, we first reconstructed a phase-binned 4D-MRI from the multi-slice dataset using body area as the respiratory surrogate and a groupwise registration technique. The reconstructed 4D-MRI provided 3D template tumor volumes. Real-time 3D tumor position was calculated by 3D-2D template matching, registering the 3D tumor templates to the cine 2D frames from the two-slice tracking dataset. External surrogate 3D trajectories were derived by matching a 3D geometrical model of the fiducial to image features on the 2D cine (two-slice) tracking datasets. Thus, we could analyze the correlation between the 3D trajectories of the tumor and the external fiducials. RESULTS We tested our method on four lung cancer patients. 3D tumor motion correlated with the external surrogate signal but showed a noticeable phase mismatch. The 3D tumor trajectory showed significant cycle-to-cycle variation, while the external surrogate was not sensitive enough to capture such variations. Additionally, surrogate signals obtained from fiducials at different locations showed noticeable phase mismatches. CONCLUSION Our preliminary data show that external surrogate motion has significant variance in relation to tumor motion. Consequently, surrogate-based therapy should be used with caution. Quantitative evaluation of conventional tumor motion management methods, such as the internal-target-volume-based approach as well as external-surrogate-based gating, is underway. This work was supported by NIH/NCI under grant R21CA178455.
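A minimal sketch of the 3D-2D template-matching step, assuming normalized cross-correlation over in-plane translations (the abstract does not specify the matching criterion): each respiratory phase's 3D template contributes the slice at the cine plane, which is matched against the 2D frame. Names and array shapes are illustrative.

```python
# Hedged sketch: pick the (phase, offset) whose template slice best matches
# the cine frame by normalized cross-correlation.
import numpy as np
from skimage.feature import match_template

def best_phase_and_offset(cine_frame, templates_3d, slice_index):
    """templates_3d: list of 3D tumor templates, one per respiratory phase."""
    best = (-np.inf, None, None)
    for phase, vol in enumerate(templates_3d):
        tmpl = vol[:, :, slice_index]               # template slice at cine plane
        score_map = match_template(cine_frame, tmpl)
        k = np.unravel_index(np.argmax(score_map), score_map.shape)
        if score_map[k] > best[0]:
            best = (score_map[k], phase, k)         # (NCC score, phase, offset)
    return best

frame = np.random.rand(128, 128)
templates = [np.random.rand(32, 32, 8) for _ in range(10)]
print(best_phase_and_offset(frame, templates, slice_index=4))
```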
World Congress on Medical Physics and Biomedical Engineering, 2015 | 2015
Seyoun Park; William Plishker; Adam Robinson; George F. Zaki; Raj Shekhar; T.R. McNutt; Harry Quon; John Wong; Junghoon Lee
A critical requirement for successful adaptive radiotherapy (ART) is knowledge of anatomical changes as well as the actual dose delivered to the patient during the course of treatment. While cone-beam CT (CBCT) is typically used to minimize patient setup error and monitor daily anatomical changes, its poor image quality impedes accurate segmentation of the target structures and the dose computation. We developed an integrated ART software platform that combines fast and accurate image registration, segmentation, and dose computation/accumulation methods. The developed platform automatically links patient images, the radiotherapy plan, beam and dosimetric parameters, and daily treatment information, thus providing an efficient ART workflow. Furthermore, to improve the accuracy of deformable image registration (DIR) between the planning CT and daily CBCTs, we iteratively correct CBCT intensities by matching local intensity histograms in conjunction with the DIR process. We tested our DIR method on six head and neck (HN) cancer cases, producing improved registration quality. Our method produced an overall NMI of 0.663 and NCC of 0.987, outperforming conventional methods by 3.8% and 1.9%, respectively. The overall ART process has been validated on two HN cancer cases, showing differences between the planned and the actually delivered dose values. Both the DIR and dose computation modules are accelerated by GPUs, and the computation time for DIR and dose computation at each fraction is 1 min.
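For concreteness, one common definition of the NMI score reported above, (H(A) + H(B)) / H(A, B) computed from a joint histogram, can be sketched as below; the abstract does not specify the exact normalization used, so this formulation is an assumption.

```python
# Sketch of normalized mutual information from a joint intensity histogram.
import numpy as np

def normalized_mutual_information(a, b, bins=64):
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px, py = pxy.sum(axis=1), pxy.sum(axis=0)
    def entropy(p):
        p = p[p > 0]
        return -np.sum(p * np.log(p))
    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())

ct = np.random.rand(64, 64, 64)
cbct = ct + 0.1 * np.random.rand(64, 64, 64)   # stand-in for a registered CBCT
print(normalized_mutual_information(ct, cbct))
```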
Proceedings of SPIE | 2016
Seyoun Park; Adam Robinson; Harry Quon; A.P. Kiess; Colette Shen; John Wong; William Plishker; Raj Shekhar; Junghoon Lee
In this paper, we propose a CT-CBCT registration method to accurately predict tumor volume change based on daily cone-beam CTs (CBCTs) during radiotherapy. CBCT is commonly used to reduce patient setup error during radiotherapy, but its poor image quality impedes accurate monitoring of anatomical changes. Although the physician's contours drawn on the planning CT can be automatically propagated to daily CBCTs by deformable image registration (DIR), artifacts in CBCT often cause undesirable errors. To improve the accuracy of the registration-based segmentation, we developed a DIR method that iteratively corrects CBCT intensities by local histogram matching. Three popular DIR algorithms (B-spline, demons, and optical flow) with the intensity correction were implemented on a graphics processing unit for efficient computation. We evaluated their performances on six head and neck (HN) cancer cases. For each case, four trained scientists manually contoured the nodal gross tumor volume (GTV) on the planning CT and on every other fraction's CBCT, to which the GTV contours propagated by DIR were compared. The performance was also compared with commercial image registration software based on conventional mutual information (MI), VelocityAI (Varian Medical Systems Inc.). The volume differences (mean ± std in cc) between the average of the manual segmentations and the automatic segmentations are 3.70 ± 2.30 (B-spline), 1.25 ± 1.78 (demons), 0.93 ± 1.14 (optical flow), and 4.39 ± 3.86 (VelocityAI). The proposed method significantly reduced the estimation error by 9% (B-spline), 38% (demons), and 51% (optical flow) over the results using VelocityAI. Although demonstrated only on HN nodal GTVs, the results imply that the proposed method can produce improved segmentation of other critical structures over conventional methods.
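Registration-based contour propagation of the kind evaluated here can be sketched as follows, assuming a dense voxel-displacement field produced by one of the DIR methods above: the planning-CT GTV mask is warped with nearest-neighbor interpolation, and its volume in cc follows from the voxel spacing. The helper names are ours.

```python
# Hedged sketch: warp a binary GTV mask with a deformation field and measure volume.
import numpy as np
from scipy.ndimage import map_coordinates

def propagate_mask(mask, deformation):
    """deformation: (3, Z, Y, X) voxel displacements mapping CBCT -> CT."""
    grid = np.indices(mask.shape).astype(float)
    coords = grid + deformation                    # pull-back sampling positions
    warped = map_coordinates(mask.astype(float), coords, order=0)  # nearest-neighbor
    return warped > 0.5

def volume_cc(mask, spacing_mm=(1.0, 1.0, 1.0)):
    return mask.sum() * np.prod(spacing_mm) / 1000.0   # mm^3 -> cc

mask = np.zeros((64, 64, 64), bool)
mask[20:30, 20:30, 20:30] = True
warped = propagate_mask(mask, np.zeros((3, 64, 64, 64)))   # identity field demo
print(volume_cc(warped))
```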
Medical Physics | 2016
Seyoun Park; T.R. McNutt; William Plishker; Harry Quon; John Wong; Raj Shekhar; Junghoon Lee
PURPOSE Accurate tracking of anatomical changes and computation of the actually delivered dose to the patient are critical for successful adaptive radiation therapy (ART). Additionally, efficient data management and fast processing are practically important for adoption in the clinic, as ART involves a large amount of image and treatment data. The purpose of this study was to develop an accurate and efficient Software platform for CUmulative Dose Assessment (SCUDA) that can be seamlessly integrated into the clinical workflow. METHODS SCUDA consists of deformable image registration (DIR), segmentation, and dose computation modules, together with a graphical user interface. It is connected to our image PACS and radiotherapy informatics databases, from which it automatically queries/retrieves patient images, the radiotherapy plan, beam data, and daily treatment information, thus providing an efficient and unified workflow. For accurate registration of the planning CT and daily CBCTs, the authors iteratively correct CBCT intensities by matching local intensity histograms during the DIR process. Contours of the target tumor and critical structures are then propagated from the planning CT to the daily CBCTs using the computed deformations. The actual delivered daily dose is computed using the registered CT and patient setup information by a superposition/convolution algorithm, and accumulated using the computed deformation fields. Both the DIR and dose computation modules are accelerated by a graphics processing unit. RESULTS The cumulative dose computation process has been validated on 30 head and neck (HN) cancer cases, showing 3.5 ± 5.0 Gy (mean ± STD) absolute mean dose differences between the planned and the actually delivered doses in the parotid glands. On average, DIR, dose computation, and segmentation take 20 s per fraction, and 17 min for a 35-fraction treatment including the additional computation for dose accumulation. CONCLUSIONS The authors developed a unified software platform that provides accurate and efficient monitoring of anatomical changes and computation of the actually delivered dose to the patient, thus realizing an efficient cumulative dose computation workflow. Evaluation on HN cases demonstrated the utility of our platform for monitoring treatment quality and detecting significant dosimetric variations, which are key to successful ART.
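The dose accumulation step can be illustrated with a minimal sketch (not the SCUDA implementation), assuming one deformation field per fraction that maps planning-CT voxels into that day's anatomy; each daily dose grid is pulled back to the planning frame and summed.

```python
# Hedged sketch of deformation-based dose accumulation across fractions.
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(daily_doses, deformations):
    """daily_doses: list of (Z, Y, X) dose grids; deformations: matching
    (3, Z, Y, X) voxel-displacement fields into each fraction's frame."""
    grid = np.indices(daily_doses[0].shape).astype(float)
    total = np.zeros_like(daily_doses[0])
    for dose, defo in zip(daily_doses, deformations):
        total += map_coordinates(dose, grid + defo, order=1)  # trilinear pull-back
    return total

doses = [np.random.rand(32, 32, 32) for _ in range(35)]       # 35 fractions
defos = [np.zeros((3, 32, 32, 32)) for _ in range(35)]        # identity fields demo
cumulative = accumulate_dose(doses, defos)
```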
Computer Vision and Image Understanding | 2016
Hyungil Moon; Geonhwan Ju; Seyoun Park; Hayong Shin
In this paper, we introduce a novel three-dimensional (3D) reconstruction framework for ultrasound images using a piecewise smooth Markov random field (MRF) model from irregularly spaced B-scan images obtained by freehand scanning. Freehand 3D ultrasound imaging is a useful system for various clinical applications, including image-guided surgeries and interventions as well as diagnosis, due to the variety of its scan ranges and its relatively low cost. The reconstruction process plays a key role in this system because sampling irregularities may cause undesired artifacts, and ultrasound images generally suffer from noise and distortions. However, traditional approaches are based on simple geometric interpolations, such as pixel-based or distance-weighted methods, which are sensitive to sampling density and speckle noise. These approaches generally have the additional limitation of smoothing object boundaries. To reduce speckle noise and preserve boundaries, we devised a piecewise smooth (PS) MRF model and developed its optimization algorithm. In our framework, we can easily apply an individual noise level to each image pixel, specified by the characteristics of the ultrasound probe and, possibly, the lateral and axial positions within an image. As a result, the reconstructed volume has sharp object boundaries with reduced speckle noise and artifacts. Our PS-MRF model also provides simple segmentation results within the reconstruction framework, which is useful for various purposes such as clear visualization. The corresponding optimization methods have also been developed, and we tested the framework on a virtual phantom and a physical phantom model. Experimental results show that our method outperforms existing methods in terms of interpolation and segmentation accuracy. With this method, all computations can be performed in practical time and at an appropriate resolution via parallel computing using graphics processing units.
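As a toy illustration of MRF-based reconstruction with per-pixel noise levels, the sketch below runs iterated conditional modes (ICM) with a truncated-quadratic (piecewise-smooth) pairwise term on a 2D grid; it stands in for, and is far simpler than, the paper's PS-MRF model and optimizer.

```python
# Toy ICM sketch: data term weighted per pixel by a noise-dependent weight,
# plus a truncated quadratic pairwise term that preserves sharp boundaries.
import numpy as np

def icm_denoise(obs, noise_w, labels, beta=1.0, n_iters=5):
    """obs: noisy 2D image; noise_w: per-pixel data-term weight (lower where
    the probe is noisier); labels: candidate intensity levels."""
    est = obs.copy()
    for _ in range(n_iters):
        for i in range(1, obs.shape[0] - 1):
            for j in range(1, obs.shape[1] - 1):
                nbrs = [est[i-1, j], est[i+1, j], est[i, j-1], est[i, j+1]]
                costs = [noise_w[i, j] * (obs[i, j] - l) ** 2
                         + beta * sum(min((l - n) ** 2, 1.0) for n in nbrs)
                         for l in labels]
                est[i, j] = labels[int(np.argmin(costs))]
    return est

img = np.random.rand(32, 32)
out = icm_denoise(img, np.ones_like(img), labels=np.linspace(0, 1, 8))
```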