Yugang Min
University of California, Los Angeles
Publications
Featured research published by Yugang Min.
International Journal of Radiation Oncology Biology Physics | 2015
X. Sharon Qi; Anand P. Santhanam; John Neylon; Yugang Min; Tess Armstrong; Ke Sheng; R Staton; Jason Pukala; Andrew Pham; Daniel A. Low; Steve P. Lee; Michael Steinberg; R. Manon; Allen M. Chen; Patrick A. Kupelian
PURPOSE The purpose of this study was to systematically monitor anatomic variations and their dosimetric consequences during intensity modulated radiation therapy (IMRT) for head and neck (H&N) cancer by using a graphics processing unit (GPU)-based deformable image registration (DIR) framework. METHODS AND MATERIALS Eleven H&N cancer patients undergoing IMRT with daily megavoltage CT and weekly kilovoltage CT (kVCT) scans were included in this analysis. Pretreatment kVCTs were automatically registered with their corresponding planning CTs through a GPU-based DIR framework. The deformation of each contoured structure in the H&N region was computed to account for nonrigid changes in the patient setup. The Jacobian determinants of the planning target volumes and the surrounding critical structures were used to quantify anatomic volume changes. The actual delivered dose was calculated accounting for the organ deformation. The dose distribution uncertainties due to registration errors were estimated using a landmark-based gamma evaluation. RESULTS Dramatic interfractional anatomic changes were observed. During the treatment course of 6 to 7 weeks, the parotid gland volumes changed by up to 34.7%, and the center-of-mass displacement of the 2 parotid glands varied in the range of 0.9 to 8.8 mm. For the primary treatment volume, the cumulative minimum, mean, and equivalent uniform doses assessed by the weekly kVCTs were lower than the planned doses by up to 14.9% (P=.14), 2% (P=.39), and 7.3% (P=.05), respectively. The cumulative mean doses were significantly higher than the planned dose for the left parotid (P=.03) and right parotid glands (P=.006). The computation, including DIR and dose accumulation, was ultrafast (∼45 seconds) with registration accuracy at the subvoxel level. CONCLUSIONS A systematic analysis of anatomic variations in the H&N region and their dosimetric consequences is critical to improving treatment efficacy. Nearly real-time assessment of anatomic and dosimetric variations is feasible using the GPU-based DIR framework. Clinical implementation of this technology may enable timely plan adaptation and improved outcomes.
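A note on the volume measure used above: the Jacobian determinant of the deformation maps each voxel to its local volume change (values >1 indicate expansion, <1 shrinkage). A minimal NumPy sketch of this computation, not the authors' GPU implementation and with placeholder array names, could look like:

```python
import numpy as np

def jacobian_determinant(dvf, spacing=(1.0, 1.0, 1.0)):
    """Voxel-wise Jacobian determinant of a deformation vector field.

    dvf: array of shape (3, Z, Y, X), displacement (mm) along z, y, x.
    spacing: voxel spacing (mm) along z, y, x.
    """
    # Gradient of the mapping phi(x) = x + u(x): J = I + grad(u)
    grads = [np.gradient(dvf[i], *spacing) for i in range(3)]  # du_i/dx_j
    jac = np.empty(dvf.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            jac[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    # det over the trailing 3x3 axes gives one value per voxel
    return np.linalg.det(jac)
```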
International Journal of Biomedical Imaging | 2012
Olusegun J. Ilegbusi; Zhiliang Li; Behnaz Seyfi; Yugang Min; Sanford L. Meeks; Patrick A. Kupelian; Anand P. Santhanam
Lung radiotherapy benefits greatly when the tumor motion caused by breathing can be modeled. The aim of this paper is to demonstrate the importance of using anisotropic and subject-specific tissue elasticity for simulating the airflow inside the lungs. A computational fluid dynamics (CFD) based approach is presented to simulate airflow inside a subject-specific deformable lung for modeling lung tumor motion and the motion of the surrounding tissues during radiotherapy. A flow-structure interaction technique is employed that simultaneously models airflow and lung deformation. The lung is modeled as a poroelastic medium with subject-specific anisotropic poroelastic properties on a geometry reconstructed from four-dimensional computed tomography (4DCT) scan datasets of humans with lung cancer. The results include the 3D anisotropic lung deformation for a known airflow pattern inside the lungs. The effects of anisotropy on both the spatiotemporal volumetric lung displacement and the regional lung hysteresis are also presented.
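The paper's poroelastic model is more elaborate than can be shown here, but as a hedged illustration of how anisotropic, subject-specific elasticity enters such a model, an orthotropic stiffness matrix can be assembled in Voigt notation (the moduli below are placeholders, not the paper's fitted values):

```python
import numpy as np

def orthotropic_stiffness(E, nu, G):
    """6x6 orthotropic stiffness matrix in Voigt notation.

    E: (E1, E2, E3) Young's moduli; nu: (nu12, nu13, nu23) Poisson
    ratios; G: (G12, G13, G23) shear moduli. Axis labels illustrative.
    """
    E1, E2, E3 = E
    nu12, nu13, nu23 = nu
    G12, G13, G23 = G
    # Build the compliance matrix S, then invert: C = S^-1
    S = np.zeros((6, 6))
    S[0, 0], S[1, 1], S[2, 2] = 1/E1, 1/E2, 1/E3
    S[0, 1] = S[1, 0] = -nu12/E1
    S[0, 2] = S[2, 0] = -nu13/E1
    S[1, 2] = S[2, 1] = -nu23/E2
    S[3, 3], S[4, 4], S[5, 5] = 1/G23, 1/G13, 1/G12
    return np.linalg.inv(S)

# Placeholder moduli (Pa), for illustration only:
# C = orthotropic_stiffness((5e3, 5e3, 12e3), (0.3, 0.3, 0.3), (2e3, 2e3, 2e3))
```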
Medical Physics | 2015
Anand P. Santhanam; Yugang Min; Patrick A. Kupelian; Daniel A. Low
Purpose: To employ a multi-3D-camera system for patient and treatment equipment tracking, using the Kinect v2 camera system. Methods: The system has two cameras, a color camera (RGB) and a time-of-flight infrared camera (IR). The IR camera measures the distance between the camera and the subject for each IR pixel, producing a point cloud of fraxels (fragment pixels, our terminology for a measured location plus color generated by a 3D camera). We intend to use multiple Kinect cameras to generate a real-time digital model of the treatment room. To enable this, each camera's fraxels need to be quantitative and registered to the linac coordinate system. We developed a calibration system consisting of a large flat board with IR and visible markers at measured locations. Images of the board were acquired with both cameras, and the relative geometry of the board to the camera was physically measured. The board images were used to characterize the geometric lens distortion, and the marker locations were used to calibrate the fraxel depth scales. The cameras were placed in the same room and pointed at a localization jig consisting of a checkerboard aligned to the room coordinate system. The camera rotations and translations to the board were determined for each camera and applied to the camera fraxels to generate a single point cloud of the room. Results: The transformations between the cameras yielded a 3D treatment space accuracy of <2 mm error in a radiotherapy setup within 500 mm of isocenter. A novel computing framework using multiple Intel NUC systems with dual E6 processors allowed multiple cameras to be used simultaneously, greatly reducing occlusions. Conclusions: The proposed 3D camera system has the potential to provide real-time access to the treatment room from outside and remote locations, and to automate processes such as collision detection.
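The per-camera registration to the room/linac coordinate system described above amounts to estimating a rigid transform from corresponding points (e.g., checkerboard corners seen by a camera and their known room coordinates). A standard least-squares solution, the Kabsch algorithm, is sketched below; this is a generic method, not necessarily the calibration code used in the study:

```python
import numpy as np

def rigid_transform(src, dst):
    """Least-squares rigid transform (R, t) mapping src -> dst (Kabsch).

    src, dst: (N, 3) corresponding 3D points.
    Returns rotation R (3x3) and translation t (3,) with dst ~ R @ src + t.
    """
    c_src, c_dst = src.mean(axis=0), dst.mean(axis=0)
    # Cross-covariance of the centered point sets
    H = (src - c_src).T @ (dst - c_dst)
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection in the least-squares solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = c_dst - R @ c_src
    return R, t
```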
Frontiers in Oncology | 2013
Anand P. Santhanam; Yugang Min; Tai H. Dou; Patrick A. Kupelian; Daniel A. Low
Radiotherapy is safely employed for treating a wide variety of cancers. The radiotherapy workflow includes precise positioning of the patient in the intended treatment position. While trained radiation therapists conduct patient positioning, consultation is occasionally required from other experts, including the radiation oncologist, dosimetrist, or medical physicist. In many circumstances, including rural clinics and developing countries, this expertise is not immediately available, so the patient positioning concerns of the treating therapists may go unaddressed. In this paper, we present a framework that enables remotely located experts to virtually collaborate and be present inside the 3D treatment room when necessary. A multi-3D-camera framework was used for acquiring the 3D treatment space. A client–server framework enabled the acquired 3D treatment room to be visualized in real-time. The computational tasks that would normally occur on the client side were offloaded to the server side to enable hardware flexibility on the client side. On the server side, a client-specific real-time stereo rendering of the 3D treatment room was employed using a scalable multi-graphics-processing-unit (GPU) system. The rendered 3D images were then encoded using GPU-based H.264 encoding for streaming. Results showed that for a stereo image size of 1280 × 960 pixels, experts with high-speed gigabit Ethernet connectivity were able to visualize the treatment space at approximately 81 frames per second. For experts remotely located and using a 100 Mbps network, the treatment space visualization occurred at 8–40 frames per second, depending upon the network bandwidth. This work demonstrated the feasibility of remote real-time stereoscopic patient setup visualization, enabling expansion of high quality radiation therapy into challenging environments.
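As a rough consistency check on the reported frame rates (assuming, hypothetically, that the network link is the bottleneck), the implied average compressed stereo-frame size can be bounded:

```python
def implied_frame_size_mbit(link_mbps, fps):
    """Upper bound on the average compressed frame size (Mbit)
    implied by a measured frame rate, assuming the link is saturated."""
    return link_mbps / fps

# Reported figures (assumption: network-bound):
#   1000 Mbps at 81 fps   -> <= ~12.3 Mbit/frame
#    100 Mbps at 8-40 fps -> <= ~2.5-12.5 Mbit/frame
# i.e., both operating points imply frame sizes of the same order,
# consistent with a single encoder configuration adapting its rate.
```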
Medical Physics | 2016
Anand P. Santhanam; Yugang Min; P Beron; Nzhde Agazaryan; Patrick A. Kupelian; Daniel A. Low
PURPOSE Patient safety hazards, such as the wrong patient or site being treated, can lead to catastrophic results. The purpose of this project is to automatically detect potential patient safety hazards during the radiotherapy setup and alert the therapist before treatment is initiated. METHODS We employed a set of co-located and co-registered 3D cameras placed inside the treatment room. Each camera provided a point cloud of fraxels (fragment pixels with 3D depth information). Each of the cameras was calibrated using a custom-built calibration target to provide 3D information with less than 2 mm error in the 500 mm neighborhood around the isocenter. To identify potential patient safety hazards, the treatment room components and the patient's body needed to be identified and tracked in real-time. For feature recognition purposes, we used graph-cut based feature recognition with principal component analysis (PCA) based feature-to-object correlation to segment the objects in real-time. Changes in the objects' positions were tracked using the CamShift algorithm. The 3D object information was then stored for each classified object (e.g., gantry, couch). A deep learning framework was then used to analyze all the classified objects in both 2D and 3D and to fine-tune a convolutional network for object recognition. The number of network layers was optimized to identify the tracked objects with >95% accuracy. RESULTS Our systematic analyses showed that the system was effectively able to recognize wrong patient setups and wrong patient accessories. The combined usage of camera color and depth information enabled a topology-preserving approach to verify patient safety hazards automatically, even in scenarios where the depth information is only partially available. CONCLUSION By utilizing the 3D cameras inside the treatment room and deep learning based image classification, potential patient safety hazards can be effectively avoided.
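CamShift is a standard OpenCV tracker, so the tracking step above can be illustrated with real API calls. The sketch below tracks a hue-histogram region in 2D only; the paper's fusion with depth and its graph-cut/PCA segmentation are omitted, and all names are illustrative:

```python
import cv2

def track_object(first_frame, frames, roi):
    """Track an object across frames with CamShift on a hue histogram.

    first_frame: BGR image used to build the model histogram.
    frames: iterable of subsequent BGR images.
    roi: (x, y, w, h) initial search window around the object.
    Yields (rotated_rect, window) per frame.
    """
    x, y, w, h = roi
    # Histogram model of the object's hue distribution
    hsv_roi = cv2.cvtColor(first_frame[y:y+h, x:x+w], cv2.COLOR_BGR2HSV)
    hist = cv2.calcHist([hsv_roi], [0], None, [180], [0, 180])
    cv2.normalize(hist, hist, 0, 255, cv2.NORM_MINMAX)
    term = (cv2.TERM_CRITERIA_EPS | cv2.TERM_CRITERIA_COUNT, 10, 1.0)
    window = roi
    for frame in frames:
        hsv = cv2.cvtColor(frame, cv2.COLOR_BGR2HSV)
        # Back-project the model histogram, then shift/resize the window
        backproj = cv2.calcBackProject([hsv], [0], hist, [0, 180], 1)
        rot_rect, window = cv2.CamShift(backproj, window, term)
        yield rot_rect, window
```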
Medical Physics | 2017
John Neylon; Yugang Min; Daniel A. Low; Anand P. Santhanam
Purpose: A critical step in the adaptive radiotherapy (ART) workflow is deformably registering the simulation CT with the daily or weekly volumetric imaging. Quantifying the deformable image registration (DIR) accuracy under these circumstances is a complex task due to the lack of known ground-truth landmark correspondences between the source and target data. Generating landmarks manually (using experts) is time-consuming, and limited by image quality and observer variability. While image similarity metrics (ISM) may be used as an alternative approach to quantify the registration error, there is a need to characterize the ISM values by developing a nonlinear cost function and to translate them into physical distance measures in order to enable fast, quantitative comparison of registration performance. Methods: In this paper, we present a proof-of-concept methodology for automated quantification of DIR performance. A nonlinear cost function was developed as a combination of ISM values, governed by the following two expectations for an accurate registration: (a) the deformed data obtained by transforming the simulation CT data with the deformation vector field (DVF) should match the target image data with near-perfect similarity, and (b) the similarity between the simulation CT and the deformed data should match the similarity between the simulation CT and the target image data. A deep neural network (DNN) was developed that translated the cost function values into an actual physical distance measure. To train the neural network, patient-specific biomechanical models of the head-and-neck anatomy were employed. The biomechanical model anatomy was systematically deformed to represent changes in patient posture and physiological regression. Volumetric source and target images with known ground-truth deformation vector fields were then generated, representing the daily or weekly imaging data. The annotated data were then fed through a supervised machine learning process, iteratively optimizing a nonlinear model able to predict the target registration error (TRE) for given ISM values. The cost function values for sub-volumes enclosing critical radiotherapy structures in the head-and-neck region were computed and compared with the ground-truth TRE values. Results: When examining different combinations of registration parameters for a single DIR, the neural network was able to quantify the DIR error to within a single voxel for 95% of the sub-volumes examined. In addition, correlations between the neural network predicted error and the ground-truth TRE for the planning target volume and the parotid contours were consistently observed to be >0.9. For variations in posture and tumor regression across 10 different patients, patient-specific neural networks predicted the TRE to within a single voxel for >90% of sub-volumes on average. Conclusions: The formulation presented in this paper demonstrates the ability to quantify registration performance quickly and accurately. The DNN provided the necessary level of abstraction to estimate a quantified TRE from the ISM expectations described above, when sufficiently trained on annotated data. In addition, the biomechanical models provided the DNN with the required variations in patient posture and physiological regression. With further development and validation on clinical patient data, such networks have potential impact in patient- and site-specific optimization and in streamlining clinical registration validation.
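As a sketch of the ISM-to-TRE regression idea (not the authors' network; the feature set and layer sizes are assumptions), a small PyTorch MLP could map a vector of similarity metrics to a predicted TRE in millimeters:

```python
import torch
import torch.nn as nn

class TRERegressor(nn.Module):
    """Small MLP mapping image-similarity features to a TRE estimate (mm).

    The feature set (e.g., NCC, MSE, MI between the fixed/deformed and
    fixed/target pairs) and the layer sizes are illustrative only; the
    paper does not specify its architecture at this level of detail.
    """
    def __init__(self, n_features=6):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(n_features, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted TRE in mm
        )

    def forward(self, x):
        return self.net(x)

# Training sketch (per the paper's scheme): features come from
# synthetically deformed biomechanical phantoms, targets are the
# known ground-truth TREs; optimize nn.MSELoss over that data.
```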
Medical Physics | 2013
Yugang Min; Anand P. Santhanam; Daniel A. Low; Patrick A. Kupelian
Purpose: To investigate methods for 3D camera-based monitoring and quantitative tracking in the radiation therapy treatment room. Methods and Materials: A laboratory test-bed for installing 3D cameras and testing 3D tracking algorithms was built to develop techniques for 3D treatment room monitoring. Kinect cameras were used as the 3D camera platforms. The cameras were anchored to a rigid framework attached to the laboratory walls to avoid vibrations and camera motion drift. A spatio-temporal clustering algorithm was employed to remove camera fluttering noise: features whose depth estimation varied by more than 3 mm within 1/3 of a second were removed. A client–server framework enabled the 3D treatment space to be visualized by remotely located experts in real-time. A scalable multi-GPU system that enabled 3D treatment space rendering in stereo and in real-time was employed on the server side. For tracking both the patient body surface and the treatment room equipment, we investigated two different algorithms, using a Point Feature Histogram and a multi-resolution 3D Hough transform. For real-time tracking, we also employed 2D multi-resolution optical flow for effective temporal tracking of the radiation equipment as well as the patient anatomy. Results: For remotely located experts, the treatment space visualization was conducted at 40 fps with a resolution of 640 × 480 pixels. Sub-millimeter accuracy for tracking the patient body surface as well as the treatment room equipment was obtained when the tracked features were <120 cm from any of the 3D cameras, degrading with increasing distance. Conclusion: Remote real-time stereoscopic patient setup visualization is feasible, enabling expansion of high quality radiation therapy into challenging environments. For both tested tracking approaches, the run-time analysis showed the need for GPU hardware to track features in real-time.
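The flutter-rejection rule quoted above (depth variation >3 mm within 1/3 of a second) is simple to express directly. A minimal NumPy sketch, assuming depth maps arrive at a known frame rate:

```python
import numpy as np

def stable_pixel_mask(depth_frames, fps=30, window_s=1/3, tol_mm=3.0):
    """Mask out pixels whose depth fluctuates more than tol_mm within
    a short time window (the flutter-rejection criterion above).

    depth_frames: (T, H, W) depth maps in mm at the given frame rate.
    Returns a boolean (H, W) mask of stable pixels over the last window.
    """
    n = max(2, int(round(fps * window_s)))  # frames in the window
    recent = depth_frames[-n:]
    span = recent.max(axis=0) - recent.min(axis=0)
    return span <= tol_mm
```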
Medical Physics | 2013
Anand P. Santhanam; T Dou; Yugang Min; Sanford L. Meeks; Patrick A. Kupelian
PURPOSE To perform a parametric study of the effect of registration parameters on 4D-CT image registration accuracy. METHODS AND MATERIALS A GPU-based 4D-CT image registration framework that registers the 4D lung anatomy using multi-level, multi-contrast optical flow was used for this study. A set of 14 4D-CT datasets was employed. The multi-level lung anatomy was segmented into the surface contour, blood vessels, and parenchyma regions using OsiriX. The registration started at the lowest resolution of a 3D volume. Within each resolution level, the volumes were registered using optical flow. The motion field was first computed for surface contour pairs at the lowest resolution; at this stage, only the voxels on the surface contour (the lowest level of anatomical representation) were included. GPU-based thin-plate splines were applied to the motion field so that voxels surrounding the surface contour had an initial displacement closer to the actual value. The motion field was iteratively updated until the highest (original) resolution of the volume was processed. RESULTS The GPU implementation provided a speed-up of >50x compared to the CPU implementation. The registration accuracy varied non-linearly with the kernel size. For both the kernel size and the smoothness factor, a non-linear correlation with registration accuracy was observed, with optimal values of 5 mm³ and 200, respectively. The accuracy improved with the number of resolution and contrast levels, with 4 and 3 levels, respectively, providing optimal registration accuracy. Finally, using the first 3 anatomical level representations provided the optimum registration accuracy, as opposed to 4 or more levels. CONCLUSION The parametric analysis showed that the relationship between the registration parameters and registration accuracy is non-linear. A patient-breathing- and CT-scanner-specific study will quantitatively relate the registration errors to treatment planning and delivery.
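The thin-plate-spline step that spreads the sparse surface-contour motion to surrounding voxels can be approximated on the CPU with SciPy's RBF interpolator (a stand-in for the paper's GPU implementation; function and variable names are illustrative):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

def propagate_motion(contour_pts, contour_disp, query_pts):
    """Spread a sparse contour motion field to nearby voxels with a
    thin-plate-spline interpolant.

    contour_pts: (N, 3) contour voxel coordinates.
    contour_disp: (N, 3) their optical-flow displacements (mm).
    query_pts: (M, 3) voxel coordinates needing an initial displacement.
    Returns (M, 3) interpolated displacements.
    """
    tps = RBFInterpolator(contour_pts, contour_disp,
                          kernel='thin_plate_spline')
    return tps(query_pts)
```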
Medical Physics | 2013
B White; David Thomas; J Lamb; S Jani; S Gaudio; Yugang Min; Subashini Srinivasan; Daniel B. Ennis; Anand P. Santhanam; Daniel A. Low
PURPOSE To improve the accuracy of a quantitative breathing motion model by developing a cardiac-induced lung tissue motion model from MRI data. METHODS Ten healthy volunteers were imaged on a 1.5 T MR scanner. A total of 24 short-axis and 18 radial views were acquired during a series of 12-15 s breath-holds. The planar views were combined to create a 3D view of the anatomy. Each view contained 30 equally partitioned frames beginning with the end-diastolic cardiac phase. A single-level 3D optical flow deformable image registration algorithm was used to measure the difference in tissue position between the end-diastolic image and the remaining phases. The maximum displacement magnitude and direction obtained in this manner were defined as g(X0), the cardiac-induced lung tissue motion. The motion model was assumed to be linear and the motion trajectory a product of g(X0) and h, where h was a phase-dependent scalar with a value of 0 at end-diastole and 1 at the maximum tissue displacement phase. The model was evaluated by comparing the cardiac-induced lung tissue motion, using a lower motion threshold of 0.3 mm, with the residual model error. RESULTS The deformable image registration algorithm was found to be highly accurate. Lung tissue near the myocardium was observed to have motion as large as 5 mm. The average relative error of the model was 36.5% for sub-millimeter voxel motion. The average relative error decreased with greater voxel motion, down to 5.6% for >3 mm voxel motion. The overall average model residual error was 0.19 ± 0.18 mm. CONCLUSION The magnitude of cardiac-induced lung tissue displacement was enough to degrade the accuracy of quantitative lung tissue motion modeling. The use of a single location-independent, phase-dependent term provided suitable model accuracy. Introducing a cardiac motion term has the potential to reduce the error in breathing motion models caused by uncompensated cardiac-induced lung tissue motion. This work was supported in part by NIH R01CA096679 and R01CA116712.
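The linear model described above, trajectory = h(phase) × g(X0), and its thresholded relative-error evaluation are compact enough to state in code (a sketch with assumed array layouts, not the study's pipeline):

```python
import numpy as np

def cardiac_motion(g, h):
    """Evaluate the linear cardiac-motion model u(x, p) = h[p] * g(x).

    g: (Z, Y, X, 3) maximum cardiac-induced displacement per voxel (mm).
    h: (P,) phase-dependent scalars, 0 at end-diastole and 1 at the
       phase of maximum displacement.
    Returns (P, Z, Y, X, 3) modeled displacement over the cardiac cycle.
    """
    return h[:, None, None, None, None] * g[None]

def mean_relative_error(measured, modeled, threshold_mm=0.3):
    """Mean relative model error over voxels moving more than threshold,
    mirroring the paper's 0.3 mm lower motion threshold."""
    mag = np.linalg.norm(measured, axis=-1)
    mask = mag > threshold_mm
    resid = np.linalg.norm(measured - modeled, axis=-1)
    return (resid[mask] / mag[mask]).mean()
```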
British Journal of Radiology | 2018
Katelyn Hasse; John Neylon; Yugang Min; D O'Connell; Percy Lee; Daniel A. Low; Anand P. Santhanam
OBJECTIVE: Lung tissue elasticity is an effective spatial representation for chronic obstructive pulmonary disease (COPD) phenotypes and pathophysiology. We investigated a novel imaging biomarker based on the voxel-by-voxel distribution of lung tissue elasticity. Our approach combines imaging and biomechanical modeling to characterize tissue elasticity. METHODS: We acquired 4DCT images for 13 lung cancer patients with known COPD diagnoses based on GOLD 2017 criteria. Deformation vector fields (DVFs) from the deformable registration of end-inhalation and end-exhalation breathing phases were taken to be the ground-truth. A linear elastic biomechanical model was assembled from end-exhalation datasets with a density-guided initial elasticity distribution. The elasticity estimation was formulated as an iterative process, in which the elasticity was optimized based on its ability to reconstruct the ground-truth. An imaging biomarker (denoted YM1-3) derived from the optimized elasticity distribution was compared with the current gold standard, RA950, using confusion matrix and area under the receiver operating characteristic (AUROC) curve analysis. RESULTS: The estimated elasticity had 90% accuracy when representing the ground-truth DVFs. The YM1-3 biomarker had higher diagnostic accuracy (86% vs 71%), higher sensitivity (0.875 vs 0.5), and a higher AUROC (0.917 vs 0.875) compared to RA950. Along with acting as an effective spatial indicator of lung pathophysiology, the YM1-3 biomarker proved to be a better indicator for diagnostic purposes than RA950. CONCLUSIONS: Overall, the results suggest that, as a biomarker, lung tissue elasticity will lead to new end points for clinical trials and new targeted treatments for COPD subgroups. ADVANCES IN KNOWLEDGE: The derivation of elasticity information directly from 4DCT imaging data is a novel method for performing lung elastography. The work demonstrates the need for a mechanics-based biomarker for representing lung pathophysiology.
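For reference, the RA950 comparator used above is a standard CT emphysema index: the fraction of lung voxels below -950 HU on inspiratory CT. A minimal sketch (assuming an HU volume and a precomputed lung mask):

```python
import numpy as np

def ra950(ct_hu, lung_mask):
    """RA950: fraction of lung voxels below -950 HU.

    ct_hu: CT volume in Hounsfield units.
    lung_mask: boolean mask selecting lung voxels.
    """
    lung = ct_hu[lung_mask]
    return float((lung < -950).mean())
```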