Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where S Dhou is active.

Publication


Featured research published by S Dhou.


Physics in Medicine and Biology | 2015

3D fluoroscopic image estimation using patient-specific 4DCBCT-based motion models

S Dhou; M. Hurwitz; P Mishra; Weixing Cai; Joerg Rottmann; Ruijiang Li; Christopher S. Williams; M Wagar; R Berbeco; Dan Ionascu; John H. Lewis

3D fluoroscopic images represent volumetric patient anatomy during treatment with high spatial and temporal resolution. 3D fluoroscopic images estimated using motion models built from 4DCT images, taken days or weeks prior to treatment, do not reliably represent patient anatomy during treatment. In this study, we developed and performed an initial evaluation of techniques to build patient-specific motion models from 4D cone-beam CT (4DCBCT) images, taken immediately before treatment, and used these models to estimate 3D fluoroscopic images based on 2D kV projections captured during treatment. We evaluate the accuracy of the 3D fluoroscopic images by comparison to ground truth digital and physical phantom images. The performance of 4DCBCT-based and 4DCT-based motion models is compared in simulated clinical situations representing tumor baseline shift or initial patient positioning errors. The results of this study demonstrate the ability of 4DCBCT imaging to generate motion models that can account for changes that cannot be accounted for with 4DCT-based motion models. When simulating tumor baseline shift and patient positioning errors of up to 5 mm, the average tumor localization error and the 95th percentile error in six datasets were 1.20 and 2.2 mm, respectively, for 4DCBCT-based motion models. 4DCT-based motion models applied to the same six datasets resulted in an average tumor localization error and 95th percentile error of 4.18 and 5.4 mm, respectively. Analysis of voxel-wise intensity differences was also conducted for all experiments. In summary, this study demonstrates the feasibility of 4DCBCT-based 3D fluoroscopic image generation in digital and physical phantoms and shows the potential advantage of 4DCBCT-based 3D fluoroscopic image estimation when there are changes in anatomy between the time of 4DCT imaging and the time of treatment delivery.
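The abstract above rests on a PCA-based motion model built from per-phase displacement vector fields. The following is a minimal sketch of that idea, assuming the DVFs are already available as NumPy arrays from a deformable registration step; array shapes, function names, and the choice of three modes are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch of building a PCA respiratory motion model from a set of
# displacement vector fields (DVFs), one per 4DCBCT phase, each registered
# to a common reference phase. Shapes and names are illustrative assumptions.
import numpy as np

def build_pca_motion_model(dvfs, n_modes=3):
    """dvfs: array of shape (n_phases, nx, ny, nz, 3) holding per-phase DVFs."""
    n_phases = dvfs.shape[0]
    flat = dvfs.reshape(n_phases, -1)           # each DVF becomes one row vector
    mean_dvf = flat.mean(axis=0)                # mean respiratory displacement
    centered = flat - mean_dvf
    # SVD of the small phase-by-voxel matrix yields the PCA eigenvectors
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    eigenvectors = vt[:n_modes]                 # dominant motion modes
    weights = centered @ eigenvectors.T         # per-phase mode weights
    return mean_dvf, eigenvectors, weights

def synthesize_dvf(mean_dvf, eigenvectors, w, shape):
    """Reconstruct a DVF for arbitrary mode weights w (e.g. fitted from kV projections)."""
    return (mean_dvf + w @ eigenvectors).reshape(shape)
```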


International Conference on Image Processing | 2014

Motion-based projection generation for 4D-CT reconstruction

S Dhou; Geoffrey D. Hugo; Alen Docef

A method for reducing streaking artifacts in 4D-CT reconstruction by generating additional projections is proposed. This method uses optical flow to track anatomical motion across the complete set of projections and then uses this information to compute interpolated projections while compensating for breathing motion. Original and interpolated projections, all belonging to one respiratory phase, are used to reconstruct a 4D-CT volume. Experimental results showed that the proposed method reduces artifacts and blurring in reconstructed 4D-CT volumes and improves overall image quality.
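As a rough illustration of the projection-interpolation idea, the sketch below uses OpenCV's Farneback optical flow as a stand-in for the flow-based tracking described in the abstract and warps one projection part-way toward its neighbor; it is not the published algorithm.

```python
# A minimal sketch of projection interpolation via dense optical flow, assuming
# projections are available as 2D NumPy arrays. The half-way warp is an
# approximation of motion-compensated interpolation, not the authors' code.
import cv2
import numpy as np

def interpolate_projection(proj_a, proj_b, t=0.5):
    """Estimate a projection at fraction t of the way from proj_a to proj_b."""
    # Farneback flow expects 8-bit single-channel images
    a8 = cv2.normalize(proj_a, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    b8 = cv2.normalize(proj_b, None, 0, 255, cv2.NORM_MINMAX).astype(np.uint8)
    flow = cv2.calcOpticalFlowFarneback(a8, b8, None, 0.5, 3, 15, 3, 5, 1.2, 0)
    h, w = proj_a.shape
    grid_x, grid_y = np.meshgrid(np.arange(w, dtype=np.float32),
                                 np.arange(h, dtype=np.float32))
    # Approximate the intermediate frame by sampling proj_a part-way along the flow
    map_x = grid_x - t * flow[..., 0]
    map_y = grid_y - t * flow[..., 1]
    return cv2.remap(proj_a.astype(np.float32), map_x, map_y, cv2.INTER_LINEAR)
```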


Medical Physics | 2016

SU-C-209-02: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Clinical Patient Images

S Dhou; Dan Ionascu; Weixing Cai; M. Hurwitz; Christopher S. Williams; John H. Lewis

PURPOSE: We develop a method to generate time-varying volumetric images (3D fluoroscopic images) using patient-specific motion models derived from four-dimensional cone-beam CT (4DCBCT). METHODS: Motion models are derived by selecting one 4DCBCT phase as a reference image and registering the remaining images to it. Principal component analysis (PCA) is performed on the resultant displacement vector fields (DVFs) to create a reduced set of PCA eigenvectors that capture the majority of respiratory motion. 3D fluoroscopic images are generated by optimizing the weights of the PCA eigenvectors iteratively through comparison of measured cone-beam projections and simulated projections generated from the motion model. This method was applied to images from five lung-cancer patients. The spatial accuracy of this method is evaluated by comparing landmark positions in the 3D fluoroscopic images to manually defined ground truth positions in the patient cone-beam projections. RESULTS: 4DCBCT motion models were shown to accurately generate 3D fluoroscopic images when the patient cone-beam projections contained clearly visible structures moving with respiration (e.g., the diaphragm). When no moving anatomical structure was clearly visible in the projections, the 3D fluoroscopic images generated did not capture breathing deformations, and reverted to the reference image. For the subset of 3D fluoroscopic images generated from projections with visibly moving anatomy, the average tumor localization error and the 95th percentile were 1.6 mm and 3.1 mm, respectively. CONCLUSION: This study showed that 4DCBCT-based 3D fluoroscopic images can accurately capture respiratory deformations in a patient dataset, so long as the cone-beam projections used contain visible structures that move with respiration. For clinical implementation of 3D fluoroscopic imaging for treatment verification, an imaging field of view (FOV) that contains visible structures moving with respiration should be selected. If no other appropriate structures are visible, the images should include the diaphragm. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA.
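The iterative weight optimization described in METHODS can be sketched as a least-squares fit of the PCA coefficients to a measured projection. In the sketch below, `warp_image` and `forward_project` are hypothetical placeholders for a DVF-based warping routine and a projector matched to the treatment kV geometry; neither is part of the published method.

```python
# Hedged sketch of the weight-optimization step: PCA mode weights are adjusted
# so that a simulated projection of the deformed reference image matches the
# measured cone-beam projection. The placeholders are assumptions.
import numpy as np
from scipy.optimize import minimize

def estimate_weights(measured_proj, ref_image, mean_dvf, eigenvectors,
                     warp_image, forward_project, w0=None):
    n_modes = eigenvectors.shape[0]
    w0 = np.zeros(n_modes) if w0 is None else w0

    def cost(w):
        dvf = (mean_dvf + w @ eigenvectors).reshape(ref_image.shape + (3,))
        simulated = forward_project(warp_image(ref_image, dvf))
        return np.sum((simulated - measured_proj) ** 2)   # projection mismatch

    result = minimize(cost, w0, method="Powell")           # derivative-free search
    return result.x                                         # optimized mode weights
```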


Medical Physics | 2016

SU-C-BRA-07: Variability of Patient-Specific Motion Models Derived Using Different Deformable Image Registration Algorithms for Lung Cancer Stereotactic Body Radiotherapy (SBRT) Patients

S Dhou; Dan Ionascu; John E. Lewis; Christopher S. Williams

PURPOSE: To study the variability of patient-specific motion models derived from 4-dimensional CT (4DCT) images using different deformable image registration (DIR) algorithms for lung cancer stereotactic body radiotherapy (SBRT) patients. METHODS: Motion models are derived by 1) applying DIR between each 4DCT image and a reference image, resulting in a set of displacement vector fields (DVFs), and 2) performing principal component analysis (PCA) on the DVFs, resulting in a motion model (a set of eigenvectors capturing the variations in the DVFs). Three DIR algorithms were used: 1) Demons, 2) Horn-Schunck, and 3) iterative optical flow. The motion models derived were compared using patient 4DCT scans. RESULTS: Motion models were derived and the variations were evaluated according to three criteria: 1) the average root mean square (RMS) difference, which measures the absolute difference between the components of the eigenvectors, 2) the dot product between the eigenvectors, which measures the angular difference between the eigenvectors in space, and 3) the Euclidean Model Norm (EMN), which is calculated by summing the dot products of an eigenvector with the first three eigenvectors from the reference motion model in quadrature. EMN measures how well an eigenvector can be reconstructed using another motion model derived using a different DIR algorithm. Results showed that, compared to a reference motion model (derived using the Demons algorithm), the eigenvectors of the motion model derived using the iterative optical flow algorithm have smaller RMS, larger dot product, and larger EMN values than those of the motion model derived using the Horn-Schunck algorithm. CONCLUSION: The study showed that motion models vary depending on which DIR algorithm was used to derive them. The choice of a DIR algorithm may affect the accuracy of the resulting model, and it is important to assess the suitability of the algorithm chosen for a particular application. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA.
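The three comparison criteria lend themselves to a direct implementation. The sketch below assumes eigenvectors are stored as unit-norm 1D NumPy arrays and each reference motion model as a 2D array with one eigenvector per row; variable names are illustrative.

```python
# Sketch of the three eigenvector-comparison metrics described above.
import numpy as np

def rms_difference(v_a, v_b):
    """Average root-mean-square difference between two eigenvectors."""
    return float(np.sqrt(np.mean((v_a - v_b) ** 2)))

def dot_product(v_a, v_b):
    """Angular similarity between two unit-norm eigenvectors."""
    return float(np.dot(v_a, v_b))

def euclidean_model_norm(v, reference_eigenvectors):
    """EMN: dot products with the first three reference eigenvectors, summed in quadrature."""
    dots = reference_eigenvectors[:3] @ v
    return float(np.sqrt(np.sum(dots ** 2)))
```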


Proceedings of SPIE | 2015

4DCBCT-based motion modeling and 3D fluoroscopic image generation for lung cancer radiotherapy

S Dhou; M Hurwitz; P Mishra; R Berbeco; John H. Lewis

A method is developed to build patient-specific motion models based on 4DCBCT images taken at treatment time and use them to generate 3D time-varying images (referred to as 3D fluoroscopic images). Motion models are built by applying Principal Component Analysis (PCA) on the displacement vector fields (DVFs) estimated by performing deformable image registration on each phase of 4DCBCT relative to a reference phase. The resulting PCA coefficients are optimized iteratively by comparing 2D projections captured at treatment time with projections estimated using the motion model. The optimized coefficients are used to generate 3D fluoroscopic images. The method is evaluated using anthropomorphic physical and digital phantoms reproducing real patient trajectories. For the physical phantom datasets, the average tumor localization error (TLE) and 95th percentile in two datasets were 0.95 and 2.2 mm, respectively. For digital phantoms assuming superior image quality of 4DCT and no anatomic or positioning disparities between 4DCT and treatment time, the average TLE and the image intensity error (IIE) in six datasets were smaller using 4DCT-based motion models. When simulating positioning disparities and tumor baseline shifts at treatment time compared to planning 4DCT, the average TLE (95th percentile) and IIE were 4.2 (5.4) mm and 0.15 using 4DCT-based models, while they were 1.2 (2.2) mm and 0.10 using 4DCBCT-based ones, respectively. 4DCBCT-based models were shown to perform better when there are positioning and tumor baseline shift uncertainties at treatment time. Thus, generating 3D fluoroscopic images based on 4DCBCT-based motion models can capture both inter- and intra-fraction anatomical changes during treatment.
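A possible shape for the per-timepoint generation loop implied by this abstract is sketched below; `fit_weights` and `warp_image` are hypothetical callables (for instance, a closure around the coefficient-fitting sketch shown earlier and a DVF-based image warper).

```python
# Sketch of generating a 3D fluoroscopic sequence: for each treatment-time kV
# projection, fit the PCA coefficients and deform the reference 4DCBCT phase.
# The callables are assumptions, not the authors' implementation.
import numpy as np

def generate_fluoroscopic_sequence(projections, ref_image, mean_dvf,
                                   eigenvectors, fit_weights, warp_image):
    volumes, w = [], np.zeros(eigenvectors.shape[0])
    for proj in projections:                          # one kV projection per timepoint
        w = fit_weights(proj, w)                      # warm-start from the previous frame
        dvf = (mean_dvf + w @ eigenvectors).reshape(ref_image.shape + (3,))
        volumes.append(warp_image(ref_image, dvf))    # deform the reference phase
    return volumes
```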


Medical Physics | 2015

WE-D-303-03: 3D Delivered Dose Assessment Using a 4DCT-Based Motion Model

Wenli Cai; M Hurwitz; Christopher S. Williams; S Dhou; R Berbeco; Joao Seco; F Cifter; M Myronakis; P Mishra; John E. Lewis

Purpose: To develop a clinically feasible method of calculating actual delivered dose for patients with significant respiratory motion during the course of SBRT. Methods: This approach can be specified in three steps. (1) At the planning stage, a patient-specific motion model is created from planning 4DCT using a principal components analysis (PCA) algorithm. (2) During treatment, 2D time-varying projection images (either kV or MV projections) are acquired, from which time-varying ‘fluoroscopic’ 3D images of the patient are reconstructed using the motion model. (3) A 3D dose distribution is computed for each timepoint in the set of 3D fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating dose distributions onto a reference image. This approach was validated using two modified XCAT phantoms with lung tumors and different respiratory motions. The estimated doses were compared to the dose that would be calculated for 4DCT-based planning and to the actual delivered dose that was calculated using “ground truth” XCAT phantoms. The approach was also tested using one set of patient data. Results: For the XCAT phantom with a regular breathing pattern, the errors in D95 are 0.11% and 0.83%, respectively, for kV and MV reconstructions compared to the ground truth, which is comparable to that of 4DCT (0.093%). For the XCAT phantom with an irregular breathing pattern, the errors are 0.81% and 1.75% for kV and MV reconstructions, both better than that of 4DCT (4.01%). For the real patient dataset, the dose estimate is clinically reasonable and demonstrates differences between 4DCT-based and MV reconstruction-based estimation. Conclusions: Using kV or MV projections, the proposed approach is able to assess delivered doses for all respiratory phases during treatment. Compared to the 4DCT dose, the dose estimation using reconstructed 3D fluoroscopic images is as accurate for a regular respiratory pattern and more accurate for an irregular respiratory pattern.
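Step (3), accumulating per-timepoint dose onto a reference image, could look roughly like the sketch below, which warps each dose grid with the corresponding DVF and sums time-weighted contributions; the trilinear dose-warping approach is an illustrative assumption, not the authors' dose engine.

```python
# Hedged sketch of dose accumulation: each per-timepoint 3D dose grid is mapped
# onto the reference anatomy with that timepoint's DVF and the results are
# summed, weighted by the fraction of beam-on time each image represents.
import numpy as np
from scipy.ndimage import map_coordinates

def accumulate_dose(dose_per_timepoint, dvfs_to_reference, time_weights):
    """dose_per_timepoint: list of (nx, ny, nz) arrays; dvfs_to_reference: matching
    (nx, ny, nz, 3) DVFs mapping reference voxels into each timepoint's frame."""
    total = np.zeros_like(dose_per_timepoint[0])
    grid = np.indices(total.shape).astype(float)             # reference voxel grid
    for dose, dvf, w in zip(dose_per_timepoint, dvfs_to_reference, time_weights):
        coords = grid + np.moveaxis(dvf, -1, 0)              # pull coordinates
        total += w * map_coordinates(dose, coords, order=1)  # trilinear resample
    return total
```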


Medical Physics | 2015

WE-D-303-04: 4DCBCT-Based Dose Assessment for SBRT Lung Cancer Treatment

Wenli Cai; M Hurwitz; C Williams; S Dhou; R Berbeco; Joao Seco; F Cifter; M Myronakis; J Lewis

Purpose: To develop a 4DCBCT-based dose assessment method for calculating actual delivered dose for patients with significant respiratory motion during the course of SBRT or anatomical changes between treatment days. Methods: To address the limitation of 4DCT-based dose assessment, we propose to calculate the delivered dose using time-varying (‘fluoroscopic’) 3D patient images generated from a 4DCBCT-based motion model. The method includes four steps: (1) Before each treatment, 4DCBCT data is acquired with the patient in treatment position, based on which a patient-specific motion model is created using a principal components analysis (PCA) algorithm. (2) During treatment, 2D time-varying kV projection images are continuously acquired, from which time-varying ‘fluoroscopic’ 3D images of the patient are reconstructed using the motion model. (3) Lateral truncation artifacts are corrected using planning 4DCT images. (4) The 3D dose distribution is computed for each timepoint in the set of 3D fluoroscopic images, from which the total effective 3D delivered dose is calculated by accumulating deformed dose distributions. This approach is validated using six modified XCAT phantoms with lung tumors and different respiratory motions derived from patient data. The estimated doses are compared to those calculated using ground-truth XCAT phantoms. Results: For each XCAT phantom, the delivered tumor dose values generally follow the same trend as the ground truth, and at most timepoints the difference is less than 5%. For the overall delivered dose, the normalized error of the calculated 3D dose distribution is generally less than 3%, and the tumor D95 error is less than 1.5%. Conclusions: XCAT phantom studies indicate the potential of the proposed method to accurately estimate 3D tumor dose distributions for SBRT lung treatment based on 4DCBCT imaging and motion modeling. Further research is necessary to investigate its performance on clinical patient data.
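For reference, the tumor D95 metric reported above is the dose received by at least 95% of tumor voxels; a minimal computation from a dose grid and a tumor mask is sketched below.

```python
# Sketch of the D95 metric: the minimum dose received by the best-covered 95%
# of tumor voxels, computed from a dose grid and a boolean tumor mask.
import numpy as np

def d95(dose, tumor_mask):
    """dose: 3D dose grid in Gy; tumor_mask: boolean array of the same shape."""
    tumor_doses = dose[tumor_mask]
    # 5th percentile of the tumor voxel doses = dose covering 95% of the volume
    return float(np.percentile(tumor_doses, 5))
```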


Medical Physics | 2015

SU-E-I-03: Lateral Truncation Artifact Correction for 4DCBCT-Based Motion Modeling and Dose Assessment

S Dhou; F Cifter; M Myronakis; R Berbeco; John E. Lewis; Wenli Cai

Purpose: To allow accurate motion modeling and dose assessment based on 4DCBCT by addressing the limited field of view (FOV) and lateral truncation artifacts in current clinical CBCT systems. Due to the size and geometry of onboard flat panel detectors, CBCT often cannot cover the entire thorax of adult patients. We implement a method to extend the images generated from 4DCBCT-based motion models and correct lateral truncation artifacts. Methods: The method is based on deforming a reference 4DCT image containing the entire patient anatomy to the (smaller) CBCT image within the higher-quality CBCT FOV. Next, the displacement vector field (DVF) derived inside the CBCT FOV is smoothly extrapolated out to the edges of the body. These extrapolated displacement vectors are used to generate a new body contour and HU values outside of the CBCT FOV. This method is applied to time-varying volumetric images (3D fluoroscopic images) generated from a 4DCBCT-based motion model at 2 Hz. Six XCAT phantoms are used to test this approach, and reconstruction accuracy is investigated. Results: The normalized root mean square error between the corrected images generated from the 4DCBCT-based motion model and the ground truth XCAT phantom at each time point is generally less than 20%. These results are comparable to results from 4DCT-based motion models. The anatomical structures outside the CBCT FOV can be reconstructed with an error comparable to that inside the FOV. The resulting noise is comparable to that of 4DCT. Conclusions: The proposed approach can effectively correct the artifacts due to lateral truncation in 4DCBCT-based motion models. The quality of the resulting images is comparable to images generated from 4DCT-based motion models. Capturing the body contour and anatomy outside the CBCT FOV makes more reasonable dose calculations possible.
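A simple stand-in for the smooth DVF extrapolation described in Methods is nearest-neighbor extension followed by Gaussian smoothing outside the FOV, as sketched below; the actual extrapolation scheme used by the authors may differ.

```python
# Illustrative sketch of extending a DVF beyond the CBCT field of view: each
# out-of-FOV voxel copies the displacement of its nearest in-FOV voxel, and the
# extrapolated region is smoothed to keep the field continuous.
import numpy as np
from scipy.ndimage import distance_transform_edt, gaussian_filter

def extrapolate_dvf(dvf, fov_mask, sigma=3.0):
    """dvf: (nx, ny, nz, 3) displacement field valid inside fov_mask (boolean)."""
    # Indices of the nearest in-FOV voxel for every voxel in the volume
    _, nearest = distance_transform_edt(~fov_mask, return_indices=True)
    extended = dvf[tuple(nearest)]                 # nearest-neighbor extrapolation
    for c in range(3):                             # smooth each component outside the FOV
        smoothed = gaussian_filter(extended[..., c], sigma)
        extended[..., c] = np.where(fov_mask, dvf[..., c], smoothed)
    return extended
```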


Medical Physics | 2015

WE-G-207-06: 3D Fluoroscopic Image Generation From Patient-Specific 4DCBCT-Based Motion Models Derived From Physical Phantom and Clinical Patient Images

S Dhou; Wenli Cai; M Hurwitz; Christopher S. Williams; J Rottmann; P Mishra; M Myronakis; F Cifter; R Berbeco; Dan Ionascu; John E. Lewis

Purpose: Respiratory-correlated cone-beam CT (4DCBCT) images acquired immediately prior to treatment have the potential to represent patient motion patterns and anatomy during treatment, including both intra- and inter-fractional changes. We develop a method to generate patient-specific motion models based on 4DCBCT images acquired with existing clinical equipment and use them to generate time-varying volumetric images (3D fluoroscopic images) representing motion during treatment delivery. Methods: Motion models are derived by deformably registering each 4DCBCT phase to a reference phase and performing principal component analysis (PCA) on the resulting displacement vector fields. 3D fluoroscopic images are estimated by optimizing the resulting PCA coefficients iteratively through comparison of the cone-beam projections simulating kV treatment imaging and digitally reconstructed radiographs generated from the motion model. Patient and physical phantom datasets are used to evaluate the method in terms of tumor localization error compared to manually defined ground truth positions. Results: 4DCBCT-based motion models were derived and used to generate 3D fluoroscopic images at treatment time. For the patient datasets, the average tumor localization error and the 95th percentile were 1.57 and 3.13 mm, respectively, in subsets of four patient datasets. For the physical phantom datasets, the average tumor localization error and the 95th percentile were 1.14 and 2.78 mm, respectively, in two datasets. 4DCBCT motion models are shown to perform well in the context of generating 3D fluoroscopic images due to their ability to reproduce anatomical changes at treatment time. Conclusion: This study showed the feasibility of deriving 4DCBCT-based motion models and using them to generate 3D fluoroscopic images at treatment time in real clinical settings. 4DCBCT-based motion models were found to account for the 3D non-rigid motion of the patient anatomy during treatment and have the potential to localize the tumor and other patient anatomical structures at treatment time even when inter-fractional changes occur. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc., Palo Alto, CA. The project was also supported, in part, by Award Number R21CA156068 from the National Cancer Institute.
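The tumor localization error statistics quoted above (mean and 95th percentile of 3D position errors) can be computed as in the short sketch below, assuming estimated and ground-truth tumor positions are given as (N, 3) arrays.

```python
# Sketch of the tumor localization error metrics: 3D distances between
# model-estimated and manually defined ground-truth tumor positions,
# summarized by their mean and 95th percentile.
import numpy as np

def tumor_localization_error(estimated_positions, ground_truth_positions):
    errors = np.linalg.norm(estimated_positions - ground_truth_positions, axis=1)
    return errors.mean(), np.percentile(errors, 95)
```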


Medical Physics | 2015

WE-D-303-02: Applications of Volumetric Images Generated with a Respiratory Motion Model Based On An External Surrogate Signal

M. Hurwitz; Christopher S. Williams; P Mishra; S Dhou; John H. Lewis

Purpose: Respiratory motion can vary significantly over the course of simulation and treatment. Our goal is to use volumetric images generated with a respiratory motion model to improve the definition of the internal target volume (ITV) and the estimate of delivered dose. Methods: Ten irregular patient breathing patterns spanning 35 seconds each were incorporated into a digital phantom. Ten images over the first five seconds of breathing were used to emulate a 4DCT scan, build the ITV, and generate a patient-specific respiratory motion model which correlated the measured trajectories of markers placed on the patients’ chests with the motion of the internal anatomy. This model was used to generate volumetric images over the subsequent thirty seconds of breathing. The increase in the ITV taking into account the full 35 seconds of breathing was assessed with ground-truth and model-generated images. For one patient, a treatment plan based on the initial ITV was created and the delivered dose was estimated using images from the first five seconds as well as ground-truth and model-generated images from the next 30 seconds. Results: The increase in the ITV ranged from 0.2 cc to 6.9 cc for the ten patients based on ground-truth information. The model predicted this increase in the ITV with an average error of 0.8 cc. The delivered dose to the tumor (D95) changed significantly from 57 Gy to 41 Gy when estimated using 5 seconds and 30 seconds, respectively. The model captured this effect, giving an estimated D95 of 44 Gy. Conclusion: A respiratory motion model generating volumetric images of the internal patient anatomy could be useful in estimating the increase in the ITV due to irregular breathing during simulation and in assessing delivered dose during treatment. This project was supported, in part, through a Master Research Agreement with Varian Medical Systems, Inc. and Radiological Society of North America Research Scholar Grant #RSCH1206.
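The surrogate-based model correlates external marker trajectories with internal motion. A hedged sketch of one such correlation, a least-squares linear map from marker positions to internal PCA mode weights fitted on the 4DCT phases, is shown below; the linear form and variable names are assumptions for illustration, not the authors' model.

```python
# Hedged sketch of correlating an external surrogate with internal motion:
# fit a least-squares linear map from chest-marker positions to the PCA mode
# weights of the internal DVFs, then apply it to later surrogate measurements.
import numpy as np

def fit_surrogate_model(marker_signals, pca_weights):
    """marker_signals: (n_phases, n_markers*3); pca_weights: (n_phases, n_modes)."""
    X = np.hstack([marker_signals, np.ones((marker_signals.shape[0], 1))])  # add offset term
    coeffs, *_ = np.linalg.lstsq(X, pca_weights, rcond=None)
    return coeffs

def predict_weights(coeffs, marker_signal):
    """Predict internal PCA mode weights from one surrogate measurement."""
    x = np.append(marker_signal, 1.0)
    return x @ coeffs
```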

Collaboration


Dive into S Dhou's collaboration.

Top Co-Authors

John H. Lewis, University of California
M. Hurwitz, University of California
P Mishra, Brigham and Women's Hospital
F Cifter, University of Massachusetts Lowell
M Myronakis, Brigham and Women's Hospital
R Berbeco, Brigham and Women's Hospital
Weixing Cai, University of Rochester