R Keyes
University of New Mexico
Publications
Featured research published by R Keyes.
Medical Physics | 2010
R Keyes; C Romano; D Arnold; Shuang Luan
Purpose: The accuracy of medical physics calculations is largely limited by the availability of fast computer hardware, which is in turn strongly limited by budget considerations. This research explores the use of the Cloud Computing paradigm to replace or supplement existing computing infrastructure for medical physics calculations. Cloud computing services allow for on-demand virtual computing clusters with a pay-as-you-go pricing model. This model has the potential to make large scale Monte Carlo simulations routine in clinical and research activities. Methods and materials: To test the feasibility of medical physics calculations in a cloud computing environment, a distributed Monte Carlo dose calculation framework was implemented with Amazon's EC2 cloud computing service, the Fluka 2008.3b Monte Carlo package, a distributed processing framework written in Python, and the Hadoop MapReduce framework. A variety of relevant calculations were performed: depth dose profiles of proton, electron, and photon beams, along with a simple proton treatment plan using a voxel patient geometry (Zubal phantom). The performance of EC2 was tested by running calculations on 1 to 200 virtual nodes with the different distributed processing frameworks. EC2 prices were compared with the costs of comparable local hardware. Results: Relevant medical physics calculations were successfully carried out using the EC2 service. Heavy charged particle, electron, and photon depth dose profiles and a simple treatment plan were successfully calculated remotely. Performance data demonstrated the expected 1/n speed-up as a function of node number. Two different distributed processing schemes were implemented in the cloud. Conclusion: Our implementations demonstrate the power of the existing cloud computing model.
The relatively cheap pay‐as‐you‐go pricing for infrastructure on demand offers an extremely promising new paradigm for clinical and research computing, available to anyone with a network connection.
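The scatter/aggregate pattern described above (independent Monte Carlo jobs per node, summed and normalized at the end) can be sketched as follows; the node count, voxel count, and the fake per-node tally are illustrative stand-ins, not the paper's actual framework.

```python
import random

def run_node(n_histories, seed, n_voxels=8):
    # Stand-in for one node's Monte Carlo job: returns a per-voxel
    # energy-deposition tally plus the history count it represents.
    # (A real node would run a Fluka simulation instead.)
    rng = random.Random(seed)
    tally = [sum(rng.random() for _ in range(100)) for _ in range(n_voxels)]
    return tally, n_histories

def aggregate(partials):
    # Linear aggregation: sum the per-node tallies and normalize by
    # the total number of histories, giving dose per primary.
    n_voxels = len(partials[0][0])
    total = [sum(p[0][v] for p in partials) for v in range(n_voxels)]
    n_total = sum(p[1] for p in partials)
    return [t / n_total for t in total]

# Distribute 100,000 histories evenly over 4 "nodes".
partials = [run_node(25_000, seed=s) for s in range(4)]
dose_per_primary = aggregate(partials)
```

Because each node's run is independent, wall-clock time falls roughly as 1/n with node count, matching the speed-up reported above.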
Medical Physics | 2009
R Keyes; Shuang Luan; Michael H. Holzscheiter
Purpose: Recent work has shown the potential benefits of antiprotons for radiotherapy. Because of the large amount of energy released by the annihilation of antiprotons, it is of interest to understand the dosimetric properties of the peripheral field. The purpose of this research is to develop a standard method to estimate, characterize, and compare the secondary dose in the peripheral radiation field of particle beams, allowing for the assessment of relative safety and treatment suitability for a given patient. Method and Materials: The FLUKA 2008.3 Monte Carlo package was used. A simplified set of target and secondary tissue phantoms (1 cm diameter spheres) was placed in a volume of water, with secondary phantoms placed every 10 cm out to 100 cm along the beam axis and perpendicular to the beam axis at the depth of the target. The target phantom was irradiated by a pencil beam of particles (antiprotons, protons, etc.). Absorbed dose was evaluated in the target phantom, and ambient dose equivalent (H*(10)) was evaluated in the secondary phantoms using factors from ICRP 74 and Pelliccioni to convert from fluence. These data were then used to determine the ratio of H*(10) to target absorbed dose. Additionally, the relative contributions of different particles to the H*(10) were measured, as well as various particle fluence spectra. Results: 2–40 million primary beam particles yielded satisfactory statistics to estimate the peripheral field dose values for beams of antiprotons and protons. Conclusion: As awareness of second cancer risk grows, a standard scheme for estimating secondary dose from particle therapy will be all the more important. Our scheme allows for detailed characterization of the peripheral field, and dose estimates can be made for non-target organs by weighting dose with tissue factors.
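The fluence-to-H*(10) conversion step can be illustrated with a small sketch; the fluence tallies and conversion coefficients below are made-up placeholder numbers (the real coefficients come from ICRP 74 and Pelliccioni), used only to show the arithmetic.

```python
# Hypothetical per-species fluence in one secondary phantom
# (cm^-2 per primary) and fluence-to-H*(10) coefficients (pSv cm^2);
# all values are illustrative, not tabulated data.
fluence = {"neutron": 2.0e-6, "photon": 5.0e-6, "proton": 1.0e-7}
h10_coeff = {"neutron": 400.0, "photon": 4.5, "proton": 1200.0}

def ambient_dose_equivalent(fluence, coeff):
    # H*(10) is a sum over particle species of fluence times the
    # energy-appropriate conversion coefficient (pSv per primary).
    return sum(fluence[p] * coeff[p] for p in fluence)

h10 = ambient_dose_equivalent(fluence, h10_coeff)
# Ratio used to compare beams: peripheral H*(10) per unit target dose.
target_dose = 3.0e-11  # Gy per primary, illustrative only
ratio = h10 / target_dose
```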
Medical Physics | 2012
D Riofrio; S Sellner; G Cabal; R Keyes; Michael H. Holzscheiter; O Jaekel; Shuang Luan
PURPOSE The dosimetric advantage of particle therapy comes with much higher infrastructure investment and operation costs. Increasing patient throughput is a key factor in managing operation costs. We investigate the impact of variable beam spot sizes on treatment time and discuss the tradeoffs involved. METHODS The following realistic assumptions were used. (1) The beam traveling speed is independent of the beam spot size. (2) The beam spot is a 2D Gaussian; changing the beam spot size means varying the standard deviation. (3) The maximum beam intensity is a constant independent of the beam spot size; increasing the beam spot reduces the fluence. (4) Varying the beam spot size incurs a reset time penalty. A 2D tumor was used in the study. Dose calculations were based on pencil beam kernels from GEANT4. The total treatment time is divided into the beam travel time, the beam-on time, and the time for changing the spot size. RESULTS We found that: (1) Changing the beam spot size has no impact on the beam-on time, because the maximum beam intensity is independent of the beam spot and increasing the beam spot only reduces the fluence. (2) A larger beam spot size shortens the total travel time in inverse proportion to the radius of the beam spot. (3) Plans with different beam spot sizes have similar dosimetric qualities. (4) If higher beam intensity could be used for larger beam spot sizes, savings in beam-on time would be inversely proportional to the intensity available. CONCLUSIONS We have studied the interplay among beam intensity, travel time, and beam size reset time for a scanning beam with variable beam spot size. Our initial studies show necessary conditions for and limitations on savings in total treatment times. Further studies are being carried out to find additional time saving sources. Supported in part by NSF CBET-0853157.
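The assumptions above translate into a simple timing model; the numbers and the linear scaling below are toy values chosen to illustrate the trade-off, not measured machine parameters.

```python
def total_treatment_time(spot_sigmas, base_travel=60.0, beam_on=30.0,
                         reset_penalty=5.0, reference_sigma=1.0):
    # Travel time per segment scales inversely with spot size
    # (assumption 1 plus result 2); beam-on time is unaffected
    # (result 1); each change of spot size costs a fixed reset
    # penalty (assumption 4). All times in seconds, illustrative.
    travel = sum(base_travel * reference_sigma / s for s in spot_sigmas)
    resets = reset_penalty * sum(
        1 for a, b in zip(spot_sigmas, spot_sigmas[1:]) if a != b)
    return travel + beam_on + resets

t_small = total_treatment_time([1.0, 1.0])   # two segments, small spot
t_large = total_treatment_time([2.0, 2.0])   # same plan, doubled spot
t_mixed = total_treatment_time([1.0, 2.0])   # one spot-size change
```

The model reproduces the qualitative findings: the doubled spot halves travel time, while mixing spot sizes pays the reset penalty.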
Medical Physics | 2012
R Keyes; D. Maes; Shuang Luan
PURPOSE Charged particle beams are of great interest because they can achieve highly conformal radiation dose distributions. Despite this, some scattered radiation is inevitably present outside of the target volumes, and it is of concern because of risks such as radiogenic cancer. Accurately calculating the secondary dose in regions far from the target volume is very difficult due to the extremely low particle fluence and the effect of heterogeneities on particle ranges, making calculations possible only with CPU-week-long Monte Carlo runs. By using a modified track repeating method, we demonstrate fast and accurate estimation of secondary dose appropriate for clinical use. METHODS Primary and secondary particle track databases (including protons, electrons, photons, neutrons, and positrons) were generated with the Geant4 Monte Carlo toolkit. Several new strategies were developed or employed to improve the performance of non-primary particle propagation, including: (1) processing the databases such that only primary tracks producing deep-penetrating photons or neutrons were kept, while particles falling below transport thresholds were discarded; (2) a search algorithm that can locate a sub-track for a given energy in constant time; (3) multiplying photon and neutron tracks during propagation and scoring using particle splitting [1]. RESULTS Performance and accuracy were benchmarked against full Monte Carlo calculations (Geant4 and FLUKA). Filtering out tracks that did not produce deep-penetrating photons or neutrons did not affect the accuracy of the secondary dose calculation. Preliminary performance analysis indicated a 60–100X speed-up over Fluka and a 700–1000X speed-up over Geant4 with well-maintained accuracy. CONCLUSIONS Estimation of secondary dose from particle therapy has so far been largely an academic exercise.
This method for fast estimation of secondary dose brings patient- and plan-specific information within reach, allowing clinicians to make informed decisions on the potential long-term risks associated with specific dose delivery plans. Partial funding from NSF grant CBET-0853157.
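Strategy (2), the constant-time sub-track lookup, can be sketched with a precomputed energy-grid index. Since a particle's energy only decreases along its recorded track, one pass over the steps fills a uniform bin table, and a lookup is then a single arithmetic bin computation plus one array access. The function names, bin count, and step energies are illustrative, not the paper's data structures.

```python
def build_index(step_energies, n_bins=10):
    # step_energies: strictly decreasing energies (MeV) at each
    # recorded step of a pre-generated track.
    e_max = step_energies[0]
    table, j = [], 0
    for b in range(n_bins):
        e_bin = e_max * (1.0 - b / n_bins)  # grid from e_max down
        # advance to the first step at or below this grid energy
        while j + 1 < len(step_energies) and step_energies[j] > e_bin:
            j += 1
        table.append(j)
    return table, e_max

def lookup(table, e_max, e):
    # O(1): one bin computation, one array access; returns the index
    # of the step where a particle of energy e should resume.
    b = int((1.0 - e / e_max) * len(table))
    return table[max(0, min(b, len(table) - 1))]

steps = [100.0, 90.0, 75.0, 50.0, 20.0, 5.0]
table, e_max = build_index(steps)
```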
Medical Physics | 2011
D Riofrio; R Keyes; D. Maes; Shuang Luan
Purpose: Linear energy transfer (LET) is a measure of local energy deposition along the particle track. There is a positive correlation between relative biological effectiveness (RBE) and LET. As a result, LET painting has become a recent focus in particle therapy planning. The goal of this research is to develop a planning algorithm that can simultaneously optimize dose and LET distributions for particle therapy. Methods: It is well known that the high-LET region of a charged particle beam is located at the distal edge of the Bragg peak. Therefore, if a beam stops at the center of the target, its high-LET region is also likely inside the target. This makes lower-energy beams preferable to higher-energy beams. Intuitively, this means each target voxel should be treated using its "closest" beam, i.e., the lowest possible energy beam. This observation makes the Voronoi partition a natural choice for simultaneously optimizing dose and LET distributions: given a set of objects, a Voronoi partition divides a metric space into cells, where each cell corresponds to the region closest to one object. A planning algorithm based on Voronoi partition has been implemented in C. Given a set of particle beams, the algorithm first partitions the target into a set of Voronoi cells, each corresponding to the region of the tumor closest to a beam. Then a constrained least square solver is used to determine the optimal plan with minimized beam energies, which simultaneously optimizes the LET distributions. Results: The planning algorithm has been tested on a skull base tumor using protons. A desired LET distribution is observed inside the target, and dose and LET distributions are simultaneously optimized. Conclusions: Voronoi partition can guide the optimization to simultaneously optimize dose and LET distributions. Supported in part by Grants NSF CBET-0755054 and NSF CBET-0853157
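The partition step can be sketched in a few lines. Here plain Euclidean distance to a beam's entry point stands in for the paper's actual "closest beam" criterion (lowest possible beam energy reaching the voxel), and the sketch is in Python rather than the C implementation described above.

```python
def voronoi_partition(voxels, beam_entries):
    # Assign each target voxel to the nearest beam entry point,
    # producing one Voronoi cell (voxel list) per beam.
    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))
    cells = {i: [] for i in range(len(beam_entries))}
    for v in voxels:
        nearest = min(range(len(beam_entries)),
                      key=lambda i: dist2(v, beam_entries[i]))
        cells[nearest].append(v)
    return cells

# Toy 2D target with two opposed beams entering from left and right.
voxels = [(0, 0), (1, 0), (9, 0), (10, 0)]
cells = voronoi_partition(voxels, beam_entries=[(-5, 0), (15, 0)])
```

Each cell would then be handed to the constrained least squares solver, which only assigns weight to the low-energy beams serving that cell.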
Medical Physics | 2011
R Keyes; Niels Bassler; Michael H. Holzscheiter
Purpose: Experimental and theoretical studies of the potential use of antiproton beams for cancer therapy have been greatly aided by Monte Carlo based calculations. Previously published data from the AD-4/ACE collaboration have shown excellent agreement between the Fluka Monte Carlo code and experimental absorbed dose data. In this research we investigate the suitability of the open source Geant4 Monte Carlo package for antiproton absorbed dose calculations. Methods: Ionization chamber depth-dose data collected at the CERN Antiproton Decelerator and previously published by Bassler et al were used as the experimental benchmark. Version 9.3.p01 of Geant4 and Fluka version 2008.3b-02 were used. The antiproton beam was incident on a 20 × 20 × 20 cm3 water phantom with parameters set to match the experimental 126 MeV beam at CERN as closely as possible. Several reference physics lists were used, covering electromagnetic and nuclear physics processes. Results: Geant4 consistently placed the Bragg peak at the correct depth, but the shape of the depth-dose curve was inaccurate overall, regardless of the physics list chosen. In some cases, the peak-to-plateau ratio was off by an order of magnitude with fully enabled hadronic physics. This appeared to be the result of excessive in-flight annihilation and a corresponding overproduction of secondaries, as well as a dearth of at-rest antiproton annihilations. Conclusions: Geant4 currently does appear to have the necessary physics implemented; however, in-flight annihilation cross sections and low-energy annihilation thresholds seem to be off in its default reference and standard physics lists. We plan to help improve this by sharing data and working with Geant4 developers.
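The peak-to-plateau ratio used as the benchmark metric above is straightforward to compute from a depth-dose curve; the five-point curve below is a made-up illustration, not the CERN data.

```python
def peak_to_plateau_ratio(depth_dose, plateau_indices):
    # Maximum dose anywhere on the curve divided by the mean dose
    # over the entrance-plateau region.
    plateau = sum(depth_dose[i] for i in plateau_indices) / len(plateau_indices)
    return max(depth_dose) / plateau

# Toy depth-dose curve: flat entrance plateau, Bragg peak, distal falloff.
curve = [1.0, 1.0, 1.0, 4.0, 0.2]
ratio = peak_to_plateau_ratio(curve, plateau_indices=[0, 1, 2])
```

An order-of-magnitude error in this single number, as reported for some Geant4 physics lists, is what flags the excess in-flight annihilation.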
Medical Physics | 2011
R Keyes; D Arnold; A Reynaud; Shuang Luan
Purpose: At AAPM 2010, we presented the concept of using Cloud Computing to speed up clinical Monte Carlo calculations. Cloud Computing refers to a set of technologies that offers on-demand computing resources using a pay-as-you-go pricing model. We presented a feasibility study that included a variety of clinically relevant calculations using protons, electrons, and photons in water and a voxel patient geometry (Zubal phantom). While our initial study was designed for proof-of-concept, this research presents a more refined platform with an emphasis on particle therapy and demonstrates its potential to enable Monte Carlo dose calculation for routine clinical use. Methods: A Cloud Computing based distributed Monte Carlo dose calculation system called McCloud was developed. The system is based on Amazon EC2, Fluka, and a variety of "glue" technologies. The calculation involves these key steps: (1) A cluster is allocated in the Cloud. (2) A Monte Carlo task is launched on each node in the cluster. Extra care is taken with random seed generation to ensure the correctness of the simulations. (3) The progress of the computation is dynamically monitored. (4) Once the computation is finished in the cluster, the results are aggregated using a linear model or a tree-based distributed model, and a final dose distribution is returned to the user. Results: We found that for 14 million 75 MeV protons, an analog Monte Carlo simulation can be completed in 5 minutes 36 seconds. For this particular computation, the cost is less than 2 USD (based on official pricing of 8 cents per node per hour). Studies using various cluster sizes have shown the 1/n speed-up expected for analog Monte Carlo simulations. Conclusions: Our implementations demonstrate the power of the existing cloud computing model, an extremely promising new paradigm for clinical and research computing. Further research is underway. Supported in part by NSF CBET-0853157 and NSF CBET-0755054.
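Step (2)'s seed handling matters because nodes running identical seeds would produce correlated, effectively duplicated histories. One common approach (an assumption here, not necessarily McCloud's exact scheme) is to derive each node's seed by hashing a run identifier together with the node index:

```python
import hashlib

def node_seed(run_id, node_index):
    # Hash run id + node index into a 64-bit seed so that, with
    # overwhelming probability, no two nodes of the same run and no
    # two runs share a random stream. (Hypothetical scheme.)
    digest = hashlib.sha256(f"{run_id}:{node_index}".encode()).digest()
    return int.from_bytes(digest[:8], "big")

seeds = [node_seed("run-001", i) for i in range(200)]  # 200-node cluster
```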
Medical Physics | 2010
D Riofrio; G Cabal; R Keyes; Michael H. Holzscheiter; J DeMarco; Oliver Jäkel; Shuang Luan
Purpose: To develop a treatment planning algorithm that can reduce the number of energy changes for scanning beam particle therapy. Materials and Methods: Changing beam energies in particle therapy requires a considerable amount of time; therefore, a high quality treatment plan with a minimum number of energy changes is desired. In this research, we explore using a Voronoi diagram to achieve this. A Voronoi diagram of a set of objects partitions a metric space into cells with one cell per object, such that each cell contains the region closest to its object. Our new planning algorithm mainly uses these steps: (1) Calculating a Voronoi partition of the target for the given beam angles, such that each Voronoi cell contains the portion of tumor "closest" to its beam. Here by "closest", we mean able to be hit from a beam angle with minimum penetration of normal tissues and no penetration of critical structures. (2) During optimization, each beam only treats the tumor region within its Voronoi cell. The final dose distribution is optimized using a combination of randomization and non-negative least squares (NNLS). The new planning algorithm has been implemented in C. The proton and antiproton kernels used for treatment planning were generated using FLUKA and cover energies from 75 MeV to 175 MeV in 1 MeV steps. Results: The new algorithm has been applied to a 3D C-shaped tumor phantom for proton and antiproton therapies. Compared to not using the Voronoi partition, we can reduce the number of energy changes by 70% while maintaining similar treatment qualities. Conclusion: A particle therapy planning algorithm that can reduce energy changes by 70% has been developed. As part of our ongoing research, we are testing the algorithm on different anatomical sites.
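The benefit of the partition is easy to quantify: delivery order determines how many times the accelerator must change energy. A toy count, with made-up energy values, shows why grouping spots (as the Voronoi cells allow) helps:

```python
def count_energy_changes(spot_energies):
    # Number of accelerator energy switches needed to deliver the
    # spots in the given order.
    return sum(1 for a, b in zip(spot_energies, spot_energies[1:]) if a != b)

# The same eight spots delivered in an arbitrary interleaved order
# versus grouped by energy (MeV values are illustrative).
interleaved = [75, 100, 75, 125, 100, 75, 125, 100]
grouped = sorted(interleaved)

changes_before = count_energy_changes(interleaved)
changes_after = count_energy_changes(grouped)
```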
Medical Physics | 2009
D Riofrio; R Keyes; A Hecht; Shuang Luan; Michael H. Holzscheiter; J DeMarco; B Fahimian
Purpose: To develop a planning algorithm for dynamic particle therapy. Materials and Methods: In our modeling, a beam of heavy charged particles is viewed as a high dose volume (called a "shot") localized in its Bragg peak region, and treatment planning is to route this shot to cover a target volume. Compared to pencil beam scanning, where the target is scanned layer by layer, we intend to optimize the route of the Bragg peak along with the energy and orientation of the beams on a global scale. The key steps of our optimization include: (1) Use geometric and randomized techniques to select a collection of potential "shots" based on pre-computed kernels of different energies. (2) Filter out the final shots using constrained least square optimization. (3) Calculate a "traveling salesman" route of the final shots. (4) Interpolate the route and perform an accurate dose calculation for each interpolation point. (5) Calculate the dwelling time of each interpolation point using constrained least square optimization. Results: A planning system was implemented and applied to proton and antiproton therapies for 3 simulated 3D phantoms. The first phantom has a spherical target. The second phantom consists of a spherical target surrounded by a C-shaped critical structure. The third phantom is obtained by swapping the target and critical structure in the second phantom. In all cases, 2 pairs of opposing beams were used in the optimization. In all cases, the planning algorithm generates high quality plans. We have also observed that protons and antiprotons are comparable in their abilities to produce quality plans in terms of physical dose. Conclusion: A planning algorithm for dynamic particle therapy has been developed and verified using protons and antiprotons. For future research, we will extend the technique to other heavy charged particles and incorporate radiobiological effects.
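Step (3)'s routing can be approximated with a greedy nearest-neighbor heuristic; the abstract does not say which TSP method was actually used, so this is only a plausible stand-in, with toy 2D shot positions.

```python
def route_shots(shots):
    # Greedy nearest-neighbor tour: start at the first shot and
    # repeatedly move to the closest remaining shot (squared
    # Euclidean distance avoids an unnecessary sqrt).
    route, remaining = [shots[0]], list(shots[1:])
    while remaining:
        last = route[-1]
        nxt = min(remaining,
                  key=lambda p: sum((a - b) ** 2 for a, b in zip(p, last)))
        remaining.remove(nxt)
        route.append(nxt)
    return route

shots = [(0, 0), (5, 5), (1, 0), (1, 1)]
tour = route_shots(shots)
```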
Medical Physics | 2009
B Fahimian; J DeMarco; R Keyes; Shuang Luan; Maria Zankl; Michael H. Holzscheiter
Purpose: Antiprotons have become of interest in radiotherapy due to their higher peak-to-plateau dose ratio relative to protons and carbon ions, and a beneficial increase in RBE towards the Bragg peak, as recently verified by experimental investigations of the AD-4 collaboration at CERN. An obstacle limiting further research is the lack of a treatment planning system capable of concurrently optimizing the necessary modulation of intensity and energy while accounting for the variation in biological effectiveness. Here we develop a Monte Carlo based treatment planning system for this purpose and subsequently quantify its performance. Materials and Methods: Dose kernels corresponding to different energy and source configurations were calculated using MCNPX in phantoms and voxelized patient CT scans, and then converted to biological equivalent dose using depth dependent RBE weighting factors derived from theory and experiment. Linear equations were formulated for each pixel, representing the superposition of different kernels weighted by unknown intensities. Algorithms using constrained least square and gradient descent optimization were developed to minimize objective functions measuring the geometric correlation of the planning target volume (PTV) with the calculated distribution, yielding an optimized intensity for beams as a function of energy and direction. Results: Biologically optimized treatment plans computed on a voxelized 38-year-old human phantom were in good agreement with the input PTVs, reproducing the PTVs with a mean error of less than 2.24%. Proof of principle demonstrations were successful in producing complicated structures, such as a resemblance of Einstein, in a water phantom with a correlation greater than 93%. Conclusions: We have developed a Monte Carlo treatment planning system for energy and intensity modulated antiproton therapy capable of incorporating the depth dependency of the RBE and reproducing complicated PTVs with high accuracy.
The work can be readily extended to incorporate more sophisticated objective functions such as NTCP and TCP functionals.
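The superposition-plus-optimization step can be sketched as a small nonnegative least squares problem: each column of K holds one kernel's dose per unit intensity, and we seek weights w >= 0 with K w close to the prescription d. The projected-gradient solver below is an illustrative choice; the paper states only that constrained least squares and gradient descent were used, so the exact solver and all numbers are assumptions.

```python
import numpy as np

def optimize_intensities(kernels, prescription, iters=200):
    # Minimize ||K w - d||^2 subject to w >= 0 by projected
    # gradient descent with a fixed step of 1/L, where L is the
    # largest eigenvalue of K^T K.
    K = np.asarray(kernels, dtype=float)
    d = np.asarray(prescription, dtype=float)
    step = 1.0 / (np.linalg.norm(K, 2) ** 2)
    w = np.zeros(K.shape[1])
    for _ in range(iters):
        w -= step * (K.T @ (K @ w - d))  # gradient of the objective
        np.clip(w, 0.0, None, out=w)     # project onto w >= 0
    return w

# Two toy kernels over three voxels; this prescription is exactly
# reproduced by intensities (1, 2).
w = optimize_intensities([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]],
                         [1.0, 2.0, 3.0])
```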