Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Anand P. Santhanam is active.

Publication


Featured research published by Anand P. Santhanam.


International Journal of Radiation Oncology Biology Physics | 2008

Observations on Real-Time Prostate Gland Motion Using Electromagnetic Tracking

Katja M. Langen; Twyla R. Willoughby; Sanford L. Meeks; Anand P. Santhanam; Alexis Cunningham; Lisa Levine; Patrick A. Kupelian

PURPOSE To quantify and describe the real-time movement of the prostate gland in a large data set of patients treated with radiotherapy. METHODS AND MATERIALS The Calypso four-dimensional localization system was used for target localization in 17 patients, with electromagnetic markers implanted in the prostate of each patient. We analyzed a total of 550 continuous tracking sessions. The fraction of time that the prostate was displaced by >3, >5, >7, and >10 mm was calculated for each session and patient. The frequencies of displacements after initial patient positioning were analyzed over time. RESULTS Averaged over all patients, the prostate was displaced >3 and >5 mm for 13.6% and 3.3% of the total treatment time, respectively. For individual patients, the corresponding maximal values were 36.2% and 10.9%. For individual fractions, the corresponding maximal values were 98.7% and 98.6%. Displacements >3 mm were observed at 5 min after initial alignment in about one-eighth of the observations, and increased to one-quarter by 10 min. For individual patients, the maximal value of the displacements >3 mm at 5 and 10 min after initial positioning was 43% and 75%, respectively. CONCLUSION On average, the prostate was displaced by >3 mm and >5 mm approximately 14% and 3% of the time, respectively. For individual patients, these values were up to three times greater. After the initial positioning, the likelihood of displacement of the prostate gland increased with elapsed time. This highlights the importance of initiating treatment shortly after initially positioning the patient.
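The per-session statistics in this abstract reduce to thresholding the tracked displacement magnitudes. A minimal sketch of that computation (the session data below are hypothetical, not from the study):

```python
import numpy as np

def fraction_displaced(displacements_mm, thresholds=(3, 5, 7, 10)):
    """Fraction of tracking samples whose displacement magnitude
    exceeds each threshold, as computed per session and per patient."""
    d = np.asarray(displacements_mm, dtype=float)
    return {t: float(np.mean(d > t)) for t in thresholds}

# Hypothetical tracking session: prostate displacement magnitudes in mm,
# sampled continuously during one treatment fraction
session = [1.2, 2.8, 3.5, 6.1, 0.9, 4.4, 11.0, 2.2]
fractions = fraction_displaced(session)
```

Averaging these per-session fractions over all sessions and patients yields the population-level figures reported above.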


International Conference of the IEEE Engineering in Medicine and Biology Society | 2008

Modeling Real-Time 3-D Lung Deformations for Medical Visualization

Anand P. Santhanam; Celina Imielinska; Paul W. Davenport; Patrick Kupelian; Jannick P. Rolland

In this paper, we propose a physics-based and physiology-based approach for modeling real-time deformations of 3-D high-resolution polygonal lung models obtained from high-resolution computed tomography (HRCT) images of normal human subjects. The physics-based deformation operator is nonsymmetric, which accounts for the heterogeneous elastic properties of the lung tissue and the spatial-dynamic flow properties of the air. An iterative approach is used to estimate the deformation, with the deformation operator initialized based on the regional alveolar expandability, a key physiology-based parameter. The force applied on each surface node is based on the airflow pattern inside the lungs, which is known to depend on the orientation of the human subject. The lung dynamics are validated by resimulating the lung deformation with the proposed operator and comparing it with HRCT data, and by computing the force applied on each node, derived from a 4-D HRCT dataset of a normal human subject, and verifying its gradient against the subject's orientation.
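The iterative estimation of the deformation can be illustrated, in drastically simplified form, as relaxation toward elastic equilibrium. The 3-node system, stiffness values, and step size below are illustrative stand-ins, not the paper's actual operator:

```python
import numpy as np

def relax_deformation(stiffness, forces, step=0.1, iters=500):
    """Iteratively relax nodal displacements u toward the elastic
    equilibrium K u = f, a toy analogue of converging a deformation
    operator from an initial estimate."""
    u = np.zeros_like(forces)
    for _ in range(iters):
        u = u + step * (forces - stiffness @ u)  # residual-driven update
    return u

# Toy 3-node system; the nonsymmetric stiffness mimics heterogeneous
# tissue elasticity (values are illustrative only)
K = np.array([[2.0, -0.5, 0.0],
              [-0.4, 2.0, -0.5],
              [0.0, -0.5, 2.0]])
f = np.array([1.0, 0.0, 0.5])
u = relax_deformation(K, f)
```

The nonsymmetric matrix stands in for the nonsymmetric operator mentioned above; the iteration converges as long as the step size is small relative to the largest stiffness eigenvalue.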


Proceedings of the AMI-ARCS 2004 Workshop | 2004

Physically-based Deformation of High-Resolution 3D Lung Models for Augmented Reality based Medical Visualization

Anand P. Santhanam; Cali M. Fidopiastis; Felix G. Hamza-Lup; Jannick P. Rolland; Celina Imielinska

Visualization tools using Augmented Reality Environments are effective in applications related to medical training, prognosis and expert interaction. Such medical visualization tools can also provide key visual insights on the physiology of deformable anatomical organs (e.g. lungs). In this paper we propose a deformation method that facilitates physically-based elastostatic deformations of 3D high-resolution polygonal models. The implementation of the deformation method as a pre-computation approach is shown for a 3D high-resolution lung model. The deformation is represented as an integration of the applied force and the local elastic property assigned to the 3D lung model. The proposed deformation method shows faster convergence to equilibrium as compared to other physically-based simulation methods. The proposed method also accounts for the anisotropic tissue elastic properties. The transfer functions are formulated in such a way that they overcome stiffness effects during deformations.


Medical Physics | 2011

TH‐C‐BRC‐11: 3D Tracking of Interfraction and Intrafraction Head and Neck Anatomy during Radiotherapy Using Multiple Kinect Sensors

Anand P. Santhanam; Daniel A. Low; Patrick A. Kupelian

Purpose: The process of daily measurement and validation of tumors and normal structures with onboard imaging provides information useful for reducing patient setup uncertainty errors. However, the use of daily onboard CT imaging greatly increases the radiation dose to critical structures that lie within the CT volume. We present a quantitative 3D skin-surface imaging system that, when coupled with quantitative patient-specific biomechanical models, determines the tumor and normal-organ deformation caused by routine patient head and neck misalignments. Incorporating such modeling and imaging could substantially decrease the number of cone-beam CT scans required for patient setup and ultimately the daily CT scan dose. Methods: The quantitative 3D skin-surface imaging system that monitors the patient anatomy is developed using multiple Kinect sensors. A set of four 3D cameras is used for illustration purposes to track the patient anatomy externally. Of the four cameras, three are used to track the patient's anatomical contours (e.g., face, hands) using depth- and intensity-based contour tracking. This approach provides a set of 3D contours for each anatomical region from each camera. The fourth camera employs markerless face recognition and tracking to delineate the region of the patient's face. The location of the face is then shared among the camera controllers in real time, and the anatomical contour that most closely matches the face region is selected. Once selected, all the contours are integrated to form a single 3D representation of the anatomy, and overlapping contours are cleaned using Voronoi data resampling. Results: The unified 3D representation presents a quantified 3D surface image within a precision range of 0.3 cm at an acquisition rate of 30 frames per second. Conclusions: Daily measurement of the 3D skin surface with the proposed imaging system is feasible for reducing patient setup uncertainty errors while minimizing radiation exposure risk.


Journal of Biomedical Optics | 2014

Parallelized multi-graphics processing unit framework for high-speed Gabor-domain optical coherence microscopy.

Patrice Tankam; Anand P. Santhanam; Kye-Sung Lee; Jungeun Won; Cristina Canavesi; Jannick P. Rolland

Gabor-domain optical coherence microscopy (GD-OCM) is a volumetric high-resolution technique capable of acquiring three-dimensional (3-D) skin images with histological resolution. Real-time image processing is needed to enable GD-OCM imaging in a clinical setting. We present a parallelized and scalable multi-graphics processing unit (GPU) computing framework for real-time GD-OCM image processing. A parallelized control mechanism was developed to individually assign computation tasks to each of the GPUs. For each GPU, the optimal number of amplitude-scans (A-scans) to be processed in parallel was selected to maximize GPU memory usage and core throughput. We investigated five computing architectures for computational speed-up in processing 1000 × 1000 A-scans. The proposed parallelized multi-GPU computing framework enables processing at a computational speed faster than the GD-OCM image acquisition, thereby facilitating high-speed GD-OCM imaging in a clinical setting. Using two parallelized GPUs, the image processing of a 1 × 1 × 0.6 mm³ skin sample was performed in about 13 s, and the performance was benchmarked at 6.5 s with four GPUs. This work thus demonstrates that 3-D GD-OCM data may be displayed in real time to the examiner using parallelized GPU processing.
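The per-GPU task assignment described above amounts to partitioning the A-scan batch and dispatching each partition to its own device. A CPU-only sketch of that control pattern (the worker count and the FFT-based per-chunk processing are illustrative stand-ins for the actual CUDA kernels):

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def process_chunk(chunk):
    # Stand-in for the per-GPU A-scan pipeline; the real framework runs
    # its resampling and Fourier-transform steps on each device
    return np.abs(np.fft.fft(chunk, axis=-1))

def process_ascans(ascans, n_workers=4):
    """Split the A-scan batch evenly across workers (one per 'GPU')
    and reassemble the results in acquisition order."""
    chunks = np.array_split(ascans, n_workers)
    with ThreadPoolExecutor(max_workers=n_workers) as pool:
        results = list(pool.map(process_chunk, chunks))
    return np.concatenate(results)
```

The same split-dispatch-concatenate pattern applies regardless of how many devices are available, which is what makes the framework scalable.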


International Conference of the IEEE Engineering in Medicine and Biology Society | 2007

Distributed Augmented Reality With 3-D Lung Dynamics—A Planning Tool Concept

Felix G. Hamza-Lup; Anand P. Santhanam; Celina Imielinska; Sanford L. Meeks; Jannick P. Rolland

Augmented reality (AR) systems add visual information to the world by using advanced display techniques. Advances in miniaturization and reduced hardware costs make some of these systems feasible for applications in a wide range of fields. We present a potential component of the cyberinfrastructure for the operating room of the future: a distributed AR-based software-hardware system that allows real-time visualization of three-dimensional (3-D) lung dynamics superimposed directly on the patient's body. Several emergency events (e.g., closed and tension pneumothorax) and surgical procedures related to the lungs (e.g., lung transplantation, lung volume reduction surgery, surgical treatment of lung infections, lung cancer surgery) could benefit from the proposed prototype.


Optics Express | 2016

MEMS-based handheld scanning probe with pre-shaped input signals for distortion-free images in Gabor-domain optical coherence microscopy

Andrea Cogliati; Cristina Canavesi; Adam Hayes; Patrice Tankam; Virgil-Florin Duma; Anand P. Santhanam; Kevin P. Thompson; Jannick P. Rolland

High-speed scanning in optical coherence tomography (OCT) often comes with either compromises in image quality, the requirement for post-processing of the acquired images, or both. We report on distortion-free OCT volumetric imaging with a dual-axis micro-electro-mechanical system (MEMS)-based handheld imaging probe. In the context of an imaging probe with optics located between the 2D MEMS and the sample, we report on how pre-shaped open-loop input signals with tailored non-linear parts were implemented in a custom control board and, unlike the sinusoidal signals typically used for MEMS, achieved real-time distortion-free imaging without post-processing. The MEMS mirror was integrated into a compact, lightweight handheld probe. The MEMS scanner achieved a 12-fold reduction in volume and a 17-fold reduction in weight over a previous dual-mirror galvanometer-based scanner. Distortion-free imaging with no post-processing with a Gabor-domain optical coherence microscope (GD-OCM), with 2 μm axial and lateral resolutions over a field of view of 1 × 1 mm², is demonstrated experimentally through volumetric images of a regular microscopic structure, an excised human cornea, and in vivo human skin.


Medical Physics | 2014

A GPU based high‐resolution multilevel biomechanical head and neck model for validating deformable image registration

John Neylon; X. Qi; Ke Sheng; Robert J. Staton; Jason Pukala; Rafael R. Mañon; Daniel A. Low; Patrick A. Kupelian; Anand P. Santhanam

PURPOSE Validating the usage of deformable image registration (DIR) for daily patient positioning is critical for adaptive radiotherapy (RT) applications pertaining to head and neck (HN) radiotherapy. The authors present a methodology for generating biomechanically realistic ground-truth data for validating DIR algorithms for HN anatomy by (a) developing a high-resolution deformable biomechanical HN model from a planning CT, (b) simulating deformations for a range of interfraction posture changes and physiological regression, and (c) generating subsequent CT images representing the deformed anatomy. METHODS The biomechanical model was developed using HN kVCT datasets and the corresponding structure contours. The voxels inside a given 3D contour boundary were clustered using a graphics processing unit (GPU) based algorithm that accounted for inconsistencies and gaps in the boundary to form a volumetric structure. While the bony anatomy was modeled as a rigid body, the muscle and soft-tissue structures were modeled as mass-spring-damper models with elastic material properties that corresponded to the underlying contoured anatomies. Within a given muscle structure, the voxels were classified using a uniform grid, and a normalized mass was assigned to each voxel based on its Hounsfield number. The soft-tissue deformation for a given skeletal actuation was performed using an implicit Euler integration, with each iteration split into two substeps: one for the muscle structures and the other for the remaining soft tissues. Posture changes were simulated by articulating the skeletal structure and enabling the soft structures to deform accordingly. Physiological changes representing tumor regression were simulated by reducing the target volume and enabling the surrounding soft structures to deform accordingly. Finally, the authors also discuss a new approach to generate kVCT images representing the deformed anatomy that accounts for gaps and antialiasing artifacts that may be caused by the biomechanical deformation process. Accuracy and stability of the model response were validated using ground-truth simulations representing soft-tissue behavior under local and global deformations. Numerical accuracy of the HN deformations was analyzed by applying nonrigid skeletal transformations acquired from interfraction kVCT images to the model's skeletal structures and comparing the subsequent soft-tissue deformations of the model with the clinical anatomy. RESULTS The GPU-based framework enabled the model deformation to be performed at 60 frames/s, facilitating simulations of posture changes and physiological regression at interactive speeds. The soft-tissue response was accurate, with an R^2 value of >0.98 when compared to ground-truth global and local force-deformation analysis. The deformation of the HN anatomy by the model agreed with the clinically observed deformations, with an average correlation coefficient of 0.956. For a clinically relevant range of posture and physiological changes, the model deformations stabilized with an uncertainty of less than 0.01 mm. CONCLUSIONS Documenting dose delivery for HN radiotherapy is essential when accounting for posture and physiological changes. The biomechanical model discussed in this paper was able to deform in real time, allowing interactive simulation and visualization of such changes. The model allows patient-specific validation of the DIR method and has the potential to be a significant aid in adaptive radiotherapy techniques.
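The implicit Euler substep described for the soft tissues can be shown for the simplest possible case, a single mass-spring-damper element. The parameters below are illustrative; the actual model couples many such elements per structure on the GPU:

```python
def implicit_euler_step(x, v, h, k, c, m=1.0):
    """One implicit (backward) Euler step for a single mass-spring-damper
    with position x, velocity v, time step h, stiffness k, damping c,
    and mass m. Solving the backward-Euler equations for v gives a
    closed-form update, which is what makes the step unconditionally stable."""
    v_next = (v - h * k * x / m) / (1.0 + h * c / m + h * h * k / m)
    x_next = x + h * v_next
    return x_next, v_next

# Damped spring released from x = 1: the motion should decay toward rest
x, v = 1.0, 0.0
for _ in range(2000):
    x, v = implicit_euler_step(x, v, h=0.01, k=10.0, c=1.0)
```

Stability under large stiffness values is the usual reason for choosing implicit over explicit integration in interactive soft-tissue simulation.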


Medical Physics | 2015

Accuracy of UTE-MRI-based patient setup for brain cancer radiation therapy.

Yingli Yang; Minsong Cao; Tania Kaprealian; Ke Sheng; Yu Gao; Fei Han; Caitlin Gomez; Anand P. Santhanam; Stephen Tenn; Nzhde Agazaryan; Daniel A. Low; Peng Hu

PURPOSE Radiation therapy simulations solely based on MRI have advantages compared to CT-based approaches. One feature readily available from computed tomography (CT) that would need to be reproduced with MR is the ability to compute digitally reconstructed radiographs (DRRs) for comparison against on-board radiographs commonly used for patient positioning. In this study, the authors generate MR-based bone images using a single ultrashort echo time (UTE) pulse sequence and quantify their 3D and 2D image registration accuracy to CT and radiographic images for treatments in the cranium. METHODS Seven brain cancer patients were scanned at 1.5 T using a radial UTE sequence. The sequence acquired two images at two different echo times. The two images were processed using an in-house software to generate the UTE bone images. The resultant bone images were rigidly registered to simulation CT data and the registration error was determined using manually annotated landmarks as references. DRRs were created based on UTE-MRI and registered to simulated on-board images (OBIs) and actual clinical 2D oblique images from ExacTrac™. RESULTS UTE-MRI resulted in well visualized cranial, facial, and vertebral bones that quantitatively matched the bones in the CT images with geometric measurement errors of less than 1 mm. The registration error between DRRs generated from 3D UTE-MRI and the simulated 2D OBIs or the clinical oblique x-ray images was also less than 1 mm for all patients. CONCLUSIONS UTE-MRI-based DRRs appear to be promising for daily patient setup of brain cancer radiotherapy with kV on-board imaging.


International Journal of Radiation Oncology Biology Physics | 2015

Accuracy of Routine Treatment Planning 4-Dimensional and Deep-Inspiration Breath-Hold Computed Tomography Delineation of the Left Anterior Descending Artery in Radiation Therapy

B White; S. Vennarini; Lilie L. Lin; Gary M. Freedman; Anand P. Santhanam; Daniel A. Low; Stefan Both

PURPOSE To assess the feasibility of radiation therapy treatment planning 4-dimensional computed tomography (4DCT) and deep-inspiration breath-hold (DIBH) CT to accurately contour the left anterior descending artery (LAD), a primary indicator of radiation-induced cardiac toxicity for patients undergoing radiation therapy. METHODS AND MATERIALS Ten subjects were prospectively imaged with a cardiac-gated MRI protocol to determine cardiac motion effects, including the displacement of a region of interest comprising the LAD. A series of planar views were obtained and resampled to create a 3-dimensional (3D) volume. A 3D optical flow deformable image registration algorithm determined tissue displacement during the cardiac cycle. The measured motion was then used as a spatial boundary to characterize motion blurring of the radiologist-delineated LAD structure for a cohort of 10 consecutive patients enrolled prospectively on a breast study including 4DCT and DIBH scans. Coronary motion-induced blurring artifacts were quantified by applying an unsharp filter to accentuate the LAD structure despite the presence of motion blurring. The 4DCT maximum inhalation and exhalation respiratory phases were coregistered to determine the LAD displacement during tidal respiration, as visualized in 4DCT. RESULTS The average 90th percentile heart motion for the region of interest was 0.7 ± 0.1 mm (left-right [LR]), 1.3 ± 0.6 mm (superior-inferior [SI]), and 0.6 ± 0.2 mm (anterior-posterior [AP]) in the cardiac-gated MRI cohort. The average relative increase in the number of voxels comprising the LAD contour was 69.4% ± 4.5% for the DIBH. The LAD volume overestimation had the dosimetric impact of decreasing the reported mean LAD dose by 23% ± 9% on average in the DIBH. During tidal respiration the average relative LAD contour increase was 69.3% ± 5.9% and 67.9% ± 4.6% for inhalation and exhalation respiratory phases, respectively. The average 90th percentile LAD motion was 4.8 ± 1.1 mm (LR), 0.9 ± 0.4 mm (SI), and 1.9 ± 0.6 mm (AP) for the 4DCT cohort, in the absence of cardiac gating. CONCLUSIONS An anisotropic margin of 2.7 mm (LR), 4.1 mm (SI), and 2.4 mm (AP) was quantitatively determined to account for motion blurring and patient setup error while placing minimum constraint on the plan optimization.

Collaboration


Dive into Anand P. Santhanam's collaborations.

Top Co-Authors

Daniel A. Low, University of California
John Neylon, University of California
Yugang Min, University of Central Florida
Sanford L. Meeks, University of Texas MD Anderson Cancer Center
Ke Sheng, University of California
Cali M. Fidopiastis, University of Central Florida
Olusegun J. Ilegbusi, University of Central Florida
Katelyn Hasse, University of California