Publication


Featured research published by Thomas S. Pheiffer.


IEEE Transactions on Medical Imaging | 2014

A Mechanics-Based Nonrigid Registration Method for Liver Surgery Using Sparse Intraoperative Data

D. Caleb Rucker; Yifei Wu; Logan W. Clements; Janet E. Ondrake; Thomas S. Pheiffer; Amber L. Simpson; William R. Jarnagin; Michael I. Miga

In open abdominal image-guided liver surgery, sparse measurements of the organ surface can be taken intraoperatively via a laser-range scanning device or a tracked stylus with relatively little impact on surgical workflow. We propose a novel nonrigid registration method which uses sparse surface data to reconstruct a mapping between the preoperative CT volume and the intraoperative patient space. The mapping is generated using a tissue mechanics model subject to boundary conditions consistent with surgical supportive packing during liver resection therapy. Our approach iteratively chooses parameters which define these boundary conditions such that the deformed tissue model best fits the intraoperative surface data. Using two liver phantoms, we gathered a total of five deformation datasets with conditions comparable to open surgery. The proposed nonrigid method achieved a mean target registration error (TRE) of 3.3 mm for targets dispersed throughout the phantom volume, using a limited region of surface data to drive the nonrigid registration algorithm, while rigid registration resulted in a mean TRE of 9.5 mm. In addition, we studied the effect of surface data extent, the inclusion of subsurface data, the trade-offs of using a nonlinear tissue model, robustness to rigid misalignments, and the feasibility in five clinical datasets.
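
As a rough illustration of the parameter-fitting idea described above, the sketch below fits the parameters of a toy surface deformation so that the deformed model surface best matches sparse, noisy surface points. The hemispherical "organ" surface, the Gaussian "packing" deformation, and the helper name deform are illustrative assumptions standing in for the paper's finite-element liver model, not its implementation.

```python
# Sketch: choose deformation parameters so a deformed model surface best fits
# sparse intraoperative surface points. The geometry and the "packing" deformation
# below are illustrative stand-ins for the FEM tissue model used in the paper.
import numpy as np
from scipy.optimize import least_squares
from scipy.spatial import cKDTree

rng = np.random.default_rng(0)

# Preoperative model surface: points on a hemisphere (stand-in for the liver surface), in mm.
theta = rng.uniform(0, np.pi / 2, 2000)
phi = rng.uniform(0, 2 * np.pi, 2000)
model = 50.0 * np.c_[np.cos(phi) * np.sin(theta),
                     np.sin(phi) * np.sin(theta),
                     np.cos(theta)]

def deform(points, params):
    """Toy 'packing' deformation: a Gaussian push along z.
    params = (amplitude_mm, center_x, center_y, width_mm)."""
    amp, cx, cy, width = params
    r2 = (points[:, 0] - cx) ** 2 + (points[:, 1] - cy) ** 2
    out = points.copy()
    out[:, 2] += amp * np.exp(-r2 / (2 * width ** 2))
    return out

# Synthetic sparse intraoperative data: deform with "true" parameters, subsample, add noise.
true_params = (8.0, 10.0, -5.0, 20.0)
subset = model[rng.choice(len(model), 150, replace=False)]
sparse = deform(subset, true_params) + rng.normal(scale=0.5, size=(150, 3))

def residuals(params):
    # Distance from each sparse intraoperative point to the currently deformed model surface.
    tree = cKDTree(deform(model, params))
    return tree.query(sparse)[0]

fit = least_squares(residuals, x0=[1.0, 0.0, 0.0, 10.0])
print("recovered parameters:", np.round(fit.x, 2))
print("mean surface misfit (mm):", float(residuals(fit.x).mean()))
```

In the paper the analogous step solves a tissue mechanics model for each candidate set of boundary-condition parameters; a closed-form deformation is used here only to keep the example self-contained.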


IEEE Transactions on Biomedical Engineering | 2013

Comparison Study of Intraoperative Surface Acquisition Methods for Surgical Navigation

Amber L. Simpson; Jessica Burgner; Courtenay L. Glisson; Stanley Duke Herrell; Burton Ma; Thomas S. Pheiffer; Robert J. Webster; Michael I. Miga

Soft-tissue image-guided interventions often require the digitization of organ surfaces for providing correspondence from medical images to the physical patient in the operating room. In this paper, the effect of several inexpensive surface acquisition techniques on target registration error and surface registration error (SRE) for soft tissue is investigated. A systematic approach is provided to compare image-to-physical registrations using three different methods of organ spatial digitization: 1) a tracked laser-range scanner (LRS), 2) a tracked pointer, and 3) a tracked conoscopic holography sensor (called a conoprobe). For each digitization method, surfaces of phantoms and biological tissues were acquired and registered to CT image volume counterparts. A comparison among these alignments demonstrated that registration errors were statistically smaller with the conoprobe than with the tracked pointer and LRS (p < 0.01). In all acquisitions, the conoprobe outperformed the LRS and tracked pointer: for example, the arithmetic means of the SRE over all data acquisitions with a porcine liver were 1.73 ± 0.77 mm, 3.25 ± 0.78 mm, and 4.44 ± 1.19 mm for the conoprobe, LRS, and tracked pointer, respectively. In a cadaveric kidney specimen, the arithmetic means of the SRE over all trials of the conoprobe and tracked pointer were 1.50 ± 0.50 mm and 3.51 ± 0.82 mm, respectively. Our results suggest that tissue displacements due to contact force, and attempts to maintain contact with tissue, compromise registrations that depend on data acquired from a tracked surgical instrument, and we provide an alternative method (tracked conoscopic holography) of digitizing surfaces for clinical use. The tracked conoscopic holography device outperforms LRS acquisitions with respect to registration accuracy.
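
For readers interested in how a surface registration error can be scored, the snippet below computes SRE as the RMS of closest-point distances from digitized points to a CT-derived surface cloud, assuming the image-to-physical registration has already been applied. The synthetic surface and noise level are made up; this is a sketch, not the study's measurement protocol.

```python
# Sketch: surface registration error (SRE) as the RMS of closest-point distances
# between digitized surface points and a CT-derived surface point cloud,
# assuming the rigid image-to-physical registration has already been applied.
import numpy as np
from scipy.spatial import cKDTree

def surface_registration_error(digitized_pts, ct_surface_pts):
    """RMS closest-point distance (mm) from digitized points to the CT surface cloud."""
    d, _ = cKDTree(ct_surface_pts).query(digitized_pts)
    return float(np.sqrt(np.mean(d ** 2)))

# Toy example: a dense "CT" surface and a sparser, noisy digitization of it.
rng = np.random.default_rng(1)
ct = rng.uniform(-30, 30, size=(5000, 2))
ct = np.c_[ct, 0.01 * (ct[:, 0] ** 2 + ct[:, 1] ** 2)]   # a curved sheet, coordinates in mm
probe = ct[rng.choice(len(ct), 300, replace=False)] + rng.normal(scale=1.5, size=(300, 3))
print(f"SRE: {surface_registration_error(probe, ct):.2f} mm")
```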


IEEE Journal of Translational Engineering in Health and Medicine | 2014

Near Real-Time Computer Assisted Surgery for Brain Shift Correction Using Biomechanical Models

Kay Sun; Thomas S. Pheiffer; Amber L. Simpson; Jared A. Weis; Reid C. Thompson; Michael I. Miga

Conventional image-guided neurosurgery relies on preoperative images to provide surgical navigational information and visualization. However, these images are no longer accurate once the skull has been opened and brain shift occurs. To account for changes in the shape of the brain caused by mechanical (e.g., gravity-induced deformations) and physiological effects (e.g., hyperosmotic drug-induced shrinking, or edema-induced swelling), updated images of the brain must be provided to the neuronavigation system in a timely manner for practical use in the operating room. In this paper, a novel preoperative and intraoperative computational processing pipeline for near real-time brain shift correction in the operating room was developed to automate and simplify the processing steps. Preoperatively, a computer model of the patient's brain with a subsequent atlas of potential deformations due to surgery is generated from diagnostic image volumes. In the case of gross changes in the interim between diagnosis and surgery, when reimaging is necessary, our preoperative pipeline can be generated within one day of surgery. Intraoperatively, sparse data measuring the cortical brain surface are collected using an optically tracked portable laser range scanner. These data are then used to guide an inverse modeling framework whereby full volumetric brain deformations are reconstructed from precomputed atlas solutions to rapidly match intraoperative cortical surface shift measurements. Once complete, the volumetric displacement field is used to update, i.e., deform, preoperative brain images to their intraoperative shifted state. In this paper, five surgical cases were analyzed with respect to the computational pipeline and workflow timing. Following cortical surface data acquisition, the approximate execution time was 4.5 min. The total update process, which included positioning the scanner, data acquisition, inverse model processing, and image deformation, took approximately 11-13 min. In addition, easily implemented hardware, software, and workflow processes were identified for improved performance in the near future.
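
The inverse modeling step lends itself to a compact sketch: treat each precomputed atlas solution as a basis field, solve for nonnegative weights so the weighted combination of their sparse-surface predictions matches the measured cortical shift, then reconstruct the full volumetric field with the same weights. The random matrices below stand in for real atlas solutions, and plain nonnegative least squares is a simplified stand-in for the constrained optimization used in practice.

```python
# Sketch of the atlas-based inverse step: find nonnegative weights on precomputed
# atlas displacement solutions so their combination matches sparse cortical-surface
# shift measurements, then apply the same weights to the full volumetric fields.
# The random "atlas" below is synthetic; a real atlas comes from biomechanical simulations.
import numpy as np
from scipy.optimize import nnls

rng = np.random.default_rng(0)
n_atlas, n_surface_pts, n_volume_nodes = 12, 40, 5000

# Each atlas solution: displacements at the sparse surface points (3 * n_surface_pts)
# and at all volume nodes (3 * n_volume_nodes), stored column-wise.
atlas_surface = rng.normal(size=(3 * n_surface_pts, n_atlas))
atlas_volume = rng.normal(size=(3 * n_volume_nodes, n_atlas))

# Measured sparse surface shifts (synthesized here from known weights plus noise).
true_w = np.abs(rng.normal(size=n_atlas))
measured = atlas_surface @ true_w + rng.normal(scale=0.05, size=3 * n_surface_pts)

# Nonnegative least squares: w >= 0 such that atlas_surface @ w ~ measured.
w, misfit = nnls(atlas_surface, measured)

# Reconstruct the full volumetric displacement field with the same weights.
volume_displacement = (atlas_volume @ w).reshape(n_volume_nodes, 3)
print("surface misfit (L2):", round(misfit, 3))
print("volumetric field shape:", volume_displacement.shape)
```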


IEEE Transactions on Biomedical Engineering | 2011

Tracking of Vessels in Intra-Operative Microscope Video Sequences for Cortical Displacement Estimation

Siyi Ding; Michael I. Miga; Thomas S. Pheiffer; Amber L. Simpson; Reid C. Thompson; Benoit M. Dawant

This article presents a method designed to automatically track cortical vessels in intra-operative microscope video sequences. The main application of this method is the estimation of cortical displacement that occurs during tumor resection procedures. The method works in three steps. First, models of vessels selected in the first frame of the sequence are built. These models are then used to track vessels across frames in the video sequence. Finally, displacements estimated using the vessels are extrapolated to the entire image. The method has been tested retrospectively on images simulating large displacement, tumor resection, and partial occlusion by surgical instruments and on 21 video sequences comprising several thousand frames acquired from three patients. Qualitative results show that the method is accurate, robust to the appearance and disappearance of surgical instruments, and capable of dealing with large differences in images caused by resection. Quantitative results show a mean vessel tracking error (VTE) of 2.4 pixels (0.3 or 0.6 mm, depending on the spatial resolution of the images) and an average target registration error (TRE) of 3.3 pixels (0.4 or 0.8 mm).
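
As a loose stand-in for the tracking step, the sketch below selects a patch around a vessel in the first frame and locates it in later frames by normalized cross-correlation. The synthetic frames and the template-matching approach are illustrative assumptions; the paper builds explicit vessel models rather than raw image templates.

```python
# Sketch: track a patch selected in the first frame across subsequent frames with
# normalized cross-correlation (cv2.matchTemplate). This is a simple stand-in for
# the paper's vessel models, shown here on synthetic frames.
import cv2
import numpy as np

rng = np.random.default_rng(0)

def synthetic_frame(shift):
    """A dark 'vessel' curve drawn on a noisy background, translated by `shift` pixels."""
    img = rng.normal(128, 10, size=(240, 320)).astype(np.uint8)
    for x in range(60, 260):
        y = int(120 + 20 * np.sin(x / 30.0)) + shift[1]
        cv2.circle(img, (x + shift[0], y), 2, 40, -1)
    return img

frames = [synthetic_frame((s, s // 2)) for s in range(0, 25, 5)]

# "Vessel model": a template cut from the first frame, centered on the vessel.
x0, y0, half = 94, 120, 24
template = frames[0][y0 - half:y0 + half, x0 - half:x0 + half]

for i, frame in enumerate(frames[1:], start=1):
    score = cv2.matchTemplate(frame, template, cv2.TM_CCOEFF_NORMED)
    _, _, _, (bx, by) = cv2.minMaxLoc(score)
    dx, dy = bx + half - x0, by + half - y0
    print(f"frame {i}: estimated displacement ({dx:+d}, {dy:+d}) px")
```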


Medical Physics | 2012

Design and evaluation of an optically-tracked single-CCD laser range scanner.

Thomas S. Pheiffer; Amber L. Simpson; Brian Lennon; Reid C. Thompson; Michael I. Miga

PURPOSE: Acquisition of laser range scans of an organ surface has the potential to efficiently provide measurements of geometric changes to soft tissue during a surgical procedure. A laser range scanner design is reported here which has been developed to drive intraoperative updates to conventional image-guided neurosurgery systems. METHODS: The scanner is optically tracked in the operating room with a multiface passive target. The novel design incorporates both the capture of surface geometry (via laser illumination) and color information (via visible light collection) through a single lens onto the same charge-coupled device (CCD). The accuracy of the geometric data was evaluated by scanning a high-precision phantom and comparing relative distances between landmarks in the scans with the corresponding ground truth (known) distances. The range of motion of the scanner with respect to the optical camera was determined by placing the scanner in common operating room configurations while sampling the visibility of the reflective spheres. The tracking accuracy was then analyzed by fixing the scanner and phantom in place, perturbing the optical camera around the scene, and observing variability in scan locations with respect to a tracked pen probe ground truth as the camera tracked the same scene from different positions. RESULTS: The geometric accuracy test produced a mean error and standard deviation of 0.25 ± 0.40 mm with an RMS error of 0.47 mm. The tracking tests showed that the scanner could be tracked at virtually all desired orientations required in the OR setup, with an overall tracking error and standard deviation of 2.2 ± 1.0 mm with an RMS error of 2.4 mm. There was no discernible difference between any of the three faces on the laser range scanner (LRS) with regard to tracking accuracy. CONCLUSIONS: A single-lens laser range scanner design was successfully developed and implemented with sufficient scanning and tracking accuracy for image-guided surgery.
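
The geometric accuracy evaluation reduces to simple arithmetic on inter-landmark distances, sketched below with made-up landmark coordinates and noise: pairwise distances measured in the scan are compared with ground-truth distances, and the mean ± standard deviation and RMS of the errors are reported.

```python
# Sketch: geometric accuracy from inter-landmark distances. Distances measured in the
# scan are compared with ground-truth distances; the numbers below are illustrative only.
import numpy as np
from itertools import combinations

def distance_errors(scanned_xyz, truth_xyz):
    """Signed errors (mm) of all pairwise landmark distances, scan minus ground truth."""
    pairs = list(combinations(range(len(truth_xyz)), 2))
    scan_d = np.array([np.linalg.norm(scanned_xyz[i] - scanned_xyz[j]) for i, j in pairs])
    true_d = np.array([np.linalg.norm(truth_xyz[i] - truth_xyz[j]) for i, j in pairs])
    return scan_d - true_d

rng = np.random.default_rng(2)
truth = rng.uniform(0, 100, size=(10, 3))                  # phantom landmark positions, mm
scan = truth + rng.normal(scale=0.3, size=truth.shape)     # simulated scan noise
err = distance_errors(scan, truth)
print(f"mean ± std: {err.mean():.2f} ± {err.std():.2f} mm, RMS: {np.sqrt((err**2).mean()):.2f} mm")
```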


IEEE Transactions on Biomedical Engineering | 2014

Evaluation of Conoscopic Holography for Estimating Tumor Resection Cavities in Model-Based Image-Guided Neurosurgery

Amber L. Simpson; Kay Sun; Thomas S. Pheiffer; D. Caleb Rucker; Allen K. Sills; Reid C. Thompson; Michael I. Miga

Surgical navigation relies on accurately mapping the intraoperative state of the patient to models derived from preoperative images. In image-guided neurosurgery, soft tissue deformations are common and have been shown to compromise the accuracy of guidance systems. In lieu of whole-brain intraoperative imaging, some advocate the use of intraoperatively acquired sparse data from laser-range scans, ultrasound imaging, or stereo reconstruction coupled with a computational model to drive subsurface deformations. Some authors have reported on compensating for brain sag, swelling, retraction, and the application of pharmaceuticals such as mannitol with these models. To date, strategies for modeling tissue resection have been limited. In this paper, we report our experiences with a novel digitization approach, called a conoprobe, to document tissue resection cavities and assess the impact of resection on model-based guidance systems. Specifically, the conoprobe was used to digitize the interior of the resection cavity during eight brain tumor resection surgeries and then compared against model predictions of tumor locations. We note that no effort was made to incorporate resection into the model; rather, the objective was to determine whether such measurements were feasible and to study their implications for modeling tissue resection. In addition, the digitized resection cavity was compared with early postoperative MRI scans to determine whether these scans can further inform tissue resection. The results demonstrate benefit in model correction despite resection not being explicitly modeled. However, the results also indicate the challenge that resection poses for model-correction approaches. With respect to the digitization technology, it is clear that the conoprobe provides important real-time data regarding resection and adds another dimension to our noncontact instrumentation framework for soft-tissue deformation compensation in guidance systems.


IEEE Transactions on Biomedical Engineering | 2011

Automatic Generation of Boundary Conditions Using Demons Nonrigid Image Registration for Use in 3-D Modality-Independent Elastography

Thomas S. Pheiffer; Jao J. Ou; Rowena E. Ong; Michael I. Miga

Modality-independent elastography (MIE) is a method of elastography that reconstructs the elastic properties of tissue using images acquired under different loading conditions and a biomechanical model. Boundary conditions are a critical input to the algorithm and are often determined by time-consuming point correspondence methods requiring manual user input. This study presents a novel method of automatically generating boundary conditions by nonrigidly registering two image sets with a demons diffusion-based registration algorithm. The method was successfully demonstrated in silico using magnetic resonance and X-ray computed tomography image data with known boundary conditions. These preliminary results produced boundary conditions with an accuracy of up to 80% compared with the known conditions. Demons-based boundary conditions were utilized within a 3-D MIE reconstruction to determine an elasticity contrast ratio between tumor and normal tissue. Two phantom experiments were then conducted to further test the accuracy of the demons boundary conditions and the MIE reconstruction arising from the use of these conditions. Preliminary results show a reasonable characterization of the material properties on this first attempt and a significant improvement in the automation level and viability of the method.
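
A minimal sketch of the central idea, using SimpleITK's demons filter on synthetic 2-D images: the demons registration produces a dense displacement field, which is then sampled at (hypothetical) boundary node locations to supply boundary-condition estimates. The images, filter settings, and node positions are assumptions for illustration; the study operates on 3-D MR and CT data within the full MIE pipeline.

```python
# Sketch: use a demons registration to obtain a dense displacement field, then sample
# that field at boundary node locations to serve as boundary-condition estimates.
# Synthetic 2-D Gaussian images stand in for the image volumes used in the paper.
import SimpleITK as sitk

# Two images of the same "object" under different loading (here, simply a small shift).
fixed = sitk.GaussianSource(sitk.sitkFloat32, size=[64, 64], sigma=[8, 8], mean=[32, 32])
moving = sitk.GaussianSource(sitk.sitkFloat32, size=[64, 64], sigma=[8, 8], mean=[35, 34])

demons = sitk.DemonsRegistrationFilter()
demons.SetNumberOfIterations(100)
demons.SetStandardDeviations(1.5)          # Gaussian smoothing of the displacement field
displacement = demons.Execute(fixed, moving)

# Sample the field at hypothetical boundary node positions (physical coordinates).
field_tx = sitk.DisplacementFieldTransform(sitk.Cast(displacement, sitk.sitkVectorFloat64))
boundary_nodes = [(20.0, 32.0), (32.0, 20.0), (44.0, 32.0), (32.0, 44.0)]
for node in boundary_nodes:
    mapped = field_tx.TransformPoint(node)
    u = tuple(round(m - n, 2) for m, n in zip(mapped, node))
    print(f"node {node}: estimated displacement {u}")
```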


Proceedings of SPIE | 2013

Phantom-based Comparison of the Accuracy of Point Clouds Extracted from Stereo Cameras and Laser Range Scanner

Ankur N. Kumar; Thomas S. Pheiffer; Amber L. Simpson; Reid C. Thompson; Michael I. Miga; Benoit M. Dawant

Using computational models, images acquired pre-operatively can be updated to account for intraoperative brain shift in image-guided surgical (IGS) systems. An optically tracked textured laser range scanner (tLRS) furnishes the 3D coordinates of cortical surface points (3D point clouds) over the surgical field of view and provides a correspondence between these and the pre-operative MR image. However, integration of the acquired tLRS data into a clinically acceptable system compatible throughout the clinical workflow of tumor resection has been challenging. This is because acquiring the tLRS data requires moving the scanner in and out of the surgical field, thus limiting the number of acquisitions. Large differences between acquisitions caused by tumor resection and tissue manipulation make it difficult to establish correspondence and estimate brain motion. An alternative to the tLRS is to use the temporally dense, feature-rich stereo surgical video data provided by the operating microscope. This allows for quick digitization of the cortical surface in 3D and can help continuously update the IGS system. In this paper, to understand the tradeoffs between these approaches as input to an IGS system, we compare the accuracy of 3D point clouds extracted from the stereo video system of the surgical microscope and from the tLRS on phantom objects. We show that the stereovision system of the surgical microscope achieves accuracy in the 0.46-1.5 mm range on our phantom objects and is a viable alternative to the tLRS for neurosurgical applications.


Medical Image Analysis | 2015

Persistent and automatic intraoperative 3D digitization of surfaces under dynamic magnifications of an operating microscope

Ankur N. Kumar; Michael I. Miga; Thomas S. Pheiffer; Lola B. Chambless; Reid C. Thompson; Benoit M. Dawant

One of the major challenges impeding advancement in image-guided surgical (IGS) systems is soft-tissue deformation during surgical procedures. These deformations reduce the utility of the patient's preoperative images and may produce inaccuracies in the application of preoperative surgical plans. Solutions to compensate for the tissue deformations include the acquisition of intraoperative tomographic images of the whole organ for direct displacement measurement and techniques that combine intraoperative organ surface measurements with computational biomechanical models to predict subsurface displacements. The latter solution has the advantage of being less expensive and amenable to surgical workflow. Several modalities such as textured laser scanners, conoscopic holography, and stereo-pair cameras have been proposed for the intraoperative 3D estimation of organ surfaces to drive patient-specific biomechanical models for the intraoperative update of preoperative images. Though each modality has its respective advantages and disadvantages, stereo-pair camera approaches used within a standard operating microscope are the focus of this article. A new method that permits the automatic and near real-time estimation of 3D surfaces (at 1 Hz) under varying magnifications of the operating microscope is proposed. This method has been evaluated on a CAD phantom object and on full-length neurosurgery video sequences (∼1 h) acquired intraoperatively by the proposed stereovision system. To the best of our knowledge, this type of validation study on full-length brain tumor surgery videos has not been done before. The method for estimating the unknown magnification factor of the operating microscope achieves accuracy within 0.02 of the theoretical value on a CAD phantom and within 0.06 on 4 clinical videos of the entire brain tumor surgery. When compared to a laser range scanner, the proposed method for reconstructing 3D surfaces intraoperatively achieves root mean square errors (surface-to-surface distance) in the 0.28-0.81 mm range on the phantom object and in the 0.54-1.35 mm range on 4 clinical cases. The digitization accuracy of the presented stereovision methods indicates that the operating microscope can be used to deliver the persistent intraoperative input required by computational biomechanical models to update the patient's preoperative images and facilitate active surgical guidance.
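
The core reconstruction step, triangulating matched left/right image points into 3-D, can be sketched with OpenCV as below. The camera matrices, baseline, and synthetic matches are assumptions; the paper's magnification estimation and dense stereo matching are not reproduced here.

```python
# Sketch: recover 3-D points from matched left/right image points by linear triangulation
# (cv2.triangulatePoints). Camera matrices and matches below are synthetic and illustrative.
import cv2
import numpy as np

# Intrinsics and a small stereo baseline (values are made up).
K = np.array([[1200.0, 0, 320], [0, 1200.0, 240], [0, 0, 1]])
P_left = K @ np.hstack([np.eye(3), np.zeros((3, 1))])                  # left camera at origin
P_right = K @ np.hstack([np.eye(3), np.array([[-6.0], [0.0], [0.0]])])  # 6 mm baseline

# Synthetic cortical-surface points (mm) and their projections in both views.
rng = np.random.default_rng(0)
pts3d = np.c_[rng.uniform(-10, 10, 50), rng.uniform(-10, 10, 50), rng.uniform(90, 110, 50)]

def project(P, X):
    x = P @ np.c_[X, np.ones(len(X))].T
    return (x[:2] / x[2]).astype(np.float64)

left, right = project(P_left, pts3d), project(P_right, pts3d)

# Linear triangulation back to 3-D and comparison with the ground truth.
Xh = cv2.triangulatePoints(P_left, P_right, left, right)
X = (Xh[:3] / Xh[3]).T
rms = np.sqrt(np.mean(np.sum((X - pts3d) ** 2, axis=1)))
print(f"triangulation RMS error: {rms:.4f} mm")
```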


Ultrasound in Medicine and Biology | 2014

Model-Based Correction of Tissue Compression for Tracked Ultrasound in Soft Tissue Image-Guided Surgery

Thomas S. Pheiffer; Reid C. Thompson; Daniel Caleb Rucker; Amber L. Simpson; Michael I. Miga

Acquisition of ultrasound data negatively affects image registration accuracy during image-guided therapy because of tissue compression by the probe. We present a novel compression correction method that models sub-surface tissue displacement resulting from application of a tracked probe to the tissue surface. Patient landmarks are first used to register the probe pose to pre-operative imaging. The ultrasound probe geometry is used to provide boundary conditions to a biomechanical model of the tissue. The deformation field solution of the model is inverted to non-rigidly transform the ultrasound images to an estimation of the tissue geometry before compression. Experimental results with gel phantoms indicated that the proposed method reduced the tumor margin modified Hausdorff distance (MHD) from 5.0 ± 1.6 to 1.9 ± 0.6 mm, and reduced tumor centroid alignment error from 7.6 ± 2.6 to 2.0 ± 0.9 mm. The method was applied to a clinical case and reduced the tumor margin MHD error from 5.4 ± 0.1 to 2.6 ± 0.1 mm and the centroid alignment error from 7.2 ± 0.2 to 3.5 ± 0.4 mm.
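
A simplified sketch of the correction step: given a model-predicted axial displacement field, the acquired (compressed) image is resampled through an approximate inverse of that field to estimate the uncompressed image. The synthetic image, the analytic depth-decaying field, and the one-step field inversion are illustrative assumptions; in the paper the field comes from a biomechanical model driven by the tracked probe geometry.

```python
# Sketch: undo a modeled probe compression by resampling the acquired (compressed) image
# through an approximate inverse of the predicted displacement field. The image and the
# analytic field below are synthetic stand-ins for the model output described in the paper.
import numpy as np
from scipy.ndimage import map_coordinates

rows, cols = 200, 200
yy, xx = np.mgrid[0:rows, 0:cols].astype(float)

# A synthetic "ultrasound" scene: a circular inclusion (rows = depth, cols = lateral).
uncompressed = (((yy - 120) ** 2 + (xx - 100) ** 2) < 30 ** 2).astype(float)

def u(y):
    """Axial displacement (pixels): strongest at the probe face (row 0), decaying with depth."""
    return 15.0 * np.exp(-y / 60.0)

# Acquired image under compression: row y shows tissue that sat u(y) pixels deeper at rest.
compressed = map_coordinates(uncompressed, [yy + u(yy), xx], order=1)

# Correction: invert the field with one fixed-point step, p ~= q - u(q - u(q)),
# and resample the compressed image there to estimate the uncompressed image.
corrected = map_coordinates(compressed, [yy - u(yy - u(yy)), xx], order=1)

print("mean mismatch before:", round(float(np.abs(compressed - uncompressed).mean()), 4))
print("mean mismatch after: ", round(float(np.abs(corrected - uncompressed).mean()), 4))
```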

Collaboration


Top co-authors of Thomas S. Pheiffer.

Amber L. Simpson, Memorial Sloan Kettering Cancer Center
Reid C. Thompson, Vanderbilt University Medical Center
Kay Sun, Vanderbilt University