Network


Daniel J. Mirota's latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniel J. Mirota is active.

Publication


Featured research published by Daniel J. Mirota.


Medical Physics | 2011

Mobile C-arm cone-beam CT for guidance of spine surgery: Image quality, radiation dose, and integration with interventional guidance

Sebastian Schafer; Sajendra Nithiananthan; Daniel J. Mirota; Ali Uneri; J. W. Stayman; Wojciech Zbijewski; C Schmidgunst; Gerhard Kleinszig; A. J. Khanna; Jeffrey H. Siewerdsen

PURPOSE: A flat-panel detector based mobile isocentric C-arm for cone-beam CT (CBCT) has been developed to allow intraoperative 3D imaging with sub-millimeter spatial resolution and soft-tissue visibility. Image quality and radiation dose were evaluated in spinal surgery, which commonly relies on lower-performance image-intensifier based mobile C-arms. Scan protocols were developed for task-specific imaging at minimum dose, in-room exposure was evaluated, and integration of the imaging system with a surgical guidance system was demonstrated in preclinical studies of minimally invasive spine surgery.

METHODS: Radiation dose was assessed as a function of kilovolt (peak) (80-120 kVp) and milliampere-seconds (mAs) using thoracic and lumbar spine dosimetry phantoms. In-room radiation exposure was measured throughout the operating room for various CBCT scan protocols. Image quality was assessed using tissue-equivalent inserts in chest and abdomen phantoms to evaluate bone and soft-tissue contrast-to-noise ratio as a function of dose, and task-specific protocols (i.e., visualization of bone or soft tissue) were defined. Results were applied in preclinical studies using a cadaveric torso simulating minimally invasive, transpedicular surgery.

RESULTS: Task-specific CBCT protocols identified include: thoracic bone visualization (100 kVp; 60 mAs; 1.8 mGy); lumbar bone visualization (100 kVp; 130 mAs; 3.2 mGy); thoracic soft-tissue visualization (100 kVp; 230 mAs; 4.3 mGy); and lumbar soft-tissue visualization (120 kVp; 460 mAs; 10.6 mGy), each at 0.3 × 0.3 × 0.9 mm³ voxel size. An alternative lower-dose, lower-resolution soft-tissue visualization protocol was identified (100 kVp; 230 mAs; 5.1 mGy) for the lumbar region at 0.3 × 0.3 × 1.5 mm³ voxel size. A half-scan orbit of the C-arm (x-ray tube traversing under the table) was dosimetrically advantageous (prepatient attenuation), with a nonuniform dose distribution (~2× higher at the entrance side than at isocenter, and ~3-4× lower at the exit side). The in-room dose (microsievert) per unit scan dose (milligray) ranged from ~21 μSv/mGy on average at tableside to ~0.1 μSv/mGy at 2.0 m distance from isocenter. All protocols involve surgical staff stepping behind a shield wall for each CBCT scan, thereby imparting approximately zero dose to staff. Protocol implementation in preclinical cadaveric studies demonstrated integration of the C-arm with a navigation system for spine surgery guidance, specifically minimally invasive vertebroplasty, in which the system provided accurate guidance and visualization of needle placement and bone cement distribution. Cumulative dose including multiple intraoperative scans was ~11.5 mGy for CBCT-guided thoracic vertebroplasty and ~23.2 mGy for lumbar vertebroplasty, with dose to staff at tableside reduced to ~1 min of fluoroscopy time (~40-60 μSv), compared to 5-11 min for the conventional approach.

CONCLUSIONS: Intraoperative CBCT using a high-performance mobile C-arm prototype demonstrates image quality suitable for guidance of spine surgery, with task-specific protocols providing an important basis for minimizing radiation dose while maintaining image quality sufficient for surgical guidance. Images demonstrate a significant advance in spatial resolution and soft-tissue visibility, and CBCT guidance offers the potential to reduce reliance on fluoroscopy, reducing cumulative dose to patient and staff. Integration with a surgical guidance system demonstrates precise tracking and visualization in up-to-date images (alleviating reliance on preoperative images alone), including detection of errors or suboptimal surgical outcomes in the operating room.
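
As a rough numerical illustration of the in-room exposure figures reported above, the Python sketch below scales scan dose by the reported position-dependent factors; the function name and structure are illustrative, not part of the study.

    # Back-of-the-envelope use of the reported exposure factors:
    # ~21 uSv/mGy at tableside, ~0.1 uSv/mGy at 2.0 m from isocenter.
    DOSE_PER_MGY_USV = {"tableside": 21.0, "2m_from_isocenter": 0.1}

    def staff_dose_usv(scan_dose_mgy, position="tableside", behind_shield=False):
        """Estimate per-scan staff dose; ~zero if staff step behind the shield wall."""
        if behind_shield:
            return 0.0
        return DOSE_PER_MGY_USV[position] * scan_dose_mgy

    # e.g., one 10.6 mGy lumbar soft-tissue scan:
    print(staff_dose_usv(10.6))                       # ~222.6 uSv at tableside
    print(staff_dose_usv(10.6, "2m_from_isocenter"))  # ~1.1 uSv at 2.0 m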


IEEE Transactions on Medical Imaging | 2012

A System for Video-Based Navigation for Endoscopic Endonasal Skull Base Surgery

Daniel J. Mirota; Hanzi Wang; Russell H. Taylor; Masaru Ishii; Gary L. Gallia; Gregory D. Hager

Surgeries of the skull base require accuracy to safely navigate the critical anatomy. This is particularly the case for endoscopic endonasal skull base surgery (ESBS), where surgeons work within millimeters of neurovascular structures at the skull base. Today's navigation systems provide approximately 2 mm accuracy. Accuracy is limited by the indirect relationship of the navigation system, the image, and the patient. We propose a method to directly track the position of the endoscope using video data acquired from the endoscope camera. Our method first tracks feature points in the video and reconstructs them to produce 3D points, and then registers the reconstructed point cloud to a surface segmented from preoperative computed tomography (CT) data. After the initial registration, the system tracks image features and maintains the 2D-3D correspondence of image features and 3D locations. These data are then used to update the current camera pose. We present a method for validation of our system, which achieves submillimeter (0.70 mm mean) target registration error (TRE).
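
As a sketch of the per-frame pose-update step described above (re-estimating the camera pose from maintained 2D-3D correspondences), the Python snippet below uses OpenCV's PnP solver; the function and the threshold value are illustrative assumptions, not the authors' implementation.

    import numpy as np
    import cv2

    def update_camera_pose(points_3d, points_2d, camera_matrix, dist_coeffs):
        """Re-estimate endoscope pose from tracked 2D-3D correspondences."""
        ok, rvec, tvec, inliers = cv2.solvePnPRansac(
            points_3d.astype(np.float64),   # Nx3 points in CT coordinates
            points_2d.astype(np.float64),   # Nx2 tracked image features
            camera_matrix, dist_coeffs,
            reprojectionError=2.0,          # inlier threshold in pixels (assumed)
            flags=cv2.SOLVEPNP_ITERATIVE)
        if not ok:
            raise RuntimeError("pose update failed: too few consistent matches")
        R, _ = cv2.Rodrigues(rvec)          # rotation vector -> 3x3 matrix
        return R, tvec, inliers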


International Journal of Computer Assisted Radiology and Surgery | 2012

TREK: an integrated system architecture for intraoperative cone-beam CT-guided surgery.

Ali Uneri; Sebastian Schafer; Daniel J. Mirota; Sajendra Nithiananthan; Yoshito Otake; Russell H. Taylor; Jeffrey H. Siewerdsen

Purpose: A system architecture has been developed for integration of intraoperative 3D imaging [viz., mobile C-arm cone-beam CT (CBCT)] with surgical navigation (e.g., trackers, endoscopy, and preoperative image and planning data). The goal of this paper is to describe the architecture and its handling of a broad variety of data sources in modular tool development for streamlined use of CBCT guidance in application-specific surgical scenarios.

Methods: The architecture builds on two proven open-source software packages, namely the cisst package (Johns Hopkins University, Baltimore, MD) and 3D Slicer (Brigham and Women's Hospital, Boston, MA), and combines data sources common to image-guided procedures with intraoperative 3D imaging. Integration at the software component level is achieved through language bindings to a scripting language (Python) and an object-oriented approach to abstract and simplify the use of devices with varying characteristics. The platform aims to minimize offline data processing and to expose quantitative tools that analyze and communicate factors of geometric precision online. Modular tools are defined to accomplish specific surgical tasks, demonstrated in three clinical scenarios (temporal bone, skull base, and spine surgery) that involve a progressively increased level of complexity in toolset requirements.

Results: The resulting architecture (referred to as "TREK") hosts a collection of modules developed according to application-specific surgical tasks, emphasizing streamlined integration with intraoperative CBCT. These include multi-modality image display; 3D-3D rigid and deformable registration to bring preoperative image and planning data to the most up-to-date CBCT; 3D-2D registration of planning and image data to real-time fluoroscopy; infrared, electromagnetic, and video-based trackers used individually or in hybrid arrangements; augmented overlay of image and planning data in endoscopic or in-room video; and real-time "virtual fluoroscopy" computed from GPU-accelerated digitally reconstructed radiographs (DRRs). Application in three preclinical scenarios (temporal bone, skull base, and spine surgery) demonstrates the utility of the modular, task-specific approach in progressively complex tasks.

Conclusions: The design and development of a system architecture for image-guided surgery has been reported, demonstrating enhanced utilization of intraoperative CBCT in surgical applications with vastly different requirements. The system integrates C-arm CBCT with a broad variety of data sources in a modular fashion that streamlines the interface to application-specific tools, accommodates distinct workflow scenarios, and accelerates testing and translation of novel toolsets to clinical use. The modular architecture was shown to adapt to and satisfy the requirements of distinct surgical scenarios from a common code base, leveraging software components arising from over a decade of effort within the imaging and computer-assisted interventions community.
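
A hedged illustration of the object-oriented device abstraction described above: heterogeneous data sources (trackers, video, imaging) sit behind a single interface so that task-specific modules can be composed at the Python scripting layer. All class and method names here are hypothetical, not TREK's actual API.

    from abc import ABC, abstractmethod

    class DataSource(ABC):
        """Common interface for heterogeneous intraoperative data sources."""
        @abstractmethod
        def connect(self) -> None: ...
        @abstractmethod
        def latest(self):
            """Return the most recent sample (pose, video frame, or volume)."""

    class OpticalTracker(DataSource):
        def connect(self): print("tracker: connected")
        def latest(self): return {"tool": "pointer", "pose": "4x4 matrix here"}

    class EndoscopeVideo(DataSource):
        def connect(self): print("video: streaming")
        def latest(self): return b"<frame bytes>"

    def run_module(sources):
        """A task-specific module composes whichever sources its task needs."""
        for s in sources:
            s.connect()
        return [s.latest() for s in sources]

    print(run_module([OpticalTracker(), EndoscopeVideo()]))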


IEEE Transactions on Medical Imaging | 2013

Evaluation of a System for High-Accuracy 3D Image-Based Registration of Endoscopic Video to C-Arm Cone-Beam CT for Image-Guided Skull Base Surgery

Daniel J. Mirota; Ali Uneri; Sebastian Schafer; Sajendra Nithiananthan; Douglas D. Reh; Masaru Ishii; Gary L. Gallia; Russell H. Taylor; Gregory D. Hager; Jeffrey H. Siewerdsen

The safety of endoscopic skull base surgery can be enhanced by accurate navigation in preoperative computed tomography (CT) or, more recently, intraoperative cone-beam CT (CBCT). The ability to register real-time endoscopic video with CBCT offers an additional advantage by rendering information directly within the visual scene to account for intraoperative anatomical change. However, tracker localization error (~1-2 mm) limits the accuracy with which video and tomographic images can be registered. This paper reports the first implementation of image-based video-CBCT registration, conducts a detailed quantitation of the dependence of registration accuracy on system parameters, and demonstrates the improvement in registration accuracy achieved by the image-based approach. Performance was evaluated as a function of parameters intrinsic to the image-based approach, including system geometry, CBCT image quality, and computational runtime. Overall system performance was evaluated in a cadaver study simulating transsphenoidal skull base tumor excision. Results demonstrated significant improvement (p < 0.001) in registration accuracy, with a mean reprojection distance error of 1.28 mm for the image-based approach versus 1.82 mm for the conventional tracker-based method. Image-based registration was highly robust against the CBCT image quality factors of noise and resolution, permitting integration with low-dose intraoperative CBCT.
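
For context, the sketch below shows one conventional way a mean target registration error like the figures above is computed: apply the estimated rigid registration to known target points and measure the distance to their ground-truth locations. The function is illustrative, not the paper's evaluation code.

    import numpy as np

    def mean_tre(targets_ct, targets_true, R, t):
        """targets_ct: Nx3 target points in CT/CBCT space;
        R, t: estimated rigid registration into the reference frame."""
        mapped = targets_ct @ R.T + t                  # apply the registration
        errors = np.linalg.norm(mapped - targets_true, axis=1)
        return errors.mean(), errors.std()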


Proceedings of SPIE | 2011

High-accuracy 3D image-based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery

Daniel J. Mirota; Ali Uneri; Sebastian Schafer; Sajendra Nithiananthan; Douglas D. Reh; Gary L. Gallia; Russell H. Taylor; Gregory D. Hager; Jeffrey H. Siewerdsen

Registration of endoscopic video to preoperative CT facilitates high-precision surgery of the head, neck, and skull base. Conventional video-CT registration is limited by the accuracy of the tracker and does not use the underlying video or CT image data. A new image-based video registration method has been developed to overcome the limitations of conventional tracker-based registration. This method adds to a navigation system based on intraoperative C-arm cone-beam CT (CBCT), in turn providing high-accuracy registration of video to the surgical scene. The resulting registration enables visualization of the CBCT and planning data within the endoscopic video. The system incorporates a mobile C-arm integrated with an optical tracking system, video endoscopy, deformable registration of preoperative CT with intraoperative CBCT, and 3D visualization. As in the tracker-based approach, in image-based video-CBCT registration the endoscope is first localized with the optical tracking system, followed by a direct 3D image-based registration of the video to the CBCT. In this way, the system achieves video-CBCT registration that is both fast and accurate. Application in skull-base surgery demonstrates overlay of critical structures (e.g., carotid arteries) and surgical targets with sub-mm accuracy. Phantom and cadaver experiments show consistent improvement in target registration error (TRE) in video overlay over conventional tracker-based registration, e.g., 0.92 mm versus 1.82 mm for image-based and tracker-based registration, respectively. The proposed method represents a twofold advance: first, through registration of video to up-to-date intraoperative CBCT, and second, through direct 3D image-based video-CBCT registration, which together provide more confident visualization of target and normal tissues within up-to-date images.
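
A minimal sketch of the overlay step described above: projecting a registered 3D structure (e.g., a carotid-artery centerline in CBCT coordinates) into the endoscopic frame with a pinhole camera model via OpenCV. The intrinsics, pose, and centerline below are placeholder assumptions.

    import numpy as np
    import cv2

    K = np.array([[800., 0., 320.],       # assumed endoscope intrinsics
                  [0., 800., 240.],
                  [0.,   0.,   1.]])
    rvec = np.zeros(3)                    # assumed CBCT-to-camera rotation
    tvec = np.array([0., 0., 50.])        # assumed translation (mm)

    # e.g., 20 samples along a placeholder carotid centerline in CBCT space
    centerline = np.array([[x, 0.0, 0.0] for x in np.linspace(-5.0, 5.0, 20)])
    pts2d, _ = cv2.projectPoints(centerline, rvec, tvec, K, None)
    # pts2d now holds the pixel coordinates to draw over the live video frame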


Laryngoscope | 2012

Intraoperative C-arm cone-beam computed tomography: quantitative analysis of surgical performance in skull base surgery.

Stella S. Lee; Gary L. Gallia; Douglas D. Reh; Sebastian Schafer; Ali Uneri; Daniel J. Mirota; Sajendra Nithiananthan; Yoshito Otake; J. Webster Stayman; Wojciech Zbijewski; Jeffrey H. Siewerdsen

To determine whether incorporation of intraoperative imaging via a new cone-beam computed tomography (CBCT) image-guidance system improves accuracy and facilitates resection in sinus and skull-base surgery through quantification of surgical performance.


Proceedings of SPIE | 2012

A clinical pilot study of a modular video-CT augmentation system for image-guided skull base surgery

Wen P. Liu; Daniel J. Mirota; Ali Uneri; Yoshito Otake; Gregory D. Hager; Douglas D. Reh; Masaru Ishii; Gary L. Gallia; Jeffrey H. Siewerdsen

Augmentation of endoscopic video with preoperative or intraoperative image data [e.g., planning data and/or anatomical segmentations defined in computed tomography (CT) and magnetic resonance (MR) images] can improve navigation, spatial orientation, confidence, and tissue resection in skull base surgery, especially with respect to critical neurovascular structures that may be difficult to visualize in the video scene. This paper presents the engineering and evaluation of a video augmentation system for endoscopic skull base surgery translated to use in a clinical study. Extension of previous research yielded a practical system with a modular design that can be applied to other endoscopic surgeries, including orthopedic, abdominal, and thoracic procedures. A clinical pilot study is underway to assess feasibility and benefit to surgical performance by overlaying CT or MR planning data on real-time, high-definition endoscopic video. Preoperative planning included segmentation of the carotid arteries, optic nerves, and surgical target volume (e.g., tumor). An automated camera calibration process was developed that demonstrates mean re-projection accuracy of (0.7 ± 0.3) pixels and mean target registration error of (2.3 ± 1.5) mm. An IRB-approved clinical study involving fifteen patients undergoing skull base tumor surgery is underway, in which each surgery includes the experimental video-CT system deployed in parallel to the standard-of-care (unaugmented) video display. Questionnaires distributed to one neurosurgeon and two otolaryngologists are used to assess primary outcome measures regarding the benefit to surgical confidence in localizing critical structures and targets by means of video overlay during surgical approach, resection, and reconstruction.
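
As an illustration of an automated checkerboard-based camera calibration of the kind described above, the sketch below uses OpenCV; the board size, image paths, and error reporting are assumptions (the paper does not specify its calibration target), and OpenCV reports an RMS rather than a mean re-projection error.

    import glob
    import numpy as np
    import cv2

    pattern = (9, 6)                      # inner-corner count of the board (assumed)
    obj = np.zeros((pattern[0] * pattern[1], 3), np.float32)
    obj[:, :2] = np.mgrid[0:pattern[0], 0:pattern[1]].T.reshape(-1, 2)

    obj_pts, img_pts, size = [], [], None
    for path in glob.glob("calib/*.png"):  # placeholder image directory
        gray = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        found, corners = cv2.findChessboardCorners(gray, pattern)
        if found:
            obj_pts.append(obj)
            img_pts.append(corners)
            size = gray.shape[::-1]

    assert obj_pts, "no usable calibration views found"
    rms, K, dist, rvecs, tvecs = cv2.calibrateCamera(obj_pts, img_pts, size,
                                                     None, None)
    print(f"re-projection error (RMS): {rms:.2f} px")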


Proceedings of SPIE | 2012

High-performance C-arm cone-beam CT guidance of thoracic surgery

Sebastian Schafer; Yoshito Otake; Ali Uneri; Daniel J. Mirota; Sajendra Nithiananthan; J. Webster Stayman; Wojciech Zbijewski; Gerhard Kleinszig; Rainer Graumann; Marc S. Sussman; Jeffrey H. Siewerdsen

Localizing subpalpable nodules in minimally invasive video-assisted thoracic surgery (VATS) presents a significant challenge. To overcome the inherent problems of preoperative nodule tagging under CT fluoroscopic guidance, an intraoperative C-arm cone-beam CT (CBCT) image-guidance system has been developed for direct localization of subpalpable tumors in the OR, including real-time tracking of surgical tools (including the thoracoscope) and video-CBCT registration for augmentation of the thoracoscopic scene. Acquisition protocols for nodule visibility in the inflated and deflated lung were delineated in phantom and animal/cadaver studies. Motion-compensated reconstruction was implemented to account for motion induced by the ventilated contralateral lung. Experience in CBCT-guided targeting of simulated lung nodules included phantoms, porcine models, and cadavers. Phantom studies defined low-dose acquisition protocols providing contrast-to-noise ratio sufficient for lung nodule visualization, confirmed in porcine specimens with simulated nodules (3-6 mm diameter PE spheres, ~100-150 HU contrast, 2.1 mGy). Nodule visibility in CBCT of the collapsed lung, with reduced contrast according to air volume retention, was more challenging, but initial studies confirmed visibility using scan protocols at slightly increased dose (~4.6-11.1 mGy). Motion-compensated reconstruction employing a 4D deformation map in the backprojection process reduced artifacts associated with motion blur. Augmentation of thoracoscopic video with renderings of the target and critical structures (e.g., pulmonary artery) showed geometric accuracy consistent with camera calibration and the tracking system (2.4 mm registration error). Initial results suggest a potentially valuable role for CBCT guidance in VATS, improving precision in minimally invasive, lung-conserving surgeries, avoiding critical structures, obviating the burdens of preoperative localization, and improving patient safety.
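
A minimal sketch of the contrast-to-noise ratio criterion used to define the low-dose protocols above, using one common CNR definition; the masks and variable names are illustrative.

    import numpy as np

    def cnr(image, nodule_mask, background_mask):
        """CNR = |mean(nodule) - mean(background)| / std(background)."""
        nodule = image[nodule_mask]
        background = image[background_mask]
        return abs(nodule.mean() - background.mean()) / background.std()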


Information Processing in Computer-Assisted Interventions (IPCAI) | 2010

Active multispectral illumination and image fusion for retinal microsurgery

Raphael Sznitman; Seth Billings; Diego Rother; Daniel J. Mirota; Yi Yang; James T. Handa; Peter L. Gehlbach; Jin U. Kang; Gregory D. Hager; Russell H. Taylor

It has been shown that white light exposure during retinal microsurgeries is detrimental to patients. To address this problem, we present a novel device and image processing tool, which can be used to significantly reduce the amount of phototoxicity induced in the eye. Our device alternates between illuminating the retina using white and limited spectrum light, while a fully automated image processing algorithm produces a synthetic white light video by colorizing non-white light images. We show qualitatively and quantitatively that our system can provide reliable images using far less toxic light when compared to traditional systems. In addition, the method proposed in this paper may be adapted to other clinical applications in order to give surgeons more flexibility when visualizing areas of interest.
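
A hedged sketch of the colorization idea described above: combine the luminance of the current limited-spectrum frame with the chrominance of the most recent white-light frame to synthesize a white-light-like image. This is a deliberate simplification of the paper's automated algorithm.

    import cv2

    def colorize(limited_gray, last_white_bgr):
        """limited_gray: HxW uint8 frame under limited-spectrum light;
        last_white_bgr: HxWx3 uint8 frame from the last white-light exposure."""
        lab = cv2.cvtColor(last_white_bgr, cv2.COLOR_BGR2Lab)
        lab[:, :, 0] = limited_gray        # swap in current luminance (approximate;
                                           # ignores OpenCV's 8-bit L scaling)
        return cv2.cvtColor(lab, cv2.COLOR_Lab2BGR)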


Computer-Assisted and Robotic Endoscopy: First International Workshop, CARE 2014, Held in Conjunction with MICCAI 2014, Boston, MA, USA | 2014

Is Multi-model Feature Matching Better for Endoscopic Motion Estimation?

Xiang Xiang; Daniel J. Mirota; Austin Reiter; Gregory D. Hager

Camera motion estimation is a standard yet critical step in endoscopic visualization. It is affected by variation in the locations and correspondences of features detected in 2D images. Feature detectors and descriptors vary, though one of the most widely used remains SIFT. Practitioners usually also adopt its feature matching strategy, which defines inliers as feature pairs consistent with a global affine transformation. However, for endoscopic videos, we ask whether it is more suitable to cluster features into multiple groups, while still enforcing the same transformation as in SIFT within each group. Such a multi-model idea has recently been examined in the Multi-Affine work, which outperforms Lowe's SIFT in terms of re-projection error on minimally invasive endoscopic images with manually labelled ground-truth matches of SIFT features. Since the difference lies in matching, the accuracy gain of the estimated motion is attributed to the holistic Multi-Affine feature matching algorithm. More concretely, however, the matching criterion and point searching can be the same as those built into SIFT; we argue that the real variation is only in motion model verification: we either enforce a single global motion model or employ a group of multiple local ones. In this paper, we investigate how sensitive the estimated motion is to the number of motion models assumed in feature matching. While this sensitivity can be evaluated analytically, we present an empirical analysis in a leave-one-out cross-validation setting that does not require labels of ground-truth matches. The sensitivity is then characterized by the variance of a sequence of motion estimates. We present a series of quantitative comparisons, such as accuracy and variance, between Multi-Affine motion models and the global affine model.
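
To make the two matching regimes concrete, the sketch below contrasts a single global affine verification with a grouped, multi-model verification over SIFT ratio-test matches; the k-means clustering step, thresholds, and k = 3 group count are illustrative assumptions, not the Multi-Affine algorithm itself.

    import numpy as np
    import cv2

    def sift_ratio_matches(img1, img2, ratio=0.75):
        """Lowe's ratio-test matches between two grayscale frames."""
        sift = cv2.SIFT_create()
        k1, d1 = sift.detectAndCompute(img1, None)
        k2, d2 = sift.detectAndCompute(img2, None)
        good = []
        for pair in cv2.BFMatcher().knnMatch(d1, d2, k=2):
            if len(pair) == 2 and pair[0].distance < ratio * pair[1].distance:
                good.append(pair[0])
        p1 = np.float32([k1[m.queryIdx].pt for m in good])
        p2 = np.float32([k2[m.trainIdx].pt for m in good])
        return p1, p2

    def global_affine_inliers(p1, p2):
        """Verify all matches against one global affine model (RANSAC)."""
        _, inl = cv2.estimateAffine2D(p1, p2, ransacReprojThreshold=3.0)
        return 0 if inl is None else int(inl.sum())

    def multi_affine_inliers(p1, p2, k=3):
        """Cluster matches spatially, then verify one affine model per group."""
        criteria = (cv2.TERM_CRITERIA_EPS + cv2.TERM_CRITERIA_MAX_ITER, 20, 1.0)
        _, labels, _ = cv2.kmeans(p1, k, None, criteria, 5, cv2.KMEANS_PP_CENTERS)
        total = 0
        for c in range(k):
            idx = labels.ravel() == c
            if idx.sum() >= 3:
                _, inl = cv2.estimateAffine2D(p1[idx], p2[idx],
                                              ransacReprojThreshold=3.0)
                total += 0 if inl is None else int(inl.sum())
        return total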

Collaboration


Dive into Daniel J. Mirota's collaborations.

Top Co-Authors

Ali Uneri (Johns Hopkins University)

Yoshito Otake (Nara Institute of Science and Technology)

Gary L. Gallia (Johns Hopkins University)

Douglas D. Reh (Johns Hopkins University)