
Publications


Featured research published by Ali Uneri.


IEEE International Conference on Biomedical Robotics and Biomechatronics | 2010

New steady-hand Eye Robot with micro-force sensing for vitreoretinal surgery

Ali Uneri; Marcin Balicki; James T. Handa; Peter L. Gehlbach; Russell H. Taylor; Iulian Iordachita

In retinal microsurgery, surgeons are required to perform micron-scale maneuvers while safely applying forces to the retinal tissue that are below sensory perception. Real-time characterization and precise manipulation of this delicate tissue have thus far been hindered by human limits on tool control and the lack of a surgically compatible endpoint sensing instrument. Here we present the design of a new-generation, cooperatively controlled microsurgery robot with a remote center-of-motion (RCM) mechanism and an integrated custom micro-force sensing surgical hook. Utilizing the forces measured by the end effector, we correct for tool deflections and implement a micro-force guided cooperative control algorithm to actively guide the operator. Preliminary experiments have been carried out to test our new control methods on raw chicken egg inner shell membranes and to capture useful dynamic characteristics associated with delicate tissue manipulations.
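The control law itself is not given in the abstract; as a rough sketch of how micro-force-guided cooperative control can work, the following admittance-style mapping scales the surgeon's handle input down as the measured tip force grows. All gains, thresholds, and names are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

# Illustrative admittance-style cooperative control with micro-force limiting.
# Gains, thresholds, units, and names are assumptions, not the authors' implementation.

K_HANDLE = 1.0e-3   # admittance gain: handle force (mN) -> tool velocity (mm/s)
F_LIMIT = 7.5       # tip-force magnitude (mN) above which motion is fully attenuated

def cooperative_velocity(f_handle: np.ndarray, f_tip: np.ndarray) -> np.ndarray:
    """Map the surgeon's handle force to a commanded tool velocity,
    attenuated as the measured tip force approaches the safety limit."""
    tip_mag = float(np.linalg.norm(f_tip))
    scale = max(0.0, 1.0 - tip_mag / F_LIMIT)  # 1 when tip force is ~0, 0 at the limit
    return K_HANDLE * scale * f_handle

# Example: 200 mN applied at the handle while the hook senses 3 mN at the tip.
v_cmd = cooperative_velocity(np.array([200.0, 0.0, 0.0]), np.array([0.0, 3.0, 0.0]))
print(v_cmd)  # commanded velocity (mm/s), reduced by the tip-force attenuation
```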


Medical Physics | 2011

Mobile C-arm cone-beam CT for guidance of spine surgery: Image quality, radiation dose, and integration with interventional guidance

Sebastian Schafer; Sajendra Nithiananthan; Daniel J. Mirota; Ali Uneri; J. W. Stayman; Wojciech Zbijewski; C Schmidgunst; Gerhard Kleinszig; A. J. Khanna; Jeffrey H. Siewerdsen

PURPOSE A flat-panel detector based mobile isocentric C-arm for cone-beam CT (CBCT) has been developed to allow intraoperative 3D imaging with sub-millimeter spatial resolution and soft-tissue visibility. Image quality and radiation dose were evaluated in spinal surgery, which commonly relies on lower-performance image intensifier based mobile C-arms. Scan protocols were developed for task-specific imaging at minimum dose, in-room exposure was evaluated, and integration of the imaging system with a surgical guidance system was demonstrated in preclinical studies of minimally invasive spine surgery. METHODS Radiation dose was assessed as a function of kilovolt (peak) (80-120 kVp) and milliampere second using thoracic and lumbar spine dosimetry phantoms. In-room radiation exposure was measured throughout the operating room for various CBCT scan protocols. Image quality was assessed using tissue-equivalent inserts in chest and abdomen phantoms to evaluate bone and soft-tissue contrast-to-noise ratio as a function of dose, and task-specific protocols (i.e., visualization of bone or soft tissues) were defined. Results were applied in preclinical studies using a cadaveric torso simulating minimally invasive, transpedicular surgery. RESULTS Task-specific CBCT protocols identified include: thoracic bone visualization (100 kVp; 60 mAs; 1.8 mGy); lumbar bone visualization (100 kVp; 130 mAs; 3.2 mGy); thoracic soft-tissue visualization (100 kVp; 230 mAs; 4.3 mGy); and lumbar soft-tissue visualization (120 kVp; 460 mAs; 10.6 mGy), each at 0.3 × 0.3 × 0.9 mm³ voxel size. Alternative lower-dose, lower-resolution soft-tissue visualization protocols were identified (100 kVp; 230 mAs; 5.1 mGy) for the lumbar region at 0.3 × 0.3 × 1.5 mm³ voxel size. A half-scan orbit of the C-arm (x-ray tube traversing under the table) was dosimetrically advantageous (prepatient attenuation) with a nonuniform dose distribution (∼2× higher at the entrance side than at isocenter, and ∼3-4× lower at the exit side). The in-room dose (microsievert) per unit scan dose (milligray) ranged from ∼21 μSv/mGy on average at tableside to ∼0.1 μSv/mGy at 2.0 m distance from isocenter. All protocols involve surgical staff stepping behind a shield wall for each CBCT scan, therefore imparting ∼zero dose to staff. Protocol implementation in preclinical cadaveric studies demonstrates integration of the C-arm with a navigation system for spine surgery guidance, specifically minimally invasive vertebroplasty, in which the system provided accurate guidance and visualization of needle placement and bone cement distribution. Cumulative dose including multiple intraoperative scans was ∼11.5 mGy for CBCT-guided thoracic vertebroplasty and ∼23.2 mGy for lumbar vertebroplasty, with dose to staff at tableside reduced to ∼1 min of fluoroscopy time (∼40-60 μSv), compared to 5-11 min for the conventional approach. CONCLUSIONS Intraoperative CBCT using a high-performance mobile C-arm prototype demonstrates image quality suitable for guidance of spine surgery, with task-specific protocols providing an important basis for minimizing radiation dose while maintaining image quality sufficient for surgical guidance. Images demonstrate a significant advance in spatial resolution and soft-tissue visibility, and CBCT guidance offers the potential to reduce reliance on fluoroscopy, lowering cumulative dose to patient and staff. Integration with a surgical guidance system demonstrates precise tracking and visualization in up-to-date images (alleviating reliance on preoperative images only), including detection of errors or suboptimal surgical outcomes in the operating room.
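As a quick illustration of how the reported in-room exposure factors combine with the task-specific scan doses, the following back-of-the-envelope calculation multiplies each per-scan dose by the μSv/mGy figures quoted above. Only the input numbers come from the abstract; the calculation itself is illustrative.

```python
# Back-of-the-envelope in-room exposure per CBCT scan, using values quoted above.
# Only the input numbers come from the abstract; the calculation is illustrative.

DOSE_PER_SCAN_MGY = {                # task-specific scan dose (mGy)
    "thoracic bone": 1.8,
    "lumbar bone": 3.2,
    "thoracic soft tissue": 4.3,
    "lumbar soft tissue": 10.6,
}
EXPOSURE_PER_MGY = {"tableside": 21.0, "2 m from isocenter": 0.1}  # uSv per mGy of scan dose

for protocol, scan_dose in DOSE_PER_SCAN_MGY.items():
    for location, factor in EXPOSURE_PER_MGY.items():
        print(f"{protocol:>20} @ {location:<18}: {scan_dose * factor:6.1f} uSv per scan")
```

The output makes the shielding point concrete: even the highest-dose lumbar soft-tissue protocol corresponds to roughly 0.2 mSv per scan at tableside, falling to about 1 μSv at 2 m and to essentially zero once staff step behind the shield wall.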


Medical Image Computing and Computer-Assisted Intervention | 2010

Micro-force sensing in robot assisted membrane peeling for vitreoretinal surgery

Marcin Balicki; Ali Uneri; Iulian Iordachita; James T. Handa; Peter L. Gehlbach; Russell H. Taylor

Vitreoretinal surgeons use 0.5 mm diameter instruments to manipulate delicate tissue inside the eye while applying imperceptible forces that can nonetheless damage the retina. We present a system that robotically regulates user-applied forces to the tissue in order to minimize the risk of retinal hemorrhage or tear during membrane peeling, a common task in vitreoretinal surgery. Our research platform is based on a cooperatively controlled microsurgery robot. It integrates a custom micro-force sensing surgical pick, which provides conventional surgical function and real-time force information. We report the development of a new phantom, which is used to assess robot control, force feedback methods, and our newly implemented auditory sensory substitution to specifically assist membrane peeling. Our findings show that auditory sensory substitution decreased peeling forces in all tests, and that robotic force scaling with audio feedback is the most promising aid in reducing peeling forces and task completion time.
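The abstract does not specify how force is mapped to sound; one simple way auditory sensory substitution can be realized is to bin the measured tip-force magnitude into coarse audio cues, as in the sketch below. The zones and thresholds are assumptions for illustration only, not the study's settings.

```python
# Illustrative auditory sensory substitution: bin the measured tip force into audio cues.
# Zones and thresholds are assumptions for illustration, not the study's settings.

def audio_cue(f_tip_mN: float) -> str:
    """Return a coarse auditory cue for a given tip-force magnitude (mN)."""
    if f_tip_mN < 1.0:
        return "silence"           # negligible force
    if f_tip_mN < 3.5:
        return "slow beeps"        # safe manipulation range
    if f_tip_mN < 7.0:
        return "fast beeps"        # caution: approaching risky forces
    return "continuous tone"       # warning: ease off or retract

for f in (0.5, 2.0, 5.0, 9.0):
    print(f"{f:4.1f} mN -> {audio_cue(f)}")
```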


Computer Assisted Radiology and Surgery | 2012

TREK: an integrated system architecture for intraoperative cone-beam CT-guided surgery

Ali Uneri; Sebastian Schafer; Daniel J. Mirota; Sajendra Nithiananthan; Yoshito Otake; Russell H. Taylor; Jeffrey H. Siewerdsen

Purpose: A system architecture has been developed for integration of intraoperative 3D imaging [viz., mobile C-arm cone-beam CT (CBCT)] with surgical navigation (e.g., trackers, endoscopy, and preoperative image and planning data). The goal of this paper is to describe the architecture and its handling of a broad variety of data sources in modular tool development for streamlined use of CBCT guidance in application-specific surgical scenarios. Methods: The architecture builds on two proven open-source software packages, namely the cisst package (Johns Hopkins University, Baltimore, MD) and 3D Slicer (Brigham and Women’s Hospital, Boston, MA), and combines data sources common to image-guided procedures with intraoperative 3D imaging. Integration at the software component level is achieved through language bindings to a scripting language (Python) and an object-oriented approach to abstract and simplify the use of devices with varying characteristics. The platform aims to minimize offline data processing and to expose quantitative tools that analyze and communicate factors of geometric precision online. Modular tools are defined to accomplish specific surgical tasks, demonstrated in three clinical scenarios (temporal bone, skull base, and spine surgery) that involve a progressively increased level of complexity in toolset requirements. Results: The resulting architecture (referred to as “TREK”) hosts a collection of modules developed according to application-specific surgical tasks, emphasizing streamlined integration with intraoperative CBCT. These include multi-modality image display; 3D-3D rigid and deformable registration to bring preoperative image and planning data to the most up-to-date CBCT; 3D-2D registration of planning and image data to real-time fluoroscopy; infrared, electromagnetic, and video-based trackers used individually or in hybrid arrangements; augmented overlay of image and planning data in endoscopic or in-room video; and real-time “virtual fluoroscopy” computed from GPU-accelerated digitally reconstructed radiographs (DRRs). Application in three preclinical scenarios (temporal bone, skull base, and spine surgery) demonstrates the utility of the modular, task-specific approach in progressively complex tasks. Conclusions: The design and development of a system architecture for image-guided surgery has been reported, demonstrating enhanced utilization of intraoperative CBCT in surgical applications with vastly different requirements. The system integrates C-arm CBCT with a broad variety of data sources in a modular fashion that streamlines the interface to application-specific tools, accommodates distinct workflow scenarios, and accelerates testing and translation of novel toolsets to clinical use. The modular architecture was shown to adapt to and satisfy the requirements of distinct surgical scenarios from a common code-base, leveraging software components arising from over a decade of effort within the imaging and computer-assisted interventions community.
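The abstract describes modular, Python-scripted composition of tools without giving code; the sketch below is a hypothetical illustration of that style of composition. The class and method names are invented for illustration and do not reflect TREK's, cisst's, or 3D Slicer's actual APIs.

```python
# Hypothetical sketch of modular, scriptable tool composition in the style described
# above; class and method names are invented and are not TREK, cisst, or 3D Slicer APIs.

class Module:
    """Base class for an application-specific tool (tracker, registration, display, ...)."""
    def update(self):
        pass

class TrackerModule(Module):
    def __init__(self, device: str = "infrared"):
        self.device = device
    def update(self):
        # poll the tracking device and publish the latest tool pose
        pass

class Registration3D2DModule(Module):
    def __init__(self, ct_volume=None, fluoro_stream=None):
        self.ct_volume, self.fluoro_stream = ct_volume, fluoro_stream
    def update(self):
        # register preoperative CT / planning data to the current fluoroscopic frame
        pass

# A spine-surgery scenario composes only the modules its workflow needs:
pipeline = [TrackerModule("electromagnetic"), Registration3D2DModule()]
for module in pipeline:
    module.update()
```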


Proceedings of SPIE | 2011

Penalized-Likelihood Reconstruction for Sparse Data Acquisitions with Unregistered Prior Images and Compressed Sensing Penalties

J. W. Stayman; Wojciech Zbijewski; Yoshito Otake; Ali Uneri; Sebastian Schafer; Junghoon Lee; Jerry L. Prince; Jeffrey H. Siewerdsen

This paper introduces a general reconstruction technique for using unregistered prior images within model-based penalized-likelihood reconstruction. The resulting estimator is implicitly defined as the maximizer of an objective composed of a likelihood term that enforces a fit to the data measurements and incorporates the heteroscedastic statistics of the tomographic problem, and a penalty term that penalizes differences from the prior image. Compressed sensing (p-norm) penalties are used to allow for differences between the reconstruction and the prior. Moreover, the penalty is parameterized with registration terms that are jointly optimized as part of the reconstruction to allow for mismatched images. We apply this novel approach to synthetic data using a digital phantom as well as to tomographic data derived from a cone-beam CT test bench. The test bench data include sparse acquisitions of a custom modifiable anthropomorphic lung phantom that can simulate lung nodule surveillance. Sparse reconstructions using this approach demonstrate the simultaneous incorporation of prior imagery and the necessary registration to utilize those priors.
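In symbols, the estimator described above can be written as follows. The notation is assumed for illustration and mirrors the structure of the description rather than the authors' exact symbols.

```latex
% Notation assumed for illustration; it mirrors the description above rather than
% the authors' exact symbols.
\[
  (\hat{\mu}, \hat{\lambda}) = \arg\max_{\mu,\lambda}\;
    L(\mu; y) \;-\; \beta\,\bigl\lVert \Psi\bigl(\mu - W(\lambda)\,\mu_{P}\bigr)\bigr\rVert_{p}^{p}
\]
% L(mu; y): heteroscedastic log-likelihood of the measurements y
% mu_P:     unregistered prior image, warped by W(lambda), with registration
%           parameters lambda optimized jointly with the image mu
% Psi, p:   sparsifying transform and p-norm (0 < p <= 1) of the compressed-sensing
%           penalty that tolerates genuine differences from the prior
```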


IEEE Transactions on Medical Imaging | 2013

Evaluation of a System for High-Accuracy 3D Image-Based Registration of Endoscopic Video to C-Arm Cone-Beam CT for Image-Guided Skull Base Surgery

Daniel J. Mirota; Ali Uneri; Sebastian Schafer; Sajendra Nithiananthan; Douglas D. Reh; Masaru Ishii; Gary L. Gallia; Russell H. Taylor; Gregory D. Hager; Jeffrey H. Siewerdsen

The safety of endoscopic skull base surgery can be enhanced by accurate navigation in preoperative computed tomography (CT) or, more recently, intraoperative cone-beam CT (CBCT). The ability to register real-time endoscopic video with CBCT offers an additional advantage by rendering information directly within the visual scene to account for intraoperative anatomical change. However, tracker localization error (~1-2 mm) limits the accuracy with which video and tomographic images can be registered. This paper reports the first implementation of image-based video-CBCT registration, conducts a detailed quantitation of the dependence of registration accuracy on system parameters, and demonstrates improvement in registration accuracy achieved by the image-based approach. Performance was evaluated as a function of parameters intrinsic to the image-based approach, including system geometry, CBCT image quality, and computational runtime. Overall system performance was evaluated in a cadaver study simulating transsphenoidal skull base tumor excision. Results demonstrated significant improvement (p < 0.001) in registration accuracy with a mean reprojection distance error of 1.28 mm for the image-based approach versus 1.82 mm for the conventional tracker-based method. Image-based registration was highly robust against CBCT image quality factors of noise and resolution, permitting integration with low-dose intraoperative CBCT.


Physics in Medicine and Biology | 2016

3D-2D image registration for target localization in spine surgery: Investigation of similarity metrics providing robustness to content mismatch

T. De Silva; Ali Uneri; M. D. Ketcha; S. Reaungamornrat; Gerhard Kleinszig; Sebastian Vogt; Nafi Aygun; Sheng Fu Lo; Jean Paul Wolinsky; Jeffrey H. Siewerdsen

In image-guided spine surgery, robust three-dimensional to two-dimensional (3D-2D) registration of preoperative computed tomography (CT) and intraoperative radiographs can be challenged by the image content mismatch associated with the presence of surgical instrumentation and implants as well as soft-tissue resection or deformation. This work investigates image similarity metrics in 3D-2D registration offering improved robustness against mismatch, thereby improving performance and reducing or eliminating the need for manual masking. The performance of four gradient-based image similarity metrics (gradient information (GI), gradient correlation (GC), gradient information with linear scaling (GS), and gradient orientation (GO)) with a multi-start optimization strategy was evaluated in an institutional review board-approved retrospective clinical study using 51 preoperative CT images and 115 intraoperative mobile radiographs. Registrations were tested with and without polygonal masks as a function of the number of multistarts employed during optimization. Registration accuracy was evaluated in terms of the projection distance error (PDE) and assessment of failure modes (PDE > 30 mm) that could impede reliable vertebral level localization. With manual polygonal masking and 200 multistarts, the GC and GO metrics exhibited robust performance with 0% gross failures and median PDE < 6.4 mm (±4.4 mm interquartile range (IQR)) and a median runtime of 84 s (plus upwards of 1-2 min for manual masking). Excluding manual polygonal masks and decreasing the number of multistarts to 50 caused the GC-based registration to fail at a rate of >14%; however, GO maintained robustness with a 0% gross failure rate. Overall, the GI, GC, and GS metrics were susceptible to registration errors associated with content mismatch, but GO provided robust registration (median PDE = 5.5 mm, 2.6 mm IQR) without manual masking and with an improved runtime (29.3 s). The GO metric improved the registration accuracy and robustness in the presence of strong image content mismatch. This capability could offer valuable assistance and decision support in spine level localization in a manner consistent with clinical workflow.
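For reference, a gradient-correlation-style similarity can be computed as the average normalized cross-correlation of the gradient images. The sketch below follows that common formulation and is not necessarily the exact GC variant evaluated in the study.

```python
import numpy as np

# Gradient-correlation (GC) style similarity between a radiograph and a DRR:
# the average normalized cross-correlation of the row- and column-gradient images.
# A common formulation, not necessarily the exact variant evaluated in the study.

def ncc(a: np.ndarray, b: np.ndarray) -> float:
    a = a - a.mean()
    b = b - b.mean()
    denom = np.sqrt((a * a).sum() * (b * b).sum())
    return float((a * b).sum() / denom) if denom > 0 else 0.0

def gradient_correlation(radiograph: np.ndarray, drr: np.ndarray) -> float:
    gr_r, gc_r = np.gradient(radiograph.astype(float))  # gradients along rows, columns
    gr_d, gc_d = np.gradient(drr.astype(float))
    return 0.5 * (ncc(gr_r, gr_d) + ncc(gc_r, gc_d))
```

Because such metrics compare local gradients rather than raw intensities, bulk intensity changes from added content (e.g., instrumentation) perturb them less than intensity-based metrics, which is the robustness property examined above.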


Spine | 2015

Automatic Localization of Target Vertebrae in Spine Surgery: Clinical Evaluation of the LevelCheck Registration Algorithm

Sheng Fu L Lo; Yoshito Otake; Varun Puvanesarajah; Adam S. Wang; Ali Uneri; Tharindu De Silva; Sebastian Vogt; Gerhard Kleinszig; Benjamin D. Elder; C. Rory Goodwin; Thomas A. Kosztowski; Jason Liauw; Mari L. Groves; Ali Bydon; Daniel M. Sciubba; Timothy F. Witham; Jean Paul Wolinsky; Nafi Aygun; Ziya L. Gokaslan; Jeffrey H. Siewerdsen

Study Design. A 3-dimensional-2-dimensional (3D-2D) image registration algorithm, “LevelCheck,” was used to automatically label vertebrae in intraoperative mobile radiographs obtained during spine surgery. Accuracy, computation time, and potential failure modes were evaluated in a retrospective study of 20 patients. Objective. To measure the performance of the LevelCheck algorithm using clinical images acquired during spine surgery. Summary of Background Data. In spine surgery, the potential for wrong-level surgery is significant due to the difficulty of localizing target vertebrae based solely on visual impression, palpation, and fluoroscopy. To remedy this difficulty and reduce the risk of wrong-level surgery, our team introduced a program (dubbed LevelCheck) to automatically localize target vertebrae in mobile radiographs using robust 3D-2D image registration to the preoperative computed tomographic (CT) scan. Methods. Twenty consecutive patients undergoing thoracolumbar spine surgery, for whom both a preoperative CT scan and an intraoperative mobile radiograph were available, were retrospectively analyzed. A board-certified neuroradiologist determined the “true” vertebra levels in each radiograph. Registration of the preoperative CT scan to the intraoperative radiograph was calculated via LevelCheck, and projection distance errors were analyzed. Five hundred random initializations were performed for each patient, and algorithm settings (viz., the number of robust multistarts, ranging from 50 to 200) were varied to evaluate the trade-off between registration error and computation time. Failure mode analysis was performed by individually analyzing unsuccessful registrations (>5 mm distance error) observed with 50 multistarts. Results. At 200 robust multistarts (computation time of ∼26 s), the registration accuracy was 100% across all 10,000 trials. As the number of multistarts (and computation time) decreased, the registration remained fairly robust, down to 99.3% registration accuracy at 50 multistarts (computation time ∼7 s). Conclusion. The LevelCheck algorithm correctly identified target vertebrae in intraoperative mobile radiographs of the thoracolumbar spine, demonstrating acceptable computation time, compatibility with routinely obtained preoperative CT scans, and warranting investigation in prospective studies. Level of Evidence: N/A


Proceedings of SPIE | 2011

High-accuracy 3D image-based registration of endoscopic video to C-arm cone-beam CT for image-guided skull base surgery

Daniel J. Mirota; Ali Uneri; Sebastian Schafer; Sajendra Nithiananthan; Douglas D. Reh; Gary L. Gallia; Russell H. Taylor; Gregory D. Hager; Jeffrey H. Siewerdsen

Registration of endoscopic video to preoperative CT facilitates high-precision surgery of the head, neck, and skull base. Conventional video-CT registration is limited by the accuracy of the tracker and does not use the underlying video or CT image data. A new image-based video registration method has been developed to overcome the limitations of conventional tracker-based registration. This method adds to a navigation system based on intraoperative C-arm cone-beam CT (CBCT), in turn providing high-accuracy registration of video to the surgical scene. The resulting registration enables visualization of the CBCT and planning data within the endoscopic video. The system incorporates a mobile C-arm integrated with an optical tracking system, video endoscopy, deformable registration of preoperative CT with intraoperative CBCT, and 3D visualization. As in the tracker-based approach, the endoscope is first localized with the optical tracking system, followed by a direct 3D image-based registration of the video to the CBCT. In this way, the system achieves video-CBCT registration that is both fast and accurate. Application in skull-base surgery demonstrates overlay of critical structures (e.g., carotid arteries) and surgical targets with sub-mm accuracy. Phantom and cadaver experiments show consistent improvement of target registration error (TRE) in video overlay over conventional tracker-based registration, e.g., 0.92 mm versus 1.82 mm for image-based and tracker-based registration, respectively. The proposed method represents a two-fold advance: first, through registration of video to up-to-date intraoperative CBCT, and second, through direct 3D image-based video-CBCT registration, which together provide more confident visualization of target and normal tissues within up-to-date images.


Physics in Medicine and Biology | 2014

3D–2D registration for surgical guidance: effect of projection view angles on registration accuracy

Ali Uneri; Yoshito Otake; Adam S. Wang; Gerhard Kleinszig; Sebastian Vogt; A. J. Khanna; Jeffrey H. Siewerdsen

An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and the covariance matrix adaptation evolution strategy (CMA-ES) to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and a cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ∼0° to 180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ∼ 10°-20° achieved TRE < 2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers, and manual registration.
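The sketch below shows what such an optimization loop can look like when a public CMA-ES library is used to search the 6-DOF pose. The DRR rendering and gradient-information similarity are supplied by the caller, and the library choice and function names are assumptions rather than the authors' code.

```python
import numpy as np
import cma  # pip install cma; use of this public CMA-ES library is an assumption

# Sketch of the 3D-2D registration optimization loop: a 6-DOF pose
# (3 rotations, 3 translations) is searched so that DRRs of the CT best match
# the acquired projection view(s). The DRR renderer and gradient-information
# similarity are supplied by the caller as a single callable.

def register(similarity_of_pose, pose0=None, sigma0=5.0):
    """similarity_of_pose: maps a 6-vector pose to a similarity value
    (higher is better), e.g. gradient information summed over one or two views."""
    x0 = np.zeros(6) if pose0 is None else np.asarray(pose0, dtype=float)
    es = cma.CMAEvolutionStrategy(x0, sigma0)
    es.optimize(lambda p: -similarity_of_pose(p))  # CMA-ES minimizes, so negate
    return es.result.xbest                         # best pose found

# Toy usage with a stand-in similarity (peak at a known pose) to show the call pattern:
target = np.array([2.0, 0.0, 1.0, 5.0, -3.0, 0.0])
best_pose = register(lambda p: -float(np.sum((p - target) ** 2)))
```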

Collaboration


Top co-authors of Ali Uneri and their affiliations.

Yoshito Otake

Nara Institute of Science and Technology

M. D. Ketcha

Johns Hopkins University

A. J. Khanna

Johns Hopkins University
