Publications


Featured research published by Yoshito Otake.


IEEE Transactions on Medical Imaging | 2012

Intraoperative Image-based Multiview 2D/3D Registration for Image-Guided Orthopaedic Surgery: Incorporation of Fiducial-Based C-Arm Tracking and GPU-Acceleration

Yoshito Otake; Mehran Armand; Robert S. Armiger; Michael D. M. Kutzer; Ehsan Basafa; Peter Kazanzides; Russell H. Taylor

Intraoperative patient registration may significantly affect the outcome of image-guided surgery (IGS). Image-based registration approaches have several advantages over the currently dominant point-based direct contact methods and are used in some industry solutions in image-guided radiation therapy with fixed X-ray gantries. However, technical challenges including geometric calibration and computational cost have precluded their use with mobile C-arms for IGS. We propose a 2D/3D registration framework for intraoperative patient registration using a conventional mobile X-ray imager combining fiducial-based C-arm tracking and graphics processing unit (GPU)-acceleration. The two-stage framework 1) acquires X-ray images and estimates relative pose between the images using a custom-made in-image fiducial, and 2) estimates the patient pose using intensity-based 2D/3D registration. Experimental validation was conducted using a publicly available gold-standard dataset, a plastic bone phantom, and cadaveric specimens. The mean target registration error (mTRE) was 0.34±0.04 mm (success rate: 100%, registration time: 14.2 s) for the phantom with two images 90° apart, and 0.99±0.41 mm (81%, 16.3 s) for the cadaveric specimen with images 58.5° apart. The experimental results showed the feasibility of the proposed registration framework as a practical alternative for IGS routines.
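
For readers unfamiliar with intensity-based 2D/3D registration, the following is a minimal sketch of the core idea: generate a digitally reconstructed radiograph (DRR) from the CT volume at a candidate pose, score its similarity to the measured X-ray image, and optimize the pose. It assumes a parallel-beam projection, a single rotational degree of freedom, and normalized cross-correlation; the paper's framework uses cone-beam DRRs, six degrees of freedom, a fiducial-based relative-pose estimate, and GPU acceleration.

```python
# Minimal sketch of intensity-based 2D/3D registration: parallel-beam DRR,
# one rotational degree of freedom, NCC similarity. Illustrative only; the
# paper uses cone-beam geometry, 6-DoF poses, and GPU acceleration.
import numpy as np
from scipy import ndimage, optimize

def drr(volume, angle_deg):
    """Parallel-projection DRR: rotate the volume, integrate along one axis."""
    rotated = ndimage.rotate(volume, angle_deg, axes=(0, 2),
                             reshape=False, order=1)
    return rotated.sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two images."""
    a = (a - a.mean()) / (a.std() + 1e-12)
    b = (b - b.mean()) / (b.std() + 1e-12)
    return float((a * b).mean())

def register(volume, fixed_image, bounds=(-30.0, 30.0)):
    """Find the rotation whose DRR best matches the measured image."""
    cost = lambda ang: -ncc(drr(volume, ang), fixed_image)
    return optimize.minimize_scalar(cost, bounds=bounds, method="bounded").x

# Toy example: recover a known 12 degree rotation of a smooth random volume.
rng = np.random.default_rng(0)
vol = ndimage.gaussian_filter(rng.random((48, 48, 48)), 3)
print(f"estimated angle: {register(vol, drr(vol, 12.0)):.2f} deg")
```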


Physics in Medicine and Biology | 2012

Automatic localization of vertebral levels in x-ray fluoroscopy using 3D-2D registration: a tool to reduce wrong-site surgery.

Yoshito Otake; Sebastian Schafer; J. W. Stayman; Wojciech Zbijewski; Gerhard Kleinszig; Rainer Graumann; A. J. Khanna; Jeffrey H. Siewerdsen

Surgical targeting of the incorrect vertebral level (wrong-level surgery) is among the more common wrong-site surgical errors, attributed primarily to the lack of uniquely identifiable radiographic landmarks in the mid-thoracic spine. The conventional localization method, manual counting of vertebral bodies under fluoroscopy, is prone to human error and adds time and radiation dose. We propose an image registration and visualization system (referred to as LevelCheck) for decision support in spine surgery by automatically labeling vertebral levels in fluoroscopy using a GPU-accelerated, intensity-based 3D-2D (namely CT-to-fluoroscopy) registration. A gradient information (GI) similarity metric and a CMA-ES optimizer were chosen due to their robustness and inherent suitability for parallelization. Simulation studies involved ten patient CT datasets from which 50 000 simulated fluoroscopic images were generated from C-arm poses selected to approximate the C-arm operator and positioning variability. Physical experiments used an anthropomorphic chest phantom imaged under real fluoroscopy. The registration accuracy was evaluated as the mean projection distance (mPD) between the estimated and true center of vertebral levels. Trials were defined as successful if the estimated position was within the projection of the vertebral body (namely mPD <5 mm). Simulation studies showed a success rate of 99.998% (1 failure in 50 000 trials) and computation time of 4.7 s on a midrange GPU. Analysis of failure modes identified cases of false local optima in the search space arising from longitudinal periodicity in vertebral structures. Physical experiments demonstrated the robustness of the algorithm against quantum noise and x-ray scatter. The ability to automatically localize target anatomy in fluoroscopy in near-real-time could be valuable in reducing the occurrence of wrong-site surgery while helping to reduce radiation exposure. The method is applicable beyond the specific case of vertebral labeling, since any structure defined in pre-operative (or intra-operative) CT or cone-beam CT can be automatically registered to the fluoroscopic scene.
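
The gradient information (GI) similarity metric rewards pixels where the two images have strong gradients pointing in the same (or exactly opposite) direction. Below is a simplified 2D sketch of such a metric, assuming the weighting w(α) = (cos 2α + 1)/2 in the spirit of Pluim et al.; the paper's GPU implementation and exact formulation may differ.

```python
import numpy as np

def gradient_information(fixed, moving, eps=1e-12):
    """Simplified gradient information similarity: large where both images
    have strong, (anti)parallel gradients. Illustrative sketch only."""
    gfy, gfx = np.gradient(fixed.astype(float))
    gmy, gmx = np.gradient(moving.astype(float))
    mag_f = np.hypot(gfx, gfy)
    mag_m = np.hypot(gmx, gmy)
    cos_a = (gfx * gmx + gfy * gmy) / (mag_f * mag_m + eps)
    w = cos_a ** 2                   # equals (cos(2*alpha) + 1) / 2
    return float(np.sum(w * np.minimum(mag_f, mag_m)))

# In the paper, a 6-DoF pose is optimized under this metric with CMA-ES;
# in Python one could use the `cma` package, e.g. (drr() is hypothetical):
#   cma.fmin(lambda pose: -gradient_information(fluoro, drr(ct, pose)),
#            x0, sigma0)
```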


Physics in Medicine and Biology | 2014

Soft-tissue imaging with C-arm cone-beam CT using statistical reconstruction.

Adam S. Wang; J. Webster Stayman; Yoshito Otake; Gerhard Kleinszig; Sebastian Vogt; Gary L. Gallia; A. Jay Khanna; Jeffrey H. Siewerdsen

This work assesses the potential of statistical image reconstruction methods such as penalized-likelihood (PL) estimation to improve soft-tissue visualization in intraoperative C-arm cone-beam CT (CBCT) relative to conventional filtered backprojection (FBP), using a comparison matched for soft-tissue imaging performance. A prototype mobile C-arm was used to scan anthropomorphic head and abdomen phantoms as well as a cadaveric torso at doses substantially lower than typical values in diagnostic CT, and the effects of dose reduction via tube current reduction and sparse sampling were also compared. Matched spatial resolution between PL and FBP was determined by the edge spread function of low-contrast (∼40-80 HU) spheres in the phantoms, which were representative of soft-tissue imaging tasks. PL using the non-quadratic Huber penalty was found to substantially reduce noise relative to FBP, especially at lower spatial resolution, where PL provides a contrast-to-noise ratio increase of up to 1.4-2.2× over FBP at 50% dose reduction across all objects. Comparison of sampling strategies indicates that soft-tissue imaging benefits from fully sampled acquisitions at doses above ∼1.7 mGy and from 50% sparsity at doses below ∼1.0 mGy. Therefore, an appropriate sampling strategy, along with the improved low-contrast visualization offered by statistical reconstruction, demonstrates the potential for extending intraoperative C-arm CBCT to soft-tissue interventions in neurosurgery as well as thoracic and abdominal surgeries by overcoming conventional tradeoffs in noise, spatial resolution, and dose.
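
The Huber penalty named above is quadratic for small differences and linear for large ones, which suppresses noise while preserving edges. The sketch below shows the penalty and a toy 1D penalized-likelihood analogue (identity forward model, Gaussian noise); it is illustrative only and not the paper's reconstruction algorithm.

```python
import numpy as np

def huber(t, delta):
    """Huber penalty: quadratic near zero, linear in the tails."""
    a = np.abs(t)
    return np.where(a <= delta, 0.5 * t ** 2, delta * a - 0.5 * delta ** 2)

def huber_grad(t, delta):
    """Derivative of the Huber penalty (a clipped identity)."""
    return np.clip(t, -delta, delta)

def pl_denoise_1d(y, beta=2.0, delta=0.1, iters=300, step=0.1):
    """Toy PL analogue: Gaussian likelihood (identity forward model) plus a
    Huber roughness penalty on neighboring differences, via gradient descent."""
    x = y.copy()
    for _ in range(iters):
        d = huber_grad(np.diff(x), delta)
        grad_pen = np.zeros_like(x)
        grad_pen[:-1] -= d        # d/dx_i of huber(x_{i+1} - x_i)
        grad_pen[1:] += d         # d/dx_{i+1} of the same term
        x -= step * ((x - y) + beta * grad_pen)
    return x

# Denoise a noisy step: noise is suppressed while the edge stays sharp.
rng = np.random.default_rng(0)
y = np.concatenate([np.zeros(50), np.ones(50)]) + 0.1 * rng.normal(size=100)
print("noise std before/after:", np.std(y[:40]).round(3),
      np.std(pl_denoise_1d(y)[:40]).round(3))
```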


IEEE Transactions on Medical Imaging | 2012

Model-Based Tomographic Reconstruction of Objects Containing Known Components

J. W. Stayman; Yoshito Otake; Jerry L. Prince; A. J. Khanna; Jeffrey H. Siewerdsen

The likelihood of finding manufactured components (surgical tools, implants, etc.) within a tomographic field-of-view has been steadily increasing. One reason is the aging population and proliferation of prosthetic devices, such that more people undergoing diagnostic imaging have existing implants, particularly hip and knee implants. Another reason is that use of intraoperative imaging (e.g., cone-beam CT) for surgical guidance is increasing, wherein surgical tools and devices such as screws and plates are placed within or near the target anatomy. When these components contain metal, the reconstructed volumes are likely to contain severe artifacts that adversely affect the image quality in tissues both near and far from the component. Because physical models of such components exist, there is a unique opportunity to integrate this knowledge into the reconstruction algorithm to reduce these artifacts. We present a model-based penalized-likelihood estimation approach that explicitly incorporates known information about component geometry and composition. The approach uses an alternating maximization method that jointly estimates the anatomy and the position and pose of each of the known components. We demonstrate that the proposed method can produce nearly artifact-free images even near the boundary of a metal implant in simulated vertebral pedicle screw reconstructions and even under conditions of substantial photon starvation. The simultaneous estimation of device pose also provides quantitative information on device placement that could be valuable to quality assurance and verification of treatment delivery.
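
The alternating scheme, estimating the anatomy with the component pose held fixed and then re-estimating the pose, can be illustrated with a deliberately tiny 1D analogue: measurements are an unknown smooth background plus a known template at an unknown shift. This is a sketch of the joint-estimation idea only, not the paper's penalized-likelihood algorithm.

```python
import numpy as np

# Toy 1D analogue of alternating anatomy/pose estimation: y = anatomy +
# known template at an unknown shift + noise. Illustrative only.
rng = np.random.default_rng(1)
n = 200
anatomy = np.sin(np.linspace(0, 4 * np.pi, n))       # unknown background
template = np.zeros(n); template[:15] = 5.0          # known "component"
true_shift = 120
y = anatomy + np.roll(template, true_shift) + 0.05 * rng.normal(size=n)

# Crude initial anatomy estimate: heavy smoothing attenuates the component.
anatomy_hat = np.convolve(y, np.ones(51) / 51, mode="same")
for _ in range(5):
    # (1) pose update: best-correlating shift of the component residual
    comp_resid = y - anatomy_hat
    shift = int(np.argmax([np.dot(comp_resid, np.roll(template, s))
                           for s in range(n)]))
    # (2) anatomy update: smooth the measurement with the component removed
    anatomy_hat = np.convolve(y - np.roll(template, shift),
                              np.ones(9) / 9, mode="same")
print("estimated shift:", shift, "(true:", true_shift, ")")
```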


International Journal of Computer Assisted Radiology and Surgery | 2012

TREK: an integrated system architecture for intraoperative cone-beam CT-guided surgery.

Ali Uneri; Sebastian Schafer; Daniel J. Mirota; Sajendra Nithiananthan; Yoshito Otake; Russell H. Taylor; Jeffrey H. Siewerdsen

Purpose: A system architecture has been developed for integration of intraoperative 3D imaging [viz., mobile C-arm cone-beam CT (CBCT)] with surgical navigation (e.g., trackers, endoscopy, and preoperative image and planning data). The goal of this paper is to describe the architecture and its handling of a broad variety of data sources in modular tool development for streamlined use of CBCT guidance in application-specific surgical scenarios.

Methods: The architecture builds on two proven open-source software packages, namely the cisst package (Johns Hopkins University, Baltimore, MD) and 3D Slicer (Brigham and Women’s Hospital, Boston, MA), and combines data sources common to image-guided procedures with intraoperative 3D imaging. Integration at the software component level is achieved through language bindings to a scripting language (Python) and an object-oriented approach to abstract and simplify the use of devices with varying characteristics. The platform aims to minimize offline data processing and to expose quantitative tools that analyze and communicate factors of geometric precision online. Modular tools are defined to accomplish specific surgical tasks, demonstrated in three clinical scenarios (temporal bone, skull base, and spine surgery) that involve a progressively increased level of complexity in toolset requirements.

Results: The resulting architecture (referred to as “TREK”) hosts a collection of modules developed according to application-specific surgical tasks, emphasizing streamlined integration with intraoperative CBCT. These include multi-modality image display; 3D-3D rigid and deformable registration to bring preoperative image and planning data to the most up-to-date CBCT; 3D-2D registration of planning and image data to real-time fluoroscopy; infrared, electromagnetic, and video-based trackers used individually or in hybrid arrangements; augmented overlay of image and planning data in endoscopic or in-room video; and real-time “virtual fluoroscopy” computed from GPU-accelerated digitally reconstructed radiographs (DRRs). Application in three preclinical scenarios (temporal bone, skull base, and spine surgery) demonstrates the utility of the modular, task-specific approach in progressively complex tasks.

Conclusions: The design and development of a system architecture for image-guided surgery has been reported, demonstrating enhanced utilization of intraoperative CBCT in surgical applications with vastly different requirements. The system integrates C-arm CBCT with a broad variety of data sources in a modular fashion that streamlines the interface to application-specific tools, accommodates distinct workflow scenarios, and accelerates testing and translation of novel toolsets to clinical use. The modular architecture was shown to adapt to and satisfy the requirements of distinct surgical scenarios from a common code-base, leveraging software components arising from over a decade of effort within the imaging and computer-assisted interventions community.
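
The object-oriented abstraction described above, wrapping devices with very different characteristics behind a uniform interface so that application modules can consume them interchangeably, might look roughly like the following in Python. All class and method names are invented for illustration; this is not TREK's or cisst's actual API.

```python
# Hypothetical illustration of a uniform device abstraction; names and
# interfaces are invented, not TREK's API.
from abc import ABC, abstractmethod

class DataSource(ABC):
    """Uniform interface over trackers, imagers, video streams, etc."""
    @abstractmethod
    def connect(self) -> None: ...

    @abstractmethod
    def poll(self) -> dict:
        """Return the latest sample as {name: value} pairs."""

class InfraredTracker(DataSource):
    def connect(self) -> None:
        print("tracker connected")

    def poll(self) -> dict:
        return {"tool_pose": [0.0] * 6}    # stub 6-DoF pose

class CArmImager(DataSource):
    def connect(self) -> None:
        print("C-arm connected")

    def poll(self) -> dict:
        return {"fluoro_frame": None}      # stub image handle

def run_module(sources: list[DataSource]) -> None:
    """An application-specific module sees only the DataSource interface."""
    for s in sources:
        s.connect()
        print(s.poll())

run_module([InfraredTracker(), CArmImager()])
```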


Proceedings of SPIE | 2011

Penalized-Likelihood Reconstruction for Sparse Data Acquisitions with Unregistered Prior Images and Compressed Sensing Penalties

J. W. Stayman; Wojciech Zbijewski; Yoshito Otake; Ali Uneri; Sebastian Schafer; Junghoon Lee; Jerry L. Prince; Jeffrey H. Siewerdsen

This paper introduces a general reconstruction technique for using unregistered prior images within model-based penalized-likelihood reconstruction. The resulting estimator is implicitly defined as the maximizer of an objective composed of a likelihood term that enforces a fit to data measurements and that incorporates the heteroscedastic statistics of the tomographic problem; and a penalty term that penalizes differences from a prior image. Compressed sensing (p-norm) penalties are used to allow for differences between the reconstruction and the prior. Moreover, the penalty is parameterized with registration terms that are jointly optimized as part of the reconstruction to allow for mismatched images. We apply this novel approach to synthetic data using a digital phantom as well as tomographic data derived from a cone-beam CT test bench. The test bench data includes sparse data acquisitions of a custom modifiable anthropomorphic lung phantom that can simulate lung nodule surveillance. Sparse reconstructions using this approach demonstrate the simultaneous incorporation of prior imagery and the necessary registration to utilize those priors.
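
A deliberately small 1D analogue of the idea, an L1 (p = 1) penalty toward a prior whose misalignment is estimated jointly with the reconstruction, is sketched below under an identity forward model. The paper's estimator uses a full tomographic likelihood and continuous registration parameters; this is illustrative only.

```python
import numpy as np

def soft(v, tau):
    """Soft-thresholding, the exact minimizer arising from the L1 penalty."""
    return np.sign(v) * np.maximum(np.abs(v) - tau, 0.0)

def reconstruct(y, prior, beta=0.8, n_iter=5):
    """Alternate between the reconstruction (closed form for an identity
    forward model) and an integer shift registering the prior."""
    shift = 0
    for _ in range(n_iter):
        p = np.roll(prior, shift)
        x = p + soft(y - p, beta / 2)                  # exact minimizer in x
        shift = int(np.argmin([np.sum((x - y) ** 2)
                               + beta * np.sum(np.abs(x - np.roll(prior, s)))
                               for s in range(len(y))]))
    p = np.roll(prior, shift)
    return p + soft(y - p, beta / 2), shift

rng = np.random.default_rng(0)
prior = np.zeros(100); prior[40:60] = 1.0        # prior image
truth = np.roll(prior, 7); truth[70:74] = 0.5    # shifted prior + new feature
y = truth + 0.05 * rng.normal(size=100)
x, s = reconstruct(y, prior)
print("estimated prior shift:", s)               # expect 7
```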


Journal of Craniofacial Surgery | 2014

Preliminary development of a workstation for craniomaxillofacial surgical procedures: introducing a computer-assisted planning and execution system.

Chad R. Gordon; Ryan J. Murphy; Devin Coon; Ehsan Basafa; Yoshito Otake; Mohammed Al Rakan; Erin M. Rada; Srinivas Susarla; Edward W. Swanson; Elliot K. Fishman; Gabriel F. Santiago; Gerald Brandacher; Peter Liacouras; Gerald T. Grant; Mehran Armand

Introduction: Facial transplantation represents one of the most complicated scenarios in craniofacial surgery because of skeletal, aesthetic, and dental discrepancies between donor and recipient. However, standard off-the-shelf vendor computer-assisted surgery systems may not provide custom features to mitigate the increased complexity of this particular procedure. We propose to develop a computer-assisted surgery solution customized for preoperative planning, intraoperative navigation including cutting guides, and dynamic, instantaneous feedback of cephalometric measurements/angles as needed for facial transplantation and other related craniomaxillofacial procedures.

Methods: We developed the Computer-Assisted Planning and Execution (CAPE) workstation to assist with planning and execution of facial transplantation. Preoperative maxillofacial computed tomography (CT) scans were obtained on 4 size-mismatched miniature swine encompassing 2 live face-jaw-teeth transplants. The system was tested in a laboratory setting using plastic models of mismatched swine, after which the system was used in 2 live swine transplants. Postoperative CT imaging was obtained and compared with the preoperative plan and intraoperative measures from the CAPE workstation for both transplants.

Results: Plastic model tests familiarized the team with the CAPE workstation and identified several defects in the workflow. Live swine surgeries demonstrated utility of the CAPE system in the operating room, showing submillimeter registration error of 0.6 ± 0.24 mm and promising qualitative comparisons between intraoperative data and postoperative CT imaging.

Conclusions: The initial development of the CAPE workstation demonstrated that integration of computer planning and intraoperative navigation for facial transplantation is possible with submillimeter accuracy. This approach can potentially improve preoperative planning, allowing ideal donor-recipient matching despite significant size mismatch, and accurate surgical execution for numerous types of craniofacial and orthognathic surgical procedures.


Medical Physics | 2012

Robust methods for automatic image-to-world registration in cone-beam CT interventional guidance

Hao Dang; Yoshito Otake; Sebastian Schafer; J. W. Stayman; Gerhard Kleinszig; Jeffrey H. Siewerdsen

Purpose: Real-time surgical navigation relies on accurate image-to-world registration to align the coordinate systems of the image and patient. Conventional manual registration can present a workflow bottleneck and is prone to manual error and intraoperator variability. This work reports alternative means of automatic image-to-world registration, each method involving an automatic registration marker (ARM) used in conjunction with C-arm cone-beam CT (CBCT). The first involves a Known-Model registration method in which the ARM is a predefined tool, and the second is a Free-Form method in which the ARM is freely configurable.

Methods: Studies were performed using a prototype C-arm for CBCT and a surgical tracking system. A simple ARM was designed with markers comprising a tungsten sphere within infrared reflectors to permit detection of markers in both x-ray projections and by an infrared tracker. The Known-Model method exercised a predefined specification of the ARM in combination with 3D-2D registration to estimate the transformation that yields the optimal match between forward projection of the ARM and the measured projection images. The Free-Form method localizes markers individually in projection data by a robust Hough transform approach extended from previous work, backprojected to 3D image coordinates based on C-arm geometric calibration. Image-domain point sets were transformed to world coordinates by rigid-body point-based registration. The robustness and registration accuracy of each method was tested in comparison to manual registration across a range of body sites (head, thorax, and abdomen) of interest in CBCT-guided surgery, including cases with interventional tools in the radiographic scene.

Results: The automatic methods exhibited similar target registration error (TRE) and were comparable or superior to manual registration for placement of the ARM within ∼200 mm of C-arm isocenter. Marker localization in projection data was robust across all anatomical sites, including challenging scenarios involving the presence of interventional tools. The reprojection error of marker localization was independent of the distance of the ARM from isocenter, and the overall TRE was dominated by the configuration of individual fiducials and distance from the target as predicted by theory. The median TRE increased with greater ARM-to-isocenter distance (e.g., for the Free-Form method, TRE increasing from 0.78 mm to 2.04 mm at distances of ∼75 mm and 370 mm, respectively). The median TRE within ∼200 mm distance was consistently lower than that of the manual method (TRE = 0.82 mm). Registration performance was independent of anatomical site (head, thorax, and abdomen). The Free-Form method demonstrated a statistically significant improvement (p = 0.0044) in reproducibility compared to manual registration (0.22 mm versus 0.30 mm, respectively).

Conclusions: Automatic image-to-world registration methods demonstrate the potential for improved accuracy, reproducibility, and workflow in CBCT-guided procedures. A Free-Form method was shown to exhibit robustness against anatomical site, with comparable or improved TRE compared to manual registration. It was also comparable or superior in performance to a Known-Model method in which the ARM configuration is specified as a predefined tool, thereby allowing configuration of fiducials on the fly or attachment to the patient.
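
The rigid-body point-based registration step, computing the transform that maps localized marker positions from image to world coordinates, has a standard closed-form least-squares solution (Kabsch/Procrustes). Below is a minimal sketch with a synthetic check; the marker localization and C-arm calibration described above are assumed to have already produced the corresponding point sets.

```python
import numpy as np

def rigid_point_registration(P, Q):
    """Least-squares rigid transform (R, t) mapping point set P onto Q
    (Kabsch/Procrustes). P, Q: (N, 3) corresponding points."""
    cp, cq = P.mean(axis=0), Q.mean(axis=0)
    H = (P - cp).T @ (Q - cq)                     # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    D = np.diag([1.0, 1.0, np.sign(np.linalg.det(Vt.T @ U.T))])
    R = Vt.T @ D @ U.T                            # proper rotation
    t = cq - R @ cp
    return R, t

# Toy check: recover a known rotation/translation of noisy fiducials, then
# report target registration error (TRE) at a held-out target point.
rng = np.random.default_rng(0)
P = rng.random((6, 3)) * 100                      # fiducials (image frame)
a = np.deg2rad(20)
R_true = np.array([[np.cos(a), -np.sin(a), 0],
                   [np.sin(a),  np.cos(a), 0],
                   [0, 0, 1]])
t_true = np.array([5.0, -3.0, 10.0])
Q = P @ R_true.T + t_true + rng.normal(0, 0.1, P.shape)
R, t = rigid_point_registration(P, Q)
target = np.array([50.0, 50.0, 50.0])
tre = np.linalg.norm((R @ target + t) - (R_true @ target + t_true))
print(f"TRE at target: {tre:.3f} mm")
```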


Spine | 2015

Automatic Localization of Target Vertebrae in Spine Surgery: Clinical Evaluation of the LevelCheck Registration Algorithm

Sheng Fu L Lo; Yoshito Otake; Varun Puvanesarajah; Adam S. Wang; Ali Uneri; Tharindu De Silva; Sebastian Vogt; Gerhard Kleinszig; Benjamin D. Elder; C. Rory Goodwin; Thomas A. Kosztowski; Jason Liauw; Mari L. Groves; Ali Bydon; Daniel M. Sciubba; Timothy F. Witham; Jean Paul Wolinsky; Nafi Aygun; Ziya L. Gokaslan; Jeffrey H. Siewerdsen

Study Design: A 3-dimensional-2-dimensional (3D-2D) image registration algorithm, “LevelCheck,” was used to automatically label vertebrae in intraoperative mobile radiographs obtained during spine surgery. Accuracy, computation time, and potential failure modes were evaluated in a retrospective study of 20 patients.

Objective: To measure the performance of the LevelCheck algorithm using clinical images acquired during spine surgery.

Summary of Background Data: In spine surgery, the potential for wrong-level surgery is significant due to the difficulty of localizing target vertebrae based solely on visual impression, palpation, and fluoroscopy. To remedy this difficulty and reduce the risk of wrong-level surgery, our team introduced a program (dubbed LevelCheck) to automatically localize target vertebrae in mobile radiographs using robust 3D-2D image registration to the preoperative computed tomographic (CT) scan.

Methods: Twenty consecutive patients undergoing thoracolumbar spine surgery, for whom both a preoperative CT scan and an intraoperative mobile radiograph were available, were retrospectively analyzed. A board-certified neuroradiologist determined the “true” vertebral levels in each radiograph. Registration of the preoperative CT scan to the intraoperative radiograph was calculated via LevelCheck, and projection distance errors were analyzed. Five hundred random initializations were performed for each patient, and algorithm settings (viz, the number of robust multistarts, ranging 50–200) were varied to evaluate the trade-off between registration error and computation time. Failure mode analysis was performed by individually analyzing unsuccessful registrations (>5 mm distance error) observed with 50 multistarts.

Results: At 200 robust multistarts (computation time of ∼26 s), the registration accuracy was 100% across all 10,000 trials. As the number of multistarts (and computation time) decreased, the registration remained fairly robust, down to 99.3% registration accuracy at 50 multistarts (computation time ∼7 s).

Conclusion: The LevelCheck algorithm correctly identified target vertebrae in intraoperative mobile radiographs of the thoracolumbar spine, demonstrating acceptable computation time, compatibility with routinely obtained preoperative CT scans, and warranting investigation in prospective studies.

Level of Evidence: N/A
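
The robust-multistart strategy evaluated here, running an optimizer from many initializations and keeping the best solution, is generic and easy to sketch. The toy cost below is a standard multimodal test function standing in for LevelCheck's 3D-2D registration objective; names and settings are illustrative.

```python
import numpy as np
from scipy import optimize

def multistart_minimize(objective, bounds, n_starts=50, seed=0):
    """Run a local optimizer from n_starts random initializations and keep
    the best result; more starts trade computation time for robustness to
    local optima (cf. the 50-200 multistarts evaluated above)."""
    rng = np.random.default_rng(seed)
    lo, hi = np.array(bounds, dtype=float).T
    best = None
    for _ in range(n_starts):
        res = optimize.minimize(objective, rng.uniform(lo, hi),
                                method="Nelder-Mead")
        if best is None or res.fun < best.fun:
            best = res
    return best

# Rastrigin function: many local minima, global minimum 0 at the origin.
cost = lambda x: 10 * x.size + np.sum(x ** 2 - 10 * np.cos(2 * np.pi * x))
res = multistart_minimize(cost, bounds=[(-5, 5)] * 2, n_starts=200)
print("best cost found:", round(float(res.fun), 4), "at", np.round(res.x, 3))
```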


Physics in Medicine and Biology | 2014

3D–2D registration for surgical guidance: effect of projection view angles on registration accuracy

Ali Uneri; Yoshito Otake; Adam S. Wang; Gerhard Kleinszig; Sebastian Vogt; A. J. Khanna; Jeffrey H. Siewerdsen

An algorithm for intensity-based 3D-2D registration of CT and x-ray projections is evaluated, specifically using single- or dual-projection views to provide 3D localization. The registration framework employs the gradient information similarity metric and covariance matrix adaptation evolution strategy to solve for the patient pose in six degrees of freedom. Registration performance was evaluated in an anthropomorphic phantom and cadaver, using C-arm projection views acquired at angular separation, Δθ, ranging from ∼0° to 180° at variable C-arm magnification. Registration accuracy was assessed in terms of 2D projection distance error and 3D target registration error (TRE) and compared to that of an electromagnetic (EM) tracker. The results indicate that angular separation as small as Δθ ∼10°-20° achieved TRE <2 mm with 95% confidence, comparable or superior to that of the EM tracker. The method allows direct registration of preoperative CT and planning data to intraoperative fluoroscopy, providing 3D localization free from conventional limitations associated with external fiducial markers, stereotactic frames, trackers and manual registration.
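
The dependence of 3D localization accuracy on angular separation can be illustrated with a toy triangulation experiment: recovering a 2D point from two 1D parallel-beam projections becomes ill-conditioned as Δθ shrinks. This is a geometric sketch of the underlying effect, not the paper's cone-beam registration.

```python
import numpy as np

def triangulate(angles_deg, u):
    """Least-squares 2D point from 1D parallel projections
    u = x cos(theta) + y sin(theta)."""
    t = np.deg2rad(angles_deg)
    A = np.column_stack([np.cos(t), np.sin(t)])
    return np.linalg.lstsq(A, np.asarray(u), rcond=None)[0]

rng = np.random.default_rng(0)
true_pt = np.array([10.0, -4.0])
for dtheta in (2, 10, 90):                        # angular separation (deg)
    t = np.deg2rad([0.0, float(dtheta)])
    errs = []
    for _ in range(500):
        u = np.cos(t) * true_pt[0] + np.sin(t) * true_pt[1] \
            + rng.normal(0, 0.1, size=2)          # noisy projections
        errs.append(np.linalg.norm(triangulate([0.0, dtheta], u) - true_pt))
    print(f"separation {dtheta:3d} deg: "
          f"mean localization error {np.mean(errs):.2f} mm")
```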

Collaboration


Dive into Yoshito Otake's collaborations.

Top Co-Authors

Mehran Armand (Johns Hopkins University)
Yoshinobu Sato (Nara Institute of Science and Technology)
Ali Uneri (Johns Hopkins University)
Shigeyuki Suzuki (Jikei University School of Medicine)