
Publications


Featured research published by Bernhard Fuerst.


Medical Image Analysis | 2014

Automatic ultrasound–MRI registration for neurosurgery using the 2D and 3D LC2 Metric

Bernhard Fuerst; Wolfgang Wein; Markus Müller; Nassir Navab

To enable image-guided neurosurgery, the alignment of pre-interventional magnetic resonance imaging (MRI) and intra-operative ultrasound (US) is commonly required. We present two automatic image registration algorithms using the similarity measure Linear Correlation of Linear Combination (LC2) to align either freehand US slices or US volumes with MRI images. Both approaches allow an automatic and robust registration, while the three-dimensional method yields a significantly improved percentage of optimally aligned registrations for randomly chosen, clinically relevant initializations. This study presents a detailed description of the methodology and an extensive evaluation showing an accuracy of 2.51 mm, a precision of 0.85 mm, and a capture range of 15 mm (>95% convergence) using 14 clinical neurosurgical cases.
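Within each local patch, the LC2 metric models the US intensities as a linear combination of the MRI intensity, the MRI gradient magnitude, and a constant offset; the similarity is the variance-weighted fraction of US variance explained by that fit. A minimal NumPy sketch of this idea follows (the 2D case, non-overlapping patches, and the patch size are illustrative choices, not the authors' implementation):

```python
import numpy as np

def lc2_patch(us_patch, mri_patch, mri_grad_patch):
    # Fit US intensities as a linear combination of MRI intensity,
    # MRI gradient magnitude, and a constant offset (least squares).
    u = us_patch.ravel()
    A = np.column_stack([mri_patch.ravel(),
                         mri_grad_patch.ravel(),
                         np.ones(u.size)])
    coeffs, *_ = np.linalg.lstsq(A, u, rcond=None)
    var_u = u.var()
    if var_u < 1e-12:          # uninformative patch, e.g. outside the US fan
        return 0.0, 0.0
    residual = u - A @ coeffs
    return 1.0 - residual.var() / var_u, var_u

def lc2_similarity(us_img, mri_img, patch=9):
    # Variance-weighted average of the local LC2 values over the image.
    gy, gx = np.gradient(mri_img.astype(float))
    mri_grad = np.hypot(gx, gy)
    num, den = 0.0, 0.0
    h, w = us_img.shape
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            s, v = lc2_patch(us_img[y:y + patch, x:x + patch],
                             mri_img[y:y + patch, x:x + patch],
                             mri_grad[y:y + patch, x:x + patch])
            num += s * v
            den += v
    return num / den if den > 0 else 0.0
```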


medical image computing and computer assisted intervention | 2013

Global Registration of Ultrasound to MRI Using the LC2 Metric for Enabling Neurosurgical Guidance

Wolfgang Wein; Alexander Ladikos; Bernhard Fuerst; Amit Shah; Kanishka Sharma; Nassir Navab

Automatic and robust registration of pre-operative magnetic resonance imaging (MRI) and intra-operative ultrasound (US) is essential to neurosurgery. We reformulate and extend an approach which uses a Linear Correlation of Linear Combination (LC2)-based similarity metric, yielding a novel algorithm which allows for fully automatic US-MRI registration in a matter of seconds. It is invariant with respect to the unknown and locally varying relationship between US image intensities and both the MRI intensity and its gradient. Built on this metric, the overall method recovers both the global rigid alignment and the parameters of a free-form deformation (FFD) model. The algorithm is evaluated on 14 clinical neurosurgical cases with tumors, with an average landmark-based error of 2.52 mm for the rigid transformation. In addition, we systematically study the accuracy, precision, and capture range of the algorithm, as well as its sensitivity to different choices of parameters.
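As a rough illustration of how such a metric can drive the rigid stage of a registration, the sketch below maximizes LC2 over six rigid parameters with a derivative-free optimizer. The resampling helper and the choice of Nelder-Mead are assumptions for illustration; the paper's global search strategy and FFD refinement are not reproduced here.

```python
import numpy as np
from scipy.optimize import minimize

# `lc2_similarity` is the sketch above; `resample_mri_into_us_frame` is a
# hypothetical helper that applies a rigid transform (tx, ty, tz, rx, ry, rz)
# and resamples the MRI data onto the ultrasound grid.

def negative_lc2(params, us_img, mri_img):
    mri_in_us = resample_mri_into_us_frame(mri_img, params)
    return -lc2_similarity(us_img, mri_in_us)      # minimize the negative

def register_rigid(us_img, mri_img, init=None):
    x0 = np.zeros(6) if init is None else np.asarray(init, dtype=float)
    res = minimize(negative_lc2, x0, args=(us_img, mri_img),
                   method="Nelder-Mead",
                   options={"xatol": 1e-3, "fatol": 1e-4, "maxiter": 500})
    return res.x     # rigid parameters that maximize LC2
```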


IEEE Transactions on Medical Imaging | 2015

Patient-Specific Biomechanical Model for the Prediction of Lung Motion From 4-D CT Images

Bernhard Fuerst; Tommaso Mansi; Francois Carnis; Martin Salzle; Jingdan Zhang; Jerome Declerck; Thomas Boettger; John E. Bayouth; Nassir Navab; Ali Kamen

This paper presents an approach to predict the deformation of the lungs and surrounding organs during respiration. The framework incorporates a computational model of the respiratory system, which comprises an anatomical model extracted from computed tomography (CT) images at end-expiration (EE) and a biomechanical model of the respiratory physiology, including the material behavior and interactions between organs. A personalization step is performed to automatically estimate the patient-specific thoracic pressure, which drives the biomechanical model. The zone-wise pressure values are obtained using a trust-region optimizer, in which the estimated motion is compared to CT images at end-inspiration (EI). A detailed convergence analysis in terms of mesh resolution, time stepping, and number of pressure zones on the surface of the thoracic cavity is carried out. The method is then tested on five public datasets. Results show that the model is able to predict the respiratory motion with an average landmark error of 3.40 ± 1.0 mm over the entire respiratory cycle. The estimated 3-D lung motion may serve as an advanced 3-D surrogate for more accurate medical image reconstruction and patient respiratory analysis.
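A minimal sketch of the personalization step, assuming a hypothetical forward model `simulate_landmarks(pressures)` that runs the biomechanical simulation from the EE anatomy under given zone-wise pressures and returns predicted landmark positions at EI. SciPy's trust-region reflective solver stands in for the trust-region optimizer mentioned in the abstract; the bounds, units, and initial value are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import least_squares

def personalize_pressures(simulate_landmarks, landmarks_ei, n_zones=5):
    # Residuals: difference between simulated and observed EI landmark positions.
    def residuals(pressures):
        predicted = simulate_landmarks(pressures)      # (N, 3) array, mm
        return (predicted - landmarks_ei).ravel()

    p0 = np.full(n_zones, -500.0)                      # initial guess, Pa (assumed)
    fit = least_squares(residuals, p0, method="trf",   # trust-region reflective
                        bounds=(-5000.0, 0.0))
    return fit.x                                       # zone-wise thoracic pressures
```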


computer assisted radiology and surgery | 2016

Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.

Sing Chun Lee; Bernhard Fuerst; Javad Fotouhi; Marius Fischer; Greg Osgood; Nassir Navab

Purpose: This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient’s surface without the need to move the C-arm.

Methods: An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm.

Results: Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries.

Conclusion: To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
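A rough sketch of the descriptor-plus-ICP alignment described in the Methods, using Open3D (which provides FPFH features, RANSAC-based feature matching, and ICP). The library choice, parameters, and point-cloud preparation are assumptions, not the authors' implementation.

```python
import open3d as o3d

def calibrate_rgbd_to_cbct(rgbd_cloud, cbct_surface_cloud, voxel=2.0):
    """Estimate the rigid transform from the RGBD camera frame to the CBCT
    frame by matching the two reconstructions of the same object."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down,
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src, src_fpfh = preprocess(rgbd_cloud)
    tgt, tgt_fpfh = preprocess(cbct_surface_cloud)

    # Global alignment from FPFH correspondences (RANSAC), then ICP refinement.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel * 0.5, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation   # 4x4 camera-to-CBCT transform
```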


computer assisted radiology and surgery | 2016

Preclinical usability study of multiple augmented reality concepts for K-wire placement

Marius Fischer; Bernhard Fuerst; Sing Chun Lee; Javad Fotouhi; Séverine Habert; Simon Weidert; Ekkehard Euler; Greg Osgood; Nassir Navab

Purpose: In many orthopedic surgeries, there is a demand for correctly placing medical instruments (e.g., a K-wire or drill) to perform bone fracture repairs. The main challenge is the mental alignment of X-ray images acquired using a C-arm, the medical instruments, and the patient, which dramatically increases in complexity during pelvic surgeries. Current solutions include the continuous acquisition of many intra-operative X-ray images from various views, which results in high radiation exposure, long surgical durations, and significant effort and frustration for the surgical staff. This work conducts a preclinical usability study to test and evaluate mixed reality visualization techniques using intra-operative X-ray, optical, and RGBD imaging to augment the surgeon’s view and assist accurate placement of tools.

Methods: We design and perform a usability study to compare the performance of surgeons and their task load using three different mixed reality systems during K-wire placements. The three systems are interventional X-ray imaging, X-ray augmentation on 2D video, and 3D surface reconstruction augmented by digitally reconstructed radiographs and live tool visualization.

Results: The evaluation criteria include duration, number of X-ray images acquired, placement accuracy, and surgical task load, observed during 21 clinically relevant interventions performed by surgeons on phantoms. Finally, we test for statistically significant improvements and show that the mixed reality visualization leads to significantly improved efficiency.

Conclusion: The 3D visualization of patient, tool, and DRR shows clear advantages over conventional X-ray imaging and provides intuitive feedback to place the medical tools correctly and efficiently.
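The Results test for statistically significant improvements across three paired conditions; one common way to do this (not necessarily the authors' choice) is a Friedman omnibus test followed by pairwise Wilcoxon signed-rank tests. The numbers below are invented placeholders, included only to show the shape of the analysis.

```python
import numpy as np
from scipy import stats

# Hypothetical per-surgeon measurements (e.g., number of X-ray images needed
# per K-wire placement) under the three visualization systems. Placeholder
# values, not data from the paper.
xray_only  = np.array([24, 31, 19, 27, 22, 29, 25])
xray_on_2d = np.array([15, 18, 14, 20, 16, 17, 19])
mixed_3d   = np.array([ 8, 11,  9, 12, 10,  9, 13])

# Omnibus test across the three paired conditions ...
stat, p = stats.friedmanchisquare(xray_only, xray_on_2d, mixed_3d)
print(f"Friedman test: chi2={stat:.2f}, p={p:.4f}")

# ... followed by a pairwise post-hoc comparison of interest.
stat, p = stats.wilcoxon(xray_only, mixed_3d)
print(f"Wilcoxon (X-ray only vs. 3D mixed reality): W={stat:.1f}, p={p:.4f}")
```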


medical image computing and computer-assisted intervention | 2012

A personalized biomechanical model for respiratory motion prediction.

Bernhard Fuerst; Tommaso Mansi; Jianwen Zhang; Parmeshwar Khurd; Jerome Declerck; Thomas Boettger; Nassir Navab; John E. Bayouth; Dorin Comaniciu; Ali Kamen

Time-resolved imaging of the thorax or abdominal area is affected by respiratory motion. Currently, one-dimensional respiratory surrogates are used to estimate the state of the lung during its cycle, but with rather poor results. This paper presents a framework to predict the 3D lung motion based on a patient-specific finite element model of respiratory mechanics estimated from two CT images at end of inspiration (EI) and end of expiration (EE). We first segment the lung, thorax, and sub-diaphragm organs automatically using a machine-learning algorithm. Then, a biomechanical model of the lung, thorax, and sub-diaphragm is employed to compute the 3D respiratory motion. Our model is driven by thoracic pressures, estimated automatically from the EE and EI images using a trust-region approach. Finally, lung motion is predicted by modulating the thoracic pressures. The effectiveness of our approach is evaluated by predicting lung deformation during exhale on five DIR-Lab datasets. Several personalization strategies are tested, showing that an average error of 3.88 ± 1.54 mm in predicted landmark positions can be achieved. Since our approach is generative, it may provide 3D surrogate information for more accurate medical image reconstruction and patient respiratory analysis.
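The abstract states that, once personalized, lung motion is predicted by modulating the thoracic pressures. A toy sketch of this idea, assuming the same hypothetical forward model as in the personalization sketch above and a linear modulation over the respiratory phase (the paper's actual modulation scheme is not reproduced here):

```python
import numpy as np

def predict_lung_state(simulate_landmarks, personalized_pressures, phase):
    # phase in [0, 1]: 0 = end-expiration, 1 = end-inspiration.
    # Linear scaling of the personalized zone-wise pressures is an
    # illustrative assumption, not the paper's modulation scheme.
    scaled = phase * np.asarray(personalized_pressures, dtype=float)
    return simulate_landmarks(scaled)
```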


computer assisted radiology and surgery | 2017

Comparison of optical see-through head-mounted displays for surgical interventions with object-anchored 2D-display

Long Qian; Alexander Barthel; Alex Johnson; Greg Osgood; Peter Kazanzides; Nassir Navab; Bernhard Fuerst

Purpose: Optical see-through head-mounted displays (OST-HMD) feature an unhindered and instantaneous view of the surgery site and can enable a mixed reality experience for surgeons during procedures. In this paper, we present a systematic approach to identify the criteria for evaluation of OST-HMD technologies for specific clinical scenarios which benefit from using an object-anchored 2D display visualizing medical information.

Methods: Criteria for evaluating the performance of OST-HMDs for visualization of medical information and its usage are identified and proposed. These include text readability, contrast perception, task load, frame rate, and system lag. We choose to compare three commercially available OST-HMDs, which are representative of currently available head-mounted display technologies. A multi-user study and an offline experiment are conducted to evaluate their performance.

Results: Statistical analysis demonstrates that the Microsoft HoloLens performs best among the three tested OST-HMDs in terms of contrast perception, task load, and frame rate, while the ODG R-7 offers similar text readability. The integration of indoor localization and fiducial tracking on the HoloLens provides significantly less system lag in a relatively motionless scenario.

Conclusions: With ever more OST-HMDs appearing on the market, the proposed criteria could be used to evaluate their suitability for mixed reality surgical intervention. Currently, the Microsoft HoloLens may be more suitable than the ODG R-7 and Epson Moverio BT-200 for clinical usability in terms of the evaluated criteria. To the best of our knowledge, this is the first paper that presents a methodology and conducts experiments to evaluate and compare OST-HMDs for their use as an object-anchored 2D display during interventions.
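Two of the proposed criteria, frame rate and system lag, can be estimated directly from logged timestamps; a small sketch of such a measurement (the logging interface is an assumption, not part of the paper):

```python
import numpy as np

def mean_frame_rate(frame_timestamps_s):
    # Mean frames per second from per-frame display timestamps (seconds).
    dt = np.diff(np.asarray(frame_timestamps_s, dtype=float))
    return 1.0 / dt.mean()

def mean_system_lag(stimulus_times_s, display_times_s):
    # End-to-end lag: delay between a tracked change in the scene and the
    # corresponding update shown on the OST-HMD, from paired timestamps.
    return float(np.mean(np.asarray(display_times_s) -
                         np.asarray(stimulus_times_s)))
```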


IEEE Transactions on Medical Imaging | 2017

Towards MRI-Based Autonomous Robotic US Acquisitions: A First Feasibility Study.

Christoph Hennersperger; Bernhard Fuerst; Salvatore Virga; Oliver Zettinig; Benjamin Frisch; Thomas Neff; Nassir Navab

Robotic ultrasound has the potential to assist and guide physicians during interventions. In this work, we present a set of methods and a workflow to enable autonomous MRI-guided ultrasound acquisitions. Our approach uses a structured-light 3D scanner for patient-to-robot and image-to-patient calibration, which in turn is used to plan 3D ultrasound trajectories. These MRI-based trajectories are followed autonomously by the robot and are further refined online using automatic MRI/US registration. Despite the low spatial resolution of structured light scanners, the initial planned acquisition path can be followed with an accuracy of 2.46 ± 0.96 mm. This leads to a good initialization of the MRI/US registration: the 3D-scan-based alignment for planning and acquisition shows an accuracy (distance between planned ultrasound and MRI) of 4.47 mm, and 0.97 mm after an online update of the calibration based on a closed-loop registration.
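The calibration chain described above can be summarized as a composition of homogeneous transforms that maps an MRI-planned ultrasound pose into the robot base frame; the frame names and ordering below are illustrative, not the paper's notation.

```python
import numpy as np

def planned_pose_in_robot_frame(T_robot_scanner, T_scanner_patient,
                                T_patient_mri, T_mri_pose):
    # All arguments are 4x4 homogeneous transforms (T_A_B maps frame B to frame A):
    #   T_robot_scanner  : structured-light scanner frame -> robot base frame
    #   T_scanner_patient: patient frame -> scanner frame (from the 3D surface scan)
    #   T_patient_mri    : MRI frame -> patient frame (image-to-patient calibration)
    #   T_mri_pose       : planned transducer pose, transducer frame -> MRI frame
    return T_robot_scanner @ T_scanner_patient @ T_patient_mri @ T_mri_pose
```

The robot then moves the transducer to each pose along the planned trajectory, while the online MRI/US registration refines the chain during the acquisition.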


international conference on robotics and automation | 2016

Toward real-time 3D ultrasound registration-based visual servoing for interventional navigation

Oliver Zettinig; Bernhard Fuerst; Risto Kojcev; Marco Esposito; Mehrdad Salehi; Wolfgang Wein; Julia Rackerseder; Edoardo Sinibaldi; Benjamin Frisch; Nassir Navab

While intraoperative imaging is commonly used to guide surgical interventions, automatic robotic support for image-guided navigation has not yet been established in clinical routine. In this paper, we propose a novel visual servoing framework that combines, for the first time, full image-based 3D ultrasound registration with a real-time servo-control scheme. Paired with multi-modal fusion to a pre-interventional plan, such as an annotated needle insertion path, it thus allows tracking a target anatomy, continuously updating the plan as the target moves, and keeping a needle guide aligned for accurate manual insertion. The presented system includes a motorized 3D ultrasound transducer mounted on a force-controlled robot and a GPU-based image processing toolkit. The tracking accuracy of our framework is validated on a geometric agar/gelatin phantom using a second robot, achieving positioning errors of 0.42-0.44 mm on average. With combined compounding and registration runtimes of around 550 ms, real-time performance comes within reach. We also present initial results on a spine phantom, demonstrating the feasibility of our system for lumbar spine injections.
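A compressed sketch of one cycle of such a servoing loop follows; the interfaces for volume compounding, registration, and robot control are assumed placeholders, and the control law is reduced to a proportional correction of the translational error only.

```python
import numpy as np

def servo_cycle(compound_volume, register_to_reference, robot, plan_pose, gain=0.5):
    volume = compound_volume()                 # acquire and compound a live 3D US sweep
    T_motion = register_to_reference(volume)   # 4x4 rigid motion of the target anatomy
    plan_pose = T_motion @ plan_pose           # keep the insertion plan locked to the target
    current = robot.end_effector_pose()        # 4x4 pose of the needle guide, robot frame
    error_translation = plan_pose[:3, 3] - current[:3, 3]
    robot.command_velocity(gain * error_translation)   # proportional correction
    return plan_pose                           # updated plan for the next cycle
```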


IEEE Transactions on Medical Imaging | 2016

First Robotic SPECT for Minimally Invasive Sentinel Lymph Node Mapping

Bernhard Fuerst; Julian Sprung; Francisco Pinto; Benjamin Frisch; Thomas Wendler; Hervé Simon; Laurent Mengus; Nynke S. van den Berg; Henk G. van der Poel; Fijs W. B. van Leeuwen; Nassir Navab

In this paper, we present the use of a drop-in gamma probe for intra-operative Single-Photon Emission Computed Tomography (SPECT) imaging in the context of minimally invasive robot-assisted interventions. The probe is designed to be inserted into and reside inside the abdominal cavity during the intervention. It is grasped during the procedure using a robotic laparoscopic gripper, enabling full six-degrees-of-freedom handling by the surgeon. We demonstrate the first deployment of the tracked probe for intra-operative in-patient robotic SPECT, enabling augmented-reality image guidance. The hybrid mechanical- and image-based in-patient probe tracking is shown to have an accuracy of 0.2 mm. The overall system performance is evaluated and tested with a phantom for gynecological sentinel lymph node interventions and compared to ground-truth data, yielding a mean reconstruction accuracy of 0.67 mm.
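For intuition only, a naive back-projection of tracked gamma-probe counts onto a voxel grid is sketched below; the published system uses a proper detection model and reconstruction pipeline, so this is an illustrative approximation, not the paper's method.

```python
import numpy as np

def backproject_counts(probe_poses, counts, grid_shape, voxel_size_mm, origin_mm):
    # probe_poses: 4x4 tracked probe poses; counts: gamma counts per pose.
    volume = np.zeros(grid_shape)
    zi, yi, xi = np.indices(grid_shape)
    centers = origin_mm + np.stack([xi, yi, zi], axis=-1) * voxel_size_mm
    for T, c in zip(probe_poses, counts):
        tip = T[:3, 3]                          # probe tip position in tracker frame
        d2 = np.sum((centers - tip) ** 2, axis=-1)
        volume += c / np.maximum(d2, 1.0)       # inverse-square weighting, clamped
    return volume
```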

Collaboration


Dive into Bernhard Fuerst's collaborations.

Top Co-Authors

Javad Fotouhi (Johns Hopkins University)
Greg Osgood (Johns Hopkins University)
Sing Chun Lee (Johns Hopkins University)
Alex Johnson (Johns Hopkins University)
Mehran Armand (Johns Hopkins University)
Long Qian (Johns Hopkins University)