Publication


Featured research published by Javad Fotouhi.


Computer Assisted Radiology and Surgery | 2016

Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.

Sing Chun Lee; Bernhard Fuerst; Javad Fotouhi; Marius Fischer; Greg Osgood; Nassir Navab

Purpose: This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient’s surface without the need to move the C-arm.

Methods: An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm.

Results: Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries.

Conclusion: To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
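
The calibration described above amounts to a feature-based coarse alignment followed by ICP refinement between the RGBD surface reconstruction and a surface derived from the CBCT volume. As a rough illustration only (not the authors' implementation), the sketch below aligns two point clouds with Open3D's FPFH and ICP routines; the voxel size and RANSAC parameters are assumptions.

```python
# Hypothetical sketch: aligning a CBCT-derived surface to an RGBD surface
# reconstruction with FPFH features + ICP, using Open3D (not the authors' code).
import open3d as o3d

def register_surfaces(source_pcd, target_pcd, voxel_size=2.0):
    """Coarse-to-fine registration: FPFH + RANSAC, refined by point-to-plane ICP."""
    # Downsample and estimate normals for both clouds.
    src = source_pcd.voxel_down_sample(voxel_size)
    tgt = target_pcd.voxel_down_sample(voxel_size)
    for pcd in (src, tgt):
        pcd.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=2 * voxel_size, max_nn=30))

    # Fast Point Feature Histogram descriptors.
    search = o3d.geometry.KDTreeSearchParamHybrid(radius=5 * voxel_size, max_nn=100)
    src_fpfh = o3d.pipelines.registration.compute_fpfh_feature(src, search)
    tgt_fpfh = o3d.pipelines.registration.compute_fpfh_feature(tgt, search)

    # Global (coarse) alignment via RANSAC on feature correspondences.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src, tgt, src_fpfh, tgt_fpfh, True,           # mutual_filter=True
        1.5 * voxel_size,                              # max correspondence distance
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False),
        3, [],
        o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Local refinement with ICP (point-to-plane).
    fine = o3d.pipelines.registration.registration_icp(
        src, tgt, voxel_size, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation  # 4x4 rigid transform mapping source to target
```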


Computer Assisted Radiology and Surgery | 2016

Preclinical usability study of multiple augmented reality concepts for K-wire placement

Marius Fischer; Bernhard Fuerst; Sing Chun Lee; Javad Fotouhi; Séverine Habert; Simon Weidert; Ekkehard Euler; Greg Osgood; Nassir Navab

Purpose: In many orthopedic surgeries, there is a demand for correctly placing medical instruments (e.g., K-wire or drill) to perform bone fracture repairs. The main challenge is the mental alignment of X-ray images acquired using a C-arm, the medical instruments, and the patient, which dramatically increases in complexity during pelvic surgeries. Current solutions include the continuous acquisition of many intra-operative X-ray images from various views, which will result in high radiation exposure, long surgical durations, and significant effort and frustration for the surgical staff. This work conducts a preclinical usability study to test and evaluate mixed reality visualization techniques using intra-operative X-ray, optical, and RGBD imaging to augment the surgeon’s view to assist accurate placement of tools.

Method: We design and perform a usability study to compare the performance of surgeons and their task load using three different mixed reality systems during K-wire placements. The three systems are interventional X-ray imaging, X-ray augmentation on 2D video, and 3D surface reconstruction augmented by digitally reconstructed radiographs and live tool visualization.

Results: The evaluation criteria include duration, number of X-ray images acquired, placement accuracy, and the surgical task load, which are observed during 21 clinically relevant interventions performed by surgeons on phantoms. Finally, we test for statistically significant improvements and show that the mixed reality visualization leads to a significantly improved efficiency.

Conclusion: The 3D visualization of patient, tool, and DRR shows clear advantages over the conventional X-ray imaging and provides intuitive feedback to place the medical tools correctly and efficiently.
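
The study reports statistically significant efficiency gains across the three visualization conditions. A minimal sketch of one way to run such a paired comparison is shown below, using a Wilcoxon signed-rank test from SciPy on hypothetical task durations; the numbers are made up and are not from the study.

```python
# Illustrative sketch (not the study's analysis script): paired comparison of
# per-task durations under two visualization conditions.
import numpy as np
from scipy import stats

durations_xray_only = np.array([412, 389, 455, 501, 367, 420, 398])      # seconds, hypothetical
durations_mixed_reality = np.array([298, 310, 342, 365, 280, 305, 299])  # seconds, hypothetical

stat, p_value = stats.wilcoxon(durations_xray_only, durations_mixed_reality)
print(f"Wilcoxon W = {stat:.1f}, p = {p_value:.4f}")
if p_value < 0.05:
    print("Difference in task duration is statistically significant at alpha = 0.05")
```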


Computer Assisted Radiology and Surgery | 2017

Can real-time RGBD enhance intraoperative Cone-Beam CT?

Javad Fotouhi; Bernhard Fuerst; Wolfgang Wein; Nassir Navab

Purpose: Cone-Beam Computed Tomography (CBCT) is an important 3D imaging technology for orthopedic, trauma, radiotherapy guidance, angiography, and dental applications. The major limitation of CBCT is the poor image quality due to scattered radiation, truncation, and patient movement. In this work, we propose to incorporate information from a co-registered Red-Green-Blue-Depth (RGBD) sensor attached near the detector plane of the C-arm to improve the reconstruction quality, as well as correcting for undesired rigid patient movement.

Methods: Calibration of the RGBD and C-arm imaging devices is performed in two steps: (i) calibration of the RGBD sensor and the X-ray source using a multimodal checkerboard pattern, and (ii) calibration of the RGBD surface reconstruction to the CBCT volume. The patient surface is acquired during the CBCT scan and then used as prior information for the reconstruction using Maximum-Likelihood Expectation-Maximization. An RGBD-based simultaneous localization and mapping method is utilized to estimate the rigid patient movement during scanning.

Results: Performance is quantified and demonstrated using artificial data and bone phantoms with and without metal implants. Finally, we present movement-corrected CBCT reconstructions based on RGBD data on an animal specimen, where the average voxel intensity difference reduces from 0.157 without correction to 0.022 with correction.

Conclusion: This work investigated the advantages of a C-arm X-ray imaging system used with an attached RGBD sensor. The experiments show the benefits of the opto/X-ray imaging system in: (i) improving the quality of reconstruction by incorporating the surface information of the patient, reducing the streak artifacts as well as the number of required projections, and (ii) recovering the scanning trajectory for the reconstruction in the presence of undesired patient rigid movement.
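
The reconstruction step uses Maximum-Likelihood Expectation-Maximization (MLEM), with the RGBD-derived patient surface serving as prior information. The sketch below shows only the core multiplicative MLEM update on a toy linear system; it omits the surface prior and the cone-beam geometry, and is not the authors' code.

```python
# Minimal MLEM sketch on a toy linear system.
# x: image/volume estimate, y: measured projections, A: system (projection) matrix.
import numpy as np

def mlem(A, y, n_iter=50, eps=1e-12):
    """Maximum-Likelihood Expectation-Maximization for y ~ Poisson(A @ x)."""
    x = np.ones(A.shape[1])                    # non-negative initial estimate
    sensitivity = A.T @ np.ones(A.shape[0])    # back-projection of ones
    for _ in range(n_iter):
        forward = A @ x                         # forward projection of current estimate
        ratio = y / np.maximum(forward, eps)    # measured / estimated projections
        x *= (A.T @ ratio) / np.maximum(sensitivity, eps)  # multiplicative update
    return x

# Tiny example: 2 detector bins, 2 unknown voxels.
A = np.array([[1.0, 0.5],
              [0.2, 1.0]])
ground_truth = np.array([3.0, 2.0])
y = A @ ground_truth
print(mlem(A, y))  # converges towards [3.0, 2.0]
```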


Computer Assisted Radiology and Surgery | 2016

Dual-robot ultrasound-guided needle placement: closing the planning-imaging-action loop.

Risto Kojcev; Bernhard Fuerst; Oliver Zettinig; Javad Fotouhi; Sing Chun Lee; Benjamin Frisch; Russell H. Taylor; Edoardo Sinibaldi; Nassir Navab

Purpose: Precise needle placement is an important task during several medical procedures. Ultrasound imaging is often used to guide the needle toward the target region in soft tissue. This task remains challenging due to the user’s dependence on image quality, limited field of view, moving target, and moving needle. In this paper, we present a novel dual-robot framework for robotic needle insertions under robotic ultrasound guidance.

Method: We integrated force-controlled ultrasound image acquisition, registration of preoperative and intraoperative images, vision-based robot control, and target localization, in combination with a novel needle tracking algorithm. The framework allows robotic needle insertion to target a preoperatively defined region of interest while enabling real-time visualization and adaptive trajectory planning to provide safe and quick interactions. We assessed the framework by considering both static and moving targets embedded in water and tissue-mimicking gelatin.

Results: The presented dual-robot tracking algorithms allow for accurate needle placement, namely to target the region of interest with an error around 1 mm.

Conclusion: To the best of our knowledge, we show the first use of two independent robots, one for imaging, the other for needle insertion, that are simultaneously controlled using image processing algorithms. Experimental results show the feasibility and demonstrate the accuracy and robustness of the process.
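
The needle-tracking algorithm itself is not detailed in this abstract. Purely as an illustration of per-frame tip tracking under motion, the sketch below runs a constant-velocity Kalman filter over hypothetical 2D tip detections in ultrasound frames; the frame rate and noise covariances are assumptions, not values from the paper.

```python
# Illustrative constant-velocity Kalman filter for a needle-tip position,
# assuming a per-frame 2D tip detection (in mm) is already available.
import numpy as np

dt = 1 / 30.0                                   # assumed frame interval (30 fps)
F = np.array([[1, 0, dt, 0],                    # state transition: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],                     # only position is observed
              [0, 1, 0, 0]])
Q = np.eye(4) * 1e-3                            # process noise (tuning assumption)
R = np.eye(2) * 0.5                             # measurement noise (assumption)

def kalman_step(x, P, z):
    """One predict/update cycle given a measured tip position z = [x_mm, y_mm]."""
    x = F @ x                                   # predict
    P = F @ P @ F.T + Q
    S = H @ P @ H.T + R                         # update
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(4) - K @ H) @ P
    return x, P

x, P = kalman_step(np.zeros(4), np.eye(4), np.array([12.3, 48.1]))  # made-up detection
print(x[:2])                                    # filtered tip position
```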


Medical Image Computing and Computer-Assisted Intervention | 2015

Vision-Based Intraoperative Cone-Beam CT Stitching for Non-overlapping Volumes

Bernhard Fuerst; Javad Fotouhi; Nassir Navab

Cone-Beam Computed Tomography (CBCT) is one of the primary imaging modalities in radiation therapy, dentistry, and orthopedic interventions. While providing crucial intraoperative imaging, CBCT is bounded by its limited imaging volume, motivating the use of image stitching techniques. Current methods rely on overlapping volumes, leading to an excessive amount of radiation exposure, or on external tracking hardware, which may increase the setup complexity. We attach an optical camera to a CBCT-enabled C-arm, and co-register the video and X-ray views. Our novel algorithm recovers the spatial alignment of non-overlapping CBCT volumes based on the observed optical views, as well as the laser projection provided by the X-ray system. First, we estimate the transformation between two volumes by automatic detection and matching of natural surface features during the patient motion. Then, we recover 3D information by reconstructing the projection of the positioning-laser onto an unknown curved surface, which enables the estimation of the unknown scale. We present a full evaluation of the methodology, by comparing vision- and registration-based stitching.
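
The first step above matches natural surface features across camera views acquired during patient motion. The abstract does not name a specific detector; the following sketch assumes ORB features and brute-force Hamming matching via OpenCV as one plausible choice, not the method actually used.

```python
# Hedged sketch of the natural-feature matching step using OpenCV's ORB features.
import cv2

def match_surface_features(img_before, img_after, max_matches=200):
    """Detect and match natural surface features between two grayscale camera views."""
    orb = cv2.ORB_create(nfeatures=2000)
    kp1, des1 = orb.detectAndCompute(img_before, None)
    kp2, des2 = orb.detectAndCompute(img_after, None)

    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des1, des2), key=lambda m: m.distance)[:max_matches]

    pts1 = [kp1[m.queryIdx].pt for m in matches]
    pts2 = [kp2[m.trainIdx].pt for m in matches]
    return pts1, pts2  # corresponding 2D points used to estimate the relative motion
```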


Medical Imaging 2018: Imaging Informatics for Healthcare, Research, and Applications | 2018

Towards clinical translation of augmented orthopedic surgery: from pre-op CT to intra-op x-ray via RGBD sensing

Emerson Tucker; Javad Fotouhi; Mathias Unberath; Sing Chun Lee; Bernhard Fuerst; Alex Johnson; Mehran Armand; Greg Osgood; Nassir Navab

Pre-operative CT data is available for several orthopedic and trauma interventions, and is mainly used to identify injuries and plan the surgical procedure. In this work we propose an intuitive augmented reality environment allowing visualization of pre-operative data during the intervention, with an overlay of the optical information from the surgical site. The pre-operative CT volume is first registered to the patient by acquiring a single C-arm X-ray image and using 3D/2D intensity-based registration. Next, we use an RGBD sensor on the C-arm to fuse the optical information of the surgical site with patient pre-operative medical data and provide an augmented reality environment. The 3D/2D registration of the pre- and intra-operative data allows us to maintain a correct visualization each time the C-arm is repositioned or the patient moves. An overall mean target registration error (mTRE) of 5.24 ± 3.09 mm (mean ± standard deviation) was measured, averaged over 19 C-arm poses. The proposed solution enables the surgeon to visualize pre-operative data overlaid with information from the surgical site (e.g. surgeon’s hands, surgical tools, etc.) for any C-arm pose, and avoids the line-of-sight issues and long setup times present in commercially available systems.
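
The reported accuracy metric is the mean target registration error over landmarks and C-arm poses. A minimal sketch of how such an mTRE (mean ± standard deviation of landmark distances) can be computed is given below; the landmark coordinates are purely illustrative.

```python
# Minimal mean target registration error (mTRE) computation over 3D landmarks.
import numpy as np

def mtre(landmarks_fixed, landmarks_registered):
    """Mean and std of Euclidean distances between corresponding 3D landmarks (mm)."""
    d = np.linalg.norm(np.asarray(landmarks_fixed) - np.asarray(landmarks_registered), axis=1)
    return d.mean(), d.std()

# Hypothetical example: 4 landmarks, registered positions off by a few mm.
gt = np.array([[0, 0, 0], [10, 0, 0], [0, 10, 0], [0, 0, 10]], dtype=float)
reg = gt + np.array([[1, 0, 0], [0, 2, 0], [0, 0, 1.5], [1, 1, 0]])
mean_err, std_err = mtre(gt, reg)
print(f"mTRE = {mean_err:.2f} ± {std_err:.2f} mm")
```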


Medical Image Computing and Computer-Assisted Intervention | 2018

DeepDRR – A Catalyst for Machine Learning in Fluoroscopy-Guided Procedures

Mathias Unberath; Jan-Nico Zaech; Sing Chun Lee; Bastian Bier; Javad Fotouhi; Mehran Armand; Nassir Navab

Machine learning-based approaches outperform competing methods in most disciplines relevant to diagnostic radiology. Interventional radiology, however, has not yet benefited substantially from the advent of deep learning, in particular because of two reasons: 1) Most images acquired during the procedure are never archived and are thus not available for learning, and 2) even if they were available, annotations would be a severe challenge due to the vast amounts of data. When considering fluoroscopy-guided procedures, an interesting alternative to true interventional fluoroscopy is in silico simulation of the procedure from 3D diagnostic CT. In this case, labeling is comparably easy and potentially readily available, yet, the appropriateness of resulting synthetic data is dependent on the forward model. In this work, we propose DeepDRR, a framework for fast and realistic simulation of fluoroscopy and digital radiography from CT scans, tightly integrated with the software platforms native to deep learning. We use machine learning for material decomposition and scatter estimation in 3D and 2D, respectively, combined with analytic forward projection and noise injection to achieve the required performance. On the example of anatomical landmark detection in X-ray images of the pelvis, we demonstrate that machine learning models trained on DeepDRRs generalize to unseen clinically acquired data without the need for re-training or domain adaptation. Our results are promising and promote the establishment of machine learning in fluoroscopy-guided procedures.
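
DeepDRR combines learned material decomposition and scatter estimation with analytic forward projection and noise injection. The sketch below is not the DeepDRR pipeline: it only illustrates the analytic forward-projection idea with a naive parallel-ray Beer-Lambert integration of a CT volume, using an approximate water attenuation coefficient; real systems use cone-beam geometry and the additional physics described above.

```python
# Bare-bones digitally reconstructed radiograph (DRR) by parallel-ray integration
# of a CT volume along one axis (illustration only).
import numpy as np

def simple_drr(ct_hu, mu_water=0.02, axis=0):
    """Convert Hounsfield units to attenuation and integrate along `axis` (Beer-Lambert)."""
    mu = mu_water * (1.0 + ct_hu / 1000.0)      # approximate linear attenuation per voxel
    mu = np.clip(mu, 0.0, None)
    line_integrals = mu.sum(axis=axis)          # parallel-ray path integrals
    return np.exp(-line_integrals)              # transmission image

volume = np.random.randint(-1000, 1500, size=(64, 64, 64)).astype(float)  # fake CT in HU
drr = simple_drr(volume)
print(drr.shape)  # (64, 64)
```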


Medical Image Computing and Computer-Assisted Intervention | 2018

X-ray-transform Invariant Anatomical Landmark Detection for Pelvic Trauma Surgery

Bastian Bier; Mathias Unberath; Jan-Nico Zaech; Javad Fotouhi; Mehran Armand; Greg Osgood; Nassir Navab; Andreas K. Maier

X-ray image guidance enables percutaneous alternatives to complex procedures. Unfortunately, the indirect view onto the anatomy in addition to projective simplification substantially increases the task-load for the surgeon. Additional 3D information such as knowledge of anatomical landmarks can benefit surgical decision making in complicated scenarios. Automatic detection of these landmarks in transmission imaging is challenging since image-domain features characteristic to a certain landmark change substantially depending on the viewing direction. Consequently and to the best of our knowledge, the above problem has not yet been addressed. In this work, we present a method to automatically detect anatomical landmarks in X-ray images independent of the viewing direction. To this end, a sequential prediction framework based on convolutional layers is trained on synthetically generated data of the pelvic anatomy to predict 23 landmarks in single X-ray images. View independence is contingent on training conditions and, here, is achieved on a spherical segment covering 120° × 90° in LAO/RAO and CRAN/CAUD, respectively, centered around AP. On synthetic data, the proposed approach achieves a mean prediction error of 5.6 ± 4.5 mm. We demonstrate that the proposed network is immediately applicable to clinically acquired data of the pelvis. In particular, we show that our intra-operative landmark detection together with pre-operative CT enables X-ray pose estimation which, ultimately, benefits initialization of image-based 2D/3D registration.
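
Sequential-prediction landmark detectors of this kind typically regress one confidence map per landmark and read out the peak location. The sketch below shows only that final readout step on dummy heatmaps; the network itself is not reproduced, and the heatmap-based formulation is an assumption about the implementation rather than something stated in the abstract.

```python
# Turning a stack of predicted per-landmark heatmaps into pixel coordinates.
import numpy as np

def heatmaps_to_landmarks(heatmaps):
    """heatmaps: (num_landmarks, H, W). Returns (num_landmarks, 2) as (row, col)."""
    n, h, w = heatmaps.shape
    flat_idx = heatmaps.reshape(n, -1).argmax(axis=1)   # peak response per landmark
    rows, cols = np.unravel_index(flat_idx, (h, w))
    return np.stack([rows, cols], axis=1)

# Toy example with 23 landmarks on a 128x128 grid.
hm = np.random.rand(23, 128, 128)
print(heatmaps_to_landmarks(hm).shape)  # (23, 2)
```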


Medical Image Computing and Computer-Assisted Intervention | 2017

X-Ray In-Depth Decomposition: Revealing the Latent Structures

Shadi Albarqouni; Javad Fotouhi; Nassir Navab

X-ray is the most readily available imaging modality and has a broad range of applications that span from diagnosis to intra-operative guidance in cardiac, orthopedic, and trauma procedures. Proper interpretation of the hidden and obscured anatomy in X-ray images remains a challenge and often requires high radiation dose and imaging from several perspectives. In this work, we aim at decomposing the conventional X-ray image into d X-ray components of independent, non-overlapped, clipped sub-volumes that separate rigid structures into distinct layers, leaving all deformable organs in one layer, such that the sum resembles the original input. Our proposed model is validated on 6 clinical datasets (approximately 7,200 X-ray images) in addition to 615 real chest X-ray images. Despite the challenging aspects of modeling such a highly ill-posed problem, exciting and encouraging results are obtained, paving the path for further contributions in this direction.
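
The decomposition is constrained so that the predicted component images sum back to the original radiograph. A generic way to express that constraint as a training loss term (not the authors' model or objective) is sketched below in PyTorch, with d = 4 chosen arbitrarily for illustration.

```python
# Illustrative reconstruction constraint: predicted layers should sum to the input X-ray.
import torch

def decomposition_loss(components, xray):
    """components: (batch, d, H, W); xray: (batch, 1, H, W)."""
    reconstruction = components.sum(dim=1, keepdim=True)   # sum of predicted layers
    return torch.nn.functional.mse_loss(reconstruction, xray)

x = torch.rand(2, 1, 256, 256)                             # fake input radiographs
layers = torch.rand(2, 4, 256, 256, requires_grad=True)    # d = 4 hypothetical components
loss = decomposition_loss(layers, x)
loss.backward()
print(loss.item())
```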


Healthcare Technology Letters | 2017

Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

Sing Chun Lee; Bernhard Fuerst; Keisuke Tateno; Alex Johnson; Javad Fotouhi; Greg Osgood; Federico Tombari; Nassir Navab

Orthopaedic surgeons are still following the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of the reconstructed surface and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools occluded by hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
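
The mixed reality view fuses synthetic fluoroscopic (DRR) images with the co-registered camera stream. The sketch below shows only a naive alpha-compositing of a DRR onto an RGB frame, assuming the calibration and warping into the camera view have already been done; it is not the system's actual rendering pipeline.

```python
# Simple alpha-compositing of a synthetic fluoroscopic image onto an RGB camera frame.
import numpy as np

def overlay(rgb_frame, drr, alpha=0.4):
    """rgb_frame: (H, W, 3) uint8; drr: (H, W) float in [0, 1], warped to the camera view."""
    drr_rgb = np.repeat((drr * 255).astype(np.uint8)[..., None], 3, axis=2)
    blended = (1 - alpha) * rgb_frame.astype(float) + alpha * drr_rgb.astype(float)
    return blended.astype(np.uint8)

frame = np.zeros((480, 640, 3), dtype=np.uint8)      # placeholder camera frame
drr = np.random.rand(480, 640)                        # placeholder DRR
print(overlay(frame, drr).shape)                      # (480, 640, 3)
```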

Collaboration


Dive into Javad Fotouhi's collaborations.

Top Co-Authors

Greg Osgood (Johns Hopkins University)
Sing Chun Lee (Johns Hopkins University)
Alex Johnson (Johns Hopkins University)
Mehran Armand (Johns Hopkins University)
Andreas K. Maier (University of Erlangen-Nuremberg)
Bastian Bier (University of Erlangen-Nuremberg)
Emerson Tucker (Johns Hopkins University)