Publication


Featured research published by Greg Osgood.


Journal of Orthopaedic Trauma | 2015

Femoral nerve palsy after pelvic fracture treated with INFIX: a case series.

Daniel Hesse; Utku Kandmir; Brian D. Solberg; Alex Stroh; Greg Osgood; Stephen A. Sems; Cory Collinge

Objective: The treatment of some pelvic injuries has evolved recently to include the use of a subcutaneous anterior pelvic fixator (INFIX). We present 8 cases of femoral nerve palsy in 6 patients after application of an INFIX to alert pelvic surgeons using this technique to this potentially devastating complication and to discuss how it might be avoided in the future. Design: Retrospective chart review. Case series. Setting: Five level 1 and 2 trauma centers, tertiary referral hospitals. Patients/Participants: Six patients with anterior pelvic ring injury treated with an INFIX who experienced 8 femoral nerve palsies (2 bilateral). Intervention: Removal of internal fixator, treatment for femoral nerve palsy. Main Outcome Measurements: Clinical and electromyographic evaluation of patients. Results: All 6 patients with a total of 8 femoral nerve palsies had their INFIX removed. Variable resolution of the nerve injuries was observed. Conclusions: Application of an INFIX for the treatment of pelvic ring injury carries a potentially devastating risk to the femoral nerve(s). Despite early implant removal after detection of nerve injury, some patients had residual quadriceps weakness, disturbance of the thigh's skin sensation, and/or gait disturbance attributable to femoral nerve palsy at the time of early final follow-up. Level of Evidence: Therapeutic Level IV. See Instructions for Authors for a complete description of levels of evidence.


Computer Assisted Radiology and Surgery | 2016

Calibration of RGBD camera and cone-beam CT for 3D intra-operative mixed reality visualization.

Sing Chun Lee; Bernhard Fuerst; Javad Fotouhi; Marius Fischer; Greg Osgood; Nassir Navab

Purpose: This work proposes a novel algorithm to register cone-beam computed tomography (CBCT) volumes and 3D optical (RGBD) camera views. The co-registered real-time RGBD camera and CBCT imaging enable a novel augmented reality solution for orthopedic surgeries, which allows arbitrary views using digitally reconstructed radiographs overlaid on the reconstructed patient’s surface without the need to move the C-arm. Methods: An RGBD camera is rigidly mounted on the C-arm near the detector. We introduce a calibration method based on the simultaneous reconstruction of the surface and the CBCT scan of an object. The transformation between the two coordinate spaces is recovered using Fast Point Feature Histogram descriptors and the Iterative Closest Point algorithm. Results: Several experiments are performed to assess the repeatability and the accuracy of this method. Target registration error is measured on multiple visual and radio-opaque landmarks to evaluate the accuracy of the registration. Mixed reality visualizations from arbitrary angles are also presented for simulated orthopedic surgeries. Conclusion: To the best of our knowledge, this is the first calibration method which uses only tomographic and RGBD reconstructions. This means that the method does not impose a particular shape of the phantom. We demonstrate a marker-less calibration of CBCT volumes and 3D depth cameras, achieving reasonable registration accuracy. This design requires a one-time factory calibration, is self-contained, and could be integrated into existing mobile C-arms to provide real-time augmented reality views from arbitrary angles.
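
As a rough illustration of the calibration idea above, the following sketch aligns an RGBD-derived point cloud to a CBCT-derived surface using FPFH features for coarse alignment and ICP for refinement. It uses the open-source Open3D library rather than the authors' implementation, and the point clouds, voxel size, and distance thresholds are placeholder assumptions.

```python
# Hedged sketch: FPFH-based coarse alignment refined by ICP, in the spirit of the
# calibration pipeline described above (Open3D stands in for the authors' code).
import open3d as o3d

def register_surfaces(source_pcd, target_pcd, voxel=2.0):
    """Align an RGBD surface reconstruction to a CBCT-derived surface (both o3d.geometry.PointCloud)."""
    def preprocess(pcd):
        down = pcd.voxel_down_sample(voxel)
        down.estimate_normals(
            o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 2, max_nn=30))
        fpfh = o3d.pipelines.registration.compute_fpfh_feature(
            down, o3d.geometry.KDTreeSearchParamHybrid(radius=voxel * 5, max_nn=100))
        return down, fpfh

    src_down, src_fpfh = preprocess(source_pcd)
    tgt_down, tgt_fpfh = preprocess(target_pcd)

    # Coarse alignment from Fast Point Feature Histogram correspondences.
    coarse = o3d.pipelines.registration.registration_ransac_based_on_feature_matching(
        src_down, tgt_down, src_fpfh, tgt_fpfh, True, voxel * 1.5,
        o3d.pipelines.registration.TransformationEstimationPointToPoint(False), 3,
        [], o3d.pipelines.registration.RANSACConvergenceCriteria(100000, 0.999))

    # Refinement with the Iterative Closest Point algorithm (point-to-plane).
    fine = o3d.pipelines.registration.registration_icp(
        src_down, tgt_down, voxel, coarse.transformation,
        o3d.pipelines.registration.TransformationEstimationPointToPlane())
    return fine.transformation  # 4x4 camera-to-CBCT transform
```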


Computer Assisted Radiology and Surgery | 2016

Preclinical usability study of multiple augmented reality concepts for K-wire placement

Marius Fischer; Bernhard Fuerst; Sing Chun Lee; Javad Fotouhi; Séverine Habert; Simon Weidert; Ekkehard Euler; Greg Osgood; Nassir Navab

Purpose: In many orthopedic surgeries, there is a demand for correctly placing medical instruments (e.g., K-wire or drill) to perform bone fracture repairs. The main challenge is the mental alignment of X-ray images acquired using a C-arm, the medical instruments, and the patient, which dramatically increases in complexity during pelvic surgeries. Current solutions include the continuous acquisition of many intra-operative X-ray images from various views, which will result in high radiation exposure, long surgical durations, and significant effort and frustration for the surgical staff. This work conducts a preclinical usability study to test and evaluate mixed reality visualization techniques using intra-operative X-ray, optical, and RGBD imaging to augment the surgeon’s view to assist accurate placement of tools. Method: We design and perform a usability study to compare the performance of surgeons and their task load using three different mixed reality systems during K-wire placements. The three systems are interventional X-ray imaging, X-ray augmentation on 2D video, and 3D surface reconstruction augmented by digitally reconstructed radiographs and live tool visualization. Results: The evaluation criteria include duration, number of X-ray images acquired, placement accuracy, and the surgical task load, which are observed during 21 clinically relevant interventions performed by surgeons on phantoms. Finally, we test for statistically significant improvements and show that the mixed reality visualization leads to a significantly improved efficiency. Conclusion: The 3D visualization of patient, tool, and DRR shows clear advantages over the conventional X-ray imaging and provides intuitive feedback to place the medical tools correctly and efficiently.
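
The study tests for statistically significant differences between the three visualization systems. The sketch below shows one plausible way to run such a comparison on per-trial metrics with non-parametric tests from SciPy; the data values are hypothetical and the actual statistical procedure used in the study may differ.

```python
# Hedged sketch: comparing K-wire placement metrics across three visualization
# conditions with non-parametric tests (placeholder data; the study's exact
# statistical procedure may differ).
import numpy as np
from scipy import stats

# Hypothetical per-trial durations in minutes for each condition.
xray_only   = np.array([14.2, 11.8, 16.5, 12.9, 15.1, 13.7, 12.2])
xray_on_rgb = np.array([10.4,  9.8, 12.1, 11.0,  9.5, 10.9, 11.3])
mixed_3d    = np.array([ 7.9,  8.6,  9.2,  7.4,  8.8,  8.1,  9.0])

# Omnibus test across all three systems.
h_stat, p_value = stats.kruskal(xray_only, xray_on_rgb, mixed_3d)
print(f"Kruskal-Wallis H={h_stat:.2f}, p={p_value:.4f}")

# Pairwise follow-up: is the 3D mixed reality condition faster than X-ray only?
u_stat, p_pair = stats.mannwhitneyu(mixed_3d, xray_only, alternative="less")
print(f"Mann-Whitney U={u_stat:.1f}, p={p_pair:.4f}")
```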


Computer Assisted Radiology and Surgery | 2017

Comparison of optical see-through head-mounted displays for surgical interventions with object-anchored 2D-display

Long Qian; Alexander Barthel; Alex Johnson; Greg Osgood; Peter Kazanzides; Nassir Navab; Bernhard Fuerst

Purpose: Optical see-through head-mounted displays (OST-HMD) feature an unhindered and instantaneous view of the surgery site and can enable a mixed reality experience for surgeons during procedures. In this paper, we present a systematic approach to identify the criteria for evaluation of OST-HMD technologies for specific clinical scenarios, which benefit from using an object-anchored 2D-display visualizing medical information. Methods: Criteria for evaluating the performance of OST-HMDs for visualization of medical information and its usage are identified and proposed. These include text readability, contrast perception, task load, frame rate, and system lag. We choose to compare three commercially available OST-HMDs, which are representatives of currently available head-mounted display technologies. A multi-user study and an offline experiment are conducted to evaluate their performance. Results: Statistical analysis demonstrates that Microsoft HoloLens performs best among the three tested OST-HMDs, in terms of contrast perception, task load, and frame rate, while ODG R-7 offers similar text readability. The integration of indoor localization and fiducial tracking on the HoloLens provides significantly less system lag in a relatively motionless scenario. Conclusions: With ever more OST-HMDs appearing on the market, the proposed criteria could be used in the evaluation of their suitability for mixed reality surgical intervention. Currently, Microsoft HoloLens may be more suitable than ODG R-7 and Epson Moverio BT-200 for clinical usability in terms of the evaluated criteria. To the best of our knowledge, this is the first paper that presents a methodology and conducts experiments to evaluate and compare OST-HMDs for their use as object-anchored 2D-display during interventions.
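
Two of the proposed criteria, frame rate and system lag, can be estimated directly from timing logs. The sketch below illustrates this with placeholder timestamp data; each HMD platform exposes timing information differently, so the logging interface here is hypothetical.

```python
# Hedged sketch: estimating rendering frame rate and end-to-end system lag from
# timestamp logs. Placeholder data; real HMD timing APIs differ per platform.
import numpy as np

def frame_rate(frame_timestamps_s: np.ndarray) -> float:
    """Mean frames per second from consecutive render timestamps (seconds)."""
    return float(1.0 / np.diff(frame_timestamps_s).mean())

def system_lag(event_time_s: np.ndarray, display_time_s: np.ndarray) -> float:
    """Mean latency (ms) between a tracked event and its appearance on the display."""
    return float((display_time_s - event_time_s).mean() * 1000.0)

# Placeholder timing data for illustration: a ~58 fps render stream with ~85 ms lag.
stamps = np.cumsum(np.full(120, 1.0 / 58.0))
print(f"{frame_rate(stamps):.1f} fps")
print(f"{system_lag(stamps[:-1], stamps[:-1] + 0.085):.0f} ms lag")
```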


Physics in Medicine and Biology | 2017

Intraoperative evaluation of device placement in spine surgery using known-component 3D–2D image registration

Ali Uneri; T. De Silva; J. Goerres; M. Jacobson; M. D. Ketcha; S. Reaungamornrat; Gerhard Kleinszig; Sebastian Vogt; A. J. Khanna; Greg Osgood; Jean Paul Wolinsky; Jeffrey H. Siewerdsen

Intraoperative x-ray radiography/fluoroscopy is commonly used to assess the placement of surgical devices in the operating room (e.g. spine pedicle screws), but qualitative interpretation can fail to reliably detect suboptimal delivery and/or breach of adjacent critical structures. We present a 3D-2D image registration method wherein intraoperative radiographs are leveraged in combination with prior knowledge of the patient and surgical components for quantitative assessment of device placement and more rigorous quality assurance (QA) of the surgical product. The algorithm is based on known-component registration (KC-Reg) in which patient-specific preoperative CT and parametric component models are used. The registration performs optimization of gradient similarity, removes the need for offline geometric calibration of the C-arm, and simultaneously solves for multiple component bodies, thereby allowing QA in a single step (e.g. spinal construct with 4-20 screws). Performance was tested in a spine phantom, and first clinical results are reported for QA of transpedicle screws delivered in a patient undergoing thoracolumbar spine surgery. Simultaneous registration of ten pedicle screws (five contralateral pairs) demonstrated mean target registration error (TRE) of 1.1 ± 0.1 mm at the screw tip and 0.7 ± 0.4° in angulation when a prior geometric calibration was used. The calibration-free formulation, with the aid of component collision constraints, achieved TRE of 1.4 ± 0.6 mm. In all cases, a statistically significant improvement (p < 0.05) was observed for the simultaneous solutions in comparison to previously reported sequential solution of individual components. Initial application in clinical data in spine surgery demonstrated TRE of 2.7 ± 2.6 mm and 1.5 ± 0.8°. The KC-Reg algorithm offers an independent check and quantitative QA of the surgical product using radiographic/fluoroscopic views acquired within standard OR workflow. Such intraoperative assessment could improve quality and safety, provide the opportunity to revise suboptimal constructs in the OR, and reduce the frequency of revision surgery.
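
A core ingredient of KC-Reg is a gradient-based similarity between simulated projections (DRRs) of the registered components and the acquired radiographs. The sketch below shows a generic gradient correlation metric of that kind in NumPy; the exact formulation optimized in the paper may differ.

```python
# Hedged sketch: a gradient correlation similarity between a simulated DRR and an
# intraoperative radiograph, the kind of image-gradient objective that 3D-2D
# (known-component) registration maximizes. The paper's exact metric may differ.
import numpy as np

def gradient_correlation(drr: np.ndarray, radiograph: np.ndarray) -> float:
    """Mean of the normalized cross-correlations of the row- and column-gradients."""
    def ncc(a, b):
        a = a - a.mean()
        b = b - b.mean()
        denom = np.sqrt((a * a).sum() * (b * b).sum()) + 1e-12
        return float((a * b).sum() / denom)

    g0_d, g1_d = np.gradient(drr.astype(np.float64))
    g0_r, g1_r = np.gradient(radiograph.astype(np.float64))
    return 0.5 * (ncc(g0_d, g0_r) + ncc(g1_d, g1_r))

# A registration loop would perturb the component pose, re-render the DRR, and
# maximize this score with a derivative-free optimizer.
```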


British Journal of Radiology | 2017

Image quality of cone beam computed tomography for evaluation of extremity fractures in the presence of metal hardware: visual grading characteristics analysis

Greg Osgood; Gaurav K. Thawait; Nima Hafezi-Nejad; Delaram Shakoor; Adam Shaner; John Yorkston; Wojciech Zbijewski; Jeffrey H. Siewerdsen; Shadpour Demehri

OBJECTIVE To evaluate image quality and interobserver reliability of a novel cone-beam CT (CBCT) scanner in comparison with plain radiography for assessment of fracture healing in the presence of metal hardware. METHODS In this prospective institutional review board-approved, Health Insurance Portability and Accountability Act of 1996-compliant study, written informed consent was obtained from 27 patients (10 females and 17 males; mean age 44 years, age range 21-83 years) with either upper or lower extremity fractures, and with metal hardware, who underwent CBCT scans and had a clinical radiograph of the affected part. Images were assessed by two independent observers for quality and interobserver reliability for seven visualization tasks. Visual grading characteristic (VGC) curve analysis determined the differences in image quality between CBCT and plain radiography. Interobserver agreement was calculated using Pearson's correlation coefficient. RESULTS VGC results showed a preference for CBCT images over plain radiographs for visualizing (1) cortical and (2) trabecular bone; (3) the fracture line; (4) callus formation; (5) bridging ossification; and (6) the screw thread-bone interface, and inferiority to plain radiographs for visualizing (7) the large metallic side plate contour. Interobserver correlation was strong (p-value < 0.05) for all tasks except visualization of the large metallic side plate contour. CONCLUSION For evaluation of fracture healing in the presence of metal hardware, CBCT image quality is preferable to plain radiography for all visualization tasks, except for large metallic side plate contours. Advances in knowledge: CBCT has the potential to be a good diagnostic alternative to plain radiographs in evaluation of fracture healing in the presence of metal hardware.
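
VGC analysis compares ordinal quality ratings of two modalities in a manner analogous to ROC analysis. The sketch below estimates the area under a VGC curve from hypothetical rating data via the Mann-Whitney U statistic; it is an illustration of the general technique, not the authors' analysis code.

```python
# Hedged sketch: area under the visual grading characteristic (VGC) curve from
# ordinal image-quality ratings of two modalities, estimated via the Mann-Whitney
# U statistic (equivalent to the trapezoidal VGC area up to tie handling).
import numpy as np
from scipy import stats

# Hypothetical 5-point quality ratings for one visualization task.
cbct_ratings = np.array([5, 4, 5, 4, 5, 5, 4, 3, 5, 4])
radiograph_ratings = np.array([3, 3, 4, 2, 3, 4, 3, 2, 3, 3])

u, p = stats.mannwhitneyu(cbct_ratings, radiograph_ratings, alternative="two-sided")
auc_vgc = u / (len(cbct_ratings) * len(radiograph_ratings))
print(f"AUC_VGC = {auc_vgc:.2f} (0.5 = no preference, >0.5 favors CBCT), p = {p:.4f}")
```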


Medical Imaging 2018: Imaging Informatics for Healthcare, Research, and Applications | 2018

Towards clinical translation of augmented orthopedic surgery: from pre-op CT to intra-op x-ray via RGBD sensing

Emerson Tucker; Javad Fotouhi; Mathias Unberath; Sing Chun Lee; Bernhard Fuerst; Alex Johnson; Mehran Armand; Greg Osgood; Nassir Navab

Pre-operative CT data is available for several orthopedic and trauma interventions, and is mainly used to identify injuries and plan the surgical procedure. In this work we propose an intuitive augmented reality environment allowing visualization of pre-operative data during the intervention, with an overlay of the optical information from the surgical site. The pre-operative CT volume is first registered to the patient by acquiring a single C-arm X-ray image and using 3D/2D intensity-based registration. Next, we use an RGBD sensor on the C-arm to fuse the optical information of the surgical site with patient pre-operative medical data and provide an augmented reality environment. The 3D/2D registration of the pre- and intra-operative data allows us to maintain a correct visualization each time the C-arm is repositioned or the patient moves. An overall mean target registration error (mTRE) of 5.24 ± 3.09 mm (mean ± standard deviation) was measured across 19 C-arm poses. The proposed solution enables the surgeon to visualize pre-operative data overlaid with information from the surgical site (e.g. surgeon’s hands, surgical tools, etc.) for any C-arm pose, and mitigates the line-of-sight and long setup-time issues present in commercially available systems.
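
The reported accuracy figure is a mean target registration error over landmarks mapped through the estimated registration. A minimal sketch of that computation is shown below; the transform and landmark coordinates are placeholders.

```python
# Hedged sketch: mean target registration error (mTRE) over anatomical landmarks,
# the metric reported above. Transform and landmark coordinates are placeholders.
import numpy as np

def mtre(landmarks_ct: np.ndarray, landmarks_ref: np.ndarray, T: np.ndarray) -> float:
    """landmarks_* are (N, 3) arrays in mm; T is the estimated 4x4 CT-to-patient transform."""
    homogeneous = np.c_[landmarks_ct, np.ones(len(landmarks_ct))]      # (N, 4)
    mapped = (T @ homogeneous.T).T[:, :3]                              # apply registration
    errors = np.linalg.norm(mapped - landmarks_ref, axis=1)            # per-landmark TRE
    return float(errors.mean())

# Example with an identity transform and jittered reference points.
pts = np.random.rand(10, 3) * 100.0
print(f"mTRE = {mtre(pts, pts + np.random.normal(0, 3, pts.shape), np.eye(4)):.2f} mm")
```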


Physics in Medicine and Biology | 2017

Spinal pedicle screw planning using deformable atlas registration

J. Goerres; Ali Uneri; T. De Silva; M. D. Ketcha; S. Reaungamornrat; M. Jacobson; Sebastian Vogt; Gerhard Kleinszig; Greg Osgood; Jean Paul Wolinsky; Jeffrey H. Siewerdsen

Spinal screw placement is a challenging task due to small bone corridors and high risk of neurological or vascular complications, benefiting from precision guidance/navigation and quality assurance (QA). Implicit to both guidance and QA is the definition of a surgical plan, i.e. the desired trajectories and device selection for the target vertebrae, conventionally requiring time-consuming manual annotations by a skilled surgeon. We propose automation of such planning by deriving the pedicle trajectory and device selection from a patient's preoperative CT or MRI. An atlas of vertebrae surfaces was created to provide the underlying basis for automatic planning, in this work comprising 40 exemplary vertebrae at three levels of the spine (T7, T8, and L3). The atlas was enriched with ideal trajectory annotations for 60 pedicles in total. To define trajectories for a given patient, sparse deformation fields from the atlas surfaces to the input (CT or MR image) are applied on the annotated trajectories. Mean value coordinates are used to interpolate dense deformation fields. The pose of a straight trajectory is optimized by image-based registration to an accumulated volume of the deformed annotations. For evaluation, input deformation fields were created using coherent point drift (CPD) to perform a leave-one-out analysis over the atlas surfaces. CPD registration demonstrated surface error of 0.89 ± 0.10 mm (median ± interquartile range) for T7/T8 and 1.29 ± 0.15 mm for L3. At the pedicle center, registered trajectories deviated from the expert reference by 0.56 ± 0.63 mm (T7/T8) and 1.12 ± 0.67 mm (L3). The predicted maximum screw diameter differed by 0.45 ± 0.62 mm (T7/T8), and 1.26 ± 1.19 mm (L3). The automated planning method avoided screw collisions in all cases and demonstrated close agreement overall with expert reference plans, offering a potentially valuable tool in support of surgical guidance and QA.
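
The planning step transfers atlas trajectory annotations into patient space through an interpolated deformation field. The sketch below uses simple inverse-distance weighting of a sparse surface deformation as a stand-in for the mean value coordinate interpolation described in the paper; surfaces and trajectories are placeholder data.

```python
# Hedged sketch: propagating an atlas-annotated pedicle trajectory into patient space
# by interpolating a sparse surface deformation field. The paper uses mean value
# coordinates; inverse-distance weighting is used here purely as a simplified stand-in.
import numpy as np

def warp_points(points, atlas_surface, patient_surface, power=2.0, eps=1e-9):
    """points: (M,3) trajectory samples; *_surface: (N,3) corresponding vertices (mm)."""
    displacements = patient_surface - atlas_surface                    # sparse field (N, 3)
    warped = np.empty_like(points, dtype=np.float64)
    for i, p in enumerate(points):
        d = np.linalg.norm(atlas_surface - p, axis=1)
        w = 1.0 / (d ** power + eps)                                   # inverse-distance weights
        warped[i] = p + (w[:, None] * displacements).sum(0) / w.sum()
    return warped

# Placeholder atlas trajectory (entry and target points) and corresponding surfaces;
# in the paper the correspondence would come from the CPD surface registration.
atlas_traj = np.array([[0.0, -20.0, 5.0], [0.0, 25.0, 0.0]])
atlas_vertices = np.random.rand(200, 3) * 40.0
patient_vertices = atlas_vertices + np.random.normal(0, 0.5, atlas_vertices.shape)
print(warp_points(atlas_traj, atlas_vertices, patient_vertices))
```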


Medical Image Computing and Computer Assisted Intervention | 2018

X-ray-transform Invariant Anatomical Landmark Detection for Pelvic Trauma Surgery

Bastian Bier; Mathias Unberath; Jan-Nico Zaech; Javad Fotouhi; Mehran Armand; Greg Osgood; Nassir Navab; Andreas K. Maier

X-ray image guidance enables percutaneous alternatives to complex procedures. Unfortunately, the indirect view onto the anatomy, in addition to projective simplification, substantially increases the task load for the surgeon. Additional 3D information such as knowledge of anatomical landmarks can benefit surgical decision making in complicated scenarios. Automatic detection of these landmarks in transmission imaging is challenging since image-domain features characteristic to a certain landmark change substantially depending on the viewing direction. Consequently, and to the best of our knowledge, the above problem has not yet been addressed. In this work, we present a method to automatically detect anatomical landmarks in X-ray images independent of the viewing direction. To this end, a sequential prediction framework based on convolutional layers is trained on synthetically generated data of the pelvic anatomy to predict 23 landmarks in single X-ray images. View independence is contingent on training conditions and, here, is achieved on a spherical segment covering 120° × 90° in LAO/RAO and CRAN/CAUD, respectively, centered around AP. On synthetic data, the proposed approach achieves a mean prediction error of 5.6 ± 4.5 mm. We demonstrate that the proposed network is immediately applicable to clinically acquired data of the pelvis. In particular, we show that our intra-operative landmark detection together with pre-operative CT enables X-ray pose estimation which, ultimately, benefits initialization of image-based 2D/3D registration.
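
Once the 23 landmarks are detected in a radiograph and their 3D positions are known from the pre-operative CT, the C-arm pose can be recovered with a standard perspective-n-point solver, which is one way to obtain the 2D/3D registration initialization mentioned above. The sketch below uses OpenCV's PnP routine with an assumed pinhole intrinsic model; it is not the authors' implementation.

```python
# Hedged sketch: estimating the C-arm viewing pose from detected 2D landmarks and
# their corresponding 3D positions in the pre-operative CT using a standard PnP
# solver. This illustrates the downstream use described above, not the authors' code.
import cv2
import numpy as np

def estimate_pose(landmarks_3d, landmarks_2d, focal_px, image_size):
    """landmarks_3d: (N,3) CT coordinates in mm; landmarks_2d: (N,2) detector pixels."""
    cx, cy = image_size[0] / 2.0, image_size[1] / 2.0
    K = np.array([[focal_px, 0, cx], [0, focal_px, cy], [0, 0, 1]], dtype=np.float64)
    ok, rvec, tvec, inliers = cv2.solvePnPRansac(
        np.asarray(landmarks_3d, dtype=np.float64),
        np.asarray(landmarks_2d, dtype=np.float64),
        K, distCoeffs=None)
    if not ok:
        raise RuntimeError("PnP failed")
    R, _ = cv2.Rodrigues(rvec)       # rotation matrix of the estimated C-arm pose
    return R, tvec                   # usable as initialization for 2D/3D registration
```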


Healthcare Technology Letters | 2017

Multi-modal imaging, model-based tracking, and mixed reality visualisation for orthopaedic surgery

Sing Chun Lee; Bernhard Fuerst; Keisuke Tateno; Alex Johnson; Javad Fotouhi; Greg Osgood; Federico Tombari; Nassir Navab

Orthopaedic surgeons are still following the decades-old workflow of using dozens of two-dimensional fluoroscopic images to drill through complex 3D structures, e.g. the pelvis. This Letter presents a mixed reality support system, which incorporates multi-modal data fusion and model-based surgical tool tracking for creating a mixed reality environment supporting screw placement in orthopaedic surgery. A red–green–blue–depth camera is rigidly attached to a mobile C-arm and is calibrated to the cone-beam computed tomography (CBCT) imaging space via the iterative closest point algorithm. This allows real-time automatic fusion of reconstructed surfaces and/or 3D point clouds and synthetic fluoroscopic images obtained through CBCT imaging. An adapted 3D model-based tracking algorithm with automatic tool segmentation allows for tracking of the surgical tools even when occluded by the hand. This proposed interactive 3D mixed reality environment provides an intuitive understanding of the surgical site and supports surgeons in quickly localising the entry point and orienting the surgical tool during screw placement. The authors validate the augmentation by measuring target registration error and also evaluate the tracking accuracy in the presence of partial occlusion.
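
The mixed reality view ultimately blends synthetic fluoroscopic renderings with the calibrated camera image. The sketch below shows that basic overlay step with OpenCV alpha blending; the images, color map, and blending weight are placeholder assumptions.

```python
# Hedged sketch: blending a synthetic fluoroscopic image (DRR) onto the calibrated
# RGB view of the surgical site, the basic overlay behind the mixed reality scene
# described above. Images and the alpha value are placeholders.
import cv2
import numpy as np

def overlay_drr(rgb_frame: np.ndarray, drr: np.ndarray, alpha: float = 0.4) -> np.ndarray:
    """rgb_frame: HxWx3 uint8 camera image; drr: grayscale DRR already rendered
    from the camera's viewpoint via the RGBD/CBCT calibration."""
    drr_resized = cv2.resize(drr, (rgb_frame.shape[1], rgb_frame.shape[0]))
    drr_color = cv2.applyColorMap(drr_resized, cv2.COLORMAP_BONE)   # tint the DRR
    return cv2.addWeighted(rgb_frame, 1.0 - alpha, drr_color, alpha, 0.0)

# Example with synthetic placeholder images.
frame = np.full((480, 640, 3), 90, np.uint8)
drr = (np.random.rand(256, 256) * 255).astype(np.uint8)
blended = overlay_drr(frame, drr)
```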

Collaboration


Dive into Greg Osgood's collaborations.

Top Co-Authors

Javad Fotouhi

Johns Hopkins University

Alex Johnson

Johns Hopkins University

M. Jacobson

Johns Hopkins University

Sing Chun Lee

Johns Hopkins University

J. Goerres

Johns Hopkins University

M. D. Ketcha

Johns Hopkins University

Mehran Armand

Johns Hopkins University
