David G. Gobbi
Queen's University
Publications
Featured research published by David G. Gobbi.
Medical Image Computing and Computer-Assisted Intervention | 2000
David G. Gobbi; Roch M. Comeau; Terry M. Peters
Performing a craniotomy will cause brain tissue to shift. As a result of the craniotomy, the accuracy of stereotactic localization techniques is reduced unless the brain shift can be accurately measured. If an ultrasound probe is tracked by a 3D optical tracking system, intra-operative ultrasound images acquired through the craniotomy can be compared to pre-operative MRI images to quantify the shift. We have developed 2D and 3D image overlay tools which allow interactive, real-time visualization of the shift, as well as software that uses homologous landmarks between the ultrasound and MRI image volumes to create a thin-plate-spline warp transformation that provides a mapping between pre-operative imaging coordinates and the shifted intra-operative coordinates. Our techniques have been demonstrated on polyvinyl alcohol cryogel phantoms which exhibit mechanical and imaging properties similar to those of the human brain.
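The comparison above rests on chaining coordinate transforms: a probe calibration maps ultrasound pixels into probe space, the optical tracker maps the probe into world space, and a patient registration maps world space into pre-operative MRI space. A minimal numpy sketch of that chain, with all matrix values invented for illustration (they are not the paper's calibration):

```python
import numpy as np

def homogeneous(A, t):
    """Build a 4x4 homogeneous transform from a 3x3 linear part and a translation."""
    T = np.eye(4)
    T[:3, :3] = A
    T[:3, 3] = t
    return T

# Hypothetical values for illustration only.
# Calibration: ultrasound pixel coordinates -> probe coordinates (mm).
image_to_probe = homogeneous(np.diag([0.2, 0.2, 1.0]), [-15.0, -2.0, 0.0])
# Pose reported by the optical tracker: probe -> world coordinates.
probe_to_world = homogeneous(np.eye(3), [100.0, 50.0, 30.0])
# Patient registration: world coordinates -> pre-operative MRI coordinates.
world_to_mri = homogeneous(np.eye(3), [-80.0, -40.0, -25.0])

# Chain the transforms: a pixel in the live ultrasound frame maps to MRI space.
image_to_mri = world_to_mri @ probe_to_world @ image_to_probe

pixel = np.array([320.0, 240.0, 0.0, 1.0])   # centre of a 640x480 frame
mri_point = image_to_mri @ pixel
print(mri_point[:3])
```

With a nonlinear landmark warp added on top (as in the paper), the rigid chain above would be composed with the thin-plate-spline transformation as a final step.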
Medical Image Computing and Computer-Assisted Intervention | 1999
David G. Gobbi; Roch M. Comeau; Terry M. Peters
Stereotactic techniques are prevalent in neurosurgery. A fundamental assumption of stereotaxis is that the brain is a rigid body. It has been demonstrated, however, that following a craniotomy the brain tissue will shift by 10 mm on average. We are investigating intra-operative ultrasound, using an optical tracking system to record the position and orientation of the ultrasound probe, as a method of measuring and correcting for brain shift. We have determined that the accuracy to which ultrasound image coordinates can be tracked (including the errors involved in calibration) is better than 0.5 mm within the ultrasound image plane, and better than 2 mm perpendicular to the plane. We apply two visualization methods to compare the ultrasound and the pre-operative MRI: the first is real-time overlay of the ultrasound with the co-planar MR slice, and the second is the real-time texture mapping of the ultrasound video into a 3D view with the MRI. Our technique is demonstrated on a polyvinyl alcohol cryogel phantom.
Journal of Digital Imaging | 2007
Andinet Enquobahrie; Patrick Cheng; Kevin Gary; Luis Ibanez; David G. Gobbi; Frank Lindseth; Ziv Yaniv; Stephen R. Aylward; Julien Jomier; Kevin Cleary
This paper presents an overview of the image-guided surgery toolkit (IGSTK). IGSTK is an open source C++ software library that provides the basic components needed to develop image-guided surgery applications. It is intended for fast prototyping and development of image-guided surgery applications. The toolkit was developed through a collaboration between academic and industry partners. Because IGSTK was designed for safety-critical applications, the development team has adopted lightweight software processes that emphasize safety and robustness while, at the same time, supporting geographically separated developers. A software process that is philosophically similar to agile software methods was adopted, emphasizing iterative, incremental, and test-driven development principles. The guiding principle in the architecture design of IGSTK is patient safety. The IGSTK team implemented a component-based architecture and used state machine software design methodologies to improve the reliability and safety of the components. Every IGSTK component has a well-defined set of features that are governed by state machines. The state machine ensures that the component is always in a valid state and that all state transitions are valid and meaningful. Realizing that the continued success and viability of an open source toolkit depends on a strong user community, the IGSTK team is following several key strategies to build an active user community. These include maintaining users’ and developers’ mailing lists, providing documentation (application programming interface reference document and book), presenting demonstration applications, and delivering tutorial sessions at relevant scientific conferences.
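The state-machine discipline described above can be illustrated with a short sketch: every (state, event) pair not explicitly listed is rejected, so the component can never be driven into an undefined state. The class and state names below are illustrative only, not the IGSTK C++ API:

```python
# Minimal sketch of state-machine-governed component behaviour.
# States and events are invented for illustration; IGSTK's actual
# components are C++ classes with far richer state machines.
class TrackerComponent:
    # Allowed transitions: anything not listed here is an invalid request.
    TRANSITIONS = {
        ("Idle", "open"): "CommunicationEstablished",
        ("CommunicationEstablished", "start"): "Tracking",
        ("Tracking", "stop"): "CommunicationEstablished",
        ("CommunicationEstablished", "close"): "Idle",
    }

    def __init__(self):
        self.state = "Idle"

    def handle(self, event):
        key = (self.state, event)
        if key not in self.TRANSITIONS:
            # Invalid requests are rejected instead of silently
            # corrupting the component's state.
            raise ValueError(f"event {event!r} not allowed in state {self.state!r}")
        self.state = self.TRANSITIONS[key]
        return self.state

t = TrackerComponent()
t.handle("open")
t.handle("start")
print(t.state)   # Tracking
```

In a safety-critical setting the invalid-request branch would typically report an error event rather than raise, but the core guarantee is the same: all transitions are enumerated and checked.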
Medical Image Computing and Computer-Assisted Intervention | 2002
David G. Gobbi; Terry M. Peters
The most attractive feature of 2D B-mode ultrasound for intra-operative use is that it is both a real-time and a highly interactive modality. Most 3D freehand reconstruction methods, however, are not fully interactive because they do not allow the display of any part of the 3D ultrasound image until all data collection and reconstruction are finished. We describe a technique whereby the 3D reconstruction occurs in real-time as the data is acquired, and where the operator can view the progress of the reconstruction on three orthogonal slice views through the ultrasound volume. Capture of the ultrasound data can be immediately followed by a straightforward, interactive nonlinear registration of a pre-operative MRI volume to match the intra-operative ultrasound. We demonstrate our system on a deformable, multi-modal PVA-cryogel phantom and during a clinical surgery.
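The core of incremental freehand reconstruction is pasting each tracked 2D frame into a 3D volume as soon as its pose is known, so the operator sees the volume fill in during the sweep. A toy numpy sketch of nearest-voxel pixel insertion (spacings, poses, and frame contents are all made up; the paper's actual compounding is more sophisticated):

```python
import numpy as np

# Toy incremental reconstruction: paste each 2D frame into the nearest
# voxels of a 3D volume using the frame's tracked 4x4 pose (mm).
volume = np.zeros((64, 64, 64), dtype=np.float32)
spacing = 1.0                                   # mm per voxel, isotropic

def paste_frame(volume, frame, frame_to_volume):
    """Insert one 2D frame into the volume (nearest-voxel insertion)."""
    h, w = frame.shape
    ys, xs = np.mgrid[0:h, 0:w]
    # Homogeneous pixel coordinates in the frame plane (z = 0).
    pts = np.stack([xs.ravel(), ys.ravel(),
                    np.zeros(h * w), np.ones(h * w)])
    vox = np.rint((frame_to_volume @ pts)[:3] / spacing).astype(int)
    keep = np.all((vox >= 0) & (vox < np.array(volume.shape)[:, None]), axis=0)
    volume[vox[0, keep], vox[1, keep], vox[2, keep]] = frame.ravel()[keep]

# Simulate a sweep: identical frames translated along z, one per pose update.
frame = np.full((32, 32), 100.0, dtype=np.float32)
for z in range(10, 20):
    pose = np.eye(4)
    pose[2, 3] = float(z)                       # frame plane at depth z mm
    paste_frame(volume, frame, pose)

print(volume[16, 16, 10:20])    # voxels along the swept direction are filled
```

Displaying three orthogonal slices of `volume` after each `paste_frame` call gives exactly the kind of live feedback the abstract describes.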
IEEE Computer | 2006
Kevin Gary; Luis Ibanez; Stephen R. Aylward; David G. Gobbi; M.B. Blake; Kevin Cleary
Image-guided surgery applies leading-edge technology and clinical practices to provide better quality of life to patients who can benefit from minimally invasive procedures. Reliable software is a critical component of image-guided surgical applications, yet costly expertise and technology infrastructure barriers hamper current research and commercialization efforts in this area. IGSTK applies the open source development and delivery model to this problem. Agile and component-based software engineering principles reduce the costs and risks associated with adopting this new technology, resulting in a safe, inexpensive, robust, shareable, and reusable software infrastructure.
NeuroImage | 2007
Simon P. DiMaio; Tina Kapur; Kevin Cleary; Stephen R. Aylward; Peter Kazanzides; Kirby G. Vosburgh; Randy E. Ellis; James S. Duncan; Keyvan Farahani; Heinz U. Lemke; Terry M. Peters; William E. Lorensen; David G. Gobbi; John Haller; Laurence P. Clarke; Stephen M. Pizer; Russell H. Taylor; Robert L. Galloway; Gabor Fichtinger; Nobuhiko Hata; Kimberly Lawson; Clare M. Tempany; Ron Kikinis; Ferenc A. Jolesz
System development for image-guided therapy (IGT), or image-guided interventions (IGI), continues to be an area of active interest across academic and industry groups. This is an emerging field that is growing rapidly: major academic institutions and medical device manufacturers have produced IGT technologies that are in routine clinical use, dozens of high-impact papers appear in well-regarded journals each year, and several small companies have successfully commercialized sophisticated IGT systems. In meetings between IGT investigators over the last two years, a consensus has emerged that several key areas must be addressed collaboratively by the community to reach the next level of impact and efficiency in IGT research and development to improve patient care. These meetings culminated in a two-day workshop that brought together several academic and industrial leaders in the field today. The goals of the workshop were to identify gaps in the engineering infrastructure available to IGT researchers, develop the role of research funding agencies and the recently established US-based National Center for Image Guided Therapy (NCIGT), and ultimately to facilitate the transfer of technology among research centers that are sponsored by the National Institutes of Health (NIH). Workshop discussions spanned many of the current challenges in the development and deployment of new IGT systems. Key challenges were identified in a number of areas, including: validation standards; workflows, use-cases, and application requirements; component reusability; and device interface standards. This report elaborates on these key points and proposes research challenges that are to be addressed by a joint effort between academic, industry, and NIH participants.
Computerized Medical Imaging and Graphics | 2003
David G. Gobbi; Terry M. Peters
We have contributed an efficient, object-oriented implementation of 3D nonlinear transformations to the Visualization Toolkit that can be applied to a wide variety of data types, including images and polygonal meshes. The transformations are performed via thin-plate splines or via interpolation of a regular lattice of displacement vectors, and are part of a framework that is easily extensible to other nonlinear transformation types. In this paper we demonstrate application of these transformations in medical imaging in general and image-guided surgery in particular, and present a series of performance benchmarks.
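The thin-plate-spline transform mentioned above solves a small linear system from homologous landmarks and then evaluates a radial basis expansion plus an affine term at any point; in 3D the appropriate kernel is U(r) = r, which is the basis VTK's TPS transform uses for volumes. A numpy sketch of the fit and evaluation, with invented landmarks (not VTK's implementation, just the same mathematics):

```python
import numpy as np

def tps_fit(src, dst):
    """Solve for 3D thin-plate-spline coefficients mapping src landmarks onto dst."""
    n = len(src)
    K = np.linalg.norm(src[:, None, :] - src[None, :, :], axis=-1)  # U(r) = r
    P = np.hstack([np.ones((n, 1)), src])                           # affine part
    A = np.zeros((n + 4, n + 4))
    A[:n, :n] = K
    A[:n, n:] = P
    A[n:, :n] = P.T
    b = np.vstack([dst, np.zeros((4, 3))])
    return np.linalg.solve(A, b)            # (n + 4, 3) coefficient matrix

def tps_apply(src, coef, x):
    """Evaluate the fitted spline at query points x (m, 3)."""
    U = np.linalg.norm(x[:, None, :] - src[None, :, :], axis=-1)
    P = np.hstack([np.ones((len(x), 1)), x])
    return U @ coef[:len(src)] + P @ coef[len(src):]

rng = np.random.default_rng(0)
src = rng.uniform(0, 100, (8, 3))           # invented landmark positions (mm)
dst = src + rng.normal(0, 3, (8, 3))        # landmarks displaced by a few mm
coef = tps_fit(src, dst)
# The spline interpolates: each source landmark maps exactly to its target.
print(np.allclose(tps_apply(src, coef, src), dst))
```

The lattice-of-displacement-vectors transform in the same framework trades this exact evaluation for interpolation on a regular grid, which is much faster when warping whole image volumes.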
Computer Assisted Radiology and Surgery | 2008
Andinet Enquobahrie; David G. Gobbi; Matthew W. Turek; Patrick Cheng; Ziv Yaniv; Frank Lindseth; Kevin Cleary
Objective: Many image-guided surgery applications require tracking devices as part of their core functionality. The Image-Guided Surgery Toolkit (IGSTK) was designed and developed to interface tracking devices with software applications incorporating medical images. Methods: IGSTK was designed as an open source C++ library that provides the basic components needed for fast prototyping and development of image-guided surgery applications. This library follows a component-based architecture with several components designed for specific sets of image-guided surgery functions. At the core of the toolkit is the tracker component that handles communication between a control computer and navigation device to gather pose measurements of surgical instruments present in the surgical scene. The representations of the tracked instruments are superimposed on anatomical images to provide visual feedback to the clinician during surgical procedures. Results: The initial version of the IGSTK toolkit has been released in the public domain and several trackers are supported. The toolkit and related information are available at http://www.igstk.org. Conclusion: With the increased popularity of minimally invasive procedures in health care, several tracking devices have been developed for medical applications. Designing and implementing high-quality and safe software to handle these different types of trackers in a common framework is a challenging task. It requires establishing key software design principles that emphasize abstraction, extensibility, reusability, fault-tolerance, and portability. IGSTK is an open source library that satisfies these needs for the image-guided surgery community.
Medical Image Computing and Computer-Assisted Intervention | 2001
Glen Lehmann; A. M. Chiu; David G. Gobbi; Yves P. Starreveld; Douglas Boyd; Maria Drangova; Terry M. Peters
Conventional open-heart coronary bypass surgery requires a 30-cm incision through the breastbone and stopping the beating heart, which inflicts great pain and trauma on patients and entails a lengthy recovery. Recently, a robot-assisted minimally invasive surgical technique has been introduced to coronary bypass to minimize incisions and avoid cardiac arrest, in order to eliminate the medical complications associated with open-heart surgery. Despite its initial success, this innovation has its own limitations and problems. This paper discusses these limitations and proposes a framework that incorporates image-guidance techniques into MIRCAB surgery. We present two aspects of our preliminary work: 1) a Virtual Cardiac Surgical Planning system developed to visualize and manipulate simulated robotic surgical tools within the virtual patient; 2) our work towards extending the static planning system to a dynamic one that models the position, orientation and dynamics of the heart, relative to the chest wall, during surgery.
Medical Imaging 2001: Visualization, Display, and Image-Guided Procedures | 2000
David G. Gobbi; Belinda Kh Lee; Terence M. Peters
B-Mode ultrasound is often used during neurosurgery to provide intra-operative images of the brain through a craniotomy, but the use of 3D ultrasound during surgery is still in its infancy. We have developed a system that provides real-time freehand 3D ultrasound reconstruction at a reduced resolution. The reconstruction proceeds incrementally and the 3D image is overlaid, via a computer, on a pre-operative 3D MRI scan. This provides the operator with the necessary feedback to maintain a constant freehand sweep-rate, and also ensures that the sweep covers the desired anatomical volume. All of the ultrasound video frames are buffered, and a full-resolution, compounded reconstruction proceeds once the manual sweep is complete. We have also developed tools for manual tagging of homologous landmarks in the 3D MRI and 3D ultrasound volumes that use a piecewise cubic approximation of thin-plate spline interpolation to achieve interactive nonlinear registration and warping of the MRI volume to the ultrasound volume. Each time a homologous point-pair is identified by the user, the image of the warped MRI is updated on the computer screen in less than 0.5 s.
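The sub-0.5 s update relies on approximating the spline rather than evaluating it exactly at every voxel: sample the displacement field once on a coarse lattice, then interpolate between lattice points when warping. The sketch below uses plain trilinear interpolation as a stand-in for the paper's piecewise-cubic scheme, and an invented smooth function as the "expensive" warp:

```python
import numpy as np

def warp(p):
    """Stand-in for a costly nonlinear displacement field (mm)."""
    return np.sin(p / 20.0) * 5.0

step = 8.0                                       # lattice spacing in mm
grid = np.arange(0.0, 65.0, step)                # 9 samples per axis
gx, gy, gz = np.meshgrid(grid, grid, grid, indexing="ij")
lattice = warp(np.stack([gx, gy, gz], axis=-1))  # precomputed displacements

def fast_warp(p):
    """Trilinear interpolation of the precomputed lattice at point p."""
    f = p / step
    i = np.floor(f).astype(int)
    t = f - i
    out = np.zeros(3)
    for dx in (0, 1):                            # blend the 8 surrounding
        for dy in (0, 1):                        # lattice points
            for dz in (0, 1):
                w = ((t[0] if dx else 1 - t[0]) *
                     (t[1] if dy else 1 - t[1]) *
                     (t[2] if dz else 1 - t[2]))
                out += w * lattice[i[0] + dx, i[1] + dy, i[2] + dz]
    return out

p = np.array([13.0, 27.0, 41.0])
print(np.abs(fast_warp(p) - warp(p)).max())      # small interpolation error
```

Because the lattice is computed once per landmark update and each voxel lookup is then a handful of multiplies, whole-volume warps become fast enough for the interactive updates the abstract describes.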