Publication

Featured research published by Jonathan M. Sorger.


Medical Image Analysis | 2013

Optical techniques for 3D surface reconstruction in computer-assisted laparoscopic surgery

Lena Maier-Hein; Peter Mountney; Adrien Bartoli; Haytham Elhawary; Daniel S. Elson; Anja Groch; Andreas Kolb; Marcos A. Rodrigues; Jonathan M. Sorger; Stefanie Speidel; Danail Stoyanov

One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft-tissues. This information is a prerequisite for the registration of multi-modal patient-specific data for enhancing the surgeon's navigation capabilities by observing beyond exposed tissue surfaces and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This paper reviews the state-of-the-art methods for optical intra-operative 3D reconstruction in laparoscopic surgery and discusses the technical challenges and future perspectives towards clinical translation. With the recent paradigm shift of surgical practice towards MIS and new developments in 3D optical imaging, this is a timely discussion about technologies that could facilitate complex CAS procedures in dynamic and deformable anatomical regions.
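As a minimal illustration of the passive stereo principle covered by such reviews, depth recovery from a rectified stereo laparoscope reduces to the pinhole relation Z = f·B/d. The sketch below is a generic textbook example, not code from the paper; the calibration values are made up:

```python
import numpy as np

def depth_from_disparity(disparity, focal_px, baseline_mm):
    """Classic rectified-stereo relation: Z = f * B / d.

    disparity   -- per-pixel horizontal disparity in pixels (d > 0 where matched)
    focal_px    -- focal length in pixels
    baseline_mm -- distance between the two camera centres in mm
    """
    d = np.asarray(disparity, dtype=float)
    depth = np.full_like(d, np.inf)   # unmatched pixels get "infinite" depth
    valid = d > 0
    depth[valid] = focal_px * baseline_mm / d[valid]
    return depth

# Toy 2x2 disparity map from a hypothetical rectified laparoscopic stereo pair.
disp = np.array([[10.0, 20.0],
                 [40.0,  0.0]])   # 0 = no stereo match found
z = depth_from_disparity(disp, focal_px=800.0, baseline_mm=5.0)
# 800 * 5 / 10 = 400 mm; larger disparity means closer tissue.
```

Note the inverse relationship: disparity resolution limits depth precision most severely for distant surfaces, one reason active techniques (structured light, time-of-flight) are also reviewed.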


Journal of Endourology | 2012

Nerve Mapping for Prostatectomies: Novel Technologies Under Development

Karthikeyan Ponnusamy; Jonathan M. Sorger; Catherine J. Mohr

Prostatic neuroanatomy is difficult to visualize intraoperatively and can be extremely variable. Damage to these nerves during prostatectomies may lead to postoperative complications such as erectile dysfunction and incontinence. This review aims to discuss the prostatic neuroanatomy, sites of potential nerve damage during a prostatectomy, and nerve-mapping technologies being developed to prevent neural injury. These technologies include stimulation, dyes, and direct visualization. Nerve stimulation works by testing an area and observing a physiologic response but is limited by the long half-life for an erectile response; examples include CaverMap, ProPep, and optical nerve stimulation. Few nerve dyes have been approved by the Food and Drug Administration (FDA) because of the extensive testing required; examples of nerve dyes include compounds from Avelas and General Electric, fluorescent cholera toxin subunit B, indocyanine green, fluorescent inactivated herpes simplex 2, and Fluoro-Gold. Direct visualization techniques have a simpler FDA approval process; examples include optical coherence tomography, multiphoton microscopy, ultrasound, and coherent anti-Stokes Raman scattering. Many researchers are developing novel technologies that can be categorized as stimulation-based, dye-based, or direct visualization. As yet, none has shown clear evidence of improving surgical outcomes, and consequently none has achieved wide adoption. Further development of these technologies may lead to improved complication rates after prostatectomies. Clinically, some technologies have demonstrated utility in predicting the development of complications. By using that information, more aggressive rehabilitation programs may lead to improved long-term function. These technologies can also be applied for research to improve our knowledge of the neuroanatomy and physiology of erection and incontinence.


Journal of Robotic Surgery | 2013

Toward intraoperative image-guided transoral robotic surgery

Wen P. Liu; Sureerat Reaungamornrat; Anton Deguet; Jonathan M. Sorger; Jeffrey H. Siewerdsen; Jeremy D. Richmon; Russell H. Taylor

This paper presents the development and evaluation of video augmentation on the stereoscopic da Vinci S system with intraoperative image guidance for base of tongue tumor resection in transoral robotic surgery (TORS). The proposed workflow for image-guided TORS begins by identifying and segmenting critical oropharyngeal structures (e.g., the tumor and adjacent arteries and nerves) from preoperative computed tomography (CT) and/or magnetic resonance (MR) imaging. These preoperative planned data can be deformably registered to the intraoperative endoscopic view using mobile C-arm cone-beam computed tomography (CBCT) [Daly et al. in Med Phys 2006; 33(10):3767–3780; Siewerdsen et al. in SPIE Medical Imaging 2009]. Augmentation of TORS endoscopic video defining surgical targets and critical structures has the potential to improve navigation, spatial orientation, and confidence in tumor resection. Experiments in animal specimens achieved statistically significant improvement in target localization error when comparing the proposed image guidance system to simulated current practice.
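The augmentation step described above ultimately requires projecting registered 3D structures (e.g., a segmented tumor margin) into the 2D endoscopic view. As a hedged sketch of that one step only, not the paper's pipeline, the standard pinhole projection can be written as follows; the intrinsic matrix and pose here are hypothetical placeholders, not calibration values from the study:

```python
import numpy as np

def project_points(points_ct, R, t, K):
    """Project registered 3D points (CT/CBCT frame, mm) into endoscope pixels.

    R, t -- rigid transform taking CT-frame points into the camera frame
    K    -- 3x3 endoscope intrinsic matrix (from camera calibration)
    Returns an (N, 2) array of pixel coordinates.
    """
    cam = points_ct @ R.T + t          # into camera coordinates
    uvw = cam @ K.T                    # apply intrinsics
    return uvw[:, :2] / uvw[:, 2:3]    # perspective divide

# Hypothetical calibration: 1000 px focal length, principal point (640, 360).
K = np.array([[1000.0,    0.0, 640.0],
              [   0.0, 1000.0, 360.0],
              [   0.0,    0.0,   1.0]])
R = np.eye(3)                          # identity: CT frame aligned with camera
t = np.array([0.0, 0.0, 100.0])        # structure 100 mm in front of the lens

tumor_margin = np.array([[0.0, 0.0, 0.0],
                         [10.0, 0.0, 0.0]])
px = project_points(tumor_margin, R, t, K)
# The first point lands at the principal point; the second is offset
# by f * X / Z = 1000 * 10 / 100 = 100 px horizontally.
```

In practice R and t come from the deformable CBCT-to-endoscope registration, which is the hard part the paper addresses; this sketch assumes that transform is already known.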


Archives of Otolaryngology-Head & Neck Surgery | 2014

Cadaveric Feasibility Study of da Vinci Si-Assisted Cochlear Implant With Augmented Visual Navigation for Otologic Surgery

Wen P. Liu; Mahdi Azizian; Jonathan M. Sorger; Russell H. Taylor; Brian K. Reilly; Kevin Cleary; Diego Preciado

IMPORTANCE To our knowledge, this is the first reported cadaveric feasibility study of a master-slave-assisted cochlear implant procedure in the otolaryngology-head and neck surgery field using the da Vinci Si system (da Vinci Surgical System; Intuitive Surgical, Inc). We describe the surgical workflow adaptations using a minimally invasive system and image guidance integrating intraoperative cone beam computed tomography through augmented reality. OBJECTIVE To test the feasibility of da Vinci Si-assisted cochlear implant surgery with augmented reality, with visualization of critical structures and facilitation with precise cochleostomy for electrode insertion. DESIGN AND SETTING Cadaveric case study of bilateral cochlear implant approaches conducted at Intuitive Surgical Inc, Sunnyvale, California. INTERVENTIONS Bilateral cadaveric mastoidectomies, posterior tympanostomies, and cochleostomies were performed using the da Vinci Si system on a single adult human donor cadaveric specimen. MAIN OUTCOMES AND MEASURES Radiographic confirmation of successful cochleostomies, placement of a phantom cochlear implant wire, and visual confirmation of critical anatomic structures (facial nerve, cochlea, and round window) in augmented stereoendoscopy. RESULTS With a surgical mean time of 160 minutes per side, complete bilateral cochlear implant procedures were successfully performed with no violation of critical structures, notably the facial nerve, chorda tympani, sigmoid sinus, dura, or ossicles. Augmented reality image overlay of the facial nerve, round window position, and basal turn of the cochlea was precise. Postoperative cone beam computed tomography scans confirmed successful placement of the phantom implant electrode array into the basal turn of the cochlea. 
CONCLUSIONS AND RELEVANCE To our knowledge, this is the first study in the otolaryngology-head and neck surgery literature examining the use of master-slave-assisted cochleostomy with augmented reality for cochlear implants using the da Vinci Si system. The described system for cochleostomy has the potential to improve the surgeon's confidence, as well as surgical safety, efficiency, and precision by filtering tremor. The integration of augmented reality may be valuable for surgeons dealing with complex cases of congenital anatomic abnormality, for revision cochlear implant with distorted anatomy and poorly pneumatized mastoids, and as a method of interactive teaching. Further research into the cost-benefit ratio of da Vinci Si-assisted otologic surgery, as well as refinements of the proposed workflow, is required before considering clinical studies.


International Journal of Medical Robotics and Computer Assisted Surgery | 2015

Intraoperative image-guided transoral robotic surgery: pre-clinical studies

Wen P. Liu; Sureerat Reaungamornrat; Jonathan M. Sorger; Jeffrey H. Siewerdsen; Russell H. Taylor; Jeremy D. Richmon

Adequate resection of oropharyngeal neoplasms with transoral robotic surgery (TORS) poses multiple challenges, including difficulty with access, inability to palpate the tumor, loss of landmarks, and intraoperative patient positioning with mouth retractor and tongue extended, which creates significant tissue distortion relative to preoperative imaging.


Cancer Research | 2017

Regulatory Aspects of Optical Methods and Exogenous Targets for Cancer Detection

Willemieke S. Tummers; Jason M. Warram; Kiranya E. Tipirneni; John Fengler; Paula Jacobs; Lalitha K. Shankar; Lori Henderson; Betsy Ballard; Brian W. Pogue; Jamey P. Weichert; Michael Bouvet; Jonathan M. Sorger; Christopher H. Contag; John V. Frangioni; Michael F. Tweedle; James P. Basilion; Sanjiv S. Gambhir; Eben L. Rosenthal

Considerable advances in cancer-specific optical imaging have improved the precision of tumor resection. In comparison to traditional imaging modalities, this technology is unique in its ability to provide real-time feedback to the operating surgeon. Given the significant clinical implications of optical imaging, there is an urgent need to standardize surgical navigation tools and contrast agents to facilitate swift regulatory approval. Because fluorescence-enhanced surgery requires a combination of both device and drug, each may be developed in conjunction, or separately, which are important considerations in the approval process. This report is the result of a one-day meeting held on May 4, 2016 with officials from the National Cancer Institute, the FDA, members of the American Society of Image-Guided Surgery, and members of the World Molecular Imaging Society, which discussed consensus methods for FDA-directed human testing and approval of investigational optical imaging devices as well as contrast agents for surgical applications. The goal of this workshop was to discuss FDA approval requirements and the expectations for approval of these novel drugs and devices, packaged separately or in combination, within the context of optical surgical navigation. In addition, the workshop acted to provide clarity to the research community on data collection and trial design. Reported here are the specific discussion items and recommendations from this critical and timely meeting. Cancer Res; 77(9); 2197-206. ©2017 AACR.


Journal of Robotic Surgery | 2015

Augmented reality and cone beam CT guidance for transoral robotic surgery

Wen P. Liu; Jeremy D. Richmon; Jonathan M. Sorger; Mahdi Azizian; Russell H. Taylor

In transoral robotic surgery, preoperative image data do not reflect large deformations of the operative workspace from perioperative setup. To address this challenge, in this study we explore image guidance with cone beam computed tomographic angiography to guide the dissection of critical vascular landmarks and resection of base-of-tongue neoplasms with adequate margins for transoral robotic surgery. We identify critical vascular landmarks from perioperative C-arm imaging to augment the stereoscopic view of a da Vinci Si robot, in addition to incorporating visual feedback from relative tool positions. Experiments resecting base-of-tongue mock tumors were conducted on a series of ex vivo and in vivo animal models comparing the proposed workflow for video augmentation to standard non-augmented practice and alternative, fluoroscopy-based image guidance. Accurate identification of registered augmented critical anatomy during controlled arterial dissection and en bloc mock tumor resection was possible with the augmented reality system. The proposed image-guided robotic system also achieved improved resection ratios of mock tumor margins (1.00) when compared to control scenarios (0.00) and alternative methods of image guidance (0.58). The experimental results show the feasibility of the proposed workflow and advantages of cone beam computed tomography image guidance through video augmentation of the primary stereo endoscopy as compared to control and alternative navigation methods.
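The abstract reports resection ratios without giving the underlying formula. One plausible surrogate metric, offered here purely as an illustration (the function name, threshold, and point-cloud formulation are invented, not taken from the paper), is the fraction of tumor-surface points cleared by at least the required margin:

```python
import numpy as np

def margin_ratio(tumor_pts, cut_pts, required_mm=5.0):
    """Fraction of tumor-surface points cleared by at least `required_mm`.

    tumor_pts -- (N, 3) sampled points on the tumor surface (mm)
    cut_pts   -- (M, 3) sampled points on the resection surface (mm)
    """
    # Distance from each tumor point to its nearest point on the cut surface.
    d = np.linalg.norm(tumor_pts[:, None, :] - cut_pts[None, :, :], axis=2)
    nearest = d.min(axis=1)
    return float(np.mean(nearest >= required_mm))

# Toy example: one tumor point 6 mm from the cut, the other only 3 mm.
tumor = np.array([[0.0, 0.0, 0.0],
                  [0.0, 3.0, 0.0]])
cut = np.array([[0.0, 6.0, 0.0]])
r = margin_ratio(tumor, cut)   # half the surface meets the 5 mm margin
```

A ratio of 1.00, as reported for the augmented condition, would mean every sampled tumor-surface point met the margin; 0.00 would mean none did.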


Computer Assisted Radiology and Surgery | 2015

2D-3D radiograph to cone-beam computed tomography (CBCT) registration for C-arm image-guided robotic surgery

Wen Pei Liu; Yoshito Otake; Mahdi Azizian; Oliver Wagner; Jonathan M. Sorger; Mehran Armand; Russell H. Taylor

Purpose: C-arm radiographs are commonly used for intraoperative image guidance in surgical interventions. Fluoroscopy is a cost-effective real-time modality, although image quality can vary greatly depending on the target anatomy. Cone-beam computed tomography (CBCT) scans are sometimes available, so 2D-3D registration is needed for intra-procedural guidance. C-arm radiographs were registered to CBCT scans and used for 3D localization of peritumor fiducials during a minimally invasive thoracic intervention with a da Vinci Si robot. Methods: Intensity-based 2D-3D registration of intraoperative radiographs to CBCT was performed. The feasible range of X-ray projections achievable by a C-arm positioned around a da Vinci Si surgical robot, configured for robotic wedge resection, was determined using phantom models. Experiments were conducted on synthetic phantoms and animals imaged with an OEC 9600 and a Siemens Artis zeego, representing the spectrum of different C-arm systems currently available for clinical use. Results: The image guidance workflow was feasible using either an optically tracked OEC 9600 or a Siemens Artis zeego C-arm, resulting in an angular difference of Δθ ≈ 30°.
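The abstract names intensity-based 2D-3D registration: a digitally reconstructed radiograph (DRR) is simulated from the volume for each candidate pose and compared to the real radiograph with an intensity similarity measure. The toy below, a hedged sketch and not the paper's pipeline, collapses this to a single lateral-shift parameter searched exhaustively, with a sum-projection DRR and normalized cross-correlation; all names (`drr`, `register`) are illustrative:

```python
import numpy as np

def drr(volume, shift):
    """Crude digitally reconstructed radiograph: shift the volume laterally
    by `shift` voxels (wrapping), then sum along the source-detector axis."""
    return np.roll(volume, shift, axis=1).sum(axis=0)

def ncc(a, b):
    """Normalized cross-correlation between two same-size 2D images."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def register(radiograph, volume, search=range(-3, 4)):
    """Exhaustive 1-DOF intensity-based 2D-3D registration: pick the
    lateral shift whose DRR best matches the observed radiograph."""
    return max(search, key=lambda s: ncc(radiograph, drr(volume, s)))

rng = np.random.default_rng(0)
vol = rng.random((8, 16, 16))     # toy stand-in for a CBCT volume
target = drr(vol, 2)              # simulated "intraoperative radiograph"
best = register(target, vol)      # recovers the known shift of 2 voxels
```

Real systems replace the sum-projection with physically accurate ray casting, optimize six rigid degrees of freedom with a nonlinear optimizer, and use robust similarity measures; the structure (simulate, compare, update pose) is the same.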


Proceedings of SPIE | 2013

A gaussian mixture + demons deformable registration method for cone-beam CT-guided robotic transoral base-of-tongue surgery

S. Reaungamornrat; Wen P. Liu; Sebastian Schafer; Yoshito Otake; Sajendra Nithiananthan; Ali Uneri; Jeremy D. Richmon; Jonathan M. Sorger; Jeffrey H. Siewerdsen; Russell H. Taylor
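No abstract is indexed for this entry. For readers unfamiliar with the "demons" half of the title, the classic Thirion demons update, which the paper presumably combines with a Gaussian-mixture component per its title, is textbook background and can be sketched as follows; this is not the paper's implementation:

```python
import numpy as np

def demons_update(fixed, moving, eps=1e-9):
    """One Thirion demons step (2D): a displacement field pushing `moving`
    toward `fixed`, driven by the intensity difference and the fixed-image
    gradient:  u = (m - f) * grad(f) / (|grad(f)|^2 + (m - f)^2)."""
    diff = moving - fixed
    gy, gx = np.gradient(fixed)            # image gradients (rows, cols)
    denom = gx**2 + gy**2 + diff**2 + eps  # eps guards flat, matched regions
    return np.stack([diff * gx / denom,    # x-component of the update
                     diff * gy / denom])   # y-component of the update

# Toy example: a horizontal intensity ramp and a brighter copy of it.
fixed = np.outer(np.ones(5), np.arange(5.0))   # gradient is (1, 0) everywhere
moving = fixed + 0.5
u = demons_update(fixed, moving)
# Everywhere: u_x = 0.5 * 1 / (1 + 0.25) = 0.4, u_y = 0.
```

In a full demons registration this update is iterated, smoothed with a Gaussian at each step for regularization, and used to warp the moving image.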


Archive | 2015

Clinical Milestones for Optical Imaging

Jonathan M. Sorger


Collaboration


Dive into Jonathan M. Sorger's collaborations.

Top Co-Authors

Wen P. Liu

Johns Hopkins University
Jeremy D. Richmon

Massachusetts Eye and Ear Infirmary
Jakob Unger

University of California
Joao Lagarto

University of California