Publication


Featured research published by Kun-Chang Yu.


Computerized Medical Imaging and Graphics | 2008

3D CT-Video Fusion for Image-Guided Bronchoscopy

William E. Higgins; James P. Helferty; Kongkuo Lu; Scott A. Merritt; Lav Rai; Kun-Chang Yu

Bronchoscopic biopsy of the central-chest lymph nodes is an important step for lung-cancer staging. Before bronchoscopy, the physician first visually assesses a patient's three-dimensional (3D) computed tomography (CT) chest scan to identify suspect lymph-node sites. Next, during bronchoscopy, the physician guides the bronchoscope to each desired lymph-node site. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. Thus, the physician must essentially perform the biopsy blindly, and skill levels differ greatly between physicians. We describe an approach that enables synergistic fusion between the 3D CT data and the bronchoscopic video. Both the integrated planning and guidance system and the internal CT-video registration and fusion methods are described. Phantom, animal, and human studies illustrate the efficacy of the methods.
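
The central technical step here is CT-video registration: a virtual camera in the CT volume is adjusted until a rendered endoluminal view matches the live video frame. Below is a minimal sketch of that idea, with a toy stand-in renderer, normalized cross-correlation as the similarity metric, and Nelder-Mead optimization; the paper's actual renderer, metric, and optimizer are not specified in this abstract and will differ.

```python
import numpy as np
from scipy.optimize import minimize

def render_endoluminal_view(pose, shape=(64, 64)):
    """Toy stand-in for a CT-based endoluminal renderer.

    `pose` = (x, y, z, roll, pitch, yaw); a real system would ray-cast
    the segmented airway surface from this virtual camera pose.
    """
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    # Fake image whose content shifts with the first two pose parameters.
    return np.sin(0.2 * (xx + 10 * pose[0])) + np.cos(0.2 * (yy + 10 * pose[1]))

def ncc(a, b):
    """Normalized cross-correlation between two images (1.0 = perfect match)."""
    a = (a - a.mean()) / (a.std() + 1e-9)
    b = (b - b.mean()) / (b.std() + 1e-9)
    return float((a * b).mean())

def register(video_frame, initial_pose):
    """Find the virtual-camera pose whose rendering best matches the frame."""
    cost = lambda p: -ncc(render_endoluminal_view(p), video_frame)
    return minimize(cost, initial_pose, method="Nelder-Mead").x

# Simulate a video frame taken at an unknown pose, then recover that pose.
true_pose = np.array([0.5, -0.3, 0.0, 0.0, 0.0, 0.0])
frame = render_endoluminal_view(true_pose)
estimate = register(frame, initial_pose=np.zeros(6))
print("estimated pose:", np.round(estimate, 2))
```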


Chest | 2008

Image-Guided Bronchoscopy for Peripheral Lung Lesions: A Phantom Study

Scott A. Merritt; Jason D. Gibbs; Kun-Chang Yu; Viral Patel; Lav Rai; Duane C. Cornish; Rebecca Bascom; William E. Higgins

BACKGROUND: Ultrathin bronchoscopy guided by virtual bronchoscopy (VB) techniques shows promise for the diagnosis of peripheral lung lesions. In a phantom study, we evaluated a new real-time, VB-based, image-guided system for guiding the bronchoscopic biopsy of peripheral lung lesions and compared its performance to that of standard bronchoscopy practice.

METHODS: Twelve bronchoscopists of varying experience levels participated in the study. The task was to use an ultrathin bronchoscope and a biopsy forceps to localize 10 synthetically created lesions situated at varying airway depths. For route planning and guidance, the bronchoscopists employed either standard bronchoscopy practice or the real-time image-guided system. Outcome measures were biopsy site position error, defined as the distance from the forceps contact point to the ground-truth lesion boundary, and localization success, defined as a site identification having a biopsy site position error of ≤ 5 mm.

RESULTS: Mean (± SD) localization success more than doubled, from 43 ± 16% using standard practice to 94 ± 7.9% using image guidance (p < 10⁻¹⁵; McNemar paired test). The mean biopsy site position error dropped from 9.7 ± 9.1 mm for standard practice to 2.2 ± 2.3 mm for image guidance. For standard practice, localization success decreased from 56% for generation 3 to 4 lesions to 31% for generation 6 to 8 lesions, and was 51% for lesions on a carina vs 23% for lesions situated away from a carina. These effects were far less pronounced with image guidance: success was 97% for generation 3 to 4 lesions, 91% for generation 6 to 8 lesions, 98% for lesions on a carina, and 86% for lesions away from a carina. Bronchoscopist experience did not significantly affect performance using the image-guided system.

CONCLUSIONS: Real-time, VB-based image guidance can potentially far exceed standard bronchoscopy practice for enabling the bronchoscopic biopsy of peripheral lung lesions.
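
The study's two outcome measures are simple to compute once the forceps contact point and the ground-truth lesion boundary are known. A short illustrative computation (the boundary points and error values below are hypothetical):

```python
import numpy as np

def biopsy_position_error(forceps_contact, lesion_boundary_points):
    """Distance (mm) from the forceps contact point to the nearest
    point on the ground-truth lesion boundary."""
    d = np.linalg.norm(lesion_boundary_points - forceps_contact, axis=1)
    return float(d.min())

def localization_success(errors_mm, threshold_mm=5.0):
    """Fraction of biopsy attempts whose position error is <= 5 mm."""
    errors_mm = np.asarray(errors_mm)
    return float((errors_mm <= threshold_mm).mean())

# Hypothetical position errors (mm) from ten simulated biopsy attempts.
errors = [1.2, 3.8, 0.9, 6.5, 2.1, 4.4, 12.0, 1.7, 3.3, 2.8]
print(f"success rate: {localization_success(errors):.0%}")  # -> 80%
```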


Journal of Digital Imaging | 2010

Image-Based Reporting for Bronchoscopy

Kun-Chang Yu; Jason D. Gibbs; Michael W. Graham; William E. Higgins

Bronchoscopy is often performed for staging lung cancer. The recent development of multidetector computed tomography (MDCT) scanners and ultrathin bronchoscopes now enables the bronchoscopic biopsy and treatment of peripheral diagnostic regions of interest (ROIs). Because these ROIs are often located several generations within the airway tree, careful planning and interpretation of the bronchoscopic route is required prior to a procedure. The current practice for planning bronchoscopic procedures, however, is difficult, error-prone, and time-consuming. To alleviate these issues, we propose a method for producing and previewing reports for bronchoscopic procedures using patient-specific MDCT chest scans. The reports provide quantitative data about the bronchoscopic routes and both static and dynamic previews of the proper airway route. The previews consist of virtual bronchoscopic endoluminal renderings along the route and three-dimensional cues for a final biopsy site. The reports require little storage space and computational resources, enabling physicians to view them on a portable tablet PC. To evaluate the efficacy of the reporting system, we generated reports for 22 patients in a pilot study of human lung-cancer patients. For 17 of these patients, we used the reports in conjunction with live image-based bronchoscopic guidance to direct physicians to central-chest and peripheral ROIs for subsequent diagnostic evaluation. Our experience shows that the tool enabled a useful procedure preview and an effective means for planning strategy prior to a live bronchoscopy.
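
As a rough illustration of what such a compact, image-based report might carry, here is a hypothetical per-ROI record in Python; the field names are invented for illustration and are not the paper's schema:

```python
from dataclasses import dataclass, field

@dataclass
class BronchoscopyReport:
    """Illustrative per-ROI procedure report; all fields are hypothetical."""
    patient_id: str
    roi_name: str
    airway_generation: int          # depth of the target within the airway tree
    route_length_mm: float          # cumulative distance along the planned route
    branch_points: list = field(default_factory=list)   # turn cues at bifurcations
    preview_frames: list = field(default_factory=list)  # endoluminal renderings

report = BronchoscopyReport(
    patient_id="case-017",
    roi_name="RUL peripheral nodule",
    airway_generation=6,
    route_length_mm=142.5,
    branch_points=["trachea: left", "gen 2: anterior", "gen 4: lateral"],
)
print(report.roi_name, "/", report.route_length_mm, "mm route")
```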


Proceedings of SPIE, the International Society for Optical Engineering | 2005

3D image fusion and guidance for computer-assisted bronchoscopy

William E. Higgins; Lav Rai; Scott A. Merritt; Kongkuo Lu; N. T. Linger; Kun-Chang Yu

The standard procedure for diagnosing lung cancer involves two stages. First, the physician evaluates a high-resolution three-dimensional (3D) computed-tomography (CT) chest image to produce a procedure plan. Next, the physician performs bronchoscopy on the patient, navigating the bronchoscope through the airways to planned biopsy sites. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. In addition, these data sources differ greatly in the information they convey, and no true 3D tools exist for planning and guiding procedures. This makes it difficult for the physician to translate a CT-based procedure plan to the video domain of the bronchoscope. Thus, the physician must essentially perform the biopsy blindly, and skill levels differ greatly between physicians. We describe a system that enables direct 3D CT-based procedure planning and provides direct 3D guidance during bronchoscopy. 3D CT-based information on biopsy sites is provided interactively as the physician moves the bronchoscope. Moreover, graphical information is provided during the procedure through a live fusion of the 3D CT data and the bronchoscopic video. This information is coupled with a series of computer-graphics tools to give the physician an augmented-reality view of the patient's interior anatomy during a procedure. Through a series of controlled tests and studies with human lung-cancer patients, we have found that the system not only reduces the variation in skill level between physicians, but also increases the biopsy success rate.
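
One concrete piece of the described CT-video fusion is graphically superimposing CT-derived structures, such as a planned biopsy site, onto the live video once the registration is known. A minimal alpha-blending sketch (the toy frame and mask are placeholders; the paper's rendering pipeline is more involved):

```python
import numpy as np

def overlay_biopsy_site(video_frame, site_mask, color=(0, 255, 0), alpha=0.4):
    """Alpha-blend a CT-derived biopsy-site mask onto an RGB video frame.

    `site_mask` is a boolean image marking where the projected biopsy
    site falls in the current bronchoscopic view.
    """
    fused = video_frame.astype(float).copy()
    fused[site_mask] = (1 - alpha) * fused[site_mask] + alpha * np.array(color)
    return fused.astype(np.uint8)

# Toy frame and a circular mask standing in for a projected biopsy site.
frame = np.full((120, 160, 3), 80, dtype=np.uint8)
yy, xx = np.mgrid[0:120, 0:160]
mask = (yy - 60) ** 2 + (xx - 80) ** 2 < 15 ** 2
print(overlay_biopsy_site(frame, mask).shape)  # (120, 160, 3)
```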


Medical Imaging 2007: Physiology, Function, and Structure from Medical Images | 2007

Method for continuous guidance of endoscopy

Scott A. Merritt; Lav Rai; Jason D. Gibbs; Kun-Chang Yu; William E. Higgins

Previous research has indicated that the use of guidance systems during endoscopy can improve physician performance and decrease skill variation. Current guidance systems, however, rely either on computationally intensive registration techniques or on costly and error-prone electromagnetic (E/M) registration techniques, neither of which fits seamlessly into the clinical workflow. We have previously proposed a real-time image-based registration technique that addresses both of these problems. We now propose a system-level approach that incorporates this technique into a complete paradigm for real-time image-based guidance, providing the physician with continuously updated navigational and guidance information. At the core of the system is a novel strategy for the guidance of endoscopy. Additional elements, such as global surface rendering, local cross-sectional views, and pertinent distances, are also incorporated into the system to provide additional utility to the physician. Phantom results were generated using bronchoscopy performed on a rapid-prototype model of a human tracheobronchial airway tree. The system has also been tested in ongoing live human tests; thus far, ten such tests, focused on bronchoscopic intervention in pulmonary patients, have been run successfully.
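
Among the "pertinent distances" such a system can report is the distance remaining to the biopsy site along the planned route. A small sketch, assuming the route is stored as a polyline of 3D centerline points and the current bronchoscope position has been registered to one of them:

```python
import numpy as np

def remaining_distance_mm(route_points, current_index):
    """Distance left to travel along a polyline route (mm).

    `route_points` is an (N, 3) array of centerline points running from
    the registration point out to the biopsy site.
    """
    segments = np.diff(route_points[current_index:], axis=0)
    return float(np.linalg.norm(segments, axis=1).sum())

# Hypothetical 3D route through the airway tree (mm coordinates).
route = np.array([[0, 0, 0], [0, 0, 20], [5, 2, 35], [9, 8, 48], [12, 15, 60]], float)
print(f"{remaining_distance_mm(route, current_index=1):.1f} mm to target")
```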


Medical Imaging 2008: Physiology, Function, and Structure from Medical Images | 2008

Integrated system for planning peripheral bronchoscopic procedures

Jason D. Gibbs; Michael W. Graham; Kun-Chang Yu; William E. Higgins

Bronchoscopy is often performed for diagnosing lung cancer. The recent development of multidetector CT (MDCT) scanners and ultrathin bronchoscopes now enables the bronchoscopic biopsy and treatment of peripheral regions of interest (ROIs). Because the peripheral ROIs are often located several generations within the airway tree, careful planning is required prior to a procedure. The current practice for planning peripheral bronchoscopic procedures, however, is difficult, error-prone, and time-consuming. We propose a system for planning peripheral bronchoscopic procedures using patient-specific MDCT chest scans. The planning process begins with a semi-automatic segmentation of the ROIs. The remaining system components are completely automatic, beginning with a new strategy for tracheobronchial airway-tree segmentation. The system then uses a new locally adaptive approach for finding the interior airway-wall surfaces. From the polygonal airway-tree surfaces, a centerline-analysis method extracts the central axes of the airway tree. The system's route-planning component then analyzes the data generated in the previous stages to determine an appropriate path through the airway tree to the ROI. Finally, an automated report generator gives quantitative data about the route and both static and dynamic previews of the procedure. These previews consist of virtual bronchoscopic endoluminal renderings at bifurcations encountered along the route and renderings of the airway tree and ROI at the suggested biopsy location. The system is currently in use in a human lung-cancer patient pilot study involving the planning and subsequent live image-based guidance of bronchoscopy to suspect peripheral cancer nodules.
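
The route-planning stage can be viewed as a search over the extracted centerline tree: pick the airway point nearest the ROI, then trace parent links back to the trachea. A minimal sketch under that simplification (the tiny tree below is hypothetical, and the paper's analysis is richer):

```python
import numpy as np

def plan_route(centerline_points, parents, roi_center):
    """Trace a route from the trachea (root) to the airway point nearest the ROI.

    `centerline_points`: (N, 3) airway centerline coordinates.
    `parents`: parents[i] is the index of node i's parent (-1 at the root).
    """
    # Pick the centerline node closest to the ROI centroid.
    nearest = int(np.argmin(np.linalg.norm(centerline_points - roi_center, axis=1)))
    # Walk parent links back to the root, then reverse for a root-to-ROI route.
    route = []
    node = nearest
    while node != -1:
        route.append(node)
        node = parents[node]
    return route[::-1]

# Tiny hypothetical tree: trachea (0) -> main bifurcation -> a peripheral branch.
points = np.array([[0, 0, 0], [0, 0, 25], [-10, 0, 40], [10, 0, 40], [14, 5, 55]], float)
parents = [-1, 0, 1, 1, 3]
print(plan_route(points, parents, roi_center=np.array([15, 6, 58])))  # [0, 1, 3, 4]
```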


Proceedings of SPIE | 2009

Improved Navigation for Image-Guided Bronchoscopy

Rahul Khare; Kun-Chang Yu; William E. Higgins

Past work has shown that guidance systems help improve both the navigation through airways and the final biopsy of regions of interest via bronchoscopy. We have previously proposed an image-based bronchoscopic guidance system. That system, however, has three issues that arise during navigation: 1) sudden, disorienting changes can occur in the endoluminal views; 2) limited feedback is provided during navigation; and 3) the system's graphical user interface (GUI) lacks a convenient interface for smooth navigation between bifurcations. To alleviate these issues, we present an improved navigation system. The improvements offer: 1) an enhanced visual presentation; 2) smooth navigation; 3) an interface for handling registration errors; and 4) improved bifurcation-point identification. The improved navigation system thus provides significant ergonomic and navigational advantages over the previous system.
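
"Smooth navigation" between bifurcations amounts to interpolating the virtual camera between registered view sites rather than jumping between them. A sketch using linear position interpolation and quaternion slerp for orientation; the abstract does not specify the system's actual scheme, so this is purely illustrative:

```python
import numpy as np

def slerp(q0, q1, t):
    """Spherical linear interpolation between two unit quaternions."""
    dot = np.dot(q0, q1)
    if dot < 0:                      # take the shorter arc
        q1, dot = -q1, -dot
    if dot > 0.9995:                 # nearly parallel: fall back to lerp
        q = q0 + t * (q1 - q0)
        return q / np.linalg.norm(q)
    theta = np.arccos(dot)
    return (np.sin((1 - t) * theta) * q0 + np.sin(t * theta) * q1) / np.sin(theta)

def interpolate_pose(p0, q0, p1, q1, t):
    """Blend camera position linearly and orientation by slerp, for t in [0, 1]."""
    return (1 - t) * p0 + t * p1, slerp(q0, q1, t)

# Hypothetical view sites at two successive bifurcations.
p0, q0 = np.array([0.0, 0.0, 20.0]), np.array([1.0, 0.0, 0.0, 0.0])
p1, q1 = np.array([5.0, 2.0, 35.0]), np.array([0.924, 0.0, 0.383, 0.0])  # ~45 deg about y
for t in (0.0, 0.5, 1.0):
    pos, quat = interpolate_pose(p0, q0, p1, q1, t)
    print(t, np.round(pos, 1), np.round(quat, 3))
```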


Proceedings of SPIE | 2012

Fluoroscopic image-guided intervention system for transbronchial localization

Lav Rai; Thomas Keast; Henky Wibowo; Kun-Chang Yu; Jeffrey W. Draper; Jason D. Gibbs

Reliable transbronchial access of peripheral lung lesions is desirable for the diagnosis and potential treatment of lung cancer. This procedure can be difficult, however, because accessory devices (e.g., needle or forceps) cannot be reliably localized while deployed. We present a fluoroscopic image-guided intervention (IGI) system for tracking such bronchoscopic accessories. Fluoroscopy, an imaging technology currently utilized by many bronchoscopists, has a fundamental shortcoming: many lung lesions are invisible in its images. Our IGI system aligns a digitally reconstructed radiograph (DRR) derived from a pre-operative computed tomography (CT) scan with live fluoroscopic images. Radiopaque accessory devices are readily apparent in fluoroscopic video, while lesions lacking a fluoroscopic signature but identifiable in the CT scan are superimposed on the scene. The IGI system's processing steps consist of: (1) calibrating the fluoroscopic imaging system; (2) registering the CT anatomy with its depiction in the fluoroscopic scene; and (3) optically tracking the fluoroscope to continually update the DRR and target positions as it is moved about the patient. The end result is a continuous correlation of the DRR and projected targets with the anatomy depicted in the live fluoroscopic video feed. Because both targets and bronchoscopic devices are readily apparent in arbitrary fluoroscopic orientations, multiplane guidance is straightforward. The system tracks in real time with no computational lag. We have measured a mean projected tracking accuracy of 1.0 mm in a phantom and present results from an in vivo animal study.
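
Once calibration and registration supply a CT-to-fluoroscope projection matrix, superimposing a CT-defined lesion reduces to a pinhole projection of its 3D position into the image plane. A minimal sketch with hypothetical intrinsics and extrinsics (the real matrix comes from the system's calibration and tracking machinery):

```python
import numpy as np

def project_target(P, point_ct):
    """Project a 3D CT-space point into fluoroscopic image coordinates.

    P is a 3x4 projection matrix combining fluoroscope intrinsics with
    the CT-to-fluoroscope registration.
    """
    hom = P @ np.append(point_ct, 1.0)       # homogeneous projection
    return hom[:2] / hom[2]                  # perspective divide -> (u, v) pixels

# Hypothetical intrinsics and extrinsics for illustration only.
K = np.array([[1000.0, 0.0, 320.0],
              [0.0, 1000.0, 240.0],
              [0.0, 0.0, 1.0]])
Rt = np.hstack([np.eye(3), np.array([[0.0], [0.0], [800.0]])])  # 800 mm offset
P = K @ Rt

lesion_center_ct = np.array([12.0, -8.0, 40.0])  # mm, CT coordinates
print(np.round(project_target(P, lesion_center_ct), 1))  # pixel location of lesion
```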


Archive | 2008

Method and apparatus for continuous guidance of endoscopy

William E. Higgins; Scott A. Merritt; Lav Rai; Jason D. Gibbs; Kun-Chang Yu


Archive | 2009

Medical image reporting system and method

William E. Higgins; Jason D. Gibbs; Kun-Chang Yu; Michael W. Graham; Kongkuo Lu

Collaboration


Dive into Kun-Chang Yu's collaboration.

Top Co-Authors

William E. Higgins (Pennsylvania State University)
Jason D. Gibbs (Pennsylvania State University)
Lav Rai (Pennsylvania State University)
Scott A. Merritt (Pennsylvania State University)
Kongkuo Lu (Pennsylvania State University)
Michael W. Graham (Pennsylvania State University)
Duane C. Cornish (Pennsylvania State University)
James P. Helferty (Pennsylvania State University)
Rahul Khare (Pennsylvania State University)
Rebecca Bascom (Pennsylvania State University)