
Publication


Featured research published by Duane C. Cornish.


IEEE Transactions on Medical Imaging | 2010

Robust 3-D Airway Tree Segmentation for Image-Guided Peripheral Bronchoscopy

Michael W. Graham; Jason D. Gibbs; Duane C. Cornish; William E. Higgins

A vital task in the planning of peripheral bronchoscopy is the segmentation of the airway tree from a 3-D multidetector computed tomography chest scan. Unfortunately, existing methods typically do not sufficiently extract the necessary peripheral airways needed to plan a procedure. We present a robust method that draws upon both local and global information. The method begins with a conservative segmentation of the major airways. Follow-on stages then exhaustively search for additional candidate airway locations. Finally, a graph-based optimization method counterbalances both the benefit and cost of retaining candidate airway locations for the final segmentation. Results demonstrate that the proposed method typically extracts 2-3 more generations of airways than several other methods, and that the extracted airway trees enable image-guided bronchoscopy deeper into the human lung periphery than past studies.
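The final stage's benefit/cost trade-off can be illustrated with a toy tree-pruning sketch. This is not the paper's graph-based optimization; the node names and the benefit/cost values below are hypothetical stand-ins for quantities such as added airway length and segmentation-leakage risk.

```python
# Toy candidate tree: node -> (benefit, cost, children); all values hypothetical.
def best_net(tree, node):
    """Net value of optimally keeping or pruning the subtree rooted at node.

    A child subtree is retained only when its own optimal net value is
    positive, mirroring a benefit-vs-cost trade-off on candidate airways.
    """
    benefit, cost, children = tree[node]
    net = benefit - cost
    for child in children:
        child_net = best_net(tree, child)
        if child_net > 0:  # keep the child subtree only if it pays off
            net += child_net
    return net

tree = {
    "trachea": (10.0, 1.0, ["b1", "b2"]),
    "b1": (3.0, 1.0, ["b1a"]),   # plausible airway candidate
    "b1a": (2.0, 0.5, []),
    "b2": (0.5, 4.0, []),        # likely leakage artifact: cost outweighs benefit
}
print(best_net(tree, "trachea"))  # 12.5 -- b2 is pruned
```

The greedy child-by-child decision works here only because the toy score is additive over subtrees; the paper's global graph optimization handles richer interactions.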


Chest | 2008

Image-Guided Bronchoscopy for Peripheral Lung Lesions: A Phantom Study

Scott A. Merritt; Jason D. Gibbs; Kun-Chang Yu; Viral Patel; Lav Rai; Duane C. Cornish; Rebecca Bascom; William E. Higgins

BACKGROUND Ultrathin bronchoscopy guided by virtual bronchoscopy (VB) techniques shows promise for the diagnosis of peripheral lung lesions. In a phantom study, we evaluated a new real-time, VB-based, image-guided system for guiding the bronchoscopic biopsy of peripheral lung lesions and compared its performance to that of standard bronchoscopy practice. METHODS Twelve bronchoscopists of varying experience levels participated in the study. The task was to use an ultrathin bronchoscope and a biopsy forceps to localize 10 synthetically created lesions situated at varying airway depths. For route planning and guidance, the bronchoscopists employed either standard bronchoscopy practice or the real-time image-guided system. Outcome measures were biopsy site position error, defined as the distance from the forceps contact point to the ground-truth lesion boundary, and localization success, defined as a site identification having a biopsy site position error of ≤ 5 mm. RESULTS Mean (± SD) localization success more than doubled, from 43 ± 16% using standard practice to 94 ± 7.9% using image guidance (p < 10⁻¹⁵, McNemar paired test). The mean biopsy site position error dropped from 9.7 ± 9.1 mm for standard practice to 2.2 ± 2.3 mm for image guidance. For standard practice, localization success decreased from 56% for generation 3 to 4 lesions to 31% for generation 6 to 8 lesions, and was 51% for lesions on a carina vs 23% for lesions situated away from a carina. These effects were far less pronounced with image guidance: success for generation 3 to 4 lesions, 97%; generation 6 to 8 lesions, 91%; lesions on a carina, 98%; lesions away from a carina, 86%. Bronchoscopist experience did not significantly affect performance using the image-guided system.
CONCLUSIONS Real-time, VB-based image guidance can potentially far exceed standard bronchoscopy practice for enabling the bronchoscopic biopsy of peripheral lung lesions.
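The localization-success outcome measure reduces to a simple threshold count over the raw position errors. A minimal sketch; the 5 mm threshold is from the study, but the sample error values below are invented:

```python
def localization_success_rate(errors_mm, threshold_mm=5.0):
    """Fraction of biopsy attempts whose site position error is within threshold."""
    return sum(1 for e in errors_mm if e <= threshold_mm) / len(errors_mm)

standard_practice = [9.7, 3.1, 12.0, 4.8, 7.5]  # hypothetical errors, in mm
image_guided = [2.2, 1.0, 4.9, 0.8, 3.3]
print(localization_success_rate(standard_practice))  # 0.4
print(localization_success_rate(image_guided))       # 1.0
```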


Chest | 2008

Interbronchoscopist Variability in Endobronchial Path Selection: A Simulation Study

Marina Dolina; Duane C. Cornish; Scott A. Merritt; Lav Rai; Rickhesvar P. Mahraj; William E. Higgins; Rebecca Bascom

BACKGROUND Endobronchial path selection is important for the bronchoscopic diagnosis of focal lung lesions. Path selection typically involves mentally reconstructing a three-dimensional path by interpreting a stack of two-dimensional (2D) axial-plane CT scan sections. Our hypotheses about path selection were as follows: (1) bronchoscopists are inaccurate and overly confident when making endobronchial path selections based on 2D CT scan analysis; and (2) path selection accuracy and confidence improve and become better aligned when bronchoscopists employ path-planning methods based on virtual bronchoscopy (VB). METHODS Studies of endobronchial path selection comparing three path-planning methods (i.e., the standard 2D CT scan analysis and two new VB-based techniques) were performed. The task was to navigate to discrete lesions located between the third-order and fifth-order bronchi of the right upper and middle lobes. Outcome measures were the cumulative accuracy of making four sequential path selection decisions and self-reported confidence (1, least confident; 5, most confident). Both experienced and inexperienced bronchoscopists participated in the studies. RESULTS In the first study, involving a static paper-based tool, the mean (± SD) cumulative accuracy was 14 ± 3% using 2D CT scan analysis (confidence, 3.4 ± 1.3) and 49 ± 15% using a VB-based technique (confidence, 4.2 ± 1.1; p = 0.0001 across all comparisons). In a second study, using an interactive computer-based tool, the mean accuracy was 40 ± 28% using 2D CT scan analysis (confidence, 3.0 ± 0.3) and 96 ± 3% using a dynamic VB-based technique (confidence, 4.6 ± 0.2). Regardless of the experience level of the bronchoscopist, use of the standard 2D CT scan analysis resulted in poor path selection accuracy and misaligned confidence. Use of the VB-based techniques resulted in considerably higher accuracy and better-aligned decision confidence.
CONCLUSIONS Endobronchial path selection is a source of error in the bronchoscopy workflow. The use of VB-based path-planning techniques significantly improves path selection accuracy over the standard 2D CT scan section analysis in this simulation format.
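The cumulative-accuracy outcome compounds quickly: a trial succeeds only if all four sequential branch-point decisions are correct. Assuming independent decisions with a uniform per-step accuracy (a simplification the study itself does not make), the compounding looks like:

```python
def cumulative_accuracy(per_step_accuracy, steps=4):
    """Probability that all `steps` sequential branch-point decisions are
    correct, assuming independent decisions with equal per-step accuracy."""
    return per_step_accuracy ** steps

# A per-step accuracy near 61% compounds to roughly 14% over four decisions,
# on the order of the paper-based study's 14 ± 3% result for 2D CT analysis.
print(round(cumulative_accuracy(0.61), 2))  # 0.14
```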


Applied Imagery Pattern Recognition Workshop | 2009

Kalman filter based video background estimation

Jesse Scott; Michael A. Pusateri; Duane C. Cornish

Transferring responsibility for object tracking in a video scene to computer vision rather than human operators has the appeal that the computer will remain vigilant under all circumstances, while operator attention can wane. However, when operating at their peak performance, human operators often outperform computer vision because of their ability to adapt to changes in the scene. While many tracking algorithms are available, background subtraction, where a background image is subtracted from the current frame to isolate the foreground objects in a scene, remains a well-proven and popular technique. Under some circumstances, a background image can be obtained manually when no foreground objects are present. In the case of persistent outdoor surveillance, the background has a time evolution due to diurnal changes, weather, and seasonal changes. Such changes render a fixed background scene inadequate. We present a method for estimating the background of a scene utilizing a Kalman filter approach. Our method applies a one-dimensional Kalman filter to each pixel of the camera array to track the pixel intensity. We designed the algorithm to track the background intensity of a scene, assuming that the camera view is relatively stationary and that the time evolution of the background occurs much more slowly than the time evolution of relevant foreground events. This allows the background subtraction algorithm to adapt automatically to changes in the scene. The algorithm is a two-step process of mean intensity update and standard deviation update. These updates are derived from standard Kalman filter equations. Our algorithm also allows objects to transition between the background and foreground as appropriate by modeling the input standard deviation. For example, a car entering a parking lot surveillance camera's field of view would initially be included in the foreground. However, once parked, it will eventually transition to the background.
We present results validating our algorithm's ability to estimate backgrounds in a variety of scenes. We demonstrate the application of our method to track objects using simple frame detection with no temporal coherency.
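The per-pixel mean/variance update described above can be sketched as a one-dimensional Kalman filter applied elementwise to the whole frame. The noise parameters and scene values below are hypothetical, and the paper's actual method additionally models the input standard deviation; this is only a minimal illustration of the predict/correct cycle:

```python
import numpy as np

def kalman_background_update(bg_mean, bg_var, frame, process_var=0.1, meas_var=25.0):
    """One per-pixel Kalman step: predict (inflate variance by the process
    noise), then correct the background mean toward the new frame by the gain."""
    pred_var = bg_var + process_var           # predict
    gain = pred_var / (pred_var + meas_var)   # per-pixel Kalman gain
    new_mean = bg_mean + gain * (frame - bg_mean)
    new_var = (1.0 - gain) * pred_var         # correct
    return new_mean, new_var

def foreground_mask(frame, bg_mean, bg_var, k=3.0):
    """Flag pixels further than k standard deviations from the background."""
    return np.abs(frame - bg_mean) > k * np.sqrt(bg_var)

# Feed a static 4x4 scene until the filter settles, then add a bright object.
mean, var = np.zeros((4, 4)), np.full((4, 4), 100.0)
scene = np.full((4, 4), 50.0)
for _ in range(50):
    mean, var = kalman_background_update(mean, var, scene)
scene_with_object = scene.copy()
scene_with_object[1, 1] = 200.0
print(foreground_mask(scene_with_object, mean, var)[1, 1])  # True
```

Because the variance shrinks as the scene stays static, a newly appeared object stands out sharply; a persistently parked object would gradually be absorbed into the mean, matching the parked-car behavior described above.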


Proceedings of SPIE | 2012

Bronchoscopy guidance system based on bronchoscope-motion measurements

Duane C. Cornish; William E. Higgins

Bronchoscopy-guidance systems assist physicians during bronchoscope navigation. However, these systems require an attending technician and fail to continuously track the bronchoscope. We propose a real-time, technician-free bronchoscopy-guidance system that employs continuous tracking. For guidance, our system presents directions on virtual views that are generated from the bronchoscope's tracked location. The system achieves bronchoscope tracking using a strategy based on a recently proposed method for sensor-based bronchoscope-motion tracking. Furthermore, a graphical indicator notifies the physician when he/she has maneuvered the bronchoscope into an incorrect branch. Our proposed system uses the sensor data to generate virtual views through multiple candidate routes and employs image matching in a Bayesian framework to determine the most probable bronchoscope pose. Tests based on laboratory phantoms validate the potential of the system.
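The Bayesian candidate-selection step can be caricatured as a maximum a posteriori choice: the sensor-derived prediction supplies a prior over candidate poses, and the virtual-view/video image match supplies a likelihood. The pose ids and probability values below are invented for illustration and are not from the paper:

```python
def most_probable_pose(candidates):
    """Return the pose id maximizing prior * likelihood (the MAP choice).

    candidates: list of (pose_id, sensor_prior, image_match_likelihood).
    """
    return max(candidates, key=lambda c: c[1] * c[2])[0]

candidates = [
    ("left_branch", 0.6, 0.2),   # sensor slightly favors it, but weak video match
    ("right_branch", 0.4, 0.9),  # strong video match wins the posterior
]
print(most_probable_pose(candidates))  # right_branch (0.36 > 0.12)
```

Fusing the two sources this way is what lets image evidence override a sensor prediction when the physician has entered an unexpected branch.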


IEEE Transactions on Biomedical Engineering | 2014

Optimal Procedure Planning and Guidance System for Peripheral Bronchoscopy

Jason D. Gibbs; Michael W. Graham; Rebecca Bascom; Duane C. Cornish; Rahul Khare; William E. Higgins

With the development of multidetector computed-tomography (MDCT) scanners and ultrathin bronchoscopes, the use of bronchoscopy for diagnosing peripheral lung-cancer nodules is becoming a viable option. The work flow for assessing lung cancer consists of two phases: 1) 3-D MDCT analysis and 2) live bronchoscopy. Unfortunately, the yield rates for peripheral bronchoscopy have been reported to be as low as 14%, and bronchoscopy performance varies considerably between physicians. Recently proposed image-guided systems have shown promise for assisting with peripheral bronchoscopy. Yet, MDCT-based route planning to target sites has relied on tedious, error-prone techniques. In addition, route planning tends not to incorporate known anatomical, device, and procedural constraints that impact a feasible route. Finally, existing systems do not effectively integrate MDCT-derived route information into the live guidance process. We propose a system that incorporates an automatic optimal route-planning method, which integrates known route constraints. Furthermore, our system offers a natural translation of the MDCT-based route plan into the live guidance strategy via MDCT/video data fusion. An image-based study demonstrates the route-planning method's functionality. Next, we present a prospective lung-cancer patient study in which our system achieved a successful navigation rate of 91% to target sites. Furthermore, when compared to a competing commercial system, our system enabled bronchoscopy over two airways deeper into the airway-tree periphery with a sample time that was nearly 2 min shorter on average. Finally, our system's ability to almost perfectly predict the depth of a bronchoscope's navigable route in advance represents a substantial benefit of optimal route planning.
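One way to picture route planning that integrates known constraints is a shortest-path search that discards branches the device cannot traverse. The sketch below is a toy stand-in, not the authors' optimal-planning method; the branch names, lengths, diameters, and the 2.8 mm scope diameter are all hypothetical:

```python
import heapq

def plan_route(edges, diameters, start, target, scope_diameter_mm=2.8):
    """Dijkstra over an airway graph, skipping branches too narrow for the
    scope -- a toy stand-in for device constraints in route planning."""
    dist, prev = {start: 0.0}, {}
    pq = [(0.0, start)]
    while pq:
        d, u = heapq.heappop(pq)
        if u == target:  # reconstruct the route back to the start
            path = [u]
            while path[-1] in prev:
                path.append(prev[path[-1]])
            return path[::-1]
        if d > dist[u]:
            continue
        for v, length_mm in edges.get(u, []):
            if diameters[v] < scope_diameter_mm:
                continue  # the bronchoscope cannot pass this branch
            nd = d + length_mm
            if nd < dist.get(v, float("inf")):
                dist[v], prev[v] = nd, u
                heapq.heappush(pq, (nd, v))
    return None  # no feasible route reaches the target

edges = {
    "trachea": [("rmb", 40.0)],
    "rmb": [("rb4", 25.0), ("rb5", 20.0)],
    "rb4": [("target", 15.0)],
    "rb5": [("target", 10.0)],
}
diameters = {"trachea": 18.0, "rmb": 12.0, "rb4": 4.0, "rb5": 2.0, "target": 3.0}
print(plan_route(edges, diameters, "trachea", "target"))
```

With the 2.8 mm constraint the shorter rb5 route is infeasible and the planner detours through rb4; a thinner scope would recover the shorter route.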


Proceedings of SPIE | 2011

Real-time method for bronchoscope motion measurement and tracking

Duane C. Cornish; William E. Higgins

Bronchoscopy-guidance systems have been shown to improve the success rate of bronchoscopic procedures. A key technical cornerstone of bronchoscopy-guidance systems is the synchronization between the virtual world, derived from a patient's three-dimensional (3D) multidetector computed-tomography (MDCT) scan, and the real world, derived from the bronchoscope video during a live procedure. Two main approaches for synchronizing these worlds exist: electromagnetic navigation bronchoscopy (ENB) and image-based bronchoscopy. ENB systems require considerable extra hardware, and both approaches have drawbacks that hinder continuous robust guidance. In addition, they both require an attending technician to be present. We propose a technician-free strategy that enables real-time guidance of bronchoscopy. The approach uses measurements of the bronchoscope's movement to predict its position in 3D virtual space. To achieve this, a bronchoscope model, defining the device's shape in the airway tree to a given point p, provides an insertion depth to p. In real time, our strategy compares an observed bronchoscope insertion depth and roll angle, measured by an optical sensor, to precalculated insertion depths along a predefined route in the virtual airway tree. This leads to a prediction of the bronchoscope's location and orientation. To test the method, experiments involving a PVC-pipe phantom and a human airway-tree phantom verified the bronchoscope models and the entire method, respectively. The method has considerable potential for improving guidance robustness and simplicity over other bronchoscopy-guidance systems.
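The core lookup, comparing an observed insertion depth against the precalculated depths along the planned route, reduces to a nearest-neighbor match. A minimal sketch with invented site names and depths (the real method also incorporates the measured roll angle to resolve orientation):

```python
def predict_viewing_site(observed_depth_mm, route_sites):
    """Map an observed insertion depth to the nearest precalculated viewing
    site along the planned route; route_sites: list of (site_id, depth_mm)."""
    return min(route_sites, key=lambda s: abs(s[1] - observed_depth_mm))[0]

# Hypothetical precalculated sites along a predefined route.
route = [("main_carina", 120.0), ("rb1", 160.0), ("rb1a", 195.0)]
print(predict_viewing_site(158.0, route))  # rb1
```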


American Thoracic Society International Conference | 2010

Image-Guided Bronchoscopic Sampling Of Peripheral Lesions: A Human Study

William E. Higgins; Rebecca Bascom; Jason D. Gibbs; Michael W. Graham; Duane C. Cornish; Muhammad Khan; Rahul Khare


Archive | 2017

Face Detection, Augmentation, Spatial Cueing and Clutter Reduction for the Visually Impaired

Derek M. Rollend; Kapil D. Katyal; Kevin C. Wolfe; Dean M. Kleissas; Matthew P. Para; Paul E. Rosendall; John B. Helder; Philippe Burlina; Duane C. Cornish; Ryan J. Murphy; Matthew S. Johannes; Arup Roy; Seth Billings; Jonathan M. Oben; Robert J. Greenberg


American Thoracic Society International Conference | 2012

Technician-Free Bronchoscopy Guidance Using External Sensor Measurements

William E. Higgins; Duane C. Cornish; Rahul Khare; Rebecca Bascom

Collaboration


Dive into Duane C. Cornish's collaborations.

Top Co-Authors

William E. Higgins
Pennsylvania State University

Rebecca Bascom
Pennsylvania State University

Jason D. Gibbs
Pennsylvania State University

Michael W. Graham
Pennsylvania State University

Rahul Khare
Pennsylvania State University

Kun-Chang Yu
Pennsylvania State University

Lav Rai
Pennsylvania State University

Scott A. Merritt
Pennsylvania State University

Jesse Scott
Pennsylvania State University

Marina Dolina
Pennsylvania State University