James P. Helferty
Pennsylvania State University
Publications
Featured research published by James P. Helferty.
IEEE Transactions on Medical Imaging | 2004
Atilla Peter Kiraly; James P. Helferty; Eric A. Hoffman; Geoffrey McLennan; William E. Higgins
Multidetector computed-tomography (MDCT) scanners provide large high-resolution three-dimensional (3-D) images of the chest. MDCT scanning, when used in tandem with bronchoscopy, provides a state-of-the-art approach for lung-cancer assessment. We have been building and validating a lung-cancer assessment system, which enables virtual-bronchoscopic 3-D MDCT image analysis and follow-on image-guided bronchoscopy. A suitable path planning method is needed, however, for using this system. We describe a rapid, robust method for computing a set of 3-D airway-tree paths from MDCT images. The method first defines the skeleton of a given segmented 3-D chest image and then performs a multistage refinement of the skeleton to arrive at a final tree structure. The tree consists of a series of paths and branch structural data, suitable for quantitative airway analysis and smooth virtual navigation. A comparison of the method to a previously devised path-planning approach, using a set of human MDCT images, illustrates the efficacy of the method. Results are also presented for human lung-cancer assessment and the guidance of bronchoscopy.
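The end product of the method is a tree of paths suitable for navigation. As a rough illustration of that final structure, enumerating the root-to-leaf paths of a centerline tree might look like the following sketch (the tree encoding, branch names, and helper function are invented here for illustration and are not part of the published method):

```python
def root_to_leaf_paths(tree, root):
    """Enumerate root-to-leaf paths in a centerline tree.

    `tree` maps a branch id to (parent_id, voxel_list), with the root's
    parent set to None. Each returned path is the concatenated voxel list
    from the root branch down to one leaf branch.
    """
    # Invert the parent pointers into a child list.
    children = {}
    for branch_id, (parent, _) in tree.items():
        if parent is not None:
            children.setdefault(parent, []).append(branch_id)

    paths = []

    def walk(branch_id, prefix):
        prefix = prefix + tree[branch_id][1]
        kids = children.get(branch_id, [])
        if not kids:
            paths.append(prefix)  # reached a leaf: one complete path
        for kid in kids:
            walk(kid, prefix)

    walk(root, [])
    return paths
```

A tree with a trachea branch and two child branches would yield two paths, one per airway leaf.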
Computer Vision and Image Understanding | 2007
James P. Helferty; Anthony J. Sherbondy; Atilla Peter Kiraly; William E. Higgins
The standard procedure for diagnosing lung cancer involves two stages: three-dimensional (3D) computed-tomography (CT) image assessment, followed by interventional bronchoscopy. In general, the physician has no link between the 3D CT image assessment results and the follow-on bronchoscopy. Thus, the physician essentially performs bronchoscopic biopsy of suspect cancer sites blindly. We have devised a computer-based system that greatly augments the physician's vision during bronchoscopy. The system uses techniques from computer graphics and computer vision to enable detailed 3D CT procedure planning and follow-on image-guided bronchoscopy. The procedure plan is directly linked to the bronchoscope procedure, through a live registration and fusion of the 3D CT data and bronchoscopic video. During a procedure, the system provides many visual tools, fused CT-video data, and quantitative distance measures; this gives the physician considerable visual feedback on how to maneuver the bronchoscope and where to insert the biopsy needle. Central to the system is a CT-video registration technique, based on normalized mutual information. Several sets of results verify the efficacy of the registration technique. In addition, we present a series of test results for the complete system for phantoms, animals, and human lung-cancer patients. The results indicate that not only is the variation in skill level between different physicians greatly reduced by the system over the standard procedure, but that biopsy effectiveness increases.
Computerized Medical Imaging and Graphics | 2008
William E. Higgins; James P. Helferty; Kongkuo Lu; Scott A. Merritt; Lav Rai; Kun-Chang Yu
Bronchoscopic biopsy of the central-chest lymph nodes is an important step for lung-cancer staging. Before bronchoscopy, the physician first visually assesses a patient's three-dimensional (3D) computed tomography (CT) chest scan to identify suspect lymph-node sites. Next, during bronchoscopy, the physician guides the bronchoscope to each desired lymph-node site. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe an approach that enables synergistic fusion between the 3D CT data and the bronchoscopic video. Both the integrated planning and guidance system and the internal CT-video registration and fusion methods are described. Phantom, animal, and human studies illustrate the efficacy of the methods.
international conference on image processing | 2001
James P. Helferty; William E. Higgins
We present a multi-modal image registration technique that matches virtual surface renderings, derived from 3D medical data, to corresponding endoscopic video. Our efforts focus on the chest. The views of the two image sources arise from inside the major bronchial airways, as shown in 3D CT chest images and live endoscopic video. Our technique is part of a larger system for two-stage assessment of lung cancer. Stage-1 involves 3D CT assessment, which gives planning data for endoscopy. Stage-2 involves live endoscopy, supplemented by CT-based guidance. The registration technique is a critical component of this assessment. The technique draws upon the concept of normalized mutual information (NMI) and works in near real time on standard PCs. An optimization procedure iteratively adjusts the virtual viewpoint rendering until it matches a given video frame's viewpoint. The optimization technique ends when the NMI is maximized. Results are given for phantom and human cases.
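The NMI criterion at the heart of this registration can be computed from the joint intensity histogram of the two views. A minimal NumPy sketch follows (the bin count and function name are our own choices for illustration, not taken from the paper):

```python
import numpy as np

def normalized_mutual_information(a, b, bins=32):
    """Normalized mutual information NMI = (H(A) + H(B)) / H(A, B).

    `a` and `b` are grayscale images of equal shape. NMI ranges from 1
    (independent) to 2 (identical up to a one-to-one intensity mapping);
    a higher value indicates better alignment of the two views.
    """
    # Joint histogram of intensities, normalized to a probability table.
    joint, _, _ = np.histogram2d(a.ravel(), b.ravel(), bins=bins)
    pxy = joint / joint.sum()
    px = pxy.sum(axis=1)  # marginal of image a
    py = pxy.sum(axis=0)  # marginal of image b

    def entropy(p):
        p = p[p > 0]  # skip empty bins: 0 * log 0 is taken as 0
        return -np.sum(p * np.log(p))

    return (entropy(px) + entropy(py)) / entropy(pxy.ravel())
```

An optimizer would re-render the virtual viewpoint and re-evaluate this score until no further improvement is found.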
international conference on image processing | 2000
Chao Zhang; James P. Helferty; Geoffrey McLennan; William E. Higgins
Modern video-based endoscopes offer physicians a wide-angle field of view for minimally-invasive procedures. Unfortunately, inherent barrel distortion prevents accurate perception of range. This makes measurement and distance judgment difficult and causes difficulties in emerging applications, such as 3D medical-image registration. Such distortion also arises in other wide field-of-view camera circumstances. This paper presents a distortion-correction technique that can automatically calculate correction parameters, without precise knowledge of horizontal and vertical orientation. The method is applicable to any camera-distortion correction situation. Based on a least-squares estimation, our proposed algorithm considers line fits in both field-of-view directions and a global-consistency constraint that gives the optimal image center and expansion coefficients. The method is insensitive to the initial orientation of the endoscope and provides more exhaustive field-of-view correction than previously proposed algorithms. The distortion-correction procedure is demonstrated for endoscopic video images of a calibration test pattern, a rubber bronchial training device, and real human cases. The distortion correction is also shown as a necessary component of an image-guided virtual-endoscopy system that matches endoscope images to corresponding rendered 3D CT views.
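Once the image center and expansion coefficients have been estimated, applying the polynomial radial model is straightforward. The sketch below shows the correction step only (the coefficients `k1`, `k2` and the helper name are illustrative; in the paper they would come from the least-squares calibration against line fits):

```python
import numpy as np

def undistort_points(pts, center, k1, k2):
    """Map distorted pixel coordinates to corrected coordinates using a
    polynomial radial model: r_u = r_d * (1 + k1*r_d^2 + k2*r_d^4).

    `pts` is an (N, 2) array of distorted pixel coordinates; `center`
    is the estimated distortion center. Positive coefficients push
    points outward, reversing barrel (pincushion-ward) compression.
    """
    p = np.asarray(pts, dtype=float) - center
    r2 = np.sum(p ** 2, axis=1, keepdims=True)  # squared radius per point
    scale = 1.0 + k1 * r2 + k2 * r2 ** 2
    return p * scale + center
```

A full image correction would evaluate the inverse of this mapping on a pixel grid and resample, but the per-point form above is what a calibration residual would be built from.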
computer assisted radiology and surgery | 2008
Lav Rai; James P. Helferty; William E. Higgins
Objective: Three-dimensional (3D) computed-tomography (CT) images and bronchoscopy are commonly used tools for the assessment of lung cancer. Before bronchoscopy, the physician first examines a 3D CT chest image to select pertinent diagnostic sites and to ascertain possible routes through the airway tree leading to each site. Next, during bronchoscopy, the physician maneuvers the bronchoscope through the airways, basing navigation decisions on the live bronchoscopic video, to reach each diagnostic site. Unfortunately, no direct link exists between the 3D CT image data and bronchoscopic video. This makes bronchoscopy difficult to perform successfully. Existing methods for the image-based guidance of bronchoscopy either only involve single-frame registration or operate at impractically slow frame rates. We describe a method that combines the 3D CT image data and bronchoscopic video to enable continuous bronchoscopic guidance.
Methods: The method interleaves periodic CT-video registration with bronchoscopic video motion tracking. It begins by using single-frame CT-video registration to register the "Real World" of the bronchoscopic video and the "Virtual World" of the CT-based endoluminal renderings. Next, the method uses an optical-flow-based approach to track bronchoscope movement for a fixed number of video frames and simultaneously updates the virtual-world position. This process of registration and tracking repeats as the bronchoscope moves.
Results: The method operates robustly over a variety of phantom and human cases. It typically performs successfully for over 150 frames and through multiple airway generations. In our tests, the method runs at a rate of up to seven frames per second. We have integrated the method into a complete system for the image-based planning and guidance of bronchoscopy.
Conclusion: The method performs over an order of magnitude faster than previously proposed image-based bronchoscopy guidance methods. Software optimization and a modest hardware improvement could enable real-time performance.
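The interleaving of registration and tracking described under Methods can be sketched as a simple control loop. In this sketch the `register` and `track` callables and the re-registration interval are placeholders standing in for the paper's single-frame NMI registration and optical-flow tracker:

```python
def guided_loop(frames, register, track, interval=25):
    """Interleave periodic full registration with per-frame tracking.

    `register(frame)` returns an absolute camera pose (slow but
    drift-free); `track(pose, prev_frame, cur_frame)` returns the pose
    updated by estimated inter-frame motion (fast but drifting).
    Re-registering every `interval` frames bounds the drift.
    """
    pose = None
    prev = None
    poses = []
    for i, frame in enumerate(frames):
        if i % interval == 0:
            pose = register(frame)           # periodic drift-free anchor
        else:
            pose = track(pose, prev, frame)  # cheap incremental update
        poses.append(pose)
        prev = frame
    return poses
```

The interval trades accuracy for speed: a shorter interval re-anchors more often but spends more time in the slow registration step.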
computer vision and pattern recognition | 2005
James P. Helferty; Anthony J. Sherbondy; Atilla Peter Kiraly; William E. Higgins
The standard procedure for diagnosing lung cancer involves 3D computed-tomography (CT) assessment followed by interventional bronchoscopy. In general, the physician has no link between the CT assessment results and the follow-on bronchoscopy. Thus, the physician essentially performs bronchoscopic biopsy of suspect cancer sites blindly. We have devised a computer-based system that greatly augments the physician's vision during bronchoscopy. The system uses techniques from computer graphics and computer vision to enable detailed 3D CT procedure planning and follow-on image-guided bronchoscopy. The procedure plan is directly linked to the bronchoscope procedure through a live fusion of the 3D CT data and bronchoscopic video. During a procedure, the physician receives considerable visual feedback on how to maneuver the bronchoscope and where to insert the biopsy needle. We have performed a series of controlled phantom and animal tests, in addition to using the system on a large number of human lung-cancer patients. Results indicate that not only is the variation in skill level between different physicians greatly reduced, but that their accuracy increases.
Medical Imaging 2004: Physiology, Function, and Structure from Medical Images | 2004
James P. Helferty; Eric A. Hoffman; Geoffrey McLennan; William E. Higgins
Bronchoscopic biopsy is often used for assisting the assessment of lung cancer. We have found in previous research that live image guidance of bronchoscopy has much potential for improving biopsy outcome. We have devised a system for this purpose. During a guided bronchoscopy procedure, our system simultaneously draws upon both the bronchoscope's video stream and the patient's 3D MDCT volume. The key data-processing step during guided bronchoscopy is the registration of the 3D MDCT data volume to the bronchoscopic video. The registration process is initialized by assuming that the bronchoscope is at a fixed viewpoint, giving a target reference video image, while the virtual-world camera inside the MDCT volume begins at an initial viewpoint that is within a reasonable vicinity of the bronchoscope's viewpoint. During registration, an optimization process searches for the optimal viewpoint to give the virtual image best matching the fixed video target. Overall, we have found that the CT-video registration technique operates robustly over a wide range of conditions, with considerable flexibility in the initial-viewpoint choice. Further, the system appears to be largely insensitive to the differences in lung capacity during the MDCT scan and during bronchoscopy. Finally, the system matches effectively in a wide range of anatomical circumstances.
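The viewpoint search described here can be illustrated with a generic greedy coordinate search over pose parameters. This optimizer is a stand-in for illustration only, not the paper's actual optimization procedure; `score(pose)` is assumed to render the virtual view at `pose` and compare it against the fixed video target (e.g. via NMI):

```python
def register_viewpoint(score, pose0, step=1.0, iters=50):
    """Greedy coordinate search maximizing a viewpoint-similarity score.

    `pose0` is the initial viewpoint parameter vector (e.g. position and
    orientation components). Each sweep perturbs one parameter at a time
    by +/- `step`, keeping any improvement; the step is halved whenever
    a full sweep fails to improve, refining the match.
    """
    pose = list(pose0)
    best = score(pose)
    for _ in range(iters):
        improved = False
        for i in range(len(pose)):
            for d in (step, -step):
                cand = list(pose)
                cand[i] += d
                s = score(cand)
                if s > best:
                    best, pose, improved = s, cand, True
        if not improved:
            step *= 0.5  # no move helped: refine the search scale
    return pose, best
```

The flexibility in initial-viewpoint choice reported above corresponds to the basin of attraction of such a local search being reasonably wide.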
Medical Imaging 2001: Physiology and Function from Multidimensional Images | 2001
James P. Helferty; Anthony J. Sherbondy; Atilla Peter Kiraly; Janice Z. Turlington; Eric A. Hoffman; Geoffrey McLennan; William E. Higgins
Transbronchial needle biopsy is a common procedure for early detection of lung cancer. In practice, accurate results are difficult to obtain, since the bronchoscopy procedure requires a blind puncture into a region hidden behind the airway walls. This paper presents an image-guided endoscopy system for procedure preplanning and for guidance during bronchoscopy. Before the bronchoscopy, a 3D CT scan is analyzed to define guidance paths through the major airways to suspect biopsy sites. During subsequent bronchoscopy, the paths give the physician step-by-step guidance to each suspect site location. At a suspect site, a virtual CT image is registered to the bronchoscopic video. Then, the predefined biopsy site, from the prior CT analysis, is rendered onto the registered video. This gives the physician a reference for performing the needle biopsy. This paper focuses on our recent experiments with this system. These experiments involve a rubber phantom model of the human airway tree and in vivo animal tests. The experiments demonstrate the promise of our approach.
oceans conference | 1993
James P. Helferty; David R. Mudgett; John Dzielski
This paper compares two performance indices for computing optimal observer paths for bearings-only localization of constant-velocity sources. The first index maximizes the determinant of the Fisher information matrix (FIM) of the estimation problem. The second minimizes the trace of a weighted sum of the Cramer-Rao lower bound (CRLB) on current source-position error. Quasi-Newton optimization is used to compare optimal observer paths, given the goal of minimizing current position error. Significant differences in optimal paths are observed, and the CRLB trace is found to yield smaller range error.
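To give a flavor of the two indices, the FIM for the simpler stationary-source case can be accumulated over an observer path as follows (this is a deliberate simplification of the constant-velocity problem in the paper; the function name and noise level are illustrative):

```python
import numpy as np

def bearings_fim(observer_path, source, sigma=0.01):
    """Fisher information matrix for bearings-only localization of a
    stationary 2D source.

    Each row of `observer_path` is an (x, y) observer position; `sigma`
    is the bearing-measurement noise in radians. Each bearing constrains
    the source only along the direction perpendicular to the line of
    sight, so observer maneuvers are needed to make the FIM full rank.
    """
    J = np.zeros((2, 2))
    for ox, oy in observer_path:
        dx, dy = source[0] - ox, source[1] - oy
        r2 = dx * dx + dy * dy
        g = np.array([-dy, dx]) / r2  # gradient of atan2 bearing w.r.t. source
        J += np.outer(g, g) / sigma ** 2
    return J
```

The det(FIM) index maximizes `np.linalg.det(J)`, while the CRLB-trace index minimizes `np.trace(np.linalg.inv(J))`; the paper's comparison is between paths optimized under each. An observer path collinear with the source leaves `J` singular, which is why the optimal paths involve maneuver.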