
Publication


Featured research published by Lav Rai.


Computerized Medical Imaging and Graphics | 2008

3D CT-Video Fusion for Image-Guided Bronchoscopy

William E. Higgins; James P. Helferty; Kongkuo Lu; Scott A. Merritt; Lav Rai; Kun-Chang Yu

Bronchoscopic biopsy of the central-chest lymph nodes is an important step for lung-cancer staging. Before bronchoscopy, the physician first visually assesses a patient's three-dimensional (3D) computed tomography (CT) chest scan to identify suspect lymph-node sites. Next, during bronchoscopy, the physician guides the bronchoscope to each desired lymph-node site. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe an approach that enables synergistic fusion between the 3D CT data and the bronchoscopic video. Both the integrated planning and guidance system and the internal CT-video registration and fusion methods are described. Phantom, animal, and human studies illustrate the efficacy of the methods.


Chest | 2008

Image-Guided Bronchoscopy for Peripheral Lung Lesions: A Phantom Study

Scott A. Merritt; Jason D. Gibbs; Kun-Chang Yu; Viral Patel; Lav Rai; Duane C. Cornish; Rebecca Bascom; William E. Higgins

BACKGROUND Ultrathin bronchoscopy guided by virtual bronchoscopy (VB) techniques shows promise for the diagnosis of peripheral lung lesions. In a phantom study, we evaluated a new real-time, VB-based, image-guided system for guiding the bronchoscopic biopsy of peripheral lung lesions and compared its performance to that of standard bronchoscopy practice. METHODS Twelve bronchoscopists of varying experience levels participated in the study. The task was to use an ultrathin bronchoscope and a biopsy forceps to localize 10 synthetically created lesions situated at varying airway depths. For route planning and guidance, the bronchoscopists employed either standard bronchoscopy practice or the real-time image-guided system. Outcome measures were biopsy site position error, which was defined as the distance from the forceps contact point to the ground-truth lesion boundary, and localization success, which was defined as a site identification having a biopsy site position error of ≤ 5 mm. RESULTS Mean (+/- SD) localization success more than doubled from 43 +/- 16% using standard practice to 94 +/- 7.9% using image guidance (p < 10^-15 [McNemar paired test]). The mean biopsy site position error dropped from 9.7 +/- 9.1 mm for standard practice to 2.2 +/- 2.3 mm for image guidance. For standard practice, localization success decreased from 56% for generation 3 to 4 lesions to 31% for generation 6 to 8 lesions, and from 51% for lesions on a carina to 23% for lesions situated away from a carina. These factors were far less pronounced when using image guidance, as follows: success for generation 3 to 4 lesions, 97%; success for generation 6 to 8 lesions, 91%; success for lesions on a carina, 98%; success for lesions away from a carina, 86%. Bronchoscopist experience did not significantly affect performance using the image-guided system.
CONCLUSIONS Real-time, VB-based image guidance can potentially far exceed standard bronchoscopy practice for enabling the bronchoscopic biopsy of peripheral lung lesions.
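The study's "localization success" outcome measure reduces to a simple thresholding of per-attempt position errors. A minimal sketch, assuming the ≤ 5 mm criterion described in the abstract (the function name and sample errors are illustrative, not from the study):

```python
# Sketch of the "localization success" outcome measure: the fraction of
# biopsy attempts whose site position error (distance from the forceps
# contact point to the lesion boundary) is at most 5 mm.

def localization_success_rate(position_errors_mm, threshold_mm=5.0):
    """Return the fraction of attempts with position error <= threshold."""
    if not position_errors_mm:
        return 0.0
    hits = sum(1 for e in position_errors_mm if e <= threshold_mm)
    return hits / len(position_errors_mm)

# Example: errors (mm) for ten hypothetical biopsy attempts
errors = [1.2, 3.4, 4.9, 5.0, 6.1, 2.0, 8.7, 0.5, 4.2, 5.3]
rate = localization_success_rate(errors)  # 7 of 10 attempts are <= 5 mm
```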


Chest | 2008

Interbronchoscopist Variability in Endobronchial Path Selection: A Simulation Study

Marina Dolina; Duane C. Cornish; Scott A. Merritt; Lav Rai; Rickhesvar P. Mahraj; William E. Higgins; Rebecca Bascom

BACKGROUND Endobronchial path selection is important for the bronchoscopic diagnosis of focal lung lesions. Path selection typically involves mentally reconstructing a three-dimensional path by interpreting a stack of two-dimensional (2D) axial plane CT scan sections. The hypotheses of our study about path selection were as follows: (1) bronchoscopists are inaccurate and overly confident when making endobronchial path selections based on 2D CT scan analysis; and (2) path selection accuracy and confidence improve and become better aligned when bronchoscopists employ path-planning methods based on virtual bronchoscopy (VB). METHODS Studies of endobronchial path selection comparing three path-planning methods (ie, the standard 2D CT scan analysis and two new VB-based techniques) were performed. The task was to navigate to discrete lesions located between the third-order and fifth-order bronchi of the right upper and middle lobes. Outcome measures were the cumulative accuracy of making four sequential path selection decisions and self-reported confidence (1, least confident; 5, most confident). Both experienced and inexperienced bronchoscopists participated in the studies. RESULTS In the first study involving a static paper-based tool, the mean (+/- SD) cumulative accuracy was 14 +/- 3% using 2D CT scan analysis (confidence, 3.4 +/- 1.3) and 49 +/- 15% using a VB-based technique (confidence, 4.2 +/- 1.1; p = 0.0001 across all comparisons). For a second study using an interactive computer-based tool, the mean accuracy was 40 +/- 28% using 2D CT scan analysis (confidence, 3.0 +/- 0.3) and 96 +/- 3% using a dynamic VB-based technique (confidence, 4.6 +/- 0.2). Regardless of the experience level of the bronchoscopist, use of the standard 2D CT scan analysis resulted in poor path selection accuracy and misaligned confidence. Use of the VB-based techniques resulted in considerably higher accuracy and better aligned decision confidence. 
CONCLUSIONS Endobronchial path selection is a source of error in the bronchoscopy workflow. The use of VB-based path-planning techniques significantly improves path selection accuracy over use of the standard 2D CT scan section analysis in this simulation format.
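One way to read the "cumulative accuracy of four sequential path selection decisions" is as a product of per-decision success probabilities, assuming the decisions are independent. This interpretation is ours, not the study's, but it shows why cumulative accuracy falls so sharply with path depth:

```python
# Hedged illustration (our interpretation, not from the study): if each of
# the four sequential path-selection decisions succeeds independently with
# probability p, the cumulative accuracy is roughly p ** 4. A per-decision
# accuracy near 62% would give the ~14% cumulative accuracy reported for
# 2D CT scan analysis in the first study.

def cumulative_accuracy(per_decision_accuracy, n_decisions=4):
    """Probability that all n sequential decisions are correct."""
    return per_decision_accuracy ** n_decisions

approx = cumulative_accuracy(0.62)  # roughly 0.148
```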


Computer Assisted Radiology and Surgery | 2008

Combined video tracking and image-video registration for continuous bronchoscopic guidance

Lav Rai; James P. Helferty; William E. Higgins

OBJECTIVE Three-dimensional (3D) computed-tomography (CT) images and bronchoscopy are commonly used tools for the assessment of lung cancer. Before bronchoscopy, the physician first examines a 3D CT chest image to select pertinent diagnostic sites and to ascertain possible routes through the airway tree leading to each site. Next, during bronchoscopy, the physician maneuvers the bronchoscope through the airways, basing navigation decisions on the live bronchoscopic video, to reach each diagnostic site. Unfortunately, no direct link exists between the 3D CT image data and bronchoscopic video. This makes bronchoscopy difficult to perform successfully. Existing methods for the image-based guidance of bronchoscopy either involve only single-frame registration or operate at impractically slow frame rates. We describe a method that combines the 3D CT image data and bronchoscopic video to enable continuous bronchoscopic guidance. METHODS The method interleaves periodic CT-video registration with bronchoscopic video motion tracking. It begins by using single-frame CT-video registration to register the “Real World” of the bronchoscopic video and the “Virtual World” of the CT-based endoluminal renderings. Next, the method uses an optical-flow-based approach to track bronchoscope movement for a fixed number of video frames and also simultaneously updates the virtual-world position. This process of registration and tracking repeats as the bronchoscope moves. RESULTS The method operates robustly over a variety of phantom and human cases. It typically performs successfully for over 150 frames and through multiple airway generations. In our tests, the method runs at a rate of up to seven frames per second. We have integrated the method into a complete system for the image-based planning and guidance of bronchoscopy. CONCLUSION The method performs over an order of magnitude faster than previously proposed image-based bronchoscopy guidance methods. Software optimization and a modest hardware improvement could enable real-time performance.
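The interleaving strategy described above can be sketched as a control loop: a full CT-video registration every few frames, with fast optical-flow-based motion tracking on the frames in between. This is a minimal structural sketch, not the authors' implementation; the helper functions are stubs, and the re-registration interval is an assumed parameter:

```python
# Minimal sketch of the interleaved guidance loop: periodic full CT-video
# registration, with incremental motion tracking between registrations.
# The two helpers are stubs standing in for the paper's actual algorithms.

REGISTER_EVERY = 10  # frames tracked between full registrations (illustrative)

def register_ct_to_frame(frame, pose):
    """Stub: full single-frame CT-video registration (refines the pose)."""
    return pose  # a real system would optimize the virtual-camera pose here

def track_motion(prev_frame, frame, pose):
    """Stub: optical-flow-based incremental pose update between frames."""
    return pose  # a real system would estimate inter-frame camera motion

def guidance_loop(frames, initial_pose):
    """Yield a virtual-world pose estimate for every incoming video frame."""
    pose = register_ct_to_frame(frames[0], initial_pose)
    yield pose
    for i in range(1, len(frames)):
        if i % REGISTER_EVERY == 0:
            pose = register_ct_to_frame(frames[i], pose)          # periodic re-registration
        else:
            pose = track_motion(frames[i - 1], frames[i], pose)   # fast tracking
        yield pose
```

The design point is that tracking is cheap per frame while full registration is expensive, so interleaving them sustains a higher overall frame rate than registering every frame.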


Medical Imaging 2006: Physiology, Function, and Structure from Medical Images | 2006

Real-time CT-video registration for continuous endoscopic guidance

Scott A. Merritt; Lav Rai; William E. Higgins

Previous research has shown that CT-image-based guidance could be useful for the bronchoscopic assessment of lung cancer. This research drew upon the registration of bronchoscopic video images to CT-based endoluminal renderings of the airway tree. The proposed methods were either restricted to discrete single-frame registration, which took several seconds to complete, or required non-real-time buffering and processing of video sequences. We have devised a fast 2D/3D image registration method that performs single-frame CT-video registration in under 1/15th of a second. This allows the method to be used for real-time registration at full video frame rates without significantly altering the physician's behavior. The method achieves its speed through a gradient-based optimization method that allows most of the computation to be performed off-line. During live registration, the optimization iteratively steps toward the locally optimal viewpoint at which a CT-based endoluminal view is most similar to a current bronchoscopic video frame. After an initial registration to begin the process (generally done in the trachea for bronchoscopy), subsequent registrations are performed in real-time on each incoming video frame. As each new bronchoscopic video frame becomes available, the current optimization is initialized using the previous frame's optimization result, allowing continuous guidance to proceed without manual re-initialization. Tests were performed using both synthetic and pre-recorded bronchoscopic video. The results show that the method is robust to initialization errors, that registration accuracy is high, and that continuous registration can proceed on real-time video at >15 frames per second with minimal user intervention.
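The warm-start idea in the abstract — initialize each frame's optimization from the previous frame's result so only a few local steps are needed — can be illustrated on a toy similarity surface. A minimal sketch, assuming a generic gradient-ascent optimizer and a synthetic quadratic similarity peak (none of the names or parameters are from the paper):

```python
import numpy as np

# Illustrative sketch (not the paper's implementation): gradient ascent on a
# similarity function over the virtual-camera viewpoint, warm-started from
# the previous frame's optimum so each new frame needs few iterations.

def register_frame(similarity_grad, init_pose, step=0.1, iters=20):
    """Step the pose toward a local maximum of the similarity function."""
    pose = np.asarray(init_pose, dtype=float)
    for _ in range(iters):
        pose = pose + step * similarity_grad(pose)
    return pose

# Toy similarity peaked at the (unknown) true viewpoint `target`:
# similarity(p) = -||p - target||^2 / 2, whose gradient is -(p - target).
target = np.array([1.0, -2.0, 0.5])
grad = lambda p: -(p - target)

pose = np.zeros(3)                       # initial registration (e.g. in the trachea)
for _ in range(5):                       # five incoming video frames
    pose = register_frame(grad, pose)    # warm start from the previous result
```

After a handful of frames the pose estimate converges to the peak; because each frame starts near the previous optimum, the per-frame iteration budget stays small, which is what makes full-frame-rate operation feasible.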


Computer Vision and Pattern Recognition | 2006

Real-time Image-Based Guidance Method for Lung-Cancer Assessment

Lav Rai; Scott A. Merritt; William E. Higgins

The assessment of lung cancer involves off-line three-dimensional (3D) computed-tomography (CT) image assessment followed by live bronchoscopy. While standard bronchoscopy provides live video of information inside the lung airways, it does not give any information outside the airways. This results in a low procedure success rate. The success rate can be improved if the physician receives 3D CT-based image guidance. We describe a fast, robust method that provides 3D CT-based image guidance during live bronchoscopy. The method enables a continuous registration between the 3D CT image space and the real bronchoscopic video. At a top level, it involves an interleaving of continuous video tracking and CT-video fine registration. During video tracking, the bronchoscope's 3D motion is estimated using a point-based 3D-2D pose estimation method. Registration is performed via a warping-based Gauss-Newton method that uses a normalized cross-correlation-based cost for measuring similarity between the bronchoscopic video and CT image data. The method operates at a real-time rate of 10 frames per second, which is over an order of magnitude faster than past approaches. This real-time performance enables live guidance during a procedure. The method is incorporated into a computer-based system for image-guided bronchoscopy and has been applied to human lung-cancer patients. Results are presented for a phantom and lung-cancer patients.
Keywords: 3D medical imaging, video tracking, image registration, virtual endoscopy, image-guided surgery
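The normalized cross-correlation (NCC) cost cited above is a standard similarity measure between two images. A minimal sketch of plain NCC, assuming same-size grayscale inputs; the authors' cost is described as NCC-based, so their exact weighted variant may differ:

```python
import numpy as np

# Standard normalized cross-correlation (NCC) between two same-size
# grayscale images, the kind of similarity the registration cost is
# built on. Values lie in [-1, 1]; 1 means identical up to an affine
# intensity change, which makes NCC robust to illumination differences
# between a video frame and a CT-based endoluminal rendering.

def ncc(video_frame, rendering):
    """Normalized cross-correlation of two equally shaped images."""
    a = np.asarray(video_frame, dtype=float).ravel()
    b = np.asarray(rendering, dtype=float).ravel()
    a = a - a.mean()                       # remove mean brightness
    b = b - b.mean()
    denom = np.linalg.norm(a) * np.linalg.norm(b)
    if denom == 0:                          # constant image: undefined, return 0
        return 0.0
    return float(np.dot(a, b) / denom)
```

For example, `ncc(img, img)` is 1.0, and `ncc(img, 2 * img + 3)` is also 1.0, since NCC is invariant to brightness and contrast shifts.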


Proceedings of SPIE, the International Society for Optical Engineering | 2005

3D image fusion and guidance for computer-assisted bronchoscopy

William E. Higgins; Lav Rai; Scott A. Merritt; Kongkuo Lu; N. T. Linger; Kun-Chang Yu

The standard procedure for diagnosing lung cancer involves two stages. First, the physician evaluates a high-resolution three-dimensional (3D) computed-tomography (CT) chest image to produce a procedure plan. Next, the physician performs bronchoscopy on the patient, which involves navigating the bronchoscope through the airways to planned biopsy sites. Unfortunately, the physician has no link between the 3D CT image data and the live video stream provided during bronchoscopy. In addition, these data sources differ greatly in what they physically convey, and no true 3D planning tools exist for planning and guiding procedures. This makes it difficult for the physician to translate a CT-based procedure plan to the video domain of the bronchoscope. Thus, the physician must essentially perform biopsy blindly, and the skill levels between different physicians differ greatly. We describe a system that enables direct 3D CT-based procedure planning and provides direct 3D guidance during bronchoscopy. 3D CT-based information on biopsy sites is provided interactively as the physician moves the bronchoscope. Moreover, graphical information through a live fusion of the 3D CT data and bronchoscopic video is provided during the procedure. This information is coupled with a series of computer-graphics tools to give the physician a greatly augmented reality of the patient's interior anatomy during a procedure. Through a series of controlled tests and studies with human lung-cancer patients, we have found that the system not only reduces the variation in skill level between different physicians, but also increases biopsy success rate.


Medical Imaging 2007: Physiology, Function, and Structure from Medical Images | 2007

Method for continuous guidance of endoscopy

Scott A. Merritt; Lav Rai; Jason D. Gibbs; Kun-Chang Yu; William E. Higgins

Previous research has indicated that use of guidance systems during endoscopy can improve the performance and decrease the skill variation of physicians. Current guidance systems, however, rely on computationally intensive registration techniques or costly and error-prone electromagnetic (E/M) registration techniques, neither of which fit seamlessly into the clinical workflow. We have previously proposed a real-time image-based registration technique that addresses both of these problems. We now propose a system-level approach that incorporates this technique into a complete paradigm for real-time image-based guidance in order to provide a physician with continuously-updated navigational and guidance information. At the core of the system is a novel strategy for guidance of endoscopy. Additional elements such as global surface rendering, local cross-sectional views, and pertinent distances are also incorporated into the system to provide additional utility to the physician. Phantom results were generated using bronchoscopy performed on a rapid prototype model of a human tracheobronchial airway tree. The system has also been tested in ongoing live human tests. Thus far, ten such tests, focused on bronchoscopic intervention of pulmonary patients, have been run successfully.


Chest | 2015

High yield of bronchoscopic transparenchymal nodule access real-time image-guided sampling in a novel model of small pulmonary nodules in canines.

Daniel H. Sterman; Thomas Keast; Lav Rai; Jason D. Gibbs; Henky Wibowo; Jeffrey W. Draper; Felix J.F. Herth; Gerard A. Silvestri

BACKGROUND Bronchoscopic transparenchymal nodule access (BTPNA) is a novel approach to accessing pulmonary nodules. This real-time, image-guided approach was evaluated for safety, accuracy, and yield in the healthy canine model. METHODS A novel, inorganic model of subcentimeter pulmonary nodules was developed, consisting of 0.25-cc aliquots of calcium hydroxylapatite (Radiesse) implanted via transbronchial access in airways seven generations beyond the main bronchi to represent targets for evaluation of accuracy and yield. Thoracic CT scans were acquired for each subject, and from these CT scans LungPoint Virtual Bronchoscopic Navigation software provided guidance to the region of interest. Novel transparenchymal nodule access software algorithms automatically generated point-of-entry recommendations, registered CT images, and real-time fluoroscopic images and overlaid guidance onto live bronchoscopic and fluoroscopic video to achieve a vessel-free, straight-line path from a central airway through parenchymal tissue for access to peripheral lesions. RESULTS In a nine-canine cohort, the BTPNA procedure was performed to sample 31 implanted Radiesse targets, implanted to simulate pulmonary nodules, via biopsy forceps through a specially designed sheath. The mean length of the 31 tunnels was 35 mm (range, 20.5-50.3 mm). Mean tunnel creation time was 16:52 min, and diagnostic yield was 90.3% (28 of 31). No significant adverse events were noted in the status of any of the canine subjects post BTPNA, with no pneumothoraces and minimal bleeding (all bleeding events < 2 mL in volume). CONCLUSIONS These canine studies demonstrate that BTPNA has the potential to achieve the high yield of transthoracic needle aspiration with the low complication profile associated with traditional bronchoscopy. These results merit further study in humans.


Chest | 2014

Feasibility and Safety of Bronchoscopic Transparenchymal Nodule Access in Canines: A New Real-Time Image-Guided Approach to Lung Lesions

Gerard A. Silvestri; Felix J.F. Herth; Thomas Keast; Lav Rai; Jason D. Gibbs; Henky Wibowo; Daniel H. Sterman

BACKGROUND The current approaches for tissue diagnosis of a solitary pulmonary nodule are transthoracic needle aspiration, guided bronchoscopy, or surgical resection. The choice of procedure is driven by patient and radiographic factors, risks, and benefits. We describe a new approach to the diagnosis of a solitary pulmonary nodule, namely bronchoscopic transparenchymal nodule access (BTPNA). METHODS In anesthetized dogs, fiducial markers were placed and thoracic CT images acquired. From the CT scan, the BTPNA software provided automatic point-of-entry prescribing of a bronchoscopic path (tunnel) through parenchymal tissue directly to the lesion. The preplanned procedure was uploaded to a virtual bronchoscopic navigation system. Bronchoscopic access was performed through the tunnels created. Proximity of the distal end of the tunnel sheath to the target was measured, and safety was recorded. RESULTS In four canines, 13 tunnels were created. The average length of the tunnels was 32.3 mm (range, 24.7-46.7 mm). The average proximity measure was 5.7 mm (range, 0.1-12.9 mm). The distance from the pleura to the nearest point within the target was 7.4 mm (range, 0.1-15 mm). Estimated blood loss was <2 mL per case. There were no pneumothoraces. CONCLUSIONS We describe a new approach to accessing lesions in the lung parenchyma. BTPNA allows bronchoscopic creation of a direct path with a sheath placed in proximity to the target, creating the potential to deliver biopsy tools within a lesion to acquire tissue. The technology appears safe. Further experiments are needed to assess the diagnostic yield of this procedure in animals and, if promising, to assess this technology in humans.

Collaboration


Dive into Lav Rai's collaboration.

Top Co-Authors

William E. Higgins
Pennsylvania State University

Jason D. Gibbs
Pennsylvania State University

Scott A. Merritt
Pennsylvania State University

Kun-Chang Yu
Pennsylvania State University

James P. Helferty
Pennsylvania State University

Kongkuo Lu
Pennsylvania State University

Duane C. Cornish
Pennsylvania State University