Benedicte Bascle
Princeton University
Publications
Featured research published by Benedicte Bascle.
international symposium on mixed and augmented reality | 1999
Nassir Navab; Benedicte Bascle; Mirko Appel; Echeyde Cubillo
The application presented augments uncalibrated images of factories with industrial drawings. Industrial drawings are among the most important documents used during the lifetime of industrial environments. They are the only common documents used during design, installation, monitoring and control, maintenance, update and finally dismantling of industrial units. Leading traditional industries towards the full use of virtual and augmented reality technology is impossible unless industrial drawings are integrated into our systems. We provide the missing link between industrial drawings and digital images of industrial sites. On the one hand, this could enable us to calibrate cameras and build a 3D model of the scene without using any calibration markers. On the other hand, it brings industrial drawings, floor maps, images and 3D models into one unified framework. This provides a solid foundation for building efficient enhanced virtual industrial environments. The augmented scene is obtained by perspective warping of an industrial drawing of the factory onto its floor, wherever the floor is visible. The visibility of the floor is determined using probabilistic reasoning over a set of clues including (1) floor color/intensity, and (2) image warping and differencing between an uncalibrated stereoscopic image pair using the ground plane homography. Experimental results illustrate the approach.
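The warping step above can be sketched as applying a 3x3 ground-plane homography to drawing coordinates. The matrix and points below are hypothetical placeholders, not values from the paper; in the actual system the homography would be estimated from the scene.

```python
import numpy as np

def apply_homography(H, pts):
    """Map Nx2 points through a 3x3 homography, with the homogeneous divide."""
    pts_h = np.hstack([pts, np.ones((len(pts), 1))])  # to homogeneous coords
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]

# Hypothetical homography: scales drawing units and shifts them into the image.
H = np.array([[2.0, 0.0, 100.0],
              [0.0, 2.0,  50.0],
              [0.0, 0.0,   1.0]])

drawing_pts = np.array([[0.0, 0.0], [10.0, 5.0]])
image_pts = apply_homography(H, drawing_pts)
print(image_pts)  # [[100. 50.] [120. 60.]]
```

Warping a whole drawing image onto the floor region amounts to applying the inverse of this map to every visible floor pixel and sampling the drawing there.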
medical image computing and computer assisted intervention | 2001
Frank Sauer; Ali Khamene; Benedicte Bascle; Gregory J. Rubino
We developed an augmented reality system targeting image guidance for surgical procedures. The surgeon wears a video-see-through head mounted display that provides him with a stereo video view of the patient. The live video images are augmented with graphical representations of anatomical structures that are segmented from medical image data. The surgeon can see, e.g., a tumor in its actual location inside the patient. This in-situ visualization, where the computer maps the image information onto the patient, promises the most direct, intuitive guidance for surgical procedures. In this paper, we discuss technical details of the system and describe a first pre-clinical evaluation. This first evaluation is very positive and encourages us to get our system ready for installation in UCLA's iMRI operating room to perform clinical trials.
Medical Imaging 2000: Image Display and Visualization | 2000
Michael Loser; Nassir Navab; Benedicte Bascle; Russell H. Taylor
Visual servoing is well established in the field of industrial robotics, when using CCD cameras. This paper describes one of the first medical implementations of uncalibrated visual servoing. To our knowledge, this is the first time that visual servoing has been done using x-ray fluoroscopy. We present a new image-based approach for semi-automatic guidance of a needle or surgical tool during percutaneous procedures; it is based on a series of granted and pending US patent applications. It is a simple and accurate method which requires no prior calibration or registration. Therefore, no additional sensors, no stereotactic frame and no additional calibration phantom are needed. Our technique provides accurate 3D alignment of the tool with respect to an anatomic target and estimates the required insertion depth. We implemented and verified this method with three different medical robots at the Computer Integrated Surgery (CIS) Lab at the Johns Hopkins University. First tests were performed using a CCD camera and a mobile uniplanar x-ray fluoroscope as imaging modalities. We used small metal balls of 4 mm in diameter as target points. These targets were placed 60 to 70 mm deep inside a test phantom. Our method led to correct insertions with a mean deviation of 0.20 mm with the CCD camera and a mean deviation of about 1.5 mm in a clinical setting with an old x-ray imaging system, where the images were not of the best quality. These promising results establish this method as a serious alternative to other needle placement techniques, which require cumbersome and time-consuming calibration procedures.
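The uncalibrated idea above can be illustrated with a toy sketch: the controller never knows the camera model; it numerically estimates an image Jacobian from small probe motions and servos the tool toward the target in image space. This is an illustrative simulation under the assumption of a linear image map, not the paper's actual method.

```python
import numpy as np

rng = np.random.default_rng(0)

# Unknown ground-truth image map (stands in for the fluoroscope projection);
# the controller below never sees these coefficients.
A_true = rng.normal(size=(2, 3))

def observe(x):
    """Image-space position of the tool tip for a 3D tool state x."""
    return A_true @ x

def estimate_jacobian(x, eps=1e-3):
    """Estimate the 2x3 image Jacobian by central differences on probe motions."""
    J = np.zeros((2, 3))
    for i in range(3):
        dx = np.zeros(3)
        dx[i] = eps
        J[:, i] = (observe(x + dx) - observe(x - dx)) / (2 * eps)
    return J

target_img = np.array([0.5, -0.2])   # desired image position of the target
x = np.zeros(3)                      # initial tool state
for _ in range(20):
    err = target_img - observe(x)
    if np.linalg.norm(err) < 1e-8:
        break
    J = estimate_jacobian(x)
    x = x + np.linalg.pinv(J) @ err  # Gauss-Newton step via the pseudoinverse

final_err = np.linalg.norm(target_img - observe(x))
print(final_err)  # near zero
```

With a real imaging modality the map is nonlinear and noisy, so the Jacobian must be re-estimated as the tool moves; the control loop structure stays the same.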
computer vision and pattern recognition | 2000
Bertrand Thirion; Benedicte Bascle; Visvanathan Ramesh; Nassir Navab
Image segmentation has traditionally been thought of as a low/mid-level vision process incorporating no high-level constraints. However, in complex and uncontrolled environments, such bottom-up strategies have drawbacks that lead to large misclassification rates. Remedies to this situation include taking into account (1) contextual and application constraints, and (2) user input and feedback to incrementally improve the performance of the system. We attempt to incorporate these in the context of pipeline segmentation in industrial images. This problem is of practical importance for the 3D reconstruction of factory environments. However, it poses several fundamental challenges, mainly due to shading, highlights, textural variations, etc. Our system performs pipe segmentation by fusing methods from physics-based vision, edge and texture analysis, probabilistic learning and the graph-cut formalism.
Medical Imaging 2001: Visualization, Display, and Image-Guided Procedures | 2001
Calvin R. Maurer; Frank Sauer; Bo Hu; Benedicte Bascle; Bernhard Geiger; Fabian Wenzel; Filippo Recchi; Torsten Rohlfing; Chris R. Brown; Robert J. Bakos; Robert J. Maciunas; Ali Bani-Hashemi
We are developing a video see-through head-mounted display (HMD) augmented reality (AR) system for image-guided neurosurgical planning and navigation. The surgeon wears a HMD that presents him with the augmented stereo view. The HMD is custom fitted with two miniature color video cameras that capture a stereo view of the real-world scene. We are concentrating specifically at this point on cranial neurosurgery, so the images will be of the patient's head. A third video camera, operating in the near infrared, is also attached to the HMD and is used for head tracking. The pose (i.e., position and orientation) of the HMD is used to determine where to overlay anatomic structures segmented from preoperative tomographic images (e.g., CT, MR) on the intraoperative video images. Two SGI 540 Visual Workstation computers process the three video streams and render the augmented stereo views for display on the HMD. The AR system operates in real time at 30 frames/sec with a temporal latency of about three frames (100 ms) and zero relative lag between the virtual objects and the real-world scene. For an initial evaluation of the system, we created AR images using a head phantom with actual internal anatomic structures (segmented from CT and MR scans of a patient) realistically positioned inside the phantom. When using shaded renderings, many users had difficulty appreciating overlaid brain structures as being inside the head. When using wire frames and texture-mapped dot patterns, most users correctly visualized brain anatomy as being internal and could generally appreciate spatial relationships among various objects. The 3D perception of these structures is based on both stereoscopic depth cues and kinetic depth cues, with the user looking at the head phantom from varying positions. The perception of the augmented visualization is natural and convincing. The brain structures appear rigidly anchored in the head, manifesting little or no apparent swimming or jitter.
The initial evaluation of the system is encouraging, and we believe that AR visualization might become an important tool for image-guided neurosurgical planning and navigation.
Medical Imaging 2002: Visualization, Image-Guided Procedures, and Display | 2002
Frank Sauer; Ali Khamene; Benedicte Bascle; Sebastian Vogt; Gregory J. Rubino
We developed an augmented reality system targeting image guidance for surgical procedures. The surgeon wears a video-see-through head mounted display that provides him with a stereo video view of the patient. The live video images are augmented with graphical representations of anatomical structures that are segmented from medical image data. The surgeon can see, e.g., a tumor in its actual location inside the patient. This in-situ visualization, where the computer maps the image information onto the patient, promises the most direct, intuitive guidance for surgical procedures. In this paper, we describe technical details of the system and its installation in UCLA's iMRI operating room. We added instrument tracking to the capabilities of our system to prepare it for minimally invasive procedures. We discuss several pre-clinical phantom experiments that support the potential clinical usefulness of augmented reality guidance.
Mustererkennung 2000, 22. DAGM-Symposium | 2000
Nassir Navab; Mirko Appel; Yakup Genc; Benedicte Bascle; V. Kumar; M. Neuberger
Despite advances in 3D technology, the majority of large industrial sites do not have access to a 3D model of their facilities. These industries often use printed 2D drawings for almost all engineering designs and updates. Here we introduce a new framework to combine different types of images available for an industrial site to recover the 3D structure. In particular, we consider two types of images: industrial drawings and photogrammetric images.
european conference on computer vision | 2004
Benedicte Bascle; Xiang Gao; Visvanathan Ramesh
This article presents two new approaches, one parametric and one non-parametric, to the linear grouping of image features. They are based on the Bayesian Hough Transform, which takes into account feature uncertainty. Our main contributions are two new ways to detect the most significant modes of the Hough Transform. Traditionally, this is done by non-maximum suppression. However, in truth, Hough bins measure the likelihoods not of single lines but of collections of lines. Therefore finding lines by non-maximum suppression is not appropriate. This article presents two alternatives. The first method uses bin integration, automatic pruning and fusion to perform mode detection. The second approach detects dominant modes using variable bandwidth mean shift. The advantages of these algorithms are that: (1) the uncertainties associated with feature measurements are taken into account during voting and mode estimation, and (2) dominant modes are detected in ways that are more correct and less sensitive to errors and biases than non-maximum suppression. The methods can be used with any feature type and any associated feature detection algorithm, provided that it outputs a feature position, orientation and covariance matrix. Results illustrate the approaches.
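The second alternative above can be sketched as mean shift over weighted Hough votes in (theta, rho) space. For brevity this sketch uses a fixed bandwidth and synthetic votes, whereas the paper uses variable-bandwidth mean shift driven by the feature covariances.

```python
import numpy as np

def mean_shift_modes(samples, weights, bandwidth, iters=50, merge_tol=1e-2):
    """Weighted mean shift: move each sample uphill on the kernel density
    estimate, then merge converged points into distinct modes."""
    pts = samples.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(pts):
            d2 = np.sum((samples - p) ** 2, axis=1)
            k = weights * np.exp(-0.5 * d2 / bandwidth ** 2)  # Gaussian kernel
            pts[i] = (k[:, None] * samples).sum(axis=0) / k.sum()
    modes = []
    for p in pts:  # cluster converged points that landed on the same mode
        if not any(np.linalg.norm(p - m) < merge_tol for m in modes):
            modes.append(p)
    return np.array(modes)

# Two clusters of (theta, rho) Hough votes, synthetic and for illustration only.
votes = np.array([[0.00, 1.00], [0.02, 1.05],
                  [1.50, 4.00], [1.52, 3.95]])
w = np.ones(len(votes))
modes = mean_shift_modes(votes, w, bandwidth=0.2)
print(len(modes))  # 2
```

Each detected mode corresponds to one dominant line hypothesis; this avoids the single-bin bias of plain non-maximum suppression because every vote contributes to the density around the mode.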
emerging technologies and factory automation | 2001
Benedicte Bascle; K. Hampel; Nassir Navab; Brent Baxter; Xiang Zhang
This paper describes our 3D graphical interface for industrial process control. The software is based on Visio and WinCC. It first builds a 3D model of the factory. This can be done using user input or automatically from industrial drawings (electronic or paper copies). Secondly, process values are attached to the scene, and the process is activated and controlled through WinCC.
Archive | 2001
Benedicte Bascle; Thomas Ruge; Artur Raczynski