Publication


Featured research published by Benno Heigl.


Mustererkennung 1999, 21. DAGM-Symposium | 1999

Plenoptic Modeling and Rendering from Image Sequences Taken by Hand-Held Camera

Benno Heigl; Reinhard Koch; Marc Pollefeys; Joachim Denzler; Luc Van Gool

In this contribution we focus on plenoptic scene modeling and rendering from long image sequences taken with a hand-held camera. The image sequence is calibrated with a structure-from-motion approach that considers the special viewing geometry of plenoptic scenes. By applying a stereo matching technique, dense depth maps are recovered locally for each viewpoint.
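The abstract mentions recovering dense depth maps via stereo matching. Purely as a rough illustration (not taken from the paper), the sketch below converts a dense disparity map into depth for a rectified stereo pair; the focal length, baseline, and toy disparity values are assumptions.

```python
import numpy as np

def depth_from_disparity(disparity, focal_length_px, baseline_m):
    """Convert a dense disparity map into a depth map (rectified pair).

    Depth follows Z = f * B / d; pixels with non-positive disparity
    are marked invalid (NaN).
    """
    depth = np.full(disparity.shape, np.nan, dtype=np.float64)
    valid = disparity > 0
    depth[valid] = focal_length_px * baseline_m / disparity[valid]
    return depth

# Toy disparity map between two neighbouring hand-held views (assumed values).
disp = np.array([[8.0, 8.0, 4.0, 0.0],
                 [8.0, 6.0, 4.0, 2.0],
                 [6.0, 6.0, 3.0, 2.0],
                 [6.0, 4.0, 3.0, 1.0]])
print(depth_from_disparity(disp, focal_length_px=800.0, baseline_m=0.05))
```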


International Conference on Computer Vision | 1999

Calibration of hand-held camera sequences for plenoptic modeling

Reinhard Koch; Marc Pollefeys; Benno Heigl; L. Van Gool; Heinrich Niemann

We focus on the calibration of very long image sequences from a hand-held camera that samples the viewing sphere of a scene. View sphere sampling is important for plenoptic (image-based) modeling that captures the appearance of a scene by storing images from all possible directions. The plenoptic approach is appealing since it allows, in principle, fast rendering of scenes with complex geometry and surface reflections, without the need for an explicit geometrical scene model. However, the acquired images have to be calibrated, and current approaches mostly rely on pre-calibrated acquisition systems, which limits the generality of the approach. We propose using only an uncalibrated hand-held camera. The image sequence is acquired by simply waving the camera around the scene objects, creating a zigzag scan path over the viewing sphere. We extend the sequential camera tracking of an existing structure-from-motion approach to the calibration of a mesh of viewpoints. Novel views are generated by piecewise mapping and interpolating the new image from the nearest viewpoints according to the viewpoint mesh. Local depth map estimates enhance the rendering process. Extensive experiments with ground truth data and hand-held sequences confirm the performance of our approach.
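The paper generates novel views by interpolating from the nearest calibrated viewpoints in the viewpoint mesh. Below is a minimal, hypothetical sketch of that idea: it blends the k nearest views with inverse-distance weights, assuming the images have already been warped into the target view. The function and its parameters are illustrative, not the paper's actual rendering pipeline.

```python
import numpy as np

def blend_nearest_views(view_images, view_positions, target_position, k=3):
    """Blend the k nearest calibrated viewpoints into a novel view.

    Toy stand-in for interpolation over a viewpoint mesh: each of the k
    closest views gets an inverse-distance weight. All images are assumed
    to be pre-warped into the target view and to share the same size.
    """
    positions = np.asarray(view_positions, dtype=np.float64)
    dists = np.linalg.norm(positions - np.asarray(target_position), axis=1)
    nearest = np.argsort(dists)[:k]
    weights = 1.0 / (dists[nearest] + 1e-6)
    weights /= weights.sum()
    return sum(w * view_images[i].astype(np.float64)
               for w, i in zip(weights, nearest))
```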


Archive | 2009

3D Imaging with Flat-Detector C-Arm Systems

Norbert Strobel; Oliver Meissner; Jan Boese; Thomas Brunner; Benno Heigl; Martin Hoheisel; Günter Lauritsch; Markus Nagel; Marcus Pfister; Ernst-Peter Rührnschopf; Bernhard Scholz; Bernd Schreiber; Martin Spahn; Michael Zellerhoff; Klaus Klingenbeck-Regn

Three-dimensional (3D) C-arm computed tomography is a new and innovative imaging technique. It uses two-dimensional (2D) X-ray projections acquired with a flat-panel detector C-arm angiography system to generate CT-like images. To this end, the C-arm system performs a sweep around the patient, acquiring up to several hundred 2D views. They serve as input for 3D cone-beam reconstruction. Resulting voxel data sets can be visualized either as cross-sectional images or as 3D data sets using different volume rendering techniques. Initially targeted at 3D high-contrast neurovascular applications, 3D C-arm imaging has been continuously improved over the years and is now capable of providing CT-like soft-tissue image quality. In combination with 2D fluoroscopic or radiographic imaging, information provided by 3D C-arm imaging can be valuable for therapy planning, guidance, and outcome assessment all in the interventional suite.
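The chapter describes how several hundred 2D projections are turned into CT-like volumes by 3D cone-beam reconstruction. The toy sketch below only illustrates the accumulation step: it performs unfiltered 2D parallel-beam backprojection, a strong simplification of the filtered cone-beam (FDK-type) reconstruction used by real flat-detector C-arm systems.

```python
import numpy as np

def backproject_2d(sinogram, angles_rad, size):
    """Unfiltered 2D parallel-beam backprojection (illustration only).

    sinogram has one row of detector samples per view angle. Each pixel
    accumulates the detector value it projects onto under every angle,
    showing how many views combine into one reconstructed slice.
    """
    recon = np.zeros((size, size))
    coords = np.arange(size) - size / 2.0
    x, y = np.meshgrid(coords, coords)
    n_det = sinogram.shape[1]
    for proj, theta in zip(sinogram, angles_rad):
        # Detector coordinate hit by each pixel under this view angle.
        t = x * np.cos(theta) + y * np.sin(theta) + n_det / 2.0
        idx = np.clip(np.round(t).astype(int), 0, n_det - 1)
        recon += proj[idx]
    return recon / len(angles_rad)
```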


Medical Imaging 2003: Physics of Medical Imaging | 2003

Improving 3D image quality of x-ray C-arm imaging systems by using properly designed pose determination systems for calibrating the projection geometry

Norbert Strobel; Benno Heigl; Thomas Brunner; Oliver Schuetz; Matthias Mitschke; Karl Wiesent; Thomas Mertelmeier

C-arm volume reconstruction has become increasingly popular in recent years. These imaging systems generate 3D data sets for various interventional procedures such as endovascular treatment of aneurysms or orthopedic applications. Due to their open design and mechanical instability, C-arm imaging systems acquire projections along non-ideal scan trajectories. Volume reconstruction from filtered 2D X-ray projections therefore requires very precise knowledge of the imaging geometry. We show that the 3D image quality of C-arm cone-beam imaging devices can be improved by proper design of the calibration phantom.
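Reconstruction quality hinges on knowing the projection geometry of every view, typically expressed as a 3x4 projection matrix. As a hypothetical illustration (the paper's specific phantom design and calibration procedure are not reproduced here), the sketch below estimates such a matrix from 3D phantom marker positions and their 2D projections using the standard direct linear transform.

```python
import numpy as np

def estimate_projection_matrix(points_3d, points_2d):
    """Estimate a 3x4 projection matrix P from marker correspondences.

    Each pair (X, Y, Z) -> (u, v) contributes two rows of the homogeneous
    system A p = 0 (direct linear transform); at least six non-degenerate
    markers are needed. The solution is the right singular vector of A
    belonging to the smallest singular value.
    """
    A = []
    for (X, Y, Z), (u, v) in zip(points_3d, points_2d):
        A.append([X, Y, Z, 1, 0, 0, 0, 0, -u * X, -u * Y, -u * Z, -u])
        A.append([0, 0, 0, 0, X, Y, Z, 1, -v * X, -v * Y, -v * Z, -v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=np.float64))
    return vt[-1].reshape(3, 4)
```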


Machine Vision and Applications | 2003

MOBSY: Integration of vision and dialogue in service robots

Matthias Zobel; Joachim Denzler; Benno Heigl; Elmar Nöth; Dietrich Paulus; Jochen Schmidt; Georg Stemmer

This contribution introduces MOBSY, a fully integrated, autonomous mobile service robot system. It acts as an automatic dialogue-based receptionist for visitors to our institute. MOBSY incorporates many techniques from different research areas into one working stand-alone system. The techniques involved range from computer vision through speech understanding to classical robotics. Along with the two main aspects of vision and speech, we also focus on the integration aspect, both on the methodological and on the technical level. We describe the task and the techniques involved. Finally, we discuss the experiences that we gained with MOBSY during a live performance at our institute.


Application-Specific Systems, Architectures, and Processors | 2006

A Design Methodology for Hardware Acceleration of Adaptive Filter Algorithms in Image Processing

Hritam Dutta; Frank Hannig; Jürgen Teich; Benno Heigl; Heinz Hornegger

Massively parallel processor array architectures can be used as hardware accelerators for a wide range of dataflow-dominant applications. Bilateral filtering is an example of a state-of-the-art algorithm in medical imaging, which falls into the class of 2D adaptive filter algorithms. In this paper, we propose a semi-automatic mapping methodology for the generation of hardware accelerators for this generic class of adaptive filtering applications in image processing. The final architecture delivers synthesis results similar to those of a hand-tuned design.
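For reference, a plain software version of the bilateral filter that the paper accelerates in hardware is sketched below; the parameter values are assumptions, and the hardware mapping methodology itself is not shown.

```python
import numpy as np

def bilateral_filter(image, radius=2, sigma_s=1.5, sigma_r=25.0):
    """Reference (software) bilateral filter for a 2D grayscale image.

    Each neighbour's weight is a spatial Gaussian times a range (intensity)
    Gaussian, which makes the filter adaptive: neighbours across an edge
    differ strongly in intensity and therefore get negligible weight.
    """
    img = image.astype(np.float64)
    padded = np.pad(img, radius, mode='edge')
    out = np.zeros_like(img)
    norm = np.zeros_like(img)
    for dy in range(-radius, radius + 1):
        for dx in range(-radius, radius + 1):
            shifted = padded[radius + dy:radius + dy + img.shape[0],
                             radius + dx:radius + dx + img.shape[1]]
            w = (np.exp(-(dx * dx + dy * dy) / (2 * sigma_s ** 2))
                 * np.exp(-((shifted - img) ** 2) / (2 * sigma_r ** 2)))
            out += w * shifted
            norm += w
    return out / norm
```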


Proceedings of the 10th International Workshop on Theoretical Foundations of Computer Vision: Multi-Image Analysis | 2000

Image-Based Rendering from Uncalibrated Lightfields with Scalable Geometry

Reinhard Koch; Benno Heigl; Marc Pollefeys

We combine uncalibrated Structure-from-Motion, lightfield rendering and view-dependent texture mapping to model and render scenes from a set of images that are acquired from an uncalibrated handheld video camera. The camera is simply moved by hand around the 3D scene of interest. The intrinsic camera parameters like focal length and the camera positions are automatically calibrated with a Structure-From-Motion approach. Dense and accurate depth maps for each camera viewpoint are computed with multi-viewpoint stereoscopic matching. The set of images, their calibration parameters and the depth maps are then utilized for depth-compensated image-based rendering. The rendering utilizes a scalable geometric approximation that is tailored to the needs of the rendering hardware.
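Depth-compensated rendering warps pixels from nearby source views into the target view using their depth and the calibrated camera poses. The sketch below shows this warp for a single pixel under assumed conventions (shared intrinsics K, poses given as world-to-camera rotations and translations); it is an illustration, not the paper's scalable rendering method.

```python
import numpy as np

def warp_pixel(u, v, depth, K, R_src, t_src, R_dst, t_dst):
    """Depth-compensated warp of one pixel from a source into a target view.

    Conventions assumed here: K is the shared 3x3 intrinsic matrix, and
    (R, t) map world points into the respective camera frame. The pixel is
    back-projected with its depth, lifted to world coordinates, and
    re-projected into the target camera.
    """
    ray = np.linalg.inv(K) @ np.array([u, v, 1.0])
    X_cam = depth * ray                      # 3D point in source camera frame
    X_world = R_src.T @ (X_cam - t_src)      # lift to world coordinates
    X_dst = R_dst @ X_world + t_dst          # express in target camera frame
    x = K @ X_dst                            # project with shared intrinsics
    return x[0] / x[2], x[1] / x[2]
```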


International Conference on Computer Vision Systems | 1999

Active Knowledge-Based Scene Analysis

Dietrich Paulus; Ulrike Ahlrichs; Benno Heigl; Joachim Denzler; Joachim Hornegger; Heinrich Niemann

We present a modular architecture for image understanding and active computer vision which consists of three major components: sensor and actor interfaces required for data-driven active vision are encapsulated to hide machine-dependent parts; image segmentation is implemented in object-oriented programming as a hierarchy of image operator classes, guaranteeing simple and uniform interfaces; and knowledge about the environment is represented either as a semantic network, as statistical object models, or as a combination of both. The semantic network formalism is used to represent actions which are needed in explorative vision. We apply these modules to create two application systems. The emphasis here is on object localization and recognition in an office room: an active purposive camera control is applied to recover depth information and to focus on interesting objects, and color segmentation is used to compute object features which are relatively insensitive to small aspect changes. Object hypotheses are verified by an A*-based search using the knowledge base.
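Purely as an illustration of the search strategy mentioned for hypothesis verification, below is a generic A* sketch over an abstract state graph; the semantic-network-specific expansion and heuristics of the actual system are not reproduced.

```python
import heapq
import itertools

def a_star(start, goal_test, expand, heuristic):
    """Generic A* best-first search over an abstract state graph.

    expand(state) yields (successor, step_cost) pairs; heuristic(state)
    should not overestimate the remaining cost. Returns (path, cost) of
    the cheapest goal found, or (None, inf) if none is reachable.
    """
    tie = itertools.count()                      # tiebreaker for the heap
    frontier = [(heuristic(start), next(tie), 0.0, start, [start])]
    best_g = {}
    while frontier:
        _, _, g, state, path = heapq.heappop(frontier)
        if goal_test(state):
            return path, g
        if state in best_g and best_g[state] <= g:
            continue
        best_g[state] = g
        for succ, cost in expand(state):
            g_new = g + cost
            heapq.heappush(frontier,
                           (g_new + heuristic(succ), next(tie), g_new,
                            succ, path + [succ]))
    return None, float('inf')
```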


Enhanced and Synthetic Vision 2000 | 2000

Combining computer graphics and computer vision for probabilistic visual robot navigation

Benno Heigl; Joachim Denzler; Heinrich Niemann

In this contribution we present how techniques from computer graphics and computer vision can be combined to navigate a robot in a natural environment based on visual information. The key idea is to reconstruct an image-based scene model, which is used in the navigation task to judge position hypotheses by comparing the captured camera image with a virtual image created from the image-based scene model. Computer graphics contributes a method for photo-realistic rendering in real time; computer vision methods are applied to fully automatically reconstruct the scene model from image sequences taken by a hand-held camera or a moving platform. During navigation, a probabilistic state estimation algorithm is applied to handle uncertainty in the image acquisition process and the dynamic model of the moving platform. We present experiments which show that our proposed approach, i.e. using an image-based scene model for navigation, is capable of globally localizing a moving platform with reasonable effort. Using off-the-shelf computer graphics hardware, even real-time navigation is possible.
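The navigation step judges position hypotheses by comparing the camera image with virtual images rendered from the scene model, combined with probabilistic state estimation. Below is a hypothetical particle-filter measurement update in that spirit; the render_view callback and the Gaussian likelihood on image differences are assumptions, not the paper's exact measurement model.

```python
import numpy as np

def particle_measurement_update(particles, weights, camera_image,
                                render_view, sigma=0.1):
    """One measurement update of a particle filter for visual localization.

    Each particle is a pose hypothesis; render_view(pose) is assumed to
    return a virtual image of the same shape as camera_image, rendered
    from the image-based scene model. Weights follow a Gaussian likelihood
    of the mean squared image difference and are re-normalized.
    """
    weights = np.asarray(weights, dtype=np.float64).copy()
    for i, pose in enumerate(particles):
        virtual = render_view(pose)
        error = np.mean((camera_image - virtual) ** 2)
        weights[i] *= np.exp(-error / (2 * sigma ** 2))
    return weights / weights.sum()
```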


International Conference on Computer Vision Systems | 2001

MOBSY: Integration of Vision and Dialogue in Service Robots

Matthias Zobel; Joachim Denzler; Benno Heigl; Elmar Nöth; Dietrich Paulus; Jochen Schmidt; Georg Stemmer

MOBSY is a fully integrated autonomous mobile service robot system. It acts as an automatic dialogue-based receptionist for visitors to our institute. MOBSY incorporates many techniques from different research areas into one working stand-alone system. The computer vision and dialogue aspects are of particular interest from the pattern recognition point of view. In brief, the techniques involved range from object classification through visual self-localization and recalibration to object tracking with multiple cameras. A dialogue component has to deal with speech recognition, understanding, and answer generation. Further techniques needed are navigation, obstacle avoidance, and mechanisms to provide fault-tolerant behavior. This contribution introduces our mobile system MOBSY. Along with the main aspects of vision and speech, we also focus on the integration aspect, both on the methodological and on the technical level. We describe the task and the techniques involved. Finally, we discuss the experiences that we gained with MOBSY during a live performance at the 25th anniversary of our institute.

Collaboration


An overview of Benno Heigl's collaborations.

Top Co-Authors

Heinrich Niemann, University of Erlangen-Nuremberg
Dietrich Paulus, University of Koblenz and Landau
Andrew F. Hall, Washington University in St. Louis
John Rauch, Washington University in St. Louis