Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Oliver Bimber is active.

Publications


Featured research published by Oliver Bimber.


Presence: Teleoperators and Virtual Environments | 2001

The Extended Virtual Table: An Optical Extension for Table-Like Projection Systems

Oliver Bimber; L. Miguel Encarnação; Pedro Branco

A prototype of an optical extension for table-like rear-projection systems is described. A large, half-silvered mirror beam splitter is used as the optical combiner to unify a virtual and a real workbench. The virtual workbench has been enabled to display computer graphics beyond its projection boundaries and to combine virtual environments with the adjacent real world. A variety of techniques are described and referred to that allow indirect interaction with virtual objects through the mirror. Furthermore, the optical distortion that is caused by the half-silvered mirror combiner is analyzed, and techniques are presented to compensate for this distortion.
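To give a flavour of how rendering through such a planar mirror beam splitter is usually set up, here is a minimal, illustrative Python sketch of the reflection transform that maps scene geometry (or the viewer position) across the mirror plane. It is not code from the paper, and the paper's analysis of refraction-related distortion is not shown.

    import numpy as np

    def reflection_matrix(normal, point_on_plane):
        """4x4 affine matrix that reflects geometry across a mirror plane.

        The plane is given by a (not necessarily unit) normal and any point
        on it. Reflecting the viewer position or the scene geometry with this
        matrix is a common way to render a correct image through a planar
        mirror beam splitter."""
        n = np.asarray(normal, dtype=float)
        n = n / np.linalg.norm(n)              # unit normal
        d = -np.dot(n, point_on_plane)         # plane equation: n.x + d = 0
        M = np.eye(4)
        M[:3, :3] -= 2.0 * np.outer(n, n)      # reflect directions
        M[:3, 3] = -2.0 * d * n                # reflect the offset
        return M

    # Example: a mirror in the z = 0 plane maps a point at z = 1 to z = -1.
    M = reflection_matrix([0, 0, 1], [0, 0, 0])
    print(M @ np.array([0.0, 0.0, 1.0, 1.0]))  # -> [ 0.  0. -1.  1.]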


IEEE Computer Graphics and Applications | 2001

The Virtual Showcase

Oliver Bimber; Bernd Fröhlich; Dieter Schmalstieg; L. Miguel Encarnação

We introduce a new projection-based AR display system, the Virtual Showcase. The Virtual Showcase has the same form factor as a real showcase, making it compatible with traditional museum displays. Real scientific and cultural artifacts are placed inside the Virtual Showcase, allowing their 3D graphical augmentation. Inside the Virtual Showcase, virtual representations and real artifacts share the same space, providing new ways of merging and exploring real and virtual content. Solely virtual exhibits may also be displayed. The virtual part of the showcase can react in various ways to a visitor, which enables intuitive interaction with the displayed content.


International Symposium on Mixed and Augmented Reality | 2004

Video see-through AR on consumer cell-phones

Mathias Möhring; Christian Lessig; Oliver Bimber

We present a first video see-through augmented reality system running on a consumer cell-phone. It supports the detection and differentiation of multiple markers, and the correct integration of rendered 3D graphics into the live video stream, using a weak perspective projection camera model and an OpenGL rendering pipeline.
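As a rough illustration of the weak perspective camera model mentioned above, the Python sketch below projects 3D points by replacing the per-point depth division of the full pinhole model with a single reference depth. It is not the authors' code; the focal length, principal point, and units are made-up example values.

    import numpy as np

    def weak_perspective_project(points_3d, focal, center, z_ref=None):
        """Project 3D points with a weak perspective camera model.

        All points are assumed to lie at roughly the same depth z_ref
        (by default the mean depth), so projection reduces to a uniform
        scale plus an offset, which is cheap enough for early camera
        phones. Illustrative sketch only."""
        pts = np.asarray(points_3d, dtype=float)
        if z_ref is None:
            z_ref = pts[:, 2].mean()
        scale = focal / z_ref
        return scale * pts[:, :2] + np.asarray(center, dtype=float)

    # Example: a marker corner 10 cm in front of a camera with f = 500 px.
    print(weak_perspective_project([[0.02, 0.01, 0.10]], focal=500.0,
                                   center=(160, 120)))   # -> [[260. 170.]]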


IEEE MultiMedia | 2007

Enabling Mobile Phones To Support Large-Scale Museum Guidance

Erich Bruns; Benjamin Brombach; Thomas Zeidler; Oliver Bimber

We present a museum guidance system called PhoneGuide that uses widespread camera-equipped mobile phones for on-device object recognition in combination with pervasive tracking. It also provides location- and object-aware multimedia content to museum visitors, and is scalable to cover a large number of museum objects.


Eurographics | 2008

The Visual Computing of Projector-Camera Systems

Oliver Bimber; Daisuke Iwai; Gordon Wetzstein; Anselm Grundhöfer

This article focuses on real-time image correction techniques that enable projector-camera systems to display images on screens that are not optimized for projection, such as geometrically complex, coloured and textured surfaces. It reviews hardware-accelerated methods such as pixel-precise geometric warping, radiometric compensation, multi-focal projection and the correction of general light modulation effects. Online and offline calibration as well as invisible coding methods are explained, and novel attempts in super-resolution, high-dynamic-range and high-speed projection are discussed. These techniques open up a variety of new applications for projection displays, some of which are also presented in this report.
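The fragment below is a minimal sketch of the simplest per-pixel radiometric compensation model covered by such surveys, assuming captured = gain * projected + ambient per channel. The variable names and the channel-wise model are illustrative assumptions; the report itself treats far richer models (colour-mixing matrices, defocus, global illumination).

    import numpy as np

    def compensate(desired, surface_gain, ambient):
        """Channel-wise radiometric compensation sketch.

        surface_gain encodes how strongly the coloured, textured surface
        reflects each channel; ambient is the light measured with the
        projector off. Values outside [0, 1] exceed the projector's
        dynamic range and are clipped."""
        comp = (desired - ambient) / np.clip(surface_gain, 1e-6, None)
        return np.clip(comp, 0.0, 1.0)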


International Conference on Computer Graphics and Interactive Techniques | 2005

Modern approaches to augmented reality

Oliver Bimber; Ramesh Raskar

This tutorial discusses the Spatial Augmented Reality (SAR) concept, its advantages and its limitations. It presents examples of state-of-the-art display configurations, appropriate real-time rendering techniques, details about hardware and software implementations, and current areas of application. Specifically, it describes techniques for optical combination using single or multiple spatially aligned mirror beam splitters, image sources, transparent screens and optical holograms. Furthermore, it presents techniques for projector-based augmentation of geometrically complex and textured display surfaces and, along with optical combination, methods for achieving consistent illumination and occlusion effects. Emerging technologies that have the potential to enhance future augmented reality displays are also surveyed.


Pacific Conference on Computer Graphics and Applications | 2007

Radiometric Compensation through Inverse Light Transport

Gordon Wetzstein; Oliver Bimber

Radiometric compensation techniques allow seamless projections onto complex everyday surfaces. Implemented with projector-camera systems, they support the presentation of visual content in situations where projection-optimized screens are not available or not desired, as in museums, historic sites, airplane cabins, or stage performances. We propose a novel approach that employs the full light transport between projectors and a camera to account for many illumination aspects, such as interreflections, refractions, shadows, and defocus. Precomputing the inverse light transport, in combination with an efficient implementation on the GPU, makes real-time compensation of captured local and global light modulations possible.
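A minimal numerical sketch of the idea follows, assuming the light transport matrix T (mapping a flattened projector image to a flattened camera image) has already been measured. The function names are illustrative; a real system works with a very large transport matrix and evaluates the product on the GPU rather than through a dense NumPy pseudo-inverse.

    import numpy as np

    def precompute_inverse(T):
        # The pseudo-inverse copes with non-square, ill-conditioned
        # transport matrices; it is computed once, offline.
        return np.linalg.pinv(T)

    def compensate_frame(T_inv, desired_camera_image, environment_light):
        # Per frame: subtract the environment term and apply the
        # precomputed inverse light transport.
        projector_image = T_inv @ (desired_camera_image - environment_light)
        return np.clip(projector_image, 0.0, 1.0)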


Mobile and Ubiquitous Multimedia | 2005

PhoneGuide: museum guidance supported by on-device object recognition on mobile phones

Paul Föckler; Thomas Zeidler; Benjamin Brombach; Erich Bruns; Oliver Bimber

We present PhoneGuide, an enhanced museum guidance system that uses camera-equipped mobile phones and on-device object recognition. Our main technical achievement is a simple and lightweight object recognition approach that is realized with single-layer perceptron neural networks. In contrast to related systems, which perform computationally intensive image processing tasks on remote servers, our intention is to carry out all computations directly on the phone. This ensures little or no network traffic and consequently reduces the cost of online time. Our laboratory experiments and field surveys have shown that photographed museum exhibits can be recognized with a probability of over 90%. We have evaluated different feature sets to optimize the recognition rate and performance; our experiments revealed that normalized color features are most effective for our method. Choosing such a feature set allows an object to be recognized in under one second on up-to-date phones. The amount of data required to differentiate 50 objects from multiple perspectives is less than 6 KB.
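To illustrate the flavour of this approach (not the authors' implementation), the sketch below pairs a brightness-invariant chromaticity histogram, standing in for the normalized colour features, with a classic one-unit-per-object single-layer perceptron. Feature choice, bin count, and learning rate are arbitrary assumptions.

    import numpy as np

    def normalized_color_features(rgb_image, bins=4):
        # Histogram of (r, g) chromaticities; dividing by the channel sum
        # discards overall brightness.
        img = np.asarray(rgb_image, dtype=float).reshape(-1, 3)
        chroma = img[:, :2] / (img.sum(axis=1, keepdims=True) + 1e-6)
        hist, _, _ = np.histogram2d(chroma[:, 0], chroma[:, 1],
                                    bins=bins, range=[[0, 1], [0, 1]])
        hist = hist.ravel()
        return hist / (hist.sum() + 1e-6)

    class SingleLayerPerceptron:
        """One linear unit per object; the predicted object is the arg-max."""

        def __init__(self, n_features, n_objects, lr=0.1):
            self.W = np.zeros((n_objects, n_features))
            self.b = np.zeros(n_objects)
            self.lr = lr

        def predict(self, x):
            return int(np.argmax(self.W @ x + self.b))

        def train_step(self, x, label):
            pred = self.predict(x)
            if pred != label:                  # classic perceptron update
                self.W[label] += self.lr * x
                self.b[label] += self.lr
                self.W[pred] -= self.lr * x
                self.b[pred] -= self.lr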


Computers & Graphics | 2000

A multi-layered architecture for sketch-based interaction within virtual environments

Oliver Bimber; L. Miguel Encarnação; André Stork

In this article, we describe a multi-layered architecture for sketch-based interaction within virtual environments. Our architecture consists of eight hierarchically arranged layers, which we describe by giving examples of how they are implemented and how they interact. Focusing on table-like projection systems (such as Virtual Tables or Responsive Workbenches) as human-centered output devices, we show examples of how to integrate parts or all of the architecture into existing domain-specific applications, rather than building new general-purpose sketch applications, to make sketching an integral part of the next-generation human-computer interface.


IEEE Transactions on Visualization and Computer Graphics | 2008

Real-Time Adaptive Radiometric Compensation

Anselm Grundhöfer; Oliver Bimber

Recent radiometric compensation techniques make it possible to project images onto colored and textured surfaces. This is realized with projector-camera systems by scanning the projection surface on a per-pixel basis. Using the captured information, a compensation image is calculated that neutralizes geometric distortions and color blending caused by the underlying surface. As a result, the brightness and the contrast of the input image are reduced compared to a conventional projection onto a white canvas. If the intensities of the input image are not adjusted, the compensation image can contain values that are outside the dynamic range of the projector. These lead to clipping errors and to visible artifacts on the surface. In this article, we present an algorithm that dynamically adjusts the content of the input images before radiometric compensation is carried out. This reduces the perceived visual artifacts while preserving as much luminance and contrast as possible. The algorithm is implemented entirely on the GPU and is the first of its kind to run in real time.
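As a strongly simplified, hypothetical stand-in for this adaptive step (the actual algorithm adapts locally, temporally, and with respect to perceived artifacts), the sketch below merely searches for a single global scale of the input image so that the subsequent compensation stays inside the projector's dynamic range.

    import numpy as np

    def adapt_input(desired, surface_gain, ambient, headroom=1.0):
        # The compensation (desired - ambient) / gain is displayable only if
        # desired <= headroom * gain + ambient at every pixel, so scale the
        # whole input down until that bound holds, trading brightness for
        # clipping-free compensation.
        gain = np.clip(surface_gain, 1e-6, None)
        feasible = headroom * gain + ambient
        scale = min(1.0, float(np.min(feasible / np.clip(desired, 1e-6, None))))
        return scale * desired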

Collaboration


Oliver Bimber's top co-authors and their affiliations.

Ramesh Raskar

Massachusetts Institute of Technology

Clemens Birklbauer

Johannes Kepler University of Linz

Alexander Koppelhuber

Johannes Kepler University of Linz

David C. Schedl

Johannes Kepler University of Linz

Dieter Schmalstieg

Graz University of Technology

André Stork

Technische Universität Darmstadt
