
Publication


Featured research published by Erich Bruns.


IEEE MultiMedia | 2007

Enabling Mobile Phones To Support Large-Scale Museum Guidance

Erich Bruns; Benjamin Brombach; Thomas Zeidler; Oliver Bimber

We present a museum guidance system called PhoneGuide that uses widespread camera-equipped mobile phones for on-device object recognition in combination with pervasive tracking. It also provides location- and object-aware multimedia content to museum visitors, and is scalable to cover a large number of museum objects.


Mobile and Ubiquitous Multimedia | 2005

PhoneGuide: museum guidance supported by on-device object recognition on mobile phones

Paul Föckler; Thomas Zeidler; Benjamin Brombach; Erich Bruns; Oliver Bimber

We present PhoneGuide -- an enhanced museum guidance system that uses camera-equipped mobile phones and on-device object recognition. Our main technical achievement is a simple and lightweight object recognition approach realized with single-layer perceptron neural networks. In contrast to related systems, which perform computationally intensive image processing tasks on remote servers, our intention is to carry out all computations directly on the phone. This ensures little or no network traffic and consequently reduces the cost of online time. Our laboratory experiments and field surveys have shown that photographed museum exhibits can be recognized with a probability of over 90%. We evaluated different feature sets to optimize the recognition rate and performance. Our experiments revealed that normalized color features are most effective for our method. Choosing such a feature set allows recognizing an object in under one second on up-to-date phones. The amount of data required for differentiating 50 objects from multiple perspectives is less than 6 KBytes.
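The classifier described above can be sketched roughly as follows. This is a minimal illustration, not the paper's implementation: the feature extraction, training data, and dimensions are all hypothetical, assuming normalized color values as features and one perceptron output unit per museum object.

```python
import numpy as np

rng = np.random.default_rng(0)

def normalize_colors(rgb):
    """Normalized color features: each channel divided by the channel sum,
    making the feature roughly invariant to overall brightness."""
    s = rgb.sum(axis=-1, keepdims=True)
    return rgb / np.maximum(s, 1e-9)

def train_perceptron(X, y, n_classes, epochs=50, lr=0.1):
    """Single-layer multiclass perceptron: update weights only on mistakes."""
    W = np.zeros((n_classes, X.shape[1]))
    b = np.zeros(n_classes)
    for _ in range(epochs):
        for x, label in zip(X, y):
            pred = int(np.argmax(W @ x + b))
            if pred != label:
                W[label] += lr * x
                b[label] += lr
                W[pred] -= lr * x
                b[pred] -= lr
    return W, b

def classify(W, b, x):
    return int(np.argmax(W @ x + b))

# Toy data: two "exhibits" with distinct dominant colors.
reddish = normalize_colors(rng.uniform(0, 1, (20, 3)) + [2, 0, 0])
bluish = normalize_colors(rng.uniform(0, 1, (20, 3)) + [0, 0, 2])
X = np.vstack([reddish, bluish])
y = np.array([0] * 20 + [1] * 20)

W, b = train_perceptron(X, y, n_classes=2)
```

A model of this size (a few weights per object) is consistent with the small memory footprint the abstract reports for differentiating 50 objects.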


International Conference on Computer Graphics and Interactive Techniques | 2007

Prakash: lighting aware motion capture using photosensing markers and multiplexed illuminators

Ramesh Raskar; Hideaki Nii; Bert deDecker; Yuki Hashimoto; Jay W. Summet; Dylan Moore; Yong Zhao; Jonathan Westhues; Paul H. Dietz; John C. Barnwell; Shree K. Nayar; Masahiko Inami; Philippe Bekaert; Michael Noland; Vlad Branzoi; Erich Bruns

In this paper, we present a high-speed optical motion capture method that can measure three-dimensional motion, orientation, and incident illumination at tagged points in a scene. We use tracking tags that work in natural lighting conditions and can be imperceptibly embedded in attire or other objects. Our system supports an unlimited number of tags in a scene, with each tag uniquely identified to eliminate marker reacquisition issues. Our tags also provide incident illumination data which can be used to match scene lighting when inserting synthetic elements. The technique is therefore ideal for on-set motion capture or real-time broadcasting of virtual sets. Unlike previous methods that employ high-speed cameras or scanning lasers, we capture the scene appearance using the simplest possible optical devices: a light-emitting diode (LED) with a passive binary mask as the transmitter and a photosensor as the receiver. We strategically place a set of optical transmitters to spatio-temporally encode the volume of interest. Photosensors attached to scene points demultiplex the coded optical signals from multiple transmitters, allowing us to compute not only receiver location and orientation but also their incident illumination and the reflectance of the surfaces to which the photosensors are attached. We use our untethered tag system, called Prakash, to demonstrate methods of adding special effects to captured videos that cannot be accomplished using pure vision techniques that rely on camera images.
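The spatio-temporal coding idea can be illustrated with a toy one-dimensional sketch. This is an assumption-laden simplification, not the actual Prakash protocol: here a masked LED flashes one binary pattern per frame, so a photosensor at a given position along the coded axis observes the bits of a Gray code for that position and can demultiplex them back into a location.

```python
# Toy 1-D illustration of spatio-temporally coded illumination.
# Hypothetical scheme: frame k projects bit k of a Gray code across
# the axis; a photosensor's observed bit sequence encodes its position.

N_BITS = 8  # 256 resolvable positions along the coded axis

def gray_encode(n: int) -> int:
    return n ^ (n >> 1)

def gray_decode(g: int) -> int:
    n = 0
    while g:
        n ^= g
        g >>= 1
    return n

def patterns_observed(position: int) -> list[int]:
    """Bits a photosensor at `position` sees over N_BITS frames (MSB first)."""
    g = gray_encode(position)
    return [(g >> k) & 1 for k in reversed(range(N_BITS))]

def demultiplex(bits: list[int]) -> int:
    """Recover the sensor position from its observed bit sequence."""
    g = 0
    for b in bits:
        g = (g << 1) | b
    return gray_decode(g)
```

A Gray code is a natural choice for this kind of sketch because adjacent positions differ in only one bit, which limits the effect of a single misread frame at a pattern boundary.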


Computer Vision and Pattern Recognition | 2010

Fast and robust CAMShift tracking

David Exner; Erich Bruns; Daniel Kurz; Anselm Grundhöfer; Oliver Bimber

CAMShift is a well-established and fundamental algorithm for kernel-based visual object tracking. While it performs well with objects that have a simple and constant appearance, it is not robust in more complex cases. As it relies solely on back-projected probabilities, it can fail when the object's appearance changes (e.g., due to object or camera movement, or due to lighting changes), when similarly colored objects have to be re-detected, or when they cross trajectories. We propose low-cost extensions to CAMShift that address and resolve all of these problems. They allow the accumulation of multiple histograms to model more complex object appearances and the continuous monitoring of object identities to handle ambiguous cases of partial or full occlusion. Most steps of our method are carried out on the GPU to achieve real-time tracking of multiple targets simultaneously. We explain efficient GPU implementations of histogram generation, probability back projection, computation of image moments, and histogram intersection. All of these techniques make full use of a GPU's high parallelization capabilities.
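Two of the building blocks named above, histogram back projection and histogram intersection, can be sketched in a few lines. This is a CPU/NumPy illustration of the standard operations, not the paper's GPU implementation; the toy frame and bin count are assumptions.

```python
import numpy as np

def histogram(img, bins=16):
    """Normalized histogram of an 8-bit single-channel image region."""
    h, _ = np.histogram(img, bins=bins, range=(0, 256))
    return h / max(h.sum(), 1)

def back_project(img, hist, bins=16):
    """Replace each pixel by the probability of its value under `hist`."""
    idx = np.clip((img.astype(int) * bins) // 256, 0, bins - 1)
    return hist[idx]

def hist_intersection(h1, h2):
    """Similarity in [0, 1]; usable to monitor object identity over time."""
    return np.minimum(h1, h2).sum()

# Toy example: a bright "object" patch in a dark frame.
frame = np.zeros((8, 8), dtype=np.uint8)
frame[2:5, 3:6] = 200
model = histogram(frame[2:5, 3:6])   # histogram of the object region
prob = back_project(frame, model)    # high where pixels match the model
```

CAMShift then iterates a mean-shift step on `prob`, using image moments of the window to re-center and re-size it each frame.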


IEEE Computer Graphics and Applications | 2008

Mobile Phone-Enabled Museum Guidance with Adaptive Classification

Erich Bruns; Benjamin Brombach; Oliver Bimber

We present an overview of our adaptive museum guidance system, PhoneGuide. It uses camera-equipped mobile phones for on-device object recognition in ad-hoc sensor networks and provides location- and object-aware multimedia content to museum visitors.


Ubiquitous Computing | 2009

Adaptive training of video sets for image recognition on mobile phones

Erich Bruns; Oliver Bimber

We present an enhancement towards adaptive video training for PhoneGuide, a digital museum guidance system for ordinary camera-equipped mobile phones. It enables museum visitors to identify exhibits by capturing photos of them. In this article, a combined solution of object recognition and pervasive tracking is extended to a client-server system for improving data acquisition and for supporting scale-invariant object recognition. Both a static and a dynamic training technique are presented; they preprocess the collected object data differently and apply two types of neural networks (NNs) for classification. Furthermore, the system enables a temporal adaptation to ensure continuous data acquisition and to improve the recognition rate over time. A formal field experiment reveals current recognition rates and indicates the practicability of both methods under realistic conditions in a museum.


IEEE Pervasive Computing | 2012

Localization and Classification through Adaptive Pathway Analysis

Erich Bruns; Oliver Bimber

Evaluating user-generated spatiotemporal pathway data helps determine both the present and future location of museum visitors. The PhoneGuide adaptive mobile museum guidance system shows how this approach improves classification performance and achieves acceptable recognition rates.


Advances in Mobile Multimedia | 2008

Phone-to-phone communication for adaptive image classification

Erich Bruns; Oliver Bimber

In this paper, we present a novel technique for adapting local image classifiers that are applied for object recognition on mobile phones through ad-hoc network communication between the devices. By continuously accumulating and exchanging collected user feedback among mobile phones that are located within signal range, we show that our approach improves the overall classification rate and adapts to dynamic changes quickly. This technique is applied in the context of PhoneGuide -- an adaptive museum guidance system.
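The accumulate-and-exchange idea can be sketched as merging per-object feedback counts between devices in signal range. The data layout and function names here are hypothetical; the paper's actual mechanism is ad-hoc network communication between phones, which this sketch abstracts away.

```python
from collections import Counter

# Each phone keeps per-object counts of confirmed ("ok") and rejected
# ("wrong") classifications gathered from user feedback; phones in range
# merge their counts so every device benefits from everyone's corrections.

def merge_feedback(local: dict, remote: dict) -> dict:
    merged = {}
    for obj in set(local) | set(remote):
        merged[obj] = Counter(local.get(obj, {})) + Counter(remote.get(obj, {}))
    return merged

def confidence(feedback: dict, obj: str) -> float:
    """Fraction of positive feedback for an object's classifier;
    0.5 when no feedback has been collected yet."""
    c = feedback.get(obj, Counter())
    total = c["ok"] + c["wrong"]
    return c["ok"] / total if total else 0.5

phone_a = {"statue": Counter(ok=3, wrong=1)}
phone_b = {"statue": Counter(ok=1, wrong=1), "vase": Counter(ok=2, wrong=0)}
shared = merge_feedback(phone_a, phone_b)
```

Because the merge is a simple sum of counts, it is order-independent, so repeated pairwise exchanges among many phones converge toward the same accumulated feedback.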


Intelligent User Interfaces | 2009

Subobject detection through spatial relationships on mobile phones

Benjamin Brombach; Erich Bruns; Oliver Bimber

We present a novel image classification technique for detecting multiple objects (called subobjects) in a single image. In addition to image classifiers, we apply spatial relationships among the subobjects to verify and to predict the locations of detected and undetected subobjects, respectively. By continuously refining the spatial relationships throughout the detection process, even locations of completely occluded exhibits can be determined. This approach is applied in the context of PhoneGuide, an adaptive museum guidance system for camera-equipped mobile phones. Laboratory tests as well as a field experiment reveal recognition rates and performance improvements when compared to related approaches.
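The use of spatial relationships to predict undetected subobjects can be sketched as follows. The relation store, offsets, and object names are purely illustrative assumptions; the paper additionally refines these relations continuously during detection, which this sketch omits.

```python
import numpy as np

# Illustrative sketch: pairwise spatial relations between subobjects are
# stored as 2-D image-space offsets; a detected subobject then predicts
# where an undetected (e.g. occluded) one should appear.

relations = {  # hypothetical learned mean offsets (dx, dy) in pixels
    ("painting", "plaque"): np.array([0.0, 120.0]),
}

def predict_location(detected, target, position):
    """Predict `target`'s position from `detected`'s position, using the
    stored relation in either direction; None if no relation is known."""
    if (detected, target) in relations:
        return position + relations[(detected, target)]
    if (target, detected) in relations:
        return position - relations[(target, detected)]
    return None

painting_at = np.array([320.0, 180.0])
plaque_pred = predict_location("painting", "plaque", painting_at)
```

Even a fully occluded subobject gets a location estimate this way, as the abstract notes, because the prediction needs only one detected neighbor and the stored relation.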


Multimedia and Ubiquitous Engineering | 2010

Mobile Museum Guidance Using Relational Multi-Image Classification

Erich Bruns; Oliver Bimber

In this paper, we present a multi-image classification technique for mobile phones that is supported by relational reasoning. Users capture a sequence of images with a simple near-far camera movement. After classifying distinct keyframes using a nearest-neighbor approach, the corresponding database images are only considered for a majority vote if they exhibit near-far inter-image relations similar to those of the captured keyframes. In the context of PhoneGuide, our adaptive mobile museum guidance system, a user study revealed that our multi-image classification technique leads to significantly higher classification rates than single-image classification. Furthermore, when near-far image relations are used, fewer keyframes are sufficient for classification. This increases the overall classification speed of our approach by up to 35%.
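The keyframe voting scheme can be sketched as follows. This is a toy illustration: the "near-far relation" is simplified here to a monotonically decreasing apparent scale across keyframes, which is an assumption, not the paper's exact relational measure.

```python
from collections import Counter

# Toy sketch of relational multi-image classification: each captured
# keyframe yields nearest-neighbor candidates (object id, apparent scale).
# A candidate object only votes if its scales across keyframes mirror the
# near-to-far camera movement, i.e. decrease over the sequence.

def classify_sequence(matches):
    """matches: one list of (object_id, scale) candidates per keyframe,
    keyframes ordered near-to-far."""
    votes = Counter()
    for obj in {o for frame in matches for o, _ in frame}:
        scales = []
        for frame in matches:
            cand = [s for o, s in frame if o == obj]
            if cand:
                scales.append(cand[0])
        # keep only objects consistent with the near-far relation
        if len(scales) >= 2 and all(a > b for a, b in zip(scales, scales[1:])):
            votes[obj] += len(scales)
    return votes.most_common(1)[0][0] if votes else None

seq = [
    [("vase", 1.0), ("urn", 0.4)],   # near keyframe
    [("vase", 0.7), ("urn", 0.8)],   # middle keyframe
    [("vase", 0.5)],                 # far keyframe
]
```

In this example "urn" is filtered out before voting because its matched scales grow while the camera moves away, which is exactly the pruning that lets fewer keyframes suffice.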

Collaboration

Dive into Erich Bruns's collaboration.

Top Co-Authors

Oliver Bimber (Johannes Kepler University of Linz)
Ramesh Raskar (Massachusetts Institute of Technology)
Bert deDecker (Mitsubishi Electric Research Laboratories)