Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniel Kurz is active.

Publication


Featured research published by Daniel Kurz.


Computer Vision and Pattern Recognition | 2010

Fast and robust CAMShift tracking

David Exner; Erich Bruns; Daniel Kurz; Anselm Grundhöfer; Oliver Bimber

CAMShift is a well-established and fundamental algorithm for kernel-based visual object tracking. While it performs well with objects that have a simple and constant appearance, it is not robust in more complex cases. As it relies solely on back-projected probabilities, it can fail when the object's appearance changes (e.g., due to object or camera movement, or due to lighting changes), when similarly colored objects have to be re-detected, or when they cross trajectories. We propose low-cost extensions to CAMShift that address and resolve all of these problems. They allow the accumulation of multiple histograms to model more complex object appearances and the continuous monitoring of object identities to handle ambiguous cases of partial or full occlusion. Most steps of our method are carried out on the GPU to achieve real-time tracking of multiple targets simultaneously. We explain efficient GPU implementations of histogram generation, probability back projection, computation of image moments, and histogram intersection. All of these techniques make full use of a GPU's high parallelization capabilities.
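
The histogram back projection and mean-shift window update that CAMShift builds on can be sketched in plain NumPy. This is a toy, single-channel version on a synthetic hue image; the paper's actual contributions (multiple histograms, identity monitoring, GPU implementation) are not reproduced here.

```python
import numpy as np

def hue_histogram(hue, box, bins=32):
    """Appearance model: a normalized hue histogram of the target region."""
    x, y, w, h = box
    hist, _ = np.histogram(hue[y:y+h, x:x+w], bins=bins, range=(0, 180))
    return hist / max(hist.max(), 1)

def back_project(hue, hist, bins=32):
    """Per-pixel probability of belonging to the target, looked up from
    the model histogram -- the image that CAMShift climbs."""
    idx = np.clip((hue.astype(int) * bins) // 180, 0, bins - 1)
    return hist[idx]

def mean_shift(prob, box, iters=10):
    """Shift the window to the centroid (first image moments) of the
    back-projected probabilities -- the core CAMShift update."""
    x, y, w, h = box
    for _ in range(iters):
        win = prob[y:y+h, x:x+w]
        m00 = win.sum()
        if m00 == 0:
            break
        ys, xs = np.mgrid[0:h, 0:w]
        cx = (win * xs).sum() / m00   # m10 / m00
        cy = (win * ys).sum() / m00   # m01 / m00
        x = int(round(x + cx - w / 2))
        y = int(round(y + cy - h / 2))
    return (x, y, w, h)

# Synthetic demo: a patch of hue ~10 on a hue-120 background moves
# between two frames; the window follows it.
bg = np.full((120, 160), 120, np.uint8)
frame1, frame2 = bg.copy(), bg.copy()
frame1[30:60, 40:70] = 10
frame2[50:80, 55:85] = 10

hist = hue_histogram(frame1, (40, 30, 30, 30))
window = mean_shift(back_project(frame2, hist), (40, 30, 30, 30))
print(window)
```

The failure mode the abstract describes is visible here: if the background shared the target's hue, the back-projected probabilities would no longer isolate the object, which is what the proposed multi-histogram extensions address.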


International Symposium on Mixed and Augmented Reality | 2007

Laser Pointer Tracking in Projector-Augmented Architectural Environments

Daniel Kurz; Ferry Hantsch; Max Grosse; Alexander Schiewe; Oliver Bimber

We present a system that employs a custom-built pan-tilt-zoom camera for laser pointer tracking in arbitrary real environments. Once placed in a room, it carries out a fully automatic self-registration, registrations of projectors, and sampling of surface parameters, such as geometry and reflectivity. After these steps, it can be used for tracking a laser spot on the surface as well as an LED marker in 3D space, using interplaying fish-eye context and controllable detail cameras. The captured surface information can be used for masking out areas that are problematic for laser pointer tracking, and for guiding geometric and radiometric image correction techniques that enable a projector-based augmentation on arbitrary surfaces. We describe a distributed software framework that couples laser pointer tracking for interaction, projector-based AR as well as video see-through AR for visualizations, with the domain-specific functionality of existing desktop tools for architectural planning, simulation, and building surveying.
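
A hypothetical sketch of the simplest building block, locating a laser spot as the centroid of near-saturated pixels in a camera image; the actual system additionally handles PTZ camera control, self-registration, and masking of problematic surface areas.

```python
import numpy as np

# Synthetic grayscale frame in [0, 1] with a small bright spot from the laser.
frame = np.zeros((100, 100))
frame[40:43, 60:63] = 1.0

# Threshold near the sensor's saturation level, then take the centroid of
# the remaining pixels to get a sub-pixel spot position.
ys, xs = np.nonzero(frame > 0.9)
spot = (xs.mean(), ys.mean())
print(spot)  # (61.0, 41.0)
```

In practice the threshold and a red-channel dominance test would be tuned to the laser's wavelength and the camera's exposure.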


IEEE Transactions on Visualization and Computer Graphics | 2011

Closed-Loop Feedback Illumination for Optical Inverse Tone-Mapping in Light Microscopy

Oliver Bimber; Daniel Kloeck; Toshiyuki Amano; Anselm Grundhoefer; Daniel Kurz

In this paper, we show that optical inverse tone-mapping (OITM) in light microscopy can improve the visibility of specimens, both when observed directly through the oculars and when imaged with a camera. In contrast to previous microscopy techniques, we premodulate the illumination based on the local modulation properties of the specimen itself. We explain how the modulation of uniform white light by a specimen can be estimated in real time, even though the specimen is continuously but not uniformly illuminated. This information is processed and back-projected constantly, allowing the illumination to be adjusted on the fly if the specimen is moved or the focus or magnification of the microscope is changed. The contrast of the specimen's optical image can be enhanced, and high-intensity highlights can be suppressed. A formal pilot study with users indicates that this optimizes the visibility of spatial structures when observed through the oculars. We also demonstrate that the signal-to-noise (S/N) ratio in digital images of the specimen is higher if captured under an optimized rather than a uniform illumination. In contrast to advanced scanning techniques that maximize the S/N ratio using multiple measurements, our approach is fast because it requires only two images. This can improve image analysis in digital microscopy applications with real-time capturing requirements.
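
The premodulation idea can be illustrated with a toy one-step simulation (hypothetical grayscale values in [0, 1]; the real system runs this loop continuously against live camera feedback rather than a known specimen):

```python
import numpy as np

specimen = np.array([[0.9, 0.1],
                     [0.5, 0.05]])  # unknown per-pixel modulation of the light

def capture(illum):
    """Camera model: the illumination is modulated by the specimen."""
    return specimen * illum

# Step 1: estimate the specimen's modulation from a uniform capture.
illum = np.ones_like(specimen)
modulation = capture(illum) / illum

# Step 2: premodulate -- attenuate illumination where the specimen is
# bright (suppressing highlights) and boost it where the specimen is
# dark, within the projector's dynamic range.
target = 0.5
illum = np.clip(target / np.maximum(modulation, 1e-3), 0.0, 1.0)
result = capture(illum)
print(result)
```

After one iteration, the bright regions are pulled down to the target level while dark regions receive the maximum illumination the projector can supply, which is the contrast-enhancing behavior the abstract describes.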


The Visual Computer | 2010

Color invariant chroma keying and color spill neutralization for dynamic scenes and cameras

Anselm Grundhöfer; Daniel Kurz; Sebastian Thiele; Oliver Bimber

In this article we show how temporal backdrops that alternately change their color rapidly at recording rate can aid chroma keying by transforming color spill into a neutral background illumination. Since the chosen colors sum up to white, the chromatic (color) spill component is neutralized when integrating over both backdrop states. The ability to separate both states additionally allows computing high-quality alpha mattes. Besides the neutralization of color spill, our method is invariant to foreground colors and supports applications with real-time demands. In this article, we explain different realizations of temporal backdrops and describe how keying and color spill neutralization are carried out, how artifacts resulting from rapid motion can be reduced, and how our approach can be implemented to be compatible with common real-time post-production pipelines.
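
A minimal sketch of the keying math, assuming a foreground that is static across one frame pair and two backdrop colors that sum to white (e.g., blue and yellow); values are linear RGB in [0, 1]:

```python
import numpy as np

c1 = np.array([0.0, 0.0, 1.0])   # backdrop state A (blue)
c2 = np.array([1.0, 1.0, 0.0])   # backdrop state B (yellow); c1 + c2 = white

def composite(fg, alpha, backdrop):
    """Standard over-compositing of a foreground onto the backdrop."""
    return alpha[..., None] * fg + (1 - alpha[..., None]) * backdrop

# Two consecutive recorded frames: foreground unchanged, backdrop flipped.
fg = np.zeros((2, 2, 3)); fg[0, 0] = [0.2, 0.8, 0.3]
alpha_true = np.zeros((2, 2)); alpha_true[0, 0] = 1.0
frame_a = composite(fg, alpha_true, c1)
frame_b = composite(fg, alpha_true, c2)

# Integrating over both states turns the colored backdrop into neutral
# gray, i.e., any color spill becomes achromatic.
neutral = 0.5 * (frame_a + frame_b)

# The per-pixel frame difference vanishes on the foreground and equals
# |c1 - c2| on the backdrop, which yields the alpha matte directly.
diff = np.abs(frame_a - frame_b).sum(axis=-1)
alpha = 1.0 - diff / np.abs(c1 - c2).sum()
print(alpha)
```

This is why the method is invariant to foreground colors: the matte comes from the temporal difference of the backdrop, not from any assumption about the foreground's chroma.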


International Conference on Computer Graphics and Interactive Techniques | 2009

Projected light microscopy

Oliver Bimber; Anselm Grundhöfer; Daniel Kurz; Sebastian Thiele; Ferry Hantsch; Toshiyuki Amano; Daniel Klöck

In light microscopy (or optical microscopy) visible light is either transmitted through or reflected from the specimen before it is observed or recorded. In its simplest imaging mode, bright field microscopy, the illumination light is modulated in intensity or color depending on the specimen's transmission or reflection properties before it enters the objective lens. The general drawbacks of this are limited resolution (which is constrained by the wavelength of visible light) and limited contrast. Several techniques exist for enhancing the contrast of light microscopes, such as dark field microscopy, phase contrast microscopy, (differential) interference contrast microscopy, or fluorescence microscopy. Most of them are applied to make otherwise invisible transparent objects, such as cells and other biological structures, visible. Specimens that are too thick for transmitting light, however, require reflected illumination, for which there are few alternatives for contrast enhancement.


International Symposium on Mixed and Augmented Reality | 2016

Leveraging the User's Face for Absolute Scale Estimation in Handheld Monocular SLAM

Sebastian Knorr; Daniel Kurz

We present an approach to estimate absolute scale in handheld monocular SLAM by simultaneously tracking the user's face with a user-facing camera while a world-facing camera captures the scene for localization and mapping. Given face tracking at absolute scale, two images of a face taken from two different viewpoints enable estimating the translational distance between the two viewpoints in absolute units, such as millimeters. Under the assumption that the face itself stayed stationary in the scene while taking the two images, the motion of the user-facing camera relative to the face can be transferred to the motion of the rigidly connected world-facing camera relative to the scene. This also allows determining the latter motion in absolute units and enables reconstructing and tracking the scene at absolute scale. As faces of different adult humans differ only moderately in terms of size, it is possible to rely on statistics for guessing the absolute dimensions of a face. For improved accuracy, the dimensions of the particular face of the user can be calibrated. Based on sequences of world-facing and user-facing images captured by a mobile phone, we show for different scenes how our approach enables reconstruction and tracking at absolute scale using a proof-of-concept implementation. Quantitative evaluations against ground truth data confirm that our approach provides absolute scale at an accuracy well suited for different applications. Particularly, we show how our method enables various use cases in handheld Augmented Reality applications that superimpose virtual objects at absolute scale or feature interactive distance measurements.
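
The core scale transfer reduces to a ratio of baselines; a minimal sketch with hypothetical pose values, assuming the face tracker reports the device's position relative to the (static) face in millimeters at the same two moments for which SLAM reports unscaled positions:

```python
import numpy as np

face_cam_pos_mm = {            # device pose relative to the user's face
    "t0": np.array([0.0, 0.0, 400.0]),
    "t1": np.array([120.0, 0.0, 400.0]),
}
slam_pos_units = {             # same two moments in SLAM's unscaled map frame
    "t0": np.array([0.00, 0.00, 0.00]),
    "t1": np.array([0.30, 0.00, 0.00]),
}

# Rigidly connected cameras undergo the same translation, so both
# baselines measure the same physical displacement of the device.
baseline_mm = np.linalg.norm(face_cam_pos_mm["t1"] - face_cam_pos_mm["t0"])
baseline_units = np.linalg.norm(slam_pos_units["t1"] - slam_pos_units["t0"])

scale = baseline_mm / baseline_units   # millimeters per SLAM map unit
print(scale)                           # 400.0
```

Multiplying all SLAM map points and poses by `scale` then yields a reconstruction in millimeters, which is what enables the absolute-scale AR use cases described above.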


Virtual Reality Software and Technology | 2008

Mutual occlusions on table-top displays in mixed reality applications

Daniel Kurz; Kiyoshi Kiyokawa; Haruo Takemura

This paper describes an approach to dealing with mutual occlusions between virtual and real objects on a table-top display. Display tables use stereoscopy to make virtual content appear to exist in three dimensions on or above a table top. The actual image, however, lies on the physical plane of the display table. Any real physical object introduced above this plane therefore obstructs our view of the display surface and disrupts the illusion of the virtual scene. The occlusions result between real objects and the display surface, not between real objects and virtual objects. For the same reason, virtual objects cannot occlude real ones. Our approach uses an additional projector located near the user's head to project those parts of virtual objects that should occlude real ones directly onto the real objects. We describe possible applications and limitations of the approach and its current implementation. Despite its limitations, we believe that the proposed approach can significantly improve interaction quality and performance for mixed reality scenarios.
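
The pixel routing behind this idea can be sketched as a per-pixel depth test from the user's viewpoint (hypothetical depth buffers; infinity marks "no surface at this pixel"):

```python
import numpy as np

# Depths from the user's viewpoint, in arbitrary units.
virt_depth = np.array([[0.8, 0.4],
                       [0.6, np.inf]])    # inf = no virtual surface here
real_depth = np.array([[0.5, 0.9],
                       [np.inf, np.inf]])  # inf = clear view of the table top

# Virtual-occludes-real: these virtual pixels must be projected by the
# head-mounted projector directly onto the real object in front.
project_mask = virt_depth < real_depth

# Real-occludes-virtual: the table-top display is blocked by the real
# object anyway, so these virtual pixels need no special handling.
display_mask = (virt_depth < np.inf) & ~project_mask
print(project_mask)
```

Only the `project_mask` pixels are sent to the additional projector; everything else is rendered on the table-top display as usual.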


International Symposium on Mixed and Augmented Reality | 2014

Workshop on tracking methods & applications

Jonathan Ventura; Daniel Wagner; Daniel Kurz; Harald Wuest; Selim Benhimane

The focus of this workshop is on all issues related to tracking for mixed and augmented reality applications. Unlike the tracking sessions of the main conference, this workshop does not require pure novelty of the proposed methods; it rather encourages presentations that concentrate on complete systems and integrated approaches engineered to run in real-world scenarios. The research fields covered include self-localization using computer vision or other sensing modalities (such as depth cameras, GPS, inertial, etc.) and tracking systems issues (such as system design, calibration, estimation, fusion, etc.). This year's focus is also expanded to research on object detection and semantic scene understanding with relevance to augmented reality. Implementations on mobile devices and under real-time constraints are also part of the workshop focus. These are issues of core importance for practical augmented reality systems.


Archive | 2012

METHOD OF PROVIDING IMAGE FEATURE DESCRIPTORS

Selim Benhimane; Daniel Kurz; Thomas Olszamowski


International Symposium on Mixed and Augmented Reality | 2014

Towards Mobile Augmented Reality for the Elderly

Daniel Kurz; Anton Fedosov; Stefan Diewald; Jorg Guttier; Barbara Geilhof; Matthias Heuberger

Collaboration


Dive into Daniel Kurz's collaborations.

Top Co-Authors

Oliver Bimber

Johannes Kepler University of Linz

Jonathan Ventura

University of Colorado Colorado Springs
