Publications


Featured research published by Daniel Cotting.


International Symposium on Mixed and Augmented Reality | 2004

Embedding imperceptible patterns into projected images for simultaneous acquisition and display

Daniel Cotting; Martin Naef; Markus H. Gross; Henry Fuchs

We introduce a method to imperceptibly embed arbitrary binary patterns into ordinary color images displayed by unmodified off-the-shelf digital light processing (DLP) projectors. The encoded images are visible only to cameras synchronized with the projectors and exposed for a short interval, while the original images appear only minimally degraded to the human eye. To achieve this goal, we analyze and exploit the micro-mirror modulation pattern used by the projection technology to generate intensity levels for each pixel and color channel. Our real-time embedding process maps the user's original color image values to the nearest values whose camera-perceived intensities are the ones desired by the binary image to be embedded. The color differences caused by this mapping process are compensated by error-diffusion dithering. The non-intrusive nature of our approach allows simultaneous (immersive) display and acquisition under controlled lighting conditions, as defined on a pixel level by the binary patterns. We therefore introduce structured light techniques into human-inhabited mixed and augmented reality environments, where they were previously often too intrusive.
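
As an illustration of the constrained quantization step, the sketch below embeds one bit per pixel into a single color channel by snapping each value to the nearest intensity level that encodes the desired bit and diffusing the residual error with Floyd-Steinberg dithering. The levels_for_bit table is a hypothetical stand-in for the device-specific DLP mirror-modulation analysis described in the paper.

```python
import numpy as np

def embed_binary_pattern(image, bits, levels_for_bit):
    """Embed a binary pattern into one grayscale channel by constrained
    quantization with Floyd-Steinberg error diffusion.

    image         : (H, W) float array in [0, 255], one color channel.
    bits          : (H, W) array of 0/1, the pattern the camera should see.
    levels_for_bit: dict {0: array of levels, 1: array of levels}, the
                    displayable intensities whose mirror state during the
                    camera exposure encodes that bit (hypothetical table).
    """
    out = image.astype(np.float64).copy()
    h, w = out.shape
    for y in range(h):
        for x in range(w):
            allowed = levels_for_bit[int(bits[y, x])]
            # pick the allowed level closest to the (error-adjusted) value
            chosen = allowed[np.argmin(np.abs(allowed - out[y, x]))]
            err = out[y, x] - chosen
            out[y, x] = chosen
            # Floyd-Steinberg error diffusion hides the induced color shift
            if x + 1 < w:
                out[y, x + 1] += err * 7 / 16
            if y + 1 < h:
                if x > 0:
                    out[y + 1, x - 1] += err * 3 / 16
                out[y + 1, x] += err * 5 / 16
                if x + 1 < w:
                    out[y + 1, x + 1] += err * 1 / 16
    return np.clip(out, 0, 255).astype(np.uint8)
```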


The Visual Computer | 2005

Scalable 3D video of dynamic scenes

Michael Waschbüsch; Stephan Würmlin; Daniel Cotting; Filip Sadlo; Markus H. Gross

In this paper we present a scalable 3D video framework for capturing and rendering dynamic scenes. The acquisition system is based on multiple sparsely placed 3D video bricks, each comprising a projector, two grayscale cameras, and a color camera. Relying on structured light with complementary patterns, texture images and pattern-augmented views of the scene are acquired simultaneously by time-multiplexed projections and synchronized camera exposures. Using space–time stereo on the acquired pattern images, high-quality depth maps are extracted, whose corresponding surface samples are merged into a view-independent, point-based 3D data structure. This representation allows for effective photo-consistency enforcement and outlier removal, leading to a significant decrease of visual artifacts and a high resulting rendering quality using EWA volume splatting. Our framework and its view-independent representation allow for simple and straightforward editing of 3D video. In order to demonstrate its flexibility, we show compositing techniques and spatiotemporal effects.
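
A minimal sketch of the merging step, assuming each brick provides a depth map together with pinhole intrinsics K and a camera-to-world pose (R, t); the space-time stereo, photo-consistency enforcement and EWA splatting stages are omitted, and all names are illustrative.

```python
import numpy as np

def depth_map_to_world_points(depth, K, R, t):
    """Back-project a per-camera depth map into world-space surface samples.

    depth : (H, W) array of depths along the camera z axis (0 = invalid).
    K     : (3, 3) pinhole intrinsics.
    R, t  : camera-to-world rotation (3, 3) and translation (3,).
    Returns an (N, 3) array of world-space points.
    """
    h, w = depth.shape
    u, v = np.meshgrid(np.arange(w), np.arange(h))
    valid = depth > 0
    pix = np.stack([u[valid], v[valid], np.ones(valid.sum())])  # homogeneous pixels, 3 x N
    rays = np.linalg.inv(K) @ pix                               # camera-space viewing rays
    cam_pts = rays * depth[valid]                               # scale rays by depth
    return (R @ cam_pts).T + t                                  # transform to world space

def merge_bricks(views):
    """Concatenate the samples of all 3D video bricks into one
    view-independent point set (outlier removal omitted)."""
    return np.vstack([depth_map_to_world_points(*v) for v in views])
```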


Proceedings of Shape Modeling Applications | 2004

Robust watermarking of point-sampled geometry

Daniel Cotting; Tim Weyrich; Mark Pauly; Markus H. Gross

We present a new scheme for digital watermarking of point-sampled geometry based on spectral analysis. By extending existing algorithms designed for polygonal data to unstructured point clouds, our method is particularly suited for scanned models, where the watermark can be directly embedded in the raw data obtained from the 3D acquisition device. To handle large data sets efficiently, we apply a fast hierarchical clustering algorithm that partitions the model into a set of patches. Each patch is mapped into the space of eigenfunctions of an approximate Laplacian operator to obtain a decomposition of the patch surface into discrete frequency bands. The watermark is then embedded into the low frequency components to minimize visual artifacts in the model geometry. During extraction, the target model is resampled at optimal resolution using an MLS projection. After extracting a watermark from this model, the corresponding bit stream is analyzed using statistical methods based on correlation. We have applied our method to a number of point-sampled models of different geometric and topological complexity. These experiments show that our watermarking scheme is robust against numerous attacks, including low-pass filtering, resampling, affine transformations, cropping, additive random noise, and combinations of the above.
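
A simplified sketch of the spectral embedding for a single patch, assuming a dense combinatorial Laplacian built from a k-nearest-neighbor graph; the paper's hierarchical clustering, MLS resampling and correlation-based extraction are omitted, and all parameter names are illustrative.

```python
import numpy as np
from scipy.spatial import cKDTree

def embed_watermark_patch(points, bits, k=8, strength=0.01):
    """Embed watermark bits into the low-frequency spectral coefficients
    of one point-cloud patch (dense-Laplacian sketch for small patches).

    points   : (N, 3) patch coordinates.
    bits     : 1D array of 0/1 watermark bits, len(bits) < N.
    k        : number of nearest neighbors used for the graph Laplacian.
    strength : modulation amplitude added to the spectral coefficients.
    """
    n = len(points)
    # k-nearest-neighbor adjacency approximates surface connectivity
    _, idx = cKDTree(points).query(points, k=k + 1)
    W = np.zeros((n, n))
    for i, nbrs in enumerate(idx[:, 1:]):      # skip self (first neighbor)
        W[i, nbrs] = 1.0
    W = np.maximum(W, W.T)                     # symmetrize the graph
    L = np.diag(W.sum(axis=1)) - W             # combinatorial graph Laplacian
    _, U = np.linalg.eigh(L)                   # eigenvectors, low frequencies first
    coeffs = U.T @ points                      # spectral coefficients, (N, 3)
    # modulate low-frequency bands, skipping the DC component at index 0
    signs = 2.0 * np.asarray(bits) - 1.0
    coeffs[1:1 + len(bits)] += strength * signs[:, None]
    return U @ coeffs                          # watermarked patch coordinates
```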


User Interface Software and Technology | 2006

Interactive environment-aware display bubbles

Daniel Cotting; Markus H. Gross

We present a novel display metaphor which extends traditional tabletop projections in collaborative environments by introducing freeform, environment-aware display representations and a matching set of interaction schemes. For that purpose, we map personalized widgets or ordinary computer applications that have been designed for a conventional, rectangular layout into space-efficient bubbles whose warping is performed with a potential-based physics approach. With a set of interaction operators based on laser pointer tracking, these freeform displays can be transformed and elastically deformed using focus-and-context visualization techniques. We also provide interactive operations for instantiating, cloning, cutting & pasting, deleting and grouping bubbles, and we allow for user-drawn annotations and text entry using a projected keyboard. Additionally, an optional environment-aware adaptivity of the displays is achieved by imperceptible, real-time scanning of the projection geometry. Collision responses of the bubbles with non-optimal surface parts are then computed in a rigid-body simulation. The extraction of the projection surface properties runs concurrently with the main application of the system. Our approach is entirely based on off-the-shelf, low-cost hardware including DLP projectors and FireWire cameras.
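
A toy sketch of the environment-aware collision response, assuming the scanned projection geometry has been reduced to a binary mask of non-projectable pixels and the bubble is approximated by a circle; the paper's potential-based warping and rigid-body simulation are far richer than this relaxation loop, and all names are illustrative.

```python
import numpy as np

def repel_bubble(center, radius, bad_mask, step=0.5, iters=50):
    """Push a circular display bubble out of non-projectable regions.

    center   : (x, y) bubble center in pixel coordinates.
    radius   : bubble radius in pixels.
    bad_mask : (H, W) bool array, True where projection is undesirable.
    Returns the adjusted center after a simple potential-based relaxation.
    """
    ys, xs = np.nonzero(bad_mask)
    obstacles = np.stack([xs, ys], axis=1).astype(float)
    c = np.array(center, dtype=float)
    for _ in range(iters):
        d = obstacles - c
        dist = np.linalg.norm(d, axis=1)
        inside = dist < radius
        if not inside.any():
            break  # bubble no longer overlaps any bad surface part
        # sum of repulsive forces from the overlapping obstacle pixels
        force = -(d[inside] / (dist[inside, None] + 1e-6)).sum(axis=0)
        c += step * force / inside.sum()
    return c
```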


Computer Graphics Forum | 2005

Adaptive Instant Displays: Continuously Calibrated Projections Using Per-Pixel Light Control

Daniel Cotting; Remo Ziegler; Markus H. Gross; Henry Fuchs

We present a framework for achieving user-defined on-demand displays in setups containing bricks of movable cameras and DLP-projectors. A dynamic calibration procedure is introduced, which handles cameras and projectors in a unified way and allows continuous flexible setup changes, while seamless projection alignment and blending is performed simultaneously. For interaction, an intuitive laser pointer based technique is developed, which can be combined with real-time 3D information acquired from the scene. All these tasks can be performed concurrently with the display of a user-chosen application in a non-disturbing way. This is achieved by using an imperceptible structured light approach enabling pixel-based surface light control suited for a wide range of computer graphics and vision algorithms. To ensure scalability of light control in the same working space, multiple projectors are multiplexed.
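
As a sketch of one ingredient of such a calibration, the function below estimates a projector-to-camera homography from point correspondences (for example, decoded from the imperceptibly embedded patterns) with a basic direct linear transform; the paper's unified, continuously running calibration of full camera/projector bricks goes well beyond this planar case.

```python
import numpy as np

def estimate_homography(proj_pts, cam_pts):
    """Estimate the 3x3 homography mapping projector pixels to camera pixels
    from point correspondences (direct linear transform, no normalization).

    proj_pts, cam_pts : (N, 2) arrays of corresponding pixel coordinates,
                        N >= 4, e.g. decoded from embedded structured light.
    """
    A = []
    for (x, y), (u, v) in zip(proj_pts, cam_pts):
        # each correspondence contributes two linear constraints on H
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, Vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = Vt[-1].reshape(3, 3)      # null-space vector gives the homography
    return H / H[2, 2]
```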


Signal Processing: Image Communication | 2007

Point-sampled 3D video of real-world scenes

Michael Waschbüsch; Stephan Würmlin; Daniel Cotting; Markus H. Gross

This paper presents a point-sampled approach for capturing 3D video footage and subsequent re-rendering of real-world scenes. The acquisition system is composed of multiple sparsely placed 3D video bricks. The bricks contain a low-cost projector, two grayscale cameras and a high-resolution color camera. To improve on depth calculation we rely on structured light patterns. Texture images and pattern-augmented views of the scene are acquired simultaneously by time multiplexed projections of complementary patterns and synchronized camera exposures. High-resolution depth maps are extracted using depth-from-stereo algorithms performed on the acquired pattern images. The surface samples corresponding to the depth values are merged into a view-independent, point-based 3D data structure. This representation allows for efficient post-processing algorithms and leads to a high resulting rendering quality using enhanced probabilistic EWA volume splatting. In this paper, we focus on the 3D video acquisition system and necessary image and video processing techniques.
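
A minimal depth-from-stereo sketch over rectified pattern images using brute-force block matching with a sum-of-squared-differences cost; the paper's space-time stereo and probabilistic refinement are not reproduced here, and the parameters are illustrative.

```python
import numpy as np

def disparity_map(left, right, max_disp=64, block=7):
    """Brute-force block-matching stereo on rectified grayscale pattern images.

    left, right : (H, W) float arrays, rectified so matches lie on the same row.
    Returns an (H, W) integer disparity map (0 where no window fits).
    Depth follows as focal_length * baseline / disparity for nonzero values.
    """
    h, w = left.shape
    r = block // 2
    disp = np.zeros((h, w), dtype=np.int32)
    for y in range(r, h - r):
        for x in range(r, w - r):
            patch = left[y - r:y + r + 1, x - r:x + r + 1]
            best, best_d = np.inf, 0
            for d in range(min(max_disp, x - r) + 1):
                cand = right[y - r:y + r + 1, x - d - r:x - d + r + 1]
                cost = np.sum((patch - cand) ** 2)   # SSD matching cost
                if cost < best:
                    best, best_d = cost, d
            disp[y, x] = best_d
    return disp
```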


Computer Graphics Forum | 2007

Interactive Visual Workspaces with Dynamic Foveal Areas and Adaptive Composite Interfaces

Daniel Cotting; Markus H. Gross

This paper presents novel techniques and metaphors for on‐demand visual workspaces in everyday office environments, providing space‐efficient, flexible and highly interactive graphical user interfaces using projected displays. For increased resolution, contents personalization and interactive visualization, the users can augment the large‐scale projections with dynamic high‐resolution foveal enhancements using a pocket light metaphor. To further optimize the presentation at a given resolution, the design of the displays can be modified interactively, and like a jigsaw puzzle, the layout can be customized using an adaptive compositing approach which supports free‐form focus‐and‐context rendering. With a unified intensity‐based tracking approach, we allow for natural multi‐touch interaction with the information space through bare hands, pointers and pens on arbitrary surfaces.
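
A minimal sketch of an intensity-based blob detector of the kind such tracking could build on, assuming a static background image of the surface; the paper's unified tracking of hands, pointers and pens is considerably more involved, and all thresholds here are illustrative.

```python
import numpy as np
from scipy import ndimage

def detect_touches(frame, background, thresh=30, min_area=20):
    """Detect touch/pointer blobs in a camera frame by intensity
    differencing against a background image of the projection surface.

    frame, background : (H, W) uint8 grayscale images.
    Returns a list of (x, y) blob centroids in pixel coordinates.
    """
    diff = np.abs(frame.astype(int) - background.astype(int))
    mask = diff > thresh                     # pixels that changed noticeably
    labels, n = ndimage.label(mask)          # connected-component labeling
    centroids = []
    for i in range(1, n + 1):
        ys, xs = np.nonzero(labels == i)
        if len(xs) >= min_area:              # ignore small noise blobs
            centroids.append((xs.mean(), ys.mean()))
    return centroids
```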


Eurographics Workshop on Parallel Graphics and Visualization | 2006

WinSGL: software genlocking for cost-effective display synchronization under microsoft windows

Michael Waschbüsch; Daniel Cotting; M. Duller; Markus H. Gross

This paper presents the first software genlocking approach for unmodified Microsoft Windows systems, requiring no specialized graphics boards but only a low-cost signal generator as additional hardware. Compared to existing solutions for other operating systems, it does not rely on any real-time extensions or kernel modifications. Its novel design can be divided into two parts: first, an external synchronization signal is transmitted over interrupt lines to a dedicated driver; second, a user-space application performs the synchronization by inserting or removing lines in the invisible part of the image. Robustness to potential frame losses is achieved through continuous, consistent timestamping. Tests yield an accuracy of up to ±½ line deviation from the external signal and a low CPU load of 2% on current PC systems. Our system has been designed to be compatible with off-the-shelf graphics hardware and digital output devices based on LCD or DLP technology. Our solution can be employed to build cost-effective VR installations, such as large tiled and spatially immersive displays, using commodity PC clusters.
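
As an illustration of the line-insertion idea, the sketch below computes a bounded per-frame scanline adjustment from the phase error between the latest local vertical sync and the external pulse; how the adjustment is applied to the graphics output (driver and timing-tool specifics) is not shown, and all parameters are assumptions.

```python
def line_correction(local_vsync_ts, external_ts, frame_period, total_lines,
                    max_step=2):
    """Compute how many scanlines to add to (positive) or remove from
    (negative) the invisible part of the next frame so that the local
    vertical sync drifts toward the external genlock pulse.

    local_vsync_ts, external_ts : timestamps in seconds of the latest local
                                  vsync and external synchronization pulse.
    frame_period : nominal frame duration in seconds (e.g. 1/60).
    total_lines  : total number of scanlines per frame, visible + blanking.
    max_step     : cap on the per-frame adjustment to keep changes smooth.
    """
    line_period = frame_period / total_lines
    # phase error wrapped into (-frame_period/2, +frame_period/2]
    err = (local_vsync_ts - external_ts + frame_period / 2) % frame_period
    err -= frame_period / 2
    # a positive error means the local vsync fires late, so the next frame
    # must be shortened (lines removed) to catch up, and vice versa
    lines = -err / line_period
    return int(max(-max_step, min(max_step, round(lines))))
```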


VISSYM '02: Proceedings of the Symposium on Data Visualisation 2002 | 2002

Visualization of large web access data sets

Ming C. Hao; Pankaj K. Garg; Umeshwar Dayal; Vijay Machiraju; Daniel Cotting

Many real-world e-service applications require analyzing large volumes of transaction data to extract web access information. This paper describes Web Access Visualization (WAV), a system that visually associates the affinities and relationships of clients and URLs for large volumes of web transaction data. To date, many practical research projects have shown the usefulness of a physics-based mass-spring technique for laying out data items with close relationships onto a graph. The WAV system: (1) maps transaction data items (clients, URLs) and their relationships to vertices, edges, and positions on a 3D spherical surface; (2) encapsulates a physics-based engine in a visual data analysis platform; and (3) employs various content-sensitive visual techniques - linked multiple views, layered drill-down, and fade in/out - for interactive data analysis. We have applied this system to a web application to analyze web access patterns and trends. Web service quality has benefited greatly from the information provided by WAV.
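
A minimal sketch of a physics-based layout in the spirit described above: items attract along relationship edges, all pairs repel, and positions are re-projected onto a unit sphere each iteration. The function and its constants are illustrative stand-ins, not the WAV engine itself.

```python
import numpy as np

def spherical_spring_layout(n, edges, iters=200, k_attract=0.05, k_repel=0.01):
    """Lay out n items (clients and URLs) on a unit sphere with a simple
    mass-spring relaxation.

    edges : iterable of (i, j) index pairs for related items.
    Returns an (n, 3) array of unit-length positions.
    """
    rng = np.random.default_rng(0)
    pos = rng.normal(size=(n, 3))
    pos /= np.linalg.norm(pos, axis=1, keepdims=True)
    edges = np.asarray(list(edges), dtype=int).reshape(-1, 2)
    for _ in range(iters):
        force = np.zeros_like(pos)
        # spring attraction along relationship edges
        delta = pos[edges[:, 1]] - pos[edges[:, 0]]
        np.add.at(force, edges[:, 0], k_attract * delta)
        np.add.at(force, edges[:, 1], -k_attract * delta)
        # pairwise repulsion keeps unrelated items apart
        diff = pos[:, None, :] - pos[None, :, :]
        dist2 = (diff ** 2).sum(-1) + 1e-6
        force += k_repel * (diff / dist2[..., None]).sum(axis=1)
        pos += force
        pos /= np.linalg.norm(pos, axis=1, keepdims=True)  # stay on the sphere
    return pos
```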


Parallel Computing | 2007

WinSGL: synchronizing displays in parallel graphics using cost-effective software genlocking

Daniel Cotting; Michael Waschbüsch; M. Duller; Markus H. Gross

This article presents a software genlocking approach for unmodified Microsoft Windows systems, requiring no specialized graphics boards but only a low-cost signal generator as additional hardware. Compared to existing solutions for other operating systems, it does not rely on any real-time extensions or kernel modifications. Its novel design can be divided into two parts: first, a dedicated driver reads an external synchronization signal via interrupt lines; second, a user-space application performs the synchronization using EnTech PowerStrip [EnTech Taiwan, PowerStrip Version 3.61, http://www.entechtaiwan.net/util/ps.shtm] by inserting or removing lines in the invisible part of the images output by the graphics board. Robustness to potential frame losses is achieved through continuous, consistent timestamping. Tests yield an accuracy of up to ±½ line deviation from the external signal and a low CPU load of 2% on current PC systems. Our system has been designed to be compatible with off-the-shelf graphics hardware and digital output devices based on LCD or DLP technology. Our solution can be employed to build cost-effective VR installations, such as large tiled displays and spatially immersive environments, using commodity PC clusters. Furthermore, displays synchronized to camera acquisitions allow for novel and convenient systems in the area of 3D scanning and smart adaptive displays, two application areas we present in the second part of this article.
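
As a small illustration of consistent timestamping, the helper below estimates how many frame periods elapsed between two synchronization timestamps so that occasionally lost frames or interrupts do not desynchronize the phase bookkeeping; it is an assumption-laden sketch, not the actual WinSGL driver logic.

```python
def frames_elapsed(prev_ts, curr_ts, frame_period, tolerance=0.25):
    """Estimate how many frame periods passed between two sync timestamps,
    keeping the synchronization state consistent across occasional losses.

    prev_ts, curr_ts : consecutive timestamps in seconds.
    frame_period     : nominal frame duration in seconds.
    tolerance        : maximum accepted fractional deviation before the
                       interval is flagged as unreliable.
    Returns (n_frames, reliable).
    """
    ratio = (curr_ts - prev_ts) / frame_period
    n = max(1, round(ratio))                 # nearest whole number of periods
    reliable = abs(ratio - n) <= tolerance   # large residuals indicate jitter
    return n, reliable
```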

Collaboration


Dive into Daniel Cotting's collaborations.

Top Co-Authors

Henry Fuchs

University of North Carolina at Chapel Hill

Markus Gross

ETH Zurich
