Publication


Featured research published by Michael Wand.


ieee visualization | 2002

Interactive rendering of large volume data sets

Stefan Guthe; Michael Wand; J. Gonser; Wolfgang Strasser

We present a new algorithm for rendering very large volume data sets at interactive frame rates on standard PC hardware. The algorithm accepts scalar data sampled on a regular grid as input. The input data is converted into a compressed hierarchical wavelet representation in a preprocessing step. During rendering, the wavelet representation is decompressed on-the-fly and rendered using hardware texture mapping. The level of detail used for rendering is adapted to the local frequency spectrum of the data and its position relative to the viewer. Using a prototype implementation of the algorithm we were able to perform an interactive walkthrough of large data sets such as the visible human on a single off-the-shelf PC.
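
The core pipeline is a compressed multi-resolution hierarchy plus view-dependent level-of-detail selection. The toy Python sketch below illustrates only that idea, using a simple mean pyramid as a stand-in for the paper's wavelet codec and a distance-based rule for picking the decompression level; all names and thresholds are hypothetical.

```python
import numpy as np

def build_pyramid(volume, levels=3):
    """Coarse stand-in for the paper's wavelet hierarchy: each level halves the
    resolution by averaging 2x2x2 blocks (illustrative, not the authors' codec)."""
    pyramid = [volume.astype(np.float32)]
    for _ in range(levels):
        v = pyramid[-1]
        v = np.pad(v, [(0, d % 2) for d in v.shape], mode="edge")  # make dims even
        v = v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2,
                      v.shape[2] // 2, 2).mean(axis=(1, 3, 5))
        pyramid.append(v)
    return pyramid

def pick_level(block_center, viewer_pos, base_distance=64.0, max_level=3):
    """View-dependent LOD rule: doubling the distance picks one coarser level."""
    d = np.linalg.norm(np.asarray(block_center) - np.asarray(viewer_pos))
    return int(np.clip(np.log2(max(d, 1e-6) / base_distance), 0, max_level))

# toy usage
vol = np.random.rand(64, 64, 64)
pyr = build_pyramid(vol)
print(pick_level((32, 32, 32), (0, 0, 0)))    # nearby block -> fine level
print(pick_level((32, 32, 32), (0, 0, 500)))  # distant block -> coarse level
```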


international conference on computer graphics and interactive techniques | 2011

Pattern-aware Shape Deformation Using Sliding Dockers

Martin Bokeloh; Michael Wand; Vladlen Koltun; Hans-Peter Seidel

This paper introduces a new structure-aware shape deformation technique. The key idea is to detect continuous and discrete regular patterns and ensure that these patterns are preserved during free-...


international conference on computer graphics and interactive techniques | 2010

A connection between partial symmetry and inverse procedural modeling

Martin Bokeloh; Michael Wand; Hans-Peter Seidel

In this paper, we address the problem of inverse procedural modeling: Given a piece of exemplar 3D geometry, we would like to find a set of rules that describe objects that are similar to the exemplar. We consider local similarity, i.e., each local neighborhood of the newly created object must match some local neighborhood of the exemplar. We show that we can find explicit shape modification rules that guarantee strict local similarity by looking at the structure of the partial symmetries of the object. By cutting the object into pieces along curves within symmetric areas, we can build shape operations that maintain local similarity by construction. We systematically collect such editing operations and analyze their dependency to build a shape grammar. We discuss how to extract general rewriting systems, context free hierarchical rules, and grid-based rules. All of this information is derived directly from the model, without user interaction. The extracted rules are then used to implement tools for semi-automatic shape modeling by example, which are demonstrated on a number of different example data sets. Overall, our paper provides a concise theoretical and practical framework for inverse procedural modeling of 3D objects.


european conference on computer vision | 2016

Precomputed Real-Time Texture Synthesis with Markovian Generative Adversarial Networks

Chuan Li; Michael Wand

This paper proposes Markovian Generative Adversarial Networks (MGANs), a method for training generative networks for efficient texture synthesis. While deep neural network approaches have recently demonstrated remarkable results in terms of synthesis quality, they still come at considerable computational costs (minutes of run-time for low-res images). Our paper addresses this efficiency issue. Instead of the numerical deconvolution used in previous work, we precompute a feed-forward, strided convolutional network that captures the feature statistics of Markovian patches and is able to directly generate outputs of arbitrary dimensions. Such a network can directly decode brown noise to realistic texture, or photos to artistic paintings. With adversarial training, we obtain quality comparable to recent neural texture synthesis methods. As no optimization is required at generation time, our run-time performance (0.25 M pixel images at 25 Hz) surpasses previous neural texture synthesizers by a significant margin (at least 500 times faster). We apply this idea to texture synthesis, style transfer, and video stylization.
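
The key architectural point is that the generator is fully convolutional, so one feed-forward pass can decode a noise map of any spatial size into a texture. The PyTorch sketch below illustrates only that property with a small, hypothetical three-layer decoder; the layer sizes and training loop are not the paper's.

```python
import torch
import torch.nn as nn

class TextureGenerator(nn.Module):
    """Minimal fully-convolutional decoder in the spirit of MGANs (layer sizes
    are illustrative). Because it contains no fully connected layers, it can
    decode noise maps of arbitrary spatial size into textures."""
    def __init__(self, noise_channels=8):
        super().__init__()
        self.net = nn.Sequential(
            nn.ConvTranspose2d(noise_channels, 128, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(128), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(128, 64, kernel_size=4, stride=2, padding=1),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, kernel_size=4, stride=2, padding=1),
            nn.Tanh(),  # RGB output in [-1, 1]
        )

    def forward(self, z):
        return self.net(z)

# one feed-forward pass: a 32x32 noise map decodes to a 256x256 texture
g = TextureGenerator()
noise = torch.randn(1, 8, 32, 32)
print(g(noise).shape)  # torch.Size([1, 3, 256, 256])
```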


eurographics | 2013

Symmetry in 3D Geometry: Extraction and Applications

Niloy J. Mitra; Mark Pauly; Michael Wand; Duygu Ceylan

The concept of symmetry has received significant attention in computer graphics and computer vision research in recent years. Numerous methods have been proposed to find, extract, encode and exploit geometric symmetries and high‐level structural information for a wide variety of geometry processing tasks. This report surveys and classifies recent developments in symmetry detection. We focus on elucidating the key similarities and differences between existing methods to gain a better understanding of a fundamental problem in digital geometry processing and shape understanding in general. We discuss a variety of applications in computer graphics and geometry processing that benefit from symmetry information for more effective processing. An analysis of the strengths and limitations of existing algorithms highlights the plenitude of opportunities for future research both in terms of theory and applications.


international conference on computer graphics and interactive techniques | 2001

The randomized z-buffer algorithm: interactive rendering of highly complex scenes

Michael Wand; Matthias Fischer; Ingmar Peter; Friedhelm Meyer auf der Heide; Wolfgang Straßer

We present a new output-sensitive rendering algorithm, the randomized z-buffer algorithm. It renders an image of an arbitrary three-dimensional scene consisting of triangular primitives by reconstruction from a dynamically chosen set of random surface sample points. This approach is independent of mesh connectivity and topology. The resulting rendering time grows only logarithmically with the number of triangles in the scene. We were able to render walkthroughs of scenes of up to 10^14 triangles at interactive frame rates. Automatic identification of low detail scene components ensures that the rendering speed of the randomized z-buffer cannot drop below that of conventional z-buffer rendering. Experimental and analytical evidence is given that the image quality is comparable to that of common approaches like z-buffer rendering. The precomputed data structures employed by the randomized z-buffer allow for interactive dynamic updates of the scene. Their memory requirements grow only linearly with the number of triangles and allow for a scene graph based instantiation scheme to further reduce memory consumption.
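
The central mechanism is reconstructing the image from random surface samples rather than rasterizing every triangle. The NumPy sketch below shows a heavily simplified orthographic variant: it samples points proportionally to triangle area (the paper uses projected area and spatial hierarchies) and splats them with a per-pixel depth test.

```python
import numpy as np

def randomized_zbuffer(triangles, num_samples=200_000, res=256):
    """Toy orthographic variant of the randomized z-buffer idea: sample random
    surface points proportionally to triangle area and splat them with a depth
    test. Illustrative only."""
    tris = np.asarray(triangles, dtype=np.float64)     # (N, 3, 3) xyz vertices
    e1, e2 = tris[:, 1] - tris[:, 0], tris[:, 2] - tris[:, 0]
    areas = 0.5 * np.linalg.norm(np.cross(e1, e2), axis=1)
    probs = areas / areas.sum()

    idx = np.random.choice(len(tris), size=num_samples, p=probs)
    u, v = np.random.rand(num_samples), np.random.rand(num_samples)
    flip = u + v > 1.0
    u[flip], v[flip] = 1.0 - u[flip], 1.0 - v[flip]    # uniform barycentric samples
    pts = tris[idx, 0] + u[:, None] * e1[idx] + v[:, None] * e2[idx]

    # orthographic projection of x,y into pixel coordinates, z-test per pixel
    px = np.clip(((pts[:, 0] + 1) * 0.5 * res).astype(int), 0, res - 1)
    py = np.clip(((pts[:, 1] + 1) * 0.5 * res).astype(int), 0, res - 1)
    zbuf = np.full((res, res), np.inf)
    np.minimum.at(zbuf, (py, px), pts[:, 2])
    return zbuf

# toy usage: two overlapping triangles in [-1, 1]^2
tris = [[[-0.8, -0.8, 0.5], [0.8, -0.8, 0.5], [0.0, 0.8, 0.5]],
        [[-0.5, -0.5, 0.2], [0.5, -0.5, 0.2], [0.0, 0.5, 0.2]]]
print(np.isfinite(randomized_zbuffer(tris)).mean())    # fraction of covered pixels
```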


computer vision and pattern recognition | 2009

Markerless Motion Capture with unsynchronized moving cameras

Nils Hasler; Bodo Rosenhahn; Thorsten Thormählen; Michael Wand; Juergen Gall; Hans-Peter Seidel

In this work we present an approach for markerless motion capture (MoCap) of articulated objects, which are recorded with multiple unsynchronized moving cameras. Instead of using fixed (and expensive) hardware synchronized cameras, this approach allows us to track people with off-the-shelf handheld video cameras. To prepare a sequence for motion capture, we first reconstruct the static background and the position of each camera using Structure-from-Motion (SfM). Then the cameras are registered to each other using the reconstructed static background geometry. Camera synchronization is achieved via the audio streams recorded by the cameras in parallel. Finally, a markerless MoCap approach is applied to recover positions and joint configurations of subjects. Feature tracks and dense background geometry are further used to stabilize the MoCap. The experiments show examples with highly challenging indoor and outdoor scenes.
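
One step that is easy to isolate is the audio-based synchronization: the time offset between two cameras can be estimated from the peak of the cross-correlation of their audio tracks. The NumPy sketch below shows that idea under simplifying assumptions (mono signals, identical sample rates, no clock drift); it is not the authors' implementation.

```python
import numpy as np

def audio_offset(ref_audio, other_audio, sample_rate):
    """Estimate the time offset between two recordings from the peak of their
    cross-correlation (simplified audio synchronization: mono, equal sample
    rates, no drift)."""
    a = (ref_audio - ref_audio.mean()) / (ref_audio.std() + 1e-9)
    b = (other_audio - other_audio.mean()) / (other_audio.std() + 1e-9)
    corr = np.correlate(a, b, mode="full")
    lag = np.argmax(corr) - (len(b) - 1)  # positive: `other` started later than `ref`
    return lag / sample_rate              # offset in seconds

# toy usage: broadband noise stands in for the audio; camera 2 starts 0.25 s late
rate = 8_000
track = np.random.randn(rate)            # 1 s of "audio"
delay = int(0.25 * rate)
print(audio_offset(track, track[delay:], rate))   # ~0.25
```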


computer vision and pattern recognition | 2010

Optimal HDR reconstruction with linear digital cameras

Miguel Granados; Boris Ajdin; Michael Wand; Christian Theobalt; Hans-Peter Seidel; Hendrik P. A. Lensch

Given a multi-exposure sequence of a scene, our aim is to recover the absolute irradiance falling onto a linear camera sensor. The established approach is to perform a weighted average of the scaled input exposures. However, there is no clear consensus on the appropriate weighting to use. We propose a weighting function that produces statistically optimal estimates under the assumption of compound-Gaussian noise. Our weighting is based on a calibrated camera model that accounts for all noise sources. This model also allows us to simultaneously estimate the irradiance and its uncertainty. We evaluate our method on simulated and real world photographs, and show that we consistently improve the signal-to-noise ratio over previous approaches. Finally, we show the effectiveness of our model for optimal exposure sequence selection and HDR image denoising.
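
At its core the estimator is a per-pixel weighted average of the scaled exposures, with inverse-variance weights derived from a camera noise model. The sketch below uses a simple shot-noise-plus-read-noise model as a stand-in for the paper's fully calibrated model; the gain, read noise, and saturation values are hypothetical.

```python
import numpy as np

def hdr_estimate(images, exposure_times, gain=1.0, read_var=4.0, full_well=65535):
    """Weighted-average irradiance estimate from a multi-exposure stack, with
    inverse-variance weights under a simple shot-noise + read-noise model
    (a stand-in for the paper's calibrated camera model)."""
    num = np.zeros_like(images[0], dtype=np.float64)
    den = np.zeros_like(images[0], dtype=np.float64)
    for img, t in zip(images, exposure_times):
        img = img.astype(np.float64)
        x = img / (gain * t)                              # per-exposure irradiance estimate
        var = (gain * img + read_var) / (gain * t) ** 2   # variance of that estimate
        w = np.where(img < full_well, 1.0 / var, 0.0)     # discard saturated pixels
        num += w * x
        den += w
    return num / np.maximum(den, 1e-12)

# toy usage: three exposures of a constant-irradiance scene
rng = np.random.default_rng(0)
irradiance, times = 100.0, [0.01, 0.1, 1.0]
stack = [rng.poisson(irradiance * t, size=(4, 4)).astype(np.float64) for t in times]
print(hdr_estimate(stack, times).mean())   # close to 100
```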


ACM Transactions on Graphics | 2009

Efficient reconstruction of nonrigid shape and motion from real-time 3D scanner data

Michael Wand; Bart Adams; M. Ovsjanikov; Alexander Berner; Martin Bokeloh; Philipp Jenke; Leonidas J. Guibas; Hans-Peter Seidel; Andreas Schilling

We present a new technique for reconstructing a single shape and its nonrigid motion from 3D scanning data. Our algorithm takes a set of time-varying unstructured sample points that capture partial views of a deforming object as input and reconstructs a single shape and a deformation field that fit the data. This representation yields dense correspondences for the whole sequence, as well as a completed 3D shape in every frame. In addition, the algorithm automatically removes spatial and temporal noise artifacts and outliers from the raw input data. Unlike previous methods, the algorithm does not require any shape template but computes a fitting shape automatically from the input data. Our reconstruction framework is based upon a novel topology-aware adaptive subspace deformation technique that allows handling long sequences with complex geometry efficiently. The algorithm accesses data in multiple sequential passes, so that long sequences can be streamed from hard disk, not being limited by main memory. We apply the technique to several benchmark datasets, significantly increasing the complexity of the data that can be handled efficiently in comparison to previous work.
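
The subspace idea is that the dense deformation field is parameterized by a sparse set of graph nodes whose motions are blended onto the geometry, which keeps the optimization small. The sketch below illustrates only that blending step, with pure translations and Gaussian weights rather than the paper's topology-aware adaptive graph and full local transforms.

```python
import numpy as np

def subspace_deform(points, node_positions, node_translations, sigma=0.3):
    """Tiny illustration of subspace deformation: motion is expressed by a few
    control nodes whose translations are blended onto the dense point set with
    Gaussian partition-of-unity weights (simplified; translations only)."""
    d2 = ((points[:, None, :] - node_positions[None, :, :]) ** 2).sum(-1)
    w = np.exp(-d2 / (2 * sigma ** 2))
    w /= w.sum(axis=1, keepdims=True)        # normalize weights per point
    return points + w @ node_translations    # blended displacement per point

# toy usage: 1000 surface points driven by 4 control nodes
pts = np.random.rand(1000, 3)
nodes = np.random.rand(4, 3)
moves = np.array([[0.1, 0, 0], [0, 0.1, 0], [0, 0, 0.1], [0, 0, 0]])
print(subspace_deform(pts, nodes, moves).shape)   # (1000, 3)
```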


international conference on computer graphics and interactive techniques | 2013

Structure-aware shape processing

Niloy J. Mitra; Michael Wand; Hao Zhang; Daniel Cohen-Or; Vladimir G. Kim; Qixing Huang

Shape structure is about the arrangement of and relations between shape parts. Structure-aware shape processing goes beyond local geometry and low-level processing, and analyzes and processes shapes at a high level. It focuses more on the global inter- and intra-part semantic relations among the parts of a shape rather than on their local geometry. With recent developments in easy shape acquisition, access to vast repositories of 3D models, and simple-to-use desktop fabrication possibilities, the study of structure in shapes has become a central research topic in shape analysis, editing, and modeling. A whole new line of structure-aware shape processing algorithms has emerged that base their operation on an attempt to understand such structure in shapes. The algorithms broadly consist of two key phases: an analysis phase, which extracts structural information from input data; and a (smart) processing phase, which utilizes the extracted information for exploration, editing, and synthesis of novel shapes. In this course, we will organize, summarize, and present the key concepts and methodological approaches towards efficient structure-aware shape processing. We discuss common models of structure, their implementation in terms of mathematical formalism and algorithms, and explain the key principles in the context of a number of state-of-the-art approaches. Further, we attempt to list the key open problems and challenges, both at the technical and at the conceptual level, to make it easier for new researchers to better explore and contribute to this topic. Our goal is both to give the practitioner an overview of available structure-aware shape processing techniques and to identify future research questions in this important, emerging, and fascinating research area.

Collaboration


Dive into Michael Wand's collaborations.

Top Co-Authors

Niloy J. Mitra

University College London
