Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Markus H. Gross is active.

Publication


Featured research published by Markus H. Gross.


symposium on computer animation | 2003

Particle-based fluid simulation for interactive applications

Matthias Müller; David Charypar; Markus H. Gross

Realistically animated fluids can add substantial realism to interactive applications such as virtual surgery simulators or computer games. In this paper we propose an interactive method based on Smoothed Particle Hydrodynamics (SPH) to simulate fluids with free surfaces. The method is an extension of the SPH-based technique by Desbrun to animate highly deformable bodies. We gear the method towards fluid simulation by deriving the force density fields directly from the Navier-Stokes equation and by adding a term to model surface tension effects. In contrast to Eulerian grid-based approaches, the particle-based approach makes mass conservation equations and convection terms dispensable which reduces the complexity of the simulation. In addition, the particles can directly be used to render the surface of the fluid. We propose methods to track and visualize the free surface using point splatting and marching cubes-based surface reconstruction. Our animation method is fast enough to be used in interactive systems and to allow for user interaction with models consisting of up to 5000 particles.
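As a rough illustration of the scheme the abstract describes, the sketch below (Python/NumPy, illustrative constants, brute-force neighbour search) shows how per-particle densities follow from the poly6 kernel, pressures from a stiff equation of state, and pressure forces from the spiky kernel gradient; viscosity, surface tension, and time integration are omitted.

```python
import numpy as np

H, MASS, K, RHO0 = 0.1, 0.02, 1000.0, 1000.0     # illustrative constants, not the paper's

def sph_density_and_pressure_force(pos):
    """Per-particle density (poly6 kernel) and pressure force (spiky gradient)."""
    n = len(pos)
    diff = pos[:, None, :] - pos[None, :, :]                # pairwise offsets
    r = np.linalg.norm(diff, axis=-1)
    near = (r < H) & (r > 0.0)                              # neighbours, self excluded

    poly6 = 315.0 / (64.0 * np.pi * H**9)
    w = np.where(near, poly6 * (H**2 - r**2) ** 3, 0.0)
    rho = MASS * (w.sum(axis=1) + poly6 * H**6)             # + the particle's own contribution

    p = K * (rho - RHO0)                                    # equation of state

    force = np.zeros_like(pos)                              # symmetrised pressure force
    for i in range(n):
        for j in np.nonzero(near[i])[0]:
            grad = -45.0 / (np.pi * H**6) * (H - r[i, j]) ** 2 * diff[i, j] / r[i, j]
            force[i] -= MASS * (p[i] + p[j]) / (2.0 * rho[j]) * grad
    return rho, force
```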


international conference on computer graphics and interactive techniques | 2000

Surfels: surface elements as rendering primitives

Hanspeter Pfister; Matthias Zwicker; Jeroen van Baar; Markus H. Gross

Surface elements (surfels) are a powerful paradigm to efficiently render complex geometric objects at interactive frame rates. Unlike classical surface discretizations, i.e., triangles or quadrilateral meshes, surfels are point primitives without explicit connectivity. Surfel attributes comprise depth, texture color, normal, and others. As a pre-process, an octree-based surfel representation of a geometric object is computed. During sampling, surfel positions and normals are optionally perturbed, and different levels of texture colors are prefiltered and stored per surfel. During rendering, a hierarchical forward warping algorithm projects surfels to a z-buffer. A novel method called visibility splatting determines visible surfels and holes in the z-buffer. Visible surfels are shaded using texture filtering, Phong illumination, and environment mapping using per-surfel normals. Several methods of image reconstruction, including supersampling, offer flexible speed-quality tradeoffs. Due to the simplicity of the operations, the surfel rendering pipeline is amenable for hardware implementation. Surfel objects offer complex shape, low rendering cost and high image quality, which makes them specifically suited for low-cost, real-time graphics, such as games.
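A minimal sketch of the forward-warping stage: each surfel is projected into a z-buffer and the nearest one wins per pixel. This naive version only stands in for the paper's hierarchical warping and visibility splatting, so holes remain wherever the surfel density is too low; `view_proj` and the pixel mapping are assumptions for illustration.

```python
import numpy as np

def forward_warp(points, colors, view_proj, width, height):
    """Project surfels into a z-buffer image; the nearest surfel wins per pixel."""
    img = np.zeros((height, width, 3))
    zbuf = np.full((height, width), np.inf)

    hom = np.c_[points, np.ones(len(points))] @ view_proj.T    # clip-space positions
    ndc = hom[:, :3] / hom[:, 3:4]                             # perspective divide
    x = ((ndc[:, 0] * 0.5 + 0.5) * (width - 1)).astype(int)
    y = ((1.0 - (ndc[:, 1] * 0.5 + 0.5)) * (height - 1)).astype(int)
    z = ndc[:, 2]

    inside = (hom[:, 3] > 0) & (x >= 0) & (x < width) & (y >= 0) & (y < height)
    for i in np.nonzero(inside)[0]:
        if z[i] < zbuf[y[i], x[i]]:                            # simple per-pixel depth test
            zbuf[y[i], x[i]] = z[i]
            img[y[i], x[i]] = colors[i]
    return img, zbuf
```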


ieee visualization | 2002

Efficient simplification of point-sampled surfaces

Mark Pauly; Markus H. Gross; Leif Kobbelt

We introduce, analyze, and quantitatively compare a number of surface simplification methods for point-sampled geometry. We have implemented incremental and hierarchical clustering, iterative simplification, and particle simulation algorithms to create approximations of point-based models with lower sampling density. All these methods work directly on the point cloud, requiring no intermediate tessellation. We show how local variation estimation and quadric error metrics can be employed to diminish the approximation error and concentrate more samples in regions of high curvature. To compare the quality of the simplified surfaces, we have designed a new method for computing numerical and visual error estimates for point-sampled surfaces. Our algorithms are fast, easy to implement, and create high-quality surface approximations, clearly demonstrating the effectiveness of point-based surface simplification.
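A sketch of the hierarchical-clustering flavour of simplification, using a covariance-based surface variation measure to decide when a cluster is flat enough to be replaced by its centroid; the split rule and thresholds here are illustrative, not the paper's.

```python
import numpy as np

def surface_variation(cluster):
    """lambda_0 / (lambda_0 + lambda_1 + lambda_2) of the cluster covariance."""
    centered = cluster - cluster.mean(axis=0)
    eigvals = np.linalg.eigvalsh(centered.T @ centered)   # ascending eigenvalues
    return eigvals[0] / max(eigvals.sum(), 1e-12)

def simplify(points, max_size=30, max_variation=0.01):
    """Split along the direction of greatest spread until clusters are small
    and flat, then keep one representative point (the centroid) per cluster."""
    out, stack = [], [points]
    while stack:
        c = stack.pop()
        flat_enough = len(c) <= max_size and surface_variation(c) < max_variation
        if flat_enough or len(c) <= 3:
            out.append(c.mean(axis=0))
            continue
        centered = c - c.mean(axis=0)
        _, _, vt = np.linalg.svd(centered, full_matrices=False)
        side = centered @ vt[0] < 0.0                      # split plane through the centroid
        if side.all() or not side.any():                   # degenerate split; stop here
            out.append(c.mean(axis=0))
            continue
        stack.append(c[side])
        stack.append(c[~side])
    return np.array(out)
```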


international conference on computer graphics and interactive techniques | 2001

Surface splatting

Matthias Zwicker; Hanspeter Pfister; Jeroen van Baar; Markus H. Gross

Modern laser range and optical scanners need rendering techniques that can handle millions of points with high resolution textures. This paper describes a point rendering and texture filtering technique called surface splatting which directly renders opaque and transparent surfaces from point clouds without connectivity. It is based on a novel screen space formulation of the Elliptical Weighted Average (EWA) filter. Our rigorous mathematical analysis extends the texture resampling framework of Heckbert to irregularly spaced point samples. To render the points, we develop a surface splat primitive that implements the screen space EWA filter. Moreover, we show how to optimally sample image and procedural textures to irregular point data during pre-processing. We also compare the optimal algorithm with a more efficient view-independent EWA pre-filter. Surface splatting makes the benefits of EWA texture filtering available to point-based rendering. It provides high quality anisotropic texture filtering, hidden surface removal, edge anti-aliasing, and order-independent transparency.
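A simplified screen-space EWA accumulation pass, assuming each splat already comes with a 2x2 Jacobian of its tangent-plane-to-screen mapping; the resampling Gaussian is the projected unit reconstruction kernel convolved with a unit prefilter, and depth ordering and transparency are omitted.

```python
import numpy as np

def ewa_splat(centers, jacobians, colors, width, height, cutoff=3.0):
    """Accumulate screen-space elliptical Gaussian splats into colour/weight buffers."""
    col = np.zeros((height, width, 3))
    wsum = np.zeros((height, width))

    for c, J, rgb in zip(centers, jacobians, colors):
        cov = J @ J.T + np.eye(2)                           # resampling covariance
        conic = np.linalg.inv(cov)
        radius = cutoff * np.sqrt(np.max(np.linalg.eigvalsh(cov)))
        x0, x1 = int(max(c[0] - radius, 0)), int(min(c[0] + radius + 1, width))
        y0, y1 = int(max(c[1] - radius, 0)), int(min(c[1] + radius + 1, height))
        if x0 >= x1 or y0 >= y1:                            # splat entirely off-screen
            continue
        ys, xs = np.mgrid[y0:y1, x0:x1]
        d = np.stack([xs - c[0], ys - c[1]], axis=-1)
        md = np.einsum('...i,ij,...j->...', d, conic, d)    # squared Mahalanobis distance
        w = np.exp(-0.5 * md) * (md < cutoff ** 2)
        col[y0:y1, x0:x1] += w[..., None] * rgb
        wsum[y0:y1, x0:x1] += w

    return col / np.maximum(wsum, 1e-8)[..., None]          # per-pixel normalisation
```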


international conference on computer graphics and interactive techniques | 2005

Meshless deformations based on shape matching

Matthias Müller; Bruno Heidelberger; Matthias Teschner; Markus H. Gross

We present a new approach for simulating deformable objects. The underlying model is geometrically motivated. It handles point-based objects and does not need connectivity information. The approach does not require any pre-processing, is simple to compute, and provides unconditionally stable dynamic simulations. The main idea of our deformable model is to replace energies by geometric constraints and forces by distances of current positions to goal positions. These goal positions are determined via a generalized shape matching of an undeformed rest state with the current deformed state of the point cloud. Since points are always drawn towards well-defined locations, the overshooting problem of explicit integration schemes is eliminated. The versatility of the approach in terms of object representations that can be handled, the efficiency in terms of memory and computational complexity, and the unconditional stability of the dynamic simulation make the approach particularly interesting for games.
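The rigid core of the shape-matching update is compact enough to sketch directly: build the moment matrix from rest and current positions, extract the rotation by polar decomposition (via SVD here), and pull every point towards its goal position. The linear-deformation and plasticity extensions and the clustered variant are omitted; constants are illustrative.

```python
import numpy as np

def shape_match_step(x, v, x0, masses, alpha=0.5, dt=1e-2,
                     gravity=np.array([0.0, -9.81, 0.0])):
    """One explicit step of basic (rigid) shape matching.

    x, v  : current positions / velocities, shape (n, 3)
    x0    : rest positions, shape (n, 3)
    alpha : stiffness in [0, 1]; 1 snaps points straight to their goals
    """
    m = masses[:, None]
    cm, cm0 = (m * x).sum(0) / m.sum(), (m * x0).sum(0) / m.sum()
    p, q = x - cm, x0 - cm0                        # positions relative to the centres of mass

    Apq = (m * p).T @ q                            # moment matrix sum_i m_i p_i q_i^T
    U, _, Vt = np.linalg.svd(Apq)
    R = U @ Vt                                     # rotation from polar decomposition
    if np.linalg.det(R) < 0:                       # avoid reflections
        U[:, -1] *= -1
        R = U @ Vt

    goals = q @ R.T + cm                           # goal positions g_i = R q_i + cm
    v = v + alpha * (goals - x) / dt + dt * gravity
    x = x + dt * v
    return x, v
```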


symposium on computer animation | 2004

Point based animation of elastic, plastic and melting objects

Matthias Müller; Richard Keiser; Andrew Nealen; Mark Pauly; Markus H. Gross; Marc Alexa

We present a method for modeling and animating a wide spectrum of volumetric objects, with material properties anywhere in the range from stiff elastic to highly plastic. Both the volume and the surface representation are point based, which allows arbitrarily large deviations from the original shape. In contrast to previous point-based elasticity in computer graphics, our physical model is derived from continuum mechanics, which allows the specification of common material properties such as Young's modulus and Poisson's ratio. In each step, we compute the spatial derivatives of the discrete displacement field using a Moving Least Squares (MLS) procedure. From these derivatives we obtain strains, stresses, and elastic forces at each simulated point. We demonstrate how to solve the equations of motion based on these forces, with both explicit and implicit integration schemes. In addition, we propose techniques for modeling and animating a point-sampled surface that dynamically adapts to deformations of the underlying volumetric model.
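A sketch of the per-point kernel of such a method: a weighted least-squares (MLS-style) estimate of the displacement gradient from a point's neighbours, followed by the Green strain and a linear stress. The weight function and material model here are illustrative choices, not necessarily the paper's.

```python
import numpy as np

def mls_displacement_gradient(xi, x_neighbors, ui, u_neighbors, h):
    """Weighted least-squares fit of grad u at a point from its neighbourhood."""
    dx = x_neighbors - xi                        # rest-state offsets x_j - x_i
    du = u_neighbors - ui                        # displacement differences u_j - u_i
    w = np.maximum(1.0 - (np.linalg.norm(dx, axis=1) / h) ** 2, 0.0) ** 3  # illustrative kernel
    A = (w[:, None] * dx).T @ dx                 # moment matrix (assumed well-conditioned)
    B = (w[:, None] * dx).T @ du
    return np.linalg.solve(A, B).T               # grad_u[a, b] = d u_a / d x_b

def stress_from_gradient(grad_u, young=1e5, poisson=0.3):
    """Green strain and a St. Venant-Kirchhoff-style stress from grad u."""
    lam = young * poisson / ((1 + poisson) * (1 - 2 * poisson))
    mu = young / (2 * (1 + poisson))
    strain = 0.5 * (grad_u + grad_u.T + grad_u.T @ grad_u)   # Green strain tensor
    return lam * np.trace(strain) * np.eye(3) + 2 * mu * strain
```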


international conference on computer graphics and interactive techniques | 1996

Simulating facial surgery using finite element models

Rolf M. Koch; Markus H. Gross; Friedrich R. Carls; Daniel F. von Büren; George Fankhauser; Yoav I. H. Parish

This paper describes a prototype system for surgical planning and prediction of human facial shape after craniofacial and maxillofacial surgery for patients with facial deformities. For this purpose it combines, unifies, and extends various methods from geometric modeling, finite element analysis, and image processing to render highly realistic 3D images of the post-surgical situation. The basic concept of the system is to join advanced geometric modeling and animation systems such as Alias with a special-purpose finite element model of the human face developed under AVS. In contrast to existing facial models, we acquire facial surface and soft tissue data from both photogrammetric and CT scans of the individual. After initial data preprocessing, reconstruction, and registration, a finite element model of the facial surface and soft tissue is provided which is based on triangular finite elements. Stiffness parameters of the soft tissue are computed using segmentations of the underlying CT data. All interactive procedures such as bone and soft tissue repositioning are performed under the guidance of the modeling system, which feeds the processed geometry into the FEM solver. The resulting shape is generated by minimizing the global energy of the surface in the presence of external forces. Photorealistic pictures are obtained by rendering the facial surface with the advanced animation system on which this prototype is built. Although we do not claim any of the presented algorithms themselves to be new, the synthesis of several methods offers a new facial model quality. Our concept is a significant extension to existing ones and, due to its versatility, can be employed in different applications such as facial animation, facial reconstruction, or the simulation of aging. We illustrate features of our system with some examples from the Visible Human Data Set™.
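The final solve, stripped of everything facial-specific, is a constrained quadratic energy minimisation. The schematic below assumes an already assembled stiffness matrix K and prescribed displacements for repositioned bone; the element assembly, CT-based stiffness estimation, and the rest of the actual pipeline are not shown.

```python
import numpy as np

def solve_static_equilibrium(K, f_ext, fixed_idx, fixed_vals):
    """Minimise 1/2 u^T K u - f^T u subject to prescribed displacements.

    K          : (n, n) assembled global stiffness matrix (dense here for brevity)
    f_ext      : (n,) external load vector
    fixed_idx  : indices of DOFs with prescribed displacements (e.g. repositioned bone)
    fixed_vals : the prescribed displacement values
    """
    n = K.shape[0]
    free = np.setdiff1d(np.arange(n), fixed_idx)
    u = np.zeros(n)
    u[fixed_idx] = fixed_vals
    # Partitioned solve: K_ff u_f = f_f - K_fc u_c
    rhs = f_ext[free] - K[np.ix_(free, fixed_idx)] @ fixed_vals
    u[free] = np.linalg.solve(K[np.ix_(free, free)], rhs)
    return u
```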


international conference on computer graphics and interactive techniques | 2010

Nonlinear disparity mapping for stereoscopic 3D

Manuel Lang; Alexander Hornung; Oliver Wang; Steven Poulakos; Aljoscha Smolic; Markus H. Gross

This paper addresses the problem of remapping the disparity range of stereoscopic images and video. Such operations are highly important for a variety of issues arising from the production, live broadcast, and consumption of 3D content. Our work is motivated by the observation that the displayed depth and the resulting 3D viewing experience are dictated by a complex combination of perceptual, technological, and artistic constraints. We first discuss the most important perceptual aspects of stereo vision and their implications for stereoscopic content creation. We then formalize these insights into a set of basic disparity mapping operators. These operators enable us to control and retarget the depth of a stereoscopic scene in a nonlinear and locally adaptive fashion. To implement our operators, we propose a new strategy based on stereoscopic warping of the input video streams. From a sparse set of stereo correspondences, our algorithm computes disparity and image-based saliency estimates, and uses them to compute a deformation of the input views so as to meet the target disparities. Our approach represents a practical solution for actual stereo production and display that does not require camera calibration, accurate dense depth maps, occlusion handling, or inpainting. We demonstrate the performance and versatility of our method using examples from live action post-production, 3D display size adaptation, and live broadcast. An additional user study and ground truth comparison further provide evidence for the quality and practical relevance of the presented work.
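A toy example of one nonlinear disparity mapping operator of the kind the paper formalises: a log-shaped curve that compresses large disparities into a display-dependent target range while preserving resolution near the screen plane. The specific curve and its parameters are illustrative only.

```python
import numpy as np

def remap_disparities(d, d_target_min, d_target_max, strength=8.0):
    """Nonlinearly compress a set of (sparse) disparities into a target range.

    The curve spends more of the target range on small disparities than on
    large ones; the paper's operators also include linear, gradient-domain,
    locally adaptive, and temporal variants not shown here.
    """
    d = np.asarray(d, dtype=float)
    t = (d - d.min()) / max(d.max() - d.min(), 1e-9)       # normalise to [0, 1]
    t = np.log1p(strength * t) / np.log1p(strength)        # nonlinear compression
    return d_target_min + t * (d_target_max - d_target_min)

# Example: squeeze measured disparities into a 0..30 pixel budget for a small display.
# new_d = remap_disparities(sparse_disparities, 0.0, 30.0)
```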


Computer Graphics Forum | 2003

Multi-scale Feature Extraction on Point-Sampled Surfaces

Mark Pauly; Richard Keiser; Markus H. Gross

We present a new technique for extracting line-type features on point-sampled geometry. Given an unstructured point cloud as input, our method first applies principal component analysis on local neighborhoods to classify points according to the likelihood that they belong to a feature. Using hysteresis thresholding, we then compute a minimum spanning graph as an initial approximation of the feature lines. To smooth out the features while maintaining a close connection to the underlying surface, we use an adaptation of active contour models. Central to our method is a multi-scale classification operator that allows feature analysis at multiple scales, using the size of the local neighborhoods as a discrete scale parameter. This significantly improves the reliability of the detection phase and makes our method more robust in the presence of noise. To illustrate the usefulness of our method, we have implemented a non-photorealistic point renderer to visualize point-sampled surfaces as line drawings of their extracted feature curves.
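A sketch of the multi-scale classification step: per-point surface variation (smallest covariance eigenvalue over the trace) evaluated at several neighbourhood sizes, with a naive score combining the scales. The hysteresis thresholding, spanning-graph construction, and snake-based smoothing stages are not shown, and the scale values and score threshold are made up.

```python
import numpy as np
from scipy.spatial import cKDTree

def multiscale_variation(points, scales=(10, 20, 40), threshold=0.05):
    """Per-point surface variation at several neighbourhood sizes.

    For each point and each neighbourhood size k, the covariance of the k
    nearest neighbours is analysed; the eigenvalue ratio
    lambda_0 / (lambda_0 + lambda_1 + lambda_2) is near zero on flat regions
    and grows near creases and corners.
    """
    tree = cKDTree(points)
    var = np.zeros((len(points), len(scales)))
    for s, k in enumerate(scales):
        _, idx = tree.query(points, k=k)
        for i, nbrs in enumerate(idx):
            nb = points[nbrs] - points[nbrs].mean(axis=0)
            ev = np.linalg.eigvalsh(nb.T @ nb)             # ascending eigenvalues
            var[i, s] = ev[0] / max(ev.sum(), 1e-12)
    # Naive multi-scale score: fraction of scales at which the point looks feature-like.
    return var, (var > threshold).mean(axis=1)
```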


international conference on computer graphics and interactive techniques | 2002

Pointshop 3D: an interactive system for point-based surface editing

Matthias Zwicker; Mark Pauly; Oliver Knoll; Markus H. Gross

We present a system for interactive shape and appearance editing of 3D point-sampled geometry. By generalizing conventional 2D pixel editors, our system supports a great variety of interaction techniques to alter the shape and appearance of 3D point models, including cleaning, texturing, sculpting, carving, filtering, and resampling. One key ingredient of our framework is a novel concept for interactive point cloud parameterization that allows for distortion-minimal and aliasing-free texture mapping. A second is a dynamic, adaptive resampling method that builds upon a continuous reconstruction of the model surface and its attributes. These techniques allow us to transfer the full functionality of 2D image editing operations to the irregular 3D point setting. Our system reads, processes, and writes point-sampled models without intermediate tessellation. It is intended to complement existing low-cost 3D scanners and point rendering pipelines for efficient 3D content creation.
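A toy brush operation in the spirit of such point-based appearance editing: points near the brush centre are projected onto a single tangent plane and blended with a radial falloff. This crude local parameterisation only stands in for the system's distortion-minimal, aliasing-free parameterisation; all names and parameters below are hypothetical.

```python
import numpy as np

def paint_brush(points, normals, colors, center, brush_rgb,
                radius=0.05, hardness=2.0):
    """Blend a circular brush colour into point colours near `center`."""
    d = points - center
    dist = np.linalg.norm(d, axis=1)
    inside = dist < radius
    if not inside.any():
        return colors

    n = normals[np.argmin(dist)]                            # normal at the brush centre
    n = n / np.linalg.norm(n)
    u = np.cross(n, [0.0, 0.0, 1.0])
    if np.linalg.norm(u) < 1e-6:                            # normal parallel to z axis
        u = np.cross(n, [0.0, 1.0, 0.0])
    u /= np.linalg.norm(u)
    v = np.cross(n, u)

    # 2D coordinates in the tangent plane and a smooth radial falloff.
    uv = np.stack([d[inside] @ u, d[inside] @ v], axis=1)
    fall = np.clip(1.0 - np.linalg.norm(uv, axis=1) / radius, 0.0, 1.0) ** hardness
    colors = colors.copy()
    colors[inside] = (1 - fall[:, None]) * colors[inside] + fall[:, None] * np.asarray(brush_rgb)
    return colors
```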

Collaboration


Dive into Markus H. Gross's collaborations.

Top Co-Authors

Bernd Bickel

Institute of Science and Technology Austria


Mark Pauly

École Polytechnique Fédérale de Lausanne
