Publications

Featured research published by Martin Isenburg.


IEEE Transactions on Visualization and Computer Graphics | 2006

Fast and Efficient Compression of Floating-Point Data

Peter Lindstrom; Martin Isenburg

Large-scale scientific simulation codes typically run on a cluster of CPUs that write/read time steps to/from a single file system. As data sets are constantly growing in size, this increasingly leads to I/O bottlenecks. When the rate at which data is produced exceeds the available I/O bandwidth, the simulation stalls and the CPUs are idle. Data compression can alleviate this problem by using some CPU cycles to reduce the amount of data that needs to be transferred. Most compression schemes, however, are designed to operate offline and seek to maximize compression, not throughput. Furthermore, they often require quantizing floating-point values onto a uniform integer grid, which disqualifies their use in applications where exact values must be retained. We propose a simple scheme for lossless, online compression of floating-point data that transparently integrates into the I/O of many applications. A plug-in scheme for data-dependent prediction makes our scheme applicable to a wide variety of data used in visualization, such as unstructured meshes, point sets, images, and voxel grids. We achieve state-of-the-art compression rates and speeds, the latter in part due to an improved entropy coder. We demonstrate that this significantly accelerates I/O throughput in real simulation runs. Unlike previous schemes, our method also adapts well to variable-precision floating-point and integer data.
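
The core idea of lossless predictive float compression can be sketched in a few lines of Python (an illustration, not the paper's actual codec): map each IEEE-754 float to an unsigned integer whose ordering matches the numeric ordering, so the difference between the predicted and actual codes is a small integer that an entropy coder can compress; the previous-value predictor below stands in for the paper's data-dependent prediction plug-ins.

```python
import struct

def to_ordered(x: float) -> int:
    """Map a 32-bit float to an unsigned int that sorts like the float."""
    u = struct.unpack("<I", struct.pack("<f", x))[0]
    # Positive floats: set the sign bit; negative floats: flip all bits.
    return u ^ 0x80000000 if u < 0x80000000 else ~u & 0xFFFFFFFF

def from_ordered(v: int) -> float:
    """Inverse of to_ordered."""
    u = v ^ 0x80000000 if v >= 0x80000000 else ~v & 0xFFFFFFFF
    return struct.unpack("<f", struct.pack("<I", u))[0]

def delta_residuals(values):
    """Residuals against a previous-value predictor: nearby floats have
    nearby ordered codes, so residuals stay small and compress well.
    Decoding adds each residual back to the previous reconstruction."""
    prev, out = 0.0, []
    for x in values:
        out.append(to_ordered(x) - to_ordered(prev))
        prev = x
    return out
```

Because the mapping is a bijection, decoding recovers the original bits exactly; the lossless property does not depend on the quality of the predictor, which only affects compression rates.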


International Conference on Shape Modeling and Applications | 2003

Isotropic surface remeshing

Pierre Alliez; Éric Colin de Verdière; Olivier Devillers; Martin Isenburg

This paper proposes a new method for isotropic remeshing of triangulated surface meshes. Given a triangulated surface mesh to be resampled and a user-specified density function defined over it, we first distribute the desired number of samples by generalizing error diffusion, commonly used in image halftoning, to work directly on mesh triangles and feature edges. We then use the resulting sampling as an initial configuration for building a weighted centroidal Voronoi tessellation in a conformal parameter space, where the specified density function is used for weighing. We finally create the mesh by lifting the corresponding constrained Delaunay triangulation from parameter space. A precise control over the sampling is obtained through a flexible design of the density function, the latter being possibly low-pass filtered to obtain a smoother gradation. We demonstrate the versatility of our approach through various remeshing examples.
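
The error-diffusion step can be illustrated in one dimension (a hedged sketch; the paper works directly on mesh triangles and feature edges): each cell receives a real-valued sample quota proportional to its density, and the rounding error is carried into the next cell so the total number of samples is preserved exactly.

```python
def diffuse_samples(density, n_samples):
    """1D analogue of error diffusion: distribute n_samples over cells
    in proportion to `density`, pushing each cell's rounding error into
    the next cell so that the emitted counts sum to exactly n_samples."""
    total = sum(density)
    carry = 0.0
    counts = []
    for d in density:
        quota = d / total * n_samples + carry  # fair share plus carried error
        k = int(quota + 0.5)                   # round to nearest integer
        carry = quota - k                      # diffuse the leftover forward
        counts.append(k)
    return counts
```

On a mesh the same bookkeeping runs over triangles and feature edges instead of a linear strip, but the invariant is identical: the diffused error guarantees the user-requested sample count.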


IEEE Visualization | 2003

Large mesh simplification using processing sequences

Martin Isenburg; Peter Lindstrom; Stefan Gumhold; Jack Snoeyink

In this paper we show how out-of-core mesh processing techniques can be adapted to perform their computations based on the new processing sequence paradigm (Isenburg, et al., 2003), using mesh simplification as an example. We believe that this processing concept will also prove useful for other tasks, such as parameterization, remeshing, or smoothing, for which currently only in-core solutions exist. A processing sequence represents a mesh as a particular interleaved ordering of indexed triangles and vertices. This representation allows streaming very large meshes through main memory while maintaining information about the visitation status of edges and vertices. At any time, only a small portion of the mesh is kept in-core, with the bulk of the mesh data residing on disk. Mesh access is restricted to a fixed traversal order, but full connectivity and geometry information is available for the active elements of the traversal. This provides seamless and highly efficient out-of-core access to very large meshes for algorithms that can adapt their computations to this fixed ordering. The two abstractions that are naturally supported by this representation are boundary-based and buffer-based processing. We illustrate both abstractions by adapting two different simplification methods to perform their computation using a prototype of our mesh processing sequence API. Both algorithms benefit from using processing sequences in terms of improved quality, more efficient execution, and smaller memory footprints.
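
The streaming idea can be sketched as a toy consumer (illustrative Python, not the paper's API; the record format with explicit finalization tags is an assumption): vertices and triangles arrive interleaved, a triangle may only reference vertices already seen, and a finalization tag tells the consumer that a vertex will never be referenced again and can be evicted from core.

```python
def stream_mesh(records, on_triangle, on_finalize):
    """Consume an interleaved stream of ('v', vid, pos) and
    ('t', (a, b, c), finalized) records, where `finalized` lists vertex
    ids referenced for the last time. Only active (un-finalized)
    vertices are kept in core; returns the peak in-core vertex count."""
    active = {}   # vid -> position, the small in-core working set
    peak = 0
    for rec in records:
        if rec[0] == 'v':
            _, vid, pos = rec
            active[vid] = pos
            peak = max(peak, len(active))
        else:
            _, tri, finalized = rec
            # Full geometry is available for the active traversal front.
            on_triangle(tuple(active[v] for v in tri))
            for v in finalized:          # evict vertices we will never see again
                on_finalize(v, active.pop(v))
    return peak
```

The peak size of `active` is what bounds the memory footprint, independent of the total mesh size.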


Computer-Aided Design | 2005

Lossless compression of predicted floating-point geometry

Martin Isenburg; Peter Lindstrom; Jack Snoeyink

The size of geometric data sets in scientific and industrial applications is constantly increasing. Storing surface or volume meshes in standard uncompressed formats results in large files that are expensive to store and slow to load and transmit. Scientists and engineers often refrain from using mesh compression because currently available schemes modify the mesh data. While connectivity is encoded in a lossless manner, the floating-point coordinates associated with the vertices are quantized onto a uniform integer grid to enable efficient predictive compression. Although a fine enough grid can usually represent the data with sufficient precision, the original floating-point values will change, regardless of grid resolution. In this paper we describe a method for compressing floating-point coordinates with predictive coding in a completely lossless manner. The initial quantization step is omitted and predictions are calculated in floating-point. The predicted and the actual floating-point values are broken up into sign, exponent, and mantissa and their corrections are compressed separately with context-based arithmetic coding. As the quality of the predictions varies with the exponent, we use the exponent to switch between different arithmetic contexts. We report compression results using the popular parallelogram predictor, but our approach will work with any prediction scheme. The achieved bit-rates for lossless floating-point compression nicely complement those resulting from uniformly quantizing with different precisions.


IEEE Visualization | 2002

Compressing polygon mesh geometry with parallelogram prediction

Martin Isenburg; Pierre Alliez

We present a generalization of the geometry coder by Touma and Gotsman (1998) to polygon meshes. We let the polygon information dictate where to apply the parallelogram rule that they use to predict vertex positions. Since polygons tend to be fairly planar and fairly convex, it is beneficial to make predictions within a polygon rather than across polygons. This, for example, avoids poor predictions due to a crease angle between polygons. Up to 90 percent of the vertices can be predicted this way. Our strategy improves geometry compression by 10 to 40 percent depending on (a) how polygonal the mesh is and (b) the quality (planarity/convexity) of the polygons.
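
The parallelogram rule itself is tiny; a sketch in Python (coordinates and function names illustrative): given three consecutive vertices a, b, c of a quad, the fourth vertex is predicted by completing the parallelogram opposite b, and only the (typically small) residual needs to be encoded.

```python
def parallelogram_predict(a, b, c):
    """Complete the parallelogram a-b-c-? opposite vertex b: d ~ a - b + c."""
    return tuple(ai - bi + ci for ai, bi, ci in zip(a, b, c))

def residual(actual, predicted):
    """Per-coordinate correction; exactly zero for a perfect parallelogram,
    and small for the fairly planar, fairly convex polygons typical of meshes."""
    return tuple(x - p for x, p in zip(actual, predicted))
```

Applying this rule within a polygon rather than across an edge shared by two polygons is what avoids the crease-angle mispredictions mentioned above.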


International Conference on Computer Graphics and Interactive Techniques | 2005

Predictive point-cloud compression

Stefan Gumhold; Zachi Karni; Martin Isenburg; Hans-Peter Seidel

Point clouds have recently become a popular alternative to polygonal meshes for representing three-dimensional geometric models. 3D photography and scanning systems acquire the geometry and appearance of real-world objects in the form of point samples. Rendering directly with points eliminates the complex task of reconstructing a surface and allows handling of non-surface models such as trees. With modern acquisition techniques producing larger and larger amounts of points, efficient schemes for compressing such data have become necessary.


Graphical Models | 2005

Centroidal Voronoi diagrams for isotropic surface remeshing

Pierre Alliez; Éric Colin de Verdière; Olivier Devillers; Martin Isenburg

This paper proposes a new method for isotropic remeshing of triangulated surface meshes. Given a triangulated surface mesh to be resampled and a user-specified density function defined over it, we first distribute the desired number of samples by generalizing error diffusion, commonly used in image halftoning, to work directly on mesh triangles and feature edges. We then use the resulting sampling as an initial configuration for building a weighted centroidal Voronoi diagram in a conformal parameter space, where the specified density function is used for weighting. We finally create the mesh by lifting the corresponding constrained Delaunay triangulation from parameter space. A precise control over the sampling is obtained through a flexible design of the density function, the latter being possibly low-pass filtered to obtain a smoother gradation. We demonstrate the versatility of our approach through various remeshing examples.
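
The centroidal-Voronoi step can be illustrated in one dimension (a hedged sketch, not the paper's computation in a 2D conformal parameter space): alternately assign weighted samples to their nearest site and move each site to the weighted centroid of its cell; with a uniform density the sites spread out evenly.

```python
def lloyd_1d(sites, samples, weights, iters=30):
    """Weighted Lloyd relaxation on a line: each iteration assigns every
    sample to its nearest site (the 1D Voronoi cell), then moves each
    site to the weighted centroid of the samples in its cell."""
    sites = list(sites)
    for _ in range(iters):
        num = [0.0] * len(sites)
        den = [0.0] * len(sites)
        for x, w in zip(samples, weights):
            i = min(range(len(sites)), key=lambda j: abs(sites[j] - x))
            num[i] += w * x
            den[i] += w
        # Empty cells keep their site; occupied cells move to the centroid.
        sites = [n / d if d else s for n, d, s in zip(num, den, sites)]
    return sites
```

For two sites under a uniform density on [0, 1], the fixed point is at 0.25 and 0.75, i.e. the sites end up evenly spaced, which is exactly the isotropy the remesher is after.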


International Conference on Computer Graphics and Interactive Techniques | 2005

Streaming compression of triangle meshes

Martin Isenburg; Peter Lindstrom; Jack Snoeyink

Current mesh compression schemes encode triangles and vertices in an order derived from systematically traversing the connectivity graph. These schemes struggle with gigabyte-sized mesh input where the construction and the usage of the data structures that support topological traversal queries become I/O-inefficient and require large amounts of temporary disk space. Furthermore, they expect the entire mesh as input. Since meshes cannot be compressed until their generation is complete, they have to be stored at least once in uncompressed form. We radically depart from the traditional approach to mesh compression and propose a scheme that incrementally encodes a mesh in the order it is given to the compressor using only minimal memory resources. This makes the compression process essentially transparent to the user and practically independent of the mesh size. This is especially beneficial for compressing large meshes, where previous approaches spend significant memory, disk, and I/O resources on pre-processing, whereas our scheme starts compressing after receiving the first few triangles.
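
A toy sketch of stream-order encoding (illustrative Python, not the paper's actual codec): each incoming triangle is encoded immediately, a vertex position is emitted in full the first time it is referenced, and later references become small indices into a bounded recency cache, so memory use stays fixed no matter how large the mesh grows.

```python
class StreamingEncoder:
    """Encode triangles in the order they arrive. The only state is a
    small most-recently-used cache of vertex ids; everything already
    encoded has left memory, making the encoder independent of mesh size."""

    def __init__(self, cache_size=1024):
        self.cache = []              # most-recently-used vertex ids, front = newest
        self.cache_size = cache_size
        self.out = []                # symbolic output stream

    def triangle(self, tri, positions):
        for v in tri:
            if v in self.cache:
                # Cheap to code: a small index into the recency cache.
                self.out.append(('hit', self.cache.index(v)))
                self.cache.remove(v)
            else:
                # First use (or evicted): emit the full vertex position.
                self.out.append(('miss', positions[v]))
            self.cache.insert(0, v)
            if len(self.cache) > self.cache_size:
                self.cache.pop()     # bound the in-core working set
```

In a real coder the hit/miss symbols and cache indices would feed an entropy coder; the point here is only that encoding starts with the first triangle and never needs the whole mesh.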


Pacific Conference on Computer Graphics and Applications | 2002

Compressing hexahedral volume meshes

Martin Isenburg; Pierre Alliez

Unstructured hexahedral volume meshes are of particular interest for visualization and simulation applications. They allow regular tiling of three-dimensional space and show good numerical behaviour in finite element computations. Despite these appealing properties, volume meshes take a huge amount of space when stored in a raw format. We present a technique for encoding connectivity and geometry of unstructured hexahedral volume meshes. For connectivity compression, we extend the idea of coding with degrees as pioneered by Touma and Gotsman (1998) to volume meshes. Hexahedral connectivity is coded as a sequence of edge degrees. This naturally exploits the regularity of typical hexahedral meshes. We achieve compression rates of around 1.5 bits per hexahedron (bph) that go down to 0.18 bph for regular meshes. On our test meshes the average connectivity compression ratio is 1:162.7. For geometry compression, we perform simple parallelogram prediction on uniformly quantized vertices within the side of a hexahedron. Tests show an average geometry compression ratio of 1:3.7 at a quantization level of 16 bits.
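
The reported connectivity ratios can be sanity-checked with a quick back-of-the-envelope computation (assuming a raw format of eight 32-bit vertex indices per hexahedron; the slight gap to the reported 1:162.7 average comes from averaging per mesh):

```python
RAW_BITS_PER_HEX = 8 * 32  # eight 32-bit vertex indices in a raw format

def connectivity_ratio(bph):
    """Compression ratio of a bits-per-hexahedron rate vs. the raw format."""
    return RAW_BITS_PER_HEX / bph

# 1.5 bph (typical meshes) gives roughly 1:170; 0.18 bph (regular
# meshes) gives roughly 1:1400 relative to the assumed raw encoding.
for bph in (1.5, 0.18):
    print(f"{bph} bph -> 1:{connectivity_ratio(bph):.1f}")
```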


Computer-Aided Design and Applications | 2004

Lossless Compression of Floating-Point Geometry

Martin Isenburg; Peter Lindstrom; Jack Snoeyink

The geometric data sets found in scientific and industrial applications are often very detailed. Storing them using standard uncompressed formats results in large files that are expensive to store and slow to load and transmit. Many efficient mesh compression techniques have been proposed, but scientists and engineers often refrain from using them because they modify the mesh data. While connectivity is encoded in a lossless manner, the floating-point coordinates associated with the vertices are quantized onto a uniform integer grid for efficient predictive compression. Although a fine enough grid can usually represent the data with sufficient precision, the original floating-point values will change, regardless of grid resolution. In this paper we describe how to compress floating-point coordinates using predictive coding in a completely lossless manner. The initial quantization step is omitted and predictions are calculated in floating-point. The predicted and the actual floating-point values are then broken up into sign, exponent, and mantissa and their corrections are compressed separately with context-based arithmetic coding. As the quality of the predictions varies with the exponent, we use the exponent to switch between different arithmetic contexts. Although we report compression results using the popular parallelogram predictor, our approach works with any prediction scheme. The achieved bit-rates for lossless floating-point compression nicely complement those resulting from uniformly quantizing with different precisions.
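
Exponent-switched contexts can be sketched as separate adaptive frequency tables (illustrative Python; a real implementation would drive an arithmetic coder from these counts): keying the statistics by exponent keeps correction symbols from very different magnitudes from polluting each other's probability estimates.

```python
from collections import defaultdict

class ContextModel:
    """One adaptive frequency table per exponent value. The coder consults
    the table for the current exponent, so prediction-quality differences
    across magnitudes translate into separate, sharper symbol statistics."""

    def __init__(self, alphabet=256):
        self.alphabet = alphabet
        self.tables = defaultdict(dict)  # exponent -> {symbol: count}

    def update(self, exponent, symbol):
        t = self.tables[exponent]
        t[symbol] = t.get(symbol, 0) + 1

    def probability(self, exponent, symbol):
        """Laplace-smoothed adaptive estimate within this exponent's context."""
        t = self.tables[exponent]
        return (t.get(symbol, 0) + 1) / (sum(t.values()) + self.alphabet)
```

An arithmetic coder fed with per-context probabilities like these approaches the conditional entropy of the corrections given the exponent, which is what makes the context switch pay off.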

Collaboration

Dive into Martin Isenburg's collaborations.

Top Co-Authors

Jack Snoeyink, University of North Carolina at Chapel Hill
Peter Lindstrom, Lawrence Livermore National Laboratory
Craig Gotsman, Technion – Israel Institute of Technology
Ajith Mascarenhas, University of North Carolina at Chapel Hill
Tim Thirion, University of North Carolina at Chapel Hill
Yuanxin Liu, University of North Carolina at Chapel Hill