Marc Nienhaus
University of Potsdam
Publications
Featured research published by Marc Nienhaus.
IEEE Computer Graphics and Applications | 2007
Vidya Setlur; Thomas Lechner; Marc Nienhaus; Bruce Gooch
A nonphotorealistic algorithm for retargeting images adapts large images so that important objects in the image are still recognizable when displayed at a lower target resolution. Unlike existing image manipulation techniques such as cropping and scaling, the retargeting algorithm can handle multiple important objects in an image. To identify the important objects in an image, we must first segment the image. We use mean-shift image segmentation to decompose an image into homogeneous regions.
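The mean-shift segmentation step mentioned above can be sketched in a few lines. This is a minimal, illustrative implementation over a toy set of color samples, not the paper's image-processing pipeline; the function names and the flat-kernel variant are our own simplifications.

```python
import numpy as np

def mean_shift_modes(points, bandwidth=0.2, iters=50):
    """Shift each point toward the mean of its neighbors within the
    bandwidth (flat kernel) until points settle on density modes."""
    shifted = points.astype(float).copy()
    for _ in range(iters):
        for i, p in enumerate(shifted):
            dist = np.linalg.norm(points - p, axis=1)
            shifted[i] = points[dist < bandwidth].mean(axis=0)
    return shifted

def label_modes(modes, tol=0.1):
    """Merge converged points into discrete region labels."""
    labels, centers = [], []
    for m in modes:
        for j, c in enumerate(centers):
            if np.linalg.norm(m - c) < tol:
                labels.append(j)
                break
        else:
            centers.append(m)
            labels.append(len(centers) - 1)
    return np.array(labels)

# Two well-separated "color" clusters stand in for homogeneous image regions.
rng = np.random.default_rng(0)
colors = np.vstack([rng.normal(0.1, 0.02, (20, 3)),
                    rng.normal(0.9, 0.02, (20, 3))])
labels = label_modes(mean_shift_modes(colors))
print(len(set(labels)))  # -> 2
```

On real images the feature vectors would also include pixel coordinates, so spatially coherent regions emerge rather than pure color clusters.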
Visualization and Data Analysis | 2005
Jürgen Döllner; Henrik Buchholz; Marc Nienhaus; Florian Kirsch
This paper presents an illustrative visualization technique that provides expressive representations of large-scale 3D city models, inspired by the tradition of artistic and cartographic visualizations typically found in bird’s-eye view and panoramic maps. We define a collection of city model components and a real-time multi-pass rendering algorithm that achieves comprehensible, abstract 3D city model depictions based on edge enhancement, color-based and shadow-based depth cues, and procedural facade texturing. Illustrative visualization provides an effective visual interface to urban spatial information and associated thematic information, complementing visual interfaces based on the Virtual Reality paradigm and offering significant potential for graphic design. Primary application areas include city and landscape planning, cartoon worlds in computer games, and tourist information systems.
Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa | 2004
Marc Nienhaus; Jürgen Döllner
In non-photorealistic rendering, sketchiness is essential to communicate visual ideas and can be used to illustrate drafts and concepts in, for instance, architecture and product design. In this paper, we present a hardware-accelerated real-time rendering algorithm for drawings that sketches visually important edges as well as inner color patches of arbitrary 3D objects, even beyond their geometrical boundary. The algorithm preserves edges and color patches as intermediate rendering results using textures. To achieve sketchiness, it applies uncertainty values in image-space to perturb texture coordinates when accessing these intermediate rendering results. The algorithm adjusts depth information derived from 3D objects to ensure visibility when composing sketchy drawings with arbitrary 3D scene contents. Rendering correct depth values while sketching edges and colors beyond the boundary of 3D objects is achieved by depth-sprite rendering. Moreover, we maintain frame-to-frame coherence because consecutive uncertainty values are determined by a Perlin noise function and are therefore correlated in image-space. Finally, we introduce a solution for controlling and predetermining sketchiness by preserving geometrical properties of 3D objects in order to calculate associated uncertainty values. This method significantly reduces the inherent shower-door effect.
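The core perturbation idea, displacing texture lookups by a correlated noise field, can be sketched on the CPU with NumPy. This is a minimal stand-in for the GPU technique: a sinusoidal field substitutes for Perlin noise, and `sketchy_lookup` is an invented helper name.

```python
import numpy as np

def sketchy_lookup(image, noise, strength=2.0):
    """Sample the source image at slightly displaced coordinates,
    driven by a smooth noise field; crisp edges become wobbly,
    sketch-like strokes while nearby pixels stay correlated."""
    h, w = image.shape
    ys, xs = np.mgrid[0:h, 0:w]
    offs = (noise * strength).round().astype(int)
    dy = np.clip(ys + offs, 0, h - 1)
    dx = np.clip(xs + np.roll(offs, 7), 0, w - 1)  # decorrelate x from y
    return image[dy, dx]

# A hard vertical edge; smooth sinusoidal values stand in for Perlin noise.
img = np.zeros((8, 8)); img[:, 4:] = 1.0
noise = np.sin(np.linspace(0, 3, 64)).reshape(8, 8)
out = sketchy_lookup(img, noise)
print(out.shape)  # -> (8, 8)
```

Because the noise field is smooth, consecutive frames that reuse it stay frame-to-frame coherent, which is exactly the property the abstract attributes to the Perlin function.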
Smart Graphics | 2003
Marc Nienhaus; Jürgen Döllner
Depicting dynamics offers manifold ways to visualize dynamics in static media, to understand dynamics in the whole, and to relate dynamics of the past and the future with the current state of a 3D scene. The depiction strategy we propose is based on visual elements, called dynamic glyphs, which are integrated in the 3D scene as additional 2D and 3D geometric objects. They are derived from a formal specification of dynamics based on acyclic, directed graphs, called behavior graphs. Different types of dynamics and corresponding mappings to dynamic glyphs can be identified, for instance, scene events at a discrete point in time, transformation processes of scene objects, and activities of scene actors. The designer or the application can control the visual mapping of dynamics to dynamic glyphs, and, thereby, create own styles of dynamic depiction. Applications of dynamic glyphs include the automated production of instruction manuals, illustrations, and storyboards.
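The mapping from a behavior graph to dynamic glyphs can be illustrated with a tiny data structure. This is a hypothetical sketch under our own naming (`Node`, `GLYPHS`, `depict`); the paper's formal specification is far richer, but the shape of the idea, a DAG of dynamics nodes with a configurable node-type-to-glyph mapping, is the same.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    kind: str                       # "event" | "process" | "activity"
    successors: list = field(default_factory=list)

# Designer-controlled visual mapping: each kind of dynamics gets a glyph.
GLYPHS = {"event": "spark", "process": "arrow", "activity": "ribbon"}

def depict(root):
    """Walk the acyclic behavior graph and emit one glyph per node."""
    out, stack, seen = [], [root], set()
    while stack:
        n = stack.pop()
        if n.name in seen:
            continue
        seen.add(n.name)
        out.append((n.name, GLYPHS[n.kind]))
        stack.extend(n.successors)
    return out

door_open = Node("door-open", "event")
walk = Node("walk-to-door", "activity", [door_open])
print(depict(walk))  # -> [('walk-to-door', 'ribbon'), ('door-open', 'spark')]
```

Swapping entries in `GLYPHS` is the sketch-level analogue of the paper's point that the designer or application controls the visual mapping and can create their own depiction styles.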
Computer Graphics, Virtual Reality, Visualisation and Interaction in Africa | 2006
Marc Nienhaus; Florian Kirsch; Jürgen Döllner
For the interactive construction of CSG models, understanding the layout of a model is essential for its efficient manipulation. To comprehend the position and orientation of a CSG model's aggregated components, we need to perceive its visible and occluded parts as a whole. Hence, transparency and enhanced outlines are key techniques for communicating deeper insight. We present a novel real-time non-photorealistic rendering technique that illustrates the design and spatial assembly of CSG models. As enabling technology, we first present a solution for combining depth peeling with image-based CSG rendering. The rendering technique can then extract layers of ordered depth from the CSG model up to its entire depth complexity. Capturing the surface colors of each layer and combining the results synthesizes order-independent transparency as one major illustration technique for interactive CSG. We further define perceptually important edges of CSG models and integrate an image-space edge-enhancement technique that detects them in each layer. To outline a model's layout, the rendering technique extracts perceptually important edges that are directly visible, i.e., edges that lie on the model's outer surface, as well as edges that are occluded, i.e., hidden by its interior composition. Finally, we combine these edges with the order-independent transparent depictions to generate edge-enhanced illustrations that provide clear insight into CSG models, reveal their complex spatial assembly, and thus simplify their interactive construction.
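The depth-peeling idea, extracting per-pixel fragment layers in front-to-back order, can be simulated on the CPU. This is a conceptual sketch only (the paper's technique runs on the GPU against image-based CSG rendering); `peel_layers` and the fragment representation are our own invention.

```python
# Each pass extracts, per pixel, the nearest fragment behind the
# previously peeled depth, yielding ordered layers for
# order-independent transparency.
def peel_layers(fragments):
    """fragments: dict mapping pixel -> list of (depth, color).
    Returns a list of layers; each layer maps pixel -> (depth, color)
    or None where a pixel has no fragment left."""
    peeled = {p: -float("inf") for p in fragments}
    layers = []
    while True:
        layer, any_found = {}, False
        for p, frags in fragments.items():
            behind = [f for f in frags if f[0] > peeled[p]]
            if behind:
                layer[p] = min(behind)       # nearest remaining fragment
                peeled[p] = layer[p][0]
                any_found = True
            else:
                layer[p] = None
        if not any_found:
            break
        layers.append(layer)
    return layers

# One pixel covered by three surfaces at different depths.
pix = {(0, 0): [(0.8, "red"), (0.2, "glass"), (0.5, "frame")]}
print([layer[(0, 0)] for layer in peel_layers(pix)])
# -> [(0.2, 'glass'), (0.5, 'frame'), (0.8, 'red')]
```

Compositing these layers front to back with per-layer alpha is what yields the order-independent transparency the abstract describes; running an edge detector on each layer exposes occluded as well as visible contours.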
Tenth International Conference on Information Visualisation (IV'06) | 2006
Marc Nienhaus; Florian Kirsch; Jürgen Döllner
Illustrating in a sketchy manner is essential to communicate visual ideas and can be used to present and reconsider drafts and concepts in product design. This paper introduces a real-time illustration technique that sketches the design and spatial assembly of CSG models. The illustration technique generates a graphical decomposition of the CSG model into disjunctive layers to extract 1) the perceptually important edges that outline the model's outer and inner features and 2) the surface shading of the outer and inner faces. Then, the technique applies uncertainty to these layers to simulate a sketchy effect. Finally, the technique composes the sketched layers in depth-sorted order while ensuring correct depth behavior in the frame buffer. Because the sketchy illustrations are frame-to-frame coherent, the technique can be used as a tool for the interactive presentation and reconsideration of the design and spatial assembly of CSG models.
International Conference on Computer Graphics and Interactive Techniques | 2003
Marc Nienhaus; Jürgen Döllner
Introduction

In Non-Photorealistic Rendering (NPR), sketchy drawings are essential to visually communicate and illustrate drafts and ideas, for instance, in architectural or product design. However, current hardware-accelerated, real-time rendering techniques do not concentrate on sketchy drawings of arbitrary 3D scene geometry. We present an image-space rendering technique that uses today’s texture mapping and fragment shading hardware to generate sketchy drawings of arbitrary 3D scene geometry in real time. We stress sketchiness in our drawings by simulating uncertainty. To simulate uncertainty, we have to adjust visibility information using depth sprites, which enable depth testing and 3D scene composition.

Sketchy Drawing

Our sketchy drawings primarily include 1) visually important edges and 2) simple surface-style rendering to convey scene objects. We consider silhouette and crease edges as visually important edges of 3D scene geometry. We obtain these edges by extracting discontinuities in the normal and depth buffer [Decaudin 1996]. The assembly of edges and their constituting intensity values forms a single texture TEdge (Figure a), as described in [Nienhaus and Döllner 2003]. We opt for unlit geometry as a simple surface-style representation of 3D scene geometry (Figure b). Therefore, we render designated geometry directly into the texture TSurface using a render-to-texture implementation. Sketchiness is controlled by the degree of uncertainty applied when rendering edges and surfaces. To simulate uncertainty, we create a screen-aligned quad that fits completely into the viewport of the canvas and texture that quad using the product of TEdge and TSurface. Furthermore, we apply an additional texture TNoise whose texture values have been determined by a noise function [Perlin 1985]. TNoise serves as an offset texture when accessing TEdge and TSurface, i.e., texture values of TNoise slightly perturb the texture coordinates that access TEdge and TSurface. To perturb the texture coordinates of TEdge and TSurface non-uniformly, we apply two different 2×2 matrices: one shifts the perturbed coordinates of TEdge and one shifts the perturbed coordinates of TSurface. Then, we merge the texture values of TEdge and TSurface, resulting in a sketchy drawing. Figures a’ and b’ show intermediate results after perturbing texture coordinates.

Adjusting Visibility Information

When rendering a screen-aligned quad that is textured with the texture of 3D scene geometry, the z-values that carry that geometry’s visibility information are lost. Furthermore, visibility information of 3D scene geometry is not available in its periphery once uncertainty has been applied. To control visibility we use depth sprites. Conceptually, depth sprites are common 2-dimensional sprites that provide an additional depth component at each pixel for depth testing. We implement depth sprites using fragment programs [Kilgard 2003]. Initially, we generate a high-precision depth texture TDepth derived from 3D scene geometry (Figure c). Then, we render the screen-aligned quad textured with TDepth. Thereby, we replace fragment z-values produced by the rasterizer with texture values read from TDepth using the fragment program. To adjust the visibility information of the preceding sketchy drawing, we additionally access TDepth twice while applying the same perturbations to its texture coordinates: the first perturbation adopts the offset used for accessing TEdge, and the second adopts the offset used for accessing TSurface. The minimum of both texture accesses produces the final fragment z-value (Figure c’). As a result, our sketchy drawings include perturbations of visually important edges and simple surface-style rendering, and they adjust the visibility information of the 3D scene geometry (Figure d).

Conclusions and Future Work

Our approach presents a first sketchy rendering technique that takes full advantage of the fragment-programming capabilities of graphics hardware and actually achieves real-time performance. In future work, we expect to mimic hand-drawn sketches more realistically by considering geometrical properties derived from 3D scene geometry to precisely control uncertainty offsets.
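The depth-sprite step, taking the minimum of two perturbed depth-texture lookups as the final fragment z-value, can be sketched with NumPy. This is a CPU stand-in for the fragment program: `depth_sprite_z` is an invented name, and a constant integer offset substitutes for the per-pixel noise offsets of TEdge and TSurface.

```python
import numpy as np

def depth_sprite_z(depth_tex, off_edge, off_surface):
    """Sample the depth texture with both perturbation offsets (the one
    used for the edge texture and the one used for the surface texture)
    and keep the nearer value, so sketchy strokes occlude correctly."""
    h, w = depth_tex.shape
    ys, xs = np.mgrid[0:h, 0:w]
    def sample(off):
        return depth_tex[np.clip(ys + off, 0, h - 1),
                         np.clip(xs + off, 0, w - 1)]
    return np.minimum(sample(off_edge), sample(off_surface))

depth = np.full((4, 4), 1.0)   # far plane everywhere
depth[1:3, 1:3] = 0.3          # a nearer object
z = depth_sprite_z(depth, off_edge=1, off_surface=0)
print(z.min(), z.max())
```

Keeping the minimum of the two samples ensures that wherever either the perturbed edge lookup or the perturbed surface lookup lands on the object, the composited sketch wins the depth test there, matching the visibility behavior described above.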
International Conference on Computer Graphics and Interactive Techniques | 2007
Marc Nienhaus; Bruce Gooch; Jürgen Döllner
Visual representations of traffic flow and density in 3D city models provide substantial decision support in urban planning. While a large repertoire of efficient techniques exists for visualizing the static components of such environments (e.g., digital terrain models, building models, and vegetation), less is known about illustrating their dynamic nature.
International Conference in Central Europe on Computer Graphics and Visualization | 2003
Marc Nienhaus; Jürgen Döllner
Graphics Interface | 2004
Marc Nienhaus; Jürgen Döllner