Interactive Focus+Context Rendering for Hexahedral Mesh Inspection
Christoph Neuhauser, Junpeng Wang, and Rüdiger Westermann
Abstract—The visual inspection of a hexahedral mesh with respect to element quality is difficult due to clutter and occlusions that are produced when rendering all element faces or their edges simultaneously. Current approaches overcome this problem by using a focus on specific elements that are then rendered opaque, and carving away all elements occluding their view. In this work, we make use of advanced GPU shader functionality to generate a focus+context rendering that highlights the elements in a selected region and simultaneously conveys the global mesh structure in the surrounding. To achieve this, we propose a gradual transition from edge-based focus rendering to volumetric context rendering, by combining fragment shader-based edge and face rendering with per-pixel fragment lists. A fragment shader smoothly transitions between wireframe and face-based rendering, including focus-dependent rendering style and depth-dependent edge thickness and halos, and per-pixel fragment lists are used to blend fragments in correct visibility order. To maintain the global mesh structure in the context regions, we propose a new method to construct a sheet-based level-of-detail hierarchy and smoothly blend it with volumetric information. The user guides the exploration process by moving a lens-like hotspot. Since all operations are performed on the GPU, interactive frame rates are achieved even for large meshes.
Index Terms—Visualization of Hex-Meshes, Real-Time Rendering, GPUs and Graphics Hardware.
1 INTRODUCTION
Hexahedral elements have widespread use in numerical simulation methods using finite elements and finite volumes. Therefore, hexahedral mesh generation (hex-meshing) has become a topic of intense research. On the other hand, for all but simple volumetric bodies it is impossible to construct a distortion-free hexahedral mesh, i.e., one where the elements are rectilinear cubes (cuboids), that accurately represents the boundary of the body or aligns with specific material features in the interior of the body. Thus, it is one of the grand challenges in hexahedral meshing to construct meshes with as few distortions as possible.

Many different hex-meshing techniques have been proposed over the last years, and for a thorough overview let us refer to [2], [3], [4]. The mesh quality is majorly determined by the scale of deformation of its elements, which can be deduced from the Jacobian matrix that is related to the transformation of a cuboid into the deformed cell. A thorough review of the Jacobian ratio metric and alternative deformation measures for assessing the quality of hex-mesh designs is provided by [5]. Interpreting these measures as scalar per-cell or per-vertex attributes yields a volumetric saliency map that indicates important mesh regions. Depending on the used meshing technique and deformation measure, significantly different saliency maps are obtained and need to be inspected to assess the resulting mesh quality. A visual inspection is difficult, however, since high deformations often occur in the interior of the volumetric body, for instance, along notches or at degenerate points when aligning elements along major stress directions. Yet it is especially such structures which are important, since they reveal the specific differences between different meshing techniques and hint at problematic regions in the body.
• All authors are with the Computer Graphics & Visualization Group, Technische Universität München, Garching, Germany. E-mail: {christoph.neuhauser, junpeng.wang, westermann}@tum.de.

In principle, direct volume rendering techniques for unstructured volumetric meshes can be used to render the 3D saliency map (Figure 2 (left)). By using color and opacity, this can effectively locate regions containing highly deformed cells, yet the edge structure is entirely lost. Drawing all edges, on the other hand, results in extreme clutter and occlusions (Figure 1 (middle)), and prohibits an intuitive understanding of the underlying mesh structure. Thus, common visualization tools for hexahedral meshes often restrict the analysis to the boundary faces of the 3D mesh (Figure 2 (middle)), and provide options to clip subsets of elements so that interior faces appear (Figure 2 (right)).

The most recent approach by Bracci et al. [6] improves on this by providing additional means to peel away layers of elements from outside to inside, filter cells below a certain distortion value, and reveal hidden irregular edges via transparency rendering. This enables an interactive user-guided visual exploration of hex-meshes. On the other hand, since there is no guidance at first hand to the important regions, the user needs to either actively search through the mesh or filter out a large number of cells to reveal those with high distortion. Furthermore, peeling and filtering can make it difficult to maintain the global mesh structure and spatial relationships between mesh regions with different properties. The latter problem has been addressed by Xu and Chen [7] via the computation of a global topological mesh structure, which is decomposed into a set of contiguous sub-structures to support a part-based topological complexity analysis.
In this work, we extend current visualization techniques for hex-meshes by a combination of face-based volume rendering with fragment-based edge rendering. Our goal is to effectively communicate the mesh structure by embedding a carefully designed detail view into a surrounding context that conveys important positional cues.

Fig. 1. From left to right: Contextual visualization using face-based volume rendering, edge visualization using fragment-based rendering, our proposed focus+context (F+C) rendering using smooth blending between edge and volumetric rendering including contextual lines. Model courtesy of [1].

In the global context view, the saliency map is rendered as a semi-transparent volumetric field in combination with few contextual edges, so that important regions and the spatial mesh structure can quickly be recognized. The detailed focus view is selected via a user-defined screen-space lens with depth focus, in which edge-based rendering is used.

To obtain a smooth transition from edge-based focus rendering to volumetric context rendering, we introduce a GPU renderer for hex-meshes that solely renders cell faces and performs all operations that change the mesh appearance in a fragment shader. The shader smoothly blends between different rendering options depending on where a fragment is located w.r.t. the focus region and whether a fragment is an edge or interior face fragment. Furthermore, we incorporate an edge-based level-of-detail (LoD) structure into the renderer, to adapt the density of rendered mesh edges depending on cell distortion and distance to the focus center. The coarser LoDs further serve as shape cues in the context region. Since all rendering options are performed solely on the fragment level, a smooth blending from sharp details to a more fuzzy appearance with embedded characteristic edges as shape cues can be performed efficiently.
Our proposed visualization technique builds upon the following specific contributions:

• A single-pass GPU renderer with fragment shader-based edge and volume rendering including transparency.
• A LoD line hierarchy that is extracted from a hex-mesh using a topological subdivision scheme based on mesh sheet elements.
• A rendering technique that smoothly blends between focus and context, by continuously adapting edge density as well as edge and face appearance.

We demonstrate our approach for a number of hex-meshes, including a detailed quality and performance analysis that justifies its feasibility even for meshes comprised of millions of elements. The code is published at https://github.com/chrismile/HexVolumeRenderer.
2 RELATED WORK
2.1 Hexahedral Mesh Visualization
Visualization techniques for hexahedral grids can be divided into surface-based and direct volume rendering techniques. Surface-based techniques render hexahedral elements as opaque cuboids, including wireframe rendering and face coloring to emphasize certain element properties. For a thorough overview of the different rendering styles that are used in modelling applications, let us refer to the recent work by [6]. They also introduced novel line-based visualization options
Fig. 2. Conventional hex-mesh visualizations. J is the Jacobian ratio of each cell. From left to right: Direct volume rendering, the boundary surface, filtering cells with low deformation. Model courtesy of [3].

to maintain the overall model structure and emphasize singular edges in a hex-mesh. For computing the deformation of cells, our implementation uses the code from [6], which implements various measures for cell deformation supported by the Verdict library [8]. A summary and discussion of different quality metrics for hex-meshes is given by [5]. The code from [2] is used for loading and processing hexahedral meshes.

Recently, [7] proposed to visualize the mesh structure of hexahedral meshes by using a subset of the most important base-complex sheets and dual chords, and show their interrelation using adjacency matrices. We take inspiration from their approach utilizing base-complex mesh sheets to reduce the structural complexity of a mesh (cf. Section 5). Our approach uses hexahedral sheets [9], [10] instead of base-complex sheets, and merges sheets for creating a LoD structure instead of directly visualizing a subset of them. The use of hexahedral sheets for hex-mesh construction, simplification and reparameterization is thoroughly discussed in the work by [11].

Direct volume rendering of hexahedral meshes has a long tradition in volume visualization, and many of the concepts that are used by more recent works are discussed in the survey by [12]. Our GPU-based approach shares similarities with cell projection techniques w.r.t. how the cells are rendered and their visibility order is established. Cell projection techniques exploit the GPU to efficiently render triangles and perform linear interpolation of per-vertex attributes for each rendered fragment. Cuboids are first decomposed into tetrahedra, and then rasterized and blended using the GPU [13], [14], [15], [16].
[14] utilized the GPU for visibility sorting of rendered fragments, which is conceptually similar to the approach we employ for visibility sorting using per-pixel fragment lists [17], a GPU realization of the A-buffer [18] to store the unordered set of fragments falling into each pixel. These fragments are then sorted explicitly based on the stored depth information. Recently, SparseLeap [19] has been introduced as a pyramidal occupancy map to generate geometric structures representing non-empty regions, which makes use of per-pixel fragment lists to determine occupied space and accelerate volume ray-casting.
2.2 Focus and Context
Focus+context (F+C) visualization techniques aim at smoothly combining different aspects of the data into one single visual representation. While the contextual visualization provides cues to understand the overall shape and spatial arrangement of the model or scene, the focus emphasizes specific aspects of the data. In F+C data visualization, especially lens-based approaches have a long tradition [20]. Distortion lenses have been used in volume rendering applications to magnify structures in focus [21], [22], [23]. [24] discuss how to obtain different sparsity levels depending on the importance of structures, and propose importance-based rendering styles. [25] use an object-space lens in combination with a fish-eye view to distort structures and push them out of focus when occluding features of interest. [26] combine renderings of an exterior and interior isosurface using a screen-space lens. [27] demonstrate the application of a screen-space lens for F+C stress visualization, by letting the thickness and number of stress lines be controlled by the lens. We make use of a circular screen-space lens to let the user select a cylindrical focus region in object space and smoothly blend into a volumetric context view with increasing distance to the lens center.

Related to our proposed edge-based F+C rendering is the use of transparency and adaptive primitive density for streamline rendering. When too many lines are shown simultaneously, occlusions and visual clutter are quickly introduced. While we address this by smoothly blending into a volumetric context and using few representative edge sequences from coarser LoDs, others have proposed importance- and similarity-based criteria in screen space to select the rendered lines dynamically on a frame-to-frame basis. Screen-space approaches determine for each new view the subset of lines to be rendered so that occlusions are reduced and more important lines are favored over less important ones [28], [29], [30].
The amount of occlusion is determined by the overdraw, i.e., the number of projected line points [28], [30] or the maximum projected entropy [29] per pixel. [31] decrease the opacity of non-important foreground lines using per-frame opacity optimization. A summary and evaluation of different GPU transparency rendering techniques for large line sets is given by [32]. [33] build a line hierarchy to continuously decrease the density of less important lines.
3 METHOD OVERVIEW
Our method renders each hex-face as a quadrilateral formed by two triangles. A fragment shader determines the appearance of each fragment depending on whether it lies in the focus or context region, and further depending on whether it is an edge fragment, i.e., lying closer to a face edge than a given edge thickness, or a face fragment, i.e., lying too far away from any of the face edges. This classifies each fragment into 4 different types that determine how it is shaded (see Figure 3). While in the context region a more volumetric appearance with subtle edge accentuation is used, in the focus region only the edges are clearly emphasized. Edge and face colors and opacities are made dependent on the importance measure, i.e., the strength of cell deformation, so that important cells are emphasized also in the context region. Since the importance measure is cell-based, every vertex gets assigned the maximum importance value of all cells sharing this vertex. The triangle rasterizer then brings the interpolated importance values to the fragments. In addition, the maximum importance value of all cells sharing an edge is made available in the fragment shader for that edge.
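The per-vertex importance assignment described above can be sketched in a few lines. This is a minimal illustration, assuming cells are given as lists of vertex indices with one importance scalar per cell; the data layout and the function name are our assumptions, not the paper's code.

```python
def vertex_importance(cells, cell_importance, num_vertices):
    """Assign each vertex the maximum importance of all incident cells.

    cells: list of vertex-index lists (8 indices per hexahedron).
    cell_importance: one scalar per cell (e.g., a deformation measure).
    Layout and names are illustrative, not from the paper's renderer.
    """
    importance = [0.0] * num_vertices
    for cell, imp in zip(cells, cell_importance):
        for v in cell:
            # A vertex shared by several cells keeps the maximum value,
            # so highly deformed cells dominate the interpolated field.
            importance[v] = max(importance[v], imp)
    return importance
```

The rasterizer would then interpolate these per-vertex values barycentrically to the fragments.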
Fig. 3. Left: Classification of fragments depending on whether they are in focus or context, and whether they are close to a face edge or not. Right: Depending on the classification, the fragments take on different appearances. For each fragment type, the image shows how the rendering looks if only fragments of this type are rendered.
The renderer changes the appearance smoothly from edge-based to volumetric with increasing distance to the focus center in screen space, as described in Section 4. Therefore, the opacity and width of edges in focus is smoothly decreased towards the focus border, and the color is blended towards the face colors used for rendering the context region. For each pixel, all fragments falling into that pixel are stored in a per-pixel fragment list on the GPU, and they are sorted w.r.t. increasing distance to the camera. This enables opacity-based blending, i.e., α-blending, of fragments in the correct visibility order. For sorting, we use a GPU-friendly implementation of priority queues [32].

The described rendering approach has two limitations: Firstly, in the focus region there can be many non-important edges that occlude important ones. Secondly, in the context region the basic mesh structure gets lost due to the increasingly volumetric appearance. To address these limitations, we construct a LoD line structure (Section 5), in which mesh edges are continually removed at coarser hierarchy levels. Figure 4 illustrates how the LoD structure is used, by assigning to every edge the maximum level at which this edge is still present in the LoD structure. In the focus region, instead of removing edges with an importance value below a selected threshold, these edges are rendered if they are also present at some coarse LoD. We call these edges contextual edges. In the context region, only contextual edges are rendered to provide an overview of the shape of the hex-mesh.

Fig. 4. Left: Edges are continually thinned out from level to level in the LoD hierarchy. Single edges get assigned the level at which they are last contained. Right: The LoD edge structure for a given hex-mesh. Greyscales from bright to dark encode LoD levels from fine to coarse. Model fandisk courtesy of [34].
4 FOCUS+CONTEXT
In the following, we describe how focus and context rendering is performed, and in particular how a smooth transition between both is achieved. A detailed discussion of the reference GPU implementation is given in Section 6. The user defines the focus region by positioning a circular lens with a center and controllable radius in screen space. The focus is 1 at the lens center and goes smoothly down to 0 towards its boundary.

Regardless of whether a fragment is finally shaded to appear as part of an edge or a face, hex-faces are rasterized with two triangles, and per-vertex attributes like the cell importance are barycentrically interpolated. For every fragment, a fragment shader determines whether it should appear as an edge or a face. This is performed by first computing a fragment's screen-space coordinate and its distance dist to the focus center (normalized to range from 0 at the focus center to 1 at the focus boundary), and evaluating the focus as 1 − smoothstep(t₀, t₁, dist) for two threshold constants t₀ < t₁.

Then, a fragment's edge opacity α_e, which determines whether the fragment belongs to a face (α_e = 0) or an edge (α_e > 0), is computed as

  w = (1 + c · focus) · w_base
  e_focus = (d_edge ≤ w ∧ (e_level ≥ lod ∨ e_attr ≥ δ)) ? 1 : 0
  e_context = (d_edge ≤ w ∧ e_level ≥ lod) ? 1 : 0
  α_e = lerp(e_context, e_focus, focus)     (1)

Here, c is a constant widening factor, w_base is a minimum edge width, d_edge is the fragment's shortest distance to any of the face edges, e_level is the LoD level of the edge (Figure 4), and e_attr is the edge importance. First, the edge width is decreased with increasing distance to the focus center. Then, via e_context and e_focus, respectively, it is determined whether the fragment belongs to an edge that should be rendered when lying in the focus or context region. In focus, an important edge is always rendered, i.e., one whose e_attr is greater than a selected importance threshold δ. An unimportant edge is rendered only if it is present at a user-selected coarse LoD level lod. In context, every edge with e_level ≥ lod is rendered. The final distance-based linear interpolation between e_context and e_focus transitions smoothly from focus to contextual edges.

The shader renders the focus edges with a thin white depth-dependent halo [35]. The halo gets thinner with increasing distance to the focus, and the edge colors are slightly darkened to make the edges stand out against the halo (Figure 5). Focus edges blend into contextual edges, which are rendered as simple lines without a halo and colored according to the deformation measure. To maintain certain contextual edges as spatial cues in the focus and context region, the user can interactively select the value of lod (Figure 6).

Fig. 5. Focus edges are smoothly faded out with increasing distance to the focus center. Left: Edges colored by LoD level. Right: Edges colored by interpolated per-vertex deformation measure. Model fandisk courtesy of [34].

Fig. 6. Decreasing lod from 11 (left) to 8 (right) increases the density of contextual edges. No focus selected. Model armadillo courtesy of [36].
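Equation 1 can be sketched in a few lines of straight-line code. The helpers smoothstep and lerp mirror the GLSL built-ins smoothstep and mix; the constants c, t0 and t1 are placeholders, since the exact values used by the renderer are not reproduced here.

```python
def smoothstep(t0, t1, x):
    """Cubic Hermite step between t0 and t1, as in GLSL smoothstep."""
    t = min(max((x - t0) / (t1 - t0), 0.0), 1.0)
    return t * t * (3.0 - 2.0 * t)

def lerp(a, b, t):
    """Linear interpolation, as in GLSL mix."""
    return a + (b - a) * t

def edge_opacity(dist, d_edge, e_level, e_attr, lod, delta, w_base,
                 c=1.0, t0=0.5, t1=1.0):
    """Sketch of Equation 1 for one fragment.

    dist: normalized screen-space distance to the focus center.
    c, t0, t1 are assumed placeholder constants, not the paper's values.
    """
    focus = 1.0 - smoothstep(t0, t1, dist)
    w = (1.0 + c * focus) * w_base  # edges widen towards the focus center
    e_focus = 1.0 if d_edge <= w and (e_level >= lod or e_attr >= delta) else 0.0
    e_context = 1.0 if d_edge <= w and e_level >= lod else 0.0
    return lerp(e_context, e_focus, focus)
```

An important edge (e_attr ≥ δ) near the lens center thus receives full edge opacity, while the same edge far from the lens is dropped unless it also survives at the selected LoD level.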
Furthermore, fragments in the context region that are in close vicinity to an edge, but are not visible at the selected LoD level, are slightly accentuated. If such a fragment does not belong to an edge according to Equation 1, s_e determines how strongly it is emphasized:

  s_e = (d_edge ≤ k · w_base) ? s : 1     (2)

for a constant distance factor k. s_e is used to enhance the face opacity α_f (see Equation 4). Since α_f depends on the distance to the focus center (Equation 3), accentuated edges fade out accordingly. Figure 7 demonstrates varying accentuation of contextual edges by variation of the accentuation strength s.

Fig. 7. Weakly and strongly accentuated edges (small vs. large accentuation strength s) according to Equation 2. No focus selected. Model bunny courtesy of [1].

Both parameters α_e and s_e are used to assign the fragment opacity that emphasizes certain edges and smoothly blends between focus edges and contextual edges with increasing distance to the focus center. The edge colors are set via a color table that maps the edge importance values to colors C_e (see Subsection 4.2). If a fragment is not classified as part of an edge, it is rendered as part of a face to generate a volumetric appearance that hints at important mesh regions. In principle, once the face fragments are rendered and sorted in a fragment list, direct volume rendering using α-compositing of cell contributions can be used (Figure 8 (left)). This gives a continuous volumetric appearance, as if the object is filled with a scalar-valued quantity, yet the mesh structure is mostly lost. To also accentuate the mesh structure in the context region, we refrain from using direct volume rendering. Instead, the faces are blended in correct visibility order, yet the optical depth through the cells is neglected and face colors are blended using opacities that continually increase with increasing distance to the focus.
The face opacity α_f is computed by modulating a user-selected face opacity α̂_f with the distance to the focus center,

  α_f = α̂_f · dist.     (3)

The edge accentuation factor s_e then enters the blending in Equation 4. Blending a discrete set of faces generates accentuated jumps in the final colors whenever there is a change in the number of faces falling into adjacent fragments (Figure 8 (right)). Increasing opacity artificially increases these jumps in the context region and makes them more noticeable. The face colors C_f are generated by interpolation of per-vertex importance values by the rasterizer.

Fig. 8. Left: Volume rendering using cell contributions. Right: Face-based volume rendering. The Jacobian ratio (from low to high) is mapped linearly to color (from blue to red) and opacity. Model bunny courtesy of [1].

We decided to use a dark background, because the rendering, combined with bold saturated colors, tends to stand out. A white background shines through and affects the line colors. However, our visualization tool also allows switching to a white background if desired (cf. Figure 21).

Each fragment obtains an edge and a face color (C_e, C_f), and in addition computes the values α_e, s_e and α_f according to Equations 1, 2 and 3. The fragment shader blends the edge colors (focus and contextual edges) and face colors (face colors and accentuated lines) according to

  C = α_e C_e + (1 − α_e) s_e α_f C_f
  α = α_e + (1 − α_e) s_e α_f.     (4)

Thus, focus and context information is blended as shown in Figure 9. Via front-to-back α-compositing, all fragments falling into a pixel are finally merged.

Fig. 9. Blend factors for focus and context.
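The per-fragment blend of Equation 4 and the subsequent front-to-back α-compositing can be illustrated as follows. Colors are RGB tuples and, following the structure of Equation 4, the blended fragment color is premultiplied by its opacity; this is a CPU sketch, not the actual shader code.

```python
def fragment_color(C_e, C_f, alpha_e, s_e, alpha_f):
    """Equation 4: blend the edge and face contributions of one fragment.

    Returns a premultiplied RGB tuple and the fragment opacity.
    """
    a = alpha_e + (1.0 - alpha_e) * s_e * alpha_f
    C = tuple(alpha_e * ce + (1.0 - alpha_e) * s_e * alpha_f * cf
              for ce, cf in zip(C_e, C_f))
    return C, a

def composite_front_to_back(fragments):
    """Merge fragments of one pixel in visibility order.

    fragments: iterable of (depth, premultiplied RGB, alpha).
    """
    out = [0.0, 0.0, 0.0]
    trans = 1.0  # accumulated transmittance
    for _, C, a in sorted(fragments, key=lambda f: f[0]):  # near to far
        for i in range(3):
            out[i] += trans * C[i]
        trans *= 1.0 - a
    return tuple(out), 1.0 - trans
```

A fully opaque edge fragment (α_e = 1) passes its edge color through unchanged, while a pure face fragment contributes s_e · α_f of its face color, matching the blend factors of Figure 9.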
Figure 10 shows the final F+C look. An accentuated edge in the context takes on the color of the face, brightened a little, and its opacity is increased by about 50%. In addition, white exterior and interior screen-space silhouettes are added to improve the perception of the mesh shape [37], [38], [39]. Therefore, the mesh boundary surface is rendered, and fragments along sharp edges in the depth buffer are emphasized.
5 LEVEL OF DETAIL STRUCTURE
In the following, we describe the construction of the LoD edge structure for a given hex-mesh using topological simplification. Our approach builds upon the concept of hexahedral sheets. Hexahedral sheets were introduced by Borden et al. [10], and further formalized by [9] as a set of hex-elements which are connected to each other via their topologically parallel shared edges. In Figure 11, we reproduce images from Woodbury et al. to illustrate the relationship between these two topology-based groups. In a number of works, the concept of hexahedral sheets has been utilized for hex-mesh construction and simplification [41], as well as re-meshing [11]. We make use in particular of sheet-based topology simplification, by successively collapsing pairs of neighboring sheets.

Fig. 10. A final mesh rendering showing a smooth transition from the focus edges to the context edges and volumetric representation. Model grayloc courtesy of [40].
Fig. 11. A hexahedral sheet (left), and the three sets of topologically parallel edges (in red) of a hexahedral element [9]. Model courtesy of [42].
We use the approach proposed by [9] to extract each single sheet: Upon selecting the start edge, all elements incident to the edge are found and added to the sheet (if not contained already). For each of the newly added elements, the three edges topologically parallel to the original edge are determined, and the set of edges to process is updated with the newly found edges. This process is repeated until no new element is found. During the extraction of a single sheet, all visited element edges are recorded. Then, an unvisited edge is selected for computing a new sheet until no such edge is left. In this way, the set of sheets covering the entire hex-mesh is extracted. Finally, we define for each sheet a sheet component consisting of all elements belonging to this sheet.
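The sheet-extraction procedure described above is essentially a flood fill over topologically parallel edges. The following sketch assumes two mesh-adjacency helpers, cells_incident_to(edge) and parallel_edges(cell, edge); both helpers and all names are illustrative assumptions, not the paper's implementation.

```python
def extract_sheet(start_edge, cells_incident_to, parallel_edges):
    """Flood-fill one hexahedral sheet from a seed edge, per [9].

    cells_incident_to(edge): cells incident to an edge.
    parallel_edges(cell, edge): the edges of `cell` topologically
    parallel to `edge` (three in a hexahedron; fewer in this sketch's
    toy inputs). Returns the sheet's cells and all visited edges.
    """
    sheet_cells = set()
    visited_edges = {start_edge}
    frontier = [start_edge]
    while frontier:
        edge = frontier.pop()
        for cell in cells_incident_to(edge):
            if cell in sheet_cells:
                continue  # element already added to the sheet
            sheet_cells.add(cell)
            for e in parallel_edges(cell, edge):
                if e not in visited_edges:
                    visited_edges.add(e)
                    frontier.append(e)
    return sheet_cells, visited_edges
```

Running this repeatedly from any still-unvisited edge yields the set of sheets covering the entire mesh, as described above.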
In an iterative process, pairs of sheet components are merged into a joint component until no components can be merged anymore. Therefore, for all pairs of sheet components, their neighborhood relation is classified analogously to the work by [7] as

• adjacent (or tangent),
• intersecting,
• hybrid (i.e., tangent and intersecting),
• none.

Figure 12 illustrates the different constellations. In our design, sheet components are neighbors only if they share at least one boundary face that is no longer on the boundary after merging.

Fig. 12. From left to right, the different topological relations (adjacent, hybrid, intersecting) of neighboring hexahedral sheets. Similarity to the constellations by [7] is intentional. Model courtesy of [42].
In addition to the neighborhood relation, a weight is computed for each pair of neighboring components. The weights are used in an iterative merging process to determine the priority of merging for each neighboring component pair. Building upon [7], where the weights consider the percentage of merged boundary faces to the overall number of boundary faces in the two components, the weights are computed as

  w_{i,j} = |∂C_i ∩ ∂C_j| / (|∂C_i| + |∂C_j|) · 1 / (|C_i| + |C_j|)     (5)

Here, |∂C_i ∩ ∂C_j| is the number of boundary element faces shared by the pair of neighboring components C_i and C_j, and |C_i| + |C_j| is the number of cells C_i and C_j contain. Different to [7], the weights consider the topological size (i.e., the number of cells) for merging to reduce the potential 'jumps' in the LoD structure, i.e., neighboring pairs with smaller topological sizes are favoured at a similar ratio between boundary faces. Even though we favour a purely topological measure in this work, alternatively one could also opt to use the face areas and cell volumes.

The adjacency information is stored in a priority queue, with the weights serving as the priority measure. Pairs of components with highest priority are merged first, yet adjacent sheets always have a higher priority than hybrid sheets, and hybrid sheets always have a higher priority than intersecting sheets. During merging, the two matching components are removed from the component queue, and a new component is inserted. The edges on the shared boundary faces of these components are identified and marked as invisible on this level (Figure 13). Then the side element faces of the new component are recomputed, and the adjacency information as well as the priority of neighboring components is updated in the component queue. A next coarser LoD level is established as soon as the number of cells of the merged component is more than twice as large as the number of cells of the (merged) components with which the last LoD level started.
The merging process is repeated until only one single component is left.

Fig. 13. Sheet neighborhoods in a 2D quad mesh. Bold orange lines become invisible after merging. From left to right: Adjacent sheets, hybrid sheets, intersecting sheets. No edges become invisible when sheets intersect.

Fig. 14. From left to right: LoD levels 0, 2, 3 and 4 of a hex-mesh. Model fandisk courtesy of [34].
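The weight of Equation 5 and the relation-before-weight priority ordering can be sketched with Python's heapq. Since heapq is a min-heap, the relation rank comes first in the key and the weight is negated; the string encoding of the relations is our assumption for illustration.

```python
import heapq

# Merge order: adjacent before hybrid before intersecting (lower rank first).
REL_RANK = {"adjacent": 0, "hybrid": 1, "intersecting": 2}

def merge_weight(shared_bfaces, n_bfaces_i, n_bfaces_j, n_cells_i, n_cells_j):
    """Equation 5: share of merged boundary faces, scaled down by the
    total number of cells so that smaller pairs are merged first."""
    return (shared_bfaces / (n_bfaces_i + n_bfaces_j)) / (n_cells_i + n_cells_j)

def push_pair(queue, relation, weight, pair):
    """Enqueue a candidate merge; heapq pops the lowest tuple first,
    i.e., the best relation rank, then the highest weight."""
    heapq.heappush(queue, (REL_RANK[relation], -weight, pair))
```

Popping from the queue then always yields an adjacent pair while one exists, falling back to hybrid and finally intersecting pairs, with ties broken by the Equation 5 weight.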
An exception to the rules is made for so-called singular edges. Singular or irregular edges are those edges which do not have exactly 2 (on the boundary) or 4 (in the interior) incident cells [41]. These edges form curves which separate the hex-mesh into its regular parts, and they serve as important visual cues regarding the global mesh topology. In particular, valence-1 edges are never set to be invisible, and singular edges of all other valences are only invisible at the coarsest LoD level.

Figure 14 and Figure 15 show the extracted LoD structures of two hex-meshes. The former shows the model from Figure 4, yet now the edges at different LoD levels, i.e., with e_level equal to 0, 2, 3, and 4, are shown separately to better demonstrate the sequence of merging steps. The same representation is used for the latter example, yet the edges with e_level equal to 0, 3, 5 and 6 are shown. In both cases, the greyscale encoding of LoD levels as in Figure 4 is used.
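The singular-edge rule stated above (regular boundary edges have exactly 2 incident cells, regular interior edges exactly 4, and valence-1 edges are never hidden) translates directly into code. The function names are ours, for illustration only.

```python
def is_singular(num_incident_cells, on_boundary):
    """An edge is regular with exactly 2 (boundary) or 4 (interior)
    incident cells [41]; every other edge is singular/irregular."""
    regular = 2 if on_boundary else 4
    return num_incident_cells != regular

def never_hidden(num_incident_cells):
    # Valence-1 edges are never set invisible in the LoD hierarchy;
    # other singular edges disappear only at the coarsest level.
    return num_incident_cells == 1
```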
6 IMPLEMENTATION

Our reference implementation uses the functionality provided by OpenGL 4.5. All data required for rendering is kept on the GPU, so that no CPU-GPU communication is required during rendering and user interaction. Since the fragment shader always performs all computations described in Section 4, the user can arbitrarily change the size of the focus lens without affecting performance. All constant parameters in Equations 1, 2 and 3 are issued via constant shader parameters that can be changed interactively by the user.

In order to make the single-pass face-based rendering of both faces and edges possible, we use programmable vertex pulling [43]. We use a variant called programmable attribute fetching, where a fixed-function element array is used for indexed primitive rendering, but all vertex attributes are loaded manually from a dedicated buffer. For each cell face, we create two triangles with shared vertices only between these two triangles. Then, by using the vertex ID, the fragment shader computes which face a vertex belongs to, and loads the correct face data. A geometry representation where all vertices are shared between faces is not possible, as vertices need to pass different data to the fragment shader depending on the current face. Thus, the renderer cannot utilize the post-transform
Fig. 15. From top to bottom: LoD levels 0, 3, 5 and 6 of a hex-mesh. Model eight courtesy of [44].

cache of indexed vertices between faces, letting the pure geometry throughput fall slightly below the GPU limit.

The fragment shader uses the vertex positions of all four face corner points to compute the shortest distance to any of the face edges. When rendering edges with per-edge constant color, star-shaped patterns occur at edge intersections (Figure 16a). Since smoothly interpolated per-vertex colors are rendered, these patterns are hardly visible (Figure 16b). Only when two edges intersect and one is not rendered (Figure 16c), the pattern is clearly visible. This is avoided by letting the shader ignore edges in the distance calculation which are not visible (Figure 16d).
Fig. 16. Edge rendering in 2D. (a) Four edges meet in one vertex andform an arrow-like shape. (b) We linearly interpolated colors in order tomake the arrow-like shapes disappear. (c) Making edges invisible createsholes. (d) An approach for closing these holes is ignoring lines with a lowopacity in the calculation of the closest edge.
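The buffer layout behind the described programmable attribute fetching can be mirrored on the CPU for clarity: four duplicated corner vertices and six indices per quad face, so that a shader could recover the face from the vertex ID by integer division. This is a sketch under that layout assumption; the paper's actual buffers may be organized differently.

```python
def build_face_buffers(faces):
    """Build vertex and index buffers with vertices shared only within
    each face (two triangles per quad face).

    faces: list of 4-tuples of mesh vertex ids per cell face.
    With this layout, face_id = vertex_id // 4 inside the shader,
    which is how per-face data could be fetched from a dedicated buffer.
    """
    vertices, indices = [], []
    for face_id, corners in enumerate(faces):
        base = 4 * face_id
        vertices.extend(corners)  # corners duplicated per face
        # Two triangles forming the quad: (0,1,2) and (0,2,3).
        indices.extend([base, base + 1, base + 2,
                        base, base + 2, base + 3])
    return vertices, indices
```

Because no vertex is shared between faces, the post-transform cache cannot be reused across faces, which matches the throughput remark above.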
To keep track of the fragments falling into the same pixel, we employ GPU per-pixel linked lists [17]. All generated fragments are stored in a single linked-list buffer shared by all pixels, and a fragment shader sorts these fragments with respect to their screen-space depth. Here it is assumed that the GPU buffers used for storing the fragments, along with a reference to the next neighbor in the global fragment list, are large enough. We demonstrate in Section 7 that this is the case even for hex-meshes with a few million elements. For scenes with high depth complexity, however, the number of fragments is so large that sorting can become a performance bottleneck. For instance, for the largest hex-mesh used in our experiments, about 340 million fragments are generated per frame. Therefore, we use a GPU version of priority queues with a binary tree as search structure [32], which reduces the time required for sorting to slightly more than half of the overall frame time.
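A minimal CPU-side model of this pipeline, assuming a linked-list node stores a depth value, a color, and the index of the next node, in the spirit of [17]; all names are illustrative:

```python
# Per-pixel linked lists: fragments from all pixels go into one flat
# buffer; each pixel stores only a head index. A resolve pass gathers a
# pixel's fragments, sorts them near-to-far, and blends front-to-back.

class FragmentLists:
    def __init__(self, num_pixels):
        self.head = [-1] * num_pixels      # per-pixel head pointer
        self.frags = []                    # (depth, rgba, next) per node

    def insert(self, pixel, depth, rgba):
        # Equivalent of an atomic exchange on the head pointer on the GPU.
        self.frags.append((depth, rgba, self.head[pixel]))
        self.head[pixel] = len(self.frags) - 1

    def resolve(self, pixel):
        # Walk the list, sort by screen-space depth, blend front-to-back.
        node, collected = self.head[pixel], []
        while node != -1:
            depth, rgba, node = self.frags[node]
            collected.append((depth, rgba))
        collected.sort(key=lambda f: f[0])
        color, alpha = [0.0, 0.0, 0.0], 0.0
        for _, (r, g, b, a) in collected:
            w = (1.0 - alpha) * a          # remaining transmittance
            color = [c + w * s for c, s in zip(color, (r, g, b))]
            alpha += w
        return color, alpha
```

The sketch uses a plain sort for clarity; the paper replaces it with a GPU priority-queue variant [32] when the per-pixel lists get long.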
7 RESULTS AND ANALYSIS
All our results were rendered on an NVIDIA RTX 2070 SUPER GPU with 8 GB of on-chip memory. Only the construction of the LoD hierarchy was performed on the CPU, i.e., on a workstation running Ubuntu 20.04 with an AMD Ryzen 9 3900X CPU @ 3.80 GHz and 32 GB RAM. We have used different viewport sizes to demonstrate the scalability of the rendering approach in the number of pixels, and in particular to show that even for large meshes and viewports the memory required by the per-pixel fragment lists does not exceed the GPU memory. All timings are averages over 128 frames with different camera views in which the data sets cover almost all of the screen. The accompanying video shows one of the camera paths we have used to record the performance data.

Table 1 lists the number of hex-elements of the test data sets, the GPU memory that is required to store these data sets on the GPU, and the time required to build the LoD structure for each data set. We have in particular included the data sets "example3" and "cubic128" (Figure 20) to demonstrate that even large data sets with millions of cells can be stored entirely on the GPU and processed in a short time.
TABLE 1. Data set statistics. Model fandisk courtesy of [34], eight courtesy of [44], dragon, armadillo and dancingchildren courtesy of [36], grayloc courtesy of [40], anc101_a1 courtesy of [1], cognit courtesy of [3], example3 courtesy of [45]; model cubic128 is a twisted Cartesian grid of size 128³.

Table 2 provides performance statistics, distinguishing between the fragment shader used to determine the focus+context (F+C) look and the shader that sorts and blends the fragments in the per-pixel fragment list. In addition, the memory requirements of the per-pixel fragment lists are given. Even for the largest data set, interactive frame rates can be achieved, and only at the largest viewport size does the frame rate drop slightly below full interactivity. In all experiments, the fragment shader consumes the vast majority of the total frame time. The time for resolving the per-pixel fragment lists is between 43% and 72% of the total rendering time, and it depends on the depth complexity of the data set, i.e., the number of cells falling into the single pixels. It can be seen that going beyond a few million elements can exceed the available GPU memory. This problem can be addressed by subdividing the screen into parts and rendering each part separately. Since this approach requires processing each cell multiple times in the geometry processing stage and the rasterizer, but does not increase the number of fragment shader operations, only a marginal overhead is to be expected.

In the following, we show results of interactive visual inspections of some of the test data sets using the proposed F+C renderer. In all examples, the per-cell scaled Jacobian ratio is mapped to color (from blue to red) and opacity (from 0 to 1). Figure 18 and Figure 19 show the use of F+C rendering to obtain an overview of the spatial locations of regions with highly deformed cells, and to select a particular focus region for a more detailed analysis.
Data Set    Viewport    FPS   F+C      PPFL      Mem. PPFL
grayloc     1280x720    154   2.2 ms   4.3 ms    0.21 GiB
            1920x1080    85   3.9 ms   7.8 ms    0.47 GiB
            2560x1440    52   6.5 ms   12.7 ms   0.84 GiB
anc101_a1   1280x720     96   4.1 ms   6.3 ms    0.37 GiB
            1920x1080    51   6.7 ms   12.9 ms   0.83 GiB
            2560x1440    32   11.0 ms  20.3 ms   1.48 GiB
cognit      1280x720    127   3.2 ms   4.6 ms    0.20 GiB
            1920x1080    74   4.6 ms   8.8 ms    0.45 GiB
            2560x1440    49   6.6 ms   13.8 ms   0.80 GiB
example3    1280x720     44   13.0 ms  9.8 ms    0.64 GiB
            1920x1080    26   17.8 ms  20.6 ms   1.45 GiB
            2560x1440    17   23.6 ms  33.6 ms   2.57 GiB
cubic128    1280x720     12   37.3 ms  46.9 ms   1.19 GiB
            1920x1080     7   45.8 ms  101.5 ms  2.68 GiB
            2560x1440     -   -        -         4.8 GiB*
TABLE 2. Performance statistics for selected data sets at different viewport sizes: frames per second (FPS), times required by the F+C fragment shader (F+C) and by the shader that sorts and blends the fragments in the per-pixel fragment lists (PPFL), and the memory consumed by the fragment lists (Mem. PPFL). *Buffer sizes are capped at 4 GiB due to OpenGL buffer restrictions.
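The stated 43%-72% range can be roughly reproduced from Table 2 by dividing the PPFL time by the total frame time (1000 ms / FPS); the small deviation at the upper end (71% vs. 72%) is plausibly due to the rounded FPS values in the table:

```python
# Share of the frame time spent resolving per-pixel fragment lists,
# computed from the (FPS, PPFL ms) pairs listed in Table 2.
rows = [
    (154, 4.3), (85, 7.8), (52, 12.7),   # grayloc
    (96, 6.3), (51, 12.9), (32, 20.3),   # anc101_a1
    (127, 4.6), (74, 8.8), (49, 13.8),   # cognit
    (44, 9.8), (26, 20.6), (17, 33.6),   # example3
    (12, 46.9), (7, 101.5),              # cubic128
]
# share = PPFL / (1000 / FPS) = PPFL * FPS / 1000
shares = [ppfl * fps / 1000.0 for fps, ppfl in rows]
print(f"PPFL share of frame time: {min(shares):.0%} to {max(shares):.0%}")
# prints: PPFL share of frame time: 43% to 71%
```

The minimum (43%) occurs for example3 at 1280x720, where the F+C shader dominates; the maximum for cubic128 at 1920x1080, where depth complexity is highest.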
One can see that, due to the combination of contextual lines with volumetric face-based rendering and accentuated edges, the user quickly understands the basic structure of the mesh and its subdivision into multiple regular sheet components. Once a focus region is selected, a detailed analysis of the cells in that region is performed via close-up views and interactive navigation. During inspection, LoD levels, transfer functions for edge and face colors and opacities, as well as edge thickness can be changed interactively to enhance the visual representation.

Figure 20 (left) shows a deformed Cartesian grid comprised of 128³ cells. The deformed grid is created by performing a finite element analysis with a specific boundary condition that lets the mesh twist. High deformations occur in the orange regions, yet the cells are so small that the mesh structure cannot be seen. Via the edges from a selected coarse LoD level, the basic mesh structure is preserved, and the user can zoom into a region of high deformation and use focus rendering to investigate the deformations in more detail. Figure 20 (right) shows a rendering of a hex-mesh that was generated via the method from [45]. As can be seen, the meshing approach creates many singular edge columns, i.e., cells with higher deformations are laid out along straight vertical structures, while the remaining parts of the mesh show almost zero deformation. The focus view reveals the structure of the cells with a deformation larger than a selected threshold in the selected region. In Figure 29 we show further results of F+C rendering. In Figure 27 and Figure 28, more applications of our approach can be seen.

To evaluate the potential of the proposed visualization tool for hex-mesh inspection, we performed an informal user study with the goal of assessing the strengths and weaknesses of our tool compared to the one by Bracci and co-workers [6]. With each tool, the users visualized 3 different hex-meshes.
The users were asked to comment on how effectively they understood the overall shape of the objects, determined the regions with highly deformed cells, and could assess the spatial relationships between regions with different deformation strengths as well as the concrete deformation characteristics of cells in regions comprised of highly deformed cells. Visual comparisons to HexaLab [6] and the main sheet extraction method of Xu et al. [7] are given in Figure 22, Figure 23, Figure 24 and Figure 25. In the user study we did not consider the method by Xu et al., since its focus is on a topological mesh analysis and not on the visualization of cell deformations. The major findings from the user study are as follows:

• Global view: Users appreciate that the global context is always visible when using our tool. Due to the use of volume rendering with deformation-strength-based classification in the context region, all regions with highly deformed cells can be perceived in relation to each other. The visualization hints at all potentially interesting regions. With only volume rendering, however, users sometimes lose depth perception and feel that the global mesh structure cannot be understood well. This limitation becomes obsolete when coarse-scale edge structures are blended into the context region, which enhances the understanding of the global mesh structure without introducing clutter. HexaLab, in comparison, supports the rendering of singular edges in filtered mesh regions and a transparent mesh outline to maintain some global context. Users perceived the resulting sparseness of regions in which cells are filtered out as a minor limitation.

• LoD structure: Unlike with slicing, peeling and quality-based cell filtering, where cells are removed entirely based on a binary decision criterion, users appreciate the smooth LoD-based transition from focus to context and from high to low deformations provided by our tool. This effectively reveals how the cell quality changes globally, and whether these changes are rather smooth or occur abruptly.

• Edge-based rendering: When inspecting regions via the screen-space lens and edge rendering, users were able to quickly assess both the relation of deformed cells to their surroundings and how the cells are deformed. When rendering opaque surfaces, almost all neighbors of a cell would need to be filtered out (increasingly removing context information) in order to see the cell edges and be able to perform a fine-granular deformation analysis.

• Scalar field visualization: Two users from computational science found it very appealing that scalar values given per vertex or cell can also be visualized using volume rendering (cf. Figure 17). In particular, when hex-meshes are used as simulation grids, this option becomes very effective for visualizing the relationships between simulation results and errors on the one hand, and the underlying cell structures on the other hand.

• Interactive modification of visual parameters: It was perceived as very supportive of a detailed mesh analysis that all rendering parameters can be changed interactively, so that groups of elements can quickly be (de-)emphasized to shift more or less attention to the global mesh structure.

Users have also pointed out potential limitations of our approach, some of which could be overcome by only minor adjustments:

• It was stated that unshaded lines impact the ability to correctly observe spatial relations when looking at a still image (cf. Figure 22). We counteract this effect in the focus region by using depth cues, slightly desaturating fragments that are further away from the camera. This is especially useful for lines that are far apart from each other.

• When using a screen-space lens, clutter was perceived and some important regions could not be seen, as all elements along the viewing cone are put into focus. Therefore, we also provide a mechanism similar to an object-space lens (cf. Figure 19). The user clicks on the mesh to select the initial object-space lens position by picking the closest mesh surface point along the viewing ray. The initial viewing ray is saved, and the user can move the object-space lens position along this ray into the object using the mouse wheel. When the user moves the camera, the object-space position and the stored ray of the lens stay unchanged.

• When a meshing technique produces meshes with a very high number of singularities (e.g., octree-based meshing techniques, cf. Figure 26), the LoD structure is cluttered as well and becomes less useful. This drawback also affects tools like HexaLab [6], which renders singular edges in regions where cells are filtered out. Xu et al. [7] also state regarding their method that "it is still hard to show the structure of an octree or tet-split hex-mesh due to the overly complex structure and a large number of extracted main sheets".
Fig. 17. Contextual volume rendering is used to visualize a scalar field given at the vertices of a Cartesian grid. The values represent the stress anisotropy of a femur model under a certain load condition.
8 CONCLUSION AND FUTURE WORK
In this paper, we have introduced an interactive F+C rendering technique for hex-meshes using fragment-based edge and face rendering. We have demonstrated that even high-resolution meshes can be rendered at high visual quality by using a carefully designed combination of detailed cell information and surrounding contextual information. To achieve this, we have introduced the use of hexahedral sheets for extracting a hierarchical LoD edge structure that provides important shape cues in the context region. This allows us to significantly reduce occlusions and visual clutter. By using a purely fragment-based rendering approach, which smoothly transitions between highly detailed edge rendering and volumetric face blending, interactive rendering of data sets comprised of up to a few million elements is achieved on current GPU architectures. Our results indicate the potential of the proposed rendering technique for an interactive visual inspection of hex-meshes, supported by an automated guidance to important mesh regions.

In the future, we will shed light on approximate rendering techniques for transparent fragments that can avoid the use of per-pixel fragment lists, such as multi-layer alpha blending [49] or moment-based order-independent transparency [50]. These techniques do not render the fragments in correct visibility order, yet since transparency is mostly used in the context region, with more emphasis on closer mesh structures, such techniques might be able to provide a meaningful approximation. We further envision an AR-based stereoscopic inspection of hex-meshes to provide an improved spatial understanding of shape variations. Here it will be interesting to analyse whether a purely fragment-based approach is suitable for stereoscopic rendering. We will further shed light on the use of the proposed method for irregular meshes, such as tetrahedral meshes.
For such meshes, the construction of an LoD structure is not possible at first hand, and alternative hierarchical representations need to be developed. Furthermore, visualizing topologically highly irregular hex-meshes, as generated by, e.g., octree-based techniques, without producing excessive clutter is a problem that still needs more investigation. Finally, we intend to extend the rendering method to perform volume rendering of physical fields given at the hexahedral cells or vertices. This includes in particular the use of extended barycentric interpolation for deformed hex-cells and the rendering of implicit isosurfaces passing through the cells.

Fig. 18. The first F+C view shows a selected mesh sub-structure with highly deformed cells (framed region) in its global surrounding. Zoom-in and focus size adjustment enable a fine-granular cell analysis. Surrounding cells with high deformation are still present in the context. Model anc101_a1 courtesy of [1].

Fig. 19. In the first F+C view, important regions are effectively revealed. The framed region shows sub-structures that have been selected via the focus lens. Zoom-in and focus size adjustment enable a fine-granular cell analysis. Surrounding cells with high deformation are still present in the context. Model grayloc courtesy of [40].

Fig. 20. Left: F+C rendering of cubic128. Right: F+C rendering of example3 reveals mostly elongated sub-structures with high deformations. Model courtesy of [45].

Fig. 21. Comparison of black and white background. Model anc101_a1 courtesy of [1].

Fig. 22. Top: F+C visualization. Bottom: Same model and views with opaque surface rendering and quality-based cell filtering. Model motor_tail courtesy of [46].

Fig. 23. From left to right: Contextual volume rendering (ours). Fully opaque surface rendering. Slicing with an oblique slicing plane. An additional slicing plane removing front elements. Quality-based cell filtering. Opaque surface rendering requires multiple operations to reveal the interesting mesh structures, and context information is often lost. Model cube_carved courtesy of [46].

Fig. 24. Comparison of different rendering techniques. From left to right: F+C visualization with focus on the woman's head (ours). Visualization in HexaLab [6] with slicing. Main sheet visualization by [7]. Rightmost image courtesy of [7]. Model fertility courtesy of [47].

Fig. 25. From top left to bottom right: F+C visualization (ours). Slicing in HexaLab [6] with singular edges. Fully opaque surface rendering. Slicing. Quality-based cell filtering. Model anc101_a1 courtesy of [1].

Fig. 26. Contextual volume rendering of meshes from an octree-based meshing approach. Due to the highly irregular topological structure, many short line segments are created. Models courtesy of [48].

Fig. 27. From left to right: Surface rendering with wireframe edges. Same as before, but edge colors are modulated by their LoD to slightly accentuate edges belonging to a low level. Model metatron courtesy of [46].

Fig. 28. Contextual volume rendering visualizations of LoopyCuts-based models. Models courtesy of [46].

Fig. 29. F+C renderings of different meshes. Models cognit, dragon and dancingchildren courtesy of [3], [36].

REFERENCES

[1] J. Gregson, A. Sheffer, and E. Zhang, "All-hex mesh generation via volumetric polycube deformation," Computer Graphics Forum (Special Issue of Symposium on Geometry Processing 2011), vol. 30, no. 5, 2011.
[2] X. Gao, D. Panozzo, W. Wang, Z. Deng, and G. Chen, "Robust structure simplification for hex re-meshing," ACM Trans. Graph., vol. 36, no. 6, Nov. 2017. [Online]. Available: https://doi.org/10.1145/3130800.3130848
[3] J. Huang, T. Jiang, Z. Shi, Y. Tong, H. Bao, and M. Desbrun, "ℓ1-based construction of polycube maps from complex shapes," ACM Transactions on Graphics (TOG), vol. 33, no. 3, pp. 1–11, 2014.
[4] M. Livesu, N. Pietroni, E. Puppo, A. Sheffer, and P. Cignoni, "LoopyCuts: Surface-field aware block decomposition for hex-meshing," arXiv preprint arXiv:1903.10754, 2019.
[5] X. Gao, J. Huang, K. Xu, Z. Pan, Z. Deng, and G. Chen, "Evaluating hex-mesh quality metrics via correlation analysis," Computer Graphics Forum, vol. 36, no. 5, pp. 105–116, 2017.
[6] M. Bracci, M. Tarini, N. Pietroni, M. Livesu, and P. Cignoni, "HexaLab.net: An online viewer for hexahedral meshes," Computer-Aided Design.
[7] K. Xu, X. Gao, and G. Chen, "Hexahedral mesh structure visualization and evaluation," IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, pp. 1173–1182, 2019.
[8] C. Stimpson, C. Ernst, P. Knupp, P. Pébay, and D. Thompson, "The Verdict library reference manual," Apr. 2007.
[9] A. C. Woodbury, J. F. Shepherd, M. L. Staten, and S. E. Benzley, "Localized coarsening of conforming all-hexahedral meshes," Eng. Comput. (Lond.), vol. 27, no. 1, pp. 95–104, 2011. [Online]. Available: https://doi.org/10.1007/s00366-010-0183-9
[10] M. J. Borden, S. E. Benzley, and J. F. Shepherd, "Hexahedral sheet extraction," in IMR, 2002.
[11] R. Wang, C. Shen, J. Chen, H. Wu, and S. Gao, "Sheet operation based block decomposition of solid models for hex meshing," Computer-Aided Design, vol. 85, pp. 123–137, 2017.
[12] C. T. Silva, J. L. D. Comba, S. P. Callahan, and F. F. Bernardon, "A survey of GPU-based volume rendering of unstructured grids," Revista de Informática Teórica e Aplicada, vol. 12, no. 2, pp. 9–29, 2005.
[13] M. Weiler, M. Kraus, M. Merz, and T. Ertl, "Hardware-based ray casting for tetrahedral meshes," in IEEE Visualization 2003 (VIS 2003). IEEE, 2003, pp. 333–340.
[14] S. P. Callahan, M. Ikits, J. L. D. Comba, and C. T. Silva, "Hardware-assisted visibility sorting for unstructured volume rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 3, pp. 285–295, 2005.
[15] R. Marroquim, A. Maximo, R. Farias, and C. Esperança, "GPU-based cell projection for interactive volume rendering." IEEE, 2006, pp. 147–154.
[16] J. Georgii and R. Westermann, "A generic and scalable pipeline for GPU tetrahedral grid rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 12, no. 5, pp. 1345–1352, 2006.
[17] J. C. Yang, J. Hensley, H. Grün, and N. Thibieroz, "Real-time concurrent linked list construction on the GPU," in Proceedings of the 21st Eurographics Conference on Rendering, ser. EGSR'10. Aire-la-Ville, Switzerland: Eurographics Association, 2010, pp. 1297–1304.
[18] L. Carpenter, "The A-buffer, an antialiased hidden surface method," in Proceedings of the 11th Annual Conference on Computer Graphics and Interactive Techniques, 1984, pp. 103–108.
[19] M. Hadwiger, A. K. Al-Awami, J. Beyer, M. Agus, and H. Pfister, "SparseLeap: Efficient empty space skipping for large-scale volume rendering," IEEE Transactions on Visualization and Computer Graphics, vol. 24, no. 1, pp. 974–983, Jan. 2018.
[20] C. Tominski, S. Gladisch, U. Kister, R. Dachselt, and H. Schumann, "Interactive lenses for visualization: An extended survey," Computer Graphics Forum, vol. 36, no. 6, pp. 173–200, 2017. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.12871
[21] E. LaMar, B. Hamann, and K. I. Joy, "A magnification lens for interactive volume visualization," in Proceedings Ninth Pacific Conference on Computer Graphics and Applications (Pacific Graphics 2001). IEEE, 2001, pp. 223–232.
[22] M. Ikits and C. D. Hansen, "A focus and context interface for interactive volume rendering," unpublished work, 2004.
[23] L. Wang, Y. Zhao, K. Mueller, and A. Kaufman, "The magic volume lens: An interactive focus+context technique for volume rendering," in IEEE Visualization 2005 (VIS 05). IEEE, 2005, pp. 367–374.
[24] I. Viola, A. Kanitsar, and M. E. Gröller, "Importance-driven feature enhancement in volume visualization," IEEE Transactions on Visualization and Computer Graphics, vol. 11, no. 4, pp. 408–418, 2005.
[25] M. Traoré, C. Hurter, and A. Telea, "Interactive obstruction-free lensing for volumetric data visualization," IEEE Transactions on Visualization and Computer Graphics, vol. 25, no. 1, pp. 1029–1039, 2019.
[26] J. Krüger, J. Schneider, and R. Westermann, "ClearView: An interactive context preserving hotspot visualization technique," IEEE Transactions on Visualization and Computer Graphics (Proceedings Visualization / Information Visualization 2006), vol. 12, no. 5, Sep./Oct. 2006.
[27] C. Dick, J. Georgii, R. Burgkart, and R. Westermann, "Stress tensor field visualization for implant planning in orthopedics," IEEE Transactions on Visualization and Computer Graphics (Proceedings of IEEE Visualization 2009), vol. 15, no. 6, pp. 1399–1406, 2009.
[28] S. Marchesin, C.-K. Chen, C. Ho, and K.-L. Ma, "View-dependent streamlines for 3D vector fields," IEEE Transactions on Visualization and Computer Graphics, vol. 16, no. 6, pp. 1578–1586, 2010.
[29] T.-Y. Lee, O. Mishchenko, H.-W. Shen, and R. Crawfis, "View point evaluation and streamline filtering for flow visualization." IEEE, 2011, pp. 83–90.
[30] J. Ma, C. Wang, and C.-K. Shene, "Coherent view-dependent streamline selection for importance-driven flow visualization," in Visualization and Data Analysis 2013, vol. 8654. International Society for Optics and Photonics, 2013, p. 865407.
[31] T. Günther, C. Rössl, and H. Theisel, "Opacity optimization for 3D line fields," ACM Transactions on Graphics (TOG), vol. 32, no. 4, pp. 1–8, 2013.
[32] M. Kern, C. Neuhauser, T. Maack, M. Han, W. Usher, and R. Westermann, "A comparison of rendering techniques for 3D line sets with transparency," IEEE Transactions on Visualization and Computer Graphics, 2020.
[33] M. Kanzler, F. Ferstl, and R. Westermann, "Line density control in screen-space via balanced line hierarchies," Computers & Graphics, vol. 61, pp. 29–39, 2016.
[34] K. Takayama, "Dual sheet meshing: An interactive approach to robust hexahedralization," Computer Graphics Forum, vol. 38, no. 2, pp. 37–48, 2019. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.13617
[35] M. H. Everts, H. Bekker, J. B. T. M. Roerdink, and T. Isenberg, "Depth-dependent halos: Illustrative rendering of dense line data," IEEE Transactions on Visualization and Computer Graphics, vol. 15, no. 6, pp. 1299–1306, 2009.
[36] M. Livesu, A. Sheffer, N. Vining, and M. Tarini, "Practical hex-mesh optimization via edge-cone rectification," ACM Trans. Graph., vol. 34, no. 4, Jul. 2015. [Online]. Available: https://doi.org/10.1145/2766905
[37] T. Saito and T. Takahashi, "Comprehensible rendering of 3-D shapes," in Proceedings of the 17th Annual Conference on Computer Graphics and Interactive Techniques, ser. SIGGRAPH '90. New York, NY, USA: Association for Computing Machinery, 1990, pp. 197–206. [Online]. Available: https://doi.org/10.1145/97879.97901
[38] A. Hertzmann, "Introduction to 3D non-photorealistic rendering: Silhouettes and outlines," SIGGRAPH Course Notes, 1999.
[39] R. Raskar and M. Cohen, "Image precision silhouette edges," in Proceedings of the 1999 Symposium on Interactive 3D Graphics, 1999, pp. 135–140.
[40] X. Fang, W. Xu, H. Bao, and J. Huang, "All-hex meshing using closed-form induced polycube," ACM Trans. Graph., vol. 35, no. 4, Jul. 2016. [Online]. Available: https://doi.org/10.1145/2897824.2925957
[41] X. Gao, Z. Deng, and G. Chen, "Hexahedral mesh re-parameterization from aligned base-complex," ACM Trans. Graph., vol. 34, no. 4, Jul. 2015. [Online]. Available: https://doi.org/10.1145/2766941
[42] Y. Li, Y. Liu, W. Xu, W. Wang, and B. Guo, "All-hex meshing using singularity-restricted field," ACM Trans. Graph., vol. 31, no. 6, Nov. 2012. [Online]. Available: https://doi.org/10.1145/2366145.2366196
[43] D. Rákos, "Programmable vertex pulling," in OpenGL Insights.
[44] CoRR, vol. abs/1901.00238, 2019. [Online]. Available: http://arxiv.org/abs/1901.00238
[45] H. Wu, S. Gao, R. Wang, and J. Chen, "Fuzzy clustering based pseudo-swept volume decomposition for hexahedral meshing," Computer-Aided Design.
[46] M. Livesu, N. Pietroni, E. Puppo, A. Sheffer, and P. Cignoni, "LoopyCuts: Practical high-quality hex-meshing with controllable singularity structure," ACM Transactions on Graphics, vol. 39, no. 4, 2020.
[47] M. Livesu, N. Vining, A. Sheffer, J. Gregson, and R. Scateni, "PolyCut: Monotone graph-cuts for polycube base-complex construction," ACM Trans. Graph., vol. 32, no. 6, Nov. 2013. [Online]. Available: https://doi.org/10.1145/2508363.2508388
[48] X. Gao, H. Shen, and D. Panozzo, "Feature preserving octree-based hexahedral meshing," Computer Graphics Forum, vol. 38, no. 5, pp. 135–149, 2019. [Online]. Available: https://onlinelibrary.wiley.com/doi/abs/10.1111/cgf.13795
[49] M. Salvi and K. Vaidyanathan, "Multi-layer alpha blending," in Proceedings of the 18th Meeting of the ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games, 2014, pp. 151–158.
[50] C. Münstermann, S. Krumpen, R. Klein, and C. Peters, "Moment-based order-independent transparency," Proceedings of the ACM on Computer Graphics and Interactive Techniques, vol. 1, no. 1, pp. 1–20, 2018.
ACKNOWLEDGMENTS
The authors would like to thank Maximilian Bandle, Technical University of Munich, for his support concerning the efficient implementation of per-pixel fragment sorting on GPUs, and the various authors for providing the hex-meshes we have used. This work was partially funded by the German Research Foundation (DFG) under grant number WE 2754/10-1 "Stress Visualization via Force-Induced Material Growth".
Christoph Neuhauser is a graduate research assistant in the Computer Graphics and Visualization Group at the Technical University of Munich. He received his B.Sc. in computer science from TUM in 2019. His major research interests comprise scientific visualization and real-time rendering.
Junpeng Wang is a PhD candidate in the Computer Graphics and Visualization Group at the Technical University of Munich, Germany. He received his Bachelor's and Master's degrees in Aerospace Science and Technology in 2015 and 2018, respectively, both from Northwestern Polytechnical University, China. Currently, his research is focused on tensor field visualization and numerical simulation for solid mechanics.