LOCALIS: Locally-adaptive Line Simplification for GPU-based Geographic Vector Data Visualization
Eurographics Conference on Visualization (EuroVis) 2020, M. Gleicher, T. Landesberger von Antburg, and I. Viola (Guest Editors)
Volume 39 (2020), Number 3
Alireza Amiraghdam†, Alexandra Diehl, Renato Pajarola
Department of Informatics, University of Zürich, Switzerland
Figure 1: Street network visualized over a textured terrain model using LOCALIS. The left zoom-in inset highlights the distance-based simplification of lines. The red circle on the right demonstrates the flexibility of the approach using a line-refinement lens that can simplify or refine lines inside a region interactively controlled by the user.
Abstract
Visualization of large vector line data is a core task in geographic and cartographic systems. Vector maps are often displayed at different cartographic generalization levels, traditionally by using several discrete levels-of-detail (LODs). This limits the generalization levels to a fixed and predefined set of LODs, and generally does not support smooth LOD transitions. However, fast GPUs and novel line rendering techniques can be exploited to integrate dynamic vector map LOD management into GPU-based algorithms for locally-adaptive line simplification and real-time rendering. We propose a new technique that interactively visualizes large line vector datasets at variable LODs. It is based on the Douglas-Peucker line simplification principle, generating an exhaustive set of line segments whose specific subsets represent the lines at any variable LOD. At run time, an appropriate and view-dependent error metric supports screen-space adaptive LOD levels and the display of the correct subset of line segments accordingly. Our implementation shows that we can simplify and display large line datasets interactively. We can successfully apply line style patterns, dynamic LOD selection lenses, and anti-aliasing techniques to our line rendering.
CCS Concepts
• Human-centered computing → Geographic visualization; Visualization techniques; • Theory of computation → Computational geometry; • Computing methodologies → Rendering; Rasterization;
1. Introduction
† Authors' emails: {amiraghdam,diehl,pajarola}@ifi.uzh.ch

Interactive visualization of large geographic vector map data is a challenging problem, in particular in combination with real-time adaptive level-of-detail (LOD) methods. LOD-based simplification and rendering techniques offer well-proven solutions for dynamically adjusting the amount and resolution of the data to be displayed. In the context of scientific visualization and computer graphics, the development of multiresolution LOD methods has been an active research area that resulted in many algorithms and data structures to facilitate real-time rendering of very large amounts of 3D data, such as polygonal meshes, volumes or point cloud data.

For 2D textures, 3D meshes or volumetric data, different LOD simplification algorithms for both off-line preprocessing and real-time visualization have been proposed. In contrast, online LOD management and LOD-based interactive visualization of vector map line data has not received the same amount of attention. In 3D rendering engines, polyline data is commonly dealt with by being transformed into other formats such as textures or meshes, and the corresponding LOD techniques for these formats are then applied. In geographic information systems (GIS), besides terrain elevation models and texture maps, large parts of the particularly important cartographic data are given in vector format.

Cartographic generalization techniques for vector data have been studied for decades. However, vector map data processing and interactive visualization have typically been handled independently, thus no efficient and fully integrated solutions have been proposed until now. Furthermore, the performance of prior online methods is still insufficient for real-time line simplification and display of larger datasets, and therefore, past approaches do not easily translate to dynamic interactive visualizations.
In particular, the outcome of the polyline generalization has not been tailored towards interactive 3D visualization. As a consequence, there is a lack of real-time adaptive LOD techniques for vector polyline data.

We tackle the lack of specific techniques for visualizing vector polylines. Moreover, we consider line features to be displayed not only in 2D but in an interactive 3D geographic visualization. Following recent achievements for fast vector map rendering in 3D [TBP16, TBP18, FEP18], we propose a novel algorithm, locally-adaptive line simplification (LOCALIS), for GPU-based geographic polyline data visualization. Our main contributions are (1) a GPU-based view-dependent Douglas-Peucker style polyline simplification approach, that exploits (2) a novel LOD line-segments data structure, and (3) an efficient GPU-based deferred line rendering algorithm. Furthermore, in the experimental results, we not only demonstrate the interactive LOD simplification and rendering performance of our approach but also its line-styling as well as screen-space LOD and data filtering features. We focus on simple static line data, i.e. polylines that only intersect at joints and which do not have further temporal attributes such as trajectories. Our technique may work with other lines or trajectories but does not take their specific error metrics into account.
2. Related Work

2.1. Geographical Vector Data Visualization
In 3D, vector line data is most commonly displayed as an overlay over a terrain model, which can be a simple flat plane with a texture resembling the terrain as a 2D image, or it can be a digital 3D elevation model. In this context, we can categorize the methods for rendering geographic vector data into four major groups: (1) texture-based overlays, (2) geometry-based methods, (3) shadow-volume-based techniques, and (4) deferred direct vector rendering.

In texture-based methods [KD02, WKW∗03, SLL08, WLB09], vector data is rasterized and stored as a texture which is then projected over the terrain during rendering. These methods are fast and easy to implement but suffer from an insufficient resolution in areas closer to the camera, aliasing artifacts in far areas as well as projective distortions. To overcome these problems, higher or multi-resolution textures are used at the expense of larger texture memory usage. Furthermore, view-dependent and dynamically adapting vector maps require the textures to be updated each frame.

Geometry-based approaches transform the vector data into auxiliary meshes, modifying them to match the underlying terrain [QWS∗11, WSFL10]. While not suffering from resolution problems, other drawbacks arise. First, creating meshes from large-scale vector maps results in an even larger amount of geometry to be rendered, as each line primitive gives rise to several polygon primitives. Another issue is matching the meshes to the 3D terrain, especially in connection with multiresolution view-dependent LOD visualization approaches in which the terrain mesh as well as the vector maps continuously change as the camera moves. Furthermore, like texture-based solutions, these auxiliary meshes have to be recreated whenever the vector maps change, in the worst case before each frame. In general, unpredictable scene configurations are problematic for geometry-based line rendering methods.

In shadow-volume techniques, the vector map polylines are considered floating above the terrain and orthogonal shadow polygons are created intersecting the terrain [DZY08, YZK∗ ...].

2.2. Cartographic Generalization

Generalization is a key concept in cartography and has been used for displaying maps at different scales, with the goal of adjusting the amount and visual complexity of cartographic elements to match a specific use case and spatial resolution. Such generalized maps are supposed to simplify a given task and increase the efficiency of the users [WBW10]. Generalization is done by applying different operations to cartographic elements which are classified into several categories such as elimination, simplification, aggregation, and collapse [MS92, FSK07, RBS11].

Automated line simplification and feature selection methods help to reduce time-consuming manual work and maintain consistency [BW88]. Early batch processing [HW07] methods worked by chaining several operations sequentially and providing the necessary control parameters. Subsequent improvements included rule-based expert systems which modeled cartographic generalization knowledge as a set of rules [BM91].
Due to the complexity of the generalization process, a high number of rules were needed. In addition, as the number of rules increased, new problems emerged such as conflicts and competition between rules [FM87]. Eventually, expert systems can be used for specific problems such as label placement, but become too complex for the whole process of generalization [Zor91].

The constraints concept [Bea91] defines the desired output by constraints, and an algorithm optimizes the combination of generalization operations in order to produce the best output based on the defined constraints. Among the optimization techniques that have been developed for this purpose, the agent-based method [LRD∗99] has successfully been used in map production [RRB11]. However, this approach is still not effective enough for on-demand map generalization because defining constraints for every possible situation that users could demand is not possible. To try to overcome this shortcoming, ontology-based approaches were proposed for road line simplification [KDE05] and road accident visualization [GM16]. Still, a comprehensive ontology has not been created to cover the whole generalization process.

Early interactive map visualization systems used a set of maps at different discrete cartographic scales which can be selected based on the user interaction and display resolution. To avoid the limitations of using only a given set of discrete LODs, on-the-fly generalization approaches keep the vector map in data structures that can be used to extract maps at a desired detail level on demand [WB08]. With respect to linear vector map features, the binary line generalization (BLG) tree [VOVDB89] is an important basic line simplification data structure based on the Douglas-Peucker (DP) algorithm. Reactive-trees [VO92] as well as generalized area partitioning trees [VO95] were designed for on-the-fly line simplification, as well as the Multi-VMap [VMPR06].

Despite these advances, cartographic vector map line generalizations are still far away from real-time performance on larger scales and do not consider interactive 3D visualization scenarios. Our method, while being limited to line features, is to the best of our knowledge the first real-time locally-adaptive line simplification and visualization solution.
Simplification and multiresolution modeling techniques are widely known for various 3D geometry data types [LRC∗ ..., vKLW18]. Due to its excellent accuracy [SC06] and simplicity, we based our approach on the DP technique and the error metric specific to this technique. For efficient view-dependent and screen-space adaptive LOD selection, we adopt the concept of error saturation known from terrain rendering [LP01, BPS04, PG07]. This allows us to define a BLG-tree supporting view-dependent line-refinement operations as described in Section 3.2.
3. Locally Adaptive Line Simplification

3.1. Douglas-Peucker Line Refinement Trees
Our line simplification approach is based on the DP technique and the BLG-tree [VOVDB89]. Fig. 2 illustrates the DP line refinement principle and the corresponding BLG-tree. The process is defined by incrementally refining the current line version, initially starting with a straight connection between the endpoints. In each step, one line segment acts as a baseline which is subdivided by adding a refinement point, and this next point is chosen as the one having the largest distance from its baseline.

The distance e of a point p_i to its baseline is considered to be the error that this point introduces when leaving it out for representing the line. Adding a point p_i thus divides the corresponding baseline p_l p_r and hence also splits the remaining unused points belonging to the same baseline into two groups L_i = { p_j | l < j < i } and R_i = { p_j | i < j < r }. The two sets L_i, R_i define the two subtrees of node p_i in the BLG-tree. In Fig. 2, starting with the initial baseline p_B p_E, inserting the first refinement point splits the remaining unused points into a left set L and a right set R. Subsequently, the two new baselines with their respective unused point sets L and R are processed. At any moment, there is thus a set of baselines, each with its refinement points, and for the next refinement step the baseline with the refinement point with the largest distance is subdivided, until the desired or the full LOD is reached.

As can be seen in Fig. 2, a binary BLG-tree is built according to the above outlined process. We use this tree to simplify a line adaptively based on a given error threshold: a recursive tree traversal stops as soon as the error e of a node falls below the threshold, and only the so far traversed nodes with e above the threshold refine the line. In Fig. 2 for example, a threshold of 10 yields a coarser line version than a smaller threshold would. Fig. 3(a) illustrates the situation in which the error threshold is adaptively defined based on the distance from a camera.
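The baseline-splitting recursion and the threshold-based traversal described above can be sketched as follows. This is an illustrative Python sketch, not the authors' GPU implementation; the point coordinates and the threshold value are made up:

```python
# Sketch: building a DP/BLG-tree for a polyline and simplifying it
# with a fixed error threshold (Douglas-Peucker principle).
import math

def point_line_dist(p, a, b):
    """Perpendicular distance of point p to the baseline a-b."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    length = math.hypot(dx, dy)
    if length == 0.0:
        return math.hypot(px - ax, py - ay)
    return abs(dx * (ay - py) - dy * (ax - px)) / length

def build_blg(pts, l, r):
    """BLG-tree node for the points strictly between indices l and r."""
    if r - l < 2:
        return None
    # The refinement point is the one farthest from the current baseline.
    i = max(range(l + 1, r), key=lambda k: point_line_dist(pts[k], pts[l], pts[r]))
    return {"idx": i,
            "e": point_line_dist(pts[i], pts[l], pts[r]),
            "left": build_blg(pts, l, i),
            "right": build_blg(pts, i, r)}

def simplify(node, eps, out):
    """In-order traversal: keep a refinement point only if its error exceeds eps."""
    if node is None or node["e"] <= eps:
        return
    simplify(node["left"], eps, out)
    out.append(node["idx"])
    simplify(node["right"], eps, out)

pts = [(0, 0), (1, 0.1), (2, 3), (3, 0.2), (4, 0)]
root = build_blg(pts, 0, len(pts) - 1)
kept = []
simplify(root, 1.0, kept)
line = [0] + kept + [len(pts) - 1]   # endpoint indices plus kept refinement points
```

With the threshold 1.0, only the point with error 3.0 survives, so the five-point polyline collapses to three points.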
A function ε(d) translates the distance d from the camera to an error threshold that is compared against the error e_i of each point p_i in the BLG-tree. While traversing the BLG line-refinement tree we thus check for the inequality

e_i > ε(d_i),   (1)

given the function ε() which corresponds to a screen-space error threshold. In this equation, e_i denotes the error of the node and d_i its distance to the camera. Each p_i for which this inequality is true is included and refines its baseline, and the recursive traversal stops when the test fails. If the test fails, e.g. at one of the nodes in Fig. 3(b), the entire subtree is not included for line refinement, and the final result is the adaptively refined line shown in Fig. 3(c).

The deferred line rendering algorithm described below in Section 3.6 requires efficient pixel-on-line evaluations. To avoid many expensive BLG-tree traversals for all pixels, we thus propose a novel approach that converts the line refinement trees into an exhaustive set of attributed line segments, and indexes them. These attributed line segments include all possible line configurations, e.g. as illustrated in Fig. 2, that could be needed for any

Figure 2:
Different steps of the DP algorithm when refining a line (top) and the corresponding tree structure (bottom). (l0) shows the most simplified and (l6) the fully refined line version. (l1-l5) indicate the steps where the most impactful point is added each time, with the distances to the subdivided baseline also illustrated. (t0-t6) show the nodes corresponding to the inserted refinement points being added to an incrementally growing binary tree.

Figure 3:
Example with line refinement error dependent on the distance to the camera. (a) For each point p_i an error threshold th_i is calculated using the given function ε(d_i). (b) While performing an in-order tree traversal, the error of each node is tested against the calculated threshold. Where e < th, the node and its children are discarded. (c) The final refined line, with parts closer to the camera having more detail.

LOD refinement situation. Furthermore, they are constructed such that their LOD visibility can be determined individually according to a given error threshold. To achieve this, for each BLG-tree we generate the set T of all possible line segments that can occur by traversing the tree. T can be extracted from a BLG-tree using the following three rules, which are illustrated in Fig. 4 along with an example. Rule 1: connect p_B and p_E to the root node and treat the root as the right child of p_B and the left child of p_E. For the next rules, we denote the descendants of a node by their relative paths in the subscript, such that the left child of the node p_i is p_il and the right child of the right child of the left child of p_i is p_ilrr. Rule 2: connect each descendant on the rightmost path of the left subtree, i.e. p_il, p_ilr, p_ilrr, ..., to p_i. Rule 3: the symmetric counterpart of Rule 2, obtained by swapping l and r.

In order to determine the LOD visibility of each individual line segment in T, we need to check the inclusion of its two endpoints. Let us consider a line segment p_B p_i in Fig. 2 which appears for the first time in Fig. 2(l1), where p_i is used to refine the line. We can
Three rules for extracting from a BLG-tree the set T of all possible line segments that could be drawn regardless of how the ε() function is defined. Black (BLG-tree), blue (Rule 1), orange (Rule 2) and green (Rule 3) connections form T in the example.

observe that p_B p_i will continue to be a part of the simplified line until a further refinement point below it is included, e.g. in Fig. 2(l4). The visibility of the line segment is thus not affected by any other point. Therefore, we call the point whose insertion creates a segment its generator and the point whose insertion subdivides it its splitter. Since the generator p_g is always one of the two endpoints, the other being p_l, we additionally only need to know the splitter p_s of a line segment. Thus three points p_l, p_g, p_s and their errors e_l, e_g, e_s need to be known and compared to the LOD error threshold to determine the visibility of a specific line segment in T. Therefore, a line segment is visible when its generator is included and its splitter is not, i.e. iff

e_g > ε(d_g) ∧ e_s ≤ ε(d_s).   (2)

Note, however, that Eq. (2) is only correct if the inclusion test has a monotonic behavior w.r.t. traversing the BLG-tree top-down, as a node cannot be included in a refined line version without all its ancestors already being included. Furthermore, since we are not in fact explicitly traversing the BLG-trees, but testing individual points, we must enforce this condition in the representation of our attributed line segments.

Based on Eqs. 1 and 2, a point could be included without its ancestors if (i) its error is larger than the one of its ancestors or (ii) ε() returns a lower threshold than for one of its ancestors. To resolve (i) we conservatively set each node's error to the maximum ê of its subtree. In Fig. 2, the saturated error ê of such a node thus becomes 18.
9, the largest error among its descendants. Case (ii) depends on the camera and is solved by adopting a view-independent error saturation technique [LP01, BPS04, PG07].

Considering a node and one of its descendants in Fig. 5(a), the worst-case camera position is aligned with the two points, not between them, and on the side of the descendant. In this configuration, it could be that ê ≤ ε(d) for the ancestor but ê > ε(d) for the descendant, thus causing the ancestor not to be included. To arrive at a view-independent metric, we store for each node the maximal distance d_max to any of its descendants in the line-refinement tree, as illustrated in Fig. 5(b). Therefore, we replace Eq. (1) by

ê_i > ε(d_i − d_i^max).   (3)

However, since d_max of a descendant node may still be larger, see also Fig. 5(c), we assign the maximum d̂_max of each subtree to its root. Given the so saturated and maximized error and distance values, a line segment is visible iff for its generator and splitter points p_g and p_s

ê_g > ε(d_g − d̂_g^max) ∧ ê_s ≤ ε(d_s − d̂_s^max).   (4)

Using Eq. (4) we may conservatively display more details than needed since we consider the worst case configurations. However, it allows us to guarantee a view-dependent LOD approximation error in screen-space.

Figure 5: (a) The maximum difference between the distances of two points to a camera occurs when the points and the camera are on a line. (b) Calculating the maximum distance d_max to descendant nodes. (c) A non-saturated d_max of a descendant being larger than that of its ancestor.

Arbitrary line simplifications can cause unwanted intersections of line segments. In Fig. 6, removing two points causes the simplified line to intersect another polyline.
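The saturated per-segment visibility test of Eq. (4) can be sketched as below. The threshold function ε and all numeric values are illustrative assumptions; a real implementation would use the saturated attributes computed in the preprocess:

```python
# Sketch of the view-dependent segment visibility test of Eq. (4).
def eps(d):
    """Example screen-space error threshold, monotonically increasing in d."""
    return 0.002 * max(d, 1.0)

def included(e_hat, d, d_max_hat):
    # Saturated inclusion test of Eq. (3): worst-case camera distance d - d_max.
    return e_hat > eps(d - d_max_hat)

def segment_visible(gen, spl):
    # Eq. (4): the generator is included AND the splitter is excluded.
    return included(*gen) and not included(*spl)

# Each point record: (saturated error, distance to camera, saturated max
# descendant distance) -- made-up values for illustration.
gen = (18.9, 5000.0, 400.0)
spl = (6.1, 5200.0, 100.0)
print(segment_visible(gen, spl))  # True
```

Here the generator passes its saturated test (18.9 > ε(4600) = 9.2) while the splitter fails (6.1 ≤ ε(5100) = 10.2), so the segment is drawn.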
This can in fact be predicted by testing whether any other visible point lies inside the triangle formed by the removed point and its baseline, or equivalently the triangle △(p_l, p_g, p_s) of line endpoint, generator and splitter points. We call such points, e.g. the point inside the triangle in Fig. 6, dependees. Recall the basic exclusion rule of a point p_i being

ê_i ≤ ε(d_i − d̂_i^max).   (5)

Given a refinement or splitter point p_i and its dependees P_i = { p_j | p_j inside △_i }, we must make sure that p_i is excluded only if all points in P_i are also excluded. Assuming P_i implicitly includes p_i itself, we can reformulate Eq. (5) to exclude p_i from refining a line segment to

max_{p_j ∈ P_i} ê_j ≤ min_{p_j ∈ P_i} ε(d_j − d̂_j^max).   (6)

Given the distances d_i, d_j to the camera for the dependent and dependee points p_i, p_j as well as the distance d_{i,j} between them, we know that d_j ≥ d_i − d_{i,j}. If we plug this into Eq. (6) and reorganize it, given that ε() is a monotonically increasing function, we get

max_{p_j ∈ P_i} ê_j ≤ ε( d_i − max_{p_j ∈ P_i} ( d_{i,j} + d̂_j^max ) ).   (7)

As there is only one term, d_i, that varies at run-time in Eq. (7), we can pre-evaluate the remaining terms e*_i = max_{p_j ∈ P_i} ê_j and d*_i = max_{p_j ∈ P_i} ( d_{i,j} + d̂_j^max ), and store these two pre-evaluated terms instead of ê_i and d̂_i^max with the point p_i.

Figure 6: (a, b) An example of two lines that intersect when only one of them is simplified. (c) The intersection happens when a point is excluded and its dependee is not. (d) Exclusion of a point can depend on several dependees and their dependees.

Eventually, points can have multiple nested dependencies; in Fig. 6(d), one point depends on two others, one of which in turn depends on a fourth.
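The pre-evaluation of the two dependee terms in Eq. (7) can be sketched as follows. The dependee records are hypothetical flat tuples rather than the paper's texture layout:

```python
# Sketch: pre-evaluating e*_i and d*_i of Eq. (7) for a point p_i.
# Each dependee record: (saturated error e_hat, saturated max descendant
# distance d_max_hat, distance d_ij to p_i). P_i implicitly contains p_i
# itself with d_ij = 0.
def pre_evaluate(dependees):
    e_star = max(e_hat for e_hat, _, _ in dependees)
    d_star = max(d_ij + d_max_hat for _, d_max_hat, d_ij in dependees)
    return e_star, d_star

# p_i itself plus two dependees (made-up values):
P_i = [(7.5, 120.0, 0.0), (4.3, 60.0, 35.0), (2.8, 10.0, 80.0)]
e_star, d_star = pre_evaluate(P_i)
# At run time, p_i is excluded iff e_star <= eps(d_i - d_star),
# which only depends on the varying camera distance d_i.
```

Storing e*_i and d*_i with p_i makes the dependee-aware exclusion test as cheap as the plain test of Eq. (5).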
While these linear dependencies can be solved by the saturation mechanism, potential cycles are not. We handle cyclic dependencies by enforcing simultaneous selection of all points in a cycle through the introduction of a proxy point. This proxy point has maximized attributes and an averaged spatial position, and it is used for the evaluation of the exclusion criterion; the original point coordinates are still used for drawing. Overlapping cycles are merged and have a single proxy for all their points.

An overview of the preprocess is illustrated in Fig. 7. From the vector map data all points are extracted and stored in a global array. All polylines are transformed into BLG-trees, and their nodes' error and distance attributes are saturated as outlined above and are assigned to their respective points. From the BLG-trees a global list of line segments is extracted, each containing four point references: two endpoints, the generator, and the splitter. Now each line segment can independently be evaluated w.r.t. being visible, by evaluating the generator and splitter exclusion, for a given camera position and screen-space error threshold function ε().
Figure 7:
The point attribute and line segment data arrays generated in the preprocess are mapped to a combined grid and quadtree spatial search acceleration data structure on the GPU. Additionally, line styles (see Section 4.2), terrain height and terrain color are loaded for rendering the scene.
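The preprocess output described above can be sketched as two flat arrays, one of attributed points and one of attributed line segments. The record layout below is our reading of the description (field names are ours, not the authors'):

```python
# Sketch of the preprocess data layout: a global point array plus a global
# list of attributed line segments, each referencing two endpoints, a
# generator and a splitter, so that visibility can later be evaluated per
# segment without traversing the BLG-tree.
from dataclasses import dataclass

@dataclass
class PointRec:
    x: float
    y: float
    e_star: float   # saturated/maximized error
    d_star: float   # saturated/maximized distance term

@dataclass
class SegmentRec:
    a: int          # index of first endpoint (one endpoint is the generator)
    b: int          # index of second endpoint
    generator: int  # point whose inclusion creates this segment
    splitter: int   # point whose inclusion subdivides this segment

points = [PointRec(0.0, 0.0, 0.0, 0.0),
          PointRec(2.0, 3.0, 18.9, 400.0),
          PointRec(4.0, 0.0, 0.0, 0.0)]
# The fully simplified segment p_B-p_E: generated from the start,
# split when point 1 is included.
segments = [SegmentRec(0, 2, 0, 1)]
```

In the actual system these arrays are packed into data textures and indexed by the grid/quadtree structure of Fig. 7.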
Our interactive line visualization is based on the deferred vector map rendering proposed in [TBP16, TBP18] and extended in [FEP18]. Similar to deferred shading techniques, the actual drawing takes place in a fragment shader that performs pixel-on-line evaluations after the 3D terrain has been rendered in a standard geometry and texturing pass. Through inverse view-projection, each pixel is mapped back into the coordinate system of the vector map data and tested against the relevant and visible line segments. If the pixel is determined to be covered by a line, it is colored and styled accordingly.

This last step, as it is performed for all fragments in the framebuffer in parallel on the GPU, requires a very efficient spatial search index over all line segments. Different spatial search data structures can be used: in [TBP18] a two-level bounding volume hierarchy and in [FEP18] a spatial hash with nested quadtrees is used. We follow a similar two-level principle where a coarse global grid allows quick constant-time access to a cell, see also Fig. 7. In each cell, we use a quadtree for locally refining the search by a branch-and-bound based traversal. We evaluate the visibility of each line segment in the search result using the exclusion criteria of its generator and splitter. If it is visible and its distance to the fragment is less than half the width of the line, as it should appear in screen-space, the corresponding pixel will be colored as indicated by the line's style and pattern.

Where lines meet, consistent joints must be displayed as shown in Fig. 8. Round joints can be achieved by drawing a half-circle at each endpoint using the screen-space distance of the pixel to the line's endpoint. For uniformly colored lines no further special treatment is needed. For styled lines, the minimum distance to both lines' endpoints is used to determine the final color. Pixel P in Fig. 8(g) is located on the black outline of the righthand line and within the grey area of the lefthand line.
Since P is closer to the left line, it will be colored in grey. As both lines are needed for coloring pixels near joints, we continue the search after hitting the first line in order to find the second line.

Figure 8: (a) Separate line segments disconnected at a joint. (b) A simple solution draws a half-circle at each end. (c) Uniformly colored line segments appear continuous with overlapping half-circles. (d, e) For styled lines, half-circles do not solve the problem completely. In order for joints to look correct (f), we must use the color of the line that is closer to the pixel (g).
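The per-fragment pixel-on-line test can be sketched as a point-to-segment distance compared against half the line width; clamping the closest point to the segment endpoints yields the round end caps (half-circles) described above for free. This is an illustrative CPU-side sketch, not the fragment shader code:

```python
# Sketch: is a fragment's world position covered by a stroked line segment?
import math

def dist_to_segment(p, a, b):
    """Distance from point p to segment a-b, clamped to the endpoints."""
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    len2 = dx * dx + dy * dy
    t = 0.0 if len2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / len2))
    cx, cy = ax + t * dx, ay + t * dy   # closest point on the segment
    return math.hypot(px - cx, py - cy)

def covered(p, a, b, width):
    # The clamping above makes the stroke end in a half-circle cap,
    # which is exactly the round-joint behavior of Fig. 8(b).
    return dist_to_segment(p, a, b) <= width / 2.0

print(covered((1.0, 0.3), (0, 0), (2, 0), 1.0))   # inside the stroked line
print(covered((2.4, 0.0), (0, 0), (2, 0), 1.0))   # inside the round end cap
print(covered((3.0, 0.0), (0, 0), (2, 0), 1.0))   # beyond the cap
```

For styled joints, the same distance is evaluated for both adjoining segments and the closer one determines the color, as in Fig. 8(g).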
4. Implementation

4.1. Deferred Line Rendering
We have implemented our approach in C++ using OpenGL 4.1 to limit the dependency on advanced graphics features and thus support a wider range of possible applications. In the preprocess, a line dataset in shapefile format is transformed into three data textures for the rendering application, containing the points, the line segments, and the spatial search grid with the line segment quadtrees, see also Fig. 7. The renderer uses these three data textures along with a terrain height-field mesh, an image texture for the terrain, and a style texture for the line categories. During rendering, the terrain elevation model with its texture(s) is processed by a regular textured geometry rendering pass, exploiting multiresolution terrain rendering techniques. The vector map line rendering actually takes place solely in the fragment shader, which not only performs the normal terrain shading and texturing after vertex transformation, but also executes the per-pixel line evaluations.

From the world coordinates provided by the vertex shader, similar to [TBP18], OpenGL generates the exact world coordinates for all fragments. Using the per-pixel world coordinates, the fragment shader finds the grid cell in which the fragment is located and traverses the corresponding line segment quadtree, calculating the distance between the fragment and each line segment that it encounters. Given the distance, line width and pixel size, a percentage area coverage is computed, used for antialiased blending as detailed below. For any non-zero overlap, the pixel's color is set from the line type's style texture based on its distance to the line segment. Further overlapping line segments go through the same process if they have the same or a higher line type priority, while lower priorities will be dropped. In case of a higher priority, the fragment color will be replaced; in case of equal priority the color of the closer line segment is kept.
The latter case also includes handling joints as described in Section 3.6. Alg. 1 describes the process in more detail.

The number of texture lookups strongly affects the performance of the fragment shader. Since each access can return an RGBA color value or an XYZW homogeneous coordinate 4-tuple, we structured our textures in a way that reduces the number of lookups. The first two values of each tuple in the points texture hold the x, y world coordinates of the point; the other two are the saturated error e* and distance d* values. For each line segment, we need the indices of three different points, i.e. the two endpoints and the splitter, which are stored in the first three values. Additionally, the first bit of the last value indicates which endpoint is the line's generator, and the remaining bits store the type of the line. The line segment quadtree texture contains two types of tuples: address and data tuples. An address tuple holds the indices of the four child nodes. Multiple consecutive data tuples store the indices of the line segments inside a node.

Algorithm 1: Fragment Shader
Input: fragmentWorldCoordinate (fWCoord), pixelSize, lineStyle, points, lines, quadTrees, terrainColor
Output: fragmentColor
function CalculateFragmentColor
    currentNode ← root of quadtree containing fWCoord
    coverage ← 0
    currentPriority ← lowest priority
    currentDistance ← ∞
    while currentNode is not empty do
        for all lines in currentNode do
            if line is generated and is not split then
                coverage ← CoveredByLine(fWCoord, line, pixelSize)
                distance ← distance(fWCoord, line)
                if coverage > 0 and (priority of line > currentPriority
                        or (priority of line = currentPriority and distance < currentDistance)) then
                    fragmentColor ← readLineStyleColor(type of line, distance)
                    alpha of fragmentColor ← coverage
                    currentPriority ← priority of line
                    currentDistance ← distance
                end if
            end if
        end for
        currentNode ← child node of currentNode that contains fWCoord
    end while
    fragmentColor ← blend(terrainColor, fragmentColor)
    return fragmentColor
end function

We use two techniques to overcome staircase and minification aliasing artifacts that appear when perspectively projecting and discretizing lines onto a fixed-resolution screen. Note that magnification artifacts do not occur in our deferred line rendering method, as we perform a pixel-precise rasterization in the fragment shader, which never causes magnification. Staircase aliasing at the outer edges of lines is prevented by using the per-pixel area coverage of the pixel-line overlap as the alpha value of the selected line color. To avoid aliasing inside lines that have style patterns, we exploit the OpenGL mipmapping functionality. Fig. 9 shows a line style texture that stores nine types of styles at several different mipmap levels. At the top mipmap level (0), each style type covers a 512 × 256 pixel matrix, which results in a 512 × 2304 pixel texture. As each style block is 256 pixels high, a mipmap pyramid of nine levels can be created without mixing the colors of adjacent styles.

Furthermore, given the alpha-blending based antialiasing, we can exploit this to adjust the line thickness dynamically. Our approach allows increasing or decreasing the line thickness adaptively based on distance, as well as on other spatial or even temporal functions. In our current implementation, we progressively increase the thickness of important lines with distance and reduce it for others. Therefore, small lines smoothly become invisible at far distances due to becoming subpixel in size, while other, important ones remain visible. As the camera gets closer, this distance-based adjustment is cancelled out and all lines gradually return to their actual thickness. We need to assign the line segments to the quadtree nodes that they cover at their largest thickness; we therefore multiplied the base thickness of each line type by a factor that we determined empirically. This effect is applied in Fig. 10, resulting in a much less cluttered view of far areas, as can be seen in the top image.
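The tuple layouts described above can be sketched on the CPU side as follows. This is our own illustration of the packing scheme, not the paper's implementation; the function and field names are hypothetical:

```python
def pack_line_flags(generator_is_second_endpoint: bool, line_type: int) -> int:
    """Pack the fourth value of a line-segment tuple: the first (lowest) bit
    marks which endpoint is the line's generator, the remaining bits store
    the line type (road class)."""
    return (int(generator_is_second_endpoint) & 1) | (line_type << 1)


def unpack_line_flags(flags: int) -> tuple:
    """Inverse of pack_line_flags: recover the generator bit and line type."""
    return bool(flags & 1), flags >> 1


def make_point_texel(x: float, y: float, sat_error: float, sat_distance: float) -> tuple:
    """One RGBA texel of the points texture: world x, y plus the saturated
    error e* and distance d*, so a single lookup fetches all four values."""
    return (x, y, sat_error, sat_distance)


def make_line_texel(i_end0: int, i_end1: int, i_splitter: int, flags: int) -> tuple:
    """One texel of the lines texture: indices of the two endpoints and the
    splitter point, plus the packed flags value."""
    return (i_end0, i_end1, i_splitter, flags)
```

Packing both scalar attributes into the spare channels of each 4-tuple is what keeps the fragment shader at one lookup per point and one per line segment.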
5. Results
We tested our system on two different computers: a 4GHz Intel Core i7-6700K, 16GB RAM, AMD Radeon R9 M395X running MacOS (SYS1), and a 3.5GHz Core i7-3770K, 16GB RAM, GeForce GTX 1080Ti running Windows (SYS2). We used three different data layers for our experiments: (1) a heightmap for creating the terrain mesh, (2) a terrain texture provided by Swisstopo, and (3) a street dataset. We used two street datasets: Open Street Map (DS1) and Swisstopo VECTOR25 (DS2). Tab. 1 contains more information about these datasets. Since the two are hardly distinguishable visually, only screenshots of DS1 are depicted in this section. The resolution of the frame buffer was 1920 × 1080.

Our software can successfully load a large-scale vector map dataset and project it onto a large terrain. It delivers pixel-precise line rendering without any pixelation artifacts, irrespective of the zoom factor, and without any recognizable aliasing artifacts. Although we used a simple technique with no LOD management for rendering the 3D terrain, our implementation is independent of the terrain rendering itself and blends the lines seamlessly with the textured terrain. Fig. 10 presents multiple screenshots at various zoom levels, displaying the whole street network of Switzerland at the highest LOD. As we zoom in, the lines become less cluttered and their style becomes clearly recognizable as soon as the line width covers several pixels.

Figure 9: Mipmap pyramid of the line style texture. The texture contains nine line styles (unknown, motorway, trunk, primary, secondary, tertiary, unclassified, residential, living_street). In the magnified section, the pixels can be distinguished by their white borders. In level eight, each row of the texture, containing two pixels, represents a line style.

Table 1: Information on datasets DS1 (Open Street Map) and DS2 (Swisstopo VECTOR25), including the number of points, segments (segs), all possible segments (ap-segs), segments assigned to the quadtrees (qt-segs), segments assigned to the quadtrees when dynamic thickness is enabled (dt-qt-segs), and the memory they require.
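The nine-level limit of the line style texture in Fig. 9 follows directly from the block geometry: stacked styles start bleeding into each other once a style block is downsampled below one texel in height. A small sketch of that arithmetic (our own illustration, assuming the 512 × 256 per-style blocks described in Section 4):

```python
import math


def safe_mipmap_levels(style_height_px: int) -> int:
    """Number of mipmap levels (including level 0) that can be generated
    before a style block shrinks below one texel in height, at which point
    downsampling would average the colors of vertically adjacent styles."""
    return int(math.log2(style_height_px)) + 1


def style_width_at_level(style_width_px: int, level: int) -> int:
    """Width in texels of one style row at a given mipmap level."""
    return max(1, style_width_px >> level)
```

With 256-pixel-high blocks this gives nine levels, and at level eight a 512-pixel-wide style row is reduced to two texels, matching the two-pixel rows visible in Fig. 9.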
Our locally-adaptive line simplification technique, LOCALIS, demonstrates that real-time line simplification can be applied to interactive vector map visualization applications. The approach includes a pre-processing effort, which is performed once offline, and a line data storage cost for managing all line segments that can possibly appear when refining the lines. The impact of this cost is twofold. First, it requires some extra memory to store the line segments, which, however, has not been a bottleneck in our current implementation. Second, handling large-scale vector map line data has only recently become possible, and doubling the number of line segments may thus have a limiting effect on the performance of such techniques. Performance is discussed in the following section. The real-time adaptive line simplification of LOCALIS is shown in Fig. 11 by highlighting areas of line refinement. Using a lens tool, outlined by the red circle, the polylines are interactively refined inside and simplified outside. A recorded interactive demonstration is supplied in the accompanying supplemental video.
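The "roughly doubled" storage cost mentioned above can be made concrete. Applying the Douglas-Peucker splitting principle exhaustively to a polyline of n points yields the n−1 original segments plus n−2 split chords, i.e. 2n−3 segments in total. A minimal sketch of this enumeration (our own illustration, not the paper's BLG-tree construction code):

```python
def all_possible_segments(points):
    """Enumerate every segment (as an index pair) that can appear at any LOD
    when a polyline is simplified with the Douglas-Peucker principle: each
    chord is recursively split at the point with the largest perpendicular
    distance to it."""
    segments = []

    def perp_dist(p, a, b):
        (px, py), (ax, ay), (bx, by) = p, a, b
        dx, dy = bx - ax, by - ay
        if dx == 0 and dy == 0:  # degenerate chord: fall back to point distance
            return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
        return abs(dy * (px - ax) - dx * (py - ay)) / (dx * dx + dy * dy) ** 0.5

    def split(i, j):
        segments.append((i, j))
        if j - i < 2:
            return  # an original segment: nothing left to split
        # DP splitter: intermediate point farthest from the chord (i, j)
        k = max(range(i + 1, j),
                key=lambda m: perp_dist(points[m], points[i], points[j]))
        split(i, k)
        split(k, j)

    split(0, len(points) - 1)
    return segments
```

Because every split adds exactly one internal chord, the count is linear in n, which is why the extra memory stays manageable in practice.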
We identified three parts of LOCALIS that chiefly affect the performance: the dynamic line thickness, the visibility evaluation, and the increase in the number of lines as a result of creating all possible lines. We designed four experiments to demonstrate the influence of each part: (1) AVD: All possible lines, Visibility check, and Dynamic line thickness; (2) AVS: All possible lines, Visibility check, and Static line thickness; (3) ANVS: All possible lines, No Visibility check, and Static line thickness; (4) ONVS: Original line segments, No Visibility check, and Static line thickness. AVD has all features of LOCALIS, while ONVS is equivalent to just rendering the lines. This supports a performance comparison of the LOCALIS-specific features against a baseline configuration.

We ran our performance tests on datasets DS1 and DS2; detailed information about these datasets is given in Tab. 1. We used the datasets without preprocessing since they did not have connections at middle points of the lines. In the left column of Fig. 12, four snapshots at different zoom levels are depicted. In the right column, the corresponding heatmaps show the number of distance-to-line tests per fragment. In Fig. 13, the time needed to render the terrain and the street lines is shown in milliseconds as bar charts. For each zoom level, four experiments are done with two datasets on two machines, creating 16 bars.

In our case, run-time performance depends on three factors: (1) the number of lines inspected per fragment and in total each frame, (2) the memory locality of the data on the GPU, and (3) the LOD threshold. The performance is higher when the majority of the fragments are covered by less populated nodes of the quadtrees, a limited part of the memory is accessed (e.g., in close-up views), and lower LODs are used. The first factor is the most influential. In Fig. 13, we can see the effect of the second factor at Zoom 8, where most of the fragments are covered by dense nodes but the performance is higher than in Zoom 2. The effect of the third factor is not as significant as the other two, since all relevant line segments are queried regardless of the LOD threshold and only the number of immediately discarded line segments increases at lower LODs (see Section 3).

Based on the results of ONVS, our base vector line renderer performs at the level of a state-of-the-art technique [TBP16, TBP18] and scales as expected when comparing to ANVS, which deals with twice the number of line segments. The effect of the visibility check is negligible according to AVS. The results of AVD show that the effect of dynamic line thickness on performance is significant. This is due to the widened line segments overlapping extensively at curves, resulting in dense quadtree nodes. This negative impact is subtle in Zoom 6, where no highways or major roads are visible.

To overcome aliasing artifacts we employed two techniques, as outlined in Section 4. Fig. 14 shows a view with different types of roads. The magnified insets demonstrate that both techniques successfully smooth the outer edge as well as the inner part of the styled lines and prevent any aliasing artifacts. The smoothed staircase on the outer edge is achieved by precise calculation of the pixels' area coverage, and the smooth interior is obtained by querying the line style texture at an appropriate mipmap level. The antialiasing is consistent across lines with different angles and types.

Figure 10: Screenshots of our system visualizing the whole street network of Switzerland over a terrain mesh at full detail. The LOD is set to the highest for demonstration purposes.

Figure 11: Left: Full-detail lines drawn without any simplification. Top row: Sequential snapshots of a line-refinement lens moving interactively over the terrain, with all lines outside the lens being heavily simplified. Bottom row: Close-up of the lens in each snapshot, magnifying the refined and simplified lines inside and outside of the lens, respectively.
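The area-coverage alpha used for edge antialiasing can be sketched as follows. This is a common analytic approximation based on the signed distance from the pixel center to the line's edge, not necessarily the paper's exact computation; all names are our own:

```python
def line_coverage(frag_center, a, b, half_width, pixel_size=1.0):
    """Approximate fraction of a pixel covered by a wide line segment from
    a to b, used directly as the fragment's alpha value. The distance from
    the line edge is mapped linearly over one pixel footprint and clamped
    to [0, 1]."""
    (px, py), (ax, ay), (bx, by) = frag_center, a, b
    dx, dy = bx - ax, by - ay
    len2 = dx * dx + dy * dy
    # parameter of the closest point on the segment, clamped to its ends
    t = 0.0 if len2 == 0 else max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / len2))
    cx, cy = ax + t * dx, ay + t * dy
    d = ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5
    # > half_width + 0.5px: outside (0); < half_width - 0.5px: inside (1)
    return max(0.0, min(1.0, (half_width - d) / pixel_size + 0.5))
```

Pixels fully inside the line get alpha 1, pixels fully outside get 0, and pixels straddling the edge get a fractional alpha, which is what removes the staircase after blending with the terrain color.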
6. Conclusions
In this paper we have presented LOCALIS, our new locally-adaptive line simplification technique for simple polylines based on the DP algorithm. Our technique creates every possible line segment that can emerge during line refinement using BLG-trees and makes them individually processable by attributing them. LOCALIS exploits the direct access to line data on the GPU as used by deferred vector map rendering and decides, based on a given LOD threshold, whether a line segment should be displayed in the last step of the rendering pipeline. Our implementation shows that LOCALIS can always produce and display a pixel-precise and valid simplified representation of the lines, regardless of the distribution of the required LOD over the screen. It can simplify any part of a line while keeping the details of another part.

We integrated LOCALIS with a state-of-the-art deferred vector map rendering algorithm using data structures that serve both algorithms. We have tested our prototype on the whole street network of Switzerland rendered on top of a 3D terrain mesh. In this prototype, the user can manipulate the LOD by moving the camera closer to or farther from the terrain in an arbitrary perspective, or by activating a moving virtual line-refinement lens over the terrain. Our implementation shows that line simplification can be done interactively, at the cost of an increased number of line segments to be maintained in our data structure.

In our line simplification approach, we took the characteristics of the target 3D visualization system into account. Knowing that the lines will be displayed in an interactive 3D environment implies that the lines must be processed in real time such that the camera can move freely. Our technique enables applications to dynamically manage the LOD of polylines interactively, based on the situation and user demands, without needing other proxy data structures such as auxiliary geometry, image textures, or pre-calculated datasets with discrete LODs.

Regarding scalability, the bottleneck is the amount of available memory. For larger datasets, it would not be possible to load the whole data structures, and further measures would be required for loading the parts that are needed based on the viewing angle and LOD. Additionally, the hierarchical data structures could limit the run-time search for lines that cover a pixel, if the depth at which a line is stored is chosen based on the LODs that contain that line. These two points, as well as designing locally-adaptive LOD management algorithms for other types of vector data such as closed polygons and meshes, can be addressed in future work.

Figure 12: Snapshots at different zoom levels: the left column shows the rendered lines at the highest LOD, the right column shows heatmaps indicating the number of distance-to-line tests per pixel.
Acknowledgements
The authors want to thank the Swiss Federal Office of Topography Swisstopo for providing the Swiss VECTOR25 and SwissTLM data sets, as well as the OpenStreetMap Foundation for access to their data. This project was partially supported by a Swiss National Science Foundation (SNSF) research grant (project no. 200021_169628).

Figure 13: Results of the four experiments with LOCALIS (AVD: all possible lines, visibility check, dynamic line thickness; AVS: all possible lines, visibility check, static line thickness; ANVS: all possible lines, no visibility check, static line thickness; ONVS: original lines, no visibility check, static line thickness) rendering two datasets (DS1 and DS2) on two machines (SYS1 and SYS2) at four zoom levels, in milliseconds.

Figure 14: Two regions of the output image (left) are magnified (right). The smooth borders are visible around, and the smoothed-out interior within, the styled lines.
References

[Bea91] Beard K.: Constraints on rule formation. Map Generalization: Making Rules for Knowledge Representation (1991), 121–135.
[BM91] Buttenfield B. P., McMaster R. B.: Map Generalization: Making Rules for Knowledge Representation. Longman Scientific & Technical, New York, 1991.
[BPS04] Bao X., Pajarola R., Shafae M.: SMART: An efficient technique for massive terrain visualization from out-of-core. In Proceedings Vision, Modeling and Visualization (2004), pp. 413–420.
[BW88] Brassel K. E., Weibel R.: A review and conceptual framework of automated map generalization. International Journal of Geographical Information Systems 2, 3 (1988), 229–244.
[DVS03] Dachsbacher C., Vogelgsang C., Stamminger M.: Sequential point trees. ACM Transactions on Graphics 22, 3 (July 2003), 657–662.
[DZY08] Dai C., Zhang Y., Yang J.: Rendering 3D vector data using the theory of stencil shadow volumes. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 37 (2008), 643–647.
[FEP18] Frasson A., Engel T. A., Pozzer C. T.: Efficient screen-space rendering of vector features on virtual terrains. In Proceedings ACM SIGGRAPH Symposium on Interactive 3D Graphics and Games (2018), pp. 7:1–10.
[FM87] Fisher P. F., Mackaness W. A.: Are cartographic expert systems possible? In Proceedings AutoCarto (1987), vol. 8, Citeseer, pp. 530–534.
[FSK07] Foerster T., Stoter J., Köbben B.: Towards a formal classification of generalization operators. In Proceedings International Cartographic Conference (2007).
[GM16] Gould N., Mackaness W.: From taxonomies to ontologies: Formalizing generalization knowledge for on-demand mapping. Cartography and Geographic Information Science 43, 3 (2016), 208–222.
[HSH10] Hu L., Sander P. V., Hoppe H.: Parallel view-dependent level-of-detail control. IEEE Transactions on Visualization and Computer Graphics 16, 5 (September 2010), 718–728.
[HW07] Harrie L., Weibel R.: Modelling the overall process of generalisation. In Generalisation of Geographic Information (2007), Elsevier, pp. 67–87.
[KD02] Kersting O., Döllner J.: Interactive 3D visualization of vector data in GIS. In Proceedings ACM SIGSPATIAL International Conference on Advances in GIS (2002), pp. 107–112.
[KDE05] Kulik L., Duckham M., Egenhofer M.: Ontology-driven map generalization. Journal of Visual Languages & Computing 16, 3 (2005), 245–267.
[LP01] Lindstrom P., Pascucci V.: Visualization of large terrains made easy. In Proceedings IEEE Visualization (2001), Computer Society Press, pp. 363–370.
[LRC*03] Luebke D., Reddy M., Cohen J. D., Varshney A., Watson B., Huebner R.: Level of Detail for 3D Graphics. Morgan Kaufmann Publishers, San Francisco, California, 2003.
[LRD*99] Lamy S., Ruas A., Demazeau Y., Jackson M., Mackaness W., Weibel R.: The application of agents in automated map generalisation. In Proceedings Conference International Cartographic Association (1999), pp. 14–27.
[MS92] McMaster R. B., Shea K. S.: Generalization in Digital Cartography. Association of American Geographers, 1992.
[PD04] Pajarola R., DeCoro C.: Efficient implementation of real-time view-dependent multiresolution meshing. IEEE Transactions on Visualization and Computer Graphics 10, 3 (May/June 2004), 353–368.
[PG07] Pajarola R., Gobbetti E.: Survey on semi-regular multiresolution models for interactive terrain rendering. The Visual Computer 23, 8 (2007), 583–605.
[QWS*11] Qiao Z., Weng J., Sui Z., Cai H., Zhang X.: A rapid visualization method of vector data over 3D terrain. In Geoinformatics (2011), IEEE, pp. 1–5.
[RBS11] Roth R. E., Brewer C. A., Stryker M. S.: A typology of operators for maintaining legible map designs at multiple scales. Cartographic Perspectives, 68 (2011), 29–64.
[RRB11] Revell P., Regnauld N., Bulbrooke G.: OS VectorMap District: Automated generalisation, text placement and conflation in support of making public data public. In International Cartographic Conference (2011), pp. 3–8.
[SC06] Shi W., Cheung C.: Performance evaluation of line simplification algorithms for vector generalization. The Cartographic Journal 43, 1 (March 2006), 27–44.
[SLL08] Sun M., Lv G., Lei C.: Large-scale vector data displaying for interactive manipulation in 3D landscape map. The International Archives of the Photogrammetry, Remote Sensing and Spatial Information Sciences 37 (2008), 507–511.
[TBP16] Thöny M., Billeter M., Pajarola R.: Deferred vector map visualization. In Proceedings ACM SIGGRAPH ASIA Symposium on Visualization (2016), pp. 16:1–8.
[TBP18] Thöny M., Billeter M., Pajarola R.: Large-scale pixel-precise deferred vector maps. Computer Graphics Forum 37, 1 (February 2018), 338–349.
[vKLW18] van Kreveld M., Löffler M., Wiratma L.: On optimal polyline simplification using the Hausdorff and Fréchet distance. arXiv preprint arXiv:1803.03550 (2018).
[VMPR06] Viaña R., Magillo P., Puppo E., Ramos P. A.: Multi-VMap: A multi-scale model for vector maps. GeoInformatica 10, 3 (September 2006), 359–394.
[VO92] van Oosterom P.: A storage structure for a multi-scale database: The reactive-tree. Computers, Environment and Urban Systems 16, 3 (1992), 239–247.
[VO95] van Oosterom P.: The GAP-tree, an approach to 'on-the-fly' map generalization of an area partitioning. GIS and Generalization, Methodology and Practice (1995), 120–132.
[VOVDB89] van Oosterom P., van den Bos J.: An object-oriented approach to the design of geographic information systems. In Design and Implementation of Large Spatial Databases (1989), vol. 409 of Lecture Notes in Computer Science, Springer, pp. 253–269.
[WB08] Weibel R., Burghardt D.: On-the-fly generalization. In Encyclopedia of GIS (2008), Shekhar S., Xiong H., (Eds.), Springer, pp. 339–344.
[WBW10] Wilson D., Bertolotto M., Weakliam J.: Personalizing map content to improve task completion efficiency. International Journal of Geographical Information Science 24, 5 (2010), 741–760.
[WKW*03] Wartell Z., Kang E., Wasilewski T., Ribarsky W., Faust N.: Rendering vector data over global, multi-resolution 3D terrain. In Proceedings Eurographics Symposium on Data Visualization (2003), pp. 213–222.
[WLB09] Wang X., Liu J., Bi J.: Rendering of vector data on 3D virtual landscapes. In Proceedings IEEE International Conference on Information Science and Engineering (2009), pp. 2125–2128.
[WSFL10] Wenbin S., Shigang S., Feng C., Lichao Z.: Geometry-based mapping of vector data and DEM based on hierarchical longitude/latitude grids. In Geoscience and Remote Sensing (IITA-GRS) (2010), vol. 1, IEEE, pp. 215–218.
[YZK*10] Yang L., Zhang L., Kang Z., Xiao Z., Peng J., Zhang X., Liu L.: An efficient rendering method for large vector data on large terrain models. Science China Information Sciences 53, 6 (2010), 1122–1129.
[ZDY*18] Zhang D., Ding M., Yang D., Liu Y., Fan J., Shen H. T.: Trajectory simplification: An experimental study and quality analysis. Proceedings International Conference on Very Large Data Bases 11, 9 (2018), 934–946.
[Zor91] Zoraster S.: Expert systems and the map label placement problem. Cartographica: The International Journal for Geographic Information and Geovisualization 28, 1 (1991), 1–9.