
Publication


Featured research published by Jan-Henrik Haunert.


Geoinformatica | 2008

Area Collapse and Road Centerlines based on Straight Skeletons

Jan-Henrik Haunert; Monika Sester

Skeletonization of polygons is a technique often applied to problems in cartography and geographic information science. In particular, it is needed for generalization tasks such as the collapse of small or narrow areas that are negligible at a certain scale. Different skeleton operators can be used for such tasks. One of them is the straight skeleton, which was rediscovered by computer scientists several years ago after decades of neglect; its full range of practicability and its benefits for cartographic applications have not yet been revealed. Based on the straight skeleton, an area collapse that preserves topological constraints as well as a partial area collapse can be performed. We also show an automatic method for deriving road centerlines from a cadastral dataset that exploits special characteristics of the straight skeleton.
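
The collapse-to-centerline idea can be illustrated in the one case where the straight skeleton has a simple closed form: an axis-aligned rectangle. The sketch below is only an illustration of that special case, not the authors' method, which handles general cadastral polygons.

```python
# Straight skeleton of an axis-aligned w-by-h rectangle (w <= h):
# all four edges move inward at the same speed, the wavefronts from the
# short sides meet after distance w/2, and a single interior skeleton
# edge remains: exactly the centerline a collapse operation would keep.

def rectangle_centerline(w: float, h: float):
    """Interior straight-skeleton edge of the rectangle (0,0)-(w,h), w <= h."""
    if w > h:
        raise ValueError("expects the short side first (w <= h)")
    half = w / 2.0
    # The edge is parallel to the long side, inset by half the width
    # from both short ends of the rectangle.
    return ((half, half), (half, h - half))

print(rectangle_centerline(2.0, 10.0))  # ((1.0, 1.0), (1.0, 9.0))
```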


IEEE Transactions on Visualization and Computer Graphics | 2011

Drawing Road Networks with Focus Regions

Jan-Henrik Haunert; Leon Sering

Mobile users of maps typically need detailed information about their surroundings plus some context information about remote places. To avoid the map becoming too dense in places, cartographers have designed mapping functions that enlarge a user-defined focus region; such functions are sometimes called fish-eye projections. The extra map space occupied by the enlarged focus region is compensated by distorting other parts of the map. We argue that, in a map showing a network of roads relevant to the user, distortion should preferably take place in those areas where the network is sparse. Therefore, we do not apply a predefined mapping function. Instead, we consider the road network as a graph whose edges are the road segments. We compute a new spatial mapping with a graph-based optimization approach, minimizing the sum of squared distortions at edges. Our optimization method is based on a convex quadratic program (CQP); CQPs can be solved in polynomial time. Important requirements on the output map are expressed as linear inequalities; in particular, we show how to forbid edge crossings. We have implemented our method in a prototype tool. For instances of different sizes, our method generated output maps that were far less distorted than those generated with a predefined fish-eye projection. Future work is needed to automate the selection of roads relevant to the user. Furthermore, we aim to develop fast heuristics for application in real-time systems.


Computers & Geosciences | 2009

Constrained set-up of the tGAP structure for progressive vector data transfer

Jan-Henrik Haunert; Arta Dilo; Peter van Oosterom

A promising approach to transmitting a vector map from a server to a mobile client is to send a coarse representation first, which is then incrementally refined. We consider the problem of defining a sequence of such increments for areas of different land-cover classes in a planar partition. In order to transmit well-generalised datasets, we propose a two-stage method: first, we create a generalised representation from a detailed dataset, using an optimisation approach that satisfies certain cartographic constraints; second, we define a sequence of basic merge and simplification operations that transforms the most detailed dataset gradually into the generalised dataset. The obtained sequence of gradual transformations is stored without geometrical redundancy in a structure that builds on the previously developed tGAP (topological Generalised Area Partitioning) structure. This structure and the algorithm for intermediate levels of detail (LoD) have been implemented in an object-relational database and tested for land-cover data from the official German topographic dataset ATKIS, from the source scale 1:50,000 to the target scale 1:250,000. Results of these tests allow us to conclude that the data at the lowest LoD and at intermediate LoDs are well generalised. With specialised heuristics, the optimisation method copes with large datasets; the tGAP structure allows users to efficiently query and retrieve a dataset at a specified LoD. Data are sent progressively from the server to the client: first a coarse representation is sent, which is refined until the requested LoD is reached.
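
The progressive-transfer idea can be sketched with a drastically simplified stand-in for the tGAP structure: generalisation is recorded as a sequence of merge operations, and the client refines by undoing them. The region names and merges below are hypothetical, and real tGAP stores geometry and topology, not just labels.

```python
# Simplified sketch in the spirit of the tGAP structure (hypothetical data):
# applying the first k merges to the detailed partition yields LoD k;
# k = len(merges) is the coarsest state, k = 0 the most detailed.

detailed = {"meadow1", "meadow2", "shrubs", "lake"}

# Each merge: (child_a, child_b, parent): two regions fuse into one.
merges = [
    ("meadow1", "meadow2", "grassland"),
    ("grassland", "shrubs", "vegetation"),
]

def partition_at(k):
    """Set of regions after applying the first k merges to the detailed data."""
    regions = set(detailed)
    for a, b, parent in merges[:k]:
        regions -= {a, b}
        regions.add(parent)
    return regions

# Progressive transfer: the server sends the coarsest partition first, then
# streams the merges in reverse so the client can split regions incrementally.
for k in range(len(merges), -1, -1):
    print(sorted(partition_at(k)))
```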


Archive | 2014

Integrating and Generalising Volunteered Geographic Information

Monika Sester; Jamal Jokar Arsanjani; Ralf Klammer; Dirk Burghardt; Jan-Henrik Haunert

The availability of spatial data on the web has greatly increased through the availability of user-generated community data and geosensor networks. The integration of such multi-source data is providing promising opportunities, as integrated information is richer than can be found in only one data source, but also poses new challenges due to the heterogeneity of the data, the differences in quality and in respect of tag-based semantic modelling. The chapter describes approaches for the integration of official and informal sources, and discusses the impact of integrating user-generated data on automated generalisation and visualisation.


International Journal of Geographical Information Science | 2010

Area aggregation in map generalisation by mixed-integer programming

Jan-Henrik Haunert; Alexander Wolff

Topographic databases normally contain areas of different land cover classes, commonly defining a planar partition, that is, gaps and overlaps are not allowed. When reducing the scale of such a database, some areas become too small for representation and need to be aggregated. This unintentionally but unavoidably results in changes of classes. In this article we present an optimisation method for the aggregation problem. This method aims to minimise changes of classes and to create compact shapes, subject to hard constraints ensuring aggregates of sufficient size for the target scale. To quantify class changes we apply a semantic distance measure. We give a graph theoretical problem formulation and prove that the problem is NP-hard, meaning that we cannot hope to find an efficient algorithm. Instead, we present a solution by mixed-integer programming that can be used to optimally solve small instances with existing optimisation software. In order to process large datasets, we introduce specialised heuristics that allow certain variables to be eliminated in advance and a problem instance to be decomposed into independent sub-instances. We tested our method for a dataset of the official German topographic database ATKIS with input scale 1:50,000 and output scale 1:250,000. For small instances, we compare results of this approach with optimal solutions that were obtained without heuristics. We compare results for large instances with those of an existing iterative algorithm and an alternative optimisation approach by simulated annealing. These tests allow us to conclude that, with the defined heuristics, our optimisation method yields high-quality results for large datasets in modest time.
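
The class-change objective can be illustrated on a deliberately tiny instance. The areas, classes, adjacencies, and semantic distances below are made up, and the sketch brute-forces the assignments instead of solving the paper's mixed-integer program; it also ignores compactness and treats the size threshold only implicitly, so it shows the objective, not the method.

```python
# Toy aggregation objective (not the authors' MIP): each under-sized area
# must be absorbed by one of its neighbours, taking that neighbour's class;
# we enumerate all assignments and minimise area-weighted semantic cost.
from itertools import product

THRESHOLD = 5
size = {"a": 10, "b": 2, "c": 10, "d": 3, "e": 10}
cls = {"a": "forest", "b": "meadow", "c": "lake",
       "d": "wetland", "e": "settlement"}
nbrs = {"b": ["a", "c"], "d": ["c", "e"]}     # neighbours of the small areas
dist = {("meadow", "forest"): 1, ("meadow", "lake"): 3,
        ("wetland", "lake"): 2, ("wetland", "settlement"): 5}

small = [a for a in size if size[a] < THRESHOLD]   # ["b", "d"]

best, best_cost = None, float("inf")
for choice in product(*(nbrs[a] for a in small)):
    # Cost of an assignment: area of the small region times the semantic
    # distance between its old class and the absorbing neighbour's class.
    cost = sum(size[a] * dist[(cls[a], cls[n])] for a, n in zip(small, choice))
    if cost < best_cost:
        best, best_cost = dict(zip(small, choice)), cost
print(best, best_cost)
```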


IEEE Transactions on Visualization and Computer Graphics | 2012

Algorithms for Labeling Focus Regions

Martin Fink; Jan-Henrik Haunert; André Schulz; Joachim Spoerhase; Alexander Wolff

In this paper, we investigate the problem of labeling point sites in focus regions of maps or diagrams. This problem occurs, for example, when the user of a mapping service wants to see the names of restaurants or other POIs in a crowded downtown area while keeping an overview of a larger area. Our approach is to place the labels at the boundary of the focus region and connect each site with its label by a linear connection, called a leader. In this way, we move labels from the focus region to the less valuable context region surrounding it. In order to make the leader layout easy to read, we present algorithms that rule out crossings between leaders and optimize other characteristics such as total leader length and distance between labels. This yields a new variant of the boundary labeling problem, which has been studied in the literature. Unlike in traditional boundary labeling, where leaders are usually schematized polylines, we focus on leaders that are either straight-line segments or Bézier curves. Further, we present algorithms that, given the sites, find a position of the focus region that optimizes the above characteristics. We also consider a variant of the problem in which there are more sites than space for labels. In this situation, we assume that the sites are prioritized by the user. Alternatively, we take a new facility-location perspective, which yields a clustering of the sites, and label one representative of each cluster. If the user wishes, we apply our approach to the sites within a cluster, giving details on demand.
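
One useful structural fact about straight-line leaders can be demonstrated directly: a matching of sites to boundary label ports that minimises total leader length is crossing-free, because uncrossing two crossing leaders always shortens them (triangle inequality). The sites and ports below are hypothetical, and brute force over permutations stands in for the paper's algorithms.

```python
# Brute-force one-sided boundary labeling with straight leaders: match
# each site to a label port on the right boundary minimising total length.
from itertools import permutations
from math import dist

sites = [(1, 3), (2, 1), (3, 4), (2, 2)]      # hypothetical POIs in the focus region
ports = [(5, 1), (5, 2), (5, 3), (5, 4)]      # label positions on the boundary

def total_length(assign):
    return sum(dist(s, ports[p]) for s, p in zip(sites, assign))

best = min(permutations(range(len(ports))), key=total_length)
leaders = [(s, ports[p]) for s, p in zip(sites, best)]

def ccw(a, b, c):
    return (b[0] - a[0]) * (c[1] - a[1]) - (b[1] - a[1]) * (c[0] - a[0])

def properly_cross(s1, s2):
    """True iff two segments cross in their interiors."""
    (p1, p2), (q1, q2) = s1, s2
    return (ccw(q1, q2, p1) * ccw(q1, q2, p2) < 0
            and ccw(p1, p2, q1) * ccw(p1, p2, q2) < 0)

for s, p in leaders:
    print(f"site {s} -> label port {p}")
```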


agile conference | 2009

Matching River Datasets of Different Scales

Birgit Kieler; Wei Huang; Jan-Henrik Haunert; Jie Jiang

In order to ease the propagation of updates between geographic datasets of different scales and to support multi-scale analyses, different datasets need to be matched, that is, objects that represent the same entity in the physical world need to be identified. We propose a method for matching datasets of river systems that were acquired at different scales. This task is related to the problem of matching networks of lines, for example road networks. However, we also take into account that rivers may be represented by polygons. The geometric dimension of a river object may depend, for example, on the width of the river and the scale.


advances in geographic information systems | 2006

Generalization of land cover maps by mixed integer programming

Jan-Henrik Haunert; Alexander Wolff

We present a novel method for the automatic generalization of land cover maps. A land cover map is composed of areas that collectively form a tessellation of the plane, and each area is assigned to a land cover class such as lake, forest, or settlement. Our method aggregates areas into contiguous regions of equal class and of size greater than a user-defined threshold. To achieve this goal, some areas need to be enlarged at the expense of others. Given a function that defines costs for the transformation between pairs of classes, our method guarantees to return a solution of minimal total cost. The method is based on a mixed integer program (MIP). To process maps with more than 50 areas, heuristics are introduced that lead to an alternative MIP formulation. The effects of the heuristics on the obtained solution and the computation time are discussed. The methods were tested using real data from the official German topographic data set (ATKIS) at scales 1:50,000 and 1:250,000.


advances in geographic information systems | 2012

An algorithm for map matching given incomplete road data

Jan-Henrik Haunert; Benedikt Budig

We consider the problem of matching a GPS trajectory with a road dataset in which some roads are missing. To solve this problem, we extend a map-matching algorithm by Newson and Krumm (Proc. ACM GIS 2009, pp. 336--343) that is based on a hidden Markov model and a discrete set of candidate matches for each point of the trajectory. We introduce an additional off-road candidate for each point of the trajectory. The output path is determined by selecting one candidate for each point of the trajectory and connecting the selected candidates via shortest paths, which preferably lie in the road network but, if off-road candidates are selected, may also include off-road sections. We discuss experiments with GPS tracks of pedestrians.
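
The candidate-selection step is a shortest path through layers of candidates, which Viterbi-style dynamic programming solves. In the sketch below the candidate names and all emission and transition costs are made up; in the actual method they derive from GPS error and route-length differences, and the off-road state is what the paper adds to Newson and Krumm's model.

```python
# Minimal Viterbi sketch of map matching with an off-road candidate per
# GPS point (hypothetical costs): find the cheapest candidate sequence.

OFF_ROAD_PENALTY = 5.0  # assumed constant emission cost for off-road states

# candidates[t] = list of (name, emission_cost) for trajectory point t
candidates = [
    [("r1", 1.0), ("r2", 4.0), ("off", OFF_ROAD_PENALTY)],
    [("r1", 6.0), ("r3", 7.0), ("off", OFF_ROAD_PENALTY)],
    [("r3", 1.5), ("off", OFF_ROAD_PENALTY)],
]

def transition(a, b):
    """Made-up transition cost: zero to stay on the same road, small to
    switch roads, larger whenever the off-road state is involved."""
    if "off" in (a, b):
        return 2.0
    return 0.0 if a == b else 1.0

def viterbi(candidates):
    # best[name] = (cheapest cost ending in `name`, candidate path so far)
    best = {name: (cost, [name]) for name, cost in candidates[0]}
    for layer in candidates[1:]:
        nxt = {}
        for name, emit in layer:
            prev = min(best.items(),
                       key=lambda kv: kv[1][0] + transition(kv[0], name))
            cost = prev[1][0] + transition(prev[0], name) + emit
            nxt[name] = (cost, prev[1][1] + [name])
        best = nxt
    return min(best.values(), key=lambda v: v[0])

cost, path = viterbi(candidates)
print(path, cost)
```

Here the noisy middle point is cheapest to explain by staying on road r1 rather than detouring off-road, so the off-road state is only chosen when the network genuinely cannot explain the track.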


advances in geographic information systems | 2010

Optimal and topologically safe simplification of building footprints

Jan-Henrik Haunert; Alexander Wolff

We present an optimization approach to simplify sets of building footprints represented as polygons. We simplify each polygonal ring by selecting a subsequence of its original edges; the vertices of the simplified ring are defined by intersections of consecutive (and possibly extended) edges in the selected sequence. Our aim is to minimize the number of all output edges subject to a user-defined error tolerance. Since we earlier showed that the problem is NP-hard when requiring non-intersecting simple polygons as output, we cannot hope for an efficient, exact algorithm. Therefore, we present an efficient algorithm for a relaxed problem and an integer program (IP) that allows us to solve the original problem with existing software. Our IP is large, since it has O(m^6) constraints, where m is the number of input edges. In order to keep the running time small, we first consider a subset of only O(m) constraints. The choice of the constraints ensures some basic properties of the solution. Constraints that were neglected are added during optimization whenever they become violated by a new solution encountered. Using this approach, we simplified a set of 144 buildings with a total of 2056 edges in 4.1 seconds on a standard desktop PC; the simplified building set contained 762 edges. During optimization, the number of constraints increased by a mere 13%. We also show how to apply cartographic quality measures in our method and discuss their effects on examples.
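
A much simpler cousin of this objective, minimising the number of output elements subject to an error tolerance, is the classic shortcut-graph approach for an open polyline: keep the fewest vertices such that every skipped vertex lies within tolerance of the shortcut, found by BFS. This is vertex selection, not the paper's edge-subsequence IP, and the input polyline is made up.

```python
# Minimum-segment polyline simplification via the shortcut graph: a
# shortcut (i, j) is valid if all skipped vertices lie within eps of the
# segment; BFS over valid shortcuts yields the fewest output segments.
from collections import deque

def point_segment_dist(p, a, b):
    (px, py), (ax, ay), (bx, by) = p, a, b
    dx, dy = bx - ax, by - ay
    if dx == dy == 0:
        return ((px - ax) ** 2 + (py - ay) ** 2) ** 0.5
    t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / (dx * dx + dy * dy)))
    cx, cy = ax + t * dx, ay + t * dy
    return ((px - cx) ** 2 + (py - cy) ** 2) ** 0.5

def simplify(points, eps):
    n = len(points)
    valid = lambda i, j: all(
        point_segment_dist(points[k], points[i], points[j]) <= eps
        for k in range(i + 1, j))
    prev = {0: None}                  # BFS tree over valid shortcuts
    queue = deque([0])
    while queue:
        i = queue.popleft()
        if i == n - 1:
            break
        for j in range(i + 1, n):
            if j not in prev and valid(i, j):
                prev[j] = i
                queue.append(j)
    path, v = [], n - 1
    while v is not None:              # walk back from the last vertex
        path.append(v)
        v = prev[v]
    return [points[i] for i in reversed(path)]

line = [(0, 0), (1, 0.1), (2, -0.1), (3, 0), (3.1, 1), (3, 2)]
print(simplify(line, 0.2))
```

Because BFS explores shortcuts level by level, the first time the last vertex is reached the path uses the minimum possible number of segments.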

Collaboration


Dive into Jan-Henrik Haunert's collaborations.

Top Co-Authors

Alexander Wolff (Eindhoven University of Technology)

Benjamin Niedermann (Karlsruhe Institute of Technology)

Martin Fink (University of California)

Andreas Gemsa (Karlsruhe Institute of Technology)