Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christopher B. Jones is active.

Publication


Featured research published by Christopher B. Jones.


Geoinformatica | 1998

Conflict Reduction in Map Generalization Using Iterative Improvement

J. Mark Ware; Christopher B. Jones

Map data are usually derived from a source that is based on a particular scale of representation and hence are subject to a particular degree of map generalization. Attempts to display data at scales smaller than the source can result in spatial conflict, whereby map symbols become too close or overlap. Several map generalization operators may be applied to resolve the problem, including displacement. In this paper we address the problem of displacing multiple map objects in order to resolve graphic conflict. Each of n objects is assigned k candidate positions into which it can possibly move, resulting in a total of k^n map realizations. The assumption is that some of these realizations will contain a reduced level of conflict. Generating and evaluating all realizations is, however, not practical, even for relatively small values of n and k. We present two iterative improvement algorithms, which limit the number of realizations processed. The first algorithm adopts a steepest gradient descent approach; the second uses simulated annealing. They are tested on a number of data sets; while both are successful in reducing conflict with a limited number of realizations examined, the simulated annealing approach is superior with regard to the degree of conflict reduction. The approach adopted is regarded as generic, in the context of map generalization, in that it appears possible in principle to employ several map generalization operators combined with more sophisticated evaluation functions.
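A minimal sketch of the simulated annealing variant may help. All names here (conflicts, anneal) are invented, and the conflict measure is a toy one that counts pairs of positions closer than a tolerance; the paper's actual cost function and cooling schedule are not reproduced.

```python
import math
import random

def conflicts(positions, tolerance=1.0):
    """Toy conflict measure: count pairs of object positions closer than tolerance."""
    count = 0
    for i in range(len(positions)):
        for j in range(i + 1, len(positions)):
            (x1, y1), (x2, y2) = positions[i], positions[j]
            if math.hypot(x1 - x2, y1 - y2) < tolerance:
                count += 1
    return count

def anneal(candidates, steps=10000, t0=5.0, cooling=0.999):
    """candidates[i] is the list of k candidate positions for object i."""
    state = [random.randrange(len(c)) for c in candidates]
    energy = conflicts([c[s] for c, s in zip(candidates, state)])
    t = t0
    for _ in range(steps):
        i = random.randrange(len(candidates))                # perturb one object
        new_state = state[:]
        new_state[i] = random.randrange(len(candidates[i]))  # pick a new candidate slot
        new_energy = conflicts([c[s] for c, s in zip(candidates, new_state)])
        # Always accept improvements; accept worse moves with probability
        # exp(-delta / t), which shrinks as the temperature cools.
        delta = new_energy - energy
        if delta <= 0 or random.random() < math.exp(-delta / t):
            state, energy = new_state, new_energy
        t *= cooling
    return state, energy
```

Replacing the acceptance rule with `delta < 0` gives the steepest-descent-style variant, which the abstract reports reduces conflict less effectively than annealing.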


International Journal of Geographic Information Systems | 1996

Database design for a multi-scale spatial information system

Christopher B. Jones; David B. Kidner; L. Q. Luo; G. Ll. Bundy; J. M. Ware

Growth in the available quantities of digital geographical data has led to major problems in maintaining and integrating data from multiple sources, required by users at differing levels of generalization. Existing GIS and associated database management systems provide few facilities specifically intended for handling spatial data at multiple scales and require time-consuming manual intervention to control update and retain consistency between representations. In this paper the GEODYSSEY conceptual design for a multi-scale, multiple representation spatial database is presented and the results of experimental implementation of several aspects of the design are described. Object-oriented, deductive and procedural programming techniques have been applied in several contexts: automated update software, using probabilistic reasoning; deductive query processing using explicit stored semantic and spatial relations combined with geometric data; multiresolution spatial data access methods combining point...
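The GEODYSSEY design is far broader than can be shown here, but a toy sketch of the multiple-representation retrieval idea may clarify it: one feature identity linked to several geometries, each tagged with the scale band it serves. All names below are invented for illustration, not taken from the paper.

```python
from dataclasses import dataclass, field

@dataclass
class Representation:
    min_scale: float   # smallest scale denominator this geometry suits, e.g. 10_000
    max_scale: float   # largest scale denominator, e.g. 50_000
    geometry: list     # coordinate list at this level of generalization

@dataclass
class Feature:
    feature_id: str
    representations: list = field(default_factory=list)

    def at_scale(self, denominator: float):
        """Return the stored representation covering the requested display scale."""
        for rep in self.representations:
            if rep.min_scale <= denominator <= rep.max_scale:
                return rep
        return None  # no representation maintained for this scale band
```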


conference on information and knowledge management | 1997

Towards maintaining consistency of spatial databases

Alia I. Abdelmoty; Christopher B. Jones

This paper focuses on the consistency issues related to integrating multiple sets of spatial data in spatial information systems such as Geographic Information Systems (GISs). Data sets to be integrated are assumed to hold information about the same geographic features which can be drawn from different sources at different times, which may vary in reliability and accuracy, and which may vary in the scale of presentation resulting in possible multiple spatial representations for these features. A systematic approach is proposed which relies first on breaking down the consistency issue by identifying a range of consistency classes which can be checked in isolation. These classes are a representative set of properties and relationships which can completely identify the geographic objects in the data sets. Different levels of consistency are then proposed, namely, total, partial and conditional, which can be checked for every consistency class. This provides the flexibility for two data sets to be integrated without necessarily being totally consistent in every aspect. The second step of the proposed approach is to explicitly represent the different classes and levels of consistency in the system. As an example, a simple structure which stores adjacency relationships is given which can be used for the explicit representation of topological consistency. The paper also proposes that the set of consistent knowledge in the data sets (which is mostly qualitative) be explicitly represented in the database and that uncertainty or ambiguity inherent in the knowledge be represented as well.
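As a rough illustration of the proposed levels, here is a toy check of one consistency class, adjacency. The dict-of-sets structure and the exact mapping onto total, partial and conditional consistency are assumptions made for illustration, not the authors' representation.

```python
from enum import Enum

class ConsistencyLevel(Enum):
    TOTAL = "total"              # the data sets agree on every compared relation
    PARTIAL = "partial"          # they agree on some, but not all, relations
    CONDITIONAL = "conditional"  # agreement only under stated conditions

def adjacency_consistency(adj_a, adj_b):
    """Compare adjacency relations stored as {region_id: set(neighbour_ids)}."""
    shared = adj_a.keys() & adj_b.keys()
    agreeing = sum(1 for r in shared if adj_a[r] == adj_b[r])
    if agreeing == len(shared) and adj_a.keys() == adj_b.keys():
        return ConsistencyLevel.TOTAL
    if agreeing > 0:
        return ConsistencyLevel.PARTIAL
    # No direct agreement: the sets might still integrate if, say, one is a
    # generalised version of the other; treated loosely here as conditional.
    return ConsistencyLevel.CONDITIONAL

# Example: both data sets agree that A borders B, but disagree about B.
level = adjacency_consistency({"A": {"B"}, "B": {"A", "C"}},
                              {"A": {"B"}, "B": {"A"}})
print(level)  # ConsistencyLevel.PARTIAL
```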


advances in geographic information systems | 1998

Matching and aligning features in overlayed coverages

J. Mark Ware; Christopher B. Jones

The problems caused by locational error when overlaying spatial data from different sources have been recognised for some time, and much research has been directed towards finding solutions. In this paper we present a solution in the form of an algorithm that seeks to match and align semantically equivalent features prior to overlay. It is assumed that, because of locational error, semantically equivalent features will not always be geometrically equivalent. The technique has been developed to assist in the detection of change between multi-date vector-defined data sets. Initial results, obtained by applying our algorithm to land cover data, are presented.
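The core idea can be sketched simply: pair features whose class labels agree and whose centroids fall within a locational-error tolerance, then shift one onto the other. The paper's actual algorithm is richer; every name below is an illustrative assumption.

```python
import math

def centroid(coords):
    xs, ys = zip(*coords)
    return sum(xs) / len(xs), sum(ys) / len(ys)

def match_features(coverage_a, coverage_b, tolerance):
    """Each coverage is a list of dicts: {"label": str, "coords": [(x, y), ...]}."""
    matches, used = [], set()
    for i, fa in enumerate(coverage_a):
        ca = centroid(fa["coords"])
        best, best_d = None, tolerance
        for j, fb in enumerate(coverage_b):
            if j in used or fb["label"] != fa["label"]:
                continue  # only semantically equivalent features may match
            cb = centroid(fb["coords"])
            d = math.hypot(ca[0] - cb[0], ca[1] - cb[1])
            if d < best_d:
                best, best_d = j, d
        if best is not None:
            used.add(best)
            matches.append((i, best))
    return matches

def align(feature, dx, dy):
    """Translate a matched feature so equivalent boundaries coincide."""
    feature["coords"] = [(x + dx, y + dy) for x, y in feature["coords"]]
```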


Lecture Notes in Computer Science | 1999

A Probabilistic Approach to Environmental Change Detection with Area-Class Map Data

Christopher B. Jones; J. Mark Ware; David Miller

One of the primary methods of studying change in the natural and man-made environment is that of comparison of multi-date maps and images of the earth's surface. Such comparisons are subject to error from a variety of sources, including uncertainty in surveyed location, registration of map overlays, classification of land cover, application of the classification system and variation in degree of generalisation. Existing geographical information systems may be criticised for a lack of adequate facilities for evaluating errors arising from automated change detection. This paper presents methods for change detection using polygon area-class maps in which the reliability of the result is assessed using Bayesian multivariate and univariate statistics. The method involves conflation of overlaid vector maps using a maximum likelihood approach to govern decisions on boundary matching, based on a variety of metrics of geometric and semantic similarity. The probabilities of change in the resulting map regions are then determined for each class of change based on training data and associated knowledge of prior probabilities of transitions between particular types of land cover.
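The Bayesian step amounts to Bayes' rule, P(change | evidence) ∝ P(evidence | change) P(change). A worked miniature for one map region follows; all the numbers are invented.

```python
# Toy Bayesian update for one region: combine prior transition probabilities
# (from training data) with the likelihood of the observed classification
# evidence, then normalise to obtain posteriors. All values are invented.

priors = {
    "heath->heath": 0.70,
    "heath->forest": 0.25,
    "heath->urban": 0.05,
}
likelihoods = {          # P(observed classes | transition)
    "heath->heath": 0.10,
    "heath->forest": 0.80,
    "heath->urban": 0.30,
}

unnormalised = {t: priors[t] * likelihoods[t] for t in priors}
total = sum(unnormalised.values())
posteriors = {t: p / total for t, p in unnormalised.items()}

print(posteriors)  # the heath->forest transition dominates the posterior
```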


Information & Software Technology | 1994

A semantic database approach to knowledge-based hypermedia systems

Paul Beynon-Davies; Douglas Tudhope; Carl Taylor; Christopher B. Jones

This paper discusses an architecture for knowledge-based hypermedia systems based on work from semantic databases. Its power derives from its use of a single, uniform data structure which can be used to store both the intensional and extensional information needed to generate hypermedia systems. The architecture is also sufficiently powerful to accommodate the representation of reasonable amounts of knowledge within a hypermedia system. Work has been conducted in building a number of prototypes on a small information base of digital image data. The prototypes serve as demonstrators of systems for managing the large amounts of information held by museums on their artefacts. The aim of this work is to demonstrate the flexibility of the architecture in serving the needs of a number of distinct user groups. To this end, the first prototype has demonstrated that the virtual architecture is capable of supporting some of the main hypermedia access methods. The current demonstrator is being used to investigate the potential of the approach for handling multiple classifications of hypermedia material. The research is particularly directed at the incorporation of evolving temporal and spatial knowledge.
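One way to picture the "single, uniform data structure" is a triple store in which schema-level (intensional) and instance-level (extensional) facts coexist; the paper's semantic-database structure may differ in detail, and all identifiers below are invented.

```python
facts = [
    ("Artefact", "has_attribute", "period"),      # intensional: schema knowledge
    ("Artefact", "has_attribute", "find_site"),
    ("amphora_42", "instance_of", "Artefact"),    # extensional: instance data
    ("amphora_42", "period", "Roman"),
    ("amphora_42", "find_site", "Caerleon"),
]

def query(subject=None, predicate=None, obj=None):
    """Match triples against an optional pattern; None acts as a wildcard."""
    return [t for t in facts
            if (subject is None or t[0] == subject)
            and (predicate is None or t[1] == predicate)
            and (obj is None or t[2] == obj)]

# Hypermedia links can be generated from the same store, e.g. everything
# known about one artefact:
print(query(subject="amphora_42"))
```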


conference on computer supported cooperative work | 1997

A Collaborative Schema Integration System

Paul Beynon-Davies; L. Bonde; D. McPhee; Christopher B. Jones

Conceptual modelling as applied to database development can be described as a two stage process: schema modelling followed by schema integration. Schema modelling is the process of transforming individual user requirements into a conceptual schema: an implementation-independent map of data requirements. Schema integration is the process of combining individual conceptual schemas into a single, unified schema. Single-user tools for schema modelling have enjoyed much success partly because the process of schema modelling has become relatively well formalised. Although a number of formal approaches to conducting schema integration have been proposed, it appears that schema integration tools have not enjoyed the same level of success. This we attribute not so much to the problem of formalisation but to the inherent collaborative nature of schema integration work. This paper first discusses the importance of collaboration to schema integration work. It then describes SISIBIS, a demonstrator system employing the IBIS (Issue Based Information System) scheme to support collaborative database design.
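The IBIS scheme that SISIBIS employs structures debate as issues, positions and arguments. The abstract does not show the SISIBIS design itself, so the following sketch of such a record, applied to a schema-integration question, is purely illustrative.

```python
from dataclasses import dataclass, field

@dataclass
class Argument:
    text: str
    supports: bool                       # True = supports the position

@dataclass
class Position:
    text: str
    arguments: list = field(default_factory=list)

@dataclass
class Issue:
    text: str
    positions: list = field(default_factory=list)

# Example: designers debating how to merge two conceptual schemas.
issue = Issue("Are Customer (schema A) and Client (schema B) the same entity?")
merge = Position("Treat them as one entity type")
merge.arguments.append(Argument("Both hold identical identifying attributes", True))
merge.arguments.append(Argument("Client also covers suppliers in schema B", False))
issue.positions.append(merge)
```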


International Journal of Geographical Information Science | 1992

A Multiresolution topographic surface database

J. Mark Ware; Christopher B. Jones

Multiresolution data structures provide a means of retrieving geographical features from a database at levels of detail which are adaptable to different scales of representation. A database design is presented which integrates multi-scale storage of point, linear and polygonal features, based on the line generalization tree, with a multi-scale surface model based on the Delaunay pyramid. The constituent vertices of topologically-structured geographical features are thus distributed between the triangulated levels of a Delaunay pyramid in which triangle edges are constrained to follow those features at differing degrees of generalization. Efficient locational access is achieved by imposing a spatial index on each level of the pyramid.
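A skeletal sketch of the retrieval pattern follows: levels run from coarse to fine, each carrying its own triangulation and spatial index, and a query first selects the level suited to the requested scale. The names are invented; the paper's line generalization tree and Delaunay pyramid are much richer structures.

```python
from dataclasses import dataclass, field

@dataclass
class PyramidLevel:
    denominator: int                                 # representative scale, e.g. 250_000
    triangles: list = field(default_factory=list)
    grid_index: dict = field(default_factory=dict)   # (cell_x, cell_y) -> triangle ids

@dataclass
class DelaunayPyramid:
    levels: list = field(default_factory=list)       # ordered coarse -> fine

    def level_for(self, denominator: int) -> PyramidLevel:
        """Pick the first (coarsest) level at least as detailed as requested."""
        for level in self.levels:
            if level.denominator <= denominator:
                return level
        return self.levels[-1]                       # fall back to the finest level

    def query(self, denominator: int, cell):
        """Locational access: consult only the chosen level's spatial index."""
        level = self.level_for(denominator)
        return [level.triangles[i] for i in level.grid_index.get(cell, [])]
```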


conference on spatial information theory | 1995

A triangulated spatial model for cartographic generalisation of areal objects

J. Mark Ware; Christopher B. Jones; Geraint Ll. Bundy

Cartographic generalisation involves interaction between individual operators concerned with processes such as object elimination, detail reduction, amalgamation, typification and displacement. Effective automation of these processes requires a means of maintaining knowledge of the spatial relationships between map objects in order to ensure that constraints of topology and of proximity are obeyed in the course of the individual generalisation transformations. Triangulated spatial models, based on the constrained Delaunay triangulation, have proven to be of particular value in representing the proximal and topological relations between map objects and hence in performing many of the essential tasks of fully automated cartographic generalisation. These include the identification of nearby objects; determination of the structure of space between nearby objects; execution of boundary simplification, merge and collapse operations; and the detection and resolution, by displacement, of topological inconsistencies arising from individual operators. In this paper we focus on the use of a triangulated model for operations specific to the execution of merge operations between areal objects. The model is exploited to identify the regions of space between nearby objects and to execute merge operations in which the triangulation is used variously to adopt intervening space and to move adjacent rectangular objects to touch each other. Methods for updating the triangulation are described.
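A rough stand-in for the "adopt intervening space" idea can be shown with SciPy. Note the paper uses a constrained Delaunay triangulation, while SciPy's Delaunay is unconstrained, so this is only an approximation: triangulate the vertices of two nearby areal objects together, then identify the bridging triangles whose corners come from both objects.

```python
import numpy as np
from scipy.spatial import Delaunay

def bridging_triangles(poly_a, poly_b):
    """poly_a, poly_b: (n, 2) arrays of boundary vertices of two areal objects."""
    points = np.vstack([poly_a, poly_b])
    owner = np.array([0] * len(poly_a) + [1] * len(poly_b))  # vertex ownership
    tri = Delaunay(points)
    # A triangle bridges the gap if its vertices are drawn from both objects;
    # adopting these triangles joins the intervening space to the merged result.
    return points, [s.tolist() for s in tri.simplices if len(set(owner[s])) == 2]

a = np.array([[0, 0], [2, 0], [2, 2], [0, 2]], dtype=float)
b = np.array([[3, 0], [5, 0], [5, 2], [3, 2]], dtype=float)
points, bridge = bridging_triangles(a, b)
print(len(bridge), "triangles span the space between the two objects")
```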


advances in geographic information systems | 1997

A data model for representing geological surfaces

J. Mark Ware; Christopher B. Jones

This paper provides details of a technique which enables the automatic creation of a geological surface model from terrain, outcrop and subsurface horizon data. The model is created in such a way that each data set provides constraints on the interpretation of the others. Both terrain and subsurface formation boundaries are represented by adjoining triangulated irregular networks that are constrained by the linear boundaries of outcrop regions. This approach to model creation demonstrates a step toward automated integration of sparse data from multiple sources, which may allow complex geological structures to be stored within 3D-GIS.

Introduction

The aims of this paper are three-fold. Firstly, it presents, in a 3D-GIS context, the concept of combining terrain, outcrop and subsurface data for the purpose of constructing geological surface data models. Secondly, the paper outlines a data model construction methodology based on, and motivated by, the previously mentioned concept. This methodology, many aspects of which can be adapted for use within existing surface modelling packages, is intended as a useful guide to those with an interest in computer-based geological surface modelling. The third aim of the paper is to give details of specific algorithms, which include some novel techniques and which, when combined and applied in accordance with the model construction methodology, automatically produce a geological surface model.

The Constrained Geological Model

The model described here, termed the Constrained Geological Model (CGM), is surface based and attempts to represent both the ground surface and the boundaries which separate subsurface formations by means of a series of Constrained Delaunay Triangulations (CDTs) (see Chew 1987). A triangulation approach appears sensible since every source data point is honoured directly (they form the vertices of triangles) and it offers a relatively easy way of incorporating breaklines and faultlines (and in this case, outcrop boundaries) within the model (Petrie 1990). The CGM is made up of a series of CDT surface approximations, with a separate surface triangulation for each surface being represented by the model (the ground surface and each subsurface horizon). The idea is initially to create a CDT for the ground surface, with certain edges being forced to conform to outcrop object edges (regions and faults). A CDT is then produced for each of the subsurface horizons using the appropriate subsurface elevation file and suitably selected subsets of the outcrop data. Intersections between subsurface horizons and the ground surface are accurately represented as a result of common constraining edges existing within the subsurface and ground surface triangulations (see Figure 1). Fault outcrop objects, which are already present as constraining edges in the ground surface triangulation, are also extrapolated onto subsurface triangulations.

[Figure 1: The CGM. Outcrop boundaries, in the form of common constraining edges, ensure exact intersection between the ground surface triangulation and the subsurface triangulations.]

It may be argued at the outset that the CGM, which is made up of a series of 2-D surface triangulations, may be more appropriately represented by a single 3-D triangulation (tetrahedralisation). There are two main reasons for opting for the 2-D approach. Firstly, producing a correct 3-D triangulation of the geological data sets being considered here might be thought of as a two-stage process. The first stage would involve the construction of a 3-D Delaunay triangulation (see Watson 1981, for example), using all available point information (terrain, subsurface horizon and outcrop) as input data. Next, this 3-D triangulation would have to be constrained to ensure that each tetrahedron conformed to the true structure of the geology being modelled. In this case, the constraining features would consist of individual horizon boundaries (i.e. 2-D triangulations), thus ensuring the presence of particular triangular faces within the model. It is, therefore, suggested here that, while a truly 3-D model is desirable, the production of a series of structurally correct 2-D triangulations (i.e. the CGM) is a necessary precursor to such a model. The second reason for reporting on a 2-D approach is that it is hoped that much of the methodology presented in this paper will be adopted for use within other, currently existing, surface modelling packages, the majority of which only provide 2-D triangulation capabilities.

Source Data

The terrain, or ground surface, data is in the form of a list of irregularly distributed 3-D coordinates. The outcrop data, which digitally represents the information that appears on an outcrop map, is arranged in a hierarchical manner and consists of a list of outcrop objects, a list of polygon parts, a list of line parts and a list of point parts. Each outcrop object is made up of an object type identifier, an object description and a list of references to constituent parts. Outcrop objects are of two types: region and fault. Region outcrop objects, which represent the areal geological features which appear on the outcrop map, reference constituent polygon parts (by means of polygon identifiers). Fault outcrop objects, as the name suggests, represent the fault lines which appear on the map, and each references its constituent line parts (by means of line identifiers). Polygon parts are made up from a polygon identifier and a list of line identifiers referencing constituent line parts. A line part consists of an identifier, a list of references to its constituent point parts and two integer values indicating which two region outcrop objects lie to its left and right. Each point part is made up from an x, y and z coordinate, plus a unique identifier. The subsurface horizon data consists of a series of subsurface elevation files, each of which consists of a collection of irregularly distributed 3-D coordinates which describe a particular subsurface formation boundary.

Model Creation

There are three main stages in the CGM creation process: the triangulation of the ground surface data, the triangulation of the subsurface horizon data and the modelling of faults. Note that the creation process is currently restricted to working with surfaces that are single-valued with respect to the xy-plane. Suggestions as to how multi-valued surfaces can be accommodated in the future are given by Ware (1994).

Ground Surface Triangulation

The first stage in the model creation process is the production of a ground surface triangulation. The surface is defined by the set of irregularly distributed terrain data and the collection of geological outcrop objects (regions and faults) which act as constraints upon the surface. The surface triangulation is created by applying a constrained Delaunay triangulation algorithm to the terrain and outcrop data. The method used is adapted from that of De Floriani and Puppo (1988). Initially, all terrain points and object points (from which outcrop objects are constructed) are grouped together and Delaunay triangulated. The Delaunay triangulation is produced in two steps. The first involves the creation of an initial enclosing Delaunay triangulation. The second step involves the stepwise insertion of untriangulated points into the initial triangulation. The initial triangulation can be obtained in a number of ways; the approach adopted here is to produce a Delaunay triangulation of those points which define the convex hull of the points to be triangulated. Algorithms for constructing the convex hull of a set of points and computing the Delaunay triangulation of a convex polygon are given by Larkin (1991) and Derijver and Maybank (1982) respectively. The second step involves sequentially inserting each currently untriangulated point into the current Delaunay triangulation; after each insertion a new Delaunay triangulation will have been formed. Descriptions of methods for inserting a point into an existing Delaunay triangulation have been given in the literature (Watson 1981, De Floriani 1989). These methods are based on the premise that, according to the circle criterion, the insertion of a new point p into a Delaunay triangulation T affects only the set Ip of triangles of T whose circumcircles contain p. The process of inserting the point p can be summarised as: locating the triangle t of T in which p lies; recursively examining the neighbouring triangles of t until all triangles in Ip are found; constructing the polygon Rp formed by the external edges of the triangles in Ip; deleting the triangles in Ip; and finally Delaunay triangulating the interior of Rp (this is done by connecting p to each vertex of Rp).

The second stage of the ground surface triangulation process involves the insertion of constraining edges, thus forcing certain triangle edges to correspond to the constituent edges of the geological outcrop objects. This is achieved by initially reducing the outcrop data (which is made up of polygon and line parts) to a list of component edges. Each edge, defined by its start and end point, is then inserted into the triangulation. Consider the insertion of the edge (pa, pb) into an existing triangulation T. It is assumed that pa and pb already belong to T and that the edge (pa, pb) does not exist. It follows that the proposed edge makes one or more intersections with existing triangle edges. Let th...
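The point-insertion procedure summarised above is essentially the Bowyer-Watson scheme, and a compact sketch may help. All names below are illustrative, the circumcircle test is written in an orientation-independent form, and no attempt is made to handle the constrained-edge stage.

```python
def circumcircle_contains(tri, p, points):
    """Circle criterion: does the circumcircle of triangle tri contain point p?"""
    (ax, ay), (bx, by), (cx, cy) = (points[i] for i in tri)
    d = 2 * (ax * (by - cy) + bx * (cy - ay) + cx * (ay - by))
    ux = ((ax**2 + ay**2) * (by - cy) + (bx**2 + by**2) * (cy - ay)
          + (cx**2 + cy**2) * (ay - by)) / d
    uy = ((ax**2 + ay**2) * (cx - bx) + (bx**2 + by**2) * (ax - cx)
          + (cx**2 + cy**2) * (bx - ax)) / d
    r2 = (ax - ux) ** 2 + (ay - uy) ** 2      # squared circumradius
    return (p[0] - ux) ** 2 + (p[1] - uy) ** 2 < r2

def insert_point(points, triangles, p):
    """points: list of (x, y); triangles: list of vertex-index 3-tuples.
    Assumes the existing triangulation already encloses p."""
    points.append(p)
    pi = len(points) - 1
    # Ip: every triangle whose circumcircle contains the new point.
    bad = [t for t in triangles if circumcircle_contains(t, p, points)]
    # The cavity boundary Rp consists of edges belonging to exactly one bad triangle.
    edge_count = {}
    for t in bad:
        for e in ((t[0], t[1]), (t[1], t[2]), (t[2], t[0])):
            key = tuple(sorted(e))
            edge_count[key] = edge_count.get(key, 0) + 1
    boundary = [e for e, n in edge_count.items() if n == 1]
    # Delete Ip, then re-triangulate by connecting p to each boundary edge.
    triangles[:] = [t for t in triangles if t not in bad]
    triangles.extend((a, b, pi) for a, b in boundary)
    return triangles
```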

Collaboration


Dive into Christopher B. Jones's collaborations.

Top Co-Authors

J. Mark Ware
University of South Wales

David B. Kidner
University of South Wales

Carl Taylor
University of South Wales

D. McPhee
University of South Wales

G. Ll. Bundy
University of South Wales

J. M. Ware
University of South Wales