Sébastien Mustière
Institut géographique national
Publications
Featured research published by Sébastien Mustière.
International Journal of Geographical Information Science | 2005
Sébastien Mustière
This paper presents a local and adaptive approach to road generalization, where different algorithms may be successively applied to each part of a road. The specific problem addressed is how to acquire and formalize cartographic knowledge in order to guide the application of the algorithms during the process. Our approach requires toolboxes of algorithms to transform and analyse the data, as well as an engine to chain them together. First, we present the toolboxes used in our experiments for road generalization. Then, we present two different engines, as well as the knowledge-acquisition processes used to determine them. The first engine, named GALBE, is an empirically determined process, where the application of algorithms is mainly based on a single criterion: coalescence. The second engine, which is more complex, uses multiple measures to describe the road. The choice of which algorithm to use given a particular set of measures is determined from examples using supervised learning techniques. Results obtained with both engines are presented.
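To make the adaptive idea concrete, here is a minimal sketch, assuming Shapely geometries, of an engine that measures coalescence on a road section and routes it to a different treatment accordingly. The coalescence heuristic, the thresholds, and the use of plain simplification as a stand-in for the paper's specialized algorithms are all illustrative assumptions, not the GALBE implementation.

```python
from shapely.geometry import LineString

def coalescence(line: LineString, symbol_width: float) -> float:
    """Share of symbol area lost to self-overlap: near 0 for a straight road,
    higher when the symbolized road coalesces with itself in tight bends."""
    # Flat end caps (cap_style=2) so a non-overlapping symbol has an area
    # close to length * 2 * symbol_width.
    nominal = line.length * 2 * symbol_width
    actual = line.buffer(symbol_width, cap_style=2).area
    return max(0.0, 1.0 - actual / nominal) if nominal else 0.0

def generalize_section(line: LineString, symbol_width: float) -> LineString:
    """Route each road section to a treatment chosen from its coalescence."""
    c = coalescence(line, symbol_width)
    if c < 0.05:      # no symbol conflict: light filtering is enough
        return line.simplify(symbol_width / 4)
    elif c < 0.20:    # moderate coalescence: stronger smoothing
        return line.simplify(symbol_width)
    else:             # strong coalescence: stand-in for a bend-caricature algorithm
        return line.simplify(2 * symbol_width)

road = LineString([(0, 0), (10, 1), (12, 8), (14, 1), (24, 0)])
print(generalize_section(road, symbol_width=3.0))
```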
Advances in Databases and Information Systems | 2004
David Sheeren; Sébastien Mustière; Jean-Daniel Zucker
There currently exist many geographical databases that represent the same part of the world, each with its own level of detail and point of view. The use and management of these databases sometimes require their integration into a single database. One important issue in this integration process is the ability to analyse and understand the differences among the multiple representations. These differences can of course be explained by the various specifications, but can also be due to updates or errors during data capture. In this paper, after describing the overall process of integrating spatial databases, we propose a process to interpret the differences between two representations of the same geographic phenomenon. Each step of the process is based on the use of an expert system. The rules guiding the process are either introduced by hand from the analysis of specifications, or learnt automatically from examples. The process is illustrated through the analysis of the representations of traffic circles in two actual databases.
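As an illustration of such rules, here is a minimal sketch, not the authors' expert system, of hand-written rules (as they might be derived from database specifications) that classify the difference between two representations of the same traffic circle. The attribute names and thresholds are invented for the example.

```python
def interpret_difference(rep_db1: dict, rep_db2: dict) -> str:
    """Classify the difference between two representations of one traffic circle."""
    # Rule 1: DB1 captures small roundabouts as points, DB2 as polygons;
    # such a geometry-type difference is explained by the specifications.
    if (rep_db1["geometry_type"] == "point"
            and rep_db2["geometry_type"] == "polygon"
            and rep_db2["diameter_m"] < 30):
        return "equivalent (explained by specifications)"
    # Rule 2: both polygonal but sizes strongly disagree: suspect an update
    # in one database, or an error during data capture.
    if (rep_db1["geometry_type"] == rep_db2["geometry_type"] == "polygon"
            and abs(rep_db1["diameter_m"] - rep_db2["diameter_m"]) > 10):
        return "inconsistent (possible update or capture error)"
    return "equivalent"

print(interpret_difference(
    {"geometry_type": "point", "diameter_m": 0},
    {"geometry_type": "polygon", "diameter_m": 22}))
```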
International Journal of Geographical Information Science | 2009
David Sheeren; Sébastien Mustière; Jean-Daniel Zucker
When different spatial databases are combined, an important issue is the identification of inconsistencies between data. Quite often, representations of the same geographical entities in databases are different and reflect different points of view. In order to fully take advantage of these differences when object instances are associated, a key issue is to determine whether the differences are normal, i.e. explained by the database specifications, or whether they are due to erroneous or outdated data in one database. In this paper, we propose a knowledge-based approach to partially automate the consistency assessment between multiple representations of data. Inconsistency detection is viewed as a knowledge-acquisition problem, the source of knowledge being the data themselves. The consistency assessment is carried out by applying a proposed method called MECO. This method is itself parameterized by domain knowledge obtained from a second method called MACO. MACO supports two approaches (direct or indirect) to perform the knowledge acquisition using data-mining techniques. In particular, a supervised learning approach is defined to automate the knowledge acquisition and thus drastically reduce the work of human domain experts. Thanks to this approach, the knowledge-acquisition process is sped up and less expert-dependent. Training examples are obtained automatically upon completion of the spatial data matching. Knowledge extraction from data following this bottom-up approach is particularly useful, since database specifications are generally complex, difficult to analyse, and encoded manually. Such a data-driven process also sheds some light on the gap between the textual specifications and those actually used to produce the data. The methodology is illustrated and experimentally validated by comparing geometrical representations and attribute values of different vector spatial databases. The advantages and limits of such partially automatic approaches are discussed, and some future work is suggested.
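The learning step can be pictured as follows: matched object pairs, described by comparison measures and labelled by an expert, train a decision tree whose branches read as acquired rules. A minimal sketch with scikit-learn; the feature set, the toy training data and the choice of classifier are illustrative assumptions, not the published MECO/MACO implementation.

```python
from sklearn.tree import DecisionTreeClassifier, export_text

# Each row describes a matched pair of representations:
# [distance between matched geometries (m), area ratio, same attribute value?]
X = [
    [2.0, 0.95, 1],
    [3.5, 0.90, 1],
    [40.0, 0.30, 0],
    [1.0, 0.98, 1],
    [25.0, 0.50, 1],
    [5.0, 0.85, 0],
]
y = ["consistent", "consistent", "inconsistent",
     "consistent", "inconsistent", "inconsistent"]

clf = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)

# The induced tree is itself readable knowledge: rules acquired from data
# rather than hand-coded from the textual specifications.
print(export_text(clf, feature_names=["distance_m", "area_ratio", "same_attr"]))
print(clf.predict([[30.0, 0.40, 0]]))  # classify a new matched pair
```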
Generalisation of Geographic Information: Cartographic Modelling and Applications | 2007
Sébastien Mustière; John van Smaalen
This chapter presents the database requirements for generalization and multiple representations, focusing on modeling requirements: what is needed to model data efficiently before and during the generalization process. Geographic databases have two important characteristics. First, the geographic world is complex: geographic phenomena are numerous and highly interrelated, which makes the modeling of geographical phenomena particularly important. Second, when one looks at a set of displayed geographic data, one can immediately identify many spatial relationships between objects, groups of objects, or parts of objects. The complexity of modeling and manipulating geographic data becomes very apparent when generalization and database integration are considered. Generalization is concerned with transforming a representation of a part of the world, and before generalization the implicit relations between objects must be made explicit so that they can be manipulated. One of the reasons for the complexity of generalization is this need for data enrichment: identifying and representing the implicit geographic phenomena before manipulating them. GIS would be far more effective if they supported complex processes such as generalization and the management of multiple representations. This requires a rich modeling of data and processes, and so database management becomes a key issue in GIS.
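Data enrichment, as described above, can be as simple as materializing implicit proximity relations so that later operations (typification, aggregation) can reason on groups rather than isolated objects. A minimal sketch, assuming Shapely polygons; the 10 m grouping threshold is invented.

```python
from itertools import combinations
from shapely.geometry import Polygon

buildings = {
    "b1": Polygon([(0, 0), (8, 0), (8, 6), (0, 6)]),
    "b2": Polygon([(12, 0), (20, 0), (20, 6), (12, 6)]),
    "b3": Polygon([(60, 0), (70, 0), (70, 8), (60, 8)]),
}

# Explicit 'near' relation: pairs of buildings closer than the threshold.
# Storing this relation makes an implicit phenomenon (a building group)
# available to subsequent generalization operations.
near = [(a, b) for (a, pa), (b, pb) in combinations(buildings.items(), 2)
        if pa.distance(pb) < 10.0]
print(near)  # [('b1', 'b2')]: b1 and b2 form a group, b3 stands alone
```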
SDH | 2005
David Sheeren; Sébastien Mustière; Jean-Daniel Zucker
There currently exist many geographical databases that represent the same part of the world, each with its own level of detail and point of view. The use and management of these databases therefore sometimes require their integration into a single database. The main issue in this integration process is the ability to analyse and understand the differences among the multiple representations. These differences can of course be explained by the various specifications, but can also be due to updates or errors during data capture. In this paper, we propose a new approach to interpret the differences in representation in a semi-automatic way. We consider the specifications of each database as the “knowledge” against which the conformity of each representation is evaluated. This knowledge is extracted from existing documents but also from the data themselves, by means of machine-learning tools, and it is managed by a rule-based system. The approach is illustrated with a case study of two IGN databases, concerning the differences between the representations of traffic circles.
Advances in Geographic Information Systems | 1999
Sébastien Mustière; Jean-Daniel Zucker
This article shows that cartographic generalization is best viewed as a combination of representing (formulating, renaming knowledge) and abstracting (simplifying a given representation). The general process of creating a map is described so as to show how it fits into an abstraction framework developed in artificial intelligence, emphasizing the difference between abstraction and representation. The utility of the framework lies in its efficiency in automating knowledge acquisition for cartographic generalization, treated as the combined acquisition of knowledge for abstraction and knowledge for changing representation.
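The two-step view can be sketched concretely: first reformulate a road polyline into turn angles at its vertices (a change of representation), then abstract by dropping turns below a perceptual threshold (a simplification of that representation). The bend detection and the 15° threshold are illustrative assumptions, not the article's formalism.

```python
import math

def to_turns(points):
    """Representation step: describe the polyline by its vertices' turn angles."""
    turns = []
    for (x0, y0), (x1, y1), (x2, y2) in zip(points, points[1:], points[2:]):
        a1 = math.atan2(y1 - y0, x1 - x0)
        a2 = math.atan2(y2 - y1, x2 - x1)
        # Signed turn in degrees, wrapped into [-180, 180).
        turns.append(math.degrees((a2 - a1 + math.pi) % (2 * math.pi) - math.pi))
    return turns

def abstract(points, min_turn_deg=15.0):
    """Abstraction step: drop vertices whose turn falls below the threshold."""
    turns = to_turns(points)
    return ([points[0]]
            + [p for p, t in zip(points[1:-1], turns) if abs(t) >= min_turn_deg]
            + [points[-1]])

road = [(0, 0), (5, 0.3), (10, 0), (15, 6), (20, 0), (25, 0.2), (30, 0)]
print(abstract(road))  # small wiggles removed, the large bend is kept
```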
OGRS | 2012
Bénédicte Bucher; Mickaël Brasebin; Elodie Buard; Eric Grosso; Sébastien Mustière; Julien Perret
Back in the 1980s and 1990s, researchers at the COGIT laboratory of the French National Mapping Agency often developed their own vector platforms dedicated to their respective research purposes, such as generalization, integration or 3D. This multiplicity of platforms became an obstacle to important research activities such as capitalizing on and sharing results. In 2000, building on ISO/OGC standards for data exchange, the COGIT laboratory decided to invest in the development of an interoperable GIS platform that would host most of its research developments. This platform, called GeOxygene, was designed to have a rich data model and a genuine programming language for manipulating data. It facilitates the design and prototyping of processes whose geographical data models are object-oriented, it is intended to ease the sharing of knowledge and programming efforts with other research labs, and it is distributed through regular open-source releases. This paper highlights the relevance, for a research team like COGIT, of contributing to open-source software.
Web and Wireless Geographical Information Systems | 2009
Eric Grosso; Alain Bouju; Sébastien Mustière
Geographical data users increasingly express the need for data integration, notably to integrate historical data into a current frame of reference. However, easily accessible data-integration treatments are not yet available to users. To tackle this problem, this paper proposes solutions for building a data integration geoservice, focusing on the management of historical data, on improving the process of adjusting geographical data into a current frame of reference, and finally on building the geoservice itself.
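The adjustment step such a geoservice could expose is essentially a coordinate transformation estimated from control points linking the historical map to the current reference frame. A minimal sketch, assuming an affine model fitted by least squares with NumPy; the control-point values are invented.

```python
import numpy as np

# Control points: (x, y) in the historical map -> (X, Y) in the current frame.
src = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
dst = np.array([[12.0, 7.0], [111.5, 9.0], [10.0, 106.5], [109.5, 108.0]])

# Solve dst ≈ [x, y, 1] @ coeffs by least squares (2D affine: 6 parameters).
ones = np.ones((len(src), 1))
coeffs, *_ = np.linalg.lstsq(np.hstack([src, ones]), dst, rcond=None)

def adjust(points: np.ndarray) -> np.ndarray:
    """Map historical coordinates into the current reference frame."""
    return np.hstack([points, np.ones((len(points), 1))]) @ coeffs

print(adjust(np.array([[50.0, 50.0]])))  # adjusted position of one feature
```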
International Symposium on Methodologies for Intelligent Systems | 2000
Sébastien Mustière; Jean-Daniel Zucker
This article shows that cartographic generalization is best viewed as a combination of representing (formulating, renaming knowledge) and abstracting (simplifying a given representation). The general process of creating a map is described so as to show how it fits into an abstraction framework developed in artificial intelligence, emphasizing the difference between abstraction and representation. The utility of the framework lies in its efficiency in automating knowledge acquisition for cartographic generalization, treated as the combined acquisition of knowledge for abstraction and knowledge for changing representation.
EGC (best of volume) | 2013
Ammar Mechouche; Nathalie Abadie; Emeric Prouteau; Sébastien Mustière
Nowadays, a huge amount of geodata is available. In this context, efficient discovery and retrieval of geographic data are key to assessing its fitness for use and to ensuring optimal reuse. Such processes are mainly based on metadata. Over the last decade, standardization efforts have been made by the International Organization for Standardization (ISO) and the Open Geospatial Consortium (OGC) in the field of syntactic interoperability between geographic information components. Among existing standards, ISO 19115 provides an abstract specification of metadata for geospatial data discovery and exploration, ISO 19110 defines feature catalogues, and ISO 19131 defines data product specifications. Yet the information provided by these standardized metadata is not always formalized enough to enable efficient discovery, understanding and comparison of available datasets. Notably, information regarding geodata capture rules can be represented only as free text in the standard metadata (ISO 19131). More generally, geospatial database capture specifications are a very complex but very rich source of knowledge about geospatial data; they could be used to great benefit in the geospatial data discovery process if they were represented more formally than in the existing ISO standards. In this context, we propose a model to formalize the capture specifications of geospatial databases. First, we extend existing standards to represent information that was insufficiently formalized. Second, we propose an OWL representation of our global model. This ontological approach makes it possible to overcome the limitations of string-based queries and to provide efficient access to data capture information. Finally, we implement this model as a Web application allowing users to discover the available geospatial data and to access their specifications through a user-friendly interface.
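The gain over free text can be sketched with rdflib: a capture rule that ISO 19131 would carry as prose is expressed as triples and retrieved with a structured query. The spec: vocabulary (class and property names) is an illustrative assumption, not the authors' ontology.

```python
from rdflib import Graph, Literal, Namespace
from rdflib.namespace import RDF, RDFS, XSD

SPEC = Namespace("http://example.org/spec#")
g = Graph()
g.bind("spec", SPEC)

# "Roundabouts with a diameter of at least 30 m are captured as polygons."
rule = SPEC.RoundaboutCaptureRule
g.add((rule, RDF.type, SPEC.DataCaptureRule))
g.add((rule, SPEC.featureType, SPEC.Roundabout))
g.add((rule, SPEC.geometryType, Literal("polygon")))
g.add((rule, SPEC.minDiameterMetres, Literal(30.0, datatype=XSD.double)))
g.add((rule, RDFS.comment, Literal("Smaller roundabouts are captured as points.")))

# A structured query replaces free-text search over specification documents.
q = """
PREFIX spec: <http://example.org/spec#>
SELECT ?rule ?geom ?minDiam WHERE {
  ?rule a spec:DataCaptureRule ;
        spec:featureType spec:Roundabout ;
        spec:geometryType ?geom ;
        spec:minDiameterMetres ?minDiam .
}
"""
for row in g.query(q):
    print(row.rule, row.geom, row.minDiam)
```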