Publication


Featured research published by Georges Gardarin.


International Conference on Data Engineering | 1998

Leveraging mediator cost models with heterogeneous data sources

Hubert Naacke; Georges Gardarin; Anthony Tomasic

Distributed systems require declarative access to diverse information sources. One approach to solving this heterogeneous distributed database problem is based on mediator architectures. In these architectures, mediators accept queries from users, process them with respect to wrappers, and return answers. Wrappers provide access to the underlying sources. To process queries efficiently, the mediator must optimize the plan used for processing the query. In classical databases, cost-estimate-based query optimization is effective. In heterogeneous distributed databases, cost-estimate-based query optimization is difficult to achieve because the underlying data sources do not export cost information. This paper describes a new method that permits the wrapper programmer to export cost estimates. Describing all cost estimates may be impossible for the wrapper programmer due to a lack of information, or burdensome due to the amount of information required. We ease this responsibility of the wrapper programmer by leveraging the generic cost model of the mediator with specific cost estimates from the wrappers.
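
The idea can be illustrated with a short sketch. The following Python fragment is a hypothetical illustration, not the paper's implementation: the mediator first asks a wrapper-exported cost rule for an estimate and falls back to a generic linear model when the wrapper declines; all class, function, and source names are invented for the example.

```python
# Hedged sketch (not the paper's implementation): a mediator cost estimator that
# prefers wrapper-exported cost rules and falls back to a generic cost model.
# All class, function, and source names here are illustrative assumptions.

from dataclasses import dataclass
from typing import Callable, Dict, Optional


@dataclass
class ScanOp:
    """A simplified sub-query: scan a source table with a selectivity estimate."""
    source: str
    table: str
    cardinality: int      # estimated input rows
    selectivity: float    # estimated fraction of rows returned


def generic_cost(op: ScanOp) -> float:
    """Generic mediator model: cost grows with rows scanned plus rows shipped."""
    io_cost = 1.0 * op.cardinality
    transfer_cost = 0.1 * op.cardinality * op.selectivity
    return io_cost + transfer_cost


class Mediator:
    def __init__(self) -> None:
        # Wrapper-exported cost rules, keyed by source name (may be incomplete).
        self.wrapper_rules: Dict[str, Callable[[ScanOp], Optional[float]]] = {}

    def register_wrapper_rule(self, source: str,
                              rule: Callable[[ScanOp], Optional[float]]) -> None:
        self.wrapper_rules[source] = rule

    def estimate(self, op: ScanOp) -> float:
        rule = self.wrapper_rules.get(op.source)
        if rule is not None:
            specific = rule(op)       # the wrapper may decline (return None)
            if specific is not None:
                return specific
        return generic_cost(op)       # fall back to the generic model


# Example: an indexed source exports a cheaper estimate for selective scans only.
def indexed_source_rule(op: ScanOp) -> Optional[float]:
    if op.selectivity < 0.05:
        return 50.0 + 0.2 * op.cardinality * op.selectivity
    return None  # leave the rest to the mediator's generic model


mediator = Mediator()
mediator.register_wrapper_rule("inventory_db", indexed_source_rule)
print(mediator.estimate(ScanOp("inventory_db", "parts", 100_000, 0.01)))
print(mediator.estimate(ScanOp("legacy_files", "orders", 100_000, 0.50)))
```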


Very Large Data Bases | 1979

Proving Consistency of Database Transactions

Georges Gardarin; Michel A. Melkanoff

The purpose of this paper is to present an approach for verifying that explicitly stated integrity constraints are not violated by certain transactions. We utilize a relational model wherein constraints are given in a language based on the first-order predicate calculus. Transactions are written in an ALGOL 60-like host language with embedded first-order predicate calculus capabilities allowing queries and updates. The technique for proving consistency of the transactions is based upon the Hoare axiomatic approach. We illustrate the method by means of an explicit example of a database updated by four types of transactions. A generalized transaction consistency verifier embodying this approach would considerably enhance transaction programming in a relational database management system.
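
As a hedged illustration of the style of proof obligation involved (an invented example, not one taken from the paper), consider a non-negative-balance constraint and a debit transaction; Hoare's assignment axiom reduces the consistency proof to showing that the constraint is preserved under the transaction's precondition:

```latex
% Illustrative example (not from the paper): a debit transaction proved
% consistent against a non-negative-balance constraint via Hoare's assignment axiom.
\begin{align*}
  \text{Constraint } I &: \forall a \,\bigl(\mathit{Account}(a) \rightarrow \mathit{bal}(a) \ge 0\bigr) \\
  \text{Transaction } T &: \mathit{bal}(x) := \mathit{bal}(x) - w \\
  \text{Assignment axiom} &: \{\,P[E/v]\,\}\; v := E \;\{\,P\,\} \\
  \text{Proof obligation} &: \{\, I \wedge \mathit{bal}(x) - w \ge 0 \,\}\; T \;\{\, I \,\}
\end{align*}
```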


Database and Expert Systems Applications | 2002

Integrating heterogeneous data sources with XML and XQuery

Georges Gardarin; Antoine Mensch; Tuyet-Tram Dang-Ngoc; L. Smit

XML has emerged as the leading language for representing and exchanging data, not only on the Web but also within the enterprise. XQuery is emerging as the standard query language for XML. Thus, tools are required to mediate between XML queries and heterogeneous data sources in order to integrate data in XML. This paper presents the e-XMLMedia mediator, a unique tool for integrating and querying disparate heterogeneous information as unified XML views. It describes the mediator architecture and focuses on the unique distributed query processing technology implemented in this component. Further, we outline the various applications in which the e-XMLMedia mediator is currently being experimented with.
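
To illustrate the mediation pattern described above, here is a hedged Python sketch; it is not the e-XMLMedia code, and the Wrapper/mediate names and APIs are assumptions made for the example. Sub-queries are pushed down to per-source wrappers that return XML fragments, and the mediator assembles the fragments into a single unified XML view.

```python
# Hedged sketch of the mediation idea (not the e-XMLMedia implementation):
# per-source wrappers evaluate a pushed-down predicate and return XML fragments,
# which the mediator merges into one XML view.

import xml.etree.ElementTree as ET
from typing import Dict, List


class Wrapper:
    """Exposes one heterogeneous source (relational rows here) as XML fragments."""

    def __init__(self, rows: List[Dict[str, str]], tag: str) -> None:
        self.rows, self.tag = rows, tag

    def query(self, predicate) -> List[ET.Element]:
        fragments = []
        for row in self.rows:
            if predicate(row):
                elem = ET.Element(self.tag)
                for key, value in row.items():
                    ET.SubElement(elem, key).text = value
                fragments.append(elem)
        return fragments


def mediate(wrappers: List[Wrapper], predicate) -> ET.Element:
    """Push the predicate down to each wrapper and merge results into one view."""
    view = ET.Element("result")
    for wrapper in wrappers:
        view.extend(wrapper.query(predicate))
    return view


# Two sources integrated as a single XML view of books costing less than 30.
store_a = Wrapper([{"title": "XML Basics", "price": "25"}], tag="book")
store_b = Wrapper([{"title": "Query Processing", "price": "40"}], tag="book")
view = mediate([store_a, store_b], lambda row: float(row["price"]) < 30)
print(ET.tostring(view, encoding="unicode"))
```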


Very Large Data Bases | 2008

WebContent: efficient P2P warehousing of Web data

Serge Abiteboul; Tristan Allard; Philippe Chatalic; Georges Gardarin; A. Ghitescu; François Goasdoué; Ioana Manolescu; Benjamin Nguyen; M. Ouazara; A. Somani; Nicolas Travers; Gabriel Vasile; Spyros Zoupanos

We present the WebContent platform for managing distributed repositories of XML and semantic Web data. The platform allows integrating various data processing building blocks (crawling, translation, semantic annotation, full-text search, structured XML querying, and semantic querying), presented as Web services, into a large-scale efficient platform. Calls to the various services are combined inside ActiveXML [8] documents, which are XML documents including service calls. An ActiveXML optimizer is used to: (i) efficiently distribute computations among sites; (ii) perform XQuery-specific optimizations by leveraging an algebraic XQuery optimizer; and (iii) given an XML query, choose among several distributed indices the most appropriate one to answer the query.
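
The "documents containing service calls" idea can be sketched as follows. This is a deliberately schematic Python illustration, not ActiveXML's actual syntax or the WebContent optimizer; the SERVICES registry and the shape of the call elements are assumptions made for the example.

```python
# Hedged, schematic illustration of embedding service calls inside an XML document
# (not ActiveXML's real syntax): each call element is materialized by invoking the
# corresponding service and replacing the element with the returned results.

import xml.etree.ElementTree as ET

# Illustrative local registry; in WebContent these would be remote Web services.
SERVICES = {
    "fullTextSearch": lambda query: ["doc-12", "doc-47"],
    "semanticAnnotation": lambda query: ["Person:Gardarin"],
}

DOCUMENT = """
<report>
  <call service="fullTextSearch" query="XML warehousing"/>
  <call service="semanticAnnotation" query="doc-12"/>
</report>
"""


def materialize(doc_xml: str) -> str:
    """Replace each <call> element by the results returned by its service."""
    root = ET.fromstring(doc_xml)
    for call in root.findall("call"):
        service = SERVICES[call.get("service")]
        results = service(call.get("query"))
        call.tag, call.attrib = "results", {}       # rewrite the element in place
        for value in results:
            ET.SubElement(call, "item").text = value
    return ET.tostring(root, encoding="unicode")


print(materialize(DOCUMENT))
```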


International Journal of Telemedicine and Applications | 2008

A tamper-resistant and portable healthcare folder

Nicolas Anciaux; Morgane Berthelot; Laurent Braconnier; Luc Bouganim; Martine De la Blache; Georges Gardarin; Philippe Kesmarszky; Sophie Lartigue; Jean-François Navarre; Philippe Pucheral; Jean-Jacques Vandewalle; Karine Zeitouni

Electronic health record (EHR) projects have been launched in most developed countries to increase the quality of healthcare while decreasing its cost. The benefits provided by centralizing healthcare information in database systems are unquestionable in terms of information quality, availability, and protection against failure. Yet, patients are reluctant to hand a distant server control over highly sensitive data (e.g., data revealing a severe or shameful disease). This paper capitalizes on a new portable hardware device, combining the security of a smart card with the storage capacity of a USB key, to give the patient back control over his or her medical data. The paper shows how this device can complement a traditional EHR server to (1) protect and share highly sensitive data among trusted parties and (2) provide seamless access to the data even in disconnected mode. The proposed architecture is being experimented with in the context of a medico-social network providing medical care and social services at home for elderly people.


International Journal of Data Warehousing and Mining | 2007

GeoCache: A Cache for GML Geographical Data

Lionel Savary; Georges Gardarin; Karine Zeitouni

GML is a promising model for integrating geodata within data warehouses. The resulting databases are generally large and require spatial operators to be handled. Depending on the size of the target geographical data and the number and complexity of operators in a query, the processing time may quickly become prohibitive. To optimize spatial queries over GML-encoded data, this paper introduces a novel cache-based architecture. A new cache replacement policy is then proposed. It takes into account the containment properties of geographical data and predicates, and allows the least relevant entries to be evicted from the cache. Experiments with the GeoCache prototype show the effectiveness of the proposed architecture and the associated replacement policy, compared with existing work.
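
A containment-aware cache can be sketched in a few lines. The Python fragment below is a hypothetical illustration inspired by the abstract, not the GeoCache implementation: cached results are keyed by bounding boxes, any cached box that contains a new query answers it, and a simplified least-recently-useful rule drives eviction.

```python
# Hedged sketch of containment-aware caching (not the GeoCache code): cached query
# results are keyed by bounding boxes, a query is answered by any cached box that
# contains it, and eviction drops the least recently useful entry.

from collections import OrderedDict
from dataclasses import dataclass


@dataclass(frozen=True)
class BBox:
    xmin: float
    ymin: float
    xmax: float
    ymax: float

    def contains(self, other: "BBox") -> bool:
        return (self.xmin <= other.xmin and self.ymin <= other.ymin
                and self.xmax >= other.xmax and self.ymax >= other.ymax)


class ContainmentCache:
    def __init__(self, capacity: int) -> None:
        self.capacity = capacity
        self.entries: "OrderedDict[BBox, object]" = OrderedDict()

    def get(self, query: BBox):
        # A containing region answers the query; refresh it as most recently useful.
        for region, result in self.entries.items():
            if region.contains(query):
                self.entries.move_to_end(region)
                return result
        return None

    def put(self, region: BBox, result: object) -> None:
        if region in self.entries:
            self.entries.move_to_end(region)
        self.entries[region] = result
        if len(self.entries) > self.capacity:
            # Evict the least recently useful region (simplified replacement rule).
            self.entries.popitem(last=False)


cache = ContainmentCache(capacity=2)
cache.put(BBox(0, 0, 10, 10), "features covering the whole district")
print(cache.get(BBox(2, 2, 3, 3)))      # answered by containment, no recomputation
print(cache.get(BBox(50, 50, 60, 60)))  # miss: would be fetched and then cached
```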


International Conference on Enterprise Information Systems | 2006

P2P Semantic Mediation of Web Sources

Georges Gardarin; Florin Dragan; Laurent Yeh

XML mediators focus on supporting the XQuery (or sometimes the SQL/XML) query language over XML views of heterogeneous data sources. Wrappers expose data sources as XML views with basic query capabilities. The data are integrated on demand by the mediator, which delegates sub-queries to wrappers. Using such information integration platforms to query the semantic Web is a challenge, both for scalability and for data semantics. Meanwhile, semantic peer-to-peer (P2P) networks are emerging as an important infrastructure for managing distributed data, notably on the Web. An important goal is to improve query facilities over distributed heterogeneous data. Coupling data mediation and P2P technology, P2P data mediation strives to efficiently support advanced queries over heterogeneous data sources annotated with various metadata and mapping schemes. We analyze the main services provided by mediation systems and discuss their extension to the semantic Web in P2P mode. We briefly discuss the annotation service for describing sources syntactically and semantically, the routing service to route data location requests, and the query service to resolve distributed queries. We survey some projects and report on PathFinder, an experimental P2P mediation system developed at the University of Versailles in the context of the Satine European project.


Computer-Based Medical Systems | 2007

Text Categorization for Multi-label Documents and Many Categories

I. Sandu Popa; Karine Zeitouni; Georges Gardarin; Didier Nakache; Elisabeth Métais

In this paper, we propose a new classification method that addresses the classification of textual documents into multiple categories. We call it Matrix Regression (MR) due to its resemblance to regression in a high-dimensional space. Experiments on a medical corpus of hospital records to be classified by ICD (International Classification of Diseases) code demonstrate the validity of the MR approach. We compared MR with three algorithms frequently used in text categorization: k-Nearest Neighbors, Centroid, and Support Vector Machine. The experimental results show that our method outperforms them in both precision and classification time.
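
Since the abstract does not spell out the MR formulation, the Python sketch below only illustrates the general flavor of scoring documents against a term-by-category weight matrix; the toy vocabulary, labels, and least-squares fit are assumptions for illustration, not the paper's method.

```python
# Hedged illustration of scoring documents against a term-by-category weight matrix;
# this is NOT the exact Matrix Regression formulation of the paper, whose details
# are not given in the abstract.

import numpy as np

# Toy vocabulary and two categories (simplified ICD-like labels, invented here).
vocab = {"fracture": 0, "femur": 1, "asthma": 2, "wheezing": 3}
categories = ["injury", "respiratory"]

# Training documents as bag-of-words vectors, with multi-label targets.
X = np.array([
    [2, 1, 0, 0],   # "fracture fracture femur"
    [0, 0, 3, 1],   # "asthma asthma asthma wheezing"
    [1, 1, 1, 0],   # mixed record, both labels apply
], dtype=float)
Y = np.array([
    [1, 0],
    [0, 1],
    [1, 1],
], dtype=float)

# Least-squares fit of a term-to-category weight matrix M such that X @ M ~= Y.
M, *_ = np.linalg.lstsq(X, Y, rcond=None)


def classify(text: str, threshold: float = 0.5) -> list:
    """Score a document against every category; keep labels above the threshold."""
    x = np.zeros(len(vocab))
    for word in text.lower().split():
        if word in vocab:
            x[vocab[word]] += 1
    scores = x @ M
    return [categories[j] for j, s in enumerate(scores) if s >= threshold]


print(classify("femur fracture with wheezing"))
```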


Database and Expert Systems Applications | 2005

MediaPeer: A Safe, Scalable P2P Architecture for XML Query Processing

Florin Dragan; Georges Gardarin; Laurent Yeh

The increasing popularity of XML and P2P networks has generated much interest in distributed processing of XML data. We propose a novel solution organized around a mediator capable of processing XQueries over multiple heterogeneous data sources. The solution consists of a hierarchical overlay network formed of peers and super-peers indexing the XML paths of the data sources. Our solution is: (i) scalable, as the network can grow dynamically with size-adaptive trie-based indexes in each super-peer; (ii) reliable, as procedures are provided for recovering from node failures or root saturation; and (iii) efficient, as query processing is performed by an existing optimized mediator that can track query progress and response sizes.
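
The path-indexing idea can be sketched as a trie kept by a super-peer. The Python fragment below is a hedged illustration, not the MediaPeer code: each published XML path is inserted into the trie together with the publishing peer, and routing a path query is a walk down the trie.

```python
# Hedged sketch of trie-based path indexing at a super-peer (not the MediaPeer
# implementation): the trie maps each XML path prefix to the set of peers that
# published data under it, so a path query is forwarded only to relevant peers.

from typing import Dict, Set


class PathTrieNode:
    def __init__(self) -> None:
        self.children: Dict[str, "PathTrieNode"] = {}
        self.peers: Set[str] = set()


class SuperPeerIndex:
    def __init__(self) -> None:
        self.root = PathTrieNode()

    def publish(self, peer: str, path: str) -> None:
        """Register that `peer` holds data reachable through `path` (e.g. /a/b/c)."""
        node = self.root
        for step in path.strip("/").split("/"):
            node = node.children.setdefault(step, PathTrieNode())
            node.peers.add(peer)  # every prefix remembers the publishing peer

    def route(self, path: str) -> Set[str]:
        """Return the peers a path query should be forwarded to."""
        node = self.root
        for step in path.strip("/").split("/"):
            if step not in node.children:
                return set()
            node = node.children[step]
        return node.peers


index = SuperPeerIndex()
index.publish("peer-1", "/catalog/book/title")
index.publish("peer-2", "/catalog/book/price")
print(index.route("/catalog/book"))        # {'peer-1', 'peer-2'}
print(index.route("/catalog/book/title"))  # {'peer-1'}
```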


Database Systems for Advanced Applications | 2007

Indexing textual XML in P2P networks using distributed Bloom filters

Clément Jamard; Georges Gardarin; Laurent Yeh

Nowadays, P2P information systems can be considered large-scale databases in which all peers can store and query data in the network. Keyword and structure indexes must be maintained. However, indexing XML documents with a massive set of words raises a major problem: the number of entries to be shipped across the network is huge. We define the Distributed Bloom Filter, a data structure derived from Bloom filters (a probabilistic data structure for testing whether an element is a member of a set), to summarize peer XML content and structure. Our key idea is to split the traditional Bloom filter into several segments. We rely on a DHT to distribute these segments across the P2P network. Our measurements show that our indexing method is scalable to a large number of words and outperforms similar methods.
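
The segmentation idea can be illustrated with a short sketch. The Python fragment below is a hedged, single-process simulation rather than the paper's exact construction: the Bloom filter's bit array is split into fixed-size segments, each of which would live on the DHT node responsible for it, so a membership test only touches the segments that own the hashed positions.

```python
# Hedged sketch of a segmented Bloom filter (not the paper's exact construction):
# the bit array is split into segments; in a real deployment each segment would be
# stored on a different DHT node, so only the touched segments travel the network.

import hashlib


class SegmentedBloomFilter:
    def __init__(self, num_bits: int = 1024, num_hashes: int = 4,
                 segment_size: int = 128) -> None:
        self.num_bits = num_bits
        self.num_hashes = num_hashes
        self.segment_size = segment_size
        num_segments = num_bits // segment_size
        self.segments = [[0] * segment_size for _ in range(num_segments)]

    def _positions(self, item: str):
        # Derive k bit positions from k independent hashes of the item.
        for i in range(self.num_hashes):
            digest = hashlib.sha1(f"{i}:{item}".encode()).hexdigest()
            yield int(digest, 16) % self.num_bits

    def add(self, item: str) -> None:
        for pos in self._positions(item):
            seg, offset = divmod(pos, self.segment_size)
            self.segments[seg][offset] = 1

    def might_contain(self, item: str) -> bool:
        # Only the segments owning the hashed positions are consulted.
        for pos in self._positions(item):
            seg, offset = divmod(pos, self.segment_size)
            if self.segments[seg][offset] == 0:
                return False          # definitely absent
        return True                   # possibly present (false positives allowed)


dbf = SegmentedBloomFilter()
dbf.add("title:database")
dbf.add("author:gardarin")
print(dbf.might_contain("title:database"))   # True
print(dbf.might_contain("title:networks"))   # almost certainly False
```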

Collaboration


An overview of Georges Gardarin's collaborations.

Top Co-Authors

Florin Dragan | Centre national de la recherche scientifique
Fei Sha | Centre national de la recherche scientifique
Anthony Tomasic | Carnegie Mellon University
Nicolas Travers | Conservatoire national des arts et métiers
Elisabeth Métais | Conservatoire national des arts et métiers
Zhao-Hui Tang | Centre national de la recherche scientifique
Serge Abiteboul | École normale supérieure de Cachan