Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Eric Prud'hommeaux is active.

Publication


Featured research published by Eric Prud'hommeaux.


Journal of Cheminformatics | 2011

Linked open drug data for pharmaceutical research and development

Matthias Samwald; Anja Jentzsch; Christopher Bouton; Claus Stie Kallesøe; Egon Willighagen; Janos Hajagos; M. Scott Marshall; Eric Prud'hommeaux; Oktie Hassanzadeh; Elgar Pichler; Susie Stephens

There is an abundance of information about drugs available on the Web. Data sources range from medicinal chemistry results, through the impact of drugs on gene expression, to the outcomes of drugs in clinical trials. These data are typically not connected to each other, which reduces the ease with which insights can be gained. Linking Open Drug Data (LODD) is a task force within the World Wide Web Consortium's (W3C) Health Care and Life Sciences Interest Group (HCLS IG). LODD has surveyed publicly available data about drugs, created Linked Data representations of the data sets, and identified interesting scientific and business questions that can be answered once the data sets are connected. The task force provides recommendations for best practices of exposing data in a Linked Data representation. In this paper, we present past and ongoing work of LODD and discuss the growing importance of Linked Data as a foundation for pharmaceutical R&D data sharing.
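To make the linking idea concrete, here is a minimal sketch in Python with rdflib of how statements from two drug data sources can be connected and then queried together. All URIs, predicates, and identifiers below are illustrative placeholders, not the actual LODD dataset identifiers.

```python
# Minimal sketch of cross-dataset drug linking in the Linked Data style
# described above. All URIs and predicates are illustrative placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import OWL, RDFS

DRUG = Namespace("http://example.org/drugbank/")
TRIAL = Namespace("http://example.org/clinicaltrials/")

g = Graph()
# Statement from a (hypothetical) medicinal chemistry dataset.
g.add((DRUG.DB00945, RDFS.label, Literal("Acetylsalicylic acid")))
# Link asserted between datasets: the same drug described elsewhere.
g.add((DRUG.DB00945, OWL.sameAs, TRIAL.aspirin))
# Statement from a (hypothetical) clinical trials dataset.
g.add((TRIAL.aspirin, TRIAL.studiedIn, URIRef("http://example.org/trial/NCT00000000")))

# Once the data sets are linked, a single SPARQL query can follow the links.
q = """
PREFIX owl: <http://www.w3.org/2002/07/owl#>
PREFIX trial: <http://example.org/clinicaltrials/>
SELECT ?drug ?study WHERE {
    ?drug owl:sameAs ?same .
    ?same trial:studiedIn ?study .
}
"""
for drug, study in g.query(q):
    print(drug, "was studied in", study)
```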


Computer Networks | 2002

Annotea: an open RDF infrastructure for shared Web annotations

José Kahan; Marja-Riitta Koivunen; Eric Prud'hommeaux; Ralph R. Swick

Annotea is a Web-based shared annotation system based on a general-purpose open Resource Description Framework (RDF) infrastructure, where annotations are modeled as a class of metadata. Annotations are viewed as statements made by an author about a Web document. Annotations are external to the documents and can be stored in one or more annotation servers. One of the goals of this project has been to re-use as much existing W3C technology as possible. We have achieved this largely by combining RDF with XPointer, XLink, and HTTP. We have also implemented an instance of our system using the Amaya editor/browser and a generic RDF database, accessible through an Apache HTTP server. In this implementation, the merging of annotations with documents takes place within the client. The paper presents the overall design of Annotea and describes some of the issues we have faced and how we have solved them.
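As a rough illustration, an Annotea-style annotation can be written as RDF statements about a document, kept separately from the document itself. The sketch below assumes the Annotea annotation vocabulary at http://www.w3.org/2000/10/annotation-ns#; the document URI, XPointer, and server address are invented for illustration.

```python
# Sketch of an Annotea-style annotation: RDF statements made *about* a Web
# document and stored on a separate annotation server. The annotation
# namespace follows the Annotea design; all concrete URIs are invented.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import DC, RDF

ANN = Namespace("http://www.w3.org/2000/10/annotation-ns#")

g = Graph()
note = URIRef("http://annotations.example.org/anno/42")  # held by an annotation server
g.add((note, RDF.type, ANN.Annotation))
g.add((note, ANN.annotates, URIRef("http://www.example.org/page.html")))
# An XPointer locates the annotated span inside the document.
g.add((note, ANN.context,
       Literal("http://www.example.org/page.html#xpointer(/html/body/p[2])")))
g.add((note, ANN.body, URIRef("http://annotations.example.org/anno/42/body")))
g.add((note, DC.creator, Literal("A. Reader")))
g.add((note, DC.date, Literal("2002-01-15")))

print(g.serialize(format="turtle"))
```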


Briefings in Bioinformatics | 2009

Semantic Web for Health Care and Life Sciences: a review of the state of the art

Kei-Hoi Cheung; Eric Prud'hommeaux; Yimin Wang; Susie Stephens

Biomedical researchers need to be able to ask questions that span many heterogeneous data sources in order to make well-informed decisions that may lead to important scientific breakthroughs. For this to be achieved, diverse types of data about drugs, patients, diseases, proteins, cells, pathways and so on must be effectively integrated. Yet, linking disparate biomedical data continues to be a challenge due to inconsistency in naming and heterogeneity in data models and formats. Many organizations are now exploring the use of Semantic Web technologies in the hope of easing the cost of data integration [1]. The benefits promised by the Semantic Web include integration of heterogeneous data using explicit semantics, simplified annotation and sharing of findings, rich explicit models for data representation, aggregation and search, easier re-use of data in unanticipated ways, and the application of logic to infer additional information [2].

The World Wide Web Consortium (W3C) (http://www.w3.org/) has established the Semantic Web for Health Care and Life Sciences Interest Group (HCLS IG) (http://www.w3.org/2001/sw/hcls/) to help organizations in their adoption of the Semantic Web. The HCLS IG is chartered to develop and support the use of Semantic Web technologies to improve collaboration, research and development, innovation, and adoption in the domains of Health Care and Life Sciences. As a part of realizing this vision, a workshop on the Semantic Web for Health Care and Life Sciences was organized in conjunction with WWW2008 (http://esw.w3.org/topic/HCLS/WWW2008) [3]. The workshop provided a review of the latest positions and research in this domain. Five of the seven papers within this issue originated from the HCLS/WWW2008 workshop and review a range of Semantic Web technologies/approaches employed in different biomedical domains.

Vandervalk et al. describe ‘The State of the Union’ for the adoption of Semantic Web standards by key institutes in bioinformatics. The paper explores the nature and connectivity of several community-driven semantic warehousing projects. It reports on the progress with the CardioSHARE/Moby-2 project, which aims to make the resources of the ‘Deep Web’ transparently accessible through SPARQL queries. It points out that the warehouse approach is limited, in that queries are confined to the resources that have been selected for inclusion. It also discusses a related problem that the majority of bioinformatics data exist in the ‘Deep Web’, that is, the data does not exist until an application or analytical tool is invoked, and therefore does not have a predictable Web address. It also highlights that the inability to utilize Uniform Resource Identifiers (URIs) to address bioinformatics data is a barrier to its accessibility in the Semantic Web.

Das et al. discuss the use of ontologies to bridge diverse Web-based communities. The paper introduces the Science Collaboration Framework (SCF) as a reusable platform for advanced online collaboration in biomedical research. SCF supports structured Web 2.0 community discourse amongst researchers, makes heterogeneous data resources available to collaborating scientists, captures the semantics of the relationships between resources, and structures discourse around the resources. The first instance of the SCF framework is being used to create an open-access online community for stem cell research—StemBook (http://www.stembook.org). The SCF framework has been applied to interdisciplinary areas such as neurodegenerative disease and neuro-repair research, but has broad utility across the natural sciences.

Zhao et al. describe various design patterns for representing and querying provenance information relating to mapping links between heterogeneous data from sources in the domain of functional genomics. The paper illustrates the use of named RDF graphs at different levels of granularity to make provenance assertions about linked data. It also demonstrates that these assertions are sufficient to support requirements including data currency, integrity, evidential support and historical queries.

Dumontier et al. discuss a number of approaches for capturing pharmacogenomic data and other related information to facilitate data sharing and knowledge discovery. The paper describes how recent advances in Semantic Web technologies have presented exciting new opportunities for knowledge discovery related to pharmacogenomics by representing information with machine-understandable semantics. It illustrates progress in this area with respect to a personalized medicine project which aims to facilitate pharmacogenomics knowledge discovery through intuitive knowledge capture and sophisticated question answering using automated reasoning over expressive ontologies.

Manning et al. review several data integration approaches that involve extracting data from a wide variety of public and private data repositories, each of which is associated with a unique vocabulary and schema. The paper presents an implemented data architecture that leverages semantic mapping of experimental metadata to support the rapid development of scientific discovery applications. This achieves the twin goals of reducing architectural complexity while leveraging Semantic Web technologies to provide flexibility, efficiency and more fully characterized data relationships. The architecture consists of a metadata ontology, a metadata repository and an interface that allows access to the repository. The paper describes how this approach allows scientists to discover and link relevant data across diverse data sources. It provides a platform for development of integrative informatics applications.

Chen et al. survey the feasibility and state of the art for using Semantic Web technology to represent, integrate and analyze knowledge in a range of biomedical networks. The paper introduces a conceptual framework to enable researchers to integrate graph mining with ontology reasoning in network data analysis. Four case studies are used to demonstrate how semantic graph mining can be applied to the analysis of disease-causal genes, Gene Ontology (GO) category cross-talks, drug efficacy analysis and herb–drug interaction analysis.

Ruttenberg et al. review the use of Semantic Web technologies for assembling and querying biomedical knowledge from multiple sources and disciplines. The paper presents the Neurocommons prototype knowledge base, a demonstration intended to show the feasibility and benefits of using Semantic Web technologies. The prototype allows one to explore the scalability of current Semantic Web tools and methods for creating such a resource, and to reveal issues that will need to be addressed in order to further expand its scope and use. The paper demonstrates the utility of the knowledge base by reviewing a few example queries that provide answers to precise questions relevant to the understanding of the disease.
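The named-graph provenance pattern that Zhao et al. describe can be sketched with rdflib: mapping links derived from one source live in their own named graph, and provenance statements are attached to that graph's URI. The vocabulary and URIs below are invented for illustration, not taken from the paper.

```python
# Sketch of provenance over named RDF graphs: links from one source are kept
# in a named graph, and assertions about currency and origin are made about
# the graph URI itself. All URIs and predicates are illustrative.
from rdflib import Dataset, Literal, Namespace, URIRef

EX = Namespace("http://example.org/")
graph_uri = URIRef("http://example.org/graph/uniprot-links-2008-10")

ds = Dataset()
# Mapping links derived from one source, stored in their own named graph.
links = ds.graph(graph_uri)
links.add((EX.geneA, EX.mappedTo, EX.proteinX))

# Provenance assertions about the named graph, kept in the default graph.
ds.add((graph_uri, EX.derivedFrom, Literal("UniProt release 14.3")))
ds.add((graph_uri, EX.generatedOn, Literal("2008-10-01")))

# A query can restrict itself to links whose graph carries a currency statement.
q = """
PREFIX ex: <http://example.org/>
SELECT ?s ?o ?g WHERE {
    GRAPH ?g { ?s ex:mappedTo ?o }
    ?g ex:generatedOn ?date .
}
"""
for row in ds.query(q):
    print(row)
```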


international conference on semantic systems | 2014

Shape expressions: an RDF validation and transformation language

Eric Prud'hommeaux; José Emilio Labra Gayo; Harold R. Solbrig

RDF is a graph-based data model that is widely used for Semantic Web and Linked Data applications. In this paper we describe a Shape Expression definition language which enables RDF validation through the declaration of constraints on the RDF model. Shape Expressions can be used to validate RDF data, communicate expected graph patterns for interfaces and generate user interface forms. We describe the syntax and the formal semantics of Shape Expressions using inference rules. Shape Expressions can be seen as a domain-specific language for defining shapes of RDF graphs based on regular expressions. Attached to Shape Expressions are semantic actions, which provide an extension point for validation or for arbitrary code execution, such as those in parser generators. Using semantic actions, it is possible to augment the validation expressiveness of Shape Expressions and to transform RDF graphs in a straightforward way. We have implemented several validation tools that check whether an RDF graph matches a Shape Expressions schema and that infer the corresponding shapes. We have also implemented two extensions, called GenX and GenJ, that leverage the predictability of the graph traversal and create ordered, closed-content XML/JSON documents, providing a simple, declarative mapping from RDF data to XML and JSON documents.
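As a rough sketch of how the language is used, the snippet below declares two shapes and validates a small RDF graph against them. It relies on the third-party pyshex package and its ShExEvaluator class, which is an assumption about available tooling rather than one of the implementations described in the paper; the vocabulary is invented for illustration.

```python
# Minimal sketch of ShEx validation: declare shapes constraining an RDF graph
# and check a focus node against them. Uses the third-party `pyshex` package
# (an assumption about tooling); data and shapes are illustrative only.
from pyshex import ShExEvaluator

SHEX = """
PREFIX ex:  <http://example.org/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

ex:IssueShape {
  ex:state [ex:open ex:closed] ;   # the state must be one of two IRIs
  ex:reportedBy @ex:UserShape ;    # exactly one reporter, itself constrained
  ex:reproducedOn xsd:date *       # zero or more reproduction dates
}

ex:UserShape {
  ex:name xsd:string
}
"""

RDF_DATA = """
PREFIX ex:  <http://example.org/>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

ex:issue1 ex:state ex:open ;
          ex:reportedBy ex:alice ;
          ex:reproducedOn "2014-06-01"^^xsd:date .
ex:alice  ex:name "Alice" .
"""

results = ShExEvaluator(rdf=RDF_DATA, schema=SHEX,
                        focus="http://example.org/issue1",
                        start="http://example.org/IssueShape").evaluate()
for r in results:
    print(r.focus, "conforms" if r.result else "fails: %s" % r.reason)
```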


international conference on database theory | 2015

Complexity and Expressiveness of ShEx for RDF

Slawomir Staworko; Iovka Boneva; José Emilio Labra Gayo; Samuel Hym; Eric Prud'hommeaux; Harold R. Solbrig

We study the expressiveness and complexity of Shape Expression Schema (ShEx), a novel schema formalism for RDF currently under development by W3C. ShEx assigns types to the nodes of an RDF graph and allows the admissible neighborhoods of nodes of a given type to be constrained with regular bag expressions (RBEs). We formalize and investigate two alternative semantics, multi- and single-type, depending on whether or not a node may have more than one type. We study the expressive power of ShEx and the complexity of the validation problem. We show that the single-type semantics is strictly more expressive than the multi-type semantics, single-type validation is generally intractable, and multi-type validation is feasible for a small (yet practical) subclass of RBEs. To curb the high computational complexity of validation, we propose a natural notion of determinism and show that multi-type validation for the class of deterministic schemas using single-occurrence regular bag expressions (SORBEs) is tractable.
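A small illustration (not from the paper) of the single-occurrence restriction: the first fragment below repeats the same property-and-shape symbol in a shape's neighborhood expression, while the equivalent rewrite uses the symbol once under an interval cardinality, which is the SORBE form for which the paper shows deterministic multi-type validation to be tractable.

```python
# Two ShEx fragments (illustrative only) contrasting a repeated symbol with its
# single-occurrence rewrite; both require two ex:child values conforming to
# ex:LeafShape.

# General RBE: the same property-and-shape symbol appears twice.
general_rbe = """
PREFIX ex: <http://example.org/>
ex:NodeShape {
  ex:child @ex:LeafShape ;
  ex:child @ex:LeafShape
}
"""

# Single-occurrence RBE (SORBE): the symbol appears once, with a cardinality.
sorbe = """
PREFIX ex: <http://example.org/>
ex:NodeShape {
  ex:child @ex:LeafShape {2}
}
"""
```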


Journal of Biomedical Informatics | 2017

Modeling and validating HL7 FHIR profiles using semantic web Shape Expressions (ShEx)

Harold R. Solbrig; Eric Prud'hommeaux; Grahame Grieve; Lloyd McKenzie; Joshua C. Mandel; Deepak K. Sharma; Guoqian Jiang

BACKGROUND: HL7 Fast Healthcare Interoperability Resources (FHIR) is an emerging open standard for the exchange of electronic healthcare information. FHIR resources are defined in a specialized modeling language. FHIR instances can currently be represented in either XML or JSON. The FHIR and Semantic Web communities are developing a third FHIR instance representation format in Resource Description Framework (RDF). Shape Expressions (ShEx), a formal RDF data constraint language, is a candidate for describing and validating the FHIR RDF representation.

OBJECTIVE: Create a FHIR-to-ShEx model transformation and assess its ability to describe and validate FHIR RDF data.

METHODS: We created the methods and tools that generate the ShEx schemas modeling the FHIR-to-RDF specification being developed by the HL7 ITS/W3C RDF Task Force, and evaluated the applicability of ShEx in the description and validation of FHIR-to-RDF transformations.

RESULTS: The ShEx models contributed significantly to workgroup consensus. Algorithmic transformations from the FHIR model to ShEx schemas and from FHIR example data to RDF were incorporated into the FHIR build process. ShEx schemas representing 109 FHIR resources were used to validate 511 FHIR RDF data examples from the Standards for Trial Use (STU 3) Ballot version. We were able to uncover unresolved issues in the FHIR-to-RDF specification and detect 10 types of errors and root causes in the actual implementation. The FHIR ShEx representations have been included in the official FHIR web pages for the STU 3 Ballot version since September 2016.

DISCUSSION: ShEx can be used to define and validate the syntax of a FHIR resource, which is complementary to the use of RDF Schema (RDFS) and Web Ontology Language (OWL) for semantic validation.

CONCLUSION: ShEx proved useful for describing a standard model of FHIR RDF data. The combination of a formal model and a succinct format enabled comprehensive review and automated validation.
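As a rough, abbreviated illustration of the approach (this is not the official ShEx schema generated by the FHIR build process, and the predicates are simplified), a shape describing part of a FHIR resource in its RDF form might look like the following.

```python
# Highly simplified, illustrative ShEx fragment for part of a FHIR resource in
# RDF form. It is NOT the official FHIR ShEx schema; predicate names and
# structure are abbreviated for illustration only.
FHIR_LIKE_SHAPE = """
PREFIX fhir: <http://hl7.org/fhir/>
PREFIX xsd:  <http://www.w3.org/2001/XMLSchema#>

<PatientShape> {
  fhir:Patient.gender    { fhir:value ["male" "female" "other" "unknown"] } ? ;
  fhir:Patient.birthDate { fhir:value xsd:date } ?
}
"""
```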


16th World Congress of Medical and Health Informatics: Precision Healthcare through Informatics, MedInfo 2017 | 2017

Building interoperable FHIR-based vocabulary mapping services: A case study of OHDSI vocabularies and mappings

Guoqian Jiang; Richard C. Kiefer; Eric Prud'hommeaux; Harold R. Solbrig

The Observational Health Data Sciences and Informatics (OHDSI) Common Data Model (CDM) is a deep information model whose vocabulary component plays a critical role in enabling consistent coding and querying of clinical data. The objective of this study is to create methods and tools that expose the OHDSI vocabularies and mappings as vocabulary mapping services using two HL7 FHIR core terminology resources, ConceptMap and ValueSet. We discuss the benefits and challenges in building the FHIR-based terminology services.
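For illustration, the kind of FHIR ConceptMap payload such a service could expose is sketched below as a Python dictionary. The field names follow the FHIR ConceptMap resource (STU3-era names assumed); the concrete codes, systems, and URLs are abbreviated or invented.

```python
# Abbreviated, illustrative FHIR ConceptMap mapping an OHDSI concept to a
# target code system. Structure follows the ConceptMap resource (STU3-era
# field names assumed); values are invented or simplified.
concept_map = {
    "resourceType": "ConceptMap",
    "url": "http://example.org/fhir/ConceptMap/ohdsi-gender",
    "status": "draft",
    "group": [{
        "source": "http://example.org/ohdsi/vocabulary/Gender",
        "target": "http://hl7.org/fhir/administrative-gender",
        "element": [{
            "code": "8507",                # OHDSI concept id (illustrative)
            "display": "MALE",
            "target": [{
                "code": "male",
                "equivalence": "equivalent"
            }]
        }]
    }]
}
```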


16th World Congress of Medical and Health Informatics: Precision Healthcare through Informatics, MedInfo 2017 | 2017

A consensus-based approach for harmonizing the OHDSI common data model with HL7 FHIR

Guoqian Jiang; Richard C. Kiefer; Deepak K. Sharma; Eric Prud'hommeaux; Harold R. Solbrig

A variety of data models have been developed to provide a standardized data interface that supports organizing clinical research data into a standard structure for building integrated data repositories. HL7 Fast Healthcare Interoperability Resources (FHIR) is emerging as a next-generation standards framework for facilitating health care and electronic health records-based data exchange. The objective of the study was to design and assess a consensus-based approach for harmonizing the OHDSI CDM with HL7 FHIR. We leveraged the FHIR W5 (Who, What, When, Where, and Why) classification system to design the harmonization approaches and assessed their utility in achieving consensus among curators using a standard inter-rater agreement measure. Moderate agreement was achieved for the model-level harmonization (kappa = 0.50), whereas only fair agreement was achieved for the property-level harmonization (kappa = 0.21). FHIR W5 is a useful tool for designing harmonization approaches between data models and FHIR and for facilitating consensus.
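The agreement figures quoted above are Cohen's kappa values. A minimal sketch of that computation for two raters is shown below; the category labels and judgements are invented, not the study's data.

```python
# Cohen's kappa for two raters: kappa = (p_o - p_e) / (1 - p_e), where p_o is
# observed agreement and p_e is the agreement expected by chance.
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    assert len(rater_a) == len(rater_b) and rater_a
    n = len(rater_a)
    p_o = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    counts_a, counts_b = Counter(rater_a), Counter(rater_b)
    p_e = sum((counts_a[c] / n) * (counts_b[c] / n)
              for c in set(rater_a) | set(rater_b))
    return (p_o - p_e) / (1 - p_e)

# Two curators assigning (invented) FHIR W5 categories to CDM elements.
curator_1 = ["who", "what", "when", "what", "where", "what"]
curator_2 = ["who", "what", "when", "why",  "where", "what"]
print(round(cohens_kappa(curator_1, curator_2), 2))
```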


international semantic technology conference | 2013

A Formal Model for RDF Dataset Constraints

Harold R. Solbrig; Eric Prud'hommeaux; Christopher G. Chute; Jim Davies

Linked Data has forged new ground in developing easy-to-use, distributed databases. The prevalence of this data has enabled a new genre of social and scientific applications. At the same time, Semantic Web technology has failed to significantly displace SQL or XML in industrial applications, in part because it offers no equivalent schema publication and enforcement mechanisms to ensure data consistency. The RDF community has recognized the need for a formal mechanism to publish verifiable assertions about the structure and content of RDF Graphs, RDF Datasets and related resources. We propose a formal model that could serve as a foundation for describing the various types of invariants, pre- and post-conditions for RDF datasets, and then demonstrate how the model can be used to analyze selected example constraints.
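One kind of constraint such a model covers, an invariant over an RDF dataset, can be sketched as a query for violating resources; the vocabulary and the invariant below are invented for illustration, not taken from the paper.

```python
# Sketch of checking an invariant over RDF data by querying for violations.
# The invariant here (every ex:Observation has an ex:subject) and the
# vocabulary are invented for illustration.
from rdflib import Graph

g = Graph()
g.parse(data="""
    @prefix ex: <http://example.org/> .
    ex:obs1 a ex:Observation ; ex:subject ex:patient1 .
    ex:obs2 a ex:Observation .
""", format="turtle")

# ASK whether any ex:Observation lacks an ex:subject (i.e. violates the invariant).
violations = """
PREFIX ex: <http://example.org/>
ASK {
    ?obs a ex:Observation .
    FILTER NOT EXISTS { ?obs ex:subject ?who }
}
"""
print("invariant violated:", bool(g.query(violations).askAnswer))
```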


international world wide web conferences | 2008

Report on semantic web for health care and life sciences workshop

Huajun Chen; Kei-Hoi Cheung; Michel Dumontier; Eric Prud'hommeaux; Alan Ruttenberg; Susie Stephens; Yimin Wang

The Semantic Web for Health Care and Life Sciences Workshop will be held in Beijing, China, on April 22, 2008. The goal of the workshop is to foster the development and advancement in the use of Semantic Web technologies to facilitate collaboration, research and development, and innovation adoption in the domains of Health Care and Life Sciences. We also encourage the participation of all research communities in this event, with enhanced participation from Asia due to the location of the event. The workshop consists of two invited keynote talks, eight peer-reviewed presentations, and one panel discussion.

Collaboration


Dive into Eric Prud'hommeaux's collaboration.

Top Co-Authors

Iovka Boneva

Centre national de la recherche scientifique

Helena F. Deus

National University of Ireland

Jun Zhao

University of Oxford
