Publications


Featured research published by Marie-Christine Jaulent.


Drug Safety | 2005

Appraisal of the MedDRA conceptual structure for describing and grouping adverse drug reactions.

Cédric Bousquet; Georges Lagier; Agnès Lillo-Le Louët; Christine Le Beller; Alain Venot; Marie-Christine Jaulent

Computerised queries in spontaneous reporting systems for pharmacovigilance require reliable and reproducible coding of adverse drug reactions (ADRs). The aim of the Medical Dictionary for Regulatory Activities (MedDRA) terminology is to provide an internationally approved classification for efficient communication of ADR data between countries. Several studies have evaluated the domain completeness of MedDRA and whether encoded terms are coherent with physicians’ original verbatim descriptions of the ADR.

MedDRA terms are organised into five levels: system organ class (SOC), high level group terms (HLGTs), high level terms (HLTs), preferred terms (PTs) and low level terms (LLTs). Although terms may belong to different SOCs, no PT is related to more than one HLT within the same SOC. This hierarchical property ensures that terms cannot be counted twice in statistical studies, though it does not allow appropriate semantic grouping of PTs. For this purpose, special search categories (SSCs), collections of PTs assembled from various SOCs, have been introduced in MedDRA to group terms with similar meanings. However, only a small number of categories are currently available, and the criteria used to construct them have not been clarified.

The objective of this work is to determine whether MedDRA has the structural and terminological properties needed to group semantically linked adverse events, in order to improve the performance of spontaneous reporting systems.

Rossi Mori classifies terminological systems into three categories: first-generation systems, which represent terms as strings; second-generation systems, which dissect terminological phrases into sets of simpler terms; and third-generation systems, which provide advanced features to automatically place new terms in the classification and to group sets of meaning-related terms.

We applied Cimino’s desiderata to show that MedDRA does not have the properties of a third-generation system. Consequently, no tool can assist with the automated positioning of new terms in the hierarchy, and SSCs have to be entered manually rather than derived automatically from the MedDRA files. One solution could be to link MedDRA to a third-generation system. This would allow the current MedDRA structure to be kept, so that end users retain a common view of the same data, while adding new computational properties to MedDRA.
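The hierarchical property mentioned above (within one SOC, a PT is linked to exactly one HLT, so reports are never double-counted inside a SOC) can be checked mechanically. Below is a minimal sketch in Python; the term codes and the flat tuple representation of the hierarchy are invented for illustration and are not MedDRA data.

    from collections import defaultdict

    # Toy PT -> HLT links grouped by SOC; the codes are invented for illustration.
    # A PT may appear under several SOCs, but within one SOC it should be linked
    # to a single HLT, which prevents double counting in statistical studies.
    links = [
        ("SOC_Cardiac",  "HLT_Rate_disorders", "PT_Bradycardia"),
        ("SOC_Cardiac",  "HLT_Heart_failures", "PT_Cardiac_failure"),
        ("SOC_Vascular", "HLT_Hypotension",    "PT_Bradycardia"),  # same PT, other SOC
    ]

    def check_single_hlt_per_soc(links):
        """Return the (SOC, PT) pairs linked to more than one HLT, if any."""
        hlts = defaultdict(set)
        for soc, hlt, pt in links:
            hlts[(soc, pt)].add(hlt)
        return {key: found for key, found in hlts.items() if len(found) > 1}

    print(check_single_hlt_per_soc(links))  # {} -> the property holds for this sample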


International Journal of Medical Informatics | 2005

Implementation of automated signal generation in pharmacovigilance using a knowledge-based approach

Cédric Bousquet; Corneliu Henegar; Agnès Lillo-Le Louët; Patrice Degoulet; Marie-Christine Jaulent

Automated signal generation is a growing field in pharmacovigilance that relies on data mining of very large spontaneous reporting systems to detect unknown adverse drug reactions (ADRs). Previous implementations of quantitative techniques did not take into account issues related to the Medical Dictionary for Regulatory Activities (MedDRA) terminology used for coding ADRs. MedDRA is a first-generation terminology lacking formal definitions, so the grouping of similar medical conditions is not accurate due to taxonomic limitations. Our objective was to build a data-mining tool that improves signal detection algorithms by performing terminological reasoning on MedDRA codes described with the DAML+OIL description logic. We propose the PharmaMiner tool, which implements quantitative techniques based on underlying statistical and Bayesian models. It is a Java application that displays results in tabular format and performs terminological reasoning with the Racer inference engine. The mean frequency of drug-adverse effect associations in the French database was 2.66. Subsumption reasoning based on the MedDRA taxonomic hierarchy produced a mean number of occurrences of 2.92, versus 3.63 (p < 0.001) obtained with a combined technique using subsumption and approximate-matching reasoning based on the ontological structure. Semantic integration of terminological systems with data mining methods is a promising technique for improving machine learning in medical databases.
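As a rough illustration of the terminological grouping the abstract refers to, the sketch below rolls each coded event up to its ancestors before counting drug-event associations. It is not PharmaMiner and uses an invented mini-hierarchy instead of MedDRA or a DAML+OIL reasoner.

    from collections import Counter

    # Invented mini-hierarchy standing in for MedDRA subsumption (child -> parent).
    parent = {
        "PT_Bradycardia": "HLT_Rate_disorders",
        "PT_Tachycardia": "HLT_Rate_disorders",
        "HLT_Rate_disorders": "SOC_Cardiac",
    }

    def ancestors(term):
        """Yield the term itself and all of its ancestors in the toy hierarchy."""
        while term is not None:
            yield term
            term = parent.get(term)

    # Toy spontaneous reports: (drug, coded adverse event); not real data.
    reports = [
        ("drugA", "PT_Bradycardia"),
        ("drugA", "PT_Tachycardia"),
        ("drugB", "PT_Bradycardia"),
    ]

    def grouped_counts(reports):
        """Count drug-event associations at every hierarchy level (subsumption-style grouping)."""
        counts = Counter()
        for drug, event in reports:
            for term in ancestors(event):
                counts[(drug, term)] += 1
        return counts

    for pair, n in sorted(grouped_counts(reports).items()):
        print(pair, n)  # e.g. ('drugA', 'HLT_Rate_disorders') 2

Grouping related preferred terms under a common ancestor raises the counts fed to the statistical detection step, which is the effect the study measures.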


Journal of Medical Internet Research | 2015

Adverse Drug Reaction Identification and Extraction in Social Media: A Scoping Review

Jérémy Lardon; Redhouane Abdellaoui; Florelle Bellet; Hadyl Asfari; Julien Souvignet; Nathalie Texier; Marie-Christine Jaulent; Marie-Noëlle Beyens; Anita Burgun; Cédric Bousquet

Background: The underreporting of adverse drug reactions (ADRs) through traditional reporting channels is a limitation in the efficiency of the current pharmacovigilance system. Patients’ experiences with drugs that they report on social media represent a new source of data that may have some value in postmarketing safety surveillance.

Objective: A scoping review was undertaken to explore the breadth of evidence about the use of social media as a new source of knowledge for pharmacovigilance.

Methods: Daubt et al’s recommendations for scoping reviews were followed. The research questions were as follows: How can social media be used as a data source for postmarketing drug surveillance? What are the available methods for extracting data? What are the different ways to use these data? We queried PubMed, Embase, and Google Scholar to extract relevant articles that were published before June 2014 and with no lower date limit. Two pairs of reviewers independently screened the selected studies and proposed two themes of review: manual ADR identification (theme 1) and automated ADR extraction from social media (theme 2). Descriptive characteristics were collected from the publications to create a database for themes 1 and 2.

Results: Of the 1032 citations from PubMed and Embase, 11 were relevant to the research question. An additional 13 citations were added after further research on the Internet and in reference lists. Themes 1 and 2 explored 11 and 13 articles, respectively. Ways of approaching the use of social media as a pharmacovigilance data source were identified.

Conclusions: This scoping review noted multiple methods for identifying target data, extracting them, and evaluating the quality of medical information from social media. It also showed some remaining gaps in the field. Studies related to the identification theme usually failed to accurately assess the completeness, quality, and reliability of the data that were analyzed from social media. Regarding extraction, no study proposed a generic approach to easily adding a new site or data source. Additional studies are required to precisely determine the role of social media in the pharmacovigilance system.


International Journal of Medical Informatics | 2007

Building an ontology of pulmonary diseases with natural language processing tools using textual corpora.

Audrey Baneyx; Jean Charlet; Marie-Christine Jaulent

Pathologies and medical acts are classified in thesauri to help physicians code their activity. In practice, the use of thesauri is not sufficient to reduce variability in coding, and thesauri are not suitable for computer processing. We believe that automating the coding task requires a conceptual model of medical items: an ontology. Our task is to help lung specialists code acts and diagnoses with software that represents the medical knowledge of this specialty through an ontology. The objective of the reported work was to build an ontology of pulmonary diseases dedicated to the coding process. To meet this objective, we developed a precise methodological process for the knowledge engineer to build various types of medical ontologies. This process is based on the need to express the meaning of each concept precisely in natural language, following differential semantics principles. A differential ontology is a hierarchy of concepts and relationships organized according to their similarities and differences. Our main research hypothesis is that applying natural language processing tools to corpora can provide the resources needed to build the ontology. We consider two corpora, one composed of patient discharge summaries and the other a teaching book. We combine two approaches to enrich ontology building: (i) a method that builds terminological resources through distributional analysis and (ii) a method based on the observation of corpus sequences in order to reveal semantic relationships. Our ontology currently includes 1550 concepts, and the software implementing the coding process is still under development. The results show that the proposed approach is operational and indicate that the combination of these methods, together with the comparison of the resulting terminological structures, gives a knowledge engineer useful clues for building an ontology.
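The distributional-analysis step in approach (i) can be sketched in a few lines: terms that share lexical contexts become candidates for grouping under a common concept. The toy example below uses invented sentences and a simple bag-of-words context window; it is not the tool chain used in the paper.

    from collections import Counter
    from math import sqrt

    # Toy corpus standing in for discharge summaries (invented sentences).
    corpus = [
        "chronic obstructive bronchitis with dyspnea and cough",
        "acute bronchitis with cough and fever",
        "pulmonary emphysema with dyspnea on exertion",
    ]

    def context_vector(term, corpus, window=3):
        """Bag-of-words context vector of a term over the whole corpus."""
        vec = Counter()
        for sentence in corpus:
            tokens = sentence.split()
            for i, tok in enumerate(tokens):
                if tok == term:
                    lo, hi = max(0, i - window), i + window + 1
                    vec.update(t for t in tokens[lo:hi] if t != term)
        return vec

    def cosine(u, v):
        dot = sum(u[k] * v[k] for k in u)
        norm = sqrt(sum(x * x for x in u.values())) * sqrt(sum(x * x for x in v.values()))
        return dot / norm if norm else 0.0

    b = context_vector("bronchitis", corpus)
    e = context_vector("emphysema", corpus)
    print(round(cosine(b, e), 3))  # shared contexts such as "with" and "dyspnea" raise the score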


Journal of the Medical Library Association | 2012

Improving information retrieval using Medical Subject Headings Concepts: a test case on rare and chronic diseases.

Stéfan Jacques Darmoni; Lina Fatima Soualmia; Catherine Letord; Marie-Christine Jaulent; Nicolas Griffon; Benoît Thirion; Aurélie Névéol

Background: As more scientific work is published, it is important to improve access to the biomedical literature. Since 2000, when Medical Subject Headings (MeSH) Concepts were introduced, the MeSH Thesaurus has been concept based. Nevertheless, information retrieval is still performed at the MeSH Descriptor or Supplementary Concept level.

Objective: The study assesses the benefit of using MeSH Concepts for indexing and information retrieval.

Methods: Three sets of queries were built for thirty-two rare diseases and twenty-two chronic diseases: (1) using PubMed Automatic Term Mapping (ATM), (2) using Catalog and Index of French-language Health Internet (CISMeF) ATM, and (3) extrapolating the MEDLINE citations that should be indexed with a MeSH Concept.

Results: Type 3 queries retrieve significantly fewer results than type 1 or type 2 queries (about 18,000 citations versus 200,000 for rare diseases; about 300,000 citations versus 2,000,000 for chronic diseases). CISMeF ATM also provides better precision than PubMed ATM for both disease categories.

Discussion: Using MeSH Concept indexing instead of ATM could theoretically improve retrieval performance under the current indexing policy. However, adopting MeSH Concept-based information retrieval and indexing rules would be a fundamentally better approach. These modifications have already been implemented in the CISMeF search engine.
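The precision comparison reported in the results reduces to a simple ratio once the retrieved and relevant citation sets are known. A minimal sketch, with invented citation identifiers rather than the study's actual query results:

    def precision(retrieved, relevant):
        """Fraction of retrieved citations that are relevant."""
        retrieved, relevant = set(retrieved), set(relevant)
        return len(retrieved & relevant) / len(retrieved) if retrieved else 0.0

    # Invented citation identifiers for illustration; not the study's data.
    relevant_citations = {"c101", "c102", "c103", "c104"}
    pubmed_atm_results = {"c101", "c102", "c200", "c201", "c202"}  # broader, noisier query
    cismef_atm_results = {"c101", "c102", "c103", "c200"}          # narrower query

    print("PubMed ATM precision:", precision(pubmed_atm_results, relevant_citations))  # 0.4
    print("CISMeF ATM precision:", precision(cismef_atm_results, relevant_citations))  # 0.75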


International Journal of Medical Informatics | 2005

Electronic implementation of guidelines in the EsPeR system: A knowledge specification method

Isabelle Colombet; Angel-Ricardo Aguirre-Junco; Sylvain Zunino; Marie-Christine Jaulent; Laurence Leneveut; Gilles Chatellier

Despite initiatives to standardize methods for the development of clinical guidelines, several barriers hinder their integration into daily clinical practice: failure to fulfil quality criteria and the poor effectiveness of their dissemination. Computerization of guidelines can improve their dissemination. The initial step of computerization is knowledge specification from the text of the guideline. We describe the knowledge specification method used in EsPeR (Personalized Estimate of Risks), a web-based decision support system in preventive medicine that, for a given person, estimates risks and gives access to recommendations based on the clinical profile. The method relies on a structured and systematic analysis of the text, allowing detailed specification of a decision tree. We use decision tables to validate the decision algorithm and decision trees to specify it, along with elementary recommendation messages. Editing tools facilitate the validation process and the workflow between expert physicians and computer scientists. Applied to eleven different guidelines, the method allowed quick and valid computerization and integration into the EsPeR system. The method could also help define a framework usable at the initial step of guideline development in order to produce guidelines ready for electronic implementation.
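A decision table of the kind used to validate the decision algorithm can be encoded directly as data. The sketch below is a generic illustration, not the EsPeR knowledge base; the conditions and recommendation messages are invented.

    # Each rule maps a combination of conditions to an elementary recommendation
    # message; conditions and messages are invented for illustration only.
    DECISION_TABLE = [
        ({"smoker": True,  "age_over_50": True},  "Recommend smoking cessation and targeted screening."),
        ({"smoker": True,  "age_over_50": False}, "Recommend smoking cessation."),
        ({"smoker": False, "age_over_50": True},  "Recommend routine age-based screening."),
        ({"smoker": False, "age_over_50": False}, "No specific recommendation."),
    ]

    def recommend(profile):
        """Return the first recommendation whose conditions all match the clinical profile."""
        for conditions, message in DECISION_TABLE:
            if all(profile.get(key) == value for key, value in conditions.items()):
                return message
        raise ValueError("Decision table does not cover this profile.")

    print(recommend({"smoker": True, "age_over_50": False}))

Because every combination of condition values appears exactly once, completeness and consistency can be checked before the table is turned into the decision tree that drives the recommendations.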


Drug Safety | 2015

Computational Approaches for Pharmacovigilance Signal Detection: Toward Integrated and Semantically-Enriched Frameworks

Vassilis Koutkias; Marie-Christine Jaulent

Computational signal detection constitutes a key element of postmarketing drug monitoring and surveillance. Diverse data sources are considered within the ‘search space’ of pharmacovigilance scientists, and respective data analysis methods are employed, all with their qualities and shortcomings, towards more timely and accurate signal detection. Recent systematic comparative studies highlighted not only event-based and data-source-based differential performance across methods but also their complementarity. These findings reinforce the arguments for exploiting all possible information sources for drug safety and for the parallel use of multiple signal detection methods. Combinatorial signal detection has been pursued in only a few studies so far, employing a rather limited number of methods and data sources but illustrating promising outcomes. However, the large-scale realization of this approach requires systematic frameworks that address the challenges of the concurrent analysis setting. In this paper, we argue that semantic technologies provide the means to address some of these challenges, and we particularly highlight their contribution in (a) annotating data sources and analysis methods with quality attributes to facilitate their selection given the analysis scope; (b) consistently defining study parameters such as health outcomes and drugs of interest, and providing guidance for study setup; (c) expressing analysis outcomes in a common format, enabling data sharing and systematic comparisons; and (d) assessing and supporting the novelty of the aggregated outcomes through access to reference knowledge sources related to drug safety. A semantically enriched framework can facilitate seamless access to and use of different data sources and computational methods in an integrated fashion, bringing a new perspective to large-scale, knowledge-intensive signal detection.
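Point (a), annotating data sources and analysis methods with quality attributes to guide their selection, can be illustrated with a small sketch. The attribute names and values below are invented placeholders, not a proposed standard vocabulary.

    from dataclasses import dataclass

    @dataclass
    class SourceAnnotation:
        """Quality attributes attached to a pharmacovigilance data source (illustrative only)."""
        name: str
        source_type: str      # e.g. "spontaneous reports", "social media", "EHR"
        coverage: str         # e.g. "national", "international"
        reporting_lag_days: int

    SOURCES = [
        SourceAnnotation("SRS-A", "spontaneous reports", "international", 30),
        SourceAnnotation("Forum-B", "social media", "national", 1),
        SourceAnnotation("EHR-C", "electronic health records", "national", 7),
    ]

    def select_sources(sources, max_lag_days):
        """Keep only the sources whose timeliness fits the scope of the analysis."""
        return [s for s in sources if s.reporting_lag_days <= max_lag_days]

    for s in select_sources(SOURCES, max_lag_days=7):
        print(s.name, s.source_type)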


Artificial Intelligence in Medicine | 2003

Component-based mediation services for the integration of medical applications

Yigang Xu; Dominique Sauquet; Patrice Degoulet; Marie-Christine Jaulent

Allowing the exchange of information and cooperation among network-wide distributed and heterogeneous applications is a major need of current health-care information systems. The European project SynEx aims at developing an integration platform for both new and legacy applications at each partner's site. Within this project, we developed mediation services based on generic and reusable software components that facilitate the construction of an integration platform and ease the communication and meaningful transformation among distributed and heterogeneous applications. The main component of the mediation services, named Pilot, serves as an intelligent broker. It uses a multi-agent service model that allows the integration platform to span multiple servers. The Pilot transforms a client request into a valid high-level service on the platform and breaks each service up into several elementary steps. For each step, the Pilot uses an agent to perform the operation configured for that step, and at runtime it synchronizes the execution of the different steps. To ease communication and interaction with heterogeneous systems, an agent can embed a Mediator. Mediators are the communication and interpretation tools within the mediation services. We developed a generic mediator model that can be specialized to create specific mediators for different use cases. The mediator model uses two interfaces to connect the mediator with the two systems that need to communicate. Each interface addresses three aspects through three managers: the Communication Manager, the Syntax Manager and the Semantic Manager. Ready-to-use specializations are provided for well-defined cases, which reduces the development effort. Once a manager is specialized, it can be combined with other managers to solve different problems. Meaningful transformation is ensured at the semantic level in each mediator through the Semantic Model component. This component maps the different vocabularies used by different systems onto a shared ontology, which allows the mapping process to focus on the meaning of the transformed information. We used XML in different components of the mediation services as both the interchange format and the description format, which enhanced the flexibility of the components. The component-based approach allows the generic components to be reused in different contexts and keeps the mediation services open to the integration of other available technologies, thus largely reducing development effort.
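The mediator structure described above, two interfaces each handling communication, syntax and semantics, translates naturally into a component skeleton. The sketch below is a schematic reconstruction, not SynEx code; class and method names are illustrative and the managers are no-ops.

    class Manager:
        """Base class for the per-interface managers (no-op in this sketch)."""
        def process(self, message):
            return message

    class CommunicationManager(Manager):
        """Would handle message transport to and from one connected system."""

    class SyntaxManager(Manager):
        """Would parse or serialize the interchange format (XML in the paper)."""

    class SemanticManager(Manager):
        """Would map a local vocabulary onto the shared ontology and back."""

    class Interface:
        """One side of a mediator, chaining the three managers over a message."""
        def __init__(self, comm, syntax, semantic):
            self.comm, self.syntax, self.semantic = comm, syntax, semantic

        def to_shared(self, message):
            """Receive from the connected system and lift it to the shared representation."""
            return self.semantic.process(self.syntax.process(self.comm.process(message)))

        def from_shared(self, message):
            """Lower a shared-representation message and hand it to the connected system."""
            return self.comm.process(self.syntax.process(self.semantic.process(message)))

    class Mediator:
        """Connects two systems through two interfaces."""
        def __init__(self, source_interface, target_interface):
            self.source, self.target = source_interface, target_interface

        def transfer(self, message):
            return self.target.from_shared(self.source.to_shared(message))

    def make_side():
        return Interface(CommunicationManager(), SyntaxManager(), SemanticManager())

    # Both sides use no-op managers here, so the message passes through unchanged.
    print(Mediator(make_side(), make_side()).transfer({"patient_id": "123"}))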


Computer-Based Medical Systems | 2012

Sequential pattern mining to discover relations between genes and rare diseases

Nicolas Béchet; Peggy Cellier; Thierry Charnois; Bruno Crémilleux; Marie-Christine Jaulent

Orphanet provides an international web-based knowledge portal for rare diseases, including a collection of review articles. However, reviews and literature monitoring are manual, so producing new documentation about a rare disease is a time-consuming process, and automatically discovering knowledge from a large collection of texts is a crucial issue. This context is a strong motivation to address the problem of extracting gene-rare disease relationships from texts. In this paper, we tackle this issue with a cross-fertilization of information extraction and data mining techniques (sequential pattern mining under constraints). Experiments show the value of the method for the documentation of rare diseases.
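A toy version of the constrained sequential-pattern step could look like the following: frequent ordered subsequences are kept only if they contain both a gene item and a disease item. This is a simplified enumeration over invented, pre-tagged sentences, not the constrained mining algorithm actually used in the paper.

    from collections import Counter
    from itertools import combinations

    # Invented sentences already tagged by an information-extraction step.
    # Items of the form GENE:x and DISEASE:y mark recognized entities.
    sequences = [
        ["mutations", "in", "GENE:CFTR", "cause", "DISEASE:cystic_fibrosis"],
        ["GENE:CFTR", "variants", "are", "linked", "to", "DISEASE:cystic_fibrosis"],
        ["GENE:SMN1", "deletion", "causes", "DISEASE:spinal_muscular_atrophy"],
    ]

    def subsequences(seq, max_len=3):
        """All ordered subsequences of a sequence, up to max_len items."""
        for n in range(1, max_len + 1):
            for idx in combinations(range(len(seq)), n):
                yield tuple(seq[i] for i in idx)

    def mine(sequences, min_support=2, max_len=3):
        """Frequent subsequences constrained to contain a gene item and a disease item."""
        support = Counter()
        for seq in sequences:
            for pattern in set(subsequences(seq, max_len)):
                support[pattern] += 1
        return {
            pattern: count
            for pattern, count in support.items()
            if count >= min_support
            and any(item.startswith("GENE:") for item in pattern)
            and any(item.startswith("DISEASE:") for item in pattern)
        }

    for pattern, count in mine(sequences).items():
        print(count, pattern)  # e.g. 2 ('GENE:CFTR', 'DISEASE:cystic_fibrosis')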


Artificial Intelligence in Medicine in Europe | 1997

A Case-Based Reasoning Method for Computer-Assisted Diagnosis in Histopathology

Marie-Christine Jaulent; Christel Le Bozec; Eric Zapletal; Patrice Degoulet

This article addresses the issue of exploiting knowledge acquired from experience in the diagnostic process in histopathology. We present the functional architecture of a case-based reasoning system in this domain. The main procedure, the selection of similar previous cases, has been implemented. The selection procedure is based on an original similarity measure that takes into account both the semantic and structural resemblances and differences between cases. A first evaluation of the system was performed on a base of 35 pathological cases of specimens of palpable breast tumours.
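The combined similarity idea, semantic resemblance between case descriptors plus structural resemblance between the relations that organize them, can be sketched as a weighted sum. This is a generic illustration with invented weights and features, not the paper's actual measure.

    def jaccard(a, b):
        """Set overlap, used here as a stand-in for both component similarities."""
        a, b = set(a), set(b)
        return len(a & b) / len(a | b) if a | b else 1.0

    def case_similarity(new_case, stored_case, w_semantic=0.6, w_structural=0.4):
        """Weighted combination of semantic and structural resemblance (illustrative)."""
        semantic = jaccard(new_case["descriptors"], stored_case["descriptors"])
        structural = jaccard(new_case["relations"], stored_case["relations"])
        return w_semantic * semantic + w_structural * structural

    # Invented toy cases; in a retrieval step the stored cases would be ranked by this score.
    query = {"descriptors": {"palpable_mass", "irregular_margin"},
             "relations": {("mass", "located_in", "upper_quadrant")}}
    stored = {"descriptors": {"palpable_mass", "microcalcifications"},
              "relations": {("mass", "located_in", "upper_quadrant")}}
    print(round(case_similarity(query, stored), 3))  # 0.6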

Collaboration


Dive into Marie-Christine Jaulent's collaborations.

Top Co-Authors

Patrice Degoulet

Paris Descartes University

Gilles Chatellier

Paris Descartes University

Isabelle Colombet

Paris Descartes University

Eric Zapletal

École Normale Supérieure
