Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Laritza Rodriguez is active.

Publication


Featured research published by Laritza Rodriguez.


Journal of Biomedical Informatics | 2015

The role of fine-grained annotations in supervised recognition of risk factors for heart disease from EHRs

Kirk Roberts; Sonya E. Shooshan; Laritza Rodriguez; Swapna Abhyankar; Halil Kilicoglu; Dina Demner-Fushman

This paper describes a supervised machine learning approach for identifying heart disease risk factors in clinical text, and for assessing the impact of annotation granularity and quality on the system's ability to recognize these risk factors. We utilize a series of support vector machine models in conjunction with manually built lexicons to classify triggers specific to each risk factor. The features used for classification were quite simple, utilizing only lexical information and ignoring higher-level linguistic information such as syntax and semantics. Instead, we incorporated high-quality data to train the models by annotating additional information on top of a standard corpus. Despite the relative simplicity of the system, it achieves the highest scores (micro- and macro-F1, and micro- and macro-recall) out of the 20 participants in the 2014 i2b2/UTHealth Shared Task. This system obtains a micro- (macro-) precision of 0.8951 (0.8965), recall of 0.9625 (0.9611), and F1-measure of 0.9276 (0.9277). Additionally, we perform a series of experiments to assess the value of the annotated data we created. These experiments show how manually-labeled negative annotations can improve information extraction performance, demonstrating the importance of high-quality, fine-grained natural language annotations.
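The paper itself does not include code, but the general recipe it describes (purely lexical features feeding a linear SVM classifier) can be sketched as follows. The snippets, risk-factor labels, and model settings below are illustrative assumptions, not the authors' actual system, lexicons, or the i2b2/UTHealth corpus.

```python
# Minimal sketch of a lexical-feature SVM trigger classifier
# (illustrative only; not the system described in the paper).
from sklearn.pipeline import Pipeline
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.svm import LinearSVC

# Hypothetical training snippets and risk-factor labels.
train_texts = [
    "patient reports smoking one pack per day",
    "blood pressure 150/95, continues lisinopril",
    "HbA1c 8.2, metformin dose increased",
    "denies tobacco use",
]
train_labels = ["smoker", "hypertension", "diabetes", "non-smoker"]

# Bag-of-words unigrams and bigrams only: purely lexical features,
# no syntactic or semantic information.
clf = Pipeline([
    ("vec", CountVectorizer(ngram_range=(1, 2), lowercase=True)),
    ("svm", LinearSVC()),
])
clf.fit(train_texts, train_labels)

print(clf.predict(["pt quit smoking two years ago"]))
```

In practice such a classifier would be trained per risk factor on annotated trigger spans rather than whole snippets, which is where the fine-grained annotations discussed in the paper come in.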


Scientific Data | 2018

A dataset of 200 structured product labels annotated for adverse drug reactions

Dina Demner-Fushman; Sonya E. Shooshan; Laritza Rodriguez; Alan R. Aronson; François Michel Lang; Willie J. Rogers; Kirk Roberts; Joseph Tonning

Adverse drug reactions (ADRs), unintended and sometimes dangerous effects that a drug may have, are one of the leading causes of morbidity and mortality during medical care. To date, there is no structured machine-readable authoritative source of known ADRs. The United States Food and Drug Administration (FDA) partnered with the National Library of Medicine to create a pilot dataset containing standardised information about known adverse reactions for 200 FDA-approved drugs. The Structured Product Labels (SPLs), the documents FDA uses to exchange information about drugs and other products, were manually annotated for adverse reactions at the mention level to facilitate development and evaluation of text mining tools for extraction of ADRs from all SPLs. The ADRs were then normalised to the Unified Medical Language System (UMLS) and to the Medical Dictionary for Regulatory Activities (MedDRA). We present the curation process and the structure of the publicly available database SPL-ADR-200db containing 5,098 distinct ADRs. The database is available at https://bionlp.nlm.nih.gov/tac2017adversereactions/; the code for preparing and validating the data is available at https://github.com/lhncbc/fda-ars.
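As a rough illustration of the normalisation step the abstract mentions (mapping free-text ADR mentions to controlled-vocabulary terms), a minimal dictionary-lookup sketch might look like this. The mention strings, lookup table, and codes are invented placeholders and are not taken from SPL-ADR-200db, UMLS, or MedDRA.

```python
# Toy illustration of normalising free-text ADR mentions to a controlled
# vocabulary; the entries and codes below are placeholders, not real
# MedDRA or UMLS identifiers.
MEDDRA_LOOKUP = {
    "headache": ("PT-0001", "Headache"),    # placeholder code
    "nausea": ("PT-0002", "Nausea"),        # placeholder code
    "dizziness": ("PT-0003", "Dizziness"),  # placeholder code
}

def normalise_mentions(mentions):
    """Map lower-cased ADR mention strings to (code, preferred_term) pairs."""
    normalised = []
    for mention in mentions:
        key = mention.strip().lower()
        if key in MEDDRA_LOOKUP:
            normalised.append(MEDDRA_LOOKUP[key])
        else:
            normalised.append((None, mention))  # leave unmapped mentions as-is
    return normalised

print(normalise_mentions(["Headache", "mild nausea"]))
```

Real normalisation is considerably harder (synonyms, modifiers, compositional mentions), which is exactly why a manually curated, mention-level gold standard such as SPL-ADR-200db is useful for evaluating text mining tools.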


Revised Selected Papers from the First International Workshop on Multimodal Retrieval in the Medical Domain - Volume 9059 | 2015

Annotation of Chest Radiology Reports for Indexing and Retrieval

Dina Demner-Fushman; Sonya E. Shooshan; Laritza Rodriguez; Sameer K. Antani; George R. Thoma

Annotation of MEDLINE citations with controlled vocabulary terms improves the quality of retrieval results. Due to the variety in descriptions of similar clinical phenomena and the abundance of negation and uncertainty, annotation of clinical radiology reports for subsequent indexing and retrieval with a search engine is even more important. Provided with an opportunity to add about 4,000 radiology reports to collections indexed with the NLM image retrieval engine Open-i, we needed to ensure good retrieval quality. To accomplish this, we explored automatic and manual approaches to annotation, and developed a small controlled vocabulary of chest x-ray indexing terms along with guidelines for manual annotation. Manual annotation captured the most salient findings in the reports and normalized the sparse, distinct descriptions of similar findings to a single controlled vocabulary term. This paper presents the vocabulary and the manual annotation process, as well as an evaluation of the automatic annotation of the reports.
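To make the indexing idea concrete, here is a toy sketch of assigning controlled-vocabulary terms to a report sentence while filtering negated mentions. The vocabulary, synonym lists, and negation cues are assumptions made up for the illustration; this is not the chest x-ray vocabulary or the Open-i pipeline described in the paper.

```python
# Toy controlled-vocabulary indexer with crude negation filtering.
# Vocabulary and cue lists are invented for illustration only.
VOCAB = {
    "cardiomegaly": ["enlarged heart", "cardiomegaly"],
    "pleural effusion": ["pleural effusion", "effusion"],
    "pneumothorax": ["pneumothorax"],
}
NEGATION_CUES = ["no ", "without ", "negative for "]

def index_sentence(sentence):
    """Return controlled-vocabulary terms asserted (not negated) in a sentence."""
    text = sentence.lower()
    terms = []
    for term, synonyms in VOCAB.items():
        for synonym in synonyms:
            pos = text.find(synonym)
            if pos == -1:
                continue
            # Crude scope: a cue shortly before the mention negates it.
            window = text[max(0, pos - 25):pos]
            if not any(cue in window for cue in NEGATION_CUES):
                terms.append(term)
            break
    return terms

print(index_sentence("Mild cardiomegaly. No pleural effusion or pneumothorax."))
# -> ['cardiomegaly']
```

Real systems use far more robust negation handling (e.g., dependency- or scope-based rules), but the sketch shows how distinct wordings of the same finding collapse onto one indexing term.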


BMC Bioinformatics | 2018

Semantic annotation of consumer health questions

Halil Kilicoglu; Asma Ben Abacha; Yassine Mrabet; Sonya E. Shooshan; Laritza Rodriguez; Kate Masterton; Dina Demner-Fushman

Background: Consumers increasingly use online resources for their health information needs. While current search engines can address these needs to some extent, they generally do not take into account that most health information needs are complex and can only fully be expressed in natural language. Consumer health question answering (QA) systems aim to fill this gap. A major challenge in developing consumer health QA systems is extracting relevant semantic content from the natural language questions (question understanding). To develop effective question understanding tools, question corpora semantically annotated for relevant question elements are needed. In this paper, we present a two-part consumer health question corpus annotated with several semantic categories: named entities, question triggers/types, question frames, and question topic. The first part (CHQA-email) consists of relatively long email requests received by the U.S. National Library of Medicine (NLM) customer service, while the second part (CHQA-web) consists of shorter questions posed to the MedlinePlus search engine as queries. Each question has been annotated by two annotators. The annotation methodology is largely the same between the two parts of the corpus; however, we also explain and justify the differences between them. Additionally, we provide information about corpus characteristics, inter-annotator agreement, and our attempts to measure annotation confidence in the absence of adjudication of annotations.

Results: The resulting corpus consists of 2614 questions (CHQA-email: 1740, CHQA-web: 874). Problems are the most frequent named entities, while treatment and general information questions are the most common question types. Inter-annotator agreement was generally modest: question types and topics yielded the highest agreement, while the agreement for more complex frame annotations was lower. Agreement in CHQA-web was consistently higher than that in CHQA-email. Pairwise inter-annotator agreement proved most useful in estimating annotation confidence.

Conclusions: To our knowledge, our corpus is the first focusing on annotation of uncurated consumer health questions. It is currently used to develop machine learning-based methods for question understanding. We make the corpus publicly available to stimulate further research on consumer health QA.
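For readers unfamiliar with the agreement measures mentioned above, a minimal sketch of pairwise inter-annotator agreement using Cohen's kappa is shown below; the question-type labels are invented for the example and do not come from the CHQA corpus.

```python
# Small sketch of pairwise inter-annotator agreement with Cohen's kappa;
# the labels below are made-up examples, not CHQA annotations.
from sklearn.metrics import cohen_kappa_score

annotator_a = ["treatment", "information", "cause", "treatment", "diagnosis"]
annotator_b = ["treatment", "information", "treatment", "treatment", "diagnosis"]

kappa = cohen_kappa_score(annotator_a, annotator_b)
print(f"Cohen's kappa: {kappa:.2f}")  # agreement corrected for chance
```

Kappa-style measures work directly for categorical labels such as question types and topics; span-based annotations like frames typically require span-overlap agreement measures instead, which is one reason their reported agreement tends to be lower.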


Journal of the American Medical Informatics Association | 2016

Preparing a collection of radiology examinations for distribution and retrieval

Dina Demner-Fushman; Marc D. Kohli; Marc B. Rosenman; Sonya E. Shooshan; Laritza Rodriguez; Sameer K. Antani; George R. Thoma; Clement J. McDonald


Language Resources and Evaluation | 2016

Annotating Named Entities in Consumer Health Questions.

Halil Kilicoglu; Asma Ben Abacha; Yassine Mrabet; Kirk Roberts; Laritza Rodriguez; Sonya E. Shooshan; Dina Demner-Fushman


American Medical Informatics Association Annual Symposium | 2015

Automatic Extraction and Post-coordination of Spatial Relations in Consumer Language.

Kirk Roberts; Laritza Rodriguez; Sonya E. Shooshan; Dina Demner-Fushman


AMIA | 2016

Resource Classification for Medical Questions.

Kirk Roberts; Laritza Rodriguez; Sonya E. Shooshan; Dina Demner-Fushman


American Medical Informatics Association Annual Symposium | 2014

Analyzing U.S. prescription lists with RxNorm and the ATC/DDD Index

Olivier Bodenreider; Laritza Rodriguez


AMIA | 2017

Mining the literature for genes associated with placenta-mediated maternal diseases.

Laritza Rodriguez; Stephanie M. Morrison; Kathleen Greenberg; Dina Demner-Fushman

Collaboration


Dive into Laritza Rodriguez's collaborations.

Top Co-Authors

Dina Demner-Fushman, National Institutes of Health
Sonya E. Shooshan, National Institutes of Health
Kirk Roberts, University of Texas Health Science Center at Houston
Olivier Bodenreider, National Institutes of Health
Asma Ben Abacha, National Institutes of Health
Halil Kilicoglu, National Institutes of Health
George R. Thoma, National Institutes of Health
Sameer K. Antani, National Institutes of Health
Yassine Mrabet, National Institutes of Health