Publication


Featured research published by Ruth M. Reeves.


Medical Care | 2013

Exploring the frontier of electronic health record surveillance: the case of postoperative complications.

Fern FitzHenry; Harvey J. Murff; Michael E. Matheny; Nancy Gentry; Elliot M. Fielstein; Steven H. Brown; Ruth M. Reeves; Dominik Aronsky; Peter L. Elkin; Vincent P Messina; Theodore Speroff

Background: The aim of this study was to build electronic algorithms using a combination of structured data and natural language processing (NLP) of text notes for potential safety surveillance of 9 postoperative complications.

Methods: Postoperative complications from 6 medical centers in the Southeastern United States were obtained from the Veterans Affairs Surgical Quality Improvement Program (VASQIP) registry. Development and test datasets were constructed using stratification by facility and date of procedure for patients with and without complications. Algorithms were developed from VASQIP outcome definitions using NLP-coded concepts, regular expressions, and structured data. The VASQIP nurse reviewer served as the reference standard for evaluating sensitivity and specificity. The algorithms were designed in the development dataset and evaluated in the test dataset.

Results: Sensitivity and specificity in the test set were 85% and 92% for acute renal failure, 80% and 93% for sepsis, 56% and 94% for deep vein thrombosis, 80% and 97% for pulmonary embolism, 88% and 89% for acute myocardial infarction, 88% and 92% for cardiac arrest, 80% and 90% for pneumonia, 95% and 80% for urinary tract infection, and 77% and 63% for wound infection, respectively. A third of the complications occurred outside of the hospital setting.

Conclusions: Computer algorithms applied to data extracted from the electronic health record produced respectable sensitivity and specificity across a large sample of patients seen in 6 different medical centers. This study demonstrates the utility of combining NLP with structured data for mining the information contained within the electronic health record.
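The study's core evaluation compares algorithm flags against the VASQIP nurse reviewer as reference standard. As a hedged sketch (not the study's code; the function name and boolean encoding are assumptions), sensitivity and specificity over paired predictions and reference labels can be computed like this:

```python
def sensitivity_specificity(predicted, reference):
    """Compare algorithm flags against reference-standard labels.

    Both arguments are parallel lists of booleans
    (True = complication present for that case).
    """
    tp = sum(p and r for p, r in zip(predicted, reference))          # true positives
    tn = sum(not p and not r for p, r in zip(predicted, reference))  # true negatives
    fp = sum(p and not r for p, r in zip(predicted, reference))      # false positives
    fn = sum(not p and r for p, r in zip(predicted, reference))      # false negatives
    sensitivity = tp / (tp + fn) if (tp + fn) else 0.0
    specificity = tn / (tn + fp) if (tn + fp) else 0.0
    return sensitivity, specificity
```

In the study this comparison was run per complication (renal failure, sepsis, and so on), yielding the per-outcome sensitivity/specificity pairs reported above.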


Journal of the American Medical Informatics Association | 2014

Assisted annotation of medical free text using RapTAT.

Glenn T. Gobbel; Jennifer H. Garvin; Ruth M. Reeves; Robert M. Cronin; Julia Heavirland; Jenifer Williams; Allison Weaver; Shrimalini Jayaramaraja; Dario A. Giuse; Theodore Speroff; Steven H. Brown; Hua Xu; Michael E. Matheny

Objective: To determine whether assisted annotation using interactive training can reduce the time required to annotate a clinical document corpus without introducing bias.

Materials and Methods: A tool, RapTAT, was designed to assist annotation by iteratively pre-annotating probable phrases of interest within a document, presenting the annotations to a reviewer for correction, and then using the corrected annotations for further machine learning-based training before pre-annotating subsequent documents. Annotators reviewed 404 clinical notes either manually or using RapTAT assistance for concepts related to quality of care during heart failure treatment. Notes were divided into 20 batches of 19-21 documents for iterative annotation and training.

Results: The number of correct RapTAT pre-annotations increased significantly, and annotation time per batch decreased by ~50% over the course of annotation. Annotation rate increased from batch to batch for assisted but not manual reviewers. Pre-annotation F-measure increased from 0.5 to 0.6 to >0.80 (relative to both assisted reviewer and reference annotations) over the first three batches and more slowly thereafter. Overall inter-annotator agreement was significantly higher between RapTAT-assisted reviewers (0.89) than between manual reviewers (0.85).

Discussion: The tool reduced workload by decreasing the number of annotations needing to be added and helping reviewers to annotate at an increased rate. Agreement between the pre-annotations and reference standard, and agreement between the pre-annotations and assisted annotations, were similar throughout the annotation process, which suggests that pre-annotation did not introduce bias.

Conclusions: Pre-annotations generated by a tool capable of interactive training can reduce the time required to create an annotated document corpus by up to 50%.
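The pre-annotate/correct/retrain loop described above can be sketched minimally. This is an illustrative stand-in only, not the published RapTAT implementation: the class name, the phrase-to-label dictionary interface, and the frequency-based suggestion rule are all assumptions made for the example.

```python
from collections import Counter

class IterativeAnnotator:
    """Minimal stand-in for RapTAT-style assisted annotation: pre-annotate
    each batch, accept the reviewer's corrections, and learn from them
    before pre-annotating the next batch."""

    def __init__(self):
        # phrase -> Counter of labels confirmed by human reviewers
        self.phrase_labels = {}

    def pre_annotate(self, phrases):
        # Suggest the most frequently confirmed label for each known phrase;
        # unseen phrases get no suggestion and fall to the reviewer.
        return {p: self.phrase_labels[p].most_common(1)[0][0]
                for p in phrases if p in self.phrase_labels}

    def learn(self, corrections):
        # corrections: phrase -> label confirmed or corrected by the reviewer.
        for phrase, label in corrections.items():
            self.phrase_labels.setdefault(phrase, Counter())[label] += 1

annotator = IterativeAnnotator()
annotator.learn({"ejection fraction": "LVEF", "lasix": "diuretic"})
print(annotator.pre_annotate(["ejection fraction", "unseen phrase"]))
# prints {'ejection fraction': 'LVEF'}
```

Each batch cycle (suggest, correct, learn) mirrors the interactive training the abstract credits with roughly halving annotation time.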


Journal of Biomedical Informatics | 2014

Development and evaluation of RapTAT

Glenn T. Gobbel; Ruth M. Reeves; Shrimalini Jayaramaraja; Dario A. Giuse; Theodore Speroff; Steven H. Brown; Peter L. Elkin; Michael E. Matheny

Rapid, automated determination of the mapping of free text phrases to pre-defined concepts could assist in the annotation of clinical notes and increase the speed of natural language processing systems. The aim of this study was to design and evaluate a token-order-specific naïve Bayes-based machine learning system (RapTAT) to predict associations between phrases and concepts. Performance was assessed using a reference standard generated from 2860 VA discharge summaries containing 567,520 phrases that had been mapped to 12,056 distinct Systematized Nomenclature of Medicine - Clinical Terms (SNOMED CT) concepts by the MCVS natural language processing system. It was also assessed on the manually annotated 2010 i2b2 challenge data. Performance was established with regard to precision, recall, and F-measure for each of the concepts within the VA documents using bootstrapping. Within that corpus, concepts identified by MCVS were broadly distributed throughout SNOMED CT, and the token-order-specific language model achieved better performance based on precision, recall, and F-measure (0.95±0.15, 0.96±0.16, and 0.95±0.16, respectively; mean±SD) than the bag-of-words based naïve Bayes model (0.64±0.45, 0.61±0.46, and 0.60±0.45, respectively) that has previously been used for concept mapping. Precision, recall, and F-measure on the i2b2 test set were 92.9%, 85.9%, and 89.2%, respectively, using the token-order-specific model. RapTAT required just 7.2 ms to map all phrases within a single discharge summary, and the mapping rate did not decrease as the number of processed documents increased. The high performance attained by the tool in terms of both accuracy and speed was encouraging, and the mapping rate should be sufficient to support near-real-time, interactive annotation of medical narratives.
These results demonstrate the feasibility of rapidly and accurately mapping phrases to a wide range of medical concepts based on a token-order-specific naïve Bayes model and machine learning.
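The abstract's key modeling idea is that token likelihoods are conditioned on token position within the phrase, unlike a bag-of-words naïve Bayes model. A hedged sketch of that idea follows; the class name, smoothing scheme, and tiny training set are assumptions for illustration, not the RapTAT code or its SNOMED CT mappings.

```python
import math
from collections import defaultdict

class TokenOrderNaiveBayes:
    """Illustrative token-order-specific naive Bayes phrase-to-concept
    mapper: each token's likelihood is keyed on (position, token) rather
    than on the token alone, so word order matters."""

    def __init__(self, smoothing=1e-6):
        # concept -> (position, token) -> count
        self.counts = defaultdict(lambda: defaultdict(float))
        self.concept_counts = defaultdict(float)
        self.smoothing = smoothing

    def train(self, phrase, concept):
        self.concept_counts[concept] += 1
        for pos, token in enumerate(phrase.lower().split()):
            self.counts[concept][(pos, token)] += 1

    def map_phrase(self, phrase):
        tokens = phrase.lower().split()
        total = sum(self.concept_counts.values())
        best, best_score = None, -math.inf
        for concept, n in self.concept_counts.items():
            # log prior + sum of position-specific log likelihoods
            score = math.log(n / total)
            for pos, token in enumerate(tokens):
                p = (self.counts[concept][(pos, token)] + self.smoothing) / (n + self.smoothing)
                score += math.log(p)
            if score > best_score:
                best, best_score = concept, score
        return best

nb = TokenOrderNaiveBayes()
nb.train("heart failure", "SNOMED:84114007")
nb.train("renal failure", "SNOMED:42399005")
print(nb.map_phrase("heart failure"))  # prints SNOMED:84114007
```

Because the lookup key includes position, "failure heart" and "heart failure" score differently, which is the distinction the abstract reports driving the gain over the bag-of-words model.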


JMIR medical informatics | 2018

Automating Quality Measures for Heart Failure Using Natural Language Processing: A Descriptive Study in the Department of Veterans Affairs

Jennifer H. Garvin; Youngjun Kim; Glenn T. Gobbel; Michael E. Matheny; Andrew Redd; Bruce E. Bray; Paul A. Heidenreich; Dan Bolton; Julia Heavirland; Natalie Kelly; Ruth M. Reeves; Megha Kalsy; Mary K. Goldstein; Stéphane M. Meystre

Background: We developed an accurate, stakeholder-informed, automated natural language processing (NLP) system to measure the quality of heart failure (HF) inpatient care, and explored the potential for adoption of this system within an integrated health care system.

Objective: To accurately automate a United States Department of Veterans Affairs (VA) quality measure for inpatients with HF.

Methods: We automated the HF quality measure Congestive Heart Failure Inpatient Measure 19 (CHI19), which identifies whether a given patient has left ventricular ejection fraction (LVEF) <40% and, if so, whether an angiotensin-converting enzyme inhibitor or angiotensin-receptor blocker was prescribed at discharge if there were no contraindications. We used documents from 1083 unique inpatients from eight VA medical centers to develop a reference standard (RS) to train (n=314) and test (n=769) the Congestive Heart Failure Information Extraction Framework (CHIEF). We also conducted semi-structured interviews (n=15) for stakeholder feedback on implementation of the CHIEF.

Results: The CHIEF classified each hospitalization in the test set with a sensitivity (SN) of 98.9% and positive predictive value of 98.7%, compared with an RS and SN of 98.5% for available External Peer Review Program assessments. Of the 1083 patients available for the NLP system, the CHIEF evaluated and classified 100% of cases. Stakeholders identified potential implementation facilitators and clinical uses of the CHIEF.

Conclusions: The CHIEF provided complete data for all patients in the cohort and could potentially improve the efficiency, timeliness, and utility of HF quality measurements.
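The CHI19 rule the system automates is itself simple once the LVEF, discharge medications, and contraindications have been extracted. A hedged sketch of that decision logic (function name, argument shapes, and return labels are assumptions for illustration; this is not the CHIEF implementation):

```python
def chi19_assessment(lvef, ace_or_arb_at_discharge, contraindication):
    """Illustrative CHI19 decision logic: patients with LVEF < 40% should
    receive an ACE inhibitor or ARB at discharge unless contraindicated.

    lvef: ejection fraction percentage, or None if not documented.
    """
    if lvef is None or lvef >= 40:
        return "not applicable"  # measure targets LVEF < 40% only
    if contraindication:
        return "excluded"        # contraindication documented at discharge
    return "pass" if ace_or_arb_at_discharge else "fail"
```

The NLP work in the study lies in populating these three inputs from free-text notes; the measure itself then reduces to this kind of rule.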


Journal of the American College of Cardiology | 2018

DETERMINING POST-TEST RISK IN A SAMPLE OF STRESS NUCLEAR MYOCARDIAL PERFUSION IMAGING REPORTS: IMPLICATIONS FOR NATURAL LANGUAGE PROCESSING

Andrew P. Levy; Nishant R. Shah; Ruth M. Reeves; Michael E. Matheny; Glenn T. Gobbel; Steven M. Bradley

Improving pre-procedure risk stratification has been suggested as a strategy to improve the appropriate use of coronary angiography and revascularization procedures. Reporting standards promote clarity and consistency of stress myocardial perfusion imaging (MPI) reports, but do not require an assessment of post-test risk.


Journal of Nuclear Cardiology | 2018

Determining post-test risk in a national sample of stress nuclear myocardial perfusion imaging reports: Implications for natural language processing tools

Andrew Levy; Nishant R. Shah; Michael E. Matheny; Ruth M. Reeves; Glenn T. Gobbel; Steven M. Bradley

Background: Reporting standards promote clarity and consistency of stress myocardial perfusion imaging (MPI) reports, but do not require an assessment of post-test risk. Natural language processing (NLP) tools could potentially help estimate this risk, yet it is unknown whether reports contain adequate descriptive data to use NLP.

Methods: Among VA patients who underwent stress MPI and coronary angiography between January 1, 2009 and December 31, 2011, 99 stress test reports were randomly selected for analysis. Two reviewers independently categorized each report for the presence of critical data elements essential to describing post-test ischemic risk.

Results: Few stress MPI reports provided a formal assessment of post-test risk within the impression section (3%) or the entire document (4%). In most cases, risk was determinable by combining critical data elements (74% impression, 98% whole document). When ischemic risk was not determinable (25% impression, 2% whole document), inadequate description of systolic function (9% impression, 1% whole document) and inadequate description of ischemia (5% impression, 1% whole document) were most commonly implicated.

Conclusions: Post-test ischemic risk was determinable but rarely reported in this sample of stress MPI reports. This supports the potential use of NLP to help clarify risk. Further study of NLP in this context is needed.


International Journal of Medical Informatics | 2013

Detecting temporal expressions in medical narratives

Ruth M. Reeves; Ferdo R. Ong; Michael E. Matheny; Joshua C. Denny; Dominik Aronsky; Glenn T. Gobbel; Diane Montella; Theodore Speroff; Steven H. Brown


AMIA | 2017

Natural Language Data Sampling: Principles & Strategies in Corpus Selection.

Ruth M. Reeves; Nancy Gentry; Fern FitzHenry; Glenn T. Gobbel; Michael E. Matheny


AMIA | 2016

Informatics Challenges in Working with "Big Data" - A Use Case in Identifying Predictors of Constipation.

Fern FitzHenry; Svetlana K. Eden; Jason N. Denton; Robert J. LoCasale; Hui Cao; Aize Cao; Ruth M. Reeves; Nancy Wells; Michael E. Matheny


AMIA | 2016

Event Coreference in Support of Temporal Reasoning in Mental Health Notes.

Ruth M. Reeves; Marcus Verhagen; Cynthia Brandt; Wendy W. Chapman; Michael E. Matheny; Steven H. Brown; Brian P. Marx; Theodore Speroff

Collaboration


Dive into Ruth M. Reeves's collaborations.

Top Co-Authors

Robert M. Cronin

Vanderbilt University Medical Center

Diane Montella

Vanderbilt University Medical Center
