
Publication


Featured research published by Andrew Redd.


PLOS ONE | 2015

Identifying homelessness among veterans using VA administrative data: Opportunities to expand detection criteria

Rachel Peterson; Adi V. Gundlapalli; Stephen Metraux; Marjorie E. Carter; Miland Palmer; Andrew Redd; Matthew H. Samore; Jamison D. Fargo

Researchers at the U.S. Department of Veterans Affairs (VA) have used administrative criteria to identify homelessness among U.S. Veterans. Our objective was to explore the use of these codes in VA health care facilities. We examined VA health records (2002-2012) of Veterans recently separated from the military and identified as homeless using VA conventional identification criteria (ICD-9-CM code V60.0, VA specific codes for homeless services), plus closely allied V60 codes indicating housing instability. Logistic regression analyses examined differences between Veterans who received these codes. Health care services and co-morbidities were analyzed in the 90 days post-identification of homelessness. VA conventional criteria identified 21,021 homeless Veterans from Operations Enduring Freedom, Iraqi Freedom, and New Dawn (rate 2.5%). Adding allied V60 codes increased that to 31,260 (rate 3.3%). While certain demographic differences were noted, Veterans identified as homeless using conventional or allied codes were similar with regards to utilization of homeless, mental health, and substance abuse services, as well as co-morbidities. Differences were noted in the pattern of usage of homelessness-related diagnostic codes in VA facilities nation-wide. Creating an official VA case definition for homelessness, which would include additional ICD-9-CM and other administrative codes for VA homeless services, would likely allow improved identification of homeless and at-risk Veterans. This also presents an opportunity for encouraging uniformity in applying these codes in VA facilities nationwide as well as in other large health care organizations.
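The expanded detection criteria described above amount to flagging a record if it carries any code from a broader code set. A minimal sketch in Python, using the conventional V60.0 code from the abstract plus a set of allied V60 codes; the record data and exact allied codes here are invented for illustration:

```python
# Illustrative sketch: expanding homelessness detection by adding allied
# V60 codes to the conventional flagging set. Records are invented.
CONVENTIONAL = {"V60.0"}                   # conventional ICD-9-CM homelessness code
ALLIED = {"V60.1", "V60.89", "V60.9"}      # allied housing-instability codes (illustrative)

records = [
    {"id": 1, "codes": {"V60.0", "305.00"}},
    {"id": 2, "codes": {"V60.1"}},         # caught only by the expanded criteria
    {"id": 3, "codes": {"401.9"}},         # not flagged by either
]

def flagged(records, code_set):
    """IDs of records carrying at least one code from code_set."""
    return {r["id"] for r in records if r["codes"] & code_set}

conventional_hits = flagged(records, CONVENTIONAL)       # {1}
expanded_hits = flagged(records, CONVENTIONAL | ALLIED)  # {1, 2}
```

Unioning the allied codes into the flagging set is what raised the study's identification rate from 2.5% to 3.3%.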


Journal of the American Medical Informatics Association | 2016

Congestive heart failure information extraction framework for automated treatment performance measures assessment

Stéphane M. Meystre; Youngjun Kim; Glenn T. Gobbel; Michael E. Matheny; Andrew Redd; Bruce E. Bray; Jennifer H. Garvin

Objective: This paper describes a new congestive heart failure (CHF) treatment performance measure information extraction system – CHIEF – developed as part of the Automated Data Acquisition for Heart Failure project, a Veterans Health Administration project aiming at improving the detection of patients not receiving recommended care for CHF. Design: CHIEF is based on the Apache Unstructured Information Management Architecture framework, and uses a combination of rules, dictionaries, and machine learning methods to extract left ventricular function mentions and values, CHF medications, and documented reasons for a patient not receiving these medications. Measurements: The training and evaluation of CHIEF were based on subsets of a reference standard of various clinical notes from 1083 Veterans Health Administration patients. Domain experts manually annotated these notes to create our reference standard. Metrics used included recall, precision, and the F1-measure. Results: In general, CHIEF extracted CHF medications with high recall (>0.990) and good precision (0.960–0.978). Mentions of Left Ventricular Ejection Fraction were also extracted with high recall (0.978–0.986) and precision (0.986–0.994), and quantitative values of Left Ventricular Ejection Fraction were found with 0.910–0.945 recall and with high precision (0.939–0.976). Reasons for not prescribing CHF medications were more difficult to extract, only reaching fair accuracy with about 0.310–0.400 recall and 0.250–0.320 precision. Conclusion: This study demonstrated that applying natural language processing to unlock the rich and detailed clinical information found in clinical narrative text notes makes fast and scalable quality improvement approaches possible, eventually improving management and outpatient treatment of patients suffering from CHF.
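The metrics CHIEF is evaluated with (recall, precision, F1) are standard and easy to reproduce from confusion-matrix counts; a small helper, with illustrative counts rather than the study's actual data:

```python
def prf1(tp, fp, fn):
    """Precision, recall, and F1 from true-positive, false-positive,
    and false-negative counts."""
    precision = tp / (tp + fp)
    recall = tp / (tp + fn)
    f1 = 2 * precision * recall / (precision + recall)
    return precision, recall, f1

# Illustrative counts, not CHIEF's actual confusion matrix:
precision, recall, f1 = prf1(tp=96, fp=4, fn=4)  # all three come out to 0.96
```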


Journal of the American Medical Informatics Association | 2013

Validating a strategy for psychosocial phenotyping using a large corpus of clinical text

Adi V. Gundlapalli; Andrew Redd; Marjorie E. Carter; Guy Divita; Shuying Shen; Miland Palmer; Matthew H. Samore

OBJECTIVE To develop algorithms to improve efficiency of patient phenotyping using natural language processing (NLP) on text data. Of a large number of note titles available in our database, we sought to determine those with highest yield and precision for psychosocial concepts. MATERIALS AND METHODS From a database of over 1 billion documents from US Department of Veterans Affairs medical facilities, a random sample of 1500 documents from each of 218 enterprise note titles were chosen. Psychosocial concepts were extracted using a UIMA-AS-based NLP pipeline (v3NLP), using a lexicon of relevant concepts with negation and template format annotators. Human reviewers evaluated a subset of documents for false positives and sensitivity. High-yield documents were identified by hit rate and precision. Reasons for false positivity were characterized. RESULTS A total of 58 707 psychosocial concepts were identified from 316 355 documents for an overall hit rate of 0.2 concepts per document (median 0.1, range 1.6-0). Of 6031 concepts reviewed from a high-yield set of note titles, the overall precision for all concept categories was 80%, with variability among note titles and concept categories. Reasons for false positivity included templating, negation, context, and alternate meaning of words. The sensitivity of the NLP system was noted to be 49% (95% CI 43% to 55%). CONCLUSIONS Phenotyping using NLP need not involve the entire document corpus. Our methods offer a generalizable strategy for scaling NLP pipelines to large free text corpora with complex linguistic annotations in attempts to identify patients of a certain phenotype.
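The lexicon-plus-negation extraction step can be sketched as a window-based matcher. The three-token negation window and cue list below are illustrative stand-ins for v3NLP's negation annotator, not the actual implementation:

```python
NEGATION_CUES = ("no", "denies", "without")  # illustrative cue list

def find_concepts(sentence, lexicon):
    """Lexicon terms positively asserted in a sentence: a hit is discarded
    when a negation cue appears within the three preceding tokens."""
    tokens = sentence.lower().replace(".", " ").split()
    hits = []
    for i, tok in enumerate(tokens):
        if tok in lexicon and not any(
            cue in tokens[max(0, i - 3):i] for cue in NEGATION_CUES
        ):
            hits.append(tok)
    return hits
```

Negation handling of this kind is one reason templated and negated text was a major source of the false positives characterized in the study.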


Studies in health technology and informatics | 2014

Detecting earlier indicators of homelessness in the free text of medical records.

Andrew Redd; Marjorie E. Carter; Guy Divita; Shuying Shen; Miland Palmer; Matthew H. Samore; Adi V. Gundlapalli

Early warning indicators to identify US Veterans at risk of homelessness are currently only inferred from administrative data. References to indicators of risk or instances of homelessness in the free text of medical notes written by Department of Veterans Affairs (VA) providers may precede formal identification of Veterans as being homeless. This represents a potentially untapped resource for early identification. Using natural language processing (NLP), we investigated the idea that concepts related to homelessness written in the free text of the medical record precede the identification of homelessness by administrative data. We found that homeless Veterans were much higher utilizers of VA resources, producing approximately 12 times as many documents as non-homeless Veterans. For a significant proportion of Veterans, NLP detected mentions of direct or indirect evidence of homelessness earlier than structured data did.


Medical Care | 2017

Patient-aligned Care Team Engagement to Connect Veterans Experiencing Homelessness With Appropriate Health Care

Adi V. Gundlapalli; Andrew Redd; Daniel Bolton; Megan E. Vanneman; Marjorie E. Carter; Erin E. Johnson; Matthew H. Samore; Jamison D. Fargo; Thomas P. O'Toole

Background: Veterans experiencing homelessness frequently use emergency and urgent care (ED). Objective: To examine the effect of a Patient-aligned Care Team (PACT) model tailored to the unique needs of Veterans experiencing homelessness (H-PACT) on frequency and type of ED visits in Veterans Health Administration (VHA) medical facilities. Research Design: During a 12-month period, ED visits for 3981 homeless Veterans enrolled in (1) H-PACT at 20 VHA medical centers (enrolled) were compared with those of (2) 24,363 homeless Veterans not enrolled in H-PACT at the same sites (nonenrolled), and (3) 23,542 homeless Veterans at 12 non-H-PACT sites (usual care) using a difference-in-differences approach. Measure(s): The primary outcome was ED and other health care utilization and the secondary outcome was emergent (not preventable/avoidable) ED visits. Results: H-PACT enrollees were predominantly white males with a higher baseline Charlson comorbidity index. In comparing H-PACT enrollees with usual care, there was a significant decrease in ED usage among the highest ED utilizers (difference-in-differences, −4.43; P<0.001). The decrease in ED visits were significant though less intense for H-PACT enrollees versus nonenrolled (−0.29, P<0.001). H-PACT enrollees demonstrated a significant increase in the proportion of ED care visits that were not preventable/avoidable in the 6 months after enrollment, but had stable rates of primary care, mental health, social work, and substance abuse visits over the 12 months. Conclusions: Primary care treatment engagement can reduce ED visits and increase appropriate use of ED services in VHA for Veterans experiencing homelessness, especially in the highest ED utilizers.
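The difference-in-differences estimates quoted above compare the change in ED visits among H-PACT enrollees against the change in a comparison group over the same period. A minimal sketch with invented numbers, not the study's data:

```python
def diff_in_diff(treated_pre, treated_post, control_pre, control_post):
    """Change in the treated group net of the change in the control group."""
    return (treated_post - treated_pre) - (control_post - control_pre)

# Invented mean ED visits per Veteran, pre/post enrollment:
effect = diff_in_diff(treated_pre=6.0, treated_post=2.0,
                      control_pre=6.0, control_post=6.5)  # -4.5
```

Netting out the control group's change guards against attributing secular trends in ED use to the H-PACT intervention.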


Mathematical Medicine and Biology-a Journal of The Ima | 2015

Efficient parameter estimation for models of healthcare-associated pathogen transmission in discrete and continuous time

Alun Thomas; Andrew Redd; Karim Khader; Molly Leecaster; Tom Greene; Matthew H. Samore

We describe two novel Markov chain Monte Carlo approaches to computing estimates of parameters concerned with healthcare-associated infections. The first approach frames the discrete time, patient level, hospital transmission model as a Bayesian network, and exploits this framework to improve greatly on the computational efficiency of estimation compared with existing programs. The second approach is in continuous time and shares the same computational advantages. Both methods have been implemented in programs that are available from the authors. We use these programs to show that time discretization can lead to statistical bias in the underestimation of the rate of transmission of pathogens. We show that the continuous implementation has similar running time to the discrete implementation, has better Markov chain mixing properties, and eliminates the potential statistical bias. We, therefore, recommend its use when continuous-time data are available.
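A random-walk Metropolis sampler illustrates the general Markov chain Monte Carlo approach. This toy version targets a single Poisson transmission rate with a flat prior; it is only a stand-in for the paper's patient-level hospital transmission models, not their implementation:

```python
import math
import random

def log_post(beta, events, exposure):
    """Log posterior for a Poisson transmission rate with a flat prior on
    beta > 0 (a toy stand-in for the patient-level transmission model)."""
    if beta <= 0:
        return float("-inf")
    rate = beta * exposure
    return events * math.log(rate) - rate

def metropolis(events, exposure, n_iter=5000, step=0.05, seed=0):
    """Random-walk Metropolis sampler for beta."""
    rng = random.Random(seed)
    beta, samples = 0.1, []
    for _ in range(n_iter):
        prop = beta + rng.gauss(0, step)
        delta = log_post(prop, events, exposure) - log_post(beta, events, exposure)
        if delta >= 0 or rng.random() < math.exp(delta):
            beta = prop
        samples.append(beta)
    return samples

# 20 transmission events over 200 patient-days: rate near 0.1
samples = metropolis(events=20, exposure=200)
estimate = sum(samples[1000:]) / len(samples[1000:])
```

The mixing properties the authors compare between the discrete- and continuous-time implementations are exactly the behavior of chains like this one: how quickly successive samples decorrelate.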


Studies in health technology and informatics | 2014

Recognizing Questions and Answers in EMR Templates Using Natural Language Processing.

Guy Divita; Shuying Shen; Marjorie E. Carter; Andrew Redd; Tyler Forbush; Miland Palmer; Matthew H. Samore; Adi V. Gundlapalli

Templated boilerplate structures pose challenges to natural language processing (NLP) tools used for information extraction (IE). Routine error analyses while performing an IE task using Veterans Affairs (VA) medical records identified templates as an important cause of false positives. The baseline NLP pipeline (V3NLP) was adapted to recognize negation, questions and answers (QA) in various template types by adding a negation and slot:value identification annotator. The system was trained using a corpus of 975 documents developed as a reference standard for extracting psychosocial concepts. Iterative processing using the baseline tool and baseline+negation+QA revealed loss of numbers of concepts with a modest increase in true positives in several concept categories. Similar improvement was noted when the adapted V3NLP was used to process a random sample of 318,000 notes. We demonstrate the feasibility of adapting an NLP pipeline to recognize templates.
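Slot:value structures of the kind the adapted pipeline recognizes can be approximated with a line-oriented pattern. This regex sketch is illustrative, not the V3NLP slot:value annotator itself:

```python
import re

# Lines shaped like "Question: answer"; the pattern is an illustrative sketch.
SLOT_VALUE = re.compile(r"^\s*([A-Za-z][\w /-]*?)\s*:\s*(.*\S)\s*$")

def extract_slots(note):
    """(slot, value) pairs from templated lines; narrative lines are skipped."""
    return [
        m.groups()
        for line in note.splitlines()
        if (m := SLOT_VALUE.match(line))
    ]

note = """Housing status: homeless
Veteran describes ongoing financial stress.
PHQ-9 score: 12"""
pairs = extract_slots(note)  # [('Housing status', 'homeless'), ('PHQ-9 score', '12')]
```

Separating the question from the answer lets a concept extractor score only the answer text, which is how templated boilerplate stops inflating false positives.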


Journal of Biomedical Informatics | 2017

A pilot study of a heuristic algorithm for novel template identification from VA electronic medical record text.

Andrew Redd; Adi V. Gundlapalli; Guy Divita; Marjorie E. Carter; Le Thuy Tran; Matthew H. Samore

RATIONALE Templates in text notes pose challenges for automated information extraction algorithms. We propose a method that identifies novel templates in plain text medical notes. The identification can then be used to either include or exclude templates when processing notes for information extraction. METHODS The two-module method is based on the framework of information foraging and addresses the hypothesis that documents containing templates and the templates within those documents can be identified by common features. The first module takes documents from the corpus and groups those with common templates. This is accomplished through a binned word count hierarchical clustering algorithm. The second module extracts the templates. It uses the groupings and performs a longest common subsequence (LCS) algorithm to obtain the constituent parts of the templates. The method was developed and tested on a random document corpus of 750 notes derived from a large database of US Department of Veterans Affairs (VA) electronic medical notes. RESULTS The grouping module, using hierarchical clustering, identified 23 groups with 3 documents or more, consisting of 120 documents from the 750 documents in our test corpus. Of these, 18 groups had at least one common template that was present in all documents in the group for a positive predictive value of 78%. The LCS extraction module performed with 100% positive predictive value, 94% sensitivity, and 83% negative predictive value. The human review determined that in 4 groups the template covered the entire document, with the remaining 14 groups containing a common section template. Among documents with templates, the number of templates per document ranged from 1 to 14. The mean and median number of templates per group was 5.9 and 5, respectively. DISCUSSION The grouping method was successful in finding like documents containing templates. 
Of the groups of documents containing templates, the LCS module was successful in deciphering text belonging to the template and text that was extraneous. Major obstacles to improved performance included documents composed of multiple templates, templates that included other templates embedded within them, and variants of templates. We demonstrate proof of concept of the grouping and extraction method of identifying templates in electronic medical records in this pilot study and propose methods to improve performance and scaling up.
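The longest common subsequence step of the extraction module can be sketched with the classic dynamic-programming recurrence over token lists; the example documents below are invented:

```python
def lcs(a, b):
    """Longest common subsequence of two token lists, by the classic
    dynamic-programming recurrence."""
    m, n = len(a), len(b)
    dp = [[[] for _ in range(n + 1)] for _ in range(m + 1)]
    for i in range(m):
        for j in range(n):
            if a[i] == b[j]:
                dp[i + 1][j + 1] = dp[i][j] + [a[i]]
            else:
                dp[i + 1][j + 1] = max(dp[i][j + 1], dp[i + 1][j], key=len)
    return dp[m][n]

# Two invented notes sharing a section template:
doc1 = "Housing Status : stable Employment : none".split()
doc2 = "Housing Status : homeless Employment : part time".split()
template = lcs(doc1, doc2)  # ['Housing', 'Status', ':', 'Employment', ':']
```

Tokens in the common subsequence are candidate template boilerplate; tokens outside it ("stable", "homeless", "none") are the patient-specific fill-ins.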


Journal of Biomedical Informatics | 2017

Detecting the presence of an indwelling urinary catheter and urinary symptoms in hospitalized patients using natural language processing.

Adi V. Gundlapalli; Guy Divita; Andrew Redd; Marjorie E. Carter; Danette Ko; Michael A. Rubin; Matthew H. Samore; Judith Strymish; Sarah L. Krein; Kalpana Gupta; Anne Sales

OBJECTIVE To develop a natural language processing pipeline to extract positively asserted concepts related to the presence of an indwelling urinary catheter in hospitalized patients from the free text of the electronic medical note. The goal is to assist infection preventionists and other healthcare professionals in determining whether a patient has an indwelling urinary catheter when a catheter-associated urinary tract infection is suspected. Currently, data on indwelling urinary catheters is not consistently captured in the electronic medical record in structured format and thus cannot be reliably extracted for clinical and research purposes. MATERIALS AND METHODS We developed a lexicon of terms related to indwelling urinary catheters and urinary symptoms based on domain knowledge, prior experience in the field, and review of medical notes. A reference standard of 1595 randomly selected documents from inpatient admissions was annotated by human reviewers to identify all positively and negatively asserted concepts related to indwelling urinary catheters. We trained a natural language processing pipeline based on the V3NLP framework using 1050 documents and tested on 545 documents to determine agreement with the human reference standard. Metrics reported are positive predictive value and recall. RESULTS The lexicon contained 590 terms related to the presence of an indwelling urinary catheter in various categories including insertion, care, change, and removal of urinary catheters and 67 terms for urinary symptoms. Nursing notes were the most frequent inpatient note titles in the reference standard document corpus; these also yielded the highest number of positively asserted concepts with respect to urinary catheters. 
Comparing the performance of the natural language processing pipeline against the human reference standard, the overall recall was 75% and positive predictive value was 99% on the training set; on the testing set, the recall was 72% and positive predictive value was 98%. The performance on extracting urinary symptoms (including fever) was high with recall and precision greater than 90%. CONCLUSIONS We have shown that it is possible to identify the presence of an indwelling urinary catheter and urinary symptoms from the free text of electronic medical notes from inpatients using natural language processing. These are two key steps in developing automated protocols to assist humans in large-scale review of patient charts for catheter-associated urinary tract infection. The challenges associated with extracting indwelling urinary catheter-related concepts also inform the design of electronic medical record templates to reliably and consistently capture data on indwelling urinary catheters.


Journal of Biomedical Informatics | 2017

Extraction of left ventricular ejection fraction information from various types of clinical reports

Youngjun Kim; Jennifer H. Garvin; Mary K. Goldstein; Tammy S. Hwang; Andrew Redd; Daniel Bolton; Paul A. Heidenreich; Stéphane M. Meystre

Efforts to improve the treatment of congestive heart failure, a common and serious medical condition, include the use of quality measures to assess guideline-concordant care. The goal of this study is to identify left ventricular ejection fraction (LVEF) information from various types of clinical notes, and to then use this information for heart failure quality measurement. We analyzed the annotation differences between a new corpus of clinical notes from the Echocardiography, Radiology, and Text Integrated Utility package and other corpora annotated for natural language processing (NLP) research in the Department of Veterans Affairs. These reports contain varying degrees of structure. To examine whether existing LVEF extraction modules we developed in prior research improve the accuracy of LVEF information extraction from the new corpus, we created two sequence-tagging NLP modules trained with a new data set, with or without predictions from the existing LVEF extraction modules. We also conducted a set of experiments to examine the impact of training data size on information extraction accuracy. We found that less training data is needed when reports are highly structured, and that combining predictions from existing LVEF extraction modules improves information extraction when reports have less structured formats and a rich set of vocabulary.
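Quantitative LVEF mentions of the kind targeted above (e.g. "LVEF 35%", "ejection fraction ... 55%") can be approximated with a single pattern; this regex is an illustrative sketch, not the study's sequence-tagging NLP modules:

```python
import re

# Quantitative LVEF mentions like "LVEF 35%" or "ejection fraction ... 55%".
LVEF = re.compile(
    r"\b(?:LVEF|left ventricular ejection fraction|ejection fraction|EF)\b"
    r"[^0-9%]{0,20}(\d{1,3})\s*%",
    re.IGNORECASE,
)

def extract_lvef(text):
    """All LVEF percentage values found in a report."""
    return [int(m.group(1)) for m in LVEF.finditer(text)]

report = "Echo today: ejection fraction estimated at 55%. Prior LVEF 35 %."
values = extract_lvef(report)  # [55, 35]
```

Patterns like this do well on highly structured echocardiography reports; the study's finding is that less structured notes with richer vocabulary are where trained sequence taggers, and predictions combined from existing modules, earn their keep.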

Collaboration


Dive into Andrew Redd's collaborations.

Top Co-Authors

Stéphane M. Meystre

Medical University of South Carolina
