Lee M. Christensen
University of Utah
Publications
Featured research published by Lee M. Christensen.
Artificial Intelligence in Medicine | 2005
Wendy W. Chapman; Lee M. Christensen; Michael M. Wagner; Peter J. Haug; Oleg Ivanov; John N. Dowling; Robert T. Olszewski
OBJECTIVE Develop and evaluate a natural language processing application for classifying chief complaints into syndromic categories for syndromic surveillance. INTRODUCTION Much of the input data for artificial intelligence applications in the medical field are free-text patient medical records, including dictated medical reports and triage chief complaints. To be useful for automated systems, the free text must be translated into encoded form. METHODS We implemented a biosurveillance detection system from Pennsylvania to monitor the 2002 Winter Olympic Games. Because the input data were in free-text format, we used a natural language processing text classifier to automatically classify free-text triage chief complaints into the syndromic categories used by the biosurveillance system. The classifier was trained on 4700 chief complaints from Pennsylvania. We evaluated the ability of the classifier to classify free-text chief complaints into syndromic categories with a test set of 800 chief complaints from Utah. RESULTS The classifier produced the following areas under the ROC curve: Constitutional = 0.95; Gastrointestinal = 0.97; Hemorrhagic = 0.99; Neurological = 0.96; Rash = 1.0; Respiratory = 0.99; Other = 0.96. Using information stored in the system's semantic model, we extracted from the Respiratory classifications lower respiratory complaints and lower respiratory complaints with fever, with precisions of 0.97 and 0.96, respectively. CONCLUSION Results suggest that a trainable natural language processing text classifier can accurately extract data from free-text chief complaints for biosurveillance.
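To make the classification task concrete, the sketch below maps toy free-text chief complaints to a single syndromic category and scores the result with area under the ROC curve. The bag-of-words pipeline, the example complaints, and the labels are illustrative assumptions; this is not the authors' trainable classifier or the Pennsylvania/Utah data.

# Hypothetical sketch of the general task described above: classify
# free-text chief complaints into one syndromic category and score the
# result with ROC AUC. Data and model are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.pipeline import make_pipeline

# Toy chief complaints with a binary Respiratory label (1 = respiratory).
complaints = [
    "shortness of breath and cough",
    "productive cough with fever",
    "nausea and vomiting",
    "abdominal pain and diarrhea",
    "wheezing and chest tightness",
    "headache and dizziness",
]
labels = [1, 1, 0, 0, 1, 0]

# Bag-of-words features feeding a linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(complaints, labels)

# Score held-out complaints and report area under the ROC curve.
test_texts = ["coughing and wheezing", "vomiting after meals"]
test_labels = [1, 0]
scores = model.predict_proba(test_texts)[:, 1]
print("Respiratory AUC:", roc_auc_score(test_labels, scores))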
meeting of the association for computational linguistics | 2002
Lee M. Christensen; Peter J. Haug; Marcelo Fiszman
This paper describes the basic philosophy and implementation of MPLUS (M+), a robust medical text analysis tool that uses a semantic model based on Bayesian Networks (BNs). BNs provide a concise and useful formalism for representing semantic patterns in medical text, and for recognizing and reasoning over those patterns. BNs are noise-tolerant, and facilitate the training of M+.
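As an illustration of how a Bayesian network can encode a semantic pattern and support reasoning over it, the sketch below uses a single hidden concept node that explains two observed cue words and computes the posterior over the concept by enumeration. The network structure, the cue words, and all probabilities are invented for illustration and are not taken from M+.

# Hypothetical sketch of the idea behind a BN-based semantic model:
# a hidden semantic concept explains observed words in text, and the
# posterior over the concept is computed by enumeration.

# P(concept): prior probability that a "lower respiratory finding" is present.
p_concept = 0.30

# P(word | concept): probability each cue word appears given the concept state.
p_word_given_concept = {
    "infiltrate":    {True: 0.70, False: 0.05},
    "consolidation": {True: 0.60, False: 0.02},
}

def posterior(observed):
    """Return P(concept=True | observed word evidence) by enumeration."""
    def joint(concept):
        p = p_concept if concept else 1.0 - p_concept
        for word, present in observed.items():
            p_w = p_word_given_concept[word][concept]
            p *= p_w if present else 1.0 - p_w
        return p
    num = joint(True)
    return num / (num + joint(False))

# A report that mentions "infiltrate" but not "consolidation".
print(posterior({"infiltrate": True, "consolidation": False}))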
Journal of the American Medical Informatics Association | 2015
Sameer Pradhan; Noémie Elhadad; Brett R. South; David Martinez; Lee M. Christensen; Amy Vogel; Hanna Suominen; Wendy W. Chapman; Guergana Savova
Objective The ShARe/CLEF eHealth 2013 Evaluation Lab Task 1 was organized to evaluate the state of the art on the clinical text in (i) disorder mention identification/recognition based on Unified Medical Language System (UMLS) definition (Task 1a) and (ii) disorder mention normalization to an ontology (Task 1b). Such a community evaluation has not been previously executed. Task 1a included a total of 22 system submissions, and Task 1b included 17. Most of the systems employed a combination of rules and machine learners. Materials and methods We used a subset of the Shared Annotated Resources (ShARe) corpus of annotated clinical text—199 clinical notes for training and 99 for testing (roughly 180 K words in total). We provided the community with the annotated gold standard training documents to build systems to identify and normalize disorder mentions. The systems were tested on a held-out gold standard test set to measure their performance. Results For Task 1a, the best-performing system achieved an F1 score of 0.75 (0.80 precision; 0.71 recall). For Task 1b, another system performed best with an accuracy of 0.59. Discussion Most of the participating systems used a hybrid approach by supplementing machine-learning algorithms with features generated by rules and gazetteers created from the training data and from external resources. Conclusions The task of disorder normalization is more challenging than that of identification. The ShARe corpus is available to the community as a reference standard for future studies.
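For readers unfamiliar with the span-level scoring behind the Task 1a numbers, the sketch below computes exact-match precision, recall, and F1 over (start, end) character offsets. The example spans are invented, and this is a simplified stand-in rather than the official ShARe/CLEF evaluation script.

# Hypothetical sketch of exact-match span scoring for mention
# identification: precision, recall, and F1 over offset pairs.

def prf1(gold_spans, pred_spans):
    """Exact-match precision, recall, and F1 over sets of spans."""
    gold, pred = set(gold_spans), set(pred_spans)
    tp = len(gold & pred)
    precision = tp / len(pred) if pred else 0.0
    recall = tp / len(gold) if gold else 0.0
    f1 = (2 * precision * recall / (precision + recall)) if (precision + recall) else 0.0
    return precision, recall, f1

gold = [(10, 19), (42, 55), (80, 92)]   # annotated disorder mentions
pred = [(10, 19), (42, 55), (60, 66)]   # system output
print(prf1(gold, pred))                 # roughly (0.67, 0.67, 0.67)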
Pharmacoepidemiology and Drug Safety | 2013
Sascha Dublin; Eric Baldwin; Rod Walker; Lee M. Christensen; Peter J. Haug; Michael L. Jackson; Jennifer C. Nelson; Jeffrey P. Ferraro; David Carrell; Wendy W. Chapman
This study aimed to develop Natural Language Processing (NLP) approaches to supplement manual outcome validation, specifically to validate pneumonia cases from chest radiograph reports.
Journal of trauma nursing | 2007
Suzanne Day; Lee M. Christensen; Joseph Dalto; Peter J. Haug
Trauma centers use trauma registries to collect information on injured patients they receive. The information is used for evaluation of care rendered, research, system and process improvement, and evaluation of injury prevention programs. Identification of patients qualifying for inclusion in registries can be problematic. Searching for those who meet inclusion criteria is often time consuming and inefficient. This has changed at a Salt Lake City trauma center, with an application designed to automate the process of identifying trauma patients. This program uses natural language processing and decision support technologies and is in daily use by the trauma team registry personnel.
north american chapter of the association for computational linguistics | 2015
Sumithra Velupillai; Danielle L. Mowery; Samir E. AbdelRahman; Lee M. Christensen; Wendy W. Chapman
The 2015 Clinical TempEval Challenge addressed the problem of temporal reasoning in the clinical domain by providing an annotated corpus of pathology and clinical notes related to colon cancer patients. The challenge consisted of six subtasks: TIMEX3 and event span detection, TIMEX3 and event attribute classification, and document time relation and narrative container relation classification. Our BluLab team participated in all six subtasks. For the TIMEX3 and event subtasks, we developed a ClearTK support vector machine pipeline using mainly simple lexical features along with information from rule-based systems. For the relation subtasks, we employed a conditional random fields classification approach, with input from a rule-based system for the narrative container relation subtask. Our team ranked first for all TIMEX3 and event subtasks, as well as for the document time relation subtask.
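The sketch below illustrates the kind of simple lexical features a token-level tagger (such as the SVM pipeline mentioned above) might use for TIMEX3 and event span detection. The specific feature set and the example sentence are assumptions for illustration, not the BluLab system's actual configuration.

# Hypothetical sketch of simple lexical features for a token-level
# sequence tagger that predicts B/I/O labels for TIMEX3 and event spans.

def token_features(tokens, i):
    """Simple lexical features for the token at position i."""
    tok = tokens[i]
    return {
        "lower": tok.lower(),
        "is_digit": tok.isdigit(),
        "is_title": tok.istitle(),
        "suffix3": tok[-3:].lower(),
        "prev": tokens[i - 1].lower() if i > 0 else "<BOS>",
        "next": tokens[i + 1].lower() if i + 1 < len(tokens) else "<EOS>",
    }

sentence = "Colonoscopy performed on March 3 , 2014 showed a mass .".split()
features = [token_features(sentence, i) for i in range(len(sentence))]
# Each feature dict would feed a sequence labeler (SVM or CRF).
print(features[3])  # features for the token "March"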
Archive | 2001
Peter J. Haug; Spencer B. Koehler; Lee M. Christensen; Michael L. Gundersen; Rudy E. Van Bree
Archive | 1998
Peter J. Haug; Spencer B. Koehler; Lee M. Christensen; Michael L. Gundersen; Rudy E. Van Bree
conference of american medical informatics association | 1997
Peter J. Haug; Lee M. Christensen; Michael L. Gundersen; B. Clemons; Spencer B. Koehler; K. Bauer
cross language evaluation forum | 2013
Danielle L. Mowery; Sumithra Velupillai; Brett R. South; Lee M. Christensen; David Martinez; Liadh Kelly; Lorraine Goeuriot; Noémie Elhadad; Sameer Pradhan; Guergana Savova; Wendy W. Chapman