
Publication


Featured research published by Lemuel R. Waitman.


Journal of the American Medical Informatics Association | 2010

MedEx: a medication information extraction system for clinical narratives

Hua Xu; Shane P. Stenner; Son Doan; Kevin B. Johnson; Lemuel R. Waitman; Joshua C. Denny

Medication information is one of the most important types of clinical data in electronic medical records. It is critical for healthcare safety and quality, as well as for clinical research that uses electronic medical record data. However, medication data are often recorded in clinical notes as free text. As such, they are not accessible to other computerized applications that rely on coded data. We describe a new natural language processing system (MedEx), which extracts medication information from clinical notes. MedEx was initially developed using discharge summaries. An evaluation using a data set of 50 discharge summaries showed it performed well on identifying not only drug names (F-measure 93.2%) but also signature information, such as strength, route, and frequency, with F-measures of 94.5%, 93.9%, and 96.0%, respectively. We then applied MedEx unchanged to outpatient clinic visit notes. It performed similarly, with F-measures over 90% on a set of 25 clinic visit notes.
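The extraction task the abstract describes can be illustrated with a toy sketch. The pattern below is an assumption for illustration only; MedEx itself uses a semantic tagger and parser far more capable than a single regular expression.

```python
import re

# Illustrative sketch only: a toy pattern-based extractor for a drug name plus
# "signature" fields (strength, route, frequency) from free-text sentences.
# This is NOT MedEx's actual method; the vocabulary here is a hypothetical
# subset chosen for the example.
SIG_PATTERN = re.compile(
    r"(?P<drug>[A-Za-z]+)\s+"
    r"(?P<strength>\d+(?:\.\d+)?\s?(?:mg|mcg|g|units))\s+"
    r"(?P<route>po|iv|im|sc)\s+"
    r"(?P<frequency>daily|bid|tid|qid|q\d+h)",
    re.IGNORECASE,
)

def extract_medication(text):
    """Return a dict of medication fields found in a free-text sig, or None."""
    m = SIG_PATTERN.search(text)
    return m.groupdict() if m else None

fields = extract_medication("Continue lisinopril 10 mg po daily for hypertension.")
# fields -> {'drug': 'lisinopril', 'strength': '10 mg', 'route': 'po',
#            'frequency': 'daily'}
```

A real system must also handle abbreviations, misspellings, and discontinuous mentions, which is why MedEx's evaluation reports separate F-measures per field.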


Kidney International | 2010

Commonly used surrogates for baseline renal function affect the classification and prognosis of acute kidney injury

Edward D. Siew; Michael E. Matheny; T. Alp Ikizler; Julie B. Lewis; Randolph A. Miller; Lemuel R. Waitman; Alan S. Go; Chirag R. Parikh; Josh F. Peterson

Studies of acute kidney injury usually lack data on pre-admission kidney function and often substitute an inpatient or imputed serum creatinine as an estimate for baseline renal function. In this study, we compared the potential error introduced by using surrogates such as (1) an estimated glomerular filtration rate of 75 ml/min per 1.73 m(2) (suggested by the Acute Dialysis Quality Initiative), (2) a minimum inpatient serum creatinine value, and (3) the first admission serum creatinine value, with values computed using pre-admission renal function. The study covered a 12-month period and included a cohort of 4863 adults admitted to the Vanderbilt University Hospital. Use of both imputed and minimum baseline serum creatinine values significantly inflated the incidence of acute kidney injury by about half, producing low specificities of 77-80%. In contrast, use of the admission serum creatinine value as baseline significantly underestimated the incidence by about a third, yielding a low sensitivity of 39%. Application of any surrogate marker led to frequent misclassification of patient deaths after acute kidney injury and differences in both in-hospital and 60-day mortality rates. Our study found that commonly used surrogates for baseline serum creatinine result in bi-directional misclassification of the incidence and prognosis of acute kidney injury in a hospital setting.
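A minimal sketch of why the baseline choice matters, using the RIFLE creatinine ratios the study's AKI definitions rest on. The patient values below are invented for illustration, not study data.

```python
# Hypothetical sketch: how the choice of baseline creatinine changes AKI
# classification for a single patient. All values are illustrative.
def rifle_stage(baseline_scr, peak_scr):
    """Classify AKI by RIFLE creatinine criteria (peak-to-baseline ratio)."""
    ratio = peak_scr / baseline_scr
    if ratio >= 3.0:
        return "Failure"
    if ratio >= 2.0:
        return "Injury"
    if ratio >= 1.5:
        return "Risk"
    return "No AKI"

# Hypothetical patient: true pre-admission creatinine 1.4 mg/dL, inpatient peak 1.9.
true_baseline = 1.4
# ADQI surrogate: a creatinine back-calculated from an assumed eGFR of
# 75 mL/min/1.73 m^2 -- taken here as ~1.0 mg/dL for this patient.
imputed_baseline = 1.0
peak = 1.9

print(rifle_stage(true_baseline, peak))     # "No AKI" (1.9/1.4 ~ 1.36)
print(rifle_stage(imputed_baseline, peak))  # "Risk"   (1.9/1.0 = 1.9)
```

The imputed baseline turns a non-event into an AKI case, which is the inflation-of-incidence effect the study quantifies.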


Journal of the American Medical Informatics Association | 2007

Computer-based Insulin Infusion Protocol Improves Glycemia Control over Manual Protocol

Jeffrey B. Boord; Mona Sharifi; Robert A. Greevy; Marie R. Griffin; Vivian K. Lee; Ty A. Webb; Michael E. May; Lemuel R. Waitman; Addison K. May; Randolph A. Miller

OBJECTIVE Hyperglycemia worsens clinical outcomes in critically ill patients. Precise glycemia control using intravenous insulin improves outcomes. To determine if we could improve glycemia control over a previous paper-based, manual protocol, authors implemented, in a surgical intensive care unit (SICU), an intravenous insulin protocol integrated into a care provider order entry (CPOE) system. DESIGN Retrospective before-after study of consecutive adult patients admitted to a SICU during pre (manual protocol, 32 days) and post (computer-based protocol, 49 days) periods. MEASUREMENTS Percentage of glucose readings in ideal range of 70-109 mg/dl, and minutes spent in ideal range of control during the first 5 days of SICU stay. RESULTS The computer-based protocol reduced time from first glucose measurement to initiation of insulin protocol, improved the percentage of all SICU glucose readings in the ideal range, and improved control in patients on IV insulin for > or =24 hours. Hypoglycemia (<40 mg/dl) was rare in both groups. CONCLUSION The CPOE-based intravenous insulin protocol improved glycemia control in SICU patients compared to a previous manual protocol, and reduced time to insulin therapy initiation. Integrating a computer-based insulin protocol into a CPOE system achieved efficient, safe, and effective glycemia control in SICU patients.
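The study's primary measurement (percentage of glucose readings in the 70-109 mg/dL ideal range) reduces to a simple calculation; the readings below are invented for illustration.

```python
# Sketch of the primary measurement: fraction of glucose readings in the
# ideal range of 70-109 mg/dL. The readings are hypothetical examples.
def pct_in_ideal_range(readings, low=70, high=109):
    in_range = sum(low <= g <= high for g in readings)
    return 100.0 * in_range / len(readings)

pre  = [145, 130, 102, 160, 98, 175, 88, 150]   # hypothetical manual-protocol readings
post = [105, 98, 88, 112, 101, 95, 130, 104]    # hypothetical computer-protocol readings
print(f"pre: {pct_in_ideal_range(pre):.1f}%")   # pre: 37.5%
print(f"post: {pct_in_ideal_range(post):.1f}%") # post: 75.0%
```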


American Journal of Kidney Diseases | 2010

A Computerized Provider Order Entry Intervention for Medication Safety During Acute Kidney Injury: A Quality Improvement Report

Allison B. McCoy; Lemuel R. Waitman; Cynthia S. Gadd; Ioana Danciu; James P. Smith; Julia B. Lewis; Jonathan S. Schildcrout; Josh F. Peterson

BACKGROUND Frequently, prescribers fail to account for changing kidney function when prescribing medications. We evaluated the use of a computerized provider order entry intervention to improve medication management during acute kidney injury. STUDY DESIGN Quality improvement report with time series analyses. SETTING & PARTICIPANTS 1,598 adult inpatients with a minimum 0.5-mg/dL increase in serum creatinine level over 48 hours after an order for at least one of 122 nephrotoxic or renally cleared medications. QUALITY IMPROVEMENT PLAN Passive noninteractive warnings about increasing serum creatinine level appeared within the computerized provider order entry interface and on printed rounding reports. For contraindicated or high-toxicity medications that should be avoided or adjusted, an interruptive alert within the system asked providers to modify or discontinue the targeted orders, mark the current dosing as correct and to remain unchanged, or defer the alert to reappear in the next session. OUTCOMES & MEASUREMENTS Intervention effect on drug modification or discontinuation, time to modification or discontinuation, and provider interactions with alerts. RESULTS The modification or discontinuation rate per 100 events for medications included in the interruptive alert within 24 hours of increasing creatinine level improved from 35.2 preintervention to 52.6 postintervention (P < 0.001); orders were modified or discontinued more quickly (P < 0.001). During the postintervention period, providers initially deferred 78.1% of interruptive alerts, although 54% of these eventually were modified or discontinued before patient death, discharge, or transfer. The response to passive alerts about medications requiring review did not significantly change compared with baseline. LIMITATIONS Single tertiary-care academic medical center; provider actions were not independently adjudicated for appropriateness. 
CONCLUSIONS A computerized provider order entry-based alerting system to support medication management after acute kidney injury significantly increased the rate and timeliness of modification or discontinuation of targeted medications.
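The trigger described above (a minimum 0.5-mg/dL creatinine rise within 48 hours in a patient ordered a targeted medication) can be sketched as follows; the function and data shapes are assumptions for illustration, not the production rule set.

```python
from datetime import datetime, timedelta

# Illustrative trigger logic (an assumption, not the study's production rules):
# flag a patient when serum creatinine rises >= 0.5 mg/dL within 48 hours
# while a targeted nephrotoxic or renally cleared drug is on the order list.
def creatinine_alert(scr_series, active_meds, targeted_meds, delta=0.5,
                     window=timedelta(hours=48)):
    """scr_series: list of (timestamp, creatinine mg/dL) sorted by time."""
    if not any(m in targeted_meds for m in active_meds):
        return False
    for i, (t0, v0) in enumerate(scr_series):
        for t1, v1 in scr_series[i + 1:]:
            if t1 - t0 <= window and v1 - v0 >= delta:
                return True
    return False

series = [(datetime(2010, 3, 1, 8), 1.0), (datetime(2010, 3, 2, 8), 1.6)]
print(creatinine_alert(series, {"gentamicin"}, {"gentamicin", "ketorolac"}))  # True
```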


Journal of the American Medical Informatics Association | 2012

A framework for evaluating the appropriateness of clinical decision support alerts and responses

Allison B. McCoy; Lemuel R. Waitman; Julia B. Lewis; Julie Wright; David P. Choma; Randolph A. Miller; Josh F. Peterson

OBJECTIVE Alerting systems, a type of clinical decision support, are increasingly prevalent in healthcare, yet few studies have concurrently measured the appropriateness of alerts with provider responses to alerts. Recent reports of suboptimal alert system design and implementation highlight the need for better evaluation to inform future designs. The authors present a comprehensive framework for evaluating the clinical appropriateness of synchronous, interruptive medication safety alerts. METHODS Through literature review and iterative testing, metrics were developed that describe successes, justifiable overrides, provider non-adherence, and unintended adverse consequences of clinical decision support alerts. The framework was validated by applying it to a medication alerting system for patients with acute kidney injury (AKI). RESULTS Through expert review, the framework assesses each alert episode for appropriateness of the alert display and the necessity and urgency of a clinical response. Primary outcomes of the framework include the false positive alert rate, alert override rate, provider non-adherence rate, and rate of provider response appropriateness. Application of the framework to evaluate an existing AKI medication alerting system provided a more complete understanding of the process outcomes measured in the AKI medication alerting system. The authors confirmed that previous alerts and provider responses were most often appropriate. CONCLUSION The new evaluation model offers a potentially effective method for assessing the clinical appropriateness of synchronous interruptive medication alerts prior to evaluating patient outcomes in a comparative trial. More work can determine the generalizability of the framework for use in other settings and other alert types.
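The framework's primary outcome rates can be sketched from per-alert expert judgments; the field names and judgment values below are assumptions for illustration, not the paper's data model.

```python
# Hedged sketch of the framework's primary rates, computed from hypothetical
# per-alert expert judgments. Field names are illustrative assumptions.
alerts = [
    {"display_appropriate": True,  "overridden": False, "override_justified": None},
    {"display_appropriate": True,  "overridden": True,  "override_justified": True},
    {"display_appropriate": False, "overridden": True,  "override_justified": True},
    {"display_appropriate": True,  "overridden": True,  "override_justified": False},
]

n = len(alerts)
# Alert fired when it should not have.
false_positive_rate = sum(not a["display_appropriate"] for a in alerts) / n
override_rate = sum(a["overridden"] for a in alerts) / n
# Non-adherence: an appropriate alert overridden without justification.
overrides = [a for a in alerts if a["overridden"]]
non_adherence_rate = sum(
    a["display_appropriate"] and not a["override_justified"] for a in overrides
) / len(overrides)

print(false_positive_rate)           # 0.25
print(override_rate)                 # 0.75
print(round(non_adherence_rate, 2))  # 0.33
```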


Medical Decision Making | 2010

Development of Inpatient Risk Stratification Models of Acute Kidney Injury for Use in Electronic Health Records

Michael E. Matheny; Randolph A. Miller; T. Alp Ikizler; Lemuel R. Waitman; Joshua C. Denny; Jonathan S. Schildcrout; Robert S. Dittus; Josh F. Peterson

Objective. Patients with hospital-acquired acute kidney injury (AKI) are at risk for increased mortality and further medical complications. Evaluating these patients with a prediction tool easily implemented within an electronic health record (EHR) would identify high-risk patients prior to the development of AKI and could prevent iatrogenically induced episodes of AKI and improve clinical management. Methods. The authors used structured clinical data acquired from an EHR to identify patients with normal kidney function for admissions from 1 August 1999 to 31 July 2003. Using administrative, computerized provider order entry and laboratory test data, they developed a 3-level risk stratification model to predict each of 2 severity levels of in-hospital AKI as defined by RIFLE criteria. The severity levels were defined as 150% or 200% of baseline serum creatinine. Model discrimination and calibration were evaluated using 10-fold cross-validation. Results. Cross-validation of the models resulted in area under the receiver operating characteristic (AUC) curves of 0.75 (150% elevation) and 0.78 (200% elevation). Both models were adequately calibrated as measured by the Hosmer-Lemeshow goodness-of-fit test chi-squared values of 9.7 (P = 0.29) and 12.7 (P = 0.12), respectively. Conclusions. The authors generated risk prediction models for hospital-acquired AKI using only commonly available electronic data. The models identify patients at high risk for AKI who might benefit from early intervention or increased monitoring.
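The two model endpoints follow directly from the RIFLE creatinine thresholds stated above; a minimal sketch:

```python
# Minimal sketch of the outcome definitions: the two severity endpoints at
# 150% and 200% of baseline serum creatinine (RIFLE creatinine criteria).
def aki_endpoints(baseline_scr, peak_scr):
    ratio = peak_scr / baseline_scr
    return {"aki_150": ratio >= 1.5, "aki_200": ratio >= 2.0}

print(aki_endpoints(1.0, 1.6))  # {'aki_150': True, 'aki_200': False}
print(aki_endpoints(0.8, 1.7))  # ratio ~ 2.1, so both endpoints are met
```

Each endpoint then becomes the label for its own risk model, evaluated by 10-fold cross-validated AUC as reported.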


The Joint Commission Journal on Quality and Patient Safety | 2011

Adopting Real-Time Surveillance Dashboards as a Component of an Enterprisewide Medication Safety Strategy

Lemuel R. Waitman; Ira E. Phillips; Allison B. McCoy; Ioana Danciu; Robert M. Halpenny; Cori L. Nelsen; Daniel C. Johnson; John M. Starmer; Josh F. Peterson

BACKGROUND High-alert medications are frequently responsible for adverse drug events and present significant hazards to inpatients, despite technical improvements in the way they are ordered, dispensed, and administered. METHODS A real-time surveillance application was designed and implemented to enable pharmacy review of high-alert medication orders, complementing existing computerized provider order entry and integrated clinical decision support systems in a tertiary care hospital. The surveillance tool integrated real-time data from multiple clinical systems and applied logical criteria to highlight potentially high-risk scenarios. Use of the surveillance system for adult inpatients was analyzed for warfarin, heparin and enoxaparin, and aminoglycoside antibiotics. RESULTS Among 28,929 hospitalizations during the study period, patients eligible to appear on a dashboard included 2224 exposed to warfarin, 8383 to heparin or enoxaparin, and 893 to aminoglycosides. Clinical pharmacists reviewed the warfarin and aminoglycoside dashboards during 100% of the days in the study period, and the heparin/enoxaparin dashboard during 71% of the days. Displayed alert conditions ranged from common events, such as the 55% of patients receiving aminoglycosides who were missing a baseline creatinine, to rare events, such as the 0.1% of patients exposed to heparin who were given a bolus greater than 10,000 units. On the basis of interpharmacist communication and electronic medical record notes recorded within the dashboards, interventions to prevent further patient harm were frequent. CONCLUSIONS Even in an environment with sophisticated computerized provider order entry and clinical decision support systems, real-time pharmacy surveillance of high-alert medications provides an important platform for intercepting medication errors and optimizing therapy.
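The "logical criteria" the dashboard applied can be sketched from the two example conditions the abstract reports; the rule set and data structures below are assumptions paraphrased for illustration.

```python
# Sketch of dashboard-style screening rules. The two rules are paraphrased
# from conditions mentioned in the abstract; the patient record layout is a
# hypothetical assumption.
def dashboard_flags(patient):
    flags = []
    if ("aminoglycoside" in patient["drug_classes"]
            and patient.get("baseline_creatinine") is None):
        flags.append("aminoglycoside without baseline creatinine")
    if patient.get("heparin_bolus_units", 0) > 10000:
        flags.append("heparin bolus > 10,000 units")
    return flags

pt = {"drug_classes": {"aminoglycoside"}, "baseline_creatinine": None,
      "heparin_bolus_units": 12000}
print(dashboard_flags(pt))
# ['aminoglycoside without baseline creatinine', 'heparin bolus > 10,000 units']
```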


Journal of Clinical Monitoring and Computing | 2000

Representation and classification of breath sounds recorded in an intensive care setting using neural networks.

Lemuel R. Waitman; Kevin P. Clarkson; John A. Barwise; Paul H. King

Objective. Develop and test methods for representing and classifying breath sounds in an intensive care setting. Methods. Breath sounds were recorded over the bronchial regions of the chest. The breath sounds were represented by their averaged power spectral density, summed into feature vectors across the frequency spectrum from 0 to 800 Hertz. The sounds were segmented by individual breath, and each breath was divided into inspiratory and expiratory segments. Sounds were classified as normal or abnormal. Different back-propagation neural network configurations were evaluated; the number of input features, hidden units, and hidden layers were varied. Results. 2127 individual breath sounds from the ICU patients and 321 breaths from training tapes were obtained. The best overall classification rate for the ICU breath sounds was 73%, with 62% sensitivity and 85% specificity. The best overall classification rate for the training tapes was 91%, with 87% sensitivity and 95% specificity. Conclusions. Long-term monitoring of lung sounds is not feasible unless several barriers can be overcome. Several choices in signal representation and neural network design greatly improved the classification rates of breath sounds. The analysis of sounds transmitted from the trachea to the lung is suggested as an area for future study.
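The feature representation described (power spectral density summed into bands from 0 to 800 Hz) can be sketched as below; the sampling rate, bin count, and test tone are assumptions for illustration, not the paper's parameters.

```python
import numpy as np

# Sketch (assumptions throughout): represent a breath segment by its power
# spectral density summed into fixed frequency bands from 0 to 800 Hz,
# yielding a feature vector suitable as neural-network input.
def psd_features(signal, fs=8000, n_bins=16, f_max=800.0):
    spectrum = np.abs(np.fft.rfft(signal)) ** 2 / len(signal)  # periodogram
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    edges = np.linspace(0.0, f_max, n_bins + 1)
    # Sum spectral power within each of the n_bins bands below f_max.
    return np.array([
        spectrum[(freqs >= lo) & (freqs < hi)].sum()
        for lo, hi in zip(edges[:-1], edges[1:])
    ])

t = np.arange(0, 1.0, 1.0 / 8000)
tone = np.sin(2 * np.pi * 200 * t)   # 200 Hz test tone
features = psd_features(tone)
print(int(np.argmax(features)))      # 4 -- band index 4 covers 200-250 Hz
```

In the study, vectors like these (averaged across breaths) fed a back-propagation network whose input size, hidden units, and layer count were varied.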


Journal of the American Medical Informatics Association | 2004

Pragmatics of Implementing Guidelines on the Front Lines

Lemuel R. Waitman; Randolph A. Miller

We commend Shiffman and colleagues (“Bridging the Guideline Implementation Gap: A Systematic, Document-Centered Approach to Guideline Implementation”1) for highlighting the challenges of integrating guidelines into clinical practice and proposing pragmatic mechanisms for addressing them. We note, however, that the approach advocated by Shiffman et al., as well as by numerous other groups recently,2–8 is fundamentally a document-centric model. This approach may lead others to assume that representing a guideline correctly as a “computer-readable” document is the majority of the work required for implementation success. Although the “understanding” and representation of the clinical content of a guideline are a sine qua non for its local implementation, the document-centric approach leaves a substantial gap between the idealized document model and any specific guideline implementation in a local clinical system. This considerable gap is not unlike the “curly braces” problem documented for the Arden Syntax a decade ago.3–5 We estimate that 90% of the effort required for successful guideline implementation is (and must be) local, and the remaining 10% of the effort involves “getting the document right.” We believe that an alternative approach to local guideline implementation is to focus on the guideline's recommended actions; on the capabilities of the local care provider order entry (CPOE) or electronic health record (EHR) system that will serve as the “effector mechanism” for the guideline; on locally available computational and clinical resources; and on the guideline's required “clinical infrastructure.” We believe that guidelines should be implemented locally and directly (with a systematic approach, as described below) via local clinical systems (as opposed to a quasi-automatic implementation using a computer-readable, nationally disseminated document).
The goal of both the “document-centric” and the “locally customized and guided” approaches is the same: implementation of locally effective guidelines that appropriately influence clinical decision making, …


American Journal of Health-system Pharmacy | 2011

Effects of clinical decision support on initial dosing and monitoring of tobramycin and amikacin.

Zachary L. Cox; Cori L. Nelsen; Lemuel R. Waitman; Jacob A. McCoy; Josh F. Peterson

PURPOSE The impact of clinical decision support (CDS) on initial doses and intervals and pharmacokinetic outcomes of amikacin and tobramycin therapy was evaluated. METHODS A complex CDS advisor to provide guidance on initial dosing and monitoring of aminoglycoside orders, using both traditional-dosing and extended-interval-dosing strategies, was integrated into a computerized prescriber-order-entry (CPOE) system and compared with a control group whose aminoglycoside orders were closely monitored by pharmacists. The primary outcome measured was an initial dose within 10% of a dose calculated to be adherent to published dose guidelines. Secondary outcomes included a guideline-adherent interval, trough and peak concentrations in goal range, and rate of nephrotoxicity. RESULTS Of 216 patients studied, 97 were prescribed amikacin and 119 were prescribed tobramycin. The number of orders with initial doses consistent with reference standards increased from 40% in the preadvisor group to 80% in the postadvisor group (p < 0.001). Selection of the correct initial interval based on renal function increased from 63% to 87% (p < 0.001). The changes in the initial dosing and interval resulted in an increase of trough concentrations at goal (59% in the preadvisor group versus 89% in the postadvisor group, p = 0.0004). There was no significant difference in peak concentrations in the goal range or rate of nephrotoxicity. CONCLUSION An advisor for aminoglycoside dosing and monitoring integrated into a CPOE system significantly improved selection of initial doses and intervals and resulted in an improvement in the rate of trough serum drug concentrations at goal compared with standard provider dosing.
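The primary outcome (an initial dose within 10% of a guideline-adherent dose) reduces to a simple tolerance check; the 7 mg/kg figure below is a commonly cited extended-interval tobramycin dose used only as an illustrative assumption, not the advisor's actual logic.

```python
# Sketch of the primary outcome check: was the prescribed initial dose within
# 10% of a weight-based guideline dose? The default of 7 mg/kg is an
# illustrative assumption (a commonly cited extended-interval tobramycin dose).
def guideline_adherent(prescribed_mg, weight_kg, mg_per_kg=7.0, tolerance=0.10):
    target = mg_per_kg * weight_kg
    return abs(prescribed_mg - target) <= tolerance * target

print(guideline_adherent(500, 70))  # True: target 490 mg, 500 is within 10%
print(guideline_adherent(400, 70))  # False: 400 is more than 10% below 490
```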

Collaboration


Dive into Lemuel R. Waitman's collaborations.

Top Co-Authors

Addison K. May
Vanderbilt University Medical Center

John A. Morris
Vanderbilt University Medical Center