Publication


Featured research published by Paul D. Clayton.


Journal of Medical Systems | 1983

The HELP system.

T. A. Pryor; Reed M. Gardner; Paul D. Clayton; Homer R. Warner

Development of a comprehensive computer system for acquiring medical data and implementing medical decision logic has been ongoing for over 15 years at the University of Utah and the LDS Hospital in Salt Lake City, Utah. This system is known as HELP and is currently operational at LDS Hospital, which is a 550-bed tertiary care hospital serving the needs of the intermountain west. This hospital also serves as one of the primary teaching centers for the University of Utah Medical School. Having been developed in this environment, the design of the HELP system was required to meet the administrative, clinical, teaching, and research needs of hospitals, as well as provide the decision-making capability.


Annals of Internal Medicine | 1995

Unlocking Clinical Data from Narrative Reports: A Study of Natural Language Processing

George Hripcsak; Carol Friedman; Philip O. Alderson; William DuMouchel; Stephen B. Johnson; Paul D. Clayton

The use of automated systems and electronic databases to enhance the quality, reduce the cost, and improve the management of health care has become common. Recent examples include using these systems to prevent adverse drug events [1, 2] and to encourage efficient treatment [3]. To function properly, automated systems require accurate, complete data. Although laboratory results are routinely available in electronic form, the most important clinical information (symptoms, signs, and assessments) remains largely inaccessible to automated systems. Investigators have attempted to use data from nonclinical sources to fill in the gaps, but such data have been found to be unreliable [4]. Much clinical data is locked up in departmental word-processor files, clinical databases, and research databases in the form of narrative reports such as discharge summaries, radiology reports, pathology reports, admission histories, and reports of physical examinations. Untold volumes of data are deleted every day after word-processor files are printed for the paper chart and for the mailing of reports. Exploiting this information is not trivial, however. Sentences that are easy for a person to understand are difficult for a computer to sort out. Problems include the many ways in which the same concept can be expressed (for example, "heart failure," "congestive heart failure," "CHF," and so forth); ambiguities in interpreting grammatical constructs ("possible worsening infiltrate" may refer to a definite infiltrate that may be worsening or to an uncertain infiltrate that, if present, is worsening); and negation ("lung fields are unremarkable" implies a lack of infiltrate). To be accurate, automated systems require coded data: The concepts must come from a well-defined, finite vocabulary, and the relations among the concepts must be expressed in an unambiguous, formal structure. How do we unlock the contents of narrative reports? Human coders can be trained to read and manually structure reports [5].
Few institutions have been willing to invest in the personnel necessary for manual coding (other than for billing purposes), and the human coders can introduce an additional delay in obtaining coded data. The producers of reports (for example, radiologists for radiology reports) can be trained to directly create coded reports. Unfortunately, because manual coding systems do not match the speed and simplicity of dictating narrative reports, this approach has not attained widespread use. It also does not address the large number of reports already available in institutions. Natural language processing offers an automated solution [6-11]. The processor converts narrative reports that are available in electronic form (either through word processors or electronic scanning) to coded descriptions that are appropriate for automated systems. The promise of efficient, accurate extraction of coded clinical data from narrative reports is certainly enticing. The question is whether natural language processors are up to the task: just how efficient and accurate are they, and how easy is it to use their coded output? Methods: We evaluated a general-purpose processor [12] that is intended to cover various clinical reports. To be used in a particular domain (for example, radiology) and subdomain (chest radiograph), the processor must have initial programming under the supervision of an appropriate expert (radiologist). This programming process involves enumerating the vocabulary of the domain (for example, "patchy infiltrate") and formulating the grammar rules that are specific to the domain. The natural language processor works as follows. The narrative report is fed into a preprocessor, which uses its vocabulary to recognize words and phrases in the report (for example, "lungs," "CHF"), map them to standard terms (lung, congestive heart failure), and classify them into semantic categories (bodylocation, finding).
The parser then matches sequences of semantic categories in the report to structures defined in the grammar. For example, if the original report read "infiltrate in lung," then the phrase might match this structure: finding, in, bodylocation. Far more complex semantic structures are also supported through the grammar. This structure is then mapped to the processor's result: a set of findings, each of which is associated with its own descriptive modifiers, such as certainty, status, location, quantity, degree, and change. For example, the following is an excerpt from a narrative report: "Probable mild pulmonary vascular congestion with new left pleural effusion, question mild congestive changes." From this report, the natural language processor generated the following three coded findings: pulmonary vascular congestion (certainty: high; degree: low); pleural effusion (region: left; status: new); congestive changes (certainty: moderate; degree: low). The processor attempts to encode all clinical information available in reports, including the clinical indication, description, and impression. These findings are stored in a clinical database, where they can be exploited for automated decision-support and clinical research. At Columbia-Presbyterian Medical Center, New York, New York, the processor has been trained to handle chest radiograph and mammogram reports. In normal operation, the radiologist dictates a report, which is then transcribed by a clerk with a word processor. The word-processor files are printed for the paper chart, stored in the clinical database in their narrative form for on-line review by clinicians, and transmitted to the natural language processor for coding. The coded data produced by the processor are exploited for automated decision-support by the use of a computer program called a clinical event monitor [13]. The event monitor generates alerts, reminders, and interpretations that are based on the Arden Syntax for Medical Logic Modules [14].
The event monitor follows all clinical events (for example, admissions and laboratory results) in the medical center that can be tracked by computer. Whenever a clinically important situation is detected, the event monitor sends a message to the health care provider. For example, the storage of a low serum potassium level prompts the monitor to check whether the patient is receiving digoxin; if so, the monitor warns the health care provider that the hypokalemia may potentiate cardiac arrhythmias. Our study was designed and conducted by an evaluation team that was separate from the development team responsible for the natural language processor. At the time of the evaluation, members of the evaluation team had no knowledge of the operation of the processor or of its strengths and weaknesses. They knew that the processor accepted chest radiograph reports and produced some coded result. Two hundred admission chest radiograph reports were randomly selected from among those of all adult patients discharged from the inpatient service of Columbia-Presbyterian Medical Center during a particular week. An admission chest radiograph was defined as the first chest radiograph obtained during the hospital stay, even if it was not obtained on the first day. Chest radiographs were chosen because they display a broad range of disease, vocabulary, and grammatical variation. To better assess true performance, no corrections were made to reports, despite misspellings and even the inclusion of other types of reports in the same electronic files as the chest radiograph reports. Study subjects (humans and automated methods) detected the presence or absence of six clinical conditions (Table 1). To ensure that the conditions were reasonable candidates for automated decision-support, they were selected from an independent published list of automated protocols that exploited chest radiographs [15]. 
An internist on the evaluation team selected the six conditions, thus ensuring that the conditions were common enough to be reasonably expected to appear several times in a set of 200 reports and that overlap would be minimized. Table 1. Conditions The 200 reports were processed by the natural language processor, and the resulting coded data were fed into the clinical event monitor. For each clinical condition, the monitor had a rule expressed as a Medical Logic Module [14] to detect the condition on the basis of the processor's coded output. The Medical Logic Modules concluded true (present) or false (absent). For example, the Medical Logic Module that detected pneumothorax was the simplest and used the following logic:

if finding is in (pneumothorax; hydropneumothorax)
and certainty-modifier is not in (no; rule out; cannot evaluate)
and status-modifier is not in (resolved)
then conclude true;
endif;

The Medical Logic Module looks for reports with appropriate findings but eliminates reports that are actually stating that the finding is absent, unknown, or resolved. The Medical Logic Modules were written by a member of the evaluation team who was given access to the six condition definitions (Table 1), a sample of the natural language processor's output based on an independent set of chest radiographs, and a complete list of all vocabulary terms that the processor could generate in its output. No changes were made to the natural language processor, its grammar, or its vocabulary for the entire duration of the study (including the design phase). Once written, Medical Logic Modules were also held constant. Human participants were recruited as follows. Six board-certified radiologists and six board-certified internists were selected as experts. All 12 physicians actively practice medicine in their respective fields at Columbia-Presbyterian Medical Center. Six professional lay persons without experience in the practice of medicine were selected as additional controls.
Each human participant analyzed 100 reports; the time required to analyze all 200 reports (about 4 hours) would have been a disincentive to participate in the study and might have led participants
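The pneumothorax rule described in this abstract can be sketched in ordinary code. A minimal illustration, assuming a simplified dictionary representation of the processor's coded findings; the real system expresses this rule in the Arden Syntax, and the processor's actual output format differs:

```python
# Sets taken from the pneumothorax Medical Logic Module quoted in the abstract.
TARGET_FINDINGS = {"pneumothorax", "hydropneumothorax"}
NEGATIVE_CERTAINTY = {"no", "rule out", "cannot evaluate"}
RESOLVED_STATUS = {"resolved"}

def pneumothorax_mlm(coded_findings):
    """Conclude True (condition present) if any coded finding names a
    pneumothorax that is not negated, unknown, or resolved."""
    for f in coded_findings:
        if (f.get("finding") in TARGET_FINDINGS
                and f.get("certainty") not in NEGATIVE_CERTAINTY
                and f.get("status") not in RESOLVED_STATUS):
            return True
    return False

# Hypothetical coded output for one report (layout is an assumption).
report = [
    {"finding": "pleural effusion", "region": "left", "status": "new"},
    {"finding": "pneumothorax", "certainty": "moderate"},
]
print(pneumothorax_mlm(report))  # True: pneumothorax neither negated nor resolved
```

The rule errs on the side of concluding "present": any finding whose modifiers do not explicitly rule it out triggers the alert, which matches the screening role the event monitor plays.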


Journal of the American Geriatrics Society | 2006

Use of health-related, quality-of-life metrics to predict mortality and hospitalizations in community-dwelling seniors

David A. Dorr; Spencer S. Jones; Laurie Burns; Steven M. Donnelly; Cherie P. Brunker; Adam B. Wilcox; Paul D. Clayton

OBJECTIVES: To investigate whether health‐related quality‐of‐life (HRQoL) scores in a primary care population can be used as a predictor of future hospital utilization and mortality.


Journal of the American Medical Informatics Association | 1995

Computer-generated Informational Messages Directed to Physicians: Effect on Length of Hospital Stay

Steven Shea; Robert V. Sideli; William DuMouchel; Gerald Pulver; Raymond R. Arons; Paul D. Clayton

Objective: With the advent of hospital payment by diagnosis-related group (DRG), length of stay (LOS) has become a major issue in hospital efforts to control costs. Because the Columbia-Presbyterian Medical Center (CPMC) has had above-average LOSs for many DRGs, the authors tested the hypothesis that a computer-generated informational message directed to physicians would shorten LOS. Design: Randomized clinical trial with the patient as the unit of randomization. Setting and Study Population: From June 1991 to April 1993, at CPMC in New York, 7,109 patient admissions were randomly assigned to an intervention (informational message) group and 6,990 to a control (no message) group. Intervention: A message giving the average LOS for the patient's admission or provisional DRG, as assigned by hospital utilization review, and the current LOS, in days, was included in the main menu for review of test results in the hospital's clinical information system, available at all nursing stations in the hospital. Main Outcome Measure: Hospital LOS. Results: The median LOS for study patients was 7 days. After adjustment for covariates including age, sex, payor, patient care unit, and time trends, the mean LOS in the intervention group was 3.2% shorter than that in the control group (p = 0.022). Conclusion: Computer-generated patient-specific LOS information directed to physicians was associated with a reduction in hospital LOS.


Computers and Biomedical Research | 1974

A technique for the detection of asynergistic motion in the left ventricle.

Lowell D. Harris; Paul D. Clayton; Hiram W. Marshall; Homer R. Warner

Abstract A method is described whereby 60/second, monoplane, video images of the opacified left ventricle are digitized and the location of the endocardial surface is determined by a computer-based algorithm. For each contour, representing the location of the endocardial surface in a single plane at a given point in time, a reference point is defined as the midpoint of a straight line connecting the center of the aortic valve to the apex. The radial distances from the reference point to the contour are determined at five-degree increments around the contour for each of the contours during systole. The correlation coefficients and linear regression slopes for each radius sequence versus the mean radius sequence are calculated and plotted. The correlation coefficient and linear regression slope means and ranges are determined for normal hearts and then compared with the values from a heart demonstrating an abnormal pattern. By a computer method, the location and characteristics of the motion abnormality are described.
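The radius-sequence analysis described above can be illustrated with synthetic data. A sketch, not the authors' implementation: radii are sampled at five-degree increments around each contour, and each direction's time sequence is correlated against the mean radius sequence. The contracting circle below is fabricated; for such uniform contraction every direction tracks the mean, so both statistics are near 1, and regional asynergy would appear as deviations:

```python
import numpy as np

def radius_at_angles(contour, ref, angles):
    """Radial distance from a reference point to a closed contour,
    sampled at the requested angles (radians) by interpolation."""
    d = contour - ref
    theta = np.mod(np.arctan2(d[:, 1], d[:, 0]), 2 * np.pi)
    r = np.hypot(d[:, 0], d[:, 1])
    order = np.argsort(theta)
    return np.interp(angles, theta[order], r[order], period=2 * np.pi)

angles = np.deg2rad(np.arange(0, 360, 5))      # five-degree increments
t = np.linspace(0, 2 * np.pi, 200, endpoint=False)

# Synthetic "systole": a circular contour shrinking uniformly over 10 fields.
frames = [np.column_stack((s * np.cos(t), s * np.sin(t)))
          for s in np.linspace(1.0, 0.6, 10)]
R = np.array([radius_at_angles(c, np.zeros(2), angles) for c in frames])

mean_seq = R.mean(axis=1)                      # mean radius sequence over systole
for j in (0, 18, 36):                          # directions at 0, 90, 180 degrees
    corr = np.corrcoef(R[:, j], mean_seq)[0, 1]   # correlation coefficient
    slope = np.polyfit(mean_seq, R[:, j], 1)[0]   # linear regression slope
```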


Computers and Biomedical Research | 1974

Left ventricular videometry.

Paul D. Clayton; Lowell D. Harris; Steven R. Rumel; Homer R. Warner

Abstract To quantitatively describe left-ventricular dynamics, a computer is used to process video angiocardiographic recordings. A special interface for transferring the video information to the computer and a border definition algorithm are used to automatically obtain the border coordinates of the ventricular chamber for each video field (1/60 sec) during systole. For each point in the digitized picture matrix, the probability that the point should be designated as the border is computed. This probability is the product of four separate border definition criteria. The computer-determined borders are in good visual agreement when superimposed upon the original video images.


The Annals of Thoracic Surgery | 1982

The Rehabilitation of Coronary Surgical Patients

Harold V. Liddle; Robert L. Jensen; Paul D. Clayton

Coronary revascularization has been reported to have failed to effectively rehabilitate working-age patients. This study of 565 patients demonstrates that motivation to return to work is strongly influenced by age and educational level. Patients under age 55 are more likely to return to work than are patients over that age, but preoperative job classification does not influence rehabilitation. Although preoperative disability was associated with a slightly lower return-to-work rate (90%) than was the case with patients working preoperatively (97%), preoperative retirement was a strongly negative influence on rehabilitation. In this study, 80% of the patients worked to or beyond retirement age, and duration of work was not influenced by preoperative disability. The salary produced by those patients who were rehabilitated by surgery was four and a half times greater than the total cost of care and disability payments for the entire patient population. The factors that seemed to be the most important in effective rehabilitation were the psychological preparation of patients and their families and the attitude toward rehabilitation expressed by physicians and employers.


Computers and Biomedical Research | 1980

Determination of left ventricular contours: A probabilistic algorithm derived from angiographic images

William A. Barrett; Paul D. Clayton; Homer R. Warner

Abstract A probabilistic algorithm for automated left ventricular contour detection is developed which uses information extracted from a variety of angiographic images. These images serve as a training set for the development as well as the evaluation of the algorithm. The algorithm consists of four separate edge detectors combined in a product, each of which is described by a unique probability function derived from the training images. These functions are optimally designed to detect the endocardial border in left ventricular angiograms. A flexible template or model of the left ventricle is constructed from key anatomical features found in the training images and provides global guidance to the edge detection process. The algorithm requires less than 10 sec per contour and a comparison of hand-traced and computed contours shows over 90% of computer-determined coordinates to lie within the interval of reproducibility for manually traced contours.
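The product-of-detectors design can be illustrated in a few lines. A toy sketch with two made-up detectors rather than the paper's four probability functions (which were derived from training angiograms and guided by a ventricle template):

```python
import numpy as np

rng = np.random.default_rng(0)
image = rng.random((64, 64))   # stand-in for an angiographic frame

def edge_strength_prob(img):
    """Hypothetical detector: normalized gradient magnitude as P(border)."""
    gy, gx = np.gradient(img)
    g = np.hypot(gx, gy)
    return g / g.max()

def brightness_prob(img):
    """Hypothetical detector: darker pixels (edge of the contrast-filled
    chamber) get higher probability."""
    return 1.0 - (img - img.min()) / (img.max() - img.min())

# Combine the detectors as a product, mirroring the algorithm's overall
# structure: a pixel scores high only if every criterion agrees.
combined = edge_strength_prob(image) * brightness_prob(image)
best = np.unravel_index(np.argmax(combined), combined.shape)
```

The product form means any single detector can veto a candidate pixel by assigning it a near-zero probability, which is what makes the combination more selective than any one criterion alone.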


Medical Decision Making | 1989

Revision of Diagnostic Logic Using a Clinical Database

Peter J. Haug; Paul D. Clayton; Pamela Shelton; Tracy Rich; Irena Tocino; Philip R. Frederick; Robert O. Crapo; William J. Morrison; Homer R. Warner

Statistical pattern-recognition techniques have been frequently applied to the problem of medical diagnosis. Sequential Bayesian approaches are appealing because of the possibility of generating the underlying sensitivities, specificities, and prevalence statistics from the estimates of medical experts. The accuracy of these estimates and the consequences of inaccuracies carry implications for the future development of this type of system. In an effort to explore these subjects, the authors used statistics derived from a clinical database to revise the diagnostic logic in a Bayesian system for generating a differential diagnostic list. Substantial changes in estimated a priori probabilities, sensitivities, and specificities were made to correct for significant under- and overestimations of these values by a group of medical experts. The system based on the derived values appears to perform better than the original system. It is concluded that the statistics used in a Bayesian diagnostic system should be derived from a database representative of the patient population for which the system is designed. Key words: diagnosis, computer-assisted; Bayes theorem; lung diseases. (Med Decis Making 1989;9:84-90)
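The core idea, deriving prevalence, sensitivity, and specificity from a database instead of expert estimates and plugging them into sequential Bayes updating, can be sketched as follows. The counts below are fabricated for illustration; the paper's actual disease categories and patient database are not reproduced here:

```python
import numpy as np

# Toy "clinical database": rows of (has_disease, finding_present).
db = np.array([(1, 1)] * 80 + [(1, 0)] * 20 + [(0, 1)] * 30 + [(0, 0)] * 870)

disease = db[:, 0] == 1
finding = db[:, 1] == 1

prior = disease.mean()                  # a priori probability from the database
sensitivity = finding[disease].mean()   # P(finding | disease)
specificity = (~finding)[~disease].mean()  # P(no finding | no disease)

def posterior(prior, sens, spec, finding_present):
    """One sequential Bayes step for a single binary finding."""
    if finding_present:
        num = sens * prior
        den = num + (1 - spec) * (1 - prior)
    else:
        num = (1 - sens) * prior
        den = num + spec * (1 - prior)
    return num / den

p = posterior(prior, sensitivity, specificity, True)
```

In a sequential system the posterior from one finding becomes the prior for the next; the abstract's point is that mis-estimated inputs to this chain compound, which is why database-derived values outperformed the experts' estimates.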


Conference of the American Medical Informatics Association | 1996

OzCare: A Workflow Automation System for Care Plans

Wenke Lee; Gail E. Kaiser; Paul D. Clayton; Eric H. Sherman

An automated environment for implementing and monitoring care plans and practice guidelines is very important to the reduction of hospital costs and optimization of medical care. The goal of our research effort is to design a general system architecture that facilitates the implementation of (potentially) numerous care plans. Our approach is unique in that we apply the principles and technologies of Oz, a multi-user collaborative workflow system that has been used as a software engineering environment framework, to hospital care planning. We utilize not only the workflow modeling and execution facilities of Oz, but also its open-system architecture to interface it with the World Wide Web, the Medical Logic Module server, and other components of the clinical information system. Our initial proof-of-concept system, OzCare, is constructed on top of the existing Oz system. Through several experiments in which we used this system to implement some Columbia-Presbyterian Medical Center care plans, we demonstrated that our system is capable of and flexible enough for care plan automation.

Collaboration


Dive into Paul D. Clayton's collaboration.

Top Co-Authors
James J. Cimino

National Institutes of Health


Peter J. Haug

Intermountain Healthcare
