
Publication


Featured research published by Blaine Y. Takesue.


Annals of Internal Medicine | 1997

A Framework for Capturing Clinical Data Sets from Computerized Sources

Clement J. McDonald; J. Marc Overhage; Paul R. Dexter; Blaine Y. Takesue; Diane M. Dwyer

The pressure to improve health care and provide better care at a lower cost has created new needs to access clinical data for outcome analysis [1], quality assessment, guideline development [2], utilization review, pharmacoepidemiology [3], public health, benefits management, and other purposes. These needs are usually identified as data sets (that is, predefined lists of clinical questions or observations). Data sets are not new to the health care industry. The UB92 hospital billing form and UB82, its progenitor, from the Health Care Financing Administration (HCFA) have been around for some time. Recently, however, the number and richness of clinical data sets have grown dramatically. New data sets have been established by the National Center for Vital Health Statistics [4] and the National Committee for Quality Assurance [5]. The HCFA piloted an 1800-element quality-assurance data set called the Uniform Clinical Data Set System from 1989 to 1993 [6] and is working on a simpler version called the Medicare Quality Indicator System. Other HCFA data sets include the Resident Assessment Instrument for long-term health care [7] and a draft Outcome and Assessment Information Set for providers of home health care [8]. The U.S. Centers for Disease Control and Prevention (CDC) has developed Data Elements for Emergency Department Systems (DEEDS) for reporting information on visits to emergency departments [9]; the National Immunization Program for reporting data on immunizations [10]; and, in collaboration with the Council of State and Territorial Epidemiologists (CSTE) and the Association of State and Territorial Public Health Laboratory Directors (ASTPHLD), a data set that reports laboratory findings on communicable diseases [11].

Other national data sets include the Trauma Registry of the American College of Surgeons [12], the Cardiovascular Data Standards for coronary arteriography [13], the Cooperative Project for coronary artery bypass graft surgery [14], and the Musculoskeletal Outcomes Data Evaluation and Management System for knee and hip replacements [15]. Cancer registries, hospitals, group practices, managed care providers, researchers, and pharmaceutical manufacturers are developing additional clinical data sets. We refer to the databases that carry data sets as analytic databases because they are usually designed for direct statistical analysis. As the need for formal data sets has burgeoned, so has the use of computers to process patient information in direct support of patient care. Operational systems in the laboratory, pharmacy, patient registration area, surgical suites, and electrocardiography carts (to name a few) now include most data on laboratory procedures, prescriptions, demographics and appointments, surgical logs, and electrocardiographic measurements. Unfortunately, the two developments are occurring in independent orbits with little interaction. With a few important exceptions, developers of national data sets do not consider operational systems as sources for the contents of their data sets. Developers can find the information they want by abstracting charts. However, chart abstraction is expensive and prone to error. In one study, chart reviewers could not find 10% of the laboratory test results that were in the charts [16], and commercial chart reviews cost between $10 and $15 per admission, depending on the amount of data retrieved (Kriss E. Personal communication. Boston, MA: MediQual). Chart reviews remain the only option for retrieving some kinds of information. However, when information exists in the databases of health care providers, manually extracting it from reports printed from one database and reentering it into another database is time-consuming and inefficient. In this article, we review the barriers to the direct flow of operational data into analytic databases and the technical developments that have minimized these barriers. We also suggest specific actions that can unify the two orbits as the health care industry enters the computer age.

The Difference between Operational and Analytic Databases

Examples of operational databases are found in hospital pharmacies, laboratories, radiology departments, critical care units, and order-processing units. The first barriers to the direct use of operational system data in analytic databases are the differences in structure and detail that obscure similarities in the content of their information. A laboratory system would typically dedicate an entire record to each observation (for example, a clinical measurement or laboratory test result). An ordering or pharmacy system would do the same for each item or prescription that is ordered. Table 1 shows the structure of an operational database for a clinical reporting system.

Table 1. Operational Database: One Record per Observation*

In contrast, analytic databases typically carry all variables of interest (for example, the most recent hemoglobin value, whether the patient is anemic, the number of units of blood transfused, and the lowest systolic blood pressure) in a single record that describes one patient, patient encounter, or patient procedure. Table 2 shows an analytic database analogue to the operational database of Table 1. In analytic databases, the variable is identified by the name of the field (for example, most recent cholesterol level) in which its value is stored, and all variables of interest are stored horizontally as separate fields in one record. The variables in an operational database are usually defined by a code or name stored in one field (with a name such as observation ID, as shown in the third column of Table 1), and their values are stored in another field (with a name such as value, as shown in the fourth column of Table 1). Different variables are stacked vertically in separate records.

Table 2. Revised Model of an Analytic Database: One Record per Patient Event*

Operational databases often contain repeated measurements (for example, all recent hemoglobin values for a patient), whereas analytic databases often contain a single measurement (for example, the lowest hemoglobin value during the first 24 hours of a hospital stay or the first Glasgow coma score during an emergency department visit). Operational databases usually carry many items of information about each value reported (for example, its units, date and time, and where the measurement was taken) as separate fields in the same record, whereas analytic databases usually contain only the variable's value. However, analytic databases may contain slightly more information. For example, an analytic database may have the value and date of the last measurement of diastolic blood pressure. Operational databases usually contain raw data (for example, the hemoglobin value), whereas analytic databases frequently carry conclusions or yes-or-no answers to questions (such as "Is the patient anemic?"). Finally, the identifying codes in operational databases tend to be more detailed than the corresponding codes in analytic databases. For example, an operational database in the pharmacy might identify a prescription by the National Drug Code (NDC), which identifies the brand name, dose, and bottle size. In comparison, the corresponding variable in an analytic database might identify drugs by a more generalized code that identifies only the generic drug (such as propranolol) or drug class (such as β-blockers).

In many cases, operational data can be converted into analytic variables. Three simple conversion rules are worth emphasizing. First, a continuous variable, such as the hemoglobin value or cholesterol level, can be converted into a binary diagnostic variable (such as specifying yes or no to the presence of anemia) by giving it a numeric threshold that defines the diagnosis (for example, a hemoglobin value < 12). Second, detailed codes can be converted into more generalized codes by using simple cross-links (for example, converting NDC codes into generic drug codes). Finally, repeated values of a variable can be converted into a single value. Conversion occurs by selecting the first, last, or worst of a series of repeated values or by combining all occurrences on the basis of some rule. Examples include taking the mean value (as might be done for blood pressure levels), the sum (as might be done for determining chemotherapy drug doses), or the count (as might be done for records of blood transfusions). It is easy to imagine more complicated conversion rules. For example, a variable that specifies yes or no for the presence of diabetes might be defined in terms of thresholds on fasting blood sugar and hemoglobin A1c or of the current use of insulin or oral agents.

Variations in the Codes and Structures of Operational Systems

Until recently, a second barrier to the use of operational databases has been the lack of standards for reporting data from operational systems. Each vendor structured and reported the contents of its products differently. In some cases, each implementation of a vendor's product also varied. In addition, each laboratory and medical records department tended to define its own unique and idiosyncratic codes for identifying observations and findings. This cacophony presented an enormous barrier to the use of operational databases by external agencies. Today, standard message structures and formats exist for exporting patient information from operational systems. Message standards specify a uniform structure for electronically reporting clinical data from source databases to other databases. These standards also specify the format for reporting dates, times, names, numeric values, and codes. For example, the standard for date formats is CCYYMMDD (century, year, month, day). Therefore, 12 April 1979 is recorded as 19790412 and not as 4-12-79, 12-apr-79, or any other option. The American National Standards Institute Health Level 7 (HL7) standard is the most relevant to this discussion.
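The framework above contrasts operational databases (one stacked record per observation) with analytic databases (one record per patient event) and gives three conversion rules between them. A minimal Python sketch of those rules, using invented field names, codes, and values purely for illustration, might look like:

```python
# Illustrative sketch only: field names, codes, and thresholds are invented.
# Operational database: observations stacked vertically, one record each.
operational = [
    {"patient_id": 17, "obs_id": "HGB", "value": 11.2, "when": "19960103"},
    {"patient_id": 17, "obs_id": "HGB", "value": 12.6, "when": "19960110"},
    {"patient_id": 17, "obs_id": "SBP", "value": 142, "when": "19960103"},
    {"patient_id": 17, "obs_id": "SBP", "value": 128, "when": "19960110"},
]

# Rule 2: detailed codes -> generalized codes via a simple cross-link
# (this NDC-to-generic mapping is fabricated for illustration).
NDC_TO_GENERIC = {"00071-0155-23": "propranolol"}

def to_analytic(records, patient_id):
    """Collapse stacked operational records into one analytic record."""
    rows = sorted((r for r in records if r["patient_id"] == patient_id),
                  key=lambda r: r["when"])  # CCYYMMDD strings sort correctly
    hgb = [r["value"] for r in rows if r["obs_id"] == "HGB"]
    sbp = [r["value"] for r in rows if r["obs_id"] == "SBP"]
    return {
        "patient_id": patient_id,
        # Rule 1: continuous value -> binary diagnosis via a threshold.
        "anemic": (min(hgb) < 12) if hgb else None,
        # Rule 3: repeated values -> one value, by selection (last)
        # or by a combining rule (mean).
        "last_hgb": hgb[-1] if hgb else None,
        "mean_sbp": sum(sbp) / len(sbp) if sbp else None,
    }

print(to_analytic(operational, 17))
# -> {'patient_id': 17, 'anemic': True, 'last_hgb': 12.6, 'mean_sbp': 135.0}
```

The horizontal analytic record that comes back is exactly the "one record per patient event" shape the abstract describes, derived from the vertical one-record-per-observation layout.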


International Journal of Medical Informatics | 1998

What is done, what is needed and what is realistic to expect from medical informatics standards

Clement J. McDonald; J. Marc Overhage; Paul R. Dexter; Blaine Y. Takesue; Jeffrey G. Suico

Medical informatics experts have made considerable progress in the development of standards for orders and clinical results (CEN, HL7, ASTM), EKG tracings (CEN), diagnostic images (DICOM), claims processing (X12 and EDIFACT), and vocabularies and codes (SNOMED, Read Codes, the MED, LOINC). Considerable work still remains to be carried out. Abstract models of health care information have to be created that cover the necessary domain yet are simple enough to assimilate, implement, and manage; this requires a high degree of abstraction. Enormous amounts of work to develop standardized vocabularies are still required to complement such a model and to define the subsets that apply in given contexts.
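Messaging standards of the kind this paper surveys pin down exact field formats; HL7-style messages, for instance, record dates as CCYYMMDD, so 12 April 1979 becomes 19790412. A minimal sketch of round-tripping that format (the function names are mine, not from any HL7 library):

```python
from datetime import date

def to_ccyymmdd(d: date) -> str:
    """Format a date as CCYYMMDD, e.g. 12 April 1979 -> '19790412'."""
    return d.strftime("%Y%m%d")

def from_ccyymmdd(s: str) -> date:
    """Parse a CCYYMMDD string back into a date."""
    return date(int(s[0:4]), int(s[4:6]), int(s[6:8]))

assert to_ccyymmdd(date(1979, 4, 12)) == "19790412"
assert from_ccyymmdd("19790412") == date(1979, 4, 12)
```

Because the fields are fixed-width and most-significant-first, CCYYMMDD strings also sort chronologically as plain text, which is one reason message standards favor it over formats like 4-12-79.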


Journal of General Internal Medicine | 1996

Using electronic medical records to predict mortality in primary care patients with heart disease - Prognostic power and pathophysiologic implications

William M. Tierney; Blaine Y. Takesue; Dennis L. Vargo; Xiao Hua Zhou

OBJECTIVE: To identify high-risk patients with heart disease by using data stored in an electronic medical record system to predict six-year mortality.
DESIGN: Retrospective cohort study.
SETTING: Academic primary care general internal medicine practice affiliated with an urban teaching hospital with a state-of-the-art electronic medical record system.
PATIENTS: Of 2,434 patients with evidence of ischemic heart disease or heart failure or both who visited an urban primary care practice in 1986, half were used to derive a proportional hazards model and half were used to validate it.
MEASUREMENTS: Mortality from any cause within six years of the inception date. Model discrimination was assessed with the C statistic, and goodness of fit was measured with a calibration curve and the Hosmer-Lemeshow statistic.
MAIN RESULTS: Of these patients, 82% had evidence of ischemic heart disease, 53% heart failure, and 35% both conditions. Mean survival among the 653 (27%) who died was 2.8 years; mean follow-up among survivors was 5.0 years. Those with both heart conditions had the highest mortality rate (45% at six years), followed by isolated heart failure (39%) and ischemic heart disease (18%). Of 300 potential predictive characteristics, 100 passed a univariate screen and were submitted to multivariable proportional hazards regression. Twelve variables contributed independent predictive information: age, weight, more than one previous hospitalization for heart failure, and nine conditions indicated on diagnostic tests and problem lists. No drug treatment variables were independent predictors. The model C statistic was 0.76 in the derivation sample and 0.74 in a randomly selected validation sample, and the model was well calibrated. Patients in the lowest and highest quartiles of risk differed more than five-fold in their average risk.
CONCLUSIONS: Routine clinical data stored in patients' electronic medical records can predict mortality among patients with heart disease. This could allow increasingly scarce health care resources to be focused on those at highest mortality risk.


Clinical Decision Support (Second Edition): The Road to Broad Adoption | 2014

Chapter 5 – Regenstrief Medical Informatics: Experiences with Clinical Decision Support Systems

Paul G. Biondich; Brian E. Dixon; Jon D. Duke; Burke W. Mamlin; Shaun J. Grannis; Blaine Y. Takesue; Steve Downs; William Tierney

The discipline of clinical informatics endeavors to improve the process and outcomes of health care by enabling efficient access to information. Care providers can then use this information, both in the form of medical knowledge and in the form of patient data collected during clinical practice, to make decisions and comply with appropriate standards of care. The Regenstrief Institute began work on clinical information systems in 1972, when Dr. Clement McDonald and colleagues conceptualized and began construction of a computerized patient management system for outpatient diabetes care, developed to meet three primary goals: first, to eliminate the problems inherent in paper records by making clinical data available to authorized users "just in time" as medical decisions are made; second, to aid in the recognition of diagnoses and the adoption of pertinent care practices by assisting clinicians during their record-keeping activities; and third, to aggregate and analyze clinical information for use in health care support systems, such as those for public health, health services research, and quality improvement. The first installation of the Regenstrief Medical Record System (RMRS) at Wishard Memorial Hospital occurred in 1974, and over the next few years the use of the system expanded beyond the diabetes clinic into a few of the hospital's many general medicine clinics. From early in its history, the Regenstrief system has included mechanisms for tailoring rules based on the data to generate reminders and alerts to care providers. This chapter provides a history of the development and growth of the RMRS into a region-wide source of clinical data, the Indiana Network for Patient Care (INPC), and a summary of the research on the decision support interventions made possible by this infrastructure. Additionally, lessons learned throughout more than 30 years of building and maintaining this system are detailed, alongside some reflections that may be useful for future system builders.


Medical Education Online | 2018

A pilot study: a teaching electronic medical record for educating and assessing residents in the care of patients

Joshua Smith; W. Graham Carlos; Cynthia S. Johnson; Blaine Y. Takesue; Debra K. Litzelman

Objective: We tested a novel, web-based teaching electronic medical record to teach and assess residents' ability to enter appropriate admission orders for patients admitted to the intensive care unit. The primary objective was to determine whether this tool could improve learners' ability to enter an evidence-based, comprehensive initial care plan for critically ill patients.
Methods: The authors created three modules using de-identified data from selected real patients admitted to the intensive care unit. All senior residents (113 in total) were invited to a dedicated two-hour educational session to complete the modules. Learner performance was graded against gold-standard admission order sets created by the study investigators on the basis of the latest evidence-based medicine and guidelines.
Results: The session was attended by 39 residents (34.5% of invitees). Users' scores improved by an average of at least 20% across the three modules (Module 3 minus Module 1 mean difference 22.5%, p = 0.001; Module 3 minus Module 2 mean difference 20.3%, p = 0.001). Diagnostic acumen improved in successive modules. Almost 90% of the residents reported that the technology was an effective form of teaching and said they would use it autonomously if more modules were provided.
Conclusions: In this pilot project using a novel educational tool, users' patient care performance scores improved with a high level of user satisfaction. These results identify a realistic and well-received way to supplement residents' training and assessment in core clinical care and patient management in the face of duty hour restrictions.


Journal of the American Medical Informatics Association | 1995

Computerizing Guidelines to Improve Care and Patient Outcomes: The Example of Heart Failure

William M. Tierney; J. Marc Overhage; Blaine Y. Takesue; Lisa E. Harris; Michael D. Murray; Dennis L. Vargo; Clement J. McDonald



International Journal of Medical Informatics | 2014

Regenstrief Institute's Medical Gopher: A next-generation homegrown electronic medical record system

Jon D. Duke; Justin Morea; Burke W. Mamlin; Douglas K. Martin; Linas Simonaitis; Blaine Y. Takesue; Brian E. Dixon; Paul R. Dexter



Yearbook of Medical Informatics | 1997

Health Informatics Standards: A View From Mid-America.

Clement J. McDonald; Paul R. Dexter; Blaine Y. Takesue; J. M. Overhage


AMIA | 2017

Creating a Cluster of Preclinical Lessons Using Pharmacogenomic-Focused Patient Case Presentations.

Blaine Y. Takesue; Maureen Harrington; Bradley Allen; Debra K. Litzelman; Paul R. Dexter


Archive | 2014

Regenstrief Medical Informatics

Paul G. Biondich; Brian E. Dixon; Jon D. Duke; Burke W. Mamlin; Shaun J. Grannis; Blaine Y. Takesue; Steve Downs; William Tierney

Collaboration


Dive into Blaine Y. Takesue's collaborations.

Top Co-Authors

Clement J. McDonald (National Institutes of Health)
Jon D. Duke (Georgia Tech Research Institute)
Brian E. Dixon (Indiana University Bloomington)
William M. Tierney (University of Oklahoma Health Sciences Center)