
Publication


Featured research published by Steven G. Johnson.


The New England Journal of Medicine | 1976

Mechanism of Increased Renal Clearance of Amylase/Creatinine in Acute Pancreatitis

Steven G. Johnson; Carol J. Ellis; Michael D. Levitt

We investigated three possible causes of the increased ratio of amylase/creatinine clearance observed in acute pancreatitis. The presence of a rapidly cleared isoamylase was excluded by studies of serum and urine, which demonstrated no anomalous isoamylases. In pancreatitis, the clearance ratios (±1 S.E.M.) of both pancreatic isoamylase (9.2 ± 0.6 per cent) and salivary isoamylase (8.6 ± 1.6 per cent) were significantly (P < 0.01) elevated over the respective control values (2.4 ± 0.2 and 1.8 ± 0.2 per cent). Increased glomerular permeability to amylase was excluded by the demonstration of normal renal clearance of dextrans. We tested tubular reabsorption of protein by measuring the renal clearance of β2-microglobulin, which is relatively freely filtered at the glomerulus and then avidly reabsorbed by the normal tubule. During acute pancreatitis the ratio of the renal clearance of β2-microglobulin to that of creatinine was 1.22 ± 0.52 per cent, an 80-fold increase over normal (0.015 ± 0.002 per cent), with a rapid return toward normal during convalescence. Presumably, this reversible renal tubular defect also reduces amylase reabsorption and accounts for the elevated renal clearance of amylase relative to creatinine observed in acute pancreatitis.
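
For readers unfamiliar with the measurement at the center of this study: the amylase/creatinine clearance ratio is conventionally computed from spot serum and urine concentrations, because the shared urine flow term cancels. A minimal sketch of that standard formula follows; the numeric values are illustrative, not the study's data.

```python
def clearance_ratio(urine_analyte, serum_analyte, serum_creatinine, urine_creatinine):
    """Renal clearance of an analyte relative to creatinine, as a percentage.

    Clearance = (urine conc. x urine flow) / serum conc.; the urine flow
    term appears in both clearances and cancels, so spot concentrations
    suffice. This is the standard amylase/creatinine clearance-ratio
    formula, not code from the paper itself.
    """
    return (urine_analyte / serum_analyte) * (serum_creatinine / urine_creatinine) * 100.0

# Illustrative values in the normal (~1-4 per cent) range:
# urine amylase 300 U/L, serum amylase 100 U/L,
# serum creatinine 1 mg/dL, urine creatinine 100 mg/dL.
print(clearance_ratio(300.0, 100.0, 1.0, 100.0))  # -> 3.0
```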


Digestive Diseases and Sciences | 1978

Relation between serum pancreatic isoamylase concentration and pancreatic exocrine function

Steven G. Johnson; Michael D. Levitt

Serum pancreatic isoamylase concentrations were compared with secretory and clinical evidence of pancreatic insufficiency in 19 consecutive alcoholic patients undergoing pancreatic stimulation testing for suspected pancreatic insufficiency. In patients with normal total serum amylase levels, there was a good correlation (r = 0.83) between serum pancreatic isoamylase activity and stimulated pancreatic secretion of amylase, and the 8 patients with a low pancreatic isoamylase concentration had markedly diminished pancreatic secretion of amylase, lipase, and bicarbonate. However, patients with elevated total serum amylase activity frequently had extremely poor pancreatic exocrine function despite normal or elevated levels of serum pancreatic isoamylase. Thus, the finding of a subnormal serum concentration of pancreatic isoamylase provides strong evidence for pancreatic exocrine insufficiency; however, a normal or elevated serum pancreatic isoamylase activity cannot be used as evidence of normal pancreatic exocrine function.
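
The r = 0.83 figure is a Pearson correlation between paired serum and secretory measurements. A self-contained sketch of that statistic follows; the paired values are hypothetical and chosen only to show the computation, not to reproduce the study's result.

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient for paired measurements."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical pairs: serum pancreatic isoamylase (U/L) vs. stimulated
# amylase output. Not the study's data.
isoamylase = [12, 25, 31, 44, 58, 63, 70]
secretion = [900, 2600, 2100, 3900, 5600, 4800, 6100]
print(round(pearson_r(isoamylase, secretion), 2))
```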


eGEMs (Generating Evidence & Methods to improve patient outcomes) | 2016

A Harmonized Data Quality Assessment Terminology and Framework for the Secondary Use of Electronic Health Record Data.

Michael Kahn; Tiffany J. Callahan; Juliana Barnard; Alan Bauck; Jeff Brown; Bruce N. Davidson; Hossein Estiri; Carsten Goerg; Erin Holve; Steven G. Johnson; Siaw-Teng Liaw; Marianne Hamilton-Lopez; Daniella Meeker; Toan C. Ong; Patrick B. Ryan; Ning Shang; Nicole Gray Weiskopf; Chunhua Weng; Meredith Nahm Zozus; Lisa M. Schilling

Objective: Harmonized data quality (DQ) assessment terms, methods, and reporting practices can establish a common understanding of the strengths and limitations of electronic health record (EHR) data for operational analytics, quality improvement, and research. Existing published DQ terms were harmonized into a comprehensive unified terminology with definitions and examples and organized into a conceptual framework to support a common approach to defining whether EHR data are ‘fit’ for specific uses.

Materials and Methods: DQ publications, informatics and analytics experts, managers of established DQ programs, and operational manuals from several mature EHR-based research networks were reviewed to identify potential DQ terms and categories. Two face-to-face stakeholder meetings were used to vet an initial set of DQ terms and definitions that were grouped into an overall conceptual framework. Feedback received from data producers and users was used to construct a draft set of harmonized DQ terms and categories. Multiple rounds of iterative refinement resulted in a set of terms and an organizing framework consisting of DQ categories, subcategories, terms, definitions, and examples. The inclusiveness of the harmonized terminology and logical framework was evaluated against ten published DQ terminologies.

Results: Existing DQ terms were harmonized and organized into a framework by defining three DQ categories, (1) Conformance, (2) Completeness, and (3) Plausibility, and two DQ assessment contexts, (1) Verification and (2) Validation. The Conformance and Plausibility categories were further divided into subcategories. Each category and subcategory was defined with respect to whether the data may be verified with organizational data or validated against an accepted gold standard, depending on the proposed context and uses. The coverage of the harmonized DQ terminology was validated by successfully aligning it to multiple published DQ terminologies.

Discussion: Existing DQ concepts, community input, and expert review informed the development of a distinct set of terms, organized into categories and subcategories. The resulting DQ terms successfully encompassed a wide range of disparate DQ terminologies. Operational definitions were developed to provide guidance for implementing DQ assessment procedures. The resulting structure is an inclusive DQ framework for standardizing DQ assessment and reporting. While our analysis focused on the DQ issues often found in EHR data, the new terminology may be applicable to a wide range of electronic health data such as administrative, research, and patient-reported data.

Conclusion: A consistent, common DQ terminology, organized into a logical framework, is an initial step in enabling data owners and users, patients, and policy makers to evaluate and communicate data quality findings in a well-defined manner with a shared vocabulary. Future work will leverage the framework and terminology to develop reusable data quality assessment and reporting methods.
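
The framework's top-level structure (three categories crossed with two assessment contexts) lends itself to a small data model. A minimal Python sketch follows: the subcategory names are those used in the published framework, while the classify_check helper and the example check are hypothetical illustrations, not part of the paper.

```python
from enum import Enum

class DQCategory(Enum):
    CONFORMANCE = "conformance"    # do data match syntactic/structural constraints?
    COMPLETENESS = "completeness"  # are expected values present?
    PLAUSIBILITY = "plausibility"  # are values believable?

class DQContext(Enum):
    VERIFICATION = "verification"  # checked against local/organizational expectations
    VALIDATION = "validation"      # checked against an accepted gold standard

# Subcategories as named in the published framework (Conformance and
# Plausibility are subdivided; Completeness is not):
SUBCATEGORIES = {
    DQCategory.CONFORMANCE: ["value", "relational", "computational"],
    DQCategory.COMPLETENESS: [],
    DQCategory.PLAUSIBILITY: ["uniqueness", "atemporal", "temporal"],
}

def classify_check(category: DQCategory, context: DQContext, description: str) -> dict:
    """Tag a concrete DQ check with its place in the framework (illustrative helper)."""
    return {"category": category.value, "context": context.value,
            "description": description}

# Example: checking that a lab value falls in a physiologic range is a
# plausibility check performed in the verification context.
print(classify_check(DQCategory.PLAUSIBILITY, DQContext.VERIFICATION,
                     "serum potassium between 1 and 10 mEq/L"))
```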


CIN: Computers, Informatics, Nursing | 2017

Modeling flowsheet data to support secondary use

Bonnie L. Westra; Beverly Christie; Steven G. Johnson; Lisiane Pruinelli; Anne LaFlamme; Suzan Sherman; Jung In Park; Connie Delaney; Grace Gao; Stuart M. Speedie

The purpose of this study was to create information models from flowsheet data using a data-driven, consensus-based method. Electronic health records contain a large volume of data about patient assessments and interventions captured in flowsheets. Many flowsheet measures capture the same “thing,” but the names of these observations often differ depending on who performs the documentation or where the service is delivered (e.g., pulse rate in an intensive care unit, the emergency department, or a surgical unit, documented by a nurse or therapist or captured by automated monitoring). Flowsheet data are challenging for secondary use because multiple semantically equivalent measures represent the same concepts. Ten information models were created in this study: five related to quality measures (falls, pressure ulcers, venous thromboembolism, the genitourinary system including catheter-associated urinary tract infection, and pain management) and five related to high-volume physiological systems (cardiac, gastrointestinal, musculoskeletal, respiratory, and expanded vital signs/anthropometrics). The value of the information models is that semantically comparable flowsheet measures can be extracted and mapped from a clinical data repository regardless of the time frame, discipline, or setting in which documentation occurred. The 10 information models simplify the representation of the content in flowsheet data, reducing 1552 source measures to 557 concepts. The amount of representational reduction ranges from 3% for falls to 78% for the respiratory system. The information models provide a foundation for including nursing and interprofessional assessments and interventions in common data models, to support research within and across health systems.
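
To make the idea concrete, here is a minimal sketch of what such an information model enables: many differently named source measures collapse onto one semantically equivalent concept, so extraction can query the concept rather than every local name. All measure names and rows below are hypothetical, not taken from the study's models.

```python
# Hypothetical mapping from local flowsheet measure names to shared concepts.
FLOWSHEET_TO_CONCEPT = {
    "ICU Pulse Rate":      "heart_rate",
    "ED HR (monitor)":     "heart_rate",
    "Surgical Unit Pulse": "heart_rate",
    "Resp Rate (vent)":    "respiratory_rate",
    "RR - nursing":        "respiratory_rate",
}

def extract_concept(rows, concept):
    """Pull all observations for a concept regardless of source measure name."""
    return [r for r in rows if FLOWSHEET_TO_CONCEPT.get(r["measure"]) == concept]

rows = [
    {"measure": "ICU Pulse Rate", "value": 88},
    {"measure": "ED HR (monitor)", "value": 112},
    {"measure": "RR - nursing", "value": 18},
]
print(extract_concept(rows, "heart_rate"))  # both heart-rate rows, one concept
```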


Applied Clinical Informatics | 2017

Quantifying the Effect of Data Quality on the Validity of an eMeasure

Steven G. Johnson; Stuart M. Speedie; Gyorgy Simon; Vipin Kumar; Bonnie L. Westra

Objective: The objective of this study was to demonstrate the utility of a healthcare data quality framework by using it to measure the impact of synthetic data quality issues on the validity of an eMeasure (CMS178, urinary catheter removal after surgery).

Methods: Data quality issues were artificially created by systematically degrading the underlying quality of EHR data using two methods: independent and correlated degradation. A linear model that describes the change in the events included in the eMeasure quantifies the impact of each data quality issue.

Results: Catheter duration had the most impact on the CMS178 eMeasure, with every 1% reduction in data quality causing a 1.21% increase in the number of missing events. For birth date and admission type, every 1% reduction in data quality resulted in a 1% increase in missing events.

Conclusion: This research demonstrated that the impact of data quality issues can be quantified using a generalized process and that the CMS178 eMeasure, as currently defined, may not measure how well an organization is meeting the intended best-practice goal. Secondary use of EHR data is warranted only if the data are of sufficient quality. The assessment approach described in this study demonstrates how the impact of data quality issues on an eMeasure can be quantified, and the approach can be generalized to other data analysis tasks. Healthcare organizations can prioritize data quality improvement efforts to focus on the areas that will have the most impact on validity and assess whether the values that are reported should be trusted.
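
The degradation experiment can be sketched in a few lines. Below, a hypothetical, simplified stand-in for the eMeasure logic is evaluated on synthetic records while one field is independently degraded; the roughly linear loss of events it prints is the relationship the paper's linear model captures. The record layout and the 48-hour check are illustrative assumptions, not the CMS178 specification.

```python
import random

rng = random.Random(42)

# Synthetic encounters. Simplified stand-in for the eMeasure: an encounter
# counts as a "met" event if catheter duration is documented and <= 48 h.
records = [{"catheter_hours": rng.uniform(10.0, 60.0)} for _ in range(1000)]

def degrade_independent(recs, field, fraction):
    """Independent degradation: null out `field` in a random share of records
    (one of the two degradation methods named in the abstract)."""
    return [dict(r, **{field: None}) if rng.random() < fraction else r
            for r in recs]

def met_events(recs):
    return sum(1 for r in recs
               if r["catheter_hours"] is not None and r["catheter_hours"] <= 48.0)

baseline = met_events(records)
for frac in (0.05, 0.10, 0.20):
    degraded = degrade_independent(records, "catheter_hours", frac)
    lost = baseline - met_events(degraded)
    print(f"{frac:.0%} degraded -> {lost / baseline:.1%} of events lost")
```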


Applied Clinical Informatics | 2018

Validation and Refinement of a Pain Information Model from EHR Flowsheet Data

Bonnie L. Westra; Steven G. Johnson; Samira Ali; Karen Bavuso; Christopher Cruz; Sarah A. Collins; Meg Furukawa; Mary L Hook; Anne LaFlamme; Kay Lytle; Lisiane Pruinelli; Tari Rajchel; Theresa Tess Settergren; Kathryn F. Westman; Luann Whittenburg

BACKGROUND: Secondary use of electronic health record (EHR) data can reduce the costs of research and quality reporting. However, EHR data must be consistent within and across organizations. Flowsheet data provide a rich source of interprofessional data and represent a high volume of documentation; however, the content is not standardized. Health care organizations design and implement customized content for different care areas, creating duplicative, noncomparable data. In a prior study, 10 information models (IMs) were derived from an EHR that included 2.4 million patients. There was a need to evaluate the generalizability of the models across organizations. The pain IM was selected for evaluation and refinement because pain is a commonly occurring problem associated with high costs for pain management.

OBJECTIVE: The purpose of our study was to validate and further refine a pain IM from EHR flowsheet data that standardizes pain concepts, definitions, and associated value sets for assessments, goals, interventions, and outcomes.

METHODS: A retrospective observational study was conducted using an iterative, consensus-based approach to map, analyze, and evaluate data from 10 organizations.

RESULTS: The aggregated metadata from the EHRs of 8 large health care organizations and the design builds in 2 additional organizations represented flowsheet data from 6.6 million patients, 27 million encounters, and 683 million observations. The final pain IM has 30 concepts, 4 panels (classes), and 396 value set items. The results build on Logical Observation Identifiers Names and Codes (LOINC) pain assessment terms and identify the need for additional terms to support interoperability.

CONCLUSION: The resulting pain IM is a consensus model based on actual EHR documentation in the participating health systems. The IM captures the most important concepts related to pain.
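
As a concrete illustration of the model's shape: panels (classes) group related concepts, and each concept carries a value set that constrains what may be documented. Everything in this sketch is a hypothetical placeholder; it does not reproduce the model's actual 30 concepts, 4 panels, or 396 value set items, and the real model binds concepts to LOINC terms rather than ad hoc names.

```python
# Hypothetical panels, concepts, and value sets; placeholders only.
PAIN_IM = {
    "pain_assessment": {                                  # panel (class)
        "pain_intensity_0_10": set(map(str, range(11))),  # numeric rating scale
        "pain_location": {"head", "chest", "abdomen", "back", "limb"},
    },
    "pain_intervention": {                                # panel (class)
        "nonpharmacologic_intervention": {"repositioning", "ice", "heat"},
    },
}

def validate(panel: str, concept: str, value: str) -> bool:
    """Check a documented value against the concept's value set."""
    return value in PAIN_IM[panel][concept]

print(validate("pain_assessment", "pain_intensity_0_10", "7"))   # True
print(validate("pain_assessment", "pain_intensity_0_10", "12"))  # False
```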


Archive | 2017

Inclusion of Flowsheets from Electronic Health Records to Extend Data for Clinical and Translational Science Awards (CTSA) Research

Bonnie L. Westra; Beverly Christie; Grace Gao; Steven G. Johnson; Lisiane Pruinelli; Anne LaFlamme; Jung In Park; Suzan G. Sherman; Piper Ranallo; Stuart M. Speedie; Connie Delaney

Clinical data repositories (CDRs) increasingly are used for big data science; flowsheet data can extend current CDRs with rich, highly granular data documented by nursing and other healthcare professionals. Standardization of the data, however, is required for it to be useful for big data science. In this chapter, an example of one CDR funded by NIH's CTSA program demonstrates how flowsheet data can extend data repositories for big data science. A specific example involving pressure ulcers demonstrates the strengths of flowsheet data as well as the challenges of using these data. Standardizing these highly granular data documented by nurses enables a more precise understanding of patient characteristics, conditions, and states, and supports tailoring of the interventions provided by the health team. Additional efforts by national workgroups to create information models from flowsheets and standardize assessment terms are described to support big data science.


Computerized Radiology | 1985

Evaluation of right subphrenic fluid collections with emission computed tomography: An experimental study

Steven G. Johnson; Mathis P. Frick; Timothy J. Johnson; Merle K. Loken

An animal model to study subphrenic fluid collections is presented. Performance of single photon emission computed tomography (SPECT) for detection of subphrenic fluid collections was compared to that of liver-lung scintigraphy and transmission computed tomography (TCT). TCT detected a water volume of 25 ml in the right subphrenic space. This volume was barely seen using SPECT and not detectable on scintigraphy. Differences in performance of the three imaging modalities are discussed.


Applied Clinical Informatics | 2016

Application of an Ontology for Characterizing Data Quality for a Secondary Use of EHR Data

Steven G. Johnson; Stuart M. Speedie; György J. Simon; Vipin Kumar; Bonnie L. Westra


AMIA Joint Summits on Translational Science Proceedings | 2015

Modeling Flowsheet Data for Clinical Research.

Steven G. Johnson; Byrne; Beverly Christie; Connie Delaney; Anne LaFlamme; Jung In Park; Lisiane Pruinelli; Suzan Sherman; Stuart M. Speedie; Bonnie L. Westra

Collaboration


Dive into Steven G. Johnson's collaboration.

Top Co-Authors

Jung In Park, University of Minnesota

Grace Gao, University of Minnesota

Vipin Kumar, University of Arkansas for Medical Sciences