Justine E. Pang
Brigham and Women's Hospital
Publications
Featured research published by Justine E. Pang.
Journal of the American Medical Informatics Association | 2009
Adam Wright; Dean F. Sittig; Joan S. Ash; Sapna Sharma; Justine E. Pang; Blackford Middleton
BACKGROUND The most effective decision support systems are integrated with clinical information systems, such as inpatient and outpatient electronic health records (EHRs) and computerized provider order entry (CPOE) systems. PURPOSE The goal of this project was to describe and quantify the results of a study of decision support capabilities in Certification Commission for Health Information Technology (CCHIT) certified electronic health record systems. METHODS The authors conducted a series of interviews with representatives of nine commercially available clinical information systems, evaluating their capabilities against 42 different clinical decision support features. RESULTS Six of the nine reviewed systems offered all the applicable event-driven, action-oriented, real-time clinical decision support triggers required for initiating clinical decision support interventions. Five of the nine systems could access all the patient-specific data items identified as necessary. Six of the nine systems supported all the intervention types identified as necessary to allow clinical information systems to tailor their interventions based on the severity of the clinical situation and the user's workflow. Only one system supported all the offered choices identified as key to allowing physicians to take action directly from within the alert. DISCUSSION The principal finding relates to system-by-system variability. The best system in our analysis was missing only a single feature (of 42 total), while the worst was missing eighteen. This dramatic variability in CDS capability among commercially available systems was unexpected and is a cause for concern. CONCLUSIONS These findings have implications for four distinct constituencies: purchasers of clinical information systems, developers of clinical decision support, vendors of clinical information systems, and certification bodies.
Journal of the American Medical Informatics Association | 2011
Adam Wright; Justine E. Pang; Joshua Feblowitz; Francine L. Maloney; Allison R. Wilcox; Harley Z. Ramelson; Louise I. Schneider; David W. Bates
BACKGROUND Accurate knowledge of a patient's medical problems is critical for clinical decision making, quality measurement, research, billing and clinical decision support. Common structured sources of problem information include the patient problem list and billing data; however, these sources are often inaccurate or incomplete. OBJECTIVE To develop and validate methods of automatically inferring patient problems from clinical and billing data, and to provide a knowledge base for inferring problems. STUDY DESIGN AND METHODS We identified 17 target conditions and designed and validated a set of rules for identifying patient problems based on medications, laboratory results, billing codes, and vital signs. A panel of physicians provided input on a preliminary set of rules. Based on this input, we tested candidate rules on a sample of 100,000 patient records to assess their performance compared with gold-standard manual chart review. The physician panel selected a final rule for each condition, which was validated on an independent sample of 100,000 records to assess its accuracy. RESULTS Seventeen rules were developed for inferring patient problems. Analysis using a validation set of 100,000 randomly selected patients showed high sensitivity (range: 62.8-100.0%) and positive predictive value (range: 79.8-99.6%) for most rules. Overall, the inference rules performed better than using either the problem list or billing data alone. CONCLUSION We developed and validated a set of rules for inferring patient problems. These rules have a variety of applications, including clinical decision support, care improvement, augmentation of the problem list, and identification of patients for research cohorts.
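The style of inference rule described above can be sketched as a simple predicate over structured record data. The sketch below is purely illustrative: the medication names, lab threshold, and billing codes are assumptions for the example, and the study's validated rules are not reproduced here.

```python
# Hypothetical sketch of a rule-based problem-inference check that combines
# medications, laboratory results, and billing codes, in the spirit of the
# paper's approach. Illustrative logic only; not the study's validated rules.
from dataclasses import dataclass, field

@dataclass
class PatientRecord:
    medications: set = field(default_factory=set)
    labs: dict = field(default_factory=dict)       # test name -> latest value
    billing_codes: set = field(default_factory=set)

def infer_diabetes(rec: PatientRecord) -> bool:
    """Infer a probable diabetes problem from any one supporting data source."""
    on_diabetes_med = bool(rec.medications & {"metformin", "insulin", "glipizide"})
    high_a1c = rec.labs.get("HbA1c", 0.0) >= 6.5   # assumed threshold
    coded = bool(rec.billing_codes & {"E10.9", "E11.9"})  # ICD-10 diabetes codes
    return on_diabetes_med or high_a1c or coded

rec = PatientRecord(medications={"metformin"}, labs={"HbA1c": 7.2})
print(infer_diabetes(rec))  # → True
```

In practice each candidate rule of this kind would be tuned against chart review, as the study does, trading sensitivity against positive predictive value.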
Journal of Biomedical Informatics | 2010
Jan Horsky; Kerry McColgan; Justine E. Pang; Andrea J. Melnikas; Jeffrey A. Linder; Jeffrey L. Schnipper; Blackford Middleton
Poor usability of clinical information systems delays their adoption by clinicians and limits potential improvements to the efficiency and safety of care. Recurring usability evaluations are therefore integral to the system design process. We compared four methods employed during the development of outpatient clinical documentation software: clinician email response, online survey, observations and interviews. Results suggest that no single method identifies all or most problems. Rather, each approach is optimal for evaluations at a different stage of design and characterizes different usability aspects. Email responses elicited from clinicians and surveys report mostly technical, biomedical, terminology and control problems and are most effective when a working prototype has been completed. Observations of clinical work and interviews inform conceptual and workflow-related problems and are best performed early in the cycle. Appropriate use of these methods consistently during development may significantly improve system usability and contribute to higher adoption rates among clinicians and to improved quality of care.
Journal of General Internal Medicine | 2012
Adam Wright; Eric G. Poon; Jonathan S. Wald; Joshua Feblowitz; Justine E. Pang; Jeffrey L. Schnipper; Richard W. Grant; Tejal K. Gandhi; Lynn A. Volk; Amy Bloom; Deborah H. Williams; Kate Gardner; Marianna Epstein; Lisa Nelson; Alex Businger; Qi Li; David W. Bates; Blackford Middleton
BACKGROUND Provider and patient reminders can be effective in increasing rates of preventive screenings and vaccinations. However, the effect of patient-directed electronic reminders is understudied. OBJECTIVE To determine whether providing reminders directly to patients via an electronic Personal Health Record (PHR) improved adherence to care recommendations. DESIGN We conducted a cluster randomized trial without blinding from 2005 to 2007 at 11 primary care practices in the Partners HealthCare system. PARTICIPANTS A total of 21,533 patients with access to a PHR were invited to the study, and 3,979 (18.5%) consented to enroll. INTERVENTIONS Patients in the intervention arm received health maintenance (HM) reminders via a secure PHR “eJournal,” which allowed them to review and update HM and family history information. Patients in the active control arm received access to an eJournal that allowed them to input and review information related to medications, allergies and diabetes management. MAIN MEASURES The primary outcome measure was adherence to guideline-based care recommendations. KEY RESULTS Intention-to-treat analysis showed that patients in the intervention arm were significantly more likely to receive mammography (48.6% vs 29.5%, p = 0.006) and influenza vaccinations (22.0% vs 14.0%, p = 0.018). No significant improvement was observed in rates of other screenings. Although Pap smear completion rates were higher in the intervention arm (41.0% vs 10.4%, p < 0.001), this finding was no longer significant after excluding women’s health clinics. Additional on-treatment analysis showed significant increases in mammography (p = 0.019) and influenza vaccination (p = 0.015) for intervention arm patients who opened an eJournal compared to control arm patients, but no differences for any measure among patients who did not open an eJournal. CONCLUSIONS Providing patients with HM reminders via a PHR may be effective in improving some elements of preventive care.
Journal of the American Medical Informatics Association | 2011
Adam Wright; Dean F. Sittig; Joan S. Ash; David W. Bates; Joshua Feblowitz; Greg Fraser; Saverio M. Maviglia; Carmit K. McMullen; W. Paul Nichol; Justine E. Pang; Jack Starmer; Blackford Middleton
OBJECTIVE Clinical decision support (CDS) is a powerful tool for improving healthcare quality and ensuring patient safety; however, effective implementation of CDS requires effective clinical and technical governance structures. The authors sought to determine the range and variety of these governance structures and identify a set of recommended practices through observational study. DESIGN Three site visits were conducted at institutions across the USA to learn about CDS capabilities and processes from clinical, technical, and organizational perspectives. Based on the results of these visits, written questionnaires were sent to the three institutions visited and two additional sites. Together, these five organizations encompass a variety of academic and community hospitals as well as small and large ambulatory practices. These organizations use both commercially available and internally developed clinical information systems. MEASUREMENTS Characteristics of clinical information systems and CDS systems used at each site as well as governance structures and content management approaches were identified through extensive field interviews and follow-up surveys. RESULTS Six recommended practices were identified in the area of governance, and four were identified in the area of content management. Key similarities and differences between the organizations studied were also highlighted. CONCLUSION Each of the five sites studied contributed to the recommended practices presented in this paper for CDS governance. Since these strategies appear to be useful at a diverse range of institutions, they should be considered by any future implementers of decision support.
Journal of the American Medical Informatics Association | 2012
Adam Wright; Justine E. Pang; Joshua Feblowitz; Francine L. Maloney; Allison R. Wilcox; Karen Sax McLoughlin; Harley Z. Ramelson; Louise I. Schneider; David W. Bates
Background Accurate clinical problem lists are critical for patient care, clinical decision support, population reporting, quality improvement, and research. However, problem lists are often incomplete or out of date. Objective To determine whether a clinical alerting system, which uses inference rules to notify providers of undocumented problems, improves problem list documentation. Study Design and Methods Inference rules for 17 conditions were constructed and an electronic health record-based intervention was evaluated to improve problem documentation. A cluster randomized trial was conducted of 11 participating clinics affiliated with a large academic medical center, totaling 28 primary care clinical areas, with 14 receiving the intervention and 14 as controls. The intervention was a clinical alert directed to the provider that suggested adding a problem to the electronic problem list based on inference rules. The primary outcome measure was acceptance of the alert. The number of study problems added in each arm as a pre-specified secondary outcome was also assessed. Data were collected during 6-month pre-intervention (11/2009–5/2010) and intervention (5/2010–11/2010) periods. Results 17 043 alerts were presented, of which 41.1% were accepted. In the intervention arm, providers documented significantly more study problems (adjusted OR=3.4, p<0.001), with an absolute difference of 6277 additional problems. In the intervention group, 70.4% of all study problems were added via the problem list alerts. Significant increases in problem notation were observed for 13 of 17 conditions. Conclusion Problem inference alerts significantly increase notation of important patient problems in primary care, which in turn has the potential to facilitate quality improvement. Trial Registration ClinicalTrials.gov: NCT01105923.
International Journal of Medical Informatics | 2012
Adam Wright; Joshua Feblowitz; Justine E. Pang; James D. Carpenter; Michael Krall; Blackford Middleton; Dean F. Sittig
BACKGROUND Many computerized provider order entry (CPOE) systems include the ability to create electronic order sets: collections of clinically related orders grouped by purpose. Order sets promise to make CPOE systems more efficient, improve care quality and increase adherence to evidence-based guidelines. However, the development and implementation of order sets can be expensive and time-consuming, and limited literature exists about their utilization. METHODS Based on analysis of order set usage logs from a diverse purposive sample of seven sites with commercially and internally developed inpatient CPOE systems, we developed an original order set classification system. Order sets were categorized across seven non-mutually exclusive axes: admission/discharge/transfer (ADT), perioperative, condition-specific, task-specific, service-specific, convenience, and personal. In addition, 731 unique subtypes were identified within five axes: four in ADT, three in perioperative, 144 in condition-specific, 513 in task-specific, and 67 in service-specific. RESULTS Order sets (n=1914) were used a total of 676,142 times at the participating sites during a one-year period. ADT and perioperative order sets accounted for 27.6% and 24.2% of usage, respectively. Peripartum/labor, chest pain/acute coronary syndrome/myocardial infarction, and diabetes order sets accounted for 51.6% of condition-specific usage. Insulin, angiography/angioplasty, and arthroplasty order sets accounted for 19.4% of task-specific usage. Emergency/trauma, obstetrics/gynecology/labor and delivery, and anesthesia accounted for 32.4% of service-specific usage. Overall, the top 20% of order sets accounted for 90.1% of all usage. Additional salient patterns are identified and described. CONCLUSION We observed recurrent patterns in order set usage across multiple sites as well as meaningful variations between sites. Vendors and institutional developers should identify high-value order set types through concrete data analysis in order to optimize the resources devoted to development and implementation.
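The usage-concentration finding above (the top 20% of order sets accounting for 90.1% of usage) reflects a simple quantile calculation over usage counts. A minimal sketch, with toy counts standing in for real usage logs:

```python
# Minimal sketch of a usage-concentration analysis: what fraction of total
# usage the top 20% of order sets account for. The counts below are invented
# toy data, not the study's figures.
def top_quintile_share(usage_counts):
    """Return the share of total usage held by the most-used 20% of order sets."""
    counts = sorted(usage_counts, reverse=True)
    k = max(1, len(counts) // 5)   # size of the top 20% (at least one set)
    return sum(counts[:k]) / sum(counts)

# A few heavily used order sets and a long tail of rarely used ones.
counts = [500, 300, 100, 40, 20, 10, 10, 5, 5, 5, 3, 1, 1]
print(f"top 20% of order sets -> {top_quintile_share(counts):.1%} of usage")
# prints: top 20% of order sets -> 80.0% of usage
```

This kind of summary is one way a vendor or institution could identify the high-value order set types the conclusion recommends prioritizing.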
American Medical Informatics Association Annual Symposium | 2010
Adam Wright; Dean F. Sittig; James D. Carpenter; Michael Krall; Justine E. Pang; Blackford Middleton
Applied Clinical Informatics | 2013
J. Feblowitz; Stanislav Henkin; Justine E. Pang; Harley Z. Ramelson; Louise I. Schneider; Francine L. Maloney; Allison R. Wilcox; David W. Bates; Adam Wright
American Medical Informatics Association Annual Symposium | 2009
Patricia C. Dykes; Diane L. Carroll; Ann C. Hurley; Ronna Gersh-Zaremski; Ann Kennedy; Jan Kurowski; Kim Tierney; Angela Benoit; Frank Y. Chang; Stuart R. Lipsitz; Justine E. Pang; Ruslana Tsurkova; Lyubov Zuyov; Blackford Middleton