
Publication


Featured research published by Shobha Phansalkar.


Journal of the American Medical Informatics Association | 2010

A review of human factors principles for the design and implementation of medication safety alerts in clinical information systems

Shobha Phansalkar; Judy Edworthy; Elizabeth Hellier; Diane L. Seger; Angela Schedlbauer; Anthony J Avery; David W. Bates

The objective of this review is to describe the implementation of human factors principles for the design of alerts in clinical information systems. First, we conduct a review of alarm systems to identify human factors principles that are employed in the design and implementation of alerts. Second, we review the medical informatics literature to provide examples of the implementation of human factors principles in current clinical information systems using alerts to provide medication decision support. Last, we suggest actionable recommendations for delivering effective clinical decision support using alerts. A review of studies from the medical informatics literature suggests that many basic human factors principles are not followed, possibly contributing to the lack of acceptance of alerts in clinical information systems. We evaluate the limitations of current alerting philosophies and provide recommendations for improving acceptance of alerts by incorporating human factors principles in their design.


Journal of the American Medical Informatics Association | 2013

Drug-drug interactions that should be non-interruptive in order to reduce alert fatigue in electronic health records

Shobha Phansalkar; Heleen van der Sijs; Alisha D. Tucker; Amrita A. Desai; Douglas S. Bell; Jonathan M. Teich; Blackford Middleton; David W. Bates

OBJECTIVE Alert fatigue represents a common problem associated with the use of clinical decision support systems in electronic health records (EHRs). This problem is particularly profound with drug-drug interaction (DDI) alerts, for which studies have reported override rates of approximately 90%. The objective of this study is to report consensus-based recommendations of an expert panel on DDIs that can safely be made non-interruptive to the provider's workflow in EHRs, in an attempt to reduce alert fatigue. METHODS We utilized an expert panel process to rate the interactions. Panelists had expertise in medicine, pharmacy, pharmacology, and clinical informatics, and represented both academic institutions and vendors of medication knowledge bases and EHRs. In addition, representatives from the US Food and Drug Administration and the American Society of Health-System Pharmacists contributed to the discussions. RESULTS Recommendations and considerations of the panel resulted in the creation of a list of 33 class-based low-priority DDIs that do not warrant interruptive alerts in EHRs. In one institution, these accounted for 36% of the interactions displayed. DISCUSSION Development and customization of the content of medication knowledge bases that drive DDI alerting represents a resource-intensive task. Creation of a standardized list of low-priority DDIs may help reduce alert fatigue across EHRs. CONCLUSIONS Future efforts might include the development of a consortium to maintain this list over time. Such a list could also be used in conjunction with financial incentives tied to its adoption in EHRs.
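
As a rough illustration of how a class-based low-priority list might be applied, the sketch below computes what share of an institution's displayed DDI alerts would become non-interruptive. The class pairs and field names are hypothetical, not the panel's actual list.

```python
# Illustrative sketch only: class pairs and alert fields are hypothetical,
# not the expert panel's published list of 33 low-priority DDIs.
LOW_PRIORITY_CLASS_PAIRS = {
    frozenset({"class_x", "class_y"}),   # placeholder drug-class pair
    frozenset({"class_y", "class_z"}),
}

def share_made_noninterruptive(displayed_alerts: list[dict]) -> float:
    """Fraction of displayed alerts whose class pair is on the low-priority list."""
    if not displayed_alerts:
        return 0.0
    low = sum(
        frozenset({a["class_1"], a["class_2"]}) in LOW_PRIORITY_CLASS_PAIRS
        for a in displayed_alerts
    )
    return low / len(displayed_alerts)
```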


Journal of the American Medical Informatics Association | 2011

Factors influencing alert acceptance: a novel approach for predicting the success of clinical decision support.

Hanna M. Seidling; Shobha Phansalkar; Diane L. Seger; Marilyn D. Paterno; Shimon Shaykevich; Walter E. Haefeli; David W. Bates

BACKGROUND Clinical decision support systems can prevent knowledge-based prescription errors and improve patient outcomes. The clinical effectiveness of these systems, however, is substantially limited by poor user acceptance of presented warnings. To enhance alert acceptance it may be useful to quantify the impact of potential modulators of acceptance. METHODS We built a logistic regression model to predict alert acceptance of drug-drug interaction (DDI) alerts in three different settings. Ten variables from the clinical and human factors literature were evaluated as potential modulators of provider alert acceptance. ORs were calculated for the impact of knowledge quality, alert display, textual information, prioritization, setting, patient age, dose-dependent toxicity, alert frequency, alert level, and required acknowledgment on acceptance of the DDI alert. RESULTS 50,788 DDI alerts were analyzed. Providers accepted only 1.4% of non-interruptive alerts. For interruptive alerts, user acceptance positively correlated with frequency of the alert (OR 1.30, 95% CI 1.23 to 1.38), quality of display (4.75, 3.87 to 5.84), and alert level (1.74, 1.63 to 1.86). Alert acceptance was higher in inpatients (2.63, 2.32 to 2.97) and for drugs with dose-dependent toxicity (1.13, 1.07 to 1.21). The textual information influenced the mode of reaction and providers were more likely to modify the prescription if the message contained detailed advice on how to manage the DDI. CONCLUSION We evaluated potential modulators of alert acceptance by assessing content and human factors issues, and quantified the impact of a number of specific factors which influence alert acceptance. This information may help improve clinical decision support systems design.
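
As a hedged illustration of the kind of model described here (not the study's actual analysis), the sketch below fits a logistic regression of alert acceptance on a few of the listed modulators and converts the coefficients to odds ratios with 95% confidence intervals. The DataFrame and column names are assumptions.

```python
# Minimal sketch, assuming a pandas DataFrame `alerts` with a binary
# `accepted` column and numeric predictor columns; names are illustrative.
import numpy as np
import pandas as pd
import statsmodels.api as sm

def acceptance_odds_ratios(alerts: pd.DataFrame) -> pd.DataFrame:
    predictors = ["alert_frequency", "display_quality", "alert_level",
                  "inpatient", "dose_dependent_toxicity"]
    X = sm.add_constant(alerts[predictors])            # add intercept term
    fit = sm.Logit(alerts["accepted"], X).fit(disp=False)
    ci = fit.conf_int()                                # 95% CI on log-odds scale
    return pd.DataFrame({
        "OR": np.exp(fit.params),                      # odds ratios
        "CI_lower": np.exp(ci[0]),
        "CI_upper": np.exp(ci[1]),
    }).drop(index="const")
```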


Journal of the American Medical Informatics Association | 2012

High-priority drug–drug interactions for use in electronic health records

Shobha Phansalkar; Amrita A. Desai; Douglas S. Bell; Eileen Yoshida; John Doole; Melissa Czochanski; Blackford Middleton; David W. Bates

OBJECTIVE To develop a set of high-severity, clinically significant drug-drug interactions (DDIs) for use in electronic health records (EHRs). METHODS A panel of experts was convened with the goal of identifying critical DDIs that should be used for generating medication-related decision support alerts in all EHRs. Panelists included medication knowledge base vendors, EHR vendors, in-house knowledge base developers from academic medical centers, and both federal and private agencies involved in the regulation of medication use. Candidate DDIs were assessed by the panel based on the consequence of the interaction, severity levels assigned to them across various medication knowledge bases, availability of therapeutic alternatives, monitoring/management options, predisposing factors, and the probability of the interaction based on the strength of evidence available in the literature. RESULTS Of 31 DDIs considered to be high risk, the panel approved a final list of 15 interactions. Panelists agreed that this list represented drugs that are contraindicated for concurrent use, though it does not necessarily represent a complete list of all such interacting drug pairs. For other drug interactions, severity may depend on additional factors, such as patient conditions or timing of co-administration. DISCUSSION The panel provided recommendations on the creation, maintenance, and implementation of a central repository of high-severity interactions. CONCLUSIONS A set of highly clinically significant drug-drug interactions was identified, for which warnings should be generated in all EHRs. The panel highlighted the complexity of issues surrounding development and implementation of such a list.
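
To make the idea of a machine-readable high-severity list concrete, here is a small, purely hypothetical sketch of screening a new order against a patient's active drug classes. The pairs shown are placeholders, not the 15 interactions the panel approved.

```python
# Hypothetical sketch: drug-class names and pairs are placeholders, not the
# panel's approved list of contraindicated interactions.
from typing import Iterable

HIGH_PRIORITY_DDI = {
    frozenset({"class_a", "class_b"}),
    frozenset({"class_c", "class_d"}),
}

def interactions_triggered(new_drug_class: str,
                           active_classes: Iterable[str]) -> list[frozenset]:
    """Return each high-priority pair the new order would complete."""
    return [
        pair
        for cls in active_classes
        if (pair := frozenset({new_drug_class, cls})) in HIGH_PRIORITY_DDI
    ]
```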


Journal of the American Medical Informatics Association | 2015

Recommendations to Improve the Usability of Drug-Drug Interaction Clinical Decision Support Alerts

Thomas H. Payne; Lisa E. Hines; Raymond C. Chan; Seth Hartman; Joan Kapusnik-Uner; Alissa L. Russ; Bruce W. Chaffee; Christian Hartman; Victoria Tamis; Brian Galbreth; Peter Glassman; Shobha Phansalkar; Heleen van der Sijs; Sheila M. Gephart; Gordon Mann; Howard R. Strasberg; Amy J. Grizzle; Mary Brown; Gilad J. Kuperman; Chris Steiner; Amanda Kathleen Sullins; Hugh H. Ryan; Michael A. Wittie; Daniel C. Malone

OBJECTIVE To establish preferred strategies for presenting drug-drug interaction (DDI) clinical decision support alerts. MATERIALS AND METHODS A DDI Clinical Decision Support Conference Series included a workgroup consisting of 24 clinical, usability, and informatics experts representing academia, health information technology (IT) vendors, healthcare organizations, and the Office of the National Coordinator for Health IT. Workgroup members met 12 times via web-based meetings from January 2013 to February 2014, and twice in person, to reach consensus on recommendations to improve decision support for DDIs. We addressed three key questions: (1) what, how, where, and when do we display DDI decision support? (2) should presentation of DDI decision support vary by clinician? and (3) how should the effectiveness of DDI decision support be measured? RESULTS Our recommendations include the consistent use of terminology, visual cues, minimal text, formatting, content, and reporting standards to facilitate usability. All clinicians involved in the medication use process should be able to view DDI alerts and actions taken by other clinicians. Alert overrides are common but may not be a good measure of effectiveness. DISCUSSION Seven core elements should be included with DDI decision support. DDI information should be presented to all clinicians. Finally, in their current form, override rates have limited capability to evaluate alert effectiveness. CONCLUSION DDI clinical decision support alerts need major improvements. We provide recommendations for healthcare organizations and IT vendors to improve the clinician interface of DDI alerts, with the aim of reducing alert fatigue and improving patient safety.


International Journal of Medical Informatics | 2013

Design of decision support interventions for medication prescribing

Jan Horsky; Shobha Phansalkar; Amrita A. Desai; Douglas S. Bell; Blackford Middleton

OBJECTIVE Describe optimal design attributes of clinical decision support (CDS) interventions for medication prescribing, emphasizing perceptual, cognitive, and functional characteristics that improve human-computer interaction (HCI) and patient safety. METHODS Findings from published reports on successes, failures, and lessons learned during implementation of CDS systems were reviewed and interpreted with regard to HCI and software usability principles. We then formulated design recommendations for CDS alerts that would reduce unnecessary workflow interruptions and allow clinicians to make informed decisions quickly, accurately, and without extraneous cognitive and interactive effort. RESULTS Excessive alerting, which tends to distract clinicians rather than provide effective CDS, can be reduced by designing only high-severity alerts as interruptive dialog boxes and presenting less severe warnings without an explicit response requirement, by curating system knowledge bases to suppress warnings with low clinical utility, and by integrating contextual patient data into the decision logic. Recommended design principles include parsimonious and consistent use of color and language, a minimalist approach to the layout of information and controls, the use of font attributes to convey hierarchy and the visual prominence of important data over supporting information, the inclusion of relevant patient data in the context of the alert, and allowing clinicians to respond with one or two clicks. CONCLUSION Although HCI and usability principles are well established and robust, CDS and EHR system interfaces rarely conform to the best-known design conventions and are seldom conceived and designed well enough to be truly versatile and dependable tools. These relatively novel interventions still require careful monitoring, research, and analysis of their track record to mature. Clarity and specificity of alert content and optimal perceptual and cognitive attributes, for example, are essential for providing effective decision support to clinicians.
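
A minimal sketch of the tiered display logic described above, under assumed severity labels; the labels and the notion of "clinical utility" are illustrative assumptions, not the paper's specification.

```python
# Illustrative only: severity labels and the clinical-utility flag stand in
# for a curated medication knowledge base.
from enum import Enum

class Display(Enum):
    INTERRUPTIVE_DIALOG = "modal dialog requiring an explicit response"
    PASSIVE_WARNING = "inline warning, no response required"
    SUPPRESSED = "not shown"

def choose_display(severity: str, clinically_useful: bool) -> Display:
    if not clinically_useful:
        return Display.SUPPRESSED            # low-utility warning curated out
    if severity == "high":
        return Display.INTERRUPTIVE_DIALOG   # only high severity interrupts
    return Display.PASSIVE_WARNING           # lesser warnings stay passive
```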


Journal of Patient Safety | 2011

Critical drug-drug interactions for use in electronic health records systems with computerized physician order entry: review of leading approaches.

David C. Classen; Shobha Phansalkar; David W. Bates

Medications represent the most common intervention in health care; despite their benefits, they also lead to an estimated 1.5 million adverse drug events and tens of thousands of hospital admissions each year. Although some of these events are not preventable given what is known today, many types are, and one key preventable cause is drug-drug interactions (DDIs). Most electronic health record systems include programs that can check for and prevent these types of interactions as a routine part of medication ordering. Studies suggest that these systems, as implemented, often do not effectively screen for these DDIs. A major reason for this deficiency is the lack of any national standard for the critical DDIs that should be routinely operationalized in these complex systems. We review the leading critical DDI lists from multiple sources, including several leading health systems, a leading commercial content provider, the Leapfrog CPOE Testing Standard, and the new Office of the National Coordinator (ONC) DDI List. Implementation of strong DDI checking is one of the important steps in realizing the safety benefits of electronic prescribing. Hopefully, the ONC list will make it easier for organizations to ensure they are including the most important interactions, and the Leapfrog List may help these organizations develop an operational DDI list that can be practically implemented. In addition, this review has identified 7 common DDIs that can be the starting point for all organizations in this area of medication safety.


Patient Education and Counseling | 2014

Patient-centered interventions to improve medication management and adherence: A qualitative review of research findings

Jennifer L. Kuntz; Monika M. Safford; Jasvinder A. Singh; Shobha Phansalkar; Sarah P. Slight; Qoua L. Her; Nancy M. Allen LaPointe; Robin Mathews; Emily C. O’Brien; William B. Brinkman; Kevin A. Hommel; Kevin C. Farmer; Elissa V. Klinger; Nivethietha Maniam; Heather J. Sobko; Stacy Cooper Bailey; Insook Cho; Maureen H. Rumptz; Meredith Vandermeer; Mark C. Hornbrook

OBJECTIVE Patient-centered approaches to improving medication adherence hold promise, but evidence of their effectiveness is unclear. This review reports the current state of scientific research around interventions to improve medication management through four patient-centered domains: shared decision-making, methods to enhance effective prescribing, systems for eliciting and acting on patient feedback about medication use and treatment goals, and medication-taking behavior. METHODS We reviewed literature on interventions that fell into these domains and were published between January 2007 and May 2013. Two reviewers abstracted information and categorized studies by intervention type. RESULTS We identified 60 studies, of which 40% focused on patient education. Other intervention types included augmented pharmacy services, decision aids, shared decision-making, and clinical review of patient adherence. Medication adherence was an outcome in most (70%) of the studies, although 50% also examined patient-centered outcomes. CONCLUSIONS We identified a large number of medication management interventions that incorporated patient-centered care and improved patient outcomes. We were unable to determine whether these interventions are more effective than traditional medication adherence interventions. PRACTICE IMPLICATIONS Additional research is needed to identify effective and feasible approaches to incorporate patient-centeredness into the medication management processes of the current health care system, if appropriate.


International Journal of Medical Informatics | 2009

The state of the evidence for computerized provider order entry: A systematic review and analysis of the quality of the literature

Charlene R. Weir; Nancy Staggers; Shobha Phansalkar

OBJECTIVE This paper presents the results of a systematic literature review and a formal analysis of the scientific quality of empirical research on computerized provider order-entry (CPOE) applications. DESIGN Formal, systematic review techniques were used to search the literature, determine study relevance, and evaluate study quality. MEASUREMENT A search of multiple databases from 1976 through mid-2007 yielded a final set of 46 articles. Relevance criteria included: (1) a direct comparison of a CPOE system with a non-CPOE system; (2) implementation in a clinical setting; and (3) clinically relevant outcomes. RESULTS Study quality varied widely. Three major areas were identified for improvement in future studies: (1) internal validity, especially in terms of study designs, blinding, and instrumentation bias; (2) construct validity of the phenomenon of CPOE itself; and (3) measurement strategies, including reliability and validity assessments. CONCLUSIONS The evidence for the impact of CPOE needs to be improved to support scientific generalizability. Several common confounds are found in this literature. Future researchers will want to address them to improve the strength of the inference between CPOE and clinical outcomes. Discussion focuses on methods to improve future CPOE research.


Journal of the American Medical Informatics Association | 2014

Evaluation of medication alerts in electronic health records for compliance with human factors principles

Shobha Phansalkar; Marianne Zachariah; Hanna M. Seidling; Chantal Mendes; Lynn A. Volk; David W. Bates

INTRODUCTION Increasing the adoption of electronic health records (EHRs) with integrated clinical decision support (CDS) is a key initiative of the current US healthcare administration. High over-ride rates of CDS alerts strongly limit these potential benefits. As a result, EHR designers aspire to improve alert design to achieve better acceptance rates. In this study, we evaluated drug-drug interaction (DDI) alerts generated in EHRs and compared them for compliance with human factors principles. METHODS We utilized a previously validated questionnaire, the I-MeDeSA, to assess compliance with nine human factors principles of DDI alerts generated in 14 EHRs. Two reviewers independently assigned scores evaluating the human factors characteristics of each EHR. Rankings were assigned based on these scores and recommendations for appropriate alert design were derived. RESULTS The 14 EHRs evaluated in this study received scores ranging from 8 to 18.33, with a maximum possible score of 26. Cohen's κ (κ=0.86) reflected excellent agreement among reviewers. The six vendor products tied for second and third place rankings, while the top system and bottom five systems were home-grown products. The most common weaknesses included the absence of characteristics such as alert prioritization, clear and concise alert messages indicating interacting drugs, actions for clinical management, and a statement indicating the consequences of over-riding the alert. CONCLUSIONS We provided detailed analyses of the human factors principles which were assessed and described our recommendations for effective alert design. Future studies should assess whether adherence to these recommendations can improve alert acceptance.
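
For readers unfamiliar with the agreement statistic cited here, the following sketch shows one way Cohen's κ could be computed from two reviewers' item-level ratings using scikit-learn; the ratings are made-up examples, not the study's data.

```python
# Hypothetical ratings for illustration; not the study's data.
from sklearn.metrics import cohen_kappa_score

reviewer_1 = [1, 0, 1, 1, 0, 1, 1, 0]   # e.g., 1 = principle satisfied, 0 = not
reviewer_2 = [1, 0, 1, 0, 0, 1, 1, 0]

kappa = cohen_kappa_score(reviewer_1, reviewer_2)
print(f"Cohen's kappa: {kappa:.2f}")
```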

Collaboration


Dive into Shobha Phansalkar's collaborations.

Top Co-Authors


David W. Bates

Brigham and Women's Hospital


Sarah P. Slight

Newcastle upon Tyne Hospitals NHS Foundation Trust


Adam Wright

Brigham and Women's Hospital
