
Publications

Featured research published by Skye Aaron.


Journal of the American Medical Informatics Association | 2016

Analysis of clinical decision support system malfunctions: a case series and survey

Adam Wright; Thu-Trang T. Hickman; Dustin McEvoy; Skye Aaron; Angela Ai; Jan Marie Andersen; Salman T. Hussain; Rachel B. Ramoni; Julie M. Fiskio; Dean F. Sittig; David W. Bates

Objective To illustrate ways in which clinical decision support systems (CDSSs) malfunction and identify patterns of such malfunctions. Materials and Methods We identified and investigated several CDSS malfunctions at Brigham and Women’s Hospital and present them as a case series. We also conducted a preliminary survey of Chief Medical Information Officers to assess the frequency of such malfunctions. Results We identified four CDSS malfunctions at Brigham and Women’s Hospital: (1) an alert for monitoring thyroid function in patients receiving amiodarone stopped working when an internal identifier for amiodarone was changed in another system; (2) an alert for lead screening for children stopped working when the rule was inadvertently edited; (3) a software upgrade of the electronic health record software caused numerous spurious alerts to fire; and (4) a malfunction in an external drug classification system caused an alert to inappropriately suggest antiplatelet drugs, such as aspirin, for patients already taking one. We found that 93% of the Chief Medical Information Officers who responded to our survey had experienced at least one CDSS malfunction, and two-thirds experienced malfunctions at least annually. Discussion CDSS malfunctions are widespread and often persist for long periods. The failure of alerts to fire is particularly difficult to detect. A range of causes, including changes in codes and fields, software upgrades, inadvertent disabling or editing of rules, and malfunctions of external systems commonly contribute to CDSS malfunctions, and current approaches for preventing and detecting such malfunctions are inadequate. Conclusion CDSS malfunctions occur commonly and often go undetected. Better methods are needed to prevent and detect these malfunctions.


Journal of the American Medical Informatics Association | 2016

Variation in high-priority drug-drug interaction alerts across institutions and electronic health records

Dustin McEvoy; Dean F. Sittig; Thu-Trang T. Hickman; Skye Aaron; Angela Ai; Mary G. Amato; David W Bauer; Gregory M. Fraser; Jeremy Harper; Angela Kennemer; Michael Krall; Christoph U. Lehmann; Sameer Malhotra; Daniel R. Murphy; Brandi O’Kelley; Lipika Samal; Richard Schreiber; Hardeep Singh; Eric J. Thomas; Carl V Vartian; Jennifer Westmorland; Allison B. McCoy; Adam Wright

Objective: The United States Office of the National Coordinator for Health Information Technology sponsored the development of a “high-priority” list of drug-drug interactions (DDIs) to be used for clinical decision support. We assessed current adoption of this list and current alerting practice for these DDIs with regard to alert implementation (presence or absence of an alert) and display (alert appearance as interruptive or passive). Materials and methods: We conducted evaluations of electronic health records (EHRs) at a convenience sample of health care organizations across the United States using a standardized testing protocol with simulated orders. Results: Evaluations of 19 systems were conducted at 13 sites using 14 different EHRs. Across systems, 69% of the high-priority DDI pairs produced alerts. Implementation and display of the DDI alerts tested varied between systems, even when the same EHR vendor was used. Across the drug pairs evaluated, implementation and display of DDI alerts differed, ranging from 27% (4/15) to 93% (14/15) implementation. Discussion: Currently, there is no standard of care covering which DDI alerts to implement or how to display them to providers. Opportunities to improve DDI alerting include using differential displays based on DDI severity, establishing improved lists of clinically significant DDIs, and thoroughly reviewing organizational implementation decisions regarding DDIs. Conclusion: DDI alerting is clinically important but not standardized. There is significant room for improvement and standardization around evidence-based DDIs.


Journal of the American Medical Informatics Association | 2017

Testing electronic health records in the “production” environment: an essential step in the journey to a safe and effective health care system

Adam Wright; Skye Aaron; Dean F. Sittig

Thorough and ongoing testing of electronic health records (EHRs) is key to ensuring their safety and effectiveness. Many health care organizations limit testing to test environments separate from, and often different than, the production environment used by clinicians. Because EHRs are complex hardware and software systems that often interact with other hardware and software systems, no test environment can exactly mimic how the production environment will behave. An effective testing process must integrate safely conducted testing in the production environment itself, using test patients. We propose recommendations for how to safely incorporate testing in production into current EHR testing practices, with suggestions regarding the incremental release of upgrades, test patients, tester accounts, downstream personnel, and reporting.


Journal of the American Medical Informatics Association | 2018

Clinical decision support alert malfunctions: analysis and empirically derived taxonomy

Adam Wright; Angela Ai; Joan S. Ash; Jane Wiesen; Thu-Trang T. Hickman; Skye Aaron; Dustin McEvoy; Shane Borkowsky; Pavithra I. Dissanayake; Peter J. Embi; William L. Galanter; Jeremy Harper; Steve Z. Kassakian; Rachel B. Ramoni; Richard Schreiber; Anwar Sirajuddin; David W. Bates; Dean F. Sittig

Objective To develop an empirically derived taxonomy of clinical decision support (CDS) alert malfunctions. Materials and Methods We identified CDS alert malfunctions using a mix of qualitative and quantitative methods: (1) site visits with interviews of chief medical informatics officers, CDS developers, clinical leaders, and CDS end users; (2) surveys of chief medical informatics officers; (3) analysis of CDS firing rates; and (4) analysis of CDS overrides. We used a multi-round, manual, iterative card sort to develop a multi-axial, empirically derived taxonomy of CDS malfunctions. Results We analyzed 68 CDS alert malfunction cases from 14 sites across the United States with diverse electronic health record systems. Four primary axes emerged: the cause of the malfunction, its mode of discovery, when it began, and how it affected rule firing. Build errors, conceptualization errors, and the introduction of new concepts or terms were the most frequent causes. User reports were the predominant mode of discovery. Many malfunctions within our database caused rules to fire for patients for whom they should not have (false positives), but the reverse (false negatives) was also common. Discussion Across organizations and electronic health record systems, similar malfunction patterns recurred. Challenges included updates to code sets and values, software issues at the time of system upgrades, difficulties with migration of CDS content between computing environments, and the challenge of correctly conceptualizing and building CDS. Conclusion CDS alert malfunctions are frequent. The empirically derived taxonomy formalizes the common recurring issues that cause these malfunctions, helping CDS developers anticipate and prevent CDS malfunctions before they occur or detect and resolve them expediently.


Journal of General Internal Medicine | 2018

Reduced Effectiveness of Interruptive Drug-Drug Interaction Alerts after Conversion to a Commercial Electronic Health Record

Adam Wright; Skye Aaron; Diane L. Seger; Lipika Samal; Gordon D. Schiff; David W. Bates

Background: Drug-drug interaction (DDI) alerts in electronic health records (EHRs) can help prevent adverse drug events, but such alerts are frequently overridden, raising concerns about their clinical usefulness and contribution to alert fatigue. Objective: To study the effect of conversion to a commercial EHR on DDI alert and acceptance rates. Design: Two before-and-after studies. Participants: 3277 clinicians who received a DDI alert in the outpatient setting. Intervention: Introduction of a new, commercial EHR and subsequent adjustment of DDI alerting criteria. Main Measures: Alert burden and proportion of alerts accepted. Key Results: Overall interruptive DDI alert burden increased by a factor of 6 from the legacy EHR to the commercial EHR. The acceptance rate for the most severe alerts fell from 100 to 8.4%, and from 29.3 to 7.5% for medium-severity alerts (P < 0.001). After disabling the least severe alerts, total DDI alert burden fell by 50.5%, and acceptance of Tier 1 alerts rose from 9.1 to 12.7% (P < 0.01). Conclusions: Changing from a highly tailored DDI alerting system to a more general one as part of an EHR conversion decreased acceptance of DDI alerts and increased alert burden on users. The decrease in acceptance rates cannot be fully explained by differences in the clinical knowledge base, nor can it be fully explained by alert fatigue associated with increased alert burden. Instead, workflow factors probably predominate, including timing of alerts in the prescribing process, lack of differentiation of more and less severe alerts, and features of how users interact with alerts.


Journal of General Internal Medicine | 2016

The Big Phish: Cyberattacks Against U.S. Healthcare Systems

Adam Wright; Skye Aaron; David W. Bates

Phishing, the practice of obtaining computer credentials from users through manipulation or deceit, dates back at least 20 years to America Online (AOL), where attackers would impersonate AOL staff members and send instant messages to other users convincing them to disclose their passwords or credit card numbers. The term itself was coined by Koceilah Rekouche, a hacker known online by the pseudonym "Da Chronic," who created a tool for automating and accelerating this process in 1995. The manual process had sometimes been called fishing (as in fishing for passwords), and Rekouche termed the password-stealing function of his software "phishing"; the term stuck, and the behavior has subsequently expanded far beyond AOL over the last two decades. One such email, sent to users at our hospital and typical of the many received every month, encourages recipients to click a link where they are asked to enter their username and password. However, the site is operated not by our IT department, but by hackers seeking to gather passwords. When a user takes the bait and enters a password on the hacker's site, the hacker gains the ability to access a range of online services by impersonating the user. While most users who receive an email like this one should know better than to click the link, phishing exercise results show otherwise. Users do fall victim to these manipulations, and some provide information, such as passwords, that is useful to hackers. The success of phishing messages is often tied to realism and authority: they may appear to be from an authority such as a hospital IT department and warn users that their accounts will be shut off if they don't "update" them by entering their passwords.
Phishing websites, which users access after clicking links in emails, are often designed to mimic institutional sites with misappropriated logos and similar designs, and they have addresses that resemble official sites, sometimes with minor misspellings or a lowercase letter L replaced with the number 1. Over time, phishing attacks have become more sophisticated, with higher-quality emails and more convincing sites for capturing credentials. Although many phishing attacks are indiscriminate, targeting large numbers of users, a variant called "spear phishing" focuses on smaller groups of users or even specific individuals. Spear phishing attacks can be particularly effective because they can be carefully targeted to the sorts of links and deception most likely to trap a particular user: for example, a note apparently from the user's boss or even a journal that the user regularly submits to.
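
The lookalike-address trick described above (a digit "1" standing in for a lowercase "l") can be sketched in a few lines. The domain names and homoglyph table here are hypothetical illustrations, not examples from the article:

```python
# A sketch of flagging lookalike domains that swap visually similar
# characters, e.g. the digit "1" for a lowercase "l". The domain names and
# the homoglyph table are hypothetical, not taken from the article.
HOMOGLYPHS = {"1": "l", "0": "o"}

def normalize(domain: str) -> str:
    """Map confusable characters back to the letters they imitate."""
    for fake, real in HOMOGLYPHS.items():
        domain = domain.replace(fake, real)
    return domain.lower()

def looks_like(suspicious: str, official: str) -> bool:
    """True if the two domains differ only by homoglyph substitutions."""
    return suspicious != official and normalize(suspicious) == normalize(official)

print(looks_like("partners-1ogin.example.com", "partners-login.example.com"))  # True
print(looks_like("partners-login.example.com", "partners-login.example.com"))  # False
```

Real-world detectors go further (edit distance, Unicode confusables, newly registered domains), but the core idea is the same: compare a canonicalized form of the address against the official one.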


Journal of the American Medical Informatics Association | 2018

Using statistical anomaly detection models to find clinical decision support malfunctions

Soumi Ray; Dustin McEvoy; Skye Aaron; Thu-Trang T. Hickman; Adam Wright

Objective Malfunctions in Clinical Decision Support (CDS) systems occur due to a multitude of reasons, and often go unnoticed, leading to potentially poor outcomes. Our goal was to identify malfunctions within CDS systems. Methods We evaluated 6 anomaly detection models: (1) Poisson Changepoint Model, (2) Autoregressive Integrated Moving Average (ARIMA) Model, (3) Hierarchical Divisive Changepoint (HDC) Model, (4) Bayesian Changepoint Model, (5) Seasonal Hybrid Extreme Studentized Deviate (SHESD) Model, and (6) E-Divisive with Median (EDM) Model, and characterized their ability to find known anomalies. We analyzed 4 CDS alerts with known malfunctions from the Longitudinal Medical Record (LMR) and Epic® (Epic Systems Corporation, Madison, WI, USA) at Brigham and Women's Hospital, Boston, MA. The 4 rules recommend lead testing in children, aspirin therapy in patients with coronary artery disease, pneumococcal vaccination in immunocompromised adults, and thyroid testing in patients taking amiodarone. Results Poisson changepoint, ARIMA, HDC, Bayesian changepoint, and the SHESD model were able to detect anomalies in an alert for lead screening in children and in an alert for pneumococcal conjugate vaccine in immunocompromised adults. EDM was able to detect anomalies in an alert for monitoring thyroid function in patients on amiodarone. Conclusions Malfunctions/anomalies occur frequently in CDS alert systems. It is important to be able to detect such anomalies promptly. Anomaly detection models are useful tools to aid such detections.
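
The changepoint idea behind several of the models above can be illustrated with a minimal single-changepoint Poisson detector over daily alert counts. This is a generic sketch with simulated data, not the authors' implementation:

```python
import math

# A toy single-changepoint detector for daily alert firing counts, assuming
# counts are Poisson-distributed with one rate before and one after a change.
# Simulated data; illustrative only, not one of the models from the paper.

def poisson_loglik(counts):
    """Log-likelihood of counts under a single Poisson rate (the segment MLE)."""
    lam = sum(counts) / len(counts)
    if lam == 0:
        return 0.0  # an all-zero segment has probability 1 under rate 0
    return sum(c * math.log(lam) - lam - math.lgamma(c + 1) for c in counts)

def find_changepoint(counts):
    """Index that best splits counts into two Poisson segments, or None."""
    best_idx, best_ll = None, poisson_loglik(counts)
    for i in range(1, len(counts)):
        ll = poisson_loglik(counts[:i]) + poisson_loglik(counts[i:])
        if ll > best_ll:
            best_idx, best_ll = i, ll
    return best_idx

# A rule that silently stops firing on day 10 shows up as a changepoint there.
daily = [20, 22, 19, 21, 20, 23, 18, 21, 20, 22, 0, 1, 0, 0, 1, 0, 0, 1, 0, 0]
print(find_changepoint(daily))  # 10
```

A production detector would penalize the extra parameters (e.g., with BIC) before declaring a change, since a two-segment fit almost always scores at least as well as a one-segment fit.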


Journal of the American Medical Informatics Association | 2018

Changes in hospital bond ratings after the transition to a new electronic health record

Dustin McEvoy; Michael L. Barnett; Dean F. Sittig; Skye Aaron; Ateev Mehrotra; Adam Wright

Objective To assess the impact of electronic health record (EHR) implementation on hospital finances. Materials and Methods We analyzed the impact of EHR implementation on bond ratings and net income from service to patients (NISP) at 32 hospitals that recently implemented a new EHR and a set of controls. Results After implementing an EHR, 7 hospitals had a bond downgrade, 7 had a bond upgrade, and 18 had no changes. There was no difference in the likelihood of bond rating changes or in changes to NISP following EHR go-live when compared to control hospitals. Discussion Most hospitals in our analysis saw no change in bond ratings following EHR go-live, with no significant differences observed between EHR implementation and control hospitals. There was also no apparent difference in NISP. Conclusions Implementation of an EHR did not appear to have an impact on bond ratings at the hospitals in our analysis.


Journal of the American Medical Informatics Association | 2018

Smashing the strict hierarchy: three cases of clinical decision support malfunctions involving carvedilol

Adam Wright; Aileen P Wright; Skye Aaron; Dean F. Sittig

Clinical vocabularies allow for standard representation of clinical concepts, and can also contain knowledge structures, such as hierarchy, that facilitate the creation of maintainable and accurate clinical decision support (CDS). A key architectural feature of clinical hierarchies is how they handle parent-child relationships: specifically, whether hierarchies are strict hierarchies (allowing a single parent per concept) or polyhierarchies (allowing multiple parents per concept). These structures handle subsumption relationships (i.e., ancestor and descendant relationships) differently. In this paper, we describe three real-world malfunctions of clinical decision support related to incorrect assumptions about subsumption checking for β-blockers, specifically carvedilol, a non-selective β-blocker that also has α-blocker activity. We recommend that (1) CDS implementers should learn about the limitations of terminologies, hierarchies, and classification; (2) CDS implementers should thoroughly test CDS, with a focus on special or unusual cases; (3) CDS implementers should monitor feedback from users; and (4) electronic health record (EHR) and clinical content developers should offer and support polyhierarchical clinical terminologies, especially for medications.
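
The strict-hierarchy versus polyhierarchy distinction can be shown with a toy subsumption check. The concept names below are illustrative, not actual terminology codes, and the traversal is a generic sketch rather than any vendor's algorithm:

```python
# Toy subsumption check contrasting a strict hierarchy (one parent per
# concept) with a polyhierarchy (multiple parents per concept). Concept
# names are illustrative, not real terminology codes.
STRICT_PARENT = {"carvedilol": "beta_blocker"}  # only one parent can be stored
POLY_PARENTS = {"carvedilol": {"beta_blocker", "alpha_blocker"}}

def is_a_strict(concept, ancestor):
    """Walk the single-parent chain upward."""
    while concept in STRICT_PARENT:
        concept = STRICT_PARENT[concept]
        if concept == ancestor:
            return True
    return False

def is_a_poly(concept, ancestor):
    """Walk upward through every parent of every reached concept."""
    frontier, seen = {concept}, set()
    while frontier:
        node = frontier.pop()
        if node == ancestor:
            return True
        seen.add(node)
        frontier |= POLY_PARENTS.get(node, set()) - seen
    return False

# The strict hierarchy cannot represent carvedilol's alpha-blocker activity,
# so a CDS rule keyed on "alpha_blocker" would silently fail to fire for it.
print(is_a_strict("carvedilol", "alpha_blocker"))  # False
print(is_a_poly("carvedilol", "alpha_blocker"))    # True
```

This is the failure mode the paper's recommendation (4) addresses: if the terminology itself only allows one parent, no amount of careful rule building recovers the second classification.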


International Journal of Medical Informatics | 2018

Best practices for preventing malfunctions in rule-based clinical decision support alerts and reminders: Results of a Delphi study

Adam Wright; Joan S. Ash; Skye Aaron; Angela Ai; Thu-Trang T. Hickman; Jane Wiesen; William L. Galanter; Allison B. McCoy; Richard Schreiber; Christopher A. Longhurst; Dean F. Sittig

OBJECTIVE Developing effective and reliable rule-based clinical decision support (CDS) alerts and reminders is challenging. Using a previously developed taxonomy for alert malfunctions, we identified best practices for developing, testing, implementing, and maintaining alerts and avoiding malfunctions. MATERIALS AND METHODS We identified 72 initial practices from the literature, interviews with subject matter experts, and prior research. To refine, enrich, and prioritize the list of practices, we used the Delphi method with two rounds of consensus-building and refinement. We used a larger than normal panel of experts to include a wide representation of CDS subject matter experts from various disciplines. RESULTS 28 experts completed Round 1 and 25 completed Round 2. Round 1 narrowed the list to 47 best practices in 7 categories: knowledge management, designing and specifying, building, testing, deployment, monitoring and feedback, and people and governance. Round 2 developed consensus on the importance and feasibility of each best practice. DISCUSSION The Delphi panel identified a range of best practices that may help to improve implementation of rule-based CDS and avert malfunctions. Due to limitations on resources and personnel, not everyone can implement all best practices. The most robust processes require investing in a data warehouse. Experts also pointed to the issue of shared responsibility between the healthcare organization and the electronic health record vendor. CONCLUSION These 47 best practices represent an ideal situation. The research identifies the balance between importance and difficulty, highlights the challenges faced by organizations seeking to implement CDS, and describes several opportunities for future research to reduce alert malfunctions.

Collaboration



Top Co-Authors

Adam Wright
Brigham and Women's Hospital

Dean F. Sittig
University of California

Angela Ai
Brigham and Women's Hospital

Thu-Trang T. Hickman
Brigham and Women's Hospital

David W. Bates
Brigham and Women's Hospital