
Publication


Featured research published by Mary Brown.


Journal of the American Medical Informatics Association | 2015

Recommendations to Improve the Usability of Drug-Drug Interaction Clinical Decision Support Alerts

Thomas H. Payne; Lisa E. Hines; Raymond C. Chan; Seth Hartman; Joan Kapusnik-Uner; Alissa L. Russ; Bruce W. Chaffee; Christian Hartman; Victoria Tamis; Brian Galbreth; Peter Glassman; Shobha Phansalkar; Heleen van der Sijs; Sheila M. Gephart; Gordon Mann; Howard R. Strasberg; Amy J. Grizzle; Mary Brown; Gilad J. Kuperman; Chris Steiner; Amanda Kathleen Sullins; Hugh H. Ryan; Michael A. Wittie; Daniel C. Malone

OBJECTIVE: To establish preferred strategies for presenting drug-drug interaction (DDI) clinical decision support alerts.

MATERIALS AND METHODS: A DDI Clinical Decision Support Conference Series included a workgroup consisting of 24 clinical, usability, and informatics experts representing academia, health information technology (IT) vendors, healthcare organizations, and the Office of the National Coordinator for Health IT. Workgroup members met in 12 web-based meetings from January 2013 to February 2014 and in two in-person meetings to reach consensus on recommendations to improve decision support for DDIs. We addressed three key questions: (1) What, how, where, and when should DDI decision support be displayed? (2) Should presentation of DDI decision support vary by clinician? (3) How should the effectiveness of DDI decision support be measured?

RESULTS: Our recommendations include the consistent use of terminology, visual cues, minimal text, formatting, content, and reporting standards to facilitate usability. All clinicians involved in the medication use process should be able to view DDI alerts and the actions taken by other clinicians. Override rates are common but may not be a good measure of effectiveness.

DISCUSSION: Seven core elements should be included with DDI decision support. DDI information should be presented to all clinicians. Finally, in their current form, override rates have limited capability to evaluate alert effectiveness.

CONCLUSION: DDI clinical decision support alerts need major improvements. We provide recommendations for healthcare organizations and IT vendors to improve the clinician interface of DDI alerts, with the aim of reducing alert fatigue and improving patient safety.


Drug Safety | 2015

Consensus recommendations for systematic evaluation of drug-drug interaction evidence for clinical decision support.

Richard T. Scheife; Lisa E. Hines; Richard D. Boyce; Sophie P. Chung; Jeremiah D. Momper; Christine D. Sommer; Darrell R. Abernethy; John R. Horn; Stephen J. Sklar; Samantha K. Wong; Gretchen Jones; Mary Brown; Amy J. Grizzle; Susan Comes; Tricia Lee Wilkins; Clarissa Borst; Michael A. Wittie; Daniel C. Malone

BACKGROUND: Healthcare organizations, compendia, and drug knowledgebase vendors use varying methods to evaluate and synthesize evidence on drug–drug interactions (DDIs). This situation has a negative effect on electronic prescribing and medication information systems that warn clinicians of potentially harmful medication combinations.

OBJECTIVE: The aim of this study was to provide recommendations for systematic evaluation of evidence for DDIs from the scientific literature, drug product labeling, and regulatory documents.

METHODS: A conference series was conducted to develop a structured process to improve the quality of DDI alerting systems. Three expert workgroups were assembled to address the goals of the conference. The Evidence Workgroup consisted of 18 individuals with expertise in pharmacology, drug information, biomedical informatics, and clinical decision support. Workgroup members met via webinar 12 times from January 2013 to February 2014. Two in-person meetings were conducted in May and September 2013 to reach consensus on recommendations.

RESULTS: We developed expert consensus answers to the following three key questions. (i) What is the best approach to evaluate DDI evidence? (ii) What evidence is required for a DDI to be applicable to an entire class of drugs? (iii) How should a structured evaluation process be vetted and validated?

CONCLUSION: Evidence-based decision support for DDIs requires consistent application of transparent and systematic methods to evaluate the evidence. Drug compendia and clinical decision support systems in which these recommendations are implemented should be able to provide higher-quality information about DDIs.


American Journal of Health-System Pharmacy | 2016

Recommendations for selecting drug–drug interactions for clinical decision support

Hugh H. Tilson; Lisa E. Hines; Gerald McEvoy; David M. Weinstein; Philip D. Hansten; Karl Matuszewski; Marianne Le Comte; Stefanie Higby-Baker; Joseph T. Hanlon; Lynn Pezzullo; Kathleen Vieson; Amy Helwig; Shiew Mei Huang; Anthony Perre; David W. Bates; John Poikonen; Michael A. Wittie; Amy J. Grizzle; Mary Brown; Daniel C. Malone

PURPOSE: Recommendations for including drug-drug interactions (DDIs) in clinical decision support (CDS) are presented.

SUMMARY: A conference series was conducted to improve CDS for DDIs. A workgroup consisting of 20 experts in pharmacology, drug information, and CDS from academia, government agencies, health information vendors, and healthcare organizations was convened to address (1) the process to use for developing and maintaining a standard set of DDIs, (2) the information that should be included in a knowledge base of standard DDIs, (3) whether a list of contraindicated drug pairs can or should be established, and (4) how to more intelligently filter DDI alerts. We recommend a transparent, systematic, and evidence-driven process with graded recommendations by a consensus panel of experts and oversight by a national organization. We outline key DDI information needed to help guide clinician decision-making. We recommend judicious classification of DDIs as contraindicated and more research to identify methods to safely reduce repetitive and less-relevant alerts.

CONCLUSION: An expert panel with a centralized organizer or convener should be established to develop and maintain a standard set of DDIs for CDS in the United States. The process should be evidence driven, transparent, and systematic, with feedback from multiple stakeholders for continuous improvement. The scope of the expert panel's work should be carefully managed to ensure that the process is sustainable. Support for research to improve DDI alerting in the future is also needed. Adoption of these steps may lead to consistent and clinically relevant content for interruptive DDIs, thus reducing alert fatigue and improving patient safety.


Journal of General Internal Medicine | 2006

Brief Report: Development of a prescription medication information webliography for consumers

Yu Ko; Mary Brown; Rowan Frost; Raymond L. Woosley

BACKGROUND: Websites offering drug information vary in coverage and quality, and most health care consumers are poorly equipped to assess the quality of internet medication information.

OBJECTIVE: To establish a webliography of recommended prescription medication information websites for health care consumers and providers.

DESIGN AND METHODS: Drug information websites were systematically identified based on recommendations from health professionals and text-word searches of MEDLINE and Google. The resulting sample of websites was evaluated in a 2-step process. Candidate websites were first screened using inclusion/exclusion criteria representing minimum information requirements. Websites that passed the inclusion/exclusion criteria were then rated on 16 quality criteria using a 5-point scale by 3 trained judges. Website ratings were averaged, then multiplied by the corresponding importance weight of each criterion and summed to generate a total score. Websites with the highest total scores were included in the webliography.

RESULTS: Ten websites were selected for inclusion in the webliography. The 3 highest-scoring websites were Anthem Blue Cross and Blue Shield (http://home.anthemhealth.com/topic/drugcenter), U.S. National Library of Medicine (www.nlm.nih.gov/medlineplus/druginformation.html), and Healthvision (http://www.yourhealthinformation.com/library/healthguide/en-us/drugguide/default.htm).

CONCLUSION: Medication information websites vary widely in quality and content. The online webliography is a valuable and easily accessed tool that can be recommended by health care professionals to patients who request referral to reliable websites.


Journal of Managed Care Pharmacy | 2017

Is Real-World Evidence Used in P&T Monographs and Therapeutic Class Reviews?

Jason T. Hurwitz; Mary Brown; Jennifer S. Graff; Loretta Peters; Daniel C. Malone

BACKGROUND: Payers are faced with making coverage and reimbursement decisions based on the best available evidence. Often these decisions apply to patient populations, provider networks, and care settings not typically studied in clinical trials. Treatment effectiveness evidence is increasingly available from electronic health records, registries, and administrative claims. However, little is known about when and what types of real-world evidence (RWE) studies inform pharmacy and therapeutics (P&T) committee decisions.

OBJECTIVE: To evaluate evidence sources cited in P&T committee monographs and therapeutic class reviews and assess the design features and quality of cited RWE studies.

METHODS: A convenience sample of representatives from pharmacy benefit management, health system, and health plan organizations provided recent P&T monographs and therapeutic class reviews (or references from such documents). Two investigators examined and grouped references into major categories (published studies, unpublished studies, and other/unknown) and multiple subcategories (e.g., product label, clinical trials, RWE, systematic reviews). Cited comparative RWE was reviewed to assess design features (e.g., population, data source, comparators) and quality using the Good ReseArch for Comparative Effectiveness (GRACE) Checklist.

RESULTS: Investigators evaluated 565 references cited in 27 monographs/therapeutic class reviews from 6 managed care organizations. Therapeutic class reviews mostly cited published clinical trials (35.3%, 155/439), while single-product monographs relied most on manufacturer-supplied information (42.1%, 53/126). Published RWE comprised 4.8% (21/439) of therapeutic class review references and none (0/126) of the monograph references. Of the 21 RWE studies, 12 were comparative and assessed patient care settings and outcomes typically not included in clinical trials (community ambulatory settings [10], long-term safety [8]). RWE studies most frequently were based on registry data (6), conducted in the United States (6), and funded by the pharmaceutical industry (5). GRACE Checklist ratings suggested the data and methods of these comparative RWE studies were of high quality.

CONCLUSIONS: RWE was infrequently cited in P&T materials, even among therapeutic class reviews where RWE is more readily available. Although few P&T materials cited RWE, the comparative RWE studies were generally of high quality. More research is needed to understand when and what types of real-world studies can more routinely inform coverage and reimbursement decisions.

DISCLOSURES: This project was funded by the National Pharmaceutical Council. Hurwitz, Brown, Peters, and Malone have nothing to disclose. Graff is employed by the National Pharmaceutical Council. Part of this study was presented as a poster at the AMCP Managed Care & Specialty Pharmacy 2016 Annual Meeting; April 19-22, 2016; San Francisco, CA. Study concept and design were primarily contributed by Malone and Graff, along with Hurwitz and Brown. All authors participated in data collection, and data interpretation was performed by Malone, Hurwitz, and Graff, with assistance from Brown and Peters. The manuscript was written primarily by Hurwitz and Malone, along with Graff, Brown, and Peters, and revised by Malone, Brown, Peters, Hurwitz, and Graff.


Journal of the American Pharmacists Association | 2011

Community pharmacy and pharmacist staff call center: Assessment of medication safety and effectiveness

Lisa Higgins; Mary Brown; John E. Murphy; Daniel C. Malone; Edward P. Armstrong; Raymond L. Woosley

OBJECTIVE: To determine proof of concept for use of a network of pharmacists to evaluate the safety of medications.

DESIGN: Pilot, comparative, prospective evaluation.

SETTING: Community pharmacies and a pharmacist-staffed call center in Arizona during January through August 2006.

PATIENTS: Patients filling prescriptions for ipratropium or tiotropium bromide at 1 of 55 Arizona pharmacies were encouraged to call a pharmacist-staffed call center. A total of 67 patients contacted the center and 41 participated.

INTERVENTION: A network of community pharmacies and a call center were used to collect data on patients receiving one of two medications for the treatment of chronic obstructive pulmonary disease. Pharmacists in the community pharmacies recruited patients who presented with a prescription or requested a refill for one of the medications. The call center was used to collect patient data. Patients provided data on medication use, completed the chronic respiratory questionnaire (CRQ), and were encouraged to call the center to report health problems. After 30 days, patients were called to determine whether they had experienced any adverse events while taking their medication, and the CRQ was readministered.

MAIN OUTCOME MEASURE: Knowledge gained on the feasibility of the model using pharmacists to assess drug safety.

RESULTS: A total of 67 (6.7%) of a possible 995 patients contacted the call center about participating in the study. Approximately one-half (n = 28) of the 55 pharmacies had one or more patients contact the center about the study. A total of 41 patients met inclusion/exclusion criteria and were enrolled. Six (15%) patients reported an adverse effect, including one serious adverse event (acute glaucoma).

CONCLUSION: This study provides limited evidence that community pharmacies and a pharmacist-staffed call center can be used to assess medication safety; however, a number of issues need to be examined to determine whether the approaches can be sufficiently effective.


Value in Health | 2017

Real-World Evidence: Useful in the Real World of US Payer Decision Making? How? When? And What Studies?

Daniel C. Malone; Mary Brown; Jason T. Hurwitz; Loretta Peters; Jennifer Graff

OBJECTIVES: To examine how real-world evidence (RWE) is currently perceived and used in managed care environments, especially to inform pharmacy and therapeutics (P&T) committee decisions; to assess which study factors (e.g., data, design, and funding source) contribute to RWE utility in decisions; and to identify barriers to consideration of RWE studies in P&T decision making.

METHODS: We conducted focus groups, telephone-based interviews, and surveys to understand perceptions of RWE and to assess awareness, quality, and relevance of two high-profile examples of published RWE studies. A purposive sample comprised 4 physicians, 15 pharmacists, and 1 researcher representing 18 US health plans and health system organizations.

RESULTS: Participants reported that RWE was generally used, or useful, to inform safety monitoring, utilization management, and cost analysis, but less so to guide P&T decisions. Participants were not aware of the two sample RWE studies but considered both studies to be valuable. Relevant research questions and outcomes, transparent methods, study quality, and timely results contribute to the utility of published RWE. Perceived organizational barriers to the use of published RWE included lack of skill, training, and timely study results.

CONCLUSIONS: Payers recognize the value of RWE, but use of such studies to inform P&T decisions varies from organization to organization and is limited. Relevance to payers, timeliness, and transparent methods were key concerns with RWE. Participants recognized the need for continuing education on evaluating and using RWE to better understand the study methods, findings, and applicability to their organizations.


Patient Education and Counseling | 2006

Diagramming patients’ views of root causes of adverse drug events in ambulatory care: An online tool for planning education and research

Mary Brown; Rowan Frost; Yu Ko; Raymond L. Woosley


Research in Social & Administrative Pharmacy | 2008

Potential Determinants of Prescribers' Drug-Drug Interaction Knowledge

Yu Ko; Daniel C. Malone; Jerome V. D'Agostino; Grant H. Skrepnek; Edward P. Armstrong; Mary Brown; Raymond L. Woosley


Journal of Managed Care Pharmacy | 2013

Health Care Decision Makers' Use of Comparative Effectiveness Research: Report from a Series of Focus Groups

Lorenzo Villa; Terri L. Warholak; Lisa E. Hines; Ann M. Taylor; Mary Brown; Jason T. Hurwitz; Diana I. Brixner; Daniel C. Malone

Collaboration


Mary Brown's most frequent co-authors and their affiliations.

Top Co-Authors

Yu Ko

University of Arizona

Grant H. Skrepnek

University of Oklahoma Health Sciences Center
