
Publication


Featured research published by Christopher M. Shea.


Implementation Science | 2014

Organizational readiness for implementing change: a psychometric assessment of a new measure

Christopher M. Shea; Sara Jacobs; Denise A. Esserman; Kerry Bruce; Bryan J. Weiner

Background: Organizational readiness for change in healthcare settings is an important factor in successful implementation of new policies, programs, and practices. However, research on the topic is hindered by the absence of a brief, reliable, and valid measure. Until such a measure is developed, we cannot advance scientific knowledge about readiness or provide evidence-based guidance to organizational leaders about how to increase readiness. This article presents results of a psychometric assessment of a new measure called Organizational Readiness for Implementing Change (ORIC), which we developed based on Weiner’s theory of organizational readiness for change.

Methods: We conducted four studies to assess the psychometric properties of ORIC. In study one, we assessed the content adequacy of the new measure using quantitative methods. In study two, we examined the measure’s factor structure and reliability in a laboratory simulation. In study three, we assessed the reliability and validity of an organization-level measure of readiness based on aggregated individual-level data from study two. In study four, we conducted a small field study utilizing the same analytic methods as in study three.

Results: Content adequacy assessment indicated that the items developed to measure change commitment and change efficacy reflected the theoretical content of these two facets of organizational readiness and distinguished the facets from hypothesized determinants of readiness. Exploratory and confirmatory factor analysis in the lab and field studies revealed two correlated factors, as expected, with good model fit and high item loadings. Reliability analysis in the lab and field studies showed high inter-item consistency for the resulting individual-level scales for change commitment and change efficacy. Inter-rater reliability and inter-rater agreement statistics supported the aggregation of individual-level readiness perceptions to the organizational level of analysis.

Conclusions: This article provides evidence in support of the ORIC measure. We believe this measure will enable testing of theories about determinants and consequences of organizational readiness and, ultimately, assist healthcare leaders to reduce the number of health organization change efforts that do not achieve desired benefits. Although ORIC shows promise, further assessment is needed to test for convergent, discriminant, and predictive validity.


Implementation Science | 2017

Criteria for selecting implementation science theories and frameworks: Results from an international survey

Sarah A. Birken; Byron J. Powell; Christopher M. Shea; Emily Haines; M. Alexis Kirk; Jennifer Leeman; Catherine L. Rohweder; Laura J. Damschroder; Justin Presseau

Background: Theories provide a synthesizing architecture for implementation science. The underuse, superficial use, and misuse of theories pose a substantial scientific challenge for implementation science and may relate to challenges in selecting from the many theories in the field. Implementation scientists may benefit from guidance for selecting a theory for a specific study or project. Understanding how implementation scientists select theories will help inform efforts to develop such guidance. Our objective was to identify which theories implementation scientists use, how they use theories, and the criteria used to select theories.

Methods: We identified initial lists of uses and criteria for selecting implementation theories based on seminal articles and an iterative consensus process. We incorporated these lists into a self-administered survey for completion by self-identified implementation scientists. We recruited potential respondents at the 8th Annual Conference on the Science of Dissemination and Implementation in Health and via several international email lists. We used frequencies and percentages to report results.

Results: Two hundred twenty-three implementation scientists from 12 countries responded to the survey. They reported using more than 100 different theories spanning several disciplines. Respondents reported using theories primarily to identify implementation determinants, inform data collection, enhance conceptual clarity, and guide implementation planning. Of the 19 criteria presented in the survey, those used by the most respondents to select theory included analytic level (58%), logical consistency/plausibility (56%), description of a change process (54%), and empirical support (53%). The criteria used by the fewest respondents included fecundity (10%), uniqueness (12%), and falsifiability (15%).

Conclusions: Implementation scientists use a large number of criteria to select theories, but there is little consensus on which are most important. Our results suggest that the selection of implementation theories is often haphazard or driven by convenience or prior exposure. Variation in approaches to selecting theory warns against prescriptive guidance for theory selection. Instead, implementation scientists may benefit from considering the criteria that we propose in this paper and using them to justify their theory selection. Future research should seek to refine the criteria for theory selection to promote more consistent and appropriate use of theory in implementation science.


Health Care Management Review | 2014

Assessing organizational capacity for achieving meaningful use of electronic health records.

Christopher M. Shea; Robb Malone; Morris Weinberger; Kristin L. Reiter; Jonathan Thornhill; Jennifer Lord; Nicholas G. Nguyen; Bryan J. Weiner

Background: Health care institutions are scrambling to manage the complex organizational change required for achieving meaningful use (MU) of electronic health records (EHR). Assessing baseline organizational capacity for the change can be a useful step toward effective planning and resource allocation. Purpose: The aim of this article is to describe an adaptable method and tool for assessing organizational capacity for achieving MU of EHR. Data on organizational capacity (people, processes, and technology resources) and barriers are presented from outpatient clinics within one integrated health care delivery system; thus, the focus is on MU requirements for eligible professionals, not eligible hospitals. Methods: We conducted 109 interviews with representatives from 46 outpatient clinics. Findings: Most clinics had core elements of the people domain of capacity in place. However, the process domain was problematic for many clinics, specifically, capturing problem lists as structured data and having standard processes for maintaining the problem list in the EHR. Also, nearly half of all clinics did not have methods for tracking compliance with their existing processes. Finally, most clinics maintained clinical information in multiple systems, not just the EHR. The most common perceived barriers to MU for eligible professionals included EHR functionality, changes to workflows, increased workload, and resistance to change. Practice Implications: Organizational capacity assessments provide a broad institutional perspective and an in-depth clinic-level perspective useful for making resource decisions and tailoring strategies to support the MU change effort for eligible professionals.


BMC Medical Informatics and Decision Making | 2014

Stage 1 of the meaningful use incentive program for electronic health records: a study of readiness for change in ambulatory practice settings in one integrated delivery system.

Christopher M. Shea; Kristin L. Reiter; Mark A. Weaver; Molly McIntyre; Jason Mose; Jonathan Thornhill; Robb Malone; Bryan J. Weiner

Background: Meaningful Use (MU) provides financial incentives for electronic health record (EHR) implementation. EHR implementation holds promise for improving healthcare delivery but also requires substantial changes for providers and staff. Establishing readiness for these changes may be important for realizing potential EHR benefits. Our study assesses whether provider/staff perceptions about the appropriateness of MU and their departments’ ability to support MU-related changes are associated with their reported readiness for MU-related changes.

Methods: We surveyed providers and staff representing 47 ambulatory practices within an integrated delivery system. We assessed whether respondents’ role and practice-setting type (primary versus specialty care) were associated with reported readiness for MU (i.e., willingness to change practice behavior and ability to document actions for MU) and hypothesized predictors of readiness (i.e., perceived appropriateness of MU and department support for MU). We then assessed associations between reported readiness and the hypothesized predictors of readiness.

Results: In total, 400 providers/staff responded (response rate approximately 25%). Individuals working in specialty settings were more likely than those in primary-care settings to report that MU will divert attention from other patient-care priorities (12.6% vs. 4.4%, p = 0.019). As compared to advanced-practice providers and nursing staff, physicians were less likely to have strong confidence in their department’s ability to solve MU implementation problems (28.4% vs. 47.1% vs. 42.6%, p = 0.023) and to report strong willingness to change their work practices for MU (57.9% vs. 83.3% vs. 82.0%, p < 0.001). Finally, provider/staff perceptions about whether MU aligns with departmental goals (OR = 3.99, 95% confidence interval (CI) = 2.13 to 7.48); MU will divert attention from other patient-care priorities (OR = 2.26, 95% CI = 1.26 to 4.06); their department will support MU-related change efforts (OR = 3.99, 95% CI = 2.13 to 7.48); and their department will be able to solve MU implementation problems (OR = 2.26, 95% CI = 1.26 to 4.06) were associated with their willingness to change practice behavior for MU.

Conclusions: Organizational leaders should gauge provider/staff perceptions about appropriateness and management support of MU-related change, as these perceptions might be related to subsequent implementation.


Implementation Science | 2017

Combined use of the Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF): a systematic review.

Sarah A. Birken; Byron J. Powell; Justin Presseau; M. Alexis Kirk; Fabiana Lorencatto; Natalie J. Gould; Christopher M. Shea; Bryan J. Weiner; Jill J Francis; Yan Yu; Emily Haines; Laura J. Damschroder

Background: Over 60 implementation frameworks exist. Using multiple frameworks may help researchers to address multiple study purposes, levels, and degrees of theoretical heritage and operationalizability; however, using multiple frameworks may result in unnecessary complexity and redundancy if doing so does not address study needs. The Consolidated Framework for Implementation Research (CFIR) and the Theoretical Domains Framework (TDF) are both well-operationalized, multi-level implementation determinant frameworks derived from theory. As such, the rationale for using the frameworks in combination (i.e., CFIR + TDF) is unclear. The objective of this systematic review was to elucidate the rationale for using CFIR + TDF by (1) describing studies that have used CFIR + TDF, (2) describing how they used CFIR + TDF, and (3) describing their stated rationale for using CFIR + TDF.

Methods: We undertook a systematic review to identify studies that mentioned both the CFIR and the TDF, were written in English, were peer-reviewed, and reported either a protocol or results of an empirical study in MEDLINE/PubMed, PsycInfo, Web of Science, or Google Scholar. We then abstracted data into a matrix and analyzed it qualitatively, identifying salient themes.

Findings: We identified five protocols and seven completed studies that used CFIR + TDF. These studies were conducted in several countries, addressed a range of healthcare interventions at multiple intervention phases, used many designs, methods, and units of analysis, and assessed a variety of outcomes. Three studies indicated that using CFIR + TDF addressed multiple study purposes. Six studies indicated that using CFIR + TDF addressed multiple conceptual levels. Four studies did not explicitly state their rationale for using CFIR + TDF.

Conclusions: Differences in the purposes that the authors of the CFIR (e.g., comprehensive set of implementation determinants) and the TDF (e.g., intervention development) propose help to justify the use of CFIR + TDF. Given that the CFIR and the TDF are both multi-level frameworks, the rationale that using CFIR + TDF is needed to address multiple conceptual levels may reflect potentially misleading conventional wisdom. On the other hand, using CFIR + TDF may more fully define the multi-level nature of implementation. To avoid concerns about unnecessary complexity and redundancy, scholars who use CFIR + TDF and combinations of other frameworks should specify how the frameworks contribute to their study.

Trial registration: PROSPERO CRD42015027615


Implementation Science | 2017

Beyond “implementation strategies”: classifying the full range of strategies used in implementation science and practice

Jennifer Leeman; Sarah A. Birken; Byron J. Powell; Catherine L. Rohweder; Christopher M. Shea

Background: Strategies are central to the National Institutes of Health’s definition of implementation research as “the study of strategies to integrate evidence-based interventions into specific settings.” Multiple scholars have proposed lists of the strategies used in implementation research and practice, which they increasingly are classifying under the single term “implementation strategies.” We contend that classifying all strategies under a single term leads to confusion, impedes synthesis across studies, and limits advancement of the full range of strategies of importance to implementation. To address this concern, we offer a system for classifying implementation strategies that builds on Proctor and colleagues’ (2013) reporting guidelines, which recommend that authors not only name and define their implementation strategies but also specify who enacted the strategy (i.e., the actor) and the level and determinants that were targeted (i.e., the action targets).

Main body: We build on Wandersman and colleagues’ Interactive Systems Framework to distinguish strategies based on whether they are enacted by actors functioning as part of a Delivery, Support, or Synthesis and Translation System. We build on Damschroder and colleagues’ Consolidated Framework for Implementation Research to distinguish the levels that strategies target (intervention, inner setting, outer setting, individual, and process). We then draw on numerous resources to identify determinants, which are conceptualized as modifiable factors that prevent or enable the adoption and implementation of evidence-based interventions. Identifying actors and targets resulted in five conceptually distinct classes of implementation strategies: dissemination, implementation process, integration, capacity-building, and scale-up. In our descriptions of each class, we identify the level of the Interactive Systems Framework at which the strategy is enacted (actors), the level and determinants targeted (action targets), and the outcomes used to assess strategy effectiveness. We illustrate how each class would apply to efforts to improve colorectal cancer screening rates in Federally Qualified Health Centers.

Conclusions: Structuring strategies into classes will aid reporting of implementation research findings, alignment of strategies with relevant theories, synthesis of findings across studies, and identification of potential gaps in current strategy listings. Organizing strategies into classes also will assist users in locating the strategies that best match their needs.


Patient Education and Counseling | 2014

A method to determine the impact of patient-centered care interventions in primary care

Timothy P. Daaleman; Christopher M. Shea; Jacqueline R. Halladay; David Reed

OBJECTIVE: The implementation of patient-centered care (PCC) innovations continues to be poorly understood. We used the implementation effectiveness framework to pilot a method for measuring the impact of a PCC innovation in primary care practices.

METHODS: We analyzed data from a prior study that assessed the implementation of an electronic geriatric quality-of-life (QOL) module in 3 primary care practices in central North Carolina in 2011-2012. Patients responded to the items, and the subsequent patient-provider encounter was coded using the Roter Interaction Analysis System (RIAS). We developed an implementation effectiveness measure specific to the QOL module (i.e., frequency of usage during the encounter) using RIAS and then tested whether there were differences in RIAS codes using analysis of variance.

RESULTS: Across a total of 60 patient-provider encounters, we examined differences in the uptake of the QOL module (i.e., the implementation-effectiveness measure) in relation to the frequency of RIAS codes during the encounter (i.e., the patient-centeredness measure). There was a significant association between the effectiveness measure and patient-centered RIAS codes.

CONCLUSION: The concept of implementation effectiveness provided a useful framework for determining the impact of a PCC innovation.

PRACTICE IMPLICATIONS: A method that captures real-time interactions between patients and care staff over time can meaningfully evaluate PCC innovations.


Journal of Healthcare Management | 2014

Assessing the feasibility of a virtual tumor board program: a case study.

Christopher M. Shea; Lindsey Haynes-Maslow; Molly McIntyre; Bryan J. Weiner; Stephanie B. Wheeler; Sara Jacobs; Deborah K. Mayer; Michael Young; Thomas C. Shea

EXECUTIVE SUMMARY: Multidisciplinary tumor boards involve various providers (e.g., oncology physicians, nurses) in patient care. Although many community hospitals have local tumor boards that review all types of cases, numerous providers, particularly in rural areas and smaller institutions, still lack access to tumor boards specializing in a particular type of cancer (e.g., hematologic). Videoconferencing technology can connect providers across geographic locations and institutions; however, virtual tumor board (VTB) programs using this technology are uncommon. In this study, we evaluated the feasibility of a new VTB program at the University of North Carolina (UNC) Lineberger Comprehensive Cancer Center, which connects community-based clinicians to UNC tumor boards representing different cancer types. Methods included observations, interviews, and surveys. Our findings suggest that participants were generally satisfied with the VTB. Cases presented to the VTB were appropriate, sufficient information was available for discussion, and technology problems were uncommon. UNC clinicians viewed the VTB as a service to patients and colleagues and an opportunity for clinical trial recruitment. Community-based clinicians presenting at VTBs valued the discussion, even if it simply confirmed their original treatment plan or did not yield consensus recommendations. Barriers to participation for community-based clinicians included timing of the VTB and lack of reimbursement. To maximize benefits of the VTB, these barriers should be addressed, scheduling and preparation processes optimized, and appropriate measures for evaluating impact identified.


Implementation Science | 2017

Organizational theory for dissemination and implementation research

Sarah A. Birken; Alicia C. Bunger; Byron J. Powell; Kea Turner; Alecia S. Clary; Stacey L. Klaman; Yan Yu; Daniel J. Whitaker; Shannon R. Self; Whitney L. Rostad; Jenelle R. Shanley Chatham; M. Alexis Kirk; Christopher M. Shea; Emily Haines; Bryan J. Weiner

Background: Even under optimal internal organizational conditions, implementation can be undermined by changes in organizations’ external environments, such as fluctuations in funding, adjustments in contracting practices, new technology, new legislation, changes in clinical practice guidelines and recommendations, or other environmental shifts. Internal organizational conditions are increasingly reflected in implementation frameworks, but nuanced explanations of how organizations’ external environments influence implementation success are lacking in implementation research. Organizational theories offer implementation researchers a host of existing, highly relevant, and heretofore largely untapped explanations of the complex interaction between organizations and their environment. In this paper, we demonstrate the utility of organizational theories for implementation research.

Discussion: We applied four well-known organizational theories (institutional theory, transaction cost economics, contingency theories, and resource dependency theory) to published descriptions of efforts to implement SafeCare, an evidence-based practice for preventing child abuse and neglect. Transaction cost economics theory explained how frequent, uncertain processes for contracting for SafeCare may have generated inefficiencies and thus compromised implementation among private child welfare organizations. Institutional theory explained how child welfare systems may have been motivated to implement SafeCare because doing so aligned with expectations of key stakeholders within child welfare systems’ professional communities. Contingency theories explained how efforts such as interagency collaborative teams promoted SafeCare implementation by facilitating adaptation to child welfare agencies’ internal and external contexts. Resource dependency theory (RDT) explained how interagency relationships, supported by contracts, memoranda of understanding, and negotiations, facilitated SafeCare implementation by balancing autonomy and dependence on funding agencies and SafeCare developers.

Summary: In addition to the retrospective application of organizational theories demonstrated above, we advocate for the proactive use of organizational theories to design implementation research. For example, implementation strategies should be selected to minimize transaction costs, promote and maintain congruence between organizations’ dynamic internal and external contexts over time, and simultaneously attend to organizations’ financial needs while preserving their autonomy. We describe implications of applying organizational theory in implementation research for implementation strategies, the evaluation of implementation efforts, measurement, research design, theory, and practice. We also offer guidance to implementation researchers for applying organizational theory.


Translational behavioral medicine | 2017

Researcher readiness for participating in community-engaged dissemination and implementation research: a conceptual framework of core competencies

Christopher M. Shea; Tiffany L. Young; Byron J. Powell; Catherine L. Rohweder; Zoe Enga; Jennifer Elissa Scott; Lori Carter-Edwards; Giselle Corbie-Smith

Participating in community-engaged dissemination and implementation (CEDI) research is challenging for a variety of reasons. Currently, no specific guidance or tool is available for researchers to assess their readiness to conduct CEDI research. We propose a conceptual framework that identifies detailed competencies for researchers participating in CEDI and maps these competencies to domains. The framework is a necessary step toward developing a CEDI research readiness survey that measures a researcher’s attitudes, willingness, and self-reported ability for acquiring the knowledge and performing the behaviors necessary for effective community engagement. The conceptual framework for CEDI competencies was developed by a team of eight faculty and staff affiliated with a university’s Clinical and Translational Science Award (CTSA). The authors developed CEDI competencies by identifying the attitudes, knowledge, and behaviors necessary for carrying out commonly accepted CE principles. After collectively developing an initial list of competencies, team members individually mapped each competency to the single domain that provided the best fit. Following the individual mapping, the group held two sessions in which the sorting preferences were shared and discrepancies were discussed until consensus was reached. During this discussion, modifications to the wording of competencies and domains were made as needed. The team then engaged five community stakeholders to review and modify the competencies and domains.

The CEDI framework consists of 40 competencies organized into nine domains: perceived value of CE in D&I research, introspection and openness, knowledge of community characteristics, appreciation for stakeholders’ experience with and attitudes toward research, preparing the partnership for collaborative decision-making, collaborative planning for the research design and goals, communication effectiveness, equitable distribution of resources and credit, and sustaining the partnership. Delineation of CEDI competencies advances the broader CE principles and D&I research goals found in the literature and facilitates development of readiness assessments tied to specific training resources for researchers interested in conducting CEDI research.

Collaboration


Dive into Christopher M. Shea's collaboration.

Top Co-Authors

Kea Turner, University of North Carolina at Chapel Hill
Stefanie P. Ferreri, University of North Carolina at Chapel Hill
Byron J. Powell, University of North Carolina at Chapel Hill
Kristin L. Reiter, University of North Carolina at Chapel Hill
Sarah A. Birken, University of North Carolina at Chapel Hill
Catherine L. Rohweder, University of North Carolina at Chapel Hill
Charles M. Belden, University of North Carolina at Chapel Hill
Chelsea Renfro, University of Tennessee Health Science Center
David Reed, University of North Carolina at Chapel Hill