Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Cara C. Lewis is active.

Publication


Featured research published by Cara C. Lewis.


Implementation Science | 2017

Toward criteria for pragmatic measurement in implementation research and practice: a stakeholder-driven approach using concept mapping

Byron J. Powell; Cameo Stanick; Heather Halko; Caitlin N. Dorsey; Bryan J. Weiner; Melanie Barwick; Laura J. Damschroder; Michel Wensing; Luke Wolfenden; Cara C. Lewis

Background: Advancing implementation research and practice requires valid and reliable measures of implementation determinants, mechanisms, processes, strategies, and outcomes. However, researchers and implementation stakeholders are unlikely to use measures if they are not also pragmatic. The purpose of this study was to establish a stakeholder-driven conceptualization of the domains that comprise the pragmatic measure construct. It built upon a systematic review of the literature and semi-structured stakeholder interviews that generated 47 criteria for pragmatic measures, and aimed to further refine that set of criteria by identifying conceptually distinct categories of the pragmatic measure construct and providing quantitative ratings of the criteria’s clarity and importance.
Methods: Twenty-four stakeholders with expertise in implementation practice completed a concept mapping activity wherein they organized the initial list of 47 criteria into conceptually distinct categories and rated their clarity and importance. Multidimensional scaling, hierarchical cluster analysis, and descriptive statistics were used to analyze the data.
Findings: The 47 criteria were meaningfully grouped into four distinct categories: (1) acceptable, (2) compatible, (3) easy, and (4) useful. Average ratings of clarity and importance at the category and individual criteria level will be presented.
Conclusions: This study advances the field of implementation science and practice by providing clear and conceptually distinct domains of the pragmatic measure construct. Next steps will include a Delphi process to develop consensus on the most important criteria and the development of quantifiable pragmatic rating criteria that can be used to assess measures.
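The analysis pipeline described in this abstract (stakeholders sort criteria into piles, then multidimensional scaling and hierarchical cluster analysis group the criteria) can be illustrated with a minimal Python sketch. The sort data, the number of piles, and the choice of four clusters below are simulated assumptions for illustration, not the study's data or code.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster

rng = np.random.default_rng(0)
n_criteria, n_sorters = 47, 24

# Simulated sort data: each sorter assigns every criterion to one of six piles.
sorts = rng.integers(0, 6, size=(n_sorters, n_criteria))

# Co-occurrence similarity: fraction of sorters who placed criteria i and j
# in the same pile; distance is its complement.
similarity = np.zeros((n_criteria, n_criteria))
for s in sorts:
    similarity += (s[:, None] == s[None, :])
similarity /= n_sorters
distance = 1.0 - similarity

# Two-dimensional point map from the distance matrix via multidimensional scaling.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(distance)

# Hierarchical (Ward) clustering of the point map into four clusters,
# mirroring the four categories reported in the study.
clusters = fcluster(linkage(coords, method="ward"), t=4, criterion="maxclust")
print(np.bincount(clusters)[1:])  # number of criteria per cluster
```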


Implementation Science | 2017

A content analysis of dissemination and implementation science resource initiatives: what types of resources do they offer to advance the field?

Doyanne Darnell; Caitlin N. Dorsey; Abigail Melvin; Jonathan Chi; Aaron R. Lyon; Cara C. Lewis

Background: The recent growth in organized efforts to advance dissemination and implementation (D & I) science suggests a rapidly expanding community focused on the adoption and sustainment of evidence-based practices (EBPs). Although promising for the D & I of EBPs, the proliferation of initiatives is difficult for any one individual to navigate and summarize. Such proliferation may also result in redundant efforts or missed opportunities for participation and advancement. A review of existing D & I science resource initiatives and their unique merits would be a significant step for the field. The present study aimed to describe the global landscape of these organized efforts to advance D & I science.
Methods: We conducted a content analysis between October 2015 and March 2016 to examine resources and characteristics of D & I science resource initiatives using public, web-based information. Included resource initiatives must have engaged in multiple efforts to advance D & I science beyond conferences, offered D & I science resources, and provided content in English. The sampling method included an Internet search using D & I terms and inquiry among internationally representative D & I science experts. Using a coding scheme based on a priori and grounded approaches, two authors consensus coded website information including interactive and non-interactive resources and information regarding accessibility (membership, cost, competitive application, and location).
Results: The vast majority (83%) of resource initiatives offered at least one of seven interactive resources (consultation/technical assistance, mentorship, workshops, workgroups, networking, conferences, and social media) and one of six non-interactive resources (resource library, news and updates from the field, archived talks or slides, links pages, grant writing resources, and funding opportunities). Non-interactive resources were most common, with some appearing frequently across resource initiatives (e.g., news and updates from the field).
Conclusion: Findings generated by this study offer insight into what types of D & I science resources exist and what new resources may have the greatest potential to make a unique and needed contribution to the field. Additional interactive resources may benefit the field, particularly mentorship opportunities and resources that can be accessed virtually. Moving forward, it may be useful to consider strategic attention to the core tenets of D & I science put forth by Glasgow and colleagues to most efficiently and effectively advance the field.


Frontiers in Public Health | 2018

From Classification to Causality: Advancing Understanding of Mechanisms of Change in Implementation Science

Cara C. Lewis; Predrag Klasnja; Byron J. Powell; Aaron R. Lyon; Leah Tuzzio; Salene M. W. Jones; Callie Walsh-Bailey; Bryan J. Weiner

Background: The science of implementation has offered little toward understanding how different implementation strategies work. To improve outcomes of implementation efforts, the field needs precise, testable theories that describe the causal pathways through which implementation strategies function. In this perspective piece, we describe a four-step approach to developing causal pathway models for implementation strategies.
Building causal models: First, it is important to ensure that implementation strategies are appropriately specified. Some strategies in published compilations are well defined but may not be specified in terms of their core components that can have a reliable and measurable impact. Second, linkages between strategies and mechanisms need to be generated. Existing compilations do not offer mechanisms by which strategies act, or the processes or events through which an implementation strategy operates to affect desired implementation outcomes. Third, it is critical to identify the proximal and distal outcomes the strategy is theorized to impact, with the former being direct, measurable products of the strategy and the latter being one of eight implementation outcomes (1). Finally, articulating effect modifiers, like preconditions and moderators, allows for an understanding of where, when, and why strategies have an effect on outcomes of interest.
Future directions: We argue for greater precision in the use of terms for factors implicated in implementation processes; development of guidelines for selecting research designs and study plans that account for practical constructs and allow for the study of mechanisms; psychometrically strong and pragmatic measures of mechanisms; and more robust curation of evidence for knowledge transfer and use.
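As a purely illustrative aid (not drawn from the paper), the four-step causal pathway specification above can be encoded as a simple data structure. Every field name and example value below is a hypothetical assumption.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class CausalPathway:
    strategy: str                   # step 1: a precisely specified implementation strategy
    mechanism: str                  # step 2: the process through which the strategy operates
    proximal_outcomes: List[str]    # step 3a: direct, measurable products of the strategy
    distal_outcome: str             # step 3b: the implementation outcome ultimately targeted
    preconditions: List[str] = field(default_factory=list)  # step 4: what must hold for the strategy to act
    moderators: List[str] = field(default_factory=list)     # step 4: factors that change the size of its effect

# Hypothetical example, for illustration only.
example = CausalPathway(
    strategy="audit and feedback",
    mechanism="increased awareness of one's own performance gap",
    proximal_outcomes=["clinician knowledge of own fidelity scores"],
    distal_outcome="fidelity",
    preconditions=["timely access to performance data"],
    moderators=["clinician openness to feedback"],
)
print(example.strategy, "->", example.mechanism, "->", example.distal_outcome)
```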


BMC Research Notes | 2018

Implementing measurement based care in community mental health: a description of tailored and standardized methods

Cara C. Lewis; Ajeng Puspitasari; Meredith R. Boyd; Kelli Scott; Brigid R. Marriott; Mira Hoffman; Elena Navarro; Hannah Kassab

Objective: Although tailored implementation methods are touted as superior to standardized ones, few researchers have directly compared the two, and little guidance exists regarding the specific details of each method. Our study compares these methods in a dynamic cluster randomized trial seeking to optimize implementation of measurement based care (MBC) for depression in community behavioral health. This manuscript provides a detailed, replicable account of the components of each multi-faceted implementation method.
Results: The standardized best-practice method includes training, consultation, a clinical guideline, and electronic health record enhancements, with the goal of optimizing the delivery of MBC with fidelity. Conversely, the tailored, customized, and collaborative method is informed by recent implementation science advancements and begins with a needs assessment, followed by tailored training that feeds barriers data back to clinicians, the formation of an implementation team, a clinician-driven, clinic-specific guideline, and the use of fidelity data to inform implementation team activities; the goal of the tailored condition is to ensure that the intervention and implementation strategies address unique factors of the context. The description of these methods will inform others seeking to implement MBC, as well as those planning to use standardized or tailored implementation methods for interventions beyond behavioral health.


Systematic Reviews | 2018

An updated protocol for a systematic review of implementation-related measures

Cara C. Lewis; Kayne D. Mettert; Caitlin N. Dorsey; Ruben G. Martinez; Bryan J. Weiner; Elspeth Nolen; Cameo Stanick; Heather Halko; Byron J. Powell

Background: Implementation science is the study of strategies used to integrate evidence-based practices into real-world settings (Eccles and Mittman, Implement Sci. 1(1):1, 2006). Central to the identification of replicable, feasible, and effective implementation strategies is the ability to assess the impact of contextual constructs and intervention characteristics that may influence implementation, but several measurement issues make this work quite difficult. For instance, it is unclear which constructs have no measures and which measures have any evidence of psychometric properties like reliability and validity. As part of a larger set of studies to advance implementation science measurement (Lewis et al., Implement Sci. 10:102, 2015), we will complete systematic reviews of measures that map onto the Consolidated Framework for Implementation Research (Damschroder et al., Implement Sci. 4:50, 2009) and the Implementation Outcomes Framework (Proctor et al., Adm Policy Ment Health. 38(2):65-76, 2011), the protocol for which is described in this manuscript.
Methods: Our primary databases will be PubMed and Embase. Our search strings will comprise five levels: (1) the outcome or construct term; (2) terms for measure; (3) terms for evidence-based practice; (4) terms for implementation; and (5) terms for mental health. Two trained research specialists will independently review all titles and abstracts, followed by full-text review for inclusion. The research specialists will then conduct measure-forward searches using the “cited by” function to identify all published empirical studies using each measure. The measure and associated publications will be compiled in a packet for data extraction. Data relevant to our Psychometric and Pragmatic Evidence Rating Scale (PAPERS) will be independently extracted and then rated using a worst-score-counts methodology reflecting “poor” to “excellent” evidence.
Discussion: We will build a centralized, accessible, searchable repository through which researchers, practitioners, and other stakeholders can identify psychometrically and pragmatically strong measures of implementation contexts, processes, and outcomes. By facilitating the employment of psychometrically and pragmatically strong measures identified through this systematic review, the repository would enhance the cumulativeness, reproducibility, and applicability of research findings in the rapidly growing field of implementation science.
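Two mechanics in this protocol, the five-level search string and the worst-score-counts rating, lend themselves to a short sketch. The code below is an assumption about how such logic might look; the search terms, criterion names, and 0-4 scoring are illustrative and are not the PAPERS specification or the authors' actual search strategy.

```python
def build_search_string(levels):
    """OR the terms within each level, then AND the levels together."""
    return " AND ".join("(" + " OR ".join(terms) + ")" for terms in levels)

search = build_search_string([
    ["acceptability"],                     # 1. outcome or construct term
    ["measure", "instrument", "scale"],    # 2. terms for measure
    ["evidence-based practice"],           # 3. terms for evidence-based practice
    ["implementation"],                    # 4. terms for implementation
    ["mental health"],                     # 5. terms for mental health
])
print(search)

# "Worst score counts": a measure's rating on each criterion is taken as the
# lowest score observed across the studies that used it (0 = poor ... 4 = excellent).
def worst_score_counts(ratings_by_study):
    criteria = ratings_by_study[0].keys()
    return {c: min(study[c] for study in ratings_by_study) for c in criteria}

print(worst_score_counts([
    {"internal_consistency": 3, "structural_validity": 2},
    {"internal_consistency": 4, "structural_validity": 1},
]))
```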


Psychotherapy Research | 2018

A meta-analysis of the effect of therapist experience on outcomes for clients with internalizing disorders

Lucia M. Walsh; McKenzie K. Roddy; Kelli Scott; Cara C. Lewis; Amanda Jensen-Doss

Objective: This meta-analysis synthesized the literature regarding the effect of therapist experience on internalizing client outcomes to evaluate the utility of lay providers in delivering treatment and to inform therapist training.
Method: The analysis included 22 studies, contributing 208 effect sizes. Study and client characteristics were coded to examine moderators. We conducted subgroup meta-analyses examining the relationship of therapist experience across a diverse set of internalizing client outcomes.
Results: Results demonstrated a small but significant relationship between therapist experience and internalizing client outcomes. There was no relationship between therapist experience and outcomes in clients with primary anxiety disorders. In samples of clients with primary depressive disorders and in samples of clients with mixed internalizing disorders, there was a significant relationship between experience and outcomes. The relationship between therapist experience and outcomes was stronger when clients were randomized to therapists, when treatment was not manualized, and for measures of client satisfaction and “other” outcomes (e.g., dropout).
Conclusions: It appears that therapist experience may matter for internalizing clients under certain circumstances, but this relationship is modest. Continuing methodological concerns in the literature are noted, as well as recommendations to address these concerns.
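For readers unfamiliar with how effect sizes like these are pooled, the sketch below shows a standard DerSimonian-Laird random-effects model in Python. It is not the authors' analysis code, and the effect sizes and variances are invented for illustration.

```python
import numpy as np

def random_effects_pool(effects, variances):
    """Return the pooled effect, its standard error, and tau^2 (DerSimonian-Laird)."""
    effects, variances = np.asarray(effects, float), np.asarray(variances, float)
    w = 1.0 / variances                        # fixed-effect (inverse-variance) weights
    fixed = np.sum(w * effects) / np.sum(w)
    q = np.sum(w * (effects - fixed) ** 2)     # Cochran's Q
    df = len(effects) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)              # between-study variance
    w_star = 1.0 / (variances + tau2)          # random-effects weights
    pooled = np.sum(w_star * effects) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2

# Made-up example: four standardized mean differences and their variances.
pooled, se, tau2 = random_effects_pool(
    effects=[0.10, 0.25, 0.05, 0.30], variances=[0.02, 0.03, 0.01, 0.04])
print(f"pooled d = {pooled:.2f} (SE {se:.2f}), tau^2 = {tau2:.3f}")
```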


Implementation Science | 2018

Proceedings of the Fourth Biennial Conference of the Society for Implementation Research Collaboration (SIRC) 2017: implementation mechanisms: what makes implementation work and why? part 1

Cara C. Lewis; Cameo Stanick; Aaron R. Lyon; Doyanne Darnell; Jill Locke; Ajeng Puspitasari; Brigid R. Marriott; Caitlin N. Dorsey; Madeline Larson; Carrie B. Jackson; Jordan Thayer; Callie Walsh Bailey; Rebecca Lengnick-Hall; Shannon Dorsey; Sara J. Landes



Implementation Science | 2017

The creation and validation of the Measure of Effective Attributes of Trainers (MEAT)

Meredith R. Boyd; Cara C. Lewis; Kelli Scott; Anne C. Krendl; Aaron R. Lyon

Background: Training is a core component in the implementation of empirically supported treatments, especially in the case of psychosocial interventions targeting mental illness. However, common forms of training are relatively ineffective in producing behavioral changes in providers. Trainers are in a strategic position to influence the success of training, but no research, to our knowledge, has explored whether personal characteristics of trainers (e.g., enthusiasm, charisma) increase the effectiveness of training in empirically supported treatments in the field of mental health. To address this gap, the current study created a measure of trainer characteristics, the Measure of Effective Attributes of Trainers (MEAT), and assessed preliminary evidence for its reliability and validity by following gold-standard measure development procedures.
Methods: Measure development consisted of three steps: (1) an initial pool of items was generated based on extant literature, input from the target population, and expert input; (2) target users of the measure interacted with the initial item pool to ensure face validity as well as clarity of measure instructions, response options, and items; and (3) a convenience sample viewed training videos and completed the measure resulting from step 2 to establish preliminary evidence of reliability and validity. An exploratory factor analysis was performed on the measure to determine whether latent factors (i.e., subscales of characteristics) underlie the data.
Results: The final solution consisted of two factors that demonstrated preliminary evidence for structural validity of the measure. The first factor, labeled “Charisma,” contained items related to characteristics that facilitate a positive personal relationship with the trainee (e.g., friendly, warm), and the second factor, labeled “Credibility,” contained items related to characteristics that emphasize the qualifications of the trainer (e.g., professional, experienced). There was also evidence for face validity, content validity, reliability, and known-groups validity of the measure.
Conclusions: The MEAT demonstrated preliminary evidence of key psychometric properties. Future research is needed to further explore and contribute to its psychometric evidence, which could be done in conjunction with measures of trainee knowledge, attitudes towards empirically supported treatments, and evaluations of trainee behavior change to delineate key characteristics of trainers to be leveraged for more effective training.
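The exploratory factor analysis step can be sketched as follows. This is an illustrative simulation, not the MEAT data or the authors' analysis; the two simulated latent attributes merely echo the “Charisma” and “Credibility” factors reported above, and all dimensions and loadings are assumptions.

```python
import numpy as np
from sklearn.decomposition import FactorAnalysis

rng = np.random.default_rng(0)
n_raters, n_items = 120, 10

# Simulate two latent trainer attributes, each driving half of the item ratings,
# plus measurement noise.
latent = rng.normal(size=(n_raters, 2))
loadings = np.zeros((2, n_items))
loadings[0, :5] = 0.8   # items 0-4 load on factor 1 ("Charisma"-like)
loadings[1, 5:] = 0.8   # items 5-9 load on factor 2 ("Credibility"-like)
ratings = latent @ loadings + rng.normal(scale=0.5, size=(n_raters, n_items))

# Fit a two-factor model with varimax rotation and inspect the loading matrix
# to see which items group together.
fa = FactorAnalysis(n_components=2, rotation="varimax", random_state=0).fit(ratings)
print(np.round(fa.components_, 2))  # 2 x n_items loadings
```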


Implementation Science | 2017

Psychometric assessment of three newly developed implementation outcome measures

Bryan J. Weiner; Cara C. Lewis; Cameo Stanick; Byron J. Powell; Caitlin N. Dorsey; Alecia S. Clary; Marcella H. Boynton; Heather Halko


Implementation Science | 2018

A methodology for generating a tailored implementation blueprint: an exemplar from a youth residential setting

Cara C. Lewis; Kelli Scott; Brigid R. Marriott

Collaboration


Dive into Cara C. Lewis's collaborations.

Top Co-Authors

Aaron R. Lyon (University of Washington)

Byron J. Powell (University of North Carolina at Chapel Hill)

Kelli Scott (Indiana University Bloomington)