
Publication


Featured research published by Jean A. King.


American Journal of Evaluation | 2009

Research on Evaluation Use: A Review of the Empirical Literature From 1986 to 2005

Kelli Johnson; Lija O. Greenseid; Stacie A. Toal; Jean A. King; Frances Lawrenz; Boris B. Volkov

This paper reviews empirical research on evaluation use published from 1986 to 2005, using Cousins and Leithwood’s 1986 framework for categorizing empirical studies of evaluation use. The literature review located 41 empirical studies of evaluation use conducted between 1986 and 2005 that met minimum quality standards, and the Cousins and Leithwood framework allowed a comparison over time. After initially grouping these studies according to Cousins and Leithwood’s two categories and twelve characteristics, one additional category and one new characteristic were added to their framework. The new category is stakeholder involvement, and the new characteristic is evaluator competence (under the category of evaluation implementation). Findings point to the importance of stakeholder involvement in facilitating evaluation use and suggest that engagement, interaction, and communication between evaluation clients and evaluators are critical to the meaningful use of evaluations.


American Journal of Evaluation | 2005

Establishing Essential Competencies for Program Evaluators

Laurie Stevahn; Jean A. King; Gail Ghere; Jane Minnema

This article presents a comprehensive taxonomy of essential competencies for program evaluators. First, the authors provide a rationale for developing evaluator competencies, along with a brief history of the initial development and validation of the taxonomy of essential evaluator competencies in King, Stevahn, Ghere, and Minnema (2001). Second, they present a revised version of that taxonomy and describe the revision process. Third, a crosswalk accompanying the taxonomy indicates which competencies address standards, principles, and skills endorsed by major evaluation associations in North America. Finally, the authors identify future needs related to the taxonomy, including the need for validation research, a shared understanding of terms, and the construction of descriptive rubrics for assessing competence.


Journal of Interprofessional Care | 2014

A scoping review of interprofessional collaborative practice and education using the lens of the Triple Aim

Barbara F. Brandt; May Nawal Lutfiyya; Jean A. King; Catherine Chioreso

The Triple Aim unequivocally connects interprofessional healthcare teams to the provision of better healthcare services that would eventually lead to improved health outcomes. This review of the interprofessional education (IPE) and collaborative practice empirical literature from 2008 to 2013 focused on the impact of this area of inquiry on the outcomes identified in the Triple Aim. The Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) methodology was employed, including: a clearly formulated question, clear inclusion criteria to identify relevant studies based on the question, an appraisal of the studies or a subset of the studies, a summary of the evidence using an explicit methodology, and an interpretation of the findings of the review. The initial search yielded 1176 published manuscripts, which were reduced to 496 when the inclusion criteria were applied. Despite a four-decade history of inquiry into IPE and/or collaborative practice, scholars have not yet demonstrated the impact of IPE and/or collaborative practice on simultaneously improving population health, reducing healthcare costs, or improving the quality of delivered care and patients’ experiences of care received. We propose moving this area of inquiry beyond theoretical assumptions to systematic research that will strengthen the evidence base for the effectiveness of IPE and collaborative practice within the context of the evolving imperative of the Triple Aim.


American Journal of Evaluation | 2001

Toward a Taxonomy of Essential Evaluator Competencies

Jean A. King; Laurie Stevahn; Gail Ghere; Jane Minnema

This article discusses an exploratory study designed to determine the extent to which evaluation professionals, representing diverse backgrounds and approaches, could reach agreement on a proposed taxonomy of essential evaluator competencies. Participants were 31 diverse individuals in the field of program evaluation in the greater Minneapolis-St. Paul, Minnesota area who systematically engaged in a Multi-Attribute Consensus Reaching process. Both quantitative and qualitative results predominantly indicated consensus on more than three-fourths of the proposed competencies. Areas of disagreement reflected the role- and context-specific nature of evaluation practice.


American Journal of Evaluation | 2006

A Professional Development Unit for Reflecting on Program Evaluator Competencies

Gail Ghere; Jean A. King; Laurie Stevahn; Jane Minnema

This article describes an interactive professional development unit that engages both novice and experienced evaluators in (a) learning about the Essential Competencies for Program Evaluators (ECPE), (b) applying the competencies to program evaluation contexts, and (c) using the ECPE to reflect on their own professional practices. The article briefly summarizes current issues about program evaluator competencies and the components of effective professional development. It then describes the ECPE; the objectives, content, and process of the professional development session; and the ECPE Self-Assessment Instrument. Facilitators can adapt and use the unit in a variety of settings, including university courses and program evaluation conferences.


Evaluation and Program Planning | 2009

The unique character of involvement in multi-site evaluation settings

Stacie A. Toal; Jean A. King; Kelli Johnson; Frances Lawrenz

As the number of large federal programs increases, so, too, does the need for a more complete understanding of how to conduct evaluations of such complex programs. The research literature has documented the benefits of stakeholder participation in smaller-scale program evaluations. However, given the scope and diversity of projects in multi-site program evaluations, traditional notions of participatory evaluation do not apply. The purpose of this research is to determine the ways in which stakeholders are involved in large-scale, multi-site STEM evaluations. This article describes the findings from a survey of 313 program leaders and evaluators and from follow-up interviews with 12 of these individuals. Findings from this study indicate that attendance at meetings and conferences, planning discussions within the project related to use of the program evaluation, and participation in data collection should be added to the list of activities that foster feelings of evaluation involvement among stakeholders. In addition, perceptions of involvement may vary according to breadth or depth of evaluation activities, but not always both. Overall, this study suggests that despite the contextual challenges of large, multi-site evaluations, it is feasible to build feelings of involvement among stakeholders.


Evaluation | 2005

Managing Conflict Constructively in Program Evaluation

Laurie Stevahn; Jean A. King

Evaluators almost inevitably experience conflict in the course of conducting evaluation studies. This article first presents two theoretical frameworks from social psychology - conflict strategies theory and constructive conflict resolution theory - useful for constructively managing conflict in evaluation settings. Second, we discuss theory-derived skills related to structuring cooperative goals and tasks in evaluation studies as well as how to use integrative negotiation procedures to address disputes that arise during the evaluation process. Finally, we explain how these theories can provide evaluators with a lens through which to analyze evaluation contexts, thereby helping them to make wise decisions for effective evaluation practice.


Journal of Teacher Education | 1983

Rethinking Teacher Recruitment.

Robert K. Wimpelberg; Jean A. King

The latest essay of the Carnegie Foundation for the Advancement of Teaching comes almost 90 years after the first comprehensive study of American schooling, known as the Report of the Committee of Ten on Secondary School Studies (1894). While the Carnegie report, Higher Learning in the Nation’s Service (Boyer & Hechinger, 1981), focuses primarily on the role of the college or university, it also addresses questions concerning secondary schooling that share striking similarities to the issues discussed in the 1894 committee report. Both critique the articulation be-


Journal of Teacher Education | 1987

The Uneasy Relationship between Teacher Education and the Liberal Arts and Sciences

Jean A. King

King examines the relationship between teacher educators and their colleagues in the liberal arts and sciences. Two reasons are identified regarding why liberal arts faculty historically have come to distrust or to be disdainful of teacher education programs. Further, the author discusses specific activities that liberal arts faculty members might engage in to improve teacher education, and she includes suggestions for increasing their participation in the teacher preparation process.


American Journal of Evaluation | 2008

Bringing Evaluative Learning to Life

Jean A. King

This excerpt from the opening plenary asks evaluators to consider two questions regarding learning and evaluation: (a) How do evaluators know if, how, when, and what people are learning during an evaluation? and (b) In what ways can evaluation be a learning experience? To answer the first question, evaluators can apply the commonplaces of evaluative learning, where, in a given evaluative context, the evaluator is a teacher, the clients/participants are students, and the process and results of the evaluation are the curriculum. To answer the second question, evaluators can consider two ideas for understanding evaluative learning: (a) evaluation for accountability and control and (b) evaluation for program development.

Collaboration


Jean A. King's top co-authors:

Bruce Thompson (Baylor College of Medicine)
Melvin M. Mark (Pennsylvania State University)
Boris B. Volkov (University of North Dakota)
Gail Ghere (University of Minnesota)
Jane Minnema (St. Cloud State University)