
Publication


Featured research published by Christina A. Christie.


American Journal of Evaluation | 2009

Evaluation Use: Results From a Survey of U.S. American Evaluation Association Members

Dreolin N. Fleischer; Christina A. Christie

This paper presents the results of a cross-sectional survey on evaluation use completed by 1,140 U.S. American Evaluation Association members. The study had three foci: to describe evaluators’ current attitudes, perceptions, and experiences related to evaluation use theory and practice; to compare these data with those reported in a previous study by Preskill and Caracelli (1997); and to identify characteristics that distinguish high endorsers of use items from others in the sample. Findings suggest a fair level of agreement on several dimensions of use, including stakeholder involvement, factors that influence use, and the varied roles of the evaluator. Logistic regression results indicated that external evaluators were less likely to be high item endorsers, while those who reported being members of the Evaluation Use Topical Interest Group of the American Evaluation Association were more likely to be high item endorsers.


American Journal of Evaluation | 2007

Reported Influence of Evaluation Data on Decision Makers’ Actions: An Empirical Examination

Christina A. Christie

Using a set of scenarios derived from actual evaluation studies, this simulation study examines the reported influence of evaluation information on decision makers’ potential actions. Each scenario described a context where one of three types of evaluation information (large-scale study data, case study data, or anecdotal accounts) is presented and a specific decision needs to be made. Participants were asked to indicate which type of data presented would influence their decision making. Results from 131 participants indicate that all three types of information influenced decisions, but that large-scale and case study data were more influential than anecdotal accounts; that certain types of evaluation data are more influential among certain groups of decision makers; and that choosing to use one type of evaluation data over the other two depends on the independent influence of the other types of evaluation data on the decision maker, as well as on prior beliefs about program efficacy.


American Journal of Evaluation | 2003

The User-Oriented Evaluator's Role in Formulating a Program Theory: Using a Theory-Driven Approach

Christina A. Christie; Marvin C. Alkin

Program theory plays a prominent role in many evaluations, not only in theory-driven evaluations. This paper presents a case study of the process of developing and refining a program’s theory within a user-oriented evaluation. In user-oriented (or utilization-focused) evaluations, primary users can play a role in defining their own program theory. This is different, however, from the typical process by which a program theory is developed when using a theory-driven evaluation framework. This case study concerns a university’s academic outreach program with three local school districts. The program’s objective is to increase the number of University of California eligible, and specifically University of California, Los Angeles (UCLA) admissible, students from targeted local public schools. The authors helped develop and refine the outreach staff’s program theory. The evaluation procedures are described, and the results of the theory-building process are presented.


American Journal of Evaluation | 2003

Learning About Evaluation Through Dialogue: Lessons From an Informal Discussion Group

Christina A. Christie; Mike Rose

Dialogue and discussion are fundamental to advancing the field of evaluation. Small informal discussion groups that focus on particular issues serve to promote productive dialogue about evaluation. This paper describes the dialogue of such a group. Because it is interactive, dynamic, and applied, we assert that the group is a personification of the nature of evaluation knowledge. We further argue that this group, and the way that it has been formulated, is particularly suitable for learning about evaluation, for socializing new people into the field, and for promoting continuing education.


American Journal of Evaluation | 2006

Appreciative Inquiry as a Method for Evaluation: An Interview with Hallie Preskill

Christina A. Christie

In this interview, Christina A. Christie speaks with Hallie Preskill about appreciative inquiry (AI) as a method for evaluation. Preskill describes AI, in use since the mid-1980s, as a participatory, collaborative approach to designing and implementing organizational change, and discusses its application to an AI evaluation of CEDT, an internal training function at a national laboratory in Albuquerque, New Mexico. [The original abstract was garbled in extraction; only this summary is recoverable.]


New Directions for Evaluation | 2003

What Guides Evaluation? A Study of How Evaluation Practice Maps onto Evaluation Theory

Christina A. Christie


Studies in Educational Evaluation | 2008

Evaluation theory tree re-examined

Christina A. Christie; Marvin C. Alkin


Archive | 2004

The 2004 Claremont Debate: Lipsey vs. Scriven Determining Causality in Program Evaluation & Applied Research: Should Experimental Evidence Be the Gold Standard?

Stewart I. Donaldson; Christina A. Christie


Studies in Educational Evaluation | 2004

Moving toward collaboration by creating a participatory internal-external evaluation team: A case study

Christina A. Christie; Rachel M. Ross; Brock M. Klein


American Journal of Evaluation | 2004

What’s All the Talk About? Examining EVALTALK, an Evaluation Listserv

Christina A. Christie; Tarek Azzam

Collaboration


Explore Christina A. Christie's collaborations.

Top Co-Authors

Dreolin N. Fleischer (Claremont Graduate University)

Stewart I. Donaldson (Claremont Graduate University)

Jean A. King (University of Minnesota)

Rachel M. Ross (Claremont Graduate University)

Tarek Azzam (Claremont Graduate University)