Michael Quinn Patton
Union Institute & University
Publication
Featured research published by Michael Quinn Patton.
The Modern Language Journal | 1992
Jill K. Welch; Michael Quinn Patton
PART ONE: CONCEPTUAL ISSUES IN THE USE OF QUALITATIVE METHODS. The Nature of Qualitative Inquiry; Strategic Themes in Qualitative Methods; Variety in Qualitative Inquiry: Theoretical Orientations; Particularly Appropriate Qualitative Applications.
PART TWO: QUALITATIVE DESIGNS AND DATA COLLECTION. Designing Qualitative Studies; Fieldwork Strategies and Observation Methods; Qualitative Interviewing.
PART THREE: ANALYSIS, INTERPRETATION, AND REPORTING. Qualitative Analysis and Interpretation; Enhancing the Quality and Credibility of Qualitative Analysis.
Evaluation | 1998
Michael Quinn Patton
Trying to figure out what’s really going on is, of course, a core function of evaluation. Part of such reality testing at a conference includes sorting out what our profession has become and is becoming, what our core disciplines are, and what issues deserve our attention. I have spent a good part of my evaluation career reflecting on these concerns, particularly from the point of view of use; for example, how to work with intended users to achieve intended uses, and how to distinguish the general community of stakeholders from primary users so as to work with them. In all of that work, and indeed through the first two editions of Utilization-Focused Evaluation (a period spanning 20 years), I have been engaging in evaluations with a focus on enhancing utility, both the amount and quality of use.

However, when I went to prepare the third edition of the book (Patton, 1997), and was trying to sort out what had happened in the field in the 10 years since the last edition, it occurred to me that I had missed something. I was struck by something which my own myopia had not allowed me to see before. When I have followed up my own evaluations over the years, I have asked intended users about actual use. What I would typically hear was something like: ‘Yes, the findings were helpful in this way and that, and here’s what we did with them’. If there had been recommendations, I would ask what subsequent actions, if any, followed. But, beyond the focus on findings and recommendations, what they almost inevitably added was something to the effect that ‘it wasn’t really the findings that were so important in the end, it was going through the process’. Consequently, I would reply: ‘That’s nice. I’m glad you appreciated the process, but what did you really do with the findings?’.

In reflecting on these interactions, I came to realize that the entire field has narrowly defined use as ‘use of findings’. Thus, we have not had ways to conceptualize or talk about what happens to people and organizations as a result of being involved in an evaluation process: what I have come to call ‘process use’.
Evaluation Practice | 1997
Michael Quinn Patton
In Too Many Daves (1961), Dr. Seuss tells of a Mrs. McCave who had 23 sons, all named Dave. This turned out to create some havoc in the McCave household for, when she wanted a particular Dave and called for him, all of them came running. Presumably to avoid this problem in the house of Evaluation, we have many different names for evaluation so that when we call for one, we get the one we want. At least, that’s the intent and hope. The challenge is that Mrs. Evaluation has birthed many more than 23 children. In Utilization-Focused Evaluation (Patton, 1997b, pp. 192-194), I contrast some 60 different types, and Scriven’s (1991) Evaluation Thesaurus has more still. Further complicating the situation is that some of these children appear to be twins, not identical twins, but sharing an awful lot of intellectual DNA. Others are merely cousins. And still others refuse to acknowledge that they are even part of the same family or household.

Moreover, definitional issues never quite get settled. As I am writing this, EvalTalk (the American Evaluation Association listserv) has yet another vigorous discussion going on trying to distinguish formative from summative evaluation. If confusion remains about that classic distinction, as it surely does, then what hope is there for more nuanced distinctions among alternative participatory orientations? Alas, the Sisyphean endeavor goes on, as it must. And that’s precisely the context for understanding this book and its contribution. Fetterman, Wandersman, and colleagues are herein answering the challenge to distinguish empowerment evaluation from close relatives like participatory, collaborative, inclusive, democratic, feminist, and emancipatory evaluation, among others. Earlier reviews of their writings, including my own (Patton, 1997a), questioned the conceptual meaningfulness and practical applications of these distinctions, a matter to which we shall return shortly.

The other question raised in early reviews was whether the idea, or at least the language, of empowerment evaluation had staying power. The phrase “empowerment evaluation” gained prominence in the lexicon of evaluation when Fetterman, as president of the American Evaluation Association, made it the theme of the association’s 1993 annual national conference in Dallas. At the time, it was not at all clear that the term empowerment could long coexist with the term evaluation, especially because the word empower stimulates the gag reflex in many, like calls to be “proactive” or to “liberate” (as in Iraq). Yet endure it has. It is hard to argue with Fetterman’s conclusion in the final chapter that during the course of the last decade, empowerment evaluation “has become a part of the intellectual landscape of evaluation” (p. 213). The book offers substantial and convincing evidence that empowerment evaluation has become a significant and important approach.

Its longevity and status established and documented, the question of precisely what it is becomes all the more important. Fetterman’s own basic definition of empowerment evaluation has remained consistent across his writings: “the use of evaluation concepts, techniques, and findings to foster improvement and self-determination” (Fetterman, 1994, p. 1; Fetterman, 2005, p. 10; Fetterman, Kaftarian, & Wandersman, 1996, p. 5). Of course, using evaluation processes for improvement was nothing new in 1993. It was the emphasis on fostering self-determination that was the defining (and controversial) niche of empowerment evaluation and the heart of its explicit political and social change agenda. In the 1996 volume edited with Wandersman and Kaftarian, Fetterman’s opening chapter elaborated five “facets” …
Evaluation Practice | 1996
Michael Quinn Patton
Patton continues the debate by identifying three arenas of evaluation practice in which the formative/summative dichotomy appears limited: knowledge-generating evaluations aimed at conceptual rather than instrumental use; developmental evaluation; and use of evaluation processes to support interventions or empower participants. In so doing, the essence of “evaluation” is more broadly defined, and the impact of harsh criticism on the listener is demonstrated through personal example.
Evaluation | 2002
Michael Quinn Patton
Over the relatively short history of professional evaluation, those working in the field have directed considerable attention to both a vision of democratic approaches to evaluation and practice wisdom about how to realize that vision. In Europe, the democratic evaluation model of Barry MacDonald (1987) stands out. He argued that ‘the democratic evaluator’ recognizes and supports value pluralism, with the consequence that the evaluator should seek to represent the full range of interests in the course of designing an evaluation. In that way an evaluator can support an informed citizenry, the sine qua non of strong democracy, by acting as an information broker between groups who want and need knowledge about each other. The democratic evaluator must make the methods and techniques of evaluation accessible to non-specialists, that is, the general citizenry. MacDonald’s democratic evaluator seeks to survey a range of interests by assuring confidentiality to sources, engaging in negotiation between interest groups, and making evaluation findings widely accessible. The guiding ethic is the public’s right to know.

Saville Kushner (2000) has carried forward, deepened and updated MacDonald’s democratic evaluation model. He sees evaluation as a form of personal expression and political action with a special obligation to be critical of those in power. He places the experiences of people in programs at the center of evaluation. The experiences and perceptions of the people, the supposed beneficiaries, are where, for Kushner, we will find the intersection of Politics (big P – Policy) and politics (small p – people). Much of evaluation these days (e.g. logic models, theories of action, outcomes evaluation) is driven by the need and desire to simplify and bring order to chaos. Kushner, in contrast, embraces chaos and complexity because democracy is complex and chaotic. He challenges the facile perspectives and bureaucratic imperatives that dominate much of current institutionally based evaluation practice. Over and over he returns to the people, to the children and teachers and parents, and the realities of their lives in program settings as they experience those realities. He elevates their judgments over professional and external judgments. He feels a special obligation to focus on …
Evaluation Practice | 1990
Michael Quinn Patton
Editor’s Note: On July 20, 1989, former AEA President Michael Quinn Patton delivered the keynote address to the Australasian Evaluation Society in Queensland, Australia. He also delivered a proclamation from the AEA inviting our Australasian colleagues to the AEA meeting which focused on New Perspectives From International and Cross-Cultural Evaluation. A shortened version of his keynote address is reprinted below with the permission of the Australasian Evaluation Society. Even though Patton’s focus is on evaluation in Australia, his points are also appropriate for us in America.
Evaluation | 2012
Michael Quinn Patton
Utilization-focused evaluation involves identifying and working with primary intended users to design and interpret an evaluation. This includes the process of working with primary intended users to render judgments about the extent to which the preponderance of evidence supports a meaningful and useful conclusion about the degree to which an intervention has affected observed outcomes and impacts. This is the essence of contribution analysis. Two in-depth examples illustrate this process.
Evaluation and Program Planning | 1980
Michael Quinn Patton
This article is a further discussion of methodological paradigms in evaluation research. More specifically, it is a response to the attacks on paradigmatic perspectives made by Reichardt and Cook in the opening chapter of their edited book Qualitative and Quantitative Methods in Evaluation Research. They “suggest that part of this current debate over qualitative and quantitative methods is not centered on productive issues and so is not being argued in as logical a fashion as it should be.” For better or worse, paradigm debates are, by their nature, only partly subject to logical analysis. Paradigms, methodological or otherwise, involve values, world view, empirical tendencies, and patterned responses. Because their central arguments rest entirely on a logical foundation, Reichardt and Cook may have done precisely what they accuse others of doing: “obscuring issues and unnecessarily creating schisms between the two method types.”
American Journal of Evaluation | 2014
Michael Quinn Patton
Theory and practice are integrated in the human brain. Situation recognition and response are key to this integration. Scholars of decision making and expertise have found that people with great expertise are more adept at situational recognition and intentional about their decision-making processes. Several interdisciplinary fields of inquiry provide insights into how we manage situation recognition in the face of complexity. Classic works on bounded rationality and satisficing, contingency theory, cognitive science, and decision sciences have been identifying how the brain processes information through conceptual screens to facilitate cutting through the messy, confusing, overwhelming chaos of the real world so that we can avoid analysis paralysis. This article presents six conceptual screens that, in combination, constitute a theory-to-practice situation recognition framework: (1) intended users’ contingencies; (2) nature of the evaluand; (3) evaluation purpose: findings use options; (4) process options; (5) context and situational contingencies; and (6) evaluator characteristics.
Science Communication | 1988
Michael Quinn Patton
Cooperative Extension can no longer be understood primarily in terms of technology transfer approaches. The future Cooperative Extension model incorporates educational, developmental, and problem-solving mandates with technology transfer through “issues programming.” This has important political and substantive implications for Extension. The final section of this article contrasts the issues programming approach to the more traditional technology transfer model and considers the implications of this shift for the future of Cooperative Extension.