Publication


Featured research published by Patricia J. Rogers.


Evaluation | 2003

Teaching People to Fish? Building the Evaluation Capability of Public Sector Organizations

Bron McDonald; Patricia J. Rogers; Bruce Kefford

In response to an increasing demand for public sector accountability, many government agencies have sought to develop their internal evaluation capabilities. Often these efforts have focused on increasing the capacity to supply credible evaluations, yet addressing demand is just as important. This article focuses on a government agency and tracks its five-year journey towards developing such a capability. It documents contextual matters, drivers for change, the actions taken by the agency, and its response to emergent challenges during four phases. Based on feedback from project staff and managers and those involved in the capability development project, it offers seven recommendations. These are: start small and grow evaluation; address both supply and demand; work top-down and bottom-up simultaneously; use a theory of behaviour change; develop a common evaluation framework, including a generic programme theory; build knowledge of what works within the agency's context; and systematically and visibly evaluate each stage.


Journal of Development Effectiveness | 2009

Matching impact evaluation design to the nature of the intervention and the purpose of the evaluation

Patricia J. Rogers

Appropriate impact evaluation design requires situational responsiveness – matching the design to the needs, constraints, and opportunities of the particular case. The design needs to reflect the nature of the intervention and the purposes of the impact evaluation. In particular, impact evaluation needs to address simple, complicated, and complex aspects of the intervention. Simple aspects can be tightly specified and standardised; complicated aspects work as part of a causal package; complex aspects are appropriately dynamic and adaptive. Different designs are recommended for each case, including randomised controlled trials (RCTs), regression discontinuity designs, unstructured community interviews, Participatory Performance Story Reporting, and developmental evaluation.


Evaluation Review | 2009

Projected Sustainability of Innovative Social Programs

Riki Savaya; Gerald R. Elsworth; Patricia J. Rogers

This study is an exploratory examination of the projected sustainability of more than 100 projects funded by the Australian government. Using data collected by the body that evaluated the projects and data from a government database, it examines the predictors of various forms of sustainability. Findings show that some two thirds of the project leaders who expected their programs to continue after the expiration of the initial funding expected them to continue with the same activities and target population; almost half envisioned them diversifying to new activities, target groups, or locations. Auspice organization involvement increased the expectation that the project would be continued, project effectiveness decreased that expectation, and diversity of initial funding became less important as other sources of support and sustainability were taken into consideration.


Evaluation and Program Planning | 1995

Improving the effectiveness of evaluations: Making the link to organizational theory

Patricia J. Rogers; Gary Hough

Most approaches to evaluation are based on unexamined assumptions about how organizations work, assumptions which do not adequately describe much important activity in organizations. Because of this, many program evaluations ignore important elements in program operation. In addition, many evaluations fail to be effectively implemented or used because of their deficient theories about how organizations implement changes to programs. This paper explores the implications for evaluation practice of using five different perspectives on organizations, drawing on the four models of social program implementation developed by Elmore (1978). It illustrates how many popular approaches to evaluation, including utilization-focused evaluation, performance indicators, and fourth-generation evaluation, as well as key approaches to meta-evaluation, assume that organizations operate exclusively in a particular way. It argues that evaluation will only really be effective when its focus, methods, and management reflect realistic assumptions about how organizations work.


Evaluation and Program Planning | 2009

Qualitative cost-benefit evaluation of complex, emergent programs

Patricia J. Rogers; Kaye Stevens; Jonathan Boymal

This paper discusses a methodology used for a qualitative cost-benefit evaluation of a complex, emergent program. Complex, emergent programs, where implementation varies considerably over time and across sites to respond to local needs and opportunities, present challenges to conventional methods for cost-benefit evaluation. Such programs are characterized by: ill-defined boundaries of what constitutes the intervention, and hence the resources used; non-standardized procedures; differing short-term outcomes across projects, even when they share the same long-term goals; and outcomes that are the result of multiple factors and co-production, making counterfactual approaches to attribution inadequate and the use of standardized outcome measures problematic. The paper discusses the advantages and limitations of this method and its implications for cost-benefit evaluation of complex programs.


BMC Medical Research Methodology | 2015

Development, inter-rater reliability and feasibility of a checklist to assess implementation (Ch-IMP) in systematic reviews: the case of provider-based prevention and treatment programs targeting children and youth

Margaret Cargo; Ivana Stankov; James Thomas; Michael Saini; Patricia J. Rogers; Evan Mayo-Wilson; Karin Hannes

Background: Several papers report deficiencies in the reporting of information about the implementation of interventions in clinical trials. Information about implementation is also required in systematic reviews of complex interventions to facilitate the translation and uptake of evidence on provider-based prevention and treatment programs. To capture whether and how implementation is assessed within systematic effectiveness reviews, we developed a checklist for implementation (Ch-IMP) and piloted it in a cohort of reviews on provider-based prevention and treatment interventions for children and young people. This paper reports on the inter-rater reliability, feasibility and reasons for discrepant ratings.

Methods: Checklist domains were informed by a framework for program theory; items within domains were generated from a literature review. The checklist was pilot-tested on a cohort of 27 effectiveness reviews targeting children and youth. Two raters independently extracted information on 47 items. Inter-rater reliability was evaluated using percentage agreement and unweighted kappa coefficients. Reasons for discrepant ratings were content analysed.

Results: Kappa coefficients ranged from 0.37 to 1.00 and were not influenced by one-sided bias. Most kappa values were classified as excellent (n = 20) or good (n = 17), with a few items categorised as fair (n = 7) or poor (n = 1). Prevalence-adjusted kappa coefficients indicate good or excellent agreement for all but one item. Four areas contributed to scoring discrepancies: 1) clarity or sufficiency of information provided in the review; 2) information missed in the review; 3) issues encountered with the tool; and 4) issues encountered at the review level. Use of the tool demands a time investment, and it requires adjustment to improve its feasibility for wider use.

Conclusions: The case of provider-based prevention and treatment interventions showed the relevance of developing and piloting the Ch-IMP as a useful tool for assessing the extent to which systematic reviews assess the quality of implementation. The checklist could be used by authors and editors to improve the quality of systematic reviews, and shows promise as a pedagogical tool to facilitate the extraction and reporting of implementation characteristics.
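
As a rough illustration of the reliability statistics reported above, the sketch below computes percentage agreement and an unweighted Cohen's kappa for two raters' item-level codes. It is not the authors' actual analysis: the function names and the ratings are hypothetical placeholders, and the formula used is the standard kappa = (p_o - p_e) / (1 - p_e), with chance agreement estimated from each rater's marginal code frequencies.

from collections import Counter

def percent_agreement(rater_a, rater_b):
    # Proportion of items on which the two raters gave the same code.
    matches = sum(a == b for a, b in zip(rater_a, rater_b))
    return matches / len(rater_a)

def cohens_kappa(rater_a, rater_b):
    # Unweighted Cohen's kappa: (p_o - p_e) / (1 - p_e), where p_o is the
    # observed agreement and p_e is the agreement expected by chance from
    # each rater's marginal code frequencies.
    n = len(rater_a)
    p_o = percent_agreement(rater_a, rater_b)
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    categories = set(freq_a) | set(freq_b)
    p_e = sum((freq_a[c] / n) * (freq_b[c] / n) for c in categories)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical item-level codes from two raters (e.g. "yes"/"no"/"unclear");
# placeholder data only, not drawn from the Ch-IMP pilot.
rater_1 = ["yes", "yes", "no", "unclear", "yes", "no", "no", "yes"]
rater_2 = ["yes", "no", "no", "unclear", "yes", "no", "yes", "yes"]

print(f"Percentage agreement: {percent_agreement(rater_1, rater_2):.2f}")
print(f"Unweighted kappa:     {cohens_kappa(rater_1, rater_2):.2f}")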


IDS Bulletin | 2014

Developing a Research Agenda for Impact Evaluation in Development

Patricia J. Rogers; Greet Peersman

This article sets out what would be required to develop a research agenda for impact evaluation. It begins by explaining why it is needed and what process it would involve. It outlines four areas where research is needed – the enabling environment, practice, products and impacts. It reviews the different research methods that can be used to research impact evaluation and argues for particular attention to detailed, theory‐informed, mixed‐method comparative case studies of the actual processes and impacts of impact evaluation. It explores some examples of research questions that would be valuable to focus on and how they might be addressed. Finally, it makes some suggestions about the process that is needed to create a formal and collaborative research agenda.


American Journal of Evaluation | 1999

Book Review: Realistic Evaluation

Patricia J. Rogers



Evaluation | 2016

Using realist action research for service redesign

Gill Westhorp; Kaye Stevens; Patricia J. Rogers

This case demonstrates the integration of realist action research and co-design to address the complex social problem of long-term reliance on welfare benefits. Realist action research combines a realist philosophy of science, and the questions that flow from it, with an action research cycle. Realist approaches to evaluation and planning seek to explain for whom, in what contexts, and how impacts are generated or might be generated. Action research seeks to solve real-world problems, trialling solutions until a 'best fit' solution is reached. The article describes the principles underpinning the methodology and the research cycles through which the project worked: situation analysis; prioritizing; co-design; and trialling and further refining ideas for change. It demonstrates the development and testing of program theory for one service innovation. It also reflects on the experience and potential benefits of this approach.


Evaluation | 2008

Using Programme Theory to Evaluate Complicated and Complex Aspects of Interventions

Patricia J. Rogers

Collaboration


Dive into Patricia J. Rogers's collaborations.

Top Co-Authors

Margaret Cargo

University of South Australia

Ivana Stankov

University of South Australia

Karin Hannes

Katholieke Universiteit Leuven

Gill Westhorp

Charles Darwin University

Bron McDonald

Melbourne Institute of Technology
