Publication


Featured research published by Kathryn E. Newcomer.


International Journal of Public Administration | 2007

Measuring Government Performance

Kathryn E. Newcomer

Abstract: This article provides an overview of the origins and deliberations of performance measurement efforts in the U.S. government and draws on the American experience to identify the challenges and opportunities this management tool presents for public managers. The first section describes the political context that has shaped performance measurement efforts in the U.S. government. The second section reviews what we have learned about the challenges managers face in designing and using performance measurement, and the third section discusses current and evolving issues that affect the effectiveness of performance measurement efforts.


Evaluation and Program Planning | 2001

Opportunities for program evaluators to facilitate performance-based management

M.A. Scheirer; Kathryn E. Newcomer

Abstract: Relationships between program evaluation and performance measurement are receiving greater attention as federal, state, and local agencies are required to set objectives, then to collect data to document their results. This paper draws on interviews in 13 federal agencies to illustrate a framework showing the many potential uses of evaluation in support of performance management. Ten actions are suggested as opportunities for agency administrators and evaluators to bridge the current gaps between the potential and actual uses of evaluation.


Public Administration Review | 1994

Opportunities and Incentives for Improving Program Quality: Auditing and Evaluating

Kathryn E. Newcomer

The evolution of the Office of Inspector General (OIG) in the federal agencies has attracted much attention during the last ten years. Recent observers have pointed to the increased use of performance audits by IG offices, and the highly visible use of inspections by the OIG at the Department of Health and Human Services (HHS), as signals that the programmatic focus of the auditing community is merging with that of the program evaluation community (Light, 1993; Thompson and Yessian, 1991; Hendricks, Mangano, and Moran, 1990). Some observers have applauded this, while others have highlighted the crucial differences in the approaches and perspectives taken by the two, previously quite separate, communities. In his recent study of the OIGs, Paul Light commended the trend for OIGs to undertake forward-looking evaluation and analysis as a positive move. Light identified this move away from traditional auditing to identify fault in government as a possibly more effective way to monitor performance. Improving government performance through effective monitoring is the espoused objective of both auditors and evaluators. Recognizing that many ways exist to meet this objective, I undertook this study to investigate the strategies currently employed by Offices of Inspector General and evaluation offices within the federal agencies to assess government performance. During early 1992, interviews were held with managers in 52 IG offices and 28 evaluation offices to assess how similar or different the approaches taken by these offices really are.

Increasing Interest in the Offices of Inspector General

Much public attention has been given to the function of inspectors general in the federal government during the last 12 years. During the Reagan and Bush administrations, the theme of public distrust in big government was played out to the accompaniment of calls to ferret out fraud, waste, and abuse. Compliance accountability was the natural approach in such a climate (Light, 1993). Conformity with the rules and regulations was encouraged, and those violating them were to be sought out and punished. One of Reagan's most memorable lines was his stated desire to hire inspectors general who were "mean as junkyard dogs." Congress had established statutory inspectors general with much the same enthusiasm for a negative, sanction-based approach. The first statutory inspector general was established in 1976, and since then Congress has established inspectors general in over 60 federal agencies, 26 of whom are presidentially appointed. The majority of the offices were established with the Inspector General Act of 1978 and the Inspector General Act Amendments of 1988. The original statute reflected the mistrust held by Congress for executive leadership in the mid-1970s. The intent of the 1978 act was to establish a consolidated oversight office within each federal agency that would report directly to the Congress and would be vigilant against fraud and abuse in executive agency spending. The high level of interest shown by Congress in accountability continued into the 1980s and was heightened by concern about the huge, and growing, federal deficit. Competition for declining resources during the 1980s pressed the Congress to place an even greater emphasis upon the efforts of internal watchdogs that could identify budgetary fat for trimming.

Auditing and Evaluating: A Merging Focus?

During the 1980s, within the federal evaluation community, impatience with costly, time-consuming, methodologically impressive evaluations led to changing perceptions of how evaluation could best serve policy makers. Federal evaluation offices were shrinking in terms of resources, and an emphasis upon management-oriented, short-turnaround program reviews replaced earlier concerns with assessing program impacts. While the number of staff allocated to offices of inspectors general was growing, the number of evaluation office staff was declining. …


Public Performance & Management Review | 2007

How Does Program Performance Assessment Affect Program Management in the Federal Government?

Kathryn E. Newcomer

The term "program performance" has taken on a new priority and focuses attention on what programs deliver. Changing management culture to promote the use of program data when making changes to improve program design and delivery, or to reallocate resources, is recognized as a challenging reform goal. Managerial decision making is certainly affected by many factors. Ongoing communication and fruitful dialogue among program managers, evaluators, and oversight officials on what performance data demonstrate about the viability and success of programs enhance management in government. Learning how to use performance measurement effectively to promote collective learning will evolve slowly, and probably not in every public policy arena. In a fragmented service delivery system involving many players, efforts to improve communication and coordination are clearly beneficial, and conceptual learning is a very likely outcome.


American Journal of Evaluation | 2004

How Might We Strengthen Evaluation Capacity to Manage Evaluation Contracts?

Kathryn E. Newcomer

The laudatory goal of evaluation capacity building is quite timely, given the spotlight on the potential use of program evaluation in the current environment which emphasizes performance reporting and evidence-based policy. A question that quickly comes to mind is: Where do you start? A seemingly reasonable starting point is to assess organizational culture and ways of working to tailor a strategy for building both evaluation capacity and a sustainable appreciation of evaluation practice among line managers. The nature and need for evaluation capacity across federal agencies vary greatly, and should be appropriate for the specific missions and activities of each agency. A large proportion of the program evaluations undertaken in many federal agencies are actually performed by contractors. This paper describes the initial scoping and scouting efforts that took place at HHS and AID to inform agency staff trying to forge an agency-wide strategy for strengthening capacity to oversee contract evaluations. The efforts were undertaken to assess organizational culture and “ways of working” to inform efforts to bolster capacity to oversee evaluation contracts. The nature of the efforts undertaken within HHS and AID are first described. Four lessons learned are then offered to inform others who are initiating efforts to strengthen capacity to oversee evaluation contracts in public agencies.


Public Performance & Management Review | 2011

Public Performance Management Systems: Embedding Practices for Improved Success

Kathryn E. Newcomer; Sharon L. Caudle

Over the past three decades, public sector officials and managers have faced demands for improved policy and program decision-making, more efficient service delivery, and clear accountability. Countries worldwide have developed new or more robust performance tracking embedded in more powerful performance management systems, often as part of public management reforms. Drawing on the literature, observations from experience in early-adopting countries and with recent and current performance-based management systems in the United States, this article provides a list of practices or recommendations and a framework that adopters can use to better anticipate challenges and improve collective learning.


Organization Management Journal | 2010

Strategic transformation process: Toward purpose, people, process and power

Elizabeth B. Davis; James Edwin Kee; Kathryn E. Newcomer

Across the world, public and non-profit sector leaders face an extremely turbulent socio-political-economic environment. This environment creates additional risks and uncertainties for organizations and may hinder a leader's ability to act strategically. Addressing these complex, constantly evolving conditions requires leaders to develop processes that involve the organization's stakeholders and that create organizational conditions for self-generation, creativity, resilience, and action planning. In this paper we provide an organizational-level, integrative framework for the strategic transformation of public and non-profit organizations to assist leaders who are committed to effective stewardship of their organizations. The Strategic Transformation Process involves an intense dialogue among organizational stakeholders designed to create a new vision, negotiate priorities, minimize risk, and create action plans and a commitment for change.


American Journal of Evaluation | 2001

Tracking and Probing Program Performance: Fruitful Path or Blind Alley for Evaluation Professionals?

Kathryn E. Newcomer

Performance, results, outcomes, scorecards, and accountability are signifiers that currently frame discourse about public program service delivery (see, e.g., Schalock, 2001). These value-laden concepts focus the attention of managers of public and nonprofit programs on issues near and dear to the hearts of program evaluators. During the last three decades, the number of laws and executive directives that mandate performance measurement and reporting has increased at all levels of government, opening opportunities for evaluation professionals to play meaningful roles in these efforts (Scheirer & Newcomer, 2001). The challenges entailed in performance measurement, such as clarifying the logic that links program outputs with desired long term outcomes, and devising processes for verifying and validating performance data, have raised the hopes and expectations of evaluators that the technical assistance they can provide will be valued. Will this bull market for evaluation expertise continue into the future? And if so, what are the implications for the evaluation profession?


Journal of Public Affairs Education | 2010

Public Service Education: Adding Value in the Public Interest

Kathryn E. Newcomer; Heather Allen

Abstract: The goal of public service education is to prepare students to serve in the public interest. Educational outcome measurement is an important method in determining whether public service programs actually are achieving their intended objectives. This paper provides a "Model of Learning Outcomes for Public Service Education." This model builds on what we already know about outcome assessment, and elaborates on how public service education adds value to individuals, organizations, and governance. Key to this Model of Learning Outcomes for Public Service Education is what we term "enabling characteristics," or factors that mediate the relationship between short-term, intermediate, and longer-term outcomes in public service education. This process enables practitioners to assess their public service education programs and determine to what extent they add value to students, organizations, and governance. Ultimately, this Model of Learning Outcomes for Public Service Education can be used to improve public service education programs.


American Journal of Evaluation | 2016

Forging a Strategic and Comprehensive Approach to Evaluation Within Public and Nonprofit Organizations: Integrating Measurement and Analytics Within Evaluation

Kathryn E. Newcomer; Clinton T. Brass

The “performance movement” has been a subject of enthusiasm and frustration for evaluators. Performance measurement, data analytics, and program evaluation have been treated as different tasks, and those addressing them speak their own languages in their own circles. We suggest that situating performance measurement and data analytics within the broader field of evaluation would be theoretically parsimonious and fruitful. Scholars and practitioners of performance measurement and analytics may profitably use an evaluation mind-set and frame their tasks within the multidisciplinary field of evaluation practice. With this change in mind-set, we discuss some implications of viewing measurement, analytics, and other evaluation-related capacities within public organizations as part of an integrated, evaluation mission-support function. Working with other mission-support functions, evaluation capacity could be used by operating units to improve learning, strategy, and performance and better accomplish the mission. We outline steps that could be considered to help forge a more strategic and comprehensive approach to evaluation in public and nonprofit organizations.

Collaboration


Dive into Kathryn E. Newcomer's collaborations.

Top Co-Authors

James Edwin Kee
George Washington University

Joseph S. Wholey
University of Southern California

Laila El Baradei
American University in Cairo

Elizabeth B. Davis
George Washington University

Heather Allen
George Washington University

John Forrer
George Washington University