Carolyn J. Hill
Georgetown University
Publications
Featured research published by Carolyn J. Hill.
Journal of Research on Educational Effectiveness | 2008
Howard S. Bloom; Carolyn J. Hill; Alison Rebeck Black; Mark W. Lipsey
Two complementary approaches to developing empirical benchmarks for achievement effect sizes in educational interventions are explored. The first approach characterizes the natural developmental progress in achievement made by students from one year to the next as effect sizes. Data for seven nationally standardized achievement tests show large annual gains in the early elementary grades followed by gradually declining gains in later grades. A given intervention effect will therefore look quite different when compared to the annual progress for different grade levels. The second approach explores achievement gaps for policy-relevant subgroups of students or schools. Data from national- and district-level achievement tests show that, when represented as effect sizes, student gaps are relatively small for gender and much larger for economic disadvantage and race/ethnicity. For schools, the differences between weak schools and average schools are surprisingly modest when expressed as student-level effect sizes. A given intervention effect viewed in terms of its potential for closing one of these performance gaps will therefore look very different depending on which gap is considered.
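The annual-gain benchmark described in this abstract can be sketched as a standardized mean difference between adjacent grades. The function and the scale scores below are illustrative assumptions, not figures from the paper:

```python
import math

def annual_gain_effect_size(mean_g, sd_g, mean_g1, sd_g1):
    """Standardized mean gain from grade g to grade g+1:
    the raw gain divided by the pooled standard deviation
    of the two grades' score distributions."""
    pooled_sd = math.sqrt((sd_g ** 2 + sd_g1 ** 2) / 2)
    return (mean_g1 - mean_g) / pooled_sd

# Hypothetical scale scores for two adjacent early grades.
es = annual_gain_effect_size(mean_g=450.0, sd_g=40.0,
                             mean_g1=490.0, sd_g1=40.0)
print(round(es, 2))  # 1.0
```

Expressed this way, a program effect of, say, 0.20 standard deviations can be read against the typical year-to-year growth at that grade level, which is the paper's point about benchmarks varying by grade.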
The Review of Economics and Statistics | 2004
Charles Michalopoulos; Howard S. Bloom; Carolyn J. Hill
This paper assesses nonexperimental estimators using results from a six-state random assignment study of mandatory welfare-to-work programs. The assessment addresses two questions: which nonexperimental methods provide the most accurate estimates; and do the best methods work well enough to replace random assignment? Three tentative conclusions emerge. Nonexperimental bias was larger in the medium run than in the short run. In-state comparison groups produced less average bias than out-of-state comparison groups. Statistical adjustments did not consistently reduce bias, although some methods reduced the estimated bias in some circumstances and propensity-score methods provided a specification check that eliminated some large biases.
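One way to picture the propensity-score specification check mentioned in this abstract is a common-support test: comparison-group members whose estimated participation probabilities fall outside the range observed for program participants cannot be credibly matched. A minimal sketch, with invented scores rather than data from the six-state study:

```python
def common_support(p_treated, p_comparison):
    """Return the propensity-score interval (lo, hi) where the
    treated and comparison distributions overlap. Comparison
    cases outside this interval are flagged as off-support."""
    lo = max(min(p_treated), min(p_comparison))
    hi = min(max(p_treated), max(p_comparison))
    return lo, hi

# Hypothetical estimated propensity scores.
treated = [0.35, 0.50, 0.80, 0.90]
comparison = [0.05, 0.20, 0.40, 0.60]

lo, hi = common_support(treated, comparison)
off_support = [p for p in comparison if not lo <= p <= hi]
print(lo, hi)        # 0.35 0.6
print(off_support)   # [0.05, 0.2]
```

Dropping or reweighting the off-support cases before estimation is one form of the check that, per the abstract, eliminated some large biases.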
Public Management Review | 2007
Melissa Forbes; Carolyn J. Hill; Laurence E. Lynn
A multi-level analytic framework termed a ‘logic of governance’ is used to identify systematic patterns of health care governance from the findings of disparate research studies. Using a subset of 112 studies on health care service delivery, we use an ‘inside-out’ interpretive strategy to construct an empirical overview of health care governance. This strategy incrementally aggregates findings from studies of adjacent then of non-adjacent levels of governance until a coherent overall picture emerges. In general, the choices of organizational arrangements, administrative strategies, treatment quality and other aspects of health care services by policy makers, public managers, physicians, and service workers, together with their values and attitudes toward their work, have significant effects on how health care public policies are transformed into service-delivery outputs and outcomes. Investigations that fail to account for such mediating effects in research designs or in the interpretation of results may provide inaccurate accounts of how health care governance works.
Administration & Society | 2008
Laurence E. Lynn; Carolyn J. Heinrich; Carolyn J. Hill
We read Larry Luton’s “Deconstructing Public Administration Empiricism” (2007) with interest, for two reasons. First, as public administration empiricists, we value provocative discussions of epistemology. Second, Luton cites our book Improving Governance: A New Logic for Empirical Research nine times! We always experience a frisson when our book is cited, but especially so in this case, in which Luton evidently finds our work useful to his argument. This brings us to Luton’s argument, and here we must confess to some confusion. It is not clear to us just what his argument is. Perhaps, although we shrink from such a calumny, there is no argument but, rather, an opinion generously stuffed with a concoction of linguistic ingredients chosen to give it the flavor of an argument. One way of reading Luton is that he has an axe to grind. Empiricists, with their oh-so-superior manner, their hegemonic disdain for any but their own, “objective” methods, and their overweening pride in their meager findings, seem to set him off. Such poseurs are bad for the profession, he seems to be saying, because they tempt the unwary into error. Why don’t they “lighten up,” as he puts it, and concede the limitations, some inherent, some matters of craft, of the empiricist enterprise? Luton seems to regard us as on his side of this issue, as suggested by his numerous citations of our work.
Child Development Perspectives | 2008
Carolyn J. Hill; Howard S. Bloom; Alison Rebeck Black; Mark W. Lipsey
Archive | 2001
Laurence E. Lynn; Carolyn J. Heinrich; Carolyn J. Hill
Journal of Public Administration Research and Theory | 2000
Laurence E. Lynn; Carolyn J. Heinrich; Carolyn J. Hill
Journal of Public Administration Research and Theory | 2004
Carolyn J. Hill; Laurence E. Lynn
Journal of Policy Analysis and Management | 2003
Howard S. Bloom; Carolyn J. Hill; James A. Riccio
Public Management Review | 2003
Carolyn J. Hill; Laurence E. Lynn