Liliana Rodríguez-Campos
University of South Florida
Publications
Featured research published by Liliana Rodríguez-Campos.
International Journal of Leadership in Education | 2005
Liliana Rodríguez-Campos; Rigoberto Rincones-Gómez; Jianping Shen
Drawing from data collected by the National Center for Education Statistics (NCES) for the years 1987–88, 1990–91, 1993–94, and 1999–2000, we investigated secondary principals’ educational attainment, teaching experience, and professional development. We found that across these years, secondary principals have been active in attaining advanced academic degrees (e.g., master’s, doctorate), have increased their years of teaching experience before obtaining the principalship, have developed a stronger background in instructional leadership, and have been striving to enhance their professional skills and knowledge base.
American Journal of Evaluation | 2014
David M. Fetterman; Liliana Rodríguez-Campos; Abraham Wandersman; Rita Goldfarb O’Sullivan
Defining, compartmentalizing, and differentiating among stakeholder involvement approaches to evaluation, such as collaborative, participatory, and empowerment evaluation, enhances conceptual clarity. It also informs practice, helping evaluators select the most appropriate approach for the task at hand. This view of science and practice is presented in response to the argument of Cousins, Whitmore, and Shulha (2013) that efforts to differentiate among approaches have been “unwarranted and ultimately unproductive” (p. 15). Over the past couple of decades, members of the American Evaluation Association’s (AEA) Collaborative, Participatory, and Empowerment Evaluation Topical Interest Group (CPE-TIG) have labored to build a strong theoretical and empirical foundation for stakeholder involvement approaches in evaluation. This includes identifying the essential features of collaborative, participatory, and empowerment evaluation, and highlighting similarities and differences among these three major approaches to stakeholder involvement. Our primary disagreement with Cousins et al. concerns (1) the value and appropriateness of differentiating among the stakeholder involvement approaches; (2) their misleading characterizations; (3) their confounding and commingling of terms; and (4) their use of collaborative inquiry as the umbrella term for stakeholder involvement approaches.
American Journal of Evaluation | 2018
Tyler Hicks; Liliana Rodríguez-Campos; Jeong Hoon Choi
To begin statistical analysis, Bayesians quantify their confidence in modeling hypotheses with priors. A prior describes the probability of a certain modeling hypothesis apart from the data. Bayesians should be able to defend their choice of prior to a skeptical audience. Collaboration between evaluators and stakeholders could make their choices more defensible. This article describes how evaluators and stakeholders could combine their expertise to select rigorous priors for analysis. The article first introduces Bayesian testing, then situates it within a collaborative framework, and finally illustrates the method with a real example.
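The abstract above describes the general idea of Bayesian testing with stakeholder-informed priors. As a minimal illustration (not drawn from the article itself), the sketch below compares two hypothetical priors on a coin's bias using a beta-binomial model and a Bayes factor; all data, prior choices, and function names here are assumptions for the example.

```python
from math import comb, factorial

def beta_fn(x, y):
    """Beta function B(x, y) for positive integer arguments,
    via B(x, y) = (x-1)! * (y-1)! / (x+y-1)!."""
    return factorial(x - 1) * factorial(y - 1) / factorial(x + y - 1)

def marginal_likelihood(heads, tosses, a, b):
    """Probability of the observed data under a Beta(a, b) prior on
    the coin's bias (the beta-binomial marginal), for integer a, b."""
    return comb(tosses, heads) * beta_fn(heads + a, tosses - heads + b) / beta_fn(a, b)

# Hypothetical data: 8 heads in 10 tosses.
heads, tosses = 8, 10

# Two priors that different stakeholders might defend:
# H0 -- a skeptical prior concentrated near fairness: Beta(10, 10)
# H1 -- a prior leaning toward a heads bias:          Beta(2, 1)
m0 = marginal_likelihood(heads, tosses, 10, 10)
m1 = marginal_likelihood(heads, tosses, 2, 1)

bayes_factor = m1 / m0  # evidence in favor of H1 over H0
```

Because the prior enters the marginal likelihood directly, the resulting Bayes factor depends on which priors are chosen, which is why the article argues that collaboratively selected priors make the analysis more defensible to a skeptical audience.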
American Journal of Evaluation | 2008
Liliana Rodríguez-Campos
to deal with (a) a very complex project, (b) a family of projects, and (c) a client who wanted technical assistance on the basis of logic modeling. So if a colleague asked me what I thought about the book, I would say that it serves as a very good introductory source and that it can help an evaluator as a guide when building the capacity of evaluation clients in logic modeling or when teaching about this tool to novice evaluators. I would tell my colleague not to expect a scholarly book on this particular topic but instead (and appropriately) a practical one. In this context, I would like to comment on a few issues that caught my attention as I went through the book. For example, in the first chapter the author provides a brief discussion of evaluation approaches that are similar to evaluation based on logic models, among them program theory evaluation (PTE). According to the author, logic modeling is a useful tool for performing PTE. In fact, logic models are said to describe the “theory of change” underlying a program—just as program theories do. Some evaluators would disagree and defend the unique contributions of program theory, in addition to logic modeling, and highlight their differences instead of their commonalities. One could also go into much greater depth and detail when introducing evaluation and when differentiating evaluation approaches. Similarly, the book does not offer a comprehensive review of the literature related to logic modeling. The author chose to keep it simple, which may facilitate understanding by those who are new to the topic. Personally, as an evaluation researcher and practitioner, I found most interesting the chapter on using logic models to guide explanatory evaluation. This function of the logic model is a bit more complicated to envision concretely, and the author does a great job illustrating this particular application with an example.
In general, the examples the author shares (project descriptions, related logic models, ensuing evaluations) are well written; it is surprising how little space she needs to clearly describe them. However, most examples come from the field of education, so they may be more relevant to readers with education backgrounds. The last three chapters, which provide more case examples, at first felt like they were lacking integration with the preceding chapters. However, I ended up finding them very useful because they include (a) logic models of the same program but with different foci, (b) logic models used to perform a cluster evaluation, and (c) suggestions on how to deal with an evaluation of systemic change as an intended outcome in a logic model. In my opinion, the varied examples are the biggest strength of the book. In summary, I recommend the book for evaluators and project designers who are unfamiliar with logic modeling. It is also a helpful source for an evaluator who needs to build clients’ capacity related to logic modeling. Expect an introductory text that provides many practical suggestions and examples as illustration but whose focus is not on the theoretical distinctions. Overall, the book effectively demonstrates that logic models are a very useful tool to support project design, management, and evaluation.
Journal of Multidisciplinary Evaluation | 2008
Liliana Rodríguez-Campos; Wes Martz; Rigoberto Rincones-Gómez
Evaluation and Program Planning | 2007
P. Cristian Gugiu; Liliana Rodríguez-Campos
Evaluation and Program Planning | 2012
Liliana Rodríguez-Campos
Education and Urban Society | 2000
Jianping Shen; Liliana Rodríguez-Campos; Rigoberto Rincones-Gómez
Journal of Multidisciplinary Evaluation | 2011
Liliana Rodríguez-Campos
Journal of Education and Training Studies | 2013
Michael Berson; Liliana Rodríguez-Campos; Connie Walker-Egea; Corina Owens; Aarti P. Bellara