Publications


Featured research published by Robin Jacob.


Educational Researcher | 2013

Professional Development Research: Consensus, Crossroads, and Challenges

Heather C. Hill; Mary Beisiegel; Robin Jacob

Commentaries regarding appropriate methods for researching professional development have been a frequent topic in recent issues of Educational Researcher as well as other venues. In this article, the authors extend this discussion by observing that randomized trials of specific professional development programs have not enhanced our knowledge of effective program characteristics, leaving practitioners without guidance with regard to best practices. In response, the authors propose that scholars should execute more rigorous comparisons of professional development designs at the initial stages of program development and use information derived from these studies to build a professional knowledge base. The authors illustrate with examples of both a proposed study and reviews of evidence on key questions in the literature.


Review of Educational Research | 2015

The Potential for School-Based Interventions That Target Executive Function to Improve Academic Achievement: A Review

Robin Jacob; Julia Parkinson

This article systematically reviews what is known empirically about the association between executive function and student achievement in both reading and math and critically assesses the evidence for a causal association between the two. Using meta-analytic techniques, the review finds that there is a moderate unconditional association between executive function and achievement that does not differ by executive function construct, age, or measurement type but finds no compelling evidence that a causal association between the two exists.
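
The "moderate unconditional association" summarized above is the kind of estimate a random-effects meta-analysis produces by pooling study-level correlations. As a rough sketch of that pooling step only (not the authors' data, code, or exact method), the following Python/NumPy snippet applies a standard DerSimonian-Laird random-effects estimator to a handful of hypothetical correlations:

    import numpy as np

    # Hypothetical study-level correlations between executive function and
    # achievement, with per-study sample sizes (illustrative values only).
    r = np.array([0.25, 0.32, 0.30, 0.38, 0.27])
    n = np.array([120, 85, 200, 60, 150])

    # Fisher z-transform so effect sizes are approximately normal.
    z = np.arctanh(r)
    v = 1.0 / (n - 3)            # sampling variance of Fisher z
    w = 1.0 / v                  # fixed-effect weights

    # DerSimonian-Laird estimate of between-study variance (tau^2).
    z_fixed = np.sum(w * z) / np.sum(w)
    Q = np.sum(w * (z - z_fixed) ** 2)
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (Q - (len(z) - 1)) / c)

    # Random-effects pooled estimate, transformed back to a correlation.
    w_re = 1.0 / (v + tau2)
    z_pooled = np.sum(w_re * z) / np.sum(w_re)
    se_pooled = np.sqrt(1.0 / np.sum(w_re))
    print("pooled r:", np.tanh(z_pooled))
    print("95% CI:", np.tanh(z_pooled - 1.96 * se_pooled),
          np.tanh(z_pooled + 1.96 * se_pooled))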


Journal of Research on Educational Effectiveness | 2010

New Empirical Evidence for the Design of Group Randomized Trials in Education

Robin Jacob; Pei Zhu; Howard S. Bloom

This article provides practical guidance for researchers who are designing studies that randomize groups to measure the impacts of educational interventions. The article (a) provides new empirical information about the values of parameters that influence the precision of impact estimates (intraclass correlations and R² values) and includes outcomes other than standardized test scores and data with a three-level structure rather than a two-level structure, and (b) discusses the error (both generalizability and estimation error) that exists in estimates of key design parameters and the implications this error has for design decisions. Data for the paper come primarily from two studies: the Chicago Literacy Initiative: Making Better Early Readers Study (CLIMBERS) and the School Breakfast Pilot Project (SBPP). The analysis sample from CLIMBERS comprised 430 four-year-old children from 47 preschool classrooms in 23 Chicago public schools. The analysis sample from the SBPP study comprised 1,151 third graders from 233 classrooms in 111 schools from 6 school districts. Student achievement data from the Reading First Impact Study are also used to supplement the discussion.
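
The design parameters discussed here (intraclass correlations and R² values) enter power calculations for group-randomized trials through the minimum detectable effect size. As an illustration only, the sketch below computes an MDES for a two-level design (students nested in randomized schools) using the standard formula associated with Bloom and colleagues; the ICC, R², and sample sizes are hypothetical placeholders rather than values reported in the article:

    import math

    def mdes_two_level(J, n, icc, r2_cluster=0.0, r2_student=0.0,
                       P=0.5, multiplier=2.8):
        """Minimum detectable effect size (in standard deviation units) for a
        two-level cluster-randomized trial: J schools, n students per school,
        proportion P of schools treated. A multiplier of about 2.8 corresponds
        to 80% power at alpha = .05 (two-sided) with ample degrees of freedom."""
        cluster_term = icc * (1 - r2_cluster) / (P * (1 - P) * J)
        student_term = (1 - icc) * (1 - r2_student) / (P * (1 - P) * J * n)
        return multiplier * math.sqrt(cluster_term + student_term)

    # Hypothetical planning scenario: 40 schools, 60 students per school,
    # ICC = 0.20, a covariate explaining 50% of between-school variance
    # and 20% of within-school variance.
    print(round(mdes_two_level(J=40, n=60, icc=0.20,
                               r2_cluster=0.5, r2_student=0.2), 3))

In this kind of scenario the school-level R² term matters most, because with a nontrivial ICC the between-school variance dominates the precision of the impact estimate.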


Phi Delta Kappan | 2014

Social-Emotional Learning Is Essential to Classroom Management

Stephanie M. Jones; Rebecca Bailey; Robin Jacob

Research tells us that children’s social-emotional development can propel learning. A new program, SECURe, embeds that research into classroom management strategies that improve teaching and learning. Across all classrooms and grade levels, four principles of effective management are constant: Effective classroom management is based in planning and preparation; is an extension of the quality of relationships in the room; is embedded in the school environment; and includes ongoing processes of observation and documentation.


Educational Evaluation and Policy Analysis | 2014

Assessing the Use of Aggregate Data in the Evaluation of School-Based Interventions: Implications for Evaluation Research and State Policy Regarding Public-Use Data

Robin Jacob; Roger D. Goddard; Eun Sook Kim

It is often difficult and costly to obtain individual-level student achievement data, yet researchers are frequently reluctant to use school-level achievement data that are widely available from state websites. We argue that public-use aggregate school-level achievement data are, in fact, sufficient to address a wide range of evaluation questions and that their use is more appropriate than commonly thought. Specifically, we explore (a) when point estimates and standard errors differ between models that use individual student-level data and those that use aggregate school-level data, (b) the potential for conducting subgroup and nonexperimental analyses with aggregate data, and (c) the metrics that are currently available in state public-use data sets and the implications these have for analyses.
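
The central claim, that school-level aggregates can often recover the same impact estimate as student-level data, is easy to illustrate by simulation. The sketch below is a hypothetical example (not the authors' analysis): treatment is assigned at the school level, and with equal school sizes the student-level regression and the regression on school means yield the same point estimate, while correct standard errors for the student-level model still require accounting for clustering:

    import numpy as np

    rng = np.random.default_rng(0)
    J, n = 60, 50                      # 60 schools, 50 students per school
    treat = rng.permutation(np.repeat([0, 1], J // 2))   # school-level assignment
    school_effect = rng.normal(0, 0.5, J)                # between-school variation
    effect = 0.20                                        # true treatment impact

    # Student-level outcomes (two-level structure, equal school sizes).
    y = (effect * np.repeat(treat, n)
         + np.repeat(school_effect, n)
         + rng.normal(0, 1.0, J * n))

    # (a) Student-level regression of the outcome on treatment.
    X = np.column_stack([np.ones(J * n), np.repeat(treat, n)])
    b_student = np.linalg.lstsq(X, y, rcond=None)[0][1]

    # (b) School-level regression using public-use style aggregates (school means).
    y_bar = y.reshape(J, n).mean(axis=1)
    Xa = np.column_stack([np.ones(J), treat])
    b_school = np.linalg.lstsq(Xa, y_bar, rcond=None)[0][1]

    # With equal school sizes the two point estimates are identical; valid
    # standard errors for (a) still require accounting for school clustering.
    print(b_student, b_school)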


Educational Evaluation and Policy Analysis | 2012

Designing and Analyzing Studies that Randomize Schools to Estimate Intervention Effects on Student Academic Outcomes without Classroom-Level Information

Pei Zhu; Robin Jacob; Howard S. Bloom; Zeyu Xu

This paper provides practical guidance for researchers who are designing and analyzing studies that randomize schools, which comprise three levels of clustering (students in classrooms in schools), to measure intervention effects on student academic outcomes when information on the middle level (classrooms) is missing. This situation arises frequently in practice because many available data sets identify the schools that students attend but not the classrooms in which they are taught. Do studies conducted under these circumstances yield results that are substantially different from what they would have been if this information had been available? The paper first considers this problem in the context of planning a school-randomized study based on preexisting two-level information about how academic outcomes for students vary across schools and across students within schools (but not across classrooms in schools). The paper next considers this issue in the context of estimating intervention effects from school-randomized studies. Findings are based on empirical analyses of four multisite data sets using academic outcomes for students within classrooms within schools. The results indicate that in almost all situations one will obtain nearly identical results whether or not the classroom or middle level is omitted when designing or analyzing studies.
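
One way to see why omitting the classroom level often changes little in school-randomized designs is that precision depends on the variance of school means, and a two-level (students-in-schools) variance decomposition recovers that quantity even when the classroom component is hidden inside it. The sketch below is a simulation with made-up variance components, offered only as an illustration of this point, not as a reproduction of the paper's analyses:

    import numpy as np

    rng = np.random.default_rng(1)
    J, K, n = 100, 3, 20           # schools, classrooms per school, students per classroom
    s2_school, s2_class, s2_student = 0.15, 0.10, 0.75   # hypothetical components
    m = K * n                      # students per school

    # Simulate a balanced three-level data set.
    school = np.repeat(rng.normal(0, np.sqrt(s2_school), J), m)
    classroom = np.repeat(rng.normal(0, np.sqrt(s2_class), J * K), n)
    y = school + classroom + rng.normal(0, np.sqrt(s2_student), J * K * n)

    # Two-level (students-in-schools) ANOVA estimates, ignoring classrooms.
    y_school = y.reshape(J, m)
    ms_between = m * y_school.mean(axis=1).var(ddof=1)
    ms_within = np.mean(y_school.var(axis=1, ddof=1))
    v_school_2l = (ms_between - ms_within) / m
    v_within_2l = ms_within

    # Variance of a school mean, which drives precision in a school-randomized
    # trial, computed (a) from the two-level estimates and (b) from the true
    # three-level components. The two agree in expectation for balanced designs.
    var_mean_2l = v_school_2l + v_within_2l / m
    var_mean_3l = s2_school + s2_class / K + s2_student / m
    print(round(var_mean_2l, 4), round(var_mean_3l, 4))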


Educational Evaluation and Policy Analysis | 2015

Exploring the Causal Impact of the McREL Balanced Leadership Program on Leadership, Principal Efficacy, Instructional Climate, Educator Turnover, and Student Achievement

Robin Jacob; Roger D. Goddard; Minjung Kim; Robert James Miller; Yvonne Goddard

This study uses a randomized design to assess the impact of the Balanced Leadership program on principal leadership, instructional climate, principal efficacy, staff turnover, and student achievement in a sample of rural northern Michigan schools. Participating principals report feeling more efficacious, using more effective leadership practices, and having a better instructional climate than control group principals. However, teacher reports indicate that the instructional climate of the schools did not change. Furthermore, we find no impact of the program on student achievement. There was an impact of the program on staff turnover, with principals and teachers in treatment schools significantly more likely to remain in the same school over the 3 years of the study than staff in control schools.


Journal of Research on Educational Effectiveness | 2012

Prenotification, Incentives, and Survey Modality: An Experimental Test of Methods to Increase Survey Response Rates of School Principals

Robin Jacob; Brian A. Jacob

Teacher and principal surveys are among the most common data collection techniques employed in education research. Yet there is remarkably little research on survey methods in education, or about the most cost-effective way to raise response rates among teachers and principals. In an effort to explore various methods for increasing survey response rates, we randomly assigned 1,177 high school principals in the state of Michigan to 1 of 4 experimental conditions. We varied the mode of survey delivery, the mode in which the prenotification letter was sent, and whether or not a $10 incentive was provided. The results indicate that providing a monetary incentive substantially increased response rates over the no-incentive condition and that principals were more likely to respond to a paper-based survey than a web-based one.


New Directions for Youth Development | 2009

Using Instructional Logs to Identify Quality in Educational Settings

Brian Rowan; Robin Jacob; Richard Correnti

When attempting to identify educational settings that are most effective in improving student achievement, classroom process (that is, the way in which a teacher interacts with his or her students) is a key feature of interest. Unfortunately, high-quality assessment of the student-teacher interaction occurs all too infrequently, despite the critical role that understanding and measuring such processes can play in school improvement. This article discusses the strengths and weaknesses of two common approaches to studying these processes, direct classroom observation and annual surveys of teachers, and then describes the ways in which instructional logs can be used to overcome some of the limitations of these two approaches when gathering data on curriculum content and coverage. Classroom observations are expensive, require extensive training of raters to ensure consistency in the observations, and because of their expense generally cannot be conducted frequently enough to enable the researcher to generalize observational findings to the entire school year or illuminate the patterns of instructional change that occur across the school year. Annual surveys are less expensive but often suffer from self-report bias and the bias that occurs when teachers are asked to retrospectively report on their activities over the course of a single year. Instructional logs offer a valid, reliable, and relatively cost-effective alternative for collecting detailed information about classroom practice and can overcome some of the limitations of both observations and annual surveys.


Evaluation Review | 2011

An Experiment to Test the Feasibility and Quality of a Web-Based Questionnaire of Teachers

Robin Jacob

Collaboration


Dive into Robin Jacob's collaborations.

Top Co-Authors

Beth Gamse (University of Michigan)
Beth Boulay (University of Michigan)
Megan Horst (University of Michigan)
Fatih Unlu (University of Michigan)