
Publication


Featured research published by Geoffrey Phelps.


Elementary School Journal | 2004

Developing Measures of Content Knowledge for Teaching Reading

Geoffrey Phelps; Stephen G. Schilling

In this article we present results from a project to develop survey measures of the content knowledge teachers need to teach elementary reading. In areas such as mathematics and science, there has been great interest in the specialized ways teachers need to know a subject to teach it to others—often referred to as pedagogical content knowledge. However, little is known about what teachers need to know about reading to teach it effectively. We begin the article by discussing what might constitute content knowledge for teaching reading and by describing the survey items we wrote. Next, factor and scaling results are presented from a pilot study of 261 multiple‐choice items with 1,542 elementary teachers. We found that content knowledge for teaching reading included multiple dimensions, defined both by topic and by how teachers use knowledge in teaching practice. Items within these constructs formed reliable scales.
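The scale reliability the abstract refers to is conventionally summarized with Cronbach's alpha. A minimal sketch of the calculation, using invented 0/1 item responses (the data and the resulting value below are illustrative, not figures from the study):

```python
from statistics import pvariance

def cronbach_alpha(items):
    """Cronbach's alpha for a set of item-score columns
    (each column holds all respondents' scores on one item)."""
    k = len(items)
    item_var_sum = sum(pvariance(col) for col in items)
    totals = [sum(resp) for resp in zip(*items)]  # per-respondent total score
    return (k / (k - 1)) * (1 - item_var_sum / pvariance(totals))

# Hypothetical right/wrong responses from six teachers on four items.
items = [
    [1, 1, 0, 1, 1, 0],  # item 1
    [1, 0, 0, 1, 1, 0],  # item 2
    [1, 1, 0, 1, 0, 0],  # item 3
    [0, 1, 0, 1, 1, 0],  # item 4
]
print(round(cronbach_alpha(items), 2))  # 0.81
```

In practice the item-level analysis in the article involves factor analysis and IRT scaling as well; alpha is only the simplest summary of whether items "formed reliable scales."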


Journal of Research on Educational Effectiveness | 2011

Teachers' Knowledge about Early Reading: Effects on Students' Gains in Reading Achievement.

Joanne F. Carlisle; Ben Kelcey; Brian Rowan; Geoffrey Phelps

This study developed a new survey of teachers’ knowledge about early reading and examined the effects of teachers’ knowledge on students’ reading achievement in Grades 1 to 3 in a large sample of Michigan schools. Using statistical models that controlled for teachers’ personal and professional characteristics, students’ prior reading achievement, and the clustering of high-knowledge teachers in schools and school districts with particular demographic composition, we found that the effects of teachers’ knowledge about early reading on students’ reading achievement were small. In 1st grade, students in classrooms headed by higher knowledge teachers performed better on year-end tests of reading comprehension but not word analysis. In 2nd and 3rd grades, the effects of teachers’ knowledge on either measure of students’ reading achievement were not statistically significant. Although the study suggests new forms of statistical analysis that might produce better estimates of the effects of teachers’ knowledge on students’ reading achievement, further research is needed to improve the conceptual and psychometric properties of measures of teachers’ knowledge of reading and to investigate the relation of their knowledge and their instructional practices.


Educational Evaluation and Policy Analysis | 2013

Considerations for Designing Group Randomized Trials of Professional Development With Teacher Knowledge Outcomes

Ben Kelcey; Geoffrey Phelps

Despite recent shifts in research emphasizing the value of carefully designed experiments, the number of rigorously designed studies of teacher professional development has lagged behind that of comparable studies with student outcomes. We outline a framework for the design of group randomized trials (GRTs) with teachers’ knowledge as the outcome and consider mathematics and reading knowledge outcomes designed to assess the types of content problems that teachers encounter in practice. To estimate design parameters, we draw on a national sample of teachers for mathematics and a state Reading First sample for reading. Our results suggest that there is substantial clustering of teachers’ knowledge within schools and that professional development GRTs will likely need increased sample sizes to account for this clustering.
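The clustering problem the abstract describes is often quantified with the design effect, 1 + (n − 1)ρ, where n is the number of teachers sampled per school and ρ is the intraclass correlation: it is the factor by which a clustered design must inflate a simple-random-sample size to keep the same precision. A minimal sketch with hypothetical values (the ICC and sample sizes are illustrative, not estimates from the paper):

```python
import math

def design_effect(n_per_school, icc):
    """Variance inflation from sampling intact schools
    rather than independent teachers."""
    return 1 + (n_per_school - 1) * icc

def clustered_sample_size(n_srs, n_per_school, icc):
    """Teachers needed in a cluster design to match the
    precision of a simple random sample of size n_srs."""
    return math.ceil(n_srs * design_effect(n_per_school, icc))

# Hypothetical: 200 teachers would suffice under simple random sampling;
# with 5 teachers per school and an ICC of 0.2 for teacher knowledge:
print(design_effect(5, 0.2))                # 1.8
print(clustered_sample_size(200, 5, 0.2))   # 360
```

Even a modest ICC nearly doubles the required sample here, which is the sense in which GRTs of professional development "need increased sample sizes."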


Educational Evaluation and Policy Analysis | 2012

Explaining Variation in Instructional Time: An Application of Quantile Regression

Douglas Lyman Corey; Geoffrey Phelps; Deborah Loewenberg Ball; Jenny DeMonte; Delena Harrison

This research is conducted in the context of a large-scale study of three nationally disseminated comprehensive school reform projects (CSRs) and examines how school- and classroom-level factors contribute to variation in instructional time in English language arts and mathematics. When using mean-based regression techniques such as hierarchical linear models (HLM), we found that CSR programs did not have the expected effects on instructional time. However, when using quantile regression to estimate the effects at the lower end of the distribution of instructional time, we found substantial effects. These effects were strongest for the subjects that were the focus of the school interventions.
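The contrast between mean-based and quantile-based comparisons can be shown with toy data: two groups of classrooms with identical mean instructional minutes but very different lower tails, so a mean comparison finds nothing while a lower-quantile comparison finds a large gap. All numbers below are invented for illustration:

```python
from statistics import mean

def quantile(data, q):
    """Sample quantile with linear interpolation (inclusive method)."""
    s = sorted(data)
    pos = q * (len(s) - 1)
    lo = int(pos)
    frac = pos - lo
    return s[lo] + frac * (s[min(lo + 1, len(s) - 1)] - s[lo])

# Hypothetical daily ELA minutes in comparison vs. CSR classrooms.
comparison = [20, 30, 45, 60, 60, 65, 70, 75, 80, 95]
csr        = [48, 52, 55, 58, 60, 62, 64, 66, 67, 68]

# The means are identical, so a mean-based comparison shows no effect...
print(mean(comparison), mean(csr))  # 60 60
# ...but at the 10th percentile the CSR classrooms get far more time.
print(round(quantile(comparison, 0.1), 1),
      round(quantile(csr, 0.1), 1))  # 29.0 51.6
```

A full quantile regression estimates such conditional quantiles as functions of covariates (e.g., via `statsmodels`' `QuantReg`), but the tail-versus-mean logic is the same.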


Educational Policy | 2012

How Much English Language Arts and Mathematics Instruction Do Students Receive? Investigating Variation in Instructional Time

Geoffrey Phelps; Douglas Lyman Corey; Jenny DeMonte; Delena Harrison; Deborah Loewenberg Ball

The amount of instruction students receive has long been viewed as a foundational educational resource. This article presents an analysis of the time students spend in elementary English language arts (ELA) and mathematics instruction. In mathematics, the average student received about 140 hr of instruction, but students in the top sixth of classrooms in this distribution can expect to receive between 80 and 160 hr more instruction over the school year than students assigned to the bottom sixth of classrooms. We found similar magnitudes of variation in ELA. Although most of the variation was due to differences among classrooms, there was also substantial variation among schools. Some variation in instructional time is expected and probably favorable. However, we argue that the large variation demonstrated by these results represents substantial inequity in students’ opportunity to learn ELA and mathematics.


Evaluation Review | 2013

Strategies for Improving Power in School-Randomized Studies of Professional Development.

Ben Kelcey; Geoffrey Phelps

Objectives: Group-randomized designs are well suited for studies of professional development because they can accommodate programs that are delivered to intact groups (e.g., schools), the collaborative nature of professional development, and extant teacher/school assignments. Though group designs may be theoretically favorable, prior evidence has suggested that they may be challenging to conduct in professional development studies because well-powered designs will typically require large sample sizes or expect large effect sizes. Using teacher knowledge outcomes in mathematics, we investigated when, and the extent to which, covariance adjustment on a pretest, teacher certification, or demographic covariates can reduce the sample size necessary to achieve reasonable power.

Method: Our analyses drew on multilevel models and outcomes in five different content areas for over 4,000 teachers and 2,000 schools. Using these estimates, we assessed the minimum detectable effect sizes for several school-randomized designs with and without covariance adjustment.

Results: The analyses suggested that teachers’ knowledge is substantially clustered within schools in each of the five content areas and that covariance adjustment for a pretest or, to a lesser extent, teacher certification, has the potential to transform designs that are unreasonably large for professional development studies into viable studies.
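The covariance-adjustment logic can be sketched with the standard minimum detectable effect size (MDES) formula for a two-level school-randomized design, in which school- and teacher-level R-squared values from covariates shrink the corresponding variance components. The ICC, R-squared values, and sample sizes below are hypothetical, not the paper's estimates:

```python
import math

def mdes(j_schools, n_per_school, icc, r2_between=0.0, r2_within=0.0,
         p_treat=0.5, multiplier=2.8):
    """Minimum detectable effect size (in SD units) for a two-level
    school-randomized design. The 2.8 multiplier is the normal
    approximation for alpha = .05 (two-tailed) and power = .80."""
    pq = p_treat * (1 - p_treat)
    var = (icc * (1 - r2_between) / (pq * j_schools)
           + (1 - icc) * (1 - r2_within) / (pq * j_schools * n_per_school))
    return multiplier * math.sqrt(var)

# Hypothetical design: 40 schools, 5 teachers per school, ICC = 0.2.
print(round(mdes(40, 5, 0.2), 2))                                 # 0.53
# A pretest explaining half the variance at each level shrinks the MDES:
print(round(mdes(40, 5, 0.2, r2_between=0.5, r2_within=0.5), 2))  # 0.38
```

Shrinking the MDES at a fixed number of schools is equivalent to needing fewer schools for a fixed target effect, which is how covariance adjustment can turn an infeasibly large design into a viable one.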


Evaluation Review | 2016

Informing Estimates of Program Effects for Studies of Mathematics Professional Development Using Teacher Content Knowledge Outcomes

Geoffrey Phelps; Benjamin Kelcey; Nathan Jones; Shuangshuang Liu

Mathematics professional development is widely offered, typically with the goal of improving teachers’ content knowledge, the quality of teaching, and ultimately students’ achievement. Recently, new assessments focused on mathematical knowledge for teaching (MKT) have been developed to assist in the evaluation and improvement of mathematics professional development. This study presents empirical estimates of average program change in MKT and its variation with the goal of supporting the design of experimental trials that are adequately powered to detect a specified program effect. The study drew on a large database representing five different assessments of MKT and collectively 326 professional development programs and 9,365 teachers. Results from cross-classified hierarchical growth models found that standardized average change estimates across the five assessments ranged from a low of 0.16 standard deviations (SDs) to a high of 0.26 SDs. Power analyses using the estimated pre- and posttest change estimates indicated that hundreds of teachers are needed to detect changes in knowledge at the lower end of the distribution. Even studies powered to detect effects at the higher end of the distribution will require substantial resources to conduct rigorous experimental trials. Empirical benchmarks that describe average program change and its variation provide a useful preliminary resource for interpreting the relative magnitude of effect sizes associated with professional development programs and for designing adequately powered trials.
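As a rough illustration of why change estimates at the lower end of that 0.16 to 0.26 SD range demand hundreds of teachers, consider a normal-approximation sample-size calculation for a one-group pre/post design. This simplification treats change scores as independent with unit SD and ignores clustering within programs, so it is only a sketch:

```python
import math

def n_for_change(delta_sd, multiplier=2.8):
    """Teachers needed in a one-group pre/post design to detect a mean
    change of delta_sd (in SD units of the change score), using the
    normal approximation for alpha = .05 (two-tailed), power = .80."""
    return math.ceil((multiplier / delta_sd) ** 2)

# Change estimates from the abstract's reported range: 0.16 to 0.26 SDs.
print(n_for_change(0.16))  # 307
print(n_for_change(0.26))  # 116
```

Halving the expected effect roughly quadruples the required sample, which is why studies powered for the low end of the distribution need hundreds of teachers.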


Journal of Experimental Education | 2017

Designing Large-Scale Multisite and Cluster-Randomized Studies of Professional Development

Ben Kelcey; Jessaca Spybrook; Geoffrey Phelps; Nathan Jones; Jiaqi Zhang

We develop a theoretical and empirical basis for the design of teacher professional development studies. We build on previous work by (a) developing estimates of intraclass correlation coefficients for teacher outcomes using two- and three-level data structures, (b) developing estimates of the variance explained by covariates, and (c) modifying the conventional optimal design framework to include differential covariate costs so as to capture the point at which the cost of collecting a covariate overtakes the reduction in variance it supplies. We illustrate the use of these estimates to explore the absolute and relative sensitivity of multilevel designs in teacher professional development studies. The results from these analyses are intended to guide researchers in making more-informed decisions about the tradeoffs and considerations involved in selecting study designs for assessing the impacts of professional development programs.
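The differential-covariate-cost idea in (c) can be sketched as a break-even comparison: collecting a covariate (e.g., a pretest) adds a per-teacher cost but may let a design reach the same precision with fewer schools. All costs, and the assumption that the covariate halves the number of schools needed, are hypothetical:

```python
def total_cost(j_schools, n_per_school, cost_school, cost_teacher,
               cost_covariate=0.0):
    """Total data-collection cost for a school-randomized design, with
    an optional per-teacher cost of administering a covariate."""
    return j_schools * (cost_school + n_per_school * (cost_teacher + cost_covariate))

# Hypothetical: a strong pretest lets us reach the target precision
# with roughly half the schools (illustrative, not an estimate).
without_cov = total_cost(80, 5, cost_school=500, cost_teacher=100)
with_cov = total_cost(40, 5, cost_school=500, cost_teacher=100,
                      cost_covariate=60)
print(without_cov, with_cov)  # 80000 52000 — here the covariate pays off
```

The break-even point in the abstract's sense is where raising `cost_covariate` pushes the adjusted design's total cost above the unadjusted one's.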


Journal of Teacher Education | 2008

Content Knowledge for Teaching: What Makes It Special?

Deborah Loewenberg Ball; Mark Hoover Thames; Geoffrey Phelps


Cognition and Instruction | 2008

Mathematical Knowledge for Teaching and the Mathematical Quality of Instruction: An Exploratory Study

Heather C. Hill; Merrie L. Blunk; Charalambos Y. Charalambous; Jennifer M. Lewis; Geoffrey Phelps; Laurie Sleep; Deborah Loewenberg Ball

Collaboration


Dive into Geoffrey Phelps's collaborations.

Top Co-Authors


Ben Kelcey

University of Cincinnati
