Harold P. Coyle
Harvard University
Publications
Featured research published by Harold P. Coyle.
The Journal of the Learning Sciences | 2000
Philip M. Sadler; Harold P. Coyle; Marc S. Schwartz
Engineering challenges that involve both designing and building devices that satisfy constraints are increasingly employed in precollege science courses. We have experimented with exercises that differ from those used with elite students: they reduce competition and increase cooperation through tests against nature, large dynamic ranges in performance, initial prototype designs, and alternative methods of recording and presenting results. We find that formulating easily understood goals helps engage students in fascinatingly creative processes that expose the need for a scientific methodology. Such challenges engage male and female students equally, helping to erase the gender disparity in familiarity with the technology and skills common to physical science.
American Educational Research Journal | 2013
Philip M. Sadler; Gerhard Sonnert; Harold P. Coyle; Nancy Cook-Smith; Jaimie L. Miller
This study examines the relationship between teacher knowledge and student learning for 9,556 students of 181 middle school physical science teachers. Assessment instruments based on the National Science Education Standards, with 20 items in common, were administered several times during the school year to both students and their teachers. For items with a very popular wrong answer, teachers who could identify this misconception produced much larger classroom gains than teachers who knew only the correct answer. On items for which students did not exhibit misconceptions, teacher subject matter knowledge alone accounted for higher student gains. This finding suggests that a teacher’s ability to identify students’ most common wrong answer on multiple-choice items, a form of pedagogical content knowledge, is an additional measure of science teacher competence.
Astronomy Education Review | 2009
Philip M. Sadler; Harold P. Coyle; Jaimie L. Miller; Nancy Cook-Smith; Mary E. Dussault; R. R. Gould
We report on the development of an item test bank and associated instruments based on the K–12 national standards that involve astronomy and space science. Drawing on hundreds of studies in the science education research literature on student misconceptions, we have constructed 211 unique items that measure the degree to which students abandon such ideas for accepted scientific views. Piloted nationally with 7,599 students and their 88 teachers spanning grades 5–12, the items reveal a range of interesting results, particularly student difficulties in mastering the NRC Standards and AAAS Benchmarks. Teachers generally perform well on items covering the standards of the grade level at which they teach, exhibiting few misconceptions of their own. Teachers dramatically overestimate their students’ performance, perhaps because they are unaware of their students’ misconceptions. Examples are given showing how the developed instruments can be used to assess the effectiveness of instruction and to evaluate the impact of professional development activities for teachers.
CBE-Life Sciences Education | 2013
Philip M. Sadler; Harold P. Coyle; Nancy R. Cook Smith; Jaimie L. Miller; Joel J. Mintzes; Kimberly D. Tanner; John Murray
We present an analysis of the relationship between student and teacher mastery of the National Research Council’s K–8 life sciences content standards.
Educational Assessment | 2016
Philip M. Sadler; Gerhard Sonnert; Harold P. Coyle; Kelly Miller
The psychometrically sound development of assessment instruments requires pilot testing of candidate items as a first step in gauging their quality, typically a time-consuming and costly effort. Crowdsourcing offers the opportunity for gathering data much more quickly and inexpensively than from most targeted populations. In a simulation of a pilot testing protocol, item parameters for 110 life science questions are estimated from 4,043 crowdsourced adult subjects and then compared with those from 20,937 middle school science students. In terms of item discrimination classification (high vs. low), classical test theory yields an acceptable level of agreement (C-statistic = 0.755); item response theory produces excellent results (C-statistic = 0.848). Item response theory also identifies potential anchor items without including any false positives (items with low discrimination in the targeted population). We conclude that the use of crowdsourcing subjects is a reasonable, efficient method for the identification of high-quality items for field testing and for the selection of anchor items to be used for test equating.
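A minimal sketch of the kind of agreement check this abstract describes, not the paper's actual analysis pipeline: simulated 0/1 responses are generated for a crowdsourced sample and a target sample under a shared two-parameter logistic IRT model, a classical-test-theory discrimination index is estimated for each item in both samples, and a C-statistic (area under the ROC curve) measures how well the crowdsourced estimates predict which items are high-discrimination in the target population. All parameters, sample sizes, and the median split are invented; numpy and scikit-learn are assumed available.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(42)
n_items = 110

# Hypothetical item parameters shared by both populations (2PL model):
# discrimination a_j and difficulty b_j for each item.
a = rng.lognormal(mean=0.0, sigma=0.5, size=n_items)
b = rng.normal(0.0, 1.0, size=n_items)

def simulate(n_subjects: int) -> np.ndarray:
    """Simulate 0/1 responses under a two-parameter logistic IRT model."""
    theta = rng.normal(0.0, 1.0, size=(n_subjects, 1))  # subject ability
    p = 1.0 / (1.0 + np.exp(-a * (theta - b)))          # P(correct) per item
    return (rng.random(p.shape) < p).astype(int)

def ctt_discrimination(resp: np.ndarray) -> np.ndarray:
    """Classical-test-theory discrimination: item vs. rest-of-test correlation."""
    total = resp.sum(axis=1)
    return np.array([
        np.corrcoef(resp[:, j], total - resp[:, j])[0, 1]
        for j in range(resp.shape[1])
    ])

crowd = simulate(4_000)      # stand-in for the crowdsourced adult sample
students = simulate(21_000)  # stand-in for the targeted student sample

# "Truth": items classified as high-discrimination in the target population.
student_disc = ctt_discrimination(students)
target_high = (student_disc > np.median(student_disc)).astype(int)

# How well do the crowdsourced estimates recover that classification?
c_stat = roc_auc_score(target_high, ctt_discrimination(crowd))
print(f"C-statistic for agreement on item discrimination: {c_stat:.3f}")
```

Because both simulated samples share the same item parameters, the crowdsourced discrimination estimates track the target population's closely and the C-statistic comes out well above chance, which is the pattern the abstract reports for real data.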
Archive | 1994
Nadine Butcher Ball; Harold P. Coyle; Irwin I. Shapiro
Archive | 2013
Harold P. Coyle; John L. Hines; Kerry J. Rasmussen; Philip M. Sadler