Joel Breakstone
Stanford University
Publications
Featured research published by Joel Breakstone.
Phi Delta Kappan | 2013
Joel Breakstone; Mark Smith; Sam Wineburg
To prepare students for assessments tied to the Common Core, teachers need tools and tests that help students analyze primary and secondary sources and develop written historical arguments.
Theory and Research in Social Education | 2014
Joel Breakstone
This article considers the design process for new formative history assessments. Over the course of 3 years, my colleagues from the Stanford History Education Group and I designed, piloted, and revised dozens of History Assessments of Thinking (HATs). As we created HATs, we sought to gather information about their cognitive validity, the relationship between the constructs targeted by the assessments, and the cognitive processes students used to answer them. Three case studies trace the development of different HATs through analyses of draft assessments, student responses, and think-aloud protocols. Design principles specific to formative history assessments emerged from these analyses: (1) assessments must be historically accurate, (2) assessments must target specific historical constructs, (3) assessment structure must align with targeted constructs, (4) assessments must yield useful information for teachers, and (5) pilot data are indispensable for refining HATs. These findings suggest the need for increased attention to the construction and validation of new assessment materials for the history classroom.
Theory and Research in Social Education | 2018
Sarah McGrew; Joel Breakstone; Teresa Ortega; Mark D. Smith; Sam Wineburg
To be an informed citizen in today’s information-rich environment, individuals must be able to evaluate information they encounter on the Internet. However, teachers currently have limited options if they want to assess students’ evaluations of digital content. In response, we created a range of short tasks that assess students’ civic online reasoning—the ability to effectively search for, evaluate, and verify social and political information online. Assessments ranged from paper-and-pencil tasks to open Internet search tasks delivered via Google Forms. We outline a process of assessment development in which middle school, high school, and college students in 12 states completed tasks. We present a series of representative tasks and analyses of trends in student performance. Across tasks and grade levels, students struggled to effectively evaluate online claims, sources, and evidence. These results point to a need for curriculum materials that support students’ development of civic online reasoning competencies.
Phi Delta Kappan | 2018
Joel Breakstone; Sarah McGrew; Mark Smith; Teresa Ortega; Sam Wineburg
In recent years — and especially since the 2016 presidential election — numerous media organizations, newspapers, and policy advocates have made efforts to help Americans become more careful consumers of the information they see online. In K-12 and higher education, the main approach has been to provide students with checklists they can use to assess the credibility of individual websites. However, the checklist approach is outdated. It would be far better to teach young people to follow the lead of professional fact-checkers: When confronted by a new and unfamiliar website, they begin by looking elsewhere on the web, searching for any information that might shed light on who created the site in question and for what purpose.
Archive | 2018
Sam Wineburg; Joel Breakstone; Sarah McGrew; Teresa Ortega
The Stanford History Education Group has prototyped, field-tested, and validated a bank of assessments that tap civic online reasoning—the ability to judge the credibility of the information that floods young people’s smartphones, tablets, and computers. We developed 56 tasks and administered them to students across 12 states. In total, we collected and analyzed 7,804 student responses. From pre-teens to seniors in college, students struggled mightily to evaluate online information. To investigate how people determine the credibility of digital information, we sampled 45 individuals: 10 PhD historians, 10 professional fact checkers, and 25 Stanford University undergraduates. We observed them as they evaluated websites and engaged in open web searches on social and political issues. Historians and students often fell victim to easily manipulated features of websites, such as official-looking logos and domain names.
Cognition and Instruction | 2018
Mark Smith; Joel Breakstone; Sam Wineburg
This article reports a validity study of History Assessments of Thinking (HATs), which are short, constructed-response assessments of historical thinking. In particular, this study focuses on aspects of cognitive validity, which is an examination of whether assessments tap the intended constructs. Think-aloud interviews with 26 high school students were used to examine the thinking elicited by 8 HATs and multiple-choice versions of these tasks. Results showed that although both HATs and multiple-choice items tapped historical thinking processes, HATs better reflected student proficiency in historical thinking than their multiple-choice counterparts. Item format also influenced the thinking elicited, with multiple-choice items eliciting more instances of construct-irrelevant reasoning than the constructed-response versions. Implications for history assessment are discussed.
Social Education | 2012
Sam Wineburg; Mark Smith; Joel Breakstone
The American Educator | 2017
Sarah McGrew; Teresa Ortega; Joel Breakstone; Sam Wineburg
The Journal of American History | 2018
Sam Wineburg; Mark Smith; Joel Breakstone