
Publication


Featured research published by Beth Chance.


Mathematical Thinking and Learning | 2000

Assessment in Statistics Education: Issues and Challenges

Joan Garfield; Beth Chance

There have been many changes in educational assessment in recent years, both within the fields of measurement and evaluation and in specific disciplines. In this article, we summarize current assessment practices in statistics education, distinguishing between assessment for different purposes and assessment at different educational levels. To provide a context for assessment of statistical learning, we first describe current learning goals for students. We then highlight recent assessment methods being used for different purposes: individual student evaluation, large-scale group evaluation, and as a research tool. Examples of assessment used in teaching statistics in primary schools, secondary schools, and tertiary schools are given. We then focus on 3 examples of effective uses of assessment and conclude with a description of some current assessment challenges.


Journal of Statistics Education | 1997

Experiences with Authentic Assessment Techniques in an Introductory Statistics Course

Beth Chance

In an effort to align evaluation with new instructional goals, authentic assessment techniques (see, e.g., Archbald and Newmann 1988, Crowley 1993, and Garfield 1994) have recently been introduced ...


Archive | 2007

Using students’ informal notions of variability to develop an understanding of formal measures of variability

Joan Garfield; Robert C. delMas; Beth Chance

Recent science reform efforts and standards documents advocate that students develop scientific inquiry practices, such as the construction and communication of scientific explanations. This paper focuses on 7th grade students' scientific explanations during the enactment of a project-based chemistry unit where the construction of scientific explanations is a key learning goal. During the unit, we make the explanation framework explicit to students and include supports or scaffolds in both the student and teacher materials to facilitate students in their understanding and construction of scientific explanations. Results from the enactment show significant learning gains for students for all components of scientific explanation (i.e., claim, evidence, and reasoning). Although students' explanations were stronger at the end of the instructional unit, we also found that students still had difficulty differentiating between appropriate and inappropriate evidence for some assessment tasks. We conjecture that students' ability to use appropriate data as evidence depends on the wording of the assessment task, students' content knowledge, and their understanding of what counts as evidence. Having students construct scientific explanations can be an important tool to help make students' thinking visible for both researchers and teachers.

Middle School Students' Use of Appropriate and Inappropriate Evidence in Writing Scientific Explanations

The National Research Council (1996) and the American Association for the Advancement of Science (1993) call for scientific literacy for all. All students need knowledge of scientific concepts and inquiry practices required for personal decision making, participation in societal and cultural affairs, and economic productivity. Science education should support students' development toward competent participation in a science-infused world (McGinn & Roth, 1999). This type of participation should be obtainable for all students, not just those who are educated for scientific professions. Consequently, we are interested in supporting all students in learning scientific concepts and inquiry practices. By scientific inquiry practices, we mean the multiple ways of knowing which scientists use to study the natural world (National Research Council, 1996). Key scientific inquiry practices called for by national standards documents include asking questions, designing experiments, analyzing data, and constructing explanations (American Association for the Advancement of Science, 1993; National Research Council, 1996). In this study, we focus on analyzing data and constructing explanations. These practices are essential not only for scientists, but for all individuals. On a daily basis, individuals need to evaluate scientific data provided to them in written form, such as newspapers and magazines, as well as spoken through television and radio. Citizens need to be able to evaluate that data to determine whether the claims being made based on the data and reasoning are valid. This type of data evaluation, like other scientific inquiry practices, depends both on a general understanding of how to evaluate data and on an understanding of the science content. In this study we explore when students use appropriate evidence and when they use inappropriate evidence to support their claims.
Our work focuses on an 8-week project-based chemistry curriculum designed to support 7th grade students in using evidence and constructing scientific explanations. We examine the characteristics of these students' explanations, their understanding of the content knowledge, and the assessment tasks to unpack what may be influencing students' use of evidence.

Our Instructional Model for Scientific Explanations

In our work, we examine how students construct scientific explanations using evidence. We use a specific instructional model for evidence-based scientific explanations as a tool for both classroom practice and research. We provide both teachers and students with this model to make the typically implicit framework of explanation explicit to both. Our instructional model for scientific explanation uses an adapted version of Toulmin's (1958) model of argumentation and builds on previous science educators' research on students' construction of scientific explanations and arguments (Bell & Linn, 2000; Jiménez-Aleixandre, Rodríguez, & Duschl, 2000; Lee & Songer, 2004; Sandoval, 2003; Zembal-Saul et al., 2002). Our explanation framework includes three components: a claim (similar to Toulmin's claim), evidence (similar to Toulmin's data), and reasoning (a combination of Toulmin's warrants and backing). The claim makes an assertion or conclusion that addresses the original question or problem. The evidence supports the student's claim using scientific data. This data can come from an investigation that students complete or from another source, such as observations, reading material, or archived data. The data need to be both appropriate and sufficient to support the claim. Appropriate data is relevant to the question or problem and relates to the given claim. Data is sufficient when it includes the necessary quantity to convince someone of a claim. The reasoning is a justification that links the claim and evidence and shows why the data counts as evidence to support the claim by using the appropriate scientific principles. Kuhn (1993) argues that argument, or in our case scientific explanation, is a form of thinking that transcends the particular content to which it refers. Students can construct scientific explanations across different content areas. Although an explanation model, such as Toulmin's, can be used to assess the structure of an explanation, it cannot determine the scientific accuracy of the explanation (Driver, Newton & Osborne, 2000). Instead, both the domain-general explanation framework and the domain-specific context of the assessment task determine the correctness of the explanation. Consequently, in both teaching students about explanation and assessing students' construction of explanations we embed the scientific inquiry practice in a specific context.

Student Difficulties Constructing Explanations

Prior research in science classrooms suggests that students have difficulty constructing high-quality scientific explanations where they articulate and defend their claims (Sadler, 2004). For example, students have difficulty understanding what counts as evidence (Sadler, 2004) and using appropriate evidence (Sandoval, 2003; Sandoval & Reiser, 1997). Instead, students will draw on data that do not support their claim. Consequently, we are interested in whether students use appropriate evidence to support their claim or if they draw on evidence that is not relevant.
Students' claims also do not necessarily relate to their evidence. Instead, students often rely on their personal views instead of evidence to draw conclusions (Hogan & Maglienti, 2001). Students have a particularly difficult time reasoning from primary data, especially when measurement error plays an important role (Kanari & Millar, 2004). Students can recognize variation in data and use characteristics of data in their reasoning, but their ability to draw final conclusions from that data can depend on the context. Masnick, Klahr, and Morris (this volume) concluded that young students who poorly understood the context of the investigation had difficulty interpreting data, particularly when the interpretation of that data contradicted their prior beliefs. Students will likely discount data if the data contradicts their current theory (Chinn & Brewer, 2001), and they will only consider data if they can come up with a mechanism for the pattern of data (Koslowski, 1996). When students evaluate data, more general reasoning strategies interact with domain-specific knowledge (Chinn & Brewer, 2001). Whether students use appropriate and inappropriate evidence may depend on their prior understanding of a particular content area or task. Students also have difficulty providing the backing, or what we refer to as reasoning, for why they chose the evidence (Bell & Linn, 2000) in their written explanations. Other researchers have shown that classroom discourse tends to be dominated by claims, with little backing to support those claims (Jiménez-Aleixandre, Rodríguez & Duschl, 2000). Our previous work supports these ideas. We found that middle school students had the most difficulty with the reasoning component of scientific explanations (McNeill, Lizotte, Krajcik & Marx, in review; McNeill et al., 2003). Although students' reasoning improved over the course of the 6-8 week instructional unit, it was consistently of lower quality than their claims or evidence. Students' reasoning often just linked their claim and evidence and less frequently articulated the scientific principles that allowed them to make that connection. Similar to students' ability to evaluate and use data, providing accurate reasoning is related to students' understanding of the content. Students with stronger content knowledge provide stronger reasoning in their scientific explanations (McNeill et al., in review). Previous research with students has found that their success at completing scientific inquiry practices is highly dependent on their understanding of both the content and the scientific inquiry practices (Metz, 2000). Both domain-specific and general reasoning are essential for students' effective evaluation of data and construction of scientific explanations. Although previous work has shown that students have difficulty with components of scientific explanations, there has been little research unpacking exactly when ...


Journal of Statistics Education | 2006

Applying Japanese Lesson Study Principles to an Upper-Level Undergraduate Statistics Course

Paul Roback; Beth Chance; Julie Legler; Tom Moore

Japanese Lesson Study is a collaborative approach for teachers to plan, present, observe, and critique classroom lessons. Through the lesson study process, teachers systematically and thoughtfully examine both student learning and their own teaching practices. In addition, the process paves the way for a much broader approach to education research by gathering data about student learning directly in the classroom. By piloting an approach using Japanese Lesson Study principles in an upper division statistics course, we discovered some of the challenges it poses, but also some surprisingly promising results for statistics teaching. This case study should provide others considering this approach with information about the philosophy and methodology involved in the lesson study process as well as some practical ideas for its implementation. Supplemental data for this article can be accessed on the publisher's website.


College Mathematics Journal | 1999

Teaching the Reasoning of Statistical Inference: A "Top Ten" List

Allan J. Rossman; Beth Chance

Allan Rossman has taught at Dickinson College since receiving his Ph.D. in statistics from Carnegie Mellon in 1989. He has developed interactive curricular materials through which students explore principles of introductory statistics. He directs the MAA's STATS (Statistical Thinking with Active Teaching Strategies) project, which conducted workshops for mathematicians who teach statistics. With mixed success he applies his statistical training to managing his fantasy baseball team, the Domestic Shorthairs.


Archive | 1998

Comparing Two Means

Allan J. Rossman; Beth Chance

You have been studying the application of inference techniques to various situations involving genuine data. In the previous two topics you have investigated problems which call for inferences about a population mean. With this topic you will examine the case of comparing two sample means where the samples have been collected independently (as opposed to the paired comparisons design that you studied in the last topic). The inference procedures will again be based on the t-distribution; the reasoning behind and interpretation of the procedures remain the same as always. Also as always, you will see the importance of visual and numerical examinations of the data prior to applying formal inference procedures.
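
As a minimal sketch of the two-sample comparison described above, the following code runs a t procedure (Welch's version, which does not assume equal variances) on two independently collected samples; the data values and group names are hypothetical, not taken from the text.

```python
# Hypothetical two-sample comparison with a t procedure (Welch's test).
# Data values are illustrative only.
import numpy as np
from scipy import stats

group_a = np.array([23.1, 19.8, 25.4, 22.0, 24.7, 21.3, 20.9, 23.8])
group_b = np.array([18.2, 20.1, 17.9, 19.5, 21.0, 18.8, 19.9])

# Numerical examination of each sample before formal inference
print(f"Group A: mean = {group_a.mean():.2f}, sd = {group_a.std(ddof=1):.2f}")
print(f"Group B: mean = {group_b.mean():.2f}, sd = {group_b.std(ddof=1):.2f}")

# Two-sample t test for independently collected samples
t_stat, p_value = stats.ttest_ind(group_a, group_b, equal_var=False)
print(f"t = {t_stat:.3f}, two-sided p-value = {p_value:.4f}")
```

The summary statistics printed first echo the topic's emphasis on examining the data visually and numerically before applying formal inference procedures.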


The American Statistician | 2011

Rethinking Assessment of Student Learning in Statistics Courses

Joan Garfield; Andrew Zieffler; Daniel T. Kaplan; George W. Cobb; Beth Chance; John P. Holcomb

Although much attention has been paid to issues around student assessment, for most introductory statistics courses few changes have taken place in the ways students are assessed. The assessment literature describes three foundational elements—cognition, observation, and interpretation—that comprise an “assessment triangle” underlying all assessments. However, most instructors focus primarily on the second component: tasks that are used to produce grades. This article focuses on three sections written by leading statistics educators who describe some innovative and even provocative approaches to rethinking student assessment in statistics classes.


The American Statistician | 2015

Combating Anti-Statistical Thinking Using Simulation-Based Methods Throughout the Undergraduate Curriculum

Nathan L. Tintle; Beth Chance; George W. Cobb; Soma Roy; Todd Swanson; Jill VanderStoep

The use of simulation-based methods for introducing inference is growing in popularity for the Stat 101 course, due in part to increasing evidence of the methods' ability to improve students' statistical thinking. This impact comes from simulation-based methods (a) clearly presenting the overarching logic of inference, (b) strengthening ties between statistics and probability/mathematical concepts, (c) encouraging a focus on the entire research process, (d) facilitating student thinking about advanced statistical concepts, (e) allowing more time to explore, do, and talk about real research and messy data, and (f) acting as a firmer foundation on which to build statistical intuition. Thus, we argue that simulation-based inference should be an entry point to an undergraduate statistics program for all students, and that simulation-based inference should be used throughout all undergraduate statistics courses. To achieve this goal and fully recognize the benefits of simulation-based inference on the undergraduate statistics program, we will need to break free of historical forces tying undergraduate statistics curricula to mathematics, consider radical and innovative new pedagogical approaches in our courses, fully implement assessment-driven content innovations, and embrace computation throughout the curriculum. [Received December 2014. Revised July 2015]
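
To illustrate the simulation-based logic of inference the authors describe, here is a minimal sketch of a randomization (permutation) test for a difference in two group means; the data values are hypothetical and the code is an assumption of one common implementation, not material drawn from the authors' curricula.

```python
# Sketch of a randomization (permutation) test for a difference in means.
# The data are hypothetical; only the simulation logic is the point.
import numpy as np

rng = np.random.default_rng(42)

treatment = np.array([12.4, 15.1, 13.8, 16.0, 14.2, 15.7])
control = np.array([11.0, 12.9, 10.5, 13.3, 12.1, 11.8])

observed_diff = treatment.mean() - control.mean()
pooled = np.concatenate([treatment, control])
n_treat = len(treatment)

n_reps = 10_000
null_diffs = np.empty(n_reps)
for i in range(n_reps):
    shuffled = rng.permutation(pooled)  # re-randomize the group labels
    null_diffs[i] = shuffled[:n_treat].mean() - shuffled[n_treat:].mean()

# One-sided p-value: proportion of re-randomizations at least as extreme
# as the observed difference
p_value = (null_diffs >= observed_diff).mean()
print(f"observed difference = {observed_diff:.2f}, "
      f"simulated p-value = {p_value:.4f}")
```

The null distribution built by shuffling group labels makes the overarching logic of inference, "how surprising is the observed result if there were no real difference?", directly visible to students.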


Journal of Statistics Education | 2016

Student Performance in Curricula Centered on Simulation-Based Inference: A Preliminary Report

Beth Chance; Jimmy Wong; Nathan L. Tintle

ABSTRACT “Simulation-based inference” (e.g., bootstrapping and randomization tests) has been advocated recently with the goal of improving student understanding of statistical inference, as well as the statistical investigative process as a whole. Preliminary assessment data have been largely positive. This article describes the analysis of the first year of data from a multi-institution assessment effort by instructors using such an approach in a college-level introductory statistics course, some for the first time. We examine several pre-/post-measures of student attitudes and conceptual understanding of several topics in the introductory course. We highlight some patterns in the data, focusing on student level and instructor level variables and the application of hierarchical modeling to these data. One observation of interest is that the newer instructors see very similar gains to more experienced instructors, but we also look to how the data collection and analysis can be improved for future years, especially the need for more data on “nonusers.”
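
For readers unfamiliar with the methods named in the abstract, the following is a minimal sketch of a percentile bootstrap confidence interval for a mean; the sample values are hypothetical and the code only illustrates the resampling idea, it is not taken from the study.

```python
# Sketch of a percentile bootstrap confidence interval for a mean.
# The sample values are hypothetical; only the resampling idea is the point.
import numpy as np

rng = np.random.default_rng(0)
sample = np.array([4.2, 5.1, 3.8, 6.0, 4.9, 5.5, 4.4, 5.8, 3.9, 5.2])

n_boot = 10_000
boot_means = np.array([
    rng.choice(sample, size=sample.size, replace=True).mean()
    for _ in range(n_boot)
])

# 95% interval from the middle of the bootstrap distribution
lower, upper = np.percentile(boot_means, [2.5, 97.5])
print(f"sample mean = {sample.mean():.2f}, "
      f"95% bootstrap CI = ({lower:.2f}, {upper:.2f})")
```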


The American Statistician | 2015

From Curriculum Guidelines to Learning Outcomes: Assessment at the Program Level

Beth Chance; Roxy Peck

The 2000 ASA Guidelines for Undergraduate Statistics majors aimed to provide guidance to programs with undergraduate degrees in statistics as to the content and skills that statistics majors should be learning. The 2014 Guidelines revise the earlier guidelines to reflect changes in the discipline. As programs strive to adjust their curricula to align with the 2014 Guidelines, it is appropriate to also think about developing an assessment cycle of evaluation. This will enable programs to determine whether students are learning what we want them to learn and to work on continuously improving the program over time. The first step is to translate the broader Guidelines into institution-specific measurable learning outcomes. This article focuses on providing examples of learning outcomes developed by different institutions based on the 2000 Guidelines. The companion article by Moore and Kaplan (this issue) focuses on choosing appropriate assessment methods and rubrics and creating an assessment plan. We hope the examples provided are illustrative and that they will assist programs as they implement the 2014 Guidelines. [Received November 2014. Revised July 2015.]

Collaboration


Dive into Beth Chance's collaborations.

Top Co-Authors

Elsa Medina

California Polytechnic State University

Cary J. Roseth

Michigan State University

Soma Roy

California Polytechnic State University
