Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Elsa Medina is active.

Publication


Featured research published by Elsa Medina.


Archive | 2008

Learning to Reason About Statistical Models and Modeling

Joan Garfield; Dani Ben-Zvi; Beth Chance; Elsa Medina; Cary J. Roseth; Andrew Zieffler

… the use of models such as the normal distribution and the uniform distribution is a major step in understanding the power of statistics. Some software packages, such as Fathom, allow students to superimpose a model of the normal distribution on a data set. This feature helps students judge how well the model fits the data and develops students' understanding of the model-fitting process.

The Place of Statistical Models and Modeling in the Curriculum

We find the lack of explicit attention to statistical models (other than probability models, in a mathematical statistics class) surprising. While there are examples of ways to model probability problems using concrete materials and/or simulation tools (see Simon, 1994; Konold, 1994b), these do not appear to be part of most introductory statistics classes, and they do not appear to be part of an introduction to the use of models and modeling in statistics.

We have thought carefully about how to incorporate lessons on statistical models and modeling into an introductory course. Rather than treat this topic as a separate unit, we think that activities that help students develop an understanding of the idea and uses of a statistical model should be embedded throughout the course, with connections made between these activities. The One-Son Modeling activity described earlier is a good way to introduce the related ideas of model, random outcome, and simulation. In the following unit on data (see Lesson 4, Chapter 6), we revisit the idea of modeling and simulation after students conduct a taste test and want to compare their results to what they would expect due to chance or guessing (the null model). The normal distribution is informally introduced as a model in the unit on distribution (Chapter 8) and revisited in the units on center (Chapter 9), variability (Chapter 10), and comparing groups (Chapter 11). After completing the topics in data analysis, the topic of probability distribution can be examined as a type of distribution based on a model. The normal distribution is then introduced as a formal statistical model (probability distribution) and as a precursor to the sampling unit (this activity is described at the end of this chapter). The sampling unit (Chapter 12) revisits the normal distribution as a model for sampling distributions. In the unit on statistical inference (Chapter 13), models are used to simulate data to test hypotheses and generate confidence intervals. Here a model is a theoretical population with specified parameters. Statistical models are used to find P-values if necessary conditions are met. The final model introduced, after the unit on statistical inference, is the regression line as a model of a linear relationship between two quantitative variables (see Chapter 14). This model is also tested by using methods of statistical inference and by examining deviations (residuals). We find that the idea and use of a statistical model are explicitly linked to ideas of probability and often to the process of simulation. Therefore, we briefly discuss these related topics as well in this chapter.

Review of the Literature Related to Reasoning About Statistical Models and Modeling

All models are wrong, but some are useful. (George Box, 1979, p. 202)

Models in Mathematics Education

Several researchers in mathematics education have applied mathematical modeling ideas to data analysis (e.g., Horvath & Lehrer, 1998).
Lehrer and Schauble (2004) tracked the development of student thinking about natural variation as elementary-grade students learned about distribution in the context of modeling plant growth at the population level. They found that the data-modeling approach assisted children in coordinating their understanding of particular cases with an evolving notion of data as an aggregate of cases. In another study by the same researchers, four forms of models and related "modeling practices" were identified that relate to developing model-based reasoning in young students (Lehrer & Schauble, 2000). They found that studying students' data modeling, in the sense of the inquiry cycle, provided feedback about student thinking that can guide teaching decisions, an important dividend for improving professional practice. A related instructional design heuristic called "emergent modeling" is discussed by Gravemeijer (2002), who provides an instructional sequence on data analysis as an example. The "emergent modeling" approach was an alternative to instructional approaches that focus on teaching ready-made representations. Within the "emergent modeling" perspective, the model and the situation modeled are mutually constituted in the course of modeling activity. This gives the label "emergent" a dual meaning: it refers both to the process by which models emerge and to the process by which these models support the emergence of more formal mathematical knowledge.

Models in Statistical Thinking

Statisticians . . . have a choice of whether to access their data from the real world or from a model of the real world. (Graham, 2006, p. 204)

How students understand and reason about models and modeling processes has received surprisingly little attention in the statistics education literature, even though statistical models play an important part in statistical thinking. The quote by Box, "All models are wrong, but some are useful" (1979, p. 202), is a guiding principle in formulating and interpreting statistical models, acknowledging that they are ideal and rarely match real-life data precisely. The usefulness of a statistical model depends on the extent to which it helps explain the variability in the data. Statistical models have an important role in the foundations of statistical thinking. This is evident in a study of practicing statisticians' ways of thinking (Wild & Pfannkuch, 1999). In their proposed four-dimensional framework for statistical thinking, "reasoning with statistical models" is considered a general type of thinking as well as a specifically "statistical" type of thinking, which relates, for example, to measuring and modeling variability for the purpose of prediction, explanation, or control. The predominant statistical models are those developed for the analysis of data. While the term "statistical models" is often interpreted as meaning regression models or time-series models, Wild and Pfannkuch (1999) consider even much simpler tools, such as statistical graphs, to be statistical models, since they are statistical ways of representing and thinking about reality. These models enable us to summarize data in multiple ways depending on the nature of the data.
For example, graphs, centers, spreads, clusters, outliers, residuals, confidence intervals, and P-values are read, interpreted, and reasoned with in an attempt to find evidence on which to base a judgment. Moore (1999) describes the use of a model to describe a pattern as the final step in a four-stage data analysis process: when you first examine a set of data, (1) begin by graphing the data and interpreting what you see; (2) look for overall patterns and for striking deviations from those patterns, and seek explanations in the problem context; (3) based on examination of the data, choose appropriate numerical descriptions of specific aspects; (4) if the overall pattern is sufficiently regular, seek a compact mathematical model for that pattern (p. 251). Mallows (1998) claims that too often students studying statistics start from a particular model, assuming the model is correct, rather than learning to choose and fit models to data. Wild and Pfannkuch (1999) add that we do not teach enough of the mapping between the context and the models. Chance (2002) points out that, particularly in courses for beginning students, these issues are quite relevant and often more of interest to the student, and that the "natural inclination to question studies should be rewarded and further developed."

Reasoning About a Statistical Model: Normal Distribution

There is little research investigating students' understanding of the normal distribution, and most of these studies examine isolated aspects of the understanding of this concept. The first pioneering work was carried out by Piaget and Inhelder (1951, 1975), who studied children's spontaneous development of the idea of stochastic convergence. The authors analyzed children's perception of the progressive regularity in the pattern of sand falling through a small hole (in the Galton apparatus or in a sand clock). They considered that children need to grasp the symmetry of all the possible sand paths falling through the hole, the probability equivalence between the symmetrical trajectories, the spread, and the role of replication before they are able to predict the final regularity that produces a bell-shaped (normal) distribution. This understanding takes place in the "formal operations" stage (13- to 14-year-olds). In a study of college students' conceptions about normal standard scores, Huck, Cross, and Clark (1986) identified two misconceptions: some students believe that all standard scores will always range between −3 and +3, while other students think there is no restriction on the maximum and minimum values of these scores. Others have examined people's behavior when solving problems involving the normal distribution (Wilensky, 1995, 1997). In interviews with students and professionals with statistical knowledge, Wilensky asked them to solve a problem by using computer simulation. Although most subjects in his research could solve problems related to the normal distribution, they were unable to justify the use of the normal distribution instead of another concept or distribution …
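The excerpt above describes comparing students' taste-test results to a null model of chance guessing by means of simulation. As a rough illustration of that idea (not material from the chapter), here is a minimal Python sketch; the class size, guessing probability, and observed count are invented placeholders.

```python
import numpy as np

# Minimal sketch of a "null model" simulation for a class taste test:
# how surprising is the observed number of correct identifications if
# every student were simply guessing? All numbers below are hypothetical.

rng = np.random.default_rng(42)

n_students = 30        # hypothetical class size
p_guess = 0.5          # chance of a correct identification by pure guessing
observed_correct = 21  # hypothetical observed number of correct identifications

# Simulate many "classes" in which everyone guesses, and record how many
# correct identifications each simulated class produces.
n_reps = 10_000
simulated_correct = rng.binomial(n=n_students, p=p_guess, size=n_reps)

# How often does guessing alone do as well as (or better than) the class did?
p_value = np.mean(simulated_correct >= observed_correct)
print(f"Estimated P(at least {observed_correct} correct under guessing) = {p_value:.3f}")
```

The same pattern (specify a chance model, simulate it many times, compare the observed result to the simulated distribution) is the informal precursor to the inference ideas the excerpt says are revisited in Chapter 13.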


Archive | 2008

Learning to Reason About Samples and Sampling Distributions

Joan Garfield; Dani Ben-Zvi; Beth Chance; Elsa Medina; Cary J. Roseth; Andrew Zieffler

Why is the role of sample size in sampling distributions so hard to grasp? One consideration is that . . . the rule that the variability of a sampling distribution decreases with increasing sample size seems to have only few applications in ordinary life. In general, taking repeated samples and looking at the distribution of their means is rare in the everyday and only recent in scientific practice. (Sedlmeier & Gigerenzer, 1997, p. 46)
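As a small, self-contained illustration of the point in this quotation (not part of the chapter itself), the following Python sketch draws repeated samples from a hypothetical skewed population and shows the standard deviation of the sample means shrinking as the sample size grows; the population and sample sizes are arbitrary choices.

```python
import numpy as np

# Illustrative simulation: the variability of the sampling distribution of the
# mean decreases as the sample size increases. The population (an exponential
# distribution with mean 10) and the sample sizes are arbitrary choices.

rng = np.random.default_rng(0)
n_reps = 5_000  # number of repeated samples at each sample size

for n in (5, 25, 100):
    # Draw n_reps samples of size n and record each sample mean.
    sample_means = rng.exponential(scale=10, size=(n_reps, n)).mean(axis=1)
    print(f"n = {n:3d}: SD of sample means = {sample_means.std():.2f} "
          f"(theoretical sigma/sqrt(n) = {10 / np.sqrt(n):.2f})")
```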


Archive | 2008

Learning to Reason About Statistical Inference

Joan Garfield; Dani Ben-Zvi; Beth Chance; Elsa Medina; Cary J. Roseth; Andrew Zieffler

Students revisit an activity conducted earlier in the semester in the unit on comparing groups with boxplots (Gummy Bears Activity in Lesson 2, Chapter 11). Once again, they are going to design an experiment to compare the distances of gummy bears launched from two different heights. The experiment is discussed, the students form groups, and the conditions are randomly assigned to the groups of students. This time a detailed protocol is developed and used that specifies exactly how students are to launch the gummy bears and measure the results. The data gathered this time seem to have less variability than in the earlier activity, which is good. The students enter the data into Fathom (Key Curriculum Press, 2006), which is used to generate graphs that are compared to the earlier results, showing less within-group variability this time due to the more detailed protocol. There is a discussion of between- versus within-group variability and what the graphs suggest about true differences in distances. Fathom is then used to run a two-sample t-test, and the results show a significant difference, indicated by a small P-value. Next, students have Fathom calculate a 95% confidence interval to estimate the true difference in mean distances. In discussing this experiment, the students revisit important concepts relating to designing experiments, how they are able to draw causal conclusions from this experiment, and the role of variability between and within groups. Connections are drawn between earlier topics and the topic of inference, as well as between tests of significance and confidence intervals, in the context of a concrete experiment. The metaphor of making an argument is revisited from earlier uses in the course, this time in connection with the hypothesis-test procedure. Links are shown between the claim (that higher stacks of books will launch bears farther), the evidence used to support the claim (the data gathered in the experiment), the quality and justification of the evidence (the experimental design, randomization, sample size), limitations in the evidence (the small number of launches), and finally an indicator of how convincing the argument is (the P-value). By discussing the idea of the …
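The analysis above is carried out in Fathom; as a rough stand-in for readers who want to reproduce the idea, the sketch below runs a two-sample t-test and computes a 95% confidence interval for the difference in mean launch distances using SciPy. The distances are invented for illustration and are not data from the activity.

```python
import numpy as np
from scipy import stats

# Hypothetical launch distances (cm) for the two conditions; not real data.
low_stack = np.array([105, 112, 98, 120, 110, 101, 117, 108, 115, 103])
high_stack = np.array([138, 150, 142, 129, 155, 147, 133, 160, 141, 149])

# Two-sample t-test (Welch's version, which does not assume equal variances).
t_stat, p_value = stats.ttest_ind(high_stack, low_stack, equal_var=False)
print(f"t = {t_stat:.2f}, P-value = {p_value:.4f}")

# 95% confidence interval for the difference in mean distances (high - low).
diff = high_stack.mean() - low_stack.mean()
v_high = high_stack.var(ddof=1) / len(high_stack)
v_low = low_stack.var(ddof=1) / len(low_stack)
se = np.sqrt(v_high + v_low)
# Welch-Satterthwaite approximation for the degrees of freedom.
df = (v_high + v_low) ** 2 / (v_high**2 / (len(high_stack) - 1)
                              + v_low**2 / (len(low_stack) - 1))
t_crit = stats.t.ppf(0.975, df)
print(f"95% CI for the difference: {diff - t_crit*se:.1f} to {diff + t_crit*se:.1f} cm")
```

Welch's test is used here as a conservative default; the excerpt does not specify which variant of the two-sample t-test Fathom runs.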


Archive | 2008

Learning to Reason About Comparing Groups

Joan Garfield; Dani Ben-Zvi; Beth Chance; Elsa Medina; Cary J. Roseth; Andrew Zieffler

Students are shown a bag of gummy bears (a rubbery-textured confectionery, roughly two centimeters long, shaped like little bears) and two stacks of books: one short (one book) and one high (four stacked books). They are shown a launcher made with tongue depressors and rubber bands (see Fig. 11.1) and are asked to make a conjecture about how the height of the launching pad will affect the distances the gummy bears travel. The students discuss different rationales for launches traveling farther from either of the height conditions. They are then randomly assigned to small groups to set up and gather data in one of the two conditions, with each small group launching gummy bears 10 times to collect data for their assigned height (short or high stack of books). Once the data are recorded, they are analyzed using boxplots to compare the results for the two conditions. The boxplots are used to determine that the higher launch resulted in farther distances. Students had previously completed an activity that showed them how dot plots can be transformed into boxplots, and they are reminded again of the dots (individual data values) hidden within or represented by the boxplot. Their attention is drawn to two types of variability: the variability between the two sets of data (resulting from the two conditions) and the variability within each group (within each boxplot). Students recall earlier discussions in the variability unit on error variability (noise) and signals in comparing these groups, and they realize the need for an experimental protocol that will help keep the noise small and reveal clearer signals, so that a true difference can be revealed. This experiment is revisited in a later activity, when they are able to use a protocol to gather data with less variability and analyze the difference using a t-test (in the Inference unit; see Chapter 13).
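For readers without Fathom, a minimal Python/matplotlib sketch of the kind of side-by-side boxplot comparison described above might look like the following; the launch distances are simulated placeholders, not classroom data.

```python
import numpy as np
import matplotlib.pyplot as plt

# Simulated placeholder data: 10 launch distances (cm) per condition.
rng = np.random.default_rng(1)
short_stack = rng.normal(loc=110, scale=20, size=10)  # one book
high_stack = rng.normal(loc=145, scale=20, size=10)   # four stacked books

# Side-by-side boxplots make the between-group difference visible against
# the within-group variability (the spread inside each box).
fig, ax = plt.subplots()
ax.boxplot([short_stack, high_stack])
ax.set_xticks([1, 2])
ax.set_xticklabels(["Short stack (1 book)", "High stack (4 books)"])
ax.set_ylabel("Launch distance (cm)")
ax.set_title("Gummy bear launch distances by launching height")
plt.show()
```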


Archive | 2008

Developing Students' Statistical Reasoning: Connecting Research and Teaching Practice

Joan Garfield; Dani Ben-Zvi; Beth Chance; Elsa Medina; Cary J. Roseth; Andrew Zieffler


Technology Innovations in Statistics Education | 2007

The Role of Technology in Improving Student Learning of Statistics

Beth Chance; Dani Ben-Zvi; Joan Garfield; Elsa Medina


Archive | 2008

The Discipline of Statistics Education

Joan Garfield; Dani Ben-Zvi; Beth Chance; Elsa Medina; Cary J. Roseth; Andrew Zieffler


Archive | 2008

Research on Teaching and Learning Statistics

Joan Garfield; Dani Ben-Zvi; Beth Chance; Elsa Medina; Cary J. Roseth; Andrew Zieffler


Archive | 2008

Creating a Statistical Reasoning Learning Environment

Joan Garfield; Dani Ben-Zvi; Beth Chance; Elsa Medina; Cary J. Roseth; Andrew Zieffler


Archive | 2008

Assessment in Statistics Education

Joan Garfield; Dani Ben-Zvi; Beth Chance; Elsa Medina; Cary J. Roseth; Andrew Zieffler

Collaboration


Dive into Elsa Medina's collaboration.

Top Co-Authors

Beth Chance

California Polytechnic State University

Cary J. Roseth

Michigan State University
