Melissa S. Yale
Purdue University
Publications
Featured research published by Melissa S. Yale.
Journal of Science Teacher Education | 2014
Donna Farland-Smith; Kevin D. Finson; William J. Boone; Melissa S. Yale
Long before children can verbalize which careers may interest them, they collect and store ideas about scientists. For this reason, asking children to draw a scientist has become an accepted method of glimpsing how children represent and identify with those in the science fields; years later, these representations may translate into students' career choices. Since 1995, children's illustrations of scientists have been assessed with the Draw-A-Scientist Checklist (DAST-C). The checklist was created from the common features found in illustrations in previous studies and was based initially on the scientists depicted, with images broken down into "stereotypical" and "alternative" categories. The purpose of this paper is to describe the development, field testing, and reliability of the modified Draw-A-Scientist Test (DAST) and the Draw-A-Scientist Rubric, designed as an improvement on the DAST-C that provides a more appropriate method of assessing students' drawings of scientists. The combination of the modified DAST and the DAST Rubric brings greater refinement, allowing clearer and more detailed insight into students' mental images of scientists.
Archive | 2014
William J. Boone; John R. Staver; Melissa S. Yale
This chapter provides readers with a thorough understanding of Rasch person measures, practice in interpreting Winsteps output tables containing person measures, and an explanation of how (and why) these measures are used for parametric statistical tests. Readers are asked to rerun the data from Chap. 3, and then they are provided with a step-by-step interpretation of the key columns presented in the "person measure" tables. Readers learn how to identify the number of items a respondent has answered and the Rasch measure computed for each person. Readers will also learn how to save person measures in a variety of formats for subsequent statistical analysis using software such as SPSS and SAS. In this chapter, readers also learn how to use USCALE and UMEAN in a control file to create a user-friendly Rasch measure scale. The final topic in this chapter is the introduction of cross plots, in which the authors demonstrate that using UMEAN and USCALE does not alter the manner in which persons are measured. The chapter finishes up with Keywords and Phrases, Quick Tips, Data Files, References, and Additional Readings. As in almost all chapters, sample analyses are used to reinforce the chapter topic.
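The UMEAN/USCALE rescaling the chapter describes is a simple linear transformation of the logit scale. A minimal sketch (the function name and default values below are illustrative, not taken from the book or from Winsteps itself):

```python
def rescale(logit_measure, umean=50.0, uscale=10.0):
    """Linearly rescale a Rasch logit measure onto a user-friendly scale.

    UMEAN sets the new value of the 0-logit point and USCALE the number
    of new units per logit (the defaults here are illustrative only).
    """
    return umean + uscale * logit_measure

# A person at +1.5 logits, expressed on a UMEAN=50, USCALE=10 scale:
rescale(1.5)  # 65.0
```

Because the transformation is linear, the ordering of persons and the relative spacing between them are preserved, which is why (as the chapter's cross plots demonstrate) rescaling does not alter the manner in which persons are measured.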
Archive | 2014
William J. Boone; John R. Staver; Melissa S. Yale
Throughout this book, we discuss the Rasch model. This chapter presents details as to why the Rasch model is the model of choice when researchers wish to pool a respondent's answers to a set of items that involve one construct. Readers learn that the Rasch perspective is quite different from the IRT perspective: IRT models are altered to fit the data, whereas when Rasch is used, the data are evaluated for how well they fit the model. The chapter finishes up with a student dialogue, Keywords and Phrases, Quick Tips, References, and Additional Readings.
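The dichotomous Rasch model the chapter contrasts with IRT can be written as P(X = 1) = e^(B−D) / (1 + e^(B−D)), where B is person ability and D is item difficulty, both in logits. A minimal sketch:

```python
import math

def p_correct(person_measure, item_difficulty):
    """Dichotomous Rasch model: probability that a person of ability B
    answers an item of difficulty D correctly,
    P = exp(B - D) / (1 + exp(B - D)), written in numerically stable form."""
    return 1.0 / (1.0 + math.exp(-(person_measure - item_difficulty)))

# When ability equals difficulty, the probability is exactly 0.5:
p_correct(1.0, 1.0)  # 0.5
```

Every item shares this same single-parameter form; more general IRT models (e.g., 2PL and 3PL) add discrimination or guessing parameters so the model can be altered to fit the data, which is precisely the difference in perspective the chapter describes.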
Archive | 2014
William J. Boone; John R. Staver; Melissa S. Yale
This chapter builds upon the concepts presented in Chap. 15, using boundaries set on a Wright map to thoughtfully divide student measures into groups (e.g., low, medium, high). This chapter considers the situation in which the different groupings of students have already been defined (e.g., pain levels have already been defined as Levels I, II, III, and IV). The chapter presents the procedure to determine where (in the case of the pain example) each level is located on the Wright map. The chapter finishes up with a student dialog, Keywords and Phrases, Quick Tips, Data Files, References, and Additional Readings. As in almost all chapters, sample analyses are used to reinforce the chapter topic.
Archive | 2014
William J. Boone; John R. Staver; Melissa S. Yale
One of the key techniques researchers use in measurement is to investigate Differential Item Functioning (DIF). In this chapter, we give examples to help define what DIF is and why it is important to understand. Readers are provided with guidance on how to interpret numerous critical Winsteps tables which help one identify items exhibiting DIF. Particular attention is given to how DIF is related to bias and construct validity. The chapter concludes by guiding readers through the steps they might take if they identify an item with DIF for gender. Steps are presented so measures can be computed while retaining the problematic item. This is done by viewing the item as a different item for males and females. The chapter finishes up with a summary discussion between the two students, Keywords and Phrases, Quick Tips, Data Files, References, and Additional Readings. As in almost all chapters, sample analyses are used to reinforce the chapter topic.
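One way to screen for DIF, as the chapter's Winsteps tables do, is to compare an item's difficulty estimated separately within each group. A rough sketch (the 0.5-logit flag is a commonly cited rule of thumb, not a substitute for the significance tests Winsteps reports, and the function name is hypothetical):

```python
def dif_contrast(difficulty_group_a, difficulty_group_b, threshold=0.5):
    """Crude DIF screen: the difference between an item's difficulty
    (in logits) estimated separately for two groups of respondents.
    Items whose absolute contrast meets the threshold are flagged for
    closer inspection; the 0.5-logit default is a rule of thumb only."""
    contrast = difficulty_group_a - difficulty_group_b
    return contrast, abs(contrast) >= threshold

# An item noticeably harder for group A than group B:
dif_contrast(1.2, 0.4)  # ≈ (0.8, True) — flagged
```

The chapter's strategy of retaining a problematic item by treating it as a different item for males and females amounts to letting each group keep its own difficulty estimate rather than forcing a shared one.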
Archive | 2014
William J. Boone; John R. Staver; Melissa S. Yale
In many research projects, it is important to be able to link different forms of a test or a survey. In this chapter, we explain how item anchoring can be conducted in order to link forms. Also, we provide guidance on strategies for selecting the items that "link" forms, thus allowing all respondents to be expressed on the same measurement scale even though not all respondents completed all items. The chapter also explains how to double-check the linking and understand the score measure table for each of the forms you have linked. The chapter finishes up with a student dialog, Keywords and Phrases, Quick Tips, Data Files, References, and Additional Readings. As in almost all chapters, sample analyses are used to reinforce the chapter topic.
Archive | 2014
William J. Boone; John R. Staver; Melissa S. Yale
In many projects, it is important to be able to compute Rasch person measures and item measures. This chapter introduces the use of Rasch to confidently compute "pass/fail" points for a test. Such a point can be thought of as the location of a line at a particular measure on a Wright map: persons below the line fail. Some research projects may require a number of horizontal lines, for example, a line differentiating failing students from those who "low pass," a line differentiating "low pass" students from "medium pass" students, and a line differentiating "medium pass" from "high pass" students. Step-by-step procedures are given to compute, understand, and explain these boundaries. The chapter finishes up with a student dialog, Keywords and Phrases, Quick Tips, Data Files, References, and Additional Readings. As in almost all chapters, sample analyses are used to reinforce the chapter topic.
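Once the horizontal lines are located on the Wright map, classifying a person is just a comparison of their measure against the ordered cut points. A minimal sketch (the cut values and band labels below are illustrative; a real study sets them substantively through the procedures the chapter describes):

```python
def classify(measure, cuts=(-1.0, 0.0, 1.0)):
    """Place a person measure (in logits) into one of the ordered bands
    defined by cut points drawn on a Wright map. Cut values and labels
    here are illustrative placeholders only."""
    labels = ["fail", "low pass", "medium pass", "high pass"]
    band = sum(measure >= c for c in cuts)  # number of cut lines at or below the person
    return labels[band]

classify(-1.4)  # 'fail'
classify(0.3)   # 'medium pass'
```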
Archive | 2012
William J. Boone; John R. Staver; Melissa S. Yale
This chapter presents key ideas and examples of cognitive diagnostic assessment informed by Rasch measurement theory and the application of Rasch measurement. The methods involved in Rasch measurement might appear daunting at first sight, but with the availability of such user-friendly software as Winsteps, teachers and all others who develop assessments can quickly and thoughtfully create assessments that lend themselves to the collection and analysis of data for evidence-based decision making. In this chapter, a non-mathematical and applied approach is used to explain Rasch measurement, providing concepts and techniques that are easy to read, digest, and apply immediately to problems in cognitive diagnostic assessment. Furthermore, new perspectives on the benefits of using Rasch measurement are presented for experienced users.
Archive | 2014
William J. Boone; John R. Staver; Melissa S. Yale
When researchers learn about Rasch, along with terms such as logit, person measure, item measure, and Wright maps, they will also learn about the ogive curve. The Winsteps score-measure table is used to aid readers in the creation of an ogive. Following this plotting of raw scores and measures, the chapter considers different pairs of students and shows that the same difference in raw scores is not the same difference in measures. The example is then extended to consider how the ogive impacts comparisons from a pre-time point to a post-time point. The final details involve showing that the meaning of the ogive does not change with rescaling through appropriate use of USCALE and UMEAN. The chapter finishes up with Keywords and Phrases, Quick Tips, Data Files, References, and Additional Readings. As in almost all chapters, sample analyses are used to reinforce the chapter topic.
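The core point — equal raw-score gaps are not equal measure gaps — can be seen in the simplest possible case, a test whose items all share the same difficulty (set to 0 logits), where the person measure for a raw score r on an L-item test is log(r / (L − r)). This is a sketch of the score-to-measure ogive, not the Winsteps table itself, which handles items of varying difficulty:

```python
import math

def measure_for_raw_score(raw, n_items):
    """Person measure (logits) for a raw score on a test whose items all
    have difficulty 0 — the simplest case of the score-to-measure ogive.
    Undefined for extreme scores of 0 or n_items."""
    return math.log(raw / (n_items - raw))

# Equal one-point raw-score gaps on a 20-item test, unequal measure gaps:
measure_for_raw_score(11, 20) - measure_for_raw_score(10, 20)  # ≈ 0.20 logits
measure_for_raw_score(19, 20) - measure_for_raw_score(18, 20)  # ≈ 0.75 logits
```

The stretching near the extremes is exactly why a one-point raw gain means more for a near-perfect scorer than for a mid-range scorer, and why pre/post comparisons in raw scores can mislead.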
Archive | 2014
William J. Boone; John R. Staver; Melissa S. Yale
When conducting an analysis of test data or survey data using Rasch techniques, missing data often are not a big problem – a student who skips an item on a test can still be measured using the items he or she attempts. In fact, when linking test or survey forms, the issue can be thought of, in part, as a missing data issue. In this chapter, we discuss why missing data often will not impact the measurement of a respondent. However, we also discuss how to view missing data. For example, should skipped items be counted as "wrong" while items "not reached" are counted as "missing"? We consider several missing data issues and explain how Winsteps allows one to experiment with the coding of missing data, with the goal of computing accurate measures of respondents. The chapter finishes up with a student dialogue, Keywords and Phrases, Quick Tips, Data Files, References, and Additional Readings. As in almost all chapters, sample analyses are used to reinforce the chapter topic.
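The point that a respondent can be measured from only the items attempted can be illustrated with a bare-bones maximum-likelihood sketch. This is not Winsteps' joint estimation: item difficulties are taken as already known, and extreme scores (all right or all wrong on the attempted items) would need special handling:

```python
import math

def person_measure(responses, item_difficulties):
    """Maximum-likelihood Rasch person measure using only the items the
    person actually attempted; responses coded None are simply skipped,
    mirroring how missing data can drop out of estimation. Assumes known
    item difficulties and a non-extreme score on the attempted items."""
    attempted = [(r, d) for r, d in zip(responses, item_difficulties)
                 if r is not None]
    theta = 0.0
    for _ in range(50):  # Newton-Raphson on the log-likelihood
        probs = [1.0 / (1.0 + math.exp(-(theta - d))) for _, d in attempted]
        gradient = sum(r - p for (r, _), p in zip(attempted, probs))
        hessian = -sum(p * (1.0 - p) for p in probs)
        theta -= gradient / hessian
    return theta
```

A respondent who skips the fourth item gets exactly the same measure as one who was never administered it, which is the sense in which a skipped-as-missing item "does not impact" measurement; recoding that skip as "wrong" instead would lower the measure, which is the interpretive choice the chapter asks readers to make deliberately.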