Network


Latest external collaborations at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Connie Fox is active.

Publication


Featured research published by Connie Fox.


Measurement in Physical Education and Exercise Science | 2011

PE Metrics: Background, Testing Theory, and Methods

Weimo Zhu; Judy Rink; Judith H. Placek; Kim C. Graber; Connie Fox; Jennifer L. Fisette; Ben Dyson; Youngsik Park; Marybell Avery; Marian Franck; De Raynes

New testing theories, concepts, and psychometric methods (e.g., item response theory, test equating, and item bank) developed during the past several decades have many advantages over previous theories and methods. In spite of their introduction to the field, they have not been fully accepted by physical educators. Further, the manner in which many assessments are developed and used in physical education has limitations, including isolated test development, weak or poor psychometric quality control, lack of evaluation frameworks, and failure to measure change or growth. To eliminate these shortcomings and meet the needs of standard-based assessment, a major national effort was undertaken to develop an item or assessment bank, called “PE Metrics,” for assessing the national content standards for physical education. After providing a brief introduction to the background of PE Metrics, this article will describe the nature of the testing theory, psychometric methods, and how they were used in the construction of PE Metrics. Constraints of developing such a system are acknowledged, and future directions in physical education assessments are outlined.


Measurement in Physical Education and Exercise Science | 2011

Development of PE Metrics Elementary Assessments for National Physical Education Standard 1

Ben Dyson; Judith H. Placek; Kim C. Graber; Jennifer L. Fisette; Judy Rink; Weimo Zhu; Marybell Avery; Marian Franck; Connie Fox; De Raynes; Youngsik Park

This article describes how assessments in PE Metrics were developed following six steps: (a) determining test blueprint, (b) writing assessment tasks and scoring rubrics, (c) establishing content validity, (d) piloting assessments, (e) conducting item analysis, and (f) modifying the assessments based on analysis and expert opinion. A task force, composed of researchers, measurement and evaluation experts, teacher educators, K–12 physical education teachers, and education administrators, was formulated. The task force then determined a test blueprint for Grades K, 2, and 5 and developed corresponding assessments to assess the standards. The content validity evidence was established by a panel of experts examining the degree to which the content of the assessments matched the content of the national standards, specifically Standard 1. A total of 30 assessments (Kindergarten = 8, Grade 2 = 11, and Grade 5 = 11) were developed. They were piloted to a total of 773 students (Kindergarten = 232, Grade 2 = 268, and Grade 5 = 273). Descriptive statistics (e.g., M, SD, frequency) were computed for each assessment. More than 50% of the means were between 2.2 and 2.8 (on a 4-point scoring rubric). Assessment responses were well distributed; only 2.2% had an SD of 0, which indicates that assessments were well developed. With some editorial changes, the assessments were ready for the final calibration of PE Metrics construction.


Measurement in Physical Education and Exercise Science | 2001

Validity and Reliability of a Folk-Dance Performance Checklist for Children

Becky Smith Slettum; Connie Fox; Marilyn A. Looney; Danielle M. Jay

This study was designed to determine if a folk-dance performance checklist had logical validity and to establish intrarater and interrater reliability coefficients for raters using the checklist. Fourth-grade students were videotaped during regular physical education classes to create videos for raters to view and score. A total of 5 educators with backgrounds in physical education or dance participated in a training session and coded videotaped performances. Intrarater and interrater reliability were documented in 2 ways: percent of agreement and intraclass coefficients based on the 1-way repeated measures analysis of variance model for a single measure and the average of all measures. The results of the study support the importance of a training session when using a checklist as a method to evaluate student performance. All raters were able to be trained to effectively apply the performance standards after one 4-hour training session and demonstrated high interrater reliability when scoring 3 of the 6 skill components.


Measurement in Physical Education and Exercise Science | 2011

Related Critical Psychometric Issues and Their Resolutions during Development of PE Metrics

Connie Fox; Weimo Zhu; Youngsik Park; Jennifer L. Fisette; Kim C. Graber; Ben Dyson; Marybell Avery; Marian Franck; Judith H. Placek; Judy Rink; De Raynes

In addition to validity and reliability evidence, other psychometric qualities of the PE Metrics assessments needed to be examined. This article describes how those critical psychometric issues were addressed during the PE Metrics assessment bank construction. Specifically, issues included (a) number of items or assessments needed, (b) training protocol for required intra- and inter-rater objectivity, and (c) the development of a score scale. First, using a subsample of data from the PE Metrics study, in which students were assessed using four assessments, the impact of the number of assessments was examined. It was found that at least two assessments are needed when applying PE Metrics for the purpose of high stakes testing. Single individual assessments can still be used in teaching practice, but the results must be interpreted with caution. Second, with the training protocol developed for PE Metrics, satisfactory intra-rater objectivity can be achieved. When two or more raters are involved in rating, however, an additional monitoring protocol should be employed so that inter-rater objectivity can be monitored and controlled. Third, to help allow for a consistent interpretation and reporting of PE Metrics results, a score scale was developed. Other related issues, such as test fairness and setting performance standards, were discussed, and future directions concerning PE Metrics maintenance and continuing development were outlined.


Strategies | 2009

PE Metrics: Assessing the National Standards: Article #2 in a 4-part series: Instructional Considerations for Implementing Student Assessments

Jennifer L. Fisette; Judith H. Placek; Marybell Avery; Ben Dyson; Connie Fox; Marian Franck; Kim C. Graber; Judith E. Rink; Weimo Zhu

The first article of the PE Metrics series, Developing Quality Physical Education through Student Assessments (January/February 2009 Strategies issue), focused on the importance of assessing student learning in relation to NASPE's content standards (NASPE, 2004). The article emphasized that unless students are appropriately assessed, it is impossible to accurately determine what they have learned and achieved as a result of physical education class. The physical education assessments recently published in PE Metrics: Assessing the National Standards, Standard 1: Elementary (2008) provide valid and reliable tools to measure student learning and inform teacher instruction. This second article of the PE Metrics series focuses on explaining the formative and summative assessment processes and introducing different instructional considerations that teachers will need to contemplate in order to effectively implement the assessments.


The Journal of Physical Education, Recreation & Dance | 2012

How Teachers Can Use PE Metrics for Grading

Connie Fox

Abundant literature demonstrates that physical education teachers grade primarily on attendance, participation, and dress. Skill performance and fitness, which experts advocate are areas on which teacher assessments should focus, represent only a small percentage of students' overall grade (Hensley, 1990; Johnson, 2008; Kleinman, 1997; Stiggins, 2001). In fact, skill and knowledge tests typically represent less than half of students' grades, whereas subjective evaluations of participation and effort, followed by attitude, skill, attendance, and dressing out, represent the largest portion of the grade. Although more than half of all teachers use improvement as a basis for grading (Hensley et al., 1989), there are multiple reasons for discontinuing this practice. One of the most compelling is that improvement does not adequately reflect whether a student's performance level is competent. In order to improve current assessment practices and accurately inform students and others about their level of performance, PE Metrics was developed (NASPE, 2008, 2010, 2011). PE Metrics is a series of valid, reliable, and useful assessments designed to measure student achievement of the national standards in physical education (National Association for Sport and Physical Education [NASPE], 2004). This article will explain what and how to grade using PE Metrics and will present three examples of grading procedures.


Strategies: a journal for physical and sport educators | 2009

PE Metrics: Assessing the National Standards Article #3 in a 4-part series: The Benefits and Advantages of Nationally Tested Assessments

Jennifer L. Fisette; Kim C. Graber; Judith H. Placek; Marybell Avery; Ben Dyson; Connie Fox; Marian Franck; Judith E. Rink; Weimo Zhu

This is the third of four articles in the PE Metrics series related to the value of using national assessments to inform teachers, students, parents, and administrators about student progress at the elementary level toward achieving the National Standards for Physical Education (NASPE, 2004) for Standard 1. The first article focused on the importance of assessing student learning (Fisette et al., 2009a). The second explained formative and summative assessment processes and instructional considerations when implementing the standards (Fisette et al., 2009b). This third article discusses the benefits and advantages of using the assessments that appear in PE Metrics: Assessing the National Standards, Standard 1: Elementary (NASPE, 2008).


Strategies: a journal for physical and sport educators | 2009

PE Metrics: Assessing the National Standards Article #4 in a 4-Part Series: NASPE Developed Informational Products and Applications

Jennifer L. Fisette; Judith H. Placek; Marybell Avery; Ben Dyson; Connie Fox; Marian Franck; Kim C. Graber; Judith E. Rink; Weimo Zhu

This is the fourth and final article in the PE Metrics series that focuses on assessing the National Standards for Physical Education (NASPE, 2004) for Standard 1. The first article focused on assessment of student learning (Fisette et al., 2009a). The second described formative and summative assessments and provided considerations on how to implement assessment within standards-based units and lessons (Fisette et al., 2009b). The third article (Fisette et al., 2009c) discussed the benefits and advantages of using the text PE Metrics: Assessing the National Standards, Standard 1: Elementary (2008). This final article of the PE Metrics series will focus on informational products and applications currently being developed by NASPE, which include:

• PE Metrics: Assessing the National Standards, Standard 1: Secondary
• PE Metrics: Assessing the National Standards, Standards 2-6
• PE Metrics Web Application


Measurement in Physical Education and Exercise Science | 2011

Development and Calibration of an Item Bank for PE Metrics Assessments: Standard 1

Weimo Zhu; Connie Fox; Youngsik Park; Jennifer L. Fisette; Ben Dyson; Kim C. Graber; Marybell Avery; Marian Franck; Judith H. Placek; Judy Rink; De Raynes


Strategies: a journal for physical and sport educators | 2009

Developing Quality Physical Education through Student Assessments

Jennifer L. Fisette; Judith H. Placek; Marybell Avery; Ben Dyson; Connie Fox; Marian Franck; Kim C. Graber; Judith E. Rink; Weimo Zhu

Collaboration


Dive into Connie Fox's collaborations.

Top Co-Authors


Judith H. Placek

University of Massachusetts Amherst


Ben Dyson

University of Auckland


Judith E. Rink

University of South Carolina


Judy Rink

University of South Carolina


Marilyn A. Looney

Northern Illinois University
