
Publication


Featured research published by Catherine S. Taylor.


American Educational Research Journal | 1994

Assessment for Measurement or Standards: The Peril and Promise of Large-Scale Assessment Reform

Catherine S. Taylor

The current call for performance-based assessments is, in part, a consequence of inappropriate uses of norm-referenced achievement tests. Still, the use of performance-based assessment will not automatically eliminate the negative consequences of high-stakes tests, nor support hoped-for changes in schools. School reform will be supported only if new assessment systems are developed using a model that is in harmony with the goals of reform. This article reviews two models for assessment, the measurement model and the standards model, their underlying assumptions about learners, and the resulting implications for performance-based test development. It briefly reviews the current testing debate, defines terms such as authentic assessments and performance-based assessments, and discusses the compromises that have led to the failed attempts to use testing to set standards for education. Finally, the article reflects on the power each assessment model can have on reform efforts.


Applied Environmental Education & Communication | 2006

Improving Test Scores Through Environmental Education: Is It Possible?

Oksana Bartosh; Margaret Tudor; Lynne Ferguson; Catherine S. Taylor

The present research investigated the impact of environmental education (EE) programs on student achievement in math, reading, and writing by comparing student performance on two standardized tests for environmental education schools and schools with a traditional curriculum. Quantitative analysis was used to evaluate the impact of the EE programs. The study indicates that schools with integrated environmental education programs outperform comparable “non-EE” schools on the tests. The authors believe that this exploratory research shows a pattern or trend between the level of implementation of environmental education and student achievement, which calls for more in-depth studies to investigate correlational or cause-and-effect relationships.


Journal of Educational Computing Research | 1997

Evidence for the Reliability and Factorial Validity of the "Computer Game Attitude Scale"

Kelly K. Chappell; Catherine S. Taylor

The Computer Game Attitude Scale (CGAS) evaluates student attitudes toward educational computer games. This study provides evidence for the reliability and factorial validity of the scores of the CGAS and its two subscales. Study participants were 186 middle school students from two large school districts in the Pacific Northwest, one urban and one suburban. The CGAS produced scores with a total test alpha coefficient of .88 for the sample. A principal components factor analysis with a two-factor solution and a varimax rotation was conducted on the items of the CGAS. The two factors explained 44 percent of the total variance. The pattern of loadings in the principal components factor analysis supports the grouping implied by the two subscales, indicating that the two subscales were sufficiently stable to be used as separate scores. Data indicate that the CGAS produced reliable test scores that may aid researchers, computer game designers, and teachers in the evaluation of educational software games.
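The total-test alpha reported above can be illustrated with a short sketch. This is not the study's code, and the score matrix below is invented toy data; it simply shows how Cronbach's alpha is computed from a respondents-by-items score matrix:

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) score matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of total scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Toy data: 6 respondents x 4 Likert-type items (hypothetical, not CGAS data)
scores = np.array([
    [4, 5, 4, 4],
    [2, 2, 3, 2],
    [5, 5, 5, 4],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 5],
], dtype=float)

alpha = cronbach_alpha(scores)
print(f"alpha = {alpha:.2f}")
```

With the study's 186 respondents and the full CGAS item set, the same computation would yield the reported total-test coefficient; values in the high .80s are generally read as strong internal consistency for attitude scales.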


Applied Measurement in Education | 2012

Gender DIF in Reading and Mathematics Tests With Mixed Item Formats

Catherine S. Taylor; Yoonsun Lee

This was a study of differential item functioning (DIF) for grades 4, 7, and 10 reading and mathematics items from state criterion-referenced tests. The tests were composed of multiple-choice and constructed-response items. Gender DIF was investigated using POLYSIBTEST and a Rasch procedure. The Rasch procedure flagged more items for DIF than did the simultaneous item bias procedure—particularly multiple-choice items. For both reading and mathematics tests, multiple-choice items generally favored males while constructed-response items generally favored females. Content analyses showed that flagged reading items typically measured text interpretations or implied meanings; males tended to benefit from items that asked them to identify reasonable interpretations and analyses of informational text. Most items that favored females asked students to make their own interpretations and analyses, of both literary and informational text, supported by text-based evidence. Content analysis of mathematics items showed that items favoring males measured geometry, probability, and algebra. Mathematics items favoring females measured statistical interpretations, multistep problem solving, and mathematical reasoning.
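The Rasch-based DIF screening described above can be roughed out as follows. This is a deliberately crude sketch, not POLYSIBTEST or a full Rasch calibration: item difficulty is approximated by the centered log-odds of an incorrect response within each group, and the per-item difficulty contrast between groups is flagged against a conventional 0.5-logit screening threshold. The proportions correct are hypothetical, not study data:

```python
import numpy as np

def centered_logit_difficulty(p_correct: np.ndarray) -> np.ndarray:
    """Crude Rasch-style item difficulty: centered log-odds of an
    incorrect response, so difficulties sum to zero within a group."""
    d = np.log((1.0 - p_correct) / p_correct)
    return d - d.mean()

# Hypothetical per-item proportions correct for two groups
p_male   = np.array([0.80, 0.65, 0.55, 0.40])
p_female = np.array([0.72, 0.66, 0.62, 0.50])

# Difficulty contrast in logits; |contrast| > 0.5 is a common screen
diff = centered_logit_difficulty(p_male) - centered_logit_difficulty(p_female)
flagged = np.abs(diff) > 0.5
for i, (d, f) in enumerate(zip(diff, flagged), start=1):
    print(f"item {i}: difficulty contrast {d:+.2f} logits{'  <- flag' if f else ''}")
```

A negative contrast here means the item is relatively easier for the first group; in an operational analysis the contrast would come from separately calibrated Rasch difficulties rather than raw proportions.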


Physical Therapy | 2013

Expanding the Scoring System for the Dynamic Gait Index

Anne Shumway-Cook; Catherine S. Taylor; Patricia Noritake Matsuda; Michael T. Studer; Brady K. Whetten

Background The Dynamic Gait Index (DGI) measures the capacity to adapt gait to complex tasks. The current scoring system combining gait pattern (GP) and level of assistance (LOA) lacks clarity, and the test has a limited range of measurement. Objective This study developed a new scoring system based on 3 facets of performance (LOA, GP, and time) and examined the psychometric properties of the modified DGI (mDGI). Design A cross-sectional, descriptive study was conducted. Methods Nine hundred ninety-five participants (855 patients with neurologic pathology and mobility impairments [MI group] and 140 patients without neurological impairment [control group]) were tested. Interrater reliability was calculated using kappa coefficients. Internal consistency was computed using the Cronbach alpha coefficient. Factor analysis and Rasch analysis investigated unidimensionality and range of difficulty. Internal validity was determined by comparing groups using multiple t tests. Minimal detectable change (MDC) was calculated for total score and 3 facet scores using the reliability estimate for the alpha coefficients. Results Interrater agreement was strong, with kappa coefficients ranging from 90% to 98% for time scores, 59% to 88% for GP scores, and 84% to 100% for LOA scores. Test-retest correlations (r) for time, GP, and LOA were .91, .91, and .87, respectively. Three factors (time, LOA, GP) had eigenvalues greater than 1.3 and explained 79% of the variance in scores. All group differences were significant, with moderate to large effect sizes. The 95% minimal detectable change (MDC95) was 4 for the mDGI total score, 2 for the time and GP total scores, and 1 for the LOA total score. Limitations The limitations included uneven sample sizes in the 2 groups. The MI group comprised patients receiving physical therapy and therefore may not be representative of the broader population of people with mobility impairments.
Conclusions The mDGI, with its expanded scoring system, improves the range, discrimination, and facets of measurement related to walking function. The strength of the psychometric properties of the mDGI warrants its adoption for both clinical and research purposes.
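The minimal detectable change statistic reported above follows a standard formula: MDC95 = 1.96 × √2 × SEM, where SEM = SD × √(1 − reliability). A minimal sketch, using the article's test-retest reliability of .91 but a hypothetical standard deviation (the article does not state one here):

```python
import math

def mdc95(sd: float, reliability: float) -> float:
    """Minimal detectable change at 95% confidence.

    SEM = sd * sqrt(1 - reliability); MDC95 = 1.96 * sqrt(2) * SEM.
    """
    sem = sd * math.sqrt(1.0 - reliability)
    return 1.96 * math.sqrt(2.0) * sem

# reliability .91 is from the article; sd = 4.8 is a hypothetical value
print(round(mdc95(sd=4.8, reliability=0.91), 1))  # prints 4.0
```

The √2 term reflects measurement error on both test and retest; a change smaller than the MDC95 cannot be distinguished from measurement noise with 95% confidence.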


Educational Assessment | 2011

Ethnic DIF in Reading Tests with Mixed Item Formats

Catherine S. Taylor; Yoonsun Lee

This article presents a study of ethnic Differential Item Functioning (DIF) for 4th-, 7th-, and 10th-grade reading items on a state criterion-referenced achievement test. The tests, administered from 1997 to 2001, were composed of multiple-choice and constructed-response items. Item performance by focal groups (i.e., students from Asian/Pacific Islander, Black/African American, Native American, and Latino/Hispanic origins) was compared with the performance of White students using simultaneous item bias and Rasch procedures. Flagged multiple-choice items generally favored White students, whereas flagged constructed-response items generally favored students from Asian/Pacific Islander, Black/African American, and Latino/Hispanic origins. Content analysis of flagged reading items showed that positively and negatively flagged items typically measured inference, interpretation, or analysis of text in multiple-choice and constructed-response formats. Items that were not flagged for DIF generally measured very easy reading skills (e.g., literal comprehension) and reading skills that require higher-level thinking (e.g., developing interpretations across texts and analyzing graphic elements).


Physical Therapy | 2015

Investigating the Validity of the Environmental Framework Underlying the Original and Modified Dynamic Gait Index

Anne Shumway-Cook; Patricia Noritake Matsuda; Catherine S. Taylor

Background The modified Dynamic Gait Index (mDGI), developed from a person-environment model of mobility disability, measures mobility function relative to specific environmental demands. The framework for interpreting mDGI scores relative to specific environmental dimensions has not been investigated. Objective The aim of this study was to examine the person-environmental model underlying the development and interpretation of mDGI scores. Design This was a cross-sectional, descriptive study. Methods There were 794 participants in the study, including 140 controls. Out of the total study population, 239 had sustained a stroke, 140 had vestibular dysfunction, 100 had sustained a traumatic brain injury, 91 had gait abnormality, and 84 had Parkinson disease. Exploratory factor analysis was used to investigate whether mDGI scores supported the 4 environmental dimensions. Results Factor analysis showed that, with some exceptions, tasks loaded on 4 underlying factors, partially supporting the underlying environmental model. Limitations Limitations of this study included the uneven sample sizes in the 6 groups. Conclusions Support for the environmental framework underlying the mDGI extends its usefulness as a clinical measure of functional mobility by providing a rationale for interpretation of scores that can be used to direct treatment and infer change in mobility function.


Physical Therapy | 2014

Evidence for the validity of the modified dynamic gait index across diagnostic groups

Patricia Noritake Matsuda; Catherine S. Taylor; Anne Shumway-Cook

Background The modified Dynamic Gait Index (mDGI) measures the capacity to adapt gait to complex tasks utilizing 8 tasks and 3 facets of performance. The measurement stability of the mDGI in specific diagnostic groups is unknown. Objective This study examined the psychometric properties of the mDGI in 5 diagnostic groups. Design This was a cross-sectional, descriptive study. Methods A total of 794 participants were included in the study: 140 controls, 239 with stroke, 140 with vestibular dysfunction, 100 with traumatic brain injury, 91 with gait abnormality, and 84 with Parkinson disease. Differential item functioning analysis was used to examine the comparability of scores across diagnoses. Internal consistency was computed using Cronbach alpha. Factor analysis was used to examine the factor loadings for the 3 performance facet scores. Minimal detectable change at the 95% confidence level (MDC95%) was calculated for each of the groups. Results Less than 5% of comparisons demonstrated moderate to large differential item functioning, suggesting that item scores had the same order of difficulty for individuals in all 5 diagnostic groups. For all 5 patient groups, 3 factors had eigenvalues >1.0 and explained 80% of the variability in scores, supporting the importance of characterizing mobility performance with respect to time, level of assistance, and gait pattern. Limitations There were uneven sample sizes in the 6 groups. Conclusions The strength of the psychometric properties of the mDGI across the 5 diagnostic groups further supports the validity and usefulness of scores for clinical and research purposes. In addition, the meaning of a score from the mDGI, regardless of whether at the task, performance facet, or total score level, was comparable across the 5 diagnostic groups, suggesting that the mDGI measured mobility function independent of medical diagnosis.


Physical Therapy | 2015

Examining the Relationship Between Medical Diagnoses and Patterns of Performance on the Modified Dynamic Gait Index

Patricia Noritake Matsuda; Catherine S. Taylor; Anne Shumway-Cook

Background In the original and modified Dynamic Gait Index (mDGI), 8 tasks are used to measure mobility; however, disagreement exists regarding whether all tasks are necessary. The relationship between mDGI scores and Centers for Medicare & Medicaid Services (CMS) severity indicators in the mobility domain has not been explored. Objective The study objectives were to examine the relationship between medical diagnoses and mDGI scores, to determine whether administration of the mDGI can be shortened on the basis of expected diagnostic patterns of performance, and to create a model in which mDGI scores are mapped to CMS severity modifiers. Design This was a cross-sectional, descriptive study. Methods The 794 participants included 140 people without impairments (control cohort) and 239 people with stroke, 140 with vestibular dysfunction, 100 with traumatic brain injury, 91 with gait abnormality, and 84 with Parkinson disease. Scores on the mDGI (total, performance facet, and task) for the control cohort were compared with those for the 5 diagnostic groups by use of an analysis of variance. For mapping mDGI scores to 7 CMS impairment categories, an underlying Rasch scale was used to convert raw scores to an interval scale. Results There was a main effect of mDGI total, time, and gait pattern scores for the groups. Task-specific score patterns based on medical diagnosis were found, but the range of performance within each group was large. A framework for mapping mDGI total, performance facet, and task scores to 7 CMS impairment categories on the basis of Rasch analysis was created. Limitations Limitations included uneven sample sizes in the 6 groups. Conclusions Results supported retaining all 8 tasks for the assessment of mobility function in older people and people with neurologic conditions. Mapping mDGI scores to CMS severity indicators should assist clinicians in interpreting mobility performance, including changes in function over time.


Archive | 2004

Classroom Assessment: Supporting Teaching and Learning in Real Classrooms

Catherine S. Taylor; Susan Bobbitt Nolen

Collaboration


Dive into Catherine S. Taylor's collaboration.

Top Co-Authors

Yoonsun Lee

Seoul Women's University


Oksana Bartosh

University of British Columbia
