Publication


Featured research published by Stephen B. Dunbar.


Educational Researcher | 1991

Complex, Performance-Based Assessment: Expectations and Validation Criteria

Robert L. Linn; Eva L. Baker; Stephen B. Dunbar

In recent years there has been an increasing emphasis on assessment results, as well as increasing concern about the nature of the most widely used forms of student assessment and the uses that are made of the results. These conflicting forces have helped create a burgeoning interest in alternative forms of assessment, particularly complex, performance-based assessments. It is argued that there is a need to rethink the criteria by which the quality of educational assessments is judged, and a set of criteria that are sensitive to some of the expectations for performance-based assessments is proposed.


Journal of Special Education | 1984

Hierarchical Factor Analysis of the K-ABC: Testing Alternate Models

Timothy Z. Keith; Stephen B. Dunbar

The Kaufman Assessment Battery for Children is a new, individually administered test designed to assess simultaneous and sequential mental processing and achievement in children ages 2 ½ to 12 ½. Factor analyses of the K-ABC standardization data generally offer support for the validity of the two mental processing scales, but analyses including the achievement tests have been considerably less supportive. For the present study, data from the standardization sample were used to test alternate structures for the K-ABC, based on the hypothesis that the test measures verbal memory skills, and verbal and nonverbal reasoning. Hierarchical factor models based on this structure were developed and tested using confirmatory techniques. Results suggest that the models fit the data fairly well, thus supporting the validity of this alternate structure. Of particular interest was the finding of the virtual equivalence of the verbal reasoning factor and the second order, or general ability, factor. It appears that users should exercise caution when interpreting K-ABC scores, especially scores on the K-ABC achievement scale.
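
A hierarchical structure of this kind can be specified and fit with modern SEM software. The sketch below uses Python's semopy package with illustrative placeholder names for the factors, subtests, and data file (the study's actual model and variable set are not reproduced here):

    import pandas as pd
    from semopy import Model

    # Hypothetical hierarchical CFA: three first-order factors
    # (verbal memory, verbal reasoning, nonverbal reasoning) with a
    # second-order general factor, in lavaan-style syntax.
    desc = """
    VerbalMemory =~ number_recall + word_order + hand_movements
    VerbalReasoning =~ riddles + faces_places + arithmetic
    NonverbalReasoning =~ triangles + matrix_analogies + photo_series
    g =~ VerbalMemory + VerbalReasoning + NonverbalReasoning
    """

    data = pd.read_csv("subtest_scores.csv")  # hypothetical data file
    model = Model(desc)
    model.fit(data)
    print(model.inspect())  # loadings, error variances, factor (co)variances

A second-order loading near 1.0 for a first-order factor would reproduce the paper's key finding: that factor is virtually indistinguishable from the general factor.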


Educational and Psychological Measurement | 2001

The Relative Appropriateness of Eight Measurement Models for Analyzing Scores from Tests Composed of Testlets

Guemin Lee; Stephen B. Dunbar; David A. Frisbie

It has been shown that fundamental assumptions associated with conventional one-factor measurement models are frequently violated in analyses of scores from a test composed of testlets. Eight different measurement models were conceptualized for this kind of situation, and the goodness of fit of each model was examined. Conventional essentially tau-equivalent and congeneric models provide worse fit to the data and overestimate the reliability when testlets are involved. The one-factor congeneric model with correlated error specifications seems to be the best measurement model for a test composed of testlets if dichotomously scored items are used as the unit of analysis. However, in estimating score reliability for tests composed of testlets, the one-factor essentially tau-equivalent model with correlated error specifications also provides good estimates. Measurement models using passage (testlet) scores would be alternatives for analyzing scores from tests composed of testlets when passage (testlet) scores are used as the unit of analysis.
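
The model contrast at issue can be written compactly. In standard notation (a sketch, not taken from the article), the congeneric model for item score X_i with a correlated-error specification is

    X_i = \tau_i + \lambda_i \eta + \varepsilon_i,
    \qquad
    \operatorname{Cov}(\varepsilon_i, \varepsilon_j) =
      \begin{cases}
        \theta_{ij}, & \text{items } i \text{ and } j \text{ in the same testlet},\\
        0, & \text{otherwise},
      \end{cases}

with the essentially tau-equivalent model adding the constraint \lambda_i = \lambda for all i. For a total score Y = \sum_i X_i, the model-based reliability is

    \rho_{YY'} =
      \frac{\bigl(\sum_i \lambda_i\bigr)^2}
           {\bigl(\sum_i \lambda_i\bigr)^2 + \sum_i \theta_{ii} + 2\sum_{i<j} \theta_{ij}},

so ignoring the positive within-testlet error covariances \theta_{ij} shrinks the denominator and inflates the reliability estimate, which is the overestimation the study reports for the conventional models.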


Applied Psychological Measurement | 1990

Standard Errors of Correlations Adjusted for Incidental Selection

Nancy L. Allen; Stephen B. Dunbar

The standard error of correlations that have been adjusted for selection with commonly used formulas developed by Pearson (1903) was investigated. The major purposes of the study were (1) to provide large-sample approximations of the standard error of a correlation adjusted using the Pearson-Lawley three-variable correction formula; (2) to examine the standard errors of adjusted correlations under specific conditions; and (3) to compare various estimates of the standard errors under direct and indirect selection. Two theory-based large-sample estimates of the standard error of a correlation adjusted for indirect selection were developed using the delta method. These two estimates were compared to one another, to a bootstrap estimate, and to an empirical standard deviation of a series of adjusted correlations generated in a simulation study. The simulation study manipulated factors defined by sample size, selection ratio, underlying population distribution, and population correlations in situations that satisfied the basic assumptions of the Pearson-Lawley procedures. The results indicated that the large-sample and bootstrap estimates were very similar when the sample size was 500 and, in most cases, when the sample size was 100. The simpler of the two large-sample approximations appears to offer a reasonable estimate of the standard error of an adjusted correlation without resorting to complex, computer-intensive approaches. Index terms: correlation coefficients, missing data, Pearson-Lawley corrections, selection, standard errors of correlations, validity studies.
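
For concreteness, a minimal Python sketch of the Pearson-Lawley three-variable correction and a bootstrap standard error is given below. The function names are illustrative, and the article's delta-method estimators are not reproduced; only the bootstrap comparison point is shown:

    import numpy as np

    def pearson_lawley_3var(r_xy, r_xz, r_yz, var_z_pop, var_z_sel):
        """Pearson-Lawley three-variable correction for incidental
        (indirect) selection on Z: adjusts the X-Y correlation observed
        in the selected sample toward its unrestricted-population value.
        var_z_pop / var_z_sel are the unrestricted and selected-sample
        variances of the selection variable Z."""
        k = var_z_pop / var_z_sel - 1.0
        num = r_xy + r_xz * r_yz * k
        den = np.sqrt((1.0 + r_xz**2 * k) * (1.0 + r_yz**2 * k))
        return num / den

    def bootstrap_se(x, y, z, var_z_pop, n_boot=2000, seed=0):
        """Bootstrap standard error of the adjusted correlation
        (a sketch of the bootstrap comparison, not the article's code)."""
        rng = np.random.default_rng(seed)
        n = len(x)
        adj = np.empty(n_boot)
        for b in range(n_boot):
            idx = rng.integers(0, n, n)
            xb, yb, zb = x[idx], y[idx], z[idx]
            r = np.corrcoef(np.vstack([xb, yb, zb]))
            adj[b] = pearson_lawley_3var(r[0, 1], r[0, 2], r[1, 2],
                                         var_z_pop, zb.var(ddof=1))
        return adj.std(ddof=1)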


Educational and Psychological Measurement | 2004

A Comparison of Parametric and Nonparametric Approaches to Item Analysis for Multiple-Choice Tests

Pui-Wa Lei; Stephen B. Dunbar; Michael J. Kolen

This study compares the parametric multiple-choice model and the nonparametric kernel smoothing approach to estimating option characteristic functions (OCCs) using an empirical criterion: the stability of curve estimates over occasions, which represents random error. The potential utility of graphical OCCs in item analysis was illustrated with selected items. The effect of increasing the smoothing parameter on the nonparametric model and the effect of small samples on both approaches were investigated. Differences between estimated curve values were evaluated for between-model within-occasion, within-model between-occasion, and between-model between-occasion comparisons. The between-model differences were minor in relation to the within-model stabilities, and the incremental difference attributable to model was smaller than that attributable to occasion. Either model leads to the same choices in item analysis.
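
The nonparametric side of the comparison is simple to sketch. Assuming ability estimates for examinees and a 0/1 indicator of whether each chose a given option, a Nadaraya-Watson kernel regression (one common kernel smoothing estimator, not necessarily the exact one used in the study) traces an OCC, with a larger bandwidth giving a smoother curve:

    import numpy as np

    def kernel_occ(theta, chose_option, grid, bandwidth=0.3):
        """Nadaraya-Watson kernel estimate of an option characteristic
        curve: the probability of choosing a given option as a smooth
        function of ability.
        theta: examinee ability estimates
        chose_option: 0/1 indicator for the option of interest
        grid: ability values at which to evaluate the curve
        bandwidth: the smoothing parameter; increasing it smooths more."""
        occ = np.empty(len(grid))
        for i, t in enumerate(grid):
            w = np.exp(-0.5 * ((theta - t) / bandwidth) ** 2)  # Gaussian kernel
            occ[i] = np.sum(w * chose_option) / np.sum(w)
        return occ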


Educational and Psychological Measurement | 1985

Hierarchical Factoring in a Standardized Achievement Battery

David J. Martin; Stephen B. Dunbar

This study was concerned with the factorial validity of the Iowa Tests of Basic Skills (ITBS). Previous research identified a strong general factor for this battery, which was taken as evidence of redundancy among the subtests. Hierarchical factor analysis was done with a subset of the standardization data to explore the presence of second-order group factors. The results supported the construct validity of the Language and Mathematics subscales, though a degree of factorial complexity was found in both. Verbal and Visual Information group factors were also identified. Extension of the ITBS general and group factors to subtests of the Cognitive Abilities Test supported the interpretations made of the various group factors.
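
One standard device for this kind of hierarchical factoring is the Schmid-Leiman transformation, which re-expresses first- and second-order loadings as loadings on an orthogonal general factor plus residualized group factors. The sketch below shows the transformation for a single second-order factor (the article's exact procedure is not specified here):

    import numpy as np

    def schmid_leiman(first_order, second_order):
        """Schmid-Leiman transformation of a higher-order factor solution.
        first_order:  (n_subtests x n_groups) first-order loading matrix
        second_order: (n_groups,) loadings of the group factors on g
        Returns subtest loadings on the general factor and the
        residualized group-factor loadings."""
        g_loadings = first_order @ second_order        # direct loadings on g
        residual = np.sqrt(1.0 - second_order**2)      # residual part of each group factor
        group_loadings = first_order * residual        # scale each column
        return g_loadings, group_loadings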


Journal of Educational and Behavioral Statistics | 1986

Simultaneous Estimation of Regression Functions for Marine Corps Technical Training Specialties

Stephen B. Dunbar; Shin-ichi Mayekawa; Melvin R. Novick

This paper considers the application of Bayesian techniques for simultaneous estimation to the specification of regression weights for selection tests used in various technical training courses in the Marine Corps. Results of a method for m-group regression developed by Molenaar and Lewis (1979) suggest that common weights for training courses belonging to certain general categories are justified in many cases. However, such commonality of regression weights does not appear to hold for all courses in these categories—weights for some training courses remain distinct even after the application of the simultaneous estimation procedure. Thus, a hypothesis of complete generalization of the predictor-criterion relationships across training courses in a given category would only be retained for a carefully selected subset of courses and not for all groups included in the analysis.
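
The pooling logic can be illustrated with a simplified empirical-Bayes stand-in for the Molenaar-Lewis m-group procedure (an illustration, not the authors' implementation; prior_strength is a hypothetical tuning parameter): each course's ordinary least squares weights are shrunk toward a precision-weighted common mean, with small courses shrunk more.

    import numpy as np

    def shrink_group_weights(groups, prior_strength=20.0):
        """groups: list of (X, y) pairs, one per training course.
        Returns per-course regression weights shrunk toward a common
        weighted mean; courses with more data retain more of their own
        OLS estimate."""
        ols = []
        for X, y in groups:
            X1 = np.column_stack([np.ones(len(y)), X])   # add intercept
            b, *_ = np.linalg.lstsq(X1, y, rcond=None)
            ols.append((b, len(y)))
        common = np.average([b for b, _ in ols], axis=0,
                            weights=[n for _, n in ols])
        return [(n / (n + prior_strength)) * b
                + (prior_strength / (n + prior_strength)) * common
                for b, n in ols]

Courses whose shrunken weights stay far from the common mean correspond to the paper's finding that some training courses retain distinct predictor-criterion relationships.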


Educational Assessment | 2018

Establishing empirical links between high school assessments and college outcomes: An essential requirement for college readiness interpretations

Anthony D. Fina; Stephen B. Dunbar; Catherine J. Welch

As states evaluate whether they should continue with their current assessment program or adopt next-generation college readiness assessments, it is important to ascertain the degree to which current high school assessments can be used for college readiness interpretations. In this study, we examined the ability of a state assessment to serve as an indicator of college readiness. Empirical evidence is presented summarizing relationships between performance on the standards-based high school assessment and performance in college. Benchmarks were set on the Reading, Mathematics, and Science tests by linking assessment scores directly to grades in college courses. The accuracy of the benchmarks was similar to that of a traditional college admission test. Students who met the college readiness benchmarks earned higher grades in general education college courses and had higher first-year college grade point averages. Implications for states and other stakeholders are discussed.
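
A common way to link test scores to college grades for benchmark setting (for example, a logistic regression inverted at a target success probability, as in well-known college-readiness benchmark methodologies; whether this study used exactly this procedure is not stated in the abstract) can be sketched as follows:

    import numpy as np
    from sklearn.linear_model import LogisticRegression

    def readiness_benchmark(test_scores, course_success, target_prob=0.5):
        """Illustrative benchmark setting: fit a logistic regression of
        course success (e.g., a 0/1 indicator of earning a B or higher)
        on the high school test score, then solve for the score at which
        the predicted probability equals target_prob."""
        X = np.asarray(test_scores).reshape(-1, 1)
        model = LogisticRegression(C=1e6).fit(X, course_success)  # large C: effectively unregularized
        b0 = model.intercept_[0]
        b1 = model.coef_[0, 0]
        # solve b0 + b1 * score = logit(target_prob) for score
        return (np.log(target_prob / (1.0 - target_prob)) - b0) / b1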


ETS Research Report Series | 1989

Standard Errors of Correlations Adjusted for Incidental Selection

Nancy L. Allen; Stephen B. Dunbar

This investigation examined the standard error of correlations that have been adjusted for selection with commonly used formulas developed by Pearson (1903). Specifically, it had three major purposes: (1) to provide large-sample approximations of the standard error of a correlation adjusted using the Pearson-Lawley three-variable correction formula; (2) to examine the standard errors of adjusted correlations under specific conditions; and (3) to compare various estimates of the standard errors under direct and indirect selection. Two theory-based large-sample estimates of the standard error of a correlation adjusted for indirect selection were developed using the delta method. These two estimates were compared to one another, to a bootstrap estimate, and to an empirical standard deviation of a series of adjusted correlations generated in a simulation study. The simulation study manipulated factors defined by (1) sample size, (2) selection ratio, (3) underlying population distribution, and (4) population correlations, in situations that satisfied the basic assumptions of the Pearson-Lawley procedures. The results indicated that the large-sample and bootstrap estimates were very similar when the sample size was 500 and, in most cases, when the sample size was 100. On the basis of the results of the simulation study, the simpler of the two large-sample approximations appears to offer a reasonable estimate of the standard error of an adjusted correlation without resorting to complex, computer-intensive approaches.


Journal of Child Psychology and Psychiatry | 2005

Pathways to conscience: early mother–child mutually responsive orientation and children's moral emotion, conduct, and cognition

Grazyna Kochanska; David R. Forman; Nazan Aksan; Stephen B. Dunbar

Collaboration


Dive into Stephen B. Dunbar's collaborations.

Top Co-Authors

Robert L. Linn
University of Colorado Boulder

Eva L. Baker
University of California

Delwyn L. Harnisch
University of Nebraska–Lincoln