
Publication


Featured researches published by Kathleen M. Sheehan.


Applied Psychological Measurement | 1990

Using Bayesian Decision Theory to Design a Computerized Mastery Test

Charles Lewis; Kathleen M. Sheehan

A theoretical framework for mastery testing based on item response theory and Bayesian decision theory is described. The idea of sequential testing is developed, with the goal of providing shorter tests for individuals who have clearly mastered (or clearly not mastered) a given subject and longer tests for those individuals for whom the mastery decision is not as clear-cut. In a simulated application of the approach to a professional certification examination, it is shown that average test lengths can be reduced by half without sacrificing classification accuracy.
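
The sketch below illustrates the flavor of such a sequential rule: after each testlet a posterior probability of mastery is updated, and testing stops once one decision's expected loss falls below the cost of administering another testlet. The per-item success probabilities, losses, testlet length, and the heuristic stopping rule are all invented for illustration; they are not taken from the paper, which derives the full Bayesian decision rule.

```python
# Minimal sketch of a sequential mastery decision with two latent classes
# (master / nonmaster). All numbers below (success probabilities, losses,
# testlet length) are invented for illustration only.
import numpy as np

P_CORRECT = {"master": 0.80, "nonmaster": 0.55}   # assumed per-item success rates
LOSS_FALSE_PASS, LOSS_FALSE_FAIL, COST_PER_TESTLET = 10.0, 10.0, 1.0

def posterior_master(num_correct, num_items, prior=0.5):
    """Posterior probability of mastery given a cumulative number-right score."""
    like_m = P_CORRECT["master"] ** num_correct * (1 - P_CORRECT["master"]) ** (num_items - num_correct)
    like_n = P_CORRECT["nonmaster"] ** num_correct * (1 - P_CORRECT["nonmaster"]) ** (num_items - num_correct)
    return prior * like_m / (prior * like_m + (1 - prior) * like_n)

def sequential_decision(responses_per_testlet):
    """Administer testlets one at a time; stop early when a decision is cheap enough."""
    total_correct = total_items = used = 0
    for responses in responses_per_testlet:
        used += 1
        total_correct += sum(responses)
        total_items += len(responses)
        p = posterior_master(total_correct, total_items)
        risk_pass, risk_fail = (1 - p) * LOSS_FALSE_PASS, p * LOSS_FALSE_FAIL
        # stop once one decision's expected loss is below the cost of another
        # testlet (a heuristic stand-in for the paper's backward-induction rule)
        if min(risk_pass, risk_fail) <= COST_PER_TESTLET:
            break
    return ("pass" if risk_pass < risk_fail else "fail"), used

# Example: a clear master should be classified after only one or two testlets.
rng = np.random.default_rng(0)
testlets = [rng.random(10) < P_CORRECT["master"] for _ in range(4)]
print(sequential_decision(testlets))
```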


Applied Measurement in Education | 2002

Items by Design: The Impact of Systematic Feature Variation on Item Statistical Characteristics

Mary K. Enright; Mary Morley; Kathleen M. Sheehan

In this study we investigated the impact of systematic item feature variation on item statistical characteristics and the degree to which such information could be used as collateral information to supplement examinee performance data and reduce pretest sample size. Two families of word problem variants for the quantitative section of the Graduate Record Examinations General Test were generated by systematically manipulating item features. For rate problems, the item design features affected item difficulty (adjusted R2 = .90), item discrimination (adjusted R2 = .50), and guessing (adjusted R2 = .41). For probability problems the item design features affected difficulty (adjusted R2 = .61) but not discrimination or guessing. The results demonstrate the enormous potential of systematically creating item variants. The issue of how to develop a knowledge base that would support the systematic generation of a wider variety of quantitative problems is discussed.
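
A minimal sketch of the kind of analysis reported here: item difficulty is regressed on coded design features and the adjusted R² is computed. The feature names, coefficients, and data below are fabricated; in the study the predictors were manipulated properties of GRE word problems.

```python
# Sketch: regress item difficulty on coded design features and report
# adjusted R^2. All features and values here are invented for illustration.
import numpy as np

rng = np.random.default_rng(1)
n_items = 40
# hypothetical design features, e.g. number of solution steps, whether a
# unit conversion is required, whether distractors share a common error
features = rng.integers(0, 3, size=(n_items, 3)).astype(float)
true_betas = np.array([0.6, 0.4, 0.2])
difficulty = 0.5 + features @ true_betas + rng.normal(0, 0.3, n_items)

# ordinary least squares fit
X = np.column_stack([np.ones(n_items), features])
beta_hat, *_ = np.linalg.lstsq(X, difficulty, rcond=None)

# adjusted R^2, the statistic quoted in the abstract
resid = difficulty - X @ beta_hat
ss_res = resid @ resid
ss_tot = np.sum((difficulty - difficulty.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
adj_r2 = 1 - (1 - r2) * (n_items - 1) / (n_items - X.shape[1])
print(f"adjusted R^2 = {adj_r2:.2f}")
```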


Journal of Educational and Behavioral Statistics | 1989

Information Matrices in Latent-Variable Models.

Robert J. Mislevy; Kathleen M. Sheehan

The Fisher, or expected, information matrix for the parameters in a latent-variable model is bounded from above by the information that would be obtained if the values of the latent variables could also be observed. The difference between this upper bound and the information in the observed data is the “missing information.” This paper explicates the structure of the expected information matrix and related information matrices, and characterizes the degree to which missing information can be recovered by exploiting collateral variables for respondents. The results are illustrated in the context of item response theory models, and practical implications are discussed.
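
The bound can be written compactly using the standard missing-information decomposition (the notation below is ours, with β denoting the structural parameters and the latent variables playing the role of missing data):

```latex
% Observed-data information equals complete-data information minus the
% information lost because the latent variables are unobserved, so the
% observed-data matrix is bounded above (in the matrix ordering) by the
% complete-data matrix.
\[
  \mathcal{I}_{\mathrm{obs}}(\beta)
  = \mathcal{I}_{\mathrm{complete}}(\beta) - \mathcal{I}_{\mathrm{missing}}(\beta),
  \qquad
  \mathcal{I}_{\mathrm{obs}}(\beta) \preceq \mathcal{I}_{\mathrm{complete}}(\beta).
\]
```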


Journal of Educational and Behavioral Statistics | 1998

Extending the Rule Space Methodology to a Semantically-Rich Domain: Diagnostic Assessment in Architecture.

Irvin R. Katz; Michael E. Martinez; Kathleen M. Sheehan; Kikumi K. Tatsuoka

This paper presents a technique for applying the Rule Space methodology of cognitive diagnosis to assessment in a semantically-rich domain. Previous applications of Rule Space—all in simple, well-structured domains—based diagnosis on examinees’ ability to perform individual problem-solving steps. In a complex domain, however, test items might be so different from one another that the problem-solving steps used for one item are unrelated to the steps used to solve another item. The technique presented herein extends Rule Space’s applicability by basing diagnosis on item characteristics that are more abstract than individual problem-solving steps. A cognitive model of problem-solving motivates selection of characteristics in order to maintain the connection between an examinee’s problem-solving skill and his/her diagnosis. To test the extended Rule Space procedure, data were collected from 122 architects of three ability levels (students, architecture interns, and professional architects) on a 22-item test of architectural knowledge. Rule Space provided diagnostic reporting for between 40 and 90% of examinees. The findings support the effectiveness of Rule Space in a complex domain.
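
A toy sketch of the classification idea behind this style of diagnosis: ideal response patterns are generated from a Q-matrix of item-by-attribute requirements, and an examinee is assigned the nearest knowledge state. The Q-matrix and nearest-pattern rule below are invented for illustration; the Rule Space procedure itself maps examinees into a low-dimensional (θ, ζ) space before classification.

```python
# Toy sketch of attribute-based diagnosis: build ideal response patterns
# from a hypothetical Q-matrix (items x attributes) and classify an
# observed response vector to the nearest knowledge state.
import itertools
import numpy as np

# Q[i, k] = 1 if item i requires attribute k (invented 6-item, 3-attribute test)
Q = np.array([
    [1, 0, 0],
    [0, 1, 0],
    [0, 0, 1],
    [1, 1, 0],
    [0, 1, 1],
    [1, 1, 1],
])

def ideal_pattern(state):
    """An examinee mastering exactly `state` answers an item correctly
    iff they possess every attribute the item requires."""
    return np.all(Q <= state, axis=1).astype(int)

# enumerate all knowledge states (subsets of attributes) and their ideal patterns
states = [np.array(s) for s in itertools.product([0, 1], repeat=Q.shape[1])]
ideals = np.array([ideal_pattern(s) for s in states])

def diagnose(responses):
    """Classify an observed 0/1 response vector to the nearest ideal pattern."""
    distances = np.sum(ideals != np.asarray(responses), axis=1)
    return states[int(np.argmin(distances))]

print(diagnose([1, 1, 0, 1, 0, 0]))  # -> [1 1 0]: mastery of the first two attributes
```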


Elementary School Journal | 2014

The TextEvaluator Tool

Kathleen M. Sheehan; Irene Kostin; Diane Napolitano; Michael Flor

This article describes TextEvaluator, a comprehensive text-analysis system designed to help teachers, textbook publishers, test developers, and literacy researchers select reading materials that are consistent with the text-complexity goals outlined in the Common Core State Standards. Three particular aspects of the TextEvaluator measurement approach are highlighted: (1) attending to relevant reader and task considerations, (2) expanding construct coverage beyond the two dimensions of text variation traditionally assessed by readability metrics, and (3) addressing two potential threats to tool validity: genre bias and blueprint bias. We argue that systems that are attentive to these particular measurement issues may be more effective at helping users achieve a key goal of the new Standards: ensuring that students are challenged to read texts at steadily increasing complexity levels as they progress through school, so that all students acquire the advanced reading skills needed for success in college and careers.
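
As a rough illustration of the general idea of combining several text features into a complexity score (emphatically not TextEvaluator's actual component scores, weights, or model), consider:

```python
# Illustrative sketch only: two crude text features combined with arbitrary
# weights. TextEvaluator's actual measurement model is far richer.
import re

def text_features(text):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    avg_sentence_len = len(words) / max(len(sentences), 1)            # syntactic proxy
    avg_word_len = sum(len(w) for w in words) / max(len(words), 1)    # vocabulary proxy
    return avg_sentence_len, avg_word_len

def toy_complexity(text, w_sent=2.0, w_word=10.0):
    """Weighted combination of the two feature scores; weights are arbitrary."""
    s, w = text_features(text)
    return w_sent * s + w_word * w

print(toy_complexity("The cat sat. The dog ran."))
print(toy_complexity("Notwithstanding considerable methodological heterogeneity, "
                     "the meta-analytic evidence converges on a robust association."))
```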


ETS Research Report Series | 1990

Computerized mastery testing with nonequivalent testlets

Kathleen M. Sheehan; Charles Lewis

A practical procedure for determining the effect of testlet nonequivalence on the operating characteristics of a testlet-based computerized mastery test (CMT) is introduced. The procedure involves estimating the CMT decision rule twice, once with testlets treated as equivalent and once with testlets treated as nonequivalent. In the equivalent testlet mode, the likelihood functions estimated for specific number right scores are assumed to be constant across testlets and a single set of cutscores is estimated for all testlets. In the nonequivalent testlet mode, the likelihood functions estimated for specific number-right scores are allowed to vary from one testlet to another and a different set of cutscores is estimated for each permutation of testlet presentation order. Small differences between the estimated operating characteristics of the equivalent testlet decision rule and the nonequivalent testlet decision rule indicate that the assumption of equivalent testlets was warranted. This procedure is illustrated with data from the Architect Registration Examination, a professional certification examination administered by the National Council of Architectural Registration Boards (NCARB).
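
A sketch of the comparison described above, with invented numbers: a number-right cutscore is computed separately for each testlet (nonequivalent mode) and once with testlets pooled (equivalent mode); similar cutscores would suggest the equivalence assumption is harmless. The binomial score model, mastery prior, and posterior threshold are assumptions for illustration, not the CMT procedure itself.

```python
# Sketch: compare per-testlet cutscores with a pooled cutscore. Item
# probabilities, prior, and threshold are invented for illustration.
import numpy as np
from math import comb

def nr_likelihood(p, n):
    """Binomial likelihood of each number-right score 0..n given success prob p."""
    return np.array([comb(n, k) * p**k * (1 - p)**(n - k) for k in range(n + 1)])

def cutscore(p_master, p_nonmaster, n, prior=0.5, threshold=0.8):
    """Smallest number-right score whose posterior P(master) exceeds the threshold."""
    lm, ln = nr_likelihood(p_master, n), nr_likelihood(p_nonmaster, n)
    post = prior * lm / (prior * lm + (1 - prior) * ln)
    passing = np.nonzero(post >= threshold)[0]
    return int(passing[0]) if passing.size else None

# hypothetical testlet-level success probabilities (nonequivalent testlets)
testlets = [(0.82, 0.58), (0.75, 0.50), (0.88, 0.62)]
per_testlet = [cutscore(pm, pn, n=10) for pm, pn in testlets]
pooled = cutscore(np.mean([t[0] for t in testlets]),
                  np.mean([t[1] for t in testlets]), n=10)
print("per-testlet cutscores:", per_testlet, "| pooled cutscore:", pooled)
```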


ETS Research Report Series | 1990

USING BAYESIAN DECISION THEORY TO DESIGN A COMPUTERIZED MASTERY TEST

Charles Lewis; Kathleen M. Sheehan

Mastery testing is used in educational and certification contexts to decide, on the basis of test performance, whether or not an individual has attained a specified level of knowledge, or mastery, of a given subject. A theoretical framework for mastery testing, based on Item Response Theory and Bayesian decision theory, is described in this paper. In this framework, the idea of sequential testing is developed, with the goal of providing shorter tests for individuals who have clearly mastered (or clearly not mastered) a given subject, and providing longer tests for those individuals for whom the mastery decision is not as clear-cut. In a simulated application of the approach to a professional certification examination, it is shown that average test lengths can be reduced by half without sacrificing classification accuracy.


north american chapter of the association for computational linguistics | 2015

Online Readability and Text Complexity Analysis with TextEvaluator

Diane Napolitano; Kathleen M. Sheehan; Robert Mundkowsky

We have developed the TextEvaluator system for providing text complexity and Common Core-aligned readability information. Detailed text complexity information is provided by eight component scores, presented in such a way as to aid in the user’s understanding of the overall readability metric, which is provided as a holistic score on a scale of 100 to 2000. The user may select a targeted US grade level and receive additional analysis relative to it. This and other capabilities are accessible via a feature-rich front-end, located at http://texteval-pilot.ets.org/TextEvaluator/.


ETS Research Report Series | 1999

ITEMS BY DESIGN: THE IMPACT OF SYSTEMATIC FEATURE VARIATION ON ITEM STATISTICAL CHARACTERISTICS

Mary K. Enright; Mary Morley; Kathleen M. Sheehan

This study investigated the impact of systematic item feature variation on item statistical characteristics and the degree to which such information could be used as collateral information to supplement examinee performance data and reduce pretest sample size. Two families of word problem variants for the quantitative section of the Graduate Record Examination (GRE®) General Test were generated by systematically manipulating item features. For rate problems, the item design features affected item difficulty (Adj. R2 = .90), item discrimination (Adj. R2 = .50), and guessing (Adj. R2 = .41). For probability problems the item design features affected difficulty (Adj. R2 = .61), but not discrimination or guessing. The results demonstrate the enormous potential of systematically creating item variants. However, questions of how best to manage variants in item pools and to implement statistical procedures that use collateral information must still be resolved.


Journal of Educational Measurement | 1992

Estimating Population Characteristics From Sparse Matrix Samples of Item Responses

Robert J. Mislevy; Albert E. Beaton; Bruce Kaplan; Kathleen M. Sheehan
