Irvin R. Katz
Princeton University
Publications
Featured research published by Irvin R. Katz.
Journal of Educational and Behavioral Statistics | 1998
Irvin R. Katz; Michael E. Martinez; Kathleen M. Sheehan; Kikumi K. Tatsuoka
This paper presents a technique for applying the Rule Space methodology of cognitive diagnosis to assessment in a semantically-rich domain. Previous applications of Rule Space—all in simple, well-structured domains—based diagnosis on examinees’ ability to perform individual problem-solving steps. In a complex domain, however, test items might be so different from one another that the problem-solving steps used for one item are unrelated to the steps used to solve another item. The technique presented herein extends Rule Space’s applicability by basing diagnosis on item characteristics that are more abstract than individual problem-solving steps. A cognitive model of problem-solving motivates selection of characteristics in order to maintain the connection between an examinee’s problem-solving skill and his/her diagnosis. To test the extended Rule Space procedure, data were collected from 122 architects of three ability levels (students, architecture interns, and professional architects) on a 22-item test of architectural knowledge. Rule Space provided diagnostic reporting for between 40 and 90% of examinees. The findings support the effectiveness of Rule Space in a complex domain.
Business Communication Quarterly | 2010
Irvin R. Katz; Catherine Haras; Carol Blaszczynski
Although the business community increasingly recognizes information literacy as central to its work, there remains the critical problem of measurement: How should employers assess the information literacy of their current or potential workers? In this article, we use a commercially available assessment to investigate the relationship between information literacy and the key business communication skill of business writing. Information literacy scores obtained prior to instruction predicted performance in an undergraduate, upper-division business writing course. Similar results emerged regardless of whether participants considered English their best language.
Integrating Technology into Computer Science Education | 1996
Judith D. Wilson; Robert M. Aiken; Irvin R. Katz
We survey how several algorithm animation systems are used in Computer Science instruction. Reported student reactions to the use of these systems are favorable, but little information is available on their effectiveness for learning. We examine several formal studies that have implications for how animation systems can most effectively be used to teach algorithms.
Assessment in Education: Principles, Policy & Practice | 2014
Juan Diego Zapata-Rivera; Irvin R. Katz
Score reports have one or more intended audiences: the people who use the reports to make decisions about test takers, including teachers, administrators, parents and test takers. Attention to audience when designing a score report supports assessment validity by increasing the likelihood that score users will interpret and use assessment results appropriately. Although most design guidelines focus on making score reports understandable to people who are not testing professionals, audiences should be defined by more than just their lack of statistical knowledge. This paper introduces an approach to identifying important audience characteristics for designing computer-based, interactive score reports. Through three examples, we demonstrate how an audience analysis suggests a design pattern, which guides the overall design of a report, as well as design details, such as data representations and scaffolding. We conclude with a research agenda for furthering the use of audience analysis in the design of interactive score reports.
Human Factors in Computing Systems | 1995
Judith D. Wilson; Irvin R. Katz; Giorgio P. Ingargiola; Robert M. Aiken; Nathan Hoskin
Our goal in this pilot study is to explore students’ behavior as they learn about two search algorithms, observing the role of algorithm animations. We find that alternative animations of the same algorithm may provide different information and facilitate different types of reasoning.
Applied Measurement in Education | 2015
Priya Kannan; Adrienne Sgammato; Richard J. Tannenbaum; Irvin R. Katz
The Angoff method requires experts to view every item on the test and make a probability judgment. This can be time consuming when there are large numbers of items on the test. In this study, a G-theory framework was used to determine if a subset of items can be used to make generalizable cut-score recommendations. Angoff ratings (i.e., probability judgments) from previously conducted standard setting studies were used first in a re-sampling study, followed by D-studies. For the re-sampling study, proportionally stratified subsets of items were extracted under various sampling and test-length conditions. The mean cut score, variance components, expected standard error (SE) around the mean cut score, and root-mean-squared deviation (RMSD) across 1,000 replications were estimated at each study condition. The SE and the RMSD decreased as the number of items increased, but this reduction tapered off after approximately 45 items. Subsequently, D-studies were performed on the same datasets. The expected SE was computed at various test lengths. Results from both studies are consistent with previous research indicating that 40 to 50 items are sufficient to make generalizable cut score recommendations.
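The core of the re-sampling study described above can be sketched in a few lines. This is a minimal illustration with synthetic Angoff ratings and simple random (not proportionally stratified) item sampling; all numbers, names, and the rescaling choice are assumptions for demonstration, not the authors' data or exact procedure:

```python
import random
import statistics

random.seed(42)

N_ITEMS, N_RATERS = 120, 12

# Synthetic Angoff ratings: each rater's judged probability that a
# minimally qualified candidate answers each item correctly.
ratings = [[min(max(random.gauss(0.6, 0.15), 0.0), 1.0)
            for _ in range(N_ITEMS)] for _ in range(N_RATERS)]

def cut_score(item_ids):
    """Mean over raters of the average rating on the sampled items,
    rescaled to the full test length."""
    item_ids = list(item_ids)
    per_rater = [sum(r[i] for i in item_ids) / len(item_ids) * N_ITEMS
                 for r in ratings]
    return statistics.mean(per_rater)

def resample_se(n_items, reps=1000):
    """Empirical SE of the cut score across random item subsets."""
    cuts = [cut_score(random.sample(range(N_ITEMS), n_items))
            for _ in range(reps)]
    return statistics.pstdev(cuts)

# SE shrinks as the subset grows, and the gain tapers off --
# the qualitative pattern the study reports around 45 items.
for n in (15, 30, 45, 60):
    print(f"{n:3d} items: SE = {resample_se(n):.2f}")
```

The rescaling inside `cut_score` keeps cut scores from different subset sizes on the full-test metric so their variability is comparable across test-length conditions.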
Advances in Human Factors/Ergonomics | 1995
R.M. Kaplan; Irvin R. Katz
This chapter describes two examples of computer-based constructed-response questions that represent real-world tasks and must be automatically scored. The design specifications are described for each question, along with their corresponding interfaces. Particular attention is given to the iterative evolution of each interface, as well as the design rationale behind the initial and subsequent designs. The design rationale focuses on compromises made, such as trade-offs between real-world look and feel and the constraints necessary to allow automatic scoring. The chapter describes a question created to assess certain aspects of architectural skill, called the block diagram task. In the constructed-response question, a test taker is presented with a one- to three-paragraph passage. To make automatic scoring of responses possible, each revision must be individually tracked and attached to its related error in the passage.
Journal of Educational Measurement | 2000
Irvin R. Katz; Randy Elliot Bennett; Aliza E. Berger
Archive | 2004
Irvin R. Katz; David M. Williamson; H. L. Nadelman; Irving Kirsch; Russell G. Almond; P. L. Cooper; M. L. Redman; Diego Zapata-Rivera
Educational Assessment | 1995
Michael E. Martinez; Irvin R. Katz