Publications


Featured research published by Alina A. von Davier.


Journal of Educational and Behavioral Statistics | 2005

A Unified Approach to Linear Equating for the Nonequivalent Groups Design

Alina A. von Davier; Nan Kong

This article describes a new, unified framework for linear equating in a non-equivalent groups anchor test (NEAT) design. The authors focus on three methods for linear equating in the NEAT design—Tucker, Levine observed-score, and chain—and develop a common parameterization that shows that each particular equating method is a special case of the linear equating function in the NEAT design. A new concept, the method function, is used to distinguish among the linear equating functions, in general, and among the three equating methods, in particular. This approach leads to a general formula for the standard error of equating for all linear equating functions in the NEAT design. A new tool, the standard error of equating difference, is presented to investigate if the observed difference in the equating functions is statistically significant.
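
For readers less familiar with the equating notation, the sketch below gives the generic observed-score linear linking function that the three methods share; the synthetic-population weight w and the moment symbols follow standard equating notation rather than the article's exact parameterization.

In the NEAT design, the new form X is taken by population P, the old form Y by population Q, and both groups take the anchor test V. Linking is defined on a synthetic target population T = wP + (1 - w)Q, and every linear equating function can be written as

\[
\mathrm{Lin}_{YX}(x) \;=\; \mu_{Y,T} + \frac{\sigma_{Y,T}}{\sigma_{X,T}}\,\bigl(x - \mu_{X,T}\bigr).
\]

The Tucker, Levine observed-score, and chained methods differ only in how the unobservable target-population moments \(\mu_{X,T}\), \(\sigma_{X,T}\), \(\mu_{Y,T}\), and \(\sigma_{Y,T}\) are estimated from the observed moments of X, Y, and V; that mapping from observed quantities to the parameters of the linear function is, roughly, what the article formalizes as the method function.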


Journal of Educational and Behavioral Statistics | 2008

New Results on the Linear Equating Methods for the Non-Equivalent-Groups Design

Alina A. von Davier

The two most common observed-score equating functions are the linear and equipercentile functions. These are often seen as different methods, but von Davier, Holland, and Thayer showed that any equipercentile equating function can be decomposed into linear and nonlinear parts. They emphasized the dominant role of the linear part of the nonlinear equating function and gave conditions under which the equipercentile methods in the non-equivalent-groups anchor test (NEAT) design give identical results. Consequently, this article focuses on linear equating methods in a NEAT design—the Tucker, chained, and Levine observed-score functions—and describes the theoretical conditions under which these methods produce the same equating function. Constructed examples illustrate the theoretical results.


Computers in Human Behavior | 2017

Interdisciplinary research agenda in support of assessment of collaborative problem solving: lessons learned from developing a Collaborative Science Assessment Prototype

Alina A. von Davier; Jiangang Hao; Lei Liu; Patrick Kyllonen

Evidence from labor-market economics and predictive validity studies in psychology suggests that collaborative problem solving (CPS) is an increasingly important skill for both academic and career success in the 21st century. While there is general agreement that CPS is an important skill, there is less agreement on how to build an assessment to measure it, especially at scale and as a standardized test. Developing the type of CPS assessment envisioned in this work will require interdisciplinary synergy involving learning science, data science, psychometrics, and software engineering. In this conceptual paper, we present our identification and novel instantiation of five interdisciplinary research strands supporting the development of a CPS assessment. We discuss how these research strands can comprehensively address some of the shortcomings of existing CPS assessments, such as collecting and managing the data from the process of collaboration in structured log files, or considering a statistical definition of collaboration in the design of the collaborative tasks. We describe the Collaborative Science Assessment Prototype developed at Educational Testing Service (ETS) under the proposed interdisciplinary research agenda to illustrate how these research strands can be operationalized.
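
As one concrete illustration of what "structured log files" for collaboration process data might look like, here is a small, hypothetical Python sketch of an event record and a per-team grouping step; the field names, event types, and helper function are assumptions made for illustration and are not drawn from the CSAP system itself.

# Hypothetical sketch of a structured log record for collaboration process data.
# Field names and event types are illustrative assumptions, not the CSAP schema.
from dataclasses import dataclass
from collections import defaultdict
from typing import Any

@dataclass
class LogEvent:
    timestamp: float         # seconds since the start of the session
    team_id: str             # identifies the collaborating dyad or team
    actor_id: str            # which team member (or system agent) produced the event
    event_type: str          # e.g. "chat_message", "item_response", "simulation_action"
    payload: dict[str, Any]  # event-specific details (text, item id, score, ...)

def group_by_team(events: list[LogEvent]) -> dict[str, list[LogEvent]]:
    """Group events into per-team, time-ordered streams for later feature extraction."""
    teams: dict[str, list[LogEvent]] = defaultdict(list)
    for e in sorted(events, key=lambda ev: ev.timestamp):
        teams[e.team_id].append(e)
    return dict(teams)

# Example usage with two toy events from the same team.
events = [
    LogEvent(3.2, "team_01", "student_B", "chat_message", {"text": "I think it's option C"}),
    LogEvent(1.5, "team_01", "student_A", "item_response", {"item": "q1", "correct": True}),
]
print(group_by_team(events))

Keeping every event in a single time-ordered stream per team is what later makes it possible to derive both outcome scores and collaboration-process features from the same file.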


Archive | 2017

Initial Steps Towards a Standardized Assessment for Collaborative Problem Solving (CPS): Practical Challenges and Strategies

Jiangang Hao; Lei Liu; Alina A. von Davier; Patrick Kyllonen

Collaborative problem solving (CPS) is an important 21st-century skill (Griffin, McGaw, and Care, 2012). However, assessing CPS, particularly in a standardized way, is challenging. The type of collaboration, the size of the teams, and the assessment domain all need to be considered carefully when developing a CPS assessment. In this chapter, we outline some practical challenges for developing a large-scale, standardized assessment of CPS and present strategies to address those challenges. We illustrate these strategies with the Collaborative Science Assessment Prototype (CSAP) developed at Educational Testing Service.


artificial intelligence in education | 2018

Gamified Assessment of Collaborative Skills with Chatbots

Kristin Stoeffler; Yigal Rosen; Maria Bolsinova; Alina A. von Davier

Game-based assessments and learning environments create unique opportunities for learners to demonstrate their proficiency with cognitive skills and behaviors in increasingly authentic environments. Effective task designs, and the effective alignment of tasks with constructs, are also improving our ability to provide learners with insights about their proficiency with these skills. Sharing these insights with others in the industry working toward the same goal contributes to the rising tide that lifts all boats. In this paper we present insights from our work to develop and measure collaborative problem solving (CPS) skills using the game-based assessment “Circuit Runner.” Our educational game design allows us to incorporate item response data, telemetry data, and stealth-telemetry data to provide a more authentic measure of CPS skills. The study included 379 participants recruited on Amazon Mechanical Turk (MTurk), each of whom completed the “Circuit Runner” CPS assessment. The paper provides details on the design of educational games and scoring techniques and discusses findings from the pilot study.


artificial intelligence in education | 2018

Human-Agent Assessment: Interaction and Sub-skills Scoring for Collaborative Problem Solving

Pravin Chopade; Kristin Stoeffler; Saad M. Khan; Yigal Rosen; Spencer Swartz; Alina A. von Davier

Collaborative problem solving (CPS) is one of the 21st-century skills identified as a critical competency for education and workplace success. Students entering the workforce will be expected to show proficiency with both cognitive and social-emotional skills. This paper presents an approach to measuring features and sub-skills associated with CPS ability and provides a methodology for CPS-based performance assessment using an educational problem-solving video game. Our method incorporates K-means clustering to evaluate and analyze the feature space of the CPS evidence gathered from game log data. The results reveal distinct participant clusters at high, medium, and low CPS proficiency levels, which can help focus remediation efforts.
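
To make the clustering step concrete, the following minimal Python sketch groups log-derived CPS feature vectors into three clusters with k-means; the feature names, the random placeholder data, and the standardization step are illustrative assumptions rather than the paper's actual pipeline.

# Hypothetical sketch: cluster CPS feature vectors derived from game log data
# into three groups (e.g., low / medium / high proficiency). The features and
# data below are illustrative placeholders, not the paper's actual pipeline.
import numpy as np
from sklearn.preprocessing import StandardScaler
from sklearn.cluster import KMeans

# Each row is one participant; columns are example log-derived features.
feature_names = ["n_chat_turns", "n_correct_actions", "mean_response_time"]
rng = np.random.default_rng(0)
X = rng.normal(size=(379, len(feature_names)))  # placeholder for real feature values

# Standardize so no single feature dominates the Euclidean distances used by k-means.
X_scaled = StandardScaler().fit_transform(X)

# Fit k-means with k = 3, matching the high/medium/low grouping reported in the paper.
kmeans = KMeans(n_clusters=3, n_init=10, random_state=0)
labels = kmeans.fit_predict(X_scaled)

# Inspect cluster sizes and centroids (in standardized units).
for k in range(3):
    print(f"cluster {k}: {np.sum(labels == k)} participants")
print("centroids:\n", kmeans.cluster_centers_)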


Journal of Educational and Behavioral Statistics | 2018

Process Data in NAEP: Past, Present, and Future

Yoav Bergner; Alina A. von Davier

This article reviews how National Assessment of Educational Progress (NAEP) has come to collect and analyze data about cognitive and behavioral processes (process data) in the transition to digital assessment technologies over the past two decades. An ordered five-level structure is proposed for describing the uses of process data. The levels in this hierarchy range from ignoring the processes (i.e., counting only the outcomes), to incorporating process data as auxiliary or essential in addition to the outcome, to modeling the process as the outcome itself, either holistically in a rubric score or in a measurement model that accounts for sequential dependencies. Historical examples of these different uses are described as well as recent results using nontraditional analytical approaches. In the final section, speculative future directions incorporating state-of-the-art technologies and analysis methods are described with an eye toward hard-to-measure constructs such as higher order problem-solving and collaboration.


Archive | 2017

Introduction: Innovative Assessment of Collaboration

Patrick Kyllonen; Mengxiao Zhu; Alina A. von Davier

In this introductory chapter we provide the context for this edited volume, describe recent research interest in developing collaborative assessments around the world, and synthesize the major research results from the literature across different fields. The purpose of this edited volume is to bring together researchers from diverse disciplines—educational psychology, organizational psychology, learning sciences, assessment design, communications, human-computer interaction, computer science, engineering and applied science, and psychometrics—who share a research interest in examining learners and workers engaged in collaborative activity. The chapter concludes with an emphasis on how each chapter contributes to the research agenda around the measurement research questions, from how to define the constructs to how to model the data from collaborative interactions.


Archive | 2016

A Tough Nut to Crack: Measuring Collaborative Problem Solving

Lei Liu; Jiangang Hao; Alina A. von Davier; Patrick Kyllonen; Juan-Diego Zapata-Rivera


educational data mining | 2014

Visualization and Confirmatory Clustering of Sequence Data from a Simulation-Based Assessment Task

Yoav Bergner; Zhan Shu; Alina A. von Davier

Collaboration


Dive into Alina A. von Davier's collaborations.

Top Co-Authors

Lei Liu (Princeton University)
Samuel Greiff (University of Luxembourg)
Nan Kong (Princeton University)
Stephen M. Fiore (University of Central Florida)