
Publication


Featured research published by Anna N. Rafferty.


Proceedings of the Workshop on Parsing German | 2008

Parsing Three German Treebanks: Lexicalized and Unlexicalized Baselines

Anna N. Rafferty; Christopher D. Manning

Previous work on German parsing has provided confusing and conflicting results concerning the difficulty of the task and whether techniques that are useful for English, such as lexicalization, are effective for German. This paper aims to provide some understanding and solid baseline numbers for the task. We examine the performance of three techniques on three treebanks (Negra, Tiger, and TüBa-D/Z): (i) markovization, (ii) lexicalization, and (iii) state splitting. We additionally explore parsing with the inclusion of grammatical function information. Explicit grammatical functions are important to German language understanding, but they are numerous, and naively incorporating them into a parser which assumes a small phrasal category inventory causes large performance reductions due to increasing sparsity.
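
As a concrete illustration of one of these techniques, below is a minimal sketch of vertical markovization (parent annotation), in which each phrasal category is split according to its parent category. The tuple-based tree representation, the toy German sentence, and the function name are illustrative assumptions, not the experimental setup used in the paper.

# Trees as nested tuples: (label, child, child, ...); a leaf is a plain word string.
def parent_annotate(tree, parent=None):
    """Vertical markovization (v = 2): split each category by its parent's label,
    e.g. NP becomes NP^S under an S node, so the grammar can condition on context."""
    if isinstance(tree, str):
        return tree
    label, *children = tree
    new_label = f"{label}^{parent}" if parent else label
    return (new_label, *(parent_annotate(child, label) for child in children))

# A toy bracketing of "der Hund sah die Katze" (illustrative, not taken from a treebank).
t = ("S", ("NP", ("ART", "der"), ("NN", "Hund")),
          ("VP", ("VVFIN", "sah"), ("NP", ("ART", "die"), ("NN", "Katze"))))
print(parent_annotate(t))
# ('S', ('NP^S', ('ART^NP', 'der'), ('NN^NP', 'Hund')), ('VP^S', ...))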


Artificial Intelligence in Education | 2011

Faster teaching by POMDP planning

Anna N. Rafferty; Emma Brunskill; Thomas L. Griffiths; Patrick Shafto

Both human and automated tutors must infer what a student knows and plan future actions to maximize learning. Though substantial research has been done on tracking and modeling student learning, there has been significantly less attention to planning teaching actions and how the assumed student model impacts the resulting plans. We frame the problem of optimally selecting teaching actions using a decision-theoretic approach and show how to formulate teaching as a partially observable Markov decision process (POMDP) planning problem. We consider three models of student learning and present approximate methods for finding optimal teaching actions given the large state and action spaces that arise in teaching. An experimental evaluation of the resulting policies on a simple concept-learning task shows that framing teacher action planning as a POMDP can accelerate learning relative to baseline performance.
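
To make the POMDP framing concrete, here is a minimal sketch of belief tracking over a latent student knowledge state. The two-state student model, the transition and observation probabilities, and the action names are hypothetical illustrations, not one of the three learner models evaluated in the paper.

import numpy as np

# Hypothetical two-state student model: the learner either knows the concept or not.
# T[action] gives P(next state | state); rows/columns are ordered (not_known, known).
T = {"show_example": np.array([[0.7, 0.3],
                               [0.0, 1.0]]),
     "quiz":         np.array([[1.0, 0.0],
                               [0.0, 1.0]])}

# P(observed response | state) after a quiz, allowing for guessing and slipping.
O = {"correct":   np.array([0.2, 0.9]),
     "incorrect": np.array([0.8, 0.1])}

def belief_update(belief, action, observation):
    """Standard POMDP belief update: predict through T, weight by O, renormalize."""
    predicted = belief @ T[action]
    weighted = predicted * O[observation]
    return weighted / weighted.sum()

b = np.array([0.9, 0.1])                   # prior: the student probably does not know it yet
b = belief_update(b, "quiz", "correct")    # a correct answer shifts belief toward "known"
print(b)                                   # approximately [0.67, 0.33]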


Science | 2014

Computer-Guided Inquiry to Improve Science Learning

Marcia C. Linn; Libby Gerard; Kihyun Ryoo; Kevin W. McElhaney; Ou Lydia Liu; Anna N. Rafferty

Automated guidance on essays and drawings can improve learning in precollege and college courses. Engaging students in inquiry practices is known to motivate them to persist in science, technology, engineering, and mathematics (STEM) fields and to create lifelong learners (1, 2). In inquiry, students initiate investigations, gather data, critique evidence, and make sophisticated drawings or write coherent essays to explain complex phenomena. Yet, most instruction relies on lectures that transmit information and multiple-choice tests that determine which details students recall. Massive Open Online Courses (MOOCs) mostly offer more of the same. But new cyber-learning tools may change all this, by taking advantage of new algorithms to automatically score student essays and drawings and offer personalized guidance.


Cognitive Science | 2016

Faster Teaching via POMDP Planning

Anna N. Rafferty; Emma Brunskill; Thomas L. Griffiths; Patrick Shafto

Human and automated tutors attempt to choose pedagogical activities that will maximize student learning, informed by their estimates of the student's current knowledge. There has been substantial research on tracking and modeling student learning, but significantly less attention to how to plan teaching actions and how the assumed student model impacts the resulting plans. We frame the problem of optimally selecting teaching actions using a decision-theoretic approach and show how to formulate teaching as a partially observable Markov decision process planning problem. This framework makes it possible to explore how different assumptions about student learning and behavior should affect the selection of teaching actions. We consider how to apply this framework to concept learning problems, and we present approximate methods for finding optimal teaching actions, given the large state and action spaces that arise in teaching. Through simulations and behavioral experiments, we explore the consequences of choosing teacher actions under different assumed student models. In two concept-learning tasks, we show that this technique can accelerate learning relative to baseline performance.
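
Complementing the belief-update sketch above, the following is one standard way to approximate the planning step itself: depth-limited expectimax search over beliefs with a toy two-action student model. The model, the reward (probability that the student knows the concept), and the search depth are illustrative assumptions rather than the approximate methods developed in the paper.

import numpy as np

# Toy student model (hypothetical numbers): states are ordered (not_known, known).
T = {"example": np.array([[0.7, 0.3], [0.0, 1.0]]),   # P(next state | state, action)
     "quiz":    np.array([[1.0, 0.0], [0.0, 1.0]])}
O = {"quiz": {"correct":   np.array([0.2, 0.9]),      # P(response | next state) after a quiz
              "incorrect": np.array([0.8, 0.1])}}

def update(belief, action, obs=None):
    b = belief @ T[action]
    if obs is not None:
        b = b * O[action][obs]
        b = b / b.sum()
    return b

def plan(belief, depth):
    """Depth-limited expectimax over beliefs: value an action by the expected probability
    that the student ends up in the 'known' state, averaging over possible responses."""
    if depth == 0:
        return belief[1], None
    best_value, best_action = -np.inf, None
    for action in T:
        if action in O:                                # action yields an observable response
            predicted = belief @ T[action]
            value = 0.0
            for obs, likelihood in O[action].items():
                p_obs = float(predicted @ likelihood)
                value += p_obs * plan(update(belief, action, obs), depth - 1)[0]
        else:                                          # unobservable action: just predict forward
            value = plan(update(belief, action), depth - 1)[0]
        if value > best_value:
            best_value, best_action = value, action
    return best_value, best_action

print(plan(np.array([0.9, 0.1]), depth=2))             # (expected P(known), best first action)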


Cognitive Science | 2015

Inferring Learners' Knowledge From Their Actions

Anna N. Rafferty; Michelle LaMar; Thomas L. Griffiths

Watching another person take actions to complete a goal and making inferences about that person's knowledge is a relatively natural task for people. This ability can be especially important in educational settings, where the inferences can be used for assessment, diagnosing misconceptions, and providing informative feedback. In this paper, we develop a general framework for automatically making such inferences based on observed actions; this framework is particularly relevant for inferring student knowledge in educational games and other interactive virtual environments. Our approach relies on modeling action planning: We formalize the problem as a Markov decision process in which one must choose what actions to take to complete a goal, where choices will be dependent on one's beliefs about how actions affect the environment. We use a variation of inverse reinforcement learning to infer these beliefs. Through two lab experiments, we show that this model can recover people's beliefs in a simple environment, with accuracy comparable to that of human observers. We then demonstrate that the model can be used to provide real-time feedback and to model data from an existing educational game.
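
Below is a minimal sketch of the Bayesian inverse-planning idea: each candidate belief about how actions work defines a different transition model, value iteration gives Q-values under each, observed actions are scored with a softmax (Boltzmann) choice rule, and Bayes' rule yields a posterior over the candidate beliefs. The two-belief toy environment, the softmax temperature, and the uniform prior are my own illustrative assumptions, not the tasks or inference details from the paper.

import numpy as np

def q_values(T, R, gamma=0.95, iters=200):
    """Tabular value iteration; T[a] is an S x S transition matrix, R an S-vector of rewards."""
    V = np.zeros(len(R))
    for _ in range(iters):
        Q = np.stack([R + gamma * T[a] @ V for a in range(len(T))])   # shape (A, S)
        V = Q.max(axis=0)
    return Q

def action_loglik(trajectory, Q, beta=2.0):
    """Log-likelihood of observed (state, action) pairs under a softmax policy over Q."""
    ll = 0.0
    for s, a in trajectory:
        logits = beta * Q[:, s]
        ll += logits[a] - np.log(np.exp(logits).sum())
    return ll

# Two candidate beliefs about a lever in a 3-state toy environment: under belief 0 the
# lever (action 1) moves the learner toward the rewarding goal state 2; under belief 1
# the lever does nothing. Action 0 always stays put.
R = np.array([0.0, 0.0, 1.0])
T_belief = [
    [np.eye(3), np.array([[0, 1, 0], [0, 0, 1], [0, 0, 1.0]])],   # belief 0
    [np.eye(3), np.eye(3)],                                       # belief 1
]

observed = [(0, 1), (1, 1)]   # the learner pulled the lever from state 0 and again from state 1
logliks = np.array([action_loglik(observed, q_values(T, R)) for T in T_belief])
posterior = np.exp(logliks - logliks.max())
posterior /= posterior.sum()
print(posterior)              # posterior over the two candidate beliefs (uniform prior)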


Artificial Intelligence in Education | 2015

Interpreting Freeform Equation Solving

Anna N. Rafferty; Thomas L. Griffiths

Learners’ step-by-step solutions can offer insight into their misunderstandings. Because of the difficulty of automatically interpreting freeform solutions, educational technologies often structure problem solving into particular patterns. Hypothesizing that structured interfaces may frustrate some learners, we conducted an experiment comparing two interfaces for solving equations: one requires users to enter steps in an efficient sequence and insists that each step be mathematically correct before the user can continue, and the other allows users to enter any steps they would like. We find that practicing equation solving in either interface was associated with improved scores on a multiple choice assessment, but that users who had the freedom to make mistakes were more satisfied with the interface. In order to make inferences from these more freeform data, we develop a Bayesian inverse planning algorithm for diagnosing algebra understanding that interprets individual equation solving steps and places no restrictions on the ordering or correctness of steps. This algorithm draws inferences and exhibits similar confidence based on data from either interface. Our work shows that inverse planning can interpret freeform problem solving, and suggests the need to further investigate how structured interfaces affect learners’ motivation and engagement.


Cognitive Science | 2014

Analyzing the Rate at Which Languages Lose the Influence of a Common Ancestor

Anna N. Rafferty; Thomas L. Griffiths; Daniel Klein

Analyzing the rate at which languages change can clarify whether similarities across languages are solely the result of cognitive biases or might be partially due to descent from a common ancestor. To demonstrate this approach, we use a simple model of language evolution to mathematically determine how long it should take for the distribution over languages to lose the influence of a common ancestor and converge to a form that is determined by constraints on language learning. We show that modeling language learning as Bayesian inference of n binary parameters or the ordering of n constraints results in convergence in a number of generations that is on the order of n log n. We relax some of the simplifying assumptions of this model to explore how different assumptions about language evolution affect predictions about the time to convergence; in general, convergence time increases as the model becomes more realistic. This allows us to characterize the assumptions about language learning (given the models that we consider) that are sufficient for convergence to have taken place on a timescale that is consistent with the origin of human languages. These results clearly identify the consequences of a set of simple models of language evolution and show how analysis of convergence rates provides a tool that can be used to explore questions about the relationship between accounts of language learning and the origins of similarities across languages.
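
To see where an n log n timescale can come from, here is a back-of-the-envelope, coupon-collector style sketch; the assumption that each parameter independently loses its ancestral value with probability c/n per generation is mine, for illustration, and is not the paper's derivation. Suppose each of the $n$ binary parameters independently loses its ancestral value (is effectively resampled from the learners' prior) with probability $c/n$ in each generation. After $t$ generations,
\[
  \Pr[\text{parameter } i \text{ still ancestral}] = \Bigl(1 - \tfrac{c}{n}\Bigr)^{t} \approx e^{-ct/n},
\]
so the expected number of parameters still carrying information about the common ancestor is
\[
  \mathbb{E}[\#\,\text{ancestral parameters}] \approx n\, e^{-ct/n},
\]
which falls below one only once $t \gtrsim \tfrac{n \log n}{c}$, i.e., after on the order of $n \log n$ generations, the familiar coupon-collector timescale.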


Artificial Intelligence in Education | 2018

Bandit Assignment for Educational Experiments: Benefits to Students Versus Statistical Power

Anna N. Rafferty; Huiji Ying; Joseph Jay Williams

Randomized experiments can lead to improvements in educational technologies, but often require many students to experience conditions associated with inferior learning outcomes. Multi-armed bandit (MAB) algorithms can address this by modifying experimental designs to direct more students to more helpful conditions. Using simulations and modeling data from previous educational experiments, we explore the statistical impact of using MABs for experiment design, focusing on the tradeoff between acquiring statistically reliable information and benefits to students. Results suggest that while MAB experiments can improve average benefits for students, at least twice as many participants are needed to attain power of 0.8 and false positives are twice as frequent as expected. Optimistic prior distributions in the MAB algorithm can mitigate the loss in power to some extent, without meaningfully reducing benefits or further increasing false positives.
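
As an illustration of the kind of MAB assignment studied here, below is a minimal Thompson-sampling sketch with Beta priors over each condition's success probability. The condition names, true success rates, and prior values are hypothetical; the optimistic Beta(4, 1) prior is simply one way to start every arm with a high estimated success rate so that under-sampled arms keep being explored.

import numpy as np

rng = np.random.default_rng(0)

def thompson_assign(successes, failures, prior=(1.0, 1.0)):
    """Assign the next student by sampling each condition's success rate from its Beta posterior."""
    a0, b0 = prior
    samples = rng.beta(a0 + successes, b0 + failures)
    return int(np.argmax(samples))

# Hypothetical experiment: two versions of a hint with unknown true success rates.
true_rates = np.array([0.55, 0.65])
successes = np.zeros(2)
failures = np.zeros(2)

for _student in range(500):
    arm = thompson_assign(successes, failures, prior=(4.0, 1.0))   # optimistic Beta(4, 1) prior
    outcome = rng.random() < true_rates[arm]                       # simulated student outcome
    successes[arm] += outcome
    failures[arm] += 1 - outcome

print("assignments per condition:", successes + failures)
print("posterior mean success rates:", (4.0 + successes) / (5.0 + successes + failures))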


Learning at Scale | 2017

MOOClets: A Framework for Dynamic Experimentation and Personalization

Joseph Jay Williams; Anna N. Rafferty; Samuel G. Maldonado; Andrew M. Ang; Dustin Tingley; Juho Kim

Randomized experiments in online educational environments are ubiquitous as a scientific method for investigating learning and motivation, but too rarely improve educational resources and produce practical benefits for learners. We suggest that software and tools for experimentally comparing resources are designed primarily through the lens of experiments as a scientific methodology, and therefore miss a tremendous opportunity for online experiments to serve as engines for dynamic improvement and personalization. We present the MOOClet requirements specification to guide the implementation of software or tools for experiments to ensure that whenever alternative versions of a resource can be experimentally compared (by randomly assigning versions), the resource can also be dynamically improved (by changing which versions are presented), and personalized (by presenting different versions to different people). The MOOClet specification was used to implement DEXPER, a proof-of-concept web service backend that enables dynamic experimentation and personalization of resources embedded in front-end educational platforms. We describe three use cases of MOOClets for dynamic experimentation and personalization of motivational emails, explanations, and problems.
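
To illustrate the spirit of the MOOClet specification, here is a toy sketch of a resource object whose versions are delivered through a swappable assignment policy, so the same resource can be randomized for an experiment, adaptively improved, or personalized. The class, method names, and policy are my own illustration, not the DEXPER API.

import random
from dataclasses import dataclass, field

@dataclass
class MOOCletSketch:
    """Toy illustration of the MOOClet idea: one resource, many versions, a swappable policy."""
    versions: list
    policy: callable                            # maps (learner_id, outcome data) -> version index
    outcomes: dict = field(default_factory=dict)

    def get_version(self, learner_id):
        return self.versions[self.policy(learner_id, self.outcomes)]

    def record_outcome(self, version, reward):
        self.outcomes.setdefault(version, []).append(reward)

# A uniform-random policy runs an experiment; an adaptive or per-learner policy could be
# swapped in later without changing how the resource is requested or delivered.
uniform_random = lambda learner_id, outcomes: random.randrange(2)

emails = MOOCletSketch(versions=["short reminder email", "detailed study-plan email"],
                       policy=uniform_random)
print(emails.get_version(learner_id="u42"))
emails.record_outcome("short reminder email", reward=1)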


Meeting of the Association for Computational Linguistics | 2008

Finding Contradictions in Text

Marie-Catherine de Marneffe; Anna N. Rafferty; Christopher D. Manning

Collaboration


Dive into Anna N. Rafferty's collaborations.

Top Co-Authors

Marcia C. Linn (University of California)
Emma Brunskill (Carnegie Mellon University)
Matei Zaharia (University of California)