
Publication


Featured research published by Andrew E. Waters.


Learning at Scale | 2015

Mathematical Language Processing: Automatic Grading and Feedback for Open Response Mathematical Questions

Andrew S. Lan; Divyanshu Vats; Andrew E. Waters; Richard G. Baraniuk

While computer and communication technologies have provided effective means to scale up many aspects of education, the submission and grading of assessments such as homework assignments and tests remains a weak link. In this paper, we study the problem of automatically grading the kinds of open response mathematical questions that figure prominently in STEM (science, technology, engineering, and mathematics) courses. Our data-driven framework for mathematical language processing (MLP) leverages solution data from a large number of learners to evaluate the correctness of their solutions, assign partial-credit scores, and provide feedback to each learner on the likely locations of any errors. MLP takes inspiration from the success of natural language processing for text data and comprises three main steps. First, we convert each solution to an open response mathematical question into a series of numerical features. Second, we cluster the features from several solutions to uncover the structures of correct, partially correct, and incorrect solutions. We develop two different clustering approaches, one that leverages generic clustering algorithms and one based on Bayesian nonparametrics. Third, we automatically grade the remaining (potentially large number of) solutions based on their assigned cluster and one instructor-provided grade per cluster. As a bonus, we can track the cluster assignment of each step of a multistep solution and determine when it departs from a cluster of correct solutions, which enables us to indicate the likely locations of errors to learners. We test and validate MLP on real-world MOOC data to demonstrate how it can substantially reduce the human effort required in large-scale educational platforms.
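
The grade-propagation step described above (cluster the solutions, then apply one instructor-provided grade per cluster) can be sketched as follows. The feature vectors, centroids, and grade values here are invented for illustration; the paper's actual features come from parsing open-response mathematical expressions.

```python
import numpy as np

def propagate_grades(features, centroids, cluster_grades):
    """Assign each solution to its nearest cluster centroid and give it
    that cluster's instructor-provided grade (hypothetical helper)."""
    # Distance from every solution to every centroid.
    dists = np.linalg.norm(features[:, None, :] - centroids[None, :, :], axis=2)
    labels = dists.argmin(axis=1)
    return [cluster_grades[k] for k in labels]

# Toy 2-D features: two solutions near the "correct" cluster, one near "incorrect".
features = np.array([[0.1, 0.0], [0.2, 0.1], [5.0, 5.0]])
centroids = np.array([[0.0, 0.0], [5.0, 5.0]])   # cluster 0 = correct, 1 = incorrect
grades = {0: 10, 1: 2}                           # one instructor grade per cluster
print(propagate_grades(features, centroids, grades))  # [10, 10, 2]
```

With one grade per cluster, the instructor grades a handful of representatives instead of every submission, which is where the human-effort savings come from.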


International Conference on Acoustics, Speech, and Signal Processing | 2010

Distributed bearing estimation via matrix completion

Andrew E. Waters; Volkan Cevher

We consider bearing estimation of multiple narrow-band plane waves impinging on an array of sensors. For this problem, bearing estimation algorithms such as minimum variance distortionless response (MVDR), multiple signal classification, and maximum likelihood generally require the array covariance matrix as sufficient statistics. Interestingly, the rank of the array covariance matrix is approximately equal to the number of the sources, which is typically much smaller than the number of sensors in many practical scenarios. In these scenarios, the covariance matrix is low-rank and can be estimated via matrix completion from only a small subset of its entries. We propose a distributed matrix completion framework to drastically reduce the inter-sensor communication in a network while still achieving near-optimal bearing estimation accuracy. Using recent results in noisy matrix completion, we provide sampling bounds and show how the additive noise at the sensor observations affects the reconstruction performance. We demonstrate via simulations that our approach sports desirable tradeoffs between communication costs and bearing estimation accuracy.
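
The core idea, recovering a low-rank covariance-like matrix from a subset of its entries, can be illustrated with a generic "hard impute" iteration: alternate between a rank-r SVD truncation and re-imposing the observed entries. This is a stand-in sketch, not the paper's distributed algorithm, and the matrix and observation mask are toy data.

```python
import numpy as np

def complete_low_rank(observed, mask, rank, iters=200):
    """Fill unobserved entries of a low-rank matrix by alternating a
    rank-r truncated SVD with re-imposing the observed entries."""
    X = np.where(mask, observed, 0.0)
    low = X
    for _ in range(iters):
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        low = (U[:, :rank] * s[:rank]) @ Vt[:rank]   # best rank-r approximation
        X = np.where(mask, observed, low)            # keep observed entries fixed
    return low

# Rank-1 "covariance-like" matrix with three entries hidden.
u = np.arange(1.0, 7.0)
M = np.outer(u, u)                                   # rank 1, 6x6
mask = np.ones((6, 6), dtype=bool)
mask[0, 5] = mask[3, 2] = mask[5, 0] = False
M_hat = complete_low_rank(M, mask, rank=1)
print(float(np.max(np.abs(M_hat - M))))              # tiny for this easy instance
```

In the sensor-network setting, each unobserved entry is one inter-sensor correlation that never has to be communicated, which is the source of the communication savings.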


Learning at Scale | 2015

BayesRank: A Bayesian Approach to Ranked Peer Grading

Andrew E. Waters; David Tinapple; Richard G. Baraniuk

Advances in online and computer supported education afford exciting opportunities to revolutionize the classroom, while also presenting a number of new challenges not faced in traditional educational settings. Foremost among these challenges is the problem of accurately and efficiently evaluating learner work as the class size grows, which is directly related to the larger goal of providing quality, timely, and actionable formative feedback. Recently there has been a surge in interest in using peer grading methods coupled with machine learning to accurately and fairly evaluate learner work while alleviating the instructor bottleneck and grading overload. Prior work in peer grading almost exclusively focuses on numerically scored grades -- either real-valued or ordinal. In this work, we consider the implications of peer ranking in which learners rank a small subset of peer work from strongest to weakest, and propose new types of computational analyses that can be applied to this ranking data. We adopt a Bayesian approach to the ranked peer grading problem and develop a novel model and method for utilizing ranked peer-grading data. We additionally develop a novel procedure for adaptively identifying which work should be ranked by particular peers in order to dynamically resolve ambiguity in the data and rapidly resolve a clearer picture of learner performance. We showcase our results on both synthetic and several real-world educational datasets.
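
The abstract does not spell out the Bayesian model, but the shape of the input data (each peer ranks a small subset of submissions, strongest to weakest) can be shown with a simple Borda-count baseline. This is not the paper's method, and the submissions and rankings are invented.

```python
from collections import defaultdict

def borda_aggregate(rankings):
    """Combine partial rankings (each list ordered strongest-to-weakest)
    into overall scores via a Borda count; a non-Bayesian baseline used
    here only to illustrate the structure of ranked peer-grading data."""
    scores = defaultdict(float)
    for ranking in rankings:
        n = len(ranking)
        for pos, item in enumerate(ranking):
            scores[item] += n - 1 - pos   # top of a list of n earns n-1 points
    return dict(scores)

# Each peer ranks a small subset of submissions, strongest first.
rankings = [["A", "B", "C"], ["A", "C", "D"], ["B", "D", "C"]]
scores = borda_aggregate(rankings)
print(max(scores, key=scores.get))   # 'A'
```

A Bayesian treatment replaces these point scores with posterior distributions over learner quality, which is what enables the adaptive step of routing ambiguous work to further rankers.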


International Conference on Acoustics, Speech, and Signal Processing | 2013

Sparse probit factor analysis for learning analytics

Andrew E. Waters; Andrew S. Lan; Christoph Studer

We develop a new model and algorithm for machine learning-based learning analytics, which estimate a learner's knowledge of the concepts underlying a domain. Our model represents the probability that a learner provides the correct response to a question in terms of three factors: their understanding of a set of underlying concepts, the concepts involved in each question, and each question's intrinsic difficulty. We estimate these factors given the graded responses to a set of questions. We develop a bi-convex algorithm to solve the resulting SPARse Factor Analysis (SPARFA) problem. We also incorporate user-defined tags on questions to facilitate the interpretability of the estimated factors. Experiments with synthetic and real-world data demonstrate the efficacy of our approach.
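
The three-factor probit model can be sketched directly: the probability of a correct response is the standard normal CDF of the concept-weighted knowledge minus the question's intrinsic difficulty. The parameter values below are hypothetical.

```python
import math

def probit_correct_prob(knowledge, loading, difficulty):
    """Probability of a correct answer under a probit link, Phi(w . c - mu):
    c = learner's concept knowledge, w = the question's concept loadings,
    mu = the question's intrinsic difficulty (toy values, for illustration)."""
    z = sum(w * c for w, c in zip(loading, knowledge)) - difficulty
    return 0.5 * (1.0 + math.erf(z / math.sqrt(2.0)))  # standard normal CDF

# A learner strong in concept 1, weak in concept 2; the question loads on both.
p = probit_correct_prob(knowledge=[1.5, -0.5], loading=[1.0, 0.5], difficulty=0.25)
print(round(p, 3))   # 0.841
```

SPARFA's estimation problem runs this in reverse: given only the 0/1 graded responses, jointly recover sparse loadings, knowledge, and difficulties.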


International Conference on Acoustics, Speech, and Signal Processing | 2012

A bit-constrained SAR ADC for compressive acquisition of frequency sparse signals

Andrew E. Waters; Charles K. Sestok; Richard G. Baraniuk

We introduce a novel analog-to-digital converter (ADC) based on the traditional successive approximation register. This architecture employs compressive sensing (CS) techniques to acquire and reconstruct frequency sparse signals. One important difference between our approach and traditional CS systems is that our architecture constrains the number of bits used during acquisition rather than the number of measurements. Our system is able to flexibly partition a fixed budget in order to trade the number of measurements it acquires with the quantization depth given to each measurement. We show that this degree of flexibility is particularly advantageous for ameliorating the CS noise folding phenomenon, allowing our ADC significant gains over measurement-constrained compressive sensing systems.
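
The measurement-versus-quantization trade can be illustrated with a toy search over partitions of a fixed bit budget. The distortion model (quantization error falling as 2^(-2b), noise folding growing as N/M) and its constants are assumptions for illustration, not the paper's analysis.

```python
def best_partition(total_bits, n_nyquist, folded_noise=1.0):
    """Split a fixed bit budget B into M measurements of b bits each (M*b = B),
    trading quantization error against CS noise folding under a toy model."""
    best = None
    for b in range(1, total_bits + 1):
        if total_bits % b:
            continue                                   # require M*b == B exactly
        m = total_bits // b
        err = 2.0 ** (-2 * b) + folded_noise * n_nyquist / m  # toy distortion model
        if best is None or err < best[0]:
            best = (err, m, b)
    return best[1], best[2]   # (measurements, bits per measurement)

m, b = best_partition(total_bits=64, n_nyquist=128)
print(m, b)   # prints "64 1": many coarse measurements win under heavy folding
```

Under a different noise balance the optimum shifts toward fewer, finer measurements, which is exactly the flexibility the architecture is designed to exploit.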


Learning at Scale | 2018

QG-net: a data-driven question generation model for educational content

Zichao Wang; Andrew S. Lan; Weili Nie; Andrew E. Waters; Phillip Grimaldi; Richard G. Baraniuk

The ever-growing amount of educational content renders it increasingly difficult to manually generate sufficient practice or quiz questions to accompany it. This paper introduces QG-Net, a recurrent neural network-based model specifically designed for automatically generating quiz questions from educational content such as textbooks. QG-Net, when trained on a publicly available, general-purpose question/answer dataset and without further fine-tuning, is capable of generating high quality questions from textbooks, where the content is significantly different from the training data. Indeed, QG-Net outperforms state-of-the-art neural network-based and rule-based systems for question generation, both when evaluated using standard benchmark datasets and when using human evaluators. QG-Net also scales favorably to applications with large amounts of educational content, since its performance improves with the amount of training data.


IEEE Journal of Selected Topics in Signal Processing | 2017

BLAh: Boolean Logic Analysis for Graded Student Response Data

Andrew S. Lan; Andrew E. Waters; Christoph Studer; Richard G. Baraniuk

Machine learning (ML) models and algorithms can enable a personalized learning experience for students in an inexpensive and scalable manner. At the heart of ML-driven personalized learning is the automated analysis of student responses to assessment items. Existing statistical models for this task enable the estimation of student knowledge and question difficulty solely from graded response data with only minimal effort from instructors. However, most existing student–response models are generalized linear models, meaning that they characterize the probability that a student answers a question correctly through a linear combination of their knowledge and the question's difficulty with respect to each concept that is being assessed. Such models cannot characterize complicated, nonlinear student–response associations and, hence, lack human interpretability in practice. In this paper, we propose a nonlinear student–response model called Boolean logic analysis (BLAh) that models a student's binary-valued graded response to a question as the output of a Boolean logic function. We develop a Markov chain Monte Carlo inference algorithm that learns the Boolean logic functions for each question solely from graded response data. A refined BLAh model improves the identifiability, tractability, and interpretability by considering a restricted set of ordered Boolean logic functions. Experimental results on a variety of real-world educational datasets demonstrate that BLAh not only achieves best-in-class prediction performance on unobserved student responses on some datasets but also provides easily interpretable parameters when questions are tagged with metadata by domain experts, which can provide useful feedback to instructors and content designers to improve the quality of assessment items.
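
The idea of modeling a graded response as a Boolean function of per-concept mastery can be sketched with two illustrative logic forms. The AND/OR forms below are assumptions for illustration; the paper learns an (ordered) Boolean function per question via MCMC rather than fixing it.

```python
def blah_response(mastery, logic):
    """Predict a binary graded response as a Boolean function of per-concept
    mastery bits, in the spirit of the BLAh model (logic forms are illustrative)."""
    if logic == "AND":                       # conjunctive: every concept required
        return all(mastery)
    if logic == "OR":                        # disjunctive: any one concept suffices
        return any(mastery)
    raise ValueError("unsupported logic form")

# A question requiring both concepts vs. one solvable via either concept.
print(blah_response([True, False], "AND"))   # False
print(blah_response([True, False], "OR"))    # True
```

Note what a linear model cannot express here: under "AND", mastering one more concept only helps if every other required concept is already mastered.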


Neural Information Processing Systems | 2011

SpaRCS: Recovering low-rank and sparse matrices from compressive measurements

Andrew E. Waters; Aswin C. Sankaranarayanan; Richard G. Baraniuk


Journal of Machine Learning Research | 2014

Sparse factor analysis for learning and content analytics

Andrew S. Lan; Andrew E. Waters; Christoph Studer; Richard G. Baraniuk


Archive | 2014

Sparse Factor Analysis for Analysis of User Content Preferences

Richard G. Baraniuk; Andrew S. Lan; Christoph Studer; Andrew E. Waters

Collaboration


Dive into Andrew E. Waters's collaborations.

Top Co-Authors

David Tinapple, Arizona State University


Dmytro Babik, James Madison University
