Publication


Featured research published by Andrew Dean Ho.


Educational Researcher | 2014

Changing “Course”: Reconceptualizing Educational Variables for Massive Open Online Courses

Jennifer DeBoer; Andrew Dean Ho; Glenda S. Stump; Lori Breslow

In massive open online courses (MOOCs), low barriers to registration attract large numbers of students with diverse interests and backgrounds, and student use of course content is asynchronous and unconstrained. The authors argue that MOOC data are not only plentiful and different in kind but require reconceptualization—new educational variables or different interpretations of existing variables. The authors illustrate this by demonstrating the inadequacy or insufficiency of conventional interpretations of four variables for quantitative analysis and reporting: enrollment, participation, curriculum, and achievement. Drawing from 230 million clicks from 154,763 registrants for a prototypical MOOC offering in 2012, the authors present new approaches to describing and understanding user behavior in this emerging educational context.


Educational Researcher | 2008

The Problem With “Proficiency”: Limitations of Statistics and Policy Under No Child Left Behind

Andrew Dean Ho

The Percentage of Proficient Students (PPS) has become a ubiquitous statistic under the No Child Left Behind Act. This focus on proficiency has statistical and substantive costs. The author demonstrates that the PPS metric offers only limited and unrepresentative depictions of large-scale test score trends, gaps, and gap trends. The limitations are unpredictable, dramatic, and difficult to correct in the absence of other data. Interpretation of these depictions generally leads to incorrect or incomplete inferences about distributional change. The author shows how the statistical shortcomings of these depictions extend to shortcomings of policy, from exclusively encouraging score gains near the proficiency cut score to shortsighted comparisons of state and national testing results. The author proposes alternatives for large-scale score reporting and argues that a distribution-wide perspective on results is required for any serious analysis of test score data, including “growth”-related results under the recent Growth Model Pilot Program.
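
To make the core point concrete, the following is a minimal sketch (not from the article) showing how identical mean gains can translate into very different PPS gains depending on where a group's scores sit relative to a proficiency cut score; the normal distributions and the cut score are illustrative assumptions.

```python
# A minimal sketch (not from the paper): two hypothetical groups improve by the same
# amount in mean terms, but the Percentage of Proficient Students (PPS) changes by very
# different amounts depending on how close each group is to an assumed cut score.
import numpy as np
from scipy.stats import norm

cut = 0.0                      # hypothetical proficiency cut score (z-score units)
mean_gain = 0.2                # identical mean improvement for both groups

for label, mu0 in [("group near the cut", -0.1), ("group far below the cut", -1.5)]:
    pps_before = 1 - norm.cdf(cut, loc=mu0, scale=1.0)
    pps_after = 1 - norm.cdf(cut, loc=mu0 + mean_gain, scale=1.0)
    print(f"{label}: mean gain {mean_gain:.2f} SD, "
          f"PPS change {100 * (pps_after - pps_before):.1f} points")
```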


ACM Queue | 2014

Privacy, anonymity, and big data in the social sciences

Jon P. Daries; Justin Reich; Jim Waldo; Elise M. Young; Jonathan Whittinghill; Andrew Dean Ho; Daniel T. Seaton; Isaac L. Chuang

Open data has tremendous potential for science, but, in human subjects research, there is a tension between privacy and releasing high-quality open data. Federal law governing student privacy and the release of student records suggests that anonymizing student data protects student privacy. Guided by this standard, we de-identified and released a data set from 16 MOOCs (massive open online courses) from MITx and HarvardX on the edX platform. In this article, we show that these and other de-identification procedures necessitate changes to data sets that threaten replication and extension of baseline analyses. To balance student privacy and the benefits of open data, we suggest focusing on protecting privacy without anonymizing data by instead expanding policies that compel researchers to uphold the privacy of the subjects in open data sets. If we want to have high-quality social science research and also protect the privacy of human subjects, we must eventually have trust in researchers. Otherwise, we’ll always have the strict tradeoff between anonymity and science illustrated here.
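
The de-identification procedures at issue here typically revolve around k-anonymity over quasi-identifiers. The sketch below is a generic k-anonymity check with hypothetical column names and an illustrative k, not the authors' actual release pipeline.

```python
# A generic k-anonymity check over hypothetical quasi-identifier columns, sketched with
# pandas. Column names and k are illustrative assumptions, not the authors' pipeline.
import pandas as pd

def violates_k_anonymity(df: pd.DataFrame, quasi_identifiers: list[str], k: int = 5) -> pd.DataFrame:
    """Return the rows whose quasi-identifier combination appears fewer than k times."""
    group_sizes = df.groupby(quasi_identifiers)[quasi_identifiers[0]].transform("size")
    return df[group_sizes < k]

# Example with made-up records: country, gender, and forum-post count act as quasi-identifiers.
records = pd.DataFrame({
    "country": ["US", "US", "IN", "IN", "BR"],
    "gender": ["f", "f", "m", "m", "f"],
    "n_forum_posts": [0, 0, 3, 3, 41],
})
print(violates_k_anonymity(records, ["country", "gender", "n_forum_posts"], k=2))
```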


Journal of Educational and Behavioral Statistics | 2009

A Nonparametric Framework for Comparing Trends and Gaps Across Tests

Andrew Dean Ho

Problems of scale typically arise when comparing test score trends, gaps, and gap trends across different tests. To overcome some of these difficulties, test score distributions on the same score scale can be represented by nonparametric graphs or statistics that are invariant under monotone scale transformations. This article motivates and then develops a framework for the comparison of these nonparametric trend, gap, and gap trend representations across tests. The connections between this framework and other nonparametric tools, including probability–probability (PP) plots, the Mann-Whitney U test, and the statistic known as P(Y > X), are highlighted. The author describes the advantages of this framework over scale-dependent trend and gap statistics and demonstrates applications of these nonparametric methods to frequently asked policy questions.
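
As an illustration of the framework's building blocks, the sketch below estimates P(Y > X) from two samples via the Mann-Whitney U statistic and maps it to a scale-invariant, gap-like quantity. The probit mapping shown is one common convention and should be read as illustrative rather than as the article's exact estimator.

```python
# A minimal sketch of a scale-invariant gap summary: estimate P(Y > X) (ties counted
# half) from two score samples via the Mann-Whitney U statistic, then map it to an
# effect-size-like quantity with a probit transform. The sqrt(2) * Phi^{-1} mapping is
# one common convention; treat the whole sketch as illustrative.
import numpy as np
from scipy.stats import mannwhitneyu, norm

def p_y_greater_x(x: np.ndarray, y: np.ndarray) -> float:
    """P(Y > X) + 0.5 * P(Y = X), estimated nonparametrically from two samples."""
    u, _ = mannwhitneyu(y, x)
    return u / (len(x) * len(y))

rng = np.random.default_rng(0)
x = rng.normal(0.0, 1.0, size=2000)   # reference group scores (any monotone rescaling works)
y = rng.normal(0.5, 1.0, size=2000)   # focal group scores

auc = p_y_greater_x(x, y)
gap = np.sqrt(2) * norm.ppf(auc)      # transformation-invariant gap in SD-like units
print(f"P(Y > X) ~ {auc:.3f}, probit-scaled gap ~ {gap:.3f}")
```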


Journal of Educational and Behavioral Statistics | 2013

Contrasting OLS and Quantile Regression Approaches to Student “Growth” Percentiles

Katherine E. Castellano; Andrew Dean Ho

Regression methods can locate student test scores in a conditional distribution, given past scores. This article contrasts and clarifies two approaches to describing these locations in terms of readily interpretable percentile ranks or “conditional status percentile ranks.” The first is Betebenner’s quantile regression approach that results in “Student Growth Percentiles.” The second is an ordinary least squares (OLS) regression approach that involves expressing OLS regression residuals as percentile ranks. The study describes the empirical and conceptual similarity of the two metrics in simulated and real-data scenarios. The metrics contrast in their scale-transformation invariance and sample size requirements but are comparable in their dependence on the number of prior years used as conditioning variables. These results support guidelines for selecting the model that best fits the data and have implications for the interpretations of these percentile ranks as “growth” measures.
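
The contrast can be sketched roughly as follows, using simulated data and a single prior-year score as the conditioning variable (a simplifying assumption): OLS residuals expressed as percentile ranks versus percentile-like ranks derived from a grid of quantile regressions.

```python
# A rough sketch (simulated data) of the two conditional-status metrics being contrasted.
# Model details (one prior score, linear conditioning, normal errors) are simplifying
# assumptions for illustration only.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(1)
prior = rng.normal(0, 1, size=2000)                     # prior-year scores
current = 0.7 * prior + rng.normal(0, 0.7, size=2000)   # current-year scores
X = sm.add_constant(prior)

# OLS approach: express residuals as percentile ranks within the sample.
ols_resid = np.asarray(sm.OLS(current, X).fit().resid)
ols_pr = 100.0 * ols_resid.argsort().argsort() / (len(ols_resid) - 1)

# Quantile-regression approach: a student's "growth percentile" is (roughly) the highest
# fitted conditional quantile that still lies below the observed current score.
taus = np.arange(0.01, 1.00, 0.01)
fitted = np.column_stack(
    [sm.QuantReg(current, X).fit(q=tau).predict(X) for tau in taus]
)
sgp = (fitted < current[:, None]).sum(axis=1)            # 0..99 percentile-like ranks

print("correlation between the two metrics:", np.corrcoef(ols_pr, sgp)[0, 1].round(3))
```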


Journal of Educational and Behavioral Statistics | 2012

Estimating Achievement Gaps From Test Scores Reported in Ordinal “Proficiency” Categories

Andrew Dean Ho; Sean F. Reardon

Test scores are commonly reported in a small number of ordered categories. Examples of such reporting include state accountability testing, Advanced Placement tests, and English proficiency tests. This article introduces and evaluates methods for estimating achievement gaps on a familiar standard-deviation-unit metric using data from these ordered categories alone. These methods hold two practical advantages over alternative achievement gap metrics. First, they require only categorical proficiency data, which are often available where means and standard deviations are not. Second, they result in gap estimates that are invariant to score scale transformations, providing a stronger basis for achievement gap comparisons over time and across jurisdictions. The authors find three candidate estimation methods that recover full-distribution gap estimates well when only censored data are available.
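
The flavor of these estimators can be illustrated with a small sketch: assuming normality within each group, cumulative proportions below each category cut are converted to probits, and the gap is recovered from the line relating the two groups' probits. The numbers and the simple OLS line fit below are illustrative simplifications, not the article's estimators.

```python
# A rough sketch of estimating a standardized gap from coarsened data: assume scores are
# normal within each group ("respective normality"), convert cumulative proportions below
# each category cut to probits, and recover the gap from the line relating the two groups'
# probits. Proportions below are hypothetical.
import numpy as np
from scipy.stats import norm

# Hypothetical cumulative proportions below each of three proficiency cuts.
below_cuts_a = np.array([0.25, 0.55, 0.85])   # group A (focal group)
below_cuts_b = np.array([0.10, 0.35, 0.70])   # group B (reference group)

y = norm.ppf(below_cuts_a)                    # (c - mu_a) / sigma_a at each cut c
x = norm.ppf(below_cuts_b)                    # (c - mu_b) / sigma_b at each cut c

# Under respective normality, y = (mu_b - mu_a)/sigma_a + (sigma_b/sigma_a) * x.
slope, intercept = np.polyfit(x, y, deg=1)
gap = intercept / np.sqrt((1.0 + slope**2) / 2.0)   # gap in pooled-SD-like units
print(f"estimated standardized gap ~ {gap:.2f}")
```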


Educational and Psychological Measurement | 2015

Descriptive Statistics for Modern Test Score Distributions: Skewness, Kurtosis, Discreteness, and Ceiling Effects

Andrew Dean Ho; Carol Yu

Many statistical analyses benefit from the assumption that unconditional or conditional distributions are continuous and normal. More than 50 years ago in this journal, Lord and Cook chronicled departures from normality in educational tests, and Micceri similarly showed that the normality assumption is met rarely in educational and psychological practice. In this article, the authors extend these previous analyses to state-level educational test score distributions that are an increasingly common target of high-stakes analysis and interpretation. Among 504 scale-score and raw-score distributions from state testing programs from recent years, nonnormal distributions are common and are often associated with particular state programs. The authors explain how scaling procedures from item response theory lead to nonnormal distributions as well as unusual patterns of discreteness. The authors recommend that distributional descriptive statistics be calculated routinely to inform model selection for large-scale test score data, and they illustrate consequences of nonnormality using sensitivity studies that compare baseline results to those from normalized score scales.
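
A minimal sketch of the routine distributional descriptives being recommended, computed here on simulated scale scores with an artificial ceiling:

```python
# A minimal sketch of routine distributional descriptives for test score data: skewness,
# excess kurtosis, discreteness (number of distinct scale scores), and the proportion of
# scores at the ceiling. The scores are simulated, with an artificial ceiling at 700.
import numpy as np
from scipy.stats import skew, kurtosis

rng = np.random.default_rng(7)
scores = np.clip(np.round(rng.normal(650, 40, size=10_000)), 400, 700)

print("skewness:        ", round(skew(scores), 3))
print("excess kurtosis: ", round(kurtosis(scores), 3))        # Fisher definition: normal = 0
print("distinct values: ", np.unique(scores).size)
print("ceiling rate:    ", round(float(np.mean(scores == scores.max())), 3))
```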


Journal of Educational and Behavioral Statistics | 2015

Practical Differences Among Aggregate-Level Conditional Status Metrics: From Median Student Growth Percentiles to Value-Added Models

Katherine E. Castellano; Andrew Dean Ho

Aggregate-level conditional status metrics (ACSMs) describe the status of a group by referencing current performance to expectations given past scores. This article provides a framework for these metrics, classifying them by aggregation function (mean or median), regression approach (linear mean and nonlinear quantile), and the scale that supports interpretations (percentile rank and score scale), among other factors. This study addresses the question “how different are these ACSMs?” in three ways. First, using simulated data, it evaluates how well each model recovers its respective parameters. Second, using both simulated and empirical data, it illustrates practical differences among ACSMs in terms of pairwise rank differences incurred by switching between metrics. Third, it ranks ACSMs in terms of their robustness under scale transformations. The results consistently show that choices between mean- and median-based metrics lead to more substantial differences than choices between fixed- and random-effects or linear mean and nonlinear quantile regression. The findings set expectations for cross-metric comparability in realistic data scenarios.
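
As a small, simulated illustration of the aggregation-function contrast, the sketch below aggregates the same student-level conditional status percentile ranks by mean and by median and compares the resulting group rankings; the group structure and percentile ranks are made up.

```python
# A small, simulated illustration of the aggregation-function contrast: aggregating the
# same student-level conditional status percentile ranks with a mean versus a median can
# reorder groups when the within-group distributions are skewed. All values are made up.
import numpy as np
import pandas as pd

rng = np.random.default_rng(3)
df = pd.DataFrame({
    "school": np.repeat(["A", "B", "C", "D"], 200),
    # student-level conditional status percentile ranks (however they were computed)
    "cspr": np.concatenate([rng.beta(a, b, 200) * 100
                            for a, b in [(2, 2), (5, 4), (1.2, 1), (4, 5)]]),
})

summary = df.groupby("school")["cspr"].agg(["mean", "median"])
summary["rank_by_mean"] = summary["mean"].rank(ascending=False).astype(int)
summary["rank_by_median"] = summary["median"].rank(ascending=False).astype(int)
print(summary.round(1))
```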


Journal of Educational and Behavioral Statistics | 2015

Practical Issues in Estimating Achievement Gaps From Coarsened Data

Sean F. Reardon; Andrew Dean Ho

In an earlier paper, we presented methods for estimating achievement gaps when test scores are coarsened into a small number of ordered categories, preventing fine-grained distinctions between individual scores. We demonstrated that gaps can nonetheless be estimated with minimal bias across a broad range of simulated and real coarsened data scenarios. In this article, we extend this previous work to obtain practical estimates of the imprecision imparted by the coarsening process and of the bias imparted by measurement error. In the first part of this article, we derive standard error estimates and demonstrate that coarsening leads to only very modest increases in standard errors under a wide range of conditions. In the second part of this article, we describe and evaluate a practical method for disattenuating gap estimates to account for bias due to measurement error.
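
The disattenuation step in the second part can be sketched under classical test theory assumptions: with equal reliability in both groups, dividing an observed-score standardized gap by the square root of reliability yields a gap in true-score standard deviation units. The numbers below are hypothetical.

```python
# A sketch of classical-test-theory disattenuation for a standardized gap: measurement
# error inflates the observed-score standard deviation, so the observed gap understates
# the true-score gap; dividing by sqrt(reliability) corrects for this under equal
# reliability in both groups. Numbers are hypothetical.
import math

observed_gap = 0.80        # gap in observed-score SD units (hypothetical)
reliability = 0.90         # test reliability (hypothetical)

true_score_gap = observed_gap / math.sqrt(reliability)
print(f"disattenuated gap ~ {true_score_gap:.3f} true-score SD units")
```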


Learning @ Scale (L@S) | 2014

Due dates in MOOCs: does stricter mean better?

Sergiy O Nesterko; Daniel T. Seaton; Justin Reich; Joe McIntyre; Qiuyi Han; Isaac L. Chuang; Andrew Dean Ho

Massive Open Online Courses (MOOCs) employ a variety of components to engage students in learning (e.g., videos, forums, quizzes). Some components are graded, which means that they play a key role in a student's final grade and certificate attainment. It is not yet clear how the due date structure of graded components affects student outcomes, including academic performance and alternative modes of learning. Using data from HarvardX and MITx, Harvard's and MIT's divisions for online learning, we study the structure of due dates on graded components for 10 completed MOOCs. We find that stricter due dates are associated with higher certificate attainment rates, but also with fewer late-joining students earning a certificate. Our findings motivate further studies of how the use of graded components and deadlines affects academic and alternative learning of MOOC students, and they can help inform the design of online courses.

Collaboration


Dive into Andrew Dean Ho's collaboration.

Top Co-Authors


Daniel T. Seaton

Massachusetts Institute of Technology


Isaac L. Chuang

Massachusetts Institute of Technology


Justin Reich

Massachusetts Institute of Technology
