Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Josh Gardner is active.

Publications


Featured research published by Josh Gardner.


User Modeling and User-Adapted Interaction | 2018

Student success prediction in MOOCs

Josh Gardner; Christopher Brooks

Predictive models of student success in Massive Open Online Courses (MOOCs) are a critical component of effective content personalization and adaptive interventions. In this article we review the state of the art in predictive models of student success in MOOCs and present a categorization of MOOC research according to the predictors (features), prediction (outcomes), and underlying theoretical model. We critically survey work across each category, providing data on the raw data source, feature engineering, statistical model, evaluation method, prediction architecture, and other aspects of these experiments. Such a review is particularly useful given the rapid expansion of predictive modeling research in MOOCs since the emergence of major MOOC platforms in 2012. This survey reveals several key methodological gaps, which include extensive filtering of experimental subpopulations, ineffective student model evaluation, and the use of experimental data which would be unavailable for real-world student success prediction and intervention, which is the ultimate goal of such models. Finally, we highlight opportunities for future research, which include temporal modeling, research bridging predictive and explanatory student models, work which contributes to learning theory, and evaluating long-term learner success in MOOCs.


Learning at Scale | 2018

Replicating MOOC predictive models at scale

Josh Gardner; Christopher Brooks; Juan Miguel L. Andres; Ryan S. Baker

We present a case study in predictive model replication for student dropout in Massive Open Online Courses (MOOCs) using a large and diverse dataset (133 sessions of 28 unique courses offered by two institutions). This experiment was run on the MOOC Replication Framework (MORF), which makes it feasible to fully replicate complex machine-learned models, from raw data to model evaluation. We provide an overview of the MORF platform architecture and functionality, and demonstrate its use through a case study. In this replication of [41], we contextualize and evaluate the results of the previous work using statistical tests and a more effective model evaluation scheme. We find that only some of the original findings replicate across this larger and more diverse sample of MOOCs, with others replicating significantly in the opposite direction. Our analysis also reveals results highly relevant to the prediction task that were not reported in the original experiment. This work demonstrates the importance of replicating predictive modeling research in MOOCs using large and diverse datasets, illuminates the challenges of doing so, and describes our freely available, open-source software framework to overcome barriers to replication.
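
The abstract does not include code, but a minimal sketch of the kind of cross-course statistical testing it describes might look like the following; the per-session AUC values and variable names are simulated placeholders and are not drawn from MORF's actual API.

```python
# Hypothetical sketch of cross-course statistical testing in a replication:
# compare two dropout models' per-session AUC over many MOOC offerings with
# a paired non-parametric test. Data are simulated; this is not MORF's API.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Per-session AUC for two models over 133 course sessions (simulated).
auc_original = rng.uniform(0.65, 0.85, size=133)
auc_replication = auc_original + rng.normal(0.01, 0.02, size=133)

# Wilcoxon signed-rank test: is the difference consistent across sessions?
stat, p = stats.wilcoxon(auc_original, auc_replication)
print(f"Wilcoxon statistic = {stat:.1f}, p = {p:.4f}")
```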


Learning at Scale | 2017

A Statistical Framework for Predictive Model Evaluation in MOOCs

Josh Gardner; Christopher Brooks

Feature extraction and model selection are two essential processes when building predictive models of student success. In this work we describe and demonstrate a statistical approach to both tasks, comparing five modeling techniques (a lasso penalized logistic regression model, naïve Bayes, random forest, SVM, and classification tree) across three sets of features (week-only, summed, and appended). We conduct this comparison on a dataset compiled from 30 total offerings of five different MOOCs run on the Coursera platform. Through the use of the Friedman test with a corresponding post-hoc Nemenyi test, we present comparative performance results for several classifiers across the three different feature extraction methods, demonstrating a rigorous inferential process intended to guide future analyses of student success systems.
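
A minimal, hypothetical sketch of the Friedman test with a Nemenyi post-hoc comparison named above, using simulated per-course accuracies rather than the study's data, and assuming the third-party scikit-posthocs package for the post-hoc step:

```python
# Minimal sketch of a Friedman test with a Nemenyi post-hoc comparison,
# the inferential procedure named in the abstract. The accuracy values are
# simulated placeholders; the study itself used 30 offerings of five MOOCs.
import numpy as np
import pandas as pd
from scipy import stats
import scikit_posthocs as sp  # third-party package: pip install scikit-posthocs

rng = np.random.default_rng(1)
classifiers = ["lasso_logreg", "naive_bayes", "random_forest", "svm", "cart"]

# Rows = course offerings (blocks), columns = classifiers (treatments).
acc = pd.DataFrame(rng.uniform(0.6, 0.9, size=(30, 5)), columns=classifiers)

# Friedman test: do classifier rankings differ across course offerings?
stat, p = stats.friedmanchisquare(*(acc[c] for c in classifiers))
print(f"Friedman chi-square = {stat:.2f}, p = {p:.4f}")

# Nemenyi post-hoc test: pairwise p-values for each classifier pair.
print(sp.posthoc_nemenyi_friedman(acc))
```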


International Learning Analytics and Knowledge Conference | 2017

Integrating syllabus data into student success models

Josh Gardner; Ogechi Onuoha; Christopher Brooks

In this work, we present (1) a methodology for collecting, evaluating, and utilizing human-annotated data about course syllabi in predictive models of student success, and (2) an empirical analysis of the predictiveness of such features as they relate to others in modeling end-of-course grades in traditional higher education courses. We present a two-stage approach to (1) that addresses several challenges unique to the annotation task, and address (2) using variable importance metrics from a series of exploratory models. We demonstrate that the process of supplementing traditional course data with human-annotated data can potentially improve predictive models with information not contained in university records, and highlight specific features that demonstrate these potential information gains.
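
The abstract gives no implementation details, but a minimal sketch of extracting variable-importance metrics from an exploratory model, with entirely hypothetical syllabus and registrar features, might look like this:

```python
# Hypothetical sketch: ranking human-annotated syllabus features alongside
# traditional registrar features via random-forest variable importance.
# All feature names and the synthetic data are illustrative only.
import numpy as np
import pandas as pd
from sklearn.ensemble import RandomForestRegressor

rng = np.random.default_rng(2)
n = 500
X = pd.DataFrame({
    "prior_gpa": rng.uniform(2.0, 4.0, n),                  # registrar feature
    "credits_attempted": rng.integers(6, 18, n),             # registrar feature
    "syllabus_num_assessments": rng.integers(2, 12, n),      # annotated feature
    "syllabus_exam_grade_weight": rng.uniform(0.0, 1.0, n),  # annotated feature
})
# Simulated end-of-course grade driven mostly by prior GPA.
y = 0.6 * X["prior_gpa"] + 0.1 * X["syllabus_exam_grade_weight"] + rng.normal(0, 0.3, n)

model = RandomForestRegressor(n_estimators=200, random_state=0).fit(X, y)
importance = pd.Series(model.feature_importances_, index=X.columns)
print(importance.sort_values(ascending=False))
```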


National Conference on Artificial Intelligence | 2018

Dropout Model Evaluation in MOOCs

Josh Gardner; Christopher Brooks


arXiv: Software Engineering | 2018

MORF: A Framework for MOOC Predictive Modeling and Replication At Scale

Josh Gardner; Christopher Brooks; Juan Miguel L. Andres; Ryan S. Baker


Journal of Learning Analytics | 2018

Evaluating Predictive Models of Student Success: Closing the Methodological Gap

Josh Gardner; Christopher Brooks


Learning Analytics and Knowledge | 2018

Coenrollment networks and their relationship to grades in undergraduate education

Josh Gardner; Christopher Brooks


arXiv: Software Engineering | 2018

MORF: A Framework for Predictive Modeling and Replication At Scale With Privacy-Restricted MOOC Data

Josh Gardner; Christopher Brooks; Juan Miguel L. Andres; Ryan S. Baker


arXiv: Computers and Society | 2018

Enabling End-To-End Machine Learning Replicability: A Case Study in Educational Data Mining

Josh Gardner; Yuming Yang; Ryan S. Baker; Christopher Brooks

Collaboration


Dive into Josh Gardner's collaborations.

Top Co-Authors

Ryan S. Baker
University of Pennsylvania

Arya Farahi
University of Michigan

Jared Webb
Brigham Young University

Victor Pang
University of Michigan