Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Joseph Jay Williams is active.

Publication


Featured research published by Joseph Jay Williams.


Learning at Scale | 2015

A Playful Game Changer: Fostering Student Retention in Online Education with Social Gamification

Markus Krause; Marc Mogalle; Henning Pohl; Joseph Jay Williams

Many MOOCs report high drop-off rates for their students. Among the factors reportedly contributing to this picture are lack of motivation, feelings of isolation, and lack of interactivity in MOOCs. This paper investigates the potential of gamification with social game elements for increasing retention and learning success. Students in our experiment showed a significant increase of 25% in retention period (videos watched) and 23% higher average scores when the course interface was gamified. Social game elements amplified this effect significantly: students in this condition showed an increase of 50% in retention period and 40% higher average test scores.


Psychonomic Bulletin & Review | 2015

A rational model of function learning

Christopher G. Lucas; Thomas L. Griffiths; Joseph Jay Williams; Michael L. Kalish

Theories of how people learn relationships between continuous variables have tended to focus on two possibilities: one, that people are estimating explicit functions, or two, that they are performing associative learning supported by similarity. We provide a rational analysis of function learning, drawing on work on regression in machine learning and statistics. Using the equivalence of Bayesian linear regression and Gaussian processes, which provide a probabilistic basis for similarity-based function learning, we show that learning explicit rules and using similarity can be seen as two views of one solution to this problem. We use this insight to define a rational model of human function learning that combines the strengths of both approaches and accounts for a wide variety of experimental results.
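The key technical point is the equivalence the abstract mentions: Bayesian linear regression over a set of basis functions and Gaussian process regression with the corresponding kernel give the same predictions. The following minimal sketch (not the authors' implementation; kernel choices and data are illustrative) shows the same GP posterior-mean machinery producing "rule-like" generalization with a linear kernel and "similarity-based" generalization with an RBF kernel.

```python
import numpy as np

def rbf_kernel(x1, x2, length_scale=1.0, variance=1.0):
    """Similarity-based covariance: nearby inputs get similar predicted outputs."""
    d = x1[:, None] - x2[None, :]
    return variance * np.exp(-0.5 * (d / length_scale) ** 2)

def gp_posterior_mean(x_train, y_train, x_test, kernel, noise=0.1):
    """Posterior predictive mean of GP regression; equivalent to Bayesian linear
    regression in the feature space implied by the kernel."""
    K = kernel(x_train, x_train) + noise**2 * np.eye(len(x_train))
    K_star = kernel(x_test, x_train)
    return K_star @ np.linalg.solve(K, y_train)

# Training data from a noisy linear function; both a "rule" (linear) kernel and a
# "similarity" (RBF) kernel interpolate it -- two views of one solution.
x_train = np.linspace(0, 1, 10)
y_train = 2.0 * x_train + 0.1 * np.random.randn(10)
x_test = np.linspace(0, 1, 5)

linear_kernel = lambda a, b: 1.0 + np.outer(a, b)                    # explicit-rule view
print(gp_posterior_mean(x_train, y_train, x_test, rbf_kernel))       # similarity view
print(gp_posterior_mean(x_train, y_train, x_test, linear_kernel))    # rule view
```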


Journal of Experimental Psychology: Learning, Memory, and Cognition | 2013

Why are people bad at detecting randomness? A statistical argument

Joseph Jay Williams; Thomas L. Griffiths

Errors in detecting randomness are often explained in terms of biases and misconceptions. We propose and provide evidence for an account that characterizes the contribution of the inherent statistical difficulty of the task. Our account is based on a Bayesian statistical analysis, focusing on the fact that a random process is a special case of systematic processes, meaning that the hypothesis of randomness is nested within the hypothesis of systematicity. This analysis shows that randomly generated outcomes are still reasonably likely to have come from a systematic process and are thus only weakly diagnostic of a random process. We tested this account through 3 experiments. Experiments 1 and 2 showed that the low accuracy in judging whether a sequence of coin flips is random (or biased toward heads or tails) is due to the weak evidence provided by random sequences. While randomness judgments were less accurate than judgments involving non-nested hypotheses in the same task domain, this difference disappeared once the strength of the available evidence was equated. Experiment 3 extended this finding to assessing whether a sequence was random or exhibited sequential dependence, showing that the distribution of statistical evidence has an effect that complements known misconceptions.
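The statistical argument can be made concrete with a Bayes factor for a coin-flip sequence: because "random" (p = 0.5) is nested within "systematic" (p unknown), sequences actually generated at random yield only modest evidence for randomness. The snippet below is a minimal sketch of this nested comparison under a uniform prior on the bias, not the paper's analysis pipeline.

```python
import math
import random

def log_bayes_factor(heads, tails):
    """log P(sequence | random) - log P(sequence | systematic).
    Random: p = 0.5 exactly. Systematic: p ~ Uniform(0, 1), so the marginal
    likelihood of a specific sequence is the Beta function B(heads+1, tails+1)."""
    n = heads + tails
    log_p_random = n * math.log(0.5)
    log_p_systematic = (math.lgamma(heads + 1) + math.lgamma(tails + 1)
                        - math.lgamma(n + 2))   # log B(heads+1, tails+1)
    return log_p_random - log_p_systematic

# Sequences from a fair coin rarely produce strong evidence for "random":
# the log Bayes factor stays small, i.e. random outcomes are weakly diagnostic.
random.seed(0)
for n in (10, 20, 40):
    flips = [random.random() < 0.5 for _ in range(n)]
    heads = sum(flips)
    print(n, round(log_bayes_factor(heads, n - heads), 2))
```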


Archive | 2014

The MOOClet Framework: Improving Online Education through Experimentation and Personalization of Modules

Joseph Jay Williams; Na Li; Juho Kim; Jacob Whitehill; Samuel G. Maldonado; Mykola Pechenizkiy; Larry F. Chu; Neil T. Heffernan

One goal for massive open online courses is that the educational benefits they provide scale with the number and diversity of learners interacting with the platforms: as more learners participate, an increasing amount of data becomes available about which interactions and content increase engagement and learning, and about which educational interactions are effective for which learners. This paper presents the MOOClet Framework for tackling this goal, which recognizes a key relationship between randomized experiments and personalization of content. The Framework defines MOOClets as modular components of online courses that can be modified to create different versions, which in turn can be iteratively and adaptively improved through experiments and personalized to characteristics of users. We show how the MOOClet Framework provides guidance in identifying MOOClets and augmenting existing platforms with a platform-independent layer that enables experimentation and personalization even when platforms do not provide native support. We present a concrete usage scenario of the framework in an implementation for the edX platform, showing how the addition of reflection questions and other content to a lecture video could be experimentally evaluated and personalized. A modeling simulation is also presented to show how the MOOClet Framework allows data-driven decisions about experimentation and personalization to be made using existing machine learning models. Consideration of the MOOClet Framework could help researchers, instructors, and course designers identify, implement, and improve modular components of existing online education platforms through experimentation and personalization.
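To make the core abstraction concrete, here is a minimal sketch of a "MOOClet" as described: a modular component with interchangeable versions that starts out randomized for experimentation and can later be personalized from accumulated data. Class and function names are illustrative assumptions, not the framework's actual API.

```python
import random
from dataclasses import dataclass, field

@dataclass
class MOOClet:
    """A modular course component with interchangeable versions that can be
    A/B-tested and, once data accumulates, personalized to learner features."""
    name: str
    versions: list
    policy: callable = None            # maps learner features to a version; None = still experimenting
    outcomes: list = field(default_factory=list)

    def choose_version(self, learner_features):
        if self.policy is not None:
            return self.policy(learner_features)   # personalization phase
        return random.choice(self.versions)        # randomized-experiment phase

    def record_outcome(self, version, learner_features, reward):
        self.outcomes.append((version, learner_features, reward))

# Illustrative usage: an explanation MOOClet with two versions.
explain = MOOClet("explain_fractions", versions=["worked_example", "reflection_prompt"])
v = explain.choose_version({"prior_score": 0.4})
explain.record_outcome(v, {"prior_score": 0.4}, reward=1)
# Later, a learned policy could replace random assignment, e.g.:
explain.policy = lambda f: "worked_example" if f["prior_score"] < 0.5 else "reflection_prompt"
```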


Learning Analytics and Knowledge | 2016

The Assessment of Learning Infrastructure (ALI): The Theory, Practice, and Scalability of Automated Assessment

Korinn Ostrow; Douglas Selent; Yan Wang; Eric Van Inwegen; Neil T. Heffernan; Joseph Jay Williams

Researchers invested in K-12 education struggle not just to enhance pedagogy, curriculum, and student engagement, but also to harness the power of technology in ways that will optimize learning. Online learning platforms offer a powerful environment for educational research at scale. The present work details the creation of an automated system designed to provide researchers with insights regarding data logged from randomized controlled experiments conducted within the ASSISTments TestBed. The Assessment of Learning Infrastructure (ALI) builds upon existing technologies to foster a symbiotic relationship beneficial to students, researchers, the platform and its content, and the learning analytics community. ALI is a sophisticated automated reporting system that provides an overview of sample distributions and basic analyses for researchers to consider when assessing their data. ALI's benefits can also be felt at scale through analyses that cut across multiple studies to drive iterative platform improvements while promoting personalized learning.
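The kind of output ALI is described as producing (per-condition sample distributions plus basic analyses) can be sketched in a few lines. The toy summarizer below is not the ALI system itself; the record format and the Welch t-test comparison are assumptions used for illustration.

```python
import numpy as np
from scipy import stats

def summarize_experiment(records):
    """Toy, ALI-style automated summary: per-condition sample sizes, means,
    and a basic two-condition comparison. Illustrative only, not ALI."""
    by_condition = {}
    for condition, score in records:
        by_condition.setdefault(condition, []).append(score)
    report = {c: {"n": len(v), "mean": float(np.mean(v)), "sd": float(np.std(v, ddof=1))}
              for c, v in by_condition.items()}
    if len(by_condition) == 2:
        a, b = by_condition.values()
        t, p = stats.ttest_ind(a, b, equal_var=False)   # Welch's t-test
        report["comparison"] = {"t": float(t), "p": float(p)}
    return report

# Hypothetical log rows: (assigned condition, post-test score)
rows = [("hint", 0.8), ("hint", 0.7), ("hint", 0.9),
        ("no_hint", 0.6), ("no_hint", 0.5), ("no_hint", 0.7)]
print(summarize_experiment(rows))
```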


Learning at Scale | 2015

Using and Designing Platforms for In Vivo Educational Experiments

Joseph Jay Williams; Korinn Ostrow; Xiaolu Xiong; Elena L. Glassman; Juho Kim; Samuel G. Maldonado; Na Li; Justin Reich; Neil T. Heffernan

In contrast to typical laboratory experiments, the everyday use of online educational resources by large populations and the prevalence of software infrastructure for A/B testing lead us to consider how platforms can embed in vivo experiments that do not merely support research, but ensure practical improvements to their educational components. Examples are presented of randomized experimental comparisons conducted by subsets of the authors in three widely used online educational platforms -- Khan Academy, edX, and ASSISTments. We suggest design principles for platform technology to support randomized experiments that lead to practical improvements -- enabling Iterative Improvement and Collaborative Work -- and explain the benefit of their implementation by WPI co-authors in the ASSISTments platform.


Human Factors in Computing Systems | 2018

Harvesting Caregiving Knowledge: Design Considerations for Integrating Volunteer Input in Dementia Care

Pin Sym Foong; Shengdong Zhao; Felicia Tan; Joseph Jay Williams

Improving volunteer performance leads to better caregiving in dementia care settings. However, caregiving knowledge systems have been focused on eliciting and sharing expert, primary caregiver knowledge, rather than volunteer-provided knowledge. Through the use of an experience prototype, we explored the content of volunteer caregiver knowledge and identified ways in which such non-expert knowledge can be useful to dementia care. By using lay language, sharing information specific to the client, and collaboratively finding strategies for interaction, volunteers were able to boost the effectiveness of future volunteers. Therapists who reviewed the content affirmed the reliability of volunteer caregiver knowledge and placed value on its recency, its variety, and its ability to help bridge language and professional barriers. We discuss how future systems designed for eliciting and sharing volunteer caregiver knowledge can be used to promote better dementia care.


International Conference on User Modeling, Adaptation and Personalization | 2018

Increasing Response Rates to Email Surveys in MOOCs

Dan Ding; Oleksandra Poquet; Joseph Jay Williams; Radhika Nikam; Samuel Rhys Cox

Email is an important and widely used communication medium. However, email is increasingly unreliable, as people become less likely to respond to the growing influx of information they receive. Low response rates to email become a problem in situations where closing the feedback loop is critical, such as in education, marketing, or research. To investigate ways of increasing email response rates, we designed experiments that manipulated the textual elements of the emails. We conducted the experiments in a MOOC setting, with email surveys sent to over 3,000 learners. The emails were sent to elicit responses as to why learners were not engaging with the course. We found that response rates were significantly increased by varying how closely emails were framed as pertaining to a learner's personal situation, such as by changing the introductory message, and by varying the format in which links to a survey were presented. Our results yield useful implications for educational and marketing contexts.
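A study of this shape boils down to randomizing learners over email variants and comparing response rates. The sketch below is a generic illustration under assumed factor names (the actual manipulations and numbers in the paper differ); it pairs a 2x2 factorial assignment with a simple two-proportion z statistic.

```python
import itertools
import math
import random

# Hypothetical factors, loosely mirroring the manipulations described:
# how personally the email is framed, and how the survey link is presented.
FRAMINGS = ["generic_intro", "personal_intro"]
LINK_FORMATS = ["plain_link", "embedded_question"]
CONDITIONS = list(itertools.product(FRAMINGS, LINK_FORMATS))  # 2x2 factorial

def assign_conditions(learner_ids, seed=42):
    """Randomly assign each learner to one email variant."""
    rng = random.Random(seed)
    return {lid: rng.choice(CONDITIONS) for lid in learner_ids}

def two_proportion_z(responses_a, n_a, responses_b, n_b):
    """z statistic for comparing response rates between two variants."""
    p_a, p_b = responses_a / n_a, responses_b / n_b
    p = (responses_a + responses_b) / (n_a + n_b)
    se = math.sqrt(p * (1 - p) * (1 / n_a + 1 / n_b))
    return (p_a - p_b) / se

assignment = assign_conditions(range(3000))
# e.g. compare response counts between the two framings (counts are made up):
print(round(two_proportion_z(180, 1500, 120, 1500), 2))
```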


Artificial Intelligence in Education | 2018

Combining Difficulty Ranking with Multi-Armed Bandits to Sequence Educational Content

Avi Segal; Yossi Ben David; Joseph Jay Williams; Kobi Gal; Yaar Shalom

We address the problem of how to personalize educational content to students in order to maximize their learning gains over time. We present a new computational approach to this problem called MAPLE (Multi-Armed Bandits based Personalization for Learning Environments) that combines difficulty ranking with multi-armed bandits. Given a set of target questions, MAPLE estimates the expected learning gain for each question and uses an exploration-exploitation strategy to choose the next question to pose to the student. It maintains a personalized ranking over the difficulties of the questions in the target set and updates it in real time according to students' progress. We show in simulations that MAPLE improved students' learning gains compared to approaches that sequence questions in increasing level of difficulty or rely on content experts. When implemented in a live e-learning system in the wild, MAPLE showed promising initial results.
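The following sketch shows the general pattern the abstract describes (a per-student estimate over questions updated online, plus an exploration-exploitation rule for picking the next one), using a simple epsilon-greedy scheme. It is a simplified illustration, not the published MAPLE algorithm.

```python
import random

class DifficultyBanditSequencer:
    """Simplified sketch of the MAPLE idea: maintain per-question estimates of
    expected learning gain and trade off exploration and exploitation when
    choosing the next question. Not the published algorithm."""

    def __init__(self, question_ids, epsilon=0.1):
        self.epsilon = epsilon
        self.estimated_gain = {q: 0.5 for q in question_ids}  # neutral initial estimates
        self.attempts = {q: 0 for q in question_ids}

    def next_question(self):
        if random.random() < self.epsilon:                     # explore
            return random.choice(list(self.estimated_gain))
        return max(self.estimated_gain, key=self.estimated_gain.get)  # exploit

    def update(self, question, correct):
        """Running-average update, so the personalized ranking over questions
        adapts to the student's observed progress."""
        self.attempts[question] += 1
        n = self.attempts[question]
        g = self.estimated_gain[question]
        self.estimated_gain[question] = g + (float(correct) - g) / n

seq = DifficultyBanditSequencer(["q1", "q2", "q3"])
q = seq.next_question()
seq.update(q, correct=True)
```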


Artificial Intelligence in Education | 2018

Bandit Assignment for Educational Experiments: Benefits to Students Versus Statistical Power

Anna N. Rafferty; Huiji Ying; Joseph Jay Williams

Randomized experiments can lead to improvements in educational technologies, but often require many students to experience conditions associated with inferior learning outcomes. Multi-armed bandit (MAB) algorithms can address this by modifying experimental designs to direct more students to more helpful conditions. Using simulations and modeling data from previous educational experiments, we explore the statistical impact of using MABs for experiment design, focusing on the tradeoff between acquiring statistically reliable information and benefits to students. Results suggest that while MAB experiments can improve average benefits for students, at least twice as many participants are needed to attain power of 0.8 and false positives are twice as frequent as expected. Optimistic prior distributions in the MAB algorithm can mitigate the loss in power to some extent, without meaningfully reducing benefits or further increasing false positives.
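For readers unfamiliar with MAB-driven assignment, the sketch below shows a generic Thompson-sampling assignment rule with a Beta prior, where choosing an optimistic prior (alpha greater than beta) keeps under-sampled conditions competitive for longer. It is an illustration of the general technique under assumed parameters, not the specific algorithm or simulation setup evaluated in the paper.

```python
import random

def thompson_assign(successes, failures, prior_alpha=2.0, prior_beta=1.0):
    """Assign the next student to the condition whose sampled success rate is
    highest. An optimistic prior (alpha > beta) slows premature convergence,
    one way to soften the loss of statistical power."""
    samples = {arm: random.betavariate(prior_alpha + successes[arm],
                                       prior_beta + failures[arm])
               for arm in successes}
    return max(samples, key=samples.get)

# Simulated two-condition experiment with (made-up) true success rates 0.6 vs 0.5.
true_rate = {"A": 0.6, "B": 0.5}
successes = {"A": 0, "B": 0}
failures = {"A": 0, "B": 0}
for _ in range(500):
    arm = thompson_assign(successes, failures)
    if random.random() < true_rate[arm]:
        successes[arm] += 1
    else:
        failures[arm] += 1
print(successes, failures)  # allocation drifts toward the better condition
```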

Collaboration


Dive into Joseph Jay Williams's collaborations.

Top Co-Authors

Neil T. Heffernan
Worcester Polytechnic Institute

Justin Reich
Massachusetts Institute of Technology

Cody Austun Coleman
Massachusetts Institute of Technology

Tania Lombrozo
University of California