Korinn Ostrow
Worcester Polytechnic Institute
Publications
Featured research published by Korinn Ostrow.
learning analytics and knowledge | 2016
Korinn Ostrow; Douglas Selent; Yan Wang; Eric Van Inwegen; Neil T. Heffernan; Joseph Jay Williams
Researchers invested in K-12 education struggle not just to enhance pedagogy, curriculum, and student engagement, but also to harness the power of technology in ways that will optimize learning. Online learning platforms offer a powerful environment for educational research at scale. The present work details the creation of an automated system designed to provide researchers with insights regarding data logged from randomized controlled experiments conducted within the ASSISTments TestBed. The Assessment of Learning Infrastructure (ALI) builds upon existing technologies to foster a symbiotic relationship beneficial to students, researchers, the platform and its content, and the learning analytics community. ALI is a sophisticated automated reporting system that provides an overview of sample distributions and basic analyses for researchers to consider when assessing their data. ALI's benefits can also be felt at scale through analyses that crosscut multiple studies to drive iterative platform improvements while promoting personalized learning.
learning at scale | 2015
Korinn Ostrow; Christopher Donnelly; Seth Adjei; Neil T. Heffernan
Student modeling within intelligent tutoring systems is a task largely driven by binary models that predict student knowledge or next problem correctness (i.e., Knowledge Tracing (KT)). However, using a binary construct for student assessment often causes researchers to overlook the feedback innate to these platforms. The present study considers a novel method of tabling an algorithmically determined partial credit score and problem difficulty bin for each student's current problem to predict both binary and partial next problem correctness. This study was conducted using log files from ASSISTments, an adaptive mathematics tutor, from the 2012-2013 school year. The dataset consisted of 338,297 problem logs linked to 15,253 unique student identification numbers. Findings suggest that an efficiently tabled model considering partial credit and problem difficulty performs about as well as KT on binary predictions of next problem correctness. This method provides the groundwork for modifying KT in an attempt to optimize student modeling.
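The tabling idea described in this abstract can be illustrated with a minimal sketch: bin each logged response by its partial-credit score and problem difficulty, store the observed rate of next-problem correctness per cell, and predict by table lookup. This is an assumption-laden toy (bin counts, fallback prior, and the log schema are invented here), not the authors' exact algorithm.

```python
from collections import defaultdict

def bin_index(value, n_bins=5):
    """Map a score in [0, 1] to a discrete bin index."""
    return min(int(value * n_bins), n_bins - 1)

def build_table(logs, n_bins=5):
    """Average next-problem correctness per (partial-credit bin, difficulty bin) cell."""
    sums, counts = defaultdict(float), defaultdict(int)
    for partial_credit, difficulty, next_correct in logs:
        cell = (bin_index(partial_credit, n_bins),
                bin_index(difficulty, n_bins))
        sums[cell] += next_correct
        counts[cell] += 1
    return {cell: sums[cell] / counts[cell] for cell in counts}

def predict(table, partial_credit, difficulty, n_bins=5, default=0.5):
    """Look up the tabled rate; fall back to a prior for unseen cells."""
    cell = (bin_index(partial_credit, n_bins),
            bin_index(difficulty, n_bins))
    return table.get(cell, default)

# Hypothetical log rows: (partial_credit, difficulty, next_correct)
logs = [(1.0, 0.2, 1), (1.0, 0.2, 1), (0.3, 0.8, 0), (0.3, 0.8, 1)]
table = build_table(logs)
```

The appeal of such a model, as the abstract notes, is efficiency: prediction is a constant-time lookup rather than inference over a latent-knowledge model.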
learning at scale | 2015
Joseph Jay Williams; Korinn Ostrow; Xiaolu Xiong; Elena L. Glassman; Juho Kim; Samuel G. Maldonado; Na Li; Justin Reich; Neil T. Heffernan
In contrast to typical laboratory experiments, the everyday use of online educational resources by large populations and the prevalence of software infrastructure for A/B testing leads us to consider how platforms can embed in vivo experiments that do not merely support research, but ensure practical improvements to their educational components. Examples are presented of randomized experimental comparisons conducted by subsets of the authors in three widely used online educational platforms -- Khan Academy, edX, and ASSISTments. We suggest design principles for platform technology to support randomized experiments that lead to practical improvements -- enabling Iterative Improvement and Collaborative Work -- and explain the benefit of their implementation by WPI co-authors in the ASSISTments platform.
artificial intelligence in education | 2015
Korinn Ostrow; Neil T. Heffernan
While adaptive tutoring systems have improved classroom education through individualization, few platforms offer students choice regarding their education. In the present study, a randomized controlled trial is used to investigate the effects of student choice within ASSISTments. A problem set featuring either text feedback or matched content video feedback was assigned to a sample of 82 middle school students. Those who were able to choose their feedback medium at the start of the assignment outperformed those who were randomly assigned a medium. Results suggest that even if feedback is not ultimately observed, students average significantly higher assignment scores after voicing a choice. Findings offer evidence for enhancing intrinsic motivation through the provision of choice within adaptive tutoring systems.
artificial intelligence in education | 2015
Korinn Ostrow; Neil T. Heffernan; Cristina Heffernan; Zoe Peterson
The benefit of interleaving cognitive content has gained attention in recent years, specifically in mathematics education. The present study serves as a conceptual replication of previous work, documenting the interleaving effect within a middle school sample through brief homework assignments completed within ASSISTments, an adaptive tutoring platform. The results of a randomized controlled trial are presented, examining a practice session featuring interleaved or blocked content spanning three skills: Complementary and Supplementary Angles, Surface Area of a Pyramid, and Compound Probability without Replacement. A second homework session served as a delayed posttest. Tutor log files are analyzed to track student performance and to establish a metric of global mathematics skill for each student. Findings suggest that interleaving is beneficial in the context of adaptive tutoring systems when considering learning gains and average hint usage at posttest. These observations were especially relevant for low skill students.
International Journal of STEM Education | 2018
Paul Salvador Inventado; Peter Scupelli; Korinn Ostrow; Neil T. Heffernan; Jaclyn Ocumpaugh; Victoria Almeda; Stefan Slater
Background: Interactive learning environments often provide help strategies to facilitate learning. Hints, for example, help students recall relevant concepts, identify mistakes, and make inferences. However, several studies have shown cases of ineffective help use. Findings from an initial study on the availability of hints in a mathematics problem-solving activity showed that early access to on-demand hints was linked to a lack of performance improvement and longer completion times among students answering problems for summer work. The same experimental methodology was used in the present work with a different student sample population, collected during the academic year, to check for generalizability. Results: Results from the academic year study showed that early access to on-demand hints in an online mathematics assignment significantly improved student performance compared to students with later access to hints, an effect not observed in the summer study. There were no differences in assignment completion time between conditions, whereas a difference had been observed in the summer study and attributed to engagement in off-task activities. Although the summer and academic year studies were internally valid, significantly more students in the academic year study did not complete their assignment. The sample populations differed significantly by student characteristics and external factors, possibly contributing to differences in the findings. Notable contextual factors that differed included prior knowledge, grade level, and assignment deadlines. Conclusions: Contextual differences influence hint effectiveness. This work found varying results when the same experimental methodology was applied to two separate sample populations engaged in different learning settings. Further work is needed, however, to better understand how on-demand hints generalize to other learning contexts.
Despite its limitations, the study shows how randomized controlled trials can be used to better understand the effectiveness of instructional designs applied in online learning systems that cater to thousands of learners across diverse student populations. We hope to encourage additional research that will validate the effectiveness of instructional designs in different learning contexts, paving the way for the development of robust and generalizable designs.
artificial intelligence in education | 2018
Korinn Ostrow; Neil T. Heffernan
Online learning environments allow for the implementation of psychometric scales on diverse samples of students participating in authentic learning tasks. One such scale, the Intrinsic Motivation Inventory (IMI), can be used to inform stakeholders of students’ subjective motivational and regulatory styles. The IMI is a multidimensional scale developed in support of Self-Determination Theory [1, 2, 3], a strongly validated theory stating that motivation and regulation are moderated by three innate needs: autonomy, belonging, and competence. As applied to education, the theory posits that students who perceive volition in a task, those who report stronger connections with peers and teachers, and those who perceive themselves as competent in a task are more likely to internalize the task and excel. ASSISTments, an online mathematics platform, is hosting a series of randomized controlled trials targeting these needs to promote integrated learning. The present work supports these studies by attempting to validate four subscales of the IMI within ASSISTments. Iterative factor analysis and item reduction techniques are used to optimize the reliability of these subscales and limit the obtrusive nature of future data collection efforts. Such scale validation efforts are valuable because student perceptions can serve as powerful covariates in differentiating effective learning interventions.
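One common reliability-driven item-reduction technique of the kind the abstract mentions is greedy item dropping guided by Cronbach's alpha: repeatedly remove the item whose removal most raises the subscale's alpha. The sketch below uses synthetic 7-point responses (not the IMI items), and alpha-based pruning is an illustrative stand-in for the authors' full factor-analytic procedure.

```python
from statistics import variance

def cronbach_alpha(items):
    """items: one response list per item, respondents aligned by index."""
    k = len(items)
    totals = [sum(resp) for resp in zip(*items)]
    return k / (k - 1) * (1 - sum(variance(v) for v in items) / variance(totals))

def reduce_items(items):
    """Greedily drop items while doing so raises alpha (keep >= 2 items)."""
    items = list(items)
    while len(items) > 2:
        best_i, best_alpha = None, cronbach_alpha(items)
        for i in range(len(items)):
            a = cronbach_alpha(items[:i] + items[i + 1:])
            if a > best_alpha:
                best_i, best_alpha = i, a
        if best_i is None:
            break
        items.pop(best_i)
    return items

# Hypothetical 7-point responses from five students; the last item is
# noisy relative to the others and gets dropped by the reduction loop.
items = [
    [7, 6, 5, 2, 1],
    [6, 6, 4, 2, 2],
    [7, 5, 5, 3, 1],
    [1, 7, 2, 6, 3],
]
reduced = reduce_items(items)
```

Shorter, more reliable subscales serve exactly the goal stated above: limiting how obtrusive in-platform data collection is for students.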
Educational Media International | 2017
Patrick McGuire; Shihfen Tu; Mary Ellin Logue; Craig A. Mason; Korinn Ostrow
This study compared the effects of three different feedback formats provided to sixth grade mathematics students within a web-based online learning platform, ASSISTments. A sample of 196 students was randomly assigned to one of three conditions: (1) text-based feedback; (2) image-based feedback; and (3) correctness only feedback. Regardless of condition, students solved a set of problems pertaining to the division of fractions by fractions. This mathematics content was representative of a challenging sixth grade mathematics Common Core State Standard (6.NS.A.1). Students randomly assigned to receive text-based feedback (Condition A) or image-based feedback (Condition B) outperformed those randomly assigned to the correctness only group (Condition C). However, these differences were not statistically significant (F(2,108) = 1.394, p = .25). Results of this study also demonstrated a completion bias. Students randomly assigned to Condition B were less likely to complete the problem set than those assigned to Conditions A and C. To conclude, we discuss the counterintuitive findings observed in this study and implications related to developing and implementing feedback in online learning environments for middle school mathematics.
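The reported statistic, F(2,108) = 1.394, comes from a one-way ANOVA across the three feedback conditions. As a reminder of how that test statistic is formed (mean square between groups over mean square within groups), here is a self-contained sketch on invented score data, not the study's data:

```python
def one_way_anova_F(groups):
    """Return (F, df_between, df_within) for a one-way ANOVA."""
    k = len(groups)
    n = sum(len(g) for g in groups)
    grand_mean = sum(sum(g) for g in groups) / n
    ss_between = sum(len(g) * ((sum(g) / len(g)) - grand_mean) ** 2
                     for g in groups)
    ss_within = sum(sum((x - sum(g) / len(g)) ** 2 for x in g)
                    for g in groups)
    df_between, df_within = k - 1, n - k
    F = (ss_between / df_between) / (ss_within / df_within)
    return F, df_between, df_within

# Hypothetical assignment scores for three feedback conditions:
text_fb = [0.82, 0.74, 0.90, 0.65]
image_fb = [0.78, 0.88, 0.71, 0.80]
correct_only = [0.60, 0.72, 0.55, 0.68]
F, dfb, dfw = one_way_anova_F([text_fb, image_fb, correct_only])
```

The degrees of freedom (2, 108) in the paper imply roughly 111 students contributed complete outcome data to the analysis, consistent with the completion bias the authors report.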
learning at scale | 2016
Yan Wang; Korinn Ostrow; Seth Adjei; Neil T. Heffernan
Detailed performance data can be exploited to achieve stronger student models when predicting next problem correctness (NPC) within intelligent tutoring systems. However, the availability and importance of these details may differ significantly when considering opportunity count (OC), or the compounded sequence of problems a student experiences within a skill. Inspired by this intuition, the present study introduces the Opportunity Count Model (OCM), a unique approach to student modeling in which separate models are built for differing OCs rather than creating a blanket model that encompasses all OCs. We use Random Forest (RF), which can be used to indicate feature importance, to construct the OCM by considering detailed performance data within tutor log files. Results suggest that OC is significant when modeling student performance and that detailed performance data varies across OCs.
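The core move of the Opportunity Count Model is partitioning: instead of one blanket predictor over all practice opportunities, fit a separate model per opportunity count. The sketch below uses a trivial mean-correctness "model" per partition as a stand-in for the Random Forests used in the paper, with an invented log schema:

```python
from collections import defaultdict

def fit_ocm(logs):
    """One model per opportunity count; here each 'model' is just the mean
    next-problem correctness at that OC (a stand-in for Random Forest)."""
    by_oc = defaultdict(list)
    for oc, next_correct in logs:
        by_oc[oc].append(next_correct)
    return {oc: sum(v) / len(v) for oc, v in by_oc.items()}

def fit_blanket(logs):
    """A single model over all opportunity counts, for contrast."""
    vals = [c for _, c in logs]
    return sum(vals) / len(vals)

# Hypothetical logs: (opportunity_count, next_correct). Correctness
# climbs with practice, which a single blanket model averages away.
logs = [(1, 0), (1, 0), (1, 1), (2, 0), (2, 1), (3, 1), (3, 1)]
ocm = fit_ocm(logs)
blanket = fit_blanket(logs)
```

Swapping the per-partition mean for a per-partition Random Forest over detailed performance features recovers the structure the abstract describes, including per-OC feature importances.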
learning at scale | 2016
Korinn Ostrow; Neil T. Heffernan
An interactive demonstration of how to design and implement randomized controlled experiments at scale within the ASSISTments TestBed, a new collaborative for educational research funded by the National Science Foundation (NSF). The Assessment of Learning Infrastructure (ALI), a unique data retrieval and analysis tool, is also demonstrated.