Publications

Featured research published by Michael Eagle.


Learning Analytics and Knowledge | 2015

Exploring networks of problem-solving interactions

Michael Eagle; Drew Hicks; Barry W. Peddycord; Tiffany Barnes

Intelligent tutoring systems and other computer-aided learning environments produce large amounts of transactional data on student problem-solving behavior. In previous work, we modeled the student-tutor interaction data as a complex network and successfully generated automated next-step hints as well as visualizations for educators. In this work we discuss the types of tutoring environments that are best modeled by interaction networks, and how empirical observations of problem solving result in common network features. We find that interaction networks exhibit the properties of scale-free networks, such as vertex degree distributions that follow a power law. We compare data from two versions of a propositional logic tutor, as well as two different representations of data from an educational game on programming. We find that statistics such as degree assortativity and the scale-free metric allow comparison of network structures across domains and provide insight into student problem-solving behavior.
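The network statistics named here are straightforward to compute once logs are modeled as a graph. Below is a minimal sketch using networkx, assuming a hypothetical list of (student, state, next-state) transitions; the paper's actual data format and exact metrics may differ.

```python
# Minimal sketch: modeling problem-solving logs as an interaction network
# and computing degree assortativity. The log format is hypothetical.
import networkx as nx

logs = [
    ("s1", "start", "premise1"), ("s1", "premise1", "goal"),
    ("s2", "start", "premise1"), ("s2", "premise1", "error"),
    ("s2", "error", "premise1"), ("s3", "start", "goal"),
]

G = nx.DiGraph()
for _student, src, dst in logs:
    # Edge weight counts how many times any student made this transition.
    if G.has_edge(src, dst):
        G[src][dst]["weight"] += 1
    else:
        G.add_edge(src, dst, weight=1)

# Degree assortativity: do high-degree states link to other high-degree
# states (r > 0) or to low-degree ones (r < 0)?
r = nx.degree_assortativity_coefficient(G)
print(f"degree assortativity: {r:.3f}")

# Rough scale-free check: inspect the degree sequence; a proper power-law
# fit would use a dedicated package such as `powerlaw`.
print("degree sequence:", sorted((d for _, d in G.degree()), reverse=True))
```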


Learning Analytics and Knowledge | 2015

Towards data-driven mastery learning

Behrooz Mostafavi; Michael Eagle; Tiffany Barnes

We have developed a novel data-driven mastery learning system to improve learning in complex procedural problem-solving domains. This new system was integrated into an existing logic proof tool and assigned as homework in a deductive logic course. Student performance and dropout were compared across three systems: the Deep Thought logic tutor, Deep Thought with integrated hints, and Deep Thought with our data-driven mastery learning system. Results show that the data-driven mastery learning system increases mastery of target tutor actions, improves tutor scores, and lowers the rate of tutor dropout relative to Deep Thought, with or without provided hints.


International Learning Analytics and Knowledge Conference | 2017

An instructor dashboard for real-time analytics in interactive programming assignments

Nicholas Diana; Michael Eagle; John C. Stamper; Shuchi Grover; Marie A. Bienkowski; Satabdi Basu

Many introductory programming environments generate large amounts of log data, but making insights from these data accessible to instructors remains a challenge. This research demonstrates that student outcomes can be accurately predicted from student program states at various time points throughout the course, and it integrates the resulting predictive models into an instructor dashboard. The effectiveness of the dashboard is evaluated by measuring how well its analytics correctly suggest that the instructor help the students classified as most in need. Finally, we describe a method of matching low-performing students with high-performing peer tutors, and show that including peer tutors increases not only the amount of help given but also the consistency of help availability.
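As a rough illustration of what such a predictive backend could look like, the sketch below fits a classifier to snapshot features of student program state and flags the students with the lowest predicted success probability. The features, labels, and model choice are assumptions, not the paper's actual pipeline.

```python
# Sketch: flagging at-risk students from a mid-assignment snapshot.
# Features, labels, and model choice are hypothetical.
import numpy as np
from sklearn.linear_model import LogisticRegression

# Per-student features at a fixed time point, e.g. minute 10:
# [blocks placed, run attempts, errors so far]
X = np.array([[12, 3, 0], [2, 1, 4], [20, 5, 1], [1, 0, 6], [15, 4, 2]])
y = np.array([1, 0, 1, 0, 1])  # 1 = eventually completed the assignment

clf = LogisticRegression().fit(X, y)

# The dashboard would surface the students least likely to succeed.
p_success = clf.predict_proba(X)[:, 1]
print("suggest helping students:", np.argsort(p_success)[:2])
```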


Intelligent Tutoring Systems | 2014

Survival Analysis on Duration Data in Intelligent Tutors

Michael Eagle; Tiffany Barnes

Effects such as student dropout and the non-normal distribution of duration data confound the exploration of tutor efficiency (time-in-tutor vs. tutor performance) in intelligent tutors. We use an accelerated failure time (AFT) model to analyze the effects of using automatically generated hints in Deep Thought, a propositional logic tutor. AFT is a branch of survival analysis, a statistical technique designed for measuring time-to-event data while accounting for participant attrition. We found that students provided with automatically generated hints were able to complete the tutor in about half the time taken by students who were not provided hints. We compare the results of survival analysis with a standard between-groups mean comparison and show how failing to take student dropout into account could lead to incorrect conclusions. We demonstrate that survival analysis is applicable to duration data collected from intelligent tutors and is particularly useful when a study experiences participant attrition.
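For readers unfamiliar with AFT models, here is a minimal sketch fitting a Weibull AFT model with the lifelines library; the column names and data are illustrative, not the study's.

```python
# Sketch: an accelerated failure time (AFT) analysis of tutor completion
# times. Column names and values are hypothetical.
import pandas as pd
from lifelines import WeibullAFTFitter

df = pd.DataFrame({
    "minutes":   [30, 45, 120, 60, 90, 150, 40, 80],  # time in tutor
    "completed": [1,  1,  0,   1,  1,  0,   1,  1],   # 0 = dropped out (censored)
    "hints":     [1,  1,  0,   1,  0,  0,   1,  0],   # 1 = automated hints provided
})

aft = WeibullAFTFitter()
aft.fit(df, duration_col="minutes", event_col="completed")
aft.print_summary()

# exp(coef) for `hints` is the acceleration factor: a value near 0.5 would
# mean hinted students finish in roughly half the time, as the paper found.
```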


Intelligent Tutoring Systems | 2016

Estimating Individual Differences for Student Modeling in Intelligent Tutors from Reading and Pretest Data

Michael Eagle; Albert T. Corbett; John C. Stamper; Bruce M. McLaren; Angela Z. Wagner; Benjamin A. MacLaren; Aaron P. Mitchell

Past studies have shown that Bayesian Knowledge Tracing (BKT) can predict student performance and implement Cognitive Mastery successfully. Standard BKT individualizes parameter estimates for skills, also referred to as knowledge components (KCs), but not for students. Studies deriving individual student parameters from the data logs of student tutor performance have shown improvements over standard BKT model fits and result in different practice recommendations for students. This study investigates whether individual student parameters, specifically individual difference weights (IDWs) [1], can be derived from student activities prior to tutor use. We find that student performance measures in reading instructional text and in a conceptual knowledge pretest can be employed to predict IDWs. Further, we find that a model incorporating these predicted IDWs performs well, in terms of model fit and learning efficiency, when compared to a standard BKT model and a model with best-fitting IDWs derived from tutor performance.
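To make the individualization concrete: in standard BKT the knowledge estimate is updated after each response, and an IDW can scale a parameter per student. The sketch below applies a multiplicative weight to the learn rate in odds space; this is an illustrative simplification, as the paper's IDW formulation follows its reference [1] and may differ.

```python
# Sketch: standard BKT update with a per-student individual difference
# weight (IDW) scaling the learn rate. Parameter values are hypothetical.

def weighted(p, w):
    """Apply a multiplicative weight to probability p in odds space."""
    odds = w * p / (1 - p)
    return odds / (1 + odds)

def bkt_update(p_know, correct, slip=0.1, guess=0.2, learn=0.15, idw=1.0):
    # Posterior probability the skill was already known, given the response.
    if correct:
        num = p_know * (1 - slip)
        den = num + (1 - p_know) * guess
    else:
        num = p_know * slip
        den = num + (1 - p_know) * (1 - guess)
    posterior = num / den
    # Transition: chance of learning at this opportunity, with the
    # student's IDW scaling the learn rate.
    return posterior + (1 - posterior) * weighted(learn, idw)

p = 0.3  # prior P(L0)
for obs in [1, 1, 0, 1, 1]:
    p = bkt_update(p, obs, idw=1.4)  # idw > 1: faster-than-average learner
    print(f"P(known) = {p:.3f}")
```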


ACM Transactions on Computing Education | 2017

A Framework for Using Hypothesis-Driven Approaches to Support Data-Driven Learning Analytics in Measuring Computational Thinking in Block-Based Programming Environments

Shuchi Grover; Satabdi Basu; Marie A. Bienkowski; Michael Eagle; Nicholas Diana; John C. Stamper

Systematic endeavors to take computer science (CS) and computational thinking (CT) to scale in middle and high school classrooms are underway, with curricula that emphasize the enactment of authentic CT skills, especially in the context of programming in block-based programming environments. There is, therefore, a growing need to measure students’ learning of CT in the context of programming and also to support all learners through this process of learning computational problem solving. The goal of this research is to explore hypothesis-driven approaches that can be combined with data-driven ones to better interpret student actions and processes in the log data captured from block-based programming environments, in order to measure and assess students’ CT skills. Informed by past literature and based on our empirical work examining a dataset from the use of the Fairy Assessment in the Alice programming environment in middle schools, we present a framework that formalizes a process in which a hypothesis-driven approach informed by Evidence-Centered Design effectively complements data-driven learning analytics in interpreting students’ programming process and assessing CT in block-based programming environments. We apply the framework to the design of Alice tasks for high school CS to be used for measuring CT during programming.


Annual Symposium on Computer-Human Interaction in Play | 2015

Measuring Implicit Science Learning with Networks of Player-Game Interactions

Michael Eagle; Elizabeth Rowe; Drew Hicks; Rebecca Brown; Tiffany Barnes; Jodi Asbell-Clarke; Teon Edwards

Visualizing player behavior in complex problem-solving tasks such as games is important both for assessing learning and for the design of content. We collected data from 195 high school students playing an optics puzzle game, Quantum Spectre, and modeled their game play as an interaction network, examining errors hypothesized to be related to a lack of implicit understanding of the science concepts embedded in the game. We found that the networks were useful for visualizing student behavior, identifying areas of student misconception, and locating regions of the network where students become stuck. Preliminary regression analyses show a negative relationship between the science misconceptions identified during gameplay and implicit science learning.


Learning Analytics and Knowledge | 2018

Data-driven generation of rubric criteria from an educational programming environment

Nicholas Diana; Michael Eagle; John C. Stamper; Shuchi Grover; Marie A. Bienkowski; Satabdi Basu

We demonstrate that, by using a small set of hand-graded student work, we can automatically generate rubric criteria with a high degree of validity, and that a predictive model incorporating these rubric criteria is more accurate than a previously reported model. We present this method as one approach to the often challenging problem of grading assignments in programming environments. A classic solution is creating unit tests that the student-generated program must pass, but the rigid, structured nature of unit tests is suboptimal for assessing the more open-ended assignments students encounter in introductory programming environments like Alice. Furthermore, creating unit tests requires predicting the various ways a student might correctly solve a problem, a challenging and time-intensive process. The current study proposes an alternative, semi-automated method for generating rubric criteria using low-level data from the Alice programming environment.
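One plausible shape for the semi-automated step, sketched below, is to treat low-level code features as candidate criteria and keep those that correlate with the hand-graded scores. The features and the selection rule here are assumptions, not the paper's procedure.

```python
# Sketch: selecting candidate rubric criteria as code features that
# correlate with hand-graded scores. Features and data are hypothetical.
import numpy as np

# Rows = student programs; columns = candidate low-level binary features
# (e.g. "uses a loop", "handles the second fairy", "calls say()").
features = np.array([
    [1, 1, 0],
    [1, 0, 0],
    [1, 1, 1],
    [0, 0, 0],
    [1, 1, 1],
])
grades = np.array([3, 2, 5, 1, 4])  # hand-graded scores for the same work

# Keep features whose correlation with the grade is high: these become
# candidate rubric criteria for a human to review and name.
for j in range(features.shape[1]):
    r = np.corrcoef(features[:, j], grades)[0, 1]
    flag = "-> candidate criterion" if r > 0.5 else ""
    print(f"feature {j}: r = {r:.2f} {flag}")
```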


Artificial Intelligence in Education | 2015

Exploring Missing Behaviors with Region-Level Interaction Network Coverage

Michael Eagle; Tiffany Barnes

We have used a complex network model of student-tutor interactions to derive high-level approaches to problem solving. We have also used interaction networks to evaluate between-group differences in student approaches, as well as to automatically produce both next-step and high-level hints. Students do not visit vertices within the networks uniformly; students from different experimental groups are expected to have different patterns of network exploration. In this work we explore the possibility of using frequency estimation to uncover locations in the network with differing amounts of student saturation. Identifying these regions can locate the specific problem approaches and strategies that would be most improved by additional student data, and can provide a measure of confidence when comparing across networks or between groups.
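A common frequency-estimation tool for this kind of saturation question is the Good-Turing unseen-mass estimate N1/N (states visited exactly once over total visits); whether the paper uses exactly this estimator is an assumption. A minimal sketch:

```python
# Sketch: Good-Turing-style coverage per interaction-network region.
# Visit counts are hypothetical.

def coverage(visit_counts):
    """Estimated probability that the next observed behavior in a region
    is one already seen: 1 - N1/N, with N1 = states visited exactly once."""
    n = sum(visit_counts)
    n1 = sum(1 for c in visit_counts if c == 1)
    return 1.0 - n1 / n if n else 0.0

regions = {
    "common strategy": [40, 22, 17, 9, 1],  # well-saturated region
    "rare approach": [2, 1, 1, 1, 1],       # mostly singleton states
}
for name, counts in regions.items():
    print(f"{name}: estimated coverage = {coverage(counts):.2f}")
```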


Intelligent Tutoring Systems | 2014

Modeling Student Dropout in Tutoring Systems

Michael Eagle; Tiffany Barnes

Intelligent tutors have been shown to be almost as effective as human tutors in supporting learning in many domains. However, the construction of intelligent tutors can be costly. One way to address this problem is to use previously collected data to generate models that provide intelligent feedback to otherwise non-personalized tutors. In this work, we explore how we can use previously collected data to build models of student dropout over time; we define dropout as ceasing to interact with the tutor before completing all required tasks. We use survival analysis, a statistical method for measuring time-to-event data, to model how long we can expect students to interact with a tutor. Future work will explore ways to use these models to provide personalized feedback, with the goal of preventing students from dropping out.
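The abstract names survival analysis generally; the sketch below uses the Kaplan-Meier estimator from lifelines, its most basic form, on hypothetical dropout data.

```python
# Sketch: a Kaplan-Meier estimate of how long students keep interacting
# with a tutor before dropping out. Data are hypothetical.
from lifelines import KaplanMeierFitter

# Durations: number of problems attempted before leaving the tutor.
durations = [3, 5, 5, 8, 10, 12, 12, 15, 20, 20]
# 1 = dropped out before finishing; 0 = completed all tasks (censored).
dropped = [1, 1, 0, 1, 1, 0, 1, 0, 0, 0]

kmf = KaplanMeierFitter()
kmf.fit(durations, event_observed=dropped, label="time to dropout")
print(kmf.survival_function_)  # P(still active) after each duration
print("median time to dropout:", kmf.median_survival_time_)
```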

Collaboration


Dive into Michael Eagle's collaborations.

Top Co-Authors

Tiffany Barnes (North Carolina State University)
John C. Stamper (Carnegie Mellon University)
Nicholas Diana (Carnegie Mellon University)
Rebecca Brown (North Carolina State University)
Matthew W. Johnson (University of North Carolina at Charlotte)
Ryan S. Baker (University of Pennsylvania)
Albert T. Corbett (Carnegie Mellon University)