Network


Latest external collaboration at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where John C. Stamper is active.

Publication


Featured research published by John C. Stamper.


intelligent tutoring systems | 2008

Toward Automatic Hint Generation for Logic Proof Tutoring Using Historical Student Data

Tiffany Barnes; John C. Stamper

We have proposed a novel application of Markov decision processes (MDPs), a reinforcement learning technique, to automatically generate hints for an intelligent tutor that learns. We demonstrate the feasibility of this approach by extracting MDPs from four semesters of student solutions in a logic proof tutor, and calculating the probability that we will be able to generate hints at any point in a given problem. Our results indicate that extracted MDPs and our proposed hint-generating functions will be able to provide hints over 80% of the time. Our results also indicate that we can provide valuable tradeoffs between hint specificity and the amount of data used to create an MDP.
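The MDP extraction described above can be sketched roughly as follows. This is a minimal illustration of the idea, not the paper's implementation: observed student solution paths become MDP states and actions, the goal state is rewarded, value iteration scores every state, and the hint from any state is the action leading to the highest-valued successor. All state names, action names, and reward values below are invented.

```python
from collections import defaultdict

# Simplified, deterministic sketch of Hint Factory-style hint generation.
# Illustrative reward/discount values; not taken from the paper.
GOAL_REWARD, STEP_COST, GAMMA = 100.0, -1.0, 0.9

# Observed solution paths: lists of (state, action, next_state) triples.
paths = [
    [("p1", "modus_ponens", "p2"), ("p2", "simplify", "GOAL")],
    [("p1", "conjunction", "p3"), ("p3", "modus_ponens", "GOAL")],
    [("p1", "modus_ponens", "p2"), ("p2", "detour", "p4"),
     ("p4", "simplify", "GOAL")],
]

# Record which (action, next_state) pairs were observed from each state.
counts = defaultdict(lambda: defaultdict(int))
for path in paths:
    for s, a, s2 in path:
        counts[s][(a, s2)] += 1

value = defaultdict(float)
value["GOAL"] = GOAL_REWARD
for _ in range(50):  # value iteration over the extracted state graph
    for s, acts in counts.items():
        value[s] = max(STEP_COST + GAMMA * value[s2] for (a, s2) in acts)

def hint(state):
    """Suggest the action leading to the highest-valued successor,
    or None when the state was never observed (no hint available)."""
    if state not in counts:
        return None
    return max(counts[state], key=lambda pair: value[pair[1]])[0]
```

A state unseen in the historical data yields no hint, which is why the paper measures how often hints can be provided (over 80% in their data) rather than assuming full coverage.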


artificial intelligence in education | 2011

Experimental Evaluation of Automatic Hint Generation for a Logic Tutor

John C. Stamper; Michael Eagle; Tiffany Barnes; Marvin J. Croy

We have augmented the Deep Thought logic tutor with a Hint Factory that generates data-driven, context-specific hints for an existing computer aided instructional tool. We investigate the impact of the Hint Factory's automatically generated hints on educational outcomes in a switching replications experiment that shows that hints help students persist in a deductive logic proof tutor. Three instructors taught two semester-long courses, each teaching one semester using a logic tutor with hints, and one semester using the tutor without hints, controlling for the impact of different instructors on course outcomes. Our results show that students in the courses using a logic tutor augmented with automatically generated hints attempted and completed significantly more logic proof problems, were less likely to abandon the tutor, performed significantly better on a post-test implemented within the tutor, and achieved higher grades in the course.


AI Magazine | 2013

New Potentials for Data-Driven Intelligent Tutoring System Development and Optimization

Kenneth R. Koedinger; Emma Brunskill; Ryan S. Baker; Elizabeth A. McLaughlin; John C. Stamper

Increasing widespread use of educational technologies is producing vast amounts of data. Such data can be used to help advance our understanding of student learning and enable more intelligent, interactive, engaging, and effective education. In this article, we discuss the status and prospects of this new and powerful opportunity for data-driven development and optimization of educational technologies, focusing on intelligent tutoring systems. We provide examples of the use of a variety of techniques to develop or optimize the select, evaluate, suggest, and update functions of intelligent tutors, including probabilistic grammar learning, rule induction, Markov decision processes, classification, and integrations of symbolic search and statistical inference.


artificial intelligence in education | 2013

Using Data-Driven Discovery of Better Student Models to Improve Student Learning

Kenneth R. Koedinger; John C. Stamper; Elizabeth A. McLaughlin; Tristan Nixon

Deep analysis of domain content yields novel insights and can be used to produce better courses. Aspects of such analysis can be performed by applying AI and statistical algorithms to student data collected from educational technology and better cognitive models can be discovered and empirically validated in terms of more accurate predictions of student learning. However, can such improved models yield improved student learning? This paper reports positively on progress in closing this loop. We demonstrate that a tutor unit, redesigned based on data-driven cognitive model improvements, helped students reach mastery more efficiently. In particular, it produced better learning on the problem-decomposition planning skills that were the focus of the cognitive model improvements.


intelligent tutoring systems | 2012

Program representation for automatic hint generation for a data-driven novice programming tutor

Wei Jin; Tiffany Barnes; John C. Stamper; Michael Eagle; Matthew W. Johnson; Lorrie Lehmann

We describe a new technique to represent, classify, and use programs written by novices as a base for automatic hint generation for programming tutors. The proposed linkage graph representation is used to record and reuse student work as a domain model, and we use an overlay comparison to compare in-progress work with complete solutions in a twist on the classic approach to hint generation. Hint annotation is a time-consuming component of developing intelligent tutoring systems. Our approach uses educational data mining and machine learning techniques to automate the creation of a domain model and hints from student problem-solving data. We evaluate the approach with a sample of partial and complete novice programs and show that our algorithms can be used to generate hints over 80 percent of the time. This promising rate shows that the approach has potential to be a source for automatically generated hints for novice programmers.
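The overlay comparison described in the abstract can be illustrated with a toy sketch. Here each program is reduced to a set of normalized statements (a crude stand-in for the paper's linkage-graph nodes), the stored complete solution that best contains the in-progress work is selected, and one statement that solution has but the student does not becomes the hint. The statement strings and the scoring rule are invented for illustration.

```python
# Toy overlay-style hint selection; not the paper's actual algorithm.
def overlay_hint(partial, solutions):
    """Return a statement from the closest complete solution that the
    partial program is missing, or None if nothing is missing."""
    best, best_score = None, -1.0
    for sol in solutions:
        score = len(partial & sol) / len(sol)  # fraction of solution covered
        if score > best_score:
            best, best_score = sol, score
    missing = best - partial if best else set()
    return min(missing) if missing else None  # deterministic pick

# Two invented complete solutions to "sum 1..n", and one partial attempt.
solutions = [
    {"read n", "total = 0", "loop i in 1..n", "total += i", "print total"},
    {"read n", "print n * (n + 1) // 2"},
]
partial = {"read n", "total = 0", "loop i in 1..n"}
```

As with the MDP approach above, hint availability depends on having a stored solution close enough to the student's partial work, which is what the 80 percent figure in the abstract measures.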


artificial intelligence in education | 2011

Human-machine student model discovery and improvement using DataShop

John C. Stamper; Kenneth R. Koedinger

We show how data visualization and modeling tools can be used with human input to improve student models. We present strategies for discovering potential flaws in existing student models and use them to identify improvements in a Geometry model. A key discovery was that the student model should distinguish problem steps requiring problem decomposition planning and execution from problem steps requiring just execution of problem decomposition plans. This change to the student model better fits student data not only in the original data set, but also in two other data sets from different sets of students. We also show how such student model changes can be used to modify a tutoring system, not only in terms of the usual student model effects on the tutor's problem selection, but also in driving the creation of new problems and hint messages.


integrating technology into computer science education | 2013

Towards improving programming habits to create better computer science course outcomes

Jaime Spacco; Davide Fossati; John C. Stamper; Kelly Rivers

We examine a large dataset collected by the Marmoset system in a CS2 course. The dataset gives us a richly detailed portrait of student behavior because it combines automatically collected program snapshots with unit tests that can evaluate the correctness of all snapshots. We find that students who start earlier tend to earn better scores, which is consistent with the findings of other researchers. We also detail the overall work habits exhibited by students. Finally, we evaluate how students use release tokens, a novel mechanism that provides feedback to students without giving away the code for the test cases used for grading, and gives students an incentive to start coding earlier. We find that students seem to use their tokens quite effectively to acquire feedback and improve their project score, though we do not find much evidence suggesting that students start coding particularly early.


learning analytics and knowledge | 2012

Educational data mining meets learning analytics

Ryan S. Baker; Erik Duval; John C. Stamper; David Wiley; Simon Buckingham Shum

This panel is proposed as a means of promoting mutual learning and continued dialogue between the Educational Data Mining and Learning Analytics communities. EDM has been developing as a community for longer than the LAK conference, so what, if anything, makes the LAK community different, and where is the common ground?


intelligent tutoring systems | 2010

PSLC datashop: a data analysis service for the learning science community

John C. Stamper; Kenneth R. Koedinger; Ryan S. Baker; Alida Skogsholm; Brett Leber; Jim Rankin; Sandy Demi

The Pittsburgh Science of Learning Center's DataShop is an open data repository and set of associated visualization and analysis tools. DataShop has data from thousands of students deriving from interactions with on-line course materials and intelligent tutoring systems. The data is fine-grained, with student actions recorded roughly every 20 seconds, and it is longitudinal, spanning semester- or year-long courses. Currently over 188 datasets are stored, including over 42 million student actions and over 150,000 student hours of data. Most student actions are “coded,” meaning they are not only graded as correct or incorrect, but are also categorized in terms of the hypothesized competencies or knowledge components needed to perform that action.


Topics in Cognitive Science | 2013

LearnLab's DataShop: A Data Repository and Analytics Tool Set for Cognitive Science

Kenneth R. Koedinger; John C. Stamper; Brett Leber; Alida Skogsholm

In “What Should Be the Data Sharing Policy of Cognitive Science?” Pitt and Tang (2013) make the case for an open data-sharing policy in Cognitive Science and highlight the use of online data repositories to store and share raw research data. One such data repository is the LearnLab DataShop (http://pslcdatashop.org) hosted at Carnegie Mellon University. DataShop is part of LearnLab, an NSF-funded Science of Learning Center started in 2004. DataShop is a major resource for researchers in educational data mining and the learning sciences, including the educational arm of Cognitive Science. DataShop is both an open repository of learning data and a web application for performing exploratory analyses on those data. DataShop specializes in data on the interaction between students and educational software, including online courses, intelligent tutoring systems, virtual labs, online assessment systems, collaborative learning environments, and simulations. As of March 2013, DataShop offers 385 datasets under 116 projects. Across these data sets, there are 97 million software-student transactions, representing over 238,000 student hours. A key feature relevant to the Cognitive Science community is DataShop’s set of tools for exploring cognitive models both visually and statistically. In DataShop, a cognitive model is a mapping between hypothesized “knowledge components”—a more general term for skill, concept, schema, production rule, misconception, or facet—and steps in the procedural completion of an online activity. A researcher can define a hypothesized model in a spreadsheet and upload it to DataShop, where it becomes available for analyses. Visual analyses include learning curves and an error report, while statistical analyses include a logistic regression model that describes how well alternative cognitive models predict student learning.
DataShop has been valuable to both primary and secondary researchers in the learning sciences, fueling over 100 secondary analysis studies and associated papers.
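The logistic regression mentioned in the abstract relates a student's chance of getting a step right to the knowledge components (KCs) hypothesized for that step and the student's prior practice on them. The sketch below is a minimal version of that kind of model, in the spirit of additive-factors-style learning-curve analysis; every parameter value and KC name is made up for illustration.

```python
import math

def p_correct(student_theta, kcs, beta, gamma, practice):
    """Probability a student answers a step correctly under a simple
    additive logistic model.

    student_theta : proficiency of this student
    kcs           : knowledge components hypothesized for the step
    beta[k]       : easiness of KC k
    gamma[k]      : learning rate of KC k
    practice[k]   : prior practice opportunities on KC k
    """
    logit = student_theta
    logit += sum(beta[k] + gamma[k] * practice[k] for k in kcs)
    return 1.0 / (1.0 + math.exp(-logit))

# Invented parameters: "decompose" is harder but learned faster.
beta = {"decompose": -1.0, "execute": 0.5}
gamma = {"decompose": 0.3, "execute": 0.1}
practice = {"decompose": 4, "execute": 10}
```

Comparing how well alternative KC-to-step mappings fit observed correctness data under a model of this shape is how one cognitive model can be judged better than another, which is the comparison DataShop's statistical tools support.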

Collaboration


Dive into John C. Stamper's collaboration.

Top Co-Authors

Michael Eagle
Carnegie Mellon University

Tiffany Barnes
North Carolina State University

Nicholas Diana
Carnegie Mellon University

Ryan S. Baker
University of Pennsylvania

Marvin J. Croy
University of North Carolina at Charlotte

Alida Skogsholm
Carnegie Mellon University