Chris Piech
Stanford University
Publication
Featured research published by Chris Piech.
Learning Analytics and Knowledge | 2013
René F. Kizilcec; Chris Piech; Emily Schneider
As MOOCs grow in popularity, the relatively low completion rates of learners have been a central criticism. This focus on completion rates, however, reflects a monolithic view of disengagement that does not allow MOOC designers to target interventions or develop adaptive course features for particular subpopulations of learners. To address this, we present a simple, scalable, and informative classification method that identifies a small number of longitudinal engagement trajectories in MOOCs. Learners are classified based on their patterns of interaction with video lectures and assessments, the primary features of most MOOCs to date. In an analysis of three computer science MOOCs, the classifier consistently identifies four prototypical trajectories of engagement. The most notable of these is the group of learners who stay engaged through the course without taking assessments. These trajectories are also a useful framework for the comparison of learner engagement between different course structures or instructional approaches. We compare learners in each trajectory and course across demographics, forum participation, video access, and reports of overall experience. These results inform a discussion of future interventions, research, and design directions for MOOCs. Potential improvements to the classification mechanism are also discussed, including the introduction of more fine-grained analytics.
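The classification method described above can be sketched as plain k-means over per-period engagement labels. The label encoding (3=on-track, 2=behind, 1=auditing, 0=out) and the deterministic initialization below are illustrative assumptions, not the paper's exact procedure:

```python
def cluster_trajectories(rows, k=4, iters=20):
    """Cluster learners' label sequences (one list of per-period engagement
    codes per learner) into k prototypical trajectories with plain k-means."""
    # Deterministic init: the first k distinct label vectors seen.
    centers = []
    for r in rows:
        if list(r) not in centers:
            centers.append(list(r))
        if len(centers) == k:
            break
    if len(centers) < k:
        raise ValueError("need at least k distinct trajectories")
    for _ in range(iters):
        # Assign each learner to the nearest center (squared Euclidean distance).
        assign = [
            min(range(k), key=lambda j: sum((a - b) ** 2 for a, b in zip(r, centers[j])))
            for r in rows
        ]
        # Move each center to the mean of its assigned trajectories.
        for j in range(k):
            members = [r for r, a in zip(rows, assign) if a == j]
            if members:
                centers[j] = [sum(col) / len(members) for col in zip(*members)]
    return assign
```

With learners following four clean patterns, e.g. always-on-track `[3,3,3,3]`, auditing `[1,1,1,1]`, disengaging `[3,2,1,0]`, and out `[0,0,0,0]`, the function recovers one cluster per pattern.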
Technical Symposium on Computer Science Education | 2012
Chris Piech; Mehran Sahami; Daphne Koller; Stephen Cooper; Paulo Blikstein
Despite the potential wealth of educational indicators expressed in a student's approach to homework assignments, how students arrive at their final solution is largely overlooked in university courses. In this paper we present a methodology that uses machine learning techniques to autonomously create a graphical model of how students in an introductory programming course progress through a homework assignment. We subsequently show that this model is predictive of which students will struggle with material presented later in the class.
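A minimal version of such a graphical model is a first-order Markov chain over snapshot states, with transition probabilities estimated by counting. The state names and the maximum-likelihood counting below are illustrative assumptions; the paper's actual states are discovered by machine learning over code snapshots:

```python
from collections import Counter, defaultdict

def transition_probs(state_sequences):
    """Estimate P(next state | current state) from per-student sequences of
    snapshot states by maximum-likelihood counting of observed transitions."""
    counts = defaultdict(Counter)
    for seq in state_sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    # Normalize each state's outgoing counts into a probability distribution.
    return {
        cur: {nxt: n / sum(c.values()) for nxt, n in c.items()}
        for cur, c in counts.items()
    }
```

For example, `transition_probs([["start", "loop_bug", "fixed"], ["start", "fixed"]])` estimates that half the students go straight from `start` to `fixed` and half pass through `loop_bug` first.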
The Journal of the Learning Sciences | 2014
Paulo Blikstein; Marcelo Worsley; Chris Piech; Mehran Sahami; Steven Cooper; Daphne Koller
New high-frequency, automated data collection and analysis algorithms could offer new insights into complex learning processes, especially for tasks in which students have opportunities to generate unique open-ended artifacts such as computer programs. These approaches should be particularly useful because the need for scalable project-based and student-centered learning is growing considerably. In this article, we present studies focused on how students learn computer programming, based on data drawn from 154,000 code snapshots of computer programs under development by approximately 370 students enrolled in an introductory undergraduate programming course. We use methods from machine learning to discover patterns in the data and try to predict final exam grades. We begin with a set of exploratory experiments that use fully automated techniques to investigate how much students change their programming behavior throughout all assignments in the course. The results show that students' change in programming patterns is only weakly predictive of course performance. We subsequently home in on a single assignment, trying to map students' learning process and trajectories and automatically identify productive and unproductive (sink) states within these trajectories. Results show that our process-based metric has better predictive power for final exams than midterm grades. We conclude with recommendations about the use of such methods for assessment, real-time feedback, and course improvement.
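One simple way to flag unproductive "sink" states in such trajectories is to look for states with a high empirical self-transition probability, i.e. states that students enter and tend not to leave. The threshold and state labels below are assumptions for illustration, not the paper's actual sink-state definition:

```python
from collections import Counter, defaultdict

def find_sink_states(sequences, min_self_loop=0.5):
    """Return states whose observed self-transition probability is at least
    min_self_loop -- a crude proxy for "students get stuck here"."""
    counts = defaultdict(Counter)
    for seq in sequences:
        for cur, nxt in zip(seq, seq[1:]):
            counts[cur][nxt] += 1
    return sorted(
        s for s, c in counts.items()
        if c[s] / sum(c.values()) >= min_self_loop
    )
```

Given trajectories where students repeatedly revisit state `B` before reaching `done`, `B` is flagged as a sink while transient states are not.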
Technical Symposium on Computer Science Education | 2018
Lisa Yan; Nick McKeown; Mehran Sahami; Chris Piech
As computer science classes grow, instructor workload also increases: teachers must simultaneously teach material, provide assignment feedback, and monitor student progress. At scale, it is hard to know which students need extra help, and as a result some students can resort to excessive collaboration--using online resources or peer code--to complete their work. In this paper, we present TMOSS, a tool that analyzes the intermediate steps a student takes to complete a programming assignment. We find that for three separate course offerings, TMOSS is almost twice as effective as traditional software similarity detectors in identifying the number of students who exhibit excessive collaboration. We also find that such students spend significantly less time on their assignment, use fewer class tutoring resources, and perform worse on exams than their peers. Finally, we provide a theory of the parametric distribution of typical student assignment similarity, which allows for probabilistic interpretation.
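TMOSS's core idea, comparing intermediate snapshots rather than only final submissions, can be sketched with a stand-in similarity measure: Jaccard overlap of token n-grams, maximized over all snapshot pairs from two students. The tokenizer and the n-gram similarity here are illustrative assumptions, not TMOSS's actual detector:

```python
import re

def token_ngrams(code, n=5):
    """Break source text into word/symbol tokens and return its set of n-grams."""
    toks = re.findall(r"\w+|\S", code)
    return {tuple(toks[i:i + n]) for i in range(len(toks) - n + 1)}

def max_snapshot_similarity(snaps_a, snaps_b, n=5):
    """Highest Jaccard similarity over all pairs of intermediate snapshots
    from two students' assignment histories."""
    best = 0.0
    for a in snaps_a:
        grams_a = token_ngrams(a, n)
        for b in snaps_b:
            grams_b = token_ngrams(b, n)
            union = grams_a | grams_b
            if union:
                best = max(best, len(grams_a & grams_b) / len(union))
    return best
```

Because the maximum is taken over every snapshot pair, a copied intermediate draft scores 1.0 even if both students' final submissions diverge, which is the intuition behind comparing process data rather than end products.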
Technical Symposium on Computer Science Education | 2018
Chris Piech; Chris Gregg
This paper presents BlueBook, a lightweight, cross-platform, computer-based, open source examination environment that overcomes traditional hurdles with computerized testing for computer science courses. As opposed to paper exam testing, BlueBook allows students to type coding problems on their laptops in an environment similar to their normal programming routine (e.g., with syntax highlighting), but purposefully does not provide them the ability to compile and/or run their code. We seamlessly transitioned from paper exams to BlueBook and found that students appreciated the ability to type their responses. Additionally, we are just beginning to harness the benefits to grading by having student answers in digital form. In the paper, we discuss the pedagogical benefits and trade-offs to using a computerized exam format, and we argue that both the students and the graders benefit from it.
Proceedings of the Third Conference on GenderIT | 2015
Rob Semmens; Chris Piech; Michelle Friend
We developed a short, easily implemented survey that measures the similarity in phrases describing the self and a computer scientist. Additionally, we took initial steps in determining adjectives or phrases that describe a stereotypical computer scientist. We then administered this survey before and after an eight-week summer computer science program for high school girls. We found that phrases or adjectives used to describe the self converged with those used to describe the computer scientist. In addition, descriptions of both were more positive at the end of the program compared to the beginning. Finally, stereotypical descriptions of a computer scientist decreased from the beginning to the end of the program. Future work includes refinement of the stereotype measure and assessing different types of computer science programs.
Technical Symposium on Computer Science Education | 2017
Thomas W. Price; Neil C.C. Brown; Chris Piech; Kelly Rivers
As more programming environments add logging features and programming data becomes more accessible, it is important to have a conversation about how we share and use this data. Uses of programming log data range from big-picture analyses to dashboards for instant teacher feedback, to intelligent, data-driven learning environments. The goal of this BOF is to talk about what data is important to collect, where it can be gathered and shared, what general data formats make sense, how to handle privacy and anonymization, and what ultimately we want to see the data used for. The BOF welcomes both producers of programming log data and current or potential consumers, interested in how it could be applied in their classrooms or research. One hopeful outcome of this BOF is a commitment to documenting and sharing existing programming data in an accessible location and format.
Educational Data Mining | 2013
Chris Piech; Jonathan Huang; Zhenghao Chen; Chuong B. Do; Andrew Y. Ng; Daphne Koller
Neural Information Processing Systems | 2015
Chris Piech; Jonathan Bassen; Jonathan Huang; Surya Ganguli; Mehran Sahami; Leonidas J. Guibas; Jascha Sohl-Dickstein
International World Wide Web Conferences | 2014
Andy Nguyen; Chris Piech; Jonathan Huang; Leonidas J. Guibas