Thomas Staubitz
Hasso Plattner Institute
Publications
Featured research published by Thomas Staubitz.
IEEE International Conference on Teaching, Assessment, and Learning for Engineering | 2015
Thomas Staubitz; Hauke Klement; Jan Renz; Ralf Teusner; Christoph Meinel
In recent years, Massive Open Online Courses (MOOCs) have become a phenomenon, presenting the prospect of free, high-quality education to everybody. They bear a tremendous potential for teaching programming to a large and diverse audience. The typical MOOC components, such as video lectures, reading material, and easily assessable quizzes, however, are not sufficient for proper programming education. To learn programming, participants need an option to work on practical programming exercises and to solve actual programming tasks. It is crucial that the participants receive proper feedback on their work in a timely manner. Without a tool for the automated assessment of programming assignments, teaching teams would be restricted to offering optional, ungraded exercises only. The paper at hand sketches scenarios for how practical programming exercises could be provided and examines the landscape of potentially helpful tools in this context. Automated assessment has a long record in the history of computer science education. We give an overview of existing tools in this field and also explore the question of what can and/or should be assessed.
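As a rough illustration of the kind of automated assessment this paper surveys, the sketch below runs an instructor's unit-test suite against a submission in a separate process with a time limit. The use of pytest and the feedback format are illustrative assumptions, not a tool described in the paper.

```python
# Minimal autograder sketch: pass/fail plus raw test output as feedback.
import subprocess
import sys

def grade(tests_path: str, timeout_s: int = 10) -> tuple[bool, str]:
    """Run an instructor-provided pytest suite; return (passed, feedback)."""
    try:
        proc = subprocess.run(
            [sys.executable, "-m", "pytest", tests_path, "-q"],
            capture_output=True, text=True, timeout=timeout_s,
        )
    except subprocess.TimeoutExpired:
        return False, "Your program exceeded the time limit."
    # Exit code 0 means every test passed; truncated output serves as feedback.
    return proc.returncode == 0, proc.stdout[-2000:]
```

A real grader would additionally isolate the submission from the grading host, which is exactly the sandboxing question several of the surveyed tools address.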
Learning at Scale | 2016
Thomas Staubitz; Dominic Petrick; Matthias Bauer; Jan Renz; Christoph Meinel
Massive Open Online Courses (MOOCs) have revolutionized higher education by offering university-like courses to a large number of learners via the Internet. The paper at hand takes a closer look at peer assessment as a tool for delivering individualized feedback and engaging assignments to MOOC participants. Benefits, such as scalability for MOOCs and higher-order learning, and challenges, such as grading accuracy and rogue reviewers, are described. Common practices and the state of the art in counteracting these challenges are highlighted. Based on this research, the paper at hand describes a peer assessment workflow and its implementation on the openHPI and openSAP MOOC platforms. This workflow combines the best practices of existing peer assessment tools and introduces some small but crucial improvements.
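One standard defense against the rogue reviewers mentioned above is robust aggregation of peer scores. The sketch below uses the median so that a single outlier cannot dominate a grade; the three-review minimum is an illustrative assumption, not a rule from the paper.

```python
from statistics import median

def aggregate_peer_grade(scores: list[float], min_reviews: int = 3) -> float | None:
    """Median of peer scores; None signals that too few reviews arrived."""
    if len(scores) < min_reviews:
        return None  # the platform could then fall back to staff grading
    return median(scores)
```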
Global Engineering Education Conference | 2016
Thomas Staubitz; Hauke Klement; Ralf Teusner; Jan Renz; Christoph Meinel
The paper at hand introduces CodeOcean, a web-based platform for practical programming exercises. CodeOcean is designed to be used in Massive Open Online Courses (MOOCs) to teach programming to beginners. Its concept and implementation are discussed with regard to the tools provided to students and teachers, sandboxed and scalable code execution, scalable assessment, and interoperability. MOOCs bear a tremendous potential for teaching programming to a large and diverse audience. Learning to program, however, is a hands-on effort; watching videos and solving multiple-choice tests is not sufficient. A platform such as CodeOcean, on which participants work on practical programming exercises and solve actual programming tasks, is required. Due to the massiveness of the courses, teaching teams cannot check, give feedback on, or assess the participants' submissions manually. CodeOcean provides the participants with proper automated feedback in a timely manner and is able to assess the given programming tasks automatically.
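The abstract does not detail CodeOcean's sandboxing mechanism, but a common way to achieve sandboxed, scalable execution is to run each submission in a short-lived container with no network access and capped resources. The sketch below is a hypothetical illustration of that pattern; the image name and resource limits are assumptions.

```python
import subprocess

def run_sandboxed(workdir: str, timeout_s: int = 10) -> str:
    """Execute /code/main.py from `workdir` inside a locked-down container."""
    cmd = [
        "docker", "run", "--rm",
        "--network", "none",          # student code gets no network access
        "--memory", "128m",           # cap memory
        "--cpus", "0.5",              # cap CPU
        "-v", f"{workdir}:/code:ro",  # mount the submission read-only
        "python:3.12-slim", "python", "/code/main.py",
    ]
    try:
        proc = subprocess.run(cmd, capture_output=True, text=True, timeout=timeout_s)
        return proc.stdout + proc.stderr
    except subprocess.TimeoutExpired:
        return "Execution timed out."
```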
Global Engineering Education Conference | 2014
Thomas Staubitz; Jan Renz; Christian Willems; Johannes Jasper; Christoph Meinel
There is a great demand for hands-on training in engineering education. In the context of a Massive Open Online Course (MOOC), assessing these experiments manually through teaching assistants is not possible owing to the high number of participants and the resulting workload for the teaching team. Systems for the machine-based assessment of coding tasks exist, but they are not necessarily publicly available, or not prepared to handle the massive number of users in a MOOC. They are certainly not available "ad hoc", but require a certain amount of effort to be integrated into the MOOC platform or to be made available to the students in another way. The time and money to provide this effort are not always available. This work presents a lightweight solution for the assessment of practical programming exercises, based on third-party online coding tools. The solution was introduced as a part of openHPI's Web-Technologies course. The basic idea is to prepare a task in an available online tool, along with a piece of code that is able to evaluate the participant's solution. In case of success, the participant is provided with a password, which in turn serves as the answer to a fill-in-the-gap question in a standard quiz as provided by the openHPI MOOC platform, and thus allows for automatic online assessment based on practical coding exercises.
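The mechanism lends itself to a very small harness shipped with the task in the online editor: only a correct solution reveals the quiz password. The sketch below illustrates the idea with an invented task and password; the actual tasks in the Web-Technologies course differed.

```python
# Hypothetical task harness: participants define fizzbuzz(n) above this code.
def check(solution_fn) -> None:
    cases = {3: "Fizz", 5: "Buzz", 15: "FizzBuzz", 7: "7"}
    if all(solution_fn(n) == expected for n, expected in cases.items()):
        print("Well done! Your quiz password is: h4nds-0n")  # invented password
    else:
        print("Not quite yet -- check the failing cases and try again.")

# check(fizzbuzz)  # run once the participant's code is in place
```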
Learning at Scale | 2016
Jan Renz; Gerardo Navarro-Suarez; Rowshan Sathi; Thomas Staubitz; Christoph Meinel
The paper at hand describes the design and implementation of an analytics service that retrieves live usage data from students enrolled on a service-oriented MOOC platform for the purpose of learning analytics (LA) research. A real-time, extensible architecture for consolidating and processing data in versatile analytics stores is introduced.
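A minimal sketch of the consolidate-and-fan-out idea behind such a service, assuming a simple event schema and a pluggable store interface (both invented here, not the paper's actual design):

```python
from dataclasses import dataclass
from datetime import datetime, timezone

@dataclass
class LearnerEvent:
    user_id: str
    verb: str        # e.g. "video_play", "quiz_submit"
    resource: str    # e.g. a course item id
    timestamp: datetime

class AnalyticsService:
    def __init__(self) -> None:
        self._stores = []  # anything with a write(event) method

    def register_store(self, store) -> None:
        self._stores.append(store)

    def track(self, user_id: str, verb: str, resource: str) -> None:
        event = LearnerEvent(user_id, verb, resource, datetime.now(timezone.utc))
        for store in self._stores:
            store.write(event)  # each store indexes the event for its own queries
```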
Learning Analytics and Knowledge | 2016
Jan Renz; Daniel Hoffmann; Thomas Staubitz; Christoph Meinel
In recent years, Massive Open Online Courses (MOOCs) have become a phenomenon, offering the possibility to teach thousands of participants simultaneously. At the same time, the platforms used to deliver these courses are still in their fledgling stages. While the course content and didactics of those massive courses are the primary factors for their success, a smart platform may still improve or diminish the learner's experience and learning outcome. The paper at hand proposes the use of an A/B testing framework that can be used within a micro-service architecture to validate hypotheses about how learners use the platform and to enable data-driven decisions about new features and settings. To evaluate this framework, three new features (Onboarding Tour, Reminder Mails, and a Pinboard Digest) were identified based on a user survey. They were implemented and introduced on two large MOOC platforms, and their influence on the learners' behavior was measured. Finally, this paper proposes a data-driven decision workflow for the introduction of new features and settings on e-learning platforms.
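A typical building block of such a framework is stable variant assignment, so that a learner sees the same version of a feature on every visit. The hash-based sketch below is an illustrative assumption, not the framework's actual implementation.

```python
import hashlib

def assign_variant(user_id: str, experiment: str, variants: list[str]) -> str:
    """Deterministic bucketing: the same user always gets the same variant."""
    digest = hashlib.sha256(f"{experiment}:{user_id}".encode()).hexdigest()
    return variants[int(digest, 16) % len(variants)]

# e.g. assign_variant("u123", "onboarding_tour", ["control", "treatment"])
```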
Frontiers in Education Conference | 2015
Martin von Löwis; Thomas Staubitz; Ralf Teusner; Jan Renz; Christoph Meinel; Susanne Tannert
The paper at hand evaluates the Massive Open Online Course (MOOC) Spielend Programmieren Lernen (Playfully Learning to Program), an effort to scale the youth development program at the Hasso Plattner Institute (HPI) to a larger audience. The HPI has a strong tradition of attracting children and adolescents to take their first steps towards a career in IT at an early age. The Schülerakademie, the Schülerkolleg, the Schülerklub, and the support for the CoderDojo in Potsdam are some of the regular activities in this context that take youngsters by the hand and supply them with material and guidance in their mother tongue. With the emergence of MOOCs and the success of HPI's own MOOC platform, openHPI, it was a natural step to develop a course addressing an audience that is only marginally represented in openHPI's regular courses: school children and adolescents. A further novelty for openHPI in this course was the focus on teaching programming with a high percentage of obligatory hands-on tasks. Particularly for this course, a standalone tool allowing participants to write and evaluate code directly in the browser, without the need to install additional software, has been developed. We compare this tool to a small selection of similar approaches on other platforms. As will be shown, the course attracted a far more diverse audience than expected and, therefore, also needs to be seen in the context of spreading digital literacy among wider parts of society. In this context, we will also discuss the significant differences in forum usage between the course Spielend Programmieren Lernen and the course In-Memory Databases, a more traditional openHPI course.
Global Engineering Education Conference | 2017
Thomas Staubitz; Christian Willems; Christiane Hagedorn; Christoph Meinel
Massive Open Online Courses (MOOCs) have left their mark on the face of education in recent years. At the Hasso Plattner Institute (HPI) in Potsdam, Germany, we are actively developing a MOOC platform, which provides our research with a plethora of e-learning topics, such as learning analytics, automated assessment, peer assessment, teamwork, online proctoring, and gamification. We run several instances of this platform. On openHPI, we provide our own courses from within the HPI context. Further instances are openSAP, openWHO, and mooc.HOUSE, the smallest of these platforms, targeting customers with a less extensive course portfolio. In 2013, we started to work on the gamification of our platform. By now, we have implemented about two thirds of the features that we initially evaluated as useful for our purposes. About a year ago, we activated the implemented gamification features on mooc.HOUSE. Before activating the features on openHPI as well, we examined and re-evaluated our initial considerations based on the data we have collected so far and the changes in other contexts of our platforms.
Learning at Scale | 2018
Thomas Staubitz; Christoph Meinel
Teamwork and collaborative learning are considered superior to individual learning by many instructors and didactical theories. Particularly in the context of e-learning and Massive Open Online Courses (MOOCs), we see great benefits but also great challenges for both learners and instructors. We discuss our experience with six team-based assignments on the openHPI and openSAP MOOC platforms.
Learning at Scale | 2018
Ralf Teusner; Thomas Hille; Thomas Staubitz
A typical problem in MOOCs is the missing opportunity for course conductors to individually support students in overcoming their problems and misconceptions. This paper presents the results of automatic interventions for struggling students during programming exercises, offering peer feedback and tailored bonus exercises. To improve learning success, we do not want to abolish instructionally desirable trial and error, but to reduce excessive struggle and demotivation. Therefore, we developed adaptive, automatic, just-in-time interventions that encourage students to ask for help if they require considerably more than the average working time to solve an exercise. Additionally, we offered students bonus exercises tailored to their individual weaknesses. The approach was evaluated within a live course with over 5,000 active students via a survey and metrics gathered alongside it. Results show that we can increase the calls for help by up to 66% and reduce the time students dwell on a problem before taking action. Learnings from the experiments can further be used to pinpoint course material that needs improvement and to tailor content to specific audiences.
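The trigger condition can be pictured as a simple statistical rule. The sketch below flags a student once their working time clearly exceeds that of their peers; the mean-plus-one-standard-deviation threshold is an illustrative assumption, and the paper's exact criterion may differ.

```python
from statistics import mean, stdev

def should_intervene(working_time_s: float, peer_times_s: list[float]) -> bool:
    """Offer a just-in-time hint when a student takes unusually long."""
    if len(peer_times_s) < 2:
        return False  # not enough data for a meaningful comparison
    threshold = mean(peer_times_s) + stdev(peer_times_s)
    return working_time_s > threshold
```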