Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Michael Striewe is active.

Publication


Featured research published by Michael Striewe.


Integrating Technology into Computer Science Education | 2011

Using run time traces in automated programming tutoring

Michael Striewe; Michael Goedicke

Running test cases against a student's solution to a programming assignment is one of the easiest ways to generate feedback. If black-box tests are used, students may have difficulty retracing the complete system behaviour and locating erroneous programming statements. This paper discusses the use of automated trace generation for assisting students in this task. Both manual and automated trace interpretation are discussed and evaluated by means of examples.
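
A minimal sketch of the underlying idea, assuming manual instrumentation (the paper's actual tooling and all names below are hypothetical): the trace records intermediate program states, so a failing black-box test can be explained rather than merely reported.

```java
import java.util.ArrayList;
import java.util.List;

/** Hypothetical illustration: a manually instrumented student solution that
 *  records one trace entry per loop iteration, so the feedback can show *how*
 *  a wrong result was computed, not just that it is wrong. */
public class TraceDemo {

    record TraceEntry(int step, int i, long accumulator) { }

    static final List<TraceEntry> TRACE = new ArrayList<>();

    /** Student code for "sum of 1..n", instrumented with trace statements. */
    static long sum(int n) {
        long acc = 0;
        for (int i = 1; i < n; i++) {          // off-by-one bug: should be i <= n
            acc += i;
            TRACE.add(new TraceEntry(TRACE.size(), i, acc));
        }
        return acc;
    }

    public static void main(String[] args) {
        long result = sum(4);                   // black-box expectation: 10
        System.out.println("result = " + result);
        // The trace reveals that i never reaches 4, which explains the failure:
        TRACE.forEach(e -> System.out.println(
                "step " + e.step() + ": i=" + e.i() + " acc=" + e.accumulator()));
    }
}
```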


Integrating Technology into Computer Science Education | 2011

Automated checks on UML diagrams

Michael Striewe; Michael Goedicke

Automated checks for software artefacts like UML diagrams, as used in automated assessment or tutoring systems, often rely on direct comparisons between a submitted solution and a sample solution. This approach lacks flexibility in the face of the many different valid solutions that are common in modeling tasks. This paper presents an alternative technique for checking UML class diagrams based on graph queries, which promises to be more flexible.
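
The paper's query mechanism is not reproduced here; the following hypothetical sketch only illustrates the idea of treating a class diagram as a graph and checking it with a structural query ("some class realizes the required interface, and the subject holds a reference to that interface") instead of diffing it against a single sample solution.

```java
import java.util.Map;
import java.util.Set;

/** Hypothetical sketch: a UML class diagram as a tiny graph, checked with a
 *  structural query instead of a direct comparison with one sample solution. */
public class DiagramQueryDemo {

    // Realization edges of the student's diagram: class -> interfaces it implements.
    static final Map<String, Set<String>> REALIZES = Map.of(
            "WeatherDisplay", Set.of("Observer"),
            "WeatherStation", Set.of("Subject"));

    // Association edges: class -> types it holds references to.
    static final Map<String, Set<String>> ASSOCIATES = Map.of(
            "WeatherStation", Set.of("Observer"));

    /** Query: some class realizes `iface`, and `subject` is associated with `iface`.
     *  Any class name satisfying the pattern is accepted, so differently named
     *  but structurally equivalent solutions pass the check. */
    static boolean observerPatternPresent(String subject, String iface) {
        boolean someClassRealizesIface =
                REALIZES.values().stream().anyMatch(s -> s.contains(iface));
        boolean subjectHoldsIface =
                ASSOCIATES.getOrDefault(subject, Set.of()).contains(iface);
        return someClassRealizesIface && subjectHoldsIface;
    }

    public static void main(String[] args) {
        System.out.println(observerPatternPresent("WeatherStation", "Observer")); // true
    }
}
```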


International Computer Assisted Assessment Conference | 2014

A Review of Static Analysis Approaches for Programming Exercises

Michael Striewe; Michael Goedicke

Static source code analysis is a common feature of automated grading and tutoring systems for programming exercises. Different approaches and tools are used in this area, each with individual benefits and drawbacks that have a direct influence on the quality of assessment feedback. In this paper, different principal approaches and tools for static analysis are presented, evaluated, and compared with regard to their usefulness in learning scenarios. The goal is to draw a connection between the technical outcomes of source code analysis and the didactic benefits that can be gained from them for programming education and feedback generation.
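
As a hedged illustration only (none of the tools surveyed in the paper are shown, and the rule below is invented for this sketch), even a check as simple as "flag empty catch blocks" demonstrates the kind of feedback static analysis can produce without running the student's code.

```java
import java.util.regex.Matcher;
import java.util.regex.Pattern;

/** Hypothetical sketch of a single static-analysis rule applied to student
 *  source code without executing it: report empty catch blocks. */
public class EmptyCatchCheck {

    private static final Pattern EMPTY_CATCH =
            Pattern.compile("catch\\s*\\([^)]*\\)\\s*\\{\\s*\\}");

    static void check(String source) {
        Matcher m = EMPTY_CATCH.matcher(source);
        while (m.find()) {
            System.out.println("Feedback: exception is silently swallowed at offset "
                    + m.start() + "; consider handling or reporting it.");
        }
    }

    public static void main(String[] args) {
        String studentCode = """
                try { Integer.parseInt(input); }
                catch (NumberFormatException e) { }
                """;
        check(studentCode);
    }
}
```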


International Conference on Software Engineering | 2012

Refounding software engineering: the semat initiative (invited presentation)

Mira Kajko-Mattsson; Ivar Jacobson; Ian Spence; Paul E. McMahon; Brian Elvesæter; Arne J. Berre; Michael Striewe; Michael Goedicke; Shihong Huang; Bruce MacIsaac; Ed Seymour

The new software engineering initiative, Semat, is in the process of developing a kernel for software engineering that stands on a solid theoretical basis. So far, it has suggested a set of kernel elements for software engineering and basic language constructs for defining the elements and their usage. This paper describes a session during which Semat results and status will be presented. The presentation will be followed by a discussion panel.


European Conference on Technology Enhanced Learning | 2013

JACK Revisited: Scaling Up in Multiple Dimensions

Michael Striewe; Michael Goedicke

In 2009 the authors of this paper published their proposal for a modular software architecture for computer aided assessments and automated marking. Since then, four more years of experience have passed. This paper reports on technical and organizational aspects of using the proposed architecture and the actual system in various scenarios.


International Conference on Graph Transformation | 2008

Using a Triple Graph Grammar for State Machine Implementations

Michael Striewe

State machines can be comprehensively specified, simulated and validated at design time to obtain a formally founded skeleton for a software application. A direct implementation of state machines in Java source code can realize states as classes, transitions as methods, and variables as well as pre-conditions and variable updates as auxiliary methods, invoking either arbitrary application methods to retrieve the current variable values or evaluating expressions [1]. Viewed from an abstract level, the implementation and the state machine are two models sharing the same semantics. To maintain the modelled software systems, it is vitally important to preserve as much of this semantics as possible in the source code in order to be able to track back changes and errors [2,3]. In contrast to these considerations, current techniques of model-driven development use several unidirectional steps, starting from an abstract model and resulting in platform-specific source code.
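
A hypothetical fragment of the implementation style described above (the class and method names are invented for this sketch): states as classes, transitions as methods that perform variable updates and return the successor state, so the state-machine semantics stays visible in the source code.

```java
/** Hypothetical sketch: a turnstile state machine implemented so that each
 *  state is a class and each transition a method returning the next state. */
public class TurnstileMachine {

    static int coins = 0;  // machine variable shared by the states

    /** State "Locked": each transition is a separate method. */
    static class Locked {
        Unlocked insertCoin() { coins++; return new Unlocked(); } // transition with variable update
        Locked push()         { return this; }                    // pushing while locked has no effect
    }

    /** State "Unlocked". */
    static class Unlocked {
        Locked push()         { return new Locked(); }
        Unlocked insertCoin() { return this; }
    }

    public static void main(String[] args) {
        Locked start = new Locked();
        Unlocked afterCoin = start.insertCoin();
        Locked afterPush = afterCoin.push();
        System.out.println("coins collected: " + coins);  // 1
        System.out.println("final state: " + afterPush.getClass().getSimpleName());
    }
}
```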


MMB&DFT'10: Proceedings of the 15th International GI/ITG Conference on Measurement, Modelling, and Evaluation of Computing Systems and Dependability and Fault Tolerance | 2010

SyLaGen: an extendable tool environment for generating load

Michael Striewe; Moritz Balz; Michael Goedicke

Measuring the run-time behaviour of systems under load can require complex workload definitions, measurement strategies, and the integration of load generation techniques. In this contribution we present SyLaGen (“Synthetic Load Generator”), a load generation environment that focuses on extendability with respect to four different aspects: First, a system under test may offer different interfaces for handling external requests, so a load generator must be able to handle different protocols randomly and in parallel. Second, load generation for a client-server system may require complex client behaviour that cannot be formulated in a simple descriptive way, but only with non-trivial algorithms that have to be implemented programmatically. Third, more than simple atomic measurements may be required in complex environments, so it should be possible to configure strategies that apply sequences of measurements to a system. Finally, comprehensive requirements engineering may result in complex use cases that cannot be modelled as linear scripts.
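
Purely as an assumed illustration of the second aspect (SyLaGen's real plug-in API is not shown, and the interface below is hypothetical), programmatic client behaviour could be expressed as ordinary code behind a small interface, so it can branch, loop, and keep per-client state between requests.

```java
import java.util.Random;

/** Hypothetical sketch of a pluggable client behaviour for a load generator:
 *  the behaviour is ordinary code rather than a linear script. */
public class ClientBehaviourDemo {

    /** One simulated client; a generator would run many of these in parallel. */
    interface ClientBehaviour {
        void runSession();
    }

    /** Example behaviour: browse a few pages, sometimes place an order. */
    static class ShopClient implements ClientBehaviour {
        private final Random random = new Random();

        public void runSession() {
            int pages = 1 + random.nextInt(5);
            for (int i = 0; i < pages; i++) {
                sendRequest("GET /catalog?page=" + i);
            }
            if (random.nextDouble() < 0.3) {      // only some clients order
                sendRequest("POST /order");
            }
        }

        private void sendRequest(String request) {
            // A real behaviour would call a protocol adapter here; we only log.
            System.out.println("[client] " + request);
        }
    }

    public static void main(String[] args) {
        new ShopClient().runSession();
    }
}
```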


Software Visualization | 2017

A Dashboard for Visualizing Software Engineering Processes Based on ESSENCE

Sebastian Brandt; Michael Striewe; Fabian Beck; Michael Goedicke

While traditional project planning approaches focus on precise scheduling of tasks, the ESSENCE standard proposes a higher-level approach that focuses on monitoring. Hence, a new kind of process visualization that picks up ideas of Kanban boards and physical cards is sketched in the standard. This tool paper presents a dashboard application refining, extending, and implementing these ideas based on five use cases posed by two industry partners. It demonstrates that a high degree of support for project management can be achieved by using a relatively small set of visualization means.


Science of Computer Programming | 2016

An architecture for modular grading and feedback generation for complex exercises

Michael Striewe

Grading and feedback generation for complex open exercises is a major challenge in e-learning and e-assessment. One particular instance of an e-assessment system designed especially for grading programming exercises is JACK. This paper aims to discuss and evaluate key architectural concepts of JACK in terms of components, interfaces, and communication. It is shown how the architectural concept stands the test in an actual large-scale deployment. Highlights: a modular architecture for e-assessment systems; a discussion of static and dynamic properties of the architecture; reports from eight years of development and practical use.
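
JACK's actual interfaces are not reproduced here; the following hypothetical sketch only illustrates the modular idea of independent checker components that each turn a submission into feedback, so new check types can be added without touching the core.

```java
import java.util.List;

/** Hypothetical sketch of a modular grading pipeline: every checker is an
 *  independent component behind the same interface. */
public class GradingPipelineDemo {

    record Feedback(String checker, int score, String message) { }

    interface Checker {
        Feedback check(String submission);
    }

    static class WellFormedChecker implements Checker {
        public Feedback check(String submission) {
            boolean looksComplete = submission.contains("class") && submission.contains("}");
            return new Feedback("structure", looksComplete ? 10 : 0,
                    looksComplete ? "Submission is well-formed." : "Submission seems incomplete.");
        }
    }

    static class StyleChecker implements Checker {
        public Feedback check(String submission) {
            boolean longLines = submission.lines().anyMatch(l -> l.length() > 120);
            return new Feedback("style", longLines ? 5 : 10,
                    longLines ? "Some lines exceed 120 characters." : "No style issues found.");
        }
    }

    public static void main(String[] args) {
        String submission = "class Solution { int answer() { return 42; } }";
        List<Checker> checkers = List.of(new WellFormedChecker(), new StyleChecker());
        checkers.forEach(c -> System.out.println(c.check(submission)));
    }
}
```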


International Conference on Learning and Teaching in Computing and Engineering (LaTICE) | 2016

Towards Deriving Programming Competencies from Student Errors

Marc Berges; Michael Striewe; Philipp Shah; Michael Goedicke; Peter Hubwieser

Learning outcomes are increasingly defined and measured in terms of competencies. Many research projects investigate the combinations of knowledge and skills that students might learn. Yet it is also promising to analyze what students fail to learn, which provides information about the absence of certain competencies. For this purpose, we evaluate the outcomes of automatic assessment tools that provide automatic feedback to the participating students. In particular, we analyzed the errors of students who participated in an introductory programming course. The 604 students participating in the course had to solve six tasks during the semester, resulting in a total of 12,274 submissions. The error analysis is done by evaluating the data from the automatic assessment tool JACK, which provides automatic feedback on programming tasks. To derive information about prospective competencies, we conducted a qualitative analysis of the different errors the students made in their solutions. The results provide interesting insights into missing competencies. In further research, our findings have to be validated by investigating the cognitive processes involved during programming.

Collaboration


Dive into Michael Striewe's collaborations.

Top Co-Authors

Michael Goedicke
University of Duisburg-Essen

Moritz Balz
University of Duisburg-Essen

Melanie Schypula
University of Duisburg-Essen

Oliver J. Bott
Braunschweig University of Technology

Sven Strickroth
Humboldt University of Berlin

Björn Zurmaar
University of Duisburg-Essen

Filiz Kurt-Karaoglu
University of Duisburg-Essen

Niels Pinkwart
Humboldt University of Berlin

Oliver Müller
Clausthal University of Technology

Alexander Tillmann
Goethe University Frankfurt