Elke Pulvermüller
University of Osnabrück
Publication
Featured research published by Elke Pulvermüller.
Electronic Notes in Theoretical Computer Science | 2002
Elke Pulvermüller
Abstract This paper presents an approach to ensuring the correctness of composed systems. It takes into account that correctness can usually be achieved only to a certain degree (except for some small and very mission-critical applications) and that complete specifications are usually not practicable. By modelling the parts, the composition activities and the requirements specification, we automate the checking procedures using model checking. An important feature of our approach is that it allows partial modelling and specification.
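The abstract reduces correctness of a composition to an automated check over the composed models. As a minimal, illustrative sketch (not the paper's tooling; state names and the safety requirement are invented), such a check can be phrased as a reachability question over an explicitly composed state space:

```python
from collections import deque

def reachable_bad_state(transitions, start, bad):
    """Breadth-first search over an explicitly composed state space.
    `transitions` maps a state to a list of successor states; returns
    True if any state in `bad` is reachable from `start`."""
    seen = {start}
    queue = deque([start])
    while queue:
        state = queue.popleft()
        if state in bad:
            return True
        for nxt in transitions.get(state, []):
            if nxt not in seen:
                seen.add(nxt)
                queue.append(nxt)
    return False

# Two components composed as pairs of local states; the requirement
# forbids both components being in their critical section at once.
transitions = {
    ("idle", "idle"): [("crit", "idle"), ("idle", "crit")],
    ("crit", "idle"): [("idle", "idle")],
    ("idle", "crit"): [("idle", "idle")],
}
bad = {("crit", "crit")}
print(reachable_bad_state(transitions, ("idle", "idle"), bad))  # False
```

Real model checkers avoid enumerating the full product explicitly, but the partial-model idea carries over: unknown parts simply contribute no transitions.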
Conference of the Industrial Electronics Society | 2001
Andreas Speck; Elke Pulvermüller
Since the software crisis was first discussed in 1968, many concepts for improving software development and reuse have been introduced. Some of them, such as frameworks or components, aim at the reuse of code; others capture experience with system architecture, design or coding recommendations. This paper focuses on a way to express the reuse of pieces of software and of design reasoning. We apply versions to describe the sets of features we want a system to have. Conditions are used to formulate these requirements. The set of features in a version system may be verified against a feature model, which captures the known variability of a system. The required knowledge about a system is often available, since many systems are derived from already existing ones. To build larger systems from smaller units, the versions are organized hierarchically: versions on a lower level, which are easier to understand, may be part of larger versions.
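The core check the abstract describes, verifying a version's feature set against the known variability of a feature model, can be sketched as follows (feature names and the constraint encoding are hypothetical, chosen only for illustration):

```python
def valid_variant(selected, all_features, requires, excludes):
    """Check a version's feature set against a simple feature model.
    selected: set of chosen features; requires: {feature: set of
    features it depends on}; excludes: list of frozensets of features
    that must not co-occur."""
    if not selected <= all_features:
        return False  # unknown feature requested
    for feature in selected:
        if not requires.get(feature, set()) <= selected:
            return False  # a dependency is missing
    return not any(pair <= selected for pair in excludes)

features = {"persistence", "cache", "encryption", "sql"}
requires = {"cache": {"persistence"}, "sql": {"persistence"}}
excludes = [frozenset({"cache", "encryption"})]

print(valid_variant({"persistence", "cache"}, features, requires, excludes))  # True
print(valid_variant({"cache"}, features, requires, excludes))                 # False
```

Hierarchical versions would compose such checks: a larger version is valid only if its own constraints and those of all its sub-versions hold.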
The Practice of Enterprise Modeling | 2012
Thomas Stuht; Andreas Speck; Sven Feja; Sören Witt; Elke Pulvermüller
Business architectures are an important part of any enterprise architecture, containing business processes and business capabilities. High-quality business processes are a key factor in the success of a company. Hence, their quality, correctness and compliance have to be verified. We propose to use the business capabilities for an efficient and easily understandable definition of rules to perform this verification. The rule specification is based on rule patterns that define requirements from an operational point of view. These patterns are derived from experience gained in projects for modeling and optimizing business processes with extensive manual checks. For the rule validation we rely on model checking as an established technology to cope with the dynamic properties of processes. We present a tool-based approach that automates this verification, integrated into a single system with a common user interface.
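A common rule pattern of the kind the abstract alludes to is the response pattern: after an activity using one capability, an activity using another must eventually follow. A toy sketch over a linear process trace (activity names invented; the paper's patterns operate on richer process models):

```python
def response_holds(trace, trigger, response):
    """Response pattern over a linear trace: every occurrence of
    `trigger` must eventually be followed by `response`."""
    pending = False
    for activity in trace:
        if activity == trigger:
            pending = True
        if activity == response:
            pending = False
    return not pending

trace = ["receive-order", "check-credit", "approve", "ship"]
print(response_holds(trace, "check-credit", "approve"))  # True
print(response_holds(trace, "ship", "approve"))          # False
```

Model checking generalizes this from a single trace to all paths through a branching process model.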
Business Information Systems | 2011
Andreas Speck; Sören Witt; Sven Feja; Aneta Lotyzc; Elke Pulvermüller
There are numerous concepts and tools for modeling business processes and several academic approaches to verifying them. However, most modeling tools do not integrate the checking of the processes. The three-tier architecture of the Business Application Modeler (BAM) provides the graphical representation of business models and rules (presentation layer) and integrates a verification mechanism layer via an intermediate transformation layer.
International Conference on Evaluation of Novel Approaches to Software Engineering | 2017
Mathias Menninghaus; Falk Wilke; Jan-Philipp Schleutker; Elke Pulvermüller
Modern software systems often communicate with their users through graphical user interfaces (GUIs). While the underlying business logic may be fully covered by unit tests, the GUI mostly is not. Despite the widespread use of capture-and-replay tools, which leave the generation of GUI tests to the user, recent research also focuses on automated GUI test generation. Among the numerous approaches, which include symbolic execution, model-based generation and random testing, search-based test data generation seems to be the most promising. In this paper, we create GUI tests using hill climbing, simulated annealing and several genetic algorithms that deal differently with the sequence length and use multi- or single-objective algorithms. These test data generators are compared in terms of runtime and coverage. All approaches are also compared under different optimization goals: a high coverage of the event flow graph (EFG) of the GUIs and a high coverage of the underlying source code. The evaluation shows that the genetic algorithms outperform hill climbing and simulated annealing in terms of coverage, and that targeting a high EFG coverage yields the best runtime performance.
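To make the search-based setup concrete, here is a deliberately tiny hill-climbing sketch (event names and fitness function invented): a candidate is a sequence of GUI events, and its fitness is the number of distinct event-flow-graph edges it exercises.

```python
import random

def efg_edges(sequence):
    """The EFG edges exercised by an event sequence: consecutive pairs."""
    return {(a, b) for a, b in zip(sequence, sequence[1:])}

def hill_climb(events, length, iterations, seed=0):
    """Mutate one position at a time, keeping moves that do not
    decrease the number of covered EFG edges."""
    rng = random.Random(seed)
    best = [rng.choice(events) for _ in range(length)]
    best_fit = len(efg_edges(best))
    for _ in range(iterations):
        neighbour = list(best)
        neighbour[rng.randrange(length)] = rng.choice(events)
        fit = len(efg_edges(neighbour))
        if fit >= best_fit:  # accept equal moves to escape plateaus
            best, best_fit = neighbour, fit
    return best, best_fit

events = ["open", "edit", "save", "close"]
sequence, covered = hill_climb(events, length=8, iterations=200)
```

Simulated annealing differs only in occasionally accepting worse neighbours; the genetic algorithms in the paper additionally evolve a population and may vary the sequence length.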
International Conference on Performance Engineering | 2016
Mathias Menninghaus; Elke Pulvermüller
The development process for new algorithms or data structures often begins with the analysis of benchmark results to identify the drawbacks of existing implementations, and it ends with the comparison of old and new implementations using one or more well-established benchmarks. However relevant, reproducible, fair, verifiable and usable those benchmarks may be, they have certain drawbacks. On the one hand, a new implementation may be biased to provide good results for a specific benchmark. On the other hand, benchmarks are very general and often fail to identify the worst and best cases of a specific implementation. In this paper we present a new approach for comparing algorithms and data structures at the implementation level using code coverage. Our approach uses model checking and multi-objective evolutionary algorithms to create test cases with high code coverage. It then executes each of the given implementations with each of the test cases in order to calculate a cross coverage. From this it calculates a combined coverage and a weighted performance, in which implementations that are not fully covered by the test cases of the other implementations are penalized. These metrics can be used to compare the performance of several implementations at a much deeper level than traditional benchmarks, and they incorporate worst, best and average cases in an equal manner. We demonstrate this approach with two example sets of algorithms and outline the next research steps required in this context, along with the greatest risks and challenges.
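The cross-coverage idea can be sketched in a few lines (the coverage figures below are made up, and this simple averaging is only one plausible way to combine them, not necessarily the paper's exact metric): every implementation is run against every implementation's generated test cases, and an implementation left partially uncovered by foreign test sets receives a lower combined score.

```python
def combined_coverage(coverage):
    """coverage[i][j] = fraction of implementation i's code covered by
    the test cases generated for implementation j, in [0, 1].
    Averaging over all test sets penalizes implementations that the
    other implementations' tests fail to cover."""
    return {
        impl: sum(by_tests.values()) / len(by_tests)
        for impl, by_tests in coverage.items()
    }

coverage = {
    "quicksort": {"quicksort": 1.0, "mergesort": 0.8},
    "mergesort": {"quicksort": 0.9, "mergesort": 1.0},
}
scores = combined_coverage(coverage)
# mergesort scores higher here: foreign tests cover more of its code.
```

A weighted performance metric would scale each implementation's measured runtimes by such a score, so that uncovered corner cases cannot silently flatter an implementation.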
New Trends in Software Methodologies, Tools and Techniques | 2015
Sören Witt; Sven Feja; Christian Hadler; Andreas Speck; Elke Pulvermüller
Graphically represented Business Process Models (BPMs) are common artifacts in documentation as well as in early phases of (software) development processes. The Graphical Computation Tree Logic (G-CTL) is a notation for defining formal graphical validation rules on the same level of abstraction as the BPMs, allowing users to specify high-level requirements regarding the content level of the BPMs. The research tool Business Application Modeler (BAM) enables the automatic validation of BPMs against G-CTL rules. While the details of the validation procedure are hidden from the user, the checking results need to be presented adequately. In this contribution, we present and discuss methods for visualizing and analyzing the checking results in the context of G-CTL-based validations. We elaborate how artifacts generated during a validation procedure may be used to derive different visualizations, and we show how these methods can be combined into more expressive visualizations.
Business Process Management | 2015
Andreas Speck; Sören Witt; Sven Feja; Elke Pulvermüller
Business process models describe the behaviour of commercial information systems. Since these models are the basis for the development and understanding of such information systems, they are subject to strict quality assurance. This importance leads to the idea of supporting the checking with an automated tool concept. The paper presents such an integrated, tool-based validation concept supporting human testers who are mainly business experts rather than test experts. These business experts may use the process model notations they are familiar with to model both the processes and the rules for these models. The automated testing system integrates model checking tools to perform the validation. The result is then presented to the human user. If the check detects an error, a counterexample demonstrating one source of the error is presented directly in the business process model.
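Presenting a counterexample "directly in the business process model" amounts to mapping the model checker's state path back onto model elements. A hypothetical sketch with invented state and element names (the actual tool integration is more involved):

```python
def annotate_counterexample(path, element_of_state):
    """Map a model checker's counterexample path (a list of state
    names) back to the process model elements it passes through, so
    the error path can be highlighted in the diagram."""
    return [
        {"element": element_of_state[state], "on_error_path": True}
        for state in path
    ]

element_of_state = {"s0": "Start", "s1": "Check invoice", "s2": "Pay twice"}
trail = annotate_counterexample(["s0", "s1", "s2"], element_of_state)
print([step["element"] for step in trail])  # ['Start', 'Check invoice', 'Pay twice']
```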
New Trends in Software Methodologies, Tools and Techniques | 2013
Elke Pulvermüller; Andreas Speck; Sven Feja; Sören Witt
Automated checking concepts for business process models support human testers considerably by saving time. However, this new checking ability results in a comparatively large number of rules representing requirements. Without a comprehensible representation of the relations between the rules, it is hard to keep track of the validated rules on the one hand and to correctly interpret the validation results on the other. In this paper we propose an improvement to the automated validation of business process models by offering elements to create abstract rules and to arrange these rules in hierarchies. Top-down and bottom-up testing are supported by stepwise activation (and validation) of the rules, starting from the top of the hierarchy (or from the bottom, respectively). Moreover, the rule hierarchies may be reused when similar systems are to be validated, by configuring a valid rule subset for the specific business process system.
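The stepwise top-down activation described above can be sketched as a level-order walk over a rule hierarchy (rule names here are invented examples, not from the paper):

```python
class Rule:
    """A node in a rule hierarchy: an abstract rule with optional
    more concrete child rules."""
    def __init__(self, name, children=()):
        self.name = name
        self.children = list(children)

def levels_top_down(root):
    """Yield rule names level by level, supporting stepwise activation
    (and validation) from the top of the hierarchy downwards."""
    frontier = [root]
    while frontier:
        yield [rule.name for rule in frontier]
        frontier = [child for rule in frontier for child in rule.children]

compliance = Rule("compliance", [
    Rule("data-protection", [Rule("consent-recorded")]),
    Rule("four-eyes-principle"),
])
print(list(levels_top_down(compliance)))
# [['compliance'], ['data-protection', 'four-eyes-principle'], ['consent-recorded']]
```

Bottom-up testing would simply consume the same levels in reverse order, and reuse for a similar system corresponds to pruning the tree to a valid rule subset before walking it.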
International Journal of Applied Mathematics and Computer Science | 2002
Andreas Speck; Elke Pulvermüller; Michael Jerger; Bogdan Franczyk