Andreas Grimmer
Johannes Kepler University of Linz
Publications
Featured research published by Andreas Grimmer.
Automated Software Engineering | 2015
Florian Angerer; Andreas Grimmer; Herbert Prähofer; Paul Grünbacher
Understanding variability is essential for configuring software systems to diverse requirements. Variability-aware program analysis techniques have been proposed for analyzing the space of program variants. Such techniques are highly beneficial, e.g., to determine the potential impact of changes during maintenance. This paper presents an interprocedural and configuration-aware change impact analysis (CIA) approach for determining the products possibly impacted by a change to the source code of a product family. The approach further supports engineers who adapt specific product variants after an initial pre-configuration. It can be adapted to work with different variability mechanisms, it provides more precise results than existing CIA approaches, and it can be implemented using standard control flow and data flow analyses. Using an industrial product line, we report evaluation results on the benefit and performance of the approach.
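The core idea of a configuration-aware CIA can be sketched in a few lines: propagate a change through a dependence graph, but follow only edges whose presence conditions a given product enables. The following Python sketch illustrates that principle only; it is not the paper's implementation, and the graph, features, and product configurations are invented for the example.

```python
# Minimal sketch of a configuration-aware change impact analysis:
# each dependence edge carries a presence condition (here, a feature name);
# a product is impacted only if the change reaches one of its elements
# via edges whose features that product has enabled.

from collections import deque

# Hypothetical system dependence graph: node -> [(successor, feature)]
SDG = {
    "calcSpeed": [("setMotor", "MotorCtrl"), ("logSpeed", "Logging")],
    "setMotor": [("driveAxis", "MotorCtrl")],
    "logSpeed": [],
    "driveAxis": [],
}

# Hypothetical product configurations: product -> enabled features
PRODUCTS = {
    "ProductA": {"MotorCtrl"},
    "ProductB": {"Logging"},
}

def impacted_nodes(changed, enabled):
    """Forward slice from the changed node, restricted to enabled features."""
    seen, queue = {changed}, deque([changed])
    while queue:
        node = queue.popleft()
        for succ, feature in SDG.get(node, []):
            if feature in enabled and succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return seen

for product, features in PRODUCTS.items():
    print(product, "impacted elements:", sorted(impacted_nodes("calcSpeed", features)))
```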
Asia and South Pacific Design Automation Conference | 2017
Oliver Keszocze; Zipeng Li; Andreas Grimmer; Robert Wille; Krishnendu Chakrabarty; Rolf Drechsler
Digital microfluidics is an emerging technology that provide fluidic-handling capabilities on a chip. One of the most important issues to be considered when conducting experiments on the corresponding biochips is the routing of droplets. A recent variant of biochips uses a micro-electrode-dot-array (MEDA) which yields a finer controllability of the droplets. Although this new technology allows for more advanced routing possibilities, it also poses new challenges to corresponding CAD methods. In contrast to conventional microfluidic biochips, droplets on MEDA biochips may move diagonally on the grid and are not bound to have the same shape during the entire experiment. In this work, we present an exact routing method that copes with these challenges while, at the same time, guarantees to find the minimal solution with respect to completion time. For the first time, this allows for evaluating the benefits of MEDA biochips compared to their conventional counterparts as well as a quality assessment of previously proposed routing methods in this domain.
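The minimality guarantee can be illustrated on a drastically simplified single-droplet instance: a breadth-first search over a grid in which, unlike on conventional biochips, diagonal steps are allowed. The Python sketch below is only a toy version of the problem (the paper's exact method handles multiple droplets and changing droplet shapes); the grid size and blocked cells are invented.

```python
# Toy single-droplet router on a MEDA-style grid: BFS returns a route
# with the minimal number of time steps, with diagonal moves allowed
# (8 neighbors instead of 4).

from collections import deque

WIDTH, HEIGHT = 8, 6
BLOCKED = {(3, 2), (3, 3), (3, 4)}          # hypothetical obstacles
MOVES = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1)
         if (dx, dy) != (0, 0)]             # diagonal moves included

def shortest_route(start, goal):
    """BFS: each layer is one time step, so the first path found is minimal."""
    parents = {start: None}
    queue = deque([start])
    while queue:
        cell = queue.popleft()
        if cell == goal:
            route = []
            while cell is not None:
                route.append(cell)
                cell = parents[cell]
            return route[::-1]
        x, y = cell
        for dx, dy in MOVES:
            nxt = (x + dx, y + dy)
            if (0 <= nxt[0] < WIDTH and 0 <= nxt[1] < HEIGHT
                    and nxt not in BLOCKED and nxt not in parents):
                parents[nxt] = cell
                queue.append(nxt)
    return None  # goal unreachable

print(shortest_route((0, 0), (7, 5)))
```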
Asia and South Pacific Design Automation Conference | 2017
Andreas Grimmer; Qin Wang; Hailong Yao; Tsung-Yi Ho; Robert Wille
Continuous-flow microfluidics has rapidly evolved in recent decades as a solution for automating laboratory procedures in molecular biology and biochemistry. Consequently, the physical design of the corresponding chips, i.e., the placement and routing of the involved components and channels, has received significant attention. Recently, several physical design solutions for this task have been presented. However, they often rely on general heuristics which traverse the search space in a rather arbitrary fashion and, additionally, consider placement and routing independently of each other. Consequently, the obtained results are often far from optimal. In this work, a methodology is proposed which aims at determining close-to-optimal physical designs for continuous-flow microfluidic biochips. To this end, we consider all valid solutions, or at least as many as possible. As this obviously yields significant complexity, solving engines are utilized to efficiently traverse the search space, and pruning schemes are proposed to reduce the search space without discarding too many promising solutions. Evaluations show that the proposed methodology determines optimal results for small experiments. For larger experiments, close-to-optimal results can be derived efficiently. Moreover, compared to the current state of the art, improvements of up to 1–2 orders of magnitude can be observed.
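How a solving engine traverses such a search space can be hinted at with a miniature placement instance. The sketch below uses the z3-solver Python bindings (an assumption; the abstract does not name a specific engine) to place two rectangular components on a small chip without overlap while minimizing the distance between their centers, a crude stand-in for channel length. Chip and component dimensions are invented.

```python
# Miniature placement as an SMT/OMT problem using z3 (pip install z3-solver):
# two components must fit on the chip and must not overlap; the solver
# minimizes the Manhattan distance between their centers.

from z3 import Int, Optimize, Or, If, sat

CHIP_W, CHIP_H = 20, 20
W1, H1 = 6, 4        # hypothetical component sizes
W2, H2 = 5, 5

x1, y1, x2, y2 = Int("x1"), Int("y1"), Int("x2"), Int("y2")
opt = Optimize()

# Each component must lie fully on the chip.
for x, y, w, h in [(x1, y1, W1, H1), (x2, y2, W2, H2)]:
    opt.add(x >= 0, y >= 0, x + w <= CHIP_W, y + h <= CHIP_H)

# Non-overlap: one component is left of, right of, above, or below the other.
opt.add(Or(x1 + W1 <= x2, x2 + W2 <= x1,
           y1 + H1 <= y2, y2 + H2 <= y1))

def abs_(e):
    return If(e >= 0, e, -e)

# Minimize Manhattan distance between centers (doubled to stay integral).
dist = abs_(2 * x1 + W1 - 2 * x2 - W2) + abs_(2 * y1 + H1 - 2 * y2 - H2)
opt.minimize(dist)

if opt.check() == sat:
    m = opt.model()
    print("placement:", {str(v): m[v] for v in (x1, y1, x2, y2)})
    print("2 * distance:", m.eval(dist))
```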
IEEE International Conference on Software Analysis, Evolution, and Reengineering | 2016
Andreas Grimmer; Florian Angerer; Herbert Prähofer; Paul Grünbacher
Static code analysis techniques are widely and successfully used for mainstream programming languages. However, domain-specific languages and company-specific variations of languages often lack the same level of support. An example is the domain of industrial automation, where programmable logic controller programs are mainly written in languages conforming to the IEC 61131-3 standard, a non-mainstream family of languages. This experience paper reports on the development of a program analysis framework for the IEC 61131-3 languages. We use OMG's Abstract Syntax Tree Meta-Model (ASTM) as an abstract representation and show our extensions of this model for representing the different IEC 61131-3 languages. From this representation, our approach generates Jimple code, the intermediate representation used by the Soot program analysis framework. We use Soot's standard analysis methods to compute a system dependence graph, which is then used for change impact analysis. We apply our approach to industrial-size product lines of our industry partner to demonstrate its correctness and performance. Finally, we discuss experiences and lessons learned intended for developers of program analysis methods for non-mainstream languages.
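The translation step (language-specific AST in, generic three-address intermediate representation out) is easy to miniaturize. The following Python sketch lowers a toy IEC 61131-3-style assignment into Jimple-like statements; it is a hypothetical illustration, not the paper's framework, and the node classes are invented.

```python
# Toy lowering of an ST-style expression AST into Jimple-like
# three-address statements: every intermediate result gets a temporary.

from dataclasses import dataclass
from itertools import count

@dataclass
class Var:
    name: str

@dataclass
class Const:
    value: int

@dataclass
class BinOp:
    op: str
    left: object
    right: object

_tmp = count(1)  # temporary-variable counter

def lower(node, stmts):
    """Return the operand naming node's value, emitting statements as needed."""
    if isinstance(node, Var):
        return node.name
    if isinstance(node, Const):
        return str(node.value)
    if isinstance(node, BinOp):
        left = lower(node.left, stmts)
        right = lower(node.right, stmts)
        temp = f"$t{next(_tmp)}"
        stmts.append(f"{temp} = {left} {node.op} {right}")
        return temp
    raise TypeError(f"unknown node: {node!r}")

# Hypothetical ST source: speed := (base + boost) * 2;
tree = BinOp("*", BinOp("+", Var("base"), Var("boost")), Const(2))
stmts = []
stmts.append(f"speed = {lower(tree, stmts)}")
print("\n".join(stmts))
```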
Software and Systems Modeling | 2018
Daniela Rabiser; Herbert Prähofer; Paul Grünbacher; Michael Petruzelka; Klaus Eder; Florian Angerer; Mario Kromoser; Andreas Grimmer
Feature models are frequently used to capture the knowledge about configurable software systems and product lines. However, feature modeling of large-scale systems is challenging because models are needed for diverse purposes. For instance, feature models can be used to reflect the perspectives of product management, technical solution architecture, or product configuration. Furthermore, models are required at different levels of granularity. Although numerous approaches and tools are available, it remains hard to define the purpose, scope, and granularity of feature models. This paper first reports results and experiences of an exploratory case study on developing feature models for two large-scale industrial automation software systems. We report results on the characteristics and modularity of the feature models, including metrics about model dependencies. Based on the findings from the study, we developed FORCE, a modeling language and tool environment that extends an existing feature modeling approach to support models for different purposes and at multiple levels, including mappings to the code base. We demonstrate the expressiveness and extensibility of our approach by applying it to the well-known Pick and Place Unit example and to an injection molding subsystem of an industrial product line. We further show how our approach supports consistency between different feature models. Our results and experiences show that considering the purpose and level of features is useful for modeling large-scale systems and that modeling dependencies between feature models is essential for developing a system-wide perspective.
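One finding, that dependencies between feature models matter for a system-wide perspective, can be made concrete with a small data structure: two feature models at different levels, linked by cross-model "requires" dependencies that a consistency check validates against a configuration. This is a minimal sketch; the models, features, and dependency semantics are invented and are not FORCE's actual notation.

```python
# Minimal sketch: two feature models (e.g., a product-management level
# and a technical-solution level) plus cross-model "requires" links,
# and a check that a combined configuration respects those links.

MODELS = {
    "management": {"RemoteService", "BasicReporting"},
    "technical":  {"OpcUaServer", "CsvExport"},
}

# Cross-model dependencies: selecting the left feature requires the right one.
REQUIRES = [
    ("RemoteService", "OpcUaServer"),
    ("BasicReporting", "CsvExport"),
]

def owning_model(feature):
    """Find the feature model a feature belongs to."""
    return next(m for m, feats in MODELS.items() if feature in feats)

def check(configuration):
    """Return violated cross-model dependencies, tagged with their models."""
    return [(f"{owning_model(a)}:{a}", f"{owning_model(b)}:{b}")
            for a, b in REQUIRES
            if a in configuration and b not in configuration]

config = {"RemoteService", "CsvExport"}      # OpcUaServer is missing
print("violations:", check(config))          # -> [('management:RemoteService', 'technical:OpcUaServer')]
```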
International Conference on Communications | 2017
Werner Haselmayr; Andrea Biral; Andreas Grimmer; Andrea Zanella; Andreas Springer; Robert Wille
On a droplet-based Lab-on-Chip (LoC) device, tiny volumes of fluid, so-called droplets, flow in channels of micrometer scale. The droplets contain chemical/biological samples that are processed by different modules on the LoC. In current solutions, an LoC is a single-purpose device designed for a specific application, which limits its flexibility. In order to realize a multi-purpose system, different modules are interconnected in a microfluidic network, yielding so-called Networked LoCs (NLoCs). In NLoCs, the droplets are routed to the desired modules by exploiting hydrodynamic forces. A well-established topology for NLoCs is the ring network. However, the addressing schemes provided so far in the literature only allow addressing multiple modules by re-injecting the droplet at the source each time, which is a very complex task and increases the risk of ruining the sample. In this work, we address this issue by revising the design of the network nodes, which include the modules. A novel configuration allows the droplet to be processed several times in cascade by different modules with a single injection. Simulating the trajectory of the droplets across the network confirms the validity of our approach.
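The benefit of the revised node design can be mimicked with a toy simulation: a droplet injected once into a ring visits a whole cascade of target modules instead of requiring re-injection for each one. The ring layout, the explicit node logic, and the module names below are simplifications invented for illustration; the actual paper realizes the switching hydrodynamically.

```python
# Toy model of a ring NLoC: a droplet carries a list of target modules
# and, at each node, either enters the module (if it is the next target)
# or bypasses it, so one injection can address a cascade of modules.

RING = ["heater", "mixer", "detector", "sorter"]   # modules along the ring

def route(targets, max_laps=3):
    """Simulate one droplet; return the modules that process it, in order."""
    pending = list(targets)
    processed = []
    for _ in range(max_laps):
        for module in RING:
            if pending and module == pending[0]:
                processed.append(module)    # droplet enters the module
                pending.pop(0)
            # otherwise the droplet bypasses this node
        if not pending:
            return processed
    raise RuntimeError(f"unreachable targets left: {pending}")

# One injection, three modules in cascade; a second lap is needed
# because 'heater' lies before 'detector' on the ring.
print(route(["mixer", "detector", "heater"]))
```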
Design, Automation, and Test in Europe | 2017
Andreas Grimmer; Werner Haselmayr; Andreas Springer; Robert Wille
Labs-on-Chips (LoCs) revolutionize conventional biochemical processes and may even replace laboratories by integrating and miniaturizing their functionalities on a single chip. In a promising and emerging realization of LoCs, small volumes of reagents, so-called droplets, transport the biological sample and flow in closed channels of sub-millimeter diameter. This realization is called Networked Labs-on-Chips (NLoCs). The architecture of an NLoC defines the different paths through which the droplets can flow. These paths are realized by splitting channels into multiple successor channels, so-called bifurcations. However, there is no guarantee that the architecture indeed allows droplets to be routed along the desired paths and, hence, that the intended experiment is executed correctly. In this work, we present the first automatic solution for verifying whether an NLoC architecture allows the droplets to be routed correctly. Our evaluations demonstrate the applicability and importance of the proposed solution on a set of NLoC architectures.
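At its core, such a verification question asks whether every intended droplet path is realizable in the channel graph formed by the bifurcations. The sketch below reduces this to plain reachability over a hypothetical directed channel graph; the paper's actual method additionally has to account for the hydrodynamics that decide which successor channel a droplet takes.

```python
# Simplified architecture check: every intended path (sequence of
# components) must be realizable as a walk in the directed channel graph.

CHANNELS = {                      # hypothetical NLoC architecture
    "inlet":  ["split1"],
    "split1": ["modA", "modB"],   # bifurcation
    "modA":   ["merge1"],
    "modB":   ["merge1"],
    "merge1": ["outlet"],
    "outlet": [],
}

def reachable(src, dst):
    """Depth-first reachability in the channel graph."""
    stack, seen = [src], set()
    while stack:
        node = stack.pop()
        if node == dst:
            return True
        if node not in seen:
            seen.add(node)
            stack.extend(CHANNELS.get(node, []))
    return False

def path_realizable(path):
    """A path is realizable if each consecutive leg connects in the graph."""
    return all(reachable(a, b) for a, b in zip(path, path[1:]))

print(path_realizable(["inlet", "modA", "outlet"]))   # True
print(path_realizable(["inlet", "modA", "modB"]))     # False: no channel back
```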
Emerging Technologies and Factory Automation | 2014
Herbert Prähofer; Roland Schatz; Andreas Grimmer
Dynamic program analysis is a technique which records a program execution for the purpose of analyzing its behavior and building high-level models and views. This paper presents an approach for building a high-level model of the behavior of a PLC program component as observed in a program execution. Based on a deterministic record-and-replay technique, a model is synthesized which represents the transition behavior, timing information, and input/output behavior of the component. This model can then be used to check other executions of the same or similar programs for compliance with the model. We present the synthesis techniques and two variants of trace analysis algorithms.
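The synthesis and compliance steps can be mimicked in a few lines: learn the observed state transitions from recorded traces, then flag any transition in a new trace the model has never seen. The states and traces below are invented, and this sketch deliberately ignores the timing and I/O behavior the paper's models also capture.

```python
# Sketch: synthesize a transition model from recorded executions and
# check a later execution for compliance with that model.

def synthesize(traces):
    """Collect the set of state transitions observed across all traces."""
    model = set()
    for trace in traces:
        model.update(zip(trace, trace[1:]))
    return model

def check(model, trace):
    """Return the transitions in the trace that the model never observed."""
    return [t for t in zip(trace, trace[1:]) if t not in model]

recorded = [
    ["Idle", "Heating", "Running", "Idle"],
    ["Idle", "Heating", "Error", "Idle"],
]
model = synthesize(recorded)

replay = ["Idle", "Running", "Idle"]          # skips the Heating phase
print("violations:", check(model, replay))    # -> [('Idle', 'Running')]
```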
International Conference on Nanoscale Computing and Communication | 2018
Medina Hamidović; Werner Haselmayr; Andreas Grimmer; Robert Wille; Andreas Springer
In this work, we investigate passive droplet routing in microfluidic bus networks as a promising approach for realizing programmable and flexible microfluidic devices. Two main concepts for passive droplet routing have been established in the past: switching by distance and switching by size. We deliver a unified analysis and comparison of both methods in terms of network size and achievable throughput, identify the key limiting parameters for both metrics, and provide application-specific design guidelines.
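The throughput comparison essentially comes down to how much spacing each switching concept demands between consecutive droplets as the network grows. The toy model below only illustrates that relationship; the linear spacing formulas and all numbers are invented assumptions, not values or results from the paper.

```python
# Illustrative throughput model: throughput is bounded by the minimum
# time spacing that must separate consecutive droplets so that passive
# switching still works. All parameters below are made up.

def throughput(min_spacing_s):
    """Droplets per second under a given minimum droplet spacing."""
    return 1.0 / min_spacing_s

# Assumed spacing requirements as the network grows (more nodes means
# larger spacing so droplets do not disturb each other's switching).
network_sizes = [2, 4, 8, 16]
spacing_by_distance = {n: 0.5 * n for n in network_sizes}    # grows with size
spacing_by_size = {n: 1.0 + 0.1 * n for n in network_sizes}  # flatter growth

for n in network_sizes:
    print(f"{n:2d} nodes | by distance: {throughput(spacing_by_distance[n]):.2f}/s"
          f" | by size: {throughput(spacing_by_size[n]):.2f}/s")
```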
RSC Advances | 2018
Andreas Grimmer; Xiaoming Chen; Medina Hamidović; Werner Haselmayr; Carolyn L. Ren; Robert Wille
The functional performance of passively operated droplet microfluidics is sensitive to the dimensions of the channel network, the fabrication precision, and the applied pressure, because the entire network is coupled together. In particular, the local and global hydrodynamic resistance changes caused by droplets make it challenging to develop a robust microfluidic design, since the designer has to consider many interdependencies that all affect the intended behavior. After the design phase, functionality is usually validated by fabricating a prototype and testing it in physical experiments. If the functionality is not implemented as desired, the designer has to go back, revise the design, and repeat the fabrication as well as the experiments. This design process, based on multiple iterations of refining and testing, incurs high costs, both financially and in terms of time. In this work, we show how a significant share of those costs can be avoided by applying simulation before fabrication. To this end, we demonstrate by means of a case study how simulations on the 1D circuit analysis model can support the design process, comparing the design process with and without simulation. As a case study, we use a microfluidic network capable of trapping and merging droplets with different contents on demand. The case study demonstrates how simulation can help validate the derived design by considering all local and global hydrodynamic resistance changes. Moreover, the simulations even allow exploring further designs which had not been considered before due to the high costs.
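The 1D circuit analysis model treats the channel network like a resistive circuit: pressure corresponds to voltage, volumetric flow to current, and each channel has a hydrodynamic resistance that a droplet inside it increases. The sketch below applies this analogy to a single two-branch junction and shows how a droplet in one branch redirects flow to the other; the resistance and flow values are assumed for illustration, not taken from the paper.

```python
# 1D circuit analogy for a two-branch microfluidic junction: flow splits
# between parallel branches in inverse proportion to their hydrodynamic
# resistances (like current in a resistor network). A droplet sitting in
# a branch raises that branch's resistance.

R_BRANCH_A = 2.0e12   # hydrodynamic resistance in Pa*s/m^3 (assumed)
R_BRANCH_B = 3.0e12   # (assumed)
R_DROPLET = 1.5e12    # extra resistance a droplet adds (assumed)
Q_TOTAL = 1.0e-9      # total volumetric flow in m^3/s (assumed)

def split(r_a, r_b, q_total):
    """Flow through each parallel branch; Q_i is inversely proportional to R_i."""
    q_a = q_total * r_b / (r_a + r_b)
    q_b = q_total * r_a / (r_a + r_b)
    return q_a, q_b

# Without a droplet, branch A (lower resistance) takes more flow ...
print("empty:  ", split(R_BRANCH_A, R_BRANCH_B, Q_TOTAL))
# ... but a droplet in branch A shifts the split towards branch B.
print("droplet:", split(R_BRANCH_A + R_DROPLET, R_BRANCH_B, Q_TOTAL))
```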