A.J.C. van Gemund
Delft University of Technology
Publications
Featured research published by A.J.C. van Gemund.
Testing: Academic and Industrial Conference Practice and Research Techniques - MUTATION (TAICPART-MUTATION 2007) | 2007
Peter Zoeteweij; A.J.C. van Gemund
Spectrum-based fault localization shortens the test-diagnose-repair cycle by reducing the debugging effort. As a lightweight automated diagnosis technique it can easily be integrated with existing testing schemes. However, as no model of the system is taken into account, its diagnostic accuracy is inherently limited. Using the Siemens Set benchmark, we investigate this diagnostic accuracy as a function of several parameters (such as quality and quantity of the program spectra collected during the execution of the system), some of which directly relate to test design. Our results indicate that the superior performance of a particular similarity coefficient, used to analyze the program spectra, is largely independent of test design. Furthermore, near-optimal diagnostic accuracy (exonerating about 80% of the blocks of code on average) is already obtained for low-quality error observations and limited numbers of test cases. The influence of the number of test cases is of primary importance for continuous (embedded) processing applications, where only limited observation horizons can be maintained.
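For illustration, the sketch below ranks code blocks from program spectra using a similarity coefficient. The Ochiai coefficient shown here is one coefficient commonly studied in this line of work and is used only as an example (the abstract does not name the coefficient it found superior); the data and function name are hypothetical.

```python
from math import sqrt

def ochiai_ranking(spectra, errors):
    """Rank code blocks by the similarity of their hit spectra to the error vector.

    spectra[i][j] is 1 if block j was exercised in test run i, else 0.
    errors[i]     is 1 if test run i failed, else 0.
    Returns block indices sorted from most to least suspicious.
    """
    num_runs, num_blocks = len(spectra), len(spectra[0])
    total_failed = sum(errors)
    scores = []
    for j in range(num_blocks):
        hit = sum(spectra[i][j] for i in range(num_runs))
        failed_and_hit = sum(1 for i in range(num_runs) if errors[i] and spectra[i][j])
        denom = sqrt(total_failed * hit)
        scores.append((failed_and_hit / denom if denom else 0.0, j))
    return [block for _, block in sorted(scores, reverse=True)]

# Toy example: block 1 is hit in both failing runs and not in the passing run.
spectra = [[1, 1, 0],   # run 0 (failed)
           [0, 1, 1],   # run 1 (failed)
           [1, 0, 1]]   # run 2 (passed)
errors = [1, 1, 0]
print(ochiai_ranking(spectra, errors))  # [1, 2, 0]: block 1 is the most suspicious
```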
international conference on parallel processing | 2001
A. Radulescu; A.J.C. van Gemund
A relatively new trend in parallel program scheduling is so-called mixed task and data scheduling. It has been shown that mixing task and data parallelism to solve large computational applications often yields better speedups compared to applying either pure task parallelism or pure data parallelism. In this paper we present a new compile-time heuristic, named Critical Path and Allocation (CPA), for scheduling data-parallel task graphs. Designed to have a very low cost, its complexity is lower than that of existing approaches, such as TSAS, TwoL or CPR, by one order of magnitude or more. Experimental results based on graphs derived from real problems as well as synthetic graphs show that the performance loss of CPA relative to the above algorithms does not exceed 50%. These results are also confirmed by performance measurements of two real applications (i.e., complex matrix multiplication and Strassen matrix multiplication) running on a cluster of workstations.
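The CPA heuristic itself also decides how many processors to allocate to each data-parallel task; the minimal sketch below only illustrates the critical-path (bottom-level) computation that such priority-based list heuristics build on. The toy graph and all identifiers are hypothetical, not taken from the paper.

```python
def bottom_levels(tasks, successors, cost):
    """Bottom level of each task in a DAG: the length of the longest path from
    that task to an exit task. Tasks must be given in topological order."""
    bl = {}
    for t in reversed(tasks):  # exit tasks first
        bl[t] = cost[t] + max((bl[s] for s in successors.get(t, [])), default=0)
    return bl

# Toy task graph: A -> B, A -> C, B -> D, C -> D
tasks = ["A", "B", "C", "D"]
successors = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
cost = {"A": 2, "B": 3, "C": 1, "D": 2}
print(bottom_levels(tasks, successors, cost))
# {'D': 2, 'C': 3, 'B': 5, 'A': 7} -> the critical path A-B-D has length 7
```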
international parallel and distributed processing symposium | 2001
A. Radulescu; Cristina Nicolescu; A.J.C. van Gemund; Pieter P. Jonker
It is well known that mixing task and data parallelism to solve large computational applications often yields better speedups compared to applying either pure task parallelism or pure data parallelism. Typically, the applications are modeled in terms of a dependence graph of coarse-grain data-parallel tasks, called a data-parallel task graph. In this paper we present a new compile-time heuristic, named Critical Path Reduction (CPR), for scheduling data-parallel task graphs. Experimental results based on graphs derived from real problems as well as synthetic graphs show that CPR achieves higher speedup compared to other well-known existing scheduling algorithms, at the expense of a somewhat higher cost. These results are also confirmed by performance measurements of two real applications (i.e., complex matrix multiplication and Strassen matrix multiplication) running on a cluster of workstations.
IEEE Transactions on Parallel and Distributed Systems | 2002
A. Radulescu; A.J.C. van Gemund
In compile-time task scheduling for distributed-memory systems, list scheduling is generally accepted as an attractive approach, since it pairs low cost with good results. List-scheduling algorithms schedule tasks in order of their priority. This priority can be computed either (1) statically, before the scheduling, or (2) dynamically, during the scheduling. In this paper, we show that list scheduling with statically computed priorities (LSSP) can be performed at a significantly lower cost than existing approaches, without sacrificing performance. Our approach is general, i.e., it can be applied to any LSSP algorithm. The low complexity is achieved by using low-complexity methods for the most time-consuming parts of list-scheduling algorithms, i.e., processor selection and task selection, while preserving the criteria used in the original algorithms. We exemplify our method by applying it to the MCP (Modified Critical Path) algorithm. Using an extension of this method, we can also reduce the time complexity of a particular class of list scheduling with dynamic priorities (LSDP), including algorithms such as DLS (Dynamic Level Scheduling), ETF (Earliest Task First) and ERT (Earliest Ready Task). Our results confirm that the modified versions of the list-scheduling algorithms obtain a performance comparable to their original versions, yet at a significantly lower cost. We also show that the modified versions of the list-scheduling algorithms consistently outperform multi-step algorithms such as DSC-LLB (Dynamic Sequence Clustering with List Load Balancing), which also have a higher complexity, and that they clearly outperform algorithms in the same complexity class, such as CPM (Critical Path Method).
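As a rough illustration of the LSSP structure discussed above, the sketch below schedules tasks in decreasing order of a statically computed priority (e.g. the bottom level) and places each task on the processor where it can start earliest, ignoring communication costs. It uses a naive O(P) processor selection per task; the paper's contribution is precisely to perform task and processor selection at lower complexity while preserving the original selection criteria. All identifiers are illustrative.

```python
import heapq

def list_schedule(tasks, preds, succs, cost, priority, num_procs):
    """Minimal list scheduling with statically computed priorities."""
    proc_free = [0.0] * num_procs                      # time each processor becomes idle
    finish = {}                                        # task -> finish time
    waiting = {t: len(preds.get(t, [])) for t in tasks}
    ready = [(-priority[t], t) for t in tasks if waiting[t] == 0]
    heapq.heapify(ready)                               # highest priority first
    while ready:
        _, t = heapq.heappop(ready)
        data_ready = max((finish[p] for p in preds.get(t, [])), default=0.0)
        proc = min(range(num_procs), key=lambda p: max(proc_free[p], data_ready))
        finish[t] = max(proc_free[proc], data_ready) + cost[t]
        proc_free[proc] = finish[t]
        for s in succs.get(t, []):                     # release successors
            waiting[s] -= 1
            if waiting[s] == 0:
                heapq.heappush(ready, (-priority[s], s))
    return finish                                      # makespan is max(finish.values())

# Toy graph A -> {B, C} -> D with bottom-level priorities, on two processors:
preds = {"B": ["A"], "C": ["A"], "D": ["B", "C"]}
succs = {"A": ["B", "C"], "B": ["D"], "C": ["D"]}
cost = {"A": 2, "B": 3, "C": 1, "D": 2}
priority = {"A": 7, "B": 5, "C": 3, "D": 2}
print(list_schedule(["A", "B", "C", "D"], preds, succs, cost, priority, 2))
# {'A': 2.0, 'B': 5.0, 'C': 3.0, 'D': 7.0} -> makespan 7
```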
IEEE Transactions on Parallel and Distributed Systems | 2003
A.J.C. van Gemund
Performance prediction is an important engineering tool that provides valuable feedback on design choices in program synthesis and machine architecture development. We present an analytic performance modeling approach aimed at minimizing prediction cost, while providing a prediction accuracy that is sufficient to enable major code and data mapping decisions. Our approach is based on a performance simulation language called PAMELA. Apart from simulation, PAMELA features a symbolic analysis technique that enables PAMELA models to be compiled into symbolic performance models that trade prediction accuracy for the lowest possible solution cost. We demonstrate our approach through a large number of theoretical and practical modeling case studies, including six parallel programs and two distributed-memory machines. The average prediction error of our approach is less than 10 percent, while the average worst-case error is limited to 50 percent. It is shown that this accuracy is sufficient to correctly select the best coding or partitioning strategy. For programs expressed in a high-level, structured programming model, such as data-parallel programs, symbolic performance modeling can be entirely automated. We report on experiments with a PAMELA model generator built within a data-parallel compiler for distributed-memory machines. Our results show that with negligible program annotation, symbolic performance models are automatically compiled in seconds, while their solution cost is on the order of milliseconds.
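To give a flavour of what a compiled symbolic performance model evaluates to, the sketch below uses a generic max(single-task term, work-per-processor term) bound for a data-parallel section. This is only an illustration of the style of closed-form expression such symbolic analysis produces, under assumed parameters; it does not reproduce the actual PAMELA compilation rules.

```python
from math import ceil

def par_section_time(n_tasks, task_time, n_procs):
    """Lower-bound estimate for n_tasks independent tasks of duration task_time
    on n_procs processors: the larger of a single task's duration (critical-path
    term) and the per-processor share of the total work (contention term)."""
    return max(task_time, ceil(n_tasks / n_procs) * task_time)

# A 1000-iteration data-parallel loop with 2 ms iterations on 64 processors:
print(par_section_time(1000, 2e-3, 64))  # 0.032 s: 16 iterations per processor
```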
engineering of computer based systems | 2007
Peter Zoeteweij; R. Golsteijn; A.J.C. van Gemund
Automated diagnosis of errors detected during software testing can improve the efficiency of the debugging process, and can thus help to make software more reliable. In this paper we discuss the application of a specific automated debugging technique, namely software fault localization through the analysis of program spectra, in the area of embedded software in high-volume consumer electronics products. We discuss why the technique is particularly well suited for this application domain, and through experiments on an industrial test case we demonstrate that it can lead to highly accurate diagnoses of realistic errors.
IEEE Transactions on Reliability | 2012
A.J.C. van Gemund; Gerard L. Reijns
k-out-of-n systems with cold standby units are typically studied for unit lifetime distributions that allow analytical tractability. Often, however, these distributions differ significantly from reality. In this paper, we present an analytical approach to compute the mean failure time for k-out-of-n systems with a single cold standby unit for the wide class of lifetime distributions that can be captured by the Pearson distribution. The method requires the first four statistical moments of the unit's lifetime distribution to be given, and computes the mean failure time using the Pearson distribution as an intermediate vehicle during the numerical integration. Experimental results for various instances of the Weibull distribution show that the numerical accuracy of the approach is high, with less than 0.5 percent error across a large range of k-out-of-n systems.
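The method takes the first four moments of the unit lifetime distribution as input. As an illustration, the sketch below computes those moments analytically for a Weibull lifetime, the distribution family used in the paper's experiments; the Pearson fit and the numerical integration themselves are not reproduced, and the function name is hypothetical.

```python
from math import gamma

def weibull_moments(scale, shape):
    """Mean, variance, skewness and kurtosis of a Weibull(scale, shape) lifetime,
    computed from the raw moments E[T^k] = scale^k * Gamma(1 + k/shape)."""
    raw = [scale**k * gamma(1 + k / shape) for k in range(1, 5)]
    mean = raw[0]
    var = raw[1] - mean**2
    mu3 = raw[2] - 3 * mean * raw[1] + 2 * mean**3
    mu4 = raw[3] - 4 * mean * raw[2] + 6 * mean**2 * raw[1] - 3 * mean**4
    return mean, var, mu3 / var**1.5, mu4 / var**2

print(weibull_moments(1.0, 1.5))  # mean is about 0.903 for a unit-scale Weibull, shape 1.5
```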
international conference on parallel processing | 1999
A. Radulescu; A.J.C. van Gemund
This paper describes a novel compile-time list-based task scheduling algorithm for distributed-memory systems, called Fast Load Balancing (FLB). Compared to other typical list scheduling heuristics, FLB drastically reduces scheduling time complexity to O(V(log(W)+log(P))+E), where V and E are the number of tasks and edges in the task graph, respectively, W is the task graph width and P is the number of processors. It is proven that FLB is essentially equivalent to the existing ETF scheduling algorithm of O(W(E+V)P) time complexity. Experiments also show that FLB performs comparably to other one-step algorithms of much higher cost, such as MCP. Moreover, FLB consistently outperforms multi-step algorithms such as DSC-LLB that also have higher cost.
autotestcon | 2005
Jurryt Pietersma; A.J.C. van Gemund; A. Bos
Fault diagnosis is crucial for the reduction of test and integration time and the downtime of complex systems. In this paper, we present a model-based approach to derive tests and test sequences for sequential fault diagnosis. This approach offers advantages over methods that are based on test coverage of explicit fault states, represented in matrix form. Functional models are more easily adapted to design changes and constitute a complete information source for test selection at a given abstraction level. We introduce our approach and implementation with a theoretical example. We demonstrate the practical use in three case studies and obtain cost reductions of up to 59% compared to the matrix-based approach.
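The paper derives its tests from functional models of the system; the sketch below only illustrates the general idea behind sequential diagnosis, greedily picking the next test that best splits the current fault hypotheses (which maximizes the expected information gain when test outcomes are deterministic given the fault). All names, outcomes and probabilities are hypothetical and not taken from the paper.

```python
from math import log2

def next_test(p_fault, outcome_model, tests):
    """Pick the test whose predicted outcome has the highest entropy over the
    current fault hypotheses, i.e. the test that splits them most evenly.

    p_fault: fault hypothesis -> prior probability (summing to 1).
    outcome_model[(test, fault)]: predicted 'pass'/'fail' outcome of test under fault.
    """
    def outcome_entropy(test):
        p_fail = sum(p for f, p in p_fault.items() if outcome_model[(test, f)] == "fail")
        p_pass = 1.0 - p_fail
        return -sum(p * log2(p) for p in (p_fail, p_pass) if p > 0)
    return max(tests, key=outcome_entropy)

p_fault = {"F1": 0.5, "F2": 0.3, "F3": 0.2}
outcome_model = {("tA", "F1"): "fail", ("tA", "F2"): "pass", ("tA", "F3"): "pass",
                 ("tB", "F1"): "fail", ("tB", "F2"): "fail", ("tB", "F3"): "fail"}
print(next_test(p_fault, outcome_model, ["tA", "tB"]))  # 'tA': it splits the hypotheses
```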
secure software integration and reliability improvement | 2008
Peter Zoeteweij; Jurryt Pietersma; Alexander Feldman; A.J.C. van Gemund
Automated fault diagnosis is emerging as an important factor in achieving an acceptable and competitive cost/dependability ratio for embedded systems. In this paper, we survey model-based diagnosis and spectrum-based fault localization, two state-of-the-art approaches to fault diagnosis that jointly cover the combination of hardware and control software typically found in embedded systems. We present an introduction to the field, discuss our recent research results, and report on their application to industrial test cases. In addition, we propose to combine the two techniques into a novel, dynamic modeling approach to software fault localization.