
Publication


Featured research published by Jaco Geldenhuys.


Foundations of Software Engineering | 2012

Green: reducing, reusing and recycling constraints in program analysis

Willem Visser; Jaco Geldenhuys; Matthew B. Dwyer

The analysis of constraints plays an important role in many aspects of software engineering, for example, constraint satisfiability checking is central to symbolic execution. However, the norm is to recompute results in each analysis. We propose a different approach where every call to the solver is wrapped in a check to see whether the result is already available. While many tools use some form of results caching, the novelty of our approach is the persistence of results across runs, across programs being analyzed, across different analyses and even across physical location. Achieving such reuse requires that constraints be distilled into their essential parts and represented in a canonical form. In this paper, we describe the key ideas of our approach and its implementation, the Green solver interface, which reduces constraints to a simple form, allows for reuse of constraint solutions within an analysis run, and allows for recycling constraint solutions produced in one analysis run for use in other analysis runs. We describe how we integrated Green into two existing symbolic execution tools and demonstrate the reuse we achieve in the different settings.
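The reduce-and-reuse idea can be sketched in a few lines: distill a conjunction of constraints into a canonical form, then use that form as a cache key so equivalent queries, even from different runs or programs, hit the same stored answer. This is a minimal illustration and not the actual Green interface; `canonicalize` and `CachingSolver` are hypothetical names, and a real deployment would back the store with a persistent key-value service rather than a dict.

```python
def canonicalize(conjuncts):
    """Sort the conjuncts and rename variables in order of first appearance,
    so syntactically different but equivalent queries share one cache key."""
    renaming = {}
    renamed = []
    for c in sorted(conjuncts):
        parts = []
        for tok in c.split():
            if tok.isidentifier():          # a variable name
                renaming.setdefault(tok, f"v{len(renaming)}")
                parts.append(renaming[tok])
            else:                           # an operator or literal
                parts.append(tok)
        renamed.append(" ".join(parts))
    return " && ".join(renamed)

class CachingSolver:
    """Wrap an expensive solver call with a canonical-form cache."""
    def __init__(self, solve_fn, store=None):
        self.solve_fn = solve_fn                       # underlying solver
        self.store = store if store is not None else {}  # persistable map
        self.hits = 0

    def is_sat(self, conjuncts):
        key = canonicalize(conjuncts)
        if key in self.store:
            self.hits += 1                 # reuse: no solver call needed
        else:
            self.store[key] = self.solve_fn(conjuncts)
        return self.store[key]
```

With this scheme, `["x > 0", "x < 10"]` and `["y < 10", "y > 0"]` canonicalize to the same key, so the second query is answered from the store without invoking the solver.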


Automated Software Engineering | 2013

Symbolic PathFinder: integrating symbolic execution with model checking for Java bytecode analysis

Corina S. Păsăreanu; Willem Visser; David H. Bushnell; Jaco Geldenhuys; Peter C. Mehlitz; Neha Rungta

Symbolic PathFinder (SPF) is a software analysis tool that combines symbolic execution with model checking for automated test case generation and error detection in Java bytecode programs. In SPF, programs are executed on symbolic inputs representing multiple concrete inputs and the values of program variables are represented by expressions over those symbolic inputs. Constraints over these expressions are generated from the analysis of different paths through the program. The constraints are solved with off-the-shelf solvers to determine path feasibility and to generate test inputs. Model checking is used to explore different symbolic program executions, to systematically handle aliasing in the input data structures, and to analyze the multithreading present in the code. SPF incorporates techniques for handling input data structures, strings, and native calls to external libraries, as well as for solving complex mathematical constraints. We describe the tool and its application at NASA, in academia, and in industry.


International Symposium on Software Testing and Analysis | 2012

Probabilistic symbolic execution

Jaco Geldenhuys; Matthew B. Dwyer; Willem Visser

The continued development of efficient automated decision procedures has spurred the resurgence of research on symbolic execution over the past decade. Researchers have applied symbolic execution to a wide range of software analysis problems including: checking programs against contract specifications, inferring bounds on worst-case execution performance, and generating path-adequate test suites for widely used library code. In this paper, we explore the adaptation of symbolic execution to perform a more quantitative type of reasoning --- the calculation of estimates of the probability of executing portions of a program. We present an extension of the widely used Symbolic PathFinder symbolic execution system that calculates path probabilities. We exploit state-of-the-art computational algebra techniques to count the number of solutions to path conditions, yielding exact results for path probabilities. To mitigate the cost of using these techniques, we present two optimizations, PC slicing and count memoization, that significantly reduce the cost of probabilistic symbolic execution. Finally, we present the results of an empirical evaluation applying our technique to challenging library container implementations and illustrate the benefits that adding probabilities to program analyses may offer.
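The core quantity here is the number of inputs satisfying a path condition divided by the size of the input domain. The sketch below obtains this count by brute-force enumeration over a small finite domain; the paper instead computes the counts with computational-algebra model-counting techniques, so treat `path_probability` as an illustration of what is being counted, not of how the paper counts it.

```python
from itertools import product

def path_probability(path_condition, variables, domain):
    """Probability of taking a path, assuming inputs are uniform over a
    finite domain: the fraction of assignments satisfying the path
    condition. Brute-force counting, for illustration only."""
    total = 0
    satisfying = 0
    for values in product(domain, repeat=len(variables)):
        total += 1
        if path_condition(dict(zip(variables, values))):
            satisfying += 1
    return satisfying / total

# For inputs x, y uniform over 0..9, the path guarded by x > y:
# 45 of the 100 input pairs satisfy it, so p == 0.45.
p = path_probability(lambda env: env["x"] > env["y"], ["x", "y"], range(10))
```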


International Workshop on Model Checking Software | 2006

Larger automata and less work for LTL model checking

Jaco Geldenhuys; Henri Hansen

Many different automata and algorithms have been investigated in the context of automata-theoretic LTL model checking. This article compares the behaviour of two variations on the widely used Büchi automaton, namely (i) a Büchi automaton where states are labelled with atomic propositions and transitions are unlabelled, and (ii) a form of testing automaton that can only observe changes in state propositions and makes use of special livelock acceptance states. We describe how these variations can be generated from standard Büchi automata, and outline an SCC-based algorithm for verification with testing automata. The variations are compared to standard automata in experiments with both random and human-generated Kripke structures and LTL_X formulas, using SCC-based algorithms as well as a recent, improved version of the classic nested search algorithm. The results show that SCC-based algorithms outperform their nested search counterpart, but that the biggest improvements come from using the variant automata. Much work has been done on the generation of small automata, but small automata do not necessarily lead to small products when combined with the system being verified. We investigate the underlying factors for the superior performance of the new variations.


South African Institute of Computer Scientists and Information Technologists | 2012

Symbolic execution of programs with strings

Gideon Redelinghuys; Willem Visser; Jaco Geldenhuys

Symbolic execution has long been a popular technique for automated test generation and for error detection in complex code. Most of the focus has however been on programs manipulating integers, booleans, and references in object oriented programs. Recently researchers have started looking at programs that do lots of string processing; this is motivated by the popularity of the web and the risk that errors in such programs may lead to security violations. Attempts to extend symbolic execution to the domain of strings have mainly been divided into one of two camps: automata-based approaches and approaches based on efficient bitvector analysis. Here we investigate these two approaches in one setting: the symbolic execution framework of Java PathFinder. First we describe the implementations of both approaches and then do an extensive evaluation to show under what circumstances each approach performs well (or not so well). We also illustrate the usefulness of the symbolic execution of strings by finding errors in real-world examples.
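The bitvector-style approach can be caricatured in a few lines: bound the string length, treat each character position as a variable over a small alphabet, and find an assignment satisfying the branch's path condition. Real implementations hand the same per-character encoding to an SMT solver rather than enumerating; `solve_string` below is a hypothetical helper, not part of Java PathFinder.

```python
from itertools import product

def solve_string(length, predicate, alphabet="ab"):
    """Find a string of the given length over a tiny alphabet that
    satisfies the branch predicate, or None if the path is infeasible
    at this length bound. Enumeration stands in for an SMT query."""
    for chars in product(alphabet, repeat=length):
        s = "".join(chars)
        if predicate(s):
            return s
    return None

# True-branch path condition of:  if s.startswith("ab") and s.endswith("ba"):
witness = solve_string(4, lambda s: s.startswith("ab") and s.endswith("ba"))
```

A `None` result at every length up to the bound is how a bounded analysis reports the branch unreachable; a witness string is directly usable as a test input.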


IEEE Transactions on Software Engineering | 2015

BLISS: Improved Symbolic Execution by Bounded Lazy Initialization with SAT Support

Nicolás Rosner; Jaco Geldenhuys; Nazareno Aguirre; Willem Visser; Marcelo F. Frias

Lazy Initialization (LI) allows symbolic execution to effectively deal with heap-allocated data structures, thanks to a significant reduction in spurious and redundant symbolic structures. Bounded lazy initialization (BLI) improves on LI by taking advantage of precomputed relational bounds on the interpretation of class fields in order to reduce the number of spurious structures even further. In this paper we present bounded lazy initialization with SAT support (BLISS), a novel technique that refines the search for valid structures during the symbolic execution process. BLISS builds upon BLI, extending it with field bound refinement and satisfiability checks. Field bounds are refined while a symbolic structure is concretized, avoiding cases that, due to the concrete part of the heap and the field bounds, can be deemed redundant. Satisfiability checks on refined symbolic heaps allow us to prune these heaps as soon as they are identified as infeasible, i.e., as soon as it can be confirmed that they cannot be extended to any valid concrete heap. Compared to LI and BLI, BLISS reduces the time required by LI by up to four orders of magnitude for the most complex data structures. Moreover, the number of partially symbolic structures obtained by exploring program paths is reduced by BLISS by over 50 percent, with reductions of over 90 percent in some cases (compared to LI). BLISS uses less memory than LI and BLI, which enables the exploration of states unreachable by previous techniques.
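The payoff of pruning partially built structures as soon as they become infeasible shows up even in a toy setting. Below, length-n sequences stand in for partially initialized heaps and a feasibility check stands in for BLISS's satisfiability queries over refined field bounds: any prefix that fails the check is discarded together with all of its extensions. This illustrates the early-pruning idea only; it is not the BLISS algorithm, and all names are hypothetical.

```python
def generate_structures(n, values, feasible):
    """Bounded-exhaustive generation of length-n sequences, pruning any
    partial structure the feasibility check rejects. Returns the valid
    complete structures and the number of search-tree nodes visited."""
    complete, visited = [], 0

    def extend(partial):
        nonlocal visited
        visited += 1
        if not feasible(partial):
            return                  # prune: no extension can become valid
        if len(partial) == n:
            complete.append(tuple(partial))
            return
        for v in values:
            extend(partial + [v])

    extend([])
    return complete, visited

# Sorted sequences over {1,2,3} of length 3: pruning visits 31 nodes
# where an unpruned search would visit 40.
sorted_ok = lambda p: all(a <= b for a, b in zip(p, p[1:]))
lists, visited = generate_structures(3, [1, 2, 3], sorted_ok)
```

The gap widens rapidly with structure size, which is the effect behind the orders-of-magnitude speedups reported for the most complex data structures.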


Journal of Informetrics | 2016

Evaluating paper and author ranking algorithms using impact and contribution awards

Marcel Dunaiski; Willem Visser; Jaco Geldenhuys

In the work presented in this paper, we analyse ranking algorithms that can be applied to bibliographic citation networks and rank academic entities such as papers and authors. We evaluate how well these algorithms identify important and high-impact entities.


NASA Formal Methods Symposium | 2013

Bounded Lazy Initialization

Jaco Geldenhuys; Nazareno Aguirre; Marcelo F. Frias; Willem Visser

Tight field bounds have been successfully used in the context of bounded-exhaustive bug finding. They allow one to check the correctness of, or find bugs in, code manipulating data structures whose size previously made this kind of analysis infeasible. In this article we address the question of whether tight field bounds can also contribute to a significant speed-up for symbolic execution when using a system such as Symbolic PathFinder. Specifically, we propose to change Symbolic PathFinder’s lazy initialization mechanism to take advantage of tight field bounds. While a straightforward approach that takes into account tight field bounds works well for small scopes, the lack of symmetry-breaking significantly affects its performance. We then introduce a new technique that generates only non-isomorphic structures and consequently is able to consider fewer structures and to execute faster than lazy initialization.


Automated Technology for Verification and Analysis | 2009

Exploring the Scope for Partial Order Reduction

Jaco Geldenhuys; Henri Hansen; Antti Valmari

Partial order reduction methods combat state explosion by exploring only a part of the full state space. In each state a subset of enabled transitions is selected using well-established criteria. Typically such criteria are based on an upper approximation of dependencies between transitions. An additional heuristic is needed to ensure that currently disabled transitions stay disabled in the discarded execution paths. Usually rather coarse approximations and heuristics have been used, together with fast, simple algorithms that do not fully exploit the information available. More powerful approximations, heuristics, and algorithms had been suggested early on, but little is known about whether their use pays off. We approach this question, not by trying alternative methods, but by investigating how much room the popular methods leave for better reduction. We do this via a series of experiments that mimic the ultimate reduction obtainable under certain conditions.


Foundations of Software Engineering | 2014

Statistical symbolic execution with informed sampling

Antonio Filieri; Corina S. Păsăreanu; Willem Visser; Jaco Geldenhuys

Symbolic execution techniques have been proposed recently for the probabilistic analysis of programs. These techniques seek to quantify the likelihood of reaching program events of interest, e.g., assert violations. They have many promising applications but have scalability issues due to high computational demand. To address this challenge, we propose a statistical symbolic execution technique that performs Monte Carlo sampling of the symbolic program paths and uses the obtained information for Bayesian estimation and hypothesis testing with respect to the probability of reaching the target events. To speed up the convergence of the statistical analysis, we propose Informed Sampling, an iterative symbolic execution that first explores the paths that have high statistical significance, prunes them from the state space and guides the execution towards less likely paths. The technique combines Bayesian estimation with a partial exact analysis for the pruned paths, leading to provably improved convergence of the statistical analysis. We have implemented statistical symbolic execution with informed sampling in the Symbolic PathFinder tool. We show experimentally that informed sampling obtains more precise results and converges faster than a purely statistical analysis and may also be more efficient than an exact symbolic analysis. When the latter does not terminate, symbolic execution with informed sampling can give meaningful results under the same time and memory limits.
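At its core, combining sampling with an exact analysis of pruned paths is a piece of bookkeeping: scale the sampled estimate by the probability mass left after pruning, then add back the exact contribution of the pruned paths. The sketch below shows only that bookkeeping; the actual technique samples symbolic paths and layers Bayesian estimation and hypothesis testing on top, and all names here are hypothetical.

```python
def estimate_event_probability(program, sampler, n,
                               pruned_mass=0.0, pruned_hits=0.0):
    """Estimate the probability that `program` reaches the target event
    (signalled by returning True) on an input drawn from `sampler`.
    `pruned_mass` is the exact total probability of paths already analysed
    exactly and removed from sampling; `pruned_hits` is the exact
    probability of the event within those pruned paths."""
    hits = sum(1 for _ in range(n) if program(sampler()))
    # exact part for pruned paths + scaled Monte Carlo part for the rest
    return pruned_hits + (1.0 - pruned_mass) * hits / n
```

With no pruning this is plain Monte Carlo; as high-probability paths are analysed exactly and moved into `pruned_mass`/`pruned_hits`, the sampled term covers less mass, which is why the informed variant converges faster.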

Collaboration


Jaco Geldenhuys's top co-authors:

Matthew B. Dwyer

University of Nebraska–Lincoln

Henri Hansen

Tampere University of Technology

Anna Ruokonen

Tampere University of Technology
