Antti Eero Johannes Hyvärinen
University of Lugano
Publications
Featured research published by Antti Eero Johannes Hyvärinen.
international conference on logic programming | 2010
Antti Eero Johannes Hyvärinen; Tommi A. Junttila; Ilkka Niemelä
In this paper we study the problem of solving hard propositional satisfiability problem (SAT) instances in a computing grid or cloud, where run times and communication between parallel running computations are limited. We study analytically an approach where the instance is partitioned iteratively into a tree of subproblems and each node in the tree is solved in parallel. We present new methods for constructing partitions which combine clause learning and lookahead. The methods are incorporated into the iterative approach and its performance is demonstrated with an extensive comparison against the best sequential solvers in the SAT competition 2009 as well as against two efficient parallel solvers.
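The iterative partitioning idea can be illustrated with a minimal sketch over DIMACS-style clause lists. The function and variable names are hypothetical, and a real partitioning function uses clause learning and lookahead to choose good branching variables rather than taking a fixed list:

```python
from itertools import product

def partition(clauses, branch_vars):
    """Split a CNF formula (list of clauses, each a list of signed ints)
    into 2**len(branch_vars) subproblems by fixing every branching
    variable to true or false.  The constraint cubes are pairwise
    disjoint and together cover the whole search space, so the original
    formula is satisfiable iff at least one subproblem is."""
    subproblems = []
    for polarity in product([1, -1], repeat=len(branch_vars)):
        cube = [p * v for p, v in zip(polarity, branch_vars)]
        # Each fixed literal is appended as a unit clause.
        subproblems.append(clauses + [[lit] for lit in cube])
    return subproblems
```

Applying this function again to the subproblems yields the tree of subproblems described above, with each node solvable in parallel.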
principles and practice of constraint programming | 2011
Antti Eero Johannes Hyvärinen; Tommi A. Junttila; Ilkka Niemelä
This work studies the solving of challenging SAT problem instances in distributed computing environments that have massive amounts of parallel resources but place limits on individual computations. We present an abstract framework which extends a previously presented iterative partitioning approach with clause learning, a key technique applied in modern SAT solvers. In addition we present two techniques that alter the clause learning of modern SAT solvers to fit the framework. An implementation of the proposed framework is then analyzed experimentally using a well-known set of benchmark instances. The results are very encouraging. For example, the implementation is able to solve challenging SAT instances not solvable in reasonable time by state-of-the-art sequential and parallel SAT solvers.
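One standard way to make clause learning compatible with partitioning (a sketch of the general idea, not necessarily the paper's exact technique) is to guard each partitioning constraint with a fresh assumption literal: a learned clause that mentions an assumption literal was derived from a partition-specific constraint and must stay local, while the remaining clauses are consequences of the original formula and can be shared among all solvers:

```python
def shareable_clauses(learned, assumption_lits):
    """Partition constraints are guarded by assumption literals, so a
    learned clause containing one (in either polarity) is only valid in
    its own partition.  Clauses free of assumption literals follow from
    the original formula alone and may be shared globally."""
    guards = set(assumption_lits) | {-a for a in assumption_lits}
    return [c for c in learned if not guards & set(c)]
```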
international conference on logic programming | 2013
Simone Fulvio Rollini; Leonardo Alt; Grigory Fedyukovich; Antti Eero Johannes Hyvärinen; Natasha Sharygina
Propositional interpolation is widely used as a means of overapproximation to achieve efficient SAT-based symbolic model checking. Different verification applications exploit interpolants for different purposes; it is unlikely that a single interpolation procedure could provide interpolants fit for all cases. This paper describes the PeRIPLO framework, an interpolating SAT-solver that implements a set of techniques to generate and manipulate interpolants for different model checking tasks. We demonstrate the flexibility of the framework in two software bounded model checking applications: verification of a given source code incrementally with respect to various properties, and verification of software upgrades with respect to a fixed set of properties. Both applications use interpolation for generating function summaries. Our systematic experimental investigation shows that size and logical strength of interpolants significantly affect verification, that these characteristics depend on the role played by interpolants, and that therefore techniques for tuning size and strength can be used to customize interpolants in different applications.
theory and applications of satisfiability testing | 2012
Antti Eero Johannes Hyvärinen; Norbert Manthey
Solving instances of the propositional satisfiability problem (SAT) in parallel has received a significant amount of attention as the number of cores in a typical workstation steadily increases. As core counts grow, the scalability of such approaches becomes essential for fully harnessing the potential of modern architectures. The best parallel SAT solvers have, until recently, been based on algorithm portfolios, while search-space partitioning approaches have been less successful. We prove, under certain natural assumptions on the partitioning function, that search-space partitioning can always result in an increased expected run time, justifying the success of the portfolio approaches. Furthermore, we give the first controlled experiments showing that an approach combining elements from partitioning and portfolios scales better than either of the two approaches and succeeds in solving instances not solved in a recent solver competition.
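The analytical core of this comparison can be reproduced with a toy simulation. The run-time distribution below is a hypothetical exponential, and we make the pessimistic assumption that each subproblem is as hard as the original instance: a portfolio finishes as soon as its fastest copy does, while refuting an unsatisfiable instance by plain partitioning requires finishing every subproblem:

```python
import random

def simulate(n_trials, width, seed=0):
    """Draw `width` independent run times per trial.  The portfolio
    takes the minimum (the first solver to finish wins); plain
    partitioning of an unsatisfiable instance takes the maximum
    (every part must be refuted before the answer is known)."""
    rng = random.Random(seed)
    portfolio = partitioning = 0.0
    for _ in range(n_trials):
        times = [rng.expovariate(1.0) for _ in range(width)]
        portfolio += min(times)
        partitioning += max(times)
    return portfolio / n_trials, partitioning / n_trials

avg_portfolio, avg_partition = simulate(20_000, 8)
```

Under these assumptions the portfolio's expected time drops well below the sequential mean of 1.0, while the partitioning approach's expected time rises above it, illustrating how partitioning can increase expected run time.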
artificial intelligence and symbolic computation | 2008
Antti Eero Johannes Hyvärinen; Tommi A. Junttila; Ilkka Niemelä
Grid computing offers a promising approach to solving challenging computational problems in an environment consisting of a large number of easily accessible resources. In this paper we develop strategies for solving collections of hard instances of the propositional satisfiability problem (SAT) with a randomized SAT solver run in a Grid. We study alternative strategies by using a simulation framework which is composed of (i) a grid model capturing the communication and management delays, and (ii) run-time distributions of a randomized solver, obtained by running a state-of-the-art SAT solver on a collection of hard instances. The results are experimentally validated in a production level Grid. When solving a single hard SAT instance, the results show that in practice only a relatively small amount of parallelism can be efficiently used; the speedup obtained by increasing parallelism thereafter is negligible. This observation leads to a novel strategy of using the Grid to solve collections of hard instances. Instead of solving instances one-by-one, the strategy aims at decreasing the overall solution time by applying an alternating distribution schedule.
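The saturation effect can be sketched with an empirical run-time distribution (the numbers below are invented for illustration): running n randomized copies in parallel yields an expected time of E[min of n runs], and with a heavy-tailed distribution the resulting speedup quickly flattens out:

```python
import random

def expected_parallel_time(runtimes, n, trials=20_000, seed=0):
    """Estimate E[min of n independent runs] from an empirical run-time
    distribution: n randomized solver copies race, the fastest wins."""
    rng = random.Random(seed)
    total = 0.0
    for _ in range(trials):
        total += min(rng.choice(runtimes) for _ in range(n))
    return total / trials

# Hypothetical distribution: a few lucky short runs and a heavy tail.
dist = [5, 5, 5, 100, 100, 100, 100, 100]
speedups = {n: expected_parallel_time(dist, 1) / expected_parallel_time(dist, n)
            for n in (1, 2, 8, 32)}
```

Since the minimum can never drop below the shortest run in the distribution, the speedup is bounded regardless of how many copies are launched, which motivates spending surplus resources on other instances of the collection instead.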
computer-based medical systems | 2008
Mikko Juhani Pitkänen; Xin Zhou; Antti Eero Johannes Hyvärinen; Henning Müller
In this paper we show how grid computing can be used to improve the operation of a medical image search system. The paper introduces the basic principles of a content-based image retrieval (CBIR) system and identifies its computationally challenging tasks. For these tasks we propose an efficient design that uses distributed grid computing to carry out the image processing. The algorithms of the search system are executed on a real medical image collection, with a grid computing infrastructure providing the needed computing power. Finally, the results show that an image processing task which previously required tens of hours to complete can be finished in only a fraction of the originally required computing time.
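The design amounts to an embarrassingly parallel map over the image collection. In the sketch below, a thread pool stands in for the grid nodes, and `extract_features` is a hypothetical placeholder for the actual CBIR descriptor computation:

```python
from concurrent.futures import ThreadPoolExecutor

def extract_features(image):
    """Hypothetical stand-in for the per-image descriptor computation,
    the computationally heavy step of a CBIR system."""
    return {"image": image, "descriptor": sum(image.encode()) % 997}

def index_collection(images, workers=4):
    """The per-image jobs are independent, so they parallelise
    trivially; a thread pool stands in here for the grid nodes
    that process partitions of the collection in the paper."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        # map preserves the input order of the collection.
        return list(pool.map(extract_features, images))
```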
artificial intelligence methodology systems applications | 2008
Antti Eero Johannes Hyvärinen; Tommi A. Junttila; Ilkka Niemelä
Computational Grids provide a widely distributed computing environment suitable for randomized SAT solving. This paper develops techniques for incorporating learning, known to yield significant speed-ups in the sequential case, in such a distributed framework. The approach exploits existing state-of-the-art clause learning SAT solvers by embedding them with virtually no modifications. We show that for many industrial SAT instances, the expected run time can be decreased by carefully combining the learned clauses from the distributed solvers. We compare different parallel learning strategies by using a representative set of benchmarks, and exploit the results to devise an algorithm for learning-enhanced randomized SAT solving in Grid environments. Finally, we experiment with an implementation of the algorithm in a production level Grid and solve several problems which were not solved in the SAT 2007 solver competition.
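A minimal sketch of combining learned clauses from independent randomized runs follows; the clause-length bound is a hypothetical stand-in for the paper's more careful selection strategies, reflecting the common heuristic that short clauses prune more of the search space:

```python
def merge_learned(databases, max_len=8):
    """Combine learned-clause databases from independent solver runs,
    keeping each clause at most once and discarding clauses longer than
    max_len.  Clauses are compared as sets of literals, so reordered
    duplicates from different runs are merged."""
    seen = set()
    merged = []
    for db in databases:
        for clause in db:
            key = frozenset(clause)
            if len(clause) <= max_len and key not in seen:
                seen.add(key)
                merged.append(sorted(clause))
    return merged
```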
international symposium on software testing and analysis | 2014
Fabrizio Pastore; Leonardo Mariani; Antti Eero Johannes Hyvärinen; Grigory Fedyukovich; Natasha Sharygina; Stephan Sehestedt; Ali Muhammad
In this paper we present Verification-Aided Regression Testing (VART), a novel extension of regression testing that uses model checking to increase the fault revealing capability of existing test suites. The key idea in VART is to extend the use of test case executions from the conventional direct fault discovery to the generation of behavioral properties specific to the upgrade, by (i) automatically producing properties that are proved to hold for the base version of a program, (ii) automatically identifying and checking on the upgraded program only the properties that, according to the developers’ intention, must be preserved by the upgrade, and (iii) reporting the faults and the corresponding counter-examples that are not revealed by the regression tests. Our empirical study on both open source and industrial software systems shows that VART automatically produces properties that increase the effectiveness of testing by automatically detecting faults unnoticed by the existing regression test suites.
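The three-step flow can be sketched abstractly as follows; all the callables are hypothetical stand-ins for the property generator, the developers' preservation intent, the model checker, and the regression-test oracle:

```python
def vart(base_props, preserved_pred, check_upgrade, test_revealed):
    """Sketch of the VART flow: from properties proved on the base
    version, keep those the developers intend to preserve, model-check
    them on the upgraded program, and report only the violations that
    the existing regression tests did not already reveal."""
    candidates = [p for p in base_props if preserved_pred(p)]
    reports = []
    for prop in candidates:
        cex = check_upgrade(prop)  # a counter-example trace, or None
        if cex is not None and not test_revealed(cex):
            reports.append((prop, cex))
    return reports
```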
verified software theories tools experiments | 2015
Leonardo Alt; Grigory Fedyukovich; Antti Eero Johannes Hyvärinen; Natasha Sharygina
The labeled interpolation system LIS is a framework for Craig interpolation widely used in propositional-satisfiability-based model checking. Most LIS-based algorithms construct the interpolant from a proof of unsatisfiability and a fixed labeling determined by which part of the propositional formula is being over-approximated. While this results in interpolants of fixed strength, it limits the possibility of generating interpolants of small size. This is problematic since interpolant size is a determining factor in achieving good overall performance in model checking. This paper analyses theoretically how labeling functions can be used to construct small interpolants. In addition to developing a new labeling mechanism that guarantees small interpolants, we present variants of it that also manage the strength of the interpolants. We implement the labeling functions in our tool PeRIPLO and study the behavior of the resulting algorithms experimentally by integrating the tool with a variety of model checking applications. Our results suggest that the new proof-sensitive interpolation algorithm performs consistently better than any of the standard interpolation algorithms based on LIS.
Fundamenta Informaticae | 2011
Antti Eero Johannes Hyvärinen; Tommi A. Junttila; Ilkka Niemelä
This paper studies the following question: given an instance of the propositional satisfiability problem, a randomized satisfiability solver, and a cluster of n computers, what is the best way to use the computers to solve the instance? Two approaches, simple distribution and search space partitioning, as well as their combinations, are investigated both analytically and empirically. It is shown that the results depend heavily on the type of the problem (unsatisfiable, satisfiable with few solutions, or satisfiable with many solutions) as well as on how good the search space partitioning function is. In addition, the behavior of a real search space partitioning function is evaluated in the same framework. The results suggest that in practice one should combine the simple distribution and search space partitioning approaches.