Network


Latest external collaborations at the country level.

Hotspot


Research topics in which Guillermo Román-Díez is active.

Publication


Featured research published by Guillermo Román-Díez.


Tools and Algorithms for the Construction and Analysis of Systems | 2014

SACO: Static analyzer for concurrent objects

Puri Arenas; Antonio Flores-Montoya; Samir Genaim; Miguel Gómez-Zamalloa; Enrique Martin-Martin; Germán Puebla; Guillermo Román-Díez

We present the main concepts, usage and implementation of SACO, a static analyzer for concurrent objects. Interestingly, SACO is able to infer both liveness properties (namely termination and resource boundedness) and safety properties (namely deadlock freedom) of programs based on concurrent objects. The system integrates auxiliary analyses such as points-to and may-happen-in-parallel, which are essential for the accuracy of the aforementioned, more complex analyses. SACO provides accurate information about the dependencies which may introduce deadlocks, loops whose termination is not guaranteed, and upper bounds on the resource consumption of methods.
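
For intuition, here is a minimal, hypothetical Java-flavoured sketch of the deadlock pattern that SACO's deadlock and may-happen-in-parallel analyses are designed to detect; SACO itself targets the ABS concurrent-object language, so the names and the executor-based encoding below are illustrative only.

```java
import java.util.concurrent.*;

// Each "object" owns a single-threaded executor, mimicking the
// one-task-at-a-time semantics of concurrent objects. If ping() and pong()
// run at the same time, each worker thread blocks on f.get() while the task
// it is waiting for sits in the other executor's queue: a deadlock.
public class DeadlockSketch {
    static final ExecutorService a = Executors.newSingleThreadExecutor();
    static final ExecutorService b = Executors.newSingleThreadExecutor();

    static int ping() throws Exception {
        Future<Integer> f = b.submit(DeadlockSketch::pong); // asynchronous call to "object" b
        return f.get();                                     // blocking get: a's only thread waits
    }

    static int pong() throws Exception {
        Future<Integer> f = a.submit(DeadlockSketch::ping); // asynchronous call back to "object" a
        return f.get();                                     // blocking get: b's only thread waits
    }

    public static void main(String[] args) {
        a.submit(DeadlockSketch::ping);
        b.submit(DeadlockSketch::pong);
        // Both single-threaded executors end up blocked on tasks sitting in
        // each other's queues; the program hangs, which is exactly the kind
        // of dependency SACO reports.
    }
}
```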


Partial Evaluation and Semantics-Based Program Manipulation | 2011

Verified resource guarantees using COSTA and KeY

Richard Bubel; Samir Genaim; Reiner Hähnle; Germán Puebla; Guillermo Román-Díez

Resource guarantees make it possible to be certain that programs will run within the indicated amount of resources, which may refer to memory consumption, number of instructions executed, etc. This information can be very useful, especially in real-time and safety-critical applications. Nowadays, a number of automatic tools exist, often based on type systems or static analysis, which produce such resource guarantees. In spite of being based on theoretically sound techniques, the implemented tools may contain bugs which render the resource guarantees thus obtained not completely trustworthy. Performing full-blown verification of such tools is a daunting task, since they are large and complex. In this work we investigate an alternative approach whereby, instead of the tools, we formally verify the results of the tools. We have implemented this idea using COSTA, a state-of-the-art static analysis system, for producing resource guarantees, and KeY, a state-of-the-art verification tool, for formally verifying the correctness of such resource guarantees. Our preliminary results show that the proposed tool cooperation can be used for automatically producing verified resource guarantees.
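
As a purely illustrative sketch of the idea (the concrete instrumentation and contract language used by COSTA and KeY differ in detail, and all names below are hypothetical): the analyzer claims an instruction-count bound, the program is instrumented with a ghost counter, and the bound becomes a JML-style postcondition that a verifier can attempt to prove.

```java
// Hypothetical shape of a resource-guarantee certificate: a ghost counter
// tracks abstract cost, and the inferred upper bound is stated as a
// JML-style contract on the method.
public class CostCertificate {
    //@ ghost static int counter;   // abstract resource counter

    //@ requires n >= 0;
    //@ ensures  counter <= \old(counter) + 3 * n + 1;   // claimed upper bound
    static int sum(int n) {
        //@ set counter = counter + 1;
        int s = 0;
        for (int i = 0; i < n; i++) {
            //@ set counter = counter + 3;   // cost attributed to one loop iteration
            s += i;
        }
        return s;
    }
}
```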


Software Testing, Verification & Reliability | 2015

Object-sensitive cost analysis for concurrent objects

Puri Arenas; Jesús Correas; Samir Genaim; Miguel Gómez-Zamalloa; Germán Puebla; Guillermo Román-Díez

This article presents a novel cost analysis framework for concurrent objects. Concurrent objects form a well-established model for distributed concurrent systems. In this model, objects are the concurrency units that communicate among themselves via asynchronous method calls. Cost analysis aims at automatically approximating the resource consumption of executing a program in terms of its input parameters. While cost analysis for sequential programming languages has received considerable attention, concurrency and distribution have been notably less studied. The main challenges of cost analysis in a concurrent setting are as follows. First, precise size abstractions for the data in the program must be inferred in the presence of shared memory. This information is essential for bounding the number of iterations of loops. Second, distribution suggests that the analysis must infer the cost of the diverse distributed components separately. We handle this by means of a novel form of object-sensitive recurrence equations that use cost centres in order to keep the resource usage assigned to the different components separate. We have implemented our analysis and evaluated it on several small applications that are classical examples of concurrent and distributed programming.
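
As an illustration of the cost-centre idea (a made-up example, not taken from the article): for a method m that runs at object o1, performs some local work, and posts n asynchronous tasks p to object o2, the recurrence equations keep the resource usage of the two objects separate.

```latex
% Hypothetical cost-centre recurrence equations: c(o_i) marks the cost
% centre (the object) to which each summand is attributed.
\begin{align*}
  m(n) &= c(o_1)\cdot 5 + \sum_{i=1}^{n}\bigl(c(o_1)\cdot 2 + p(i)\bigr)\\
  p(i) &= c(o_2)\cdot 7
\end{align*}
% Solving the equations and projecting on each cost centre gives
% cost at o_1: 5 + 2n,   cost at o_2: 7n.
```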


Tools and Algorithms for the Construction and Analysis of Systems | 2015

Non-cumulative Resource Analysis

Jesús Correas Fernández; Guillermo Román-Díez

Existing cost analysis frameworks have been defined for cumulative resources, which keep on increasing along the computation. Traditional cumulative resources are execution time, number of executed steps, amount of memory allocated, and energy consumption. Non-cumulative resources are acquired and possibly released along the execution. Examples of non-cumulative cost are memory usage in the presence of garbage collection, the number of connections established that are later closed, or resources requested from a virtual host which are released after using them. We present, to the best of our knowledge, the first generic static analysis framework to infer an upper bound on the peak cost for non-cumulative types of resources. Our analysis comprises several components: (1) a pre-analysis to infer when resources are being used simultaneously; (2) a program-point resource analysis which infers an upper bound on the cost at the points of interest, namely the points where resources are acquired; and (3) the elimination, from the upper bounds obtained in (2), of those accumulated resources that are not used simultaneously. We report on a prototype implementation of our analysis that can be used on a simple imperative language.
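
The difference between cumulative and peak cost can be seen on a tiny, hypothetical example (the analysis in the article is static; this sketch only motivates what it computes).

```java
// Connection, acquire and release are hypothetical placeholders.
public class PeakVsCumulative {
    static class Connection {}
    static Connection acquire() { return new Connection(); }  // resource acquisition
    static void release(Connection c) {}                      // resource release

    static void process(int n) {
        for (int i = 0; i < n; i++) {
            Connection c = acquire();  // one connection live inside the loop body
            release(c);                // released before the next iteration starts
        }
    }
    // Cumulative cost (total acquisitions): n.
    // Peak cost (maximum simultaneously held connections): 1,
    // which is what a non-cumulative (peak) resource analysis should report.
}
```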


Static Analysis Symposium | 2014

Peak Cost Analysis of Distributed Systems

Jesús Correas; Guillermo Román-Díez

We present a novel static analysis to infer the peak cost of distributed systems. The different locations of a distributed system communicate and coordinate their actions by posting tasks among them. Thus, the amount of work that each location has to perform can greatly vary along the execution depending on: (1) the amount of tasks posted to its queue, (2) their respective costs, and (3) the fact that they may be posted in parallel and thus be pending to execute simultaneously. The peak cost of a distributed location refers to the maximum cost that it needs to carry out along its execution. Inferring the peak cost is challenging because it increases and decreases along the execution, unlike the standard notion of total cost which is cumulative. Our key contribution is the novel notion of quantified queue configuration which captures the worst-case cost of the tasks that may be simultaneously pending to execute at each location along the execution. A prototype implementation demonstrates the accuracy and feasibility of the proposed peak cost analysis.
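
A made-up numeric instance of the idea: suppose the analysis determines that tasks t1 and t2 (worst-case costs 5 and 3) may be pending in a location's queue at the same time, while t3 (cost 7) can never be pending together with them. The quantified queue configurations then yield the peak cost as follows.

```latex
% Hypothetical quantified queue configurations for one location: each
% configuration is a set of tasks that may be pending simultaneously,
% weighted by their worst-case costs.
\[
  \{t_1,t_2\}\colon\; 5+3=8 \qquad\quad \{t_3\}\colon\; 7 \qquad\quad
  \mathit{peak} = \max(8,7) = 8
\]
% The cumulative (total) cost of the same location would instead be 5+3+7 = 15.
```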


Partial Evaluation and Semantics-Based Program Manipulation | 2012

Incremental resource usage analysis

Jesús Correas; Germán Puebla; Guillermo Román-Díez

The aim of incremental global analysis is, given a program, its analysis results, and a series of changes to the program, to obtain the new analysis results as efficiently as possible and, ideally, without having to (re-)analyze fragments of code which are not affected by the changes. Incremental analysis can significantly reduce both the time and the memory requirements of analysis. This paper presents an incremental resource usage analysis for a sequential Java-like language. Our main contributions are (1) a multi-domain incremental fixed-point algorithm which can be used by all global pre-analyses required to infer the cost (including class, sharing, cyclicity, constancy, and size analyses), and which takes care of propagating dependencies among such domains, and (2) a novel form of cost summaries which allows us to incrementally reconstruct only those components of cost functions affected by the change. Experimental results in the COSTA system show that the proposed incremental analysis performs very efficiently in practice.
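
The paper's algorithm coordinates several interacting abstract domains and cost summaries; as a rough illustration of the underlying change-driven idea only, here is a generic worklist-style sketch that revisits just the analysis entities reachable from a change through a (hypothetical) reverse dependency graph.

```java
import java.util.*;

// Generic sketch of change-driven fixed-point recomputation. The "dependents"
// map records, for each node (e.g. a method), the nodes whose results were
// computed using it; only those are revisited when a result changes.
public class IncrementalFixpoint<N, R> {
    interface Analyzer<N, R> { R analyze(N node, Map<N, R> results); }

    final Map<N, R> results = new HashMap<>();
    final Map<N, Set<N>> dependents = new HashMap<>();

    void reanalyze(Collection<N> changed, Analyzer<N, R> analyzer) {
        Deque<N> worklist = new ArrayDeque<>(changed);
        while (!worklist.isEmpty()) {
            N node = worklist.poll();
            R old = results.get(node);
            R updated = analyzer.analyze(node, results);
            if (!Objects.equals(old, updated)) {              // result changed:
                results.put(node, updated);                   // store it and propagate
                worklist.addAll(dependents.getOrDefault(node, Collections.emptySet()));
            }
        }
        // Entities not reachable from the changed ones are never revisited.
    }
}
```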


Fundamental Approaches to Software Engineering | 2012

Verified resource guarantees for heap manipulating programs

Richard Bubel; Samir Genaim; Reiner Hähnle; Guillermo Román-Díez

Program properties that are automatically inferred by static analysis tools are generally not considered to be completely trustworthy, unless the tool implementation or the results are formally verified. Here we focus on the formal verification of resource guarantees inferred by automatic cost analysis. Resource guarantees ensure that programs run within the indicated amount of resources, which may refer to memory consumption, number of instructions executed, etc. In previous work we studied formal verification of inferred resource guarantees that depend only on integer data. In realistic programs, however, resource consumption is often bounded by the size of heap-allocated data structures. Bounding their size requires performing a number of structural heap analyses. The contributions of this paper are (i) to identify what exactly needs to be verified to guarantee sound analysis of heap manipulating programs, (ii) to provide a suitable extension of the program logic used for verification to handle structural heap properties in the context of resource guarantees, and (iii) to improve the underlying theorem prover so that proof obligations can be automatically discharged.
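
A small, hypothetical example of the setting: the resource consumption of the method below is bounded by the length of a heap-allocated list, so certifying a bound such as "the number of loop iterations is at most the length of the list reachable from l" requires structural heap facts (for instance, acyclicity), which is what the extended program logic is meant to express.

```java
// Illustrative only; names are hypothetical.
public class ListLength {
    static class Node { Node next; }

    static int length(Node l) {
        int n = 0;
        while (l != null) {   // iterations bounded by the length of the (acyclic) list
            n++;
            l = l.next;
        }
        return n;
    }
}
```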


Static Analysis Symposium | 2015

Parallel Cost Analysis of Distributed Systems

Jesús Correas; Einar Broch Johnsen; Guillermo Román-Díez

We present a novel static analysis to infer the parallel cost of distributed systems. Parallel cost differs from the standard notion of serial cost by exploiting the truly concurrent execution model of distributed processing to capture the cost of synchronized tasks executing in parallel. It is challenging to analyze parallel cost because one needs to soundly infer the parallelism between tasks while accounting for waiting and idle processor times at the different locations. Our analysis works in three phases: (1) it first performs a block-level analysis to estimate the serial costs of the blocks between synchronization points in the program; (2) next, it constructs a distributed flow graph (DFG) to capture the parallelism and the waiting and idle times at the locations of the distributed system; and (3) finally, it obtains the parallel cost as the maximal-cost path in the DFG. A prototype implementation demonstrates the accuracy and feasibility of the proposed analysis.
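
The construction of the DFG (phases 1 and 2) is the contribution of the paper and is not reproduced here; as a sketch of phase (3) alone, once each DFG node carries the serial cost of its block, the parallel cost is a maximal-cost path, which can be computed as below (node identifiers and costs are hypothetical).

```java
import java.util.*;

// Maximal-cost path in a DAG whose node weights are block-level serial costs.
public class MaxCostPath {
    static long maxCost(Map<Integer, List<Integer>> edges,
                        Map<Integer, Long> cost,
                        List<Integer> topoOrder) {
        Map<Integer, Long> best = new HashMap<>();   // best cost of any path reaching a node
        long result = 0;
        for (int node : topoOrder) {                 // nodes visited in topological order
            long c = best.getOrDefault(node, 0L) + cost.get(node);
            result = Math.max(result, c);
            for (int succ : edges.getOrDefault(node, List.of()))
                best.merge(succ, c, Math::max);
        }
        return result;
    }

    public static void main(String[] args) {
        // Tiny example: edges 1->2, 1->3, 2->4, 3->4 with the block costs below.
        Map<Integer, List<Integer>> edges = Map.of(1, List.of(2, 3), 2, List.of(4), 3, List.of(4));
        Map<Integer, Long> cost = Map.of(1, 2L, 2, 5L, 3, 1L, 4, 3L);
        System.out.println(maxCost(edges, cost, List.of(1, 2, 3, 4)));  // prints 10 (path 1-2-4)
    }
}
```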


Science of Computer Programming | 2014

Conditional termination of loops over heap-allocated data

Puri Arenas; Samir Genaim; Germán Puebla; Guillermo Román-Díez

Static analysis which takes into account the values of data stored in the heap is considered complex and computationally intractable in practice. Thus, most static analyzers do not keep track of object fields or array contents, i.e., they are heap-insensitive. In this article, we propose locality conditions for soundly tracking heap-allocated data in Java (bytecode) programs by means of ghost non-heap-allocated variables. This way, heap-insensitive analysis over the transformed program can infer information on the original heap-allocated data without sacrificing efficiency. If the locality conditions cannot be proven unconditionally, we seek to generate aliasing preconditions which, when they hold in the initial state, guarantee the termination of the program. Experimental results show that we greatly improve the accuracy w.r.t. a heap-insensitive analysis, while the overhead introduced is reasonable.
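
A minimal sketch of the transformation (names are hypothetical): a loop whose guard reads a heap field is mirrored by a ghost, non-heap-allocated local variable, so that a heap-insensitive termination or cost analysis can reason about it; this is only sound under the article's locality conditions, e.g. no aliased reference updates o.f inside the loop.

```java
class Counter { int f; }

class Example {
    static void original(Counter o) {
        while (o.f > 0) {        // guard depends on heap-allocated data
            o.f = o.f - 1;
        }
    }

    static void transformed(Counter o) {
        int ghost_f = o.f;       // ghost local variable tracking o.f
        while (ghost_f > 0) {    // guard now depends only on a local variable
            ghost_f = ghost_f - 1;
            o.f = ghost_f;       // heap kept in sync with the ghost variable
        }
    }
}
```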


Software and Systems Modeling | 2016

A formal verification framework for static analysis

Richard Bubel; Samir Genaim; Reiner Hähnle; Germán Puebla; Guillermo Román-Díez

Static analysis tools, such as resource analyzers, give useful information on software systems, especially in real-time and safety-critical applications. Therefore, the question of the reliability of the obtained results is highly important. State-of-the-art static analyzers typically combine a range of complex techniques, make use of external tools, and evolve quickly. To formally verify such systems is not a realistic option. In this work, we propose a different approach whereby, instead of the tools, we formally verify the results of the tools. The central idea of such a formal verification framework for static analysis is the method-wise translation of the information about a program gathered during its static analysis into specification contracts that contain enough information for them to be verified automatically. We instantiate this framework with COSTA, a state-of-the-art static analysis system for sequential Java programs, for producing resource guarantees, and KeY, a state-of-the-art verification tool, for formally verifying the correctness of such resource guarantees. Resource guarantees make it possible to be certain that programs will run within the indicated amount of resources, which may refer to memory consumption, number of instructions executed, etc. Our results show that the proposed tool cooperation can be used for automatically producing verified resource guarantees.

Collaboration


An overview of Guillermo Román-Díez's collaborations.

Top Co-Authors

Germán Puebla (Technical University of Madrid)
Jesús Correas (Complutense University of Madrid)
Samir Genaim (Complutense University of Madrid)
Puri Arenas (Complutense University of Madrid)
Miguel Gómez-Zamalloa (Complutense University of Madrid)
Enrique Martin-Martin (Complutense University of Madrid)
Reiner Hähnle (Technische Universität Darmstadt)
Richard Bubel (Technische Universität Darmstadt)
Damiano Zanardini (Technical University of Madrid)
Antonio Flores-Montoya (Technische Universität Darmstadt)