Matthieu Martel
University of Perpignan
Publications
Featured research published by Matthieu Martel.
computer aided verification | 2005
Alexandru Costan; Stéphane Gaubert; Eric Goubault; Matthieu Martel; Sylvie Putot
We present a new method for solving the fixed point equations that appear in the static analysis of programs by abstract interpretation. We introduce and analyze a policy iteration algorithm for monotone self-maps of complete lattices. We apply this algorithm to the particular case of lattices arising in the interval abstraction of values of variables. We demonstrate the improvements in terms of speed and precision over existing techniques based on Kleene iteration, including traditional widening/narrowing acceleration mechanisms.
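To see why plain Kleene iteration is slow on interval lattices, here is a hypothetical toy (not the paper's implementation): the abstract equation for `x = 0; while x < 100: x = x + 1` is `I = [0,0] ⊔ ((I ⊓ [-∞,99]) + [1,1])`, and naive iteration needs on the order of 100 steps to reach the invariant `[0,100]`, which is what widening or policy iteration accelerate.

```python
# Toy Kleene iteration on the interval lattice for
#   x = 0; while x < 100: x = x + 1
# Intervals are (lo, hi) pairs; None is the empty interval (bottom).
NEG_INF, POS_INF = float("-inf"), float("inf")

def join(a, b):               # least upper bound of two intervals
    if a is None: return b
    if b is None: return a
    return (min(a[0], b[0]), max(a[1], b[1]))

def meet(a, b):               # greatest lower bound (None if empty)
    if a is None or b is None: return None
    lo, hi = max(a[0], b[0]), min(a[1], b[1])
    return (lo, hi) if lo <= hi else None

def add(a, k):                # interval translation by a constant
    if a is None: return None
    return (a[0] + k, a[1] + k)

def kleene(guard_hi=99):
    inv, steps = None, 0
    while True:
        nxt = join((0, 0), add(meet(inv, (NEG_INF, guard_hi)), 1))
        steps += 1
        if nxt == inv:
            return inv, steps
        inv = nxt

inv, steps = kleene()
print(inv, steps)   # invariant (0, 100), reached only after ~100 iterations
```

The step count is exactly the slowness that widening (at the cost of precision) or the paper's policy iteration (without that cost, on this class of lattices) avoids.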
european symposium on programming | 2002
Eric Goubault; Matthieu Martel; Sylvie Putot
The manipulation of real numbers by computers is approximated by floating-point arithmetic, which uses a finite representation of numbers. This implies that a (generally small) rounding error may be committed at each operation. Although this approximation is accurate enough for most applications, there are some cases where results become irrelevant because of the precision lost at some stages of the computation, even when the underlying numerical scheme is stable. In this paper, we present a tool for studying the propagation of rounding errors in floating-point computations, which carries out some ideas proposed in [3], [7]. Its aim is to detect automatically a possible catastrophic loss of precision, and its source. The tool is intended to cope with real industrial problems, and we believe it is especially appropriate for critical instrumentation software. On these numerically quite simple programs, we believe our tool will bring some very helpful information and allow us to find possible programming errors such as potentially dangerous double/float conversions, or blatant instabilities or losses of accuracy. The techniques used being those of static analysis, the tool will not compete on numerically intensive codes with a numerical analyst's study of stability. Neither is it designed to help find better numerical schemes. But it is automatic and, in comparison with a study of sensitivity to data, reveals the contribution of rounding errors occurring at every intermediate step of the computation. Moreover, static analyses are sound (though possibly pessimistic) and consider a set of possible executions and not just one, which is the essential requirement a verification tool for critical software must meet.
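The "potentially dangerous double/float conversion" mentioned above can be reproduced in a few lines; this is an illustrative example of the failure mode, not the paper's tool:

```python
# A double -> single-precision conversion silently discards digits;
# a later subtraction of nearby values turns that into a total loss.
import struct

def to_float32(x):
    """Round a Python double to the nearest IEEE-754 single."""
    return struct.unpack("f", struct.pack("f", x))[0]

a = 1.0 + 1e-8          # representable as a double
b = to_float32(a)       # 1e-8 is below the single-precision spacing
                        # near 1.0, so b rounds to exactly 1.0
loss = a - 1.0          # computed in double: about 1e-8
got  = b - 1.0          # computed after the conversion: exactly 0.0
print(got, loss)        # 0.0 vs ~1e-08: every significant digit lost
```

No exception is raised and no digit of the result is correct, which is exactly the kind of silent precision loss a static analysis can flag at the conversion site.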
european symposium on programming | 2002
Matthieu Martel
We introduce a concrete semantics for floating-point operations which describes the propagation of roundoff errors throughout a computation. This semantics is used to assert the correctness of an abstract interpretation which can be straightforwardly derived from it. In our model, every elementary operation introduces a new first order error term, which is later combined with other error terms, yielding higher order error terms. The semantics is parameterized by the maximal order of error to be examined and verifies whether higher order errors actually are negligible. We also consider coarser semantics that compute the contribution, to the final error, of the errors due to some intermediate computations.
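The core idea of carrying each value together with its accumulated roundoff can be sketched concretely. The class below is hypothetical (the names and the use of exact rationals as the reference are mine, not the paper's): each machine value travels with an exact shadow, so the error committed by every operation is observable.

```python
# Sketch of an instrumented semantics: a float paired with the exact
# rational value the same computation would produce, so the accumulated
# roundoff error of every intermediate result can be read off.
from fractions import Fraction

class ErrFloat:
    def __init__(self, v, exact=None):
        self.v = float(v)                 # machine (double) value
        # reference value under exact arithmetic; inputs are taken
        # as exact, so only operation roundoff is measured
        self.exact = Fraction(v) if exact is None else exact

    @property
    def err(self):                        # accumulated roundoff error
        return float(Fraction(self.v) - self.exact)

    def __add__(self, o):
        return ErrFloat(self.v + o.v, self.exact + o.exact)

    def __mul__(self, o):
        return ErrFloat(self.v * o.v, self.exact * o.exact)

s = ErrFloat(0.0)
for _ in range(10):
    s = s + ErrFloat(0.1)                 # ten roundings accumulate
print(s.v, s.err)                         # value near 1.0, tiny nonzero err
```

The paper's semantics goes further by splitting `err` into first-order terms attached to each operation (and higher-order products thereof), rather than one aggregated quantity as here.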
Scanning | 2006
Olivier Bouissou; Matthieu Martel
In this article, we describe a new library for computing guaranteed bounds of the solutions of Initial Value Problems (IVP). Given an initial value problem and an end point, our library computes a sequence of approximation points together with a sequence of approximation errors such that the distance to the true solution of the IVP is below these error terms at each approximation point. These sequences are computed using a classical Runge-Kutta method for which truncation and roundoff errors may be over-approximated. We also compute the propagation of local errors to obtain an enclosure of the global error at each computation step. These techniques are implemented in a C++ library which provides an easy-to-use framework for the rigorous approximation of IVPs. This library implements an error control technique based on step size reduction in order to reach a certain tolerance on local errors.
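The step-size-reduction control loop can be illustrated in a non-validated form. The sketch below is an assumption-laden toy: it uses classical RK4 and estimates the local error by comparing one step of size h against two steps of size h/2, whereas the paper's library computes guaranteed interval enclosures of truncation and roundoff errors.

```python
# Non-validated sketch of local error control by step-size reduction:
# halve h until the (estimated) local error is below a tolerance.
import math

def rk4_step(f, t, y, h):
    k1 = f(t, y)
    k2 = f(t + h/2, y + h/2 * k1)
    k3 = f(t + h/2, y + h/2 * k2)
    k4 = f(t + h, y + h * k3)
    return y + h/6 * (k1 + 2*k2 + 2*k3 + k4)

def step_with_control(f, t, y, h, tol=1e-10):
    while True:
        big  = rk4_step(f, t, y, h)                       # one step of h
        half = rk4_step(f, t + h/2,
                        rk4_step(f, t, y, h/2), h/2)      # two steps of h/2
        if abs(big - half) <= tol:    # crude local error estimate
            return t + h, half, h
        h /= 2                        # step-size reduction

# y' = y, y(0) = 1: integrate to t = 1 and compare with e.
t, y, h = 0.0, 1.0, 0.5
while t < 1.0:
    h = min(h, 1.0 - t)               # do not step past the end point
    t, y, h = step_with_control(lambda s, u: u, t, y, h)
print(y, math.e)                      # y agrees with e to many digits
```

A validated version would replace the float arithmetic with outward-rounded interval arithmetic and the heuristic estimate with a proven over-approximation of the local error, as the library does.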
Lecture Notes in Computer Science | 2004
Sylvie Putot; Eric Goubault; Matthieu Martel
Finite precision computations can severely affect the accuracy of computed solutions. We present a static analysis, and a prototype implementing this analysis for C codes, for studying the propagation of rounding errors occurring at every intermediate step in floating-point computations. The analysis presented relies on abstract interpretation by interval values and series of interval error terms. Considering all errors possibly introduced by floating-point numbers, it aims at identifying the operations responsible for the main losses of accuracy. We believe this approach is for now especially appropriate for numerically simple programs whose results must be verified, such as critical instrumentation software.
partial evaluation and semantic-based program manipulation | 2009
Matthieu Martel
This article introduces a new program transformation in order to enhance the numerical accuracy of floating-point computations. We consider that a program would return an exact result if the computations were carried out using real numbers. In practice, roundoff errors due to the finite representation of values arise during the execution. These errors are closely related to the way formulas are evaluated. Indeed, mathematically equivalent formulas, obtained using laws like associativity, distributivity, etc., may lead to very different numerical results in the computer arithmetic. We propose a semantics-based transformation in order to optimize the numerical accuracy of programs. This transformation is expressed in the abstract interpretation framework and it aims at rewriting pieces of numerical codes in order to obtain results closer to what the computer would output if it used exact arithmetic.
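The claim that mathematically equivalent formulas give very different numerical results is easy to witness; this small example (mine, not the paper's) shows two parenthesizations of the same sum, one of which is exact and one of which loses the answer entirely:

```python
# Two mathematically equal evaluations of a + b + c in doubles.
a, b, c = 1e20, -1e20, 3.14
left  = (a + b) + c   # the huge terms cancel first: exactly 3.14
right = a + (b + c)   # 3.14 is absorbed by -1e20, then cancels: 0.0
print(left, right)    # 3.14 0.0
```

A semantics-based transformation in this spirit would steer evaluation toward the first shape, where the large cancelling terms meet before the small term is added.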
static analysis symposium | 2012
Arnault Ioualalen; Matthieu Martel
Since exact computations are in general not tractable for computers, they are approximated by floating-point computations, which are the source of many errors in numerical programs. Because floating-point arithmetic is not intuitive, these errors are very difficult to detect and to correct by hand, and we consider the problem of automatically synthesizing accurate formulas. We consider that a program would return an exact result if the computations were carried out using real numbers. In practice, roundoff errors arise during the execution and these errors are closely related to the way formulas are written. Our approach is based on abstract interpretation. We introduce Abstract Program Equivalence Graphs (APEGs) to represent in polynomial size an exponential number of mathematically equivalent expressions. The concretization of an APEG yields expressions of very different shapes and accuracies. Then, we extract optimized expressions from APEGs by searching for the most accurate concrete expressions among the set of represented expressions.
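A brute-force miniature of the extraction step conveys the idea, with the caveat that it enumerates only summand orderings and does so explicitly, whereas an APEG represents exponentially many equivalent shapes in polynomial space; the code below is a hypothetical sketch, not the paper's algorithm:

```python
# Enumerate mathematically equivalent left-to-right orderings of a sum,
# evaluate each in floating point, and keep the most accurate one,
# judged against the exact rational value.
from fractions import Fraction
from itertools import permutations

terms = [1e16, 3.0, -1e16, 4.0]
exact = sum(Fraction(t) for t in terms)      # exactly 7

def eval_left_to_right(order):
    acc = 0.0
    for t in order:
        acc = acc + t                        # float addition, with roundoff
    return acc

best = min(permutations(terms),
           key=lambda p: abs(Fraction(eval_left_to_right(p)) - exact))
print(eval_left_to_right(best))              # an ordering that recovers 7.0
```

Cancelling the two huge terms first makes the remaining additions exact; orderings that add 3.0 or 4.0 to 1e16 first lose them to rounding. The real difficulty, which APEGs address, is doing this search without materializing the exponential set of candidates.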
static analysis symposium | 2002
Matthieu Martel
We introduce a relational static analysis to determine the stability of the numerical errors arising inside a loop in which floating-point computations are carried out. This analysis is based on a stability test for non-linear functions and on a precise semantics for floating-point numbers that computes the propagation of the errors made at each operation. A major advantage of this approach is that higher-order error terms are not neglected. We introduce two algorithms for the analysis. The first one, less complex, only determines the global stability of the loop. The second algorithm determines which particular operation makes a loop unstable. Both algorithms have been implemented and we present some experimental results.
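The kind of loop instability targeted here can be demonstrated with a classic recurrence; this example is mine and only illustrates the phenomenon the analysis detects:

```python
# The recurrence x_{n+1} = x_{n-1} - x_n with x0 = 1, x1 = (sqrt(5)-1)/2
# should decay geometrically, but the recurrence's other root (about
# -1.618) amplifies the initial roundoff in x1 at every iteration, so the
# computed values are eventually dominated by error: an unstable loop.
import math

phi_conj = (math.sqrt(5) - 1) / 2      # inexact by roughly one ulp
a, b = 1.0, phi_conj
for _ in range(80):
    a, b = b, a - b                    # one loop iteration

print(b, phi_conj ** 81)   # |b| is O(1); the true value is ~1e-17
```

A per-operation error semantics that keeps higher-order terms sees the error factor grow at each iteration and can pinpoint the subtraction as the operation making the loop unstable, which is what the second algorithm above reports.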
static analysis symposium | 2007
Matthieu Martel
Floating-point arithmetic is an important source of errors in programs because of the loss of precision arising during a computation. Unfortunately, this arithmetic is not intuitive (e.g. many elementary operations are not associative, invertible, etc.), making the debugging phase very difficult and empirical. This article introduces a new kind of program transformation in order to automatically improve the accuracy of floating-point computations. We use P. Cousot and R. Cousot's framework for semantic program transformation and we propose an offline transformation. This technique has been implemented, and the first experimental results are presented.
international conference on embedded software and systems | 2009
Alexandre Chapoutot; Matthieu Martel
Simulink is one of the most widely used industrial tools for designing embedded systems. Applying formal methods earlier in the development cycle is an important industrial challenge in order to reduce the cost of bug fixing. In this article, we introduce a new method, called Abstract Simulation, based on abstract interpretation of Simulink models. Abstract Simulation uses several numerical domains, such as a domain for Taylor forms or floating-point numbers with errors. These domains allow us to estimate errors introduced by numerical algorithms and by computations during simulations. As a result, our method makes it possible to validate numerical behaviors of embedded systems modeled in Simulink. A prototype has been implemented and experimental results are discussed.
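The flavor of simulating a model over sets of values rather than single values can be conveyed with a toy discrete block. This sketch is hypothetical and ignores the outward rounding and richer domains (Taylor forms, floats with errors) a sound implementation like the one described above requires:

```python
# Toy "abstract simulation": run a discrete integrator block on interval
# inputs, so a single run covers every input signal within the bounds.
def step(state, inp):          # integrator block: state' = state + input
    lo, hi = state
    ilo, ihi = inp
    return (lo + ilo, hi + ihi)

state = (0.0, 0.0)
for _ in range(10):
    state = step(state, (0.9, 1.1))   # input only known to lie in [0.9, 1.1]
print(state)    # approximately (9.0, 11.0): bounds after 10 steps
```

One abstract run here stands in for infinitely many concrete simulations, which is what lets such a method validate numerical behavior rather than test it point by point.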