Stef Graillat
Pierre and Marie Curie University
Publication
Featured research published by Stef Graillat.
Japan Journal of Industrial and Applied Mathematics | 2009
Stef Graillat; Philippe Langlois; Nicolas Louvet
We survey a class of algorithms to evaluate polynomials with floating point coefficients, with all computations performed in IEEE-754 floating point arithmetic. The principle is to apply, once or recursively, an error-free transformation of the polynomial evaluation with the Horner algorithm and to accurately sum the final decomposition. These compensated algorithms are as accurate as the Horner algorithm performed in K times the working precision, for K an arbitrary positive integer. We prove this accuracy property with an a priori error analysis. We also provide validated dynamic bounds and apply these results to compute a faithfully rounded evaluation. These compensated algorithms are fast. We illustrate their practical efficiency with numerical experiments on significant environments. Compared to existing alternatives, these K-times compensated algorithms are competitive for K up to 4, i.e., up to 212 mantissa bits.
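As a concrete illustration of the technique, here is a minimal C sketch of a once-compensated Horner evaluation (the K = 2 case), built from the classical error-free transformations TwoSum (Knuth) and TwoProd (Dekker, via Veltkamp splitting). The function names and the example polynomial are illustrative; the sketch follows the general compensated-Horner idea rather than reproducing the paper's K-fold algorithms.

```c
#include <stdio.h>

/* TwoSum (Knuth): s + e == a + b exactly, with s = fl(a + b). */
static void two_sum(double a, double b, double *s, double *e) {
    *s = a + b;
    double z = *s - a;
    *e = (a - (*s - z)) + (b - z);
}

/* Veltkamp splitting: x == hi + lo with each half fitting in 26 bits. */
static void split(double x, double *hi, double *lo) {
    double c = 134217729.0 * x;   /* 2^27 + 1 */
    *hi = c - (c - x);
    *lo = x - *hi;
}

/* TwoProd (Dekker): p + e == a * b exactly, with p = fl(a * b). */
static void two_prod(double a, double b, double *p, double *e) {
    *p = a * b;
    double ah, al, bh, bl;
    split(a, &ah, &al);
    split(b, &bh, &bl);
    *e = ((ah * bh - *p) + ah * bl + al * bh) + al * bl;
}

/* Once-compensated Horner: evaluates sum a[i] x^i (a[n] is the leading
   coefficient) and adds a correction obtained by evaluating the error
   terms with a plain Horner scheme.  The result is as accurate as Horner
   performed in roughly twice the working precision. */
double comp_horner(const double *a, int n, double x) {
    double r = a[n], c = 0.0;
    for (int i = n - 1; i >= 0; i--) {
        double p, ep, s, es;
        two_prod(r, x, &p, &ep);      /* p + ep == r * x     */
        two_sum(p, a[i], &s, &es);    /* s + es == p + a[i]  */
        r = s;
        c = c * x + (ep + es);        /* Horner on the errors */
    }
    return r + c;
}

int main(void) {
    /* (x - 1)^5 expanded; ill-conditioned to evaluate near x = 1. */
    double a[] = { -1.0, 5.0, -10.0, 10.0, -5.0, 1.0 };
    printf("%.17g\n", comp_horner(a, 5, 1.0 + 1e-8));
    return 0;
}
```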
Numerical Algorithms | 2010
Siegfried M. Rump; Stef Graillat
It is well known that it is an ill-posed problem to decide whether a function has a multiple root. Even for a univariate polynomial, an arbitrarily small perturbation of a polynomial coefficient may change the answer from yes to no. Let a system of nonlinear equations be given. In this paper we describe an algorithm for computing verified and narrow error bounds with the property that a slightly perturbed system is proved to have a double root within the computed bounds. For a univariate nonlinear function f we give a similar method, also for a multiple root. A narrow error bound for the perturbation is computed as well. Computational results for systems with up to 1000 unknowns demonstrate the performance of the methods.
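The flavour of the univariate case can be conveyed by a small, non-verified C sketch: a critical point x* of f is located by Newton's method on f', and the perturbation e = f(x*) is the shift for which f - e has an exact double root at x*. The example function is made up, and the sketch omits the interval computations that make the paper's bounds verified.

```c
#include <stdio.h>
#include <math.h>

/* Example function with a nearly double root near x = 1. */
static double f(double x)   { return (x - 1.0) * (x - 1.0) + 1e-10; }
static double fp(double x)  { return 2.0 * (x - 1.0); }
static double fpp(double x) { return 2.0; }

int main(void) {
    /* Newton iteration on f' to locate a critical point x*. */
    double x = 0.9;
    for (int k = 0; k < 50; k++) {
        double dx = fp(x) / fpp(x);
        x -= dx;
        if (fabs(dx) < 1e-15) break;
    }
    /* Perturbation e for which f - e has an exact double root at x*. */
    double e = f(x);
    printf("x* = %.17g, e = %.17g\n", x, e);
    return 0;
}
```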
ACM SIGSAM Bulletin | 2005
Stef Graillat
In this paper, we consider the problem of finding a nearest polynomial with a given root in the complex field (the coefficients of the polynomial and the root are complex numbers). We are interested in the existence and the uniqueness of such polynomials. We then study the problem in the real case (the coefficients of the polynomial and the root are real numbers) and in the real-complex case (the coefficients of the polynomial are real numbers and the root is a complex number). We derive new formulas for computing such polynomials.
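For the complex case with the coefficient 2-norm, a nearest polynomial with a prescribed root z is given by an orthogonal projection onto the hyperplane {q : q(z) = 0}. The C sketch below uses this standard least-squares formula; it illustrates the problem setting and is not the paper's more general formulas.

```c
#include <stdio.h>
#include <complex.h>

/* Nearest (in the coefficient 2-norm) polynomial q with q(z) = 0,
   obtained by projecting p onto the hyperplane { q : q(z) = 0 }:
       q_i = p_i - p(z) * conj(z^i) / sum_j |z^j|^2.
   p has degree n, coefficients p[0..n], p[0] being the constant term. */
static void nearest_with_root(const double complex *p, int n,
                              double complex z, double complex *q) {
    double complex pz = 0.0, zi = 1.0;
    double norm2 = 0.0;
    for (int i = 0; i <= n; i++) {
        pz    += p[i] * zi;
        norm2 += creal(zi * conj(zi));
        zi    *= z;
    }
    zi = 1.0;
    for (int i = 0; i <= n; i++) {
        q[i] = p[i] - pz * conj(zi) / norm2;
        zi  *= z;
    }
}

int main(void) {
    double complex p[] = { 1.0, 0.0, 1.0 };     /* x^2 + 1 */
    double complex q[3], z = 1.0 + 1.0 * I;     /* prescribed root */
    nearest_with_root(p, 2, z, q);
    for (int i = 0; i <= 2; i++)
        printf("q[%d] = %.6g %+.6gi\n", i, creal(q[i]), cimag(q[i]));
    return 0;
}
```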
Journal of Computational and Applied Mathematics | 2013
Hao Jiang; Stef Graillat; Canbin Hu; Shengguo Li; Xiangke Liao; Lizhi Cheng; Fang Su
This paper presents a compensated algorithm for the evaluation of the k-th derivative of a polynomial in the power basis. The proposed algorithm makes direct evaluation possible without explicitly forming the k-th derivative of the polynomial, and yields a very accurate result for all but the most ill-conditioned evaluations. Forward error analysis and running error analysis are performed by an approach based on the data dependency graph. Numerical experiments illustrate the accuracy and efficiency of the algorithm.
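For reference, the classical Horner-type scheme that evaluates a polynomial together with its derivatives in one sweep, without forming derivative coefficients, is sketched below in C. The compensated algorithm of the paper augments each of these operations with error-free transformations; that augmentation is not shown here.

```c
#include <stdio.h>

/* Horner-type scheme: evaluates p(x), p'(x), ..., p^(k)(x) in one sweep.
   a[0..n] are the coefficients (a[n] leading); on return d[j] = p^(j)(x). */
static void horner_derivatives(const double *a, int n, double x,
                               int k, double *d) {
    for (int j = 0; j <= k; j++) d[j] = 0.0;
    d[0] = a[n];
    for (int i = n - 1; i >= 0; i--) {
        int jmax = (k < n - i) ? k : n - i;
        for (int j = jmax; j >= 1; j--)
            d[j] = d[j] * x + d[j - 1];
        d[0] = d[0] * x + a[i];
    }
    /* At this point d[j] holds p^(j)(x) / j!; multiply back by j!. */
    double fact = 1.0;
    for (int j = 2; j <= k; j++) { fact *= j; d[j] *= fact; }
}

int main(void) {
    double a[] = { -1.0, 3.0, -3.0, 1.0 };  /* (x - 1)^3 */
    double d[3];
    horner_derivatives(a, 3, 2.0, 2, d);
    printf("p = %g, p' = %g, p'' = %g\n", d[0], d[1], d[2]);
    return 0;
}
```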
ACM Symposium on Applied Computing | 2006
Stef Graillat; Philippe Langlois; Nicolas Louvet
Several techniques and software packages aim to improve the accuracy of results computed in a fixed, finite precision. Here we focus on a method to improve the accuracy of polynomial evaluation. It is well known that the use of the Fused Multiply and Add operation, available on some microprocessors such as the Intel Itanium, slightly improves the accuracy of the Horner scheme. In this paper, we propose an accurate compensated Horner scheme specially designed to take advantage of the Fused Multiply and Add. We prove that the computed result is as accurate as if computed in twice the working precision. The algorithm we present is fast since it only requires well-optimizable floating point operations, performed in the same working precision as the given data.
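With a hardware FMA, the Veltkamp/Dekker splitting used in TwoProd collapses into a single instruction: the rounding error of a product can be recovered exactly with one fused multiply-add. The C sketch below shows this standard error-free product; it can replace the splitting-based two_prod in a compensated Horner scheme such as the one sketched earlier, although the paper's FMA-based scheme differs in its details.

```c
#include <math.h>
#include <stdio.h>

/* Error-free product via FMA: p = fl(a*b) and e = a*b - p exactly,
   obtained with one fused multiply-add instead of Dekker splitting. */
static void two_prod_fma(double a, double b, double *p, double *e) {
    *p = a * b;
    *e = fma(a, b, -*p);   /* exact residual of the rounded product */
}

int main(void) {
    double p, e;
    two_prod_fma(1.0 + 1e-8, 1.0 - 1e-8, &p, &e);
    printf("p = %.17g, e = %.3g\n", p, e);   /* e recovers the rounding error */
    return 0;
}
```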
Electronic Journal of Linear Algebra | 2006
Françoise Tisseur; Stef Graillat
The effect of structure-preserving perturbations on the solution to a linear system, matrix inversion, and distance to singularity is investigated. Particular attention is paid to linear and nonlinear structures that form Lie algebras, Jordan algebras and automorphism groups of a scalar product. These include complex symmetric, pseudo-symmetric, persymmetric, skew-symmetric, Hamiltonian, unitary, complex orthogonal and symplectic matrices. Under reasonable assumptions on the scalar product, it is shown that there is little or no difference between structured and unstructured condition numbers and distance to singularity for matrices in Lie and Jordan algebras. Hence, for these classes of matrices, the usual unstructured perturbation analysis is sufficient. It is shown that this is not true in general for structures in automorphism groups. Bounds and computable expressions for the structured condition numbers for a linear system and matrix inversion are derived for these nonlinear structures. Structured backward errors for the approximate solution of linear systems are also considered. Conditions are given for the structured backward error to be finite. For Lie and Jordan algebras it is proved that, whenever the structured backward error is finite, it is within a small factor of or equal to the unstructured one. The same conclusion holds for orthogonal and unitary structures but cannot easily be extended to other matrix groups. This work extends and unifies earlier analyses.
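For the unstructured case, the normwise backward error of an approximate solution xhat of A x = b is given by the Rigal-Gaches formula ||b - A xhat|| / (||A|| ||xhat|| + ||b||). The C sketch below computes it, using the Frobenius norm as an easily computed stand-in for the matrix 2-norm; the structured quantities studied in the paper additionally restrict the perturbations to the given matrix class.

```c
#include <stdio.h>
#include <math.h>

/* Normwise backward error of xhat as a solution of A x = b
   (Rigal-Gaches): eta = ||b - A xhat|| / (||A|| ||xhat|| + ||b||).
   The Frobenius norm stands in for the matrix 2-norm. */
static double backward_error(int n, const double *A, const double *b,
                             const double *xhat) {
    double rnorm = 0.0, Anorm = 0.0, xnorm = 0.0, bnorm = 0.0;
    for (int i = 0; i < n; i++) {
        double ri = b[i];
        for (int j = 0; j < n; j++) {
            ri    -= A[i * n + j] * xhat[j];
            Anorm += A[i * n + j] * A[i * n + j];
        }
        rnorm += ri * ri;
        xnorm += xhat[i] * xhat[i];
        bnorm += b[i] * b[i];
    }
    return sqrt(rnorm) / (sqrt(Anorm) * sqrt(xnorm) + sqrt(bnorm));
}

int main(void) {
    double A[]    = { 2.0, 1.0, 1.0, 3.0 };
    double b[]    = { 3.0, 4.0 };
    double xhat[] = { 1.0000001, 0.9999998 };  /* approximate solution */
    printf("eta = %.3g\n", backward_error(2, A, b, xhat));
    return 0;
}
```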
Information & Computation | 2012
Stef Graillat; Valérie Ménissier-Morain
Several techniques and software packages aim to improve the accuracy of results computed in a fixed, finite precision. Here we focus on methods to improve the accuracy of summation, dot product and polynomial evaluation. Such algorithms exist for real floating point numbers. In this paper, we provide new algorithms which deal with complex floating point numbers. We show that the computed results are as accurate as if computed in twice the working precision. The algorithms are simple since they only require addition, subtraction and multiplication of floating point numbers in the same working precision as the given data.
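A minimal C sketch of the summation case: a complex addition is two independent real additions, so TwoSum can be applied componentwise and the rounding errors accumulated separately. The function names are illustrative and the sketch conveys the general idea rather than the paper's exact algorithms.

```c
#include <stdio.h>
#include <complex.h>

/* TwoSum (Knuth): s + e == a + b exactly. */
static void two_sum(double a, double b, double *s, double *e) {
    *s = a + b;
    double z = *s - a;
    *e = (a - (*s - z)) + (b - z);
}

/* Compensated complex summation: TwoSum on the real and imaginary
   parts, with the rounding errors accumulated separately. */
static double complex comp_sum_cmplx(const double complex *x, int n) {
    double sr = 0.0, si = 0.0, cr = 0.0, ci = 0.0;
    for (int i = 0; i < n; i++) {
        double er, ei;
        two_sum(sr, creal(x[i]), &sr, &er);
        two_sum(si, cimag(x[i]), &si, &ei);
        cr += er;
        ci += ei;
    }
    return (sr + cr) + (si + ci) * I;
}

int main(void) {
    double complex x[] = { 1e16 + 1.0 * I, 1.0 - 1e16 * I,
                           -1e16 + 1e16 * I, 1.0 + 0.0 * I };
    double complex s = comp_sum_cmplx(x, 4);
    printf("%g %+gi\n", creal(s), cimag(s));  /* exact result is 2 + 1i */
    return 0;
}
```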
Mathematics in Computer Science | 2011
Stef Graillat; Fabienne Jézéquel; Shiyue Wang; Yuxiang Zhu
Floating-point arithmetic precision is limited in length: the IEEE single (respectively double) precision format is 32 bits (respectively 64 bits) long. Extended precision formats can be up to 128 bits long. However, some problems require a longer floating-point format because of round-off errors. Such problems are usually solved in arbitrary precision, but round-off errors still occur and must be controlled. Interval arithmetic has been implemented in arbitrary precision, for instance in the MPFI library. Interval arithmetic provides guaranteed results, but it is not well suited for the validation of huge applications. The CADNA library estimates round-off error propagation using stochastic arithmetic. CADNA has enabled the numerical validation of real-life applications, but it can be used in single precision or double precision only. In this paper, we present a library called SAM (Stochastic Arithmetic in Multiprecision). It is a multiprecision extension of the classic CADNA library. In SAM (as in CADNA), the arithmetic and relational operators are overloaded in order to deal with stochastic numbers. As a consequence, using SAM in a scientific code requires only a few modifications. This new library makes it possible to dynamically control the numerical methods used and, in particular, to determine the optimal number of iterations in an iterative process. We present some applications of SAM to the numerical validation of chaotic systems modeled by the logistic map.
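The idea behind CADNA and SAM (discrete stochastic arithmetic, the CESTAC method) can be illustrated with a toy C sketch: each value carries three samples, every operation's result is randomly perturbed by one ulp to crudely emulate random rounding, and the number of exact significant digits is estimated from the spread of the samples. This is not the API of either library, and SAM does all of this in arbitrary precision; the logistic map is used as in the paper's application.

```c
#include <stdio.h>
#include <stdlib.h>
#include <math.h>

/* Toy discrete stochastic arithmetic: three samples per value, each
   operation randomly perturbed by one ulp, accuracy estimated from
   the spread of the samples. */
typedef struct { double s[3]; } stoch;

static double perturb(double x) {
    return (rand() & 1) ? nextafter(x, INFINITY) : nextafter(x, -INFINITY);
}

static stoch st_const(double x) { stoch r = {{ x, x, x }}; return r; }

static stoch st_sub(stoch a, stoch b) {
    stoch r;
    for (int i = 0; i < 3; i++) r.s[i] = perturb(a.s[i] - b.s[i]);
    return r;
}

static stoch st_mul(stoch a, stoch b) {
    stoch r;
    for (int i = 0; i < 3; i++) r.s[i] = perturb(a.s[i] * b.s[i]);
    return r;
}

static stoch st_scale(double c, stoch a) {
    stoch r;
    for (int i = 0; i < 3; i++) r.s[i] = perturb(c * a.s[i]);
    return r;
}

/* Rough estimate of the number of exact significant decimal digits. */
static double st_digits(stoch a) {
    double mean = (a.s[0] + a.s[1] + a.s[2]) / 3.0, var = 0.0;
    for (int i = 0; i < 3; i++) var += (a.s[i] - mean) * (a.s[i] - mean);
    double sigma = sqrt(var / 2.0);
    if (sigma == 0.0) return 15.0;               /* all samples agree */
    double d = log10(fabs(mean) / sigma);
    return d < 0.0 ? 0.0 : (d > 15.0 ? 15.0 : d);
}

int main(void) {
    /* Logistic map x_{k+1} = a * x_k * (1 - x_k): chaotic for a = 3.9,
       so the estimated accuracy degrades as the iteration proceeds. */
    stoch x = st_const(0.5), one = st_const(1.0);
    for (int k = 1; k <= 100; k++) {
        x = st_scale(3.9, st_mul(x, st_sub(one, x)));
        if (k % 25 == 0)
            printf("k = %3d: x ~ %.15g, ~%.1f significant digits\n",
                   k, (x.s[0] + x.s[1] + x.s[2]) / 3.0, st_digits(x));
    }
    return 0;
}
```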
Symposium on Computer Arithmetic | 2013
Hao Jiang; Stef Graillat; Roberto Barrio
This paper is concerned with the fast and accurate evaluation of elementary symmetric functions. We present a new compensated algorithm, obtained by applying error-free transformations to improve the accuracy of the so-called Summation Algorithm, which is used, for example, in MATLAB's poly function. We derive a forward round-off error bound and a running error bound for our new algorithm. The round-off error bound implies that the computed result is as accurate as if computed with twice the working precision and then rounded to the current working precision. The running error analysis provides a sharper bound along with the result, without significantly increasing the computational cost. Numerical experiments illustrate that our algorithm runs much faster than the algorithm using the classic double-double library while sharing similar error estimates. Such an algorithm is widely applicable, for example to compute characteristic polynomials from eigenvalues. It can also be used in the Rasch model in psychological measurement.
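The underlying Summation Algorithm is the classical recurrence for the elementary symmetric functions, sketched below in C; the compensated algorithm of the paper applies error-free transformations (TwoSum, TwoProd) to each addition and multiplication of this recurrence, which is not shown here.

```c
#include <stdio.h>

/* Summation Algorithm recurrence for the elementary symmetric functions
   e_1, ..., e_n of x_1, ..., x_n: processing x_i updates
       e_j <- e_j + x_i * e_{j-1}   for j = i down to 1. */
static void elem_symmetric(const double *x, int n, double *e) {
    e[0] = 1.0;
    for (int j = 1; j <= n; j++) e[j] = 0.0;
    for (int i = 0; i < n; i++)
        for (int j = i + 1; j >= 1; j--)
            e[j] += x[i] * e[j - 1];
}

int main(void) {
    double x[] = { 1.0, 2.0, 3.0 };
    double e[4];
    elem_symmetric(x, 3, e);
    /* e1 = 6, e2 = 11, e3 = 6: coefficients of (t-1)(t-2)(t-3) up to sign. */
    printf("e1 = %g, e2 = %g, e3 = %g\n", e[1], e[2], e[3]);
    return 0;
}
```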
International Journal of Reliability and Safety | 2009
Hong Diep Nguyen; Stef Graillat; Jean Luc Lamotte
In the field of scientific computing, the accuracy of the calculation is of prime importance, which has led to efforts to increase the accuracy of floating point algorithms. One approach is to extend the precision of floating point numbers to double or quadruple the working precision. The building blocks of these efforts are the Error-Free Transformations (EFT). In this paper, we develop EFT operations in truncation rounding mode optimised for the Cell processor. They have been implemented and used in a double precision library based only on single precision numbers. We compare the performance of our library with the native double precision one on vector operations. In the best case, the performance of our library is very close to the standard double precision implementation. The work could easily be extended to obtain quadruple precision.
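The round-to-nearest building blocks on which such a library rests are Knuth's TwoSum and a double-single (pair-of-floats) addition; a simplified C sketch is given below. Adapting these transformations to the Cell's truncation rounding mode, which is the paper's contribution, is not attempted here.

```c
#include <stdio.h>

/* Knuth's TwoSum in single precision (valid in round-to-nearest):
   s + e == a + b exactly. */
static void two_sum_f(float a, float b, float *s, float *e) {
    *s = a + b;
    float z = *s - a;
    *e = (a - (*s - z)) + (b - z);
}

/* Simplified double-single addition: each operand is an unevaluated
   pair (hi, lo) of floats whose sum represents a higher precision value. */
static void ds_add(float ahi, float alo, float bhi, float blo,
                   float *rhi, float *rlo) {
    float s, e;
    two_sum_f(ahi, bhi, &s, &e);
    e += alo + blo;              /* fold in the low-order parts */
    *rhi = s + e;                /* renormalize the pair        */
    *rlo = e - (*rhi - s);
}

int main(void) {
    float rhi, rlo;
    /* 1 + 1e-9 is not representable as a single float; the pair keeps it. */
    ds_add(1.0f, 1e-9f, 1e-9f, 0.0f, &rhi, &rlo);
    printf("hi = %.9g, lo = %.9g\n", (double)rhi, (double)rlo);
    return 0;
}
```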