Philippe Langlois
University of Perpignan
Publications
Featured research published by Philippe Langlois.
Japan Journal of Industrial and Applied Mathematics | 2009
Stef Graillat; Philippe Langlois; Nicolas Louvet
We survey a class of algorithms to evaluate polynomials with floating point coefficients, with computations performed in IEEE-754 floating point arithmetic. The principle is to apply, once or recursively, an error-free transformation of the polynomial evaluation with the Horner algorithm and to accurately sum the final decomposition. These compensated algorithms are as accurate as the Horner algorithm performed in K times the working precision, for K an arbitrary positive integer. We prove this accuracy property with an a priori error analysis. We also provide validated dynamic bounds and apply these results to compute a faithfully rounded evaluation. These compensated algorithms are fast. We illustrate their practical efficiency with numerical experiments on significant environments. Compared with existing alternatives, these K-times compensated algorithms are competitive for K up to 4, i.e., up to 212 mantissa bits.
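The core of these algorithms is easy to sketch for one level of compensation (the K = 2 case): each Horner step is an error-free transformation whose rounding errors are accumulated and added back to the result. Below is a minimal illustrative sketch in C, not the authors' code; it assumes IEEE-754 double precision, a hardware fma for the exact product error, and a compiler that preserves the written operation order (no -ffast-math). The function names are ours.

/* Minimal sketch of a compensated Horner evaluation (one compensation level).
 * twosum/twoprod are the classic error-free transformations. */
#include <math.h>
#include <stdio.h>

static void twosum(double a, double b, double *s, double *e) {
    *s = a + b;
    double bv = *s - a;
    *e = (a - (*s - bv)) + (b - bv);          /* exact error of a + b */
}

static void twoprod(double a, double b, double *p, double *e) {
    *p = a * b;
    *e = fma(a, b, -*p);                      /* exact error of a * b */
}

/* Horner evaluation of p(x) = sum a[i] x^i, degree n, plus a correcting
 * term that accumulates the exact rounding error of every step. */
static double comp_horner(const double *a, int n, double x) {
    double s = a[n], c = 0.0;
    for (int i = n - 1; i >= 0; --i) {
        double p, ep, es;
        twoprod(s, x, &p, &ep);
        twosum(p, a[i], &s, &es);
        c = c * x + (ep + es);                /* Horner on the error terms */
    }
    return s + c;                             /* compensated result */
}

int main(void) {
    double a[] = {-1, 5, -10, 10, -5, 1};     /* (x - 1)^5, hard near x = 1 */
    double x = 1.0 + 1e-5;
    printf("compensated Horner: %.17e\n", comp_horner(a, 5, x));
    return 0;
}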
Symposium on Computer Arithmetic | 2007
Philippe Langlois; Nicolas Louvet
The compensated Horner algorithm improves the accuracy of polynomial evaluation in IEEE-754 floating point arithmetic: the computed result is as accurate as if it were computed with the classic Horner algorithm in twice the working precision. Since the condition number still governs the accuracy of this computation, it may return an arbitrary number of inexact digits. We address here how to compute a faithfully rounded result, that is, one of the two floating point neighbors of the exact evaluation. We propose an a priori sufficient condition on the condition number to ensure that the compensated evaluation is faithfully rounded. We also propose a validated and dynamic method to test at run time whether the compensated result is actually faithfully rounded. Numerical experiments illustrate the behavior of these two conditions and show that the associated run-time overhead remains reasonable.
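The condition number in question is the usual one for polynomial evaluation, cond(p, x) = p~(|x|) / |p(x)|, where p~ has the absolute values of the coefficients of p. The sketch below only estimates this quantity at run time; the paper's a priori threshold below which faithful rounding is guaranteed, and its validated dynamic test, are not reproduced here. Function names are ours and the code assumes IEEE-754 doubles.

/* Sketch: estimate the condition number cond(p, x) = ptilde(|x|) / |p(x)|
 * of polynomial evaluation, ptilde having the absolute coefficients of p.
 * The paper's a priori criterion compares this quantity to a threshold
 * depending on the degree and the working precision (not reproduced here). */
#include <math.h>
#include <stdio.h>

static double horner(const double *a, int n, double x) {
    double s = a[n];
    for (int i = n - 1; i >= 0; --i) s = s * x + a[i];
    return s;
}

static double cond_poly(const double *a, int n, double x) {
    double num = fabs(a[n]), ax = fabs(x);
    for (int i = n - 1; i >= 0; --i) num = num * ax + fabs(a[i]);
    return num / fabs(horner(a, n, x));       /* estimate, computed in FP */
}

int main(void) {
    double a[] = {-1, 5, -10, 10, -5, 1};     /* (x - 1)^5, hard near x = 1 */
    double x = 1.0 + 1e-5;
    printf("cond(p, x) ~ %.3e\n", cond_poly(a, 5, x));
    return 0;
}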
BIT Numerical Mathematics | 2001
Philippe Langlois
A new automatic method to correct the first-order effect of floating point rounding errors on the result of a numerical algorithm is presented. A correcting term and a confidence threshold are computed using algorithmic differentiation, the computation of elementary rounding errors, and running error analysis. Algorithms for which the accuracy of the result is not affected by higher-order terms are identified. The correction is applied to the final result or to sensitive intermediate results to improve the accuracy of the computed result and/or the stability of the algorithm.
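Applied to recursive summation, the principle becomes especially simple: the partial derivatives of the result with respect to the intermediate sums are all equal to 1, so the correcting term reduces to the sum of the elementary rounding errors captured by TwoSum. The sketch below illustrates this one case only, not the general method based on algorithmic differentiation; it assumes IEEE-754 doubles and no compiler reassociation, and the function names are ours.

/* Sketch of the correction principle applied to recursive summation:
 * capture each elementary rounding error with TwoSum, then add their sum
 * to the computed result (for summation this correction is exact). */
#include <stdio.h>

static double twosum_err(double a, double b, double *s) {
    *s = a + b;
    double bv = *s - a;
    return (a - (*s - bv)) + (b - bv);        /* exact rounding error of a + b */
}

static double corrected_sum(const double *x, int n) {
    double s = 0.0, corr = 0.0;
    for (int i = 0; i < n; ++i)
        corr += twosum_err(s, x[i], &s);      /* accumulate elementary errors */
    return s + corr;                          /* corrected result */
}

int main(void) {
    double x[] = {1e16, 1.0, -1e16, 1.0};     /* exact sum is 2 */
    printf("corrected sum: %.17g\n", corrected_sum(x, 4));
    return 0;
}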
Parallel Symbolic Computation | 2010
Philippe Langlois; Matthieu Martel; Laurent Thévenoux
In this article, we focus on numerical algorithms for which, in practice, parallelism and accuracy do not coexist well. To increase parallelism, expressions are reparsed, implicitly using mathematical laws such as associativity, and this reduces accuracy. Our approach focuses on summation algorithms and performs an exhaustive study: we generate all the algorithms equivalent to the original one and compatible with our relaxed time constraint. Next we compute the worst errors that may arise during their evaluation, for several relevant sets of data. Our main conclusion is that relaxing the time constraint very slightly, by choosing algorithms whose critical paths are a bit longer than optimal, makes it possible to significantly improve accuracy. We extend these results to the case of bounded parallelism and to accurate summation algorithms that use compensation techniques.
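A tiny hand-made illustration of the trade-off: the five parenthesizations of a 4-term sum have critical paths of 2 or 3 additions and, on ill-conditioned data, produce different errors; on the data below the shallowest (most parallel) grouping is not among the most accurate ones. This is only a toy example written by hand, not the exhaustive generator of the paper, and it assumes IEEE-754 doubles with no compiler reassociation.

/* Sketch: the five parenthesizations of a + b + c + d, their critical path
 * (in additions) and their error on one ill-conditioned data set whose
 * exact sum is 2. */
#include <math.h>
#include <stdio.h>

int main(void) {
    double a = 1e16, b = 1.0, c = -1e16, d = 1.0, exact = 2.0;
    struct { const char *expr; int depth; double value; } v[] = {
        { "((a+b)+c)+d", 3, ((a + b) + c) + d },
        { "(a+(b+c))+d", 3, (a + (b + c)) + d },
        { "a+((b+c)+d)", 3, a + ((b + c) + d) },
        { "a+(b+(c+d))", 3, a + (b + (c + d)) },
        { "(a+b)+(c+d)", 2, (a + b) + (c + d) },   /* most parallel grouping */
    };
    for (int i = 0; i < 5; ++i)
        printf("%-14s depth %d  error %.1f\n",
               v[i].expr, v[i].depth, fabs(v[i].value - exact));
    return 0;
}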
ACM Symposium on Applied Computing | 2006
Stef Graillat; Philippe Langlois; Nicolas Louvet
Several techniques and software tools aim to improve the accuracy of results computed in a fixed finite precision. Here we focus on a method to improve the accuracy of polynomial evaluation. It is well known that using the fused multiply-add operation available on some microprocessors, such as the Intel Itanium, slightly improves the accuracy of the Horner scheme. In this paper, we propose an accurate compensated Horner scheme specially designed to take advantage of the fused multiply-add. We prove that the computed result is as accurate as if it were computed in twice the working precision. The algorithm we present is fast since it only requires well-optimizable floating point operations, performed in the same working precision as the given data.
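Two ingredients are easy to sketch: the Horner recurrence written with one fused multiply-add per step, and the fma-based exact product error e = fma(a, b, -a*b) that compensated schemes propagate. The sketch below is a simplified illustration, not the paper's compensated algorithm, which relies on the error-free transformation of the fma itself; it assumes a hardware fma and no -ffast-math, and the function names are ours.

/* Sketch: (1) Horner with one fused multiply-add per step, and (2) the
 * fma-based error-free product used as a building block of compensated
 * schemes: e = fma(a, b, -a*b) is the exact rounding error of a*b. */
#include <math.h>
#include <stdio.h>

static double horner_fma(const double *a, int n, double x) {
    double s = a[n];
    for (int i = n - 1; i >= 0; --i)
        s = fma(s, x, a[i]);                  /* one rounding per step */
    return s;
}

int main(void) {
    double a[] = {-1, 5, -10, 10, -5, 1};     /* (x - 1)^5 */
    double x = 1.0 + 1e-5;
    printf("HornerFMA: %.17e\n", horner_fma(a, 5, x));

    double u = 1.0 / 3.0, v = 7.0;
    double p = u * v, e = fma(u, v, -p);      /* p + e equals u * v exactly */
    printf("u*v = %.17g, exact product error = %.17g\n", p, e);
    return 0;
}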
New Technologies, Mobility and Security | 2015
Philippe Langlois; Rafife Nheili; Christophe Denis
Floating-point arithmetic may introduce failures of numerical reproducibility between a priori similar sequential and parallel executions of an HPC simulation. We present how to apply some existing techniques to parts of hydrodynamic finite element simulations, and we analyze how easily these techniques allow us to recover numerical reproducibility.
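The root cause is easy to reproduce at small scale: floating point addition is not associative, so the same data reduced in a different order, as happens when the number of processes or threads changes, can yield different bits. The sketch below shows this effect in isolation; it is independent of the hydrodynamic code discussed in the paper.

/* Sketch: the same data summed in two different orders (as two different
 * domain decompositions or reduction trees would do) gives different
 * results, because floating point addition is not associative. */
#include <stdio.h>

int main(void) {
    double x[] = {1e16, 1.0, -1e16, 1.0};
    double left = 0.0, right = 0.0;

    for (int i = 0; i < 4; ++i)  left  += x[i];   /* left-to-right order */
    for (int i = 3; i >= 0; --i) right += x[i];   /* right-to-left order */

    printf("left-to-right : %.17g\n", left);      /* the two values differ */
    printf("right-to-left : %.17g\n", right);
    return 0;
}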
Lecture Notes in Computer Science | 2014
Chemseddine Chohra; Philippe Langlois; David Parello
Numerical reproducibility failures appear in massively parallel floating-point computations. One way to guarantee this reproducibility is to extend the IEEE-754 correct rounding to larger computing sequences, e.g. to the BLAS. Is the extra cost for numerical reproducibility acceptable in practice? We present solutions and experiments for the level-1 BLAS and draw conclusions about their efficiency.
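One classical ingredient of accurate summation in this area is the error-free vector transformation: a sweep of TwoSum leaves the exact sum of a vector unchanged while concentrating it in the last components, and K - 1 sweeps followed by a plain sum give the SumK algorithm of Ogita, Rump and Oishi. The sketch below shows that ingredient only; it is not the reproducible level-1 BLAS evaluated in the paper, whose correctly rounded results are, by definition, independent of the summation order. It assumes IEEE-754 doubles and no compiler reassociation.

/* Sketch of a classical accurate-summation ingredient (SumK): each TwoSum
 * sweep keeps the exact sum of the vector unchanged while pushing it into
 * the last slot; K - 1 sweeps then a plain sum give roughly K-fold
 * working-precision accuracy. */
#include <stdio.h>

static void twosum(double a, double b, double *s, double *e) {
    *s = a + b;
    double bv = *s - a;
    *e = (a - (*s - bv)) + (b - bv);
}

static double sumK(double *p, int n, int K) {
    for (int k = 0; k < K - 1; ++k)           /* K - 1 error-free sweeps */
        for (int i = 1; i < n; ++i)
            twosum(p[i], p[i - 1], &p[i], &p[i - 1]);
    double s = 0.0;
    for (int i = 0; i < n; ++i) s += p[i];    /* final recursive sum */
    return s;
}

int main(void) {
    double x[] = {1e16, 1.0, -1e16, 1.0};     /* exact sum is 2 */
    printf("SumK (K = 2): %.17g\n", sumK(x, 4, 2));
    return 0;
}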
Parallel Computing | 2010
Bernard Goossens; Philippe Langlois; David Parello; Eric Petit
We introduce and describe PerPI, a software tool analyzing the instruction-level parallelism (ILP) of a program. ILP measures the best potential of a program to run in parallel on an ideal machine, that is, a machine with infinite resources. PerPI is a programmer-oriented tool whose purpose is to improve the understanding of how the algorithm and the (micro-)architecture will interact. PerPI fills the gap between the manual analysis of an abstract algorithm and implementation-dependent profiling tools. The current version provides reproducible measures of the average number of instructions per cycle executed on an ideal machine, histograms of these instructions, and associated data-flow graphs for any x86 binary file. We illustrate how these measures explain the actual performance of core numerical subroutines when measured run times cannot be correlated with the classical flop count analysis.
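The ideal-machine model behind such measures is simple: every instruction costs one cycle and starts as soon as its producers have completed, so the number of ideal cycles equals the length of the longest dependency chain and ILP is the instruction count divided by that length. The toy sketch below applies this model to a hand-written dependency DAG; PerPI itself derives the dependencies from a dynamic trace of an x86 binary, and all names here are ours.

/* Toy sketch of the ideal-machine ILP model: the schedule depth of
 * instruction i is 1 + max(depth of its producers), the number of ideal
 * cycles is the maximum depth, and ILP = instructions / cycles.
 * Dependencies are written by hand for a balanced reduction of 8 inputs. */
#include <stdio.h>

#define N 7

int main(void) {
    /* deps[i] lists the producing instructions of i (-1 = input value). */
    int deps[N][2] = {
        {-1, -1}, {-1, -1}, {-1, -1}, {-1, -1},   /* 4 leaf additions   */
        { 0,  1}, { 2,  3},                        /* 2 middle additions */
        { 4,  5}                                   /* root addition      */
    };
    int depth[N], cycles = 0;
    for (int i = 0; i < N; ++i) {                  /* topological order  */
        depth[i] = 1;
        for (int j = 0; j < 2; ++j)
            if (deps[i][j] >= 0 && depth[deps[i][j]] + 1 > depth[i])
                depth[i] = depth[deps[i][j]] + 1;
        if (depth[i] > cycles) cycles = depth[i];
    }
    printf("instructions = %d, ideal cycles = %d, ILP = %.2f\n",
           N, cycles, (double)N / cycles);
    return 0;
}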
Theoretical Computer Science | 2003
Marc Daumas; Philippe Langlois
An additive symmetric value b of a with respect to c satisfies c = (a + b)/2. Existence and uniqueness of such a b are basic properties in exact arithmetic that fail when a and b are floating point numbers and the computation of c is performed in IEEE-754-like arithmetic. We exhibit and prove conditions on the existence, the uniqueness and the consistency of an additive symmetric value when b and c have the same sign.
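A small experiment makes these failures concrete: with a = 1 and c a large integer, counting the floating point values b for which (a + b)/2 evaluates exactly to c exhibits both cases, no solution and several solutions, depending on c. The sketch below assumes IEEE-754 doubles with round-to-nearest (so only the addition rounds, the division by 2 being exact) and is an illustration, not the paper's analysis.

/* Sketch: count the double values b with fl((a + b)/2) == c by scanning a
 * small window of candidates around 2c - a. For a = 1 and c a large
 * integer, existence fails for some c and uniqueness fails for others. */
#include <math.h>
#include <stdio.h>

static int count_symmetric(double a, double c) {
    double b = 2.0 * c - a;                    /* natural candidate */
    for (int i = 0; i < 8; ++i)                /* start a few ulps below it */
        b = nextafter(b, -INFINITY);
    int count = 0;
    for (int i = 0; i < 16; ++i) {             /* scan upward */
        if ((a + b) / 2.0 == c) count++;
        b = nextafter(b, INFINITY);
    }
    return count;
}

int main(void) {
    double a = 1.0;
    double c_even = 4503599627370498.0;        /* 2^52 + 2 */
    double c_odd  = 4503599627370497.0;        /* 2^52 + 1 */
    printf("a = 1, c = 2^52 + 2: %d solution(s)\n", count_symmetric(a, c_even));
    printf("a = 1, c = 2^52 + 1: %d solution(s)\n", count_symmetric(a, c_odd));
    return 0;
}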
Linear Algebra and its Applications | 2000
T. Braconnier; Philippe Langlois; J.C. Rioual
Many algorithms for solving eigenproblems need to compute an orthonormal basis. The computation is commonly performed using a QR factorization obtained with the classical or the modified Gram–Schmidt algorithm, the Householder algorithm, the Givens algorithm, or the Gram–Schmidt algorithm with iterative reorthogonalization. For the eigenproblem, although textbooks warn users about the possible instability of eigensolvers due to loss of orthonormality, few theoretical results exist. In this paper we prove that the loss of orthonormality of the computed basis can affect the reliability of the computed eigenpair when we use the Arnoldi method. We also show that the stopping criterion based on the backward error and the value computed using the Arnoldi method can differ because of the loss of orthonormality of the computed basis of the Krylov subspace. We also give a bound which quantifies this difference in terms of the loss of orthonormality.
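The phenomenon is easy to observe on a small ill-conditioned example: classical Gram–Schmidt loses orthogonality much faster than the modified variant. The sketch below measures max |Q^T Q - I| for both variants on a Lauchli-type matrix with nearly dependent columns; it only illustrates the loss of orthonormality, not the paper's Arnoldi backward error analysis, and all names are ours.

/* Sketch: loss of orthonormality of classical vs modified Gram-Schmidt on
 * a small ill-conditioned matrix with nearly dependent columns. Columns of
 * the matrix are stored as rows of A[N][M] for simplicity. */
#include <math.h>
#include <stdio.h>

#define M 4
#define N 3

static double dot(const double *x, const double *y) {
    double s = 0.0;
    for (int i = 0; i < M; ++i) s += x[i] * y[i];
    return s;
}

static double ortho_loss(double Q[N][M]) {       /* max |Q^T Q - I| */
    double worst = 0.0;
    for (int j = 0; j < N; ++j)
        for (int k = 0; k < N; ++k) {
            double d = fabs(dot(Q[j], Q[k]) - (j == k ? 1.0 : 0.0));
            if (d > worst) worst = d;
        }
    return worst;
}

/* classical = 1: project against the original column (classical GS);
 * classical = 0: project against the running, updated column (modified GS). */
static void gram_schmidt(double A[N][M], double Q[N][M], int classical) {
    for (int j = 0; j < N; ++j) {
        double v[M];
        for (int i = 0; i < M; ++i) v[i] = A[j][i];
        for (int k = 0; k < j; ++k) {
            double r = classical ? dot(Q[k], A[j]) : dot(Q[k], v);
            for (int i = 0; i < M; ++i) v[i] -= r * Q[k][i];
        }
        double nrm = sqrt(dot(v, v));
        for (int i = 0; i < M; ++i) Q[j][i] = v[i] / nrm;
    }
}

int main(void) {
    double eps = 1e-8;                           /* nearly dependent columns */
    double A[N][M] = { {1, eps, 0, 0}, {1, 0, eps, 0}, {1, 0, 0, eps} };
    double Q[N][M];
    gram_schmidt(A, Q, 1);
    printf("classical GS : max|Q^T Q - I| = %.2e\n", ortho_loss(Q));
    gram_schmidt(A, Q, 0);
    printf("modified  GS : max|Q^T Q - I| = %.2e\n", ortho_loss(Q));
    return 0;
}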