Brigitte Verdonk
University of Antwerp
Publications
Featured research published by Brigitte Verdonk.
Computing | 1985
Annie Cuyt; Brigitte Verdonk
Many papers have already been published on the subject of multivariate polynomial interpolation and also on the subject of multivariate Padé approximation. But the problem of multivariate rational interpolation has only very recently been considered; we refer among others to [8] and [3]. The computation of a univariate rational interpolant can be done in various equivalent ways: one can calculate the explicit solution of the system of interpolatory conditions, start a recursive algorithm, or calculate the convergent of a continued fraction. In this paper we generalize each of those methods from the univariate to the multivariate case. Although the generalization is simple, the equivalence of the computational methods is completely lost in the multivariate case. This was to be expected, since various authors have already remarked [2,7] that there is no link between multivariate Padé approximants calculated by matching the Taylor series and those obtained as convergents of a continued fraction. Zusammenfassung (translated from the German): The multivariate polynomial interpolation problem and multivariate Padé approximation have been studied for some years, but the multivariate rational interpolation problem is still relatively young [3,8]. For univariate functions there are several equivalent algorithms for computing the rational interpolant: solving a system of equations, a recursive computation, or the evaluation of a continued fraction. These algorithms are generalized here to multivariate functions. We observe that they are then no longer equivalent. The same observation has already been made by other mathematicians for the multivariate Padé approximation problem [2,7], which can likewise be solved in several ways.
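As an illustration of one of the three univariate methods named above (the continued-fraction route), the following is a minimal Python sketch of Thiele's interpolating continued fraction built from inverse differences; the sample data and the assumption that no degenerate divisions occur are illustrative choices, not taken from the paper.

```python
def thiele_coeffs(xs, fs):
    """Inverse-difference table for Thiele's interpolating continued fraction
    R(x) = a0 + (x - x0)/(a1 + (x - x1)/(a2 + ...)).
    Assumes no zero denominators occur for this data (degenerate point
    configurations need special handling)."""
    n = len(xs)
    rho = [[0.0] * n for _ in range(n)]
    for i in range(n):
        rho[i][0] = fs[i]
    for k in range(1, n):
        for i in range(k, n):
            rho[i][k] = (xs[i] - xs[k - 1]) / (rho[i][k - 1] - rho[k - 1][k - 1])
    return [rho[k][k] for k in range(n)]


def thiele_eval(xs, coeffs, x):
    """Evaluate the continued fraction tail-to-head."""
    val = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        val = coeffs[k] + (x - xs[k]) / val
    return val


# Illustrative data: f(x) = 1/(1 + x) sampled at 0, 1, 2 is reproduced exactly.
xs, fs = [0.0, 1.0, 2.0], [1.0, 0.5, 1.0 / 3.0]
a = thiele_coeffs(xs, fs)
print(thiele_eval(xs, a, 0.5), 1.0 / 1.5)
```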
Numerische Mathematik | 1984
Annie Cuyt; Brigitte Verdonk
Summary: Padé approximants are a frequently used tool for the solution of mathematical problems. One of the main drawbacks of their use for multivariate functions is the calculation of the derivatives of f(x1, ..., xp). Therefore multivariate Newton-Padé approximants are introduced; their computation will only use the value of f at some points. In Sect. 1 we shall repeat the univariate Newton-Padé approximation problem, which is a rational Hermite interpolation problem. In Sect. 2 we sketch some problems that can arise when dealing with multivariate interpolation. In Sect. 3 we define multivariate divided differences and prove some lemmas that will be useful tools for the introduction of multivariate Newton-Padé approximants in Sect. 4. A numerical example is given in Sect. 5, together with the proof that for p=1 the classical Newton-Padé approximants for a univariate function are obtained.
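For reference, here is a minimal sketch of the univariate divided-difference table and Newton form that the paper generalizes to the multivariate Newton-Padé setting; the sample data are invented for illustration.

```python
def divided_differences(xs, fs):
    """Classical univariate divided-difference table; returns the Newton
    coefficients f[x0], f[x0,x1], ..., f[x0,...,xn]."""
    n = len(xs)
    col = list(fs)
    coeffs = [col[0]]
    for j in range(1, n):
        col = [(col[i + 1] - col[i]) / (xs[i + j] - xs[i]) for i in range(n - j)]
        coeffs.append(col[0])
    return coeffs


def newton_eval(xs, coeffs, x):
    """Horner-like backward evaluation of the Newton form."""
    val = coeffs[-1]
    for k in range(len(coeffs) - 2, -1, -1):
        val = coeffs[k] + (x - xs[k]) * val
    return val


# Illustrative data: cubic data at 4 nodes is reproduced exactly.
xs = [0.0, 1.0, 2.0, 3.0]
fs = [t**3 for t in xs]
c = divided_differences(xs, fs)
print(newton_eval(xs, c, 1.5), 1.5**3)
```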
Applied Numerical Mathematics | 1988
Annie Cuyt; Brigitte Verdonk
While the history of continued fractions goes back to Euclid's algorithm, branched continued fractions are only twenty years old. The idea to construct them was born in Lvov (U.S.S.R.) in the early sixties. The first and most general form of these fractions was introduced by Skorobogatko in [14] together with Droniuk, Bobyk and Ptashnik. An ordinary continued fraction (CF) is an expression of the form b0 + a1/(b1 + a2/(b2 + a3/(b3 + ...))).
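A small sketch of evaluating such an ordinary continued fraction by the usual backward recurrence (the branched, multivariate case treated in the paper is not shown); the square-root example is an illustrative choice.

```python
def cf_eval(b0, a, b):
    """Backward evaluation of the ordinary continued fraction
    b0 + a1/(b1 + a2/(b2 + ... + an/bn)),
    with a = [a1, ..., an] and b = [b1, ..., bn]."""
    tail = b[-1]
    for i in range(len(b) - 2, -1, -1):
        tail = b[i] + a[i + 1] / tail
    return b0 + a[0] / tail


# Illustrative example: sqrt(2) = 1 + 1/(2 + 1/(2 + 1/(2 + ...))).
n = 20
print(cf_eval(1.0, [1.0] * n, [2.0] * n), 2.0 ** 0.5)
```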
Numerical Algorithms | 2007
Oliver Salazar Celis; Annie Cuyt; Brigitte Verdonk
In many applications, observations are prone to imprecise measurements. When constructing a model based on such data, an approximation rather than an interpolation approach is needed. Very often a least squares approximation is used. Here we follow a different approach. A natural way for dealing with uncertainty in the data is by means of an uncertainty interval. We assume that the uncertainty in the independent variables is negligible and that for each observation an uncertainty interval can be given which contains the (unknown) exact value. To approximate such data we look for functions which intersect all uncertainty intervals. In the past this problem has been studied for polynomials, or more generally for functions which are linear in the unknown coefficients. Here we study the problem for a particular class of functions which are nonlinear in the unknown coefficients, namely rational functions. We show how to reduce the problem to a quadratic programming problem with a strictly convex objective function, yielding a unique rational function which intersects all uncertainty intervals and satisfies some additional properties. Compared to rational least squares approximation which reduces to a nonlinear optimization problem where the objective function may have many local minima, this makes the new approach attractive.
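A hedged sketch of the reduction described above: the interval conditions l_i <= p(x_i)/q(x_i) <= u_i are linearized to l_i*q(x_i) <= p(x_i) <= u_i*q(x_i) with q(x_i) > 0, and a strictly convex objective ||c||^2 selects one feasible rational function. The toy data, the degree choices, the positivity margin delta, and the use of SciPy's SLSQP solver are assumptions made for illustration; the paper's exact quadratic program and normalization may differ.

```python
import numpy as np
from scipy.optimize import minimize

# Hypothetical data: abscissae x_i with uncertainty intervals [lo_i, hi_i]
# around (unknown) exact values; here built around exp(x) for illustration.
x = np.linspace(0.0, 1.0, 8)
lo = np.exp(x) - 0.05
hi = np.exp(x) + 0.05

deg_p, deg_q = 2, 1                       # illustrative degree choices
np1, nq1 = deg_p + 1, deg_q + 1
Vp = np.vander(x, np1, increasing=True)   # rows [1, x, x^2, ...]
Vq = np.vander(x, nq1, increasing=True)

def feasibility(c, delta=1e-6):
    """All returned values must be >= 0:
    p - lo*q >= 0,  hi*q - p >= 0,  q >= delta (keeps q positive at the data)."""
    a, b = c[:np1], c[np1:]
    p, q = Vp @ a, Vq @ b
    return np.concatenate([p - lo * q, hi * q - p, q - delta])

# The strictly convex objective ||c||^2 singles out one feasible rational function.
res = minimize(lambda c: c @ c, x0=np.ones(np1 + nq1), method='SLSQP',
               constraints={'type': 'ineq', 'fun': feasibility})

a, b = res.x[:np1], res.x[np1:]
r = lambda t: np.polyval(a[::-1], t) / np.polyval(b[::-1], t)
print(res.success, r(0.5), np.exp(0.5))
```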
Journal of Computational and Applied Mathematics | 1988
Annie Cuyt; Brigitte Verdonk
A study of multivariate reciprocal differences for expansions in branched Thiele continued fractions. (Translated from the French title.)
ACM Transactions on Mathematical Software | 2001
Brigitte Verdonk; Annie Cuyt; Dennis Verschaeren
This paper introduces a precision- and range-independent tool for testing the compliance of hardware or software implementations of (multiprecision) floating-point arithmetic with the principles of the IEEE standards 754 and 854. The tool consists of a driver program, offering many options to test only specific aspects of the IEEE standards, and a large set of test vectors, encoded in a precision-independent syntax to allow the testing of basic and extended hardware formats as well as multiprecision floating-point implementations. The suite of test vectors stems on the one hand from the integration and fully precision- and range-independent generalization of existing hardware test sets, and on the other hand from the systematic testing of exact rounding for all combinations of round and sticky bits that can occur. The former constitutes only 50% of the resulting test set. In the latter we especially focus on hard-to-round cases. In addition, the test suite implicitly tests properties of floating-point operations, following the idea of Paranoia, and it reports which of the three IEEE-compliant underflow mechanisms is used by the floating-point implementation under consideration. We also check whether that underflow mechanism is used consistently. The tool is backward compatible with the UCBTEST package and with Coonen's test syntax.
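The tool itself is not reproduced here; as a tiny Python analogue of the kind of property probing the abstract attributes to Paranoia, the following checks round-to-nearest-even behaviour at the 2^53 boundary and probes for gradual underflow. The exact test-vector syntax and driver options of the tool are not shown.

```python
import struct

def bits(x):
    """Raw IEEE 754 double encoding of x as a 64-character bit string."""
    return format(struct.unpack('<Q', struct.pack('<d', x))[0], '064b')

# Round-to-nearest-even probe at the 2^53 boundary, where consecutive
# doubles are 2 apart and odd integers fall exactly halfway in between.
big = 2.0 ** 53
print(big + 1.0 == big)         # True: the tie rounds down to the even neighbour
print(big + 3.0 == big + 4.0)   # True: the tie rounds up to the even neighbour

# Underflow probe: a nonzero smallest subnormal indicates gradual underflow.
tiny = 2.0 ** -1074
print(tiny > 0.0, bits(tiny))
```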
IEEE Transactions on Microwave Theory and Techniques | 2006
Annie Cuyt; R. B. Lenin; Stefan Becuwe; Brigitte Verdonk
The behavior of certain electromagnetic devices or components can be simulated in great detail in software. A drawback of these simulation models is that they are very time-consuming. Since the accuracy required for the computational electromagnetic analysis is usually only 2-3 significant digits, an approximate analytic model is sometimes used instead, as noted by Lehmensiek and Meyer in 2001. The most complex model we consider here is a multivariate rational function, which interpolates a number of simulation data points. The interpolating rational function is constructed in such a way that it minimizes both the truncation error and the number of simulation data points, since each evaluation of the simulation model is computationally costly.
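As a rough sketch of this kind of surrogate (not the paper's construction, which also controls the truncation error and the number of simulation points adaptively), the following fits a bivariate rational model to a few evaluations of a stand-in "simulator" by solving the linearized interpolation conditions in a least-squares sense; the term sets, sample grid, and test function are illustrative assumptions.

```python
import numpy as np

def simulator(x, y):
    """Stand-in for a costly electromagnetic simulation (illustrative only)."""
    return 1.0 / (1.0 + x + 0.5 * y + 0.2 * x * y)

# Illustrative term sets for numerator p and denominator q (q is normalised
# so that its constant coefficient equals 1).
num_terms = [(0, 0), (1, 0), (0, 1)]
den_terms = [(1, 0), (0, 1), (1, 1)]

pts = [(x, y) for x in (0.0, 0.4, 0.8) for y in (0.0, 0.5, 1.0)]
rows, rhs = [], []
for (x, y) in pts:
    f = simulator(x, y)
    row = [x**i * y**j for (i, j) in num_terms]          # p coefficients
    row += [-f * x**i * y**j for (i, j) in den_terms]    # q coefficients
    rows.append(row)
    rhs.append(f)   # the normalised term f * 1 moved to the right-hand side
coef, *_ = np.linalg.lstsq(np.array(rows), np.array(rhs), rcond=None)

a, b = coef[:len(num_terms)], coef[len(num_terms):]
def model(x, y):
    p = sum(ai * x**i * y**j for ai, (i, j) in zip(a, num_terms))
    q = 1.0 + sum(bi * x**i * y**j for bi, (i, j) in zip(b, den_terms))
    return p / q

print(model(0.3, 0.7), simulator(0.3, 0.7))   # cheap surrogate vs. costly call
```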
Computing | 2001
Annie Cuyt; Brigitte Verdonk; Stefan Becuwe; Peter Kuterna
Abstract In this paper we reinvestigate a well-known expression, first published in [7], which is often used to illustrate catastrophic cancellation as well as the fact that identical output in different precisions does not imply reliability. The purpose of revisiting this expression is twofold. First, we show in Section 2 that the effect of the cancellation is very different on different IEEE 754 compliant platforms, and we unravel the underlying (hardware) reasons, which are unknown to many numerical analysts. Besides illustrating cancellation, this expression also counters the common misbelief among many numerical analysts that the same program will deliver identical results on all IEEE-conforming systems. Second, in Section 3 we use, illustrate and comment upon the cross-platform didactical tool Arithmetic Explorer, developed at the University of Antwerp, by means of which we performed the bit-level analysis of the expression evaluation under investigation on the different machines. We believe that this tool, which is freely available from the authors, can be of use to all of us teaching a first numerical analysis course.
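The expression from [7] is not reproduced in the abstract; as a generic illustration of the catastrophic cancellation under discussion, the following compares a naive and an algebraically equivalent stable evaluation of (1 - cos x)/x^2, together with a bit-level view via float.hex, a stand-in for the kind of inspection a tool like Arithmetic Explorer provides.

```python
import math

x = 1e-8
# Naive form: in double precision cos(x) rounds to exactly 1.0, the numerator
# cancels completely and every significant digit is lost.
naive = (1.0 - math.cos(x)) / x**2
# Algebraically equivalent, cancellation-free rewriting.
stable = 2.0 * math.sin(x / 2.0) ** 2 / x**2
print(naive, stable)                     # 0.0 versus ~0.5

# A bit-level look at the operands, in the spirit of inspecting the
# computation bit by bit (here via float.hex rather than a dedicated tool).
print(float.hex(math.cos(x)), float.hex(1.0 - math.cos(x)))
```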
SIAM Journal on Scientific Computing | 2005
Annie Cuyt; Gene H. Golub; Peyman Milanfar; Brigitte Verdonk
In shape reconstruction, the celebrated Fourier slice theorem plays an essential role. It allows one to reconstruct the shape of a quite general object from the knowledge of its Radon transform [S. Helgason, The Radon Transform, Birkhäuser Boston, Boston, 1980], in other words from the knowledge of projections of the object. In case the object is a polygon [G. H. Golub, P. Milanfar, and J. Varah, SIAM J. Sci. Comput., 21 (1999), pp. 1222-1243], or when it defines a quadrature domain in the complex plane [B. Gustafsson, C. He, P. Milanfar, and M. Putinar, Inverse Problems, 16 (2000), pp. 1053-1070], its shape can also be reconstructed from the knowledge of its moments. Essential tools in the solution of the latter inverse problem are quadrature rules and formal orthogonal polynomials. In this paper we show how shape reconstruction from the knowledge of moments can also be realized in the case of general compact objects, not only in two but also in higher dimensions. To this end we use a less-known homogeneous Padé slice property. Again integral transforms (in our case the multivariate Stieltjes transform and the univariate Markov transform), formal orthogonal polynomials in the form of Padé denominators, and multidimensional integration formulas or cubature rules play an essential role. We emphasize that the new technique is applicable in all higher dimensions and illustrate it through the reconstruction of several two- and three-dimensional objects.
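A minimal sketch of producing the input data such a reconstruction consumes, namely the moments m_ij = ∬_Ω x^i y^j dx dy of a compact object, here estimated by Monte Carlo for a disk; the reconstruction itself (Padé denominators, formal orthogonal polynomials, cubature rules) is the paper's contribution and is not sketched.

```python
import numpy as np

# Monte Carlo estimate of the 2-D moments m_ij = ∬_Ω x^i y^j dx dy for a
# simple compact object Ω: a disk of radius 0.5 centred at (0.3, 0.2),
# sampled inside the bounding box [-1, 1]^2 (all choices are illustrative).
rng = np.random.default_rng(0)
N = 200_000
pts = rng.uniform(-1.0, 1.0, size=(N, 2))
inside = (pts[:, 0] - 0.3) ** 2 + (pts[:, 1] - 0.2) ** 2 <= 0.25
box_area = 4.0

moments = {}
for i in range(3):
    for j in range(3):
        vals = pts[inside, 0] ** i * pts[inside, 1] ** j
        moments[(i, j)] = box_area * vals.sum() / N

print(moments[(0, 0)], np.pi * 0.25)   # m_00 is the area of the disk
```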
SIAM Journal on Scientific Computing | 2006
Annie Cuyt; Brigitte Verdonk; Haakon Waadeland
Special functions are pervasive in all fields of science. The most well-known application areas are in physics, engineering, chemistry, computer science, and statistics. Because of their importance, several books and a large collection of papers have been devoted to the numerical computation of these functions. The technique for providing a floating-point implementation of a function differs substantially when going from a fixed finite precision context to a finite multiprecision context. In the former, the aim is to provide an optimal mathematical model, valid on a reduced argument range and requiring as few operations as possible. Here optimal means that, in relation to the model’s complexity, the truncation error is as small as it can get. The total relative error, including round-off error and possible argument reduction effect, should not exceed a prescribed threshold. In a finite multiprecision context, the goal is to provide a more generic technique, from which an approximant yielding the user-defined accuracy can be deduced at runtime. Hence best approximants are not an option since these models have to be recomputed every time the precision is altered and a function evaluation is requested. At the same time the generic technique should generate an approximant of as low complexity as possible. In the current approach we point out how continued fraction representations of functions can be helpful in the multiprecision context. The newly developed generic technique is based mainly on the use of sharpened a priori truncation error estimates for real continued fraction representations of a real variable, developed in section 3. As illustrated in section 4, the technique is very efficient and even quite competitive when compared to the traditional fixed precision implementations. The implementation is reliable in the sense that it allows one to return a sharp interval enclosure for the requested function evaluation, at the same cost. The paper follows a recipe style. In section 2 we gather the ingredients for the new results. In section 3 we construct or prepare, for a general function
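As a small illustration of evaluating a continued fraction representation at a user-chosen precision, the following uses Python's decimal module and the classical continued fraction for arctan. The stopping rule compares successive convergents, a heuristic stand-in for the sharpened a priori truncation error bounds developed in the paper; the working precision and the test argument are arbitrary choices.

```python
from decimal import Decimal, getcontext
import math

def arctan_cf(z, depth):
    """Depth-term convergent of the classical continued fraction
    arctan(z) = z/(1 + (1z)^2/(3 + (2z)^2/(5 + (3z)^2/(7 + ...)))),
    evaluated tail-to-head in the current decimal context."""
    z2 = z * z
    tail = Decimal(2 * depth + 1)
    for n in range(depth, 0, -1):
        tail = Decimal(2 * n - 1) + Decimal(n * n) * z2 / tail
    return z / tail

getcontext().prec = 60      # working precision in decimal digits (arbitrary choice)
z = Decimal(1) / Decimal(5)

# Heuristic stopping rule: deepen the fraction until two successive convergents
# agree to the target accuracy.  (The paper instead uses sharpened a priori
# truncation error bounds, which are not reproduced here.)
prev, depth = None, 4
while True:
    cur = arctan_cf(z, depth)
    if prev is not None and abs(cur - prev) < Decimal(10) ** -50:
        break
    prev, depth = cur, depth + 4

print(depth, cur)
print(math.atan(0.2))       # rough double-precision cross-check
```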