Featured Research

Symbolic Computation

Fast In-place Algorithms for Polynomial Operations: Division, Evaluation, Interpolation

We consider space-saving versions of several important operations on univariate polynomials, namely power series inversion and division, division with remainder, multi-point evaluation, and interpolation. Now-classical results show that such problems can be solved in (nearly) the same asymptotic time as fast polynomial multiplication. However, these reductions, even when applied to an in-place variant of fast polynomial multiplication, yield algorithms which require at least a linear amount of extra space for intermediate results. We demonstrate new in-place algorithms for the aforementioned polynomial computations which require only constant extra space and achieve the same asymptotic running time as their out-of-place counterparts. We also provide a precise complexity analysis so that all constants are made explicit, parameterized by the space usage of the underlying multiplication algorithms.
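
To make the in-place discipline concrete, here is a minimal Python sketch of classical (quadratic) division with remainder that overwrites the dividend buffer with both quotient and remainder, using only constant extra space. The function name is ours and the scheme is the schoolbook one, shown only to illustrate the space constraint; it is not the paper's fast algorithm.

```python
def inplace_div_rem(a, b):
    """Divide a by b in place; coefficient lists, lowest degree first.

    Classical O(n^2) schoolbook division, shown only to illustrate the
    in-place discipline: on return a[m-1:] holds the quotient and
    a[:m-1] the remainder, where m = len(b), using O(1) extra space.
    NOT the paper's fast algorithm.
    """
    n, m = len(a), len(b)
    assert m >= 1 and b[-1] != 0, "divisor needs a nonzero leading coefficient"
    for i in range(n - m, -1, -1):
        q = a[i + m - 1] / b[-1]      # next quotient coefficient
        a[i + m - 1] = q              # store it where the dividend coeff was
        for j in range(m - 1):        # subtract q * x^i * b from the rest
            a[i + j] -= q * b[j]
    return a

# (x^3 + 2x + 1) / (x + 1) -> quotient x^2 - x + 3, remainder -2
a = [1.0, 2.0, 0.0, 1.0]
print(inplace_div_rem(a, [1.0, 1.0]))   # [-2.0, 3.0, -1.0, 1.0]
```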

Symbolic Computation

Fast Matrix Multiplication and Symbolic Computation

The complexity of matrix multiplication (hereafter MM) has been intensively studied since 1969, when Strassen surprisingly decreased the exponent 3 in the cubic cost of the straightforward classical MM to log_2(7) ≈ 2.8074. Applications to some fundamental problems of Linear Algebra and Computer Science have been immediately recognized, but the researchers in Computer Algebra keep discovering more and more applications even today, with no sign of slowdown. We survey the unfinished history of decreasing the exponent towards its information lower bound 2, recall some important techniques discovered in this process and linked to other fields of computing, reveal sample surprising applications to fast computation of the inner products of two vectors and summation of integers, and discuss the curse of recursion, which separates the progress in fast MM into its most acclaimed and purely theoretical part and into valuable acceleration of MM of feasible sizes. Then, in the second part of our paper, we cover fast MM in realistic symbolic computations and discuss applications and implementation of fast exact matrix multiplication. We first review how most of exact linear algebra can be reduced to matrix multiplication over small finite fields. Then we highlight the differences in the design of approximate and exact implementations of fast MM, taking into account today's processor and memory hierarchies. In the concluding section we comment on current perspectives of the study of fast MM.
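
For reference, the recursion behind Strassen's exponent is short enough to state in code. The sketch below (using numpy, with a power-of-two size assumption and an illustrative cutoff of 64) is the textbook scheme of seven half-size products instead of eight; it also hints at the "curse of recursion", since the recursion only pays off above a sizeable threshold.

```python
import numpy as np

def strassen(A, B, cutoff=64):
    """Strassen's recursion: 7 half-size products instead of 8.

    Textbook sketch for n-by-n matrices with n a power of two; falls
    back to ordinary multiplication below `cutoff`, since the recursion
    only helps for fairly large n.
    """
    n = A.shape[0]
    if n <= cutoff:
        return A @ B
    h = n // 2
    A11, A12, A21, A22 = A[:h, :h], A[:h, h:], A[h:, :h], A[h:, h:]
    B11, B12, B21, B22 = B[:h, :h], B[:h, h:], B[h:, :h], B[h:, h:]
    M1 = strassen(A11 + A22, B11 + B22, cutoff)
    M2 = strassen(A21 + A22, B11, cutoff)
    M3 = strassen(A11, B12 - B22, cutoff)
    M4 = strassen(A22, B21 - B11, cutoff)
    M5 = strassen(A11 + A12, B22, cutoff)
    M6 = strassen(A21 - A11, B11 + B12, cutoff)
    M7 = strassen(A12 - A22, B21 + B22, cutoff)
    C = np.empty_like(A)
    C[:h, :h] = M1 + M4 - M5 + M7
    C[:h, h:] = M3 + M5
    C[h:, :h] = M2 + M4
    C[h:, h:] = M1 - M2 + M3 + M6
    return C

# sanity check on random 128 x 128 matrices
rng = np.random.default_rng(0)
A, B = rng.random((128, 128)), rng.random((128, 128))
print(np.allclose(strassen(A, B), A @ B))   # True
```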

Symbolic Computation

Fast Operations on Linearized Polynomials and their Applications in Coding Theory

This paper considers fast algorithms for operations on linearized polynomials. We propose a new multiplication algorithm for skew polynomials (a generalization of linearized polynomials) which has sub-quadratic complexity in the polynomial degree s, independent of the underlying field extension degree m. We show that our multiplication algorithm is faster than all known ones when s ≤ m. Using a result by Caruso and Le Borgne (2017), this immediately implies a sub-quadratic division algorithm for linearized polynomials for arbitrary polynomial degree s. Also, we propose algorithms with sub-quadratic complexity for the q-transform, multi-point evaluation, computing minimal subspace polynomials, and interpolation, whose implementations were at least quadratic before. Using the new fast algorithm for the q-transform, we show how matrix multiplication over a finite field can be implemented by multiplying linearized polynomials of degrees at most s = m if an elliptic normal basis of extension degree m exists, providing a lower bound on the cost of the latter problem. Finally, it is shown how the new fast operations on linearized polynomials lead to the first error and erasure decoding algorithm for Gabidulin codes with sub-quadratic complexity.
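
As a reminder of the objects involved: in a skew polynomial ring the variable does not commute with the coefficients but twists them by an endomorphism σ (for linearized polynomials, the Frobenius x ↦ x^q). The quadratic schoolbook rule below makes this law concrete; σ is passed as a function, and the demo uses complex conjugation as the twist. This only illustrates the multiplication rule, not the paper's sub-quadratic algorithm.

```python
def skew_mul(a, b, sigma):
    """Schoolbook product of skew polynomials, where X * c = sigma(c) * X.

    a, b: coefficient lists in increasing powers of X; sigma: a ring
    endomorphism of the coefficient ring. Quadratic in the degree.
    """
    c = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            s = bj
            for _ in range(i):          # apply sigma^i to b_j
                s = sigma(s)
            c[i + j] += ai * s          # a_i * sigma^i(b_j) * X^(i+j)
    return c

# demo with sigma = complex conjugation: (i + X)(i + X) = X^2 - 1,
# because X * i = -i * X cancels the cross terms
conj = lambda z: z.conjugate()
print(skew_mul([1j, 1], [1j, 1], conj))   # [(-1+0j), 0j, (1+0j)]
```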

Symbolic Computation

Fast and deterministic computation of the determinant of a polynomial matrix

Given a square, nonsingular matrix of univariate polynomials F ∈ K[x]^{n×n} over a field K, we give a deterministic algorithm for finding the determinant of F. The complexity of the algorithm is Õ(n^ω s) field operations, where s is the average column degree or the average row degree of F. Here Õ denotes Big-O with logarithmic factors omitted, and ω is the exponent of matrix multiplication.
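
As a baseline for comparison, the determinant of a polynomial matrix can always be computed by evaluation and interpolation, since deg det F is bounded by the sum of the column degrees. The exact-arithmetic sketch below (our own helper names, over the rationals) does this the obvious, slow way; the point of the paper is to reach Õ(n^ω s) deterministically without such detours.

```python
from fractions import Fraction

def poly_eval(p, x):                       # Horner; p lowest degree first
    v = Fraction(0)
    for c in reversed(p):
        v = v * x + c
    return v

def poly_mul(p, q):                        # dense product of coeff lists
    r = [Fraction(0)] * (len(p) + len(q) - 1)
    for i, a in enumerate(p):
        for j, b in enumerate(q):
            r[i + j] += a * b
    return r

def scalar_det(M):                         # Gaussian elimination over Q
    M, det = [row[:] for row in M], Fraction(1)
    n = len(M)
    for k in range(n):
        piv = next((r for r in range(k, n) if M[r][k] != 0), None)
        if piv is None:
            return Fraction(0)
        if piv != k:
            M[k], M[piv] = M[piv], M[k]
            det = -det
        det *= M[k][k]
        for r in range(k + 1, n):
            f = M[r][k] / M[k][k]
            for c in range(k, n):
                M[r][c] -= f * M[k][c]
    return det

def det_poly_matrix(F):
    """F[i][j] = coefficient list of a polynomial (lowest degree first).
    deg(det F) <= D = sum over columns of the max degree in each column,
    so D+1 evaluation points suffice to recover det F by interpolation."""
    n = len(F)
    D = sum(max(len(F[i][j]) - 1 for i in range(n)) for j in range(n))
    xs = [Fraction(k) for k in range(D + 1)]
    ys = [scalar_det([[poly_eval(F[i][j], x) for j in range(n)]
                      for i in range(n)]) for x in xs]
    det = [Fraction(0)] * (D + 1)
    for i, (xi, yi) in enumerate(zip(xs, ys)):      # Lagrange interpolation
        num, den = [Fraction(1)], Fraction(1)
        for j, xj in enumerate(xs):
            if j != i:
                num, den = poly_mul(num, [-xj, Fraction(1)]), den * (xi - xj)
        for k, c in enumerate(num):
            det[k] += yi * c / den
    return det

# det [[x, 1], [1, x]] = x^2 - 1, i.e. coefficients [-1, 0, 1]
print(det_poly_matrix([[[0, 1], [1]], [[1], [0, 1]]]))
```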

Symbolic Computation

Fast computation of approximant bases in canonical form

In this article, we design fast algorithms for the computation of approximant bases in shifted Popov normal form. We first recall the algorithm known as PM-Basis, which will be our second fundamental engine after polynomial matrix multiplication: most other fast approximant basis algorithms basically aim at efficiently reducing the input instance to instances for which PM-Basis is fast. Such reductions usually involve partial linearization techniques due to Storjohann, which have the effect of balancing the degrees and dimensions in the manipulated matrices. Following these ideas, Zhou and Labahn gave two algorithms which are faster than PM-Basis for important cases including Hermite-Padé approximation, yet only for shifts whose values are concentrated around the minimum or the maximum value. The three mentioned algorithms were designed for balanced orders and compute approximant bases that are generally not normalized. Here, we show how they can be modified to return the shifted Popov basis without impact on their cost bound; besides, we extend Zhou and Labahn's algorithms to arbitrary orders. Furthermore, we give an algorithm which handles arbitrary shifts with one extra logarithmic factor in the cost bound compared to the above algorithms. To the best of our knowledge, this improves upon previously known algorithms for arbitrary shifts, including for particular cases such as Hermite-Padé approximation. This algorithm is based on a recent divide and conquer approach which reduces the general case to the case where information on the output degree is available. As outlined above, we solve the latter case via partial linearizations and PM-Basis.
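
For readers unfamiliar with the underlying problem: a type-(dp, dq) Padé approximant is the simplest instance of the Hermite-Padé approximation problems that approximant bases encode, asking for polynomials p, q with q·f ≡ p mod x^(dp+dq+1). The sketch below (function names are ours) solves one such instance by brute-force linear algebra over the rationals; approximant-basis algorithms such as PM-Basis instead compute a whole shifted-minimal basis of such relations in softly linear time.

```python
from fractions import Fraction

def nullspace_vector(rows):
    """One nonzero kernel vector of an under-determined system over Q."""
    m, n = len(rows), len(rows[0])
    A = [row[:] for row in rows]
    pivots, r = [], 0
    for c in range(n):
        piv = next((i for i in range(r, m) if A[i][c] != 0), None)
        if piv is None:
            continue
        A[r], A[piv] = A[piv], A[r]
        A[r] = [v / A[r][c] for v in A[r]]
        for i in range(m):
            if i != r and A[i][c] != 0:
                A[i] = [a - A[i][c] * b for a, b in zip(A[i], A[r])]
        pivots.append(c)
        r += 1
    free = next(c for c in range(n) if c not in pivots)
    v = [Fraction(0)] * n
    v[free] = Fraction(1)
    for i, c in enumerate(pivots):
        v[c] = -A[i][free]
    return v

def pade(f, dp, dq):
    """Type-(dp, dq) Pade approximant of the series f: p, q with
    q*f = p mod x^(dp+dq+1). Brute-force O(d^3) linear algebra,
    purely to make the approximation problem concrete."""
    sigma = dp + dq + 1
    f = [Fraction(c) for c in f] + [Fraction(0)] * sigma
    rows = [[f[k - j] if k >= j else Fraction(0) for j in range(dq + 1)]
            for k in range(dp + 1, sigma)]
    q = nullspace_vector(rows)
    p = [sum(q[j] * f[k - j] for j in range(min(k, dq) + 1))
         for k in range(dp + 1)]
    return p, q

# exp(x) = 1 + x + x^2/2 + ... -> the (1,1) approximant (1 + x/2)/(1 - x/2),
# returned up to a scalar multiple
print(pade([1, 1, Fraction(1, 2)], 1, 1))
```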

Symbolic Computation

Fast generalized Bruhat decomposition

We present deterministic, recursive, pivot-free algorithms for computing the generalized Bruhat decomposition of a matrix over a field and for computing the inverse matrix. These methods have the same complexity as matrix multiplication and are suitable for parallel computer systems.

Symbolic Computation

Fast integer multiplication using generalized Fermat primes

For almost 35 years, Schönhage-Strassen's algorithm has been the fastest algorithm known for multiplying integers, with a time complexity O(n × log n × log log n) for multiplying n-bit inputs. In 2007, Fürer proved that there exists K > 1 and an algorithm performing this operation in O(n × log n × K^(log* n)). Recent work by Harvey, van der Hoeven, and Lecerf showed that this complexity estimate can be improved in order to get K = 8, and conjecturally K = 4. Using an alternative algorithm, which relies on arithmetic modulo generalized Fermat primes, we obtain conjecturally the same result K = 4 via a careful complexity analysis in the deterministic multitape Turing model.
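
A toy model of the general approach: FFT-based integer multiplication splits the integers into small digits, convolves the digit sequences with a number-theoretic transform modulo a suitable prime, and then propagates carries. The sketch below uses the Fermat prime 2^16 + 1 (the r = 2 case of r^(2^k) + 1) and base-16 digits, illustrative choices sized so that no convolution sum overflows the modulus; the paper's contribution is the choice and analysis of much larger generalized Fermat primes, not this scheme itself.

```python
P = 2**16 + 1          # 65537, a Fermat prime; 3 is a primitive root mod P

def ntt(a, invert=False):
    """Iterative radix-2 number-theoretic transform mod P.
    len(a) must be a power of two dividing P - 1."""
    n = len(a)
    j = 0
    for i in range(1, n):              # bit-reversal permutation
        bit = n >> 1
        while j & bit:
            j ^= bit
            bit >>= 1
        j |= bit
        if i < j:
            a[i], a[j] = a[j], a[i]
    length = 2
    while length <= n:
        w_len = pow(3, (P - 1) // length, P)
        if invert:
            w_len = pow(w_len, P - 2, P)
        for start in range(0, n, length):
            w = 1
            for k in range(start, start + length // 2):
                u, v = a[k], a[k + length // 2] * w % P
                a[k], a[k + length // 2] = (u + v) % P, (u - v) % P
                w = w * w_len % P
        length <<= 1
    if invert:
        n_inv = pow(n, P - 2, P)
        for i in range(n):
            a[i] = a[i] * n_inv % P
    return a

def mul(x, y):
    """Multiply nonnegative ints via an NTT over GF(P), base-16 digits.
    Toy model only: sizes are capped so convolution sums stay below P."""
    xa = [int(d, 16) for d in reversed(hex(x)[2:])]
    ya = [int(d, 16) for d in reversed(hex(y)[2:])]
    n = 1
    while n < len(xa) + len(ya):
        n <<= 1
    assert n * 15 * 15 < P, "inputs too large for this toy modulus"
    fx = ntt(xa + [0] * (n - len(xa)))
    fy = ntt(ya + [0] * (n - len(ya)))
    c = ntt([u * v % P for u, v in zip(fx, fy)], invert=True)
    out = 0
    for i in reversed(range(n)):       # positional sum handles the carries
        out = out * 16 + c[i]
    return out

print(mul(123456789, 987654321) == 123456789 * 987654321)   # True
```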

Symbolic Computation

Fast multiplication for skew polynomials

We describe an algorithm for fast multiplication of skew polynomials. It is based on fast modular multiplication of such skew polynomials, for which we give an algorithm relying on evaluation and interpolation on normal bases. Our algorithms improve the best known complexity for these problems, and reach the optimal asymptotic complexity bound for large degree. We also give an adaptation of our algorithm for polynomials of small degree. Finally, we use our methods to improve on the best known complexities for various arithmetic problems.

Symbolic Computation

Fast real and complex root-finding methods for well-conditioned polynomials

Given a polynomial p of degree d and a bound κ on a condition number of p, we present the first root-finding algorithms that return all its real and complex roots with a number of bit operations quasi-linear in d log²(κ). More precisely, several condition numbers can be defined depending on the norm chosen on the coefficients of the polynomial. Let p(x) = Σ_{k=0}^{d} a_k x^k = Σ_{k=0}^{d} √(C(d,k)) b_k x^k, where C(d,k) denotes the binomial coefficient. We call the condition number associated with a perturbation of the a_k the hyperbolic condition number κ_h, and the one associated with a perturbation of the b_k the elliptic condition number κ_e. For each of these condition numbers, we present algorithms that find the real and the complex roots of p in O(d log²(dκ) polylog(log(dκ))) bit operations. Our algorithms are well suited for random polynomials, since κ_h (resp. κ_e) is bounded by a polynomial in d with high probability if the a_k (resp. the b_k) are independent, centered Gaussian variables of variance 1.

Symbolic Computation

Fast transforms over finite fields of characteristic two

An additive fast Fourier transform over a finite field of characteristic two efficiently evaluates polynomials at every element of an F_2-linear subspace of the field. We view these transforms as performing a change of basis from the monomial basis to the associated Lagrange basis, and consider the problem of performing the various conversions between these two bases, the associated Newton basis, and the 'novel' basis of Lin, Chung and Han (FOCS 2014). Existing algorithms are divided between two families: those designed for arbitrary subspaces, and more efficient algorithms designed for specially constructed subspaces of fields with degree equal to a power of two. We generalise techniques from both families to provide new conversion algorithms that may be applied to arbitrary subspaces, but which benefit equally from the specially constructed subspaces. We then construct subspaces of fields with smooth degree for which our algorithms provide better performance than existing algorithms.
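
To fix ideas, the change-of-basis view is: a polynomial given by its monomial coefficients is re-expressed by its values on the 2^k points of an F_2-linear subspace, which is exactly the Lagrange basis for that point set. The sketch below performs this conversion naively over GF(2^4), with x^4 + x + 1 as an illustrative field modulus; the additive transforms in the paper perform the same conversion in quasi-linear rather than quadratic time.

```python
MOD = 0b10011            # x^4 + x + 1, irreducible over F_2: the field GF(16)

def gf_mul(a, b):
    """Carry-less multiplication reduced mod the field polynomial."""
    r = 0
    while b:
        if b & 1:
            r ^= a
        a <<= 1
        if a & 0b10000:
            a ^= MOD
        b >>= 1
    return r

def gf_eval(p, x):
    """Horner evaluation; p = list of GF(16) coefficients, low degree first."""
    v = 0
    for c in reversed(p):
        v = gf_mul(v, x) ^ c
    return v

def subspace(basis):
    """All 2^k points of the F_2-linear span of `basis` (transform domain)."""
    pts = [0]
    for b in basis:
        pts += [p ^ b for p in pts]
    return pts

# monomial basis -> Lagrange basis = values on the subspace; this direct
# loop costs O(2^k * deg p) where the additive FFT is quasi-linear
pts = subspace([0b0001, 0b0010, 0b0100])     # span{1, x, x^2}, 8 points
p = [0b0011, 0b0001, 0b0101]                 # p(y) = (x+1) + y + (x^2+1) y^2
print([gf_eval(p, a) for a in pts])
```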

