Publication


Featured research published by Andrew Arnold.


Journal of Symbolic Computation | 2016

Faster sparse multivariate polynomial interpolation of straight-line programs

Andrew Arnold; Mark Giesbrecht; Daniel S. Roche

Given a straight-line program whose output is a polynomial function of the inputs, we present a new algorithm to compute a concise representation of that unknown function. Our algorithm can handle any case where the unknown function is a multivariate polynomial, with coefficients in an arbitrary finite field, and with a reasonable number of nonzero terms but possibly very large degree. It is competitive with previously known sparse interpolation algorithms that work over an arbitrary finite field, and provides an improvement when there are a large number of variables.
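
For readers unfamiliar with the model: a straight-line program presents the polynomial only as a black box built from ring operations, and the interpolation task is to recover its sparse term list from evaluations alone. A minimal Python sketch of this setting (the toy polynomial, names, and prime below are illustrative, not taken from the paper):

    # A toy straight-line program: the polynomial is available only through
    # this sequence of additions and multiplications, not as a term list.
    def f(x, y, z):
        t1 = x * y           # t1 = x*y
        t2 = t1 * t1         # t2 = x^2 * y^2
        t3 = t2 * z          # t3 = x^2 * y^2 * z
        return t3 + x - 7    # f  = x^2*y^2*z + x - 7

    # Sparse interpolation must recover the term list
    #   {(2, 2, 1): 1, (1, 0, 0): 1, (0, 0, 0): -7}
    # from probes of f, e.g. evaluations over a finite field GF(p):
    p = 101
    print(f(3, 5, 7) % p)    # one evaluation (probe) of the black box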


International Symposium on Symbolic and Algebraic Computation | 2014

Multivariate sparse interpolation using randomized Kronecker substitutions

Andrew Arnold; Daniel S. Roche

We present new techniques for reducing a multivariate sparse polynomial to a univariate polynomial. The reduction works similarly to the classical and widely-used Kronecker substitution, except that we choose the degrees randomly based on the number of nonzero terms in the multivariate polynomial. The resulting univariate polynomial often has a significantly lower degree than the Kronecker substitution polynomial, at the expense of a small number of term collisions. As an application, we give a new algorithm for multivariate interpolation which uses these new techniques along with any existing univariate interpolation algorithm.
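
As a rough illustration of the idea (a simplified sketch, not the paper's actual degree-selection rule): the classical Kronecker substitution maps each variable to a power of z determined by per-variable degree bounds, while the randomized variant substitutes small random exponents, trading a few term collisions for a much smaller univariate degree.

    import random

    def kronecker_exponent(exps, degs):
        """Classical Kronecker substitution: x_i -> z^(s_i) with
        s_i = prod_{j<i} (degs[j] + 1), so distinct terms never collide."""
        e, stride = 0, 1
        for ei, di in zip(exps, degs):
            e += ei * stride
            stride *= di + 1
        return e

    # Toy multivariate sparse polynomial: {exponent vector: coefficient}.
    f = {(0, 0, 0): 3, (5, 1, 2): -1, (2, 7, 0): 4}
    degs = (5, 7, 2)

    classical = {kronecker_exponent(e, degs): c for e, c in f.items()}

    # Randomized variant (toy choice of exponents; the paper chooses them
    # based on the number of nonzero terms): x_i -> z^(s_i) for random small s_i.
    subs = [random.randrange(1, 20) for _ in degs]
    randomized = {}
    for e, c in f.items():
        k = sum(ei * si for ei, si in zip(e, subs))
        randomized[k] = randomized.get(k, 0) + c   # colliding terms add up

    print(max(classical), max(randomized))         # compare univariate degrees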


International Symposium on Symbolic and Algebraic Computation | 2015

Output-Sensitive Algorithms for Sumset and Sparse Polynomial Multiplication

Andrew Arnold; Daniel S. Roche

We present randomized algorithms to compute the sumset (Minkowski sum) of two integer sets, and to multiply two univariate integer polynomials given by sparse representations. Our algorithm for sumset has cost softly linear in the combined size of the inputs and output. This is used as part of our sparse multiplication algorithm, whose cost is softly linear in the combined size of the inputs, output, and the sumset of the supports of the inputs. As a subroutine, we present a new method for computing the coefficients of a sparse polynomial, given a set containing its support. Our multiplication algorithm extends to multivariate Laurent polynomials over finite fields and rational numbers. Our techniques are based on sparse interpolation algorithms and results from analytic number theory.
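
A naive reference version of the two objects involved (quadratic time, purely for illustration; the paper's contribution is randomized algorithms whose cost is softly linear in the sizes described above):

    def sumset(A, B):
        """Minkowski sum {a + b : a in A, b in B} of two integer sets."""
        return {a + b for a in A for b in B}

    def sparse_mul(f, g):
        """Product of sparse univariate polynomials given as {exponent: coeff}."""
        h = {}
        for ef, cf in f.items():
            for eg, cg in g.items():
                h[ef + eg] = h.get(ef + eg, 0) + cf * cg
        return {e: c for e, c in h.items() if c != 0}

    f = {0: 1, 1000: 2}
    g = {0: 3, 10**6: -1}
    print(sumset(set(f), set(g)))   # the product's support lies inside this sumset
    print(sparse_mul(f, g))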


Mathematics of Computation | 2011

Calculating cyclotomic polynomials

Andrew Arnold; Michael B. Monagan

We present three algorithms to calculate Φn(z), the nth cyclotomic polynomial. The first algorithm calculates Φn(z) by a series of polynomial divisions, which we perform using the fast Fourier transform. The second algorithm calculates Φn(z) as a quotient of products of sparse power series. These two algorithms, described in detail in the paper, were used to calculate cyclotomic polynomials of large height and length. In particular, we have found the least n for which the height of Φn(z) is greater than n, n^2, n^3, and n^4, respectively. The third algorithm, the big prime algorithm, generates the terms of Φn(z) sequentially, in a manner which reduces the memory cost. We use the big prime algorithm to find the minimal known height of cyclotomic polynomials of order five. We include these results as well as other examples of cyclotomic polynomials of unusually large height, and bounds on the coefficient of the term of degree k for all cyclotomic polynomials.
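
The first algorithm rests on the identity Φn(z) = (z^n − 1) / ∏_{d|n, d<n} Φd(z). A small Python sketch of that approach with naive (rather than FFT-based) division, workable only for modest n:

    from functools import lru_cache

    def poly_div(num, den):
        """Exact division of integer polynomials; coefficient lists, low degree
        first; den is assumed monic (cyclotomic polynomials are monic)."""
        num = list(num)
        q = [0] * (len(num) - len(den) + 1)
        for i in range(len(q) - 1, -1, -1):
            q[i] = num[i + len(den) - 1]
            for j, c in enumerate(den):
                num[i + j] -= q[i] * c
        return q

    @lru_cache(maxsize=None)
    def cyclotomic(n):
        """Coefficients of Phi_n(z) via Phi_n = (z^n - 1) / prod_{d|n, d<n} Phi_d."""
        num = [-1] + [0] * (n - 1) + [1]          # z^n - 1
        for d in range(1, n):
            if n % d == 0:
                num = poly_div(num, cyclotomic(d))
        return tuple(num)

    print(cyclotomic(12))   # (1, 0, -1, 0, 1): Phi_12(z) = z^4 - z^2 + 1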


International Symposium on Symbolic and Algebraic Computation | 2015

Error-Correcting Sparse Interpolation in the Chebyshev Basis

Andrew Arnold; Erich Kaltofen

We present an error-correcting interpolation algorithm for a univariate black-box polynomial that has a sparse representation using Chebyshev polynomials as a term basis. Our algorithm assumes that an upper bound on the number of erroneous evaluations is given as input, and is a generalization of the algorithm by Lakshman and Saunders [SIAM J. Comput., vol. 24 (1995)] for interpolating sparse Chebyshev polynomials and the techniques in error-correcting sparse interpolation in the usual basis of consecutive powers of the variable due to Comer, Kaltofen, and Pernet [Proc. ISSAC 2012 and 2014]. We prove the correctness of our list-decoder-based algorithm with a Descartes-rule-of-signs-like property for sparse polynomials in Chebyshev basis. We also give a new algorithm that reduces sparse interpolation in the Chebyshev basis to that in the power basis, thus making the many techniques for sparse interpolation in the power basis, for instance, supersparse (lacunary) interpolation over large finite fields, available to interpolation in the Chebyshev basis. Furthermore, we can customize the randomized early termination algorithms from Kaltofen and Lee [J. Symb. Comput., vol. 36 (2003)] to our new approach.
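
For context, a sparse polynomial in the Chebyshev basis is a short list of pairs (k, c) standing for c·T_k(x). A minimal evaluation sketch (illustrative only; the paper's algorithm goes the other way, recovering the pairs from possibly erroneous black-box evaluations):

    def chebyshev_eval(sparse, x):
        """Evaluate f(x) = sum over (k, c) of c * T_k(x), where sparse is a
        dict {k: c} and T_k follows T_0 = 1, T_1 = x, T_{k+1} = 2x*T_k - T_{k-1}."""
        total = sparse.get(0, 0) + sparse.get(1, 0) * x
        t_prev, t_cur = 1, x
        for k in range(2, max(sparse) + 1):
            t_prev, t_cur = t_cur, 2 * x * t_cur - t_prev
            total += sparse.get(k, 0) * t_cur
        return total

    f = {5: 3, 2: -2, 0: 1}            # f = 3*T_5(x) - 2*T_2(x) + 1
    print(chebyshev_eval(f, 0.3))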


Parallel Symbolic Computation | 2010

A high-performance algorithm for calculating cyclotomic polynomials

Andrew Arnold; Michael B. Monagan

The nth cyclotomic polynomial, Φn(z), is the monic polynomial whose φ(n) distinct roots are the nth primitive roots of unity. Φn(z) can be computed efficiently as a quotient of terms of the form (1 − z^d) by way of a method the authors call the Sparse Power Series algorithm. We improve on this algorithm in three steps, ultimately deriving a fast, recursive algorithm to calculate Φn(z). The new algorithm, which we have implemented in C, allows us to compute Φn(z) for n > 10^9 in less than one minute.
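
The starting point, before the paper's three improvements, is the identity Φn(z) = ∏_{d|n} (1 − z^d)^{μ(n/d)}, computed as a power series truncated at degree φ(n). A short pure-Python sketch of that baseline Sparse Power Series idea (valid for n > 1; the paper's C implementation is far more sophisticated):

    def factor(n):
        """Prime factorization {prime: multiplicity} by trial division."""
        f, p = {}, 2
        while p * p <= n:
            while n % p == 0:
                f[p] = f.get(p, 0) + 1
                n //= p
            p += 1
        if n > 1:
            f[n] = f.get(n, 0) + 1
        return f

    def cyclotomic_sps(n):
        """Coefficients of Phi_n(z), low degree first, for n > 1, using
        Phi_n(z) = prod_{d | n} (1 - z^d)^{mu(n/d)} truncated at degree phi(n)."""
        deg = 1
        for p, e in factor(n).items():
            deg *= (p - 1) * p ** (e - 1)            # phi(n)
        c = [0] * (deg + 1)
        c[0] = 1
        for d in range(1, n + 1):
            if n % d:
                continue
            mf = factor(n // d)
            if any(e > 1 for e in mf.values()):
                continue                             # mu(n/d) = 0
            if len(mf) % 2 == 0:                     # mu(n/d) = 1: multiply by (1 - z^d)
                for i in range(deg, d - 1, -1):
                    c[i] -= c[i - d]
            else:                                    # mu(n/d) = -1: divide by (1 - z^d)
                for i in range(d, deg + 1):
                    c[i] += c[i - d]
        return c

    print(max(abs(x) for x in cyclotomic_sps(105)))  # 2: the first height above 1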


ACM Communications in Computer Algebra | 2015

A Truncated Fourier Transform middle product

Andrew Arnold; Éric Schost

The middle product computes the middle n terms of a (2n − 1) × n polynomial product, with effectively the same cost as computing an n × n polynomial product. The middle product allows for faster power series operations and Newton iteration. Middle product variants of classical, Karatsuba, and FFT-based multiplication algorithms are known. We present a middle product algorithm based on the Truncated Fourier Transform.
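
For reference, a naive middle product looks like the sketch below (quadratic time; the point of the paper is to achieve it at FFT cost via the Truncated Fourier Transform, which is not reproduced here):

    def middle_product(a, b):
        """Middle n coefficients (degrees n-1 .. 2n-2) of a*b, where a has
        2n-1 coefficients and b has n, both stored low degree first."""
        n = len(b)
        assert len(a) == 2 * n - 1
        full = [0] * (len(a) + len(b) - 1)
        for i, ai in enumerate(a):
            for j, bj in enumerate(b):
                full[i + j] += ai * bj
        return full[n - 1:2 * n - 1]

    a = [1, 2, 3, 4, 5]      # 2n - 1 = 5 coefficients
    b = [1, 0, 1]            # n = 3 coefficients
    print(middle_product(a, b))   # [4, 6, 8]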


ACM Communications in Computer Algebra | 2014

Recursive sparse interpolation

Andrew Arnold; Mark Giesbrecht; Daniel S. Roche

We consider the problem of interpolating a sparse univariate polynomial f over an arbitrary ring, given by a straight-line program. In this problem we are given a straight-line program that computes f, as well as bounds D and T on the degree and sparsity (i.e., the number of nonzero terms) of f, respectively. We build on ideas developed in Garg and Schost (2009) and Giesbrecht and Roche (2011) towards algorithms for this specific problem. We present a Monte Carlo algorithm that improves on the best previously-known algorithm for this specific problem by a factor (softly) on the order of T/log D. Thus this new algorithm is favourable for "moderate" values of T.

Our algorithm is recursive. At a recursive step of the algorithm we have a straight-line program for f, an approximation f* of f, and respective bounds T and D on the sparsity and degree of the difference g = f − f*. We initialize f* to zero. We will construct an approximation f** to g such that, with high probability, g − f** has at most T/2 terms. We then recurse with f* + f** as our refined approximation for f.

The algorithms in Garg and Schost (2009) and Giesbrecht and Roche (2011), as well as the algorithm we will present, interpolate f by using its straight-line program to evaluate f at a symbolic k-th root of unity, for appropriate choices of k. This effectively gives the image f mod (z^k − 1). We call such an evaluation a probe of degree k. The cost of a degree-k probe to a length-L straight-line program is quasi-linear in kL. We use the number of probes, multiplied by a bound on the probe degree, as a rough measure of the cost of an interpolation algorithm.

The image f mod (z^k − 1) in practice gives a large amount of useful information about the polynomial f. Namely, a term c·z^e of f will appear as c·z^(e mod k) in the image f mod (z^k − 1), so the image should give us f's vector of exponents modulo k. However, there are potential obstacles. We may not be able to match images of the same term in multiple images of f. In addition, terms can collide modulo z^k − 1 if they have the same degree modulo k. Collisions are problematic because it is difficult to detect whether a term in an image f mod (z^k − 1) is in fact the image of a sum of colliding terms. Alternatively, colliding terms may sum to zero modulo z^k − 1, which also may be difficult to detect.

Previous Las Vegas interpolation algorithms require a "good" prime: a prime p for which the terms of f remain distinct modulo z^p − 1. If p is a good prime, f mod (z^p − 1) has the same number of terms as f. Thus, once we have a good prime with high probability, we can detect the presence of collisions in other images of f. In order to guarantee one can find such a prime with high probability, one chooses primes at random on the order of T^2 log D as probe degrees. In order to reduce this probe degree, we relax the condition that p separates all the terms of the difference g. We instead look for an "ok" prime: a prime which separates most of the terms of g. This allows us instead to search over primes p of size O(T log D). Once we have an "ok" prime, we make probes of degree p·q_i for a set of coprime q_i, each of size O(log D). Our probe degree thus becomes O(T log D). If a term of g does not collide with another term modulo z^p − 1 then it will not collide modulo z^(p·q_i) − 1. These probes will allow us to construct a polynomial f** containing the non-colliding terms of g, plus potentially a small proportion of deceptive terms: terms …
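
A tiny sketch of the probe described above, to make the collision issue concrete (the dictionary representation and example are ours, not from the paper): a sparse polynomial stored as {exponent: coefficient}, reduced modulo z^k − 1 by folding exponents mod k.

    def probe(f, k):
        """Image of a sparse polynomial f = {exponent: coeff} modulo z^k - 1:
        exponents are reduced mod k and colliding terms are summed."""
        img = {}
        for e, c in f.items():
            img[e % k] = img.get(e % k, 0) + c
        return {e: c for e, c in img.items() if c != 0}

    f = {0: 5, 7: 2, 107: -2, 250: 1}      # note 7 and 107 collide mod 100
    print(probe(f, 100))                   # {0: 5, 50: 1}: the colliding pair cancels
    print(probe(f, 101))                   # all four terms stay distinct mod 101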


ACM Communications in Computer Algebra | 2011

A fast recursive algorithm for computing cyclotomic polynomials

Andrew Arnold; Michael B. Monagan

The nth cyclotomic polynomial, Φn(z), is the minimal polynomial over Q of the nth complex primitive roots of unity. We let the order of Φn(z) be the number of distinct odd prime divisors of n and denote by A(n) the height of Φn(z), that is, the magnitude of the largest coefficient of Φn(z). In [5], Erdős proved that, for any c > 0, A(n) is not bounded by n^c. Maier showed in [3] that the set of n for which A(n) > n^c has positive lower density. We were originally motivated by the question: what is the least n for which A(n) > n? n^2? n^3? and so forth. Towards that aim, we wanted to develop and implement faster algorithms for computing Φn(z). One approach to compute Φn(z) is to compute Φn(z) using …
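
As a small concreteness check of the quantities defined above (a sketch that assumes SymPy is available; the paper does not rely on it): the height A(n) first exceeds 1 at n = 105.

    from sympy import Poly, cyclotomic_poly, symbols

    z = symbols('z')

    def height(n):
        """A(n): magnitude of the largest coefficient of Phi_n(z)."""
        return max(abs(c) for c in Poly(cyclotomic_poly(n, z), z).all_coeffs())

    print(next(n for n in range(1, 200) if height(n) > 1))   # -> 105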


ACM Communications in Computer Algebra | 2008

Calculating really big cyclotomic polynomials (abstract only)

Andrew Arnold; Michael B. Monagan

This abstract was presented at the ANTS-8 poster session, held at the Banff Centre in Banff, Alberta, Canada, May 17–22, 2008; many of the posters can be viewed online at http://ants.math.ucalgary.ca/. Andrew Arnold and Michael Monagan, Simon Fraser University, ada26@sfu.ca. The nth cyclotomic polynomial, Φn(z), is the monic polynomial whose φ(n) distinct roots are the nth complex primitive roots of unity. That is,

Φn(z) = ∏_{0 ≤ k < n, gcd(k, n) = 1} (z − e^(2πik/n)).

The first ten cyclotomic polynomials are as follows:

Φ1(z) = z − 1
Φ2(z) = z + 1
Φ3(z) = z^2 + z + 1
Φ4(z) = z^2 + 1
Φ5(z) = z^4 + z^3 + z^2 + z + 1
Φ6(z) = z^2 − z + 1
Φ7(z) = z^6 + z^5 + z^4 + z^3 + z^2 + z + 1
Φ8(z) = z^4 + 1
Φ9(z) = z^6 + z^3 + 1
Φ10(z) = z^4 − z^3 + z^2 − z + 1
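
The listed polynomials can be checked directly against the root-product definition above; a small numerical sketch in pure Python (rounding the complex product back to integer coefficients):

    import cmath
    from math import gcd

    def cyclotomic_from_roots(n):
        """Expand prod over 0 <= k < n with gcd(k, n) = 1 of (z - e^(2*pi*i*k/n)),
        returning integer coefficients from low degree to high."""
        poly = [1 + 0j]                          # the constant polynomial 1
        for k in range(n):
            if gcd(k, n) == 1:
                r = cmath.exp(2j * cmath.pi * k / n)
                new = [0j] + poly                # z * poly
                for i, c in enumerate(poly):
                    new[i] -= r * c              # ... minus r * poly
                poly = new
        return [round(c.real) for c in poly]

    print(cyclotomic_from_roots(10))   # [1, -1, 1, -1, 1]: z^4 - z^3 + z^2 - z + 1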

Collaboration


Dive into Andrew Arnold's collaborations.

Top Co-Authors


Daniel S. Roche

United States Naval Academy


Erich Kaltofen

North Carolina State University
