Featured Researches

Symbolic Computation

Automatic Differentiation for Tensor Algebras

Kjolstad et al. proposed a tensor algebra compiler. It takes expressions that define a tensor element-wise, such as $f_{ij}(a,b,c,d) = \exp\big[-\sum_{k=0}^{4} \big((a_{ik} + b_{jk})^2\, c_{ii} + d_{i+k}^3\big)\big]$, and generates the corresponding compute kernel code. For machine learning, especially deep learning, it is often necessary to compute the gradient of a loss function $l(a,b,c,d) = l(f(a,b,c,d))$ with respect to the parameters $a, b, c, d$. If tensor compilers are to be applied in this field, it is necessary to derive expressions for the derivatives of element-wise defined tensors, i.e. expressions for $(da)_{ik} = \partial l / \partial a_{ik}$. When the mapping between function indices and argument indices is not 1:1, special attention is required. For the function $f_{ij}(x) = x_i^2$, the derivative of the loss is $(dx)_i = \partial l / \partial x_i = \sum_j (df)_{ij}\, 2 x_i$; the sum is necessary because index $j$ does not appear in the indices of $f$. Another example is $f_i(x) = x_{ii}^2$, where $x$ is a matrix; here we have $(dx)_{ij} = \delta_{ij}\, (df)_i\, 2 x_{ii}$; the Kronecker delta is necessary because the derivative is zero for off-diagonal elements. Yet another indexing scheme is used by $f_{ij}(x) = \exp x_{i+j}$; here the correct derivative is $(dx)_k = \sum_i (df)_{i,k-i} \exp x_k$, where the range of the sum must be chosen appropriately. In this publication we present an algorithm that can handle any case in which the indices of an argument are an arbitrary linear combination of the indices of the function; thus all the above examples can be handled. Sums (and their ranges) and Kronecker deltas are automatically inserted into the derivatives as necessary. Additionally, the indices are transformed if required (as in the last example). The algorithm outputs a symbolic expression that can subsequently be fed into a tensor algebra compiler. Source code is provided.
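The first derivative rule quoted above, $(dx)_i = \sum_j (df)_{ij}\, 2 x_i$ for $f_{ij}(x) = x_i^2$, can be checked numerically. A minimal sketch, assuming NumPy; the scalar loss $l(f) = \sum_{ij} \sin f_{ij}$ is an arbitrary illustrative choice, not from the paper:

```python
import numpy as np

n, m = 4, 3                      # i ranges over n, j over m (sizes arbitrary)
rng = np.random.default_rng(0)
x = rng.normal(size=n)

def f(x):
    # f_ij(x) = x_i**2, broadcast over the unused index j
    return np.tile((x ** 2)[:, None], (1, m))

def loss(x):
    # arbitrary scalar loss l(f) = sum_ij sin(f_ij)
    return np.sin(f(x)).sum()

# the abstract's rule: (dx)_i = sum_j (df)_ij * 2 x_i, with (df)_ij = cos(f_ij)
df = np.cos(f(x))
dx_symbolic = (df * 2 * x[:, None]).sum(axis=1)

# central finite differences for comparison
eps = 1e-6
dx_numeric = np.array([
    (loss(x + eps * np.eye(n)[i]) - loss(x - eps * np.eye(n)[i])) / (2 * eps)
    for i in range(n)
])
```

The sum over $j$ appears in `sum(axis=1)`, exactly because $j$ indexes $f$ but not $x$.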

Read more
Symbolic Computation

Automatic Differentiation: a look through Tensor and Operational Calculus

In this paper we take a look at automatic differentiation through the eyes of tensor and operational calculus. This work is best consumed as supplementary material for learning tensor and operational calculus by those already familiar with automatic differentiation. To that purpose, we provide a simple implementation of automatic differentiation, where the steps taken are explained in the language of tensor and operational calculus.
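The paper's own implementation is not reproduced here; as a stand-in, the following is a minimal forward-mode automatic differentiation sketch using dual numbers, a standard textbook construction:

```python
import math

class Dual:
    """Dual number a + b*eps with eps**2 = 0; the dot part carries the derivative."""
    def __init__(self, val, dot=0.0):
        self.val, self.dot = val, dot
    def __add__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val + other.val, self.dot + other.dot)
    __radd__ = __add__
    def __mul__(self, other):
        other = other if isinstance(other, Dual) else Dual(other)
        return Dual(self.val * other.val,
                    self.val * other.dot + self.dot * other.val)
    __rmul__ = __mul__

def sin(x):
    # chain rule: (sin u)' = cos(u) * u'
    return Dual(math.sin(x.val), math.cos(x.val) * x.dot)

def derivative(f, x0):
    # seed the dot part with 1 and read off the derivative
    return f(Dual(x0, 1.0)).dot

g = derivative(lambda x: x * x * x + 2 * x, 2.0)   # d/dx (x^3 + 2x) at 2 is 14
h = derivative(sin, 0.0)                           # d/dx sin(x) at 0 is 1
```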

Read more
Symbolic Computation

Automatic Generation of Bounds for Polynomial Systems with Application to the Lorenz System

This study covers an analytical approach to calculating positively invariant sets of dynamical systems. Using Lyapunov techniques and quantifier elimination methods, an automatic procedure for determining bounds in the state space as an enclosure of attractors is proposed. The available software tools permit an algorithmic process for a task that normally requires good insight into the system's dynamics and experience. As a result we get an estimation of the attractor, whose conservatism results only from the initial choice of the Lyapunov candidate function. The proposed approach is illustrated on the well-known Lorenz system.
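The idea of an attractor enclosure can be probed numerically. The sketch below is not the paper's quantifier-elimination procedure: it merely integrates the Lorenz system (standard parameters) and checks that a trajectory stays inside a sublevel set of an assumed quadratic candidate $V = x^2 + y^2 + (z - \sigma - \rho)^2$; both the candidate and the bound are illustrative assumptions:

```python
import numpy as np

def lorenz(s, sigma=10.0, rho=28.0, beta=8.0 / 3.0):
    x, y, z = s
    return np.array([sigma * (y - x), x * (rho - z) - y, x * y - beta * z])

def rk4_step(f, s, dt):
    k1 = f(s)
    k2 = f(s + dt / 2 * k1)
    k3 = f(s + dt / 2 * k2)
    k4 = f(s + dt * k3)
    return s + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

s = np.array([1.0, 1.0, 1.0])
vmax = 0.0
for _ in range(20000):
    s = rk4_step(lorenz, s, 0.005)
    x, y, z = s
    V = x * x + y * y + (z - 38.0) ** 2   # candidate centered at z = sigma + rho
    vmax = max(vmax, V)
# vmax staying below a fixed level is consistent with a bounded enclosure
```

A simulation of one trajectory is of course no proof; the paper's point is that quantifier elimination yields a certified bound.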

Read more
Symbolic Computation

Automatic Generation of Moment-Based Invariants for Prob-Solvable Loops

One of the main challenges in the analysis of probabilistic programs is to compute invariant properties that summarise loop behaviours. Automation of invariant generation is still in its infancy and most of the time targets only expected values of the program variables, which is insufficient to recover the full probabilistic program behaviour. We present a method to automatically generate moment-based invariants of a subclass of probabilistic programs, called Prob-Solvable loops, with polynomial assignments over random variables and parametrised distributions. We combine methods from symbolic summation and statistics to derive invariants as valid properties over higher-order moments, such as expected values or variances, of program variables. We successfully evaluated our work on several examples where full automation for computing higher-order moments and invariants over program variables was not yet possible.
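For intuition, a moment-based invariant can be checked by simulation. The loop below is a hypothetical example, not one from the paper: each iteration does x ← x + Bernoulli(p), so after n iterations the first- and second-moment invariants are E[x] = n·p and Var[x] = n·p·(1−p):

```python
import random

random.seed(42)
p, n_iters, n_runs = 0.5, 20, 100_000

finals = []
for _ in range(n_runs):
    x = 0
    for _ in range(n_iters):               # the probabilistic loop body
        x += 1 if random.random() < p else 0
    finals.append(x)

mean = sum(finals) / n_runs
var = sum((v - mean) ** 2 for v in finals) / n_runs

expected_mean = n_iters * p                # first-moment invariant: 10.0
expected_var = n_iters * p * (1 - p)       # second-moment invariant: 5.0
```

The paper derives such identities symbolically, with no sampling; the simulation only illustrates what the invariants assert.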

Read more
Symbolic Computation

Automatic Library Generation for Modular Polynomial Multiplication

Polynomial multiplication is a key algorithm underlying computer algebra systems (CAS) and its efficient implementation is crucial for the performance of CAS. In this paper we design and implement algorithms for polynomial multiplication using approaches based on the fast Fourier transform (FFT) and the truncated Fourier transform (TFT). We improve on the state-of-the-art in both theoretical and practical performance. The SPIRAL library generation system is extended and used to automatically generate and tune the performance of a polynomial multiplication library that is optimized for memory hierarchy, vectorization and multi-threading, using new and existing algorithms. The performance tuning has been aided by the use of automation, where many code choices are generated and intelligent search is utilized to find the "best" implementation on a given architecture. The performance of autotuned implementations is comparable to, and in some cases better than, the best hand-tuned code.
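The FFT-based approach can be sketched in a few lines (plain NumPy, not SPIRAL-generated code, and omitting the TFT): multiplying polynomials is convolution of coefficient vectors, which the FFT turns into pointwise multiplication:

```python
import numpy as np

def poly_mul_fft(a, b):
    """Multiply two integer coefficient lists (lowest degree first) via the FFT."""
    n = len(a) + len(b) - 1          # degree of the product plus one
    fa = np.fft.rfft(a, n)
    fb = np.fft.rfft(b, n)
    # pointwise product in the frequency domain, then round back to integers
    return np.rint(np.fft.irfft(fa * fb, n)).astype(int)

def poly_mul_naive(a, b):
    """Schoolbook O(n^2) multiplication, for comparison."""
    out = [0] * (len(a) + len(b) - 1)
    for i, ai in enumerate(a):
        for j, bj in enumerate(b):
            out[i + j] += ai * bj
    return out

# (1 + 2x + 3x^2)(4 + 5x) = 4 + 13x + 22x^2 + 15x^3
c_fft = poly_mul_fft([1, 2, 3], [4, 5])
c_naive = poly_mul_naive([1, 2, 3], [4, 5])
```

Floating-point rounding limits this sketch to modest coefficient sizes; the paper works with modular arithmetic, where exact number-theoretic transforms apply.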

Read more
Symbolic Computation

Automatic differentiation in machine learning: a survey

Derivatives, mostly in the form of gradients and Hessians, are ubiquitous in machine learning. Automatic differentiation (AD), also called algorithmic differentiation or simply "autodiff", is a family of techniques similar to but more general than backpropagation for efficiently and accurately evaluating derivatives of numeric functions expressed as computer programs. AD is a small but established field with applications in areas including computational fluid dynamics, atmospheric sciences, and engineering design optimization. Until very recently, the fields of machine learning and AD have largely been unaware of each other and, in some cases, have independently discovered each other's results. Despite its relevance, general-purpose AD has been missing from the machine learning toolbox, a situation slowly changing with its ongoing adoption under the names "dynamic computational graphs" and "differentiable programming". We survey the intersection of AD and machine learning, cover applications where AD has direct relevance, and address the main implementation techniques. By precisely defining the main differentiation techniques and their interrelationships, we aim to bring clarity to the usage of the terms "autodiff", "automatic differentiation", and "symbolic differentiation" as these are encountered more and more in machine learning settings.
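The survey's central object, reverse-mode AD (the general form of backpropagation), fits in a short sketch. This is a minimal tape/graph implementation for illustration, not the API of any particular framework:

```python
class Var:
    """Minimal reverse-mode AD node: a value plus (parent, local-gradient) edges."""
    def __init__(self, value, parents=()):
        self.value, self.parents, self.grad = value, parents, 0.0

    def __add__(self, other):
        return Var(self.value + other.value, ((self, 1.0), (other, 1.0)))

    def __mul__(self, other):
        return Var(self.value * other.value,
                   ((self, other.value), (other, self.value)))

    def backward(self):
        # topological order so each node's grad is complete before it is used
        order, seen = [], set()
        def visit(node):
            if id(node) not in seen:
                seen.add(id(node))
                for parent, _ in node.parents:
                    visit(parent)
                order.append(node)
        visit(self)
        self.grad = 1.0
        for node in reversed(order):
            for parent, local in node.parents:
                parent.grad += local * node.grad   # chain rule, accumulated

x, y = Var(2.0), Var(3.0)
z = x * y + x          # dz/dx = y + 1 = 4, dz/dy = x = 2
z.backward()
```

One forward evaluation plus one backward sweep yields all partial derivatives, which is why reverse mode dominates in machine learning, where functions map many parameters to one scalar loss.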

Read more
Symbolic Computation

Baby-Step Giant-Step Algorithms for the Symmetric Group

We study discrete logarithms in the setting of group actions. Suppose that G is a group that acts on a set S. When r, s ∈ S, a solution g ∈ G to r^g = s can be thought of as a kind of logarithm. In this paper, we study the case where G = S_n, and develop analogs of the Shanks baby-step / giant-step procedure for ordinary discrete logarithms. Specifically, we compute two sets A, B ⊆ S_n such that every permutation of S_n can be written as a product ab of elements a ∈ A and b ∈ B. Our deterministic procedure is optimal up to constant factors, in the sense that A and B can be computed in optimal asymptotic complexity, and |A| and |B| are within a small constant factor of √(n!) in size. We also analyze randomized "collision" algorithms for the same problem.
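The classical Shanks procedure that the paper generalizes can be stated concretely for ordinary discrete logarithms in (Z/pZ)*. A sketch (requires Python ≥ 3.8 for the modular-inverse form of `pow`); the √m split of the exponent mirrors the |A| ≈ |B| ≈ √(n!) split above:

```python
from math import isqrt

def bsgs(g, h, p):
    """Solve g**x = h (mod p) by Shanks' baby-step / giant-step method."""
    m = isqrt(p - 1) + 1
    # baby steps: table of g**j for j < m
    baby = {pow(g, j, p): j for j in range(m)}
    # giant steps: repeatedly multiply h by g**(-m) and look for a collision
    giant = pow(g, -m, p)              # modular inverse of g**m (Python >= 3.8)
    gamma = h % p
    for i in range(m):
        if gamma in baby:
            return i * m + baby[gamma]
        gamma = gamma * giant % p
    return None                        # h is not in the subgroup generated by g

h = pow(2, 17, 101)
x = bsgs(2, h, 101)
```

Both the baby-step table and the giant-step walk have size about √p, the same time/space trade-off the paper ports to the symmetric group.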

Read more
Symbolic Computation

Better Answers to Real Questions

We consider existential problems over the reals. Extended quantifier elimination generalizes the concept of regular quantifier elimination by additionally providing answers: descriptions of possible assignments for the quantified variables. Implementations of extended quantifier elimination via virtual substitution have been successfully applied to various problems in science and engineering. So far, the answers produced by these implementations have included infinitesimal and infinite numbers, which are hard to interpret in practice. We introduce here a post-processing procedure to convert, for fixed parameters, all answers into standard real numbers. The relevance of our procedure is demonstrated by applying our implementation to various examples from the literature, where it significantly improves the quality of the results.

Read more
Symbolic Computation

Bilinear systems with two supports: Koszul resultant matrices, eigenvalues, and eigenvectors

A fundamental problem in computational algebraic geometry is the computation of the resultant. A central question is when and how to compute it as the determinant of a matrix whose elements are the coefficients of the input polynomials, up to sign. This problem is well understood for unmixed multihomogeneous systems, that is, for systems consisting of multihomogeneous polynomials with the same support. However, little is known for mixed systems, that is, for systems consisting of polynomials with different supports. We consider the computation of the multihomogeneous resultant of bilinear systems involving two different supports. We present a constructive approach that expresses the resultant as the exact determinant of a Koszul resultant matrix, that is, a matrix constructed from maps in the Koszul complex. We exploit the resultant matrix to propose an algorithm to solve such systems. In the process we extend the classical eigenvalues and eigenvectors criterion to a more general setting. Our extension of the eigenvalues criterion applies to a general class of matrices, including the Sylvester-type and the Koszul-type ones.
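The eigenvalue criterion for solving systems has a familiar one-variable shadow: the roots of a monic univariate polynomial are the eigenvalues of its companion matrix. A NumPy sketch (the polynomial is arbitrary; the paper's Koszul-type matrices generalize this idea to bilinear systems):

```python
import numpy as np

def companion(coeffs):
    """Companion matrix of x**n + c_{n-1} x**(n-1) + ... + c_0,
    given low-to-high coefficients [c_0, ..., c_{n-1}]."""
    n = len(coeffs)
    C = np.zeros((n, n))
    C[1:, :-1] = np.eye(n - 1)        # subdiagonal of ones (shift structure)
    C[:, -1] = -np.asarray(coeffs)    # last column carries the coefficients
    return C

# p(x) = x**2 - 3x + 2 = (x - 1)(x - 2), so coeffs are [2, -3]
roots = np.sort(np.linalg.eigvals(companion([2.0, -3.0])).real)
```

Reducing root-finding to an eigenvalue problem is exactly the move the abstract's "eigenvalues and eigenvectors criterion" makes for resultant matrices.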

Read more
Symbolic Computation

Binomial Difference Ideals

In this paper, binomial difference ideals are studied. Three canonical representations for Laurent binomial difference ideals are given in terms of the reduced Groebner basis of Z[x]-lattices, regular and coherent difference ascending chains, and partial characters over Z[x]-lattices, respectively. Criteria for a Laurent binomial difference ideal to be reflexive, prime, well-mixed, and perfect are given in terms of their support lattices. The reflexive, well-mixed, and perfect closures of a Laurent binomial difference ideal are shown to be binomial. Most of the properties of Laurent binomial difference ideals are extended to the case of difference binomial ideals. Finally, algorithms are given to check whether a given Laurent binomial difference ideal I is reflexive, prime, well-mixed, or perfect, and in the negative case, to compute the reflexive, well-mixed, and perfect closures of I. An algorithm is given to decompose a finitely generated perfect binomial difference ideal as the intersection of reflexive prime binomial difference ideals.
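In the simpler, non-difference setting that the paper builds on, a binomial ideal is one generated by differences of monomials, and its reduced Groebner basis again consists of binomials (Eisenbud–Sturmfels). A small SymPy illustration of this ordinary (non-difference) case; the generators are an arbitrary example:

```python
from sympy import groebner, symbols

x, y, z = symbols('x y z')

# a binomial ideal: each generator is a difference of two monomials
gens = [x**2 - y, y**3 - z]
gb = groebner(gens, x, y, z, order='lex')

# Here the leading terms x**2 and y**3 are coprime, so the generators
# already form a Groebner basis; every basis element stays binomial.
basis = list(gb.exprs)
```

The paper's canonical representations play the analogous structural role for Laurent binomial *difference* ideals, with Z[x]-lattices replacing the integer lattices of the algebraic case.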

Read more
