Dmitry V. Savostyanov
Russian Academy of Sciences
Publications
Featured research published by Dmitry V. Savostyanov.
SIAM Journal on Scientific Computing | 2014
Sergey Dolgov; Dmitry V. Savostyanov
We propose algorithms for the solution of high-dimensional symmetric positive definite (SPD) linear systems with the matrix and the right-hand side given, and the solution sought, in a low-rank format. Similarly to density matrix renormalization group (DMRG) algorithms, our methods optimize the components of the tensor product format one at a time. To improve the convergence, we expand the search space by an inexact gradient direction. We prove geometrical convergence and estimate the convergence rate of the proposed methods using the analysis of the steepest descent algorithm. The complexity of the presented algorithms is linear in the mode size and dimension, and the demonstrated convergence is comparable to, or even better than, that of the DMRG algorithm. In numerical experiments we show that the proposed methods are also efficient for non-SPD systems, for example those arising from the chemical master equation describing a gene regulatory model at the mesoscopic scale.
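A toy two-dimensional analogue may help to visualise the idea (an illustrative sketch under simplifying assumptions, not the paper's tensor train implementation): the solution of A X = B is stored as a low-rank product U V^T, the factors are optimised in turn, and the search basis is expanded by the dominant directions of the residual, i.e. an inexact gradient. All names and sizes below are assumptions.

```python
# Minimal sketch (not the paper's TT code): solve A X = B with X kept as a
# low-rank product X = U @ V.T, alternating over the factors and enriching the
# search basis with a truncated residual ("inexact gradient") direction.
import numpy as np

def als_with_enrichment(A, B, rank=3, enrich=1, sweeps=20):
    n, m = B.shape
    U = np.linalg.qr(np.random.randn(n, rank))[0]
    for _ in range(sweeps):
        # Fix U (orthonormal columns), solve the Galerkin-projected system for V
        V = np.linalg.solve(U.T @ A @ U, U.T @ B).T
        # Residual of the full system = negative gradient of the energy functional
        R = B - A @ (U @ V.T)
        Z = np.linalg.svd(R, full_matrices=False)[0][:, :enrich]
        # Enrich the column basis by the dominant residual direction(s)
        U = np.linalg.qr(np.hstack([U, Z]))[0]
        V = np.linalg.solve(U.T @ A @ U, U.T @ B).T
        # Truncate back to the target rank with an SVD of U V^T
        P, s, Qt = np.linalg.svd(U @ V.T, full_matrices=False)
        U, V = P[:, :rank], Qt[:rank].T * s[:rank]
    return U @ V.T

# Usage on a small SPD matrix equation A X = B with a low-rank right-hand side
n, m = 40, 30
A = np.diag(np.arange(1.0, n + 1))                      # SPD test matrix
B = np.outer(np.ones(n), np.linspace(0.0, 1.0, m))      # rank-1 right-hand side
X = als_with_enrichment(A, B)
print(np.linalg.norm(A @ X - B) / np.linalg.norm(B))    # small relative residual
```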
Computer Physics Communications | 2014
Sergey Dolgov; Boris N. Khoromskij; Ivan V. Oseledets; Dmitry V. Savostyanov
We consider the approximate computation of several minimal eigenpairs of large Hermitian matrices that arise from high-dimensional problems. We use the tensor train (TT) format for vectors and matrices to overcome the curse of dimensionality and keep the storage and computational cost feasible. We approximate several low-lying eigenvectors simultaneously in the block version of the TT format. The computation is done by alternating minimization of the block Rayleigh quotient, performed sequentially for all TT cores. The proposed method combines the advantages of the density matrix renormalization group (DMRG) and the variational numerical renormalization group (vNRG) methods. We compare the performance of the proposed method with several versions of the DMRG codes and show that it may be preferable for systems with large dimension and/or mode size, or when a large number of eigenstates is sought.
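The computational primitive behind block Rayleigh quotient minimization is the Rayleigh-Ritz step. The sketch below shows it in the simplest dense setting, as a block Davidson-type loop for a Hermitian matrix; it illustrates the principle only, not the paper's TT/DMRG code, and all parameters are assumed.

```python
# Illustration of the principle only (not the paper's TT/DMRG code): the basic
# primitive of block Rayleigh quotient minimisation is a Rayleigh-Ritz step,
# shown here as a dense block Davidson-type loop for a Hermitian matrix.
import numpy as np

def lowest_eigenpairs(H, k=3, iters=25):
    n = H.shape[0]
    d = np.diag(H)
    V = np.linalg.qr(np.random.randn(n, k))[0]         # current block basis
    for _ in range(iters):
        # Rayleigh-Ritz: minimising trace(X^T H X) over X = V Y with
        # orthonormal columns reduces to a small eigenproblem
        theta, Y = np.linalg.eigh(V.T @ H @ V)
        X = V @ Y[:, :k]                               # k lowest Ritz vectors
        R = H @ X - X * theta[:k]                      # block residual
        W = R / (d[:, None] - theta[:k] + 1e-10)       # Davidson-style preconditioning
        V = np.linalg.qr(np.hstack([X, W]))[0]         # expanded orthonormal basis
    theta, Y = np.linalg.eigh(V.T @ H @ V)
    return theta[:k], (V @ Y)[:, :k]

# Toy Hermitian matrix; in the paper H acts on a huge tensor-product space
H = np.diag(np.arange(1.0, 51.0))
H += 0.1 * (np.eye(50, k=1) + np.eye(50, k=-1))
vals, vecs = lowest_eigenpairs(H, k=3)
print(vals)          # close to the three smallest eigenvalues of H
```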
Computing | 2009
Ivan V. Oseledets; Dmitry V. Savostyanov; Eugene E. Tyrtyshnikov
By a tensor problem in general, we mean one where all the data on input and output are given (exactly or approximately) in tensor formats, with the number of representation parameters much smaller than the total amount of data. For such problems it is natural to seek algorithms that work with data only in tensor formats and maintain the same small number of representation parameters, at the price of contaminating the results by the approximation (recompression) that occurs in every operation. Since approximation time is crucial and depends on the tensor formats in use, in this paper we discuss which formats are best suited to make recompression inexpensive and reliable. We present fast recompression procedures with sublinear complexity with respect to the size of the data and propose methods for basic linear algebra operations with all matrix operands in the Tucker format, mostly through calls to highly optimized level-3 BLAS/LAPACK routines. We show that for three-dimensional tensors the canonical format can be avoided without any loss of efficiency. Numerical illustrations are given for approximate matrix inversion via the proposed recompression techniques.
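A minimal HOSVD-style sketch of Tucker compression of a three-dimensional array illustrates the kind of recompression step discussed here (it is not the paper's procedure); note that the dominant cost is matrix-matrix multiplication, i.e. level-3 BLAS.

```python
# A minimal HOSVD-style Tucker compression of a 3D array (illustrative, not the
# paper's procedures); the heavy work is matrix-matrix multiplication (BLAS-3).
import numpy as np

def tucker_compress(T, eps=1e-6):
    factors = []
    for mode in range(3):
        M = np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)   # mode unfolding
        U, s, _ = np.linalg.svd(M, full_matrices=False)
        r = max(1, int(np.sum(s > eps * s[0])))                  # truncated mode rank
        factors.append(U[:, :r])
    # Core tensor: contract T with the transposed factors over every mode
    core = np.einsum('ijk,ia,jb,kc->abc', T, *factors)
    return core, factors

def tucker_reconstruct(core, factors):
    return np.einsum('abc,ia,jb,kc->ijk', core, *factors)

# Usage: a smooth 3D array compresses well in the Tucker format
x = np.linspace(0.0, 1.0, 60)
T = 1.0 / (1.0 + x[:, None, None] + x[None, :, None] + x[None, None, :])
core, factors = tucker_compress(T, eps=1e-8)
err = np.linalg.norm(T - tucker_reconstruct(core, factors)) / np.linalg.norm(T)
print(core.shape, err)
```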
The 2011 International Workshop on Multidimensional (nD) Systems | 2011
Dmitry V. Savostyanov; Ivan V. Oseledets
Using the recently proposed tensor train format for the representation of multi-dimensional dense arrays (tensors), we develop a fast interpolation method that approximates a given tensor using only a small number of its elements. The algorithm is based on the DMRG scheme, well known in the quantum chemistry community, modified to perform interpolation on an adaptively chosen set of tensor elements. The latter is selected using the maximum-volume principle, previously used in cross approximation schemes for matrices and 3-tensors. The numerical examples include the interpolation of one- and many-dimensional functions on uniform grids.
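The maximum-volume principle itself is easy to sketch for a tall matrix: greedily swap rows until the selected r-by-r submatrix has (locally) maximal volume. The code below is illustrative only, with assumed parameters, and is not the paper's TT-cross implementation.

```python
# Sketch of the maximum-volume principle for a tall n-by-r matrix: greedily swap
# rows until the selected r-by-r submatrix has (locally) maximal volume.
# Illustrative code only, not the paper's TT-cross implementation.
import numpy as np
from scipy.linalg import lu

def maxvol(A, tol=1.05, max_swaps=100):
    n, r = A.shape
    p, _, _ = lu(A)                          # LU pivot rows give a nonsingular start
    idx = np.argmax(p, axis=0)[:r].copy()
    for _ in range(max_swaps):
        B = A @ np.linalg.inv(A[idx])        # every row expressed in the chosen basis
        i, j = np.unravel_index(np.argmax(np.abs(B)), B.shape)
        if abs(B[i, j]) < tol:               # no swap increases the volume enough
            break
        idx[j] = i                           # row i replaces the j-th selected row
    return idx

# Usage: pick the 5 "most representative" sample points (rows) of a tall matrix
x = np.linspace(0.0, 1.0, 200)
A = np.column_stack([np.exp(-k * x) for k in range(5)])
print(np.sort(maxvol(A)))
```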
Russian Journal of Numerical Analysis and Mathematical Modelling | 2008
Heinz-Jürgen Flad; Boris N. Khoromskij; Dmitry V. Savostyanov; Eugene E. Tyrtyshnikov
A recently developed Cross 3D algorithm is applied to the approximation of the electron density function. The algorithm is shown to be fast and reliable on a sample of quantum chemistry data produced by the MOLPRO package.
Journal of Magnetic Resonance | 2013
Luke J. Edwards; Dmitry V. Savostyanov; Alexander A. Nevzorov; Maria Concistrè; Giuseppe Pileio; Ilya Kuprov
We demonstrate that Fokker-Planck equations in which spatial coordinates are treated on the same conceptual level as spin coordinates yield a convenient formalism for treating magic angle spinning NMR experiments. In particular, time dependence disappears from the background Hamiltonian (sample spinning is treated as an interaction), spherical quadrature grids are avoided completely (coordinate distributions are a part of the formalism), and relaxation theory with any linear diffusion operator is easily adapted from Stochastic Liouville Equation theory. The proposed formalism contains Floquet theory as a special case. The elimination of the spherical averaging grid comes at the cost of increased matrix dimensions, but we show that this can be mitigated by the use of state space restriction and tensor train techniques. It is also demonstrated that low correlation order basis sets apparently give accurate answers in powder-averaged MAS simulations, meaning that polynomially scaling simulation algorithms do exist for a large class of solid state NMR experiments.
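A toy schematic of the idea, for a single transverse magnetisation mode only (all names, rates, and sign conventions below are assumptions, not the paper's code): the rotor angle is put on a periodic grid, sample spinning becomes the transport operator -w_r d/dphi, and the angle-dependent frequency becomes a time-independent diagonal block; for coupled spins one would take Kronecker products with the spin Liouvillian.

```python
# Toy schematic for a single transverse magnetisation mode (all names, rates and
# sign conventions are assumptions): the rotor angle lives on a periodic grid,
# spinning becomes the transport operator -w_r * d/dphi, and the angle-dependent
# frequency becomes a time-independent diagonal block.
import numpy as np
from scipy.linalg import expm

n = 128                                              # rotor-angle grid points
phi = np.linspace(0.0, 2.0 * np.pi, n, endpoint=False)
w_r = 2.0 * np.pi * 1.0e4                            # spinning rate, rad/s (assumed)
delta = 2.0 * np.pi * 8.0e3                          # anisotropy scale, rad/s (assumed)
omega = delta * np.cos(2.0 * phi)                    # angle-dependent frequency offset

# Periodic spectral derivative d/dphi via FFT diagonalisation
m = 1j * np.fft.fftfreq(n, d=1.0 / n)                # i * integer wavenumbers
D = np.real(np.fft.ifft(m[:, None] * np.fft.fft(np.eye(n), axis=0), axis=0))

# Time-independent Fokker-Planck generator: local precession plus transport in phi
L = -1j * np.diag(omega) - w_r * D

# Free induction decay: unit magnetisation at every rotor phase, one propagator
# per time step, average over the angle grid; no explicit time dependence in L
dt = 1.0e-6
P = expm(L * dt)
u = np.ones(n, dtype=complex)
fid = [u.mean()]
for _ in range(200):
    u = P @ u
    fid.append(u.mean())
print(abs(fid[0]), abs(fid[100]))   # rotational echo: signal refocuses after one rotor period
```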
Physical Review B | 2014
Dmitry V. Savostyanov; Sergey Dolgov; Joern Werner; Ilya Kuprov
We introduce a new method, based on alternating optimization, for the compact representation of spin Hamiltonians and the solution of linear systems of algebraic equations in the tensor train format. We demonstrate the method's utility by simulating, without approximations, a 15N NMR spectrum of ubiquitin, a protein containing several hundred interacting nuclear spins. Existing simulation algorithms for the spin system and the NMR experiment in question either require significant approximations or scale exponentially with the spin system size. We compare the proposed method to the Spinach package, which uses heuristic restricted state space techniques to achieve polynomial complexity scaling. When the spin system topology is close to a linear chain (e.g., the backbone of a protein), the tensor train representation is more compact and can be computed faster than the sparse representation using restricted state spaces.
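The compactness claim rests on the fact that a chain of spins with nearest-neighbour couplings admits an exact tensor train (MPO) representation with very small ranks. The sketch below shows this for a toy Ising chain in a transverse field and checks it against the explicit Kronecker-sum construction; the paper's spin Hamiltonians and NMR machinery are far richer, this only illustrates the format.

```python
# Toy illustration of the compact format (not the paper's spin Hamiltonians):
# a nearest-neighbour Ising chain in a transverse field written as an exact
# tensor train (MPO) with ranks equal to 3, checked against the dense matrix.
import numpy as np

I2 = np.eye(2)
X = np.array([[0.0, 1.0], [1.0, 0.0]])
Z = np.array([[1.0, 0.0], [0.0, -1.0]])

def ising_mpo(n, J=1.0, h=0.5):
    """TT (MPO) cores of H = sum_i J Z_i Z_{i+1} + sum_i h X_i."""
    W = np.zeros((3, 3, 2, 2))
    W[0, 0], W[1, 0], W[2, 0] = I2, Z, h * X
    W[2, 1], W[2, 2] = J * Z, I2
    return [W[2:3, :]] + [W] * (n - 2) + [W[:, 0:1]]   # boundary row / column

def mpo_to_dense(cores):
    """Contract the TT cores into the full 2^n-by-2^n matrix (small n only)."""
    M = cores[0]
    for C in cores[1:]:
        # contract the bond index, Kronecker-multiply the physical legs
        M = np.einsum('aqij,qbkl->abikjl', M, C)
        s = M.shape
        M = M.reshape(s[0], s[1], s[2] * s[3], s[4] * s[5])
    return M[0, 0]

def ising_dense(n, J=1.0, h=0.5):
    """Reference construction as an explicit Kronecker sum."""
    H = np.zeros((2 ** n, 2 ** n))
    for i in range(n):
        ops = [I2] * n
        ops[i] = h * X
        term = ops[0]
        for o in ops[1:]:
            term = np.kron(term, o)
        H += term
    for i in range(n - 1):
        ops = [I2] * n
        ops[i], ops[i + 1] = J * Z, Z
        term = ops[0]
        for o in ops[1:]:
            term = np.kron(term, o)
        H += term
    return H

n = 6
print(np.linalg.norm(mpo_to_dense(ising_mpo(n)) - ising_dense(n)))   # ~0
```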
Linear Algebra and its Applications | 2014
Dmitry V. Savostyanov
We consider the cross interpolation of high-dimensional arrays in the tensor train format. We prove that the maximum-volume choice of the interpolation sets provides quasioptimal interpolation accuracy, which differs from the best possible accuracy by a factor that does not grow exponentially with the dimension. For nested interpolation sets we prove the interpolation property and propose greedy cross interpolation algorithms. We justify the theoretical results and measure the speed and accuracy of the proposed algorithms in numerical experiments.
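The matrix (two-dimensional) analogue of this cross interpolation is easy to state: build a rank-r skeleton approximation A ~ A[:, J] A[I, J]^{-1} A[I, :] by greedily adding one pivot at a time. The sketch below is illustrative only (full-matrix pivot search, assumed test function), not the paper's tensor train algorithm.

```python
# Matrix (two-dimensional) analogue of the cross interpolation: build a rank-r
# skeleton approximation A ~ A[:, J] A[I, J]^{-1} A[I, :] by greedily adding one
# pivot at a time.  Illustrative only, not the paper's tensor train algorithm.
import numpy as np

def greedy_cross(A, r):
    I, J = [], []
    R = A.copy()                                        # current residual
    for _ in range(r):
        i, j = np.unravel_index(np.argmax(np.abs(R)), R.shape)
        if R[i, j] == 0:
            break                                       # exact rank reached early
        I.append(i)
        J.append(j)
        R = R - np.outer(R[:, j], R[i, :]) / R[i, j]    # rank-1 cross update
    return I, J

# Usage: a smooth function sampled on a grid has rapidly decaying ranks
x = np.linspace(0.0, 1.0, 300)
A = 1.0 / (1.0 + x[:, None] + x[None, :])
I, J = greedy_cross(A, 8)
Ahat = A[:, J] @ np.linalg.solve(A[np.ix_(I, J)], A[I, :])
print(np.linalg.norm(A - Ahat) / np.linalg.norm(A))     # small relative error
```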