
Publication


Featured research published by Alexander Litvinenko.


Computing | 2009

Application of hierarchical matrices for computing the Karhunen–Loève expansion

Boris N. Khoromskij; Alexander Litvinenko; Hermann G. Matthies

Realistic mathematical models of physical processes contain uncertainties. These models are often described by stochastic differential equations (SDEs) or stochastic partial differential equations (SPDEs) with multiplicative noise. The uncertainties in the right-hand side or the coefficients are represented as random fields. To solve a given SPDE numerically one has to discretise the deterministic operator as well as the stochastic fields. The total dimension of the SPDE is the product of the dimensions of the deterministic part and the stochastic part. To approximate random fields with as few random variables as possible, but still retaining the essential information, the Karhunen–Loève expansion (KLE) becomes important. The KLE of a random field requires the solution of a large eigenvalue problem. Usually it is solved by a Krylov subspace method with a sparse matrix approximation. We demonstrate the use of sparse hierarchical matrix techniques for this. A log-linear computational cost of the matrix-vector product and a log-linear storage requirement yield an efficient and fast discretisation of the random fields presented.
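A minimal sketch of the eigenvalue computation behind the KLE, using a dense covariance matrix and NumPy's symmetric eigensolver. The paper's point is to replace this dense matrix by a hierarchical-matrix approximation to reach log-linear cost; the exponential kernel, grid, and truncation length below are illustrative assumptions, not taken from the paper:

```python
# Hypothetical sketch: a truncated Karhunen-Loeve expansion computed from a
# dense covariance matrix.  H-matrix techniques replace the O(n^2) storage
# and matrix-vector cost used here.
import numpy as np

def kle(points, cov, n_terms):
    """Return the n_terms largest KLE eigenpairs of a covariance kernel.

    points  : (n, d) array of discretisation points
    cov     : callable cov(x, y) -> covariance of two points
    n_terms : number of retained terms in the truncated expansion
    """
    n = len(points)
    # Dense covariance matrix -- exactly what hierarchical matrices avoid.
    C = np.array([[cov(points[i], points[j]) for j in range(n)]
                  for i in range(n)])
    w, v = np.linalg.eigh(C)             # symmetric eigenproblem
    idx = np.argsort(w)[::-1][:n_terms]  # keep the largest eigenvalues
    return w[idx], v[:, idx]

# Exponential covariance on a 1-D grid (illustrative choice)
x = np.linspace(0.0, 1.0, 50).reshape(-1, 1)
expo = lambda a, b: np.exp(-np.abs(a - b).sum() / 0.2)
lam, phi = kle(x, expo, 5)
# The eigenvalues decay quickly, which is what makes truncation effective.
```

The fast eigenvalue decay of smooth covariance kernels is what lets the KLE represent the random field with few random variables.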


Engineering Structures | 2013

Parameter identification in a probabilistic setting

Bojana V. Rosić; Anna Kučerová; Jan Sýkora; Oliver Pajonk; Alexander Litvinenko; Hermann G. Matthies

The parameters to be identified are described as random variables, the randomness reflecting the uncertainty about the true values and allowing the incorporation of new information through Bayes's theorem. Such a description has two constituents: the measurable function or random variable, and the probability measure. One group of methods updates the measure; the other changes the function. We connect both with methods of spectral representation of stochastic problems and introduce a computational procedure without any sampling, which works completely deterministically and is fast and reliable. Some examples we show have highly nonlinear and non-smooth behaviour and use non-Gaussian measures.
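As a toy illustration of the two groups of methods in the abstract, a scalar linear-Gaussian sketch: updating the measure (conjugate Bayes on the Gaussian parameters) versus updating the random variable (a Kalman-type map applied to prior samples). All numbers are invented; the paper's own procedure is sampling-free and works on spectral coefficients instead:

```python
# Minimal sketch of the two update styles for a scalar linear-Gaussian toy.
import numpy as np

# Prior q ~ N(m0, s0^2); observation y = q + noise, noise ~ N(0, r^2)
m0, s0, r = 0.0, 1.0, 0.5
y = 1.2

# (a) Updating the measure: conjugate Bayes gives new Gaussian parameters.
s_post2 = 1.0 / (1.0 / s0**2 + 1.0 / r**2)
m_post = s_post2 * (m0 / s0**2 + y / r**2)

# (b) Updating the random variable: Kalman-type map applied to prior samples.
rng = np.random.default_rng(0)
K = s0**2 / (s0**2 + r**2)               # Kalman gain
q_prior = rng.normal(m0, s0, 200_000)
noise = rng.normal(0.0, r, 200_000)
q_post = q_prior + K * (y - (q_prior + noise))

# Both routes agree (up to Monte Carlo error) on the posterior mean.
print(m_post, q_post.mean())
```

In the linear-Gaussian case the two routes coincide; the paper's interest is precisely the nonlinear, non-Gaussian setting where they differ.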


SIAM Journal on Scientific Computing | 2014

To be or not to be intrusive? The solution of parametric and stochastic equations — the "plain vanilla" Galerkin case

Loïc Giraldi; Alexander Litvinenko; Dishi Liu; Hermann G. Matthies; Anthony Nouy

In parametric equations---stochastic equations are a special case---one may want to approximate the solution such that it is easy to evaluate its dependence on the parameters. Interpolation in the parameters is an obvious possibility---in this context often labeled as a collocation method. In the frequent situation where one has a “solver” for a given fixed parameter value, this may be used “nonintrusively” as a black-box component to compute the solution at all the interpolation points independently of each other. By extension, all other methods, and especially simple Galerkin methods, which produce some kind of coupled system, are often classed as “intrusive.” We show how, for such “plain vanilla” Galerkin formulations, one may solve the coupled system in a nonintrusive way, and even the simplest form of block-solver has comparable efficiency. This opens at least two avenues for possible speed-up: first, to benefit from the coupling in the iteration by using more sophisticated block-solvers and, second,...
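A hedged sketch of the nonintrusive idea in its simplest form: a one-parameter Galerkin system solved by block Jacobi, where the only operation used is the existing deterministic solver for the mean matrix A0, applied block by block. The matrices, chaos basis, and sizes are invented for illustration:

```python
# Hypothetical sketch: solving a coupled stochastic Galerkin system
# "nonintrusively" with the simplest block solver (block Jacobi).
import numpy as np

n = 4
rng = np.random.default_rng(1)
A0 = np.eye(n) * 4.0                     # mean operator; its solver is the black box
A1 = rng.standard_normal((n, n)) * 0.3   # parametric perturbation: A(t) = A0 + t*A1
b = np.ones(n)

solve_A0 = lambda r: np.linalg.solve(A0, r)   # the existing deterministic solver

# Orthonormal chaos basis {1, sqrt(3)*t} for t ~ U(-1, 1) gives the coupling
# matrix D with D[0,1] = D[1,0] = 1/sqrt(3) and zero diagonal.
D = np.array([[0.0, 1 / np.sqrt(3)], [1 / np.sqrt(3), 0.0]])
rhs = [b, np.zeros(n)]                   # deterministic right-hand side

U = [np.zeros(n), np.zeros(n)]
for _ in range(50):                      # block-Jacobi sweeps
    U = [solve_A0(rhs[i] - sum(D[i, j] * (A1 @ U[j])
                               for j in range(2) if j != i))
         for i in range(2)]

# Residual of the full coupled system: small means the nonintrusive
# iteration reproduced the intrusive Galerkin solution.
res = max(np.linalg.norm(A0 @ U[i]
                         + sum(D[i, j] * (A1 @ U[j]) for j in range(2))
                         - rhs[i])
          for i in range(2))
print(res)
```

Replacing the Jacobi sweep by a more sophisticated block solver is the first speed-up avenue the abstract mentions.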


Archive | 2012

Efficient Analysis of High Dimensional Data in Tensor Formats

Mike Espig; Wolfgang Hackbusch; Alexander Litvinenko; Hermann G. Matthies; Elmar Zander

In this article we introduce new methods for the analysis of high-dimensional data in tensor formats, where the underlying data come from a stochastic elliptic boundary value problem. After discretisation of the deterministic operator as well as of the random fields via KLE and PCE, the resulting high-dimensional operator can be approximated by sums of elementary tensors. This tensor representation can be used effectively for computing different quantities of interest, such as the maximum norm, level sets and the cumulative distribution function. The basic concept of data analysis in high dimensions is discussed for tensors represented in the canonical format; however, the approach can easily be used in other tensor formats. As an intermediate step we describe efficient iterative algorithms for computing the characteristic and sign functions as well as the pointwise inverse in the canonical tensor format. Since the representation rank grows during most algebraic operations and iteration steps, we use low-rank approximation and inexact recursive iteration schemes.
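The canonical tensor format used here can be sketched in a few lines: factor matrices store the tensor, single entries are cheap to evaluate, and a Hadamard (elementwise) product multiplies the representation ranks, which is why low-rank re-approximation between iteration steps is needed. A minimal sketch with invented sizes:

```python
# Minimal sketch of the canonical (CP) tensor format: a rank-R order-d
# tensor is stored as d factor matrices of shape (n, R).
import numpy as np

def cp_entry(factors, idx):
    """Evaluate one entry of a CP tensor: sum over ranks of products."""
    r = factors[0].shape[1]
    out = np.ones(r)
    for f, i in zip(factors, idx):
        out = out * f[i]
    return out.sum()

def cp_hadamard(a, b):
    """Elementwise product of two CP tensors: ranks multiply (Ra * Rb)."""
    return [np.einsum('ir,is->irs', fa, fb).reshape(fa.shape[0], -1)
            for fa, fb in zip(a, b)]

rng = np.random.default_rng(2)
d, n = 4, 5
A = [rng.standard_normal((n, 2)) for _ in range(d)]   # rank 2
B = [rng.standard_normal((n, 3)) for _ in range(d)]   # rank 3
C = cp_hadamard(A, B)                                 # rank 2 * 3 = 6

idx = (1, 0, 3, 2)
print(C[0].shape[1])                                  # representation rank of C
print(cp_entry(C, idx), cp_entry(A, idx) * cp_entry(B, idx))  # equal entries
```

Iterating such products (as in sign-function or pointwise-inverse iterations) makes the rank explode geometrically, hence the truncation step.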


Computers & Mathematics With Applications | 2014

Efficient low-rank approximation of the stochastic Galerkin matrix in tensor formats

Mike Espig; Wolfgang Hackbusch; Alexander Litvinenko; Hermann G. Matthies; Philipp Wähnert

In this article, we describe an efficient approximation of the stochastic Galerkin matrix which stems from a stationary diffusion equation. The uncertain permeability coefficient is assumed to be a log-normal random field with given covariance and mean functions. The approximation is done in the canonical tensor format and then compared numerically with the tensor train and hierarchical tensor formats. It will be shown that, under additional assumptions, the approximation error depends only on the smoothness of the covariance function, and depends neither on the number of random variables nor on the degree of the multivariate Hermite polynomials.
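As a one-variable illustration (not the paper's tensor construction), the Galerkin coefficient block for a log-normal coefficient exp(sigma*xi), xi ~ N(0,1), can be assembled entrywise as E[exp(sigma*xi) He_i(xi) He_j(xi)] by Gauss-Hermite quadrature against probabilists' Hermite polynomials; sigma and the polynomial degree below are arbitrary choices:

```python
# Illustrative sketch: one-variable stochastic Galerkin coefficient block
# for a log-normal coefficient, via Gauss-Hermite quadrature.
import numpy as np
from numpy.polynomial.hermite_e import hermegauss, hermeval

sigma, p, deg = 0.5, 3, 40
x, w = hermegauss(deg)                  # nodes/weights for weight exp(-x^2/2)
w = w / np.sqrt(2 * np.pi)              # normalise to the Gaussian measure

def He(i, t):
    """Probabilists' Hermite polynomial He_i evaluated at t."""
    c = np.zeros(i + 1)
    c[i] = 1.0
    return hermeval(t, c)

G = np.array([[np.sum(w * np.exp(sigma * x) * He(i, x) * He(j, x))
               for j in range(p + 1)] for i in range(p + 1)])

# Sanity check: the (0,0) entry is E[exp(sigma*xi)] = exp(sigma^2 / 2).
print(G[0, 0], np.exp(sigma**2 / 2))
```

In the paper's setting there are many random variables, and the analogous multi-variable blocks are what get compressed in the canonical tensor format.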


10th Working Conference on Uncertainty Quantification in Scientific Computing (WoCoUQ) | 2011

Parametric and Uncertainty Computations with Tensor Product Representations

Hermann G. Matthies; Alexander Litvinenko; Oliver Pajonk; Bojana V. Rosić; Elmar Zander

Computational uncertainty quantification in a probabilistic setting is a special case of a parametric problem. Parameter-dependent state vectors lead, via association with a linear operator, to analogues of the covariance, its spectral decomposition, and the associated Karhunen-Loeve expansion. From this one obtains a generalised tensor representation. The parameter in question may be a tuple of numbers, a function, a stochastic process, or a random tensor field. The tensor factorisation may be cascaded, leading to tensors of higher degree. When carried out on a discretised level, such factorisations in the form of low-rank approximations lead to very sparse representations of the high-dimensional quantities involved. Updating of uncertainty in the light of new information is an important part of uncertainty quantification. Formulated in terms of random variables instead of measures, the Bayesian update is a projection and allows the use of the tensor factorisations in this case as well.
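In its simplest discrete form, the association described above amounts to an SVD of a matrix of parameter-dependent state vectors, which yields the spectral decomposition of the covariance and a two-factor tensor representation. A toy sketch with an invented parametric field:

```python
# Sketch: parameter-dependent state vectors collected as columns of a
# matrix; its SVD gives the discrete Karhunen-Loeve / tensor factorisation
# u(p) ~ sum_k s_k * v_k * w_k(p).
import numpy as np

params = np.linspace(0.0, 1.0, 60)                  # parameter samples
x = np.linspace(0.0, 1.0, 100)
# Toy parametric field, smooth in both x and the parameter (illustrative).
U = np.array([np.sin(np.pi * x * (1 + p)) for p in params]).T   # (100, 60)

V, s, Wt = np.linalg.svd(U, full_matrices=False)
# Truncation rank capturing all but 1e-10 of the energy.
r = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 1 - 1e-10)) + 1
Ur = (V[:, :r] * s[:r]) @ Wt[:r]                    # low-rank tensor product

print(r, np.linalg.norm(U - Ur) / np.linalg.norm(U))
```

The rapid singular-value decay is what makes the low-rank representation sparse; cascading the same factorisation over further parameter directions gives the higher-degree tensors mentioned above.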


Archive | 2013

Sampling and Low-Rank Tensor Approximation of the Response Surface

Alexander Litvinenko; Hermann G. Matthies; Tarek A. El-Moselhy

Most (quasi-)Monte Carlo procedures can be seen as computing some integral over an often high-dimensional domain. If the integrand is expensive to evaluate, say for a stochastic PDE (SPDE) where the coefficients are random fields and the integrand is some functional of the PDE solution, there is the desire to keep all the samples for possible later computations of similar integrals. This obviously means a lot of data. To keep the storage demands low, and to allow evaluation of the integrand at points which were not sampled, we construct a low-rank tensor approximation of the integrand over the whole integration domain. This can also be viewed as a representation in some problem-dependent basis which allows a sparse representation. What one obtains is sometimes called a "surrogate" or "proxy" model, or a "response surface". This representation is built step by step, or sample by sample, and can already be used for each new sample. In case we are sampling a solution of an SPDE, this allows us to reduce the number of necessary samples, namely when the solution is already well represented by the low-rank tensor approximation. This can easily be checked by evaluating the residual of the PDE with the approximate solution. The procedure is demonstrated in the computation of a compressible transonic Reynolds-averaged Navier–Stokes flow around an airfoil with random/uncertain data.
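A hedged sketch of the sample-by-sample strategy above, for a toy parametric linear system instead of an SPDE: before paying for a new expensive solve, the current surrogate is evaluated and the equation residual tested; the solver runs only when the residual is too large. The linear-in-parameter surrogate and the 5% tolerance are illustrative choices, not the paper's low-rank tensor construction:

```python
# Toy problem: A(t) u = b with A(t) = A0 + t*A1; the surrogate replaces
# full solves wherever its equation residual is already small enough.
import numpy as np

rng = np.random.default_rng(4)
n = 20
A0 = np.eye(n) * 3.0
A1 = rng.standard_normal((n, n)) * 0.05
b = rng.standard_normal(n)
A = lambda t: A0 + t * A1

samples_t, samples_u = [], []          # stored snapshots

def surrogate(t):
    """Cheap predictor from stored snapshots: least-squares linear fit in t."""
    if len(samples_t) < 2:
        return None
    P = np.vstack([np.ones(len(samples_t)), samples_t]).T   # [1, t] basis
    coef, *_ = np.linalg.lstsq(P, np.array(samples_u), rcond=None)
    return coef[0] + t * coef[1]

n_solves = 0
for t in np.linspace(-1, 1, 40):
    u = surrogate(t)
    # Accept the surrogate only if the equation residual is below 5% of ||b||.
    if u is None or np.linalg.norm(A(t) @ u - b) > 5e-2 * np.linalg.norm(b):
        u = np.linalg.solve(A(t), b)   # expensive solve only when needed
        samples_t.append(t)
        samples_u.append(u)
        n_solves += 1

print(n_solves)   # number of full solves actually performed (at most 40)
```

The residual check is the same idea as in the abstract: it certifies the surrogate without knowing the true solution.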


NUMERICAL ANALYSIS AND APPLIED MATHEMATICS: International Conference on Numerical Analysis and Applied Mathematics 2008 | 2008

Data Sparse Computation of the Karhunen-Loeve Expansion

Boris N. Khoromskij; Alexander Litvinenko


Pattern Recognition Letters | 2003

The influence of prior knowledge on the expected performance of a classifier

Vladimir B. Berikov; Alexander Litvinenko


Mathematical Geosciences | 2013

Kriging and Spatial Design Accelerated by Orders of Magnitude: Combining Low-Rank Covariance Approximations with FFT-Techniques

Wolfgang Nowak; Alexander Litvinenko

Computational power poses heavy limitations on the achievable problem size for Kriging. In separate research lines, Kriging algorithms based on FFT, the separability of certain covariance functions, and low-rank representations of covariance functions have been investigated, all three leading to drastic speedup factors. The current study combines these ideas, and so combines the individual speedup factors of all three. This way, we reduce the mathematics behind Kriging to a computational complexity of only O(dL* log L*), where L* is the number of points along the longest edge of the involved lattice of estimation points, and d is the physical dimensionality of the lattice. For separable (factorized) covariance functions, the results are exact, and nonseparable covariance functions can be approximated well through sums of separable components. Only outputting the final estimate as an explicit map causes computational costs of ...
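The FFT ingredient of this speedup can be sketched for a one-dimensional lattice: a stationary covariance matrix on a regular grid is Toeplitz, and after embedding it in a circulant matrix its matrix-vector product costs O(L log L) via FFT instead of O(L^2). The covariance function and lattice size below are illustrative:

```python
# Hedged sketch: fast covariance matrix-vector product on a regular lattice
# via circulant embedding and FFT.
import numpy as np

L = 256
x = np.arange(L, dtype=float)
cov = lambda h: np.exp(-np.abs(h) / 10.0)      # stationary covariance (illustrative)

first_col = cov(x)                             # first column of the Toeplitz matrix
# Circulant embedding of size 2L-2: mirror the interior of the first column.
c = np.concatenate([first_col, first_col[-2:0:-1]])
lam = np.fft.fft(c)                            # eigenvalues of the circulant

def toeplitz_matvec(v):
    """O(L log L) product of the L x L covariance matrix with v."""
    vp = np.concatenate([v, np.zeros(len(c) - L)])
    return np.real(np.fft.ifft(lam * np.fft.fft(vp)))[:L]

rng = np.random.default_rng(5)
v = rng.standard_normal(L)
C = cov(np.abs(x[:, None] - x[None, :]))       # dense reference, O(L^2)
print(np.allclose(toeplitz_matvec(v), C @ v))
```

Combining this FFT trick per lattice edge with separable (low-rank) covariance factors is what multiplies the individual speedups in the study above.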

Collaboration

Top co-authors of Alexander Litvinenko:

Hermann G. Matthies (Braunschweig University of Technology)
Bojana V. Rosić (Braunschweig University of Technology)
Oliver Pajonk (Braunschweig University of Technology)
David E. Keyes (King Abdullah University of Science and Technology)
Elmar Zander (Braunschweig University of Technology)
Marc G. Genton (King Abdullah University of Science and Technology)
Ying Sun (King Abdullah University of Science and Technology)