Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Eran Treister is active.

Publication


Featured research published by Eran Treister.


Numerical Linear Algebra With Applications | 2010

Square and stretch multigrid for stochastic matrix eigenproblems

Eran Treister; Irad Yavneh

A novel multigrid algorithm for computing the principal eigenvector of column-stochastic matrices is developed. The method is based on an approach originally introduced by Horton and Leutenegger (Perform. Eval. Rev. 1994; 22:191–200) whereby the coarse-grid problem is adapted to yield a better and better coarse representation of the original problem. A special feature of the present approach is the squaring of the stochastic matrix—followed by a stretching of its spectrum—just prior to the coarse-grid correction process. This procedure is shown to yield good convergence properties, even though a cheap and simple aggregation is used for the restriction and prolongation matrices, which is important for maintaining competitive computational costs. A second special feature is a bottom–up procedure for defining coarse-grid aggregates.
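As a rough illustration of two ingredients mentioned in the abstract, the NumPy sketch below squares a small column-stochastic matrix (which keeps its principal eigenvector while damping the subdominant spectrum) and forms an aggregation-based coarse operator. The random test matrix and the fixed aggregate assignment are assumptions; the spectral stretching, the bottom-up aggregation, and the full multigrid cycle of the paper are omitted.

```python
import numpy as np

rng = np.random.default_rng(0)
n, nc = 12, 4                                  # fine and coarse sizes (illustrative)

# Column-stochastic test matrix: columns sum to 1, so A @ x = x has a
# nonnegative solution (the principal eigenvector).
A = rng.random((n, n))
A /= A.sum(axis=0, keepdims=True)

# Squaring keeps the principal eigenvector and squares every other
# eigenvalue, damping the subdominant spectrum before coarsening.
A2 = A @ A

# Simple, fixed aggregation: unknown i belongs to aggregate i % nc.
# (The paper builds aggregates bottom-up; this assignment is a placeholder.)
P = np.zeros((n, nc))
P[np.arange(n), np.arange(n) % nc] = 1.0       # piecewise-constant prolongation
R = P.T                                        # matching low-order restriction

# Aggregated coarse operator, renormalized so it is again column-stochastic.
# (The paper additionally adapts the coarse problem using the current iterate.)
Ac = R @ A2 @ P
Ac /= Ac.sum(axis=0, keepdims=True)
print(Ac.shape, Ac.sum(axis=0))                # (4, 4), columns sum to 1
```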


Numerical Linear Algebra With Applications | 2011

Fast multilevel methods for Markov chains

Hans De Sterck; K. Miller; Eran Treister; Irad Yavneh

This paper describes multilevel methods for the calculation of the stationary probability vector of large, sparse, irreducible Markov chains. In particular, several recently proposed significant improvements to the multilevel aggregation method of Horton and Leutenegger are described and compared. Furthermore, we propose a very simple improvement of that method using an over-correction mechanism. We also compare with more traditional iterative methods for Markov chains such as weighted Jacobi, two-level aggregation/disaggregation, and preconditioned stabilized biconjugate gradient and GMRES. Numerical experiments confirm that our improvements lead to significant speedup, and result in multilevel methods that are competitive with leading iterative solvers for Markov chains.
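A minimal NumPy sketch of one classical two-level aggregation/disaggregation correction with the over-correction factor mentioned above; the relaxation, the aggregate choice, and the parameter values are illustrative assumptions, and the recursive multilevel version is not shown.

```python
import numpy as np

def relax(A, x, w=0.7):
    """Damped power/Jacobi-type relaxation for the stationary vector A x = x."""
    x = (1.0 - w) * x + w * (A @ x)
    return x / x.sum()

def aggregation_disaggregation(A, x, agg, omega=1.0):
    """One two-level aggregation/disaggregation correction.

    A     : column-stochastic matrix
    x     : current positive iterate, normalized to sum to 1
    agg   : agg[i] = index of the aggregate containing state i
    omega : correction factor; omega > 1 gives the over-correction variant
    """
    n, nc = A.shape[0], agg.max() + 1
    P = np.zeros((n, nc))
    P[np.arange(n), agg] = 1.0

    mass = P.T @ x                            # probability mass of each aggregate
    Ac = P.T @ (A * x) @ P / mass             # coarse column-stochastic matrix
    vals, vecs = np.linalg.eig(Ac)            # coarse stationary vector (eigenvalue 1)
    xc = np.abs(np.real(vecs[:, np.argmax(np.real(vals))]))
    xc /= xc.sum()

    x_ad = x * (P @ (xc / mass))              # disaggregate back to the fine level
    x_new = x + omega * (x_ad - x)            # (over-)corrected iterate
    return x_new / x_new.sum()
```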


SIAM Journal on Scientific Computing | 2011

On-the-Fly Adaptive Smoothed Aggregation Multigrid for Markov Chains

Eran Treister; Irad Yavneh

A new adaptive algebraic multigrid scheme is developed for the solution of Markov chains, where the hierarchy of operators is adapted on-the-fly in a setup process that is interlaced with the solution process. The setup process feeds the solution process with improved operators, while the solution process provides the adaptive setup process with better approximations on which to base further-improved operators. The approach is demonstrated using Petrov-Galerkin smoothed aggregation where only the prolongation operator is smoothed, while the restriction remains of low order. Results show that the on-the-fly adaptive scheme can improve the performance of multigrid solvers that require extensive setup computations, in both serial and parallel environments.
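The following sketch shows a Petrov-Galerkin smoothed-aggregation construction of one coarse level, with only the prolongation smoothed and a low-order restriction, as described above. It uses dense NumPy matrices, a fixed aggregation, and a standard Jacobi damping factor, all of which are illustrative assumptions, and it omits the adaptive, interlaced setup that is the paper's contribution.

```python
import numpy as np

def petrov_galerkin_sa(A, agg, omega=2.0 / 3.0):
    """Build one coarse level by Petrov-Galerkin smoothed aggregation:
    smooth only the prolongation, keep the unsmoothed (low-order) restriction.

    A   : fine-level operator (dense here for brevity)
    agg : agg[i] = aggregate index of unknown i
    """
    n, nc = A.shape[0], agg.max() + 1
    P_tent = np.zeros((n, nc))
    P_tent[np.arange(n), agg] = 1.0            # piecewise-constant tentative prolongation

    Dinv = 1.0 / np.diag(A)                    # Jacobi smoothing of the prolongation only
    P = P_tent - omega * (Dinv[:, None] * (A @ P_tent))
    R = P_tent.T                               # restriction remains low order
    Ac = R @ A @ P                             # Petrov-Galerkin coarse operator
    return P, R, Ac

# Example: a 1-D Laplacian-like operator with aggregates of size 3.
n = 12
A = 2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1)
P, R, Ac = petrov_galerkin_sa(A, np.arange(n) // 3)
print(Ac.shape)                                # (4, 4)
```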


IEEE Transactions on Signal Processing | 2012

A Multilevel Iterated-Shrinkage Approach to l_1 Penalized Least-Squares Minimization

Eran Treister; Irad Yavneh

The area of sparse approximation of signals is drawing tremendous attention in recent years. Typically, sparse solutions of underdetermined linear systems of equations are required. Such solutions are often achieved by minimizing an l1 penalized least squares functional. Various iterative-shrinkage algorithms have recently been developed and are quite effective for handling these problems, often surpassing traditional optimization techniques. In this paper, we suggest a new iterative multilevel approach that reduces the computational cost of existing solvers for these inverse problems. Our method takes advantage of the typically sparse representation of the signal, and at each iteration it adaptively creates and processes a hierarchy of lower-dimensional problems employing well-known iterated shrinkage methods. Analytical observations suggest, and numerical results confirm, that this new approach may significantly enhance the performance of existing iterative shrinkage algorithms in cases where the matrix is given explicitly.
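For context, here is a minimal NumPy version of one representative iterated-shrinkage method (ISTA) for the l1 penalized least-squares problem; it is the kind of baseline solver the multilevel approach accelerates, not the multilevel scheme itself, and the problem sizes and regularization parameter below are arbitrary.

```python
import numpy as np

def soft_threshold(v, t):
    """Componentwise soft-thresholding: the proximal map of t * ||.||_1."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def ista(A, b, lam, iters=500):
    """Iterated shrinkage (ISTA) for  min_x  0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2              # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(iters):
        grad = A.T @ (A @ x - b)               # gradient of the smooth part
        x = soft_threshold(x - grad / L, lam / L)
    return x

# Tiny underdetermined example with a sparse ground truth.
rng = np.random.default_rng(0)
A = rng.standard_normal((64, 256))
x_true = np.zeros(256)
x_true[rng.choice(256, 8, replace=False)] = 1.0
x_hat = ista(A, A @ x_true, lam=0.1)
print(np.count_nonzero(np.abs(x_hat) > 1e-3))
```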


SIAM Journal on Scientific Computing | 2017

jInv--a Flexible Julia Package for PDE Parameter Estimation

Lars Ruthotto; Eran Treister; Eldad Haber

Estimating parameters of Partial Differential Equations (PDEs) from noisy and indirect measurements often requires solving ill-posed inverse problems. These so-called parameter estimation or inverse medium problems arise in a variety of applications such as geophysical and medical imaging and nondestructive testing. Their solution is computationally intense since the underlying PDEs need to be solved numerous times until the reconstruction of the parameters is sufficiently accurate. Typically, the computational demand grows significantly when more measurements are available, which poses severe challenges to inversion algorithms as measurement devices become more powerful. In this paper we present jInv, a flexible framework and open source software that provides parallel algorithms for solving parameter estimation problems with many measurements. Being written in the expressive programming language Julia, jInv is portable, easy to understand and extend, cross-platform tested, and well-documented. It provides novel parallelization schemes that exploit the inherent structure of many parameter estimation problems and can be used to solve multiphysics inversion problems as is demonstrated using numerical experiments motivated by geophysical imaging.
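The sketch below mimics, in plain Python rather than jInv's actual Julia API, the structural point emphasized above: with many measurements the misfit is a sum of independent per-source terms, so forward simulations can run in parallel. The forward map here is a placeholder linear operator, not a PDE solve, and all names are hypothetical.

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def forward(model, source):
    """Placeholder forward simulation; in a real inversion this would be a PDE solve."""
    return source @ model

def misfit_one_source(model, source, data):
    """Least-squares data misfit for a single measurement/source."""
    r = forward(model, source) - data
    return 0.5 * float(r @ r)

def total_misfit(model, sources, observations, workers=4):
    """Misfit summed over many measurements.  Each term is independent, so the
    evaluations parallelize naturally over sources (threads here; a real
    framework would distribute them across remote workers)."""
    with ThreadPoolExecutor(max_workers=workers) as ex:
        parts = ex.map(lambda sd: misfit_one_source(model, *sd),
                       zip(sources, observations))
        return sum(parts)

rng = np.random.default_rng(0)
m = rng.standard_normal(20)                    # unknown model parameters
srcs = [rng.standard_normal((5, 20)) for _ in range(8)]
obs = [S @ m for S in srcs]                    # synthetic, noise-free data
print(total_misfit(m, srcs, obs))              # 0.0 at the true model
```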


SIAM Journal on Scientific Computing | 2015

Non-Galerkin Multigrid Based on Sparsified Smoothed Aggregation

Eran Treister; Irad Yavneh

Algebraic multigrid (AMG) methods are known to be efficient in solving linear systems arising from the discretization of partial differential equations and other related problems. These methods employ a hierarchy of representations of the problem on successively coarser meshes. The coarse-grid operators are usually defined by (Petrov--)Galerkin coarsening, which is a projection of the original operator using the restriction and prolongation transfer operators. Therefore, these transfer operators determine the sparsity pattern and operator complexity of the multigrid hierarchy. In many scenarios the multigrid operators tend to become much denser as the coarsening progresses. Such behavior is especially problematic in parallel AMG computations, where it imposes an expensive communication overhead. In this work we present a new algebraic technique for controlling the sparsity pattern of the operators in the AMG hierarchy, independently of the choice of the restriction and prolongation. Our method is based on...
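To make the setting concrete, the SciPy sketch below forms a standard Galerkin coarse operator, whose pattern is dictated entirely by the transfer operators, and then applies a naive sparsification that drops small off-diagonal entries and lumps them onto the diagonal. The dropping rule is a generic stand-in used only to illustrate the goal of controlling the coarse pattern; it is not the algorithm proposed in the paper.

```python
import numpy as np
import scipy.sparse as sp

def galerkin(R, A, P):
    """(Petrov-)Galerkin coarse operator; its sparsity pattern is fixed by R and P."""
    return (R @ A @ P).tocsr()

def drop_small(Ac, tol=1e-2):
    """Naive sparsification for illustration only: drop off-diagonal entries
    below a relative threshold and add the dropped mass to the diagonal so
    that row sums are preserved."""
    C = Ac.tocoo()
    keep = (np.abs(C.data) > tol * np.abs(C.data).max()) | (C.row == C.col)
    dropped = sp.coo_matrix((C.data[~keep], (C.row[~keep], C.col[~keep])), shape=C.shape)
    lump = np.asarray(dropped.sum(axis=1)).ravel()     # mass removed from each row
    kept = sp.coo_matrix((C.data[keep], (C.row[keep], C.col[keep])), shape=C.shape)
    return (kept + sp.diags(lump)).tocsr()

# Tiny demo: a 1-D Laplacian coarsened with aggregates of size 2.
n = 16
A = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n), format="csr")
P = sp.csr_matrix((np.ones(n), (np.arange(n), np.arange(n) // 2)), shape=(n, n // 2))
Ac = galerkin(P.T, A, P)
print(Ac.nnz, drop_small(Ac).nnz)
```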


SIAM Journal on Scientific Computing | 2017

Full Waveform Inversion Guided by Travel Time Tomography

Eran Treister; Eldad Haber

Full waveform inversion (FWI) is a process in which seismic numerical simulations are fit to observed data by changing the wave velocity model of the medium under investigation. The problem is non-linear, and therefore optimization techniques have been used to find a reasonable solution to the problem. The main problem in fitting the data is the lack of low spatial frequencies. This deficiency often leads to a local minimum and to non-plausible solutions. In this work we explore how to obtain low frequency information for FWI. Our approach involves augmenting FWI with travel time tomography, which has low-frequency features. By jointly inverting these two problems we enrich FWI with information that can replace low frequency data. In addition, we use high order regularization, in a preliminary inversion stage, to prevent high frequency features from polluting our model in the initial stages of the reconstruction. This regularization also promotes the non-dominant low-frequency modes that exist in the FWI sensitivity. By applying a joint FWI and travel time inversion we are able to obtain a smooth model that can later be used to recover a good approximation for the true model. A second contribution of this paper involves the acceleration of the main computational bottleneck in FWI--the solution of the Helmholtz equation. We show that the solution time can be reduced by solving the equation for multiple right-hand sides using block multigrid preconditioned Krylov methods.
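As a small illustration of the multiple right-hand-side point, the SciPy sketch below assembles a 1-D Helmholtz-like operator and reuses a single factorization for many sources; the grid, wavenumber, and boundary treatment are illustrative assumptions, and the paper's block multigrid-preconditioned Krylov solvers (which remain practical where factorizations do not) are not shown.

```python
import numpy as np
import scipy.sparse as sp
from scipy.sparse.linalg import splu

# 1-D Helmholtz-like operator (-u'' - k^2 u) with homogeneous Dirichlet ends.
n, k = 400, 40.0
h = 1.0 / (n + 1)
lap = sp.diags([-1.0, 2.0, -1.0], [-1, 0, 1], shape=(n, n)) / h**2
H = (lap - k**2 * sp.eye(n)).tocsc()

# Many sources / right-hand sides: the expensive setup (here a direct
# factorization, in the paper a multigrid preconditioner) is built once
# and reused for every right-hand side.
B = np.random.default_rng(0).standard_normal((n, 32))
lu = splu(H)
U = lu.solve(B)                 # solves all 32 systems with one factorization
print(U.shape)                  # (400, 32)
```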


SIAM Journal on Scientific Computing | 2016

A Multilevel Framework for Sparse Optimization with Application to Inverse Covariance Estimation and Logistic Regression

Eran Treister; Javier S. Turek; Irad Yavneh

Solving l1 regularized optimization problems is common in the fields of computational biology, signal processing and machine learning. Such l1 regularization is utilized to find sparse minimizers of convex functions. A well-known example is the LASSO problem, where the l1 norm regularizes a quadratic function. A multilevel framework is presented for solving such l1 regularized sparse optimization problems efficiently. We take advantage of the expected sparseness of the solution, and create a hierarchy of problems of similar type, which is traversed in order to accelerate the optimization process. This framework is applied for solving two problems: (1) the sparse inverse covariance estimation problem, and (2) l1-regularized logistic regression. In the first problem, the inverse of an unknown covariance matrix of a multivariate normal distribution is estimated, under the assumption that it is sparse. To this end, an l1 regularized log-determinant optimization problem needs to be solved. This task is challenging especially for large-scale datasets, due to time and memory limitations. In the second problem, the l1-regularization is added to the logistic regression classification objective to reduce overfitting to the data and obtain a sparse model. Numerical experiments demonstrate the efficiency of the multilevel framework in accelerating existing iterative solvers for both of these problems.
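The toy function below caricatures one level of the idea of exploiting the expected sparseness: build a low-dimensional subproblem from the current support plus the coordinates with the largest gradient magnitude, run a few shrinkage iterations there, and prolong the result back. The selection rule, sizes, and inner solver are illustrative assumptions rather than the paper's exact construction.

```python
import numpy as np

def soft_threshold(v, t):
    """Componentwise soft-thresholding (proximal map of t * ||.||_1)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def reduced_lasso_step(A, b, x, lam, keep=50, inner=20):
    """One 'coarse' step for  min_x 0.5*||A x - b||^2 + lam*||x||_1:
    restrict to a subset of promising coordinates, solve the smaller
    problem approximately, and map the result back."""
    grad = A.T @ (A @ x - b)
    score = np.where(x != 0.0, np.inf, np.abs(grad))   # always keep the current support
    idx = np.argsort(-score)[:keep]                    # coordinates kept on the low level
    As, xs = A[:, idx], x[idx].copy()
    L = np.linalg.norm(As, 2) ** 2                     # step size for the subproblem
    for _ in range(inner):
        xs = soft_threshold(xs - As.T @ (As @ xs - b) / L, lam / L)
    x_new = np.zeros_like(x)
    x_new[idx] = xs                                    # coordinates outside idx stay zero
    return x_new
```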


IEEE Convention of Electrical and Electronics Engineers in Israel | 2012


Eran Treister; Irad Yavneh



Neural Information Processing Systems | 2014

A Block-Coordinate Descent Approach for Large-scale Sparse Inverse Covariance Estimation

Eran Treister; Javier S. Turek

Collaboration


Dive into Eran Treister's collaborations.

Top Co-Authors

Irad Yavneh
Technion – Israel Institute of Technology

Eldad Haber
University of British Columbia

Javier S. Turek
Technion – Israel Institute of Technology

Elliot Holtham
University of British Columbia

K. Miller
University of Waterloo