Publication


Featured research published by Lior Horesh.


Inverse Problems | 2010

Numerical methods for the design of large-scale nonlinear discrete ill-posed inverse problems

Eldad Haber; Lior Horesh; L. Tenorio

Design of experiments for discrete ill-posed problems is a relatively new area of research. While there has been some limited work concerning the linear case, little has been done to study design criteria and numerical methods for ill-posed nonlinear problems. We present an algorithmic framework for nonlinear experimental design with an efficient numerical implementation. The data are modeled as indirect, noisy observations of the model collected via a set of plausible experiments. An inversion estimate based on these data is obtained by a weighted Tikhonov regularization whose weights control the contribution of the different experiments to the data misfit term. These weights are selected by minimization of an empirical estimate of the Bayes risk that is penalized to promote sparsity. This formulation entails a bilevel optimization problem that is solved using a simple descent method. We demonstrate the viability of our design with a problem in electromagnetic imaging based on direct current resistivity and magnetotelluric data.
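The design criterion above can be made concrete with a small linear-case sketch in Python. All names are illustrative, the forward operators F_i are plain matrices, and the paper itself treats the harder nonlinear case; this is a reconstruction of the idea, not the authors' implementation.

import numpy as np

def weighted_tikhonov(F_list, d_list, w, L, alpha):
    # Weighted Tikhonov estimate for the linear case:
    #   min_m  sum_i w_i ||F_i m - d_i||^2 + alpha ||L m||^2
    n = L.shape[1]
    A = alpha * (L.T @ L)
    b = np.zeros(n)
    for F_i, d_i, w_i in zip(F_list, d_list, w):
        A += w_i * (F_i.T @ F_i)
        b += w_i * (F_i.T @ d_i)
    return np.linalg.solve(A, b)

def penalized_bayes_risk(w, F_list, models, L, alpha, beta, sigma, rng):
    # Empirical estimate of the Bayes risk over a set of training models,
    # penalized by an l1 term on the experiment weights to promote sparsity.
    risk = 0.0
    for m_true in models:
        d_list = [F_i @ m_true + sigma * rng.standard_normal(F_i.shape[0])
                  for F_i in F_list]
        m_hat = weighted_tikhonov(F_list, d_list, w, L, alpha)
        risk += np.sum((m_hat - m_true) ** 2)
    return risk / len(models) + beta * np.sum(np.abs(w))

Minimizing penalized_bayes_risk over w >= 0, e.g. by a projected descent method, reproduces the bilevel structure: each evaluation of the outer (design) objective requires solving the inner (inversion) problem.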


international conference on information fusion | 2010

Kalman filtering for compressed sensing

Dimitri Kanevsky; Avishy Carmi; Lior Horesh; Pini Gurfil; Bhuvana Ramabhadran; Tara N. Sainath

Compressed sensing is a newly emerging field dealing with the reconstruction of a sparse or, more precisely, a compressed representation of a signal from a relatively small number of observations, typically fewer than the signal dimension. In our previous work we have shown how the Kalman filter can be naturally applied for obtaining an approximate Bayesian solution of the compressed sensing problem. The resulting algorithm, which was termed CSKF, relies on a pseudo-measurement technique for enforcing the sparseness constraint. Our approach raises two concerns, which are addressed in this paper. The first refers to the validity of our approximation technique. In this regard, we provide a rigorous treatment of the CSKF algorithm, concluding with an upper bound on the discrepancy between the exact (in the Bayesian sense) and the approximate solutions. The second concern refers to the computational overhead associated with the CSKF in large-scale settings. This problem is alleviated here using an efficient measurement update scheme based on a Krylov subspace method.
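The pseudo-measurement idea can be sketched in a few lines of Python: the sparseness constraint is treated as a fictitious scalar observation that the l1 norm of the state is approximately zero, and the standard Kalman update is applied repeatedly to its linearization. This is an illustrative reconstruction under that interpretation, not the exact CSKF recursion, and the Krylov-based large-scale update from the paper is omitted.

import numpy as np

def kalman_update(x, P, H, z, R):
    # Standard Kalman measurement update for z = H x + noise with covariance R.
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = P - K @ H @ P
    return x, P

def sparsity_pseudo_update(x, P, n_iter=20, r_eps=1e-2):
    # Pseudo-measurement trick: enforce sparseness by pretending to observe
    # 0 = ||x||_1 + noise, linearized with the observation row H = sign(x)^T.
    for _ in range(n_iter):
        H = np.sign(x).reshape(1, -1)
        x, P = kalman_update(x, P, H, np.zeros(1), np.array([[r_eps]]))
    return x, P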


SIAM Journal on Scientific Computing | 2011

A Second Order Discretization of Maxwell's Equations in the Quasi-Static Regime on OcTree Grids

Lior Horesh; Eldad Haber

In this study we consider adaptive mesh refinement for the solution of Maxwell's equations in the quasi-static, or diffusion, regime. We propose a new finite volume OcTree discretization for the problem and show how to construct second order stencils on Yee grids, extending the known first order discretization stencils. We then develop an effective preconditioner for the problem and show that it performs well for discontinuous conductivities as well as for a wide range of frequencies.
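For orientation, a standard statement of the quasi-static (diffusion) regime referred to above, in which displacement currents are neglected; the notation is generic and not necessarily the exact operator discretized in the paper:

\nabla \times \left( \mu^{-1} \nabla \times \vec{E} \right) + \mathrm{i}\omega\sigma\,\vec{E} = -\mathrm{i}\omega\,\vec{s}

Here \sigma is the (possibly discontinuous) conductivity, \mu the magnetic permeability, \omega the angular frequency, and \vec{s} a source term.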


ieee automatic speech recognition and understanding workshop | 2013

Accelerating Hessian-free optimization for Deep Neural Networks by implicit preconditioning and sampling

Tara N. Sainath; Lior Horesh; Brian Kingsbury; Aleksandr Y. Aravkin; Bhuvana Ramabhadran

Hessian-free training has become a popular parallel second order optimization technique for Deep Neural Network training. This study aims at speeding up Hessian-free training, both by decreasing the amount of data used for training and by reducing the number of Krylov subspace solver iterations used for implicit estimation of the Hessian. First, we develop an L-BFGS based preconditioning scheme that avoids the need to access the Hessian explicitly. Since L-BFGS cannot be regarded as a fixed-point iteration, we further propose the employment of flexible Krylov subspace solvers that retain the desired theoretical convergence guarantees of their conventional counterparts. Second, we propose a new sampling algorithm, which geometrically increases the amount of data utilized for gradient and Krylov subspace iteration calculations. On a 50-hr English Broadcast News task, we find that these methodologies provide roughly a 1.5× speedup, whereas on a 300-hr Switchboard task they provide over a 2.3× speedup, with no loss in WER. These results suggest that even further speedups can be expected as problems scale and complexity grows.
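The two ingredients named above, a Hessian-free L-BFGS preconditioner and a flexible Krylov solver, can be sketched as follows. This is a schematic Python reconstruction under the textbook definitions of these techniques; the function names, curvature-pair bookkeeping, and choice of flexible CG variant are assumptions, not the authors' code.

import numpy as np

def lbfgs_inverse_apply(g, S, Y):
    # Two-loop recursion: apply an L-BFGS approximation of the inverse
    # Hessian to g using curvature pairs (s_i, y_i); the Hessian itself is
    # never formed, matching the Hessian-free setting.
    q = g.copy()
    rhos = [1.0 / (y @ s) for s, y in zip(S, Y)]
    alphas = []
    for s, y, rho in zip(reversed(S), reversed(Y), reversed(rhos)):
        a = rho * (s @ q)
        alphas.append(a)
        q -= a * y
    if Y:  # initial scaling from the most recent curvature pair
        q *= (S[-1] @ Y[-1]) / (Y[-1] @ Y[-1])
    for s, y, rho, a in zip(S, Y, rhos, reversed(alphas)):
        b = rho * (y @ q)
        q += (a - b) * s
    return q

def flexible_pcg(hvp, b, precond, tol=1e-6, max_iter=100):
    # Flexible preconditioned CG: since the preconditioner is not a fixed
    # operator, beta uses the Polak-Ribiere-like form
    # r_new^T (z_new - z) / (r^T z) rather than the fixed-point formula.
    x = np.zeros_like(b)
    r = b.copy()
    z = precond(r)
    p = z.copy()
    rz = r @ z
    for _ in range(max_iter):
        Hp = hvp(p)              # hvp: implicit Hessian-vector product
        alpha = rz / (p @ Hp)
        x += alpha * p
        r_new = r - alpha * Hp
        if np.linalg.norm(r_new) <= tol * np.linalg.norm(b):
            return x
        z_new = precond(r_new)
        beta = (r_new @ (z_new - z)) / rz
        rz = r_new @ z_new
        p = z_new + beta * p
        r, z = r_new, z_new
    return x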


Inverse Problems | 2009

Sensitivity computation of the ℓ1 minimization problem and its application to dictionary design of ill-posed problems

Lior Horesh; Eldad Haber

The ℓ1 minimization problem has been studied extensively in the past few years. Recently, there has been a growing interest in its application to inverse problems. Most studies have concentrated on devising ways for sparse representation of a solution using a given prototype dictionary. Very few studies have addressed the more challenging problem of optimal dictionary construction, and even these were primarily devoted to the simplistic sparse coding application. In this paper, a sensitivity analysis of the inverse solution with respect to the dictionary is presented. This analysis reveals some of the salient features and intrinsic difficulties associated with the dictionary design problem. Equipped with these insights, we propose an optimization strategy that alleviates these hurdles while utilizing the derived sensitivity relations for the design of a locally optimal dictionary. Our optimality criterion is based on local minimization of the Bayesian risk, given a set of training models. We present a mathematical formulation and an algorithmic framework to achieve this goal. The proposed framework offers the design of dictionaries for inverse problems that incorporate non-trivial, non-injective observation operators, where the data and the recovered parameters may reside in different spaces. We test our algorithm and show that it yields improved dictionaries for a diverse set of inverse problems in geophysics and medical imaging.
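Schematically, the dictionary design problem described above has a bilevel structure; the notation below is generic and written for orientation rather than taken from the paper:

\hat{x}_k(D) \in \arg\min_x \; \tfrac{1}{2}\,\| A D x - d_k \|_2^2 + \lambda \|x\|_1,
\qquad
D^\star \in \arg\min_D \; \frac{1}{K} \sum_{k=1}^{K} \big\| D\,\hat{x}_k(D) - m_k \big\|_2^2,

where A is the (possibly non-injective) observation operator, d_k the training data and m_k the training models; the sensitivity analysis concerns the derivative of \hat{x}_k(D) with respect to D.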


Inverse Problems in Science and Engineering | 2015

Optimal design of simultaneous source encoding

Eldad Haber; Kees van den Doel; Lior Horesh

A broad range of parameter estimation problems involve the collection of an excessively large number of observations N. Typically, each such observation involves excitation of the domain through injection of energy at some predefined sites and recording of the response of the domain at another set of locations. It has been observed that similar results can often be obtained by considering a far smaller number K of multiple linear superpositions of experiments, with K ≪ N. This allows the construction of the solution to the inverse problem in O(K) time instead of O(N). Given these considerations, it should not be necessary to perform all N experiments, but only a much smaller number of K experiments with simultaneous sources in superpositions with certain weights. Devising such a procedure would result in a drastic reduction in acquisition time. The question we attempt to rigorously investigate in this work is: what are the optimal weights? We formulate the problem as an optimal experimental design problem and show that by leveraging techniques from this field an answer is readily available. Designing optimal experiments requires some statistical framework, and the statistical framework one chooses to work with therefore plays a major role in the selection of the weights.
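A minimal numerical illustration of source encoding, with random weights as a stand-in (the whole point of the paper being that the weights should instead be chosen by optimal experimental design); sizes and names are arbitrary:

import numpy as np

rng = np.random.default_rng(0)
n_model, N, K = 200, 64, 8   # model size; original and encoded experiment counts

Q = rng.standard_normal((n_model, N))         # one column per original source
W = rng.standard_normal((N, K)) / np.sqrt(K)  # encoding weights (random here)
Q_enc = Q @ W   # K simultaneous-source experiments replace the N originals,
                # so only K forward simulations are required instead of N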


SPE Europec/EAGE Annual Conference | 2012

Adjoint-based History-Matching of Production and Time-lapse Seismic Data

Gijs van Essen; Eduardo Jimenez; J.K. Przybysz-Jarnut; Lior Horesh; Sippe G. Douma; Paul van den Hoek; Andrew R. Conn; Ulisses T. Mello

Time-lapse (4D) seismic attributes can provide valuable information on the fluid flow within subsurface reservoirs. This spatially rich source of information complements the poor areal information obtainable from production well data. While fusion of information from the two sources holds great promise, in practice the task is far from trivial. Joint inversion is complex for many reasons, including different time and spatial scales, the fact that the coupling mechanisms between the various parameters are often not well established, the localized nature of the required model updates, and the necessity to integrate multiple data. These concerns limit the applicability of many data-assimilation techniques. Adjoint-based methods are free of these drawbacks, but their implementation generally requires extensive programming effort. In this study we present a workflow that exploits the adjoint functionality that modern simulators offer for production data to consistently assimilate inverted 4D seismic attributes without the need for re-programming of the adjoint code. Here we discuss a novel workflow which we applied to assimilate production data and 4D seismic data from a synthetic reservoir model, which acts as the real yet unknown reservoir. Synthetic production data and 4D seismic data were created from this model to study the performance of the adjoint-based method. The seamless structure of the workflow allowed rapid setup of the data assimilation process, while the execution time of the process was reduced significantly. The resulting reservoir model updates displayed a considerable improvement in matching the saturation distribution in the field. This work was carried out as part of a joint Shell-IBM research project.

Introduction

In history matching, production measurements are assimilated to obtain a dynamical reservoir model that is consistent with historical data; see e.g. Oliver et al. (2008). However, production measurements, although generally of high temporal resolution, provide only very localized spatial information about the subsurface around the wells, especially in the early production phase when water- or gas-breakthrough has not yet occurred in the producers. After breakthrough, somewhat more insight can be gained into the reservoir model parameters that influence the mismatch between measured and simulated data. At that time, however, the benefits of using a pro-active reservoir management strategy have often diminished considerably. Interpreted time-lapse (4D) seismic data can provide information on the areal distribution of pressure and saturation changes due to fluid production or injection. The seismic data are generally noisier and more uncertain than production data, but due to the field-wide distribution of the data, very valuable additional information on the subsurface can be gathered; see e.g. Calvert (2005). In production data assimilation, the quality of the updated model is usually evaluated with a cost function defined as the summed squared error between the observations (measurements) and simulated production data, sometimes weighted by a measure of the accuracy of the observations. Ensemble Kalman filter (EnKF) methods (Naevdal et al. (2005); Evensen (2009); Aanonsen et al. (2009)), streamline-based methods (Vasco et al. (1999); Wang and Kovscek (2000)) and adjoint-based methods (Chen et al. (1974); Chavent et al. (1975); Li et al. (2003); Rodrigues (2006); Oliver et al. (2008)) are the most common data-assimilation techniques reported in the literature to deal with the history matching problem. All these methods update the reservoir model using the sensitivities of a least-squares cost function with respect to model parameters, but differ in the considered measurement types, model parameters and derivation of the sensitivities. Of these three methods, the adjoint-based method is the preferred method, because:


Archive | 2014

Nuclear Norm Optimization and Its Application to Observation Model Specification

Ning Hao; Lior Horesh; Misha E. Kilmer

Optimization problems involving the minimization of the rank of a matrix subject to certain constraints are pervasive in a broad range of disciplines, such as control theory [6, 26, 31, 62], signal processing [25], and machine learning [3, 77, 89]. However, solving such rank minimization problems is usually very difficult, as they are NP-hard in general [65, 75]. The nuclear norm of a matrix, as the tightest convex surrogate of the matrix rank, has fueled much of the recent research and has proved to be a powerful tool in many areas. In this chapter, we aim to provide a brief review of some of the state-of-the-art nuclear norm optimization algorithms as they relate to applications. We then propose a novel application of the nuclear norm to the linear model recovery problem, as well as a viable algorithm for solution of the recovery problem. Preliminary numerical results presented here motivate further investigation of the proposed idea.
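The relaxation the chapter builds on can be stated compactly: the affinely constrained rank minimization problem and its convex surrogate,

\min_{X} \operatorname{rank}(X) \ \ \text{s.t.} \ \ \mathcal{A}(X) = b
\qquad\longrightarrow\qquad
\min_{X} \|X\|_* \ \ \text{s.t.} \ \ \mathcal{A}(X) = b,
\qquad \|X\|_* = \sum_i \sigma_i(X),

where \mathcal{A} is a linear map and \sigma_i(X) are the singular values of X.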


european control conference | 2015

Source estimation for wave equations with uncertain parameters

Sergiy Zhuk; Stephen Moore; Alberto Costa Nogueira; Andrew Rawlinson; Tigran T. Tchrakian; Lior Horesh; Aleksandr Y. Aravkin; Albert Akhriev

Source estimation is a fundamental ingredient of Full Waveform Inversion (FWI). In such seismic inversion methods, the wavelet intensity and phase spectra are usually estimated statistically, although in the formulation of FWI as a nonlinear least-squares optimization problem they can be incorporated naturally into the workflow. Modern approaches to source estimation consider robust misfit functions, leading to the well-known robust FWI method. The present work uses synthetic data generated by a high order spectral element forward solver to produce observed data, which in turn are used to estimate the intensity and the location of the point seismic source term of the original elastic wave PDE. A min-max filter approach is used to convert the original source estimation problem into a state estimation problem conditioned on the observations and a non-standard uncertainty description. The resulting numerical scheme uses an implicit midpoint method to solve, in parallel, the chosen 2D and 3D numerical examples running on an IBM Blue Gene/Q, using a grid of approximately sixteen thousand 5th order elements, for a total of approximately 6.5 million degrees of freedom.
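In generic form, the estimation target is the point source term of an elastic wave equation; the notation below is assumed for illustration, and the paper's exact PDE, boundary conditions and uncertainty description may differ:

\rho\,\partial_t^2 u = \nabla \cdot \big( C : \nabla u \big) + g(t)\,\delta(x - x_s),

with both the intensity g(t) and the location x_s to be recovered from the observations.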


european conference on parallel processing | 2015

Semi-discrete Matrix-Free Formulation of 3D Elastic Full Waveform Inversion Modeling

Stephen Moore; Devi Sudheer Chunduri; Sergiy Zhuk; Tigran T. Tchrakian; Ewout van den Berg; Albert Akhriev; Alberto Costa Nogueira; Andrew Rawlinson; Lior Horesh

Full waveform inversion (FWI) is an emerging subsurface imaging technique, used to locate oil and gas reservoirs. The key challenges that hinder its adoption by industry are both algorithmic and computational in nature, including the storage, communication, and processing of large-scale data structures, which impose cardinal impediments upon computational scalability. In this work we present a complete matrix-free algorithmic formulation of a 3D elastic time domain spectral element solver for both the forward and adjoint wave-fields, as part of a greater cloud based FWI framework. We discuss computational optimisation (SIMD vectorisation, use of Many Integrated Core architectures, etc.) and present scaling results for two HPC systems, namely an IBM Blue Gene/Q and an Intel-based system equipped with Xeon Phi coprocessors.
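The matrix-free idea is that the solver only ever needs the action y = A x of the discretized operator, never the assembled matrix. A toy Python illustration with SciPy, using a 1D stencil as a stand-in for the 3D spectral-element operator (this is not the paper's solver):

import numpy as np
from scipy.sparse.linalg import LinearOperator, cg

n = 1000

def apply_stiffness(x):
    # Action of a 1D Laplacian stencil computed on the fly; the matrix
    # itself is never stored, which is what keeps the memory footprint low.
    y = 2.0 * x
    y[:-1] -= x[1:]
    y[1:] -= x[:-1]
    return y

A = LinearOperator((n, n), matvec=apply_stiffness, dtype=np.float64)
b = np.ones(n)
x, info = cg(A, b)   # Krylov solver driven purely by matrix-vector products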
