Elena Resmerita
Austrian Academy of Sciences
Publications
Featured research published by Elena Resmerita.
Inverse Problems | 2005
Elena Resmerita
This paper deals with quantitative aspects of regularization for ill-posed linear equations in Banach spaces, when the regularization is performed with a general convex penalty functional. The error estimates derived here by means of Bregman distances yield better convergence rates than those previously known for maximum entropy regularization, as well as for total variation regularization.
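For readers unfamiliar with the tool, the Bregman distance of a convex penalty functional J is a standard construction (a generic definition, not a formula quoted from the paper): for a subgradient p of J at v,

```latex
D_J(u, v) = J(u) - J(v) - \langle p, u - v \rangle, \qquad p \in \partial J(v).
```

For the quadratic penalty J(u) = \|u\|^2/2 this reduces to \|u - v\|^2/2, which is why Bregman distances serve as a natural substitute for norm errors when the penalty is not quadratic.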
Abstract and Applied Analysis | 2006
Dan Butnariu; Elena Resmerita
The aim of this paper is twofold. First, several basic mathematical concepts involved in the construction and study of Bregman-type iterative algorithms are presented from a unified analytic perspective, and some gaps in the current knowledge about those concepts are filled in. Second, we employ existing results on total convexity, sequential consistency, uniform convexity and relative projections in order to define and study the convergence of a new Bregman-type iterative method for solving operator equations.
Inverse Problems | 2006
Elena Resmerita; Otmar Scherzer
In this paper error estimates for non-quadratic regularization of nonlinear ill-posed problems in Banach spaces are derived. Our analysis is based on a few novel features: in comparison with the classical analysis of regularization methods for inverse and ill-posed problems, where Lipschitz continuity of the Fréchet derivative is required, we use a differentiability condition with respect to the Bregman distance. A stability result for the regularized solutions in terms of Bregman distances is also proven. Moreover, a source-wise representation of the solution, as used in standard theory, is interpreted in terms of data enhancement. It is also shown that total variation Bregman distance regularization for image analysis, as developed recently, can be considered a two-step regularization method consisting of a combination of total variation regularization and additional enhancement. This technique can also be applied to filtering.
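The scheme analyzed here is of Tikhonov type with a general convex penalty; schematically (notation assumed for illustration, not quoted from the paper),

```latex
u_\alpha^\delta \in \operatorname*{arg\,min}_{u} \; \|F(u) - y^\delta\|^2 + \alpha J(u),
```

with the regularization error measured in the Bregman distance D_J(u_\alpha^\delta, u^\dagger) rather than in the norm, under the differentiability condition on F with respect to D_J mentioned above.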
Computing | 2007
Martin Burger; Elena Resmerita; Lin He
In this paper, we consider error estimation for image restoration problems based on generalized Bregman distances. This error estimation technique has been used by the authors to derive convergence rates of variational regularization schemes for linear and nonlinear inverse problems (cf. Burger in Inverse Prob 20: 1411–1421, 2004; Resmerita in Inverse Prob 21: 1303–1314, 2005; Inverse Prob 22: 801–814, 2006), but so far it has not been applied to image restoration in a systematic way. Due to the flexibility of Bregman distances, this approach is particularly attractive for imaging tasks, where singular energies (non-differentiable, not strictly convex) are often used to achieve goals such as preservation of edges. Besides discussing variational image restoration schemes, our main goal in this paper is to extend the error estimation approach to iterative regularization schemes (and time-continuous flows) that have recently emerged as multiscale restoration techniques and could remedy some shortcomings of the variational schemes. We derive error estimates between the iterates and the exact image for both clean and noisy data, the latter also giving indications on the choice of termination criteria. The error estimates are applied to various image restoration approaches, such as denoising and decomposition by total variation and wavelet methods. We shall see that interesting results for various restoration approaches can be deduced from our general results simply by exploring the structure of subgradients.
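The iterative (Bregman) regularization schemes referred to here can be sketched as follows, with H(u, f) a data-fidelity term and D_J^{p} the Bregman distance of the regularization energy J (schematic notation following the general Bregman-iteration literature, not a formula from this paper):

```latex
u^{k+1} \in \operatorname*{arg\,min}_{u} \; H(u, f) + D_J^{p^k}(u, u^k), \qquad
p^{k+1} = p^k - \partial_u H(u^{k+1}, f) \in \partial J(u^{k+1}),
```

where the subgradient update follows from the first-order optimality condition of the minimization step.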
Inverse Problems | 2007
Elena Resmerita; Heinz W. Engl; Alfredo N. Iusem
The expectation-maximization (EM) algorithm is a convenient tool for approximating maximum likelihood estimators in situations when available data are incomplete, as is the case for many inverse problems. Our focus here is on the continuous version of the EM algorithm for a Poisson model, which is known to perform unstably when applied to ill-posed integral equations. We interpret and analyse the EM algorithm as a regularization procedure. We show weak convergence of the iterates to a solution of the equation when exact data are considered. In the case of perturbed data, similar results are established by employing a stopping rule of discrepancy type under boundedness assumptions on the problem data.
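A minimal discrete sketch of the EM iteration for the Poisson model (the Richardson–Lucy multiplicative form for a nonnegative matrix A; the function name and defaults are illustrative, not taken from the paper):

```python
import numpy as np

def em_poisson(A, y, n_iter=200, x0=None):
    """EM (Richardson-Lucy) iteration for A x = y with Poisson data.

    Multiplicative update: x <- x * A^T(y / (A x)) / (A^T 1).
    Assumes A >= 0 entrywise, y >= 0, and positive column sums of A.
    """
    m, n = A.shape
    x = np.ones(n) if x0 is None else x0.astype(float).copy()
    col_sums = A.sum(axis=0)  # A^T 1
    for _ in range(n_iter):
        Ax = A @ x
        # Guard against division by zero where the forward projection vanishes.
        ratio = np.where(Ax > 0, y / Ax, 0.0)
        x = x * (A.T @ ratio) / col_sums
    return x
```

The iterates stay nonnegative by construction; in the ill-posed setting the paper's point is that iterating too long on noisy data destabilizes the reconstruction, hence the discrepancy-type stopping rule.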
Multiscale Modeling & Simulation | 2011
Klaus Frick; Dirk A. Lorenz; Elena Resmerita
The augmented Lagrangian method is an algorithm for computing saddle points of linearly constrained convex minimization problems. Recently it has received much attention, also under the name Bregman iteration, as an approach for regularizing inverse problems by applying the iteration to noisy data and stopping appropriately. Convergence and convergence rates have been shown for a priori stopping rules. This work shows convergence and convergence rates for this method when a special a posteriori rule, namely Morozov's discrepancy principle, is chosen as a stopping criterion. In particular, we treat the case in which this rule degenerates in the sense that the stopping indices do not tend to infinity as the noise level vanishes. Moreover, error estimates for the involved sequence of subgradients are pointed out. As potential fields of application, we study implications of these results for particular examples in imaging: total variation regularization as well as ℓq penalties with q ∈ [1, 2]. It is shown t...
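For the quadratic penalty J(u) = ||u||²/2, the Bregman/augmented Lagrangian iteration reduces to the iterated Tikhonov (proximal point) method, which makes the a posteriori stopping rule easy to sketch (function name and defaults are illustrative assumptions, not from the paper):

```python
import numpy as np

def bregman_tikhonov(A, y_delta, delta, alpha=1.0, tau=1.1, max_iter=100):
    """Bregman iteration for J(u) = ||u||^2/2 (= iterated Tikhonov),
    stopped by Morozov's discrepancy principle ||A u_k - y^delta|| <= tau * delta.

    Returns the final iterate and the stopping index.
    """
    m, n = A.shape
    u = np.zeros(n)
    M = A.T @ A + alpha * np.eye(n)
    for k in range(max_iter):
        if np.linalg.norm(A @ u - y_delta) <= tau * delta:
            return u, k
        # Proximal step: u_{k+1} = argmin ||Au - y||^2/2 + alpha/2 ||u - u_k||^2
        u = np.linalg.solve(M, A.T @ y_delta + alpha * u)
    return u, max_iter
```

The degenerate case treated in the paper corresponds to the stopping index above remaining bounded as delta → 0.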
Inverse Problems | 2009
Markus Haltmeier; A Leitão; Elena Resmerita
We consider regularization methods of Kaczmarz type in connection with the expectation-maximization (EM) algorithm for solving ill-posed equations. For noisy data, our methods are stabilized extensions of the well-established ordered-subsets expectation-maximization iteration (OS-EM). We show monotonicity properties of the methods and present a numerical experiment which indicates that the extended OS-EM methods we propose are much faster than the standard EM algorithm in a relevant application.
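A toy sketch of the OS-EM sweep structure, cycling EM updates over row blocks of A (dense NumPy; names illustrative, and it is assumed A ≥ 0 with positive column sums on each subset):

```python
import numpy as np

def os_em(A, y, subsets, n_sweeps=20, x0=None):
    """Ordered-subsets EM: one multiplicative EM update per row block,
    cycled Kaczmarz-style over the blocks in each sweep.

    `subsets` is a list of row-index arrays partitioning the rows of A.
    """
    x = np.ones(A.shape[1]) if x0 is None else x0.astype(float).copy()
    for _ in range(n_sweeps):
        for idx in subsets:
            As, ys = A[idx], y[idx]
            Ax = As @ x
            ratio = np.where(Ax > 0, ys / Ax, 0.0)
            x = x * (As.T @ ratio) / As.sum(axis=0)
    return x
```

With a single subset containing all rows, this reduces to the plain EM iteration; the speedup reported in the paper comes from using many small subsets per sweep.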
Inverse Problems | 2010
Christiane Pöschl; Elena Resmerita; Otmar Scherzer
Consider a nonlinear ill-posed operator equation F(u) = y, where F is defined on a Banach space X. In this paper we analyze finite-dimensional variational regularization, which takes into account operator approximations and noisy data. As shown in the literature, depending on the setting, convergence of the regularized solutions of the finite-dimensional problems can be with respect to the strong or just a weak topology. In this paper our contribution is twofold. First, we derive convergence rates in terms of Bregman distances in the convex regularization setting under an appropriate sourcewise representation of a solution of the equation. Second, for particular regularization realizations in nonseparable Banach spaces, we discuss the finite-dimensional approximations of the spaces and the type of convergence needed for the convergence analysis. These considerations lay the foundation for efficient numerical implementation. In particular, we emphasize the space X of functions of finite total variation and analyze in detail the cases when X is the space of functions of bounded deformation and the L∞-space. The latter two settings are of interest in numerous problems arising in optimal control, machine learning and engineering.
Optimization | 2002
Dan Butnariu; Elena Resmerita
We consider the problem of finding minima of convex functions under convex inequality constraints, as well as the problem of finding Nash equilibria in n-person constant-sum games. We prove that both problems can be solved by algorithms whose basic principle consists of representing the original problems as infinite systems of convex inequalities which, in turn, can be approached by outer projection techniques. Experiments showing how one of these algorithms behaves in test cases are presented and, in this context, we describe a numerical method for computing subgradients of convex functions.
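The projection step underlying such outer projection techniques can be sketched for halfspace constraints ⟨a_i, x⟩ ≤ b_i (a minimal cyclic-projection sketch; the function name and sweep count are illustrative, not from the paper):

```python
import numpy as np

def cyclic_projections(a_rows, b, x0, n_sweeps=100):
    """Cyclically project onto the halfspaces {x : <a_i, x> <= b_i}.

    A violated constraint is corrected by the orthogonal projection
    x <- x - (<a_i, x> - b_i) / ||a_i||^2 * a_i; satisfied ones are skipped.
    """
    x = x0.astype(float).copy()
    for _ in range(n_sweeps):
        for a_i, b_i in zip(a_rows, b):
            viol = a_i @ x - b_i
            if viol > 0:
                x = x - (viol / (a_i @ a_i)) * a_i
    return x
```

For a consistent system, the sweeps drive the iterate into the intersection of the halfspaces, which is the mechanism the paper's infinite-system reformulation exploits.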
Inverse Problems | 2016
Uno Hämarik; Barbara Kaltenbacher; Urve Kangro; Elena Resmerita
We consider ill-posed linear operator equations with operators acting between Banach spaces. For solution approximation, the methods of choice here are projection methods onto finite-dimensional subspaces, thus extending existing results from Hilbert space settings. More precisely, general projection methods, the least squares method and the least error method are analyzed. In order to choose the dimension of the subspace appropriately, we consider a priori choices as well as a posteriori choices by the discrepancy principle and by the monotone error rule. Analytical considerations and numerical tests are provided for a collocation method applied to a Volterra integral equation in one space dimension.
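One way to realize an a posteriori choice of the subspace dimension by the discrepancy principle, in a finite toy setting, is with truncated-SVD least squares (a stand-in for the collocation setting of the paper; names and defaults are illustrative):

```python
import numpy as np

def tsvd_discrepancy(A, y_delta, delta, tau=1.1):
    """Least-squares projection onto the span of the first n right singular
    vectors, with n chosen as the smallest dimension satisfying the
    discrepancy principle ||A x_n - y^delta|| <= tau * delta.
    """
    U, s, Vt = np.linalg.svd(A, full_matrices=False)
    coeffs = U.T @ y_delta
    for n in range(1, len(s) + 1):
        x = Vt[:n].T @ (coeffs[:n] / s[:n])
        if np.linalg.norm(A @ x - y_delta) <= tau * delta:
            return x, n
    return x, len(s)
```

Larger noise levels select smaller subspaces (more regularization), smaller noise levels larger ones, mirroring the dimension-choice rules analyzed in the paper.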