Felix Abramovich
Tel Aviv University
Publications
Featured research published by Felix Abramovich.
Journal of the Royal Statistical Society, Series B (Statistical Methodology) | 1998
Felix Abramovich; Theofanis Sapatinas; Bernard W. Silverman
We discuss a Bayesian formalism which gives rise to a type of wavelet threshold estimation in nonparametric regression. A prior distribution is imposed on the wavelet coefficients of the unknown response function, designed to capture the sparseness of wavelet expansions that is common to most applications. For the prior specified, the posterior median yields a thresholding procedure. Our prior model for the underlying function can be adjusted to give functions falling in any specific Besov space. We establish a relationship between the hyperparameters of the prior model and the parameters of those Besov spaces within which realizations from the prior will fall. Such a relationship gives insight into the meaning of the Besov space parameters. Moreover, the relationship established makes it possible in principle to incorporate prior knowledge about the function's regularity properties into the prior model for its wavelet coefficients. However, prior knowledge about a function's regularity properties might be difficult to elicit; with this in mind, we propose a standard choice of prior hyperparameters that works well in our examples. Several simulated examples are used to illustrate our method, and comparisons are made with other thresholding methods. We also present an application to a data set that was collected in an anaesthesiological study.
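For readers who want to see the posterior-median rule in action, here is a minimal numerical sketch for a single wavelet coefficient, assuming a spike-and-slab prior γN(0, τ²) + (1 − γ)δ₀ with hand-picked hyperparameters; in the paper, γ and τ are level-dependent and calibrated to Besov-space parameters, a step omitted here.

```python
import numpy as np
from scipy import stats

def posterior_median(d, sigma=1.0, tau=2.0, gamma=0.5):
    """Posterior median of theta given d = theta + N(0, sigma^2) noise,
    under the mixture prior gamma * N(0, tau^2) + (1 - gamma) * delta_0.
    Hyperparameters here are illustrative, not the paper's calibration."""
    s2 = sigma**2 + tau**2
    m1 = stats.norm.pdf(d, scale=np.sqrt(s2))          # marginal density under the slab
    m0 = stats.norm.pdf(d, scale=sigma)                # marginal density under the spike
    w = gamma * m1 / (gamma * m1 + (1 - gamma) * m0)   # P(theta != 0 | d)
    mu = d * tau**2 / s2                               # slab posterior mean
    sd = np.sqrt(sigma**2 * tau**2 / s2)               # slab posterior std
    F0 = w * stats.norm.cdf(-mu / sd)                  # posterior CDF just below 0
    if F0 <= 0.5 <= F0 + (1 - w):
        return 0.0                                     # median lands on the atom: thresholded
    if F0 > 0.5:                                       # median in the negative continuous part
        return mu + sd * stats.norm.ppf(0.5 / w)
    return mu + sd * stats.norm.ppf((0.5 - (1 - w)) / w)

# Small coefficients are set exactly to zero, large ones are shrunk toward zero:
print([round(posterior_median(d), 3) for d in (0.5, 1.5, 4.0)])
```

The thresholding behaviour falls out of the geometry of the posterior: the point mass at zero captures the median whenever the observed coefficient is small.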
Annals of Statistics | 2006
Felix Abramovich; Yoav Benjamini; David L. Donoho; Iain M. Johnstone
We attempt to recover an n-dimensional vector observed in white noise, where n is large and the vector is known to be sparse, but the degree of sparsity is unknown. We consider three different ways of defining sparsity of a vector: using the fraction of nonzero terms; imposing power-law decay bounds on the ordered entries; and controlling the ℓ_p norm for p small. We obtain a procedure which is asymptotically minimax for ℓ_r loss, simultaneously throughout a range of such sparsity classes. The optimal procedure is a data-adaptive thresholding scheme, driven by control of the False Discovery Rate (FDR). FDR control is a relatively recent innovation in simultaneous testing, ensuring that at most a certain fraction of the rejected null hypotheses will correspond to false rejections. In our treatment, the FDR control parameter q_n also plays a determining role in asymptotic minimaxity. If q = lim q_n ∈ [0, 1/2] and also q_n > γ/log(n), we get sharp asymptotic minimaxity, simultaneously, over a wide range of sparse parameter spaces and loss functions. On the other hand, q = lim q_n ∈ (1/2, 1] forces the risk to exceed the minimax risk by a factor growing with q. To our knowledge, this relation between ideas in simultaneous inference and asymptotic decision theory is new. Our work provides a new perspective on a class of model selection rules which has been introduced recently by several authors. These new rules impose complexity penalization of the form 2 log(potential model size / actual model size). We exhibit a close connection with FDR-controlling procedures under stringent control of the false discovery rate.
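A compact sketch of the FDR-driven threshold choice follows, assuming a known noise level σ and observations y_i = μ_i + ε_i; the function name and the level q are illustrative.

```python
import numpy as np
from scipy.stats import norm

def fdr_threshold(y, sigma=1.0, q=0.1):
    """Hard thresholding with the threshold chosen by the FDR step-up rule:
    compare the ordered |y|'s with Gaussian quantiles sigma * z(q*k/(2n))
    and keep everything above the last crossing."""
    n = y.size
    t = np.sort(np.abs(y))[::-1]                          # |y|_(1) >= ... >= |y|_(n)
    z = sigma * norm.ppf(1 - q * np.arange(1, n + 1) / (2 * n))
    crossed = np.nonzero(t >= z)[0]
    if crossed.size == 0:
        return np.zeros_like(y)                           # estimate everything as zero
    lam = z[crossed.max()]                                # data-adaptive threshold
    return np.where(np.abs(y) >= lam, y, 0.0)

# Sparse vector in white noise: a few strong means among mostly-null coordinates
rng = np.random.default_rng(1)
mu = np.zeros(1000); mu[:10] = 5.0
print(np.count_nonzero(fdr_threshold(mu + rng.standard_normal(1000))))
```

The threshold adapts to the data: the sparser the vector, the deeper into the Gaussian tail the crossing occurs, and the higher the resulting threshold.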
The Statistician | 2000
Felix Abramovich; Trevor C. Bailey; Theofanis Sapatinas
Summary. In recent years there has been a considerable development in the use of wavelet methods in statistics. As a result, we are now at the stage where it is reasonable to consider such methods to be another standard tool of the applied statistician rather than a research novelty. With that in mind, this paper gives a relatively accessible introduction to standard wavelet analysis and provides a review of some common uses of wavelet methods in statistical applications. It is primarily orientated towards the general statistical audience who may be involved in analysing data where the use of wavelets might be effective, rather than to researchers who are already familiar with the field. Given that objective, we do not emphasize mathematical generality or rigour in our exposition of wavelets and we restrict our discussion to the more frequently employed wavelet methods in statistics. We provide extensive references where the ideas and concepts discussed can be followed up in greater detail and generality if required. The paper first establishes some necessary basic mathematical background and terminology relating to wavelets. It then reviews the better-established applications of wavelets in statistics, including their use in nonparametric regression, density estimation, inverse problems, changepoint problems and in some specialized aspects of time series analysis. Possible extensions to the uses of wavelets in statistics are then considered. The paper concludes with a brief reference to readily available software packages for wavelet analysis.
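As a concrete instance of the standard toolkit reviewed here, the snippet below sketches wavelet denoising of a noisy signal in Python, assuming the PyWavelets (pywt) package; the test function, noise level, and universal soft threshold are illustrative choices, not prescriptions from the paper.

```python
import numpy as np
import pywt  # assumes the PyWavelets package is installed

rng = np.random.default_rng(0)
x = np.linspace(0, 1, 1024)
signal = np.sin(4 * np.pi * x) * (x > 0.3)           # a toy test function with a jump
y = signal + 0.2 * rng.standard_normal(x.size)       # noisy observations

coeffs = pywt.wavedec(y, 'db4', level=5)             # discrete wavelet transform
sigma = np.median(np.abs(coeffs[-1])) / 0.6745       # MAD noise estimate from finest level
lam = sigma * np.sqrt(2 * np.log(y.size))            # universal threshold
den = [coeffs[0]] + [pywt.threshold(c, lam, mode='soft') for c in coeffs[1:]]
fhat = pywt.waverec(den, 'db4')                      # reconstructed estimate
```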
Computational Statistics & Data Analysis | 1996
Felix Abramovich; Yoav Benjamini
Wavelet techniques have become an attractive and efficient tool in function estimation. Given noisy data, its discrete wavelet transform is an estimator of the wavelet coefficients. It has been shown by Donoho and Johnstone (Biometrika 81 (1994) 425–455) that thresholding the estimated coefficients and then reconstructing an estimated function reduces the expected risk close to the possible minimum. They proposed the global threshold λ = σ√(2 log n) for resolution levels j > j₀, while the coefficients of the first j₀ coarse levels are always included. We demonstrate that the choice of j₀ may strongly affect the corresponding estimators. Then, we use the connection between thresholding and hypothesis testing to construct a thresholding procedure based on the false discovery rate (FDR) approach to multiple testing of Benjamini and Hochberg (J. Roy. Statist. Soc. Ser. B 57 (1995) 289–300). The suggested procedure controls the expected proportion of incorrectly included coefficients among those chosen for the wavelet reconstruction. The resulting procedure is inherently adaptive, and responds to the complexity of the estimated function and to the noise level. Finally, comparing the proposed FDR-based procedure with the fixed global threshold by evaluating the relative mean squared error across the various test functions and noise levels, we find that the FDR estimator enjoys robust MSE efficiency.
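The testing view of thresholding translates directly into code. Below is a minimal sketch, assuming a known noise level σ and treating each empirical detail coefficient as a test of the null "coefficient = 0"; in the paper, the rule is applied to detail levels j > j₀ while the coarse levels are always retained.

```python
import numpy as np
from scipy.stats import norm

def bh_wavelet_threshold(d, sigma, q=0.05):
    """Keep empirical wavelet coefficients whose two-sided p-values survive
    the Benjamini-Hochberg step-up rule at FDR level q; zero out the rest."""
    d = np.asarray(d, dtype=float)
    p = 2 * norm.sf(np.abs(d) / sigma)               # p-value for H0: coefficient = 0
    m = p.size
    order = np.argsort(p)
    passed = np.nonzero(p[order] <= q * np.arange(1, m + 1) / m)[0]
    keep = np.zeros(m, dtype=bool)
    if passed.size:
        keep[order[:passed.max() + 1]] = True        # step-up: keep all up to last crossing
    return np.where(keep, d, 0.0)
```

Controlling the FDR here means controlling the expected fraction of retained coefficients that are actually pure noise, which is exactly the adaptivity property described above.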
Computational Statistics & Data Analysis | 2002
Felix Abramovich; Panagiotis Besbeas; Theofanis Sapatinas
Wavelet methods have demonstrated considerable success in function estimation through term-by-term thresholding of the empirical wavelet coefficients. However, it has been shown that grouping the empirical wavelet coefficients into blocks and making simultaneous threshold decisions about all the coefficients in each block has a number of advantages over term-by-term wavelet thresholding, including asymptotic optimality and better mean squared error performance in finite sample situations. We consider an empirical Bayes approach that incorporates information on neighbouring empirical wavelet coefficients into function estimation, resulting in block wavelet shrinkage and block wavelet thresholding estimators. Simulated examples are used to illustrate the performance of the resulting estimators, and to compare them with several existing non-Bayesian block wavelet thresholding estimators. It is observed that the proposed empirical Bayes block wavelet shrinkage and block wavelet thresholding estimators outperform the non-Bayesian block wavelet thresholding estimators in finite sample situations. An application to a data set that was collected in an anaesthesiological study is also presented.
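The paper's empirical Bayes block rules are too involved to reproduce here, but the following sketch of a non-Bayesian James-Stein-type block shrinkage rule (in the spirit of the BlockJS-style estimators used for comparison) shows the basic mechanics; the block length and threshold constant are illustrative.

```python
import numpy as np

def block_shrink(d, sigma, L=8, lam=4.50524):
    """Shrink wavelet coefficients block-by-block: within each block of length L,
    apply the James-Stein-type factor max(0, 1 - lam * L * sigma^2 / S2),
    where S2 is the block's sum of squares. The constant lam is illustrative."""
    out = np.asarray(d, dtype=float).copy()
    for s in range(0, out.size, L):
        blk = out[s:s + L]
        S2 = float(blk @ blk)
        c = max(0.0, 1.0 - lam * blk.size * sigma**2 / S2) if S2 > 0 else 0.0
        out[s:s + L] = c * blk                       # whole block kept or killed together
    return out
```

Because the decision is made on the block's total energy, a weak coefficient can be rescued by strong neighbours, which is the source of the finite-sample gains noted above.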
Journal of Statistical Planning and Inference | 1996
Felix Abramovich; David M. Steinberg
Smoothing splines are one of the most popular approaches to nonparametric regression. Wahba (J. Roy. Statist. Soc. Ser. B 40 (1978) 364–372; 45 (1983) 133–150) showed that smoothing splines are also Bayes estimates and used the corresponding prior model to derive interval estimates for the regression function. Although the interval estimates work well on a global basis, they can have poor local properties. The source of this problem is the use of a global smoothing parameter. We introduce the notion of L_k-smoothing splines. These splines allow for a variable smoothing parameter and can substantially improve local inference.
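A discrete analogue conveys what a variable smoothing parameter buys: replace the single λ with a pointwise weight on the roughness penalty. This is only a sketch of the idea in the Whittaker-graduation style, not the paper's L_k-spline construction.

```python
import numpy as np

def variable_penalty_smoother(y, lam):
    """Minimize ||y - f||^2 + sum_i lam_i * (f_{i+2} - 2*f_{i+1} + f_i)^2,
    a discrete analogue of a smoothing spline whose smoothing parameter
    varies along the design; lam is a vector of length n - 2."""
    y = np.asarray(y, dtype=float)
    n = y.size
    D = np.diff(np.eye(n), n=2, axis=0)          # (n-2) x n second-difference matrix
    A = np.eye(n) + D.T @ (np.asarray(lam)[:, None] * D)
    return np.linalg.solve(A, y)

# Smooth heavily on the left half, lightly on the right:
y = np.sin(np.linspace(0, 6, 100)) + 0.3 * np.random.default_rng(2).standard_normal(100)
f = variable_penalty_smoother(y, np.r_[np.full(49, 100.0), np.full(49, 1.0)])
```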
IEEE Transactions on Communications | 1997
Felix Abramovich; Polina Bayvel
We consider the signal detection problem in amplified optical transmission systems as a statistical hypothesis testing procedure, and we show that the detected signal has a well-known chi-squared distribution. In particular, this approach considerably simplifies the derivation of the bit-error rate (BER). Finally, we discuss the accuracy of Gaussian approximations to the exact distributions of the signal.
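The chi-squared description makes the BER computation a few lines of SciPy. The sketch below uses an illustrative parameterization (2M degrees of freedom for M noise modes, noncentrality proportional to the SNR of a "1" bit) rather than the paper's exact quantities, and contrasts the exact tail probabilities with a moment-matched Gaussian approximation.

```python
import numpy as np
from scipy.stats import chi2, ncx2, norm

# Illustrative parameters (not the paper's): M ASE-noise modes per bit,
# noncentrality proportional to the signal energy of a "1" bit.
M, snr = 2, 36.0
df, nc = 2 * M, 2 * snr

t = np.linspace(df, df + nc, 2001)                      # candidate decision thresholds
ber = 0.5 * (chi2.sf(t, df) + ncx2.cdf(t, df, nc))      # exact: equiprobable 0/1 bits
print("exact minimal BER:      ", ber.min())

# Gaussian approximation with matched means and variances
m0, v0 = df, 2.0 * df                                   # "0" bit moments
m1, v1 = df + nc, 2.0 * (df + 2.0 * nc)                 # "1" bit moments
ber_g = 0.5 * (norm.sf(t, m0, np.sqrt(v0)) + norm.cdf(t, m1, np.sqrt(v1)))
print("Gaussian-approx min BER:", ber_g.min())
```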
Bernoulli | 1999
Felix Abramovich; Vadim Grinshtein
We first consider spline smoothing nonparametric estimation with a variable smoothing parameter and an arbitrary design density, and show that the corresponding equivalent kernel can be approximated by the Green's function of a certain linear differential operator. Furthermore, we propose to use the standard (in applied mathematics and engineering) method for the asymptotic solution of linear differential equations, known as the Wentzel-Kramers-Brillouin (WKB) method, for systematic derivation of an asymptotically equivalent kernel in this general case. The corresponding results for polynomial splines are a special case of the general solution. Then, we show how these ideas can be directly extended to the very general L-spline smoothing.
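For orientation, the polynomial-spline special case recovered by this analysis is classical: for cubic smoothing splines with design density f, Silverman (1984) showed that the equivalent kernel and local bandwidth take the form below (up to the normalization of the smoothing parameter λ); the paper's WKB analysis generalizes this to variable λ and general L-splines.

```latex
% Cubic smoothing spline: equivalent kernel and local bandwidth (Silverman, 1984).
\[
  \kappa(u) = \tfrac{1}{2}\, e^{-|u|/\sqrt{2}}
              \sin\!\Big(\tfrac{|u|}{\sqrt{2}} + \tfrac{\pi}{4}\Big),
  \qquad
  h(t) = \Big(\tfrac{\lambda}{f(t)}\Big)^{1/4},
\]
% i.e. the spline estimator behaves asymptotically like a kernel smoother with
% kernel kappa and spatially varying bandwidth h(t).
```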
IEEE Transactions on Information Theory | 2016
Felix Abramovich; Vadim Grinshtein
We consider model selection in generalized linear models (GLM) for high-dimensional data and propose a wide class of model selection criteria based on penalized maximum likelihood with a complexity penalty on the model size. We derive a general nonasymptotic upper bound for the Kullback-Leibler risk of the resulting estimators and establish the corresponding minimax lower bounds for the sparse GLM. For a properly chosen (nonlinear) penalty, the resulting penalized maximum likelihood estimator is shown to be asymptotically minimax and adaptive to the unknown sparsity. We also discuss possible extensions of the proposed approach to model selection in the GLM under additional structural constraints and aggregation.
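For intuition, here is a brute-force sketch of the criterion for the logistic special case, using a hypothetical penalty c·k(1 + log(p/k)); the paper's penalties and constants differ in detail, and exhaustive search is feasible only for small p.

```python
import itertools
import numpy as np

def logistic_nll(X, y, iters=30):
    """Negative log-likelihood at the MLE of a logistic model (Newton iterations)."""
    b = np.zeros(X.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-X @ b))
        W = p * (1 - p)                                     # IRLS weights
        H = (X * W[:, None]).T @ X + 1e-8 * np.eye(X.shape[1])
        b += np.linalg.solve(H, X.T @ (y - p))
    p = np.clip(1.0 / (1.0 + np.exp(-X @ b)), 1e-12, 1 - 1e-12)
    return -np.sum(y * np.log(p) + (1 - y) * np.log(1 - p))

def select_glm(X, y, c=2.0, kmax=3):
    """Minimize NLL(model) + c * k * (1 + log(p / k)) over supports of size k <= kmax.
    The penalty form and constant c are hypothetical illustrations."""
    n, p = X.shape
    best_score, best_support = np.inf, ()
    for k in range(1, kmax + 1):
        pen = c * k * (1 + np.log(p / k))        # nonlinear k*log(p/k)-type penalty
        for S in itertools.combinations(range(p), k):
            score = logistic_nll(X[:, list(S)], y) + pen
            if score < best_score:
                best_score, best_support = score, S
    return best_support
```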
Archive | 2011
Felix Abramovich; Vadim Grinshtein
We consider model selection in Gaussian regression, where the number of predictors may be even larger than the number of observations. The proposed procedure is based on a penalized least squares criterion with a complexity penalty on the model size. We discuss asymptotic properties of the resulting estimators corresponding to linear and so-called 2k ln(p/k)-type nonlinear penalties for nearly-orthogonal and multicollinear designs. We show that no linear penalty can be simultaneously adaptive to both sparse and dense setups, while 2k ln(p/k)-type penalties achieve this wide adaptivity range. We also present a Bayesian perspective on the procedure that provides additional insight and can be used as a tool for obtaining a wide class of penalized estimators associated with various complexity penalties.
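A minimal sketch of the penalized least squares criterion with a 2k ln(p/k)-type penalty follows, assuming a known noise variance σ²; the exhaustive search over supports is for illustration only and is feasible only for small p.

```python
import itertools
import numpy as np

def rss(X, y, S):
    """Residual sum of squares of the least squares fit on support S."""
    beta, *_ = np.linalg.lstsq(X[:, S], y, rcond=None)
    r = y - X[:, S] @ beta
    return float(r @ r)

def select_support(X, y, sigma2, kmax=4):
    """Minimize RSS(M) + 2 * sigma2 * k * ln(p / k) over supports with k <= kmax;
    a linear penalty would replace the ln(p/k) factor by a constant, which is
    exactly what prevents it from adapting to both sparse and dense setups."""
    n, p = X.shape
    best = (np.inf, ())
    for k in range(1, kmax + 1):
        pen = 2.0 * sigma2 * k * np.log(p / k)
        for S in itertools.combinations(range(p), k):
            best = min(best, (rss(X, y, list(S)) + pen, S))
    return best[1]
```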