Featured Research

Statistics Theory

Online Statistical Inference for Gradient-free Stochastic Optimization

As gradient-free stochastic optimization has recently attracted attention for a wide range of applications, the demand for uncertainty quantification of parameters obtained from such approaches has grown. In this paper, we investigate the problem of statistical inference for model parameters based on gradient-free stochastic optimization methods that use only function values rather than gradients. We first present central limit theorem results for Polyak-Ruppert-averaging-type gradient-free estimators. The asymptotic distribution reflects the trade-off between the rate of convergence and function query complexity. We next construct valid confidence intervals for model parameters by estimating the covariance matrix in a fully online fashion. We further give a general gradient-free framework for covariance estimation and analyze the role of function query complexity in the convergence rate of the covariance estimator. This provides a one-pass, computationally efficient procedure for simultaneously obtaining an estimator of model parameters and conducting statistical inference. Finally, we provide numerical experiments that verify our theoretical results and illustrate some extensions of our method to various machine learning and deep learning applications.
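As a rough illustration of the flavor of method analyzed here, the following is a minimal sketch of a two-point (function-value-only) stochastic optimizer with Polyak-Ruppert averaging; the quadratic objective, step-size schedule, and finite-difference spacing are illustrative assumptions, not the paper's exact procedure.

```python
import numpy as np

rng = np.random.default_rng(0)
d, T = 5, 20000
theta_star = rng.normal(size=d)

def f(theta):
    # Noisy function evaluations only -- no gradient access.
    # Illustrative quadratic loss with additive observation noise.
    return 0.5 * np.sum((theta - theta_star) ** 2) + 0.1 * rng.normal()

theta = np.zeros(d)
theta_bar = np.zeros(d)            # Polyak-Ruppert running average
for t in range(1, T + 1):
    eta = 0.5 * t ** -0.75         # step size (assumed schedule)
    h = t ** -0.25                 # finite-difference spacing, shrinking in t
    u = rng.normal(size=d)
    u /= np.linalg.norm(u)         # random direction on the unit sphere
    # Two-point, function-value-only directional gradient estimate.
    g = d * (f(theta + h * u) - f(theta - h * u)) / (2 * h) * u
    theta -= eta * g
    theta_bar += (theta - theta_bar) / t   # online average of iterates

print("error of averaged iterate:", np.linalg.norm(theta_bar - theta_star))
```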

Read more
Statistics Theory

Online nonparametric regression with Sobolev kernels

In this work we investigate a variant of the online kernelized ridge regression algorithm in the setting of d-dimensional adversarial nonparametric regression. We derive regret upper bounds on the Sobolev spaces W^β_p(X), p ≥ 2, β > d/p. The upper bounds are supported by a minimax regret analysis, which reveals that in the cases β > d/2 or p = ∞ these rates are (essentially) optimal. Finally, we compare the performance of the kernelized ridge regression forecaster to known nonparametric forecasters in terms of regret rates and computational complexity, as well as to the excess risk rates in the setting of statistical (i.i.d.) nonparametric regression.
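For intuition, here is a minimal sketch of an online kernelized ridge regression forecaster on a one-dimensional stream, naively refit at each round; the kernel choice, regularization constant, and data model are assumptions for illustration only.

```python
import numpy as np

rng = np.random.default_rng(1)

def sobolev_kernel(x, y):
    # A kernel whose RKHS is norm-equivalent to the Sobolev space W^1_2 on R;
    # this specific choice is an illustrative assumption.
    return np.exp(-np.abs(x[:, None] - y[None, :]))

lam = 1.0                          # ridge regularization (assumed constant)
X, Y = [], []
loss = 0.0
for t in range(200):
    x_t = rng.uniform(-1, 1)
    if X:                          # predict with KRR fit to all past data
        Xa, Ya = np.array(X), np.array(Y)
        K = sobolev_kernel(Xa, Xa)
        alpha = np.linalg.solve(K + lam * np.eye(len(Xa)), Ya)
        y_hat = float(sobolev_kernel(np.array([x_t]), Xa) @ alpha)
    else:
        y_hat = 0.0
    y_t = np.sin(np.pi * x_t) + 0.1 * rng.normal()  # stands in for the adversary
    loss += (y_hat - y_t) ** 2     # regret compares this to the best in a ball
    X.append(x_t)
    Y.append(y_t)

print("cumulative squared loss:", round(loss, 2))
```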

Read more
Statistics Theory

Open-end nonparametric sequential change-point detection based on the retrospective CUSUM statistic

The aim of online monitoring is to issue an alarm as soon as there is significant evidence in the collected observations to suggest that the underlying data-generating mechanism has changed. This work is concerned with open-end, nonparametric procedures that can be interpreted as statistical tests. The proposed monitoring schemes consist of computing the so-called retrospective CUSUM statistic (or minor variations thereof) after the arrival of each new observation. After proposing suitable threshold functions for the chosen detectors, the asymptotic validity of the procedures is investigated in the special case of monitoring for changes in the mean, both under the null hypothesis of stationarity and under relevant alternatives. To carry out the sequential tests in practice, an approach based on an asymptotic regression model is used to estimate high quantiles of the relevant limiting distributions. Monte Carlo experiments demonstrate the good finite-sample behavior of the proposed monitoring schemes and suggest that they are superior to existing competitors as long as changes do not occur at the very beginning of the monitoring. Extensions to statistics exhibiting an asymptotic mean-like behavior are briefly discussed. Finally, the application of the derived sequential change-point detection tests is succinctly illustrated on temperature anomaly data.
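A minimal sketch of the monitoring idea for a change in the mean of a univariate stream follows; the constant threshold below is a placeholder, whereas the paper derives threshold functions and establishes their asymptotic validity.

```python
import numpy as np

rng = np.random.default_rng(2)
m = 100                                   # length of the learning sample
stream = np.concatenate([rng.normal(0, 1, 300),     # in control
                         rng.normal(0.8, 1, 300)])  # mean shift (illustrative)

x = list(stream[:m])
sigma = np.std(x, ddof=1)                 # scale estimated on the learning sample
for n in range(m + 1, len(stream) + 1):
    x.append(stream[n - 1])
    s = np.cumsum(x)                      # partial sums S_1, ..., S_n
    k = np.arange(m, n)                   # candidate change points after training
    # Retrospective CUSUM: largest standardized bridge-type deviation.
    d_n = np.max(np.abs(s[k - 1] - (k / n) * s[-1])) / (sigma * np.sqrt(n))
    threshold = 3.0                       # placeholder; the paper instead derives
                                          # a threshold *function* of n
    if d_n > threshold:
        print(f"alarm at observation {n}, detector = {d_n:.2f}")
        break
```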

Read more
Statistics Theory

Optimal Bayesian estimation of Gaussian mixtures with growing number of components

We study posterior concentration properties of Bayesian procedures for estimating finite Gaussian mixtures in which the number of components is unknown and allowed to grow with the sample size. Under this general setup, we derive a series of new theoretical results. More specifically, we first show that under mild conditions on the prior, the posterior distribution concentrates around the true mixing distribution at a near optimal rate with respect to the Wasserstein distance. Under a separation condition on the true mixing distribution, we further show that a better and adaptive convergence rate can be achieved, and the number of components can be consistently estimated. Furthermore, we derive optimal convergence rates for the higher-order mixture models where the number of components diverges arbitrarily fast. In addition, we consider the fractional posterior and investigate its posterior contraction rates, which are also shown to be minimax optimal in estimating the mixing distribution under mild conditions. We also investigate Bayesian estimation of general mixtures with strong identifiability conditions, and derive the optimal convergence rates when the number of components is fixed. Lastly, we study theoretical properties of the posterior of the popular Dirichlet process (DP) mixture prior, and show that such a model can provide a reasonable estimate for the number of components while only guaranteeing a slow convergence rate of the mixing distribution estimation.
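The Dirichlet process mixture discussed in the last sentence can be tried in practice with scikit-learn's truncated variational implementation; the data, truncation level, and weight cutoff below are illustrative assumptions.

```python
import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(3)
# Three-component Gaussian mixture (illustrative ground truth).
X = np.concatenate([rng.normal(-4, 1, 300),
                    rng.normal(0, 1, 300),
                    rng.normal(5, 1, 300)]).reshape(-1, 1)

# Truncated Dirichlet-process mixture; n_components is only an upper bound.
dpgmm = BayesianGaussianMixture(
    n_components=10,
    weight_concentration_prior_type="dirichlet_process",
    max_iter=500, random_state=0,
).fit(X)

# Components with non-negligible posterior weight suggest the model order.
print("posterior weights:", np.round(dpgmm.weights_, 3))
print("effective components:", np.sum(dpgmm.weights_ > 0.05))
```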

Read more
Statistics Theory

Optimal Clustering in Anisotropic Gaussian Mixture Models

We study the clustering task under anisotropic Gaussian Mixture Models, where the covariance matrices of the different clusters are unknown and not necessarily identical. We characterize the dependence of signal-to-noise ratios on the cluster centers and covariance matrices and obtain the minimax lower bound for the clustering problem. In addition, we propose a computationally feasible procedure and prove that it achieves the optimal rate within a few iterations. The proposed procedure is a hard-EM-type algorithm, and it can also be seen as a variant of Lloyd's algorithm adjusted to anisotropic covariance matrices.
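A minimal sketch of a hard-EM / Lloyd-type iteration with cluster-specific covariances, in the spirit of (but not identical to) the proposed procedure; the data, random initialization, and ridge term are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)
# Two anisotropic Gaussian clusters (illustrative data).
c0 = rng.multivariate_normal([0, 0], [[4.0, 0], [0, 0.3]], 200)
c1 = rng.multivariate_normal([3, 3], [[0.3, 0], [0, 4.0]], 200)
X = np.vstack([c0, c1])
k, (n, d) = 2, X.shape

labels = rng.integers(k, size=n)       # crude initialization (the paper's
                                       # guarantees assume a decent initializer)
for _ in range(20):                    # "a few iterations" suffice in theory
    mus, covs = [], []
    for j in range(k):
        pts = X[labels == j]
        mus.append(pts.mean(axis=0))
        covs.append(np.cov(pts.T) + 1e-6 * np.eye(d))  # ridge for stability
    # Hard EM step: assign each point to the cluster with the highest Gaussian
    # log-density under that cluster's *own* covariance matrix.
    scores = np.empty((n, k))
    for j in range(k):
        diff = X - mus[j]
        maha = np.einsum("ni,ij,nj->n", diff, np.linalg.inv(covs[j]), diff)
        scores[:, j] = -0.5 * (maha + np.log(np.linalg.det(covs[j])))
    labels = scores.argmax(axis=1)

print("cluster sizes:", np.bincount(labels, minlength=k))
```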

Read more
Statistics Theory

Optimal Full Ranking from Pairwise Comparisons

We consider the problem of ranking n players from partial pairwise comparison data under the Bradley-Terry-Luce model. For the first time in the literature, the minimax rate of this ranking problem is derived with respect to Kendall's tau distance, which measures the difference between two rank vectors by counting the number of inversions. The minimax rate of ranking exhibits a transition between an exponential rate and a polynomial rate depending on the magnitude of the signal-to-noise ratio of the problem. To the best of our knowledge, this phenomenon is unique to full ranking and has not been seen in any other statistical estimation problem. To achieve the minimax rate, we propose a divide-and-conquer ranking algorithm that first divides the n players into groups of similar skills and then computes the local MLE within each group. The optimality of the proposed algorithm is established by a careful approximate-independence argument between the two steps.
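A minimal sketch of the two-stage idea on simulated data: a coarse division by empirical win rates followed by a local Bradley-Terry MLE via gradient ascent. The group count, step size, and iteration budget are illustrative assumptions, not the paper's tuned algorithm.

```python
import numpy as np

rng = np.random.default_rng(5)
n, L = 12, 30                        # players and comparisons per pair (assumed)
theta_true = np.sort(rng.normal(0, 2, n))[::-1]

# Simulate pairwise comparisons under the Bradley-Terry-Luce model.
wins = np.zeros((n, n))
for i in range(n):
    for j in range(i + 1, n):
        p = 1 / (1 + np.exp(theta_true[j] - theta_true[i]))
        w = rng.binomial(L, p)
        wins[i, j], wins[j, i] = w, L - w

# Stage 1: coarse division into skill groups by empirical win rate.
rate = wins.sum(axis=1) / (L * (n - 1))
order = np.argsort(-rate)
groups = np.array_split(order, 3)    # number of groups is an assumption

# Stage 2: local Bradley-Terry MLE within each group via gradient ascent.
ranking = []
for g in groups:
    th = np.zeros(len(g))
    W = wins[np.ix_(g, g)]
    N = W + W.T                      # comparisons played within the group
    for _ in range(500):
        P = 1 / (1 + np.exp(th[None, :] - th[:, None]))  # P[i,j] = P(i beats j)
        th += 0.01 * (W - N * P).sum(axis=1)             # log-likelihood gradient
    ranking.extend(int(i) for i in g[np.argsort(-th)])

print("estimated ranking:", ranking)
```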

Read more
Statistics Theory

Optimal Posteriors for Chi-squared Divergence based PAC-Bayesian Bounds and Comparison with KL-divergence based Optimal Posteriors and Cross-Validation Procedure

We investigate optimal posteriors for the recently introduced \cite{begin2016pac} chi-squared divergence based PAC-Bayesian bounds in terms of the nature of their distribution, the scalability of computations, and test set performance. For a finite classifier set, we deduce bounds for three distance functions: KL-divergence, linear distance, and squared distance. The optimal posterior weights are proportional to deviations of empirical risks and usually have subset support. For a uniform prior, it suffices to search among posteriors on classifier subsets ordered by these risks. We show that the bound minimization for linear distance is a convex program and obtain a closed-form expression for its optimal posterior. The minimization for squared distance is a quasi-convex program under a specific condition, and the one for KL-divergence is a non-convex optimization problem (a difference of convex functions). To compute these optimal posteriors, we derive fast-converging fixed point (FP) equations. We apply these approaches to a finite set of SVM regularization parameter values to obtain stochastic SVMs with tight bounds. We perform a comprehensive performance comparison between our optimal posteriors and known KL-divergence based posteriors on a variety of UCI datasets with varying ranges and variances in risk values. Chi-squared divergence based posteriors have weaker bounds and worse test errors, hinting at an underlying regularization by the KL-divergence based posteriors. Our study highlights the impact of the divergence function on the performance of PAC-Bayesian classifiers. We also compare our stochastic classifiers with a cross-validation based deterministic classifier: the latter has better test errors, but ours is more sample-robust, has quantifiable generalization guarantees, and is computationally much faster.
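For context, the KL-divergence based optimal posterior over a finite classifier set that the paper compares against has the classical Gibbs form (the minimizer of E_ρ[risk] + KL(ρ||π)/λ); a minimal sketch with illustrative risks and temperature follows. The chi-squared-optimal posteriors studied in the paper arise from different fixed-point equations not reproduced here.

```python
import numpy as np

# Empirical risks of a finite classifier set (illustrative values, e.g. from
# SVMs trained with different regularization parameters).
risks = np.array([0.12, 0.15, 0.20, 0.31, 0.45])
prior = np.full(len(risks), 1 / len(risks))   # uniform prior
lam = 40.0                                    # temperature; in practice chosen
                                              # by minimizing the PAC-Bayes bound

# Gibbs posterior: the minimizer of  E_rho[risk] + KL(rho || prior) / lam.
logw = np.log(prior) - lam * risks
posterior = np.exp(logw - logw.max())         # stable exponentiation
posterior /= posterior.sum()
print("KL-optimal (Gibbs) posterior:", np.round(posterior, 4))
```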

Read more
Statistics Theory

Optimal Sequential Detection of Signals with Unknown Appearance and Disappearance Points in Time

The paper addresses a sequential changepoint detection problem, assuming that the duration of the change may be finite and unknown. This problem is of importance for many applications, e.g., for signal and image processing, where signals appear and disappear at unknown points in time or space. In contrast to the conventional optimality criterion in quickest change detection, which requires minimization of the expected delay to detection for a given average run length to false alarm, we focus on a reliable maximin change detection criterion: maximizing the minimal probability of detection in a given time (or space) window for a given local maximal probability of false alarm in the prescribed window. We show that the optimal detection procedure is a modified CUSUM procedure. We then use Monte Carlo simulations to compare the operating characteristics of this optimal procedure with those of the Finite Moving Average (FMA) detection algorithm, popular in engineering, and of the ordinary CUSUM procedure; the simulations show that the latter algorithms typically have almost the same performance as the optimal one. At the same time, the FMA procedure has a substantial advantage: it does not depend on the intensity of the signal, which is usually unknown. Finally, the FMA algorithm is applied to detecting faint streaks of satellites in optical images.
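A minimal sketch of the FMA detector for a transient mean shift in Gaussian noise; the window length, threshold, and signal are illustrative assumptions (in practice the threshold is calibrated to the local false-alarm constraint). Note that the statistic uses no knowledge of the signal intensity, which is its advantage over likelihood-ratio-based procedures.

```python
import numpy as np

rng = np.random.default_rng(6)
# Signal of unknown duration appearing in Gaussian noise (illustrative).
x = rng.normal(0, 1, 1000)
x[400:460] += 1.2                      # transient signal in an assumed window

w = 50                                 # window length (assumed)
# FMA statistic: moving average of the last w observations.
fma = np.convolve(x, np.ones(w) / w, mode="valid")
threshold = 3.0 / np.sqrt(w)           # placeholder ~3 sigma of the window mean;
                                       # calibrated in practice to the local
                                       # false-alarm probability constraint
hits = np.flatnonzero(fma > threshold)
if hits.size:
    print("first alarm at observation", hits[0] + w)
```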

Read more
Statistics Theory

Optimal Statistical Hypothesis Testing for Social Choice

We address the following question in this paper: "What are the most robust statistical methods for social choice?" By extending the theory of uniformly least favorable distributions in the Neyman-Pearson framework to finite models and randomized tests, we characterize uniformly most powerful (UMP) tests, a well-accepted notion of statistical optimality with respect to robustness, for testing whether a given alternative is the winner under Mallows' model and under Condorcet's model, respectively.
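UMP tests in finite models typically require randomization to attain an exact level; the sketch below shows the generic randomized Neyman-Pearson construction on a finite sample space, with placeholder distributions standing in for Mallows' or Condorcet's models.

```python
import numpy as np

# Finite sample space with null p0 and alternative p1 (placeholder numbers;
# under Mallows'/Condorcet's models these would be distributions over rankings).
p0 = np.array([0.40, 0.30, 0.20, 0.10])
p1 = np.array([0.10, 0.20, 0.30, 0.40])
alpha = 0.05

# Neyman-Pearson: reject for large likelihood ratios, randomizing on the
# boundary outcome so the test has size exactly alpha.
lr = p1 / p0
order = np.argsort(-lr)               # outcomes sorted by decreasing LR
phi = np.zeros(len(p0))               # phi[x] = rejection probability at x
budget = alpha
for x in order:
    if p0[x] <= budget:
        phi[x], budget = 1.0, budget - p0[x]
    else:
        phi[x] = budget / p0[x]       # randomize on the boundary outcome
        break

print("rejection probabilities:", np.round(phi, 3))
print("size:", float(phi @ p0), "power:", float(phi @ p1))
```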

Read more
Statistics Theory

Optimal convergence rates for the invariant density estimation of jump-diffusion processes

We aim at estimating the invariant density associated with a stochastic differential equation with jumps in low dimension, namely for d=1 and d=2. We consider a class of jump diffusion processes whose invariant density belongs to some Hölder space. First, in dimension one, we show that the kernel density estimator achieves the convergence rate 1/T, which is the optimal rate in the absence of jumps. This improves the convergence rate obtained in [Amorino, Gloter (2021)], which depends on the Blumenthal-Getoor index for d=1 and is equal to (log T)/T for d=2. Second, we show that it is not possible to find an estimator with a faster rate of estimation. Indeed, we obtain lower bounds with the same rates {1/T, (log T)/T} in the one- and two-dimensional cases, respectively. Finally, we obtain the asymptotic normality of the estimator in the one-dimensional case.
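A minimal sketch of the estimator for d=1: a Gaussian kernel density estimate computed from a discretized path of an illustrative jump diffusion (an Ornstein-Uhlenbeck process with compound-Poisson jumps); the bandwidth and simulation scheme are assumptions, not the paper's optimized choices.

```python
import numpy as np

rng = np.random.default_rng(7)
# Euler scheme for a 1-d Ornstein-Uhlenbeck process with compound-Poisson
# jumps -- an illustrative member of the class of jump diffusions considered.
T, dt = 1000.0, 0.01
n = int(T / dt)
x = np.empty(n)
x[0] = 0.0
for i in range(1, n):
    jump = rng.normal(0, 0.5) if rng.random() < 0.5 * dt else 0.0
    x[i] = x[i - 1] - x[i - 1] * dt + np.sqrt(dt) * rng.normal() + jump

def kde(u, data, h):
    # Gaussian kernel density estimator at points u with bandwidth h.
    z = (u[:, None] - data[None, :]) / h
    return np.exp(-0.5 * z ** 2).mean(axis=1) / (h * np.sqrt(2 * np.pi))

grid = np.linspace(-3, 3, 7)
h = 0.2                            # bandwidth (assumed; the paper's rates come
                                   # from an optimized, jump-aware choice)
print(np.round(kde(grid, x[::50], h), 3))  # subsample to keep the sketch fast
```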

Read more
