Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stefan Vlaski is active.

Publication


Featured research published by Stefan Vlaski.


International Conference on Acoustics, Speech, and Signal Processing | 2016

Diffusion stochastic optimization with non-smooth regularizers

Stefan Vlaski; Lieven Vandenberghe; Ali H. Sayed

We develop an effective distributed strategy for seeking the Pareto solution of an aggregate cost consisting of regularized risks. The focus is on stochastic optimization problems where each risk function is expressed as the expectation of some loss function and the probability distribution of the data is unknown. We assume each risk function is regularized and allow the regularizer to be non-smooth. Under conditions that are weaker than assumed earlier in the literature and, hence, applicable to a broader class of adaptation and learning problems, we show how the regularizers can be smoothed and how the Pareto solution can be sought by appealing to a multi-agent diffusion strategy. The formulation is general enough to include, for example, a multi-agent proximal strategy as a special case.
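A rough sketch of the kind of recursion described above, written for illustration only: each agent takes a stochastic-gradient (adaptation) step on a least-squares loss plus a smoothed ℓ1 regularizer and then combines its iterate with those of its neighbors. The data model, ring network, combination weights, and smoothing rule are assumptions, not taken from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N, d = 10, 5                       # number of agents, feature dimension
mu, rho, delta = 0.01, 0.1, 0.01   # step-size, regularization weight, smoothing parameter

# Hypothetical sparse model: each agent streams data y = x @ w_true + noise.
w_true = rng.standard_normal(d) * (rng.random(d) > 0.5)

# Doubly stochastic combination matrix over an assumed ring network.
A = np.zeros((N, N))
for k in range(N):
    A[k, k] = 0.5
    A[k, (k - 1) % N] = 0.25
    A[k, (k + 1) % N] = 0.25

def smoothed_l1_grad(w, delta):
    """Gradient of a Huber-style smoothed l1 penalty."""
    return np.where(np.abs(w) > delta, np.sign(w), w / delta)

w = np.zeros((N, d))                     # one iterate per agent
for _ in range(5000):
    psi = np.empty_like(w)
    for k in range(N):
        x = rng.standard_normal(d)
        y = x @ w_true + 0.1 * rng.standard_normal()
        grad = -(y - x @ w[k]) * x + rho * smoothed_l1_grad(w[k], delta)
        psi[k] = w[k] - mu * grad        # adaptation step
    w = A @ psi                          # combination step with neighbors

print("network-average deviation:", np.linalg.norm(w.mean(axis=0) - w_true))
```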


International Conference on Acoustics, Speech, and Signal Processing | 2015

Proximal diffusion for stochastic costs with non-differentiable regularizers

Stefan Vlaski; Ali H. Sayed

We consider networks of agents cooperating to minimize a global objective, modeled as the aggregate sum of regularized costs that are not required to be differentiable. Since the subgradients of the individual costs cannot generally be assumed to be uniformly bounded, general distributed subgradient techniques are not applicable to these problems. We isolate the requirement of bounded subgradients into the regularizer and use splitting techniques to develop a stochastic proximal diffusion strategy for solving the optimization problem by continuously learning from streaming data. We represent the implementation as the cascade of three operators and invoke Banach's fixed-point theorem to establish that, despite gradient noise, the stochastic implementation is able to converge in the mean-square-error sense within O(μ) from the optimal solution, for a sufficiently small step-size parameter, μ.
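The update below is a minimal sketch of a stochastic proximal-diffusion step arranged as the cascade of three operators mentioned in the abstract: a stochastic-gradient operator, the proximal operator of an assumed ℓ1 regularizer, and a combination operator over the network. The quadratic losses, averaging matrix, and step-size are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(1)

N, d = 8, 4
mu, rho = 0.01, 0.05
w_true = np.array([1.0, 0.0, -0.5, 0.0])   # assumed sparse target

A = np.full((N, N), 1.0 / N)               # assumed fully connected averaging network

def soft_threshold(v, t):
    """Proximal operator of t * ||.||_1 (soft-thresholding)."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

w = np.zeros((N, d))
for _ in range(5000):
    psi = np.empty_like(w)
    for k in range(N):
        x = rng.standard_normal(d)
        y = x @ w_true + 0.1 * rng.standard_normal()
        grad = -(y - x @ w[k]) * x                            # operator 1: stochastic gradient
        psi[k] = soft_threshold(w[k] - mu * grad, mu * rho)   # operator 2: proximal step
    w = A @ psi                                               # operator 3: combination

print("network-average deviation:", np.linalg.norm(w.mean(axis=0) - w_true))
```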


International Workshop on Machine Learning for Signal Processing | 2016

Stochastic gradient descent with finite samples sizes

Kun Yuan; Bicheng Ying; Stefan Vlaski; Ali H. Sayed

The minimization of empirical risks over finite sample sizes is an important problem in large-scale machine learning. A variety of algorithms has been proposed in the literature to alleviate the computational burden per iteration at the expense of convergence speed and accuracy. Many of these approaches can be interpreted as stochastic gradient descent algorithms, where data is sampled from particular empirical distributions. In this work, we leverage this interpretation and draw from recent results in the field of online adaptation to derive new tight performance expressions for empirical implementations of stochastic gradient descent, mini-batch gradient descent, and importance sampling. The expressions are exact to first order in the step-size parameter and are tighter than existing bounds. We further quantify the performance gained from employing mini-batch solutions, and propose an optimal importance sampling algorithm to optimize performance.
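For concreteness, the sketch below runs the three sampling schemes discussed in the abstract, stochastic gradient descent with uniform sampling, mini-batch gradient descent, and importance sampling, on an assumed least-squares empirical risk. The sampling probabilities and reweighting used for importance sampling are standard illustrative choices, not the optimal scheme derived in the paper.

```python
import numpy as np

rng = np.random.default_rng(2)

n, d, mu = 500, 10, 0.005
X = rng.standard_normal((n, d))
w_true = rng.standard_normal(d)
y = X @ w_true + 0.1 * rng.standard_normal(n)

def grad(w, idx):
    """Average least-squares gradient over the sampled indices."""
    Xi, yi = X[idx], y[idx]
    return Xi.T @ (Xi @ w - yi) / len(idx)

def run(sampler, iters=20000):
    w = np.zeros(d)
    for _ in range(iters):
        idx, weight = sampler()
        w -= mu * weight * grad(w, idx)
    return np.linalg.norm(w - w_true) ** 2

uniform    = lambda: (rng.integers(n, size=1), 1.0)    # plain SGD
mini_batch = lambda: (rng.integers(n, size=10), 1.0)   # mini-batch of 10

# Importance sampling: draw samples proportionally to a gradient-norm proxy
# (the row norm of X) and reweight so the update remains unbiased.
p = np.linalg.norm(X, axis=1)
p = p / p.sum()
def importance():
    i = rng.choice(n, p=p)
    return np.array([i]), 1.0 / (n * p[i])

for name, sampler in [("uniform", uniform), ("mini-batch", mini_batch),
                      ("importance", importance)]:
    print(f"{name:11s} squared deviation: {run(sampler):.3e}")
```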


International Conference on Acoustics, Speech, and Signal Processing | 2014

Robust bootstrap methods with an application to geolocation in harsh LOS/NLOS environments

Stefan Vlaski; Michael Muma; Abdelhak M. Zoubir

The bootstrap is a powerful computational tool for statistical inference that allows for the estimation of the distribution of an estimate without distributional assumptions on the underlying data, reliance on asymptotic results or theoretical derivations. On the other hand, robustness properties of the bootstrap in the presence of outliers are very poor, irrespective of the robustness of the underlying estimator. This motivates the need to robustify the bootstrap procedure itself. Improvements to two existing robust bootstrap methods are suggested and a novel approach for robustifying the bootstrap is introduced. The methods are compared in a simulation study and the proposed method is applied to robust geolocation.
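The snippet below sketches only the basic non-parametric bootstrap mechanics referenced in the abstract: resample the data with replacement and re-evaluate the estimator to approximate its sampling distribution. It does not reproduce the robustified bootstrap methods proposed in the paper; the contaminated data and the median-based comparison are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical data with a few outliers (e.g., NLOS-corrupted range measurements).
data = np.concatenate([rng.normal(10.0, 1.0, 95), rng.normal(60.0, 5.0, 5)])

def bootstrap(data, estimator, B=2000):
    """Return B bootstrap replicates of the estimator."""
    n = len(data)
    return np.array([estimator(data[rng.integers(n, size=n)]) for _ in range(B)])

mean_reps   = bootstrap(data, np.mean)    # classical bootstrap of a non-robust estimator
median_reps = bootstrap(data, np.median)  # same resampling with a robust estimator

for name, reps in [("mean", mean_reps), ("median", median_reps)]:
    lo, hi = np.percentile(reps, [2.5, 97.5])
    print(f"{name}: 95% bootstrap interval [{lo:.2f}, {hi:.2f}]")
```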


International IEEE/EMBS Conference on Neural Engineering | 2017

A blind Adaptive Stimulation Artifact Rejection (ASAR) engine for closed-loop implantable neuromodulation systems

Sina Basir-Kazeruni; Stefan Vlaski; Hawraa Salami; Ali H. Sayed; Dejan Markovic

In this work we propose an energy-efficient, implantable, real-time, blind Adaptive Stimulation Artifact Rejection (ASAR) engine. This enables concurrent neural stimulation and recording for state-of-the-art closed-loop neuromodulation systems. Two engines, implemented in 40 nm CMOS, achieve convergence within 42 µs for Spike ASAR and within 167 µs for LFP ASAR, and can attenuate artifacts up to 100 mVp-p by 49.2 dB, without any prior knowledge of the stimulation pulse. The LFP and Spike ASAR designs occupy an area of 0.197 mm² and 0.209 mm², and consume 1.73 µW and 3.02 µW, respectively, at 0.644 V.
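As a software-level illustration of the general adaptive artifact-cancellation principle (and emphatically not the blind, hardware ASAR engine itself), the sketch below runs a normalized-LMS canceller that assumes access to a stimulation reference signal, whereas the engine described above requires no prior knowledge of the stimulation pulse. All signals, rates, and parameters are synthetic.

```python
import numpy as np

rng = np.random.default_rng(4)

fs, T = 20000, 1.0                       # sample rate (Hz) and duration (s), assumed
t = np.arange(int(fs * T)) / fs

neural = 5e-6 * np.sin(2 * np.pi * 12 * t) + 2e-6 * rng.standard_normal(t.size)
stim_ref = (np.sin(2 * np.pi * 130 * t) > 0.95).astype(float)   # pulse-train reference
artifact = np.convolve(stim_ref, np.exp(-np.arange(40) / 8.0), mode="full")[: t.size]
recording = neural + 1e-3 * artifact     # the artifact dominates the recording

L, mu, eps = 64, 0.5, 1e-12              # filter length, NLMS step-size, regularizer
w = np.zeros(L)
buf = np.zeros(L)
cleaned = np.zeros_like(recording)
for i in range(t.size):
    buf = np.roll(buf, 1)
    buf[0] = stim_ref[i]
    e = recording[i] - w @ buf           # error = recording minus artifact estimate
    w += mu * e * buf / (buf @ buf + eps)
    cleaned[i] = e

print("residual artifact power ratio:",
      np.mean((cleaned - neural) ** 2) / np.mean((recording - neural) ** 2))
```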


Information Theory and Applications | 2017

On the performance of random reshuffling in stochastic learning

Bicheng Ying; Kun Yuan; Stefan Vlaski; Ali H. Sayed

In empirical risk optimization, it has been observed that gradient descent implementations that rely on random reshuffling of the data achieve better performance than implementations that rely on sampling the data randomly and independently of each other. Recent works have pursued justifications for this behavior by examining the convergence rate of the learning process under diminishing step-sizes. Some of these justifications rely on loose bounds, or their conclusions depend on the sample size, which is problematic for large datasets. This work focuses on constant step-size adaptation, where the agent is continuously learning. In this case, convergence is only guaranteed to a small neighborhood of the optimizer, albeit at a linear rate. The analysis establishes analytically that random reshuffling outperforms independent sampling by showing that the iterate at the end of each run approaches a smaller neighborhood of size O(μ²) around the minimizer rather than O(μ). Simulation results illustrate the theoretical findings.
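The short experiment below contrasts the two sampling schemes analyzed in the paper, independent uniform sampling versus random reshuffling with a fresh permutation per epoch, under a constant step-size on an assumed least-squares risk. It merely illustrates numerically the O(μ) versus O(μ²) neighborhoods established analytically in the paper.

```python
import numpy as np

rng = np.random.default_rng(5)

n, d, mu, epochs = 200, 5, 0.01, 300
X = rng.standard_normal((n, d))
y = X @ rng.standard_normal(d) + 0.1 * rng.standard_normal(n)
w_star = np.linalg.lstsq(X, y, rcond=None)[0]      # empirical risk minimizer

def sgd(reshuffle):
    w = np.zeros(d)
    for _ in range(epochs):
        # One pass over n samples: a fresh permutation (reshuffling) or
        # n independent uniform draws (with replacement).
        order = rng.permutation(n) if reshuffle else rng.integers(n, size=n)
        for i in order:
            w -= mu * (X[i] @ w - y[i]) * X[i]
    return np.linalg.norm(w - w_star) ** 2          # deviation at the end of the run

print("independent sampling, squared deviation:", sgd(reshuffle=False))
print("random reshuffling,   squared deviation:", sgd(reshuffle=True))
```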


IEEE Global Conference on Signal and Information Processing | 2016

The BRAIN strategy for online learning

Stefan Vlaski; Bicheng Ying; Ali H. Sayed

Complexity is a double-edged sword for learning algorithms when the number of available training samples is small relative to the dimension of the feature space. This is because simple models do not sufficiently capture the nuances of the data set, while complex models overfit. While remedies such as regularization and dimensionality reduction exist, they themselves can suffer from overfitting or introduce bias. To address the issue of overfitting, the incorporation of prior structural knowledge is generally of paramount importance. In this work, we propose a BRAIN strategy for learning, which enhances the performance of traditional algorithms, such as logistic regression and SVM learners, by incorporating a graphical layer that tracks and learns in real-time the underlying correlation structure among feature subspaces. In this way, the algorithm is able to identify salient subspaces and their correlations, while simultaneously dampening the effect of irrelevant features. This effect is particularly useful for high-dimensional feature spaces.
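A very loose sketch of the idea, not the paper's BRAIN construction: an online logistic-regression learner whose features are rescaled by a correlation estimate tracked in real time, so that salient features are emphasized and irrelevant ones dampened. The exponentially weighted feature-label correlation and the rescaling rule below are assumptions made purely for illustration.

```python
import numpy as np

rng = np.random.default_rng(6)

d, mu, beta = 20, 0.05, 0.99            # dimension, step-size, forgetting factor
relevant = np.zeros(d)
relevant[:3] = [2.0, -1.5, 1.0]         # only three salient features (assumption)

w = np.zeros(d)
corr = np.zeros(d)                      # running feature-label correlation estimate
for _ in range(20000):
    x = rng.standard_normal(d)
    y = 1.0 if rng.random() < 1.0 / (1.0 + np.exp(-x @ relevant)) else -1.0
    corr = beta * corr + (1 - beta) * y * x            # track the correlation structure
    scale = np.abs(corr) / (np.abs(corr).max() + 1e-12)
    xs = scale * x                                     # dampen irrelevant features
    w += mu * y * xs / (1.0 + np.exp(y * (w @ xs)))    # online logistic-regression step

print("weights on salient features:", np.round(w[:3], 2))
print("largest weight elsewhere:   ", np.round(np.abs(w[3:]).max(), 2))
```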


IEEE Signal Processing Workshop on Statistical Signal Processing | 2014

Robust bootstrap based observation classification for Kalman Filtering in harsh LOS/NLOS environments

Stefan Vlaski; Abdelhak M. Zoubir

The bootstrap allows for the estimation of the distribution of an estimate without requiring assumptions on the distribution of the underlying data, reliance on asymptotic results, or theoretical derivations. In contrast to a point estimate, the distribution estimate captures the uncertainty about the statistic of interest. We introduce a novel robust bootstrap method and demonstrate how this additional information is utilized to improve the performance of robust tracking methods. A robust bootstrap method is crucial, because the classical bootstrap is highly sensitive to outliers, irrespective of the robustness of the underlying estimator. Using the robust distribution estimate of the state prediction as a measure of confidence, the bootstrap allows us to incorporate an observation weighting scheme into the tracking algorithm, which enhances performance.
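The toy example below sketches the observation-weighting idea in a scalar tracking problem: a bootstrap over recent innovations provides a confidence region for the state prediction, and observations falling far outside it are down-weighted before the Kalman update. The weighting rule, bootstrap construction, and noise model are illustrative assumptions, not the robust bootstrap method of the paper.

```python
import numpy as np

rng = np.random.default_rng(7)

q, r = 0.05, 1.0                 # process and nominal measurement noise variances (assumed)
x_true, x_est, p_est = 0.0, 0.0, 1.0
innovations = []                 # recent innovations, resampled by the bootstrap

for t in range(200):
    x_true += rng.normal(0.0, np.sqrt(q))
    # Observation with occasional NLOS-like outliers (assumption).
    z = x_true + (rng.normal(0.0, np.sqrt(r)) if rng.random() > 0.1
                  else rng.normal(15.0, 2.0))

    x_pred, p_pred = x_est, p_est + q            # prediction step
    innovation = z - x_pred

    # Bootstrap the recent innovations to estimate a robust spread of plausible
    # prediction errors; down-weight observations falling outside that spread.
    if len(innovations) >= 20:
        reps = [np.median(rng.choice(innovations, size=len(innovations)))
                for _ in range(200)]
        spread = np.percentile(np.abs(reps), 97.5) + 3.0 * np.sqrt(r)
        weight = 1.0 if abs(innovation) <= spread else spread / abs(innovation)
    else:
        weight = 1.0

    # Weighted update: a small weight acts like an inflated measurement variance.
    gain = p_pred / (p_pred + r / max(weight, 1e-6))
    x_est = x_pred + gain * innovation
    p_est = (1.0 - gain) * p_pred
    innovations = (innovations + [innovation])[-50:]

print("final tracking error:", abs(x_est - x_true))
```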


International Workshop on Signal Processing Advances in Wireless Communications | 2018

Distributed Inference Over Multitask Graphs Under Smoothness

Roula Nassif; Stefan Vlaski; Ali H. Sayed


arXiv: Learning | 2018

Stochastic Learning under Random Reshuffling.

Bicheng Ying; Kun Yuan; Stefan Vlaski; Ali H. Sayed

Collaboration


Dive into Stefan Vlaski's collaborations.

Top Co-Authors

Ali H. Sayed, École Polytechnique Fédérale de Lausanne
Bicheng Ying, University of California
Kun Yuan, University of California
Roula Nassif, University of Nice Sophia Antipolis
Abdelhak M. Zoubir, Technische Universität Darmstadt
Dejan Markovic, University of California
Hawraa Salami, University of California
Hermina Petric Maretic, École Polytechnique Fédérale de Lausanne