Publication


Featured research published by Alexander V. Nazin.


Automatica | 2005

Nonlinear system identification via direct weight optimization

Jacob Roll; Alexander V. Nazin; Lennart Ljung

A general framework for estimating nonlinear functions and systems is described and analyzed in this paper. Identification of a system is seen as estimation of a predictor function. The considered predictor function estimate at a particular point is defined to be affine in the observed outputs and the estimate is defined by the weights in this expression. For each given point, the maximal mean-square error (or an upper bound) of the function estimate over a class of possible true functions is minimized with respect to the weights, which is a convex optimization problem. This gives different types of algorithms depending on the chosen function class. It is shown how the classical linear least squares is obtained as a special case and how unknown-but-bounded disturbances can be handled. Most of the paper deals with the method applied to locally smooth predictor functions. It is shown how this leads to local estimators with a finite bandwidth, meaning that only observations in a neighborhood of the target point will be used in the estimate. The size of this neighborhood (the bandwidth) is automatically computed and reflects the noise level in the data and the smoothness priors. The approach is applied to a number of dynamical systems to illustrate its potential.
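
A minimal sketch of the kind of convex weight optimization described above, for the scalar case with an assumed smoothness class |f''| <= L and known noise level sigma; the data, the constants, and the use of numpy/cvxpy are illustrative assumptions, not the authors' exact formulation.

```python
import numpy as np
import cvxpy as cp

# Illustrative data: noisy samples of an unknown smooth function (assumed setup).
rng = np.random.default_rng(0)
x = np.sort(rng.uniform(-1.0, 1.0, 40))
y = np.sin(2.0 * x) + 0.1 * rng.standard_normal(40)

x_star = 0.3      # point where the predictor function is estimated
L = 2.0           # assumed bound on |f''| over the function class
sigma = 0.1       # assumed noise standard deviation
d = x - x_star

# Weights of the linear estimate f_hat(x_star) = sum_k w_k * y_k.
w = cp.Variable(x.size)

# Worst-case bias over the class plus variance: an upper bound on the MSE.
bias = (L / 2.0) * cp.sum(cp.multiply(cp.abs(w), d ** 2))
variance = sigma ** 2 * cp.sum_squares(w)

# Convex program: minimize the MSE bound subject to reproducing affine functions exactly.
prob = cp.Problem(cp.Minimize(cp.square(bias) + variance),
                  [cp.sum(w) == 1, cp.sum(cp.multiply(w, d)) == 0])
prob.solve()

print("estimate of f(x_star):", float(w.value @ y))
# Weights far from x_star come out (numerically) zero: the finite bandwidth
# mentioned in the abstract emerges automatically from the optimization.
print("effectively nonzero weights:", int(np.sum(np.abs(w.value) > 1e-6)))
```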


Problems of Information Transmission | 2005

Recursive Aggregation of Estimators by the Mirror Descent Algorithm with Averaging

Anatoli Juditsky; Alexander V. Nazin; Alexandre B. Tsybakov; Nicolas Vayatis

We consider a recursive algorithm to construct an aggregated estimator from a finite number of base decision rules in the classification problem. The estimator approximately minimizes a convex risk functional under the ℓ1-constraint. It is defined by a stochastic version of the mirror descent algorithm which performs descent of the gradient type in the dual space with an additional averaging. The main result of the paper is an upper bound for the expected accuracy of the proposed estimator. This bound is of the order $C\sqrt{(\log M)/t}$ with an explicit and small constant factor C, where M is the dimension of the problem and t stands for the sample size. A similar bound is proved for a more general setting, which covers, in particular, the regression model with squared loss.
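
As a rough illustration of the recursive scheme described above, the following sketch runs entropic mirror descent (exponentiated gradient) with iterate averaging over the simplex of M base rules; the logistic surrogate loss, the step sizes, and the toy data are illustrative assumptions rather than the paper's exact algorithm.

```python
import numpy as np

def aggregate_by_mirror_descent(H, y, n_steps=None, beta=1.0, seed=0):
    """Sketch of recursive aggregation by stochastic mirror descent with averaging.

    H : (n, M) array, H[i, j] = output of base rule j on example i, in [-1, 1]
    y : (n,) array of labels in {-1, +1}
    Returns a weight vector on the simplex over the M base rules.
    """
    n, M = H.shape
    n_steps = n_steps or n
    rng = np.random.default_rng(seed)
    theta = np.zeros(M)        # dual (mirror) variable
    lam_avg = np.zeros(M)      # running average of the primal iterates
    for t in range(1, n_steps + 1):
        lam = np.exp(theta - theta.max())
        lam /= lam.sum()                       # primal point on the simplex
        i = rng.integers(n)                    # one random training example per step
        margin = y[i] * (H[i] @ lam)
        # Stochastic gradient of a convex surrogate (logistic) risk at lam.
        grad = -y[i] * H[i] / (1.0 + np.exp(margin))
        theta -= (beta / np.sqrt(t)) * grad    # gradient-type step in the dual space
        lam_avg += (lam - lam_avg) / t         # additional averaging
    return lam_avg

# Usage sketch: three base rules on toy data, the first rule being informative.
H = np.sign(np.random.default_rng(1).standard_normal((200, 3)))
y = np.sign(H[:, 0] + 0.1)
print(aggregate_by_mirror_descent(H, y, n_steps=2000))
```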


conference on decision and control | 2006

Rejection of Bounded Disturbances via Invariant Ellipsoids Technique

Boris T. Polyak; Alexander V. Nazin; Michael Topunov; Sergey A. Nazin

In this paper an approach based on invariant ellipsoids is applied to the problem of persistent disturbance rejection by means of static state-feedback control. The dynamic system is assumed to be linear time-invariant and affected by unknown-but-bounded exogenous disturbances. Synthesis of an optimal controller that minimizes the size of the corresponding invariant ellipsoid is reduced to one-dimensional convex minimization with LMI constraints. The problem is considered in both the continuous- and discrete-time cases.
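
The core computation behind this technique, minimizing the size of an invariant ellipsoid subject to an LMI for a fixed scalar parameter and then scanning that parameter in one dimension, can be sketched as follows for a disturbance-driven system without any feedback design; the plant matrices, the trace criterion, and the use of cvxpy are illustrative assumptions.

```python
import numpy as np
import cvxpy as cp

# Illustrative stable plant x' = A x + D w with ||w(t)|| <= 1 (no feedback here).
A = np.array([[0.0, 1.0],
              [-2.0, -3.0]])
D = np.array([[0.0],
              [1.0]])

best_value, best_P = np.inf, None
# One-dimensional scan over the scalar parameter alpha; for each fixed alpha the
# invariance condition  A P + P A' + alpha P + (1/alpha) D D' <= 0  is an LMI in P.
for alpha in np.linspace(0.1, 1.9, 19):
    P = cp.Variable((2, 2), symmetric=True)
    lmi = A @ P + P @ A.T + alpha * P + (1.0 / alpha) * (D @ D.T)
    prob = cp.Problem(cp.Minimize(cp.trace(P)),
                      [P >> 1e-6 * np.eye(2), lmi << 0])
    prob.solve(solver=cp.SCS)
    if prob.status in ("optimal", "optimal_inaccurate") and prob.value < best_value:
        best_value, best_P = prob.value, P.value

print("smallest trace of an invariant ellipsoid's shape matrix:", best_value)
print("shape matrix P:\n", best_P)
```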


conference on decision and control | 2002

A non-asymptotic approach to local modelling

Jacob Roll; Alexander V. Nazin; Lennart Ljung

Local models and methods construct function estimates or predictions from observations in a local neighborhood of the point of interest. The bandwidth, i.e., how large the local neighborhood should be, is often determined based on asymptotic analysis. In the paper, an alternative, non-asymptotic approach that minimizes a uniform upper bound on the mean square error for a linear estimate is proposed. It is shown, for the scalar case, that the solution is obtained from a quadratic program, and that it maintains many of the key features of the asymptotic approaches. Moreover, examples show that the proposed approach in some cases is superior to an asymptotically based local linear estimator.


IFAC Proceedings Volumes | 2008

Gap-free Bounds for Stochastic Multi-Armed Bandit

Anatoly Juditsky; Alexander V. Nazin; Alexander Tsybakov; Nicolas Vayatis

We consider the stochastic multi-armed bandit problem with unknown horizon. We present a randomized decision strategy which is based on updating a probability distribution through a stochastic mirror descent/exponentiated gradient type algorithm. We consider separately two assumptions: nonnegative losses or arbitrary losses with an exponential moment condition. We prove optimal (up to logarithmic factors) gap-free bounds on the excess risk of the average over time of the instantaneous losses induced by the choice of a specific action.
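
A minimal EXP3-style sketch of such a randomized strategy: exponentiated-gradient updates of a probability distribution over the arms using importance-weighted loss estimates and an anytime step size, so the horizon is never used. The step-size choice and the Bernoulli losses in the usage example are illustrative assumptions.

```python
import numpy as np

def randomized_bandit(loss_fn, M, T, seed=0):
    """Sketch: play T rounds over M arms, updating a distribution on the simplex
    by an exponentiated-gradient / mirror-descent step on importance-weighted
    loss estimates. Returns the average of the incurred instantaneous losses."""
    rng = np.random.default_rng(seed)
    cum_est = np.zeros(M)          # cumulative importance-weighted loss estimates
    total = 0.0
    for t in range(1, T + 1):
        eta = np.sqrt(np.log(M) / (M * t))          # anytime step size, no horizon needed
        p = np.exp(-eta * (cum_est - cum_est.min()))
        p /= p.sum()                                 # probability distribution over the arms
        arm = rng.choice(M, p=p)
        loss = loss_fn(arm, t)                       # only the chosen arm's loss is revealed
        total += loss
        cum_est[arm] += loss / p[arm]                # unbiased estimate of the full loss vector
    return total / T

# Usage sketch: nonnegative Bernoulli losses with unknown means.
means = np.array([0.6, 0.5, 0.3, 0.7])
rng = np.random.default_rng(1)
avg_loss = randomized_bandit(lambda a, t: float(rng.random() < means[a]), M=4, T=20000)
print("average incurred loss:", avg_loss)   # should approach min(means) = 0.3
```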


Automation and Remote Control | 2011

Randomized algorithm to determine the eigenvector of a stochastic matrix with application to the PageRank problem

Alexander V. Nazin; Boris T. Polyak

We consider the estimation of the eigenvector corresponding to the largest eigenvalue of a stochastic matrix. This problem has numerous applications in ranking search results, coordination of multi-agent systems, network control, and data analysis. The standard technique for its solution reduces to the power method with an additional regularization of the original matrix. A new randomized algorithm is proposed, and an upper bound on its convergence rate, uniform over the entire class of stochastic matrices of a given size, is established. The bound is of the order $C\sqrt{\ln N / n}$, where C is an absolute constant, N is the matrix size, and n is the number of iterations. This bound is promising because ln N remains small even for very large N. The algorithm relies on the mirror descent method for convex stochastic optimization. Its applicability to the PageRank problem of ranking web pages is discussed.
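
An illustrative sketch of the idea: entropic mirror descent on the simplex applied to the residual of A x = x, with a randomized gradient that samples one column of the matrix per iteration. The objective, step sizes, and averaging below are simplifications and not the exact algorithm of the paper.

```python
import numpy as np

def pagerank_mirror_descent(A, n_iter=20000, seed=0):
    """Approximate the dominant eigenvector of a column-stochastic matrix A
    (A x = x, x on the simplex) by entropic mirror descent with a randomized
    gradient that touches one column of A per iteration (sketch only)."""
    N = A.shape[0]
    rng = np.random.default_rng(seed)
    B = A - np.eye(N)
    x = np.full(N, 1.0 / N)          # uniform starting point on the simplex
    x_avg = np.zeros(N)
    for t in range(1, n_iter + 1):
        j = rng.choice(N, p=x)       # sample a column index j with probability x_j
        g = B.T @ B[:, j]            # unbiased estimate of the gradient B' B x
        gamma = np.sqrt(np.log(N) / t)
        x = x * np.exp(-gamma * g)   # entropic (exponentiated-gradient) step
        x /= x.sum()
        x_avg += (x - x_avg) / t     # averaging of the iterates
    return x_avg

# Usage sketch on a small damped "Google matrix".
N = 5
rng = np.random.default_rng(1)
P = rng.random((N, N)); P /= P.sum(axis=0)   # random column-stochastic matrix
A = 0.85 * P + 0.15 / N                      # damping keeps the dominant eigenvector unique
x = pagerank_mirror_descent(A)
print("mirror-descent estimate:", np.round(x, 3))
print("residual ||A x - x||   :", np.linalg.norm(A @ x - x))
```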


conference on decision and control | 2009

Adaptive randomized algorithm for finding eigenvector of stochastic matrix with application to PageRank

Alexander V. Nazin; Boris T. Polyak


IFAC Proceedings Volumes | 1997

Asymptotic Properties of Just-in-Time Models

Anders Stenman; Alexander V. Nazin; Fredrik Gustafsson


IFAC Proceedings Volumes | 2005

A General Direct Weight Optimization Framework for Nonlinear System Identification

Jacob Roll; Alexander V. Nazin; Lennart Ljung


IFAC Proceedings Volumes | 2003

Local Modelling of Nonlinear Dynamic Systems Using Direct Weight Optimization

Jacob Roll; Alexander V. Nazin; Lennart Ljung


Collaboration


Dive into Alexander V. Nazin's collaborations.

Top Co-Authors

Boris T. Polyak (Russian Academy of Sciences)
Boris M. Miller (Russian Academy of Sciences)
Andrey A. Tremba (Russian Academy of Sciences)
Nicolas Vayatis (École normale supérieure de Cachan)
Sergey A. Nazin (Russian Academy of Sciences)
E. V. Piterskaya (Russian Academy of Sciences)