
Publication


Featured research published by Alexander P. Kuleshov.


IFAC Proceedings Volumes | 2009

Cognitive technologies in adaptive models of complex plants

Alexander P. Kuleshov; Alexander V. Bernstein

The paper deals with various aspects of the construction and application of surrogate models in CAD systems. It outlines the basic data analysis and simulation tasks essential for surrogate model construction, reviews the current state of the art, and proposes innovative approaches based on cognitive data analysis and simulation technologies.
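The surrogate-modelling idea described in the abstract can be illustrated with a minimal sketch: sample an "expensive" function at a few design points, fit a cheap polynomial surrogate, and evaluate the surrogate instead. Everything below (the toy `expensive_model`, the sample points, the polynomial degree) is illustrative and not taken from the paper:

```python
import numpy as np

# Hypothetical "expensive" simulator response; in a real CAD setting this
# would be a detailed physical simulation.
def expensive_model(x):
    return np.sin(3 * x) + 0.5 * x

# Sample the expensive model at a set of design points.
x_train = np.linspace(0.0, 2.0, 20)
y_train = expensive_model(x_train)

# Fit a cheap polynomial surrogate to the samples.
coeffs = np.polyfit(x_train, y_train, deg=5)
surrogate = np.poly1d(coeffs)

# The surrogate approximates the expensive model between design points
# and is much cheaper to evaluate inside an optimization loop.
x_new = 1.37
approx = surrogate(x_new)
exact = expensive_model(x_new)
```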


Machine Learning and Data Mining in Pattern Recognition | 2014

Manifold Learning in Data Mining Tasks

Alexander P. Kuleshov; Alexander V. Bernstein

Many Data Mining tasks deal with data presented in high-dimensional spaces, and the 'curse of dimensionality' phenomenon is often an obstacle to the use of many methods for solving these tasks. To avoid this phenomenon, various Representation Learning algorithms are used as a first key step: they transform the original high-dimensional data into lower-dimensional representations that preserve as much of the information required for the considered Data Mining task as possible. These Representation Learning problems are formulated as various Dimensionality Reduction problems (Sample Embedding, Data Manifold Embedding, Manifold Learning, and the newly proposed Tangent Bundle Manifold Learning), each motivated by particular Data Mining tasks. A new geometrically motivated algorithm is presented that solves the Tangent Bundle Manifold Learning problem and yields new solutions for all the considered Dimensionality Reduction problems.
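As a linear baseline for the dimensionality-reduction setup described above (PCA via SVD, not the paper's Tangent Bundle Manifold Learning algorithm), the following sketch maps synthetic high-dimensional data lying near a low-dimensional subspace to a low-dimensional representation that preserves most of the information. All dimensions and data are illustrative:

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic high-dimensional data that actually lies near a 2-dimensional
# linear subspace of a 50-dimensional space.
n, D, d = 200, 50, 2
latent = rng.normal(size=(n, d))            # hidden low-dimensional coordinates
basis = rng.normal(size=(d, D))             # embedding into high-dimensional space
X = latent @ basis + 0.01 * rng.normal(size=(n, D))

# PCA via SVD: project centered data onto the top principal directions.
Xc = X - X.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
embedding = Xc @ Vt[:d].T                   # lower-dimensional representation

# The top two singular values dominate, so the low-dimensional
# representation preserves almost all of the data's variance.
explained = (S[:d] ** 2).sum() / (S ** 2).sum()
```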


Lecture Notes in Computer Science | 2015

Manifold Learning in Regression Tasks

Alexander V. Bernstein; Alexander P. Kuleshov; Yury Yanovich

The paper presents a new geometrically motivated method for non-linear regression based on the Manifold Learning technique. The regression problem is to construct a predictive function which estimates an unknown smooth mapping f from q-dimensional inputs to m-dimensional outputs based on a training data set consisting of given 'input-output' pairs. The unknown mapping f determines a q-dimensional manifold M(f), consisting of all 'input-output' vectors, which is embedded in (q+m)-dimensional space and covered by a single chart; the training data set determines a sample from this manifold. Modern Manifold Learning methods allow constructing an estimator M* from the manifold-valued sample which accurately approximates the manifold. The proposed method, called Manifold Learning Regression (MLR), finds the predictive function fMLR ensuring the equality M(fMLR) = M*. MLR simultaneously estimates the m×q Jacobian matrix of the mapping f.
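The simultaneous "value plus Jacobian" estimation that MLR performs can be illustrated (not reproduced) with a local linear fit: around a query point, the intercept estimates f and the slope estimates the Jacobian row. The mapping f, the sample size, and the neighborhood size below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)

# Unknown smooth mapping f: R^2 -> R (q = 2, m = 1) and a training sample
# of 'input-output' pairs lying on the manifold M(f).
def f(x):
    return np.sin(x[..., 0]) + x[..., 1] ** 2

X = rng.uniform(-1, 1, size=(500, 2))
y = f(X)

def local_linear(x0, X, y, k=50):
    """Fit a linear model on the k nearest neighbors of x0.

    The intercept estimates f(x0); the slope estimates the 1xq Jacobian
    row — illustrating (not reproducing) MLR's joint estimation.
    """
    idx = np.argsort(np.linalg.norm(X - x0, axis=1))[:k]
    A = np.hstack([np.ones((k, 1)), X[idx] - x0])
    coef, *_ = np.linalg.lstsq(A, y[idx], rcond=None)
    return coef[0], coef[1:]

x0 = np.array([0.3, 0.5])
val, jac = local_linear(x0, X, y)
# True value is sin(0.3) + 0.25; true Jacobian is (cos(0.3), 2 * 0.5).
```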


Problems of Information Transmission | 2008

Optimal filtering of a random background in image processing problems

Alexander Bernstein; Alexander P. Kuleshov

We describe a recurrent construction procedure for mean-square optimal linear spatio-temporal filtering of a random background, which makes it possible to construct filtered frames using explicitly written compact analytical expressions.
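A scalar Kalman-style recursion gives the flavour of a recurrent mean-square optimal linear filtering procedure with compact closed-form per-step expressions; it is a simplified one-dimensional stand-in, not the paper's spatio-temporal construction, and all the numbers are illustrative:

```python
import numpy as np

rng = np.random.default_rng(2)

# Estimate a constant background level b from noisy frames z_t = b + noise
# (measurement noise variance taken as 1 for simplicity).
b_true = 4.0
frames = b_true + rng.normal(scale=1.0, size=200)

# Recursive update: each step is a compact analytical expression combining
# the previous estimate with the new frame.
est, var = 0.0, 1e6          # diffuse prior mean and variance
for z in frames:
    gain = var / (var + 1.0)          # optimal linear (Kalman) gain
    est = est + gain * (z - est)      # updated background estimate
    var = (1 - gain) * var            # updated estimation variance

# With a diffuse prior, the recursion converges to the sample mean,
# which is the mean-square optimal linear estimate in this toy setting.
```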


Artificial Neural Networks in Pattern Recognition | 2018

Manifold Learning Regression with Non-stationary Kernels

Alexander P. Kuleshov; Alexander Bernstein; Evgeny Burnaev

The nonlinear multi-output regression problem is to construct a predictive function which estimates an unknown smooth mapping from q-dimensional inputs to m-dimensional outputs based on a training data set consisting of given "input-output" pairs. Regression models based on stationary kernels are often used to solve this problem, but such approaches are not efficient for functions with strongly varying gradients. There have been attempts to introduce non-stationary kernels to account for possible non-regularities, but even the most efficient one, called Manifold Learning Regression (MLR), which estimates the unknown function as well as its Jacobian matrix, is computationally expensive because it relies on a computationally intensive manifold learning technique. In this paper we propose a modified version of MLR with significantly lower computational complexity that preserves its accuracy.
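The stationary-vs-non-stationary kernel issue can be seen in a toy Nadaraya-Watson estimator: a single global bandwidth oversmooths a region with a steep gradient, while an input-dependent bandwidth tracks it. The gradient-based bandwidth rule below is a simple heuristic for illustration, not the MLR-based construction from the paper:

```python
import numpy as np

# Target with a strongly varying gradient: flat away from x = 1,
# very steep near x = 1.
def f(x):
    return np.tanh(10 * (x - 1.0))

x_train = np.linspace(0, 2, 400)
y_train = f(x_train)

def nw_predict(x0, h):
    """Nadaraya-Watson estimate with a Gaussian kernel of bandwidth h."""
    w = np.exp(-0.5 * ((x_train - x0) / h) ** 2)
    return (w * y_train).sum() / w.sum()

# Stationary kernel: one global bandwidth oversmooths the steep region.
stationary = nw_predict(1.05, h=0.3)

# Non-stationary kernel: shrink the bandwidth where the local gradient
# is large (an illustrative heuristic).
grad = np.abs(np.gradient(y_train, x_train))
local_h = 0.3 / (1.0 + grad[np.argmin(np.abs(x_train - 1.05))])
adaptive = nw_predict(1.05, h=local_h)

# The adaptive estimate tracks the steep transition much more closely
# than the stationary one.
```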


International Conference on Machine Vision | 2015

Locally isometric and conformal parameterization of image manifold

Alexander V. Bernstein; Alexander P. Kuleshov; Yury Yanovich

Images can be represented as vectors in a high-dimensional Image space whose components specify light intensities at image pixels. To avoid the 'curse of dimensionality', the original high-dimensional image data are transformed into lower-dimensional features preserving certain subject-driven data properties. These properties can include 'information preservation' when the constructed low-dimensional features are used instead of the original high-dimensional vectors, as well as preservation of the distances and angles between the original high-dimensional image vectors. Under the commonly used Manifold assumption that the high-dimensional image data lie on or near an unknown low-dimensional Image manifold embedded in an ambient high-dimensional 'observation' space, constructing the lower-dimensional features amounts to constructing an Embedding mapping from the Image manifold to a Feature space, which, in turn, determines a low-dimensional parameterization of the Image manifold. We propose a new geometrically motivated Embedding method which constructs a low-dimensional parameterization of the Image manifold and provides the information-preserving property as well as the locally isometric and conformal properties.
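The distance-preservation (isometry) property can be checked on a toy example: when image vectors lie exactly in a low-dimensional subspace, a linear PCA embedding (a stand-in for the paper's non-linear parameterization) preserves pairwise distances exactly. The synthetic 'images' and all sizes below are illustrative:

```python
import numpy as np

rng = np.random.default_rng(5)

# Tiny 8x8 'images' as vectors in a 64-dimensional Image space, generated
# from 3 latent factors (synthetic data, not a real image set).
patterns = rng.normal(size=(3, 64))
latent = rng.normal(size=(100, 3))
images = latent @ patterns

# Linear embedding into a 3-dimensional Feature space via PCA.
Xc = images - images.mean(axis=0)
U, S, Vt = np.linalg.svd(Xc, full_matrices=False)
features = Xc @ Vt[:3].T

# Because the centered data lie exactly in a 3-dimensional subspace,
# projecting onto its orthonormal basis preserves pairwise distances:
# the embedding is (globally) isometric on this data.
d_high = np.linalg.norm(Xc[0] - Xc[1])
d_low = np.linalg.norm(features[0] - features[1])
```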


International Conference on Machine Learning and Applications | 2015

Data-Based Statistical Models of Data Networks

Alexander P. Kuleshov; Alexander V. Bernstein; Yury Agalakov

Machine (statistical) learning methods are used for predicting the delivery times of packages transmitted through a data network (DN). A statistical model of the DN is proposed which predicts delivery times depending on the state of the DN (network load) and captures the statistical dependences between the delivery times of different transmitted packages. To construct this model, various statistical methods (forecasting, dimensionality reduction) are applied to data obtained from computational experiments performed with a detailed simulation model of the DN. The constructed model simulates the processes of package transmission over the DN. The motivation for constructing such a model is the need for Monte Carlo network simulators that imitate the delivery times of transmitted packages; such simulators can be used in modeling Information and Control Systems whose objects communicate with each other through the DN.
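A minimal sketch of the Monte Carlo simulator idea, assuming purely illustrative 'simulation' data: fit a trend of delivery time against network load, keep the empirical residuals, and resample them to imitate delivery times at a given load level. None of the numbers or names below come from the paper:

```python
import numpy as np

rng = np.random.default_rng(4)

# Stand-in for detailed-simulation results: delivery time grows with
# network load plus a random queueing delay (illustrative data).
load = rng.uniform(0.1, 0.9, size=1000)
delay = 1.0 + 2.0 * load + rng.exponential(scale=0.2, size=1000)

# Statistical model of the DN: a least-squares fit of delivery time
# against load, plus the empirical residual distribution.
A = np.column_stack([np.ones_like(load), load])
coef, *_ = np.linalg.lstsq(A, delay, rcond=None)
residuals = delay - A @ coef

def simulate_delivery(load_level, n):
    """Monte Carlo imitation of n delivery times at a given load."""
    base = coef[0] + coef[1] * load_level
    return base + rng.choice(residuals, size=n)

samples = simulate_delivery(0.5, 10000)
```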


arXiv: Learning | 2012

Tangent Bundle Manifold Learning via Grassmann&Stiefel Eigenmaps

Alexander Bernstein; Alexander P. Kuleshov


Artificial Neural Networks in Pattern Recognition | 2014

Low-Dimensional Data Representation in Data Analysis

Alexander V. Bernstein; Alexander P. Kuleshov


IEEE International Conference on Data Science and Advanced Analytics | 2015

Information preserving and locally isometric&conformal embedding via Tangent Manifold Learning

Alexander V. Bernstein; Alexander P. Kuleshov; Yury Yanovich

Collaboration


Dive into Alexander P. Kuleshov's collaboration.

Top Co-Authors

Alexander Bernstein

Skolkovo Institute of Science and Technology

Evgeny Burnaev

Skolkovo Institute of Science and Technology
