Dmitry Kropotov
Russian Academy of Sciences
Publications
Featured research published by Dmitry Kropotov.
Pattern Recognition and Image Analysis | 2010
Dmitry Kropotov; D. Laptev; Anton Osokin; Dmitry P. Vetrov
We consider image and signal segmentation problems within the Markov random field (MRF) approach and try to take label frequency constraints into account. Incorporating these constraints into an MRF leads to an NP-hard optimization problem. To solve this problem we present a two-step approximation scheme that allows one to use hard, interval and soft constraints on label frequencies. In the first step a factorized approximation of the joint distribution is made (only local terms are included), and then, in the second step, the labeling is found by conditional maximization of the factorized joint distribution. The latter task reduces to an easy-to-solve transportation problem. Based on the proposed two-step approximation scheme we derive the ELM algorithm for tuning MRF parameters. We show the efficiency of our approach on toy signals and on the task of automated segmentation of Google Maps.
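As an illustration of the second step, here is a minimal sketch (our own toy code, not taken from the paper) of finding the labeling that maximizes a factorized per-pixel distribution under hard label-frequency constraints by solving the corresponding transportation problem as a linear program; the function name and the toy data are assumptions.

import numpy as np
from scipy.optimize import linprog

def constrained_labeling(log_p, counts):
    """log_p: (n_pixels, n_labels) log-probabilities from the factorized approximation.
    counts: required number of pixels per label (hard frequency constraints)."""
    n, k = log_p.shape
    assert sum(counts) == n
    c = -log_p.ravel()                          # minimize the negative log-likelihood
    a_pix = np.zeros((n, n * k))                # each pixel receives exactly one label
    for i in range(n):
        a_pix[i, i * k:(i + 1) * k] = 1.0
    a_lab = np.zeros((k, n * k))                # each label is used exactly counts[j] times
    for j in range(k):
        a_lab[j, j::k] = 1.0
    res = linprog(c, A_eq=np.vstack([a_pix, a_lab]),
                  b_eq=np.concatenate([np.ones(n), counts]), bounds=(0, 1))
    z = res.x.reshape(n, k)
    return z.argmax(axis=1)                     # the transportation LP has an integral optimum

rng = np.random.default_rng(0)
log_p = np.log(rng.dirichlet(np.ones(3), size=10))   # toy factorized distribution: 10 pixels, 3 labels
print(constrained_labeling(log_p, [4, 3, 3]))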
Computational Mathematics and Mathematical Physics | 2010
Dmitry P. Vetrov; Dmitry Kropotov; Anton Osokin
The classical EM algorithm for restoring a mixture of normal probability distributions cannot determine the number of components in the mixture. An algorithm called ARD EM for the automatic determination of the number of components is proposed, which is based on the relevance vector machine. The idea behind this algorithm is to use a redundant number of mixture components at the first stage and then determine the relevant components by maximizing the evidence. Experiments on model problems show that the number of clusters determined in this way either coincides with the actual number or slightly exceeds it. In addition, the clustering produced by ARD EM turns out to be closer to the actual clustering than that obtained by analogs based on cross-validation and the minimum description length principle.
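The exact ARD EM updates are not reproduced here, but the core idea — fitting with a deliberately redundant number of components and letting inference switch off the irrelevant ones — can be illustrated with an off-the-shelf variational mixture from scikit-learn, which plays an analogous (though not identical) role; the data and the 1e-2 threshold are illustrative assumptions.

import numpy as np
from sklearn.mixture import BayesianGaussianMixture

rng = np.random.default_rng(0)
x = np.vstack([rng.normal(m, 0.5, size=(200, 2)) for m in (-3, 0, 3)])   # three true clusters

gmm = BayesianGaussianMixture(n_components=15,                # redundant on purpose
                              weight_concentration_prior=1e-2,
                              max_iter=500, random_state=0).fit(x)
effective = (gmm.weights_ > 1e-2).sum()                       # components that survived pruning
print("estimated number of clusters:", effective)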
International Conference on Machine Learning | 2007
Dmitry Kropotov; Dmitry P. Vetrov
In this paper we propose a new type of regularization procedure for training sparse Bayesian methods for classification. Transforming the Hessian matrix of the log-likelihood function to diagonal form, with further regularization of its eigenvectors, allows us to optimize the evidence explicitly as a product of one-dimensional integrals. The automatic determination of regularization coefficients then converges in one iteration. We show how to use the proposed approach for Gaussian and Laplace priors. Both algorithms show performance comparable to the state-of-the-art Relevance Vector Machine (RVM) but require less time for training and produce sparser decision rules (in terms of degrees of freedom).
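For orientation, a minimal sketch (our notation, not the paper's) of why a diagonalized Hessian makes the evidence explicit: with a Laplace-type expansion of the log-likelihood $L(w)$ around its maximum $w^*$ in the basis where the Hessian is $\mathrm{diag}(h_1,\dots,h_d)$, and a factorized prior with coefficients $\alpha_i$ in the same basis,

\[
p(D \mid \alpha) \;\approx\; e^{L(w^*)} \int \exp\Big(-\tfrac{1}{2}\sum_i h_i (w_i - w_i^*)^2\Big) \prod_i p(w_i \mid \alpha_i)\, dw
\;=\; e^{L(w^*)} \prod_i \int \exp\Big(-\tfrac{h_i}{2}(w_i - w_i^*)^2\Big)\, p(w_i \mid \alpha_i)\, dw_i,
\]

so each regularization coefficient $\alpha_i$ can be optimized from its own one-dimensional integral (in closed form for a Gaussian prior, and still a tractable one-dimensional integral for a Laplace prior).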
Pattern Recognition and Image Analysis | 2009
E. Lomakina-Rumyantseva; P. Voronin; Dmitry Kropotov; Dmitry P. Vetrov; Anton Konushin
In this paper a system for laboratory rodent video tracking and behavior segmentation is proposed. A new real-time mouse pose estimation method is proposed, based on a semi-automatically generated animal shape model. Behavior segmentation into separate behavior acts is treated as a signal segmentation problem using hidden Markov models (HMMs). A conventional first-order HMM assumes a geometric prior distribution on segment length, which is inadequate for behavior segmentation. We propose a modification of the conventional first-order HMM that allows any prior distribution on segment length. Experiments show that the developed approach leads to more adequate results compared to the conventional HMM.
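A hedged sketch (our own formulation, not the paper's algorithm) of the kind of decoding such a modification enables: Viterbi-style segmentation in which every segment's length is scored by an arbitrary prior rather than the geometric one implied by state self-transitions; all names and array layouts are assumptions.

import numpy as np

def segmental_viterbi(log_emit, log_trans, log_pi, log_dur):
    """log_emit: (T, S) per-frame emission log-probabilities; log_trans: (S, S) between-segment
    transitions; log_pi: (S,) initial state log-probabilities; log_dur: (S, Dmax) log-prior
    over segment lengths 1..Dmax."""
    T, S = log_emit.shape
    Dmax = log_dur.shape[1]
    csum = np.vstack([np.zeros(S), np.cumsum(log_emit, axis=0)])   # prefix sums of emissions
    best = np.full((T, S), -np.inf)
    back = np.zeros((T, S, 2), dtype=int)                          # (previous segment end, previous state)
    for t in range(T):
        for s in range(S):
            for d in range(1, min(Dmax, t + 1) + 1):
                seg = csum[t + 1, s] - csum[t + 1 - d, s] + log_dur[s, d - 1]
                if d == t + 1:                                     # first segment of the signal
                    score, prev = log_pi[s] + seg, (-1, -1)
                else:
                    cand = best[t - d] + log_trans[:, s]
                    p = int(np.argmax(cand))
                    score, prev = cand[p] + seg, (t - d, p)
                if score > best[t, s]:
                    best[t, s], back[t, s] = score, prev
    s = int(np.argmax(best[T - 1]))                                # backtrack the segment boundaries
    t, segments = T - 1, []
    while t >= 0:
        pt, ps = back[t, s]
        segments.append((pt + 1, t, s))                            # frames pt+1..t assigned to state s
        t, s = pt, ps
    return segments[::-1]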
Pattern Recognition and Image Analysis | 2009
Dmitry Kropotov; Dmitry P. Vetrov
In this paper we consider the problem of model selection for linear regression within Bayesian and information-based frameworks. For both cases we generalize known approaches (evidence-based and the Akaike information criterion) and derive criterion functions in terms of weight priors (in the general case non-factorized), which are assumed to be Gaussian. Optimization of these criterion functions leads to two semidefinite optimization problems that can be solved analytically. We present a method that finds the best priors in both approaches and show their equivalence. Surprisingly, the optimal prior turns out to have a rank-one covariance matrix. We derive an explicit condition for a degenerate decision rule, i.e., a regression with all weights equal to zero. We conclude with experiments showing that the proposed approach significantly reduces the time needed for model selection in comparison with alternatives based on cross-validation and iterative evidence maximization, while preserving generalization ability.
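For orientation, the standard form of the evidence being optimized in the Bayesian case (our notation; the paper additionally handles the information-based criterion): for linear regression $y = Xw + \varepsilon$ with $\varepsilon \sim \mathcal{N}(0, \sigma^2 I)$ and a weight prior $w \sim \mathcal{N}(0, \Sigma)$,

\[
\log p(y \mid X, \Sigma) \;=\; -\tfrac{1}{2}\Big(N \log 2\pi \;+\; \log\big|\sigma^2 I + X \Sigma X^{\top}\big| \;+\; y^{\top}\big(\sigma^2 I + X \Sigma X^{\top}\big)^{-1} y\Big),
\]

and the result above says that the $\Sigma$ maximizing such criteria can be taken to have rank one.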
Iberoamerican Congress on Pattern Recognition | 2005
Dmitry Kropotov; Nikita Ptashko; Dmitry P. Vetrov
In this paper we propose a method based on the Bayesian framework for selecting the best kernel function for a supervised learning problem. The parameters of the kernel function are considered as model parameters, and the maximum evidence principle is applied for model selection. We describe a general scheme of Bayesian regularization, present a model of kernel classifiers as well as our approximations for evidence estimation, and then give some results of experimental evaluation.
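The paper's evidence approximations are not reproduced here; as an off-the-shelf illustration of the same principle — kernel parameters treated as model parameters and chosen by maximizing an approximation to the marginal likelihood — scikit-learn's Gaussian process classifier fits its kernel hyperparameters exactly this way (the data set is an illustrative assumption).

from sklearn.datasets import make_moons
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF

x, y = make_moons(n_samples=200, noise=0.2, random_state=0)
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=1.0),   # initial kernel guess
                                random_state=0).fit(x, y)
print("selected kernel:", gpc.kernel_)                                # evidence-optimized hyperparameters
print("log marginal likelihood:", gpc.log_marginal_likelihood_value_)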
Computational Intelligence Methods for Bioinformatics and Biostatistics | 2009
Anton Osokin; Dmitry P. Vetrov; Dmitry Kropotov
The paper describes a method for fully automatic 3D reconstruction of a mouse brain from a sequence of histological coronal 2D slices. The model is constructed via non-linear transformations between neighboring slices and further morphing. We also use rigid-body transforms in the preprocessing stage to align the slices. Afterwards, the obtained 3D model is used to generate virtual 2D images of the brain in an arbitrary section plane. We use this approach to construct a high-resolution anatomic 3D model of a mouse brain using the well-known, publicly available Allen Brain Atlas.
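As a toy illustration of one ingredient of such a pipeline (not the paper's morphing procedure): once neighboring slices are aligned, virtual intermediate sections can be approximated by interpolating between them; real morphing would follow the estimated non-linear transformations rather than a plain linear blend.

import numpy as np

def virtual_section(slice_a, slice_b, t):
    """Linear blend between two aligned neighboring slices, 0 <= t <= 1."""
    return (1.0 - t) * slice_a + t * slice_b

stack = np.random.rand(5, 64, 64)                      # stand-in for aligned coronal slices
section = virtual_section(stack[2], stack[3], 0.25)    # a section a quarter of the way between them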
Computational Mathematics and Mathematical Physics | 2011
Dmitry Kropotov
Problems of classification and regression estimation in which objects are represented by multidimensional arrays of features are considered. Many practical settings can be reduced to such problems, for example, the popular approach of describing images as a set of patches with a set of descriptors in each patch, or the description of an object as a set of distances from it to certain support objects selected on the basis of a set of features. For solving problems with objects described in this way, a generalization of the relevance vector model is proposed. In this generalization, specific regularization coefficients are defined for each dimension of the multidimensional array of the object description; the resulting regularization coefficient for a given element of the multidimensional array is determined as a combination of the regularization coefficients for all the dimensions. Models using the sum and the product for this combination are examined. Algorithms based on the variational approach are proposed for learning in these models. These algorithms find so-called “sparse” solutions, that is, they exclude from consideration the irrelevant dimensions of the multidimensional array of the object description. Compared with the classical relevance vector model, the proposed approach makes it possible to reduce the number of adjustable parameters because the sum of the dimension sizes is considered instead of their product. As a result, the method becomes more robust against overfitting in the case of small samples. This property and the sparseness of the resulting solutions of the proposed models are demonstrated experimentally, in particular on the well-known face identification database Labeled Faces in the Wild.
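A hedged sketch of the two combination schemes mentioned above (our own toy code; the coefficients are illustrative): per-dimension regularization coefficients for a 2-D array of weights are combined either as a sum or as a product to obtain the coefficient of each individual element.

import numpy as np

alpha_rows = np.array([0.1, 10.0, 1000.0])       # coefficients for the first dimension (e.g. patches)
alpha_cols = np.array([0.5, 2.0, 50.0, 1e4])     # coefficients for the second dimension (e.g. descriptors)

alpha_sum = alpha_rows[:, None] + alpha_cols[None, :]    # "sum" model
alpha_prod = alpha_rows[:, None] * alpha_cols[None, :]   # "product" model

# in either model a large coefficient along one dimension drives the whole corresponding
# row or column of weights towards zero, which is the source of the sparse solutions;
# only 3 + 4 coefficients are adjusted rather than 3 * 4
print(alpha_sum.shape, alpha_prod.shape)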
Computational Intelligence Methods for Bioinformatics and Biostatistics | 2010
Anton Osokin; Dmitry P. Vetrov; Alexey E. Lebedev; V. V. Galatenko; Dmitry Kropotov; K. V. Anokhin
We consider the problem of statistical analysis of gene expression in a mouse brain during cognitive processes. In particular, we focus on the problems of anatomical segmentation of a histological brain slice and estimation of the slice's gene expression level. The first problem is solved by interactive registration of an experimental brain slice into a 3D brain model constructed using the Allen Brain Atlas. The second problem is solved by special image filtering followed by smart resolution reduction. We also describe a procedure for non-linear correction of atlas slices which significantly improves the quality of the 3D model.
Pattern Recognition and Image Analysis | 2009
Dmitry Kropotov; Nikita Ptashko; Dmitry P. Vetrov
In this paper we propose an algorithm for the selection of regressors (features, basis functions) in linear regression problems. To do this we use a continuous generalization of the well-known Akaike information criterion (AIC). We develop a method for AIC optimization with respect to individual regularization coefficients, where each coefficient defines the relevance degree of the corresponding regressor. We provide experimental results showing that the proposed approach can be considered a non-Bayesian analog of the automatic relevance determination (ARD) approach and of the marginal likelihood optimization used in Relevance Vector Regression (RVR). The key difference of the new approach is its ability to find zero regularization coefficients. We hope that this helps to avoid the type-II overfitting (underfitting) which has been reported for RVR. We also show that in a special case both methods become identical.
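A hedged sketch (our notation, not the paper's exact criterion) of a continuous AIC-style score with an individual regularization coefficient per regressor, where the parameter count is replaced by the effective degrees of freedom, i.e. the trace of the smoother matrix; all names and the toy data are assumptions.

import numpy as np

def continuous_aic(x, y, alphas, noise_var=1.0):
    """x: (n, d) design matrix; y: (n,) targets; alphas: (d,) per-regressor coefficients
    (zero means no regularization, large values effectively remove the regressor)."""
    gram = x.T @ x + noise_var * np.diag(alphas)
    w = np.linalg.solve(gram, x.T @ y)                 # regularized least-squares weights
    resid = y - x @ w
    h = x @ np.linalg.solve(gram, x.T)                 # smoother ("hat") matrix
    eff_dof = np.trace(h)                              # effective number of regressors
    n = len(y)
    return n * np.log(resid @ resid / n) + 2.0 * eff_dof

rng = np.random.default_rng(0)
x = rng.normal(size=(100, 5))
y = x[:, 0] - 2 * x[:, 1] + 0.1 * rng.normal(size=100)
print(continuous_aic(x, y, alphas=np.array([0.0, 0.0, 1e6, 1e6, 1e6])))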