Publication


Featured research published by Rustam Stolkin.


Information Sciences | 2015

Dynamic-context cooperative quantum-behaved particle swarm optimization based on multilevel thresholding applied to medical image segmentation

Yangyang Li; Licheng Jiao; Ronghua Shang; Rustam Stolkin

This paper proposes a dynamic-context cooperative quantum-behaved particle swarm optimization algorithm. The proposed algorithm incorporates a new method for dynamically updating the context vector each time it completes a cooperation operation with other particles. We first explain how this leads to enhanced search ability and improved optimization over previous methods, and demonstrate this empirically with comparative experiments using benchmark test functions. We then demonstrate a practical application of the proposed method by showing how it can optimize the threshold parameters of Otsu image segmentation for processing medical images. Comparative experimental results show that the proposed method outperforms other state-of-the-art methods from the literature.
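
To make the optimization target concrete: in multilevel Otsu thresholding, the quantity a swarm optimizer evaluates for each candidate set of thresholds is the between-class variance of the resulting grey-level classes. The following Python sketch is illustrative only (the histogram and threshold values are toy assumptions, and none of the QPSO machinery is shown); it is not the authors' implementation.

import numpy as np

def otsu_between_class_variance(hist, thresholds):
    # Fitness for multilevel Otsu thresholding: the between-class
    # variance of the classes induced by the thresholds. A (QPSO-style)
    # optimizer searches over 'thresholds' to maximize this value.
    levels = np.arange(len(hist))
    edges = [0] + sorted(thresholds) + [len(hist)]
    total_mean = np.sum(levels * hist)
    variance = 0.0
    for lo, hi in zip(edges[:-1], edges[1:]):
        w = hist[lo:hi].sum()                              # class probability
        if w > 0:
            mu = np.sum(levels[lo:hi] * hist[lo:hi]) / w   # class mean
            variance += w * (mu - total_mean) ** 2
    return variance

# Toy usage: a random 8-bit image and two thresholds (three classes).
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(64, 64))
hist = np.bincount(img.ravel(), minlength=256) / img.size
print(otsu_between_class_variance(hist, [85, 170]))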


IEEE Transactions on Evolutionary Computation | 2014

An Evolutionary Multiobjective Approach to Sparse Reconstruction

Lin Li; Xin Yao; Rustam Stolkin; Maoguo Gong; Shan He

This paper addresses the problem of finding sparse solutions to linear systems. Although this problem involves two competing cost function terms (measurement error and a sparsity-inducing term), previous approaches combine these into a single cost term and solve the problem using conventional numerical optimization methods. In contrast, the main contribution of this paper is to use a multiobjective approach. The paper begins by investigating the sparse reconstruction problem, and presents data to show that knee regions do exist on the Pareto front (PF) for this problem and that optimal solutions can be found in these knee regions. Another contribution of the paper, a new soft-thresholding evolutionary multiobjective algorithm (StEMO), is then presented, which uses a soft-thresholding technique to incorporate two additional heuristics: one that tends to increase the speed of convergence toward the PF, and another that tends to improve the spread of solutions along the PF, enabling an optimal solution to be found in the knee region. Experiments show that StEMO significantly outperforms five other well-known techniques that are commonly used for sparse reconstruction. Practical applications to the fundamental problems of recovering signals and images from noisy data are also demonstrated.
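
The two competing cost terms, and the soft-thresholding operator that the algorithm's heuristics are built around, are standard and easy to state. The sketch below is a minimal illustration under toy assumptions, not the StEMO implementation.

import numpy as np

def objectives(A, b, x):
    # The two conflicting costs: measurement error and sparsity.
    return np.linalg.norm(A @ x - b) ** 2, np.sum(np.abs(x))

def soft_threshold(x, lam):
    # Shrink every coefficient towards zero by lam; coefficients
    # smaller than lam in magnitude are set exactly to zero.
    return np.sign(x) * np.maximum(np.abs(x) - lam, 0.0)

# Toy usage: an underdetermined system with a sparse ground truth.
rng = np.random.default_rng(1)
A = rng.standard_normal((20, 50))
x_true = np.zeros(50)
x_true[[3, 17, 40]] = [1.0, -2.0, 0.5]
b = A @ x_true
x = soft_threshold(A.T @ b / 20.0, lam=0.5)    # one crude shrinkage step
print(objectives(A, b, x))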


Proceedings of the Sixth International Conference | 2006

An Adaptive Background Model for Camshift Tracking with a Moving Camera

Rustam Stolkin; Ionut Florescu; George Kamberov

Continuously Adaptive Mean Shift (CAMSHIFT) is a popular algorithm for visual tracking, providing speed and robustness with minimal training and computational cost. While it performs well with a fixed camera and static background scene, it can fail rapidly when the camera moves or the background changes, since it relies on static models of both the background and the tracked object. Furthermore, it is unable to track objects passing in front of backgrounds with which they share significant colours. We describe a new algorithm, the Adaptive Background CAMSHIFT (ABCshift), which addresses both of these problems by using a background model that can be continuously relearned for every frame with minimal additional computational expense. Further, we show how adaptive background relearning can occasionally lead to a particular mode of instability, which we resolve by comparing background and tracked object distributions using a metric based on the Bhattacharyya coefficient.
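
The instability check rests on the Bhattacharyya coefficient between the tracked object's colour histogram and the relearned background histogram: when the two distributions become too similar, relearning is unsafe. A minimal sketch, assuming simple normalized colour histograms:

import numpy as np

def bhattacharyya_coefficient(p, q):
    # Similarity of two normalized histograms, in [0, 1]. Values near 1
    # mean the object and background colour models are nearly
    # indistinguishable, the condition under which adaptive background
    # relearning can destabilize the tracker.
    return float(np.sum(np.sqrt(p * q)))

# Toy usage: 16-bin hue histograms for object and background.
rng = np.random.default_rng(2)
obj = rng.random(16); obj /= obj.sum()
bg = rng.random(16); bg /= bg.sum()
print(bhattacharyya_coefficient(obj, bg))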


International Conference on Robotics and Automation | 2011

Learning to predict how rigid objects behave under simple manipulation

Marek Sewer Kopicki; Sebastian Zurek; Rustam Stolkin; Thomas Mörwald; Jeremy L. Wyatt

An important problem in robotic manipulation is predicting how objects behave under manipulative actions, an ability that is necessary for planning object manipulations. Physics simulators can be used to do this, but they model many kinds of object interaction poorly. An alternative is to learn a motion model for objects by interacting with them. In this paper we address the problem of learning to predict the interactions of rigid bodies in a probabilistic framework, and demonstrate the results in the domain of robotic push manipulation. A robot arm applies random pushes to various objects and observes the resulting motion with a vision system. The relationship between push actions and object motions is learned, enabling the robot to predict the motions that will result from new pushes. The learning does not make explicit use of physics knowledge, or any pre-coded physical constraints, nor is it even restricted to domains which obey any particular rules of physics. We use regression to learn efficiently how to predict the gross motion of a particular object. We further show how different density functions can encode different kinds of information about the behaviour of interacting objects. By combining these as a product of densities, we show how learned predictors can generalise, to a degree, to previously unencountered object shapes subjected to previously unencountered push directions. Performance is evaluated through a combination of virtual experiments in a physics simulator and real experiments with a 5-axis arm equipped with a simple, rigid finger.
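
The product-of-densities idea can be illustrated in a few lines: each learned expert assigns a density over candidate object motions, and the combined prediction is the candidate that maximizes their product. The experts, numbers, and one-dimensional setting below are toy assumptions, not the paper's learned predictors.

import numpy as np

def gaussian(x, mean, var):
    return np.exp(-0.5 * (x - mean) ** 2 / var) / np.sqrt(2 * np.pi * var)

# Two hypothetical experts scoring a candidate object displacement:
# e.g. one learned from gross object motion, one from local contact.
candidates = np.linspace(-0.1, 0.2, 301)    # candidate displacements (m)
p_motion = gaussian(candidates, mean=0.05, var=0.0004)
p_contact = gaussian(candidates, mean=0.07, var=0.0009)

combined = p_motion * p_contact             # product of densities
print("predicted displacement:", candidates[np.argmax(combined)])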


IEEE Sensors Journal | 2014

Particle Filter Tracking of Camouflaged Targets by Adaptive Fusion of Thermal and Visible Spectra Camera Data

Mohammed Talha; Rustam Stolkin

This paper presents a method for tracking a moving target by fusing bi-modal visual information from a deep infra-red thermal imaging camera and a conventional visible spectrum color camera. The tracking method builds on well-known methods for color-based tracking using particle filtering, but it extends these to handle fusion of color and thermal information when evaluating each particle. The key innovation is a method for continuously relearning local background models for each particle in each imaging modality, comparing these against a model of the foreground object being tracked, and thereby adaptively weighting the data fusion process in favor of whichever imaging modality is currently the most discriminating at each successive frame. The method is evaluated by testing on a variety of extremely challenging video sequences, in which people and other targets are tracked past occlusion, clutter, and distracters causing severe and sustained camouflage conditions in one or both imaging modalities.
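
In sketch form, the fusion step weights each particle's per-modality likelihoods by how discriminative each modality currently is. Everything below (the scoring function, the numbers, and the names) is an illustrative assumption rather than the paper's exact formulation.

import numpy as np

def discriminability(fg_score, bg_score):
    # Crude stand-in: how much better the foreground model matches the
    # image than the local background model does, in one modality.
    return max(fg_score - bg_score, 1e-6)

def fused_weight(lik_visible, lik_thermal, alpha):
    # Blend per-particle likelihoods from the two cameras; alpha in
    # [0, 1] favours whichever modality is currently more discriminative.
    return alpha * lik_visible + (1.0 - alpha) * lik_thermal

# Toy usage: thermal currently separates target from background better,
# so its likelihood dominates the fused particle weight.
d_vis = discriminability(0.55, 0.50)
d_thm = discriminability(0.80, 0.30)
alpha = d_vis / (d_vis + d_thm)
print(fused_weight(lik_visible=0.6, lik_thermal=0.9, alpha=alpha))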


Information Sciences | 2013

A novel selection evolutionary strategy for constrained optimization

Licheng Jiao; Lin Li; Ronghua Shang; Fang Liu; Rustam Stolkin

The existence of infeasible solutions makes it very difficult to handle constrained optimization problems (COPs) in a way that ensures efficient, optimal and constraint-satisfying convergence. Although further optimization from feasible solutions will typically lead in a direction that generates further feasible solutions, certain infeasible solutions can also provide useful information about the optimal direction of improvement for the objective function. How well an algorithm makes use of these two kinds of solutions determines its performance on COPs. This paper proposes a novel selection evolutionary strategy (NSES) for constrained optimization. A self-adaptive selection method is introduced to exploit both informative infeasible and feasible solutions, from the perspective of combining feasibility rules with multi-objective problem (MOP) techniques. Since the global optimal solution of a COP is a feasible non-dominated solution, both non-dominated solutions with low constraint violation and feasible solutions with low objective values are beneficial to the evolution process. Thus, both of these kinds of solutions are favoured during the selection procedure. Several theorems and properties are given to prove the above assertion. Furthermore, the performance of our method is evaluated using 22 well-known benchmark functions. Experimental results show that the proposed method outperforms state-of-the-art algorithms in terms of the speed of finding feasible solutions and the stability of converging to global optimal solutions. In particular, when dealing with problems that have zero feasibility ratios and more than one active constraint, our method provides feasible solutions within fewer fitness evaluations (FEs) and converges to the optimal solutions more reliably than other popular methods from the literature.
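
A toy version of such a selection rule (illustrative only, not the NSES operator itself) keeps feasible solutions ranked by objective value alongside a share of low-violation infeasible solutions:

import numpy as np

def select(population, f, violation, k):
    # Keep k solutions: feasible ones compete on objective value, while
    # infeasible ones with low constraint violation survive because they
    # can point towards the constrained optimum.
    feasible = sorted((x for x in population if violation(x) == 0), key=f)
    infeasible = sorted((x for x in population if violation(x) > 0),
                        key=lambda x: (violation(x), f(x)))
    n_feas = min(len(feasible), int(0.7 * k))   # illustrative 70/30 split
    return feasible[:n_feas] + infeasible[:k - n_feas]

# Toy usage: minimize f(x) = x^2 subject to x >= 1 (optimum at x = 1).
f = lambda x: x * x
violation = lambda x: max(1.0 - x, 0.0)
pop = list(np.linspace(-2.0, 3.0, 21))
print(select(pop, f, violation, k=5))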


The International Journal of Robotics Research | 2016

One-shot learning and generation of dexterous grasps for novel objects

Marek Sewer Kopicki; Renaud Detry; Maxime Adjigble; Rustam Stolkin; Aleš Leonardis; Jeremy L. Wyatt

This paper presents a method for one-shot learning of dexterous grasps and grasp generation for novel objects. A model of each grasp type is learned from a single kinesthetic demonstration, and several types are taught. These models are used to select and generate grasps for unfamiliar objects. Both the learning and generation stages use an incomplete point cloud from a depth camera, so no prior model of an object's shape is used. The learned model is a product of experts, in which the experts are of two types. The first type is a contact model: a density over the pose of a single hand link relative to the local object surface. The second type is the hand-configuration model: a density over the whole-hand configuration. Grasp generation for an unfamiliar object optimizes the product of these two model types, generating thousands of grasp candidates in under 30 seconds. The method is robust to incomplete data at both training and testing stages. When several grasp types are considered, the method selects the highest-likelihood grasp across all the types. In an experiment, the training set consisted of five different grasps and the test set of 45 previously unseen objects. The success rate of the first-choice grasp is 84.4% when seven views of the test object are taken, and 77.7% with a single view.
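
Selecting a grasp then amounts to maximizing a product of experts over candidates and taught grasp types, i.e. a sum of log-densities. The sketch below uses random stand-in densities and invented grasp-type names purely for illustration.

import numpy as np

# Hypothetical log-densities for 2000 candidates under each taught type:
# one expert scores link-surface contacts, the other whole-hand shape.
rng = np.random.default_rng(3)
types = ["pinch", "power", "rim"]           # illustrative grasp types
log_contact = {t: rng.normal(size=2000) for t in types}
log_config = {t: rng.normal(size=2000) for t in types}

# Product of experts = sum of log-densities; pick the single
# highest-likelihood candidate across all grasp types.
best = max(
    ((t, int(np.argmax(log_contact[t] + log_config[t])),
      float(np.max(log_contact[t] + log_config[t]))) for t in types),
    key=lambda r: r[2],
)
print("type %s, candidate %d, log-likelihood %.2f" % best)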


Computer Vision and Pattern Recognition | 2015

Single target tracking using adaptive clustered decision trees and dynamic multi-level appearance models

Jingjing Xiao; Rustam Stolkin; Aleš Leonardis

This paper presents a method for single target tracking of arbitrary objects in challenging video sequences. Targets are modeled at three different levels of granularity (pixel level, parts-based level and bounding box level), which are cross-constrained to enable robust model relearning. The main contribution is an adaptive clustered decision tree method which dynamically selects the minimum combination of features necessary to sufficiently represent each target part at each frame, thereby providing robustness with computational efficiency. The adaptive clustered decision tree is implemented in two separate parts of the tracking algorithm: firstly to enable robust matching at the parts-based level between successive frames; and secondly to select the best superpixels for learning new parts of the target. We have tested the tracker using two different tracking benchmarks (VOT2013-2014 and CVPR2013 tracking challenges), based on two different test methodologies, and show it to be significantly more robust than the best state-of-the-art methods from both of those tracking challenges, while also offering competitive tracking precision.
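
The core selection step can be caricatured as a greedy loop: order features by how well they currently separate a target part from its surroundings, and stop adding them once the part is represented well enough. The scores and threshold below are toy assumptions, not the tracker's actual criterion.

import numpy as np

def minimal_feature_set(scores, threshold):
    # Greedily add the most discriminative features for one target part
    # until the accumulated separation from the background exceeds the
    # threshold, keeping the per-part representation as cheap as possible.
    order = np.argsort(scores)[::-1]        # best features first
    chosen, total = [], 0.0
    for idx in order:
        chosen.append(int(idx))
        total += scores[idx]
        if total >= threshold:
            break
    return chosen

# Toy usage: per-feature discriminability (say colour, texture,
# gradient, motion) for one part; only two features are needed here.
scores = np.array([0.50, 0.10, 0.30, 0.05])
print(minimal_feature_set(scores, threshold=0.7))   # -> [0, 2]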


Image and Vision Computing | 2008

An EM/E-MRF algorithm for adaptive model based tracking in extremely poor visibility

Rustam Stolkin; Alistair Greig; Mark Hodgetts; John Gilby

This paper addresses the problem of visual tracking in conditions of extremely poor visibility. The human visual system can often correctly interpret images that are of such poor quality that they contain insufficient explicit information to do so. We assert that such systems must therefore make use of prior knowledge in several forms. A tracking algorithm is presented which combines observed data (the current image) with predicted data derived from prior knowledge of the object being viewed and an estimate of the camera's motion. During image segmentation, a predicted image is used to estimate class conditional distribution models, and an Extended-Markov Random Field technique is used to combine observed image data with expectations of that data within a probabilistic framework. Interpretations of scene content and camera position are then mutually improved using Expectation Maximisation. Models of the background and tracked object are continually relearned and adapt iteratively with each new image frame. The algorithm is tested using real video sequences, filmed in poor visibility conditions with complete pre-measured ground-truth data.
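
The alternation at the heart of the method can be shown with a deliberately simplified hard-EM toy: classify pixels given predicted class models, then re-estimate the models from that labelling. This one-dimensional sketch omits the E-MRF spatial term and the pose estimation entirely, and is not the paper's algorithm.

import numpy as np

# E-step: label each pixel with the nearest class mean (object vs
# background); M-step: relearn the class means from those labels.
rng = np.random.default_rng(4)
pixels = np.concatenate([rng.normal(0.3, 0.05, 500),    # background
                         rng.normal(0.7, 0.05, 200)])   # object
mu = np.array([0.2, 0.8])          # means predicted from prior knowledge
for _ in range(10):
    labels = np.argmin(np.abs(pixels[:, None] - mu), axis=1)        # E-step
    mu = np.array([pixels[labels == k].mean() for k in (0, 1)])     # M-step
print("relearned class means:", mu)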


International Conference on Computer Vision | 2013

An Enhanced Adaptive Coupled-Layer LGTracker++

Jingjing Xiao; Rustam Stolkin; Aleš Leonardis

This paper addresses the problem of tracking targets which undergo rapid and significant appearance changes. Our starting point is a successful, state-of-the-art tracker based on an adaptive coupled-layer visual model [10]. We identify four important cases in which the original tracker often fails: significant scale changes, environment clutter, occlusion, and rapid disordered movement. We suggest four corresponding enhancements: the scale of the patches is adapted in addition to the bounding box; marginal patch distributions are used to counter patch drift in environment clutter; a memory is added to assist recovery from occlusion; and situations where the tracker may lose the target are automatically detected, with a particle filter substituted for the Kalman filter to help recover the target. Using a test toolkit [17], we demonstrate the advantages of the enhanced tracker over the original tracker, as well as over several other state-of-the-art trackers from the literature.
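
The last enhancement, replacing the Kalman filter, can be illustrated with a minimal one-dimensional bootstrap particle filter step; unlike a Kalman filter it maintains multiple hypotheses, which is what helps re-acquire a lost target. All values below are toy assumptions.

import numpy as np

def particle_filter_step(particles, weights, measurement,
                         motion_std, meas_std, rng):
    # Diffuse particles with a random-walk motion model, reweight them
    # by a Gaussian measurement likelihood, then resample.
    particles = particles + rng.normal(0.0, motion_std, size=particles.shape)
    weights = weights * np.exp(-0.5 * ((particles - measurement) / meas_std) ** 2)
    weights /= weights.sum()
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    return particles[idx], np.full(len(particles), 1.0 / len(particles))

# Toy usage: track a 1-D target position over a few frames.
rng = np.random.default_rng(5)
p = np.zeros(100)
w = np.full(100, 1.0 / 100)
for z in [0.1, 0.25, 0.4]:                  # noisy position measurements
    p, w = particle_filter_step(p, w, z, motion_std=0.2, meas_std=0.1, rng=rng)
print("estimated position:", p.mean())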

Collaboration


Dive into Rustam Stolkin's collaborations.

Top Co-Authors

Jeffrey A. Kuo (National Nuclear Laboratory)
Liesl Hotaling (Stevens Institute of Technology)
Alistair Greig (University College London)
Jingjing Xiao (University of Birmingham)