Uday Kumar Chakraborty
University of Missouri–St. Louis
Publications
Featured research published by Uday Kumar Chakraborty.
IEEE Transactions on Evolutionary Computation | 2009
Swagatam Das; Ajith Abraham; Uday Kumar Chakraborty; Amit Konar
Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. It has reportedly outperformed a few evolutionary algorithms (EAs) and other search heuristics like the particle swarm optimization (PSO) when tested over both benchmark and real-world problems. DE, however, is not completely free from the problems of slow and/or premature convergence. This paper describes a family of improved variants of the DE/target-to-best/1/bin scheme, which utilizes the concept of the neighborhood of each population member. The idea of small neighborhoods, defined over the index-graph of parameter vectors, draws inspiration from the community of the PSO algorithms. The proposed schemes balance the exploration and exploitation abilities of DE without imposing serious additional burdens in terms of function evaluations. They are shown to be statistically significantly better than or at least comparable to several existing DE variants as well as a few other significant evolutionary computing techniques over a test suite of 24 benchmark functions. The paper also investigates the applications of the new DE variants to two real-life problems concerning parameter estimation for frequency modulated sound waves and spread spectrum radar poly-phase code design.
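The local-neighborhood mutation described above can be sketched roughly as follows. This is an illustrative sketch only, not the paper's exact formulation: the ring topology over population indices, the neighborhood radius k, and the coefficients alpha and beta are assumptions.

```python
import numpy as np

def local_mutation(pop, fitness, i, k=2, alpha=0.8, beta=0.8, rng=None):
    """Create a donor vector for member i using only its ring neighborhood.

    The neighborhood is defined on the index graph of parameter vectors:
    members i-k .. i+k (mod NP), mirroring the small-neighborhood idea.
    Illustrative sketch; not the paper's exact scheme.
    """
    if rng is None:
        rng = np.random.default_rng()
    NP = len(pop)
    nbrs = [(i + j) % NP for j in range(-k, k + 1)]
    # best member within the local neighborhood (minimization assumed)
    nbest = min(nbrs, key=lambda j: fitness[j])
    # two distinct random neighbors other than i for the difference term
    p, q = rng.choice([j for j in nbrs if j != i], size=2, replace=False)
    return pop[i] + alpha * (pop[nbest] - pop[i]) + beta * (pop[p] - pop[q])
```

Because the donor depends only on a small index window rather than the global best, exploration is preserved while the local-best term still provides exploitation pressure.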
Genetic and Evolutionary Computation Conference | 2005
Swagatam Das; Amit Konar; Uday Kumar Chakraborty
Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. In this paper we present two new, improved variants of DE. Performance comparisons of the two proposed methods are provided against (a) the original DE, (b) the canonical particle swarm optimization (PSO), and (c) two PSO-variants. The new DE-variants are shown to be statistically significantly better on a seven-function test bed for the following performance measures: solution quality, time to find the solution, frequency of finding the solution, and scalability.
Systems, Man and Cybernetics | 2009
Aruna Chakraborty; Amit Konar; Uday Kumar Chakraborty; Amita Chatterjee
This paper presents a fuzzy relational approach to human emotion recognition from facial expressions and its control. The proposed scheme uses external stimulus to excite specific emotions in human subjects whose facial expressions are analyzed by segmenting and localizing the individual frames into regions of interest. Selected facial features such as eye opening, mouth opening, and the length of eyebrow constriction are extracted from the localized regions, fuzzified, and mapped onto an emotion space by employing Mamdani-type relational models. A scheme for the validation of the system parameters is also presented. This paper also provides a fuzzy scheme for controlling the transition of emotion dynamics toward a desired state. Experimental results and computer simulations indicate that the proposed scheme for emotion recognition and control is simple and robust, with good accuracy.
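A toy version of the fuzzify-then-compose pipeline might look like this. Only the three feature names come from the abstract; the triangular membership parameters, the emotion labels, and the relational matrix R are invented for illustration and are not the paper's validated values.

```python
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function on [a, c] peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzify(eye, mouth, brow):
    """Map normalized facial features to LOW/HIGH membership grades."""
    return np.array([
        tri(eye,   0.0, 0.2, 0.5), tri(eye,   0.3, 0.8, 1.0),  # eye opening LOW/HIGH
        tri(mouth, 0.0, 0.2, 0.5), tri(mouth, 0.3, 0.8, 1.0),  # mouth opening LOW/HIGH
        tri(brow,  0.0, 0.2, 0.5), tri(brow,  0.3, 0.8, 1.0),  # eyebrow constriction LOW/HIGH
    ])

EMOTIONS = ["anxiety", "happiness", "sadness"]  # illustrative labels
R = np.array([  # fuzzy relation: rows = feature fuzzy sets, cols = emotions
    [0.9, 0.1, 0.6],
    [0.2, 0.8, 0.1],
    [0.7, 0.1, 0.8],
    [0.1, 0.9, 0.2],
    [0.8, 0.2, 0.7],
    [0.3, 0.7, 0.1],
])

def infer(features):
    """Mamdani-style max-min composition: mu_e = max_j min(mu_j, R[j, e])."""
    return np.minimum(features[:, None], R).max(axis=0)
```

The output is a membership grade per emotion; taking the argmax gives the recognized emotion class.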
Congress on Evolutionary Computation | 2005
Swagatam Das; Amit Konar; Uday Kumar Chakraborty
Differential evolution (DE) is a simple and efficient algorithm for function optimization over continuous spaces. It has reportedly outperformed many types of evolutionary algorithms and other search heuristics when tested over both benchmark and real-world problems. However, the performance of DE deteriorates severely if the fitness function is noisy and continuously changing. In this paper two improved DE algorithms have been proposed that can efficiently find the global optima of noisy functions. This is achieved firstly by weighting the difference vector by a random scale factor and secondly by employing two novel selection strategies as opposed to the conventional one used in the original versions of DE. An extensive performance comparison of the newly proposed scheme, the original DE (DE/Rand/1/Exp), the canonical PSO and the standard real-coded EA has been presented using well-known benchmarks corrupted by zero-mean Gaussian noise. It has been found that the proposed method outperforms the others in a statistically significant way.
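The random-scale-factor idea can be sketched as a DE/rand/1 donor whose scale factor is redrawn on every mutation. The uniform range [0.5, 1.0) below is an assumption for illustration, not necessarily the paper's exact choice.

```python
import numpy as np

def random_scale_donor(pop, i, rng):
    """DE/rand/1 donor with a freshly drawn random scale factor.

    Redrawing F for every mutation varies the step lengths, which helps
    the search cope with zero-mean noise in the fitness landscape
    instead of committing to one fixed step size.
    """
    NP = len(pop)
    r1, r2, r3 = rng.choice([j for j in range(NP) if j != i],
                            size=3, replace=False)
    F = rng.uniform(0.5, 1.0)  # random scale factor, drawn anew each call
    return pop[r1] + F * (pop[r2] - pop[r3])
```

The paper pairs this with two modified selection strategies; those are not reproduced here since the abstract does not specify them.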
IEEE International Conference on Evolutionary Computation | 2006
Uday Kumar Chakraborty; Swagatam Das; Amit Konar
Differential evolution (DE) is well known as a simple and efficient scheme for global optimization over continuous spaces. It is, however, not free from the problem of slow and premature convergence. In this paper we present an improved variant of the classical DE2 scheme, by utilizing the concept of the local neighborhood of each vector. This scheme attempts to balance the exploration and exploitation abilities of DE without requiring additional function evaluations. The new scheme is shown to be statistically significantly better than three other popular DE variants on a six-function test-bed and also on two real-world optimization problems with respect to the following performance measures: solution quality, time to find the solution, frequency of finding the solution, and scalability.
Genetic and Evolutionary Computation Conference | 2005
Swagatam Das; Amit Konar; Uday Kumar Chakraborty
This paper introduces a novel scheme of improving the performance of particle swarm optimization (PSO) by a vector differential operator borrowed from differential evolution (DE). Performance comparisons of the proposed method are provided against (a) the original DE, (b) the canonical PSO, and (c) three recent, high-performance PSO-variants. The new algorithm is shown to be statistically significantly better on a seven-function test suite for the following performance measures: solution quality, time to find the solution, frequency of finding the solution, and scalability.
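One way such a hybrid velocity update can be sketched is to replace the usual cognitive term with a DE-style difference of two random member positions. The coefficient values below are illustrative assumptions, not the paper's tuned settings.

```python
import numpy as np

def hybrid_velocity(v, x, pop, gbest, rng, omega=0.7, beta=0.8, c2=1.5):
    """One PSO velocity update perturbed by a DE difference vector.

    The cognitive (personal-best) term is replaced by a perturbation
    built from the difference of two randomly chosen member positions,
    i.e. the vector differential operator borrowed from DE.
    """
    r1, r2 = rng.choice(len(pop), size=2, replace=False)
    delta = pop[r1] - pop[r2]            # DE-style difference vector
    r = rng.random(len(x))               # per-dimension random weights
    return omega * v + beta * delta + c2 * r * (gbest - x)
```

The position update then proceeds as in standard PSO: x_new = x + v_new.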
Information Sciences | 2005
Amit Konar; Uday Kumar Chakraborty
This paper presents a new model for unsupervised learning and reasoning on a special type of cognitive maps realized with Petri nets. The unsupervised learning process in the present context adapts the weights of the directed arcs from transitions to places in the Petri net. A Hebbian-type learning algorithm with a natural decay in weights is employed to study the dynamic behavior of the algorithm. The algorithm is conditionally stable for a suitable range of the mortality rate. After convergence of the learning algorithm, the network may be used for computing the beliefs of the desired propositions from the supplied beliefs of the axioms (places with no input arcs). Because of the conditional stability of the algorithm, it may be used in complex decision-making and learning such as automated car driving in an accident-prone environment. The paper also presents a new model for knowledge refinement by adaptation of weights in a fuzzy Petri net using a different form of Hebbian learning. This second model converges to stable points in both encoding and recall phases.
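A generic Hebbian-with-decay arc-weight update consistent with this description can be written in one line; the exact learning law in the paper may differ, and the rate names below are assumptions.

```python
def hebbian_decay_update(w, pre, post, eta=0.1, mu=0.05):
    """One Hebbian update with natural weight decay on a
    transition-to-place arc weight.

    w    : current arc weight
    pre  : belief/activation feeding the transition
    post : belief at the output place
    eta  : learning rate
    mu   : decay ("mortality") rate; stability holds only for a
           suitable range of mu, matching the conditional stability
           noted in the abstract
    """
    return w + eta * pre * post - mu * w
```

For constant pre and post activity the iteration converges to the fixed point w* = eta * pre * post / mu (e.g. 0.1 / 0.05 = 2.0 for unit activities), illustrating why the decay rate governs stability.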
Evolutionary Computation | 1996
Uday Kumar Chakraborty; Kalyanmoy Deb; Mandira Chakraborty
A Markov chain framework is developed for analyzing a wide variety of selection techniques used in genetic algorithms (GAs) and evolution strategies (ESs). Specifically, we consider linear ranking selection, probabilistic binary tournament selection, deterministic s-ary (s = 3, 4, ...) tournament selection, fitness-proportionate selection, selection in Whitley's GENITOR, selection in (μ, λ)-ES, selection in (μ + λ)-ES, (μ, λ)-linear ranking selection in GAs, (μ + λ)-linear ranking selection in GAs, and selection in Eshelman's CHC algorithm. The analysis enables us to compare and contrast the various selection algorithms with respect to several performance measures based on the probability of takeover. Our analysis is exact; we do not make any assumptions or approximations. Finite population sizes are considered. Our approach is perfectly general, and following the methods of this paper, it is possible to analyze any selection strategy in evolutionary algorithms.
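For binary tournament selection, the exact Markov chain over the number of copies of the best individual can be built directly. The sketch below is a minimal illustration of this style of analysis (it assumes tournaments drawn with replacement and a unique best individual), not a reproduction of the paper's derivations.

```python
import numpy as np
from math import comb

def tournament_transition_matrix(N):
    """Exact transition matrix for deterministic binary tournament
    selection, over states i in {0, ..., N} = number of copies of the
    (unique) best individual in a population of size N.

    A tournament draws two members uniformly with replacement; the best
    individual wins whenever it is sampled, so it is selected with
    probability p_i = 1 - ((N - i)/N)**2. The next generation is N
    independent tournaments, hence Binomial(N, p_i).
    """
    P = np.zeros((N + 1, N + 1))
    for i in range(N + 1):
        p = 1.0 - ((N - i) / N) ** 2
        for j in range(N + 1):
            P[i, j] = comb(N, j) * p**j * (1 - p) ** (N - j)
    return P
```

Iterating an initial distribution (one copy of the best individual) through this matrix gives the exact takeover probability after any number of generations; states 0 and N are absorbing.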
Journal of Intelligent and Fuzzy Systems | 2009
Jayasree Chakraborty; Amit Konar; Lakhmi C. Jain; Uday Kumar Chakraborty
This paper provides an alternative approach to the co-operative multi-robot path planning problem using parallel differential evolution algorithms. Both centralized and distributed realizations for multi-robot path planning have been studied, and the performances of the methods have been compared with respect to a few pre-defined yardsticks. The distributed approach to this problem outperforms its centralized version for multi-robot planning. Relative performance of the distributed version of the differential evolution algorithm has been studied with varying numbers of robots and obstacles. The distributed version of the algorithm is also compared with a PSO-based realization, and the results are competitive.
Journal of Systems Architecture | 2001
R. Bandyopadhyay; Uday Kumar Chakraborty; D. Patranabis
A new method for tuning the parameters of the proportional integral derivative (PID) controller is presented in this paper. The technique is based on the dead-beat control format. A fuzzy inference mechanism is used for predicting the future values of the controller output, while the crisp consequent values of the rulebase of the Takagi–Sugeno model are optimized using a genetic algorithm. The proposal extends the work of R. Bandyopadhyay and D. Patranabis (A new autotuning algorithm for PID controllers using dead-beat format, ISA Trans., accepted for publication), where the rulebase was prepared based on the knowledge of process experts. The use of a genetic algorithm for optimizing the crisp values of the rulebase has considerably improved the performance of the PID autotuner. The proposed algorithm seems to be a complete and generalized PID autotuner, as can be seen from the simulated and experimental results. In all the cases the method shows substantial improvement over the controller tuned with the Ziegler–Nichols formula and the PID controller proposed in (loc cit).
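A minimal real-coded GA of the kind used to tune crisp rulebase consequents might be sketched as follows. The operators (truncation selection, uniform crossover, Gaussian mutation, elitism) and the cost interface are assumptions for illustration, not the authors' exact setup; in the paper the cost would be a control-performance measure evaluated on the closed loop.

```python
import numpy as np

def ga_optimize_consequents(cost, n_rules, pop_size=30, gens=100,
                            sigma=0.3, seed=0):
    """Tune a vector of crisp Takagi-Sugeno consequent values by
    minimizing `cost` with a simple real-coded GA (illustrative sketch).
    """
    rng = np.random.default_rng(seed)
    pop = rng.standard_normal((pop_size, n_rules))
    for _ in range(gens):
        fit = np.array([cost(ind) for ind in pop])
        order = np.argsort(fit)                 # minimization
        parents = pop[order[: pop_size // 2]]   # truncation selection
        children = [parents[0].copy()]          # elitism: keep current best
        while len(children) < pop_size:
            a, b = parents[rng.choice(len(parents), size=2, replace=False)]
            mask = rng.random(n_rules) < 0.5    # uniform crossover
            child = np.where(mask, a, b) + sigma * rng.standard_normal(n_rules)
            children.append(child)
        pop = np.array(children)
    fit = np.array([cost(ind) for ind in pop])
    return pop[np.argmin(fit)]
```

Substituting a closed-loop error integral (e.g. IAE under a step setpoint) for the toy cost turns this sketch into the kind of consequent tuner the abstract describes.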