
Publication


Featured research published by Chilukuri K. Mohan.


Neural Networks | 1992

Original Contribution: Forecasting the behavior of multivariate time series using neural networks

Kanad Chakraborty; Kishan G. Mehrotra; Chilukuri K. Mohan; Sanjay Ranka

This paper presents a neural network approach to multivariate time-series analysis. Real-world observations of flour prices in three cities have been used as a benchmark in our experiments. Feedforward connectionist networks have been designed to model flour prices over the period from August 1972 to November 1980 for the cities of Buffalo, Minneapolis, and Kansas City. Remarkable success has been achieved in training the networks to learn the price curve for each of these cities and in making accurate price predictions. Our results show that the neural network approach is a leading contender among statistical modeling approaches.
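The general approach described above can be sketched as follows. This is a hypothetical illustration, not the paper's architecture or data: the three flour-price series are replaced by synthetic co-moving series, and a small one-hidden-layer network predicts each city's next value from a sliding window of past values.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for three co-moving monthly price series.
t = np.arange(120)
base = np.sin(2 * np.pi * t / 12)
series = np.stack([base + 0.1 * rng.standard_normal(120) for _ in range(3)], axis=1)

window = 6  # months of history fed to the network
X = np.array([series[i:i + window].ravel() for i in range(len(series) - window)])
y = series[window:]  # next-month values for all three cities

# One hidden layer with tanh units and a linear output layer,
# trained by batch gradient descent on squared error.
n_in, n_hid, n_out = X.shape[1], 8, 3
W1 = 0.1 * rng.standard_normal((n_in, n_hid))
W2 = 0.1 * rng.standard_normal((n_hid, n_out))
lr = 0.05
for _ in range(3000):
    H = np.tanh(X @ W1)
    pred = H @ W2
    err = pred - y
    # Backpropagate the squared-error gradient through both layers.
    gW2 = H.T @ err / len(X)
    gH = (err @ W2.T) * (1 - H ** 2)
    gW1 = X.T @ gH / len(X)
    W1 -= lr * gW1
    W2 -= lr * gW2

mse = float(np.mean((np.tanh(X @ W1) @ W2 - y) ** 2))
print(f"training MSE: {mse:.4f}")
```

The window size, hidden-layer width, and learning rate here are arbitrary; the paper's experiments would have tuned such choices against the actual flour-price data.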


IEEE Transactions on Neural Networks | 1995

Efficient classification for multiclass problems using modular neural networks

Rangachari Anand; Kishan G. Mehrotra; Chilukuri K. Mohan; Sanjay Ranka

The rate of convergence of net output error is very low when training feedforward neural networks for multiclass problems using the backpropagation algorithm. While backpropagation will reduce the Euclidean distance between the actual and desired output vectors, the differences between some of the components of these vectors increase in the first iteration. Furthermore, the magnitudes of subsequent weight changes in each iteration are very small, so that many iterations are required to compensate for the increased error in some components in the initial iterations. Our approach is to use a modular network architecture, reducing a K-class problem to a set of K two-class problems, with a separately trained network for each of the simpler problems. Speedups of one order of magnitude have been obtained experimentally, and in some cases convergence was possible using the modular approach but not using a nonmodular network.
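The modular decomposition described above can be sketched as follows. As a hedged simplification, each "module" here is a single logistic unit rather than a separately trained network, and the data are invented; the point is the reduction of a K-class problem to K independently trained two-class problems whose outputs are combined by taking the most confident module.

```python
import numpy as np

rng = np.random.default_rng(1)
K = 3
# Synthetic 2-D data: three Gaussian clusters, one per class.
means = np.array([[0.0, 0.0], [4.0, 0.0], [2.0, 4.0]])
X = np.vstack([m + rng.standard_normal((50, 2)) for m in means])
y = np.repeat(np.arange(K), 50)

def train_module(X, target, steps=500, lr=0.1):
    """Train one 'class k vs. rest' logistic module by gradient descent."""
    w = np.zeros(X.shape[1] + 1)
    Xb = np.hstack([X, np.ones((len(X), 1))])
    for _ in range(steps):
        p = 1 / (1 + np.exp(-Xb @ w))
        w -= lr * Xb.T @ (p - target) / len(X)
    return w

# One independently trained module per class: K two-class problems.
modules = [train_module(X, (y == k).astype(float)) for k in range(K)]

def predict(X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    scores = np.column_stack([Xb @ w for w in modules])
    return scores.argmax(axis=1)  # most confident module wins

acc = float((predict(X) == y).mean())
print(f"training accuracy: {acc:.2f}")
```

Because each module sees a simpler two-class problem, the modules can be trained in parallel and each converges faster than one monolithic K-class network, which is the source of the speedups reported above.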


IEEE Swarm Intelligence Symposium | 2003

Fitness-distance-ratio based particle swarm optimization

Thanmaya Peram; Kalyan Veeramachaneni; Chilukuri K. Mohan

This paper presents a modification of the particle swarm optimization algorithm (PSO) intended to combat the problem of premature convergence observed in many applications of PSO. The proposed new algorithm moves particles towards nearby particles of higher fitness, instead of attracting each particle towards just the best position discovered so far by any particle. This is accomplished by using the ratio of the relative fitness and the distance of other particles to determine the direction in which each component of the particle position needs to be changed. The resulting algorithm (FDR-PSO) is shown to perform significantly better than the original PSO algorithm and some of its variants, on many different benchmark optimization problems. Empirical examination of the evolution of the particles demonstrates that the convergence of the algorithm does not occur at an early phase of particle evolution, unlike PSO. Avoiding premature convergence allows FDR-PSO to continue search for global optima in difficult multimodal optimization problems.
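The fitness-distance-ratio update can be sketched as below on a toy sphere function. This is a minimal illustration, not the paper's implementation: the coefficient values are arbitrary, and for each dimension of each particle the update adds a pull toward the particle whose ratio of fitness improvement to distance in that dimension is largest.

```python
import numpy as np

rng = np.random.default_rng(2)

def sphere(x):
    # Toy minimization benchmark: f(x) = sum of squares, optimum at 0.
    return np.sum(x ** 2, axis=-1)

n, dim, iters = 20, 5, 200
x = rng.uniform(-5, 5, (n, dim))
v = np.zeros((n, dim))
pbest = x.copy()
pbest_f = sphere(pbest)

w, c1, c2, c3 = 0.7, 1.0, 1.0, 1.0  # illustrative coefficients
for _ in range(iters):
    f = sphere(x)
    improved = f < pbest_f
    pbest[improved] = x[improved]
    pbest_f[improved] = f[improved]
    g = pbest[pbest_f.argmin()]  # global best position

    for i in range(n):
        for d in range(dim):
            # Fitness-distance ratio: prefer fitter particles that are
            # close in this dimension (minimization, so the fitness gain
            # of moving toward particle j's best is f[i] - pbest_f[j]).
            dist = np.abs(pbest[:, d] - x[i, d]) + 1e-12
            fdr = (f[i] - pbest_f) / dist
            j = fdr.argmax()
            v[i, d] = (w * v[i, d]
                       + c1 * rng.random() * (pbest[i, d] - x[i, d])
                       + c2 * rng.random() * (g[d] - x[i, d])
                       + c3 * rng.random() * (pbest[j, d] - x[i, d]))
    x = x + v

best = float(pbest_f.min())
print(f"best sphere value found: {best:.4f}")
```

The third velocity term is what distinguishes FDR-PSO from plain PSO: it draws each particle toward nearby high-fitness neighbors rather than only toward the single global best, which is how the algorithm delays premature convergence.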


Congress on Evolutionary Computation | 1999

Particle swarm optimization: surfing the waves

Ender Özcan; Chilukuri K. Mohan

A new optimization method has been proposed by J. Kennedy and R.C. Eberhart (1997; 1995), called Particle Swarm Optimization (PSO). This approach combines social psychology principles and evolutionary computation. It has been applied successfully to nonlinear function optimization and neural network training. Preliminary formal analyses showed that a particle in a simple one-dimensional PSO system follows a path defined by a sinusoidal wave, randomly deciding on both its amplitude and frequency (Y. Shi and R. Eberhart, 1998). The paper takes the next step, generalizing to obtain closed form equations for trajectories of particles in a multi-dimensional search space.
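Under the simplifying assumptions used in such analyses (one particle, one dimension, and a fixed combined acceleration coefficient φ standing in for the random coefficients), the sinusoidal behavior can be sketched as:

```latex
% Deterministic one-dimensional PSO recurrence, with fixed attractor p:
v_{t+1} = v_t + \varphi\,(p - x_t), \qquad x_{t+1} = x_t + v_{t+1}

% Substituting v_t = x_t - x_{t-1} eliminates the velocity:
x_{t+1} - (2 - \varphi)\,x_t + x_{t-1} = \varphi\,p

% The characteristic equation \lambda^2 - (2-\varphi)\lambda + 1 = 0 has
% complex roots e^{\pm i\theta} on the unit circle when 0 < \varphi < 4,
% with \cos\theta = 1 - \varphi/2, giving the sinusoidal trajectory
x_t = p + A \sin(\theta t + \psi)
% where amplitude A and phase \psi are fixed by the initial position and
% velocity; in the stochastic algorithm the random coefficients make the
% effective amplitude and frequency random, as the cited analysis observed.
```

The multi-dimensional generalization pursued in the paper extends this kind of closed-form trajectory analysis beyond the one-dimensional case.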


Genetic and Evolutionary Computation Conference | 2003

Optimization using particle swarms with near neighbor interactions

Kalyan Veeramachaneni; Thanmaya Peram; Chilukuri K. Mohan; Lisa Ann Osadciw

This paper presents a modification of the particle swarm optimization algorithm (PSO) intended to combat the problem of premature convergence observed in many applications of PSO. In the new algorithm, each particle is attracted towards the best previous positions visited by its neighbors, in addition to the other aspects of particle dynamics in PSO. This is accomplished by using the ratio of the relative fitness and the distance of other particles to determine the direction in which each component of the particle position needs to be changed. The resulting algorithm, known as Fitness-Distance-Ratio based PSO (FDR-PSO), is shown to perform significantly better than the original PSO algorithm and several of its variants, on many different benchmark optimization problems. Avoiding premature convergence allows FDR-PSO to continue search for global optima in difficult multimodal optimization problems, reaching better solutions than PSO and several of its variants.


IEEE Transactions on Neural Networks | 1993

An improved algorithm for neural network classification of imbalanced training sets

Rangachari Anand; Kishan G. Mehrotra; Chilukuri K. Mohan; Sanjay Ranka

The backpropagation algorithm converges very slowly for two-class problems in which most of the exemplars belong to one dominant class. An analysis shows that this occurs because the computed net error gradient vector is dominated by the bigger class so much that the net error for the exemplars in the smaller class increases significantly in the initial iteration. The subsequent rate of convergence of the net error is very low. A modified technique for calculating a direction in weight-space which decreases the error for each class is presented. Using this algorithm, the rate of learning for two-class classification problems is accelerated by an order of magnitude.
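The idea of choosing a weight-space direction that decreases the error for each class can be sketched as follows. This is a hedged illustration, not the paper's exact rule: the error gradient is computed separately per class and each class gradient is normalized to unit length before being combined, so the dominant class cannot swamp the minority class's contribution.

```python
import numpy as np

rng = np.random.default_rng(3)

# Imbalanced two-class data: 190 majority vs. 10 minority exemplars.
X0 = rng.standard_normal((190, 2)) + [0, 0]
X1 = rng.standard_normal((10, 2)) + [3, 3]
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(190), np.ones(10)])
Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column

w = np.zeros(3)
for _ in range(500):
    p = 1 / (1 + np.exp(-Xb @ w))
    grads = []
    for cls in (0, 1):
        mask = y == cls
        # Per-class error gradient, averaged within the class.
        g = Xb[mask].T @ (p[mask] - y[mask]) / mask.sum()
        grads.append(g / (np.linalg.norm(g) + 1e-12))  # unit length
    # Descend a direction that decreases the error for both classes.
    w -= 0.1 * (grads[0] + grads[1])

pred = (1 / (1 + np.exp(-Xb @ w)) > 0.5).astype(float)
minority_recall = float(pred[y == 1].mean())
print(f"minority-class recall: {minority_recall:.2f}")
```

With a plain pooled gradient, the 190 majority exemplars would dominate the update direction early in training, which is exactly the slow-convergence behavior the abstract analyzes.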


IEEE Transactions on Neural Networks | 1991

Bounds on the number of samples needed for neural learning

Kishan G. Mehrotra; Chilukuri K. Mohan; Sanjay Ranka

The relationship between the number of hidden nodes in a neural network, the complexity of a multiclass discrimination problem, and the number of samples needed for effective learning is discussed. Bounds on the number of samples needed for effective learning are given. It is shown that Ω(min(d, n) · M) boundary samples are required for successful classification of M clusters of samples using a two-hidden-layer neural network with d-dimensional inputs and n nodes in the first hidden layer.
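The growth term of the bound can be read off directly. The helper below just evaluates min(d, n) · M for some illustrative sizes; an Ω() bound leaves the constant factor unspecified, so these numbers are orders of growth, not exact sample counts.

```python
def boundary_sample_lower_bound(d, n, M):
    """Growth term min(d, n) * M from the Omega() bound (constant omitted).

    d: input dimensionality, n: nodes in the first hidden layer,
    M: number of clusters to be classified.
    """
    return min(d, n) * M

# Illustrative sizes (hypothetical, not from the paper's experiments):
for d, n, M in [(10, 20, 5), (64, 16, 12)]:
    print(d, n, M, "->", boundary_sample_lower_bound(d, n, M))
```

Note that the bound is controlled by the smaller of d and n: once the first hidden layer is wider than the input dimension, adding more first-layer nodes does not raise the required number of boundary samples.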


Systems, Man, and Cybernetics | 2010

A Multiobjective Optimization Approach to Obtain Decision Thresholds for Distributed Detection in Wireless Sensor Networks

Engin Masazade; Ramesh Rajagopalan; Pramod K. Varshney; Chilukuri K. Mohan; Gullu Kiziltas Sendur; Mehmet Keskinoz

For distributed detection in a wireless sensor network, sensors arrive at decisions about a specific event that are then sent to a central fusion center that makes global inference about the event. For such systems, the determination of the decision thresholds for local sensors is an essential task. In this paper, we study the distributed detection problem and evaluate the sensor thresholds by formulating and solving a multiobjective optimization problem, where the objectives are to minimize the probability of error and the total energy consumption of the network. The problem is investigated and solved for two types of fusion schemes: 1) parallel decision fusion and 2) serial decision fusion. The Pareto optimal solutions are obtained using two different multiobjective optimization techniques. The normal boundary intersection (NBI) method converts the multiobjective problem into a number of single objective-constrained subproblems, where each subproblem can be solved with appropriate optimization methods and nondominating sorting genetic algorithm-II (NSGA-II), which is a multiobjective evolutionary algorithm. In our simulations, NBI yielded better and evenly distributed Pareto optimal solutions in a shorter time as compared with NSGA-II. The simulation results show that, instead of only minimizing the probability of error, multiobjective optimization provides a number of design alternatives, which achieve significant energy savings at the cost of slightly increasing the best achievable decision error probability. The simulation results also show that the parallel fusion model achieves better error probability, but the serial fusion model is more efficient in terms of energy consumption.
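The Pareto-optimality notion underlying both NBI and NSGA-II in the work above can be sketched as follows. The candidate values are invented for illustration: each pair is (error probability, energy consumption) for a hypothetical threshold setting, and the filter keeps exactly the solutions no other candidate beats in both objectives.

```python
def dominates(a, b):
    """True if solution a is at least as good as b in every objective
    (lower is better) and strictly better in at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(solutions):
    """Keep the nondominated solutions: the design alternatives a
    decision-maker would actually choose among."""
    return [s for s in solutions
            if not any(dominates(other, s) for other in solutions if other != s)]

# (error probability, energy consumption) for hypothetical threshold choices
candidates = [(0.01, 9.0), (0.02, 5.0), (0.05, 4.8), (0.03, 5.5), (0.02, 7.0)]
front = pareto_front(candidates)
print(front)
```

Every point on this front is a distinct trade-off; this is what the abstract means by multiobjective optimization providing "a number of design alternatives" rather than a single minimum-error operating point.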


Neural Networks | 1996

Characterization of a class of sigmoid functions with applications to neural networks

Anil Ravindran Menon; Kishan G. Mehrotra; Chilukuri K. Mohan; Sanjay Ranka

We study two classes of sigmoids: the simple sigmoids, defined to be odd, asymptotically bounded, completely monotone functions in one variable, and the hyperbolic sigmoids, a proper subset of simple sigmoids and a natural generalization of the hyperbolic tangent. We obtain a complete characterization for the inverses of hyperbolic sigmoids using Euler's incomplete beta functions, and describe composition rules that illustrate how such functions may be synthesized from others. These results are applied to two problems. First, we show that with respect to simple sigmoids the continuous Cohen-Grossberg-Hopfield model can be reduced to the (associated) Legendre differential equations. Second, we show that the effect of using simple sigmoids as node transfer functions in a one-hidden-layer feedforward network with one summing output may be interpreted as representing the output function as a Fourier series sine transform evaluated at the hidden layer node inputs, thus extending and complementing earlier results in this area.


Congress on Evolutionary Computation | 2005

Multi-objective mobile agent routing in wireless sensor networks

Ramesh Rajagopalan; Chilukuri K. Mohan; Pramod K. Varshney; Kishan G. Mehrotra

An approach for data fusion in wireless sensor networks involves the use of mobile agents that selectively visit the sensors and incrementally fuse the data, thereby eliminating the unnecessary transmission of irrelevant or non-critical data. The order of sensors visited along the route determines the quality of the fused data and the communication cost. We model the mobile agent routing problem as a multi-objective optimization problem, maximizing the total detected signal energy while minimizing the energy consumption and path loss. Simulation results show that this problem can be solved successfully using evolutionary multi-objective algorithms such as EMOCA and NSGA-II. This approach also enables choosing between two alternative routing algorithms, to determine which one results in higher detection accuracy.
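How a candidate visit order trades off the competing objectives can be sketched as below. The sensor positions, detected energies, and the use of plain Euclidean tour length as a stand-in for communication cost are all invented for illustration; the paper's model also accounts for path loss.

```python
import math

sensors = {  # id -> (x, y, detected signal energy); hypothetical values
    1: (0.0, 0.0, 2.0),
    2: (1.0, 0.0, 5.0),
    3: (1.0, 1.0, 1.0),
    4: (0.0, 2.0, 4.0),
}

def route_objectives(route):
    """Return (total detected energy, total travel distance) for a visit order.

    Energy is to be maximized, distance (a proxy for communication
    cost) minimized, so routes are compared as multi-objective points.
    """
    energy = sum(sensors[s][2] for s in route)
    dist = sum(math.dist(sensors[a][:2], sensors[b][:2])
               for a, b in zip(route, route[1:]))
    return energy, dist

# Two visit orders over the same sensors: equal energy collected,
# different travel cost, so the shorter tour dominates the longer one.
e1, d1 = route_objectives([1, 2, 3, 4])
e2, d2 = route_objectives([1, 3, 2, 4])
print((e1, d1), (e2, d2))
```

An evolutionary multi-objective algorithm such as EMOCA or NSGA-II would search over permutations (and subsets) of sensors using exactly this kind of per-route objective evaluation.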
