
Publications

Featured research published by Kishan G. Mehrotra.


Neural Networks | 1992

Original Contribution: Forecasting the behavior of multivariate time series using neural networks

Kanad Chakraborty; Kishan G. Mehrotra; Chilukuri K. Mohan; Sanjay Ranka

This paper presents a neural network approach to multivariate time-series analysis. Real-world observations of flour prices in three cities have been used as a benchmark in our experiments. Feedforward connectionist networks have been designed to model flour prices over the period from August 1972 to November 1980 for the cities of Buffalo, Minneapolis, and Kansas City. Remarkable success has been achieved in training the networks to learn the price curve for each of these cities and in making accurate price predictions. Our results show that the neural network approach is a leading contender among statistical modeling approaches.
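The setup the abstract describes, feeding past observations into a feedforward network to predict the next value, rests on turning the series into (window, next-value) pairs. A minimal sketch of that standard windowing step (the function name make_windows is illustrative; the paper's exact network architecture is not reproduced here):

```python
import numpy as np

def make_windows(series, lag):
    """Slide a window of `lag` past values over the series; each window
    becomes an input vector and the value that follows it is the target."""
    X = np.array([series[i:i + lag] for i in range(len(series) - lag)])
    y = np.array(series[lag:])
    return X, y

prices = [1.0, 1.2, 1.1, 1.3, 1.4, 1.35]
X, y = make_windows(prices, lag=3)
# X[0] = [1.0, 1.2, 1.1] and y[0] = 1.3: three past prices predict the next.
```

Each (X[i], y[i]) pair can then be used as one training example for any supervised model, neural or statistical.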


IEEE Transactions on Neural Networks | 1995

Efficient classification for multiclass problems using modular neural networks

Rangachari Anand; Kishan G. Mehrotra; Chilukuri K. Mohan; Sanjay Ranka

The rate of convergence of net output error is very low when training feedforward neural networks for multiclass problems using the backpropagation algorithm. While backpropagation will reduce the Euclidean distance between the actual and desired output vectors, the differences between some of the components of these vectors increase in the first iteration. Furthermore, the magnitudes of subsequent weight changes in each iteration are very small, so that many iterations are required to compensate for the increased error in some components in the initial iterations. Our approach is to use a modular network architecture, reducing a K-class problem to a set of K two-class problems, with a separately trained network for each of the simpler problems. Speedups of one order of magnitude have been obtained experimentally, and in some cases convergence was possible using the modular approach but not using a nonmodular network.
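The decomposition the abstract describes, one separately trained two-class network per class, can be sketched as a one-vs-all wrapper. Here a trivial centroid-based scorer stands in for the small backpropagation-trained network each module would actually use; the class and function names are illustrative:

```python
import numpy as np

class OneVsAllModular:
    """Reduce a K-class problem to K two-class problems, one separately
    trained model per class; prediction takes the highest-scoring module."""
    def __init__(self, fit_binary):
        self.fit_binary = fit_binary   # trains one two-class model
        self.models = {}

    def fit(self, X, y):
        for k in np.unique(y):
            # Target is 1 for class k, 0 for all other classes.
            self.models[k] = self.fit_binary(X, (y == k).astype(float))
        return self

    def predict(self, X):
        classes = sorted(self.models)
        scores = np.column_stack([self.models[k](X) for k in classes])
        return np.array(classes)[scores.argmax(axis=1)]

# Stand-in trainer: scores by closeness to the positive-class centroid
# (the paper trains a small feedforward network for each module instead).
def fit_binary(X, t):
    centroid = X[t == 1].mean(axis=0)
    return lambda Z: -np.linalg.norm(Z - centroid, axis=1)

X = np.array([[0.0], [0.1], [5.0], [5.1], [9.0], [9.2]])
y = np.array([0, 0, 1, 1, 2, 2])
clf = OneVsAllModular(fit_binary).fit(X, y)
```

Because each module solves a simpler problem, the modules can be trained independently, which is where the reported speedups come from.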


IEEE Transactions on Neural Networks | 1993

An improved algorithm for neural network classification of imbalanced training sets

Rangachari Anand; Kishan G. Mehrotra; Chilukuri K. Mohan; Sanjay Ranka

The backpropagation algorithm converges very slowly for two-class problems in which most of the exemplars belong to one dominant class. An analysis shows that this occurs because the computed net error gradient vector is dominated by the bigger class so much that the net error for the exemplars in the smaller class increases significantly in the initial iteration. The subsequent rate of convergence of the net error is very low. A modified technique for calculating a direction in weight-space which decreases the error for each class is presented. Using this algorithm, the rate of learning for two-class classification problems is accelerated by an order of magnitude.
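The core idea, preventing the dominant class from swamping the error gradient, can be illustrated by averaging gradients within each class before combining them. This is a sketch of the idea only, not the paper's exact update rule:

```python
import numpy as np

def balanced_direction(grads, labels):
    """Sum of per-class mean gradients: each class contributes one
    averaged vector, so a dominant class cannot swamp the update."""
    direction = np.zeros_like(grads[0])
    for k in np.unique(labels):
        direction += grads[labels == k].mean(axis=0)
    return direction

# Nine exemplars of class 0 versus one of class 1:
grads = np.array([[1.0, 0.0]] * 9 + [[0.0, 1.0]])
labels = np.array([0] * 9 + [1])
grads.mean(axis=0)                   # plain average: [0.9, 0.1]
balanced_direction(grads, labels)    # balanced:      [1.0, 1.0]
```

With the plain average, the minority class contributes almost nothing to the weight update; the balanced direction decreases the error for both classes at comparable rates.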


Journal of Parallel and Distributed Computing | 2003

Parallel biological sequence comparison using prefix computations

Srinivas Aluru; Natsuhiko Futamura; Kishan G. Mehrotra

We present practical parallel algorithms using prefix computations for various problems that arise in pairwise comparison of biological sequences. We consider both constant and affine gap penalty functions, full-sequence and subsequence matching, and space-saving algorithms. Commonly used sequential algorithms solve the sequence comparison problems in O(mn) time and O(m + n) space, where m and n are the lengths of the sequences being compared. All the algorithms presented in this paper are time optimal with respect to the sequential algorithms and can use O(n/log n) processors where n is the length of the larger sequence. While optimal parallel algorithms for many of these problems are known, we use a simple framework and demonstrate how these problems can be solved systematically using repeated parallel prefix operations. We also present a space-saving algorithm that uses O(m + n/p) space and runs in optimal time where p is the number of the processors used. We implemented the parallel space-saving algorithm and provide experimental results on an IBM SP-2 and a Pentium cluster.
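The building block underlying the framework is the prefix computation (scan): given a sequence and an associative operator, compute all running results. A minimal sequential sketch of the primitive (the parallel algorithms distribute this same operation across processors):

```python
from itertools import accumulate
import operator

def prefix_scan(xs, op=operator.add):
    """Inclusive prefix computation: out[i] = xs[0] op xs[1] op ... op xs[i].
    Any associative operator works, which is what lets the dynamic-
    programming recurrences of sequence alignment be re-expressed as
    repeated scans and parallelized."""
    return list(accumulate(xs, op))

prefix_scan([3, 1, 4, 1, 5])        # running sums:   [3, 4, 8, 9, 14]
prefix_scan([3, 1, 4, 1, 5], max)   # running maxima: [3, 3, 4, 4, 5]
```

Because the operator is associative, the scan can be evaluated in O(n/p + log p) parallel time on p processors, which is how the paper's algorithms stay time optimal with O(n/log n) processors.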


IEEE Transactions on Neural Networks | 1991

Bounds on the number of samples needed for neural learning

Kishan G. Mehrotra; Chilukuri K. Mohan; Sanjay Ranka

The relationship between the number of hidden nodes in a neural network, the complexity of a multiclass discrimination problem, and the number of samples needed for effective learning is discussed. Bounds on the number of samples needed for effective learning are given. It is shown that Ω(min(d, n) · M) boundary samples are required for successful classification of M clusters of samples using a two-hidden-layer neural network with d-dimensional inputs and n nodes in the first hidden layer.


Neural Networks | 1996

Characterization of a class of sigmoid functions with applications to neural networks

Anil Ravindran Menon; Kishan G. Mehrotra; Chilukuri K. Mohan; Sanjay Ranka

We study two classes of sigmoids: the simple sigmoids, defined to be odd, asymptotically bounded, completely monotone functions in one variable, and the hyperbolic sigmoids, a proper subset of simple sigmoids and a natural generalization of the hyperbolic tangent. We obtain a complete characterization for the inverses of hyperbolic sigmoids using Euler's incomplete beta functions, and describe composition rules that illustrate how such functions may be synthesized from others. These results are applied to two problems. First, we show that with respect to simple sigmoids the continuous Cohen-Grossberg-Hopfield model can be reduced to the (associated) Legendre differential equations. Second, we show that the effect of using simple sigmoids as node transfer functions in a one-hidden-layer feedforward network with one summing output may be interpreted as representing the output function as a Fourier series sine transform evaluated at the hidden layer node inputs, thus extending and complementing earlier results in this area.


Congress on Evolutionary Computation | 2005

Multi-objective mobile agent routing in wireless sensor networks

Ramesh Rajagopalan; Chilukuri K. Mohan; Pramod K. Varshney; Kishan G. Mehrotra

An approach for data fusion in wireless sensor networks involves the use of mobile agents that selectively visit the sensors and incrementally fuse the data, thereby eliminating the unnecessary transmission of irrelevant or non-critical data. The order of sensors visited along the route determines the quality of the fused data and the communication cost. We model the mobile agent routing problem as a multi-objective optimization problem, maximizing the total detected signal energy while minimizing the energy consumption and path loss. Simulation results show that this problem can be solved successfully using evolutionary multi-objective algorithms such as EMOCA and NSGA-II. This approach also enables choosing between two alternative routing algorithms, to determine which one results in higher detection accuracy.
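Multi-objective algorithms such as EMOCA and NSGA-II rank candidate routes by Pareto dominance rather than a single score. A minimal sketch of that comparison (helper names are illustrative; objectives are assumed to be minimized, so a quantity to be maximized, like detected signal energy, would be negated first):

```python
def dominates(a, b):
    """True if objective vector `a` Pareto-dominates `b`: no worse in
    every objective and strictly better in at least one."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def pareto_front(points):
    """Keep only the non-dominated objective vectors."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# (energy consumption, path loss) for four candidate routes:
routes = [(1, 5), (2, 2), (3, 1), (4, 4)]
pareto_front(routes)   # (4, 4) is dominated by (2, 2) and drops out
```

The evolutionary algorithms maintain and refine such a front over generations, leaving the final choice among non-dominated routes to the application.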


International Symposium on Intelligent Control | 1990

Sunspot numbers forecasting using neural networks

Ming Li; Kishan G. Mehrotra; Chilukuri K. Mohan; Sanjay Ranka

A recurrent connectionist network has been designed to model sunspot data. The network architecture, sunspot data, and statistical models are described, and experimental results are provided. This preliminary experimental work shows that the network can produce prediction results competitive with those of traditional autoregressive models. The method is not problem-specific and could be applied to other problems in dynamical system modeling, recognition, prediction, and control.


Conference on High Performance Computing (Supercomputing) | 1994

Genetic algorithms for graph partitioning and incremental graph partitioning

Harpal Maini; Kishan G. Mehrotra; Chilukuri K. Mohan; Sanjay Ranka

Partitioning graphs into equally large groups of nodes, minimizing the number of edges between different groups, is an extremely important problem in parallel computing. This paper presents genetic algorithms for suboptimal graph partitioning, with new crossover operators (KNUX, DKNUX) that lead to orders of magnitude improvement over traditional genetic operators in solution quality and speed. Our method can improve on good solutions previously obtained by other algorithms or graph-theoretic heuristics, minimizing the total communication cost or the worst-case communication cost for a single processor. We also extend our algorithm to incremental graph partitioning problems, in which the graph structure or system properties change with time.
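The quantity such a genetic algorithm minimizes is the cut size, the number of edges whose endpoints land in different groups. A minimal sketch of that fitness computation (the crossover operators KNUX and DKNUX themselves are not reproduced here):

```python
def cut_size(edges, part):
    """Number of edges crossing between groups; `part[v]` gives the
    group assigned to node v. Lower is better for the GA's fitness."""
    return sum(1 for u, v in edges if part[u] != part[v])

# A 4-cycle (square) graph, split two different ways:
edges = [(0, 1), (1, 2), (2, 3), (3, 0)]
cut_size(edges, {0: 0, 1: 0, 2: 1, 3: 1})  # adjacent pairs together: 2
cut_size(edges, {0: 0, 1: 1, 2: 0, 3: 1})  # alternating split: 4
```

In the incremental setting, a good partition of the previous graph seeds the population, so only the changed portion of the assignment needs to be re-optimized.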


Archive | 2011

Modern Approaches in Applied Intelligence

Kishan G. Mehrotra; Chilukuri K. Mohan; Jae C. Oh; Pramod K. Varshney; Moonis Ali

The two-volume set LNAI 6703 and LNAI 6704 constitutes the thoroughly refereed conference proceedings of the 24th International Conference on Industrial Engineering and Other Applications of Applied Intelligent Systems, IEA/AIE 2011, held in Syracuse, NY, USA, in June/July 2011. The 92 papers in the proceedings were carefully reviewed and selected from 206 submissions. The papers cover a wide range of topics, including feature extraction, discretization, clustering, classification, diagnosis, data refinement, neural networks, genetic algorithms, learning classifier systems, Bayesian and probabilistic methods, image processing, robotics, navigation, optimization, scheduling, routing, game theory and agents, cognition, emotion, and beliefs.

Collaboration


Dive into Kishan G. Mehrotra's collaboration.

Top Co-Authors

Gouri K. Bhattacharyya
University of Wisconsin-Madison

Richard A. Johnson
University of Wisconsin-Madison