Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Sridhar Narayan is active.

Publication


Featured research published by Sridhar Narayan.


SoutheastCon | 2004

An Algorithm for Swarm-based Color Image Segmentation

Charles E. White; Gene Tagliarini; Sridhar Narayan

Segmentation of nontrivial color images is one of the most difficult tasks in digital image processing. This paper presents a novel color image segmentation algorithm, which uses a biologically inspired paradigm known as swarm intelligence, to segment images based on color similarity. The swarm algorithm employed uses image pixel data and a corresponding segment map to form a context in which stigmergy can occur. The emergent property of the algorithm is that connected segments of similar pixels are found and may later be referenced. We demonstrate the algorithm by applying it to the task of segmenting digital images of butterflies for the purpose of automatic classification.
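The emergent result the abstract describes, connected segments of color-similar pixels recorded in a segment map, can be sketched without the swarm and stigmergy machinery (which the paper does not detail here) as a simple region-growing pass over the pixel grid. The color-distance threshold and 4-neighbourhood below are illustrative assumptions, not parameters from the paper.

```python
from collections import deque

def segment(image, threshold=30):
    """Label 4-connected regions of color-similar pixels.

    image: 2-D list of (r, g, b) tuples; returns a segment map of
    integer labels with the same shape as the image.
    """
    h, w = len(image), len(image[0])
    seg = [[None] * w for _ in range(h)]

    def similar(a, b):
        # Manhattan distance in RGB space as a simple similarity test.
        return sum(abs(x - y) for x, y in zip(a, b)) <= threshold

    label = 0
    for sy in range(h):
        for sx in range(w):
            if seg[sy][sx] is not None:
                continue
            # Flood-fill a new segment from this unlabeled seed pixel.
            seg[sy][sx] = label
            queue = deque([(sy, sx)])
            while queue:
                y, x = queue.popleft()
                for ny, nx in ((y - 1, x), (y + 1, x), (y, x - 1), (y, x + 1)):
                    if (0 <= ny < h and 0 <= nx < w and seg[ny][nx] is None
                            and similar(image[y][x], image[ny][nx])):
                        seg[ny][nx] = label
                        queue.append((ny, nx))
            label += 1
    return seg
```

On a 2x2 image whose top row is black and bottom row is white, the pass yields two segments: the color-similar top pixels share one label and the bottom pixels share another.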


Systems, Man, and Cybernetics | 1996

Enhancing MLP networks using a distributed data representation

Sridhar Narayan; Gene Tagliarini; Edward W. Page

Multilayer perceptron (MLP) networks trained using backpropagation can be slow to converge in many instances. The primary reason for slow learning is the global nature of backpropagation. Another reason is the fact that a neuron in an MLP network functions as a hyperplane separator and is therefore inefficient when applied to classification problems in which decision boundaries are nonlinear. This paper presents a data representational approach that addresses these problems while operating within the framework of the familiar backpropagation model. We examine the use of receptors with overlapping receptive fields as a preprocessing technique for encoding inputs to MLP networks. The proposed data representation scheme, termed ensemble encoding, is shown to promote local learning and to provide enhanced nonlinear separability. Simulation results for well known problems in classification and time-series prediction indicate that the use of ensemble encoding can significantly reduce the time required to train MLP networks. Since the choice of representation for input data is independent of the learning algorithm and the functional form employed in the MLP model, nonlinear preprocessing of network inputs may be an attractive alternative for many MLP network applications.
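The encoding step itself can be sketched as follows. The Gaussian receptor shape, the evenly spaced centers, and the default width equal to the center spacing are illustrative assumptions; the paper's exact receptor form is not reproduced here.

```python
import math

def ensemble_encode(x, n_receptors=5, lo=0.0, hi=1.0, width=None):
    """Encode scalar x as activations of overlapping Gaussian receptive fields.

    Receptor centers are evenly spaced on [lo, hi]; the width defaults to
    the center spacing so that neighbouring fields overlap substantially.
    Requires n_receptors >= 2.
    """
    spacing = (hi - lo) / (n_receptors - 1)
    width = width or spacing
    centers = [lo + i * spacing for i in range(n_receptors)]
    # Each receptor responds most strongly to inputs near its center.
    return [math.exp(-((x - c) / width) ** 2) for c in centers]
```

A d-dimensional network input thus becomes d * n_receptors values, with each scalar activating mainly the few receptors whose fields cover it, which is the locality the abstract credits for faster learning.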


North American Journal of Fisheries Management | 2006

Forecasting Annual Harvests of Atlantic and Gulf Menhaden

Peter J. Hanson; Douglas S. Vaughan; Sridhar Narayan

Continuous records of annual landings and fishing effort exist for the Atlantic purse-seine fishery for Atlantic menhaden Brevoortia tyrannus since 1940 and for the Gulf of Mexico fishery for Gulf menhaden B. patronus since 1946. Currently, year-ahead forecasts of landings from these species-specific fisheries, separated by the Florida peninsula, are provided to the industry by means of multiple-linear-regression models that relate landings and effort over the data series. Here, we compare three methods for this purpose (multiple regression, time series, and artificial neural networks) to determine whether forecast accuracy can be increased. Best-fit models were developed with each method for each fishery, and then 10-year retrospective analyses of 1-year-ahead catch forecasts were compared among the three methods. In general, multiple-regression and artificial neural network models were similar in their fit to the data series, and both were better than time-series models, judging from the Akaike information criterion.


Information Sciences | 1997

The generalized sigmoid activation function: competitive supervised learning

Sridhar Narayan

Multilayer perceptron (MLP) networks trained using backpropagation are perhaps the most commonly used neural network model. Central to the MLP model is the use of neurons with nonlinear and differentiable activation functions. The most commonly used activation function is a sigmoidal function, and frequently all neurons in an MLP network employ the same activation function. In this paper, we introduce the notion of the generalized sigmoid as an activation function for neurons in the output layer of an MLP network. The enhancements afforded by the use of the generalized sigmoid are analyzed and demonstrated in the context of some well-known classification problems.
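The abstract does not reproduce the exact functional form of the generalized sigmoid, so the sketch below uses a softmax-style output layer purely to illustrate the underlying idea of competitive supervised learning: each output unit's activation is normalized over all output units, so the units compete for activation mass.

```python
import math

def competitive_outputs(nets):
    """Softmax-style output layer: activations are normalized by the sum
    over all output units, so units compete for activation mass.

    Illustrative stand-in only; the paper's generalized sigmoid may
    differ in form from this softmax.
    """
    m = max(nets)  # subtract the max for numerical stability
    exps = [math.exp(n - m) for n in nets]
    total = sum(exps)
    return [e / total for e in exps]
```

Because the activations sum to one, raising one unit's net input necessarily suppresses the others, which is what makes the output layer competitive rather than a bank of independent sigmoids.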


Neural Computing and Applications | 2006

Receptive field optimization for ensemble encoding

M. Abdelbar; O. Hassan; A. Tagliarini; Sridhar Narayan

Ensemble encoding is a distributed data representation scheme that uses multiple, overlapping receptive fields to encode inputs to MLP networks. The number, placement, and form of the receptive fields can have a significant impact on the effectiveness of ensemble encoding. We present four approaches for optimizing receptive field configuration, two based on descriptive statistics and two based on clustering, and compare their performance on three benchmark data sets. To ensure fairness of comparison and reduce the effects of random noise, leave-one-out cross-validation is employed and a test of statistical significance is applied to the results.
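A clustering approach of the kind the abstract mentions can be sketched as placing receptor centers at the cluster means of a 1-D k-means pass over the training inputs, so that receptors concentrate where the data is dense. The quantile initialization and the plain Lloyd iteration below are illustrative assumptions, not the paper's specific method.

```python
def kmeans_centers(values, k, iters=50):
    """Place k receptive-field centers (k >= 2) at the cluster means of a
    1-D k-means run over the training inputs."""
    values = sorted(values)
    # Initialize centers spread across the data range by quantile.
    centers = [values[(i * (len(values) - 1)) // (k - 1)] for i in range(k)]
    for _ in range(iters):
        clusters = [[] for _ in range(k)]
        for v in values:
            # Assign each value to its nearest center.
            nearest = min(range(k), key=lambda i: abs(v - centers[i]))
            clusters[nearest].append(v)
        # Move each center to the mean of its cluster (keep empty ones put).
        centers = [sum(c) / len(c) if c else centers[i]
                   for i, c in enumerate(clusters)]
    return centers
```

On clearly bimodal data the centers settle on the two modes, which is exactly where overlapping receptive fields buy the most resolution.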


International Symposium on Neural Networks | 2005

An analysis of underfitting in MLP networks

Sridhar Narayan; Gene Tagliarini

The generalization ability of an MLP network has been shown to be related to both the number and magnitudes of the network weights. Thus, there exists a tension between employing networks with few weights that have relatively large magnitudes, and networks with a greater number of weights with relatively small magnitudes. The analysis presented in this paper indicates that large magnitudes for network weights potentially increase the propensity of a network to interpolate poorly. Experimental results indicate that when bounds are imposed on network weights, the backpropagation algorithm is capable of discovering networks with small weight magnitudes that retain their expressive power and exhibit good generalization.
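One direct way to impose the bounds the abstract describes is to clip each weight back into a fixed interval after every gradient update. The clipping rule, learning rate, and toy single-neuron setup below are illustrative assumptions, not the paper's experimental protocol.

```python
import math

def train_bounded(samples, bound=1.0, lr=0.5, epochs=200):
    """Train a single sigmoid neuron by gradient descent on squared error,
    clipping each parameter into [-bound, bound] after every update."""
    w, b = 0.0, 0.0
    for _ in range(epochs):
        for x, t in samples:
            y = 1.0 / (1.0 + math.exp(-(w * x + b)))
            grad = (y - t) * y * (1.0 - y)   # dE/dnet for squared error
            w -= lr * grad * x
            b -= lr * grad
            # Impose the weight-magnitude bound after the update.
            w = max(-bound, min(bound, w))
            b = max(-bound, min(bound, b))
    return w, b
```

Even with the bound in place, the neuron still learns the increasing mapping in this toy task, mirroring the abstract's claim that bounded weights can retain expressive power.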


International Symposium on Neural Networks | 1999

Enhancing incremental learning in MLP networks using ensemble encoding of network inputs

Sridhar Narayan

Local learning techniques associated with multilayer perceptron (MLP) networks typically employ receptive fields as an integral part of the network. However, data representation schemes that employ multiple, overlapping receptive fields to preprocess network inputs can be another source of local learning in MLP networks. Earlier work has shown that ensemble encoding, a distributed data representation scheme, promotes local learning and can accelerate learning in MLP networks. We demonstrate that networks using ensemble encoding display an enhanced capacity for incremental learning.


International Symposium on Neural Networks | 1999

Using simulated annealing to optimize receptive fields for MLP networks with ensemble encoding

Ashraf M. Abdelbar; Sridhar Narayan

Ensemble encoding is a distributed data representation scheme that uses multiple, overlapping receptive fields to encode inputs to MLP networks. The number, placement, and form of the receptive fields can have a significant impact on the effectiveness of ensemble encoding. This paper demonstrates the use of simulated annealing as an optimization technique to determine the optimal shape and placement of receptive fields. Experimental results in the context of predicting protein localization sites suggest that the proposed technique improves generalization by 30%, when compared to ad hoc techniques for determining receptive field parameters.
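The optimization loop itself can be sketched as generic simulated annealing over a vector of receptive-field parameters: perturb the current parameters, always accept improvements, and accept worse moves with probability exp(-delta/T) under a geometric cooling schedule. The toy quadratic cost in the usage below, and all of the schedule constants, are illustrative assumptions rather than values from the paper.

```python
import math
import random

def anneal(cost, initial, step=0.5, t0=1.0, cooling=0.95, iters=500, seed=0):
    """Minimize cost(params) by simulated annealing with geometric cooling."""
    rng = random.Random(seed)
    current = list(initial)
    cur_cost = cost(current)
    best, best_cost = list(current), cur_cost
    t = t0
    for _ in range(iters):
        # Perturb every parameter by a uniform random step.
        cand = [c + rng.uniform(-step, step) for c in current]
        delta = cost(cand) - cur_cost
        # Metropolis rule: accept improvements, and worse moves with
        # probability exp(-delta / T).
        if delta < 0 or rng.random() < math.exp(-delta / t):
            current, cur_cost = cand, cur_cost + delta
            if cur_cost < best_cost:
                best, best_cost = list(current), cur_cost
        t *= cooling
    return best, best_cost
```

For example, annealing two hypothetical receptor centers toward targets [0.2, 0.8] under a squared-distance cost drives the cost well below its starting value.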


International Symposium on Neural Networks | 1999

Artificial neural networks for predicting the optimal number of kanbans in a JIT manufacturing environment

Sridhar Narayan; Barry A. Wray; Richard G. Mathieu

Current techniques for predicting the number of kanbans needed at a workcenter typically use only efficient factory data to develop a predictive model that maps relationships between inputs (shop operating conditions) and a desired output (number of kanbans). The paper presents a methodology that uses autoassociative neural networks to determine if a proposed number of kanbans will result in a starved, efficient, or saturated factory, based on a given set of factory conditions.


International Symposium on Neural Networks | 2003

Three heuristics for receptive field optimization for ensemble encoding

Ashraf M. Abdelbar; D.O. Hassan; Gene Tagliarini; Sridhar Narayan

Ensemble encoding is a biologically motivated, distributed data representation scheme for MLP networks. Multiple overlapping receptive fields are used to enhance locality of representation. The number, form, and placement of the receptive fields have a significant impact on performance. We present three heuristics, two based on descriptive statistics and one based on clustering, for optimizing receptive field configuration, and compare their performance on three benchmark data sets. Performance varies among the benchmarks, but on one benchmark the clustering heuristic yields a 56% improvement in test-set classification over unencoded data and a 48% improvement over symmetrical-placement three-receptor ensemble encoding.

Collaboration


Dive into Sridhar Narayan's collaborations.

Top Co-Authors

Gene Tagliarini, University of North Carolina at Wilmington
Shelby Morge, University of North Carolina at Wilmington
Mahnaz Moallem, University of North Carolina at Wilmington
Karen Hill, University of North Carolina at Wilmington
Chris Gordon, University of North Carolina at Wilmington
Elizabeth Snead, University of North Carolina at Wilmington
Wesley Williams, University of North Carolina at Wilmington
D.O. Hassan, American University in Cairo