Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Sanjeev S. Malalur is active.

Publication


Featured research published by Sanjeev S. Malalur.


Proceedings of SPIE | 2010

Multiple optimal learning factors for feed-forward networks

Sanjeev S. Malalur; Michael T. Manry

A batch training algorithm for feed-forward networks is proposed which uses Newton's method to estimate a vector of optimal learning factors, one for each hidden unit. Backpropagation, using this learning factor vector, is used to modify the hidden units' input weights. Linear equations are then solved for the network's output weights. Elements of the new method's Gauss-Newton Hessian matrix are shown to be weighted sums of elements from the total network's Hessian. In several examples, the new method performs better than backpropagation and conjugate gradient, with similar numbers of required multiplies. The method performs as well as or better than Levenberg-Marquardt, with several orders of magnitude fewer multiplies due to the small size of its Hessian.
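The sketch below illustrates one iteration of this idea in Python/NumPy for a single-hidden-layer network: a per-hidden-unit learning-factor vector is found by a Newton step, and the output weights are then obtained from a linear solve. All function and variable names are illustrative, bias weights are omitted, and the Hessian of the error with respect to the learning factors is approximated by finite differences rather than by the analytic Gauss-Newton form derived in the paper.

```python
# Hedged sketch of one MOLF-style iteration; shapes assumed:
# W: (Nh, Nin) input weights, Wo: (Nout, Nh) output weights,
# X: (N, Nin) inputs, T: (N, Nout) targets.
import numpy as np

def mse(W, Wo, X, T):
    """Mean squared error of a tanh-hidden, linear-output network."""
    H = np.tanh(X @ W.T)
    return np.mean((H @ Wo.T - T) ** 2)

def molf_step(W, Wo, X, T, eps=1e-4):
    """One Newton step on a vector z of learning factors, one per hidden unit.
    The paper derives a Gauss-Newton Hessian analytically; this sketch
    approximates the Hessian of E(z) at z = 0 by finite differences."""
    Nh = W.shape[0]
    # Negative gradient direction for the input weights (plain backprop).
    H = np.tanh(X @ W.T)
    Y = H @ Wo.T
    delta = ((T - Y) @ Wo) * (1.0 - H ** 2)          # (N, Nh)
    G = delta.T @ X / X.shape[0]                      # (Nh, Nin)

    def E(z):
        # Each hidden unit k moves along G[k] scaled by its own factor z[k].
        return mse(W + z[:, None] * G, Wo, X, T)

    g = np.zeros(Nh)
    Hm = np.zeros((Nh, Nh))
    for i in range(Nh):
        ei = np.zeros(Nh); ei[i] = eps
        g[i] = (E(ei) - E(-ei)) / (2 * eps)
        for j in range(Nh):
            ej = np.zeros(Nh); ej[j] = eps
            Hm[i, j] = (E(ei + ej) - E(ei - ej) - E(-ei + ej) + E(-ei - ej)) / (4 * eps ** 2)

    z = np.linalg.solve(Hm + 1e-8 * np.eye(Nh), -g)   # Newton step for the factors
    W = W + z[:, None] * G                            # update input weights
    # Second stage: solve linear equations for the output weights.
    Hnew = np.tanh(X @ W.T)
    Wo = np.linalg.lstsq(Hnew, T, rcond=None)[0].T
    return W, Wo
```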


international symposium on neural networks | 2009

Feed-forward network training using optimal input gains

Sanjeev S. Malalur; Michael T. Manry

In this paper, an effective batch training algorithm is developed for feed-forward networks such as the multilayer perceptron. First, the effects of input transforms are reviewed and explained, using the concept of equivalent networks. Next, a non-singular diagonal transform matrix for the inputs is proposed. Use of this transform is equivalent to altering the input gains in the network. Newton's method is used to solve for the input gains and an optimal learning factor. In several examples, it is shown that the final algorithm is a reasonable compromise between first order training methods and Levenberg-Marquardt.
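A minimal illustration of the underlying equivalent-network idea: applying a non-singular diagonal gain to the inputs while dividing the corresponding columns of the input weight matrix by the same gains leaves the network's output unchanged. The gains below are arbitrary; the paper solves for optimal gains and a learning factor with Newton's method.

```python
# Hedged illustration of the equivalent-network idea behind optimal input gains.
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 3))           # 5 patterns, 3 inputs
W = rng.normal(size=(4, 3))           # 4 hidden units
Wo = rng.normal(size=(2, 4))          # 2 outputs

a = np.array([0.5, 2.0, 3.0])         # hypothetical diagonal input gains
y1 = np.tanh(X @ W.T) @ Wo.T                    # original network
y2 = np.tanh((X * a) @ (W / a).T) @ Wo.T        # gain-transformed inputs, rescaled weights
print(np.allclose(y1, y2))                      # True: the two networks are equivalent
```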


Neurocomputing | 2015

Multiple optimal learning factors for the multi-layer perceptron

Sanjeev S. Malalur; Michael T. Manry; Praveen Jesudhas

A batch training algorithm is developed for a fully connected multi-layer perceptron, with a single hidden layer, which uses two stages per iteration. In the first stage, Newton's method is used to find a vector of optimal learning factors (OLFs), one for each hidden unit, which is used to update the input weights. Linear equations are solved for output weights in the second stage. Elements of the new method's Hessian matrix are shown to be weighted sums of elements from the Hessian of the whole network. The effects of linearly dependent inputs and hidden units on training are analyzed and an improved version of the batch training algorithm is developed. In several examples, the improved method performs better than first order training methods like backpropagation and scaled conjugate gradient, with minimal computational overhead, and performs almost as well as Levenberg–Marquardt, a second order training method, with several orders of magnitude fewer multiplications.
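The second stage described above amounts to a linear least-squares problem for the output weights once the hidden-unit outputs are fixed. The sketch below shows this stage in NumPy; the ridge term is an illustrative way of coping with linearly dependent hidden units, not necessarily the paper's own remedy.

```python
# Hedged sketch of the output-weight stage (names are illustrative).
import numpy as np

def solve_output_weights(W, X, T, ridge=1e-8):
    """Given fixed input weights W, find output weights by least squares.
    Solves the linear normal equations (H^T H) Wo^T = H^T T, with a small
    ridge term so that linearly dependent hidden units do not make the
    system singular."""
    H = np.tanh(X @ W.T)                          # hidden-unit outputs (N, Nh)
    A = H.T @ H + ridge * np.eye(H.shape[1])
    Wo = np.linalg.solve(A, H.T @ T).T            # (Nout, Nh)
    return Wo
```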


international symposium on neural networks | 2007

A Piecewise Linear Network Classifier

Abdul A. Abdurrab; Michael T. Manry; Jiang Li; Sanjeev S. Malalur; Robert G. Gore

A piecewise linear network is discussed which classifies N-dimensional input vectors. The network uses a distance measure to assign incoming input vectors to an appropriate cluster. Each cluster has a linear classifier for generating class discriminants. A training algorithm is described for generating the clusters and discriminants. Theorems are given which relate the network's performance to that of nearest neighbor and k-nearest neighbor classifiers. It is shown that the error approaches the Bayes error as the number of clusters and patterns per cluster approach infinity.
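A hedged sketch of the general structure, assuming scikit-learn's KMeans for the clustering step and a least-squares linear discriminant per cluster; the paper's own distance measure, clustering rule, and training algorithm may differ.

```python
# Hedged sketch of a piecewise linear classifier: inputs are assigned to the
# nearest cluster centre, and each cluster owns its own linear discriminant.
import numpy as np
from sklearn.cluster import KMeans

class PiecewiseLinearClassifier:
    def __init__(self, n_clusters=4):
        self.km = KMeans(n_clusters=n_clusters, n_init=10)

    def fit(self, X, y):
        labels = self.km.fit_predict(X)
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                     # one-hot class targets
        Xa = np.hstack([X, np.ones((len(X), 1))])    # augment with a bias column
        self.W = []
        for c in range(self.km.n_clusters):
            idx = labels == c
            # Least-squares linear discriminant for this cluster's patterns.
            Wc, *_ = np.linalg.lstsq(Xa[idx], T[idx], rcond=None)
            self.W.append(Wc)
        return self

    def predict(self, X):
        labels = self.km.predict(X)
        Xa = np.hstack([X, np.ones((len(X), 1))])
        out = np.array([Xa[i] @ self.W[c] for i, c in enumerate(labels)])
        return out.argmax(axis=1)
```

Calling fit(X, y) builds the clusters and their discriminants; predict(X) assigns each test vector to its nearest cluster and applies that cluster's discriminant.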


international symposium on neural networks | 2011

Analysis and improvement of multiple optimal learning factors for feed-forward networks

Praveen Jesudhas; Michael T. Manry; Rohit Rawat; Sanjeev S. Malalur

The effects of transforming the net function vector in the multilayer perceptron are analyzed. The use of optimal diagonal transformation matrices on the net function vector is proved to be equivalent to training the network using multiple optimal learning factors (MOLF). A method for linearly compressing large ill-conditioned MOLF Hessian matrices into smaller well-conditioned ones is developed. This compression approach is shown to be equivalent to using several hidden units per learning factor. The technique is extended to large networks. In simulations, the proposed algorithm performs almost as well as the Levenberg-Marquardt algorithm with the computational complexity of a first order training algorithm.
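One way to read the compression step: if several hidden units share a learning factor, the Nh-dimensional Newton problem collapses to a smaller one through a 0/1 grouping matrix. The sketch below shows that projection; the grouping rule and matrix names are illustrative, not taken from the paper.

```python
# Hedged sketch of compressing the MOLF Hessian by sharing learning factors.
import numpy as np

def compress_and_solve(Hm, g, groups):
    """Hm: (Nh, Nh) MOLF Hessian, g: (Nh,) gradient,
    groups: integer array of length Nh assigning each hidden unit to a group."""
    Nh, Ng = len(g), int(groups.max()) + 1
    C = np.zeros((Nh, Ng))
    C[np.arange(Nh), groups] = 1.0        # several hidden units per column
    Hc = C.T @ Hm @ C                     # smaller, better-conditioned Hessian
    gc = C.T @ g
    zc = np.linalg.solve(Hc, -gc)         # Newton step for the group factors
    return C @ zc                         # expand back: one factor per hidden unit
```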


asilomar conference on signals, systems and computers | 2004

A pseudospectral fusion approach to fingerprint matching

Sanjeev S. Malalur; Michael T. Manry; Pramod Lakshmi Narasimha

A prototype fingerprint verification system is described which combines the direction and density images into a complex pseudo-spectrum. Two methods for extracting the direction and density images are presented. The proposed feature extraction method is shown to be fast, efficient and robust. Verification is achieved by correlation matching.
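A hedged sketch of the matching step, assuming the direction and density images are fused into a single complex-valued feature (here simply the density times a unit phasor of the direction, an illustrative stand-in for the paper's pseudo-spectrum) and compared by the peak of a normalized FFT-based cross-correlation.

```python
# Hedged sketch of correlation matching between two complex feature images.
import numpy as np

def fuse(direction, density):
    """Combine direction and density images into one complex-valued feature
    (illustrative stand-in; the paper's fusion may differ)."""
    return density * np.exp(1j * direction)

def correlation_score(f1, f2):
    """Peak of the normalized circular cross-correlation, via the FFT."""
    F1, F2 = np.fft.fft2(f1), np.fft.fft2(f2)
    xcorr = np.fft.ifft2(F1 * np.conj(F2))
    return np.abs(xcorr).max() / (np.linalg.norm(f1) * np.linalg.norm(f2) + 1e-12)
```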


Neural Processing Letters | 2017

Properties of a Batch Training Algorithm for Feedforward Networks

Melvin D. Robinson; Michael T. Manry; Sanjeev S. Malalur; Changhua Yu

We examine properties of a batch training algorithm known as output weight optimization–hidden weight optimization (OWO–HWO). Using the concept of equivalent networks, we analyze the effect of input transformation on BP. We introduce a new theory of affine invariance and partial affine invariance for neural networks and prove this property for OWO–HWO. Finally, we relate HWO to BP and show that every iteration of HWO is equivalent to BP applied to whitening-transformed data. Experimental results validate the connection between OWO–HWO and OWO–BP.
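For reference, the sketch below shows a standard (ZCA) whitening transform of the input data, the kind of transform the result above relates HWO to; the specific matrix used inside OWO–HWO is not reproduced here.

```python
# Hedged sketch of a ZCA whitening transform of the input data.
import numpy as np

def whiten(X, eps=1e-8):
    """Return X transformed so its covariance is (approximately) the identity,
    along with the whitening matrix used."""
    Xc = X - X.mean(axis=0)
    C = np.cov(Xc, rowvar=False)
    vals, vecs = np.linalg.eigh(C)
    Wt = vecs @ np.diag(1.0 / np.sqrt(vals + eps)) @ vecs.T
    return Xc @ Wt, Wt
```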


ieee india conference | 2009

Fingerprint Feature Compression Using Statistical Coding Techniques

C. Saravanan; Sanjeev S. Malalur; Michael T. Manry

We propose a new fingerprint feature compression scheme, which extracts the fingerprint feature using a pseudo-spectral fusion approach and compresses it using statistical coding techniques. The compression scheme starts with fingerprint image acquisition, enhancement, and normalization. The direction and density of the fingerprint image are then extracted, and fusing these two images yields the fingerprint feature data. A statistical coding technique is applied directly to the feature data to obtain the compressed data. Huffman coding and arithmetic coding are both used for compression, and arithmetic coding is found to perform better. The proposed compression scheme achieves a compression rate of nearly 85%, higher than the 45% to 63% achieved by other current techniques.
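A minimal sketch of the Huffman-coding half of such a scheme, applied to a hypothetical quantized feature vector; the quantization step and symbol alphabet are illustrative, and the paper reports that arithmetic coding compresses somewhat better.

```python
# Hedged sketch: Huffman coding of a quantized feature vector.
import heapq
from collections import Counter

def huffman_code(symbols):
    """Build a prefix code (symbol -> bit string) from symbol frequencies."""
    freq = Counter(symbols)
    if len(freq) == 1:                        # degenerate single-symbol case
        return {next(iter(freq)): "0"}
    heap = [[f, i, [s, ""]] for i, (s, f) in enumerate(freq.items())]
    heapq.heapify(heap)
    counter = len(heap)
    while len(heap) > 1:
        lo = heapq.heappop(heap)
        hi = heapq.heappop(heap)
        for pair in lo[2:]:
            pair[1] = "0" + pair[1]
        for pair in hi[2:]:
            pair[1] = "1" + pair[1]
        heapq.heappush(heap, [lo[0] + hi[0], counter] + lo[2:] + hi[2:])
        counter += 1
    return dict(heap[0][2:])

def compress(symbols):
    code = huffman_code(symbols)
    bits = "".join(code[s] for s in symbols)
    return bits, code

# Example: quantize hypothetical feature values to 16 levels and compress.
features = [0.12, 0.13, 0.12, 0.55, 0.12, 0.90, 0.13, 0.12]
symbols = [int(f * 15) for f in features]
bits, code = compress(symbols)
print(len(bits), "coded bits vs", 8 * len(symbols), "bits stored as bytes")
```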


A family of robust second order training algorithms | 2009

A family of robust second order training algorithms

Michael T. Manry; Sanjeev S. Malalur


the florida ai research society | 2008

Small models of large machines

Pramod Lakshmi Narasimha; Sanjeev S. Malalur; Michael T. Manry

Collaboration


Dive into Sanjeev S. Malalur's collaboration.

Top Co-Authors

Michael T. Manry

University of Texas at Arlington

Pramod Lakshmi Narasimha

University of Texas at Arlington

Praveen Jesudhas

University of Texas at Arlington

Changhua Yu

University of Texas at Arlington

Jiang Li

Old Dominion University

Melvin D. Robinson

University of Texas at Tyler

Rohit Rawat

University of Texas at Arlington

C. Saravanan

National Institute of Technology