
Publications


Featured research published by Omid M. Omidvar.


Neural Networks | 1997

Training dynamics and neural network performance

Charles L. Wilson; James L. Blue; Omid M. Omidvar

We use an analysis of a simple model of recurrent network dynamics to gain qualitative insights into the training dynamics of feedforward multilayer perceptrons (MLPs) used for classification. These insights suggest changes to the training methods used for MLPs that improve network performance significantly. In previous work, the probabilistic neural network (PNN) was shown to provide better zero-reject error performance on character and fingerprint classification problems than radial basis function and MLP-based neural network methods. We will show that performance equal to or better than PNN can be achieved with a single three-layer MLP by making fundamental changes in the network optimization strategy. These changes are: 1) use of neuron activation functions that reduce the probability of singular Jacobians; 2) use of successive regularization to constrain the volume of the minimized weight space; 3) use of Boltzmann pruning to constrain the dimension of the weight space; 4) use of prior class probabilities to normalize all error calculations, so that statistically significant samples of rare but important classes can be included without distorting the error surface. All four of these changes are made in the inner loop of a conjugate gradient optimization iteration and are intended to simplify the training dynamics of the optimization. On handprinted digit and fingerprint classification problems these modifications improve error-reject performance by factors between 2 and 4, and reduce network size by 40-60%.
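To make the Boltzmann pruning step concrete, here is a minimal sketch (not the authors' code) in which each weight is stochastically removed with a probability that falls off as exp(-w^2/T); the squared-magnitude energy term and the fixed temperature are assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

def boltzmann_prune(weights, temperature):
    """Stochastically zero out low-information weights.

    Hypothetical energy term E = w**2: a small-magnitude weight is
    removed with high probability, while a large weight survives,
    since its survival probability 1 - exp(-w**2 / T) approaches 1.
    """
    survival = 1.0 - np.exp(-weights**2 / temperature)
    mask = rng.random(weights.shape) < survival
    return weights * mask

w = rng.normal(0.0, 1.0, size=(16, 8))
w_pruned = boltzmann_prune(w, temperature=0.5)
print("fraction pruned:", 1.0 - np.count_nonzero(w_pruned) / w.size)
```

Run inside each conjugate gradient iteration, a step like this keeps the dimension of the active weight space bounded while training proceeds.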


Connection Science | 1994

Information content in neural net optimization

Omid M. Omidvar; Charles L. Wilson

Reduction in the size and complexity of neural networks is essential to improve generalization, reduce training error and improve network speed. Most known optimization methods rely heavily on weight-sharing concepts for pattern separation and recognition. In weight-sharing methods the redundant weights from specific areas of the input layer are pruned, and the values of the weights and their information content play a minimal role in the pruning process. The method presented here focuses on network topology and information content for optimization. We have studied the change in the network topology and its effects on information content dynamically during the optimization of the network. The primary optimization uses scaled conjugate gradient and the secondary method of optimization is a Boltzmann method. The conjugate gradient optimization serves as a connection creation operator and the Boltzmann method serves as a competitive connection annihilation operator. By combining these two methods, ...
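One way to make the information content of a set of weights concrete is to histogram them and take a Shannon entropy; the sketch below is an illustrative measure under that assumption, not necessarily the paper's exact definition.

```python
import numpy as np

def weight_entropy(weights, bins=32):
    """Shannon entropy (in bits) of the empirical weight distribution.

    Weights that collapse into a few narrow bins carry little
    information per connection, flagging the network as over-sized.
    """
    hist, _ = np.histogram(weights, bins=bins)
    p = hist / hist.sum()
    p = p[p > 0]                      # drop empty bins
    return float(-np.sum(p * np.log2(p)))

w = np.random.default_rng(1).normal(size=1000)
print(f"entropy: {weight_entropy(w):.2f} bits")
```

Tracking such a measure while the scaled conjugate gradient creates connections and the Boltzmann step annihilates them gives a dynamic picture of how topology changes affect information content.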


Journal of Electronic Imaging | 1997

Neurodynamics of learning and network performance

Charles L. Wilson; James L. Blue; Omid M. Omidvar

A simple dynamic model of a neural network is presented. Using the dynamic model, we improve the performance of a three-layer multilayer perceptron (MLP). The dynamic model of an MLP is used to make fundamental changes in the network optimization strategy. These changes are: neuron activation functions are used that reduce the probability of singular Jacobians; successive regularization is used to constrain the volume of the weight space being minimized; Boltzmann pruning is used to constrain the dimension of the weight space; and prior class probabilities are used to normalize all error calculations, so that statistically significant samples of rare but important classes can be included without distortion of the error surface. All four of these changes are made in the inner loop of a conjugate gradient optimization iteration and are intended to simplify the training dynamics of the optimization. On handprinted digit and fingerprint classification problems, these modifications improve error-reject performance by factors between 2 and 4 and reduce network size by 40 to 60%.
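Of the four changes, the prior-class-probability normalization is easy to sketch: each sample's error is weighted by the ratio of the assumed true prior to the class's frequency in the training batch, so a rare class can be oversampled without distorting the error surface. The weighting below is an illustration of that idea, not the authors' exact formulation.

```python
import numpy as np

def prior_weighted_error(errors, labels, priors):
    """Weight each per-sample error by prior_k / freq_k.

    Rare classes can then be oversampled for statistical significance
    while still contributing in proportion to their true priors.
    """
    labels = np.asarray(labels)
    classes, counts = np.unique(labels, return_counts=True)
    freq = dict(zip(classes, counts / labels.size))
    w = np.array([priors[c] / freq[c] for c in labels])
    return float(np.mean(w * errors))

errors = np.array([0.2, 0.1, 0.9, 0.4])
labels = [0, 0, 1, 0]               # class 1 is rare in this batch
priors = {0: 0.5, 1: 0.5}           # hypothetical true priors
print(prior_weighted_error(errors, labels, priors))
```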


Journal of Electronic Imaging | 1997

Design of a handprint recognition system

Michael D. Garris; James L. Blue; Gerald T. Candela; Darrin L. Dimmick; Jon C. Geist; Patrick J. Grother; Stanley Janet; Omid M. Omidvar; Charles L. Wilson

A public domain optical character recognition (OCR) system has been developed by the National Institute of Standards and Technology (NIST). This standard reference form-based handprint recognition system is designed to provide a baseline of performance on an open application. The system's source code, training data, performance assessment tools, and type of forms processed are all publicly available. The system is modular, allowing for system component testing and comparisons, and it can be used to validate training and testing sets in an end-to-end application. The system's source code is written in C and will run on virtually any UNIX-based computer. The presented functional components of the system are divided into three levels of processing: (1) form-level processing includes the tasks of form registration and form removal; (2) field-level processing includes the tasks of field isolation, line trajectory reconstruction, and field segmentation; and (3) character-level processing includes character normalization, feature extraction, character classification, and dictionary-based postprocessing. The system contains a number of significant contributions to OCR technology, including an optimized probabilistic neural network (PNN) classifier that operates 20 times faster than traditional software implementations of the algorithm. Provided in the system are a host of data structures and low-level utilities for computing spatial histograms, least-squares fitting, spatial zooming, connected components, Karhunen-Loève feature extraction, optimized PNN classification, and dynamic string alignment. Any portion of this standard reference OCR system can be used in commercial products without restrictions.
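The PNN classifier at the heart of the system is conceptually simple: each class density is a Parzen-window sum of Gaussian kernels centered on that class's training examples, and the predicted class is the argmax. Here is a minimal NumPy version of the idea; the NIST system's optimized C implementation gets its factor-of-20 speedup through engineering not shown here.

```python
import numpy as np

def pnn_classify(x, train_x, train_y, sigma=1.0):
    """Probabilistic neural network: Parzen-window class densities.

    Each class score is the mean Gaussian kernel between x and that
    class's training points; predict the class with the largest score.
    """
    scores = {}
    for c in np.unique(train_y):
        pts = train_x[train_y == c]
        d2 = np.sum((pts - x) ** 2, axis=1)
        scores[c] = np.mean(np.exp(-d2 / (2.0 * sigma**2)))
    return max(scores, key=scores.get)

rng = np.random.default_rng(2)
train_x = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(4, 1, (20, 2))])
train_y = np.array([0] * 20 + [1] * 20)
print(pnn_classify(np.array([3.5, 4.2]), train_x, train_y))  # expect class 1
```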


Neural and Stochastic Methods in Image and Signal Processing | 1992

Topological separation versus weight sharing in neural net optimization

Omid M. Omidvar; Charles L. Wilson

Recent advances in neural network application development for real-life problems have drawn attention to network optimization. Most known optimization methods rely heavily on a weight-sharing concept for pattern separation and recognition. The shortcoming of the weight-sharing method is the large number of extraneous weights that play a minimal role in pattern separation and recognition. Our experiments have shown that up to 97% of the connections in the network can be eliminated with little or no change in the network performance. Topological separation should be used when the size of the network is large enough to tackle real-life problems such as fingerprint classification. Our research has focused on the network topology, changing the number of connections as a secondary method of optimization. Our findings so far indicate that for large networks topological separation yields a smaller network size, which is more suitable for VLSI implementation. Topological separation is based on the error surface and information content of the network; as such, it is an economical way of size reduction which leads to overall optimization. The differential pruning of the connections is based on the weight contents rather than the number of connections. The training error may vary with the topological dynamics, but the correlation between the error surface and the recognition rate decreases to a minimum. Topological separation thus reduces the size of the network by changing its architecture, yielding a considerably smaller network without degrading its performance.


Archive | 1990

A Drive Reinforcement Model for Visual Perception

Omid M. Omidvar

Neural networks have been used to mimic cognitive processes which take place in animal brains. The learning capability inherent in neural networks makes them suitable candidates for adaptive tasks such as the formation of visual perception. Synaptic reinforcement creates a proper condition for adaptation, which results in memorization, formation of perception, and higher-order information processing activities.


Proceedings of SPIE | 1993

Optimization of ART network with Boltzmann machine

Omid M. Omidvar; Charles L. Wilson

Optimization of large neural networks is essential to improving network speed and generalization power while at the same time reducing the training error and the network complexity. Boltzmann methods have been used as a statistical method for combinatorial optimization and for the design of learning algorithms. In the networks studied here, Adaptive Resonance Theory (ART) serves as a connection creation operator and the Boltzmann method serves as a competitive connection annihilation operator. By combining these two methods it is possible to generate small networks that have similar testing and training accuracy and good generalization from small training sets. Our findings demonstrate that for a character recognition problem the number of weights in a fully connected network can be reduced by over 80%. We have applied the Boltzmann criterion to differential pruning of the connections, which is based on the weight contents rather than on the number of connections.
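The ART half of the pairing can be illustrated with the textbook ART1 category-choice and vigilance test on binary inputs; the sketch below is standard ART1 with fast learning, not the paper's exact network, and the vigilance value is hypothetical.

```python
import numpy as np

def art1_step(x, prototypes, rho=0.7):
    """One ART1 presentation: choose a category, test vigilance, learn.

    x is a binary vector; prototypes is a mutable list of binary
    templates.  Returns the index of the category that resonated,
    creating a new one when no template passes the vigilance test.
    """
    order = sorted(range(len(prototypes)),
                   key=lambda j: -np.sum(np.minimum(x, prototypes[j])))
    for j in order:
        match = np.sum(np.minimum(x, prototypes[j])) / np.sum(x)
        if match >= rho:                                  # resonance
            prototypes[j] = np.minimum(x, prototypes[j])  # fast learning
            return j
    prototypes.append(x.copy())                           # new category
    return len(prototypes) - 1

protos = []
for pattern in ([1, 1, 1, 0, 0], [1, 1, 0, 0, 0], [0, 0, 0, 1, 1]):
    print(art1_step(np.array(pattern), protos))           # prints 0, 0, 1
```

Categories created this way act as the connection creation operator; a Boltzmann pruning pass like the one sketched earlier then annihilates low-information connections.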


Image Processing Algorithms and Techniques II | 1991

Massively parallel implementation of neural network architectures

Omid M. Omidvar; Charles L. Wilson

In recent years neural networks have been used to solve some of the difficult real-time character recognition problems. SIMD implementations of these networks have achieved some success, but the real potential of neural networks is yet to be realized. Several well-known neural network architectures have been modified and implemented, and these architectures are then applied to character recognition. The performance of these parallel character recognition systems is compared and contrasted. Feature localization and noise reduction are achieved using least-squares-optimized Gabor filtering. The filtered images are then presented to a FAUST-based learning algorithm which produces the self-organizing sets of neural-network-generated features used for character recognition. Implementation of these algorithms on a highly parallel computer with 1024 processors allows high-speed character recognition at 2.3 ms/image, with greater than 99% accuracy on machine print and 89% accuracy on unconstrained handprinted characters. These results are achieved using identical parallel processor programs, demonstrating that the method is truly font independent. Backpropagation is included to allow comparison with more conventional neural network character recognition methods. The network has one hidden layer with multiple concurrent feedback from the output layer to the hidden layer and from the hidden layer to the input layer. This concurrent feedback and weight adjustment is only possible on a SIMD computer.
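The Gabor filtering step that performs feature localization and noise reduction can be sketched as a bank of oriented kernels; the kernel below is a plain 2-D Gabor (Gaussian envelope times sinusoidal carrier) with hypothetical parameter values, not the least-squares-optimized filters of the paper.

```python
import numpy as np

def gabor_kernel(size=15, wavelength=4.0, theta=0.0, sigma=3.0):
    """2-D Gabor filter: Gaussian envelope times a cosine carrier.

    Convolving an image with a bank of these kernels at several
    orientations localizes oriented stroke features and suppresses
    high-frequency noise.
    """
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)       # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

bank = [gabor_kernel(theta=t) for t in np.linspace(0, np.pi, 4, endpoint=False)]
print(bank[0].shape, len(bank))                      # (15, 15) 4
```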


Electronic Imaging '90, Santa Clara, 11-16 Feb 1990 | 1990

Goal-seeking neural net for recall and recognition

Omid M. Omidvar

Neural networks have been used to mimic cognitive processes which take place in animal brains. The learning capability inherent in neural networks makes them suitable candidates for adaptive tasks such as recall and recognition. Synaptic reinforcement creates a proper condition for adaptation, which results in memorization, formation of perception, and higher-order information processing activities. In this research a model of a goal-seeking neural network is studied and the operation of the network with regard to recall and recognition is analyzed. In these analyses, recall is defined as retrieval of stored information where little or no matching is involved; recognition, on the other hand, is recall with matching, so it involves memorizing a piece of information with complete presentation. This research takes the generalized view of reinforcement in which all signals are potential reinforcers, and the neuronal response is considered to be the source of the reinforcement. This local approach to adaptation leads to the goal-seeking nature of the neurons as network components. In the proposed model all the synaptic strengths are reinforced in parallel, while the reinforcement among the layers is done in a distributed fashion and in pipeline mode from the last layer inward. A model of a complex neuron with a varying threshold is developed to account for the inhibitory and excitatory behavior of real neurons. The resulting goal-seeking network is used to perform recall and recognition tasks, and its performance on both is reported.
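The local, response-as-reinforcer idea can be caricatured as a Hebbian-style update gated by a varying threshold; the sketch below is one interpretation of the description, with all rates and the threshold dynamics chosen for illustration rather than taken from the paper.

```python
import numpy as np

def goal_seeking_update(w, x, theta, lr=0.1, theta_rate=0.05):
    """Local reinforcement: the neuron's own response is the reinforcer.

    Output above the varying threshold theta strengthens active
    synapses (excitatory regime); output below it weakens them
    (inhibitory regime).  The threshold then adapts toward recent
    activity, mimicking the varying-threshold neuron model.
    """
    y = float(np.dot(w, x))
    reinforcement = y - theta                 # sign picks the regime
    w = w + lr * reinforcement * x            # purely local update
    theta = theta + theta_rate * (y - theta)  # threshold tracks activity
    return w, theta

w, theta = np.full(4, 0.25), 0.0
for _ in range(3):
    w, theta = goal_seeking_update(w, np.array([1.0, 0.0, 1.0, 0.0]), theta)
print(w, theta)
```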


NIST Interagency/Internal Report (NISTIR) - 5695 | 1995

Improving Neural Network Performance for Character and Fingerprint Classification by Altering Network Dynamics

Charles L. Wilson; Omid M. Omidvar

Collaboration


Dive into Omid M. Omidvar's collaborations.

Top Co-Authors

Charles L. Wilson, National Institute of Standards and Technology
James L. Blue, National Institute of Standards and Technology
Darrin L. Dimmick, National Institute of Standards and Technology
Gerald T. Candela, National Institute of Standards and Technology
Jon C. Geist, National Institute of Standards and Technology
Michael D. Garris, National Institute of Standards and Technology
Patrick J. Grother, National Institute of Standards and Technology
Stanley Janet, National Institute of Standards and Technology