Publication


Featured research published by David Saad.


International Journal of Neural Systems | 1993

NEURAL NET PRUNING BASED ON FUNCTIONAL BEHAVIOR OF NEURONS

Nachum Shamir; David Saad; Emanuel Marom

This paper proposes a new pruning method based on merging neurons with similar functional behavior, where a neuron's functional behavior is defined by its internal representations over the entire training set. Classifying neurons by their functional behavior with respect to the input vectors provides a powerful tool for pruning neurons and connections, reducing network complexity and increasing generalization capability. The most remarkable property of this pruning scheme is its ability to preserve net functionality by transferring the role of every removed neuron to the best-fitted surviving neuron, using a unique merging and compensation procedure. The implementation of the proposed method is demonstrated with a detailed numerical example, and its performance is examined by a statistical measure obtained by repeating the training procedure several times. The influence of parameter selection on pruning performance and generalization ability is discussed and demonstrated with statistical results.
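To make the merging idea concrete, here is a minimal sketch, assuming a single-hidden-layer network stored as NumPy matrices; the function name, the tanh activation, and the correlation threshold are illustrative assumptions rather than details taken from the paper.

```python
import numpy as np

def prune_by_functional_behavior(W_in, W_out, X, threshold=0.99):
    """Merge hidden neurons whose responses over the training set are
    nearly identical (a sketch of functional-behavior pruning).

    W_in  : (hidden, inputs)  input-to-hidden weights
    W_out : (outputs, hidden) hidden-to-output weights
    X     : (samples, inputs) training inputs
    """
    # A neuron's "functional behavior": its activation on every
    # training example (illustrative tanh activation assumed).
    H = np.tanh(X @ W_in.T)                      # (samples, hidden)

    keep = list(range(W_in.shape[0]))
    i = 0
    while i < len(keep):
        j = i + 1
        while j < len(keep):
            c = np.corrcoef(H[:, keep[i]], H[:, keep[j]])[0, 1]
            # Strongly correlated responses => functionally similar.
            if abs(c) > threshold:
                # Compensation step: fold the removed neuron's outgoing
                # weights into its surviving twin (sign-matched), so the
                # network output is approximately preserved.
                W_out[:, keep[i]] += np.sign(c) * W_out[:, keep[j]]
                keep.pop(j)
            else:
                j += 1
        i += 1
    return W_in[keep], W_out[:, keep]
```

The key step mirrors the paper's compensation idea: before a redundant neuron is removed, its role is transferred to the most similar surviving neuron rather than simply discarded.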


International Journal of Neural Systems | 1992

TRAINING RECURRENT NEURAL NETWORKS — THE MINIMAL TRAJECTORY ALGORITHM

David Saad

The Minimal Trajectory (MINT) algorithm for training recurrent neural networks with a stable end point is based on an algorithmic search for the system's representations in the neighbourhood of the minimal trajectory connecting the input-output representations. These representations appear to be the most probable set for solving the global perceptron problem posed by the common weight matrix, which connects the representations of successive time steps in a discrete recurrent neural network. The search for a proper set of system representations is aided by representation modification rules similar to those presented in our former paper,1 aimed at supporting contributing hidden and non-end-point representations while suppressing non-contributing ones. Similar representation modification rules were used in other training methods for feed-forward networks,2–4 based on modification of the internal representations. A feed-forward version of the MINT algorithm will be presented in another paper.5 Once a proper set of system representations is chosen, the weight matrix is modified accordingly via the Perceptron Learning Rule (PLR) to obtain the proper input-output relation. Computer simulations carried out for the restricted cases of parity and teacher-net problems show rapid convergence of the algorithm in comparison with other existing algorithms, together with modest memory requirements.
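The representation-search rules are the core of MINT and are not reproduced here; the sketch below shows only the final step, fitting one shared weight matrix to a chosen trajectory of ±1 representations with the classical Perceptron Learning Rule. All names and the learning-rate value are illustrative assumptions.

```python
import numpy as np

def fit_weights_plr(reps, epochs=100, eta=0.1):
    """Given a chosen trajectory of +/-1 system representations
    reps[0], reps[1], ... (one vector per time step), fit a single
    weight matrix W so that sign(W @ reps[t]) == reps[t+1] for every
    t, using the Perceptron Learning Rule row by row.
    """
    n = reps[0].size
    W = np.zeros((n, n))
    for _ in range(epochs):
        errors = 0
        for t in range(len(reps) - 1):
            s, target = reps[t], reps[t + 1]
            out = np.sign(W @ s)
            out[out == 0] = 1                    # break ties to +1
            wrong = out != target
            # PLR update: only the rows (neurons) that misclassify
            # this transition are moved toward the target.
            W[wrong] += eta * np.outer(target[wrong], s)
            errors += wrong.sum()
        if errors == 0:                          # whole trajectory fits
            break
    return W
```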


IEEE Transactions on Neural Networks | 1993

Training a network with ternary weights using the CHIR algorithm

Shai Abramson; David Saad; Emanuel Marom

A modification of the binary-weight CHIR algorithm is presented, whereby a zero state is added to the two possible binary weight states. This method allows solutions with reduced connectivity to be obtained by offering disconnections in addition to the excitatory and inhibitory connections. The algorithm has been examined via extensive computer simulations for the restricted cases of parity, symmetry, and teacher problems, which show convergence rates similar to those presented for the binary CHIR2 algorithm, but with reduced connectivity. Moreover, this method expands the set of problems solvable via the binary-weight network configuration with no additional parameter requirements.
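As an illustration of the ternary constraint only (not the CHIR procedure itself, which works by choosing internal representations), a perceptron-style update can be projected onto the weight set {-1, 0, +1}, where a zero weight acts as a disconnection:

```python
import numpy as np

def train_ternary_perceptron(X, y, epochs=200):
    """Perceptron-style training of a single unit whose visible
    weights are constrained to {-1, 0, +1}: zero is a disconnection,
    +1/-1 are excitatory/inhibitory links. A sketch of the weight
    constraint only, not the full CHIR scheme.
    """
    acc = np.zeros(X.shape[1])          # hidden integer accumulator
    for _ in range(epochs):
        w = np.clip(acc, -1, 1)         # visible ternary weights
        mistakes = 0
        for x, t in zip(X, y):          # x in {-1,+1}^n, t in {-1,+1}
            if np.sign(w @ x) != t:
                acc += t * x            # standard perceptron step
                w = np.clip(acc, -1, 1) # re-project onto {-1, 0, +1}
                mistakes += 1
        if mistakes == 0:
            break
    return np.clip(acc, -1, 1)
```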


Applied Optics | 1993

Four-quadrant optical matrix-vector multiplication machine as a neural-network processor

Shai Abramson; David Saad; Emanuel Marom; N. Konforti

Optical processors for neural networks are primarily fast matrix-vector multiplication machines that can potentially compete with serial computers owing to their parallelism and their ability to facilitate densely connected networks. However, in most proposed systems the multiplication supports only two quadrants and is thus unable to provide bipolar neuron outputs for increasing network capabilities and learning rate. We propose and demonstrate an opto-electronic four-quadrant matrix-vector multiplier that can be used for feed-forward neural-network recall and learning. Experimental results obtained with common commercial components demonstrate a novel, useful, and reliable approach for four-quadrant matrix-vector multiplication in general and for feed-forward neural-network training and recall in particular.
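The arithmetic behind four-quadrant multiplication is easy to verify numerically: light intensity cannot be negative, so a bipolar matrix and vector are each split into positive and negative parts, and the bipolar product is recombined from four non-negative single-quadrant products. The NumPy check below demonstrates only the decomposition, not the optical hardware:

```python
import numpy as np

rng = np.random.default_rng(0)
M = rng.uniform(-1, 1, (4, 6))   # bipolar weight matrix
v = rng.uniform(-1, 1, 6)        # bipolar neuron outputs

# Split into non-negative parts: M = Mp - Mn, v = vp - vn.
Mp, Mn = np.maximum(M, 0), np.maximum(-M, 0)
vp, vn = np.maximum(v, 0), np.maximum(-v, 0)

# Four single-quadrant (all non-negative) multiplications, combined
# electronically with the appropriate signs:
result = (Mp @ vp) - (Mp @ vn) - (Mn @ vp) + (Mn @ vn)

assert np.allclose(result, M @ v)
```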


8th Meeting on Optical Engineering in Israel: Optoelectronics and Applications in Industry and Medicine | 1993

Four-quadrant optical matrix vector multiplication machine as a neural network processor

Shai Abramson; David Saad; Emanuel Marom; Naim Konforti



International Journal of Neural Systems | 1992

EXAMINING THE CHIR ALGORITHM PERFORMANCE FOR MULTILAYER NETWORKS AND CONTINUOUS INPUT VECTORS

David Saad; R. Sasson

Learning by Choice of Internal Representations (CHIR) is a training algorithm presented by Grossman et al.,1 based on modifying the Internal Representations (IR) alongside the direct weight-matrix modification performed in conventional training methods. The algorithm was presented in several versions aimed at tackling the various training problems of nets with continuous and binary weights, multilayer and multi-output-neuron nets, and training without storing the Internal Representations. The capability of one of these versions, the CHIR2 algorithm, to tackle multilayer training tasks on nets with continuous input vectors is examined in this paper. A comparison between the performance of this algorithm and of the Backpropagation (BP) algorithm2 is carried out via extensive computer simulations for the “two-spirals” problem, which requires classifying two classes of dots forming two intertwined spirals. The CHIR2 algorithm shows a rapid convergence rate for this problem, an order of magnitude faster than the results reported for the BP training algorithm (as well as those obtained by us) on the same training problem and network architecture.11 Moreover, the CHIR2 algorithm finds solution nets for the above-mentioned problem with reduced architectures, reported as hard to solve by the BP training algorithm.11
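For context, the “two-spirals” benchmark is easy to reproduce; the generator below uses the usual flavour of the task (two interleaved spirals, one per class), with parameter values that are illustrative rather than taken from the paper:

```python
import numpy as np

def two_spirals(n_per_class=97, turns=3.0, noise=0.0, seed=0):
    """Generate the classic two-spirals benchmark: two interleaved
    spirals of points, one per class (labels +1 and -1)."""
    rng = np.random.default_rng(seed)
    t = np.linspace(0, turns * np.pi, n_per_class)
    r = t / (turns * np.pi)                 # radius grows with angle
    x1 = np.column_stack((r * np.cos(t), r * np.sin(t)))
    x2 = -x1                                # second spiral: rotated 180 degrees
    X = np.vstack((x1, x2)) + noise * rng.normal(size=(2 * n_per_class, 2))
    y = np.hstack((np.ones(n_per_class), -np.ones(n_per_class)))
    return X, y
```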


Complex Systems | 1990

Learning by Choice of Internal Representations: An Energy Minimization Approach

David Saad; Emanuel Marom


Complex Systems | 1990

Training Feed Forward Nets with Binary Weights via a Modified CHIR Algorithm

David Saad; Emanuel Marom


Complex Systems | 1993

Using the Functional Behavior of Neurons for Genetic Recombination in Neural Nets Training

Nachum Shamir; David Saad; Emanuel Marom


Complex Systems | 1993

Preserving the Diversity of a Genetically Evolving Population of Nets Using the Functional Behavior of Neurons

Nachum Shamir; David Saad; Emanuel Marom

Collaboration


Dive into David Saad's collaboration.

Top Co-Authors

Kalyanmoy Deb

Michigan State University

Wentian Li

Rockefeller University
