Leonard Neiberg
Carnegie Mellon University
Publications
Featured research published by Leonard Neiberg.
Neural Networks | 1995
David Casasent; Leonard Neiberg
Automatic target recognition processors typically employ several stages of processing, each with a different operational purpose. New shift-invariant filters using morphological and Gabor wavelet transform operations are described for use in the initial stages of such a system. Their realization on simple correlation neural networks is noted, together with the use of neural net optimization techniques to design such filters. A new feature space trajectory classifier neural network is described that identifies the class and pose of each object, rejects clutter false alarms, and overcomes various issues associated with other classifier neural networks.
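As an illustration of the shift-invariant correlation filtering the abstract describes, the sketch below builds a simple real-valued Gabor kernel and applies it by FFT-based circular correlation. This is a generic sketch, not the authors' filter design; all parameter names and values are illustrative.

```python
import numpy as np

def gabor_kernel(size, wavelength, theta, sigma):
    """Real-valued Gabor kernel: an oriented cosine carrier under a
    Gaussian envelope.  Parameter values used below are illustrative."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)   # rotated coordinates
    yr = -x * np.sin(theta) + y * np.cos(theta)
    envelope = np.exp(-(xr**2 + yr**2) / (2.0 * sigma**2))
    return envelope * np.cos(2.0 * np.pi * xr / wavelength)

def correlate(image, kernel):
    """Shift-invariant correlation via the FFT (circular boundaries)."""
    K = np.fft.fft2(kernel, s=image.shape)       # zero-pad kernel to image size
    return np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(K)))

# A delta-function "target" produces the same correlation peak wherever it
# sits in the image, which is the sense in which the filter is shift invariant.
k = gabor_kernel(7, 4.0, 0.0, 2.0)
img = np.zeros((32, 32))
img[16, 16] = 1.0
response = correlate(img, k)
```

Because the correlation is computed in the Fourier domain, one filter pass scans all image positions at once, which is what makes such filters usable as the detection front end of a multi-stage recognizer.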
Intelligent Robots and Computer Vision XIII: Algorithms and Computer Vision | 1994
Leonard Neiberg; David Casasent
A new classifier neural network is described for distortion-invariant multi-class pattern recognition. Its training data for the different classes are described as points in a feature space. As a distortion parameter (such as aspect view) of a training set object is varied, an ordered training set is produced. This ordered training set describes the object as a trajectory in feature space, with different points along the trajectory corresponding to different aspect views. Different object classes are described by different trajectories. Classification involves calculation of the distance from an input feature space point to the nearest trajectory (this denotes the object class) and the position of the nearest point along that trajectory (this denotes the pose of the object). Comparison to other neural networks and other classifiers shows that this feature space trajectory (FST) neural network yields better classification performance and can reject non-object data. The FST classifier performs well with different numbers of training images and hidden layer neurons and also generalizes better than other classifiers.
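The classification rule described above (nearest trajectory gives the class; the nearest point along that trajectory gives the pose) can be sketched as plain geometry. This is a brute-force sketch, not the authors' neural-network distance computation, and all names and the toy data are illustrative.

```python
import numpy as np

def point_to_segment(p, a, b):
    """Distance from point p to segment ab, plus the interpolation
    parameter t in [0, 1] locating the nearest point on the segment."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab)), t

def classify_fst(p, trajectories):
    """Return (distance, class index, segment index, t) for the feature
    space trajectory closest to feature point p.  Each trajectory is an
    ordered array of vertices, one per training aspect view; t interpolates
    the pose between the two adjacent aspect views."""
    best = (np.inf, None, None, None)
    for c, verts in enumerate(trajectories):
        for i in range(len(verts) - 1):
            d, t = point_to_segment(p, verts[i], verts[i + 1])
            if d < best[0]:
                best = (d, c, i, t)
    return best

# Two toy 2-D trajectories standing in for two object classes.
trajs = [np.array([[0.0, 0.0], [1.0, 0.0], [2.0, 1.0]]),
         np.array([[0.0, 3.0], [1.0, 3.0], [2.0, 4.0]])]
d, cls, seg, t = classify_fst(np.array([1.5, 0.4]), trajs)
```

The returned distance d also supports the clutter rejection the abstract mentions: an input whose distance to every trajectory exceeds a threshold can be rejected as non-object data.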
Optical Engineering | 1998
David Casasent; Leonard Neiberg; Michael A. Sipe
The feature space trajectory (FST) neural net is used for classification and pose estimation of the contents of regions of interest. The FST provides an attractive representation of distorted objects that overcomes problems present in other classifiers. We discuss its use in rejecting clutter inputs; in selecting the number and identity of the aspect views most necessary to represent an object and to distinguish between two objects; and in temporal image processing, automatic target recognition, and active vision.
Proceedings of SPIE | 1995
Leonard Neiberg; David Casasent; Robert J. Fontana; Jeffrey E. Cade
A novel neural network for distortion-invariant pattern recognition is described. Image regions of interest are determined using a detection stage; each region is then enhanced (the steps used are detailed); features are extracted (new Gabor wavelet features are used); and these features are used to classify the contents of each input region. A new feature space trajectory neural network (FST NN) classifier is used. A new eight-class database is used; a new multilayer NN to calculate the necessary distance measures is detailed, and its low storage and on-line computational load requirements are noted. The ability of the adaptive FST algorithm to reduce network complexity while achieving excellent performance is demonstrated. The ability of this neural network to reject false alarm (clutter) inputs is demonstrated, and time-history processing to further reduce false alarms is discussed. Hardware and commercial realizations are noted.
Applied Optics | 1994
Leonard Neiberg; David Casasent
We present a new training-out algorithm for neural networks that permits good performance on nonideal hardware with limited analog neuron and weight accuracy. Optical neural networks are emphasized with the error sources including nonuniform beam illumination and nonlinear device characteristics. We compensate for processor nonidealities during gated learning (off-line training); thus our algorithm does not require real-time neural networks with adaptive weights. This permits use of high-accuracy nonadaptive weights and reduced hardware complexity. The specific neural network we consider is the Ho-Kashyap associative processor because it provides the largest storage capacity. Simulation results and optical laboratory data are provided. The storage measure we use is the ratio M/N of the number of vectors stored (M) to the dimensionality of the vectors stored (N). We show a storage capacity of M/N = 1.5 on our optical laboratory system with excellent recall accuracy, > 95%. The theoretical maximum storage is M/N = 2 (as N approaches infinity), and thus the storage and performance we demonstrate are impressive considering the processor nonidealities we present. Our techniques can be applied to other neural network algorithms and other nonideal processing hardware.
Applications and science of artificial neural networks. Conference | 1997
Michael A. Sipe; David Casasent; Leonard Neiberg
A new feature space trajectory (FST) description of 3D distorted views of an object is advanced for active vision applications. In an FST, different distorted object views are vertices in feature space. A new eigen-feature space and Fourier transform features are used. Vertices for different adjacent distorted views are connected by straight lines so that an FST is created as the viewpoint changes. Each different object is represented by a distinct FST. An object to be recognized is represented as a point in feature space; the closest FST denotes the class of the object, and the closest line segment on the FST indicates its pose. A new neural network is used to efficiently calculate distances. We discuss its uses in active vision. Apart from an initial estimate of object class and pose, the FST processor can specify where to move the sensor: to confirm class and pose, to grasp the object, or to focus on a specific object part for assembly or inspection. We advance initial remarks on the number of aspect views needed, and which aspect views are needed, to represent an object.
Proceedings of SPIE | 1996
David Casasent; Rajesh Shenoy; Leonard Neiberg
We consider use of eigenvector feature inputs to our feature space trajectory (FST) neural net classifier for SAR data with 3D aspect distortions. We consider its use for classification, pose estimation, and rejection of clutter. Prior and new MINACE distortion-invariant and shift-invariant filter work to locate the position of objects in regions of interest is reviewed. Test results on a number of SAR databases are included to show the robustness of the algorithm. New results include techniques to determine: the number of eigenvectors per class to retain, the number and order of final features to use, if the training set size is adequate, and if the training and test sets are compatible.
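As one hypothetical instance of choosing how many eigenvectors per class to retain, the sketch below keeps enough eigenvectors of a class covariance matrix to capture a fixed fraction of the variance. The abstract does not specify the authors' actual criterion, so the variance-fraction rule and the function name here are assumptions for illustration.

```python
import numpy as np

def class_eigenvectors(samples, energy=0.95):
    """Return the eigenvectors of a class covariance matrix that capture
    a given fraction of the total variance (a hypothetical criterion).
    samples: (n_samples, n_features) array for one class."""
    X = samples - samples.mean(axis=0)
    cov = X.T @ X / len(X)
    vals, vecs = np.linalg.eigh(cov)          # eigh returns ascending order
    order = np.argsort(vals)[::-1]            # sort descending by eigenvalue
    vals, vecs = vals[order], vecs[:, order]
    k = np.searchsorted(np.cumsum(vals) / vals.sum(), energy) + 1
    return vecs[:, :k]

# Toy class data whose variance lies almost entirely along one axis,
# so a single eigenvector suffices at the 95% level.
X = np.array([[3.0, 0.1], [-3.0, -0.1], [2.0, 0.0], [-2.0, 0.0]])
V = class_eigenvectors(X)
```

Projecting each class onto its retained eigenvectors gives the low-dimensional eigen-feature inputs that the FST classifier then operates on.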
SPIE's 1995 Symposium on OE/Aerospace Sensing and Dual Use Photonics | 1995
Leonard Neiberg; David Casasent
A new classifier neural network is described for distortion-invariant multi-class pattern recognition. The input analog neurons are a feature space. All distorted aspect views of one object are described by a trajectory in feature space. Classification of test data involves calculation of the closest feature space trajectory. Pose estimation is achieved by determining the closest line segment on the closest trajectory. Rejection of false class clutter is demonstrated. Comparisons are made to other neural network classifiers, including a radial basis function network and a standard backpropagation neural net. The shapes of the different decision surfaces produced by our feature space trajectory classifier are analyzed.
Proceedings of SPIE | 1996
Leonard Neiberg; David Casasent; Ashit Talukder
The feature space trajectory neural net is reviewed. Its advantages over other classifiers are noted: it allows use of smaller training sets, large numbers of hidden layer neurons, low on-line computational loads, higher-order decision surfaces, the ability to reject false class input (clutter) data, etc. New test results on its 3D distortion-invariant classification performance are provided using a larger object and clutter database, input object contrast differences, a new preprocessing algorithm, and a new feature space. We note the problems with other neural net classifiers that our architecture and algorithm overcome, the use of different distance thresholds and confidence measures to improve performance, advantages of using adjunct features, and numerous new test results.
Intelligent Robots and Computer Vision XI: Biological, Neural Net, and 3D Methods | 1992
Leonard Neiberg; David Casasent