Publication


Featured research published by William A. Porter.


Southeastern Symposium on System Theory | 1997

Man-machine interface through eyeball direction of gaze

Abdel-Latief Abou-Ali; William A. Porter

Measurements of the eyeball's direction of gaze are employed to provide a computer interface. The interface incorporates a keyboard as well as a mouse function. It uses ISCAN equipment to obtain the eye coordinates from an image captured by an IR camera. The calibration process is studied to provide the mapping between the equipment coordinates and the keyboard coordinates. A maximum-likelihood-based procedure is developed to drive the interface at run time. The interface is found to be sufficiently stable and responsive for a typing speed of 5.5 characters per minute.
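
The calibration step can be illustrated with a least-squares affine fit from tracker coordinates to keyboard coordinates; this is a generic sketch, not the paper's procedure, and the fixation-target values are invented:

```python
import numpy as np

def fit_affine_calibration(eye_pts, key_pts):
    """Least-squares affine map from eye-tracker coordinates to
    keyboard (screen) coordinates: key ~ A @ eye + b."""
    eye = np.asarray(eye_pts, float)               # (n, 2) raw gaze samples
    key = np.asarray(key_pts, float)               # (n, 2) known target positions
    X = np.hstack([eye, np.ones((len(eye), 1))])   # augment with a bias column
    W, *_ = np.linalg.lstsq(X, key, rcond=None)    # (3, 2) affine parameters
    return lambda p: np.hstack([p, 1.0]) @ W

# Calibrate on four fixation targets, then map a new gaze sample.
cal = fit_affine_calibration([[0, 0], [10, 0], [0, 10], [10, 10]],
                             [[0, 0], [100, 0], [0, 100], [100, 100]])
print(cal([5, 5]))   # lands at the centre of the keyboard, [50. 50.]
```

In practice the calibration targets would be on-screen keys the user fixates during a setup phase.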


Circuits Systems and Signal Processing | 1993

Recent progress on higher order moment neural arrays

William A. Porter

Higher order moment computations are generally necessary wherever the recognition task exceeds the confines of linearity. This paper provides an overview of recent progress on a specific neural network design, which explicitly uses higher order moment information. Attention is focused on the training algorithms used in the design and on network performance in prototype applications.
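
As a rough illustration of what "higher order moment information" buys, here is a sketch (not the paper's network design) that expands an input into all of its monomials up to a chosen degree, the products a purely linear unit cannot represent:

```python
import numpy as np
from itertools import combinations_with_replacement

def moment_features(x, degree=2):
    """Expand x into all monomials x_i, x_i*x_j, ... up to `degree`.
    These products carry the higher-order moment information."""
    x = np.asarray(x, float)
    feats = []
    for d in range(1, degree + 1):
        for idx in combinations_with_replacement(range(len(x)), d):
            feats.append(np.prod(x[list(idx)]))
    return np.array(feats)

print(moment_features([2.0, 3.0]))   # [2. 3. 4. 6. 9.]: x1, x2, x1^2, x1*x2, x2^2
```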


Information Sciences | 1994

Steering high order moment calculations from lower dimensional spaces

William A. Porter; Wei Liu

In this study we consider some well-established algorithms used to determine separating hyperplanes for sets in R^n. We implement these algorithms in a high-dimensional tensor space associated with the higher-order moment information of the sets in question. We identify an inner-product-preserving morphism from the tensor space to R^n. Using this morphism, the calculations of the algorithms are brought down to a space of much lower dimension. Our results make it clear that higher-order moment computations can often avoid the burden of working in a high-dimensional vector space.
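
The morphism itself is not spelled out in the abstract, but its payoff resembles the familiar polynomial-kernel identity: an inner product between explicit second-order tensor features can be evaluated entirely in the original low-dimensional space. A minimal sketch of that identity:

```python
import numpy as np

x = np.array([1.0, 2.0, 3.0])
y = np.array([0.5, -1.0, 2.0])

# Explicit second-order tensor features: phi(v) = v (x) v, 9 components here.
phi = lambda v: np.outer(v, v).ravel()
hi_dim = phi(x) @ phi(y)        # inner product computed in the tensor space

# The same inner product evaluated in the original 3-dimensional space.
lo_dim = (x @ y) ** 2

print(hi_dim, lo_dim)           # identical values: 20.25 20.25
```

Any hyperplane algorithm driven only by inner products can therefore run on the low-dimensional side of such a correspondence.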


Information Sciences | 1998

Vector quantization for multiple classes

William A. Porter; Abdel-Latief H. Abou-Ali

Vector quantization algorithms have long been used to find a finite set of exemplars that represent a data set to within an a priori error tolerance. Such a representation is essential in codebook-based data compression and transmission. The present study considers the situation where the data to be encoded consist of subclasses. The codebook must provide information compression within the several subclasses; however, minimization of interclass errors is of equal importance. We present modifications to a basic vector quantization algorithm that adapt it to the multiclass vector quantizing setting. We then explore the behavior of the modified algorithm on selected benchmark applications. We show, in particular, that overlapping subclasses can be accommodated by the algorithm.
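
The paper's specific modifications are not reproduced in the abstract; as a hedged baseline, a labelled union of per-class Lloyd-style codebooks already yields codewords that double as classifiers (all data here is synthetic):

```python
import numpy as np

def vq_codebook(data, k, iters=20, seed=0):
    """Plain Lloyd-style vector quantization: k exemplars for one class."""
    rng = np.random.default_rng(seed)
    code = data[rng.choice(len(data), k, replace=False)]
    for _ in range(iters):
        near = np.argmin(((data[:, None] - code) ** 2).sum(-1), axis=1)
        for j in range(k):
            if np.any(near == j):
                code[j] = data[near == j].mean(axis=0)
    return code

def multiclass_codebook(classes, k):
    """Labelled union of per-class codebooks; a new point takes the
    label of its nearest codeword, so codewords double as classifiers."""
    books = [(vq_codebook(np.asarray(c, float), k), lbl)
             for lbl, c in enumerate(classes)]
    code = np.vstack([b for b, _ in books])
    labels = np.concatenate([[lbl] * len(b) for b, lbl in books])
    return code, labels

a = np.random.default_rng(1).normal(0, 0.3, (50, 2))   # class 0 near (0, 0)
b = np.random.default_rng(2).normal(3, 0.3, (50, 2))   # class 1 near (3, 3)
code, labels = multiclass_codebook([a, b], k=4)
nearest = np.argmin(((np.array([3.0, 3.0]) - code) ** 2).sum(-1))
print(labels[nearest])   # 1: the query sits in the cluster centred near (3, 3)
```

The multiclass modifications described in the paper would go beyond this baseline by penalizing codewords that sit close to the wrong class.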


Information Sciences | 1996

Fuzzy HMC classifiers

William A. Porter; Wei Liu

In this study, we present a design strategy for hierarchical modular classifiers applicable to fuzzy and/or overlapping categories. The design includes techniques for decomposing a large-scale problem into a hierarchy of subproblems. A neural classification module is then designed for each of the subproblems. The collection of classification modules, and a rule book governing their use, comprise the resultant design. As an ancillary, but necessary, issue, we demonstrate the uses of a specific neural network for generating membership functions. We also explore the use of maximum membership as a classification criterion.
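
A minimal sketch of the maximum-membership criterion, assuming Gaussian membership functions with invented category centres (the paper generates membership functions with a neural network instead):

```python
import numpy as np

def gaussian_membership(centre, width):
    """Membership function for one fuzzy category."""
    return lambda x: np.exp(-((np.asarray(x, float) - centre) ** 2).sum()
                            / (2 * width ** 2))

# Two overlapping fuzzy categories on the real line (centres are invented).
cats = {"small": gaussian_membership(np.array([0.0]), 1.0),
        "large": gaussian_membership(np.array([5.0]), 1.0)}

def classify(x):
    """Maximum-membership rule: every category reports a degree of
    membership, and x is assigned to whichever degree is largest."""
    return max(cats, key=lambda name: cats[name](x))

print(classify([1.0]), classify([4.0]))   # small large
```

Because memberships are graded rather than binary, overlapping categories pose no difficulty: a point near the overlap simply has comparable memberships in both.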


Journal of The Franklin Institute-engineering and Applied Mathematics | 1995

Efficient exemplars for classifier design

William A. Porter; Wei Liu

Let X1, …, Xm denote subsets of a vector space. A function ϕ is said to classify (distinguish, recognize, …) the sets Xj if the image sets ϕ(Xj), j = 1, …, m, are distinctive. Although a broad variety of classifier designs (statistical, separating hyperplane, neural, etc.) are available, a common problem plagues these designs whenever the sets Xj are of high cardinality: the identification of exemplars which (i) are representative of the several classes, (ii) emphasize the importance of critical boundaries, and (iii) in total are of small cardinality. Such exemplar sets lead to robust designs, reduce computational costs, enhance algorithm convergence, and often reduce the hardware attendant to design implementation. In the present study we address the problem of choosing training sets for classifier designs. One result is an algorithm for selecting exemplars on the boundary of a set. Classifiers based on the boundary exemplars are effective for the binary problem: x ∈ X or x ∉ X. Using this algorithm we then present a procedure for obtaining boundary sets for the multiple-classification case. These sets X′j ⊂ Xj have the following properties: (i) the cardinality of X′j is small, (ii) the critical boundaries between the Xj are delineated, and (iii) the sets X′j are synchronized in the sense that the boundary points are selected in pairs. These three properties suggest that the X′j will facilitate a computational speed-up in several classifier design methodologies, in particular nearest neighbor, separating hyperplane, and neural network classifiers. We also present some preliminary tests which confirm this hypothesis.
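
The abstract does not give the selection algorithm itself; one simple way to obtain boundary exemplars "selected in pairs" is to keep the closest cross-class pairs, sketched here on synthetic data (this is an illustrative stand-in, not the paper's procedure):

```python
import numpy as np

def boundary_pairs(A, B, m):
    """Pick the m cross-class pairs with the smallest distance; the
    points in these pairs delineate the boundary between A and B and
    are synchronized, since they are selected in pairs."""
    A, B = np.asarray(A, float), np.asarray(B, float)
    d = ((A[:, None] - B) ** 2).sum(-1)       # (|A|, |B|) squared distances
    flat = np.argsort(d, axis=None)[:m]       # m smallest entries overall
    ia, ib = np.unravel_index(flat, d.shape)
    return A[ia], B[ib]

rng = np.random.default_rng(0)
A = rng.normal(0, 1, (100, 2))
B = rng.normal(4, 1, (100, 2))
Ab, Bb = boundary_pairs(A, B, m=10)
print(Ab.shape, Bb.shape)   # (10, 2) (10, 2): small, synchronized exemplar sets
```

Training a nearest-neighbor or hyperplane classifier on Ab and Bb alone, rather than all 200 points, is the kind of speed-up the abstract describes.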


Circuits Systems and Signal Processing | 1996

Auxiliary computations for perceptron networks

William A. Porter; Wei Liu; Chin-Yi Wu

In this study we consider a multilayer perceptron network with sigmoidal activation and trained via the backpropagation algorithm. The output of all neurons is collected and a simple linear regression is performed. It is shown that untrained networks with randomly chosen coefficients perform comparably with fully trained networks. This result casts a new light on the role of activation functions, the impact of dimensionality, and the efficacy of training algorithms such as backpropagation.
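
The experiment described can be sketched directly: a random, untrained sigmoidal layer followed by plain linear regression on the collected neuron outputs. The data, layer size, and weight scales below are invented toy choices:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class problem: points inside vs. outside the unit circle.
X = rng.uniform(-1.5, 1.5, (400, 2))
y = (X[:, 0] ** 2 + X[:, 1] ** 2 < 1.0).astype(float)

# Untrained hidden layer: randomly chosen weights, sigmoidal activation.
W = rng.normal(0, 2, (2, 50))
b = rng.normal(0, 1, 50)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))

# Simple linear regression on the neuron outputs: the only fitted part.
Ha = np.hstack([H, np.ones((len(H), 1))])
beta, *_ = np.linalg.lstsq(Ha, y, rcond=None)
pred = Ha @ beta > 0.5
acc = (pred == y.astype(bool)).mean()
print(acc)   # typically well above chance, despite no backpropagation
```

That a random feature layer plus a linear readout performs this well is exactly the observation that motivates the paper's questions about the role of training.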


Circuits Systems and Signal Processing | 1998

On neural network design — Part I: Using the MVQ algorithm

William A. Porter; A. H. Abouali

In this two-part study we present a new design methodology for neural classifiers. The design procedure utilizes a multiclass vector quantization (MVQ) algorithm for information extraction from the training set. The extracted information suffices to specify the hidden layer in a canonical neural network architecture. The extracted information also leads to the specification of neuron inhibition rules and subsequently the design of the hidden layer-to-output map. In Part I of the study we focus attention on the MVQ algorithm and how it is used to extract information from a training set. The extracted information is referred to as the codebook. The codebook is used to directly specify the hidden layer. This specification can take the form of a perceptron layer, a radial basis layer, or a heterogeneous layer involving a mixture of neuron types. These and other h-layer specifications are determined directly from the same extracted information. The MVQ codebook also suffices to scale the activation function of each neuron. In Part II we consider the nonsimplistic hidden layer-to-output map design. We note that the MVQ algorithm, as it extracts information, decomposes the design set into disjoint neighborhoods. For each neighborhood we identify subsets of the hidden layer neurons which are significant sensors for the neighborhood. For each such subset we construct an output map. Inhibition rules are established to ensure that the proper output map is activated. In benchmark simulations the overall design exhibits excellent performance, to the extent that we are hard pressed to identify bounds on performance, if any.
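
As a sketch of how a codebook can "directly specify the hidden layer", each codeword below becomes the centre of one radial-basis neuron; the width parameter stands in for the activation scaling the abstract mentions, and the codewords themselves are invented:

```python
import numpy as np

def rbf_hidden_layer(codebook, width):
    """Each codeword is the centre of one radial-basis neuron, so the
    codebook alone fixes the hidden layer's size and parameters."""
    code = np.asarray(codebook, float)
    def layer(x):
        d2 = ((np.asarray(x, float) - code) ** 2).sum(-1)
        return np.exp(-d2 / (2 * width ** 2))
    return layer

# A two-codeword "codebook" specifies a two-neuron hidden layer.
layer = rbf_hidden_layer([[0.0, 0.0], [3.0, 3.0]], width=1.0)
print(layer([0.0, 0.0]))   # first neuron fully active, second nearly silent
```

A perceptron or mixed h-layer could be specified from the same codebook by different per-codeword parameterizations, which is the flexibility the abstract claims.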


Circuits Systems and Signal Processing | 1996

Neural network training enhancement

William A. Porter; Wei Liu

In this study we explore the use of nonlinear embedding maps to expand the dimension of the input space. The efficacy of such maps to speed training and to enhance performance is illustrated through several examples. A natural connection to nonlinear synaptic interconnects is also developed.
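
A classic illustration of such an embedding (not taken from the paper): appending the product x1·x2 to the input makes XOR linearly separable, so a single least-squares map suffices where no linear map on the raw inputs could:

```python
import numpy as np

# XOR is not linearly separable in the raw two-dimensional input space...
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], float)
y = np.array([0.0, 1.0, 1.0, 0.0])

# ...but embedding into a higher dimension with the product x1*x2 fixes that.
E = np.hstack([X, (X[:, 0] * X[:, 1])[:, None], np.ones((4, 1))])
w, *_ = np.linalg.lstsq(E, y, rcond=None)
print(E @ w)   # recovers y = [0, 1, 1, 0] up to rounding
```

The product term plays the same role as a nonlinear synaptic interconnect, which is the connection the abstract develops.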


Circuits Systems and Signal Processing | 1998

On neural network design part II: Inhibition and the output map

William A. Porter; A. H. Abouali

In this two-part study, we present a new design methodology for neural classifiers. The design procedure utilizes a multiclass vector quantization (MVQ) algorithm for information extraction from the training set. The extracted information suffices to specify the hidden layer in a canonical neural network architecture. The extracted information also leads to the specification of neuron inhibition rules and subsequently to the design of the hidden layer-to-output map. In Part I of the study, we focused attention on the MVQ algorithm and how it is used to extract information from a training set. The extracted information is used to directly specify the hidden layer. In Part II, we consider the nonsimplistic hidden layer-to-output map design. We note that the MVQ algorithm, as it extracts information, decomposes the design set into disjoint neighborhoods. For each neighborhood we identify subsets of the hidden layer neurons which are significant sensors for the neighborhood. For each subset we construct an output map. Inhibition rules are established to ensure that the proper output map is activated. In benchmark simulations, the overall design exhibits excellent performance, to the extent that we are hard pressed to identify bounds on performance, if any.

Collaboration


Dive into William A. Porter's collaborations.

Top Co-Authors

- Wei Liu (University of Alabama in Huntsville)
- A. H. Abouali (University of Alabama in Huntsville)
- Abdel-Latief H. Abou-Ali (University of Alabama in Huntsville)
- Chin-Yi Wu (University of Alabama in Huntsville)