Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Stanley C. Ahalt is active.

Publication


Featured research published by Stanley C. Ahalt.


Neural Networks | 1990

Competitive learning algorithms for vector quantization

Stanley C. Ahalt; Ashok K. Krishnamurthy; Prakoon Chen; Douglas E. Melton

We compare a number of training algorithms for competitive learning networks applied to the problem of vector quantization for data compression. A new competitive-learning algorithm based on the “conscience” learning method is introduced. The performance of competitive learning neural networks and traditional non-neural algorithms for vector quantization is compared. The basic properties of the algorithms are discussed and we present a number of examples that illustrate their use. The new algorithm is shown to be efficient and yields near-optimal results. This algorithm is used to design a vector quantizer for a speech database. We conclude with a discussion of continuing work.
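The conscience-style competitive update described in the abstract can be sketched in a few lines. This is an illustrative reimplementation, not the paper's code: the learning rate, epoch count, win-count weighting, and the restriction to one-dimensional data are all assumptions made for brevity.

```python
import random

def fscl_train(data, num_codewords, lr=0.05, epochs=20, seed=0):
    """Conscience-style (frequency-sensitive) competitive learning for
    one-dimensional vector quantization. Each codeword's distance is
    scaled by its win count, so codewords that rarely win become more
    competitive and all codewords end up being used."""
    rng = random.Random(seed)
    codewords = [rng.choice(data) for _ in range(num_codewords)]
    wins = [1] * num_codewords  # start at 1 to avoid zero scaling
    for _ in range(epochs):
        for x in data:
            # Fairness-weighted distance: win count times squared error.
            j = min(range(num_codewords),
                    key=lambda k: wins[k] * (x - codewords[k]) ** 2)
            wins[j] += 1
            # Only the winning codeword moves toward the input.
            codewords[j] += lr * (x - codewords[j])
    return codewords
```

On clearly bimodal data, the win-count penalty prevents both codewords from collapsing onto one mode, which is the practical point of the "conscience" mechanism.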


IEEE Journal on Selected Areas in Communications | 1990

Neural networks for vector quantization of speech and images

Ashok K. Krishnamurthy; Stanley C. Ahalt; Douglas Melton; Prakoon Chen

The use of neural networks for vector quantization (VQ) is described. The authors show how a collection of neural units can be used efficiently for VQ encoding, with the units performing the bulk of the computation in parallel, and describe two unsupervised neural network learning algorithms for training the vector quantizer. A powerful feature of the new training algorithms is that the VQ codewords are determined in an adaptive manner, in contrast to the popular LBG training algorithm, which requires that all the training data be processed in batch mode. The neural network approach allows for the possibility of training the vector quantizer online, thus adapting to the changing statistics of the input data. The authors compare the neural network VQ algorithms to the LBG algorithm for encoding a large database of speech signals and for encoding images.
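For contrast with the adaptive neural updates, the batch-mode LBG (generalized Lloyd) iteration the abstract refers to can be sketched as follows for one-dimensional data; the initialization and iteration count here are illustrative, and the whole training set must be in hand before training starts.

```python
def lbg_train(data, init_codewords, iterations=10):
    """One-dimensional LBG (generalized Lloyd) training. Each pass
    assigns every sample to its nearest codeword, then replaces each
    codeword with the centroid of its assigned samples. Unlike online
    competitive learning, every pass touches the entire training set."""
    codewords = list(init_codewords)
    for _ in range(iterations):
        buckets = [[] for _ in codewords]
        for x in data:
            j = min(range(len(codewords)),
                    key=lambda k: (x - codewords[k]) ** 2)
            buckets[j].append(x)
        for j, b in enumerate(buckets):
            if b:  # leave empty cells unchanged
                codewords[j] = sum(b) / len(b)
    return codewords
```

The batch requirement is visible in the structure: the assignment step needs all of `data` before any centroid can be updated, which is exactly what an online scheme avoids.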


Journal of the Acoustical Society of America | 1995

Variability in the production of quantal vowels revisited

Mary E. Beckman; Tzyy-Ping Jung; Sook‐hyang Lee; Kenneth A. De Jong; Ashok K. Krishnamurthy; Stanley C. Ahalt; K. Bretonnel Cohen; Michael J. Collins

Articulatory and acoustic variability in the production of five American English vowels was examined. The data were movement records for selected fleshpoints on the midsagittal tongue surface, recorded using the x‐ray microbeam. An algorithm for nonlinearly transforming fleshpoint positions to a new Cartesian space in which the x and y axes represent, respectively, the distance of the fleshpoint along the opposing vocal tract wall and the distance perpendicular to the tract wall, is described. The transformation facilitates a test of Quantal Theory in which variability in the two dimensions is compared over many productions of a given vowel type. The data provide some support for the theory. For fleshpoints near ‘‘quantal’’ constriction sites, the primary variability was in the x dimension (constriction location). The y‐dimension values were more tightly constrained, and the formant frequencies were more significantly correlated with the y values than with the x values. The greater variability in constric...


Software - Practice and Experience | 1991

Compiled instruction set simulation

Christopher Mills; Stanley C. Ahalt; James E. Fowler

An efficient method for simulating instruction sets is described. The method allows for compiled instruction set simulation using the macro expansion capabilities found in many languages. Additionally, we show how the semantics of the C case statement allows instruction branching to be incorporated in an efficient manner. The method is compared with conventional interpreted techniques and is shown to offer considerable performance benefits.
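The paper's technique relies on C macro expansion and the semantics of the C case statement. As a loose analogue in Python, the same compile-then-run idea can be sketched with source generation and `exec`: each instruction is expanded into host-language source once, so execution avoids a per-instruction decode loop. The three-instruction accumulator machine below is entirely hypothetical.

```python
# Hypothetical 3-instruction accumulator machine: (opcode, operand) pairs.
TEMPLATES = {
    "LOAD": "    acc = {0}\n",
    "ADD":  "    acc += {0}\n",
    "MUL":  "    acc *= {0}\n",
}

def compile_program(program):
    """Expand each instruction into host-language source (the analogue
    of macro expansion), then compile the whole program once. Running
    the result executes straight-line host code instead of an
    interpret-decode loop over the instruction list."""
    body = "".join(TEMPLATES[op].format(arg) for op, arg in program)
    src = "def run():\n    acc = 0\n" + body + "    return acc\n"
    namespace = {}
    exec(src, namespace)
    return namespace["run"]

run = compile_program([("LOAD", 2), ("ADD", 3), ("MUL", 4)])
```

The performance argument in the paper carries over: decode cost is paid once at compile time rather than on every simulated instruction.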


IEEE Transactions on Aerospace and Electronic Systems | 1993

Classification of radar targets using synthetic neural networks

I. Jouny; F. D. Garber; Stanley C. Ahalt

Radar target classification performance of neural networks is evaluated. Time-domain and frequency-domain target features are considered. The sensitivity of the neural network algorithm to changes in network topology and training noise level is examined. The problem of classifying radar targets at unknown aspect angles is considered. The performance of the neural network algorithms is compared with that of decision-theoretic classifiers. Neural networks can be effectively used as radar target classification algorithms with an expected performance within 10 dB (worst case) of the optimum classifier.


IEEE Transactions on Circuits and Systems for Video Technology | 2002

A hybrid DCT-SVD image-coding algorithm

Adriana Dapena; Stanley C. Ahalt

We propose an image-coding algorithm which combines the discrete cosine transform (DCT) and the singular value decomposition (SVD). The DCT is used to transform those blocks in the source image that exhibit a high correlation, while blocks with greater high-frequency content are transformed using the SVD. A simple criterion is used to decide which transform should be used on each block. Simulation results show that the new hybrid algorithm provides good distortion, bit rate, and image quality-especially in images which are less spatially correlated.
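The abstract does not reproduce the exact block-selection criterion, so the sketch below substitutes an assumed one: estimate the correlation between horizontally adjacent pixels in a block, route highly correlated (smooth) blocks to the DCT, and route the rest (high-frequency content) to the SVD.

```python
def block_transform_choice(block, threshold=0.9):
    """Illustrative per-block decision rule (an assumption, not the
    paper's criterion): correlation between horizontally adjacent
    pixels decides between the DCT and the SVD."""
    pairs = [(row[i], row[i + 1]) for row in block
             for i in range(len(row) - 1)]
    n = len(pairs)
    mean_a = sum(a for a, _ in pairs) / n
    mean_b = sum(b for _, b in pairs) / n
    cov = sum((a - mean_a) * (b - mean_b) for a, b in pairs) / n
    var_a = sum((a - mean_a) ** 2 for a, _ in pairs) / n
    var_b = sum((b - mean_b) ** 2 for _, b in pairs) / n
    if var_a == 0 or var_b == 0:
        return "DCT"  # flat block: trivially smooth
    corr = cov / (var_a ** 0.5 * var_b ** 0.5)
    return "DCT" if corr >= threshold else "SVD"
```

A smooth gradient block scores near +1 and goes to the DCT; an alternating high-frequency block scores near -1 and goes to the SVD.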


IEEE Transactions on Signal Processing | 1995

Computationally attractive real Gabor transforms

Daniel F. Stewart; Lee C. Potter; Stanley C. Ahalt

We present a Gabor transform for real, discrete signals and present a computationally attractive method for computing the transform. For the critically sampled case, we derive a biorthogonal function which is very localized in the time domain. Therefore, truncation of this biorthogonal function allows us to compute approximate expansion coefficients with significantly reduced computational requirements. Further, truncation does not degrade the numerical stability of the transform. We present a tight upper bound on the reconstruction error incurred due to use of a truncated biorthogonal function and summarize computational savings. For example, the expense of transforming a length 2048 signal using length 16 blocks is reduced by a factor of 26 over similar FFT-based methods with at most 0.04% squared error in the reconstruction.


IEEE Transactions on Circuits and Systems for Video Technology | 1995

Real-time video compression using differential vector quantization

James E. Fowler; Kenneth C. Adkins; Steven B. Bibyk; Stanley C. Ahalt

This paper describes hardware that has been built to compress video in real time using full-search vector quantization (VQ). This architecture implements a differential-vector-quantization (DVQ) algorithm and features a special-purpose digital associative memory, the VAMPIRE chip, which has been fabricated in 2 μm CMOS. We describe the DVQ algorithm, its adaptations for sampled NTSC composite-color video, and details of its hardware implementation. We conclude by presenting both numerical results and images drawn from real-time operation of the DVQ hardware.
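Separately from the hardware itself, the differential-VQ encode loop can be sketched in software. The previous-reconstruction predictor and the toy residual codebook below are assumptions for illustration, not the paper's design; the full search over the codebook mirrors what the associative memory performs in parallel.

```python
def dvq_encode(vectors, codebook):
    """Differential VQ sketch: quantize the residual between each input
    vector and the previous reconstruction with a full search over the
    residual codebook, and track the decoder's reconstruction so
    encoder and decoder states stay in lockstep."""
    indices = []
    recon = [0.0] * len(vectors[0])  # decoder state: last reconstruction
    for v in vectors:
        residual = [a - b for a, b in zip(v, recon)]
        j = min(range(len(codebook)),
                key=lambda k: sum((r - c) ** 2
                                  for r, c in zip(residual, codebook[k])))
        indices.append(j)
        # Mirror the decoder: add the chosen residual codeword.
        recon = [b + c for b, c in zip(recon, codebook[j])]
    return indices
```

Coding the residual rather than the vector itself is what lets a small codebook track slowly varying video content.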


IEEE Transactions on Neural Networks | 1996

Codeword distribution for frequency sensitive competitive learning with one-dimensional input data

Aristides S. Galanopoulos; Stanley C. Ahalt

We study the codeword distribution for a conscience-type competitive learning algorithm, frequency-sensitive competitive learning (FSCL), using one-dimensional input data. We prove that the asymptotic codeword density in the limit of a large number of codewords is given by a power law of the form Q(x) = C·P(x)^α, where P(x) is the input data density and α depends on the algorithm and on the form of the distortion measure to be minimized. We further show that the algorithm can be adjusted to minimize any L_p distortion measure with p in (0, 2].


Neural Networks | 1996

On temporal generalization of simple recurrent networks

DeLiang Wang; Xiaomei Liu; Stanley C. Ahalt

Simple recurrent networks (Elman networks) have been widely used in temporal processing applications. In this study we investigate temporal generalization of simple recurrent networks, drawing comparisons between network capabilities and human performance. Elman networks are trained to generate temporal trajectories sampled at different rates. The networks are then tested with trajectories at the trained rates and other sampling rates, including trajectories representing mixtures of different sampling rates. It is found that for simple trajectories the networks show interval invariance, but not rate invariance. However, for complex trajectories which require greater contextual information, these networks do not seem to show any temporal generalization. Similar results are also obtained using measured speech data. These results suggest that this class of recurrent networks exhibits severe limitations in temporal generalization. Discussions are provided regarding rate invariance and possible ways to achieve it in neural networks.
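A single time step of the Elman architecture the study examines can be sketched as follows; the weights and dimensions are illustrative. The defining feature is that the hidden activations are copied back as context input at the next step, which is the only mechanism the network has for carrying temporal information.

```python
import math

def elman_step(x, context, w_in, w_rec, w_out):
    """One time step of a simple recurrent (Elman) network: the hidden
    layer sees the current input plus a copy of its own previous
    activations (the context units). Returns the output and the new
    hidden state, which becomes the next step's context."""
    hidden = []
    for i in range(len(w_rec)):
        s = sum(w * xv for w, xv in zip(w_in[i], x))       # input drive
        s += sum(w * cv for w, cv in zip(w_rec[i], context))  # context drive
        hidden.append(math.tanh(s))
    output = [sum(w * h for w, h in zip(row, hidden)) for row in w_out]
    return output, hidden
```

Even with zero input, a nonzero context still drives the output, which is how the network "remembers" earlier steps, and also why its temporal generalization is bounded by what the context representation can encode.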

Collaboration


Dive into Stanley C. Ahalt's collaborations.

Top Co-Authors

James E. Fowler
Mississippi State University

Xun Du
Ohio State University

Tzyy-Ping Jung
University of California