
Publications


Featured research published by Dean K. McNeill.


IEEE Transactions on Computers | 1998

Competitive learning algorithms and neurocomputer architecture

Howard C. Card; G. K. Rosendahl; Dean K. McNeill; Robert D. McLeod

This paper begins with an overview of several competitive learning algorithms in artificial neural networks, including self-organizing feature maps, focusing on properties of these algorithms important to hardware implementations. We then discuss previously reported digital implementations of these networks. Finally, we report a reconfigurable parallel neurocomputer architecture we have designed using digital signal processing chips and field-programmable gate array devices. Communications are based upon a broadcast network with FPGA-based message preprocessing and postprocessing. A small prototype of this system has been constructed and applied to competitive learning in self-organizing maps. This machine is able to model slowly-varying nonstationary data in real time.
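
The survey portion of this paper centers on winner-take-all updates. As a point of reference, here is a minimal NumPy sketch of hard competitive learning, the simplest algorithm in that family; the function name, parameters, and test data are illustrative and are not taken from the paper or its hardware.

```python
import numpy as np

def competitive_learning(data, n_units=4, lr=0.1, epochs=20, seed=0):
    # Hard (winner-take-all) competitive learning: for each input, only
    # the nearest prototype is updated, moving a step toward that input.
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), size=n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(data):
            winner = np.argmin(np.linalg.norm(w - x, axis=1))
            w[winner] += lr * (x - w[winner])
    return w

# Two well-separated clusters; the prototypes settle near the cluster means.
rng = np.random.default_rng(1)
data = np.vstack([rng.normal(0.0, 0.3, (200, 2)),
                  rng.normal(3.0, 0.3, (200, 2))])
print(competitive_learning(data))
```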


Analog Integrated Circuits and Signal Processing | 1998

Analog VLSI Circuits for Competitive Learning Networks

Howard C. Card; Dean K. McNeill; Christian Schneider

An investigation is made of implementations of competitive learning algorithms in analog VLSI circuits and systems. Analog and low-power digital circuits for competitive learning are currently important for their applications in computationally efficient speech and image compression by vector quantization, as required, for example, in portable multimedia terminals. A summary of competitive learning models is presented to indicate the type of VLSI computations required, and the effects of weight quantization are discussed. Analog circuit representations of computational primitives for learning and for evaluation of distortion metrics are discussed. The present state of VLSI implementations of hard and soft competitive learning algorithms is described, as well as that of topological feature maps. Tolerance of learning algorithms to observed analog circuit properties is reported. New results are also presented from simulations of frequency-sensitive and soft competitive learning concerning the sensitivity of these algorithms to precision in VLSI learning computations. Applications of these learning algorithms to unsupervised feature extraction and to vector quantization of speech and images are also described.
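
A recurring theme here is how weight quantization interacts with learning. The sketch below explores that question in software under assumed settings: a uniform quantizer applied after every update, a 2D Gaussian input distribution, and illustrative bit widths. It is not the authors' simulation setup.

```python
import numpy as np

def quantize(w, bits, w_max=4.0):
    # Snap weights to a uniform grid of 2**bits levels on [-w_max, w_max],
    # a stand-in for the limited precision of VLSI weight storage.
    step = 2.0 * w_max / (2 ** bits - 1)
    return np.clip(np.round(w / step) * step, -w_max, w_max)

def quantized_cl(data, n_units=4, bits=6, lr=0.05, epochs=20, seed=0):
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), size=n_units, replace=False)].copy()
    for _ in range(epochs):
        for x in rng.permutation(data):
            win = np.argmin(np.linalg.norm(w - x, axis=1))
            w[win] = quantize(w[win] + lr * (x - w[win]), bits)
    return w

def distortion(data, w):
    # Mean squared quantization error of the learned codebook.
    return np.mean([np.min(np.sum((w - x) ** 2, axis=1)) for x in data])

rng = np.random.default_rng(2)
data = rng.normal(0.0, 1.0, (500, 2))
for bits in (3, 5, 8):
    print(bits, distortion(data, quantized_cl(data, bits=bits)))
```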


International Symposium on Neural Networks | 1994

Analog hardware tolerance of soft competitive learning

Dean K. McNeill; Howard C. Card

This paper examines issues in the analog CMOS circuit implementation of the soft competitive neural learning algorithm. Results of simulations based on actual measurements of previously fabricated analog components, primarily CMOS Gilbert multipliers, are presented. These results demonstrate that a generalized version of the soft competitive learning algorithm is capable of discovering appropriate features in an unsupervised learning mode. At the same time it is well suited to fabrication in an analog environment, and inherent fabrication variations, such as transistor threshold variation and circuit noise, do not adversely affect the performance of the algorithm on a selected test problem.
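
A minimal software model of the algorithm studied here, assuming a softmax-style soft assignment and additive Gaussian noise as a crude stand-in for multiplier offsets and circuit noise; the noise level, the temperature beta, and the data are illustrative choices, not measurements from the fabricated components.

```python
import numpy as np

def soft_cl_step(w, x, lr=0.05, beta=4.0, noise_sd=0.002, rng=None):
    # Soft competitive learning: every prototype moves toward the input,
    # weighted by a softmax over negative squared distances, so the
    # nearest units absorb most of the update.
    rng = rng or np.random.default_rng()
    d2 = np.sum((w - x) ** 2, axis=1)
    g = np.exp(-beta * (d2 - d2.min()))   # shifted for numerical stability
    g /= g.sum()
    dw = lr * g[:, None] * (x - w)
    dw += rng.normal(0.0, noise_sd, dw.shape)   # assumed analog imperfection
    return w + dw

rng = np.random.default_rng(3)
data = np.vstack([rng.normal(-2.0, 0.4, (200, 2)),
                  rng.normal(2.0, 0.4, (200, 2))])
w = rng.normal(0.0, 0.1, (4, 2))
for _ in range(20):
    for x in rng.permutation(data):
        w = soft_cl_step(w, x, rng=rng)
print(w)   # prototypes end up near the two cluster centers despite the noise
```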


International Symposium on Circuits and Systems | 1995

The impact of VLSI fabrication on neural learning

Howard C. Card; Dean K. McNeill; Christian Schneider; Roland S. Schneider

The fabrication of silicon versions of artificial neural learning algorithms in existing VLSI processes introduces a variety of concerns which do not exist in a theoretical system. These include such well-known circuit properties as noise, variations and nonlinearity of fabricated devices, arithmetic inaccuracy, and capacitive decay. The supervised contrastive Hebbian learning algorithm and the unsupervised soft competitive learning algorithm have demonstrated their resilience in the presence of these effects, as observed in 1.2 µm CMOS circuits employing Gilbert multipliers. It has been found that the learning circuits will operate correctly in the presence of offset errors in analog multipliers and adders if thresholding is applied when performing weight updates.
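
The thresholding remedy in the last sentence is easy to demonstrate in a toy model. In the sketch below, offset and theta are invented illustrative values; the point is only that a constant sub-threshold offset error, by itself, produces no weight drift.

```python
import numpy as np

def commit_update(w, dw, offset=0.005, theta=0.01):
    # dw is the ideal weight change; the analog circuit actually delivers
    # dw + offset (a constant multiplier/adder offset error). Suppressing
    # updates smaller in magnitude than theta keeps the offset alone from
    # steadily corrupting the stored weights.
    dw_measured = dw + offset
    return w + np.where(np.abs(dw_measured) > theta, dw_measured, 0.0)

w = np.zeros(4)
for _ in range(1000):                       # no learning signal at all
    w = commit_update(w, np.zeros(4))
print(w)                                    # still all zeros: no drift
```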


International Symposium on Neural Networks | 1994

Is VLSI neural learning robust against circuit limitations?

Howard C. Card; B.K. Dolenko; Dean K. McNeill; Christian Schneider; R.S. Schneider

An investigation is made of the tolerance of various in-circuit learning algorithms to component imprecision and other circuit limitations in artificial neural networks. Supervised learning mechanisms, including backpropagation and contrastive Hebbian learning, and unsupervised soft competitive learning are all shown to be tolerant of those levels of arithmetic inaccuracy, noise, nonlinearity, weight decay, and statistical variation from fabrication that the authors have experienced in 1.2 µm analog CMOS circuits employing Gilbert multipliers as the primary computational element. These learning circuits also function properly in the presence of offset errors in analog multipliers and adders, provided that the computed weight updates are constrained by the circuitry to be made only when they exceed certain minimum or threshold values. These results are also relevant for compact (low bit rate) digital implementations.


IEEE Transactions on Neural Networks | 2002

Gaussian activation functions using Markov chains

Howard C. Card; Dean K. McNeill

We extend, in two major ways, earlier work in which sigmoidal neural nonlinearities were implemented using stochastic counters. 1) We define the signal-to-noise limitations of unipolar and bipolar stochastic arithmetic and signal processing. 2) We generalize the use of stochastic counters to include neural transfer functions employed in Gaussian mixture models. The hardware advantages of (nonlinear) stochastic signal processing (SSP) may be offset by increased processing time; we quantify these issues. The ability to realize accurate Gaussian activation functions for neurons in pulsed digital networks using simple hardware with stochastic signals is also analyzed quantitatively.
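
The unipolar/bipolar signal-to-noise comparison in point 1) can be reproduced with a few lines of simulation: encode a value as a Bernoulli bitstream, decode it as the mean bit value, and watch the estimation error shrink as the stream lengthens. The bipolar code shows a larger error at the same stream length, consistent with its wider variance. Stream lengths and the test value are arbitrary choices.

```python
import numpy as np

def stochastic_estimate(v, n_bits, mode="unipolar", rng=None):
    # Unipolar: v in [0, 1] and P(bit = 1) = v.
    # Bipolar:  v in [-1, 1] and P(bit = 1) = (v + 1) / 2.
    rng = rng or np.random.default_rng()
    p = v if mode == "unipolar" else (v + 1) / 2
    est = np.mean(rng.random(n_bits) < p)       # decode: fraction of 1s
    return est if mode == "unipolar" else 2 * est - 1

rng = np.random.default_rng(4)
for mode in ("unipolar", "bipolar"):
    for n in (64, 1024):
        errs = [abs(stochastic_estimate(0.3, n, mode, rng) - 0.3)
                for _ in range(500)]
        print(mode, n, round(float(np.mean(errs)), 4))
```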


Neurocomputing | 1998

Competitive learning and vector quantization in digital VLSI systems

Howard C. Card; Srigouri Kamarsu; Dean K. McNeill

This paper reviews implementations of competitive learning algorithms in digital VLSI circuits and systems, including recent results from our laboratory on neural circuits for efficient learning computations and for vector quantization. Digital circuits for competitive learning, especially those operating with low-power requirements, are currently important for their applications in computationally efficient speech and image compression by vector quantization, as required, for example, in portable multimedia terminals. A summary of competitive learning models is presented to indicate the type of VLSI computations required, and the effects of weight quantization are discussed. Circuit representations of computational primitives for learning and evaluation of distortion metrics by digital circuits are also reviewed. The present state of VLSI implementations of hard and soft competitive learning algorithms is discussed, as well as that of topological feature maps. Recent results are also presented from simulations of frequency-sensitive competitive learning concerning the sensitivity of these algorithms to limited precision in VLSI learning computations. Applications of these learning algorithms to unsupervised feature extraction and to vector quantization are also described.
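
Frequency-sensitive competitive learning, the variant whose precision sensitivity the paper simulates, has a very compact software statement. This is the generic textbook form, with win-count-scaled distances, not the authors' circuit-level formulation; all parameters are illustrative.

```python
import numpy as np

def fscl(data, n_units=8, lr=0.05, epochs=10, seed=0):
    # Frequency-sensitive competitive learning: each unit's distance is
    # scaled by its win count, so frequent winners become harder to win
    # with and every codeword is pulled into use (no dead units).
    rng = np.random.default_rng(seed)
    w = data[rng.choice(len(data), size=n_units, replace=False)].copy()
    counts = np.ones(n_units)
    for _ in range(epochs):
        for x in rng.permutation(data):
            win = np.argmin(counts * np.linalg.norm(w - x, axis=1))
            w[win] += lr * (x - w[win])
            counts[win] += 1
    return w

rng = np.random.default_rng(5)
data = rng.normal(0.0, 1.0, (1000, 2))
print(fscl(data))       # an 8-entry vector-quantization codebook
```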


Archive | 2005

Adaptive Event Detection for SHM System Monitoring

Dean K. McNeill; Loren Card

One of the greatest challenges affecting the use of structural health monitoring is the establishment of effective and reliable techniques for processing and managing the accumulated measurement data. This article presents results achieved using an artificial neural computing approach applied to this problem. The unsupervised neural learning algorithm known as frequency-sensitive competitive learning is employed in the processing of sensor data from three civil engineering structures. It is shown that the algorithm is capable of learning the normal response of a structure and thereafter provides an effective means of identifying novel features in the sensor record. This permits further detailed study of these specific noteworthy events. Events are identified using a relative novelty index computed by the neural network architecture. Examples demonstrate the identification of vehicle traffic on one bridge, seismic activity on a second, and the response to wind loading on a feature statue.
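
This listing does not spell out the formula for the relative novelty index, so the sketch below is one plausible reading: the quantization error of a new sensor vector under the learned codebook, normalized by the typical error on normal training data. The normalization and the demo data are our assumptions.

```python
import numpy as np

def novelty_index(x, codebook, baseline_err):
    # Quantization error of x under a codebook learned from normal sensor
    # data, divided by the typical error on that data; values well above 1
    # flag records worth detailed inspection.
    err = np.min(np.linalg.norm(codebook - x, axis=1))
    return err / baseline_err

rng = np.random.default_rng(6)
normal = rng.normal(0.0, 0.2, (500, 3))     # stand-in "normal" response
# Here the codebook is simply sampled; in the article it is learned by
# frequency-sensitive competitive learning.
codebook = normal[rng.choice(500, size=8, replace=False)]
baseline = np.mean([np.min(np.linalg.norm(codebook - x, axis=1))
                    for x in normal])
print(novelty_index(normal[0], codebook, baseline))                  # near 1
print(novelty_index(np.array([2.0, 2.0, 2.0]), codebook, baseline))  # >> 1
```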


International Journal of Neural Systems | 2001

An investigation of competitive learning for autonomous cluster identification in embedded systems

Dean K. McNeill; Howard C. Card; Alan F. Murray

Robust signal processing for embedded systems requires the effective identification and representation of features within raw sensory data. This task is inherently difficult due to unavoidable long-term changes in the sensory systems and/or the sensed environment. In this paper we explore four variations of competitive learning and examine their suitability as unsupervised techniques for the automated identification of data clusters within a given input space. The relative performance of the four techniques is evaluated through their ability to effectively represent the structure underlying artificial and real-world data distributions. As a result of this study it was found that frequency-sensitive competitive learning provides both reliable and efficient solutions to complex data distributions. In addition, frequency-sensitive and soft competitive learning are shown to exhibit properties which may permit the evolution of an appropriate network structure through the use of growing or pruning procedures.


Neurocomputing | 2002

Dynamic range and error tolerance of stochastic neural rate codes

Dean K. McNeill; Howard C. Card

An investigation is made of the statistical properties of unary stochastic rate codes, as employed in the implementation of pulsed neural networks. The dynamic range exhibits a reduced dependence on the representation time due to increased estimation errors from the variance in the stochastic encodings of the integers. At the same time, these codes exhibit improved tolerance to noise-induced bit errors. Comparisons are made with conventional integer arithmetic and with deterministic unary codes for neural networks, up to bit error rates as large as 0.3.
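
A small experiment in the spirit of these comparisons, under the simplest reading of the codes: both a deterministic unary code and a stochastic rate code decode by counting 1s, so each noise-induced bit flip moves the decoded value by exactly one count, unlike positional binary, where one flipped high-order bit can change an 8-bit value by 128. The stream length and encoded value are illustrative; the 0.3 bit error rate follows the abstract's upper figure.

```python
import numpy as np

def decode(bits):
    # Both unary codes decode by counting the 1s in the stream.
    return int(np.count_nonzero(bits))

rng = np.random.default_rng(7)
n, k, ber = 256, 96, 0.3                 # stream length, value, bit error rate

deterministic = np.arange(n) < k         # first k bits set
stochastic = rng.random(n) < k / n       # each bit 1 with probability k/n

for name, bits in (("deterministic", deterministic), ("stochastic", stochastic)):
    flips = rng.random(n) < ber          # noise-induced bit errors
    received = bits ^ flips              # each flip shifts the count by +/- 1
    # Expected received count is k*(1 - 2*ber) + n*ber, so the bias is
    # correctable when the error rate is known.
    print(name, decode(bits), decode(received))
```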

Collaboration


Dive into Dean K. McNeill's collaboration.

Top Co-Authors

Loren Card

University of Manitoba


Brion Dolenko

National Research Council


Joe LoVetri

University of Manitoba
