
Publications

Featured research published by Lex A. Akers.


Solid-State Electronics | 1982

Threshold voltage models of short, narrow and small geometry MOSFET's: A review

Lex A. Akers; Julian J. Sanchez

As MOS devices are shrunk to near- and submicrometer dimensions, short-channel, narrow-width and small-geometry effects cause variations in the threshold voltage. It is critical for circuit and device designers to be able to predict these variations. This paper reviews and compares the various modeling techniques developed to determine the threshold voltage as a function of device geometry. It is hoped this review will provide insights for the development of new models for today's small devices.
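
One classic short-channel model that reviews of this era cover is Yau's charge-sharing model, in which the depletion charge shared with the source/drain junctions lowers the threshold as the channel shortens. A minimal sketch of that model, with illustrative parameter values not taken from the paper:

```python
import math

def delta_vth_yau(L, xj, Wd, Na, tox):
    """Threshold-voltage reduction from Yau's charge-sharing model.

    L   : effective channel length (m)
    xj  : source/drain junction depth (m)
    Wd  : channel depletion depth (m)
    Na  : substrate doping (m^-3)
    tox : gate-oxide thickness (m)
    """
    q = 1.602e-19              # electron charge (C)
    eps_ox = 3.9 * 8.854e-12   # SiO2 permittivity (F/m)
    cox = eps_ox / tox         # oxide capacitance per unit area (F/m^2)
    # fraction of depletion charge lost to the trapezoidal source/drain regions
    f = (xj / L) * (math.sqrt(1.0 + 2.0 * Wd / xj) - 1.0)
    return (q * Na * Wd / cox) * f

# Illustrative numbers: the roll-off grows as the channel is shortened.
for L_um in (2.0, 1.0, 0.5):
    dv = delta_vth_yau(L_um * 1e-6, xj=0.2e-6, Wd=0.1e-6, Na=1e23, tox=20e-9)
    print(f"L = {L_um} um  ->  Delta Vth ~ {dv * 1e3:.0f} mV")
```

The review compares many such models; this one illustrates the common theme that the threshold shift scales with the fraction of channel depletion charge shared with the junctions.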


IEEE Transactions on Neural Networks | 1993

An adaptive neural processing node

James Donald; Lex A. Akers

The design and test results for two analog adaptive VLSI processing chips are described. These chips use pulse coded signals for communication between processing nodes and analog weights for information storage. The weight modification rule, implemented on chip, uses concepts developed by E. Oja (1982) and later extended by T. Leen et al. (1989) and T. Sanger (1989). Experimental results demonstrate that the network produces linearly separable outputs that correspond to dominant features of the inputs. Such representations allow for efficient additional neural processing. Part of the adaptation rule also includes a small number of fixed inputs and a variable lateral inhibition mechanism. Experimental results from the first chip show the operation of function blocks that make a single processing node. These function blocks include forward transfer function, weight modification, and inhibition. Experimental results from the second chip show the ability of an array of processing elements to extract important features from the input data.
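
The weight-modification rule referenced above builds on Oja's (1982) Hebbian update, which adds a decay term so the weight vector stays normalized while being pulled toward the principal component of the inputs. A pure-software sketch of that rule on illustrative data (not the chip's analog implementation):

```python
import math
import random

def oja_step(w, x, lr=0.05):
    """One Oja update: Hebbian growth (lr*y*x) with built-in decay (-lr*y^2*w)."""
    y = sum(wi * xi for wi, xi in zip(w, x))      # node output
    return [wi + lr * y * (xi - y * wi) for wi, xi in zip(w, x)]

random.seed(0)
w = [1.0, 0.0]                                    # arbitrary starting weights
for _ in range(3000):
    t = random.gauss(0.0, 1.0)                    # inputs correlated along [1, 1]
    x = [t + random.gauss(0.0, 0.1), t + random.gauss(0.0, 0.1)]
    w = oja_step(w, x)

norm = math.sqrt(sum(wi * wi for wi in w))
cos = sum(wi * d for wi, d in zip(w, [1 / math.sqrt(2)] * 2)) / norm
print(norm, cos)  # weight norm settles near 1, aligned with the dominant input direction
```

This is the sense in which the node's outputs come to "correspond to dominant features of the inputs": the rule performs an online principal-component extraction.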


IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing | 1997

A scalable low voltage analog Gaussian radial basis circuit

Luke Theogarajan; Lex A. Akers

Gaussian basis function (GBF) networks are powerful systems for learning and approximating complex input-output mappings. Networks composed of these localized receptive field units trained with efficient learning algorithms have been simulated solving a variety of interesting problems. For real-time and portable applications however, direct hardware implementation is needed. We describe experimental results from the most compact, low voltage analog Gaussian basis circuit yet reported. We also extend our circuit to handle large fan-in with minimal additional hardware. Our design is hierarchical and the number of transistors scales almost linearly with the input dimension making it amenable to VLSI implementation.
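
In software terms, a Gaussian basis function unit responds maximally when the input hits its center and falls off with distance, and a multi-dimensional response factors into a product of one-dimensional "bump" responses, which is what allows hardware cost to grow roughly linearly with input dimension. A behavioral sketch of the function computed, not of the circuit itself:

```python
import math

def gbf_1d(x, c, sigma):
    """One-dimensional Gaussian 'bump' centered at c."""
    return math.exp(-((x - c) ** 2) / (2.0 * sigma ** 2))

def gbf(x, center, sigma):
    """Multi-dimensional response as a product of 1-D bumps:
    prod_i exp(-(x_i - c_i)^2 / 2*sigma^2) == exp(-||x - c||^2 / 2*sigma^2),
    so adding an input dimension adds one more 1-D stage, not a squared cost."""
    out = 1.0
    for xi, ci in zip(x, center):
        out *= gbf_1d(xi, ci, sigma)
    return out

center = [0.5, -0.2, 1.0]
print(gbf(center, center, sigma=0.3))            # peak response at the center: 1.0
print(gbf([0.8, -0.2, 1.0], center, sigma=0.3))  # decays away from the center
```

The factorization in `gbf` mirrors the hierarchical, per-dimension structure the abstract credits for the near-linear transistor scaling.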


International Symposium on Circuits and Systems | 1998

Programmable current mode Hebbian learning neural network using programmable metallization cell

B. Swaroop; W.C. West; G. Martinez; M.N. Kozicki; Lex A. Akers

The design and performance of a Hebbian-learning-based neural network is presented in this work. In situ analog learning was employed, computing the synaptic weight changes continuously during the normal operation of the artificial neural network (ANN). The complexity of a synapse is minimized by using a novel device called the Programmable Metallization Cell (PMC). Simulations of circuits with three PMCs per synapse showed that appropriate learning behaviour was obtained at the synaptic level.
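
In situ Hebbian learning means the stored weight moves whenever pre- and postsynaptic activity coincide, with the PMC acting as an analog storage element whose value is bounded by the device's range. A behavioral sketch under those assumptions (the bounded-weight device model here is a simplification, not the paper's circuit):

```python
def hebbian_update(w, pre, post, lr=0.1, w_min=0.0, w_max=1.0):
    """Correlation-driven weight change, clipped to the device's storage range."""
    w = w + lr * pre * post
    return max(w_min, min(w_max, w))

w = 0.5
for pre, post in [(1, 1), (1, 1), (1, 0), (0, 1), (1, 1)]:
    w = hebbian_update(w, pre, post)
print(w)  # grows only on coincident pre/post activity (three of the five steps)
```

Clipping stands in for the finite resistance range of a physical storage device; only coincident activity changes the weight, which is the defining property of the Hebbian rule.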


International Symposium on Microarchitecture | 1989

A CMOS neural network for pattern association

Mark R. Walker; Paul E. Hasler; Lex A. Akers

The authors present an analog complementary metal-oxide-semiconductor (CMOS) version of a model for pattern association, along with discussions of design philosophy, electrical results, and a chip architecture for a 512-element, feed-forward IC. They discuss hardware implementations of neural networks and the effect of limited interconnections. They then examine network design, processor-element design, and system operation.


International Symposium on Circuits and Systems | 1993

A neural processing node with on-chip learning

James Donald; Lex A. Akers

Real-time control requires adaptive, analog VLSI chips. The authors describe the design and test results of an adaptive analog processing chip. The chip uses pulse-coded signals for communication between processing nodes and analog weights for information storage. The adaptive rule is implemented on chip. Experimental results demonstrate that the network, without supervision, produces linearly separable outputs that correspond to dominant features of the inputs.


Proceedings of the NATO Advanced Research Workshop on Neural Computers | 1989

Limited interconnectivity in synthetic neural systems

Lex A. Akers; Mark R. Walker; D. K. Ferry; Robert O. Grondin

If designers of integrated circuits are to make a quantum jump forward in the capabilities of microchips, the development of a coherent, parallel type of processing that provides robustness and is not sensitive to the failure of a few individual gates is needed. The problem of using arrays of devices, highly integrated within a chip and coupled to each other, is not one of making the arrays, but one of introducing the hierarchical control structure necessary to fully implement the various system or computer algorithms. In other words, how are the interactions between the devices orchestrated so as to map a desired architecture onto the array itself? We have suggested in the past that these arrays could be considered as local cellular automata [1], but this does not alleviate the problem of global control, which must change the local computational rules in order to implement a general algorithm. Huberman [2,3] has studied the nature of attractors on finite sets in the context of iterative arrays, and has shown in a simple example how several inputs can be mapped into the same output. The ability to change the function during processing has allowed him to demonstrate adaptive behavior in which dynamical associations are made between different inputs which initially produced sharply distinct outputs. However, these remain only initial small steps toward the design theory required to map algorithms into network architecture. Hopfield and coworkers [4,5], in turn, have suggested using a quadratic cost function, which is in truth just the potential energy surface commonly used in Lyapunov stability analysis, to formulate a design interconnection for an array of neuron-like switching elements. This approach puts the entire foundation of the processing into the interconnections.
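
The Hopfield quadratic cost function mentioned above acts as a Lyapunov function: with energy E = -1/2 * sum_ij w_ij s_i s_j, each asynchronous update can only lower E, so the network settles into attractors fixed entirely by the interconnections. A toy sketch with Hebbian weights storing a single pattern (illustrative only):

```python
def energy(w, s):
    """Hopfield energy E = -1/2 * sum_ij w_ij * s_i * s_j."""
    n = len(s)
    return -0.5 * sum(w[i][j] * s[i] * s[j] for i in range(n) for j in range(n))

pattern = [1, -1, 1, -1, 1, -1]
n = len(pattern)
# Hebbian interconnection matrix storing one pattern (zero diagonal)
w = [[0 if i == j else pattern[i] * pattern[j] for j in range(n)] for i in range(n)]

s = pattern[:]
s[0] = -s[0]                       # corrupt one bit
trace = [energy(w, s)]
for i in range(n):                 # asynchronous threshold updates
    h = sum(w[i][j] * s[j] for j in range(n))
    s[i] = 1 if h >= 0 else -1
    trace.append(energy(w, s))

print(s == pattern)                                   # stored pattern recovered
print(all(b <= a for a, b in zip(trace, trace[1:])))  # energy never increases
```

The monotone energy trace is the "potential energy surface" argument in miniature: the dynamics descend the quadratic cost surface defined by the interconnection matrix.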


Archive | 1989

A Limited-Interconnect, Highly Layered Synthetic Neural Architecture

Lex A. Akers; Mark R. Walker; D. K. Ferry; Robert O. Grondin

Recent encouraging results have occurred in the application of neuromorphic, i.e. neural-network-inspired, software simulations to speech synthesis, word recognition, and image processing. Hardware implementations of neuromorphic systems are required for real-time applications such as control and signal processing. Two disparate groups of workers are interested in VLSI hardware implementations of neural networks. The first is interested in electronic-based implementations of neural networks and uses standard or custom VLSI chips for the design. The second group wants to build fault-tolerant adaptive VLSI chips and is much less concerned with whether the design rigorously duplicates the neural models. In either case, the central issue in the construction of an electronic neural network is that the design constraints of VLSI differ from those of biology (Walker and Akers 1988). In particular, the high fan-ins and fan-outs of biology impose connectivity requirements such that the electronic implementation of a highly interconnected biological neural network of just a few thousand neurons would require a level of connectivity which exceeds the current or even projected interconnection density of ULSI systems. Fortunately, highly layered, limited-interconnect networks can be formed that are functionally equivalent to highly connected systems (Akers et al. 1988). Such architectures are especially well suited for VLSI implementation. The objective of our work is to design highly layered, limited-interconnect synthetic neural architectures and to develop training algorithms for systems made from these chips. These networks are specifically designed to scale to tens of thousands of processing elements on current production-size dies.
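
The connectivity argument is easy to make concrete: full interconnection of N units needs on the order of N^2 wires, while a limited-interconnect architecture with fixed fan-in F needs only about N*F. A back-of-the-envelope sketch with illustrative numbers (the fan-in value is an assumption, not from the paper):

```python
def full_connections(n):
    """Distinct pairwise links in a fully interconnected network of n units."""
    return n * (n - 1) // 2

def layered_connections(n, fan_in):
    """Links when every unit receives a fixed, limited fan-in."""
    return n * fan_in

# Wiring grows quadratically when fully connected, linearly when fan-in is capped.
for n in (1_000, 10_000):
    print(n, full_connections(n), layered_connections(n, fan_in=16))
```

At a few thousand units the fully connected wire count is already in the millions, which is the scaling wall the limited-interconnect, highly layered approach is designed to avoid.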


Solid-State Electronics | 1983

Drain-voltage effects on the threshold voltage of a small-geometry MOSFET

C.S. Chao; Lex A. Akers; D.N. Pattanayak

Closed-form analytical expressions are developed to predict the threshold voltage of a small-geometry MOSFET with a nonzero drain voltage. Two expressions are developed. The first is for an abrupt oxide transition from the thin gate oxide to the thick field oxide with uniform doping; the second includes the effects of a tapered recessed field oxide and field-doping encroachment at the channel edges. The theory is compared with experimental results obtained from n-channel small-geometry MOSFETs.


International Symposium on Circuits and Systems | 1996

A multi-dimensional analog Gaussian radial basis circuit

Luke Theogarajan; Lex A. Akers

Gaussian basis function (GBF) networks are powerful systems for learning and approximating complex input-output mappings. Networks composed of these localized receptive field units trained with efficient learning algorithms have been simulated solving a variety of interesting problems. For real-time and portable applications however, direct hardware implementation is needed. We describe simulated and experimental results from the most compact, low voltage analog Gaussian basis circuit yet reported. We also extend our circuit to handle large fan-in with minimal additional hardware. We show a SPICE simulation of our circuit implementing a multivalued exponential associative memory (MERAM).

Collaboration

Top co-authors of Lex A. Akers:

Mark R. Walker (Arizona State University)
D. K. Ferry (Arizona State University)
James Donald (Arizona State University)
Paul E. Hasler (Georgia Institute of Technology)
Arun Rao (Arizona State University)
A. Rao (Arizona State University)
A. Zakaria (Arizona State University)