
Publications


Featured research published by Gary C. Marsden.


Optics Letters | 1993

Optical transpose interconnection system architectures

Gary C. Marsden; Philippe Marchand; Phil Harvey; Sadik C. Esener

The optical transpose interconnection system uses a simple pair of lenslet arrays to implement a one-to-one interconnection that is useful for shuffle-based multistage interconnection networks, mesh-of-trees matrix processors, and hypercubes.
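The transpose mapping described above can be stated as a simple index exchange. Below is a minimal sketch (our illustration, not code from the paper), treating each transmitter as a (group, in-group position) pair:

```python
# Illustrative sketch of the optical transpose interconnection: each
# transmitter (g, p), meaning position p within group g, is imaged onto
# receiver (p, g), so the transmitter array as a whole is transposed.

def otis_link(g: int, p: int) -> tuple[int, int]:
    """Receiver reached from transmitter (g, p): indices are exchanged."""
    return (p, g)

# The mapping is one-to-one: every transmitter hits a distinct receiver.
N = 4
targets = {otis_link(g, p) for g in range(N) for p in range(N)}
assert len(targets) == N * N
print(otis_link(1, 3))   # -> (3, 1)
```

Since the transpose is its own inverse, applying the map twice returns every index pair to itself.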


Applied Optics | 1995

Digital free-space optical interconnections: a comparison of transmitter technologies

Chi Fan; Barmak Mansoorian; Daniel Van Blerkom; M.W. Hansen; Volkan H. Ozguz; Sadik C. Esener; Gary C. Marsden

We investigate the performance of free-space optical interconnection systems at the technology level. Specifically, three optical transmitter technologies are evaluated: lead-lanthanum-zirconate-titanate (PLZT) modulators, multiple-quantum-well (MQW) modulators, and vertical-cavity surface-emitting lasers (VCSELs). System performance is measured in terms of the achievable areal data throughput and the energy required per transmitted bit. It is shown that PLZT modulator and VCSEL technologies are well suited for applications in which a large fan-out per transmitter is required but the total number of transmitters is relatively small. MQW modulators, however, are good candidates for applications in which many transmitters with a limited fan-out are needed.


Proceedings of the IEEE | 1994

A prototype 3D optically interconnected neural network

Gökçe I. Yayla; Ashok V. Krishnamoorthy; Gary C. Marsden; Sadik C. Esener

We report the implementation of a prototype three-dimensional (3D) optoelectronic neural network that combines free-space optical interconnects with silicon-VLSI-based optoelectronic circuits. The prototype system consists of a 16-node input layer, a 4-neuron hidden layer, and a single-neuron output layer, where the denser input-to-hidden-layer connections are optical. The input layer uses PLZT light modulators to generate optical outputs, which are distributed over an optoelectronic neural network chip through space-invariant holographic optical interconnects. Optical interconnections provide negligible fan-out delay and allow a compact, purely on-chip electronic H-tree-type fan-in structure. The small prototype system achieves a measured 8-bit electronic fan-in precision and a calculated maximum speed of 640 million interconnections per second. The system was tested using synaptic weights learned off system and was shown to distinguish any vertical line from any horizontal one in a 4×4-pixel image. New, more efficient light-detector and small-area analog synapse circuits and denser optoelectronic neuron layouts are proposed to scale up the system. A high-speed, feed-forward optoelectronic synapse implementation density of up to 10⁴/cm² seems feasible using the new synapse design. A scaling analysis of the system shows that the optically interconnected neural network implementation can provide higher fan-in speed and lower power consumption than a purely electronic, crossbar-based neural network implementation.


Optics Letters | 1991

Dual-scale topology optoelectronic processor

Gary C. Marsden; Ashok V. Krishnamoorthy; Sadik C. Esener; Sing H. Lee

The dual-scale topology optoelectronic processor (D-STOP) is a parallel optoelectronic architecture for matrix algebraic processing. The architecture can be used for matrix-vector multiplication and two types of vector outer product. The computations are performed electronically, which allows multiplication and summation concepts in linear algebra to be generalized to various nonlinear or symbolic operations. This generalization permits the application of D-STOP to many computational problems. The architecture uses a minimum number of optical transmitters, which thereby reduces fabrication requirements while maintaining area-efficient electronics. The necessary optical interconnections are space invariant, minimizing space-bandwidth requirements.
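The generalization of multiplication and summation that the abstract mentions can be sketched as a matrix-vector product with pluggable operators. The function names below are our own, not D-STOP terminology:

```python
# Hypothetical illustration of the generalization described above: a
# matrix-vector product where "multiply" and "accumulate" are supplied
# by the caller, so one data flow supports ordinary linear algebra as
# well as min/max or other nonlinear algebras.
from typing import Callable, Sequence, TypeVar

T = TypeVar("T")

def generalized_matvec(A: Sequence[Sequence[T]], x: Sequence[T],
                       mul: Callable[[T, T], T],
                       acc: Callable[[Sequence[T]], T]) -> list[T]:
    """y[i] = acc_j mul(A[i][j], x[j]) for a pluggable (mul, acc) pair."""
    return [acc([mul(a, b) for a, b in zip(row, x)]) for row in A]

A = [[1, 2], [3, 4]]
x = [5, 6]
print(generalized_matvec(A, x, lambda a, b: a * b, sum))  # ordinary: [17, 39]
print(generalized_matvec(A, x, min, max))                 # (min, max) algebra: [2, 4]
```

Swapping in symbolic operators (say, string concatenation and set union) follows the same pattern, which is the sense in which the architecture generalizes beyond numeric linear algebra.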


International Journal of Parallel Programming | 1991

Parallel path consistency

Thomas C. Henderson; Joseph L. Zachary; Charles D. Hansen; Paul A. Hinker; Gary C. Marsden

Filtering algorithms are well accepted as a means of speeding up the solution of the consistent labeling problem (CLP). Although path consistency does a better job of filtering than arc consistency (AC), AC is still the preferred technique because it has a much lower time complexity. We are implementing parallel path consistency algorithms on multiprocessors and comparing their performance to the best sequential and parallel arc consistency algorithms (see also work by Keretho et al. and Kasif). Preliminary work has shown linear performance increases for parallelized path consistency and has also shown that in many cases performance is significantly better than the theoretical worst case. These two results lead us to believe that parallel path consistency may be a superior filtering technique. Finally, we have implemented path consistency as an outer product computation and have obtained good results (e.g., linear speedup on a 64K-node Connection Machine 2).
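As a rough sketch of the filtering step being parallelized (our illustration, using 0/1 constraint matrices rather than the paper's implementation):

```python
# Path-consistency revision on binary constraints stored as 0/1 matrices:
# a value pair (a, b) survives in R[i][j] only if some value c of a third
# variable k satisfies both R[i][k][a][c] and R[k][j][c][b]. That support
# check is a boolean matrix product, intersected into R[i][j].

def bool_matmul(P, Q):
    """Boolean matrix product over 0/1 matrices."""
    return [[int(any(P[a][c] and Q[c][b] for c in range(len(Q))))
             for b in range(len(Q[0]))] for a in range(len(P))]

def pc_step(Rij, Rik, Rkj):
    """One revision: keep (a, b) in Rij only if supported via path i-k-j."""
    comp = bool_matmul(Rik, Rkj)
    return [[Rij[a][b] & comp[a][b] for b in range(len(Rij[0]))]
            for a in range(len(Rij))]

# Toy example with 2-value domains:
Rij = [[1, 1], [1, 1]]   # i and j initially unconstrained
Rik = [[1, 0], [0, 1]]   # equality constraint i = k
Rkj = [[1, 0], [0, 1]]   # equality constraint k = j
print(pc_step(Rij, Rik, Rkj))   # -> [[1, 0], [0, 1]]: forces i = j
```

Each (i, j, k) revision is independent of the others, which is what makes the computation amenable to the parallel, outer-product-style formulation the abstract reports.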


Applied Optics | 1991

Highly parallel consistent labeling algorithm suitable for optoelectronic implementation

Gary C. Marsden; Fouad E. Kiamilev; Sadik C. Esener; Sing H. Lee

Constraint satisfaction problems require a search through a large set of possibilities. Consistent labeling is a method by which search spaces can be drastically reduced. We present a highly parallel consistent labeling algorithm, which achieves strong k-consistency for any value of k and which can include higher-order constraints. The algorithm uses vector outer product, matrix summation, and matrix intersection operations. These operations require local computation with global communication and are therefore well suited to an optoelectronic implementation.
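The outer product and intersection primitives named above can be illustrated with a toy sketch (notation and names are ours, not the paper's):

```python
# Illustrative primitives: each variable keeps a 0/1 vector of labels
# still possible for it; an outer product of two such vectors gives the
# pairs not yet ruled out, and intersecting that with a binary constraint
# matrix filters out unsupported pairs.

def outer(u, v):
    """Outer product of two 0/1 label vectors."""
    return [[a and b for b in v] for a in u]

def intersect(P, Q):
    """Elementwise intersection of two 0/1 matrices."""
    return [[p and q for p, q in zip(prow, qrow)] for prow, qrow in zip(P, Q)]

u = [1, 0, 1]                            # labels still possible for variable i
v = [1, 1, 0]                            # labels still possible for variable j
M = outer(u, v)                          # candidate pairs from unary filtering
C = [[1, 0, 0], [0, 1, 0], [0, 0, 1]]    # binary constraint i = j
print(intersect(M, C))   # -> [[1, 0, 0], [0, 0, 0], [0, 0, 0]]
```

Both primitives touch each matrix entry once with purely local arithmetic, while the outer product broadcasts each vector globally, matching the "local computation with global communication" profile the abstract describes.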


Applied Optics | 1996

Parallel fuzzy inference with an optoelectronic H-tree architecture

Gary C. Marsden; Brita H. Olson; Sadik C. Esener

Fuzzy inference is a method of reasoning with imprecise information. The mathematical operations of fuzzy inference can be stated in terms of generalized vector algebra, in which multiplication and summation are generalized to min and max operations. An optoelectronic H-tree architecture is ideally suited to perform these generalized vector operations in parallel and requires only a simple imaging optical interconnection. Appropriate data encodings and electronic circuitry permit large scale, pipelined systems.
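The min/max generalization described above is the classic max-min composition of a fuzzy relation with an input vector. A small sketch in our own notation (the membership values are invented for illustration):

```python
# Fuzzy relational inference by max-min composition: the multiply/add of
# an ordinary matrix-vector product become min/max over membership
# degrees in [0, 1].

def max_min_compose(R, x):
    """y[j] = max_i min(x[i], R[i][j])."""
    n_out = len(R[0])
    return [max(min(xi, row[j]) for xi, row in zip(x, R))
            for j in range(n_out)]

# Membership degrees of an observed input over two antecedent terms:
x = [0.8, 0.3]
# Rule relation R[i][j]: strength linking antecedent i to consequent j:
R = [[0.9, 0.2],
     [0.4, 1.0]]
print(max_min_compose(R, x))   # -> [0.8, 0.3]
```

Because min and max have the same dataflow as multiply and add, the same H-tree fan-in structure that sums products in a numeric matrix-vector multiplier can take pairwise maxima here, which is the fit the abstract points to.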


Applied Optics | 1993

Optical matrix–vector implementation of the content-addressable network

Stephen A. Brodsky; Gary C. Marsden; Clark C. Guest

The content-addressable network (CAN) is an efficient, intrinsically discrete training algorithm for binary-valued classification networks. The binary nature of the CAN network permits accelerated learning and significantly reduced hardware-implementation requirements. A multilayer optoelectronic CAN network employing matrix-vector multiplication was constructed. The network learned and correctly classified trained patterns, gaining a measure of fault tolerance by learning associative solutions to optical hardware imperfections. Operation of this system is possible owing to the reduced hardware accuracy requirements of the CAN learning algorithm.


International Symposium on Neural Networks | 1994

Hardware efficient learning on a 3-D optoelectronic neural system

Ashok V. Krishnamoorthy; Stephen A. Brodsky; Clark C. Guest; Gary C. Marsden; Matthias Blume; Gökçe I. Yayla; Jean Merckle; Sadik C. Esener

Discusses the dual-scale topology optoelectronic processor (D-STOP) neural network, a scalable, optically interconnected neural network architecture. The authors present the tandem D-STOP system, which provides the connectivity needed for building fully parallel neural networks with generic gradient-descent learning rules. They review the content-addressable network (CAN) learning algorithm, a discrete learning algorithm that provides accelerated learning with reduced hardware requirements. They then show how the CAN algorithm can be effectively mapped onto D-STOP and investigate the associated optoelectronic hardware tradeoffs.


San Diego '92 | 1993

Prototype 3D optoelectronic neural system

Gökçe I. Yayla; Ashok V. Krishnamoorthy; Gary C. Marsden; Joseph E. Ford; Volkan H. Ozguz; Chi Fan; Subramania Krishnakumar; Jinghua Wang; Sadik C. Esener; William J. Miceli; John A. Neff; Stephen T. Kowel

We report the implementation of a prototype 3-D optoelectronic neural system that combines free-space optical interconnects with silicon-VLSI-based hybrid optoelectronic circuits. The prototype system consists of a 16-pixel input layer, a 4-neuron hidden layer, and a single-neuron output layer, where the denser input-to-hidden-layer connections are optical. The input layer uses PLZT light modulators to generate optical outputs, which are distributed to an optoelectronic analog neural network chip through space-invariant holographic optical interconnects. Optical interconnections provide fan-out with negligible delay and allow the use of compact, purely on-chip electronic H-tree fan-in structures. The scalable prototype system achieves 8-bit electronic fan-in precision and a maximum speed of 640 million interconnections per second. The system was tested using synaptic weights learned off-system and applied to a simple line recognition task. © 1993 SPIE, The International Society for Optical Engineering.

Collaboration


Dive into Gary C. Marsden's collaborations.

Top Co-Authors

Sing H. Lee, University of California
Brita H. Olson, University of California
Chi Fan, University of California
Clark C. Guest, University of California
Jean Merckle, University of California
Joseph E. Ford, University of California
Matthias Blume, University of California