Network


Latest external collaboration at the country level. Dive into details by clicking on the dots.

Hotspot


Dive into the research topics where Stamatis Vassiliadis is active.

Publication


Featured research published by Stamatis Vassiliadis.


Proceedings of an international workshop on VLSI for neural networks and artificial intelligence | 1995

A VLSI pipelined neuroemulator

José G. Delgado-Frias; Stamatis Vassiliadis; Gerald G. Pechanek; Wei Lin; Steven M. Barber; Hui Ding

Applications of and interest in artificial neural networks (ANNs) have been increasing in recent years. Applications include pattern matching, associative memory, image processing, and word recognition (Simpson 1992). ANNs are a novel computing paradigm in which an artificial neuron produces an output that depends on its inputs (from other neurons), the strengths or weights associated with those inputs, and an activation function.


International Symposium on Neural Networks | 1996

A neuro-emulator with learning and virtual emulation capabilities

V.C. Aikens; J.G. Delgado Frias; S.M. Barber; Gerald G. Pechanek; Stamatis Vassiliadis

In this paper we present and evaluate a novel neuro-emulator. The architecture of this neuro-emulator provides support for learning as well as for handling large neural networks in virtual mode. We have identified a set of computational, communication, and storage requirements for learning in artificial neural networks. These requirements are representative of a wide variety of algorithms for different styles of learning. The proposed neuro-emulator provides the computational ability for the stated requirements. While meeting all the identified requirements, the new architecture maintains high utilization of the machine's resources during learning. To show the capabilities of the proposed machine we present four diverse learning algorithms. We include an evaluation of the machine's performance as well as a comparison with other architectures. It is shown that, with a modest amount of hardware, the proposed architecture yields an extremely high number of connections per second.
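Connections per second (CPS) is the standard throughput metric for neuro-emulators like the one described here. As a minimal sketch (the function name and the example numbers are illustrative, not from the paper), for a fully connected network of n neurons, one full update evaluates n² weighted connections:

```python
def connections_per_second(n_neurons, n_updates, seconds):
    """CPS for a fully connected network: each full update
    evaluates n_neurons**2 weighted connections."""
    return n_neurons ** 2 * n_updates / seconds

# Illustrative numbers: 256 neurons, 1000 full updates in 0.5 s.
print(connections_per_second(256, 1000, 0.5))  # 131072000.0
```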


Proceedings of an international workshop on VLSI for neural networks and artificial intelligence | 1995

A dataflow approach for neural networks

Thomas F. Ryan; José G. Delgado-Frias; Stamatis Vassiliadis; Gerald G. Pechanek; Douglas M. Green

Artificial neural networks have been introduced as a novel computing paradigm (Kohonen 1988). Processing (or retrieving) in neural networks requires a collective interaction of a number of neurons. Outputs of neurons are computed based on the inputs from other neurons, the weights associated with those inputs, and a non-linear activation function. Specifically, most artificial neurons follow a mathematical model that is expressed as:

Y_i(t + 1) = F\left(\sum_{j = 1}^{N} W_{ij} Y_j(t)\right) \quad (1)

where W_{ij} is the weight, Y_j(t) is the neuron input, N is the number of neurons connected to neuron i, and F is a non-linear function which is usually a sigmoid (Hopfield 1984).
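The neuron update rule Y_i(t+1) = F(Σ_j W_ij · Y_j(t)) described in this abstract can be sketched directly. A minimal Python version (the function and variable names are illustrative), using the sigmoid as the activation F:

```python
import math

def sigmoid(x):
    """A common choice for the activation function F (Hopfield 1984)."""
    return 1.0 / (1.0 + math.exp(-x))

def update_neuron(W_i, Y, F=sigmoid):
    """One step of Y_i(t+1) = F(sum_j W_ij * Y_j(t))."""
    return F(sum(w * y for w, y in zip(W_i, Y)))

# Three input neurons feeding neuron i.
weights = [0.5, -0.25, 1.0]   # W_i1..W_i3
state = [1.0, 0.0, 0.5]       # Y_1(t)..Y_3(t)
print(update_neuron(weights, state))  # sigmoid(1.0) ≈ 0.731
```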


International Symposium on Neural Networks | 1994

An investigation of the precision impact on the Hopfield-Tank neural network model for the TSP

Wei Lin; José G. Delgado-Frias; Stamatis Vassiliadis; Gerald G. Pechanek


International Symposium on Neural Networks | 1994

Scalable completely connected digital neural networks

G.G. Pechanek; Stamatis Vassiliadis; José G. Delgado-Frias; G. Triantafyllos



Archive | 1995

Processor using folded array structures for transposition memory and fast cosine transform computation

Gerald G. Pechanek; Stamatis Vassiliadis


Archive | 1993

Multiple-fold clustered processor mesh array

Gerald G. Pechanek; Stamatis Vassiliadis; Jose G. Delgado



Archive | 2004

Methods and apparatus for general deferred execution processors

Gerald G. Pechanek; Stamatis Vassiliadis

An advantage of neural networks is the use of a number of processing units to obtain a solution in a short time. In the design of a neural network, different bit precisions alter the architecture and organization of the design. The authors study the impact of precision by using the Hopfield-Tank neural network model for the traveling salesman problem (TSP). In order to simulate the TSP using the Hopfield-Tank model, the authors have used a number of previous studies to determine some of the required parameters. To investigate the influence of precision, the authors have simulated the TSP on a MIPS R3000. The authors have considered five bit precisions (8-, 16-, 24-, 32-bit, and double-precision mantissas), three values of the sigmoid generation parameter, and convergence within 1000 neuron update cycles. The authors have run a total of 7,080 simulations for the established benchmark on the MIPS R3000 machines; the simulation results are extensively discussed. Additionally, two novel approaches to measuring the performance of the network, namely the average network performance and the computational efficiency, are introduced and used in the evaluation of the performance of the model. Further information extraction is done by using Dempster's rule of combination on the average network performance and computational efficiency.
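As a rough sketch of how such a precision study can be set up (illustrative code, not the authors' benchmark: the quantizer, the small weight matrix, and the parameter values are assumptions), weights and activations can be rounded to a fixed number of fractional bits and the resulting state compared against a full double-precision run:

```python
import math

def sigmoid(x, gain=1.0):
    return 1.0 / (1.0 + math.exp(-gain * x))

def quantize(x, frac_bits):
    """Round x to a fixed-point grid with frac_bits fractional bits."""
    scale = 1 << frac_bits
    return round(x * scale) / scale

def update(W, Y, frac_bits=None, gain=1.0):
    """One synchronous network update; frac_bits=None means full precision."""
    q = (lambda v: quantize(v, frac_bits)) if frac_bits is not None else (lambda v: v)
    return [q(sigmoid(sum(q(w) * y for w, y in zip(row, Y)), gain)) for row in W]

W = [[0.0, -0.5, 0.3],    # small symmetric weight matrix (illustrative)
     [-0.5, 0.0, 0.8],
     [0.3, 0.8, 0.0]]
Y = [0.2, 0.7, 0.4]
full = update(W, Y)                  # double-precision reference
for bits in (7, 15, 23):             # mantissa-like precisions
    approx = update(W, Y, frac_bits=bits)
    err = max(abs(a - b) for a, b in zip(approx, full))
    print(bits, err)
```

Lowering the precision coarsens the quantization grid, so the deviation from the double-precision trajectory grows, which is the effect the benchmark measures over full convergence runs.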


Archive | 1996

High-Performance Parallel FFT Algorithms on M

Clair John Glossner; Gerald G. Pechanek; Stamatis Vassiliadis; J. R. Landon

A machine organization is presented for the digital emulation of completely connected and multi-layer neural networks, including back-propagation learning. The system architecture lends itself to a hierarchical machine organization of six levels and supports the direct emulation of network models of up to N neurons and the virtual emulation of an arbitrary number V of neurons, for V > N. The system is scalable for both direct and virtual processing. Based on performance estimates, the proposed structure is shown to provide a 3X to 133X speed-up for NETtalk emulation when compared to other neuro-emulators.
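Virtual emulation of V neurons on N physical processing elements amounts to time-multiplexing: each full network update is split into ceil(V/N) passes, with each pass handling at most N neurons. A minimal scheduling sketch (the function and names are illustrative, not the paper's machine organization):

```python
import math

def virtual_schedule(V, N):
    """Partition V virtual neurons over N physical PEs:
    pass p handles neurons p*N .. min((p+1)*N, V) - 1."""
    passes = math.ceil(V / N)
    return [list(range(p * N, min((p + 1) * N, V))) for p in range(passes)]

# 10 virtual neurons on 4 PEs -> 3 passes.
print(virtual_schedule(10, 4))  # [[0, 1, 2, 3], [4, 5, 6, 7], [8, 9]]
```

When V <= N the schedule degenerates to a single pass, which is the direct-emulation case.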


Archive | 1992

A Processor Unit for Flexible Multiprocessor Machine Organizations

J. Delgado Frias; Stamatis Vassiliadis; Gerald G. Pechanek

Collaboration


Dive into Stamatis Vassiliadis's collaboration.

Top Co-Authors

Wei Lin

Westport Innovations
