
Publication


Featured research published by Alireza Goudarzi.


Physical Review Letters | 2012

Emergent Criticality Through Adaptive Information Processing in Boolean Networks

Alireza Goudarzi; Christof Teuscher; Natali Gulbahce; Thimo Rohlf

We study information processing in populations of Boolean networks with evolving connectivity and systematically explore the interplay between learning capability, robustness, network topology, and task complexity. We solve a long-standing open question and find computationally that, for large system sizes N, adaptive information processing drives the networks to a critical connectivity K_c = 2. For finite-size networks, the connectivity approaches the critical value with a power law of the system size N. We show that network learning and generalization are optimized near criticality, given that the task complexity and the amount of information provided surpass threshold values. Both random and evolved networks exhibit maximal topological diversity near K_c. We hypothesize that this diversity supports efficient exploration and robustness of solutions. This is also reflected in our observation that the variance of the fitness values is maximal in critical network populations. Finally, we discuss implications of our results for determining the optimal topology of adaptive dynamical networks that solve computational tasks.
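The model underlying this abstract can be sketched in a few lines, assuming a standard random Boolean network formulation (the function names and sizes below are illustrative choices, not the paper's code):

```python
import numpy as np

# Minimal random Boolean network (RBN) sketch: N nodes, each reading K randomly
# chosen inputs through a random truth table. K = 2 is the critical
# connectivity K_c that adaptive information processing converges to.
def make_rbn(n, k, rng):
    inputs = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
    tables = rng.integers(0, 2, size=(n, 2 ** k))  # one random Boolean function per node
    return inputs, tables

def step(state, inputs, tables):
    # Each node looks up its next value using the binary word formed by its inputs.
    idx = (state[inputs] * (1 << np.arange(inputs.shape[1]))).sum(axis=1)
    return tables[np.arange(len(state)), idx]

rng = np.random.default_rng(0)
inputs, tables = make_rbn(n=100, k=2, rng=rng)
state = rng.integers(0, 2, size=100)
for _ in range(50):
    state = step(state, inputs, tables)
```

Varying `k` around 2 and watching the trajectory statistics is the usual way to probe the ordered, critical, and chaotic regimes discussed above.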


International Symposium on Nanoscale Architectures | 2015

Hierarchical composition of memristive networks for real-time computing

Jens Bürger; Alireza Goudarzi; Darko Stefanovic; Christof Teuscher

Advances in materials science have led to physical instantiations of self-assembled networks of memristive devices and demonstrations of their computational capability through reservoir computing. Reservoir computing is an approach that takes advantage of collective system dynamics for real-time computing. A dynamical system, called a reservoir, is excited with a time-varying signal and observations of its states are used to reconstruct a desired output signal. However, such a monolithic assembly limits the computational power due to signal interdependency and the resulting correlated readouts. Here, we introduce an approach that hierarchically composes a set of interconnected memristive networks into a larger reservoir. We use signal amplification and restoration to reduce reservoir state correlation, which improves the feature extraction from the input signals. Using the same number of output signals, such a hierarchical composition of heterogeneous small networks outperforms monolithic memristive networks by at least 20% on waveform generation tasks. On the NARMA-10 task, we reduce the error by up to a factor of 2 compared to homogeneous reservoirs with sigmoidal neurons, whereas single memristive networks are unable to produce the correct result. Hierarchical composition is key for solving more complex tasks with such novel nano-scale hardware.


International Conference on Unconventional Computation | 2014

Reservoir Computing Approach to Robust Computation Using Unreliable Nanoscale Networks

Alireza Goudarzi; Matthew R. Lakin; Darko Stefanovic

As we approach the physical limits of CMOS technology, advances in materials science and nanotechnology are making available a variety of unconventional computing substrates that can potentially replace top-down-designed silicon-based computing devices. Inherent stochasticity in the fabrication process and nanometer scale of these substrates inevitably lead to design variations, defects, faults, and noise in the resulting devices. A key challenge is how to harness such devices to perform robust computation. We propose reservoir computing as a solution. In reservoir computing, computation takes place by translating the dynamics of an excited medium, called a reservoir, into a desired output. This approach eliminates the need for external control and redundancy, and the programming is done using a closed-form regression problem on the output, which also allows concurrent programming using a single device. Using a theoretical model, we show that both regular and irregular reservoirs are intrinsically robust to structural noise as they perform computation.
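The closed-form programming step described above, fitting only the output layer by regression, can be sketched as follows, with an echo state network standing in for the unreliable substrate (all parameters are illustrative assumptions):

```python
import numpy as np

# Reservoir computing in miniature: excite a fixed random dynamical system with
# an input stream, collect its states, and "program" it by solving one ridge
# regression for the readout weights. No internal weight is ever trained.
rng = np.random.default_rng(2)
n, T = 100, 1000
W = rng.normal(size=(n, n))
W *= 0.8 / max(abs(np.linalg.eigvals(W)))      # keep the dynamics contractive

w_in = rng.normal(size=n)
u = rng.uniform(-1, 1, size=T)

X = np.zeros((T, n)); x = np.zeros(n)
for t in range(T):
    x = np.tanh(W @ x + w_in * u[t])
    X[t] = x

y = np.roll(u, 1)                              # task: recall the previous input
ridge = 1e-6
w_out = np.linalg.solve(X.T @ X + ridge * np.eye(n), X.T @ y)  # closed form
err = np.mean((X @ w_out - y) ** 2)
```

Because only `w_out` depends on the task, several readouts can be fit to the same state matrix `X`, which is the concurrent-programming property the abstract mentions.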


Procedia Computer Science | 2014

Towards a Calculus of Echo State Networks

Alireza Goudarzi; Darko Stefanovic

Reservoir computing is a recent trend in neural networks that uses the dynamical perturbations on the phase space of a system to compute a desired target function. We present how one can formulate an expectation of system performance in a simple class of reservoir computing called echo state networks. In contrast with previous theoretical frameworks, which only reveal an upper bound on the total memory in the system, we analytically calculate the entire memory curve as a function of the structure of the system and the properties of the input and the target function. We demonstrate the precision of our framework by validating its results for a wide range of system sizes and spectral radii, and our analytical calculation agrees with numerical simulations. To the best of our knowledge, this work presents the first exact analytical characterization of the memory curve in echo state networks.
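The memory curve itself is straightforward to estimate numerically, which is how an analytical result like the one above would be cross-checked. A sketch, where the network size, spectral radius, and delay range are arbitrary choices:

```python
import numpy as np

# Numerical estimate of the ESN memory curve m(k): the squared correlation
# between a linearly trained output and the input delayed by k steps.
def memory_curve(n=20, T=4000, rho=0.9, max_delay=40, seed=2):
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(n, n))
    W *= rho / max(abs(np.linalg.eigvals(W)))
    w_in = rng.normal(size=n)
    u = rng.uniform(-1, 1, size=T)
    X = np.zeros((T, n)); x = np.zeros(n)
    for t in range(T):
        x = np.tanh(W @ x + w_in * u[t]); X[t] = x
    m = []
    for k in range(1, max_delay + 1):
        Xk, yk = X[k:], u[:-k]             # pair state at t with input at t - k
        w = np.linalg.lstsq(Xk, yk, rcond=None)[0]
        m.append(np.corrcoef(Xk @ w, yk)[0, 1] ** 2)
    return np.array(m)

m = memory_curve()
```

Short delays are recalled almost perfectly, the curve decays with k, and the total memory stays bounded by the network size, which matches the upper-bound results the abstract contrasts itself against.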


International Symposium on Nanoscale Architectures | 2014

A model for variation- and fault-tolerant digital logic using self-assembled nanowire architectures

Alireza Goudarzi; Matthew R. Lakin; Darko Stefanovic; Christof Teuscher

Reconfiguration has been used for both defect- and fault-tolerant nanoscale architectures with regular structure. Recent advances in self-assembled nanowires have opened the door to a new class of electronic devices with irregular structure. For such devices, reservoir computing has been shown to be a viable approach to implementing computation. This approach exploits the dynamical properties of a system rather than the specifics of its structure. Here, we extend a model of reservoir computing, called the echo state network, to reflect more realistic aspects of self-assembled nanowire networks. As a proof of concept, we use echo state networks to implement basic building blocks of digital computing: AND, OR, and XOR gates, and 2-bit adder and multiplier circuits. We show that the system can operate perfectly in the presence of variations five orders of magnitude higher than the ITRS 2005 target of 6%, and achieves success rates 6 times higher than related approaches at half the cost. We also describe an adaptive algorithm that can detect faults in the system and reconfigure it to resume perfect operation.
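As a hedged proof of concept of implementing a logic gate with a trained reservoir readout (an abstract echo-state stand-in for the nanowire model; every parameter below is an assumption):

```python
import numpy as np

# Train an ESN readout to behave as an XOR gate on two binary input streams,
# thresholding the linear output at 0.5. Recurrence is kept weak because the
# gate is combinational: the target depends only on the current bits.
rng = np.random.default_rng(4)
n, T = 100, 2000
W = rng.normal(size=(n, n))
W *= 0.1 / max(abs(np.linalg.eigvals(W)))
w_in = rng.normal(size=(n, 2))

U = rng.integers(0, 2, size=(T, 2))            # two random bit streams
y = U[:, 0] ^ U[:, 1]                          # target: XOR of the current bits
X = np.zeros((T, n)); x = np.zeros(n)
for t in range(T):
    x = np.tanh(W @ x + w_in @ U[t]); X[t] = x

w_out = np.linalg.lstsq(X, y, rcond=None)[0]
pred = (X @ w_out > 0.5).astype(int)
acc = (pred == y).mean()
```

The residual recurrent jitter plays the role of the structural noise discussed above: the readout must separate the four input classes despite it.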


International Conference on DNA Computing | 2013

DNA Reservoir Computing: A Novel Molecular Computing Approach

Alireza Goudarzi; Matthew R. Lakin; Darko Stefanovic

We propose a novel molecular computing approach based on reservoir computing. In reservoir computing, a dynamical core, called a reservoir, is perturbed with an external input signal while a readout layer maps the reservoir dynamics to a target output. Computation takes place as a transformation from the input space to a high-dimensional spatiotemporal feature space created by the transient dynamics of the reservoir. The readout layer then combines these features to produce the target output. We show that coupled deoxyribozyme oscillators can act as the reservoir, and that despite using only three coupled oscillators, a molecular reservoir computer can achieve 90% accuracy on a benchmark temporal problem.


Artificial Life | 2012

Finding Optimal Random Boolean Networks for Reservoir Computing

David R. Snyder; Alireza Goudarzi; Christof Teuscher

Reservoir Computing (RC) is a computational model in which a trained readout layer interprets the dynamics of a component called a reservoir that is excited by external input stimuli. The reservoir is often constructed using homogeneous neural networks, in which each neuron's in-degree distribution and transfer function are uniform. RC lends itself to computing with physical and biological systems. However, most such systems are not homogeneous. In this paper, we use Random Boolean Networks (RBNs) to build the reservoir. We explore the computational capabilities of such an RC device using the temporal parity and temporal density classification tasks. We study the sufficient dynamics of RBNs using the kernel quality and generalization rank measures. We verify the finding by Lizier et al. (2008) that the critical connectivity of RBNs optimizes the balance between information storage and information transfer.


Computational Intelligence and Security | 2015

Exploring transfer function nonlinearity in echo state networks

Alireza Goudarzi; Alireza Shabani; Darko Stefanovic

Supralinear and sublinear pre-synaptic and dendritic integration is considered to be responsible for the nonlinear computational power of biological neurons, emphasizing the role of nonlinear integration as opposed to nonlinear output thresholding. How, why, and to what degree transfer function nonlinearity helps biologically inspired neural network models is not fully understood. Here, we study these questions in the context of echo state networks (ESNs). An ESN is a simple neural network architecture in which a fixed recurrent network is driven with an input signal, and the output is generated by a readout layer from measurements of the network states. The ESN architecture enjoys efficient training and good performance on certain signal-processing tasks, such as system identification and time series prediction. ESN performance has been analyzed with respect to the connectivity pattern in the network structure and the input bias. However, the effects of the transfer function in the network have not been studied systematically. Here, we use the Taylor expansion of a frequently used transfer function, the hyperbolic tangent (tanh), to systematically study the effect of increasing transfer function nonlinearity on the memory, nonlinear capacity, and signal-processing performance of an ESN. Interestingly, we find that a quadratic approximation is enough to capture the computational power of an ESN with the tanh function. The results of this study apply to both software and hardware implementations of ESNs.


International Conference on Nanotechnology | 2011

Latency and power consumption in unstructured nanoscale boolean networks

Avinash Amarnath; Prateen Reddy Damera; Alireza Goudarzi; Christof Teuscher

The self-assembly of nanoelectronic devices may result in networks of heterogeneous components with an unstructured interconnect. It has been shown that a fan-in of two per gate optimizes the number of required gates in logical blocks for feed-forward networks of gates. On the other hand, local connectivity in a mesh network optimizes the interconnect's energy consumption. In this paper, we address the question of what fan-in optimizes both the power consumption and the latency in an unstructured network. We show that an average fan-in of K = 3.3 is optimal for random Boolean networks when energy and latency are considered equally important. Our results are important because they reveal an inverse relationship between energy consumption and performance, which allows us to determine the optimal connectivity of a certain class of self-assembled nanoscale devices.
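The tradeoff can be illustrated with a deliberately crude cost model, entirely an assumption for illustration and not the paper's model (it will not reproduce K = 3.3): wiring energy grows with fan-in K, while latency, taken as the depth of a fan-in-K tree over N gates, shrinks as 1/ln K.

```python
import numpy as np

# Toy tradeoff: energy ~ K (more wiring per gate), latency ~ ln(N)/ln(K)
# (depth of a fan-in-K tree). Both are normalized and weighted equally,
# so the combined cost has an interior minimum at an intermediate fan-in.
N = 1000
K = np.linspace(1.1, 10.0, 500)
energy = K / K.max()
latency = (np.log(N) / np.log(K)) / (np.log(N) / np.log(K.min()))
cost = 0.5 * energy + 0.5 * latency
k_opt = K[np.argmin(cost)]
```

The qualitative point carries over: pushing fan-in down saves energy but deepens the network, so an equally weighted cost is minimized at an intermediate connectivity.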


Bioinspired Models of Network, Information, and Computing Systems | 2010

Learning and Generalization in Random Automata Networks

Alireza Goudarzi; Christof Teuscher; Natali Gulbahce

It has been shown [7,6] that feedforward Boolean networks can learn to perform specific simple tasks and generalize well when only a subset of the learning examples is provided for learning. Here, we extend this body of work and show experimentally that random Boolean networks (RBNs), where both the interconnections and the Boolean transfer functions are chosen at random initially, can be evolved by using a state-topology evolution to solve simple tasks. We measure the learning and generalization performance, investigate the influence of the average node connectivity K and the system size N, and introduce a new measure that allows us to better describe the network's learning and generalization behavior. Our results show that networks with higher average connectivity K (supercritical) achieve higher memorization and partial generalization. However, near critical connectivity, the networks show higher perfect generalization on the even-odd task.
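Evolving an RBN to perform a task can be caricatured by a simple hill climber over its truth tables. This is a hypothetical sketch: the task (parity of three designated input nodes), the fitness, and the mutation scheme are stand-ins, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(5)
n, k, steps = 16, 2, 3

inputs = np.array([rng.choice(n, size=k, replace=False) for _ in range(n)])
tables = rng.integers(0, 2, size=(n, 2 ** k))

def fitness(tables):
    # Score over all 8 patterns on nodes 0-2: does node 3 compute their
    # parity after a few synchronous updates (remaining nodes start at 0)?
    score = 0
    for p in range(8):
        state = np.zeros(n, dtype=int)
        state[:3] = [(p >> b) & 1 for b in range(3)]
        target = state[:3].sum() % 2
        for _ in range(steps):
            idx = (state[inputs] * (1 << np.arange(k))).sum(axis=1)
            state = tables[np.arange(n), idx]
        score += int(state[3] == target)
    return score

f0 = fitness(tables)
best = f0
for _ in range(300):                  # hill climb: keep non-worsening mutations
    i, j = rng.integers(n), rng.integers(2 ** k)
    trial = tables.copy(); trial[i, j] ^= 1
    ft = fitness(trial)
    if ft >= best:
        tables, best = trial, ft
```

Sweeping `k` in such a harness is one way to observe the connectivity effects the abstract reports, with supercritical networks memorizing more readily and near-critical ones generalizing better.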

Collaboration


Dive into Alireza Goudarzi's collaborations.

Top Co-Authors

Jens Bürger

Portland State University

Kurt Brian Ferreira

Sandia National Laboratories

David R. Snyder

Portland State University
