
Publications


Featured research published by Utku Salihoglu.


Neural Computation | 2007

The road to chaos by time-asymmetric Hebbian learning in recurrent neural networks

Colin Molter; Utku Salihoglu; Hugues Bersini

This letter aims at studying the impact of iterative Hebbian learning algorithms on the underlying dynamics of recurrent neural networks. First, an iterative supervised learning algorithm is discussed. An essential improvement of this algorithm consists of indexing the attractor information items by means of external stimuli rather than by using only initial conditions, as Hopfield originally proposed. Modifying the stimuli mainly results in a change of the entire internal dynamics, leading to an enlargement of the set of attractors and potential memory bags. The impact of the learning on the network's dynamics is the following: the more information is stored as limit cycle attractors of the neural network, the more chaos prevails as the background dynamical regime of the network. In fact, the background chaos spreads widely and adopts a very unstructured shape similar to white noise. Next, we introduce a new form of supervised learning that is more plausible from a biological point of view: the network has to learn to react to an external stimulus by cycling through a sequence that is no longer specified a priori. Based on its spontaneous dynamics, the network decides on its own the dynamical patterns to be associated with the stimuli. Compared with classical supervised learning, huge improvements in storage capacity and computational cost have been observed. Moreover, this new form of supervised learning, by being more respectful of the network's intrinsic dynamics, maintains much more structure in the obtained chaos: it is still possible to observe traces of the learned attractors in the chaotic regime. This complex but still very informative regime is referred to as frustrated chaos.
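The stimulus-indexed, time-asymmetric Hebbian scheme sketched in this abstract can be illustrated with a small toy model. Everything below (network size, learning rate, the tanh update, the outer-product correction) is an illustrative assumption for a minimal recurrent network, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 20          # neurons in the toy recurrent network (assumed size)
eta = 0.01      # Hebbian learning rate (assumed value)
T_cycle = 4     # length of the limit cycle to store

W = rng.normal(scale=0.1, size=(N, N))              # recurrent weights
stimulus = rng.choice([-1.0, 1.0], size=N)          # external input indexing this memory
cycle = rng.choice([-1.0, 1.0], size=(T_cycle, N))  # target sequence of network states

def step(x, W, stim):
    """One update of the stimulus-driven recurrent network."""
    return np.tanh(W @ x + stim)

# Time-asymmetric Hebbian learning: while the stimulus is applied, nudge
# each state of the cycle toward its successor with an outer-product update.
for epoch in range(500):
    for t in range(T_cycle):
        pre, post = cycle[t], cycle[(t + 1) % T_cycle]
        W += eta * np.outer(post - step(pre, W, stimulus), pre)

# Retrieval: present the same stimulus and let the dynamics settle on the
# stored cycle; the abstract reports that storing more such cycles makes
# the background (stimulus-free) regime increasingly chaotic.
x = rng.normal(size=N)
for _ in range(50):
    x = step(x, W, stimulus)
```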


International Symposium on Neural Networks | 2007

How Stochastic Noise Helps Memory Retrieval in a Chaotic Brain

Colin Molter; Utku Salihoglu; Hugues Bersini

How information, and more particularly memories, are represented in brain dynamics is still an open question. Using a recurrent network that receives a stimulus-dependent external input, the authors have demonstrated that limit cycle attractors overcome in many respects the limitations of fixed point attractors and give a better correspondence with neurophysiological facts. A main outcome of this perspective is the appearance of chaotic trajectories: instead of an overwhelming presence of spurious attractors, chaotic dynamics shows up when the network faces ambiguous situations. Contrary to intuition, many studies have reported that noise can have beneficial effects in dynamical systems. In line with these studies, it is demonstrated here how stochastic noise can make the chaotic trajectories converge to the expected limit cycle attractors and can consequently improve retrieval performance. This noise-induced retrieval enhancement depends strongly on the type of chaotic dynamics, which is itself a function of how information is coded.
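As a hypothetical illustration of this noise-induced retrieval enhancement, one can add stochastic noise to the same kind of stimulus-driven update used in the sketch above and measure how often random initial states end up tracking the stored cycle. The noise level sigma, the convergence tolerance, and the helper names are assumptions, not values from the paper:

```python
import numpy as np

def noisy_step(x, W, stim, sigma, rng):
    """Stimulus-driven update with additive stochastic noise."""
    return np.tanh(W @ x + stim + sigma * rng.normal(size=x.shape))

def retrieval_rate(W, stim, cycle, sigma, trials=100, steps=200, tol=0.1):
    """Fraction of random initial states that end up near the stored cycle."""
    rng = np.random.default_rng(1)
    hits = 0
    for _ in range(trials):
        x = rng.normal(size=W.shape[0])
        for _ in range(steps):
            x = noisy_step(x, W, stim, sigma, rng)
        # compare the final state against every phase of the target cycle
        if min(np.linalg.norm(x - c) for c in cycle) < tol * np.sqrt(len(x)):
            hits += 1
    return hits / trials

# Sweeping sigma (reusing W, stimulus and cycle from the previous sketch)
# would exhibit the reported effect: retrieval peaks at a moderate,
# nonzero noise level rather than at zero noise.
# for sigma in (0.0, 0.05, 0.1, 0.2):
#     print(sigma, retrieval_rate(W, stimulus, cycle, sigma))
```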


International Symposium on Neural Networks | 2009

Online organization of chaotic cell assemblies. A model for the cognitive map formation

Utku Salihoglu; Hugues Bersini; Yoko Yamaguchi; Colin Molter

While fixed point dynamics remains the predominant regime used for information processing, recent brain observations and computational results increasingly suggest the importance, and even the necessity, of including and relying on more complex dynamics. Independently, since their introduction sixty years ago, cell assemblies have remained a powerful substrate for brain information processing.


Archive | 2007

Giving Meaning to Cycles to Go Beyond the Limitations of Fixed Point Attractors

Colin Molter; Utku Salihoglu; Hugues Bersini

This chapter focuses on associative memories in recurrent artificial neural networks built from the same kind of very simple neurons usually found in neural nets. Over the past 25 years, much of the research endeavor has been dedicated to coding information in fixed point attractors. From a cognitive or neurophysiological point of view, this choice is rather arbitrary. This chapter justifies the need to switch to another encoding mechanism, one that exploits limit cycles and complex background dynamics rather than fixed points. It is shown how these attractors overcome in many respects the limitations of fixed points: better correspondence with neurophysiological facts, increased encoding capacity, improved robustness during the retrieval phase, and a decrease in the number of spurious attractors. However, how to exploit and learn these cycles for encoding the relevant information is still an open issue. Two learning strategies are proposed, tested, and compared: one rather classical, very reminiscent of the usual supervised Hebbian learning, and the other more original, since it allows the coding attractor to be chosen by the network itself (see the sketch below). Computer experiments with these two learning strategies are presented and explained. The second learning mechanism is advocated both for its high cognitive relevance and on account of its much better performance in encoding and retrieving information. The kind of dynamics observed in these experiments (a cyclic attractor when a stimulus is presented and a weak background chaos in the absence of such stimulation) faithfully resembles neurophysiological data; although no straightforward applications have been found so far, the justification is limited to this qualitative mapping with brain observations and to the need to better explore how a physical device such as a brain can store and retrieve a huge quantity of information in a robust way.
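The second, more original strategy (letting the network choose its own coding attractor) can be sketched as follows: record the trajectory the untrained network spontaneously produces under the stimulus, then reinforce it Hebbian-style. This is an assumed toy reading of the mechanism, with all names and parameters illustrative rather than taken from the chapter:

```python
import numpy as np

rng = np.random.default_rng(2)
N, eta, T = 20, 0.02, 6   # network size, learning rate, cycle length (assumed)

W = rng.normal(scale=0.1, size=(N, N))      # recurrent weights
stim = rng.choice([-1.0, 1.0], size=N)      # external stimulus to be associated

def step(x):
    """One update of the stimulus-driven network (reads the global W)."""
    return np.tanh(W @ x + stim)

# 1) Let the network choose: run the untrained dynamics under the stimulus
#    past a transient, then keep the next T states as the pattern to encode.
x = rng.normal(size=N)
for _ in range(100):
    x = step(x)
chosen = []
for _ in range(T):
    x = step(x)
    chosen.append(x.copy())

# 2) Reinforce the self-chosen sequence with time-asymmetric Hebbian
#    updates, strengthening each state as a predecessor of the next.
for _ in range(200):
    for t in range(T):
        pre, post = chosen[t], chosen[(t + 1) % T]
        W += eta * np.outer(post - step(pre), pre)
```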


International Symposium on Neural Networks | 2005

Introduction of a Hebbian unsupervised learning algorithm to boost the encoding capacity of Hopfield networks

Colin Molter; Utku Salihoglu; Hugues Bersini


International Joint Conference on Neural Networks | 2005

Learning Cycles brings Chaos in Continuous Hopfield Networks

Colin Molter; Utku Salihoglu; Hugues Bersini


Archive | 2005

Storing static and cyclic patterns in an Hopfield neural network

Colin Molter; Utku Salihoglu; Hugues Bersini


International Conference on Neural Information Processing | 2006

How reward can induce reverse replay of behavioral sequences in the hippocampus

Colin Molter; Naoyuki Sato; Utku Salihoglu; Yoko Yamaguchi


Lecture Notes in Computer Science | 2005

Phase synchronization and chaotic dynamics in Hebbian learned artificial recurrent neural networks

Colin Molter; Utku Salihoglu; Hugues Bersini


Archive | 2009

Toward a brain-like memory with recurrent neural networks

Utku Salihoglu; Hugues Bersini

Collaboration


Dive into Utku Salihoglu's collaborations.

Top Co-Authors

Hugues Bersini

Université libre de Bruxelles

Colin Molter

RIKEN Brain Science Institute

Yoko Yamaguchi

RIKEN Brain Science Institute

Naoyuki Sato

Future University Hakodate