
Publication


Featured research published by Nihat Ay.


Entropy | 2014

Quantifying unique information

Nils Bertschinger; Johannes Rauh; Eckehard Olbrich; Jürgen Jost; Nihat Ay

We propose new measures of shared information, unique information and synergistic information that can be used to decompose the mutual information of a pair of random variables (Y, Z) with a third random variable X. Our measures are motivated by an operational idea of unique information, which suggests that shared information and unique information should depend only on the marginal distributions of the pairs (X, Y) and (X, Z). Although this invariance property has not been studied before, it is satisfied by other proposed measures of shared information. The invariance property does not uniquely determine our new measures, but it implies that the functions that we define are bounds to any other measures satisfying the same invariance property. We study properties of our measures and compare them to other candidate measures.
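To make the decomposition concrete, here is a minimal sketch (my own helper functions, not the paper's optimization-based measures) of the classic XOR case: each input alone carries zero mutual information about the output, yet the pair carries one full bit, so the information is purely synergistic.

```python
from itertools import product
from math import log2
from collections import defaultdict

# Joint distribution of (X, Y, Z) with X = XOR(Y, Z), Y and Z uniform and independent.
p = {}
for y, z in product([0, 1], repeat=2):
    p[(y ^ z, y, z)] = 0.25

def marginal(p, idx):
    """Marginal distribution over the variable positions in idx."""
    m = defaultdict(float)
    for s, pr in p.items():
        m[tuple(s[i] for i in idx)] += pr
    return m

def mi(p, a, b):
    """Mutual information I(S_a ; S_b) in bits between index groups a and b."""
    pa, pb, pab = marginal(p, a), marginal(p, b), marginal(p, a + b)
    return sum(pr * log2(pr / (pa[s[:len(a)]] * pb[s[len(a):]]))
               for s, pr in pab.items() if pr > 0)

print(mi(p, (0,), (1,)))      # I(X;Y)      -> 0.0
print(mi(p, (0,), (2,)))      # I(X;Z)      -> 0.0
print(mi(p, (0,), (1, 2)))    # I(X;(Y,Z))  -> 1.0 bit, all synergy
```

Computing the paper's shared/unique/synergy split in general requires an optimization over a constrained set of distributions; this sketch only exhibits the phenomenon the decomposition is designed to capture.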


Neural Computation | 2011

Refinements of universal approximation results for deep belief networks and restricted boltzmann machines

Guido Montúfar; Nihat Ay

We improve recently published results about resources of restricted Boltzmann machines (RBM) and deep belief networks (DBN) required to make them universal approximators. We show that any distribution p on the set {0,1}^n of binary vectors of length n can be arbitrarily well approximated by an RBM with k − 1 hidden units, where k is the minimal number of pairs of binary vectors differing in only one entry such that their union contains the support set of p. In important cases this number is half the cardinality of the support set of p (given in Le Roux & Bengio, 2008). We construct a DBN with 2^n/2(n − b), b ≈ log n, hidden layers of width n that is capable of approximating any distribution on {0,1}^n arbitrarily well. This confirms a conjecture presented in Le Roux and Bengio (2010).
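For readers unfamiliar with the model class, here is a minimal sketch (tiny toy sizes and random parameters of my own choosing, not the paper's construction) of how an RBM defines a probability distribution on binary visible vectors, with the hidden units summed out analytically.

```python
import numpy as np
from itertools import product

rng = np.random.default_rng(0)
n, m = 3, 2                      # visible / hidden units (tiny, so we can enumerate)
W = rng.normal(size=(n, m))      # visible-hidden coupling weights
b = rng.normal(size=n)           # visible biases
c = rng.normal(size=m)           # hidden biases

def unnorm(v):
    """Unnormalized RBM marginal over visibles, hidden units summed out:
    sum_h exp(v@W@h + b@v + c@h) = exp(b@v) * prod_j (1 + exp(v@W[:,j] + c[j]))."""
    return np.exp(b @ v) * np.prod(1.0 + np.exp(v @ W + c))

states = [np.array(s) for s in product([0, 1], repeat=n)]
Z = sum(unnorm(v) for v in states)           # partition function by enumeration
p = [unnorm(v) / Z for v in states]
print(sum(p))                                # -> 1.0, a valid distribution on {0,1}^3
```

The universal approximation question the paper sharpens is how many hidden units (for an RBM) or hidden layers (for a DBN) are needed before distributions of this form can approximate any target distribution on {0,1}^n.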


Adaptive Behavior | 2010

Higher Coordination With Less Control: A Result of Information Maximization in the Sensorimotor Loop

Keyan Zahedi; Nihat Ay; Ralf Der

This work presents a novel learning method in the context of embodied artificial intelligence and self-organization, which has as few assumptions and restrictions as possible about the world and the underlying model. The learning rule is derived from the principle of maximizing the predictive information in the sensorimotor loop. It is evaluated on robot chains of varying length with individually controlled, noncommunicating segments. The comparison of the results shows that maximizing the predictive information per wheel leads to a higher coordinated behavior of the physically connected robots compared with a maximization per robot. Another focus of this article is the analysis of the effect of the robot chain length on the overall behavior of the robots. It will be shown that longer chains with less capable controllers outperform those of shorter length and more complex controllers. The reason is found and discussed in the information-geometric interpretation of the learning process.


Entropy | 2015

Information Geometry on Complexity and Stochastic Interaction

Nihat Ay

Interdependencies of stochastically interacting units are usually quantified by the Kullback-Leibler divergence of a stationary joint probability distribution on the set of all configurations from the corresponding factorized distribution. This is a spatial approach which does not describe the intrinsically temporal aspects of interaction. In the present paper, the setting is extended to a dynamical version where temporal interdependencies are also captured by using information geometry of Markov chain manifolds.
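The spatial measure the abstract starts from can be computed directly for small systems. A minimal sketch (toy joint distribution of my own choosing) of the Kullback-Leibler divergence of a joint distribution from its factorized counterpart, often called multi-information:

```python
from itertools import product
from math import log2

# A joint distribution on two binary units, deliberately correlated.
p = {(0, 0): 0.4, (0, 1): 0.1, (1, 0): 0.1, (1, 1): 0.4}

# Marginal distribution of each unit.
p1 = {x: sum(pr for (a, b), pr in p.items() if a == x) for x in (0, 1)}
p2 = {x: sum(pr for (a, b), pr in p.items() if b == x) for x in (0, 1)}

# Multi-information: KL divergence of the joint from the product of marginals.
I = sum(pr * log2(pr / (p1[a] * p2[b])) for (a, b), pr in p.items() if pr > 0)
print(I)   # -> ~0.278 bits of spatial interdependence
```

This is the purely spatial quantity; the paper's contribution is extending it to capture temporal interdependencies via the information geometry of Markov chain manifolds.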


Philosophical Transactions of the Royal Society B | 2007

Robustness and complexity co-constructed in multimodal signalling networks

Nihat Ay; Jessica C. Flack; David C. Krakauer

In animal communication, signals are frequently emitted using different channels (e.g. frequencies in a vocalization) and different modalities (e.g. gestures can accompany vocalizations). We explore two explanations that have been provided for multimodality: (i) selection for high information transfer through dedicated channels and (ii) increasing fault tolerance or robustness through multichannel signals. Robustness relates to an accurate decoding of a signal when parts of a signal are occluded. We show analytically in simple feed-forward neural networks that while a multichannel signal can solve the robustness problem, a multimodal signal does so more effectively because it can maximize the contribution made by each channel while minimizing the effects of exclusion. Multimodality refers to sets of channels where within each set information is highly correlated. We show that the robustness property ensures correlations among channels producing complex, associative networks as a by-product. We refer to this as the principle of robust overdesign. We discuss the biological implications of this for the evolution of combinatorial signalling systems; in particular, how robustness promotes enough redundancy to allow for a subsequent specialization of redundant components into novel signals.


PLOS ONE | 2013

Information driven self-organization of complex robotic behaviors

Georg Martius; Ralf Der; Nihat Ay

Information theory is a powerful tool to express principles to drive autonomous systems because it is domain invariant and allows for an intuitive interpretation. This paper studies the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process as a driving force to generate behavior. We study nonlinear and nonstationary systems and introduce the time-local predicting information (TiPI) which allows us to derive exact results together with explicit update rules for the parameters of the controller in the dynamical systems framework. In this way the information principle, formulated at the level of behavior, is translated to the dynamics of the synapses. We underpin our results with a number of case studies with high-dimensional robotic systems. We show the spontaneous cooperativity in a complex physical system with decentralized control. Moreover, a jointly controlled humanoid robot develops a high behavioral variety depending on its physics and the environment it is dynamically embedded into. The behavior can be decomposed into a succession of low-dimensional modes that increasingly explore the behavior space. This is a promising way to avoid the curse of dimensionality which hinders learning systems to scale well.


Neural Networks | 2003

Dynamical properties of strongly interacting Markov chains

Nihat Ay; Thomas Wennekers

Spatial interdependences of multiple stochastic units can be suitably quantified by the Kullback-Leibler divergence of the joint probability distribution from the corresponding factorized distribution. In the present paper, a generalized measure for stochastic interaction, which also captures temporal interdependences, is analysed within the setting of Markov chains. The dynamical properties of systems with strongly interacting stochastic units are analytically studied and illustrated by computer simulations. In particular, the emergence of determinism in such systems is demonstrated.
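The temporal measure can be sketched for a two-unit Markov chain. The following is a toy illustration (my own coupled kernel, not one from the paper): the stochastic interaction is the expected KL divergence of the joint transition kernel from the product of its unit-wise marginal kernels, taken under the stationary distribution.

```python
import numpy as np
from itertools import product

states = list(product([0, 1], repeat=2))   # joint states of two binary units

# A coupled 4x4 kernel T[x, y] = p(next=y | current=x): with high probability
# both units flip together, so the transition does NOT factorize over units.
T = np.full((4, 4), 0.1)
for i, (a, b) in enumerate(states):
    T[i, states.index((1 - a, 1 - b))] = 0.7

# Stationary distribution via power iteration (uniform here: T is doubly stochastic).
pi = np.full(4, 0.25)
for _ in range(200):
    pi = pi @ T

def unit_marginal(k):
    """Unit-wise transition marginal p(x_k' | x), conditioned on the full previous state."""
    M = np.zeros((4, 2))
    for j, s in enumerate(states):
        M[:, s[k]] += T[:, j]
    return M

M0, M1 = unit_marginal(0), unit_marginal(1)

# Stochastic interaction: expected KL of the joint kernel from the product
# of its unit-wise marginal kernels, under the stationary distribution (bits).
S = sum(pi[i] * T[i, j] * np.log2(T[i, j] / (M0[i, a2] * M1[i, b2]))
        for i in range(4) for j, (a2, b2) in enumerate(states))
print(S)   # -> ~0.087 bits of temporal interdependence
```

A kernel that factorizes over units given the full previous state would give S = 0; the coupled flips here are exactly the temporal interdependence the spatial measure misses.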


Theory in Biosciences | 2006

Geometric robustness theory and biological networks

Nihat Ay; David C. Krakauer

We provide a geometric framework for investigating the robustness of information flows over biological networks. We use information measures to quantify the impact of knockout perturbations on simple networks. Robustness has two components: a measure of the causal contribution of a node or nodes, and a measure of the change, or exclusion dependence, of the network following node removal. Causality is measured as the statistical contribution of a node to network function, whereas exclusion dependence measures a distance between the unperturbed and the reconfigured network function. We explore the role that redundancy plays in increasing robustness, and how redundancy can be exploited through error-correcting codes implemented by networks. We provide examples of the robustness measure when applied to familiar Boolean functions such as AND, OR and XOR. We discuss the relationship between robustness measures and related measures of complexity, and how robustness always implies a minimal level of complexity.
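As a much-simplified stand-in for the paper's measures (the clamping convention and the use of output-distribution KL as a crude proxy for exclusion dependence are my own), one can compare a Boolean gate's output distribution before and after knocking out one input:

```python
from itertools import product
from math import log2

def kl(p, q):
    """KL divergence D(p || q) in bits over a shared finite support."""
    return sum(pr * log2(pr / q[s]) for s, pr in p.items() if pr > 0)

def out_dist(f, clamp=None):
    """Output distribution of gate f under uniform inputs; optionally clamp input 1 to a value."""
    d = {0: 0.0, 1: 0.0}
    for x1, x2 in product([0, 1], repeat=2):
        if clamp is not None:
            x1 = clamp                      # knockout: input 1 frozen at `clamp`
        d[f(x1, x2)] += 0.25
    return d

gates = {"AND": lambda a, b: a & b, "OR": lambda a, b: a | b, "XOR": lambda a, b: a ^ b}
for name, f in gates.items():
    print(name, kl(out_dist(f, clamp=0), out_dist(f)))
# AND -> ~0.415, OR -> ~0.208, XOR -> 0.0
```

Note the XOR result: the knockout leaves the marginal output distribution unchanged even though the input-output dependency is destroyed, which is why the paper needs causal contribution as a separate component alongside exclusion dependence.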


Chaos | 2011

A Geometric Approach to Complexity

Nihat Ay; Eckehard Olbrich; Nils Bertschinger; Jürgen Jost

We develop a geometric approach to complexity based on the principle that complexity requires interactions at different scales of description. Complex systems are more than the sum of their parts of any size and not just more than the sum of their elements. Using information geometry, we therefore analyze the decomposition of a system in terms of an interaction hierarchy. In mathematical terms, we present a theory of complexity measures for finite random fields using the geometric framework of hierarchies of exponential families. Within our framework, previously proposed complexity measures find their natural place and gain a new interpretation.


Theory in Biosciences | 2012

Information-driven self-organization: the dynamical system approach to autonomous robot behavior

Nihat Ay; Holger Bernigau; Ralf Der; Mikhail Prokopenko

In recent years, information theory has come into the focus of researchers interested in the sensorimotor dynamics of both robots and living beings. One root of these approaches is the idea that living beings are information processing systems and that the optimization of these processes should be an evolutionary advantage. Apart from these more fundamental questions, there has recently been much interest in the question of how a robot can be equipped with an internal drive for innovation or curiosity that may serve as a drive for an open-ended, self-determined development of the robot. The success of these approaches depends essentially on the choice of a convenient measure of the information. This article studies in some detail the use of the predictive information (PI), also called excess entropy or effective measure complexity, of the sensorimotor process. The PI of a process quantifies the total information of past experience that can be used for predicting future events. However, the application of information theoretic measures in robotics is mostly restricted to the case of a finite, discrete state-action space. This article aims at applying the PI in the dynamical systems approach to robot control. We study linear systems as a first step and derive exact results for the PI together with explicit learning rules for the parameters of the controller. Interestingly, these learning rules are of Hebbian nature and local in the sense that the synaptic update is given by the product of activities available directly at the pertinent synaptic ports. The general findings are exemplified by a number of case studies. In particular, in a two-dimensional system, designed to mimic embodied systems with latent oscillatory locomotion patterns, it is shown that maximizing the PI means recognizing and amplifying the latent modes of the robotic system. This and many other examples show that the learning rules derived from the maximum PI principle are a versatile tool for the self-organization of behavior in complex robotic systems.
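The quantity at the heart of this line of work is easy to compute in the simplest setting. A minimal sketch (my own toy chain, not one of the paper's linear systems): for a stationary Markov chain the entire past informs the future only through the current state, so the predictive information reduces to I(X_t ; X_{t+1}).

```python
import numpy as np

# Symmetric binary Markov chain: keeps its state with prob 1-eps, flips with prob eps.
eps = 0.1
T = np.array([[1 - eps, eps], [eps, 1 - eps]])
pi = np.array([0.5, 0.5])                  # stationary distribution (by symmetry)

# Predictive information I(past; future) = I(X_t ; X_{t+1}) for a Markov chain.
joint = pi[:, None] * T                    # p(x_t, x_{t+1})
px, py = joint.sum(1), joint.sum(0)
PI = sum(joint[i, j] * np.log2(joint[i, j] / (px[i] * py[j]))
         for i in range(2) for j in range(2))
print(PI)   # -> ~0.531 bits
```

As eps approaches 0.5 the chain becomes unpredictable and PI goes to zero; as eps approaches 0 or 1 the future is determined by the present and PI approaches one bit, which is the trade-off a PI-maximizing controller exploits.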

Collaboration


Dive into Nihat Ay's collaborations.

Top Co-Authors

Lorenz J. Schwachhöfer

Technical University of Dortmund
