Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Christopher L. Buckley is active.

Publication


Featured research published by Christopher L. Buckley.


Artificial Life | 2011

Global adaptation in networks of selfish components: Emergent associative memory at the system scale

Richard A. Watson; Rob Mills; Christopher L. Buckley

In some circumstances complex adaptive systems composed of numerous self-interested agents can self-organize into structures that enhance global adaptation, efficiency, or function. However, the general conditions for such an outcome are poorly understood and present a fundamental open question for domains as varied as ecology, sociology, economics, organismic biology, and technological infrastructure design. In contrast, sufficient conditions for artificial neural networks to form structures that perform collective computational processes such as associative memory/recall, classification, generalization, and optimization are well understood. Such global functions within a single agent or organism are not wholly surprising, since the mechanisms (e.g., Hebbian learning) that create these neural organizations may be selected for this purpose; but agents in a multi-agent system have no obvious reason to adhere to such a structuring protocol or produce such global behaviors when acting from individual self-interest. However, Hebbian learning is actually a very simple and fully distributed habituation or positive feedback principle. Here we show that when self-interested agents can modify how they are affected by other agents (e.g., when they can influence which other agents they interact with), then, in adapting these inter-agent relationships to maximize their own utility, they will necessarily alter them in a manner homologous with Hebbian learning. Multi-agent systems with adaptable relationships will thereby exhibit the same system-level behaviors as neural networks under Hebbian learning. For example, improved global efficiency in multi-agent systems can be explained by the inherent ability of associative memory to generalize by idealizing stored patterns and/or creating new combinations of subpatterns. 
Thus distributed multi-agent systems can spontaneously exhibit adaptive global behaviors in the same sense, and by the same mechanism, as with the organizational principles familiar in connectionist models of organismic learning.


Complexity | 2011

Optimization in “self-modeling” complex adaptive systems

Richard A. Watson; Christopher L. Buckley; Rob Mills

When a dynamical system with multiple point attractors is released from an arbitrary initial condition, it will relax into a configuration that locally resolves the constraints or opposing forces between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimizes these constraints by this method is unlikely or may take many attempts. Here, we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower energy configurations, more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system will develop an associative memory that amplifies a subset of its own attractor states. This modifies the dynamics of the system such that its ability to find configurations that minimize total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely “recalling” low energy states that have been previously visited but “predicting” their location by generalizing over local attractor states that have already been visited. This “self-modeling” framework, i.e., a system that augments its behavior with an associative memory of its own attractors, helps us better understand the conditions under which a simple locally mediated mechanism of self-organization can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph coloring and distributed task allocation problems.
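The core loop described in this abstract, repeated relaxation of a Hopfield-style constraint network with a slow Hebbian update applied to each visited attractor, can be sketched as follows. This is a minimal illustrative implementation, not the paper's code; the network size, learning rate, and schedule are assumptions of ours:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 50
# Symmetric random weights: a constraint network with conflicting interdependencies.
W = rng.normal(size=(n, n))
W = (W + W.T) / 2
np.fill_diagonal(W, 0)

def relax(W, s, steps=2000):
    """Asynchronous relaxation toward a local attractor of the Hopfield energy."""
    for _ in range(steps):
        i = rng.integers(n)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def energy(W, s):
    return -0.5 * s @ W @ s

eta = 0.001              # learning is slow relative to relaxation
W_learn = W.copy()
for epoch in range(100):
    s = rng.choice([-1, 1], size=n)       # random restart ("perturbation")
    s = relax(W_learn, s)
    W_learn += eta * np.outer(s, s)       # Hebbian update on the visited attractor
    np.fill_diagonal(W_learn, 0)

# Energies are always evaluated on the ORIGINAL weights W: the learned system
# guides search, but the goal is to resolve the original constraints.
s0 = relax(W, rng.choice([-1, 1], size=n))
s1 = relax(W_learn, rng.choice([-1, 1], size=n))
print(energy(W, s0), energy(W, s1))  # the learned system typically reaches lower energy
```

The key design point mirrored from the abstract is that the Hebbian update is applied only at attractors and on a much slower timescale than the state dynamics, so the system builds an associative memory of its own attractor states.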


Artificial Life | 2011

If you can't be with the one you love, love the one you're with: How individual habituation of agent interactions improves global utility

Adam Davies; Richard A. Watson; Rob Mills; Christopher L. Buckley; Jason Noble

Simple distributed strategies that modify the behavior of selfish individuals in a manner that enhances cooperation or global efficiency have proved difficult to identify. We consider a network of selfish agents who each optimize their individual utilities by coordinating (or anticoordinating) with their neighbors, to maximize the payoffs from randomly weighted pairwise games. In general, agents will opt for the behavior that is the best compromise (for them) of the many conflicting constraints created by their neighbors, but the attractors of the system as a whole will not maximize total utility. We then consider agents that act as creatures of habit by increasing their preference to coordinate (anticoordinate) with whichever neighbors they are coordinated (anticoordinated) with at present. These preferences change slowly while the system is repeatedly perturbed, so that it settles to many different local attractors. We find that under these conditions, with each perturbation there is a progressively higher chance of the system settling to a configuration with high total utility. Eventually, only one attractor remains, and that attractor is very likely to maximize (or almost maximize) global utility. This counterintuitive result can be understood using theory from computational neuroscience; we show that this simple form of habituation is equivalent to Hebbian learning, and the improved optimization of global utility that is observed results from well-known generalization capabilities of associative memory acting at the network scale. This causes the system of selfish agents, each acting individually but habitually, to collectively identify configurations that maximize total utility.


Evolutionary Biology | 2016

Evolutionary Connectionism: Algorithmic Principles Underlying the Evolution of Biological Organisation in Evo-Devo, Evo-Eco and Evolutionary Transitions

Richard A. Watson; Rob Mills; Christopher L. Buckley; Konstantinos Kouvaris; Adam Jackson; Simon T. Powers; Chris R. Cox; Simon Tudge; Adam Davies; Loizos Kounios; Daniel Power

The mechanisms of variation, selection and inheritance, on which evolution by natural selection depends, are not fixed over evolutionary time. Current evolutionary biology is increasingly focussed on understanding how the evolution of developmental organisations modifies the distribution of phenotypic variation, the evolution of ecological relationships modifies the selective environment, and the evolution of reproductive relationships modifies the heritability of the evolutionary unit. The major transitions in evolution, in particular, involve radical changes in developmental, ecological and reproductive organisations that instantiate variation, selection and inheritance at a higher level of biological organisation. However, current evolutionary theory is poorly equipped to describe how these organisations change over evolutionary time and especially how that results in adaptive complexes at successive scales of organisation (the key problem is that evolution is self-referential, i.e. the products of evolution change the parameters of the evolutionary process). Here we first reinterpret the central open questions in these domains from a perspective that emphasises the common underlying themes. We then synthesise the findings from a developing body of work that is building a new theoretical approach to these questions by converting well-understood theory and results from models of cognitive learning. Specifically, connectionist models of memory and learning demonstrate how simple incremental mechanisms, adjusting the relationships between individually-simple components, can produce organisations that exhibit complex system-level behaviours and improve the adaptive capabilities of the system. 
We use the term “evolutionary connectionism” to recognise that, by functionally equivalent processes, natural selection acting on the relationships within and between evolutionary entities can result in organisations that produce complex system-level behaviours in evolutionary systems and modify the adaptive capabilities of natural selection over time. We review the evidence supporting the functional equivalences between the domains of learning and of evolution, and discuss the potential for this to resolve conceptual problems in our understanding of the evolution of developmental, ecological and reproductive organisations and, in particular, the major evolutionary transitions.


PLOS ONE | 2011

Competition-Based Model of Pheromone Component Ratio Detection in the Moth

Andrei Zavada; Christopher L. Buckley; Dominique Martinez; Jean-Pierre Rospars; Thomas Nowotny

For some moth species, especially those closely interrelated and sympatric, recognizing a specific pheromone component concentration ratio is essential for males to successfully locate conspecific females. We propose and determine the properties of a minimalist competition-based feed-forward neuronal model capable of detecting a certain ratio of pheromone components independently of overall concentration. This model represents an elementary recognition unit for the ratio of binary mixtures which we propose is entirely contained in the macroglomerular complex (MGC) of the male moth. A set of such units, along with projection neurons (PNs), can provide the input to higher brain centres. We found that (1) accuracy is mainly achieved by maintaining a certain ratio of connection strengths between olfactory receptor neurons (ORNs) and local neurons (LNs), much less by properties of the interconnections between the competing LNs proper; an exception to this rule is that it is beneficial if connections between generalist LNs (i.e. excited by either pheromone component) and specialist LNs (i.e. excited by one component only) have the same strength as the reciprocal specialist-to-generalist connections. (2) Successful ratio recognition is achieved using latency-to-first-spike in the LN populations, which, in contrast to expectations with a population rate code, leads to a broadening of responses for higher overall concentrations, consistent with experimental observations. (3) Longer durations of the competition between LNs did not lead to higher recognition accuracy.
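The concentration invariance in point (1) can be illustrated with a toy sketch: if each LN's drive is a fixed linear combination of ORN inputs, scaling both pheromone components by the same factor scales every LN's drive equally, so a latency-based competition (first-to-spike, i.e. largest drive) picks the same winner. The weight values below are hypothetical, not taken from the paper:

```python
import numpy as np

# Hypothetical ORN->LN drive weights; each row is one specialist LN.
W = np.array([[1.0, 0.3],    # LN1 tuned to pheromone component A
              [0.3, 1.0]])   # LN2 tuned to pheromone component B

def winner(c):
    """Competition resolved by latency to first spike: latency ~ 1/drive,
    so the LN with the largest summed ORN drive fires first and wins."""
    drive = W @ c
    return int(np.argmax(drive))

c = np.array([2.0, 1.0])          # component ratio 2:1
print(winner(c), winner(10 * c))  # same winner at 10x overall concentration
```

Because both drives scale linearly with overall concentration, the decision boundary depends only on the component ratio and on the ratio of connection strengths, echoing the abstract's finding (1).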


Adaptive Behavior | 2011

Transformations in the scale of behavior and the global optimization of constraints in adaptive networks

Richard A. Watson; Rob Mills; Christopher L. Buckley

The natural energy minimization behavior of a dynamical system can be interpreted as a simple optimization process, finding a locally optimal resolution of problem constraints. In human problem solving, high-dimensional problems are often made much easier by inferring a low-dimensional model of the system in which search is more effective. But this is an approach that seems to require top-down domain knowledge—not one amenable to the spontaneous energy minimization behavior of a natural dynamical system. However, in this article we investigate the ability of distributed dynamical systems to improve their constraint resolution ability over time by self-organization. We use a ‘‘self-modeling’’ Hopfield network with a novel type of associative connection to illustrate how slowly changing relationships between system components can result in a transformation into a new system which is a low-dimensional caricature of the original system. The energy minimization behavior of this new system is significantly more effective at globally resolving the original system constraints. This model uses only very simple, and fully distributed, positive feedback mechanisms that are relevant to other ‘‘active linking’’ and adaptive networks. We discuss how this neural network model helps us to understand transformations and emergent collective behavior in various non-neural adaptive networks such as social, genetic and ecological networks.


Complexity | 2010

Spatial, temporal, and modulatory factors affecting GasNet evolvability in a visually guided robotics task

Philip Husbands; Andrew Philippides; Patricia A. Vargas; Christopher L. Buckley; Peter Fine; Ezequiel A. Di Paolo; Michael O'Shea

Spatial, temporal, and modulatory factors affecting the evolvability of GasNets — a style of artificial neural network incorporating an analogue of volume signalling — are investigated. The focus of the article is a comparative study of variants of the GasNet, implementing various spatial, temporal, and modulatory constraints, used as control systems in an evolutionary robotics task involving visual discrimination. The results of the study are discussed in the context of related research.


European Conference on Artificial Life | 2005

Timescale and stability in adaptive behaviour

Christopher L. Buckley; Seth Bullock; Netta Cohen

Recently, in both the neuroscience and adaptive behaviour communities, there has been growing interest in the interplay of multiple timescales within neural systems. In particular, the phenomenon of neuromodulation has received a great deal of interest within neuroscience and a growing amount of attention within adaptive behaviour research. This interest has been driven by hypotheses and evidence that have linked neuromodulatory chemicals to a wide range of important adaptive processes such as regulation, reconfiguration, and plasticity. Here, we first demonstrate that manipulating timescales can qualitatively alter the dynamics of a simple system of coupled model neurons. We go on to explore this effect in larger systems within the framework employed by Gardner, Ashby and May in their seminal studies of stability in complex networks. On the basis of linear stability analysis, we conclude that, despite evidence that timescale is important for stability, the presence of multiple timescales within a single system has, in general, no appreciable effect on the May-Wigner stability/connectance relationship. Finally we address some of the shortcomings of linear stability analysis and conclude that more sophisticated analytical approaches are required in order to explore the impact of multiple timescales on the temporally extended dynamics of adaptive systems.
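The May-Wigner stability/connectance relationship tested in this paper can be checked numerically: draw random Jacobians with connectance C and interaction strength sigma, give each node a self-regulating diagonal, and measure how often all eigenvalues have negative real part. A small sketch with illustrative parameters (not those of the paper):

```python
import numpy as np

rng = np.random.default_rng(1)

def stable_fraction(n, C, sigma, trials=200):
    """Fraction of random Jacobians whose eigenvalues all have negative real part."""
    count = 0
    for _ in range(trials):
        J = rng.normal(scale=sigma, size=(n, n))
        mask = rng.random((n, n)) < C        # connectance: each link present w.p. C
        J *= mask
        np.fill_diagonal(J, -1.0)            # self-regulation on the diagonal
        if np.max(np.linalg.eigvals(J).real) < 0:
            count += 1
    return count / trials

n, C = 40, 0.25
# May's criterion predicts the stability transition near sigma * sqrt(n*C) = 1,
# i.e. sigma ~ 1/sqrt(10) ~ 0.32 for these parameters.
for sigma in (0.1, 0.3, 0.5):
    print(sigma, stable_fraction(n, C, sigma))
```

The paper's claim can then be probed by giving rows different characteristic timescales (scaling rows of J) and asking whether the transition point moves; per the abstract, linear stability analysis finds no appreciable shift.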


BioSystems | 2008

Sensitivity and stability: A signal propagation sweet spot in a sheet of recurrent centre crossing neurons

Christopher L. Buckley; Seth Bullock

In this paper we demonstrate that signal propagation across a laminar sheet of recurrent neurons is maximised when two conditions are met. First, neurons must be in the so-called centre crossing configuration. Second, the network's topology and weights must be such that the network comprises strongly coupled nodes, yet lies within the weakly coupled regime. We develop tools from linear stability analysis with which to describe this regime in terms of the connectivity and weight strengths of a network. We use these results to examine the apparent tension between the sensitivity and instability of centre crossing networks.


Energy Policy | 1976

Energy use in UK industry

John Chesshire; Christopher L. Buckley

This paper presents the results of a series of interviews with large industrial energy users in the UK held between October 1975 and January 1976, and is part of an ongoing study on industrial energy use. The aim is to cast light on the factors affecting current and future patterns of industrial energy use, and the medium-term prospects of the individual fuels. Oil will probably remain the residual industrial fuel in the medium term, but natural gas will continue to increase its share of the market, perhaps leading to an absolute decline in the consumption of other fuels unless there is government intervention in depletion policy.

Collaboration


Dive into Christopher L. Buckley's collaboration.

Top Co-Authors

Rob Mills
University of Southampton

Adam Davies
University of Southampton

Taro Toyoizumi
RIKEN Brain Science Institute

Jean-Pierre Rospars
Institut national de la recherche agronomique