Rob Mills
University of Southampton
Publications
Featured research published by Rob Mills.
Evolution | 2014
Richard A. Watson; Günter P. Wagner; Mihaela Pavlicev; Daniel M. Weinreich; Rob Mills
Development introduces structured correlations among traits that may constrain or bias the distribution of phenotypes produced. Moreover, when suitable heritable variation exists, natural selection may alter such constraints and correlations, affecting the phenotypic variation available to subsequent selection. However, exactly how the distribution of phenotypes produced by complex developmental systems can be shaped by past selective environments is poorly understood. Here we investigate the evolution of a network of recurrent nonlinear ontogenetic interactions, such as a gene regulation network, in various selective scenarios. We find that evolved networks of this type can exhibit several phenomena that are familiar in cognitive learning systems. These include formation of a distributed associative memory that can “store” and “recall” multiple phenotypes that have been selected in the past, recreate complete adult phenotypic patterns accurately from partial or corrupted embryonic phenotypes, and “generalize” (by exploiting evolved developmental modules) to produce new combinations of phenotypic features. We show that these surprising behaviors follow from an equivalence between the action of natural selection on phenotypic correlations and associative learning, well‐understood in the context of neural networks. This helps to explain how development facilitates the evolution of high‐fitness phenotypes and how this ability changes over evolutionary time.
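The "store", "recall", and error-correction behaviors described above can be illustrated with a minimal Hopfield-style sketch. This is an illustration of the general principle, not the paper's actual model: two hypothetical "adult phenotype" patterns are stored with a Hebbian rule (standing in for the effect of past selection on phenotypic correlations), one is corrupted to make an "embryonic" state, and the recurrent dynamics develop it back.

```python
import numpy as np

rng = np.random.default_rng(0)
N = 100
# Two "adult phenotype" patterns to be stored (hypothetical targets)
patterns = rng.choice([-1, 1], size=(2, N))
# Hebbian weights: past selection on trait correlations acts like this rule
W = (np.outer(patterns[0], patterns[0]) + np.outer(patterns[1], patterns[1])) / N
np.fill_diagonal(W, 0)

def develop(state, steps=20):
    """Recurrent 'ontogenetic' dynamics: repeated thresholded interactions."""
    for _ in range(steps):
        state = np.sign(W @ state)
        state[state == 0] = 1   # break ties deterministically
    return state

# Corrupt 15 of the 100 "embryonic" traits of pattern 0, then develop
embryo = patterns[0].copy()
flipped = rng.choice(N, size=15, replace=False)
embryo[flipped] *= -1
adult = develop(embryo)
overlap = (adult == patterns[0]).mean()   # fraction of traits recovered
```

With only two stored patterns in a 100-unit network, the corrupted start state lies well inside the attractor basin, so development recovers essentially the complete adult pattern.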
Artificial Life | 2011
Richard A. Watson; Rob Mills; Christopher L. Buckley
In some circumstances complex adaptive systems composed of numerous self-interested agents can self-organize into structures that enhance global adaptation, efficiency, or function. However, the general conditions for such an outcome are poorly understood and present a fundamental open question for domains as varied as ecology, sociology, economics, organismic biology, and technological infrastructure design. In contrast, sufficient conditions for artificial neural networks to form structures that perform collective computational processes such as associative memory/recall, classification, generalization, and optimization are well understood. Such global functions within a single agent or organism are not wholly surprising, since the mechanisms (e.g., Hebbian learning) that create these neural organizations may be selected for this purpose; but agents in a multi-agent system have no obvious reason to adhere to such a structuring protocol or produce such global behaviors when acting from individual self-interest. However, Hebbian learning is actually a very simple and fully distributed habituation or positive feedback principle. Here we show that when self-interested agents can modify how they are affected by other agents (e.g., when they can influence which other agents they interact with), then, in adapting these inter-agent relationships to maximize their own utility, they will necessarily alter them in a manner homologous with Hebbian learning. Multi-agent systems with adaptable relationships will thereby exhibit the same system-level behaviors as neural networks under Hebbian learning. For example, improved global efficiency in multi-agent systems can be explained by the inherent ability of associative memory to generalize by idealizing stored patterns and/or creating new combinations of subpatterns. Thus distributed multi-agent systems can spontaneously exhibit adaptive global behaviors in the same sense, and by the same mechanism, as with the organizational principles familiar in connectionist models of organismic learning.
Complexity | 2011
Richard A. Watson; Christopher L. Buckley; Rob Mills
When a dynamical system with multiple point attractors is released from an arbitrary initial condition, it will relax into a configuration that locally resolves the constraints or opposing forces between interdependent state variables. However, when there are many conflicting interdependencies between variables, finding a configuration that globally optimizes these constraints by this method is unlikely or may take many attempts. Here, we show that a simple distributed mechanism can incrementally alter a dynamical system such that it finds lower energy configurations, more reliably and more quickly. Specifically, when Hebbian learning is applied to the connections of a simple dynamical system undergoing repeated relaxation, the system will develop an associative memory that amplifies a subset of its own attractor states. This modifies the dynamics of the system such that its ability to find configurations that minimize total system energy, and globally resolve conflicts between interdependent variables, is enhanced. Moreover, we show that the system is not merely “recalling” low energy states that have been previously visited but “predicting” their location by generalizing over local attractor states that have already been visited. This “self-modeling” framework, i.e., a system that augments its behavior with an associative memory of its own attractors, helps us better understand the conditions under which a simple locally mediated mechanism of self-organization can promote significantly enhanced global resolution of conflicts between the components of a complex adaptive system. We illustrate this process in random and modular network constraint problems equivalent to graph coloring and distributed task allocation problems.
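The "self-modeling" mechanism can be sketched in a few lines; the parameters and details below are illustrative choices, not those used in the paper. A random symmetric constraint network is repeatedly relaxed from random initial conditions, and a slow Hebbian update reinforces whichever attractor each relaxation reaches; the learned weights then bias subsequent relaxations.

```python
import numpy as np

rng = np.random.default_rng(1)
N = 50
J = rng.normal(size=(N, N))      # random symmetric constraint weights
J = (J + J.T) / 2
np.fill_diagonal(J, 0)
J_learn = np.zeros((N, N))       # slow Hebbian augmentation of the couplings
eta = 0.001                      # learning changes weights slowly vs. relaxation

def relax(W, steps=2000):
    """Asynchronous relaxation to a local attractor under weights W."""
    s = rng.choice([-1, 1], size=N)
    for _ in range(steps):
        i = rng.integers(N)
        s[i] = 1 if W[i] @ s >= 0 else -1
    return s

def energy(s):
    """Energy under the ORIGINAL constraints -- what we want minimized."""
    return -0.5 * s @ J @ s

for episode in range(100):           # repeated perturb-and-relax episodes
    s = relax(J + J_learn)           # settle under original + learned weights
    J_learn += eta * np.outer(s, s)  # Hebbian update at the attractor reached
    np.fill_diagonal(J_learn, 0)

# The augmented system tends to settle to lower-energy configurations of the
# original constraint network than unaugmented relaxation does.
e_learned = energy(relax(J + J_learn))
e_baseline = energy(relax(J))
```

Note that `energy` always scores states against the original couplings `J`: the learned weights only steer the dynamics, so any improvement is a genuine gain on the original constraint problem.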
European Conference on Artificial Life | 2009
Richard A. Watson; Niclas Palmius; Rob Mills; Simon T. Powers; Alexandra S. Penn
The role of symbiosis in macroevolution is poorly understood. On the one hand, symbiosis seems to be a perfectly normal manifestation of individual selection; on the other hand, in some of the major transitions in evolution it seems to be implicated in the creation of new higher-level units of selection. Here we present a model of individual selection for symbiotic relationships where individuals can genetically specify traits which partially control which other species they associate with - i.e. they can evolve species-specific grouping. We find that when the genetic evolution of symbiotic relationships occurs slowly compared to ecological population dynamics, symbioses form which canalise the combinations of species that commonly occur at local ESSs into new units of selection. Thus even though symbioses will only evolve if they are beneficial to the individual, we find that the symbiotic groups that form are selectively significant and result in combinations of species that are more cooperative than would be possible under individual selection. These findings thus provide a systematic mechanism for creating significant higher-level selective units from individual selection, and support the notion of a significant and systematic role of symbiosis in macroevolution.
Artificial Life | 2011
Adam Davies; Richard A. Watson; Rob Mills; Christopher L. Buckley; Jason Noble
Simple distributed strategies that modify the behavior of selfish individuals in a manner that enhances cooperation or global efficiency have proved difficult to identify. We consider a network of selfish agents who each optimize their individual utilities by coordinating (or anticoordinating) with their neighbors, to maximize the payoffs from randomly weighted pairwise games. In general, agents will opt for the behavior that is the best compromise (for them) of the many conflicting constraints created by their neighbors, but the attractors of the system as a whole will not maximize total utility. We then consider agents that act as creatures of habit by increasing their preference to coordinate (anticoordinate) with whichever neighbors they are coordinated (anticoordinated) with at present. These preferences change slowly while the system is repeatedly perturbed, so that it settles to many different local attractors. We find that under these conditions, with each perturbation there is a progressively higher chance of the system settling to a configuration with high total utility. Eventually, only one attractor remains, and that attractor is very likely to maximize (or almost maximize) global utility. This counterintuitive result can be understood using theory from computational neuroscience; we show that this simple form of habituation is equivalent to Hebbian learning, and the improved optimization of global utility that is observed results from well-known generalization capabilities of associative memory acting at the network scale. This causes the system of selfish agents, each acting individually but habitually, to collectively identify configurations that maximize total utility.
Adaptive Behavior | 2011
Richard A. Watson; Rob Mills; Christopher L. Buckley
The natural energy minimization behavior of a dynamical system can be interpreted as a simple optimization process, finding a locally optimal resolution of problem constraints. In human problem solving, high-dimensional problems are often made much easier by inferring a low-dimensional model of the system in which search is more effective. But this is an approach that seems to require top-down domain knowledge—not one amenable to the spontaneous energy minimization behavior of a natural dynamical system. However, in this article we investigate the ability of distributed dynamical systems to improve their constraint resolution ability over time by self-organization. We use a “self-modeling” Hopfield network with a novel type of associative connection to illustrate how slowly changing relationships between system components can result in a transformation into a new system which is a low-dimensional caricature of the original system. The energy minimization behavior of this new system is significantly more effective at globally resolving the original system constraints. This model uses only very simple, and fully distributed, positive feedback mechanisms that are relevant to other “active linking” and adaptive networks. We discuss how this neural network model helps us to understand transformations and emergent collective behavior in various non-neural adaptive networks such as social, genetic and ecological networks.
IEEE Transactions on Evolutionary Computation | 2014
Rob Mills; Thomas Jansen; Richard A. Watson
The intuitive idea that good solutions to small problems can be reassembled into good solutions to larger problems is widely familiar in many fields including evolutionary computation. This idea has motivated the building-block hypothesis and model-building optimization methods that aim to identify and exploit problem structure automatically. Recently, a small number of works make use of such ideas by learning problem structure and using this information in a particular manner: these works use the results of a simple search process in primitive units to identify structural correlations (such as modularity) in the problem that are then used to redefine the variational operators of the search process. This process is applied recursively such that search operates at successively higher scales of organization, hence multi-scale search. Here, we show for the first time that there is a simple class of (modular) problems that a multi-scale search algorithm can solve in polynomial time that requires super-polynomial time for other methods. We discuss strengths and limitations of the multi-scale search approach and note how it can be developed further.
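The two-level structure of multi-scale search can be sketched on a toy modular problem. Both the problem and the procedure below are illustrative stand-ins, not the constructions analyzed in the paper: each block of bits scores 1 if all-zeros, 2 if all-ones, and 0 otherwise; level-1 local search discovers each block's local optima, and level-2 search then recombines those whole-block units rather than individual bits.

```python
import itertools

# Illustrative modular problem: K blocks of B bits each.
K, B = 4, 4

def block_score(block):
    """All-zeros is a deceptive local optimum (1); all-ones is the target (2)."""
    if all(b == 0 for b in block): return 1
    if all(b == 1 for b in block): return 2
    return 0

def fitness(bits):
    return sum(block_score(bits[i*B:(i+1)*B]) for i in range(K))

def local_optima_of_block():
    """Level 1: bit-flip hill-climbing within a single block, from every start."""
    optima = set()
    for start in itertools.product([0, 1], repeat=B):
        cur = list(start)
        improved = True
        while improved:
            improved = False
            for i in range(B):
                cand = cur.copy()
                cand[i] ^= 1
                if block_score(cand) > block_score(cur):
                    cur, improved = cand, True
        optima.add(tuple(cur))
    return sorted(optima)

units = local_optima_of_block()   # the higher-scale variational units

# Level 2: variation now swaps whole blocks drawn from the discovered units,
# so search operates at the module scale instead of the bit scale.
best = max(itertools.product(units, repeat=K),
           key=lambda blocks: fitness([b for blk in blocks for b in blk]))
solution = [b for blk in best for b in blk]
```

The point of the sketch is the change of scale: level-2 search never flips individual bits, yet it reliably assembles the global optimum from blocks that level-1 search solved independently.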
Genetic and Evolutionary Computation Conference | 2007
Rob Mills; Richard A. Watson
Recent work has provided functions that can be used to prove a principled distinction between the capabilities of mutation-based and crossover-based algorithms. However, prior functions are isolated problem instances that do not provide much intuition about the space of possible functions that is relevant to this distinction or the characteristics of the problem class that affect the relative success of these operators. Modularity is a ubiquitous and intuitive concept in design, engineering and optimisation, and can be used to produce functions that discriminate the ability of crossover from that of mutation. In this paper, we present a new approach to representing modular problems, which parameterizes the amount of modular structure that is present in the epistatic dependencies of the problem. This adjustable level of modularity can be used to produce tuneable discrimination between genetic algorithms with crossover and mutation-only algorithms.
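A toy fitness function with a modularity parameter illustrates the kind of construction described above; it is a hedged stand-in, not the paper's actual representation. Each bit contributes independently, and each uniform block adds an epistatic bonus scaled by `m`, so `m = 0` gives a fully separable problem while larger `m` strengthens the within-block epistasis that block-respecting crossover can exploit.

```python
B, K = 4, 5   # block size and number of blocks (illustrative values)

def fitness(bits, m=1.0):
    """Toy function with tunable modularity m (not the paper's construction).
    Independent per-bit contributions plus a bonus of m*B for each block
    that is uniformly all-0s or all-1s."""
    f = sum(bits)
    for k in range(K):
        block = bits[k*B:(k+1)*B]
        if sum(block) in (0, B):      # block is uniform
            f += m * B
    return f

def block_crossover(p1, p2, k):
    """One-point crossover cut at a block boundary preserves whole blocks."""
    cut = k * B
    return p1[:cut] + p2[cut:]

# Offspring of two parents with different solved blocks can inherit the
# union of their solved structure in a single crossover event.
parent1 = [1]*B*2 + [0]*B*3          # first two blocks solved (all-1s)
parent2 = [0]*B*2 + [1]*B*3          # last three blocks solved (all-1s)
child = block_crossover(parent1, parent2, 2)
```

The design choice to cut only at block boundaries is what lets crossover outperform mutation here: a mutation-only search must traverse low-fitness intermediate states to re-solve a block, while crossover exchanges solved blocks intact.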
European Conference on Artificial Life | 2007
Rob Mills; Richard A. Watson
Symbiosis, the collaboration of multiple organisms from different species, is common in nature. A related phenomenon, symbiogenesis, the creation of new species through the genetic integration of symbionts, is a powerful alternative to crossover as a variation operator in evolutionary algorithms. It has inspired several previous models that use the repeated composition of preadapted entities. In this paper we introduce a new algorithm utilizing this concept of symbiosis which is simpler and has a more natural interpretation when compared with previous algorithms. In addition it achieves success on a broader class of modular problems than some prior methods.
European Conference on Artificial Life | 2005
Rob Mills; Richard A. Watson
The Baldwin Effect indicates that individually learned behaviours acquired during an organism’s lifetime can influence the evolutionary path taken by a population, without any direct Lamarckian transfer of traits from phenotype to genotype. Several computational studies modelling this effect have included complications that restrict its applicability. Here we present a simplified model that is used to reveal the essential mechanisms and highlight several conceptual issues that have not been clearly defined in prior literature. In particular, we suggest that canalisation and genetic assimilation, often conflated in previous studies, are separate concepts and the former is actually not required for non-heritable phenotypic variation to guide genetic variation. Additionally, learning, often considered to be essential for the Baldwin Effect, can be replaced with a more general phenotypic plasticity model. These simplifications potentially permit the Baldwin Effect to operate in much more general circumstances.