Chris Thornton
University of Sussex
Publications
Featured research published by Chris Thornton.
Frontiers in Psychology | 2012
K. J. Friston; Chris Thornton; Andy Clark
Recent years have seen the emergence of an important new fundamental theory of brain function. This theory brings information-theoretic, Bayesian, neuroscientific, and machine learning approaches into a single framework whose overarching principle is the minimization of surprise (or, equivalently, the maximization of expectation). The most comprehensive such treatment is the “free-energy minimization” formulation due to Karl Friston (see e.g., Friston and Stephan, 2007; Friston, 2010a,b – see also Fiorillo, 2010; Thornton, 2010). A recurrent puzzle raised by critics of these models is that biological systems do not seem to avoid surprises. We do not simply seek a dark, unchanging chamber, and stay there. This is the “Dark-Room Problem.” Here, we describe the problem and further unpack the issues to which it speaks. Using the same format as the prolog of Eddington’s Space, Time, and Gravitation (Eddington, 1920) we present our discussion as a conversation between: an information theorist (Thornton), a physicist (Friston), and a philosopher (Clark).
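The abstract's central quantity, surprise, is just the negative log probability a model assigns to its sensory input, and the Dark-Room Problem falls out directly: the most predictable environment has the lowest expected surprise. A minimal sketch (the distributions are invented for illustration, not taken from the paper):

```python
import math

def surprise(p_obs: float) -> float:
    """Surprise (self-information) of an observation the model assigns probability p_obs."""
    return -math.log(p_obs)

def expected_surprise(dist):
    """Average surprise over sensory states = Shannon entropy of the distribution."""
    return sum(p * surprise(p) for p in dist)

# Two candidate environments for an agent whose generative model assigns
# these probabilities to its sensory states (illustrative numbers only).
dark_room = [0.99, 0.01]               # almost always the same (dark) input
rich_world = [0.25, 0.25, 0.25, 0.25]  # varied, far less predictable input

print(expected_surprise(dark_room))   # low: a pure surprise-minimiser prefers the dark room
print(expected_surprise(rich_world))  # higher: varied input is 'surprising'
```

On this naive reading, a surprise-minimising agent should head for the dark room and stay there, which is exactly the puzzle the dialogue addresses.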
Archive | 1998
Chris Thornton
Geometric separability is a generalisation of linear separability, familiar to many from Minsky and Papert’s analysis of the Perceptron learning method. The concept forms a novel dimension along which to conceptualise learning methods. The present paper shows how geometric separability can be defined and demonstrates that it accurately predicts the performance of at least one empirical learning method.
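One common reading of Thornton's measure is a nearest-neighbour index: the fraction of instances whose nearest neighbour shares their class. The sketch below implements that reading (details may differ from the paper's formal definition); note how linearly separable data scores 1.0 while XOR, the 2-bit parity problem, scores 0.0:

```python
import math

def gsi(points, labels):
    """Geometric separability index (nearest-neighbour reading): the
    fraction of points whose nearest Euclidean neighbour, excluding the
    point itself, carries the same class label."""
    n = len(points)
    same = 0
    for i in range(n):
        nearest = min((j for j in range(n) if j != i),
                      key=lambda j: math.dist(points[i], points[j]))
        if labels[nearest] == labels[i]:
            same += 1
    return same / n

# Linearly separable data: every nearest neighbour shares its class.
xs = [(0, 0), (0, 1), (5, 5), (5, 6)]
ys = [0, 0, 1, 1]
print(gsi(xs, ys))  # 1.0

# 2-bit parity (XOR): every nearest neighbour lies across the class boundary.
xor_x = [(0, 0), (0, 1), (1, 0), (1, 1)]
xor_y = [0, 1, 1, 0]
print(gsi(xor_x, xor_y))  # 0.0
```

The index thus separates problems where local geometry tracks class structure from those, like parity, where it actively misleads.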
Canadian Conference on Artificial Intelligence | 1996
Chris Thornton
It is well-known that certain learning methods (e.g., the perceptron learning algorithm) cannot acquire complete parity mappings. But it is often overlooked that state-of-the-art learning methods such as C4.5 and backpropagation cannot generalise from incomplete parity mappings. The failure of such methods to generalise on parity mappings may sometimes be dismissed on the grounds that it is ‘impossible’ to generalise over such mappings, or that parity problems are mathematical constructs having little to do with real-world learning. However, this paper argues that such a dismissal is unwarranted. It shows that parity mappings are hard to learn because they are statistically neutral and that statistical neutrality is a property which we should expect to encounter frequently in real-world contexts. It also shows that the generalisation failure on parity mappings occurs even when large, minimally incomplete mappings are used for training purposes, i.e., when claims about the impossibility of generalisation are particularly suspect.
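Statistical neutrality is easy to exhibit concretely. For a complete parity mapping, conditioning the output on the value of any single input bit yields a conditional probability of exactly 0.5, so no marginal statistic over individual inputs carries any information about the output. A short check for 3-bit parity (this sketch reads 'statistical neutrality' as single-variable conditioning; the paper's formal treatment may be broader):

```python
from itertools import product

# Full 3-bit parity mapping: output = XOR (sum mod 2) of the inputs.
mapping = {bits: sum(bits) % 2 for bits in product([0, 1], repeat=3)}

# Conditioning the output on any single input bit gives no information:
# P(out = 1 | bit i = v) is 0.5 in every case.
for i in range(3):
    for v in (0, 1):
        cases = [out for bits, out in mapping.items() if bits[i] == v]
        p = sum(cases) / len(cases)
        print(f"P(out=1 | bit {i} = {v}) = {p}")  # 0.5 every time
```

A learner that generalises by exploiting such marginal correlations therefore has nothing to work with, whatever fraction of the mapping it is shown.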
Archive | 1992
Chris Thornton; Benedict du Boulay
1. Search-Related Techniques in AI. 2. Simple State Space Search. 3. State-Space Search. 4. Heuristic State-Space Search. 5. Heuristic Search of Game Trees. 6. Problem Reduction (AND/OR-Tree Search). 7. Planning (Microworld Search). 8. Parsing (Search as Analysis). 9. Expert Systems (Probabilistic Search). 10. Concept Learning (Extension-Generating Search). 11. Prolog (Search as Computation). 12. References. Introduction to POP-11 Programming. Index.
Trends in Cognitive Sciences | 2010
Chris Thornton
In Friston’s recent article [1], the structure of an agent’s world is taken to be represented by a ‘conditional density’, a probabilistic mapping from ‘causes’ to sensory stimulation. Friston argues that the brain can arrive at an approximation of this mapping by minimizing ‘free energy’, which is a function of sensory stimulation and brain states. A generative model of causal structure in the environment is then obtained, on which basis the agent is able to infer the ‘causes of sensory samples’ [1]. What is unclear is how this mechanism would function when sensory samples are ambiguous. In general, there are multiple interpretations for the causes of any sensory data, and these cannot be resolved on the basis of inspecting the data alone [2]. For any sense data, there will also generally be causes at multiple levels of description, with causes at one level of description being embedded in causes at higher levels. Sensory stimulation is the result not of distinct causes, but of causal structure. How would a mechanism that acts to infer causes measure up to the task of inferring causal structure? Friston asserts that almost ‘any adaptive change in the brain’ can be viewed as resulting from minimization of free energy [1]. On the face of it, no particular stand is taken on the emergence of the structures that mediate minimization. However, by looking at the definition of free energy [3], one finds a significant part being played by the variable ϑ. It is values of ϑ that encapsulate the representation of ‘environmental causes’ by the brain [3]. The range of ϑ then dictates the gross structural form of any representation acquired. With the framework providing no principle for deciding this range, the representation by the brain of the conditional density is inevitably a ‘slightly mysterious construct’ [4]. The expectation might be that ϑ will be fixed through instantiation of fortuitous ‘matches’ between internal and external structures.
‘Those systems that can match their internal structure to the causal structure of the environment will attain a tighter bound.’ [3]. However, there is a problem of circularity here: agents are posited to be able to form an internal structure matching the environment just in case they already have it. Nor is it clear whether this is intended to be the ‘mechanism’ for fixing ϑ. If there is no principle deciding this crucial designator of representational capacity, then one can only assume that it is fixed at random. It seems right for Friston to emphasize that the entropic basis of surprise reveals a deep connection between processes of knowledge, behavior and life. However, this idea has been in common currency for some time (e.g. Refs [5] and [6]) and it is unclear how introduction of the ‘free energy’ concept, specifically, adds any explanatory content. Free energy is taken to be a ‘good proxy’ for surprise: surely it is minimization of ‘surprise’ that is explanatorily salient. The inability of the present formulation to address the issue of structure emergence also poses difficulties with regard to the specification of ϑ ranges, resolution of sensory ambiguity and inference of causal structure.
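The ‘good proxy’ claim rests on a standard variational identity: for any recognition density q over causes ϑ, the free energy F = E_q[log q(ϑ) − log p(s, ϑ)] equals surprise −log p(s) plus the KL divergence from q to the true posterior, so F upper-bounds surprise and touches it exactly when q is the posterior. A discrete toy check (the model numbers are invented for illustration):

```python
import math

# Toy model: two hidden causes, one observed sensory state s.
p_cause = {'A': 0.7, 'B': 0.3}      # prior p(theta)
p_s_given = {'A': 0.9, 'B': 0.2}    # likelihood p(s | theta)

p_joint = {th: p_cause[th] * p_s_given[th] for th in p_cause}
p_s = sum(p_joint.values())         # model evidence p(s)
surprise = -math.log(p_s)           # the quantity free energy proxies

def free_energy(q):
    """Variational free energy F = E_q[log q(theta) - log p(s, theta)]."""
    return sum(q[th] * (math.log(q[th]) - math.log(p_joint[th])) for th in q)

# Any q upper-bounds surprise; the bound is tight at the true posterior.
posterior = {th: p_joint[th] / p_s for th in p_joint}
print(free_energy({'A': 0.5, 'B': 0.5}) >= surprise)   # True: loose bound
print(abs(free_energy(posterior) - surprise) < 1e-9)   # True: tight bound
```

Note that the support of ϑ ({'A', 'B'} here) is fixed by hand, which is precisely the structural choice the commentary argues the framework leaves unprincipled.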
Minds and Machines | 1997
Chris Thornton
The paper uses ideas from Machine Learning, Artificial Intelligence and Genetic Algorithms to provide a model of the development of a ‘fight-or-flight’ response in a simulated agent. The modelled development process involves (simulated) processes of evolution, learning and representation development. The main value of the model is that it provides an illustration of how simple learning processes may lead to the formation of structures which can be given a representational interpretation. It also shows how these may form the infrastructure for closely-coupled agent/environment interaction.
Connection Science | 1995
Chris Thornton
Existing complexity measures from contemporary learning theory cannot be conveniently applied to specific learning problems (e.g. training sets). Moreover, they are typically non-generic, i.e. they necessitate making assumptions about the way in which the learner will operate. The lack of a satisfactory, generic complexity measure for learning problems poses difficulties for researchers in various areas; the present paper puts forward an idea which may help to alleviate these. It shows that supervised learning problems fall into two generic complexity classes, only one of which is associated with computational tractability. By determining which class a particular problem belongs to, we can thus effectively evaluate its degree of generic difficulty.
Journal of New Music Research | 2011
Chris Thornton
Abstract The paper introduces the ‘Bayes transform’, a mathematical procedure for putting data into a hierarchical representation. Applicable to any type of data, the procedure yields interesting results when applied to sequences. In this case, the representation obtained implicitly models the repetition hierarchy of the source. There are then natural applications to music. Derivation of Bayes transforms can be the means of determining the repetition hierarchy of note sequences (melodies) in an empirical and domain-general way. The paper investigates application of this approach to Folk Song, examining the results that can be obtained by treating such transforms as generative models.
Adaptive Behavior | 2010
Chris Thornton
While there is increasing recognition of how relations of embodiment can be exploited for achievement of cognitive goals, we still lack any general method for formalizing the benefits that are then obtained, or for quantifying them. The present article describes a method that can be used to calculate the informational benefit obtained when embodiment becomes a vehicle for generation of ‘good data’, that is, data exhibiting behaviorally salient correlations.
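One natural way to put a number on ‘behaviorally salient correlations’ is mutual information between sensor readings and the salient variable; the sketch below uses that measure purely as an illustration (the paper's own quantification method may differ, and the samples are invented):

```python
import math
from collections import Counter

def mutual_information(pairs):
    """Estimate I(X;Y) in bits from a list of (x, y) samples."""
    n = len(pairs)
    pxy = Counter(pairs)                 # joint counts
    px = Counter(x for x, _ in pairs)    # marginal counts for X
    py = Counter(y for _, y in pairs)    # marginal counts for Y
    return sum((c / n) * math.log2((c / n) / ((px[x] / n) * (py[y] / n)))
               for (x, y), c in pxy.items())

# An embodied sensor whose readings correlate with a behaviourally
# salient variable yields 'good data' ...
good = [(0, 0)] * 40 + [(1, 1)] * 40 + [(0, 1)] * 10 + [(1, 0)] * 10
# ... versus an uncorrelated sensor, which yields none.
poor = [(0, 0)] * 25 + [(1, 1)] * 25 + [(0, 1)] * 25 + [(1, 0)] * 25

print(mutual_information(good))  # positive: measurable informational benefit
print(mutual_information(poor))  # 0.0: no benefit
```

The difference between the two figures is the kind of quantitative benefit the article argues embodiment can deliver.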
Cognitive Science | 2009
Chris Thornton
Early agreement within cognitive science on the topic of representation has now given way to a combination of positions. Some question the significance of representation in cognition. Others continue to argue in favor, but the case has not been demonstrated in any formal way. The present paper sets out a framework in which the value of representation use can be mathematically measured, albeit in a broadly sensory context rather than a specifically cognitive one. Key to the approach is the use of Bayesian networks for modeling the distal dimension of sensory processes. More relevant to cognitive science is the theoretical result obtained, which is that a certain type of representational architecture is necessary for achievement of sensory efficiency. While exhibiting few of the characteristics of traditional, symbolic encoding, this architecture corresponds quite closely to the forms of embedded representation now being explored in some embedded/embodied approaches. It becomes meaningful to view that type of representation use as a form of information recovery. A formal basis then exists for viewing representation not so much as the substrate of reasoning and thought, but rather as a general medium for efficient, interpretive processing.