Aaron Sloman
University of Birmingham
Publications
Featured research published by Aaron Sloman.
Cognition & Emotion | 1987
Aaron Sloman
Abstract: Intelligent animals are solutions to a design problem posed by: the varying requirements of individuals, the more permanent requirements of species and social groups, the constraints of the environment, and the available biological mechanisms. Analysis of this design problem, especially the implications of limited knowledge and a continuous flow of information in a rapidly changing environment, leads to a theory of how new motives are processed in an intelligent system. The need for speed leads to architectures and algorithms that are fallible in ways that explain why intelligent agents are susceptible to emotions and errors. This also holds for intelligent robots. A study of such mechanisms and processes is a step towards a computational theory of emotions, attitudes, moods, character traits, and other aspects of mind so far not studied in Artificial Intelligence. In particular, it turns out that no special emotional subsystem is required. This framework clarifies and refines ordinary concepts o...
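The abstract's central mechanism, a fast but fallible test that decides which new motives deserve attention, can be made concrete with a toy sketch. This is a minimal rendering of my own, not Sloman's implementation; the Motive class, the fixed threshold, and the noise model are all assumptions made for the example.

```python
"""Minimal sketch (not the paper's implementation) of the idea that
speed-driven heuristic filtering of new motives is fallible: a cheap
'insistence' estimate decides what interrupts attention, so trivial
motives sometimes break through and important ones are sometimes missed.
All names and constants are illustrative assumptions."""

import random
from dataclasses import dataclass

@dataclass
class Motive:
    name: str
    true_importance: float   # what unhurried deliberation would reveal
    insistence: float        # cheap heuristic estimate, computed quickly

FILTER_THRESHOLD = 0.6       # assumed fixed interrupt threshold

def fast_filter(motive: Motive) -> bool:
    """Fast, fallible test: uses the heuristic estimate, not the truth."""
    return motive.insistence > FILTER_THRESHOLD

def demo(n: int = 10, noise: float = 0.3) -> None:
    random.seed(1)
    for _ in range(n):
        importance = random.random()
        # Insistence is importance corrupted by noise: the price of speed.
        insistence = min(1.0, max(0.0, importance + random.uniform(-noise, noise)))
        m = Motive("motive", importance, insistence)
        interrupted = fast_filter(m)
        should_have = importance > FILTER_THRESHOLD
        tag = "OK" if interrupted == should_have else "ERROR"
        print(f"importance={importance:.2f} insistence={insistence:.2f} "
              f"interrupt={interrupted} [{tag}]")

if __name__ == "__main__":
    demo()
```

Whenever importance and insistence straddle the threshold, the filter misfires; that kind of error, built in by the need for speed, is what the paper links to emotion-like perturbation.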
National Conference on Artificial Intelligence | 1999
Aaron Sloman
This paper is about how to give human-like powers to complete agents. For this, the most important design choice concerns the overall architecture. Questions regarding detailed mechanisms, forms of representation, inference capabilities, knowledge, etc. are best addressed in the context of a global architecture in which different design decisions need to be linked. Such a design would assemble various kinds of functionality into a complete, coherent working system containing many concurrent, partly independent, partly mutually supportive, partly potentially incompatible processes that address a multitude of issues on different time scales, including asynchronous, concurrent motive generators. Designing human-like agents is part of the more general problem of understanding design space, niche space and their interrelations, for, in the abstract, there is no one optimal design, as biological diversity on Earth shows.
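As a hedged illustration of the "many concurrent, partly independent processes" point, the sketch below runs several asynchronous motive generators on different time scales and collects their output in a shared store. The generator names and periods are invented; an architecture of the kind described would add arbitration, filtering, and deliberation on top.

```python
"""Toy sketch of concurrent, asynchronous motive generators posting to a
shared store that the rest of an agent would consume. Names and timing
constants are assumptions for illustration only."""

import queue
import threading
import time

motives: "queue.Queue[str]" = queue.Queue()

def motive_generator(name: str, period: float, ticks: int) -> None:
    """Each generator runs independently, on its own time scale."""
    for i in range(ticks):
        time.sleep(period)
        motives.put(f"{name}:{i}")

def main() -> None:
    # Different time scales, as the abstract emphasises.
    specs = [("thirst-monitor", 0.05, 4),
             ("threat-watch", 0.02, 8),
             ("long-term-goal", 0.1, 2)]
    threads = [threading.Thread(target=motive_generator, args=s) for s in specs]
    for t in threads:
        t.start()
    for t in threads:
        t.join()
    # A central process would normally arbitrate; here we simply drain.
    while not motives.empty():
        print("new motive:", motives.get())

if __name__ == "__main__":
    main()
```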
Artificial Intelligence | 1971
Aaron Sloman
Abstract: This paper echoes, from a philosophical standpoint, the claim of McCarthy and Hayes that Philosophy and Artificial Intelligence have important relations. Philosophical problems about the use of “intuition” in reasoning are related, via a concept of analogical representation, to problems in the simulation of perception, problem-solving and the generation of useful sets of possibilities in considering how to act. The requirements for intelligent decision-making proposed by McCarthy and Hayes are criticised as too narrow, and more general requirements are suggested instead.
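The notion of analogical representation can be illustrated with a small example of my own (not from the paper, assuming a simple spatial-ordering task): in an analogical representation the structure of the medium mirrors the structure of what is represented, so some relations can be read off rather than inferred.

```python
"""Illustrative contrast between an 'analogical' representation, where
relations are implicit in the structure of the medium, and a
propositional one, where the same relations are explicit assertions.
The objects and relation are invented for the example."""

# Analogical: positions in the list mirror spatial order, so 'left-of'
# need not be stored; it is read off the structure, transitivity included.
row = ["cup", "book", "lamp"]

def left_of_analogical(a: str, b: str) -> bool:
    return row.index(a) < row.index(b)

# Propositional: the same facts as explicit assertions; transitivity
# must be supplied by inference rather than by the medium.
facts = {("cup", "book"), ("book", "lamp")}

def left_of_propositional(a: str, b: str) -> bool:
    if (a, b) in facts:
        return True
    return any((a, m) in facts and left_of_propositional(m, b)
               for m in {x for (_, x) in facts})

print(left_of_analogical("cup", "lamp"))     # True, free from structure
print(left_of_propositional("cup", "lamp"))  # True, via chained inference
```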
Royal Institute of Philosophy Supplement | 1993
Aaron Sloman
Many people who favour the design-based approach to the study of mind, including the author previously, have thought of the mind as a computational system, though they don’t all agree regarding the forms of computation required for mentality. Because of ambiguities in the notion of ‘computation’, and also because it tends to be too closely linked to the concept of an algorithm, it is suggested in this paper that we should rather construe the mind (or an agent with a mind) as a control system involving many interacting control loops of various kinds, most of them implemented in high-level virtual machines, and many of them hierarchically organised. (Some of the sub-processes are clearly computational in character, though not necessarily all.) A number of implications are drawn out, including the implication that there are many informational substates, some incorporating factual information, some control information, using diverse forms of representation. The notion of architecture, i.e. functional differentiation into interacting components, is explained, and the conjecture put forward that, in order to account for the main characteristics of the human mind, it is more important to get the architecture right than to get the mechanisms right (e.g. symbolic vs neural mechanisms). Architecture dominates mechanism.
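A minimal sketch of the control-system view, under assumptions of my own: two hierarchically organised loops in which the higher, slower loop supplies control information (a setpoint) to the lower loop, which regulates a sensed, factual state.

```python
"""Minimal sketch of a mind-as-control-system picture: a higher loop
adjusts the setpoint (control information) of a lower loop, which tracks
a sensed value (factual information). Quantities are invented."""

def low_level_loop(state: float, setpoint: float, gain: float = 0.5) -> float:
    """Inner loop: proportional correction toward the current setpoint."""
    return state + gain * (setpoint - state)

def high_level_loop(step: int) -> float:
    """Outer, slower loop: revises the inner loop's setpoint over time,
    e.g. when a goal changes."""
    return 10.0 if step < 5 else 20.0

state = 0.0
for step in range(10):
    setpoint = high_level_loop(step)         # control information flows down
    state = low_level_loop(state, setpoint)  # factual state is regulated
    print(f"step={step} setpoint={setpoint:.0f} state={state:.2f}")
```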
Communications of the ACM | 1999
Aaron Sloman; Brian Logan
Synthetic agents with varying degrees of intelligence and autonomy are being designed in many research laboratories. The motivations include military training simulations, games and entertainments, educational software, digital personal assistants, software agents managing Internet transactions, or purely scientific curiosity. Different approaches are being explored, including, at one extreme, research on the interactions between agents, and at the other extreme, research on processes within agents. The first approach focuses on forms of communication, requirements for consistent collaboration, planning of coordinated behaviours to achieve collaborative goals, extensions to logics of action and belief for multiple agents, and types of emergent phenomena when many agents interact, for instance, taking routing decisions on a telecommunications network. The second approach focuses on the internal architecture of individual agents required for social interaction, collaborative behaviours, complex decision making, learning, and emergent phenomena within complex agents. Agents with complex internal structure may, for example, combine perception, motive generation, planning, plan execution, execution monitoring, and even emotional reactions. We expect the second approach to become increasingly important for large multi-agent systems deployed in networked environments, as the level of intelligence required of individual agents increases. This is particularly relevant to work on agents which must cooperate to perform tasks requiring planning, problem solving, learning, opportunistic redirection of plans, and fine judgement, in a partially unpredictable environment. In such contexts, important new information about something other than the current goal can arrive at unexpected times or be found in unexpected contexts, and there is often insufficient time for deliberation. This requires reactive mechanisms. However, some tasks involve achieving new types of goals or acting in novel contexts, which may require deliberative mechanisms. Dealing with conflicting goals, or adapting to changing opportunities and cultures, may require sophisticated motivational mechanisms. Motivations for such research include: an interest in modelling human mental functioning (e.g., emotions), a desire for more interesting synthetic agents (‘believable agents’) in games and computer entertainments, and the need for intelligent agents capable of performing more complex tasks than hitherto.
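The reactive/deliberative distinction drawn above can be sketched in a few lines. This is an illustrative toy, not the authors' system; the rule table, the canned planner, and the event names are all assumptions.

```python
"""Hedged sketch of a reactive/deliberative split: urgent percepts
trigger immediate condition-action reactions, while everything else is
handed to a slower, stand-in planner. Names are invented."""

from collections import deque

REACTIVE_RULES = {                 # fast condition-action pairs
    "obstacle": "swerve",
    "collision-warning": "brake",
}

def deliberate(goal: str) -> list[str]:
    """Stand-in for a planner: returns a canned plan for the goal."""
    return [f"plan-step-{i}-for-{goal}" for i in range(2)]

def agent(percepts: list[str]) -> None:
    pending = deque(percepts)
    while pending:
        p = pending.popleft()
        if p in REACTIVE_RULES:     # no time to deliberate
            print("react:", REACTIVE_RULES[p])
        else:                       # novel goal or context: plan
            for step in deliberate(p):
                print("deliberate:", step)

agent(["obstacle", "deliver-package", "collision-warning"])
```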
Creating Personalities for Synthetic Actors: Towards Autonomous Personality Agents | 1997
Aaron Sloman
This paper outlines a design-based methodology for the study of mind as a part of the broad discipline of Artificial Intelligence. Within that framework, some architectural requirements for human-like minds are discussed, and some preliminary suggestions are made regarding mechanisms underlying motivation, emotions, and personality. A brief description is given of the ‘Nursemaid’ or ‘Minder’ scenario being used at the University of Birmingham as a framework for research on these problems. It may be possible later to combine some of these ideas with work on synthetic agents inhabiting virtual reality environments.
Archive | 2002
Aaron Sloman
It is argued that our ordinary concepts of mind are implicitly based on architectural presuppositions and are also cluster concepts. By showing that different information-processing architectures support different classes of possible concepts, and that cluster concepts have an inherent indeterminacy that can be reduced in different ways for different purposes, we point the way to a research programme that promises important conceptual clarification in disciplines concerned with what minds are, how they evolved, how they can go wrong, and how new types can be made, e.g. philosophy, neuroscience, psychology, biology, and artificial intelligence.
Creating Brain-Like Intelligence | 2009
Aaron Sloman
Some issues concerning requirements for architectures, mechanisms, ontologies and forms of representation in intelligent human-like or animal-like robots are discussed. The tautology that a robot that acts and perceives in the world must be embodied is often combined with false premisses, such as the premiss that a particular type of body is a requirement for intelligence, or for human intelligence, or the premiss that all cognition is concerned with sensorimotor interactions, or the premiss that all cognition is implemented in dynamical systems closely coupled with sensors and effectors. It is time to step back and ask what robotic research in the past decade has been ignoring. I shall try to identify some major research gaps by assembling requirements that have been largely ignored and design ideas that have not been investigated, partly because at present it is too difficult to make significant progress on those problems with physical robots, as too many different problems need to be solved simultaneously. In particular, the importance of studying some abstract features of the environment about which the animal or robot has to learn (extending ideas of J. J. Gibson) has not been widely appreciated.
Archive | 1992
Aaron Sloman
As a step towards comprehensive computer models of communication and effective human-machine dialogue, some of the relationships between communication and affect are explored. An outline theory is presented of the architecture that makes various kinds of affective states possible, or even inevitable, in intelligent agents, along with some of the implications of this theory for various communicative processes. The model implies that human beings typically have many different, hierarchically organised dispositions capable of interacting with new information to produce affective states, distract attention, interrupt ongoing actions, and so on. High “insistence” of motives is defined in relation to a tendency to penetrate an attention filter mechanism, which seems to account for the partial loss of control involved in emotions. One conclusion is that emulating human communicative abilities will not be achieved easily. Another is that it will be even more difficult to design and build computing systems that reliably achieve interesting communicative goals.
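A toy rendering of the attention-filter mechanism described here, with invented thresholds and motives: motives whose “insistence” exceeds a variable filter threshold seize attention even while the agent is busy, which is the partial loss of control the abstract associates with emotions.

```python
"""Toy sketch of an insistence-based attention filter. The threshold
values, the busy/idle distinction, and the example motives are all
assumptions made for illustration."""

def filter_threshold(busy: bool) -> float:
    # The filter is raised while the current task is demanding...
    return 0.8 if busy else 0.4

def process(motives: list[tuple[str, float]], busy: bool) -> None:
    threshold = filter_threshold(busy)
    for name, insistence in motives:
        if insistence > threshold:
            # ...but sufficiently insistent motives still break through:
            # attention is captured whether or not the agent chooses it.
            print(f"INTERRUPT: {name} (insistence {insistence:.2f})")
        else:
            print(f"filtered:  {name} (insistence {insistence:.2f})")

process([("phone-ringing", 0.5), ("smell-of-smoke", 0.95)], busy=True)
```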
Proceedings of the 2nd Asia-Pacific Conference on Intelligent Agent Technology (IAT) | 2001
Matthias Scheutz; Aaron Sloman
We analyse control functions of affective states in relatively simple agents in a variety of environments and test the analysis in various simulation experiments in competitive multi-agent environments. The results show that simple affective states (like “hunger”) can be effective in agent control and are likely to evolve in certain competitive environments. This illustrates the methodology of exploring neighbourhoods in “design space” in order to understand tradeoffs in the development of different kinds of agent architectures, whether natural or artificial.
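As a hedged illustration of how a simple affective state can serve a control function, the sketch below implements a “hunger” variable that accumulates over time and redirects behaviour when it crosses a bound. The constants are invented; the paper's actual results come from evolving agents in competitive multi-agent simulations.

```python
"""Minimal sketch of a simple affective control state like 'hunger': an
internal variable that rises over time and redirects behaviour toward
feeding when it crosses a bound. All constants are assumptions."""

HUNGER_RATE = 0.15     # assumed growth per step
EAT_BOUND = 0.5        # assumed switching point
hunger = 0.0

for step in range(8):
    hunger += HUNGER_RATE
    if hunger > EAT_BOUND:          # affective state takes control
        action, hunger = "seek-food", 0.0
    else:                           # default behaviour otherwise
        action = "explore"
    print(f"step={step} action={action} hunger={hunger:.2f}")
```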