Stan Franklin
University of Memphis
Publications
Featured research published by Stan Franklin.
Intelligent Agents | 1996
Stan Franklin; Arthur C. Graesser
The advent of software agents gave rise to much discussion of just what such an agent is, and of how such agents differ from programs in general. Here we propose a formal definition of an autonomous agent which clearly distinguishes a software agent from just any program. We also offer the beginnings of a natural kinds taxonomy of autonomous agents, and discuss possibilities for further classification. Finally, we discuss subagents and multiagent systems.
Trends in Cognitive Sciences | 2003
Bernard J. Baars; Stan Franklin
Active components of classical working memory are conscious, but traditional theory does not account for this fact. Global Workspace theory suggests that consciousness is needed to recruit unconscious specialized networks that carry out detailed working memory functions. The IDA model provides a fine-grained analysis of this process, specifically of two classical working-memory tasks, verbal rehearsal and the utilization of a visual image. In the process, new light is shed on the interactions between conscious and unconscious aspects of working memory.
Knowledge Engineering Review | 1997
Jim Doran; Stan Franklin; Nicholas R. Jennings; Timothy J. Norman
Cooperation is often presented as one of the key concepts which differentiates multi-agent systems from other related disciplines such as distributed computing, object-oriented systems, and expert systems. However, it is a concept whose precise usage in agent-based systems is at best unclear and at worst highly inconsistent. Given the centrality of the issue, and the different ideological viewpoints on the subject, this was a lively panel which dealt with the following main issues.
Cybernetics and Systems | 1997
Stan Franklin
This paper is primarily concerned with answering two questions: What are necessary elements of embodied architectures? How are we to proceed in a science of embodied systems? Autonomous agents, more specifically cognitive agents, are offered as the appropriate objects of study for embodied AI. The necessary elements of the architectures of these agents are then those of embodied AI as well. A concrete proposal is presented as to how to proceed with such a study. This proposal includes a synergistic parallel employment of an engineering approach and a scientific approach. It also supports the exploration of design space and of niche space. A general architecture for a cognitive agent is outlined and discussed.
Frontiers in Psychology | 2013
Bernard J. Baars; Stan Franklin; Thomas Z. Ramsøy
A global workspace (GW) is a functional hub of binding and propagation in a population of loosely coupled signaling elements. In computational applications, GW architectures recruit many distributed, specialized agents to cooperate in resolving focal ambiguities. In the brain, conscious experiences may reflect a GW function. For animals, the natural world is full of unpredictable dangers and opportunities, suggesting a general adaptive pressure for brains to resolve focal ambiguities quickly and accurately. GW theory aims to understand the differences between conscious and unconscious brain events. In humans and related species the cortico-thalamic (C-T) core is believed to underlie conscious aspects of perception, thinking, learning, feelings of knowing (FOK), felt emotions, visual imagery, working memory, and executive control. Alternative theoretical perspectives are also discussed. The C-T core has many anatomical hubs, but conscious percepts are unitary and internally consistent at any given moment. Over time, conscious contents constitute a very large, open set. This suggests that a brain-based GW capacity cannot be localized in a single anatomical hub. Rather, it should be sought in a functional hub – a dynamic capacity for binding and propagation of neural signals over multiple task-related networks, a kind of neuronal cloud computing. In this view, conscious contents can arise in any region of the C-T core when multiple input streams settle on a winner-take-all equilibrium. The resulting conscious gestalt may ignite an any-to-many broadcast, lasting ∼100–200 ms, and trigger widespread adaptation in previously established networks. To account for the great range of conscious contents over time, the theory suggests an open repertoire of binding coalitions that can broadcast via theta/gamma or alpha/gamma phase coupling, like radio channels competing for a narrow frequency band.
Conscious moments are thought to hold only 1–4 unrelated items; this small focal capacity may be the biological price to pay for global access. Visuotopic maps in cortex specialize in features like color, retinal size, motion, object identity, and egocentric/allocentric framing, so that a binding coalition for the sight of a rolling billiard ball in nearby space may resonate among activity maps of LGN, V1-V4, MT, IT, as well as the dorsal stream. Spatiotopic activity maps can bind into coherent gestalts using adaptive resonance (reentry). Single neurons can join a dominant coalition by phase tuning to regional oscillations in the 4–12 Hz range. Sensory percepts may bind and broadcast from posterior cortex, while non-sensory FOKs may involve prefrontal and frontotemporal areas. The anatomy and physiology of the hippocampal complex suggest a GW architecture as well. In the intact brain the hippocampal complex may support conscious event organization as well as episodic memory storage.
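The winner-take-all competition and any-to-many broadcast described above can be sketched in a few lines of code. This is an illustrative toy, not the authors' implementation: the `Coalition` class, the module handlers, and the activation values are all hypothetical.

```python
# Toy sketch of GW-style winner-take-all selection followed by an
# any-to-many broadcast. All names and values here are hypothetical.
from dataclasses import dataclass

@dataclass
class Coalition:
    content: str       # what the coalition represents
    activation: float  # combined salience of its member processes

def global_broadcast(coalitions, modules):
    """Pick the most active coalition (winner-take-all) and send its
    content to every registered specialist module (any-to-many)."""
    winner = max(coalitions, key=lambda c: c.activation)
    return {name: handler(winner.content) for name, handler in modules.items()}

# Toy specialist modules that simply acknowledge the broadcast content.
modules = {
    "memory": lambda content: f"memory stores '{content}'",
    "action": lambda content: f"action system reacts to '{content}'",
}

coalitions = [
    Coalition("rolling billiard ball", activation=0.9),
    Coalition("background noise", activation=0.3),
]

result = global_broadcast(coalitions, modules)
```

The point of the sketch is structural: a single winning content is distributed to many independent consumers at once, which is the "broadcast" half of GW theory.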
International Journal of Machine Consciousness | 2009
Bernard J. Baars; Stan Franklin
We argue that the functions of consciousness are implemented in a bio-computational manner. That is to say, the conscious as well as the non-conscious aspects of human thinking, planning, and perception are produced by adaptive, biological algorithms. We propose that machine consciousness may be produced by similar adaptive algorithms running on the machine. Global Workspace Theory is currently the most empirically supported and widely discussed theory of consciousness. It provides a high-level description of such algorithms, based on a large body of psychological and brain evidence. LIDA provides an explicit implementation of much of GWT, which can be shown to perform human-like tasks, such as the interactive assignment of naval jobs to sailors. Here we provide brief descriptions of both GWT and LIDA in relation to the scientific evidence bearing on consciousness in the brain. A companion article explores how this approach could lead to machine consciousness. We also discuss the important distinction between volition and consciously mediated action selection, and describe an operational definition of consciousness via verifiable reportability. These are issues that may well bear on the possibility of machine consciousness.
IEEE Transactions on Autonomous Mental Development | 2014
Stan Franklin; Tamas Madl; Sidney K. D'Mello; Javier Snaider
We describe a cognitive architecture, the learning intelligent distribution agent (LIDA), that affords attention, action selection and human-like learning, intended for use in controlling cognitive agents that replicate human experiments as well as performing real-world tasks. LIDA combines sophisticated action selection, motivation via emotions, a centrally important attention mechanism, and multimodal instructionalist and selectionist learning. Empirically grounded in cognitive science and cognitive neuroscience, the LIDA architecture employs a variety of modules and processes, each with its own effective representations and algorithms. LIDA has much to say about motivation, emotion, attention, and autonomous learning in cognitive agents. In this paper, we summarize the LIDA model together with its resulting agent architecture, describe its computational implementation, and discuss results of simulations that replicate known experimental data. We also discuss some of LIDA's conceptual modules, propose nonlinear dynamics as a bridge between LIDA's modules and processes and the underlying neuroscience, and point out some of the differences between LIDA and other cognitive architectures. Finally, we discuss how LIDA addresses some of the open issues in cognitive architecture research.
Topics in Cognitive Science | 2010
Wendell Wallach; Stan Franklin; Colin Allen
Recently, there has been a resurgence of interest in general, comprehensive models of human cognition. Such models aim to explain higher-order cognitive faculties, such as deliberation and planning. Given a computational representation, the validity of these models can be tested in computer simulations such as software agents or embodied robots. The push to implement computational models of this kind has created the field of artificial general intelligence (AGI). Moral decision making is arguably one of the most challenging tasks for computational approaches to higher-order cognition. The need for increasingly autonomous artificial agents to factor moral considerations into their choices and actions has given rise to another new field of inquiry variously known as Machine Morality, Machine Ethics, Roboethics, or Friendly AI. In this study, we discuss how LIDA, an AGI model of human cognition, can be adapted to model both affective and rational features of moral decision making. Using the LIDA model, we will demonstrate how moral decisions can be made in many domains using the same mechanisms that enable general decision making. Comprehensive models of human cognition typically aim for compatibility with recent research in the cognitive and neural sciences. Global workspace theory, proposed by the neuropsychologist Bernard Baars (1988), is a highly regarded model of human cognition that is currently being computationally instantiated in several software implementations. LIDA (Franklin, Baars, Ramamurthy, & Ventura, 2005) is one such computational implementation. LIDA is both a set of computational tools and an underlying model of human cognition, which provides mechanisms that are capable of explaining how an agent's selection of its next action arises from bottom-up collection of sensory data and top-down processes for making sense of its current situation.
We will describe how the LIDA model helps integrate emotions into the human decision-making process, and we will elucidate a process whereby an agent can work through an ethical problem to reach a solution that takes account of ethically relevant factors.
Artificial General Intelligence | 2011
Javier Snaider; Ryan James McCall; Stan Franklin
Intelligent software agents aiming for general intelligence are likely to be exceedingly complex systems and, as such, will be difficult to implement and to customize. Frameworks have been applied successfully in large-scale software engineering applications. A framework constitutes the skeleton of the application, capturing its generic functionality. Frameworks are powerful as they promote code reusability and significantly reduce the amount of effort necessary to develop customized applications. They are well suited for the implementation of AGI software agents. Here we describe the LIDA framework, a customizable implementation of the LIDA model of cognition. We argue that its characteristics make it suitable for wider use in developing AGI cognitive architectures.
PLOS ONE | 2011
Tamas Madl; Bernard J. Baars; Stan Franklin
We propose that human cognition consists of cascading cycles of recurring brain events. Each cognitive cycle senses the current situation, interprets it with reference to ongoing goals, and then selects an internal or external action in response. While most aspects of the cognitive cycle are unconscious, each cycle also yields a momentary “ignition” of conscious broadcasting. Neuroscientists have independently proposed ideas similar to the cognitive cycle, the fundamental hypothesis of the LIDA model of cognition. High-level cognition, such as deliberation, planning, etc., is typically enabled by multiple cognitive cycles. In this paper we describe a timing model of LIDA's cognitive cycle. Based on empirical and simulation data, we propose that an initial phase of perception (stimulus recognition) occurs 80–100 ms from stimulus onset under optimal conditions. It is followed by a conscious episode (broadcast) 200–280 ms after stimulus onset, and an action selection phase 60–110 ms from the start of the conscious phase. One cognitive cycle would therefore take 260–390 ms. The LIDA timing model is consistent with brain evidence indicating a fundamental role for a theta-gamma wave, spreading forward from sensory cortices to rostral corticothalamic regions. This posteriofrontal theta-gamma wave may be experienced as a conscious perceptual event starting at 200–280 ms post stimulus. The action selection component of the cycle is proposed to involve frontal, striatal and cerebellar regions. Thus the cycle is inherently recurrent, as the anatomy of the thalamocortical system suggests. The LIDA model fits a large body of cognitive and neuroscientific evidence. Finally, we describe two LIDA-based software agents: the LIDA Reaction Time agent that simulates human performance in a simple reaction time task, and the LIDA Allport agent which models phenomenal simultaneity within timeframes comparable to human subjects.
While there are many models of reaction time performance, these results fall naturally out of a biologically and computationally plausible cognitive architecture.
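The arithmetic behind the 260–390 ms cycle estimate can be made explicit. The sketch below is illustrative only, not the published LIDA simulation code; the phase table simply restates the ranges quoted in the abstract, and the function name is hypothetical.

```python
# Phase durations (ms) as reported in the abstract's timing model.
PHASES_MS = {
    "perception": (80, 100),            # stimulus onset -> recognition
    "conscious_broadcast": (200, 280),  # conscious episode, post stimulus onset
    "action_selection": (60, 110),      # from the start of the conscious phase
}

def cycle_bounds(phases=PHASES_MS):
    """Total cycle duration = conscious-broadcast latency (already measured
    from stimulus onset) + action-selection time."""
    lo = phases["conscious_broadcast"][0] + phases["action_selection"][0]
    hi = phases["conscious_broadcast"][1] + phases["action_selection"][1]
    return lo, hi

bounds = cycle_bounds()  # (260, 390)
```

Note that perception is not added separately: the 200–280 ms broadcast latency is measured from stimulus onset and so already contains the 80–100 ms perceptual phase, which is why the total comes out to 260–390 ms rather than larger.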