Publication


Featured research published by Bas R. Steunebrink.


Synthese | 2012

A formal model of emotion triggers: an approach for BDI agents

Bas R. Steunebrink; Mehdi Dastani; John-Jules Ch. Meyer

This paper formalizes part of a well-known psychological model of emotions. In particular, the logical structure underlying the conditions that trigger emotions is studied and then hierarchically organized. The insights gained therefrom are used to guide a formalization of emotion triggers, which proceeds in three stages. The first stage captures the conditions that trigger emotions in a semiformal way, i.e., without committing to an underlying formalism and semantics. The second stage captures the main psychological notions used in the emotion model in dynamic doxastic logic. The third stage introduces a BDI-based framework (belief–desire–intention) with achievement goals, which is used to firmly ground the preceding stages. The result is a formalization of emotion triggers for BDI agents with achievement goals. The idea of proceeding in these stages is to provide different levels of commitment to formalisms, so that it remains relatively easy to extend or replace the formalisms used without having to start from scratch. Finally, we show that the formalization renders properties of emotions that are in line with the psychological model on which it is based.
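The general idea of the first, semiformal stage can be illustrated with a minimal sketch: emotion triggers expressed as conditions over a BDI agent's beliefs and achievement goals. All names below (`Agent`, the trigger functions, the string encoding of propositions) are hypothetical illustrations, not the paper's actual formalism.

```python
# Illustrative sketch (hypothetical names, not the paper's formalism):
# emotion triggers as conditions over a BDI agent's beliefs and goals.
from dataclasses import dataclass, field

@dataclass
class Agent:
    beliefs: set = field(default_factory=set)  # propositions believed true
    goals: set = field(default_factory=set)    # achievement goals (desired propositions)

def triggers_joy(agent, event):
    # Joy: the agent believes a desired state has been achieved.
    return event in agent.goals and event in agent.beliefs

def triggers_distress(agent, event):
    # Distress: the agent believes the negation of a desired state
    # (negation modeled naively here as a "not "-prefixed proposition).
    return ("not " + event) in agent.beliefs and event in agent.goals

agent = Agent(beliefs={"paper_accepted"}, goals={"paper_accepted"})
print(triggers_joy(agent, "paper_accepted"))  # True
```

The point of such a semiformal layer is that the trigger conditions can later be re-grounded in a proper logic (here, dynamic doxastic logic and a BDI semantics) without rewriting their overall structure.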


Portuguese Conference on Artificial Intelligence | 2009

A Formal Model of Emotion-Based Action Tendency for Intelligent Agents

Bas R. Steunebrink; Mehdi Dastani; John-Jules Ch. Meyer

Although several formal models of emotions for intelligent agents have recently been proposed, such models often do not formally specify how emotions influence the behavior of an agent. In psychological literature, emotions are often viewed as heuristics that give an individual the tendency to perform particular actions. In this paper, we take an existing formalization of how emotions come about in intelligent agents and extend this with a formalization of action tendencies. The resulting model specifies how the emotions of an agent determine a set of actions from which it can select one to perform. We show that the presented model of how emotions influence behavior is intuitive and discuss interesting properties of the model.
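The heuristic view described above can be sketched in a few lines: an emotion narrows the agent's available actions down to those it tends toward. The emotion-to-action mapping and function names below are hypothetical illustrations, not the paper's model.

```python
# Illustrative sketch (hypothetical mapping, not the paper's model):
# an emotion acts as a heuristic that restricts the agent's options
# to the actions it tends toward under that emotion.
ACTION_TENDENCIES = {
    "fear":  {"flee", "hide"},
    "anger": {"confront"},
    "joy":   {"continue_current_plan"},
}

def tended_actions(available_actions, active_emotions):
    # Union the tendencies of all active emotions, then keep only
    # the actions the agent can actually perform right now.
    tendencies = set().union(*(ACTION_TENDENCIES.get(e, set())
                               for e in active_emotions))
    return available_actions & tendencies

options = {"flee", "confront", "negotiate"}
print(tended_actions(options, {"fear"}))  # {'flee'}
```

The resulting set is the pool from which the agent then selects a single action to perform, as the abstract describes.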


Pacific Rim International Conference on Multi-Agents | 2008

Modularity in Agent Programming Languages

Mehdi Dastani; Christian P. Mol; Bas R. Steunebrink

This paper discusses a module-based vision for designing BDI-based multi-agent programming languages. The introduced concept of modules is generic and facilitates the implementation of different agent concepts such as agent roles and agent profiles, and enables common programming techniques such as encapsulation and information hiding for BDI-based agents. This vision is applied to 2APL, which is an existing BDI-based agent programming language. Specific programming constructs are added to 2APL to allow the implementation of modules. The syntax and intuitive meaning of these programming constructs are provided as well as the operational semantics of one of the programming constructs. Some informal properties of the programming constructs are discussed and it is explained how these modules can be used to implement agent roles, agent profiles, or the encapsulation of BDI concepts.
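The encapsulation idea can be sketched as follows: a module bundles beliefs, goals, and plans, and an agent takes on a role by activating a module whose internals stay hidden from the rest of the agent. This is a hypothetical toy API, not 2APL syntax or semantics.

```python
# Illustrative sketch (hypothetical API, not 2APL): modules bundle
# beliefs, goals, and plans; activating a module lets the agent play
# a role while other modules' internals remain encapsulated.
class Module:
    def __init__(self, name, beliefs=None, goals=None, plans=None):
        self.name = name
        self.beliefs = set(beliefs or [])
        self.goals = set(goals or [])
        self.plans = dict(plans or {})  # goal -> plan (a callable)

class Agent:
    def __init__(self):
        self.active = []  # stack of activated modules (roles/profiles)

    def activate(self, module):
        self.active.append(module)

    def deactivate(self):
        return self.active.pop()

    def plan_for(self, goal):
        # Only the most recently activated module is consulted, so
        # the internals of inactive modules stay hidden.
        return self.active[-1].plans.get(goal)

seller = Module("seller", goals={"sell_item"},
                plans={"sell_item": lambda: "auction"})
agent = Agent()
agent.activate(seller)
print(agent.plan_for("sell_item")())  # auction
```

Activating and deactivating modules in this way is one reading of how roles and profiles could be implemented on top of a module construct.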


Artificial General Intelligence | 2016

Growing Recursive Self-Improvers

Bas R. Steunebrink; Kristinn R. Thórisson; Jürgen Schmidhuber

Research into the capability of recursive self-improvement typically only considers pairs of ⟨agent, self-modification candidate⟩, and asks whether the agent can determine/prove if the self-modification is beneficial and safe. But this leaves out the much more important question of how to come up with a potential self-modification in the first place, as well as how to build an AI system capable of evaluating one. Here we introduce a novel class of AI systems, called experience-based AI (EXPAI), which trivializes the search for beneficial and safe self-modifications. Instead of distracting us with proof-theoretical issues, EXPAI systems force us to consider their education in order to control a system’s growth towards a robust and trustworthy, benevolent and well-behaved agent. We discuss what a practical instance of EXPAI looks like and build towards a “test theory” that allows us to gauge an agent’s level of understanding of educational material.


Artificial General Intelligence | 2014

Bounded Seed-AGI

Eric Nivel; Kristinn R. Thórisson; Bas R. Steunebrink; Haris Dindo; Giovanni Pezzulo; Manuel Rodríguez; Carlos Hernández; Dimitri Ognibene; Jürgen Schmidhuber; Ricardo Sanz; Helgi Páll Helgason; Antonio Chella

Four principal features of autonomous control systems are left both unaddressed and unaddressable by present-day engineering methodologies: (1) The ability to operate effectively in environments that are only partially known at design time; (2) A level of generality that allows a system to re-assess and re-define the fulfillment of its mission in light of unexpected constraints or other unforeseen changes in the environment; (3) The ability to operate effectively in environments of significant complexity; and (4) The ability to degrade gracefully—how it can continue striving to achieve its main goals when resources become scarce, or in light of other expected or unexpected constraining factors that impede its progress. We describe new methodological and engineering principles for addressing these shortcomings, that we have used to design a machine that becomes increasingly better at behaving in underspecified circumstances, in a goal-directed way, on the job, by modeling itself and its environment as experience accumulates. The work provides an architectural blueprint for constructing systems with high levels of operational autonomy in underspecified circumstances, starting from only a small amount of designer-specified code—a seed. Using value-driven dynamic priority scheduling to control the parallel execution of a vast number of lines of reasoning, the system accumulates increasingly useful models of its experience, resulting in recursive self-improvement that can be autonomously sustained after the machine leaves the lab, within the boundaries imposed by its designers. A prototype system named AERA has been implemented and demonstrated to learn a complex real-world task—real-time multimodal dialogue with humans—by on-line observation. Our work presents solutions to several challenges that must be solved for achieving artificial general intelligence.


Web Intelligence | 2009

Modularity in BDI-Based Multi-agent Programming Languages

Mehdi Dastani; Bas R. Steunebrink

This paper proposes a module-based vision for designing BDI-based multi-agent programming languages. The introduced concept of modules enables common programming techniques such as encapsulation and information hiding for BDI-based programs, and facilitates the implementation of agent roles and profiles. This vision is applied to a BDI-based agent programming language to which specific programming constructs are added to allow the implementation of modules. The syntax and intuitive semantics of module-based programming constructs are explained. An example is presented to illustrate how modules can be used to implement BDI-based multi-agent systems.


Artificial General Intelligence | 2015

Anytime Bounded Rationality

Eric Nivel; Kristinn R. Thórisson; Bas R. Steunebrink; Jürgen Schmidhuber

Dependable cyber-physical systems strive to deliver anticipative, multi-objective performance anytime, facing deluges of inputs with varying and limited resources. This is even more challenging for life-long learning rational agents as they also have to contend with the varying and growing know-how accumulated from experience. These issues are of crucial practical value, yet have been only marginally and unsatisfactorily addressed in AGI research. We present a value-driven computational model of anytime bounded rationality robust to variations of both resources and knowledge. It leverages continually learned knowledge to anticipate, revise and maintain concurrent courses of action spanning over arbitrary time scales for execution anytime necessary.


International Conference on Development and Learning | 2012

Continually adding self-invented problems to the repertoire: First experiments with POWERPLAY

Rupesh Kumar Srivastava; Bas R. Steunebrink; Marijn F. Stollenga; Jürgen Schmidhuber

Pure scientists do not only invent new methods to solve given problems. They also invent new problems. The recent POWERPLAY framework formalizes this type of curiosity and creativity in a new, general, yet practical way. To acquire problem-solving prowess through play, POWERPLAY-based artificial explorers by design continually come up with the fastest-to-find, initially novel, but eventually solvable problems. They also continually simplify or speed up solutions to previous problems. We report on results of first experiments with POWERPLAY. A self-delimiting recurrent neural network (SLIM RNN) is used as a general computational architecture to implement the system’s solver. Its weights can encode arbitrary, self-delimiting, halting or non-halting programs affecting both environment (through effectors) and internal states encoding abstractions of event sequences. In open-ended fashion, our POWERPLAY-driven RNNs learn to become increasingly general problem solvers, continually adding new problem-solving procedures to the growing repertoire, exhibiting interesting developmental stages.
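The core POWERPLAY loop can be caricatured in a toy search, with no RNN involved: keep finding a new task the current solver cannot handle, then switch to a modified solver that masters the new task while still solving the whole repertoire. The task/solver encoding below is a hypothetical toy, not the paper's SLIM RNN setup.

```python
# Illustrative toy sketch of the POWERPLAY loop (hypothetical encoding,
# not the paper's SLIM RNN): repeatedly pair a new, currently unsolved
# task with a solver modification that solves it AND the entire
# previously acquired repertoire.
def powerplay(candidate_tasks, candidate_solvers, steps):
    repertoire = []
    solver = candidate_solvers[0]
    for _ in range(steps):
        found = False
        for task in candidate_tasks:
            if solver(task) or task in repertoire:
                continue  # already solvable or already acquired
            for new_solver in candidate_solvers:
                # accept only modifications that do not break old skills
                if new_solver(task) and all(new_solver(t) for t in repertoire):
                    repertoire.append(task)
                    solver = new_solver
                    found = True
                    break
            if found:
                break
        if not found:
            break  # no new solvable task: open-ended growth stalls here
    return repertoire, solver

# Toy domain: tasks are integers; a solver handles numbers below a threshold.
tasks = list(range(5))
solvers = [lambda t, k=k: t < k for k in range(1, 7)]
rep, final = powerplay(tasks, solvers, steps=10)
print(rep)  # [1, 2, 3, 4]
```

Each accepted step strictly grows the repertoire without regressions, which is the invariant that lets the real framework run open-endedly.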


Web Intelligence | 2007

Towards Programming Multimodal Dialogues

Nieske L. Vergunst; Bas R. Steunebrink; Mehdi Dastani; Frank Dignum; J.-J. Ch. Meyer

This paper identifies several issues in multimodal dialogues between a companion robot and a human user. Specifically, these issues pertain to the synchronization of multimodal input and output, and the handling of expected and unexpected input, including input contradicting over different modalities. Furthermore, a novel way of visually representing multimodal dialogues is presented. Ultimately, this work represents some steps towards the development of a principled and generic method for programming multimodal dialogues.


Artificial General Intelligence | 2016

Why Artificial Intelligence Needs a Task Theory

Kristinn R. Thórisson; Jordi Bieger; Thröstur Thorarensen; Jóna S. Sigurðardóttir; Bas R. Steunebrink

The concept of “task” is at the core of artificial intelligence (AI): Tasks are used for training and evaluating AI systems, which are built in order to perform and automatize tasks we deem useful. In other fields of engineering theoretical foundations allow thorough evaluation of designs by methodical manipulation of well understood parameters with a known role and importance; this allows an aeronautics engineer, for instance, to systematically assess the effects of wind speed on an airplane’s performance and stability. No framework exists in AI that allows this kind of methodical manipulation: Performance results on the few tasks in current use (cf. board games, question-answering) cannot be easily compared, however similar or different. The issue is even more acute with respect to artificial general intelligence systems, which must handle unanticipated tasks whose specifics cannot be known beforehand. A task theory would enable addressing tasks at the class level, bypassing their specifics, providing the appropriate formalization and classification of tasks, environments, and their parameters, resulting in more rigorous ways of measuring, comparing, and evaluating intelligent behavior. Even modest improvements in this direction would surpass the current ad-hoc nature of machine learning and AI evaluation. Here we discuss the main elements of the argument for a task theory and present an outline of what it might look like for physical tasks.

Collaboration


Dive into Bas R. Steunebrink's collaborations.

Top Co-Authors

Jürgen Schmidhuber

Dalle Molle Institute for Artificial Intelligence Research
