
Publication


Featured research published by Sebastian Sardina.


Multi-Agent Programming, Languages, Tools and Applications | 2009

IndiGolog: a high-level programming language for embedded reasoning agents

Giuseppe De Giacomo; Yves Lespérance; Hector J. Levesque; Sebastian Sardina

IndiGolog is a programming language for autonomous agents that sense their environment and do planning as they operate. Instead of classical planning, it supports high-level program execution. The programmer provides a high-level nondeterministic program involving domain-specific actions and tests to perform the agent's tasks. The IndiGolog interpreter then reasons about the preconditions and effects of the actions in the program to find a legal terminating execution. To support this, the programmer provides a declarative specification of the domain (i.e., primitive actions, preconditions and effects, what is known about the initial state) in the situation calculus. The programmer can control the amount of nondeterminism in the program and how much of it is searched over. The language is rich and supports concurrent programming. Programs are executed online together with sensing the environment and monitoring for events, thus supporting the development of reactive agents. We discuss the language, its implementation, and applications that have been realized with it.
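The core execution model described here, searching a nondeterministic high-level program for a legal terminating execution against an action theory, can be pictured with a small sketch. The Python fragment below is only an illustration under invented names (the real IndiGolog interpreter is Prolog-based and far richer, handling sensing, events, and concurrency); the action theory and program encoding are made up for the example.

    # Hypothetical action theory: preconditions and effects over a set-of-fluents state.
    ACTIONS = {
        "pick_up":  {"pre": {"hand_empty"}, "add": {"holding"},    "del": {"hand_empty"}},
        "put_down": {"pre": {"holding"},    "add": {"hand_empty"}, "del": {"holding"}},
    }

    def legal(action, state):
        return ACTIONS[action]["pre"] <= state

    def do(action, state):
        return (state - ACTIONS[action]["del"]) | ACTIONS[action]["add"]

    def find_execution(program, state):
        """program: list of steps, each step a set of alternative actions (a choice point).
        Returns a legal terminating execution (list of actions), or None."""
        if not program:
            return []
        first, rest = program[0], program[1:]
        for action in first:                      # resolve the nondeterministic choice
            if legal(action, state):
                tail = find_execution(rest, do(action, state))
                if tail is not None:
                    return [action] + tail
        return None                               # no legal execution from this state

    print(find_execution([{"put_down", "pick_up"}, {"put_down"}], {"hand_empty"}))
    # -> ['pick_up', 'put_down']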


Autonomous Agents and Multi-Agent Systems | 2011

A BDI agent programming language with failure handling, declarative goals, and planning

Sebastian Sardina; Lin Padgham

Agents are an important technology that has the potential to take over contemporary methods for analysing, designing, and implementing complex software. The Belief-Desire-Intention (BDI) agent paradigm has proven to be one of the major approaches to intelligent agent systems, both in academia and in industry. Typical BDI agent-oriented programming languages rely on user-provided “plan libraries” to achieve goals, and on online context-sensitive subgoal selection and expansion. These allow for the development of systems that are extremely flexible and responsive to the environment, and as a result, well suited for complex applications with (soft) real-time reasoning and control requirements. Nonetheless, complex decision making that goes beyond, but is compatible with, run-time context-dependent plan selection is one of the most natural and important next steps within this technology. In this paper we develop a typical BDI-style agent-oriented programming language that enhances the usual BDI programming style with three distinguished features: declarative goals, look-ahead planning, and failure handling. First, an account that mixes both procedural and declarative aspects of goals is necessary in order to reason about important properties of goals and to decouple plans from what these plans are meant to achieve. Second, lookahead deliberation about the effects of one choice of expansion over another is clearly desirable or even mandatory in many circumstances so as to guarantee goal achievability and to avoid undesired situations. Finally, a failure handling mechanism, suitably integrated with both declarative goals and planning, is required in order to model an adequate level of commitment to goals, as well as to be consistent with most real implemented BDI systems.
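As an illustration of how these features interact, the sketch below combines a declarative goal (a success condition), context-sensitive plan selection from a plan library, and failure handling that falls back to alternative plans. It is a minimal Python sketch with hypothetical plan and fluent names, not the paper's formal language.

    def achieve(goal_condition, plan_library, state):
        """Pursue a declarative goal: keep selecting applicable plans and, on plan
        failure, fall back to the remaining alternatives (BDI-style failure handling)."""
        if goal_condition(state):
            return True                           # goal already holds: nothing to do
        for context, body in plan_library:        # context-sensitive plan selection
            if not context(state):
                continue
            try:
                body(state)                       # execute the plan body
            except RuntimeError:
                continue                          # plan failed: try an alternative
            if goal_condition(state):             # declarative check, not just "plan ended"
                return True
        return False                              # no plan achieved the goal

    # Hypothetical plan library for the goal "door is open".
    def kick_door(state):
        raise RuntimeError("door too sturdy")     # this plan always fails

    def use_handle(state):
        state["door_open"] = True

    plans = [(lambda s: True, kick_door), (lambda s: "has_hands" in s, use_handle)]
    state = {"has_hands": True}
    print(achieve(lambda s: s.get("door_open", False), plans, state))   # -> True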


Adaptive Agents and Multi-Agent Systems | 2007

Goals in the context of BDI plan failure and planning

Sebastian Sardina; Lin Padgham

We develop a Belief-Desire-Intention (BDI) style agent-oriented programming language with special emphasis on the semantics of goals in the presence of the typical failure handling found in many BDI systems, together with a novel account of hierarchical lookahead planning. The work builds incrementally on two existing languages and accommodates three types of goals: classical BDI-style event goals, declarative goals, and planning goals. We mainly focus on the dynamics of these types of goals and, in particular, on a kind of commitment scheme that brings the new language closer to the solid existing work in agent theory. To that end, we develop a semantics that recognises the usual hierarchical structure of active goals as well as their declarative aspects. In contrast with previous languages, the new language prevents an agent from blindly persisting with a (blocked) subsidiary goal when an alternative strategy for achieving a higher-level motivating goal exists. In addition, the new semantics keeps the agent watchful, so that goals that succeed or are deemed impossible are immediately dropped, thus conforming to the requirements of a basic rational commitment strategy. Finally, a mechanism for the proactive adoption of new goals, beyond the mere reaction to events, and a formal account of interaction with the external environment are provided. We believe that the new language is an important step towards making practical BDI programming languages more compatible with the established results in the area of agent theory.
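The commitment behaviour described above can be pictured with a small sketch: goals that have succeeded or become impossible are dropped, and a blocked subgoal is abandoned whenever its motivating parent goal still has an untried alternative. The Python fragment below is a hypothetical structure for illustration only, not the paper's operational semantics.

    from dataclasses import dataclass, field

    @dataclass
    class Goal:
        name: str
        achieved: bool = False
        impossible: bool = False
        blocked: bool = False
        alternatives_left: int = 0          # untried plans for this goal
        subgoals: list = field(default_factory=list)

    def prune(goal):
        """Return True if the goal should stay on the agent's agenda."""
        if goal.achieved or goal.impossible:
            return False                    # rational commitment: drop succeeded/impossible goals
        goal.subgoals = [
            sub for sub in goal.subgoals
            if prune(sub) and not (sub.blocked and goal.alternatives_left > 0)
        ]                                   # don't persist with a blocked subgoal if the
        return True                         # parent can still switch to another strategy

    parent = Goal("deliver_parcel", alternatives_left=1,
                  subgoals=[Goal("take_elevator", blocked=True), Goal("notify_recipient")])
    prune(parent)
    print([g.name for g in parent.subgoals])   # -> ['notify_recipient']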


International Joint Conference on Artificial Intelligence | 2011

Integrating learning into a BDI Agent for environments with changing dynamics

Dhirendra Singh; Sebastian Sardina; Lin Padgham; Geoff James

We propose a framework that adds learning for improving plan selection in the popular BDI agent programming paradigm. In contrast with previous proposals, the approach given here is able to scale up well with the complexity of the agent's plan library. Technically, we develop a novel confidence measure which allows the agent to adjust its reliance on the learning dynamically, facilitating in principle infinitely many (re)learning phases. We demonstrate the benefits of the approach in an example controller for energy management.
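One rough way to picture the role of the confidence measure is as a weight that blends a default selection value with the learned success estimate, where the weight grows with accumulated experience and can be reset to trigger a new relearning phase. The sketch below is my own simplification with made-up parameters, not the paper's actual measure.

    def selection_weight(learned_success_rate, executions, default_weight=0.5, k=10.0):
        """Weight for probabilistic plan selection. Confidence in the learned estimate
        rises with the number of recorded executions; with no experience the agent
        falls back entirely on the default weight."""
        confidence = executions / (executions + k)        # in [0, 1), grows with experience
        return confidence * learned_success_rate + (1 - confidence) * default_weight

    # After a change in the environment's dynamics, the execution counter can be reset,
    # which makes the agent rely on the default weight again and relearn from scratch.
    print(selection_weight(0.9, executions=0))    # -> 0.5 (no experience yet)
    print(selection_weight(0.9, executions=90))   # -> 0.86 (mostly trusts the learned estimate)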


Robotics and Autonomous Systems | 2010

Extending BDI plan selection to incorporate learning from experience

Dhirendra Singh; Sebastian Sardina; Lin Padgham

An important drawback of the popular Belief, Desire, and Intentions (BDI) paradigm is that such systems include no element of learning from experience. We describe a novel BDI execution framework that models context conditions as decision trees, rather than boolean formulae, allowing agents to learn the probability of success for plans based on experience. By using a probabilistic plan selection function, the agents can balance exploration and exploitation of their plans. We extend earlier work to include both parameterised goals and recursion, and modify our previous approach to decision tree confidence to cover the large and even non-finite domains that arise from such considerations. Our evaluation on a pre-existing program that relies heavily on recursion and parameterised goals confirms previous results that naive learning fails in some circumstances, and demonstrates that the improved approach learns relatively well.
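The probabilistic plan selection function mentioned above can be illustrated by choosing among applicable plans in proportion to their estimated success probabilities, so that promising plans are exploited while the others keep being explored. The following Python sketch uses hypothetical plan names and is not the paper's exact selection function.

    import random

    def select_plan(plans):
        """plans: dict mapping plan name to its estimated probability of success in the
        current context (e.g., read off a learned decision tree)."""
        total = sum(plans.values())
        r = random.uniform(0, total)
        cumulative = 0.0
        for name, estimate in plans.items():
            cumulative += estimate
            if r <= cumulative:
                return name
        return name                         # numerical safety net

    estimates = {"plan_via_bridge": 0.8, "plan_via_tunnel": 0.3, "plan_wait": 0.1}
    print(select_plan(estimates))           # usually 'plan_via_bridge', but not always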


ACM Transactions on Intelligent Systems and Technology | 2017

Intelligent Process Adaptation in the SmartPM System

Andrea Marrella; Massimo Mecella; Sebastian Sardina

The increasing application of process-oriented approaches in new challenging dynamic domains beyond business computing (e.g., healthcare, emergency management, factories of the future, home automation) has prompted a reconsideration of the level of flexibility and support required to manage complex knowledge-intensive processes in such domains. A knowledge-intensive process is influenced by user decision making, coupled with contextual data and knowledge production, and involves performing complex tasks in the “physical” real world to achieve a common goal. The physical world, however, is not entirely predictable, and knowledge-intensive processes must be robust to unexpected conditions and adaptable to unanticipated exceptions, recognizing that in real-world environments it is not adequate to assume that all possible recovery activities can be predefined for dealing with the exceptions that can ensue. To tackle this issue, in this paper we present SmartPM, a model and a prototype Process Management System featuring a set of techniques that provide support for automated adaptation of knowledge-intensive processes at runtime. Such techniques are able to automatically adapt process instances when unanticipated exceptions occur, without explicitly defined recovery policies and without the intervention of domain experts at runtime, aiming to reduce error-prone and costly manual ad-hoc changes and thus to relieve users of complex adaptation tasks. To accomplish this, we make use of well-established techniques and frameworks from Artificial Intelligence, such as the situation calculus, IndiGolog, and classical planning. The approach, which is backed by a formal model, has been implemented and validated with a case study based on real knowledge-intensive processes from an emergency management domain.
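The adaptation loop can be sketched as: execute a task, compare the state the process model expects with the state sensed in the physical world, and on a mismatch call a planner to synthesize a recovery sequence. The Python fragment below is a stub illustration with invented task and fluent names, not the SmartPM implementation.

    def plan(from_state, to_state):
        """Stand-in for a classical planner: returns actions turning from_state into to_state.
        Here it simply emits one hypothetical 'restore' action per missing fluent."""
        return [f"restore({fluent})" for fluent in sorted(to_state - from_state)]

    def run_process(tasks, expected, sense):
        trace = []
        for task, effects in tasks:
            trace.append(task)
            expected |= effects                      # state the process model expects
            actual = sense(expected)                 # state actually observed in the world
            if actual != expected:                   # exception: reality diverged from the model
                trace += plan(actual, expected)      # adapt: synthesize a recovery plan
        return trace

    # Hypothetical emergency-management fragment: the second task's effect is lost in the world.
    tasks = [("reach_area", {"at_area"}), ("take_photo", {"photo_taken"})]
    flaky_sensor = lambda s: s - {"photo_taken"}     # the photo never actually gets taken
    print(run_process(tasks, set(), flaky_sensor))
    # -> ['reach_area', 'take_photo', 'restore(photo_taken)']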


International Journal of Agent Technologies and Systems | 2009

Enhancing the Adaptation of BDI Agents Using Learning Techniques

Stéphane Airiau; Lin Padgham; Sebastian Sardina; Sandip Sen

Belief, Desire, and Intentions (BDI) agents are well suited for complex applications with (soft) real-time reasoning and control requirements. BDI agents are adaptive in the sense that they can quickly reason and react to asynchronous events and act accordingly. However, BDI agents lack learning capabilities to modify their behavior when failures occur frequently. We discuss the use of past experience to improve the agent's behavior. More precisely, we use past experience to improve the context conditions of the plans contained in the plan library, initially set by a BDI programmer. First, we consider a deterministic and fully observable environment and discuss how to modify the BDI agent to prevent the re-occurrence of failures, which is not a trivial task. Then, we discuss how decision trees can be used to improve the agent's behavior in a non-deterministic environment.
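One concrete way to realize the decision-tree idea is to log the world state and the outcome of each execution of a plan, fit a classifier, and use it in place of the programmer's original boolean context condition. The sketch below uses scikit-learn's DecisionTreeClassifier as a stand-in, which the paper does not prescribe, together with invented state features.

    from sklearn.tree import DecisionTreeClassifier

    # Hypothetical execution history: features of the state when the plan was tried,
    # and whether the plan succeeded (1) or failed (0).
    #            [door_open, battery_low]
    states   = [[1, 0], [1, 1], [0, 0], [0, 1], [1, 0]]
    outcomes = [ 1,      0,      0,      0,      1     ]

    context_tree = DecisionTreeClassifier(max_depth=2).fit(states, outcomes)

    def applicable(state, threshold=0.5):
        """Learned context condition: apply the plan only if the tree predicts a
        success probability above the threshold."""
        return context_tree.predict_proba([state])[0][1] > threshold

    print(applicable([1, 0]))   # -> True  (door open, battery fine)
    print(applicable([1, 1]))   # -> False (battery low: the plan tended to fail)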


Artificial Intelligence | 2016

Agent planning programs

Giuseppe De Giacomo; Alfonso Gerevini; Fabio Patrizi; Alessandro Saetti; Sebastian Sardina

This work proposes a novel high-level paradigm, agent planning programs, for modeling agent behavior, which suitably mixes automated planning with agent-oriented programming. Agent planning programs are finite-state programs, possibly containing loops, whose atomic instructions consist of a guard, a maintenance goal, and an achievement goal, which act as precondition-invariance-postcondition assertions in program specification. Such programs are to be executed in possibly nondeterministic planning domains, and their execution requires generating plans that meet the goals specified in the atomic instructions while respecting the program's control flow. In this paper, we define the problem of automatically synthesizing the plans required to execute an agent planning program, propose a solution technique based on model checking of two-player game structures, and use it to characterize the worst-case computational complexity of the problem as EXPTIME-complete. Then, we consider the case of deterministic domains and propose a different technique to solve agent planning programs, based on iteratively solving classical planning problems and on exploiting goal preferences and plan adaptation methods. Finally, we study the effectiveness of this approach for deterministic domains through an experimental analysis on well-known planning domains.
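A possible concrete encoding of an agent planning program, for the deterministic case, is a set of transitions each carrying a guard, a maintenance goal, and an achievement goal, realized by repeatedly calling a classical planner. The Python sketch below is a hypothetical encoding for illustration, not the paper's formal model, and it uses a one-action planner stub.

    from dataclasses import dataclass
    from typing import Callable, List

    State = set   # a planning state as a set of fluents

    @dataclass
    class Transition:
        source: str
        target: str
        guard: Callable[[State], bool]        # must hold before planning starts
        maintain: Callable[[State], bool]     # must hold along the whole plan
        achieve: Callable[[State], bool]      # must hold at the end of the plan

    def execute_transition(t: Transition, state: State, planner) -> List[str]:
        """Realize one program instruction: check the guard, then ask the planner for a
        plan achieving t.achieve while preserving t.maintain."""
        if not t.guard(state):
            raise ValueError(f"guard of {t.source}->{t.target} violated")
        plan = planner(state, t.achieve, t.maintain)   # one classical planning problem
        if plan is None:
            raise ValueError("no plan realizes this transition")
        return plan

    # Hypothetical use with a planner stub that 'achieves' the goal in a single action.
    stub_planner = lambda s, achieve, maintain: ["goto(work)"]
    t = Transition("home", "work", guard=lambda s: "rested" in s,
                   maintain=lambda s: "fuel_ok" in s, achieve=lambda s: "at_work" in s)
    print(execute_transition(t, {"rested", "fuel_ok"}, stub_planner))   # -> ['goto(work)']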


Workshops on Enabling Technologies: Infrastructure for Collaborative Enterprises | 2008

Coordinating Mobile Actors in Pervasive and Mobile Scenarios: An AI-Based Approach

Massimiliano de Leoni; Andrea Marrella; Massimo Mecella; S Valentini; Sebastian Sardina

Process management systems (PMSs) can be used not only in classical business scenarios, but also in highly dynamic and uncertain environments, for example to support operators during emergency management by coordinating their activities. In such challenging situations, processes should be adapted in order to cope with anomalous situations, including connection anomalies and task faults. This requires intelligent support for the planning and enactment of complex processes that captures knowledge about the dynamic context of a process. In this paper, we show how this knowledge, together with information about the capabilities of the available actors, may be specified and used not only to support the selection of an appropriate set of agents to fill the roles in a given task, but also to solve the problem of adaptivity. The paper describes a first prototype of a PMS based on well-known artificial intelligence techniques and shows how it can be extended to tackle adaptation.
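The capability matching mentioned here can be illustrated with a simple assignment routine that picks, for each task, a free actor whose capabilities cover the task's requirements. The Python sketch below uses invented task and actor names and is not the prototype described in the paper.

    def assign(tasks, actors):
        """tasks: name -> required capabilities; actors: name -> offered capabilities.
        Returns a task -> actor assignment, using each actor for at most one task."""
        free = dict(actors)
        assignment = {}
        for task, required in tasks.items():
            for actor, capabilities in list(free.items()):
                if required <= capabilities:          # actor covers all required capabilities
                    assignment[task] = actor
                    del free[actor]                   # actor is now busy
                    break
        return assignment

    tasks  = {"extinguish_fire": {"fire_truck"}, "photograph_area": {"camera", "mobility"}}
    actors = {"unit_7": {"fire_truck", "mobility"}, "drone_2": {"camera", "mobility"}}
    print(assign(tasks, actors))
    # -> {'extinguish_fire': 'unit_7', 'photograph_area': 'drone_2'}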


Adaptive Agents and Multi-Agent Systems | 2006

Modelling situations in intelligent agents

John Thangarajah; Lin Padgham; Sebastian Sardina

BDI agent systems and languages such as PRS, JAM, JACK, 3APL, and AgentSpeak have been widely used in developing robust and flexible applications in dynamic domains. However, one criticism of these systems is that the modelling of how agent reasoning progresses is too reliant on the rather low-level notion of individual events. In our own work in a number of application areas, we have consistently noticed a need for a more abstract concept, which we call a situation. Recognition by the agent that it is in a particular situation may affect the goals that it has, may place overarching constraints on how it operates, or may influence the way that it chooses to achieve its goals.
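The notion of a situation as an abstraction over individual events can be pictured as a named condition over the agent's beliefs that, while it holds, contributes goals and constraints to the agent. The Python sketch below is a hypothetical API for illustration only, not the paper's proposal.

    from dataclasses import dataclass, field
    from typing import Callable

    @dataclass
    class Situation:
        name: str
        holds: Callable[[dict], bool]          # condition over the agent's beliefs
        adopted_goals: list = field(default_factory=list)
        constraints: list = field(default_factory=list)

    def active_situations(beliefs, situations):
        """Recognize which situations the agent is currently in."""
        return [s for s in situations if s.holds(beliefs)]

    # Hypothetical situation: a nearby bushfire changes goals and operating constraints.
    bushfire = Situation("bushfire_nearby",
                         holds=lambda b: b.get("smoke_detected") and b.get("temperature", 0) > 40,
                         adopted_goals=["evacuate_residents"],
                         constraints=["avoid_road_closures"])

    beliefs = {"smoke_detected": True, "temperature": 45}
    for s in active_situations(beliefs, [bushfire]):
        print(s.name, s.adopted_goals, s.constraints)
    # -> bushfire_nearby ['evacuate_residents'] ['avoid_road_closures']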

Collaboration


Dive into Sebastian Sardina's collaborations.

Top Co-Authors

Massimo Mecella, Sapienza University of Rome

Andrea Marrella, Sapienza University of Rome

Fabio Patrizi, Sapienza University of Rome

Paolo Felli, University of Melbourne