David N. Morley
SRI International
Publication
Featured research published by David N. Morley.
AI Magazine | 2007
Karen L. Myers; Pauline M. Berry; Jim Blythe; Ken Conley; Melinda T. Gervasio; Deborah L. McGuinness; David N. Morley; Avi Pfeffer; Martha E. Pollack; Milind Tambe
We describe an intelligent personal assistant that has been developed to aid a busy knowledge worker in managing time commitments and performing tasks. The design of the system was motivated by the complementary objectives of (1) relieving the user of routine tasks, thus allowing her to focus on tasks that critically require human problem-solving skills, and (2) intervening in situations where cognitive overload leads to oversights or mistakes by the user. The system draws on a diverse set of AI technologies that are linked within a Belief-Desire-Intention (BDI) agent system. Although the system provides a number of automated functions, the overall framework is highly user centric in its support for human needs, responsiveness to human inputs, and adaptivity to user working style and preferences.
Adaptive Agents and Multi-Agent Systems | 2004
David N. Morley; Karen L. Myers
There is a need for agent systems that can scale to real-world applications, yet retain the clean semantic underpinning of more formal agent frameworks. We describe the SRI Procedural Agent Realization Kit (SPARK), a new BDI agent framework that combines these two qualities. In contrast to most practical agent frameworks, SPARK has a clear, well-defined formal semantics that is intended to support reasoning techniques such as procedure validation, automated synthesis, and procedure repair. SPARK also provides a variety of capabilities such as introspection and meta-level reasoning to enable more sophisticated methods for agent control, and advisability techniques that support user directability. On the practical side, SPARK has several design constructs that support the development of large-scale agent applications. SPARK is currently being used as the agent infrastructure for a personal assistant system for a manager in an office environment.
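The BDI execution model that SPARK implements can be illustrated with a minimal sketch. This is not SPARK's actual API; the class and method names below are hypothetical, showing only the core cycle in which an agent matches a posted goal against procedures whose preconditions hold in its current beliefs, then executes the selected procedure as an intention.

```python
# Minimal, illustrative BDI execution cycle (hypothetical names, not SPARK's API).
class Procedure:
    def __init__(self, trigger, precondition, body):
        self.trigger = trigger            # goal name this procedure responds to
        self.precondition = precondition  # callable: beliefs -> bool
        self.body = body                  # callable: beliefs -> None

class BDIAgent:
    def __init__(self, procedures):
        self.beliefs = {}
        self.goals = []
        self.procedures = procedures

    def post_goal(self, goal):
        self.goals.append(goal)

    def step(self):
        """One deliberation cycle: pick a goal, find an applicable procedure, run it."""
        if not self.goals:
            return False
        goal = self.goals.pop(0)
        for proc in self.procedures:
            if proc.trigger == goal and proc.precondition(self.beliefs):
                proc.body(self.beliefs)   # adopt and execute the intention
                return True
        return False                      # no applicable procedure: the goal fails

# Usage: a single procedure that marks a meeting as scheduled.
schedule = Procedure(
    trigger="schedule_meeting",
    precondition=lambda b: b.get("calendar_free", False),
    body=lambda b: b.__setitem__("meeting_scheduled", True),
)
agent = BDIAgent([schedule])
agent.beliefs["calendar_free"] = True
agent.post_goal("schedule_meeting")
agent.step()
print(agent.beliefs["meeting_scheduled"])  # True
```

A real framework such as SPARK layers onto this cycle the introspection, meta-level reasoning, and advisability mechanisms the abstract describes.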
International Conference on Knowledge Capture | 2001
Karen L. Myers; David N. Morley
Many potential applications for agent technology require humans and agents to work together in order to achieve complex tasks effectively. In contrast, much of the work in the agents community to date has focused on technologies for fully autonomous agent systems. This paper presents a framework for the directability of agents, in which a human supervisor can define policies to influence agent activities at execution time. The framework focuses on the concepts of adjustable autonomy for agents (i.e., varying the degree to which agents make decisions without human intervention) and strategy preference (i.e., recommending how agents should accomplish assigned tasks). The directability framework has been implemented within a PRS environment, and applied to a multiagent intelligence-gathering domain.
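The adjustable-autonomy idea can be sketched concretely. The names below are hypothetical and do not reflect the paper's PRS implementation; the sketch only shows the shape of a supervisor policy that decides, per task, whether the agent may act autonomously or must obtain human sign-off first.

```python
# Illustrative sketch of adjustable autonomy (hypothetical names).
def make_policy(require_approval_for):
    """Return a policy: tasks in the given set need human approval."""
    def policy(task):
        return "ask_human" if task in require_approval_for else "autonomous"
    return policy

def execute(task, policy, human_approves):
    """Run a task subject to the supervisor's policy."""
    decision = policy(task)
    if decision == "autonomous":
        return f"executed {task}"
    # Adjustable autonomy: defer to the human for sensitive tasks.
    return f"executed {task}" if human_approves(task) else f"deferred {task}"

policy = make_policy({"contact_source"})
print(execute("collate_report", policy, lambda t: False))  # executed collate_report
print(execute("contact_source", policy, lambda t: False))  # deferred contact_source
```

Varying the set of tasks that require approval is what shifts the agent along the autonomy spectrum at execution time.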
Adaptive Agents and Multi-Agent Systems | 2007
John Thangarajah; James Harland; David N. Morley; Neil Yorke-Smith
Intelligent agents that are intended to work in dynamic environments must be able to gracefully handle unsuccessful tasks and plans. In addition, such agents should be able to make rational decisions about an appropriate course of action, which may include aborting a task or plan, either as a result of the agent's own deliberations, or potentially at the request of another agent. In this paper we investigate the incorporation of aborts into a BDI-style architecture. We discuss some conditions under which aborting a task or plan is appropriate, and how to determine the consequences of such a decision. We augment each plan with an optional abort-method, analogous to the failure method found in some agent programming languages. We provide an operational semantics for the execution cycle in the presence of aborts in the abstract agent language CAN, which enables us to specify a BDI-based execution model without limiting our attention to a particular agent system (such as JACK, Jadex, Jason, or SPARK). A key technical challenge we address is the presence of parallel execution threads and of sub-tasks, which require the agent to ensure that the abort methods for each plan are carried out in an appropriate sequence.
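The sequencing concern in the abstract can be sketched as follows. This is an assumption-laden illustration, not the paper's CAN semantics: when a task is aborted, the abort method of each active plan is run innermost-first, so sub-plans clean up before their parents.

```python
# Illustrative sketch (hypothetical names): abort methods run innermost-first.
class Plan:
    def __init__(self, name, abort_method=None):
        self.name = name
        self.abort_method = abort_method  # optional cleanup callable

def abort_task(plan_stack, log):
    """Run abort methods from the most recently adopted plan outward."""
    for plan in reversed(plan_stack):
        if plan.abort_method:
            plan.abort_method(log)

log = []
stack = [
    Plan("book_travel", abort_method=lambda l: l.append("cancel bookings")),
    Plan("reserve_hotel", abort_method=lambda l: l.append("release hold")),
]
abort_task(stack, log)
print(log)  # ['release hold', 'cancel bookings']
```

With parallel execution threads, as the paper notes, the ordering constraint must additionally hold across concurrently active sub-tasks, which is where the real technical difficulty lies.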
International Journal on Artificial Intelligence Tools | 2012
Neil Yorke-Smith; Shahin Saadati; Karen L. Myers; David N. Morley
Personal assistant agents capable of proactively offering assistance can be more helpful to their users through their ability to perform tasks that otherwise would require user involvement. This article characterizes the properties desired of proactive behavior by a personal assistant agent in the realm of task management and develops an operational framework to implement such capabilities. We present an extended agent architectural model that features a meta-level layer charged with identifying potentially helpful actions and determining when it is appropriate to perform them. The reasoning that answers these questions draws on a theory of proactivity that describes user desires and a model of helpfulness. Operationally, assistance patterns represent a compiled form of this knowledge, instantiating meta-reasoning over the agent's beliefs about its user's activities as well as over world state. The resulting generic framework for proactive goal generation and deliberation has been implemented as part of a personal assistant agent in the computer desktop domain.
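An assistance pattern can be sketched as a meta-level rule pairing a trigger over the agent's beliefs about the user with a helper goal, gated by a benefit check. The names and the trivial benefit function below are hypothetical stand-ins for the paper's theory of proactivity and helpfulness model.

```python
# Illustrative sketch of an assistance pattern (hypothetical API).
def assistance_pattern(trigger, helper_goal, benefit):
    """Return a meta-level rule: propose helper_goal when triggered and beneficial."""
    def consider(beliefs):
        if trigger(beliefs) and benefit(beliefs) > 0:
            return helper_goal
        return None
    return consider

remind = assistance_pattern(
    trigger=lambda b: b["deadline_minutes"] < 30 and not b["report_done"],
    helper_goal="offer_to_draft_summary",
    benefit=lambda b: 1,  # stand-in for a real helpfulness model
)
print(remind({"deadline_minutes": 20, "report_done": False}))  # offer_to_draft_summary
print(remind({"deadline_minutes": 90, "report_done": False}))  # None
```

The generated helper goal would then enter the agent's normal deliberation cycle alongside its other goals.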
Archive | 2003
Karen L. Myers; David N. Morley
Many potential applications for agent technology require humans and agents to work together to achieve complex tasks effectively. In contrast, most of the work in the agents community to date has focused on technologies for fully autonomous agent systems. This paper presents a framework for the directability of agents, in which a human supervisor can define policies to influence agent activities at execution time. The framework focuses on the concepts of adjustable autonomy for agents (i.e., varying the degree to which agents make decisions without human intervention) and strategy preference (i.e., recommending how agents should accomplish assigned tasks). These mechanisms enable a human to customize the operations of agents to suit individual preferences and situation dynamics, leading to improved system reliability and increased user confidence over fully automated agent systems. The directability framework has been implemented within a BDI environment, and applied to a multiagent intelligence-gathering domain.
Declarative Agent Languages and Technologies | 2010
John Thangarajah; James Harland; David N. Morley; Neil Yorke-Smith
Deliberation over and management of goals is a key aspect of an agent's architecture. We consider the various types of goals studied in the literature, including performance, achievement, and maintenance goals. Focusing on BDI agents, we develop a detailed description of goal states (such as whether goals have been suspended or not) and a comprehensive suite of operations that may be applied to goals (including dropping, aborting, suspending and resuming them). We show how to specify an operational semantics corresponding to this detailed description in an abstract agent language (CAN). The three key contributions of our generic framework for goal states and transitions are (1) to encompass both goals of accomplishment and rich goals of monitoring, (2) to provide the first specification of abort and suspend for all the common goal types, and (3) to account for plan execution as well as the dynamics of sub-goaling.
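The goal states and operations the paper enumerates can be pictured as a small state machine. This is an illustrative simplification, not the CAN semantics itself; the state names and transition table below are an assumed reading of the operations listed in the abstract.

```python
# Illustrative sketch: goal states and the operations that move between them.
TRANSITIONS = {
    ("pending", "activate"): "active",
    ("active", "suspend"): "suspended",
    ("suspended", "resume"): "active",
    ("active", "abort"): "aborted",
    ("suspended", "abort"): "aborted",
    ("active", "drop"): "dropped",
    ("suspended", "drop"): "dropped",
}

class Goal:
    def __init__(self, name):
        self.name = name
        self.state = "pending"

    def apply(self, op):
        nxt = TRANSITIONS.get((self.state, op))
        if nxt is None:
            raise ValueError(f"cannot {op} a {self.state} goal")
        self.state = nxt

g = Goal("monitor_battery")
g.apply("activate")
g.apply("suspend")
g.apply("resume")
print(g.state)  # active
```

The distinction the paper draws between abort and drop is that abort triggers cleanup behaviour, while drop simply abandons the goal; the table above captures only the reachable states, not those side effects.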
Adaptive Agents and Multi-Agent Systems | 2006
David N. Morley; Karen L. Myers; Neil Yorke-Smith
The challenge we address is to reason about projected resource usage within a hierarchical task execution framework in order to improve agent effectiveness. Specifically, we seek to define and maintain maximally informative guaranteed bounds on projected resource requirements, in order to enable an agent to take full advantage of available resources while avoiding problems of resource conflict. Our approach is grounded in well-understood techniques for resource projection over possible paths through the plan space of an agent, but introduces three technical innovations. The first is the use of multi-fidelity models of projected resource requirements that provide increasingly more accurate projections as additional information becomes available. The second is execution-time refinement of initial bounds through pruning possible execution paths and variable domains based on the current world and execution state. The third is exploitation of additional semantic information about tasks that enables improved bounds on resource consumption. In contrast to earlier work in this area, we consider an expressive procedure language that includes complex control constructs and parameterized tasks. The approach has been implemented in the SPARK agent system and is being used to improve the performance of an operational intelligent assistant application.
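The idea of guaranteed bounds over a hierarchical plan space can be sketched with interval arithmetic. The task-tree encoding below is hypothetical, not SPARK's model: a sequence of subtasks sums their [lo, hi] bounds, a choice among alternatives takes the loosest envelope (min of lowers, max of uppers), and pruning an alternative at execution time tightens the projection, mirroring the paper's execution-time refinement.

```python
# Illustrative sketch of resource-bound projection over a task hierarchy.
def project(task):
    """Return (lo, hi) bounds on resource use for a task tree."""
    kind = task[0]
    if kind == "leaf":
        return task[1], task[2]
    bounds = [project(t) for t in task[1]]
    if kind == "seq":       # subtasks all execute: bounds add
        return sum(b[0] for b in bounds), sum(b[1] for b in bounds)
    if kind == "choice":    # one alternative executes: take the envelope
        return min(b[0] for b in bounds), max(b[1] for b in bounds)
    raise ValueError(kind)

plan = ("seq", [
    ("leaf", 2, 4),                                # gather data: 2-4 units
    ("choice", [("leaf", 1, 3), ("leaf", 5, 8)]),  # fast vs. thorough analysis
])
print(project(plan))  # (3, 12)

# Execution-time refinement: pruning the thorough alternative tightens the bound.
pruned = ("seq", [("leaf", 2, 4), ("choice", [("leaf", 1, 3)])])
print(project(pruned))  # (3, 7)
```

The paper's multi-fidelity models and semantic task information would further tighten such bounds beyond what this structural recursion alone can achieve.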
Autonomous Agents and Multi-Agent Systems | 2014
James Harland; David N. Morley; John Thangarajah; Neil Yorke-Smith
A fundamental feature of intelligent agents is their ability to deliberate over their goals. Operating in an environment that may change in unpredictable ways, an agent needs to regularly evaluate whether its current set of goals is the most appropriate set to pursue. The management of goals is thus a key aspect of an agent's architecture. Focusing on BDI agents, we consider the various types of goals studied in the literature, including both achievement and maintenance goals. We develop a detailed description of goal states (such as whether goals have been suspended or not), and a comprehensive suite of operations that may be applied to goals (including dropping, aborting, suspending and resuming them). We provide an operational semantics corresponding to this detailed description in an abstract agent language (CAN), and demonstrate it on a detailed real-life scenario. The three key contributions of our generic framework for goal states and transitions are (1) to encompass both goals of accomplishment and rich goals of monitoring, (2) to provide the first specification of abort and suspend for all the common goal types, and (3) to account for plan execution as well as the dynamics of subgoaling. Our semantics clarifies how an agent can manage its goals, based on the decisions that it chooses to make, and further provides a foundation for correctness verification of agent behaviour.
Adaptive Agents and Multi-Agent Systems | 2002
Karen L. Myers; David N. Morley
For agent technology to be accepted in real-world applications, humans must be able to customize and control agent operations. One approach for providing such controllability is to enable a human supervisor to define guidance for agents in the form of policies that establish boundaries on agent behavior. We consider the problem of conflicting guidance for agents, making contributions in two areas: (a) outlining a space of conflict types, and (b) defining resolution methods that provide robust operation in the face of conflicts. These resolution methods combine a guidance-based preference relation over plans with extensions to the set of options considered by an agent when conflicts arise.