William Taysom
Florida Institute for Human and Machine Cognition
Publications
Featured research published by William Taysom.
AUTONOMY'03 Proceedings of the 2003 International Conference on Agents and Computational Autonomy | 2003
Jeffrey M. Bradshaw; Paul J. Feltovich; Hyuckchul Jung; Shriniwas Kulkarni; William Taysom; Andrzej Uszok
Several research groups have grappled with the problem of characterizing and developing practical approaches for implementing adjustable autonomy and mixed-initiative interaction in deployed systems. However, each group takes a slightly different approach and uses variations of the same terminology in somewhat different ways. In this chapter, we describe some common dimensions in an effort to better understand these important but ill-characterized topics. We are developing a formalism and implementation of these concepts as part of the KAoS framework in the context of our research on policy-governed autonomous systems.
Systems, Man and Cybernetics | 2004
Jeff Bradshaw; Paul J. Feltovich; Hyuckchul Jung; Shri Kulkarni; James F. Allen; Larry Bunch; Nathanael Chambers; Lucian Galescu; Renia Jeffers; Matthew P. Johnson; Maarten Sierhuis; William Taysom; Andrzej Uszok; R. Van Hoof
In this paper, we outline an approach to policy-based coordination in joint human-agent activity. The approach is grounded in a theory of joint activity originally developed in the context of discourse, and now applied to the broader realm of human-agent interaction. We have been gradually implementing selected aspects of policy-based coordination within the KAoS services framework and have been developing a body of examples that guide additional testing of these ideas through detailed studies of work practice.
Lecture Notes in Computer Science | 2005
Jeffrey M. Bradshaw; Hyuckchul Jung; Shriniwas Kulkarni; Matthew Johnson; Paul J. Feltovich; James F. Allen; Larry Bunch; Nathanael Chambers; Lucian Galescu; Renia Jeffers; Niranjan Suri; William Taysom; Andrzej Uszok
Trust is arguably the most crucial aspect of agent acceptability. At its simplest level, it can be characterized in terms of judgments that people make concerning three factors: an agent's competence, its benevolence, and the degree to which it can be rapidly and reliably brought into compliance when things go wrong. Adjustable autonomy consists of the ability to dynamically impose and modify constraints that affect the range of actions that the human-agent team can successfully perform, consistently allowing the highest degrees of useful autonomy while maintaining an acceptable level of trust. Many aspects of adjustable autonomy can be addressed through policy. Policies are a means to dynamically regulate the behavior of system components without changing code or requiring the cooperation of the components being governed. By changing policies, a system can be adjusted to accommodate variations in externally imposed constraints and environmental conditions. In this paper we describe some important dimensions relating to autonomy and give examples of how these dimensions might be adjusted in order to enhance the performance of human-agent teams. We introduce Kaa (KAoS adjustable autonomy) and provide a brief comparison with two other implementations of adjustable autonomy concepts.
Adaptive Agents and Multi-Agent Systems | 2005
Jeffrey M. Bradshaw; Hyuckchul Jung; Shriniwas Kulkarni; Matthew Johnson; Paul J. Feltovich; James F. Allen; Larry Bunch; Nathanael Chambers; Lucian Galescu; Renia Jeffers; Niranjan Suri; William Taysom; Andrzej Uszok
Though adjustable autonomy is hardly a new topic in agent systems, there has been a general lack of consensus on terminology and basic concepts. In this paper, we describe the multi-dimensional nature of adjustable autonomy and give examples of how various dimensions might be adjusted in order to enhance the performance of human-agent teams. We then introduce Kaa (KAoS adjustable autonomy), which extends our previous work on KAoS policy and domain services with a policy-based capability grounded in this richer, multi-dimensional notion of adjustable autonomy. The current implementation of Kaa uses a combination of ontologies represented in OWL and influence-diagram-based decision-theoretic algorithms to determine what changes, if any, should be made to agent autonomy in a given context. We have demonstrated Kaa as part of ONR-sponsored research to improve naval de-mining operations through more effective human-robot interaction. A brief comparison among alternative approaches to adjustable autonomy is provided.
Journal of Logic and Computation | 2008
Hyuckchul Jung; James F. Allen; Lucian Galescu; Nathanael Chambers; Mary D. Swift; William Taysom
Learning tasks from a single demonstration presents a significant challenge because the observed sequence is specific to the current situation and is inherently an incomplete representation of the procedure. Observation-based machine-learning techniques are not effective without multiple examples. However, when a demonstration is accompanied by natural language explanation, the language provides a rich source of information about the relationships between the steps in the procedure and the decision-making processes that led to them. In this article, we present a one-shot task learning system built on TRIPS, a dialogue-based collaborative problem solving system, and show how natural language understanding can be used for effective one-shot task learning.
North American Chapter of the Association for Computational Linguistics | 2007
James F. Allen; Nathanael Chambers; George Ferguson; Lucian Galescu; Hyuckchul Jung; Mary D. Swift; William Taysom
We describe a system that can learn new procedure models effectively from a single demonstration by the user. Previous work on learning tasks by observing a demonstration (e.g., Lent & Laird, 2001) has required observing many examples of the same task. One-shot learning of tasks presents a significant challenge because the observed sequence is inherently incomplete -- the user performs only the steps required for the current situation. Furthermore, the user's decision-making processes, which reflect the control structures in the procedure, are not revealed.
National Conference on Artificial Intelligence | 2007
James F. Allen; Nathanael Chambers; George Ferguson; Lucian Galescu; Hyuckchul Jung; Mary D. Swift; William Taysom
Archive | 2007
James F. Allen; Nathanael Chambers; Lucian Galescu; Hyuckchul Jung; William Taysom
Lecture Notes in Computer Science | 2004
Jeffrey M. Bradshaw; Paul J. Feltovich; Hyuckchul Jung; Shriniwas Kulkarni; William Taysom; Andrzej Uszok