K. Suzanne Barber
University of Texas at Austin
Publications
Featured research published by K. Suzanne Barber.
Adaptive Agents and Multi-Agent Systems | 2005
Karen K. Fullam; Tomas Klos; Guillaume Muller; Jordi Sabater; Andreas Schlosser; Zvi Topol; K. Suzanne Barber; Jeffrey S. Rosenschein; Laurent Vercouter; Marco Voss
A diverse collection of trust-modeling algorithms for multi-agent systems has been developed in recent years, resulting in significant breadth-wise growth without unified direction or benchmarks. Based on enthusiastic response from the agent trust community, the Agent Reputation and Trust (ART) Testbed initiative has been launched, charged with the task of establishing a testbed for agent trust- and reputation-related technologies. This testbed serves in two roles: (1) as a competition forum in which researchers can compare their technologies against objective metrics, and (2) as a suite of tools with flexible parameters, allowing researchers to perform customizable, easily-repeatable experiments. This paper first enumerates trust research objectives to be addressed in the testbed and desirable testbed characteristics, then presents a competition testbed specification that is justified according to these requirements. In the testbed's artwork appraisal domain, agents, who appraise paintings for clients, may gather opinions from other agents to produce accurate appraisals. The testbed's implementation architecture is discussed briefly, as well.
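The appraisal step described above can be sketched in a few lines. This is a hypothetical illustration, not the testbed's actual API: an appraiser blends its own noisy estimate with opinions from other agents, weighting each opinion by how much it trusts the provider.

```python
# Illustrative sketch of an ART-style appraisal: combine one's own
# estimate with others' opinions, each weighted by trust in its source.
# Function name, weights, and values are invented for the example.

def appraise(own_estimate, own_confidence, opinions):
    """opinions: list of (estimated_value, trust_weight) pairs."""
    total_weight = own_confidence
    weighted_sum = own_estimate * own_confidence
    for value, trust in opinions:
        weighted_sum += value * trust
        total_weight += trust
    return weighted_sum / total_weight

# A painting worth ~100: own guess of 90, plus a trusted and an
# untrusted second opinion. The untrusted outlier barely moves the result.
appraisal = appraise(90.0, 0.5, [(105.0, 0.8), (140.0, 0.1)])
print(round(appraisal, 2))  # 102.14
```

The design point is simply that trust acts as a mixing weight, so inaccurate opinion providers can be marginalized rather than excluded outright.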
Journal of Experimental and Theoretical Artificial Intelligence | 2000
K. Suzanne Barber; Anuj Goel; Cheryl E. Martin
Multi-agent systems require adaptability to perform effectively in complex and dynamic environments. This article shows that agents should be able to benefit from dynamically adapting their decision-making frameworks. A decision-making framework describes the set of multi-agent decision-making interactions exercised by members of an agent group in the course of pursuing a goal or set of goals. The decision-making interaction style an agent adopts with respect to other agents influences that agent's degree of autonomy. The article introduces the capability of Dynamic Adaptive Autonomy (DAA), which allows an agent to dynamically modify its autonomy along a defined spectrum (from command-driven to consensus to locally autonomous/master) for each goal it pursues. This article presents one motivation for DAA through experiments showing that the ‘best’ decision-making framework for a group of agents depends not only on the problem domain and pre-defined characteristics of the system, but also on run-time factors that can change during system operation. This result holds regardless of which performance metric is used to define ‘best’. Thus, it is possible for agents to benefit by dynamically adapting their decision-making frameworks to their situation during system operation.
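The autonomy spectrum above lends itself to a compact sketch. The adaptation policy below is a made-up stand-in for the run-time factors the article studies, intended only to show per-goal autonomy levels being shifted at run time.

```python
# Hypothetical sketch of the DAA idea: an agent holds a per-goal
# autonomy level on the article's spectrum and adjusts it at run time.
# The trigger conditions here are invented for illustration.

from enum import IntEnum

class Autonomy(IntEnum):
    COMMAND_DRIVEN = 0      # defers to a master's decisions
    CONSENSUS = 1           # shares decision-making with the group
    LOCALLY_AUTONOMOUS = 2  # decides alone (or acts as master)

class Agent:
    def __init__(self):
        self.autonomy = {}  # goal -> Autonomy level

    def adapt(self, goal, comm_reliable, has_expertise):
        # Assumed policy: fall back to local decisions when communication
        # fails; join consensus when capable; otherwise defer to a master.
        if not comm_reliable:
            self.autonomy[goal] = Autonomy.LOCALLY_AUTONOMOUS
        elif has_expertise:
            self.autonomy[goal] = Autonomy.CONSENSUS
        else:
            self.autonomy[goal] = Autonomy.COMMAND_DRIVEN
        return self.autonomy[goal]

a = Agent()
print(a.adapt("track-target", comm_reliable=False, has_expertise=True).name)
# LOCALLY_AUTONOMOUS
```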
Adaptive Agents and Multi-Agent Systems | 2007
Karen K. Fullam; K. Suzanne Barber
Trust is essential when an agent must rely on others to provide resources for accomplishing its goals. When deciding whether to trust, an agent may rely on, among other types of trust information, its past experience with the trustee or on reputations provided by third-party agents. However, each type of trust information has strengths and weaknesses: trust models based on past experience are more certain, yet require numerous transactions to build, while reputations provide a quick source of trust information, but may be inaccurate due to unreliable reputation providers. This research examines how the accuracy of experience- and reputation-based trust models is influenced by parameters such as: frequency of transactions with the trustee, trustworthiness of the trustee, and accuracy of provided reputations. More importantly, this research presents a technique for dynamically learning the best source of trust information given these parameters. The demonstrated learning technique achieves payoffs equal to those achieved by the best single trust information source (experience or reputation) in nearly every scenario examined.
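One generic way to render "learning the best source of trust information" is to track the running payoff of following each source, bandit-style. This is a minimal stand-in under that assumption, not the paper's actual learning technique.

```python
# Sketch: treat experience-based and reputation-based trust models as
# two options and keep a sample-average payoff for each, choosing the
# historically better one. Source names and payoffs are illustrative.

class SourceSelector:
    def __init__(self):
        self.avg = {"experience": 0.0, "reputation": 0.0}
        self.count = {"experience": 0, "reputation": 0}

    def choose(self):
        # Pick the source whose advice has paid off best so far.
        return max(self.avg, key=self.avg.get)

    def update(self, source, payoff):
        # Incremental sample-average update.
        self.count[source] += 1
        self.avg[source] += (payoff - self.avg[source]) / self.count[source]

s = SourceSelector()
s.update("experience", 1.0)   # transactions with the trustee went well
s.update("reputation", 0.2)   # third-party reputations misled us
print(s.choose())  # experience
```

This captures the trade-off in the abstract: reputations are available immediately, but if providers are unreliable, accumulated direct experience eventually earns the higher average payoff and wins the selection.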
Adaptive Agents and Multi-Agent Systems | 2006
Karen K. Fullam; K. Suzanne Barber
An agent's trust decision strategy consists of the agent's policies for making trust-related decisions, such as whom to trust, how trustworthy to be, which reputations to believe, and when to tell truthful reputations. In reputation exchange networks, learning trust decision strategies is complex compared to non-reputation-communicating systems. When potential partners may exchange reputation information about an agent, the agent's interactions with one partner are no longer independent from interactions with another; partners may tell each other about their experiences with the agent, influencing future behavior. This research enumerates the types of decisions an agent faces in reputation exchange networks, explains the interdependencies between these decisions, and correlates rewards to each decision. Experimental results using the Agent Reputation and Trust (ART) Testbed demonstrate the success of strategy-learning agents over agents employing naive strategies. The variation in performance of reputation-based learning vs. experience-based learning over different opponents illustrates the need to dynamically determine when to utilize reputations vs. experience in making trust decisions.
Trust and Trustworthy Computing | 2002
K. Suzanne Barber; Karen K. Fullam; Joonoo Kim
Discussions at the 5th Workshop on Deception, Fraud and Trust in Agent Societies, held at the 1st International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2002), centered around many important research issues. This paper attempts to challenge researchers in the community toward future work concerning three issues inspired by the workshop's roundtable discussion: (1) distinguishing elements of an agent's behavior that influence its trustworthiness, (2) building reputation-based trust models without relying on interaction, and (3) benchmarking trust-modeling algorithms. Arguments justifying the validity of each problem are presented, and benefits from their solutions are enumerated.
Autonomous Agents and Multi-Agent Systems | 2006
Cheryl E. Martin; K. Suzanne Barber
This article presents a capability called Adaptive Decision-Making Frameworks (ADMF) and shows that it can result in significantly improved system performance across run-time situation changes in a multi-agent system. Specifically, ADMF can result in improved and more robust performance compared to the use of a single static decision-making framework (DMF). The ADMF capability allows agents to dynamically adapt the DMF in which they participate to fit their run-time situation as it changes. A DMF identifies a set of agents and specifies the distribution of decision-making control and the authority to assign subtasks among these agents as they determine how a goal or set of goals should be achieved. The ADMF capability is a form of organizational adaptation and differs from previous approaches to organizational adaptation and dynamic coordination in that it is the first to allow dynamic and explicit manipulation of these DMF characteristics at run-time as variables controlling agent behavior. The approach proposed for selecting DMFs at run-time parameterizes all domain-specific knowledge as characteristics of the agents’ situation, so the approach is application-independent. The presented evaluation empirically shows that, for at least one multi-agent system, there is no one best DMF for multiple agents across run-time situational changes. Next, it motivates the further exploration of ADMF by showing that adapting DMFs to run-time variations in situation can result in improved overall system performance compared to static or random DMFs.
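The ADMF idea of parameterizing all domain knowledge as situation features can be hinted at with a rule table mapping observed situations to frameworks. Every rule and feature name below is invented; the point is that the selection mechanism itself stays application-independent.

```python
# Hypothetical run-time DMF selection: the situation is a feature
# dictionary, and a (made-up) rule table maps it to a decision-making
# framework. Swapping the rules re-targets the mechanism to a new domain.

DMF_RULES = [
    # (predicate over the situation, framework to adopt)
    (lambda s: s["time_pressure"] > 0.8, "master/command-driven"),
    (lambda s: s["agent_failures"] > 0, "consensus"),
]
DEFAULT_DMF = "locally autonomous"

def select_dmf(situation):
    for predicate, dmf in DMF_RULES:
        if predicate(situation):
            return dmf
    return DEFAULT_DMF

# Under heavy time pressure, centralized control is selected.
print(select_dmf({"time_pressure": 0.9, "agent_failures": 0}))
# master/command-driven
```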
Lecture Notes in Computer Science | 1999
K. Suzanne Barber; T. H. Liu; David C. Han
Recent developments in the field of Multi-Agent Systems (MAS) have attracted researchers from various fields, with new techniques rapidly emerging. Given the field's multi-disciplinary nature, it is not surprising that proposed theories and research results are often incoherent and hard to integrate. In this paper we propose a functional decomposition of problem-solving activities to serve as a framework that assists MAS designers in selecting and integrating different techniques and existing research results according to their system requirements. The basic phases include agent organization construction, plan generation, task allocation, plan integration, and plan execution. An example usage of the proposed model in the domain of naval radar frequency management is also presented.
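The five phases named above form a natural pipeline. In this sketch each phase is a trivial placeholder function (the radar task names are invented), so what the example shows is the control flow of the decomposition, not any particular phase algorithm.

```python
# Illustrative pipeline of the paper's five problem-solving phases.
# Each phase transforms a shared state dictionary; the bodies are
# deliberately trivial placeholders.

def organize(state):        return {**state, "group": list(state["agents"])}
def generate_plans(state):  return {**state, "plans": ["scan", "track"]}
def allocate_tasks(state):  return {**state, "tasks": dict(zip(state["agents"], state["plans"]))}
def integrate_plans(state): return {**state, "integrated": True}
def execute(state):         return {**state, "executed": True}

PHASES = [organize, generate_plans, allocate_tasks, integrate_plans, execute]

def solve(agents):
    state = {"agents": agents}
    for phase in PHASES:
        state = phase(state)
    return state

result = solve(["radar-1", "radar-2"])
print(result["tasks"])  # {'radar-1': 'scan', 'radar-2': 'track'}
```

Keeping the phases as interchangeable functions mirrors the framework's intent: a designer can plug a different technique into any one phase without disturbing the others.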
Programming Multi-Agent Systems | 2004
Dung N. Lam; K. Suzanne Barber
As agent systems become more sophisticated, there is a growing need for agent-oriented debugging, maintenance, and testing methods and tools. This paper presents the Tracing Method and accompanying Tracer tool to help debug agents by explaining actual agent behavior in the implemented system. The Tracing Method captures dynamic run-time data by logging actual agent behavior, creates modeled interpretations in terms of agent concepts (e.g. beliefs, goals, and intentions), and analyzes those models to gain insight into both the design and the implemented agent behavior. An implementation of the Tracing Method is the Tracer tool, which is demonstrated in a target-monitoring domain. The Tracer tool can help (1) determine if agent design specifications are correctly implemented and guide debugging efforts and (2) discover and examine motivations for agent behaviors such as beliefs, communications, and intentions.
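The Tracing Method's log-then-interpret flow can be mimicked in miniature. The event format and the mapping to agent concepts below are invented for illustration; the actual Tracer tool works on much richer run-time logs.

```python
# Toy version of the Tracing Method: log raw run-time events, then
# interpret them as agent-level concepts (beliefs, goals, communications).

RAW_LOG = [
    ("agent1", "recv", "target at (3, 4)"),
    ("agent1", "adopt_goal", "monitor target"),
    ("agent1", "send", "request backup"),
]

# Assumed mapping from low-level events to agent concepts.
CONCEPT_MAP = {"recv": "belief", "adopt_goal": "goal", "send": "communication"}

def interpret(log):
    model = {}
    for agent, event, detail in log:
        concept = CONCEPT_MAP.get(event, "unknown")
        model.setdefault(agent, {}).setdefault(concept, []).append(detail)
    return model

model = interpret(RAW_LOG)
print(model["agent1"]["belief"])  # ['target at (3, 4)']
```

Inspecting the resulting model answers debugging questions like "what belief motivated this communication?" without stepping through the implementation code.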
Autonomous Agents and Multi-Agent Systems | 2000
K. Suzanne Barber; Joonoo Kim
In this paper, we propose a multi-agent belief revision algorithm that utilizes knowledge about the reliability or trustworthiness (reputation) of information sources. Incorporating reliability information into belief revision mechanisms is essential for agents in real-world multi-agent systems. This research assumes the global truth is not available to individual agents and that agents maintain only a local subjective perspective, which often differs from the perspectives of others. This assumption holds for many domains where the global truth is unavailable (or infeasible to acquire and maintain) and the cost of collecting and maintaining a centralized global perspective is prohibitive. As an agent builds its local perspective, the variance in the quality of incoming information depends on the originating information sources. Modeling the quality of incoming information is useful regardless of the level and type of security in a given system. This paper defines trust as an agent's confidence in the ability and intention of an information source to deliver correct information, and reputation as the amount of trust an information source has created for itself through interactions with other agents. This economic (or monetary) perspective on reputation, viewing reputation as an asset, serves as a social law that mandates staying trustworthy to other agents. Algorithms (direct and indirect) for maintaining a model of the reputations of other information sources are also introduced.
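A minimal sketch of reliability-weighted belief revision, under invented numbers and update rules: conflicting reports are resolved in favor of the proposition backed by the most accumulated reputation, and reputation then moves up or down like an asset once the outcome is known.

```python
# Illustrative belief revision weighted by source reputation.
# Sources, values, and the learning rate are all made up.

reputation = {"src_a": 0.9, "src_b": 0.3}

def revise(reports):
    """reports: list of (source, proposition). Returns the winning proposition."""
    support = {}
    for source, prop in reports:
        support[prop] = support.get(prop, 0.0) + reputation[source]
    return max(support, key=support.get)

def settle(source, was_correct, rate=0.1):
    # Reputation as an asset: it grows with correct reports, shrinks otherwise.
    target = 1.0 if was_correct else 0.0
    reputation[source] += rate * (target - reputation[source])

belief = revise([("src_a", "door open"), ("src_b", "door closed")])
print(belief)  # door open -- src_a's reputation outweighs src_b's
```

Because lying erodes the asset and truthfulness grows it, the `settle` step is what gives reputation its social-law character in this sketch.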
IEEE/WIC/ACM International Conference on Intelligent Agent Technology | 2007
Jaesuk Ahn; David DeAngelis; K. Suzanne Barber
When agents form a team to solve a given problem, a critical step in improving performance is selecting beneficial teammates by identifying the helpfulness of other agents. To maximize its performance, an agent must consider the trustworthiness of potential teammates relative to multiple behavioral constraints. This multidimensional trustworthiness assessment is shown to be of significant benefit in solving the team formation problem. This research introduces the concept of attitude to assert how much an agent should trust other agents by identifying the most influential facet among multiple trustworthiness assessments. In this sense, attitudes define how an agent selects beneficial teammates given different situations. In addition, this research shows how those attitudes are learned and aid in teammate selection.
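The "attitude" notion can be rendered as a weight vector over trustworthiness facets. Facet names, weights, and candidates below are invented; the example only shows how an attitude makes one facet dominate teammate selection.

```python
# Hypothetical multidimensional trust with attitudes: each candidate is
# scored along several behavioral dimensions, and an attitude weights
# those dimensions to pick the most beneficial teammate.

def score(candidate, attitude):
    return sum(attitude[facet] * value for facet, value in candidate["trust"].items())

def select_teammate(candidates, attitude):
    return max(candidates, key=lambda c: score(c, attitude))

candidates = [
    {"name": "fast_but_sloppy",  "trust": {"timeliness": 0.9, "quality": 0.4}},
    {"name": "slow_but_careful", "trust": {"timeliness": 0.4, "quality": 0.9}},
]

# An attitude for deadline-driven situations: timeliness dominates.
deadline_attitude = {"timeliness": 0.8, "quality": 0.2}
print(select_teammate(candidates, deadline_attitude)["name"])  # fast_but_sloppy
```

Swapping in a quality-dominant attitude flips the choice, which is the sense in which learned attitudes let an agent select different teammates in different situations.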