Karen K. Fullam
University of Texas at Austin
Publications
Featured research published by Karen K. Fullam.
adaptive agents and multi-agents systems | 2005
Karen K. Fullam; Tomas Klos; Guillaume Muller; Jordi Sabater; Andreas Schlosser; Zvi Topol; K. Suzanne Barber; Jeffrey S. Rosenschein; Laurent Vercouter; Marco Voss
A diverse collection of trust-modeling algorithms for multi-agent systems has been developed in recent years, resulting in significant breadth-wise growth without unified direction or benchmarks. Based on enthusiastic response from the agent trust community, the Agent Reputation and Trust (ART) Testbed initiative has been launched, charged with the task of establishing a testbed for agent trust- and reputation-related technologies. This testbed serves in two roles: (1) as a competition forum in which researchers can compare their technologies against objective metrics, and (2) as a suite of tools with flexible parameters, allowing researchers to perform customizable, easily-repeatable experiments. This paper first enumerates trust research objectives to be addressed in the testbed and desirable testbed characteristics, then presents a competition testbed specification that is justified according to these requirements. In the testbed's artwork appraisal domain, agents, who valuate paintings for clients, may gather opinions from other agents to produce accurate appraisals. The testbed's implementation architecture is discussed briefly as well.
adaptive agents and multi-agents systems | 2007
Karen K. Fullam; K. Suzanne Barber
Trust is essential when an agent must rely on others to provide resources for accomplishing its goals. When deciding whether to trust, an agent may rely on, among other types of trust information, its past experience with the trustee or on reputations provided by third-party agents. However, each type of trust information has strengths and weaknesses: trust models based on past experience are more certain, yet require numerous transactions to build, while reputations provide a quick source of trust information, but may be inaccurate due to unreliable reputation providers. This research examines how the accuracy of experience- and reputation-based trust models is influenced by parameters such as: frequency of transactions with the trustee, trustworthiness of the trustee, and accuracy of provided reputations. More importantly, this research presents a technique for dynamically learning the best source of trust information given these parameters. The demonstrated learning technique achieves payoffs equal to those achieved by the best single trust information source (experience or reputation) in nearly every scenario examined.
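The tradeoff described in this abstract (experience models that are reliable but slow to build vs. reputations that are quick but possibly inaccurate) can be sketched as a simple source-selection learner. The class name, the epsilon-greedy rule, and the payoff bookkeeping below are illustrative assumptions, not the paper's actual learning technique:

```python
import random

class TrustSourceSelector:
    """Tracks the average payoff of trust decisions made from each
    information source and prefers the better-performing one.
    A hypothetical sketch, not the technique from the paper."""

    def __init__(self, epsilon=0.1):
        self.totals = {"experience": 0.0, "reputation": 0.0}
        self.counts = {"experience": 0, "reputation": 0}
        self.epsilon = epsilon  # exploration rate

    def choose(self):
        # Occasionally explore, and always explore until both sources
        # have been tried; otherwise exploit the higher mean payoff.
        if random.random() < self.epsilon or 0 in self.counts.values():
            return random.choice(list(self.totals))
        return max(self.totals, key=lambda s: self.totals[s] / self.counts[s])

    def update(self, source, payoff):
        # Record the payoff observed after trusting based on `source`.
        self.totals[source] += payoff
        self.counts[source] += 1
```

As the abstract notes, frequent transactions and accurate reputation providers shift which source earns the higher payoff; a learner of this shape adapts to whichever currently performs best.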
adaptive agents and multi-agents systems | 2006
Karen K. Fullam; K. Suzanne Barber
An agent's trust decision strategy consists of the agent's policies for making trust-related decisions, such as whom to trust, how trustworthy to be, what reputations to believe, and when to tell truthful reputations. In reputation exchange networks, learning trust decision strategies is complex compared to non-reputation-communicating systems. When potential partners may exchange reputation information about an agent, the agent's interactions with one partner are no longer independent from interactions with another; partners may tell each other about their experiences with the agent, influencing future behavior. This research enumerates the types of decisions an agent faces in reputation exchange networks, explains the interdependencies between these decisions, and correlates rewards to each decision. Experimental results using the Agent Reputation and Trust (ART) Testbed demonstrate the success of strategy-learning agents over agents employing naive strategies. The variation in performance of reputation-based learning vs. experience-based learning over different opponents illustrates the need to dynamically determine when to utilize reputations vs. experience in making trust decisions.
trust and trustworthy computing | 2002
K. Suzanne Barber; Karen K. Fullam; Joonoo Kim
Discussions at the 5th Workshop on Deception, Fraud and Trust in Agent Societies, held at the 1st International Joint Conference on Autonomous Agents and Multi-Agent Systems (AAMAS 2002), centered around many important research issues. This paper attempts to challenge researchers in the community toward future work concerning three issues inspired by the workshop's roundtable discussion: (1) distinguishing elements of an agent's behavior that influence its trustworthiness, (2) building reputation-based trust models without relying on interaction, and (3) benchmarking trust modeling algorithms. Arguments justifying the validity of each problem are presented, and benefits from their solutions are enumerated.
adaptive agents and multi-agents systems | 2004
Karen K. Fullam; K. S. Barber
This research presents a belief revision algorithm, based on intuitive policies for information valuation. Policies for valuation of information, based on characteristics of the information and the sources providing it, are delineated as guidelines for algorithm construction. In addition, each policy is traceably incorporated into the belief revision process to provide justification for calculated belief certainties. Finally, since modeling of information source trustworthiness can be complicated, significant effort is devoted to constructing these reputation models. Experimental results show that application of information valuation policies to belief revision yields significant improvement in belief accuracy and precision over no-policy or single-policy belief revision.
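A minimal sketch of policy-weighted belief revision, assuming only two valuation policies (source trustworthiness and report certainty) combined as a weighted average; the function name and weighting scheme are hypothetical illustrations, not the paper's algorithm:

```python
def revise_belief(reports):
    """Fuse source reports into a single belief estimate, weighting
    each report by simple valuation policies. Hypothetical sketch.

    reports: list of (value, source_trust, certainty) tuples,
    with source_trust and certainty in [0, 1].
    Returns (believed_value, belief_certainty).
    """
    # Policy combination: a report counts in proportion to both the
    # source's trustworthiness and the report's stated certainty.
    weights = [trust * certainty for _, trust, certainty in reports]
    total = sum(weights)
    if total == 0:
        return None, 0.0  # nothing credible to believe
    value = sum(w * v for w, (v, _, _) in zip(weights, reports)) / total
    # Use the average weight mass as a crude, traceable certainty score.
    confidence = total / len(reports)
    return value, confidence
```

Because each report's contribution is an explicit product of policy terms, the resulting belief certainty can be traced back to the policies that produced it, mirroring the traceability goal stated in the abstract.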
international conference on trust management | 2006
Karen K. Fullam; Tomas Klos; Guillaume Muller; Jordi Sabater-Mir; K. Suzanne Barber; Laurent Vercouter
The Agent Reputation and Trust (ART) Testbed initiative has been launched with the goal of establishing a testbed for agent reputation- and trust-related technologies. The ART Testbed serves in two roles: (1) as a competition forum in which researchers can compare their technologies against objective metrics, and (2) as a suite of tools with flexible parameters, allowing researchers to perform customizable, easily-repeatable experiments. In the Testbed's artwork appraisal domain, agents, who valuate paintings for clients, may purchase opinions and reputation information from other agents to produce accurate appraisals. The ART Testbed features useful data collection tools for storing, downloading, and replaying game data for experimental analysis.
Lecture Notes in Computer Science | 2005
Karen K. Fullam; K. Suzanne Barber
In making a decision, an agent requires information from other agents about the current state of its environment. Unfortunately, the agent can never know the absolute truth about its environment because the information it receives is uncertain. When the environment changes more rapidly than sources provide information, an agent faces the problem of forming its beliefs from information that may be out of date. This research reviews several logical policies for evaluating the trustworthiness of information; most importantly, this work introduces a new policy for temporal information trust assessment, basing an agent's trust in information on its recentness. The belief maintenance algorithm described here values information against these policies and evaluates tradeoffs in cases of policy conflicts. The definition of a belief interval provides the agent with flexibility to acknowledge that a belief subject may be changing between belief revision instances. Since the belief interval framework describes the belief probability distribution over time, it allows the agent to decrease its certainty in its beliefs as they age. Experimental results show the clear advantage of an algorithm that performs certainty depreciation over belief intervals and evaluates source information based on information age. This algorithm derives more accurate beliefs at belief revision and maintains more accurate belief certainty assessments as belief intervals age than an algorithm that is not temporally sensitive.
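Certainty depreciation over a belief interval can be sketched as a decay of belief certainty with information age. Exponential decay with a half-life parameter is an assumption made here for illustration; the paper does not prescribe this particular functional form:

```python
import math

def depreciated_certainty(initial_certainty, age, half_life):
    """Certainty in a belief after it has aged within its belief interval.

    initial_certainty: certainty assigned at the last belief revision.
    age: time elapsed since that revision (same units as half_life).
    half_life: age at which certainty has halved (hypothetical parameter).
    """
    return initial_certainty * math.pow(0.5, age / half_life)
```

As the belief interval ages without fresh reports, the agent's confidence falls smoothly toward zero, matching the intuition that stale information deserves less trust.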
adaptive agents and multi-agents systems | 2007
K. S. Barber; Jaesuk Ahn; S. Budalakoti; David DeAngelis; Karen K. Fullam; Chris L. D. Jones; Xin Sui
This demonstration highlights different aspects of the bottom-up assembly of multi-agent teams, illustrating trust evaluation of potential partners via experience- and reputation-based trust models, multi-dimensional trust evaluation of potential partners, task selection through personality-based modeling, and team selection strategies that maximize a team's ability to function in dynamic environments. The demonstration format will be a live software demo with supporting slide shows.
ieee wic acm international conference on intelligent agent technology | 2007
Chris L. D. Jones; Karen K. Fullam; Suzanne Barber
This paper presents a trust-based mechanism for team formation wherein agents selectively pursue partners of varying trustworthiness. The mechanism is tested by using a market-based simulation to determine how pursuing partners with different trust ratings affects an agent's utility. Results from these experiments show that for jobs with few subtasks, an agent profits by selecting less trustworthy partners, rather than more trustworthy partners.

Accounting for social, cultural, and political factors must form the basis for understanding the decision-making, actions, and reactions of individuals, thus driving their behaviors and intentions. Clearly, the individual is not wholly defined by just personal social, cultural, and political beliefs, but also functions within a group of individuals. Within these groups (or organizations), individuals assimilate a potentially wide variety of different social factors, which may or may not differ from their own. Also, the group itself can vary in degrees of complexity, styles of interaction, and so forth, resulting in highly dynamic and emergent modes of behavior. Even more difficult, this also includes taking into account the values, attitudes, and beliefs of the local population and environment that the individual or group is situated within. Without all these factors, we cannot expect to effectively understand, analyze, or predict the behaviors and intentions of others, which grows ever more critical as our society continues to globalize, especially in today's conflicts and catastrophes. Thus, the need for a comprehensive modeling framework is evident as our only real hope of addressing such complexity. However, to date, only small isolated groups of pertinent behavioral factors have been studied, and there is little or no work toward developing a general, unified, and comprehensive approach that is also computational. The major challenges we face can be summed up in the following questions:
1. For prediction and explanation of intent and behavior, how does one computationally model individuals or organizations and their emergent interactions with others, in various situations?
2. How does one organize and build the necessary social, cultural, political, and behavioral knowledge base?
3. How does one avoid brittleness and overspecialization? How does one construct these models efficiently and effectively, and dynamically evolve them over time based on changing cultural and social factors?
4. How does one validate these models?
In this talk, we will explore these challenges, focus on addressing pragmatic and computational issues in such modeling, and examine some existing real-world efforts, current solutions, and open questions.
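The team-formation tradeoff studied in the paper above (cheaper, riskier partners vs. costlier, reliable ones) can be illustrated with a toy expected-profit calculation. The all-or-nothing payoff and the trust-proportional pricing model are assumptions made here for illustration, not the paper's market simulation:

```python
def expected_job_profit(n_subtasks, partner_trust, reward, max_cost):
    """Expected profit of a job split across n_subtasks partners.

    Each partner completes its subtask with probability partner_trust,
    and the reward is earned only if every subtask succeeds.
    Partners are assumed (hypothetically) to charge superlinearly in
    their trustworthiness: cost per partner = max_cost * partner_trust**2.
    """
    success_probability = partner_trust ** n_subtasks
    total_cost = n_subtasks * max_cost * partner_trust ** 2
    return success_probability * reward - total_cost
```

Under these particular assumptions, a cheap low-trust partner can yield a higher expected profit than an expensive high-trust one on a single-subtask job, echoing the paper's finding that the number of subtasks shapes which partners are worth pursuing.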
international conference on knowledge based and intelligent information and engineering systems | 2005
Jisun Park; Karen K. Fullam; David C. Han; K. Suzanne Barber
This paper illustrates three agent technologies deployed in the Unmanned Aerial Vehicle (UAV) target tracking domain. These capabilities enable: (1) coordination of the tracking of multiple targets among a set of UAVs, (2) identification of the best subset of assigned UAVs from which to collect location information, and (3) evaluation of location information accuracy. These capabilities aid the efficient and effective collection and verification of target location information.