Henry Hexmoor
Southern Illinois University Carbondale
Publication
Featured research published by Henry Hexmoor.
Archive | 2003
Sviatoslav Braynov; Henry Hexmoor
In this paper we introduce a quantitative measure of autonomy in multiagent interactions. We quantify and analyse different types of agent autonomy: (a) decision autonomy versus action autonomy, (b) autonomy with respect to an agent's user, (c) autonomy with respect to other agents and groups of agents, and (d) a measure of group autonomy that accounts for the degree to which one group depends on another. We analyse the problem of composing a multiagent group with maximum overall autonomy and prove that this problem is NP-complete. Therefore, finding the optimal group or agent with whom to share a task (or to whom to delegate a task) is in general computationally hard.
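The abstract does not spell out the dependence-based measure, so the sketch below is only a hedged illustration: it assumes hypothetical pairwise dependence weights and a simple "internal versus external dependence" measure of group autonomy, and finds the best group by exhaustive subset search, whose exponential cost is consistent with the NP-completeness result.

```python
from itertools import combinations

# Hypothetical pairwise dependence weights: dep[a][b] is how much agent a
# depends on agent b. The names and the measure are illustrative, not the
# paper's actual formalism.
dep = {
    "A": {"B": 0.2, "C": 0.7},
    "B": {"A": 0.1, "C": 0.3},
    "C": {"A": 0.4, "B": 0.2},
}

def group_autonomy(group, dep):
    """One plausible measure: 1 minus the fraction of the group's total
    dependence that points at agents outside the group."""
    total = external = 0.0
    for a in group:
        for b, w in dep[a].items():
            total += w
            if b not in group:
                external += w
    return 1.0 if total == 0 else 1.0 - external / total

def best_group(agents, dep, size):
    # Exhaustive search over all groups of the given size; exponential in
    # the number of agents, in line with the NP-completeness result.
    return max(combinations(agents, size), key=lambda g: group_autonomy(g, dep))

print(best_group(list(dep), dep, 2))  # ('A', 'C'): their mutual dependence is mostly internal
```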
pacific rim international conference on artificial intelligence | 2000
Henry Hexmoor
We introduce situated autonomy and present it as part of the process of action selection. We then discuss the cognitive ingredients of situated autonomy and derive a degree of situated autonomy.
Journal of Experimental and Theoretical Artificial Intelligence | 2006
Henry Hexmoor; Satish Gunnu Venkata; Donald Hayes
Social norms are cultural phenomena that naturally emerge in human societies and help to prescribe and proscribe normative patterns of behaviour. In recent times, the discipline of multi-agent systems has been used to model social norms in an artificial society of agents. In this paper we review norms in multi-agent systems and then explore a series of norms in a simulated urban traffic setting. Using game-theoretic concepts we define and offer an account of norm stability. Particularly in small groups, a relatively small number of individuals with a cooperative attitude is needed for the norm of cooperation to evolve and be stable. In contrast, in larger populations, a larger proportion of cooperating individuals is required to achieve stability.
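The abstract does not detail its game-theoretic account of norm stability. As an illustrative sketch only, assuming a stag-hunt coordination game with made-up payoffs (not the paper's actual model), one can compute the critical fraction of cooperators above which the cooperation norm sustains itself:

```python
def payoffs(x, cc=4.0, cd=0.0, dc=3.0, dd=2.0):
    """Expected payoffs to a cooperator and a defector when a fraction x
    of randomly matched partners cooperate. Stag-hunt payoffs; the game
    and the numbers are assumptions for illustration."""
    pc = cc * x + cd * (1 - x)   # cooperator's expected payoff
    pd = dc * x + dd * (1 - x)   # defector's expected payoff
    return pc, pd

def stability_threshold(step=0.001):
    """Smallest cooperator fraction at which cooperating pays at least as
    much as defecting, i.e. the point where the norm can sustain itself."""
    x = 0.0
    while x <= 1.0:
        pc, pd = payoffs(x)
        if pc >= pd:
            return round(x, 3)
        x += step
    return None

print(stability_threshold())  # with these payoffs, roughly 0.667
```

With these numbers the threshold is 2/3: below it defection pays more and the norm erodes, above it cooperation is self-sustaining, echoing the abstract's point that stability hinges on the proportion of cooperators.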
systems man and cybernetics | 2001
Henry Hexmoor
We discuss stages of autonomy determination for software agents that manage and manipulate knowledge in organizations that house other software agents and human knowledge workers. We suggest recognition of potential autonomies in the belief-desire-intention (BDI) paradigm and decision-theoretic reasoning about actual autonomy choices. We show how agents might revise their autonomies in light of one another's autonomy and might also experience new, derived autonomies. We discuss the conditions under which an entire group of agents might hold a collective autonomy attitude toward agents outside their group. We believe group attitudes are a novel concept and form a strong basis for developing theories of dynamic organizational structure. We briefly sketch an outline of a case study that motivates reasoning about autonomies.
Archive | 1995
Johan M. Lammens; Henry Hexmoor; Stuart C. Shapiro
In his "Elephants Don't Play Chess" paper, Brooks criticized the ungroundedness of traditional symbol systems and proposed physically grounded systems as an alternative. We want to make a contribution towards integrating the old with the new. We describe the GLAIR agent architecture, which specifies an integration of explicit representation and reasoning mechanisms, embodied semantics through grounding symbols in perception and action, and implicit representations in special-purpose mechanisms for sensory processing, perception, and motor control. We present some agent components that we place in our architecture to build agents that exhibit situated activity and learning, along with some applications. We believe that the Brooksian behavior generation approach goes a long way towards modeling elephant behavior, which we find most interesting, but that in order to generate more deliberative behavior we need something more.
collaboration technologies and systems | 2009
Purvag Patel; Henry Hexmoor
In modern computer games, 'bots', i.e. intelligent, realistic agents, play a prominent role in a game's success in the market. Typically, bots are modeled as finite-state machines and then programmed via simple conditional statements hard-coded into the bot's logic. Because such bots become quite predictable, an experienced game player may lose interest in the game. We present a model of bots based on BDI agents that exhibits more human-like, believable behavior and provides a more realistic feel to the game. These bots use input from actual game players to specify their Beliefs, Desires, and Intentions during game play.
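A minimal sketch of the perceive-deliberate-act loop such a BDI bot might run (the class and attribute names are illustrative, not the paper's implementation):

```python
class BDIBot:
    """Toy BDI-style game bot: beliefs come from percepts rather than
    being hard-coded, as in the FSM bots the abstract criticizes."""

    def __init__(self):
        self.beliefs = {}        # what the bot currently holds true
        self.desires = []        # (goal, precondition) pairs it may pursue
        self.intention = None    # the goal it has committed to

    def perceive(self, percepts):
        # Update beliefs from the game state.
        self.beliefs.update(percepts)

    def deliberate(self):
        # Commit to the first desire whose precondition holds; a real bot
        # could instead rank desires using data gathered from human players.
        for goal, precondition in self.desires:
            if precondition(self.beliefs):
                self.intention = goal
                return
        self.intention = "patrol"   # default fallback behavior

    def act(self):
        return self.intention

bot = BDIBot()
bot.desires = [
    ("retreat", lambda b: b.get("health", 100) < 30),
    ("attack",  lambda b: b.get("enemy_visible", False)),
]
bot.perceive({"health": 20, "enemy_visible": True})
bot.deliberate()
print(bot.act())  # retreat: low health outranks the visible enemy here
```

Because behavior depends on beliefs updated at run time, the bot's choices vary with the game state instead of following one fixed conditional chain.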
Journal of Experimental and Theoretical Artificial Intelligence | 2009
Henry Hexmoor; Brian McLaughlan; Gaurav Tuli
Large collections of communities, such as those found in complex control systems, necessitate sophisticated techniques for high-level human supervision, because influence must span individuals, communities, and global system behaviours. We have developed psycho-socio-cultural models for mediation of system-level behaviours and interactions. The resulting reduction in human cognitive load will enable supervisors to effectively guide large systems with competing objectives. This model paves the way for developing the 'Man On The Loop' (MOTL) paradigm, a phrase we propose for a novel human supervision role that contrasts with typical micromanagement. We have highlighted and validated MOTL parameters through the implementation of a domain-neutral simulation and compared our results with those found in natural systems.
Connection Science | 2002
Henry Hexmoor
A model of absolute autonomy and power in agent systems is presented. This absolute sense of autonomy captures an agent's liberty over its own preferences. The model characterizes an affinity between autonomy and power. It is argued that agents with similar individual autonomy and power experience an adjusted level of autonomy and power by virtue of being in a group of like agents. The model is then illustrated on the problem of task allocation.
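The adjustment rule itself is not given in the abstract; the sketch below assumes a simple blend of an agent's individual autonomy with the group average, purely to illustrate how an adjusted autonomy value could drive task allocation:

```python
# Hedged sketch: the blending rule and the "pick the most autonomous agent"
# allocation policy are assumptions, not the paper's actual model.

def adjusted_autonomy(own, others, weight=0.5):
    """Blend an agent's individual autonomy with the group average,
    reflecting the influence of being among like agents."""
    group_avg = sum(others) / len(others)
    return (1 - weight) * own + weight * group_avg

def allocate_task(autonomies):
    """Give the task to the agent with the highest adjusted autonomy."""
    scores = {
        name: adjusted_autonomy(a, [v for n, v in autonomies.items() if n != name])
        for name, a in autonomies.items()
    }
    return max(scores, key=scores.get)

print(allocate_task({"a1": 0.9, "a2": 0.4, "a3": 0.6}))  # a1
```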
Archive | 2003
Henry Hexmoor; Cristiano Castelfranchi; Rino Falcone
This paper summarizes the state of the art in agent autonomy. It dispels myths and builds a foundation for the study of autonomy. We point to a renewed interest in good old-fashioned AI that has emerged from the consideration of agents and autonomy. This paper also serves as a reader's guide to the papers in this book. We end with sobering thoughts about the future of the human relationship with machines.
acm symposium on applied computing | 2002
Henry Hexmoor; Justin Tyrel Vaughn
We describe a simulator and simulated teamwork among a number of Personal Satellite Assistants (PSAs) onboard a simulated space station, patrolling for problem detection and isolation. PSAs reason about the autonomies of potential helpers, while helpers reason about their own autonomies in deciding whether to help or to break away from prior commitments to help. We describe algorithms for computing PSA autonomies in concurrent and conflicting situations. We also offer empirical results about the quality of help a recruiting PSA receives when there are multiple, concurrent problems.
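The abstract does not specify how a helper decides to abandon a prior commitment. One hypothetical rule, in which a helper breaks away only when a new problem is clearly more urgent than its committed task (the function names and the margin-based rule are assumptions, not the paper's algorithms), might look like:

```python
def should_switch(current_task, new_task, switch_margin=1.0):
    """Break away only if the new problem is clearly more urgent than the
    committed one; the margin avoids thrashing between concurrent problems."""
    if current_task is None:
        return True                       # idle helpers always accept
    return new_task["urgency"] > current_task["urgency"] + switch_margin

def recruit(helpers, problem):
    """Return the helpers willing to assist with the given problem."""
    return [h for h, task in helpers.items() if should_switch(task, problem)]

helpers = {
    "psa1": None,                 # idle
    "psa2": {"urgency": 5.0},     # committed to a serious problem
    "psa3": {"urgency": 1.0},     # committed to a minor problem
}
print(recruit(helpers, {"urgency": 3.0}))  # ['psa1', 'psa3']
```

Under this rule, a recruiting PSA facing a moderate problem gets help from idle or lightly committed peers, while a helper engaged with a more serious problem stays put.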