
Publication


Featured research published by Jens Pfau.


Australian Conference on Artificial Life | 2009

Evolving Cooperation in the N-player Prisoner's Dilemma: A Social Network Model

Golriz Rezaei; Michael Kirley; Jens Pfau

We introduce a social network based model to investigate the evolution of cooperation in the N-player prisoner's dilemma game. Agents who play cooperatively form social links, which are reinforced by subsequent cooperative actions. Agents tend to interact with players from their social network. However, when an agent defects, the links with its opponents in that game are broken. We examine two different scenarios: (a) where all agents are equipped with a pure strategy, and (b) where some agents play with a mixed strategy. In the mixed case, agents base their decision on a function of the weighted links within their social network. Detailed simulation experiments show that the proposed model is able to promote cooperation. Social networks play an increasingly important role in promoting and sustaining cooperation in the mixed strategy case. An analysis of the emergent social networks shows that they are characterized by high average clustering and broad-scale heterogeneity, especially for a relatively small number of players per game.
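The link dynamics the abstract describes, cooperation reinforcing ties and defection severing them, can be sketched as follows. All names and the reinforcement amount are illustrative assumptions, not details taken from the paper:

```python
import random

class Agent:
    def __init__(self, aid, coop_prob):
        self.aid = aid
        self.coop_prob = coop_prob  # pure strategy: 1.0 (cooperate) or 0.0 (defect)

def play_round(group, links, reinforce=1.0):
    """One N-player game: mutual cooperation reinforces the social link
    between two players, while any defection breaks the pairwise links
    between the defector and its opponents in that game."""
    actions = {a.aid: random.random() < a.coop_prob for a in group}
    for a in group:
        for b in group:
            if a.aid >= b.aid:
                continue
            key = (a.aid, b.aid)
            if actions[a.aid] and actions[b.aid]:
                links[key] = links.get(key, 0.0) + reinforce
            else:
                links.pop(key, None)  # a defection severs the tie
    return actions
```

In a full simulation, partner selection would then be biased towards agents connected by heavily weighted links, which is what sustains cooperation in the mixed-strategy case.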


Journal of Artificial Intelligence Research | 2017

Logics of Common Ground

Tim Miller; Jens Pfau; Liz Sonenberg; Yoshihisa Kashima

According to Clark's seminal work on common ground and grounding, participants collaborating in a joint activity rely on their shared information, known as common ground, to perform that activity successfully, and continually align and augment this information during their collaboration. Similarly, teams of human and artificial agents require common ground to successfully participate in joint activities. Indeed, without appropriate information being shared, using agent autonomy to reduce the workload on humans may actually increase workload as the humans seek to understand why the agents are behaving as they are. While many researchers have identified the importance of common ground in artificial intelligence, there is no precise definition of common ground on which to build the foundational aspects of multi-agent collaboration. In this paper, building on previously-defined modal logics of belief, we present logic definitions for four different types of common ground. We define modal logics for three existing notions of common ground and introduce a new notion of common ground, called salient common ground. Salient common ground captures the common ground of a group participating in an activity and is based on the common ground that arises from that activity as well as on the common ground they shared prior to the activity. We show that the four definitions share some properties, and our analysis suggests possible refinements of the existing informal and semi-formal definitions.
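The fixpoint character of such logics can be illustrated with a toy computation over a finite model. This is only an illustrative sketch of common belief as the greatest fixpoint of X = E(p and X), where E is "everyone believes"; it is not the paper's actual semantics, which also covers notions such as salient common ground:

```python
def E(beliefs, S, worlds):
    """Worlds where every agent believes S: for each agent, all of the
    worlds it considers possible lie inside S."""
    return {w for w in worlds if all(beliefs[a][w] <= S for a in beliefs)}

def C(beliefs, p_worlds, worlds):
    """Common belief of p as the greatest fixpoint of X = E(p ∩ X),
    computed by downward iteration from the full set of worlds."""
    X = set(worlds)
    while True:
        nX = E(beliefs, p_worlds & X, worlds)
        if nX == X:
            return X
        X = nX
```

With two agents who both consider only world 1 possible at world 1, p (true exactly at world 1) is common belief there; if one agent also considers world 2 possible, common belief fails everywhere.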


SpaceOps 2014 Conference | 2014

Modelling and Using Common Ground in Human-agent Collaboration during Spacecraft Operations

Jens Pfau; Tim Miller; Liz Sonenberg

Space operations involve the control of complex technical equipment in highly dynamic and unknown environments. This is a challenging task for human operators. To facilitate this task, mission-critical software systems need to be able to truly engage in joint activities with their human operators; i.e., these systems need to be designed for interdependence with the operators. A key aspect of undertaking joint activities in human teams is the representation and maintenance of common ground—the information that participants in a joint activity share and assume to be shared. It has been proposed that maintenance of common ground is also important in the operation of space software systems, but computational representations of the concept have been rather ad hoc. In recent work, we introduced the first formal account of common ground, which we based on a detailed analysis of philosophical and psychological literature. In this paper, we analyze how computational representations of common ground can support the interdependence of space software systems with human operators; and we explore how this aspect of joint activities might be implemented computationally. Our analysis is relevant to the design of future mission control systems and to the design of human-agent-robot teams in space operations.


Archive | 2014

Towards Agent-Based Models of Cultural Dynamics: A Case of Stereotypes

Jens Pfau; Yoshihisa Kashima; Liz Sonenberg

We analyze from a semi-formal perspective the grounding model of cultural transmission, a social psychological theory that emphasizes the role of everyday joint activities in the transmission of cultural information. The model postulates that cultural transmission during joint activities depends on the context of the activity and the common ground that participants perceive. We build on a framework of intelligent agents that are able to engage in joint activities and integrate the process of communication as described by the grounding model of cultural transmission. We rely on stereotypes as a type of cultural information to illustrate how our model contributes to bridging the gap between micro- and macro-levels in research on cultural dynamics.


Privacy, Security, Risk and Trust | 2012

Towards a Computational Formalism for a Grounding Model of Cultural Transmission

Jens Pfau; Liz Sonenberg; Yoshihisa Kashima

Computational models of the transmission of cultural information usually neglect that cultural transmission between individuals occurs mainly as a consequence of complex social interactions. We analyze the requirements for a computational model of a social psychological theory of cultural transmission, a theory which postulates that actors in a joint activity build on their common ground to align their information to a degree sufficient for them to carry out their activity. This grounding process adds to their common ground and thus contributes to a diffusion of cultural information. The proposed tentative formal account exploits the SharedPlans model of joint activities and uses an argumentation-based dialogue as the representation of the grounding process. This work advances the social-psychological study of cultural dynamics and the exploration of computational methods for the modeling of social systems.
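The grounding process described here, in which participants align just enough information for their activity and aligned items enter common ground, might be caricatured as follows; all names are illustrative assumptions, and the paper's argumentation-based dialogue is far richer than this loop:

```python
def ground(task_needs, agent_a, agent_b, common):
    """Align only what the joint activity requires: each needed item not
    yet in common ground is presented by whichever party holds it and,
    once accepted, becomes part of both parties' information and of
    their common ground."""
    transmitted = []
    for item in task_needs:
        if item in common:
            continue
        if item in agent_a or item in agent_b:
            agent_a.add(item)
            agent_b.add(item)
            common.add(item)          # grounding enlarges common ground
            transmitted.append(item)  # ... and thereby diffuses cultural information
    return transmitted
```

The side effect captured by `transmitted` is the paper's central point: diffusion of cultural information falls out of the grounding needed for joint activity, rather than being modeled as direct copying.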


PRIMA'11 Proceedings of the 14th International Conference on Agent Based Simulation for a Sustainable Society and Multi-agent Smart Computing | 2011

An agent-based model of stereotype communication

Jens Pfau; Michael Kirley; Yoshihisa Kashima

We introduce an agent-based model of the communication of stereotype-relevant information. The model takes into account that the communication of information related to a particular stereotype is governed by the actual and the perceived sharedness of this stereotype. To this end, agents in this model are capable of representing which stereotype-relevant information their communication partners hold. We estimate the parameters of this model with empirical data. The model is a step towards an understanding of how stereotypes are formed and maintained during inter-personal communication as a function of factors such as the social network that underlies this communication.
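The role of perceived sharedness might be sketched like this; the `bias` parameter, the item names, and the selection rule are assumptions for illustration, not the model's estimated form:

```python
import random

def choose_message(own_info, partner_model, bias=0.8):
    """Select a stereotype-relevant item to communicate. With probability
    `bias`, prefer items the sender perceives as already shared with the
    partner (a sharedness bias); otherwise draw from everything it holds."""
    shared = [i for i in own_info if i in partner_model]
    pool = shared if shared and random.random() < bias else sorted(own_info)
    return random.choice(pool)
```

Because each agent keeps a `partner_model` of what its interlocutor holds, repeated conversation preferentially recirculates information perceived as shared, one mechanism by which stereotypes can be maintained.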


Web Intelligence and Agent Systems: An International Journal | 2014

Improving cognitive agent decision making: Experience trajectories as plans

Jens Pfau; Samin Karim; Michael Kirley; Liz Sonenberg

In task environments with large state and action spaces, the use of temporal and state abstraction can potentially improve the decision making performance of agents. However, existing approaches within a reinforcement learning framework typically identify possible subgoal states and instantly learn stochastic subpolicies to reach them from other states. In these circumstances, exploration of the reinforcement learner is unfavorably biased towards local behavior around these subgoals; temporal abstractions are not exploited to reduce required deliberation; and the benefit of employing temporal abstractions is conflated with the benefit of additional learning done to define subpolicies. In this paper, we consider a cognitive agent architecture that allows for the extraction and reuse of temporal abstractions in the form of experience trajectories from a bottom-level reinforcement learning module and a top-level module based on the BDI (Belief-Desire-Intention) model. Here, the reuse of trajectories depends on the situation in which their recording was started. We investigate the efficacy of our approach using two well-known domains – the pursuit and the taxi domains. Detailed simulation experiments demonstrate that the use of experience trajectories as plans acquired at runtime can reduce the amount of decision making without significantly affecting asymptotic performance. The combination of temporal and state abstraction leads to improved performance during the initial learning of the reinforcement learner. Our approach can significantly reduce the number of deliberations required.
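One loose reading of the architecture is sketched below: a bottom-level Q-learner records (state, action) trajectories, and successful trajectories are stored as top-level plans that can be replayed without per-step deliberation. How recording situations are chosen, and the BDI machinery itself, are richer in the paper; the class and parameter names here are illustrative:

```python
import random
from collections import defaultdict

class TrajectoryAgent:
    """Q-learner that records (state, action) trajectories and, when an
    episode succeeds, stores the trajectory as a plan indexed by its
    start state; replaying a plan skips step-by-step deliberation."""
    def __init__(self, actions, eps=0.1, alpha=0.5, gamma=0.9):
        self.q = defaultdict(float)
        self.actions, self.eps, self.alpha, self.gamma = actions, eps, alpha, gamma
        self.plans = {}        # start state -> stored action sequence
        self.active_plan = []  # remainder of a plan being followed
        self.recording = []

    def act(self, state):
        if self.active_plan:
            return self.active_plan.pop(0)       # follow stored plan, no deliberation
        if state in self.plans:
            self.active_plan = list(self.plans[state])
            return self.active_plan.pop(0)
        a = (random.choice(self.actions) if random.random() < self.eps
             else max(self.actions, key=lambda b: self.q[(state, b)]))
        self.recording.append((state, a))
        return a

    def learn(self, s, a, r, s2):
        """Standard one-step Q-learning update."""
        best = max(self.q[(s2, b)] for b in self.actions)
        self.q[(s, a)] += self.alpha * (r + self.gamma * best - self.q[(s, a)])

    def end_episode(self, success):
        if success and self.recording:
            self.plans[self.recording[0][0]] = [a for _, a in self.recording]
        self.recording = []
```

The separation matters for the paper's evaluation: plan replay reduces deliberation, while the unchanged Q-update keeps asymptotic learning performance comparable.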


Physica A: Statistical Mechanics and its Applications | 2013

The co-evolution of cultures, social network communities, and agent locations in an extension of Axelrod’s model of cultural dissemination

Jens Pfau; Michael Kirley; Yoshihisa Kashima


Web Intelligence | 2010

Distributed Advice-Seeking on an Evolving Social Network

Golriz Rezaei; Jens Pfau; Michael Kirley


Archive | 2012

Towards computational models of cultural dynamics based on the grounding model of cultural transmission

Jens Pfau

Collaboration


Dive into Jens Pfau's collaboration.

Top Co-Authors


Tim Miller

University of Melbourne


Samin Karim

University of Melbourne
