Network

Latest external collaborations at the country level.

Hotspot

Research topics where Maaike Harbers is active.

Publication

Featured research published by Maaike Harbers.


International Conference on Digital Human Modeling | 2009

Intelligent Agents for Training On-Board Fire Fighting

Karel van den Bosch; Maaike Harbers; Annerieke Heuvelink; Willem A. van Doesburg

Simulation-based training in complex decision making often requires ample personnel for playing various roles (e.g., teammates, adversaries). Using intelligent agents may diminish the need for staff. However, to achieve goal-directed training, events in the simulation as well as the behavior of key players must be carefully controlled. We propose to do that by using a director agent (DA). A DA can be seen as a supervisor, capable of instructing agents and steering the simulation. We explain and illustrate the concept in the context of training in on-board fire fighting.


Web Intelligence | 2009

Modeling Agents with a Theory of Mind

Maaike Harbers; Karel van den Bosch; John-Jules Ch. Meyer

Training systems with intelligent virtual agents provide an effective means to train people for complex, dynamic tasks like crisis management or firefighting. Virtual agents provide more adequate behavior and explanations if they not only take their own goals and beliefs into account, but also the assumed knowledge and intentions of other players in the scenario. This paper describes a study of how agents can be equipped with a theory of mind, i.e. the capability to ascribe mental concepts to others. Based on existing theories of mind, a theory-theory (TT) and a simulation-theory (ST) approach for modeling agents with a theory of mind are proposed. Both approaches have been implemented in a case study, and the results show that the ST approach is preferred over the TT approach.


Coordination, Organizations, Institutions and Norms in Agent Systems | 2007

The examination of an information-based approach to trust

Maaike Harbers; Rineke Verbrugge; Carles Sierra; John K. Debenham

This article presents the results of experiments performed with agents based on an operationalization of an information-theoretic model of trust. The experiments were performed with the ART test-bed, a test domain for trust and reputation that aims to provide transparent and recognizable standards. An agent architecture based on information theory is described in the paper. A set of experimental results shows that information theory is appropriate for modeling trust in multi-agent systems.
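As a rough illustration of how an information-theoretic notion of trust can be operationalized (this is our own sketch, not the article's actual model; the function names and the entropy-discounted formula are invented for illustration), trust in a partner can combine the expected quality of its outcomes with the certainty, i.e. low entropy, of its behavior:

```python
import math

def entropy(probs):
    """Shannon entropy (in bits) of a discrete distribution."""
    return -sum(p * math.log2(p) for p in probs if p > 0)

def trust(outcome_probs, outcome_utilities):
    """Illustrative trust estimate: the expected utility of interacting
    with a partner, discounted by uncertainty about what the partner
    will do. A partner whose behavior is both good and predictable
    scores highest; an unpredictable one scores near zero."""
    expected = sum(p * u for p, u in zip(outcome_probs, outcome_utilities))
    h_max = math.log2(len(outcome_probs))  # entropy of a uniform distribution
    certainty = 1.0 - entropy(outcome_probs) / h_max
    return expected * certainty

# A partner that almost always delivers as promised...
reliable = trust([0.9, 0.1], [1.0, 0.0])
# ...scores higher than one whose outcomes carry maximal uncertainty.
erratic = trust([0.5, 0.5], [1.0, 0.0])
```

The information-theoretic ingredient is the entropy term: the less surprise a partner's behavior holds, the more weight its expected outcome carries.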


Web Intelligence | 2010

Design and Evaluation of Explainable BDI Agents

Maaike Harbers; Karel van den Bosch; John-Jules Ch. Meyer

It is widely acknowledged that providing explanations is an important capability of intelligent systems. Explanation capabilities are useful, for example, in scenario-based training systems with intelligent virtual agents. Trainees learn more from scenario-based training when they understand why the virtual agents act the way they do. In this paper, we present a model for explainable BDI agents which enables the explanation of BDI agent behavior in terms of underlying beliefs and goals. Different explanation algorithms can be specified in the model, generating different types of explanations. In a user study (n=20), we compare four explanation algorithms by asking trainees which explanations they consider most useful. Based on the results, we discuss which explanation types should be given under what conditions.
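The idea of explaining an action by elements of the agent's goal hierarchy can be sketched as follows. This is a minimal illustration, not the paper's model or algorithms; the goal tree, the node names, and the two explanation functions are invented for the example:

```python
from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Node:
    """A node in a BDI agent's goal tree: a goal or, at the leaves, an action."""
    name: str
    belief: Optional[str] = None       # belief that made this node applicable
    parent: Optional["Node"] = None
    children: List["Node"] = field(default_factory=list)

    def add(self, child: "Node") -> "Node":
        child.parent = self
        self.children.append(child)
        return child

def explain_by_parent_goal(action: Node) -> str:
    """One explanation type: justify an action by its immediate parent goal."""
    return f"I {action.name} because I want to {action.parent.name}."

def explain_by_belief(action: Node) -> str:
    """Another explanation type: justify an action by its enabling belief."""
    return f"I {action.name} because I believe {action.belief}."

# A tiny goal tree for a fire-fighting scenario.
root = Node("extinguish the fire")
sub = root.add(Node("reach the fire source"))
act = sub.add(Node("open the door", belief="the door is not hot"))

print(explain_by_parent_goal(act))  # I open the door because I want to reach the fire source.
print(explain_by_belief(act))       # I open the door because I believe the door is not hot.
```

Different explanation algorithms then amount to different selections from the same tree, which is what makes it possible to compare them in a user study.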


Requirements Engineering: Foundation for Software Quality | 2015

Embedding Stakeholder Values in the Requirements Engineering Process

Maaike Harbers; Christian Detweiler; Mark A. Neerincx

Software has become an integral part of our daily lives and should therefore account for human values such as trust, autonomy and privacy. Human values have received increased attention in the field of Requirements Engineering over the last few years, but existing work offers no systematic way to use elicited values in requirements engineering and evaluation processes. In earlier work we proposed the Value Story workshop, a domain-independent method that connects value elicitation techniques from the field of Human-Computer Interaction to the identification of user stories, a common requirements specification format in Requirements Engineering. This paper studies whether user stories obtained in a Value Story workshop 1) adequately account for values, and 2) are usable by developers. The results of an empirical evaluation show that values are significantly better incorporated in user stories obtained in a Value Story workshop than in user stories obtained in regular requirements elicitation workshops. The results also show that value-based user stories are deemed valuable to the end-user, but rated less well on their size, estimability and testability. This paper concludes that the Value Story workshop is a promising method for embedding values in the Requirements Engineering process, but that value-based user stories need to be translated to use cases to make them suitable for planning and organizing implementation activities.


Computer Science and Software Engineering | 2011

Belief/goal sharing modules for BDI languages

Michal Čáp; Mehdi Dastani; Maaike Harbers

This paper proposes a modularisation framework for BDI-based agent programming languages, developed from a software engineering perspective. Like other proposals, BDI modules are seen as encapsulations of cognitive components. However, unlike other approaches, modules are here instantiated and manipulated in a similar fashion as objects in object orientation. In particular, an agent's mental state is formed dynamically by instantiating and activating BDI modules. The agent deliberates on its active module instances, which interact by sharing their beliefs and goals. The formal semantics of the framework are provided and some desirable properties of the framework are shown.
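The object-like treatment of modules can be sketched as follows. This is an illustration in Python rather than in an agent programming language, and the class and method names are ours, not the framework's syntax:

```python
class Module:
    """A module instance encapsulates its own beliefs and goals,
    much like an object encapsulates its own state."""
    def __init__(self, beliefs=None, goals=None):
        self.beliefs = set(beliefs or [])
        self.goals = set(goals or [])

class Agent:
    def __init__(self):
        self.active = []  # the active module instances form the mental state

    def activate(self, module: Module):
        """Instantiating and activating a module extends the mental state."""
        self.active.append(module)

    def share_belief(self, belief: str):
        """A belief adopted through one instance is shared with all others."""
        for m in self.active:
            m.beliefs.add(belief)

    def beliefs(self):
        """The agent's beliefs are the union over its active instances."""
        return set().union(*(m.beliefs for m in self.active)) if self.active else set()

navigation = Module(beliefs={"at(base)"}, goals={"reach(fire)"})
fighting = Module(goals={"extinguish(fire)"})

agent = Agent()
agent.activate(navigation)             # instantiated and activated, as with objects
agent.activate(fighting)
agent.share_belief("detected(smoke)")  # now visible to both instances
```

The point of the design is that the mental state is not a fixed monolith: it is composed dynamically from instances, and sharing beliefs and goals is how those instances interact.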


Web Intelligence and Agent Systems: An International Journal | 2012

Modeling agents with a theory of mind: Theory-theory versus simulation theory

Maaike Harbers; Karel van den Bosch; John-Jules Ch. Meyer

Virtual training systems with intelligent agents provide an effective means to train people for complex, dynamic tasks like crisis management or firefighting. For successful training, intelligent virtual agents should be able to show believable behavior, adapt their behavior to the trainee's performance and give useful explanations of their behavior. Agents can provide more believable behavior and explanations if they take not only their own, but also the assumed knowledge and intentions of other players in the scenario into account. This paper proposes two ways to model agents with a theory of mind, i.e. to equip them with the ability to ascribe mental concepts such as knowledge and intentions to others. The first theory of mind model is based on theory-theory (TT) and the second on simulation theory (ST). In a simulation study, agents with no theory of mind, a TT-based theory of mind, and an ST-based theory of mind are compared. The results show that agents with a theory of mind are preferred over agents with no theory of mind, and that, regarding agent development, the ST model has advantages over the TT model.
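The contrast between the two approaches can be sketched as follows (a toy illustration of our own, not the paper's agent models): under TT the agent maintains a separate, explicit rule base about how others map mental states to actions, while under ST it simply reuses its own decision procedure on the beliefs it ascribes to the other agent:

```python
def decide(beliefs):
    """The agent's own decision procedure: pick an action from beliefs."""
    if "sees(fire)" in beliefs:
        return "fetch_extinguisher"
    return "patrol"

# Theory-theory (TT): a second body of explicit rules about how
# *other* agents map mental states to actions, maintained separately.
OTHER_AGENT_RULES = {
    frozenset({"sees(fire)"}): "fetch_extinguisher",
    frozenset(): "patrol",
}

def predict_tt(ascribed_beliefs):
    return OTHER_AGENT_RULES[frozenset(ascribed_beliefs)]

# Simulation theory (ST): reuse one's *own* decision procedure, run on
# the beliefs ascribed to the other agent -- no second rule base needed.
def predict_st(ascribed_beliefs):
    return decide(ascribed_beliefs)

# Both predict the same action here, but only TT requires keeping
# OTHER_AGENT_RULES consistent with the agents it describes.
assert predict_tt({"sees(fire)"}) == predict_st({"sees(fire)"})
```

The sketch also suggests why ST can be attractive from a development point of view: there is no duplicate rule base to author and keep synchronized.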


Ambient Intelligence | 2007

Enhancing human understanding through intelligent explanations

Tina Mioch; Maaike Harbers; Willem A. van Doesburg; Karel van den Bosch

Ambient systems that explain their actions promote the user’s understanding as they give the user more insight in the effects of their behavior on the environment. In order to provide individualized intelligent explanations, we need not only to evaluate a user’s observable behavior, but we also need to make sense of the underlying beliefs, intentions and strategies. In this paper we argue for the need of intelligent explanations, identify the requirements of such explanations, propose a method to achieve generation of intelligent explanations, and report on a prototype in the training of naval situation assessment and decision making. We discuss the implications of intelligent explanations in training and set the agenda for future research.


International Conference on Engineering Psychology and Cognitive Ergonomics | 2014

Automatic Feedback on Cognitive Load and Emotional State of Traffic Controllers

Mark A. Neerincx; Maaike Harbers; Dustin Lim; Veerle van der Tas

Workload research in command, information and process-control centers resulted in a modular and formal Cognitive Load and Emotional State (CLES) model with transparent and easy-to-modify classification and assessment techniques. The model distinguishes three representation and analysis layers with an increasing level of abstraction, focusing respectively on sensing, modeling, and reasoning. Fuzzy logic membership rules are generated to map sets of sensed values to a cognitive and emotional state (modeling), and to detect surprises and anomalies (reasoning). The models and algorithms allow humans to remain in the loop of workload assessment and distribution, an important resilience requirement for human-automation teams. By detecting unexpected changes (surprises and anomalies) and the corresponding cognition-emotion-performance dependencies, the CLES monitor is expected to improve a team's responsiveness to new situations.
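The kind of fuzzy membership rule such a modeling layer relies on can be sketched as follows. The thresholds, the heart-rate input, and the category labels below are invented for illustration and are not taken from the CLES model itself:

```python
def triangular(x, left, peak, right):
    """Triangular fuzzy membership function, returning a degree in [0, 1]."""
    if x <= left or x >= right:
        return 0.0
    if x <= peak:
        return (x - left) / (peak - left)
    return (right - x) / (right - peak)

def cognitive_load(heart_rate):
    """Map a sensed heart rate (bpm) to fuzzy load categories.
    A reading can belong to several categories at once, each to a degree."""
    return {
        "low":    triangular(heart_rate, 40, 60, 85),
        "medium": triangular(heart_rate, 70, 90, 110),
        "high":   triangular(heart_rate, 95, 120, 160),
    }

degrees = cognitive_load(100)
# A reading of 100 bpm is partly "medium" and partly "high"; a reasoning
# layer could flag a sudden jump between categories as a surprise.
```

Because the membership functions are explicit and easy to modify, a human supervisor can inspect and adjust the thresholds, which is one way to keep humans in the loop of the assessment.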


Web Intelligence | 2011

Explanation and Coordination in Human-Agent Teams: A Study in the BW4T Testbed

Maaike Harbers; Jeffrey M. Bradshaw; Matthew Johnson; Paul J. Feltovich; Karel van den Bosch; John-Jules Ch. Meyer

There are several applications in which humans and agents jointly perform a task. If the task involves interdependence among the team members, coordination is required to achieve good team performance. Coordination in human-agent teams can be improved by giving humans insight into the behavior of the agents. When humans are able to understand and predict an agent's behavior, they can more easily adapt their own behavior to that of the agent. One way to achieve such understanding is by letting agents explain their behavior. This paper presents a study in the BW4T coordination test bed that examines the effects of agents explaining their behavior on coordination in human-agent teams. The results show that explanations of agent behavior do not always lead to better team performance, but that they do impact user experience in a positive way.

Collaboration

Top co-authors of Maaike Harbers:

Mark A. Neerincx, Delft University of Technology
Catholijn M. Jonker, Delft University of Technology
Paul J. Feltovich, Florida Institute for Human and Machine Cognition
Christian Detweiler, Delft University of Technology
Joost Broekens, Delft University of Technology