Publication


Featured research published by Peter-Paul van Maanen.


IEEE/WIC/ACM International Conference on Intelligent Agent Technology | 2006

A Cognitive Model for Visual Attention and Its Application

Tibor Bosse; Peter-Paul van Maanen; Jan Treur

In this paper a cognitive model for visual attention is introduced. The cognitive model is part of the design of a software agent that supports a naval warfare officer in the task of compiling a tactical picture of the situation in the field. An executable formal specification of the cognitive model is given, and a case study is described in which the model is used to simulate a human subject's attention. The foundation of the model is a formal specification of representation relations for attentional states, specifying their intended meaning. The model has been automatically verified against these relations.


Web Intelligence | 2009

Attention Manipulation for Naval Tactical Picture Compilation

Tibor Bosse; Rianne van Lambalgen; Peter-Paul van Maanen; Jan Treur

This paper discusses and evaluates an agent model that is able to manipulate the visual attention of a human, in order to support naval crew. The agent model consists of four submodels, including a model to reason about a subject’s attention. The model was evaluated based on a practical case study which was formally analysed and verified using automated checking tools. Results show how a human subject’s attention is manipulated by adjusting luminance, based on assessment of the subject’s attention. These first evaluations of the agent show a positive effect.


International Journal of Human-Computer Studies / International Journal of Man-Machine Studies | 2013

A framework for explaining reliance on decision aids

Kees van Dongen; Peter-Paul van Maanen

This study presents a framework for understanding task and psychological factors affecting reliance on advice from decision aids. The framework describes how informational asymmetries, in combination with rational, motivational and heuristic factors, explain human reliance behavior. To test hypotheses derived from the framework, 79 participants performed an uncertain pattern learning and prediction task. They received advice from a decision aid either before or after they expressed their own prediction, and received feedback about performance. When their prediction conflicted with that of the decision aid, participants had to choose to rely on their own prediction or on that of the decision aid. We measured reliance behavior, perceived and actual reliability of self and decision aid, responsibility felt for task outcomes, understandability of one's own reasoning and of the decision aid, and attribution of errors. We found evidence that (1) reliance decisions are based on relative trust, but only when advice is presented after people have formed their own prediction; (2) when people rely as much on themselves as on the decision aid, they still perceive the decision aid to be more reliable than themselves; (3) the less people perceive the decision aid's reasoning to be cognitively available and understandable, the less they rely on the decision aid; (4) the more people feel responsible for the task outcome, the more they rely on the decision aid; (5) when feedback about performance is provided, people underestimate both their own reliability and that of the decision aid; (6) underestimation of the reliability of the decision aid is more prevalent and more persistent than underestimation of one's own reliability; and (7) unreliability of the decision aid is less often attributed to temporary and uncontrollable (but not external) causes than one's own unreliability. These seven findings are potentially applicable to the improved design of decision aids and training procedures.


Lecture Notes in Computer Science | 2009

Automated Visual Attention Manipulation

Tibor Bosse; Rianne van Lambalgen; Peter-Paul van Maanen; Jan Treur

In this paper a system for visual attention manipulation is introduced and formally described. The system is part of the design of a software agent that supports a naval crew in the task of compiling a tactical picture of the situation in the field. A case study is described in which the system is used to manipulate a human subject's attention. To this end the system includes a Theory of Mind about human attention and uses it to estimate the subject's current attention, and to determine how features of displayed objects have to be adjusted to shift attention in a desired direction. Manipulation of attention is done by adjusting illumination according to the calculated difference between a model describing the subject's attention and a model prescribing it.
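The adjustment loop described in this abstract, comparing a descriptive model of where attention currently is with a prescriptive model of where it should be and changing illumination accordingly, can be illustrated with a toy sketch. All function and parameter names here are illustrative assumptions, not the paper's actual formulation:

```python
def adjust_luminance(estimated, prescribed, base, k=0.5, lo=0.0, hi=1.0):
    """Toy sketch: raise the luminance of objects that should attract
    more attention than the descriptive model estimates they currently
    do, and lower it for objects that attract too much.

    estimated, prescribed: dicts mapping object id -> attention level
    base: dict mapping object id -> current luminance
    k: gain on the attention discrepancy (assumed, not from the paper)
    """
    adjusted = {}
    for obj, lum in base.items():
        # discrepancy between prescriptive and descriptive attention
        delta = prescribed.get(obj, 0.0) - estimated.get(obj, 0.0)
        # clamp the result to the display's luminance range
        adjusted[obj] = min(hi, max(lo, lum + k * delta))
    return adjusted
```

Under this sketch, an object the prescriptive model says deserves more attention than it currently receives is brightened, which is one way to realize the "shift in a desired direction" the abstract describes.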


International Conference on Foundations of Augmented Cognition | 2007

Augmented metacognition addressing dynamic allocation of tasks requiring visual attention

Tibor Bosse; Willem A. van Doesburg; Peter-Paul van Maanen; Jan Treur

This paper discusses the use of cognitive models as augmented metacognition for task allocation in tasks requiring visual attention. In the domain of naval warfare, the complex and dynamic nature of the environment means that one has to deal with a large number of tasks in parallel. Therefore, humans are often supported by software agents that take over part of these tasks. However, a problem is how to determine an appropriate allocation of tasks. Due to the rapidly changing environment, such a work division cannot be fixed beforehand: dynamic task allocation at runtime is needed. Unfortunately, in alarming situations the human does not have time for this coordination. Therefore, system-triggered dynamic task allocation is desirable. The paper discusses the possibilities of such a system for tasks requiring visual attention.


Web Intelligence | 2011

Modeling and Validation of Biased Human Trust

Mark Hoogendoorn; S. Waqar Jaffry; Peter-Paul van Maanen; Jan Treur

When considering intelligent agents that interact with humans, having an idea of the human's trust levels, for example in other agents or services, can be of great importance. Most existing models of human trust are based on some rationality assumption and do not represent biased behavior, whereas a vast literature in the cognitive and social sciences indicates that humans often exhibit non-rational, biased behavior with respect to trust. This paper reports how several variations of biased human trust models have been designed, analyzed and validated against empirical data. The results show that such biased trust models are able to predict human trust significantly better.


International Conference on Trust Management | 2011

Validation and verification of agent models for trust: Independent compared to relative trust

Mark Hoogendoorn; S. Waqar Jaffry; Peter-Paul van Maanen

In this paper, the results of a validation experiment for two existing computational trust models describing human trust are reported. One model uses experiences of performance in order to estimate the trust in different trustees. The second model in addition carries the notion of relative trust. The idea of relative trust is that trust in a certain trustee depends not solely on the experiences with that trustee, but also on trustees that are considered its competitors. In order to validate the models, parameter adaptation has been used to tailor the models towards human behavior. A comparison between the two models has also been made to see whether the notion of relative trust describes human trust behavior more accurately. The results show that taking trust relativity into account indeed leads to a higher accuracy of the trust model. Finally, a number of assumptions underlying the two models are verified using an automated verification tool.
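The contrast between the two kinds of model can be illustrated with a toy sketch. The update rules and parameter names below are assumptions for illustration only; the papers' actual models are more elaborate. The point is that independent trust updates from a trustee's own experiences alone, while relative trust also discounts a good experience when competitors performed well:

```python
def update_independent(trust, experience, gamma=0.1):
    """Independent trust: move trust toward the latest experience
    with that trustee only (toy update rule, rate gamma assumed)."""
    return trust + gamma * (experience - trust)

def update_relative(trusts, experiences, gamma=0.1, beta=0.5):
    """Relative trust: a trustee's update also depends on how its
    competitors performed on the same episode (toy formulation)."""
    updated = []
    for i, (trust, exp) in enumerate(zip(trusts, experiences)):
        rivals = [e for j, e in enumerate(experiences) if j != i]
        # discount the experience by the competitors' mean performance
        relative_exp = exp - beta * sum(rivals) / len(rivals)
        updated.append(trust + gamma * (relative_exp - trust))
    return updated
```

In this sketch, a trustee who performs well while its competitors perform poorly gains more trust than under the independent rule, which is the qualitative effect the notion of relative trust captures.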


Lecture Notes in Computer Science | 2008

Simulation and Formal Analysis of Visual Attention in Cognitive Systems

Tibor Bosse; Peter-Paul van Maanen; Jan Treur

In this paper a simulation model for visual attention is discussed and formally analysed. The model is part of the design of a cognitive system comprising an agent that supports a naval officer in the task of compiling a tactical picture of the situation in the field. A case study is described in which the model is used to simulate a human subject's attention. The formal analysis is based on temporal relational specifications for attentional states and for different stages of attentional processes. The model has been automatically verified against these specifications.


50th Annual Meeting of the Human Factors and Ergonomics Society (HFES 2006), San Francisco, CA, 16-20 October 2006, pp. 225-229 | 2006

Under-reliance on the decision aid: a difference in calibration and attribution between self and aid

Kees van Dongen; Peter-Paul van Maanen

It is often assumed that two heads are better than one, but reliance on decision aids is often inappropriate. Decisions to rely on an aid are thought to be based on a comparison between the perceived reliability of one's own performance and that of the decision aid. Unfortunately, perceived reliabilities are unlikely to be perfectly calibrated. This may result in inappropriate decisions to rely on advice. In a laboratory experiment with 40 participants, we studied whether calibration improves after practice, whether calibration of one's own reliability differs from calibration of the aid's reliability, and whether unreliability of the aid is attributed differently. Under-trust in one's own reliability disappears after practice, but under-trust in the aid's reliability persists. Unreliability of the decision aid is less likely to be attributed to temporary, external and uncontrollable factors. This asymmetry in attribution and calibration may explain under-reliance on decision aids.


International Conference on Data Mining | 2012

Online Social Behavior in Twitter: A Literature Review

Olav Aarts; Peter-Paul van Maanen; Tanneke Ouboter; J.M.C. Schraagen

This literature review examines state-of-the-art research in the field of online social networks. The goal is to identify the current challenges within this area of research, given the questions raised in society. In this review we pay attention to three aspects of social networks: actor, message, and network characteristics. We further limit our review to research based on Twitter data, because this online social network is the most widely used by researchers in the field.

Collaboration


Top co-authors of Peter-Paul van Maanen:

Jan Treur (VU University Amsterdam)

Tibor Bosse (VU University Amsterdam)

Catholijn M. Jonker (Delft University of Technology)

Tjerk de Greef (Delft University of Technology)

Tomas Klos (Delft University of Technology)

Alexei Sharpanskykh (Delft University of Technology)