Publication


Featured research published by Philipp Kulms.


intelligent virtual agents | 2011

It's in their eyes: a study on female and male virtual humans' gaze

Philipp Kulms; Nicole C. Krämer; Jonathan Gratch; Sin-Hwa Kang

Social psychological research demonstrates that the same behavior may lead to different evaluations depending on whether it is shown by a man or a woman. With a view to design decisions regarding virtual humans, it is relevant to test whether this pattern also applies to gendered virtual humans. In a 2×2 between-subjects experiment we manipulated the Rapport Agent's gaze behavior and its gender in order to test whether female agents in particular are evaluated more negatively when they do not show gender-specific immediacy behavior and avoid gazing at the interaction partner. Instead of this interaction effect we found two main effects: gaze avoidance was evaluated negatively, and female agents were rated more positively than male agents.


intelligent virtual agents | 2014

Let’s Be Serious and Have a Laugh: Can Humor Support Cooperation with a Virtual Agent?

Philipp Kulms; Stefan Kopp; Nicole C. Krämer

A crucial goal within human-computer interaction is to establish cooperation. There is evidence that, among the available tools, humor is a promising and not uncommon choice. The appeal of humor is supported by its fundamental role in human-human interaction and the variety of functions it serves, for it can achieve much more than making the user smile. In the present experiment, we sought to further investigate the potential effects of humor for virtual agents. Subjects played the iterated prisoner's dilemma with a virtual agent that was intended to be funny or not. Additionally, we manipulated the cooperativeness of the agent. First, although humor did not increase cooperation among subjects, our results indicate that humor modulates how cooperation is perceived in an agent. Second, humor facilitated the interaction with respect to enjoyment and rapport. Third, although increased enjoyment and overall affective reactions were both measured subjectively, the results were not in line with each other.


intelligent virtual agents | 2013

Using Virtual Agents to Guide Attention in Multi-task Scenarios

Philipp Kulms; Stefan Kopp

Humans have the ability to efficiently decode human and human-like cues. We explore whether a virtual agent's facial expressions and gaze can be used to guide attention and elicit amplified processing of task-related cues. We argue that an emphasis on information processing will support the future development of assistance systems, for example by reducing task load and creating a sense of reliability for such systems. A pilot study indicates subjects' propensity to respond to the agent's cues, most importantly gaze, but not yet to rely on them completely, possibly leading to decreased performance.


intelligent virtual agents | 2016

The Effect of Embodiment and Competence on Trust and Cooperation in Human–Agent Interaction

Philipp Kulms; Stefan Kopp

Success in extended human–agent interaction depends on the ability of the agent to cooperate over repeated tasks. Yet, it is not clear how cooperation and trust change over the course of such interactions, and how this is interlinked with the developing perception of competence of the agent or its social appearance. We report findings from a human–agent experiment designed to measure trust in task-oriented cooperation with agents that vary in competence and embodiment. Results in terms of behavioral and subjective measures demonstrate an initial effect of embodiment, changing over time to a relatively higher importance of agent competence.


intelligent virtual agents | 2015

An Interaction Game Framework for the Investigation of Human–Agent Cooperation

Philipp Kulms; Nikita Mattar; Stefan Kopp

Success in human–agent interaction will to a large extent depend on the ability of the system to cooperate with humans over repeated tasks. It is not yet clear how cooperation between humans and virtual agents evolves and how it is interlinked with the attribution of qualities like trustworthiness or competence between the cooperation partners. To explore these questions, we present a new interaction game framework that is centered around a collaborative puzzle game and goes beyond commonly adopted scenarios like the iterated prisoner's dilemma. First results are presented at the conference.


intelligent virtual agents | 2015

Prototyping User Interfaces for Investigating the Role of Virtual Agents in Human-Machine Interaction

Nikita Mattar; Herwin van Welbergen; Philipp Kulms; Stefan Kopp

To investigate how different levels of embodiment or variations of a task affect the performance or perception of an interaction, researchers need tools that enable them to effortlessly modify such aspects. We demonstrate the MultiPro framework, which allows for fast and easy prototyping of applications to be used in the context of human-machine interaction research. In conjunction with a newly developed cooperative game scenario, we show how different configurations with and without an embodied agent can be configured and how the agent's behavior can be adapted.


Frontiers in Digital Humanities | 2018

A social cognition perspective on human–computer trust: The effect of perceived warmth and competence on trust in decision-making with computers

Philipp Kulms; Stefan Kopp

Trust is a crucial guide in interpersonal interactions, helping people to navigate through social decision-making problems and cooperate with others. In human–computer interaction (HCI), trustworthy computer agents foster appropriate trust by supporting a match between their perceived and actual characteristics. As computers are increasingly endowed with capabilities for cooperation and intelligent problem-solving, it is critical to ask under which conditions people discern and distinguish trustworthy from untrustworthy technology. We present an interactive cooperation game framework allowing us to capture human social attributions that indicate trust in continued and interdependent human–agent cooperation. Within this framework, we experimentally examine the impact of two key dimensions of social cognition, warmth and competence, as antecedents of behavioral trust and self-reported trustworthiness attributions of intelligent computers. Our findings suggest that, first, people infer warmth attributions from unselfish vs. selfish behavior and competence attributions from competent vs. incompetent problem-solving. Second, warmth statistically mediates the relation between unselfishness and behavioral trust as well as between unselfishness and perceived trustworthiness. We discuss the possible role of human social cognition for human–computer trust.


Kognitive Systeme, 2013 - 1 | 2013

Theory of Mind in Human-Robot-Communication: Appreciated or not?

Brenda Benninghoff; Philipp Kulms; Laura Hoffmann; Nicole C. Krämer


Mensch und Computer 2018 | 2018

MultiPro. Prototyping Multimodal UI with Anthropomorphic Agents

Philipp Kulms; Herwin van Welbergen; Stefan Kopp


Archive | 2015

Interactive Human-Guided Optimization – A Practical Approach to Multi-Step Logistics Planning

Nikita Mattar; Philipp Kulms; Stefan Kopp

Collaboration


Dive into Philipp Kulms's collaborations.

Top Co-Authors

Nicole C. Krämer, University of Duisburg-Essen
Brenda Benninghoff, University of Duisburg-Essen
Laura Hoffmann, University of Duisburg-Essen
Jonathan Gratch, University of Southern California
Sin-Hwa Kang, University of Southern California