Publication


Featured research published by Jrc Jaap Ham.


Human Factors | 2012

Trust in Smart Systems: Sharing Driving Goals and Giving Information to Increase Trustworthiness and Acceptability of Smart Systems in Cars

Fmf Frank Verberne; Jrc Jaap Ham; Cjh Cees Midden

Objective: We examine whether trust in smart systems is generated analogously to trust in humans and whether the automation level of smart systems affects trustworthiness and acceptability of those systems. Background: Trust is an important factor when considering acceptability of automation technology. As shared goals lead to social trust, and intelligent machines tend to be treated like humans, the authors expected that shared driving goals would also lead to increased trustworthiness and acceptability of adaptive cruise control (ACC) systems. Method: In an experiment, participants (N = 57) were presented with descriptions of three ACCs with different automation levels that were described as systems that either shared their driving goals or did not. Trustworthiness and acceptability of all the ACCs were measured. Results: ACCs sharing the driving goals of the user were more trustworthy and acceptable than were ACCs not sharing the driving goals of the user. Furthermore, ACCs that took over driving tasks while providing information were more trustworthy and acceptable than were ACCs that took over driving tasks without providing information. Trustworthiness mediated the effects of both driving goals and automation level on acceptability of ACCs. Conclusion: As when trusting other humans, trusting smart systems depends on those systems sharing the user’s goals. Furthermore, based on their description, smart systems that take over tasks are judged more trustworthy and acceptable when they also provide information. Application: For optimal acceptability of smart systems, goals of the user should be shared by the smart systems, and smart systems should provide information to their user.


International Conference on Persuasive Technology | 2009

Using negative and positive social feedback from a robotic agent to save energy

Cjh Cees Midden; Jrc Jaap Ham

Two experiments explored the persuasive effects of social feedback, as provided by a robotic agent, on behavioral change. Results indicate stronger persuasive effects of social feedback than of factual feedback (Experiment 1) or factual evaluative feedback (Experiment 2), and of negative feedback (especially social, but also factual) than of positive feedback.


International Journal of Social Robotics | 2011

When Artificial Social Agents Try to Persuade People: The Role of Social Agency on the Occurrence of Psychological Reactance

Maj Maaike Roubroeks; Jrc Jaap Ham; Cjh Cees Midden

In the near future, robotic agents might employ persuasion to influence people’s behavior or attitudes, just as human agents do in many situations. People can comply with such requests, but they can also experience psychological reactance, which may lead to the complete opposite of the proposed behavior. In this study we are interested in the social nature of psychological reactance. Social agency theory proposes that more social cues lead to a more social interaction. We argue that this also holds for psychological reactance. Therefore, we expect a positive relationship between the level of social agency of the source of a persuasive message and the amount of psychological reactance the message arouses. In an online experiment, participants read advice on how to conserve energy when using a washing machine. The advice was provided either as text only, as text accompanied by a still picture of a robotic agent, or as text accompanied by a short film clip of the same robotic agent. Confirming our expectations, results indicated that participants experienced more psychological reactance when the advice was accompanied by the still picture or by the short film clip than when it was provided as text only. This indicates that stronger social agency of the messenger can lead to more psychological reactance. Furthermore, our results confirmed earlier research on the effects of controlling language on psychological reactance. Implications are discussed.


International Conference on Social Robotics | 2011

Making robots persuasive: the influence of combining persuasive strategies (gazing and gestures) by a storytelling robot on its persuasive power

Jrc Jaap Ham; R René Bokhorst; Rh Raymond Cuijpers; D David van der Pol; John-John Cabibihan

Social agency theory suggests that when an (artificial) agent combines persuasive strategies, its persuasive power increases. Therefore, we investigated whether a robot that uses two persuasive strategies is more persuasive than a robot that uses only one. Because gazing and gestures are two crucial persuasive strategies in human face-to-face persuasion, the current research investigated the combined and individual contribution of gestures and gazing to the persuasiveness of a storytelling robot. A robot told 48 participants a persuasive story about the aversive consequences of lying. The robot used persuasive gestures (or not) and gazing (or not) to accompany this persuasive story. We assessed persuasiveness by asking participants to evaluate the lying individual in the story told by the robot. Results indicated that only gazing independently led to increased persuasiveness. Using persuasive gestures led to increased persuasiveness only when the robot combined it with (the persuasive strategy of) gazing. Without gazing, using persuasive gestures diminished robot persuasiveness. The implications of the current findings for the theory and design of persuasive robots are discussed.


International Conference on Persuasive Technology | 2009

Can ambient persuasive technology persuade unconsciously?: using subliminal feedback to influence energy consumption ratings of household appliances

Jrc Jaap Ham; Cjh Cees Midden; F Femke Beute

In this paper we explore a fundamental characteristic of Ambient Persuasive Technology: Can it persuade the user without receiving the user's conscious attention? In a task consisting of 90 trials, participants had to indicate which of three household appliances uses the lowest average amount of energy. After each choice, participants in the supraliminal feedback condition received feedback about the correctness of their choice through presentation of a smiling or a sad face for 150 ms. Participants in the subliminal feedback condition received identical feedback, but the faces were presented for only 25 ms, which prohibited conscious perception of these stimuli. The final third of the participants received no feedback. In the next task, participants rated the energy consumption of all presented appliances. Results indicated that supraliminal feedback and subliminal feedback both led to more correct energy consumption ratings as compared to receiving no feedback. Implications are discussed.
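The between-subjects timing manipulation described in this abstract can be summarized in a short sketch (a hypothetical illustration, not the study's actual experiment code; the condition labels and function names are ours):

```python
# Illustrative sketch of the feedback-timing logic: the three conditions
# differ only in how long the affective face is shown after each of the
# 90 trials. Labels and durations follow the abstract; names are hypothetical.

FEEDBACK_MS = {
    "supraliminal": 150,  # long enough for conscious perception
    "subliminal": 25,     # too brief for conscious perception
    "none": 0,            # control condition: no face shown
}

def feedback_duration(condition: str) -> int:
    """Milliseconds the smiling/sad face is displayed in a given condition."""
    return FEEDBACK_MS[condition]

def run_block(condition: str, n_trials: int = 90) -> list:
    """Simulate one block: one feedback presentation per trial."""
    return [feedback_duration(condition) for _ in range(n_trials)]
```

In a real stimulus-presentation program, such millisecond durations would typically be converted to whole display frames, since a face cannot be shown for a fraction of a refresh cycle.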


International Journal of Social Robotics | 2014

A Persuasive Robot to Stimulate Energy Conservation: The Influence of Positive and Negative Social Feedback and Task Similarity on Energy-Consumption Behavior

Jrc Jaap Ham; Cjh Cees Midden

This research explored the persuasive effects on behavior of social feedback by a robotic agent. In two experiments, participants could save on energy while carrying out washing tasks on a simulated washing machine. In both experiments, we tested the persuasive effects of positive and negative social feedback and we compared these effects to factual feedback, which is more widely used. Results of both studies indicated that social feedback had stronger persuasive effects than factual feedback. Furthermore, results of both studies suggested an effect of feedback valence indicated by more economic behavior following negative feedback (social or factual) as compared to positive feedback. Overall, the strongest persuasive effects were exerted by negative social feedback. In addition, results of Experiment 2 indicated that task similarity increased the persuasive effects of negative feedback. The implications for persuasive robotic agent theory and design are discussed.


Cognition & Emotion | 2013

Brightness differences influence the evaluation of affective pictures

Daniël Lakens; D A F Fockenberg; Kph Karin Lemmens; Jrc Jaap Ham; Cjh Cees Midden

We explored the possibility of a general brightness bias: brighter pictures are evaluated more positively, while darker pictures are evaluated more negatively. In Study 1 we found that positive pictures are brighter than negative pictures in two affective picture databases (the IAPS and the GAPED). Study 2 revealed that because researchers select affective pictures on the extremity of their affective rating without controlling for brightness differences, pictures used in positive conditions of experiments were on average brighter than those used in negative conditions. Going beyond correlational support for our hypothesis, Studies 3 and 4 showed that brighter versions of neutral pictures were evaluated more positively than darker versions of the same picture. Study 5 revealed that people categorised positive words more quickly than negative words after a bright picture prime, and negative words more quickly than positive words after a dark picture prime. Together, these studies provide strong support for the hypothesis that picture brightness influences evaluations.


International Conference on Persuasive Technology | 2009

Social influence of a persuasive agent: the role of agent embodiment and evaluative feedback

Sh Suzanne Vossen; Jrc Jaap Ham; Cjh Cees Midden

Feedback can serve as an intervention aimed at reducing household energy consumption. The present study focused on the effects of agent embodiment on behavioral change through feedback. The effects of agent embodiment were studied for female vs. male users, and factual feedback was compared to evaluative feedback. An experiment was conducted in which 76 participants used a virtual washing machine to clean laundry. They received interactive feedback about their energy consumption, either from an embodied agent or from a computer. This feedback indicated the consumption level (factual feedback) or good or bad performance (evaluative feedback). The results showed that evaluative feedback, especially when it was negative, was more effective than factual feedback in reducing energy consumption, independent of the source of the feedback. Overall, for men it did not matter whether the feedback was given by a computer or by an embodied agent, but for women it did: women who interacted with the embodied agent used less energy than women who interacted with the computer.


International Journal of Social Robotics | 2015

Combining Robotic Persuasive Strategies: The Persuasive Power of a Storytelling Robot that Uses Gazing and Gestures

Jrc Jaap Ham; Rh Raymond Cuijpers; John-John Cabibihan

Earlier theorizing suggested that an (artificial) agent that combines persuasive strategies will be more persuasive. Therefore, the current research investigated whether a robot that uses two persuasive strategies is more persuasive than a robot that uses only one. Gazing and gestures are two crucial persuasive strategies that humans use in face-to-face persuasion, and therefore we studied the combined and individual contribution of these two strategies to the persuasiveness of a storytelling robot. A robot told forty-eight participants a classical persuasive story about the consequences of lying, and was programmed to use (persuasive) gestures (or not) and gazing (or not). Next, we asked participants to evaluate the character in the story, thereby assessing the robot’s persuasiveness. Results presented evidence that a robot’s persuasiveness increases when it uses gazing. When the robot used gestures, its persuasiveness increased only when it also used gazing. When the robot did not use gazing, using gestures diminished the robot’s persuasiveness. We discuss the implications for the theory and design of robots that are more persuasive.


Human Factors | 2015

Trusting a Virtual Driver That Looks, Acts, and Thinks Like You

Fmf Frank Verberne; Jrc Jaap Ham; Cjh Cees Midden

Objective: We examined whether participants would trust an agent that was similar to them more than an agent that was dissimilar to them. Background: Trust is an important psychological factor determining the acceptance of smart systems. Because smart systems tend to be treated like humans, and similarity has been shown to increase trust in humans, we expected that similarity would increase trust in a virtual agent. Methods: In a driving simulator experiment, participants (N = 111) were presented with a virtual agent that was either similar to them or not. This agent functioned as their virtual driver in a driving simulator, and trust in this agent was measured. Furthermore, we measured how trust changed with experience. Results: Prior to experiencing the agent, the similar agent was trusted more than the dissimilar agent. This effect was mediated by perceived similarity. After experiencing the agent, the similar agent was still trusted more than the dissimilar agent. Conclusion: Just as similarity between humans increases trust in another human, similarity also increases trust in a virtual agent. When such an agent is presented as a virtual driver in a self-driving car, it could possibly enhance the trust people have in such a car. Application: Displaying a virtual driver that is similar to the human driver might increase trust in a self-driving car.

Collaboration


Dive into Jrc Jaap Ham's collaborations.

Top Co-Authors

Cjh Cees Midden, Eindhoven University of Technology
Pam Peter Ruijten, Eindhoven University of Technology
Fmf Frank Verberne, Eindhoven University of Technology
Maj Maaike Roubroeks, Eindhoven University of Technology
Rh Raymond Cuijpers, Eindhoven University of Technology
S Shengnan Lu, Eindhoven University of Technology
Daniël Lakens, Eindhoven University of Technology
F Femke Beute, Eindhoven University of Technology
Sh Suzanne Vossen, Eindhoven University of Technology
A Andreas Spahn, Eindhoven University of Technology