Paul Hemeren
University of Skövde
Publications
Featured research published by Paul Hemeren.
Frontiers in Psychology | 2011
Paul Hemeren; Serge Thill
The purpose of the present experiment is to further understand the effect of levels of processing (top-down vs. bottom-up) on the perception of movement kinematics and primitives for grasping actions, in order to gain insight into possible primitives used by the mirror system. In the present study, we investigated the potential of identifying such primitives using an action segmentation task. Specifically, we investigated whether segmentation was driven primarily by the kinematics of the action, as opposed to high-level top-down information about the action and the object used in the action. Participants in the experiment were shown 12 point-light movies of object-centered hand/arm actions that were either presented in their canonical orientation together with the object in question (top-down condition) or upside down (inverted) without information about the object (bottom-up condition). The results show that (1) despite impaired high-level action recognition for the inverted actions, participants were able to reliably segment the actions according to lower-level kinematic variables, and (2) segmentation behavior in both groups was significantly related to the kinematic variables of change in direction, velocity, and acceleration of the wrist (thumb and fingertips) for most of the included actions. This indicates that top-down activation of an action representation leads to segmentation behavior for hand/arm actions similar to that produced by bottom-up, or local, visual processing in a fairly unconstrained segmentation task. Motor primitives, as parts of more complex actions, may therefore be reliably derived through visual segmentation based on movement kinematics.
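The segmentation criterion described here (boundaries where direction, velocity, and acceleration of the wrist change) can be sketched in a few lines. The following is a minimal illustration, not the authors' code: the function name, frame rate, and the choice of speed minima as boundary candidates are all assumptions.

```python
# Minimal sketch (not the study's code): segmenting an action purely from
# wrist kinematics. Assumes a point-light recording sampled at a fixed rate,
# given as an (N, 2) array of wrist positions.
import numpy as np

def kinematic_segments(wrist_xy: np.ndarray, fps: float = 30.0,
                       min_gap: int = 5) -> np.ndarray:
    """Return frame indices of candidate segment boundaries.

    Boundaries are placed at local minima of wrist speed, where changes
    in direction, velocity, and acceleration tend to coincide.
    """
    dt = 1.0 / fps
    vel = np.gradient(wrist_xy, dt, axis=0)   # (N, 2) velocity
    speed = np.linalg.norm(vel, axis=1)       # scalar speed profile
    acc = np.gradient(speed, dt)              # tangential acceleration

    # Local speed minima: acceleration crosses zero from negative to positive.
    crossings = np.where((acc[:-1] < 0) & (acc[1:] >= 0))[0] + 1

    # Enforce a minimum gap between boundaries to suppress jitter duplicates.
    boundaries = []
    for idx in crossings:
        if not boundaries or idx - boundaries[-1] >= min_gap:
            boundaries.append(idx)
    return np.asarray(boundaries)

# Example: a fast approach phase followed by a slow secondary adjustment.
s = np.linspace(0.0, 1.0, 30)
x = np.concatenate([s, 1.0 + 0.1 * s**2])        # 60-frame trajectory
traj = np.stack([x, np.zeros_like(x)], axis=1)   # wrist moves along x only
print(kinematic_segments(traj))                  # boundary near frame 30
```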
Affective Computing and Intelligent Interaction | 2015
Karl Drejing; Serge Thill; Paul Hemeren
Engagement is essential to meaningful social interaction between humans. Understanding the mechanisms by which we detect the engagement of other humans can help us understand how to build robots that interact socially with humans. However, there is currently a lack of measurable engagement constructs on which to build an artificial system that can reliably support social interaction between humans and robots. This paper proposes a definition based on motivation theories and outlines a framework to explore the idea that engagement can be seen as specific behaviors and their attached magnitude or intensity. This is done using data from multiple sources, such as observer ratings, kinematic data, audio, and outcomes of interactions. We use the domain of human-robot interaction to illustrate the application of this approach. The framework further suggests a method to gather and aggregate this data. If certain behaviors and their attached intensities co-occur with various levels of judged engagement, then engagement could be assessed by this framework, making it accessible to a robotic platform. This framework could improve the social capabilities of interactive agents by adding the ability to notice when and why an agent becomes disengaged, thereby providing the interactive agent with the ability to re-engage him or her. We illustrate and propose validation of our framework with an example from robot-assisted therapy for children with autism spectrum disorder. The framework also represents a general approach that can be applied to other social interactive settings between humans and robots, such as interactions with elderly people.
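The aggregation idea (behaviors with attached intensities combined into an engagement estimate that can be checked against observer ratings) can be illustrated as follows. This is a hypothetical sketch: the behavior names, weights, and scoring function are invented for illustration and are not taken from the paper.

```python
# Illustrative sketch of aggregating behavior intensities into an
# engagement score. All names and weights below are hypothetical.
from dataclasses import dataclass

@dataclass
class BehaviorEvent:
    name: str         # e.g. "gaze_at_robot", "smile", "turn_away"
    intensity: float  # magnitude in [0, 1] from kinematic/audio features

def engagement_score(events: list[BehaviorEvent],
                     weights: dict[str, float]) -> float:
    """Combine weighted behavior intensities into one engagement estimate.

    Behaviors with no learned weight contribute nothing; in practice the
    weights would be fit so scores track human-judged engagement levels.
    """
    raw = sum(weights.get(e.name, 0.0) * e.intensity for e in events)
    total = sum(abs(w) for w in weights.values()) or 1.0
    return max(0.0, min(1.0, raw / total))  # clamp to [0, 1]

events = [BehaviorEvent("gaze_at_robot", 0.9),
          BehaviorEvent("smile", 0.4),
          BehaviorEvent("turn_away", 0.7)]
weights = {"gaze_at_robot": 0.5, "smile": 0.3, "turn_away": -0.6}
print(engagement_score(events, weights))  # ~0.11: mostly disengaged
```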
Artificial General Intelligence | 2012
Paul Hemeren; Peter Gärdenfors
General (human) intelligence critically includes understanding human action, both action production and action recognition. Human actions also convey social signals that allow us to predict the actions of others (intent) as well as the physical and social consequences of our actions. What's more, we are able to talk about what we (and others) are doing. We present a framework for action recognition and communication that is based on access to the force dimensions that constrain human actions. The central idea is that forces and force patterns constitute vectors in conceptual spaces that can represent actions and events. We conclude by pointing to the consequences of this view for how artificial systems could be made to understand and communicate about actions.
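The central proposal (actions as vectors in a space whose dimensions are forces, with recognition as proximity to a stored pattern) admits a compact illustration. A minimal sketch follows, under assumptions: the force dimensions and prototype values are invented, and the nearest-prototype rule is one standard reading of conceptual spaces rather than the authors' exact formulation.

```python
# Minimal sketch of action recognition in a conceptual space of forces.
# Dimensions and prototypes are hypothetical.
import numpy as np

# Assumed force dimensions: (grip force, lift force, lateral force)
PROTOTYPES = {
    "grasp": np.array([0.8, 0.1, 0.1]),
    "lift":  np.array([0.6, 0.9, 0.1]),
    "push":  np.array([0.2, 0.1, 0.9]),
}

def recognize_action(force_pattern: np.ndarray) -> str:
    """Label a force pattern with the nearest prototype (Euclidean distance)."""
    return min(PROTOTYPES,
               key=lambda a: np.linalg.norm(PROTOTYPES[a] - force_pattern))

print(recognize_action(np.array([0.7, 0.8, 0.2])))  # -> "lift"
```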
Human-Agent Interaction | 2017
Jiong Sun; Sergey Redyuk; Erik Billing; Dan Högberg; Paul Hemeren
This paper presents an ongoing study on affective human-robot interaction. Our previous research showed that touch type is informative of communicated emotion. Here, a soft matrix-array sensor is used to capture the tactile interaction between human and robot, and six machine learning methods, including CNN, RNN, and C3D, are implemented to classify different touch types, constituting a pre-stage to recognizing emotional tactile interaction. Results show an average recognition rate of 95% by C3D for the classified touch types, which provides stable classification results for developing social touch technology.
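A C3D-style classifier over tactile frames can be sketched as below. This is a rough PyTorch illustration, not the study's implementation: the sensor resolution (8x8), clip length, and number of touch classes are assumptions.

```python
# Rough sketch (not the study's model) of a C3D-style touch classifier:
# 3D convolutions over (time, rows, cols) frames from a pressure-matrix sensor.
import torch
import torch.nn as nn

class TouchC3D(nn.Module):
    def __init__(self, n_classes: int = 6):  # n_classes is an assumption
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, padding=1),  # spatiotemporal conv
            nn.ReLU(),
            nn.MaxPool3d(2),                             # halve time and space
            nn.Conv3d(16, 32, kernel_size=3, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool3d(1),                     # global pooling
        )
        self.classifier = nn.Linear(32, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, 1, frames, rows, cols) pressure values
        return self.classifier(self.features(x).flatten(1))

# Example: a batch of 4 clips, 16 frames of an assumed 8x8 sensor array.
model = TouchC3D(n_classes=6)
logits = model(torch.randn(4, 1, 16, 8, 8))
print(logits.shape)  # torch.Size([4, 6])
```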
2014 IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA) | 2014
Serge Thill; Paul Hemeren; Maria Nilsson
Lund University Cognitive Studies 140 | 2008
Paul Hemeren
Journal of Software Engineering for Robotics | 2015
David Vernon; Erik Billing; Paul Hemeren; Serge Thill; Tom Ziemke
2014 IEEE International Inter-Disciplinary Conference on Cognitive Methods in Situation Awareness and Decision Support (CogSIMA) | 2014
Paul Hemeren; Mikael Johannesson; Mikael Lebram; Fredrik Eriksson; Kristoffer Ekman; Peter Veto
Cognitive Science Conference | 2005
Paul Hemeren
Automotive User Interfaces and Interactive Vehicular Applications | 2013
Serge Thill; Maria Nilsson; Paul Hemeren