Publication


Featured research published by Paul Bremner.


Human-Robot Interaction | 2010

Cooperative gestures: effective signaling for humanoid robots

Laurel D. Riek; Tal-Chen Rabinowitch; Paul Bremner; Anthony G. Pipe; Mike Fraser; Peter Robinson

Cooperative gestures are a key aspect of human-human pro-social interaction. Thus, it is reasonable to expect that endowing humanoid robots with the ability to use such gestures when interacting with humans would be useful. However, while people are used to responding to such gestures when expressed by other humans, it is unclear how they might react to a robot making them. To explore this topic, we conducted a within-subjects, video-based laboratory experiment, measuring time to cooperate with a humanoid robot making interactional gestures. We manipulated the gesture type (beckon, give, shake hands), the gesture style (smooth, abrupt), and the gesture orientation (front, side). We also employed two measures of individual differences: negative attitudes toward robots (NARS) and human gesture decoding ability (DANVA2-POS). Our results show that people cooperate more quickly with abrupt gestures than with smooth ones and with front-oriented gestures than with those made to the side, that people's speed at decoding robot gestures is correlated with their ability to decode human gestures, and that negative attitudes toward robots are strongly correlated with a decreased ability to decode human gestures.


IEEE-RAS International Conference on Humanoid Robots | 2011

The effects of robot-performed co-verbal gesture on listener behaviour

Paul Bremner; Anthony G. Pipe; Chris Melhuish; Mike Fraser; Sriram Subramanian

Co-verbal gestures, the spontaneous gestures that accompany human speech, form an integral part of human communication; they have been shown to have a variety of beneficial effects on listener behaviour. Therefore, we suggest that a humanoid robot which aims to communicate effectively with human users should gesture in a human-like way, and thus engender similar beneficial effects on users. In order to investigate whether robot-performed co-verbal gestures do produce these effects, and are thus worthwhile for a communicative robot, we conducted two user studies. In the first study we investigated whether users paid attention to our humanoid robot for longer when it performed co-verbal gestures than when it performed small arm movements unrelated to the speech. Our findings confirmed our expectations: there was a highly significant difference between the two conditions in the length of time that users paid attention. In the second study we investigated whether gestures performed during speech improved user memory of the facts they accompanied, and whether gestures were linked in memory to the speech they accompanied. An observable effect on the speed and certainty of recall was found. We consider these observations of normative responses to the gestures performed to be an indication of the value of co-verbal gesture for a communicative humanoid robot, and an objective measure of the success of our gesturing method.


International Conference on Robotics and Automation | 2013

Cooperative tabletop working for humans and humanoid robots: Group interaction with an avatar

Hamzah Z. Hossen Mamode; Paul Bremner; Anthony G. Pipe; Brian Carse

This paper presents the first steps in investigating issues emerging from a scenario where a robot interacts with a group of people around an interactive tabletop. In particular, we investigate the impact that a humanoid robot, acting as an avatar for a remote member of the group, has on collaboration between the members of the group. This is carried out in a context where the avatar depicts the actions being carried out by the remote user during a game. A preliminary study was performed to find out how the users interact when they are all co-located. The experiment is then carried out in two variants: in the first, a member of the group is moved to a remote location and allowed to interact with the other members using audio support alone; in the second, a robot is used to represent the actions of the remote user to the co-located users. The results obtained indicate that the addition of an avatar, used in such a way, has a positive impact on group interaction and allows the task to be completed more successfully than when the avatar is not used. Further, comparative analysis of videos of group behavior with a co-located team and with one member represented by the robot avatar showed similar cooperative action.


Human-Robot Interaction | 2015

Speech and Gesture Emphasis Effects for Robotic and Human Communicators

Paul Bremner; Ute Leonards

Emphasis, by means of either pitch accents or beat gestures (rhythmic co-verbal gestures with no semantic meaning), has been shown to serve two main purposes in human communication: syntactic disambiguation and salience. To use beat gestures in this role, interlocutors must be able to integrate them with the speech they accompany. Whether such integration is possible when the multi-modal communication information is produced by a humanoid robot, and whether it is as efficient as for human communicators, are questions that need to be answered to further our understanding of the efficacy of humanoid robots for naturalistic, human-like communication. Here, we present an experiment which, using a fully within-subjects design, shows that there is a marked difference in speech and gesture integration between human and robot communicators, being significantly less effective for the robot. In contrast to beat gestures, the effects of speech emphasis are the same whether that speech is played through a robot or as part of a video of a human. Thus, while integration of speech emphasis and verbal information does occur for robot communicators, integration of non-informative beat gestures and verbal information does not, despite comparable timing and motion profiles to human gestures.


Systems, Man and Cybernetics | 2009

Conversational gestures in human-robot interaction

Paul Bremner; Anthony G. Pipe; Chris Melhuish; Mike Fraser; Sriram Subramanian


IEEE International Conference on Cognitive Infocommunications | 2012

Cooperative tabletop working for humans and humanoid robots: Early investigations into artifact indication

H. Z. Hossen Mamode; Paul Bremner; T. Pipe; Brian Carse


European Conference on Artificial Life | 2011

An innovative bio-inspired fault tolerant unitronics architecture

Mohammad Samie; Gabriel Dragffy; Tony Pipe; Paul Bremner


Human-Robot Interaction | 2018

Social Robots for Engagement in Rehabilitative Therapies: Design Implications from a Study with Therapists

Katie Winkle; Praminda Caleb-Solly; Ailie Turton; Paul Bremner


Taylor and Francis | 2011

Biomimetics: Nature-Based Innovation

Anthony G. Pipe; R. Vaidyanathan; Chris Melhuish; Paul Bremner; Peter Robinson; Robert A. J. Clark; A. Lenz; K. Eder; Nick Hawes; Z. Ghahramani; Mike Fraser; M. Mermehdi; P. Healey; S. Skachek

Collaboration


Dive into Paul Bremner's collaborations.

Top Co-Authors

Anthony G. Pipe
University of the West of England

Chris Melhuish
University of the West of England

Brian Carse
University of the West of England

Nick Hawes
University of Birmingham

Gabriel Dragffy
University of the West of England