Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where James Law is active.

Publication


Featured research published by James Law.


Conference on Biomimetic and Biohybrid Systems | 2015

Help! I Can't Reach the Buttons: Facilitating Helping Behaviors Towards Robots

David Cameron; Emily C. Collins; Adriel Chua; Samuel Fernando; Owen McAree; Uriel Martinez-Hernandez; Jonathan M. Aitken; Luke Boorman; James Law

Human-robot interaction (HRI) research is often built around the premise that the robot serves to assist a human in achieving a human-led goal or shared task. However, there are many circumstances during HRI in which a robot may need the assistance of a human in shared tasks or to achieve its goals. We use the ROBO-GUIDE model as a case study, together with insights from social psychology, to examine two factors, user trust and situational ambiguity, which may promote human assistance towards a robot. These factors are argued to determine the likelihood of human assistance arriving, individuals' perceived competence of the robot, and individuals' trust towards the robot. We outline an experimental approach to test these proposals.


Conference Towards Autonomous Robotic Systems | 2015

ROBO-GUIDE: Towards Safe, Reliable, Trustworthy, and Natural Behaviours in Robotic Assistants

James Law; Jonathan M. Aitken; Luke Boorman; David Cameron; Adriel Chua; Emily C. Collins; Samuel Fernando; Uriel Martinez-Hernandez; Owen McAree

In this paper we describe a novel scenario, whereby an assistive robot is required to use a lift, and present results from a preliminary investigation into floor determination using readily available information. The aim is to create an assistive robot that can integrate naturally into existing infrastructure.


Joint IEEE International Conference on Development and Learning and Epigenetic Robotics | 2015

Babybot challenge: Motor skills

Patricia Shaw; Daniel Lewkowicz; Alexandros Giagkos; James Law; Suresh Kumar; Mark H. Lee; Qiang Shen

In 1984, von Hofsten performed a longitudinal study of early reaching in infants between the ages of 1 week and 19 weeks. This paper proposes a possible model using excitation of various subsystems to reproduce the longitudinal study. The model is then implemented and tested on an iCub humanoid robot, and the results compared to the original study. The resulting model shares interesting similarities with the data presented by von Hofsten, in particular a slight dip in the quantity of reaching. However, the dip is shifted by a few weeks, and the analysis of hand behaviour is inconclusive based on the data recorded.


Archive | 2016

Assessing Graphical Robot Aids for Interactive Co-working

Iveta Eimontaite; Ian Gwilt; David Cameron; Jonathan M. Aitken; Joe Rolph; Saeid Mokaram; James Law

The shift towards more collaborative working between humans and robots increases the need for improved interfaces. Alongside robust measures to ensure safety and task performance, humans need to gain confidence in robot co-operators to enable true collaboration. This research investigates how graphical signage can support human–robot co-working, with the intention of increasing productivity. Participants are required to co-work with a KUKA iiwa lightweight manipulator on a manufacturing task. The three conditions in the experiment differ in the signage presented to the participants: signage relevant to the task, signage irrelevant to the task, or no signage. Differences between the three conditions are expected in anxiety and negative attitudes towards robots; error rate; response time; and participants' complacency, as suggested by facial expressions. In addition to improving understanding of how graphical languages can support human–robot co-working, this study provides a basis for further collaborative research to explore human–robot co-working in more detail.


European Conference on Mobile Robots | 2015

Floor determination in the operation of a lift by a mobile guide robot

Owen McAree; Jonathan M. Aitken; Luke Boorman; David Cameron; Adriel Chua; Emily C. Collins; Samuel Fernando; James Law; Uriel Martinez-Hernandez

Robotic assistants operating in multi-floor buildings are required to use lifts to transition between floors. To reduce the need for environments to be tailored to suit robots, and to make robot assistants more applicable, it is desirable that they should make use of existing navigational cues and interfaces designed for human users. In this paper, we examine the scenario whereby a guide robot uses a lift to transition between floors in a building. We describe an experiment into combining multiple data sources, available to a typical robot with simple sensors, to determine which floor of the building it is on. We show the robustness of this approach to realistic scenarios in a busy working environment.
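The abstract above describes fusing multiple simple sensor sources to decide which floor the robot is on. The paper's exact method is not reproduced here; the following is a minimal, hypothetical sketch of one standard way to combine such cues, a discrete Bayes update over floor hypotheses, with invented likelihood values for two imagined sensors (a barometric altimeter and a floor-sign reader).

```python
# Illustrative sketch only: a discrete Bayes filter over floor hypotheses.
# The sensor names and likelihood values below are invented for illustration.

def normalise(dist):
    """Rescale a belief distribution so its probabilities sum to one."""
    total = sum(dist.values())
    return {floor: p / total for floor, p in dist.items()}

def update(belief, likelihoods):
    """Multiply the prior belief by per-floor sensor likelihoods, renormalise."""
    posterior = {floor: belief[floor] * likelihoods.get(floor, 1e-9)
                 for floor in belief}
    return normalise(posterior)

# Uniform prior over a four-floor building.
belief = {floor: 0.25 for floor in range(4)}

# Hypothetical likelihoods from two independent cues.
altimeter = {0: 0.05, 1: 0.15, 2: 0.60, 3: 0.20}    # pressure suggests floor 2
sign_reader = {0: 0.10, 1: 0.10, 2: 0.70, 3: 0.10}  # signage recognition agrees

for cue in (altimeter, sign_reader):
    belief = update(belief, cue)

print(max(belief, key=belief.get))  # most probable floor after fusing both cues
```

Combining cues multiplicatively like this makes the estimate robust to any single noisy source, which matches the robustness claim in the abstract, though the actual fusion scheme used in the paper may differ.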


International Conference on Development and Learning | 2012

An infant inspired model of reaching for a humanoid robot

Mark H. Lee; James Law; Patricia Shaw; Michael Sheldon

Infants demonstrate remarkable talents in learning to control their sensor and motor systems. In particular, the ability to reach for objects using visual feedback requires overcoming several issues related to coordination, spatial transformations, redundancy, and complex learning spaces that are also challenges for robotics. The developmental sequence from tabula rasa to early successful reaching includes learning of saccade control, gaze control, torso control, and visually elicited reaching and grasping in 3D space. This sequence is an essential progression in the acquisition of manipulation behaviour. In this paper we outline the biological and psychological processes behind this sequence, and describe how they can be interpreted to enable cumulative learning of reaching behaviours in robots. Our implementation on an iCub robot produces reaching and manipulation behaviours from scratch in around 2.5 hours. We show snapshots of the learning spaces during this process, and comment on how the timing of stage transitions impacts learning.
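The staged developmental sequence described above (saccade, gaze, torso, then reaching) can be pictured as a loop that exercises one sensorimotor skill until competence passes a threshold before unlocking the next. This is a hypothetical sketch in that spirit, not the authors' implementation; stage names follow the abstract, but the training function and threshold are invented placeholders.

```python
# Illustrative sketch only: staged cumulative learning with competence-gated
# transitions. train_stage is a stand-in for a real sensorimotor learner.

STAGES = ["saccade", "gaze", "torso", "reach"]

def train_stage(name, trials=100):
    """Placeholder learner: would run motor-babbling trials and measure
    error reduction; here it simply reports the stage as mastered."""
    return 1.0

def develop(threshold=0.9):
    """Unlock stages in order, each gated on the previous stage's competence."""
    mastered = []
    for stage in STAGES:
        competence = 0.0
        while competence < threshold:
            competence = train_stage(stage)
        mastered.append(stage)  # earlier skills remain available to later stages
    return mastered

print(develop())  # stages mastered in developmental order
```

The key design point the sketch captures is that later stages build on, rather than replace, earlier competences, which is what makes the learning cumulative.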


University of Sheffield Engineering Symposium | 2016

Safety and Verification for a Mobile Guide Robot

Jonathan M. Aitken; Owen McAree; Luke Boorman; David Cameron; Adriel Chua; Emily C. Collins; Samuel Fernando; James Law; Uriel Martinez-Hernandez

This work presents the safety and verification arguments for the development of an autonomous robot platform capable of leading humans around a building. It uses Goal Structuring Notation (GSN) to develop a pattern, a re-usable GSN fragment, that can form part of the safety case surrounding the interaction of a mobile guide robot to: record the decisions taken during the design phase, ensure safe operation around humans, and identify where mitigation must be introduced.


Joint IEEE International Conference on Development and Learning and Epigenetic Robotics | 2015

Representations of body schemas for infant robot development

Patricia Shaw; James Law; Mark H. Lee

Psychological studies have often suggested that internal models of the body and its structure are used to process sensory inputs such as proprioception from muscles and joints. Within robotics, there is often a need for an internal representation of the body, integrating the multi-modal and multi-dimensional spaces in which it operates. Here we propose a body model in the form of a series of distributed spatial maps that were not purpose-designed but emerged through our experiments on developmental stages using a minimalist, content-neutral approach. The result is an integrated series of 2D maps storing correlations and contingencies across modalities, which has some resonances with the structures used in the brain for sensorimotor coordination.
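The abstract describes 2D maps that store correlations between modalities. As a hypothetical illustration of that idea (not the authors' implementation), the sketch below stores, for each cell of a discretised gaze grid, a running average of the arm coordinates observed together with that gaze direction; the grid size and modality choice are invented.

```python
# Illustrative sketch only: a 2D map linking two modalities, e.g. a gaze cell
# (pan, tilt bin) to the arm end-point (x, y) observed at the same time.

import numpy as np

class SpatialMap:
    def __init__(self, shape=(10, 10)):
        # Each gaze cell stores an associated arm coordinate plus an
        # observation count, so associations strengthen with experience.
        self.values = np.zeros(shape + (2,))
        self.counts = np.zeros(shape)

    def learn(self, gaze_cell, arm_xy):
        """Incrementally average the arm coordinates seen for a gaze cell."""
        i, j = gaze_cell
        self.counts[i, j] += 1
        self.values[i, j] += (np.asarray(arm_xy) - self.values[i, j]) / self.counts[i, j]

    def lookup(self, gaze_cell):
        """Return the arm coordinate currently associated with a gaze cell."""
        return self.values[gaze_cell]

m = SpatialMap()
m.learn((3, 4), (0.20, 0.50))
m.learn((3, 4), (0.22, 0.48))
print(m.lookup((3, 4)))  # running average of the paired observations
```

A set of such maps, one per modality pair, gives the kind of distributed, content-neutral representation the abstract alludes to: no map encodes the body explicitly, but together they capture cross-modal contingencies.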


Archive | 2019

Dynamic Graphical Signage Improves Response Time and Decreases Negative Attitudes Towards Robots in Human-Robot Co-working

Iveta Eimontaite; Ian Gwilt; David Cameron; Jonathan M. Aitken; Joe Rolph; Saeid Mokaram; James Law

Collaborative robots, or 'co-bots', are a transformational technology that bridges traditionally segregated manual and automated manufacturing processes. However, to realize their full potential, human operators need confidence in robotic co-worker technologies and their capabilities. In this experiment we investigate the impact of screen-based dynamic instructional signage on 39 participants from a manufacturing assembly line. The results provide evidence that dynamic signage helps to improve response time for the experimental group with task-relevant signage compared to the control group with no signage. Furthermore, the experimental group's negative attitudes towards robots decreased significantly with increasing accuracy on the task.


Conference on Biomimetic and Biohybrid Systems | 2016

Don’t Worry, We’ll Get There: Developing Robot Personalities to Maintain User Interaction After Robot Error

David Cameron; Emily C. Collins; Hugo Cheung; Adriel Chua; Jonathan M. Aitken; James Law

Human-robot interaction (HRI) often considers the human impact of a robot serving to assist a human in achieving their goal or a shared task. There are many circumstances during HRI, though, in which a robot may make errors that are inconvenient or even detrimental to human partners. Using the ROBOtic GUidance and Interaction DEvelopment (ROBO-GUIDE) model on the Pioneer LX platform as a case study, together with insights from social psychology, we examine the key factors for a robot that has made such a mistake in preserving individuals' perceived competence of the robot and individuals' trust towards the robot. We outline an experimental approach to test these proposals.

Collaboration


Dive into James Law's collaborations.

Top Co-Authors


Adriel Chua

University of Sheffield


Owen McAree

University of Sheffield


Luke Boorman

University of Sheffield
