
Publications


Featured research published by Jeffrey A. Kuo.


IEEE-RAS International Conference on Humanoid Robots | 2014

An experimental study of robot control during environmental contacts based on projected operational space dynamics

Valerio Ortenzi; Maxime Adjigble; Jeffrey A. Kuo; Rustam Stolkin; Michael Mistry

This paper addresses the problem of constrained motion for a manipulator performing a task while in contact with the environment, and proposes a solution based on projected operational space dynamics. The main advantages of this control technique are: 1) it exploits the environment contact constraint itself, so as to minimise the joint torques needed to perform the task; 2) it enables full decoupling of motion and force control; 3) force feedback from a force sensor mounted at the end effector or other contact points is not needed. This work is a step towards a robot control strategy which mimics the human behaviour of exploiting contacts with the environment to help perform tasks. We present an experimental implementation of the control method in which a KUKA LWR IV manipulator uses an eraser to wipe a whiteboard, and we show that this controller can effectively exploit contact with the whiteboard in order to reduce joint torques while still performing the desired wiping motion.
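The projection idea at the heart of this approach can be sketched in a few lines. This is a minimal illustration of the general construction, not the paper's implementation: task torques are projected into the null space of a contact constraint Jacobian, so the contact itself carries part of the load. The Jacobian and torque values below are hypothetical.

```python
import numpy as np

def constraint_null_projector(J_c: np.ndarray) -> np.ndarray:
    """Null-space projector P = I - pinv(J_c) @ J_c for a contact
    constraint Jacobian J_c (constraints x joints)."""
    n = J_c.shape[1]
    return np.eye(n) - np.linalg.pinv(J_c) @ J_c

# Hypothetical 3-joint arm with a single contact constraint.
J_c = np.array([[1.0, 0.5, 0.2]])
P = constraint_null_projector(J_c)

tau_task = np.array([2.0, -1.0, 0.5])   # torques the task alone would demand
tau_cmd = P @ tau_task                  # projected torques actually commanded

# Since J_c @ P = 0, the commanded torques act only in the directions the
# contact leaves free, and their norm never exceeds the unprojected torques'.
```

This exhibits the torque-reduction property the abstract mentions: the orthogonal projection can only shrink (or preserve) the torque vector, with the contact supplying the remainder.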


International Conference on Robotics and Automation | 2016

Towards advanced robotic manipulation for nuclear decommissioning: A pilot study on tele-operation and autonomy

Naresh Marturi; Alireza Rastegarpanah; Chie Takahashi; Maxime Adjigble; Rustam Stolkin; Sebastian Zurek; Marek Sewer Kopicki; Mohammed Talha; Jeffrey A. Kuo; Yasemin Bekiroglu

We present early pilot studies of a new international project, developing advanced robotics to handle nuclear waste. Despite enormous remote handling requirements, there has been remarkably little use of robots by the nuclear industry. The few robots deployed have been directly teleoperated in rudimentary ways, with no advanced control methods or autonomy. Most remote handling is still done by an aging workforce of highly skilled experts, using 1960s-style mechanical master-slave devices. In contrast, this paper explores how novice human operators can rapidly learn to control modern robots to perform basic manipulation tasks, and how autonomous robotics techniques can be used for operator assistance to increase throughput rates, decrease errors, and enhance safety. We compare humans directly teleoperating a robot arm against human-supervised semi-autonomous control exploiting computer vision, visual servoing and autonomous grasping algorithms. We show how novice operators rapidly improve their performance with training; suggest how training needs might scale with task complexity; and demonstrate how advanced autonomous robotics techniques can help human operators improve their overall task performance. An additional contribution of this paper is to show how rigorous experimental and analytical methods from human factors research can be applied to perform principled scientific evaluations of human test-subjects controlling robots to perform practical manipulative tasks.


Intelligent Robots and Systems | 2016

Vision-guided state estimation and control of robotic manipulators which lack proprioceptive sensors

Valerio Ortenzi; Naresh Marturi; Rustam Stolkin; Jeffrey A. Kuo; Michael Mistry

This paper presents a vision-based approach for estimating the configuration of, and providing control signals for, an under-sensored robot manipulator using a single monocular camera. Some remote manipulators, used for decommissioning tasks in the nuclear industry, lack proprioceptive sensors because electronics are vulnerable to radiation. Additionally, even if proprioceptive joint sensors could be retrofitted, such heavy-duty manipulators are often deployed on mobile vehicle platforms, which are significantly and erratically perturbed when powerful hydraulic drilling or cutting tools are deployed at the end-effector. In these scenarios, it would be beneficial to use external sensory information, e.g. vision, for estimating the robot configuration with respect to the scene or task. Conventional visual servoing methods typically rely on joint encoder values for controlling the robot. In contrast, our framework assumes that no joint encoders are available, and estimates the robot configuration by visually tracking several parts of the robot, and then enforcing equality between a set of transformation matrices which relate the frames of the camera, world and tracked robot parts. To accomplish this, we propose two alternative methods based on optimisation. We evaluate the performance of our developed framework by visually tracking the pose of a conventional robot arm, where the joint encoders are used to provide ground-truth for evaluating the precision of the vision system. Additionally, we evaluate the precision with which visual feedback can be used to control the robot's end-effector to follow a desired trajectory.
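The core idea of recovering a configuration from tracked robot parts, with no encoders, can be seen in a toy setting. The sketch below is a deliberately simplified stand-in for the paper's optimisation-based methods: for a hypothetical planar 2-link arm, the joint angles follow in closed form from the tracked elbow and tip positions, playing the role of the visually tracked parts.

```python
import numpy as np

def fk(q, l1=1.0, l2=0.8):
    """Forward kinematics of a planar 2-link arm: elbow and tip positions."""
    elbow = l1 * np.array([np.cos(q[0]), np.sin(q[0])])
    tip = elbow + l2 * np.array([np.cos(q[0] + q[1]), np.sin(q[0] + q[1])])
    return elbow, tip

def estimate_config(elbow_obs, tip_obs):
    """Recover joint angles from the tracked link positions alone."""
    q1 = np.arctan2(elbow_obs[1], elbow_obs[0])
    v = tip_obs - elbow_obs
    q2 = np.arctan2(v[1], v[0]) - q1
    return np.array([q1, q2])

# Simulate a camera observation from a known configuration, then recover it.
q_true = np.array([0.4, -0.7])
elbow, tip = fk(q_true)
q_est = estimate_config(elbow, tip)   # matches q_true
```

In the full 3D problem no such closed form exists, which is why the paper resorts to optimisation over the camera/world/part transformation matrices; the toy case only shows that tracked part poses carry enough information to pin down the configuration.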


IEEE-RAS International Conference on Humanoid Robots | 2016

Kinematics-based estimation of contact constraints using only proprioception

Valerio Ortenzi; Hsiu-Chin Lin; Morteza Azad; Rustam Stolkin; Jeffrey A. Kuo; Michael Mistry

Robots are increasingly being required to perform tasks which involve contacts with the environment. This paper addresses the problem of estimating environmental constraints on the robot's motion. We present a method which estimates such constraints, by computing the null space of a set of velocity vectors which differ from commanded velocities during contacts. We further extend this method to handle unilateral constraints, for example when the robot touches a rigid surface. Unlike previous work, our method is based on kinematics analysis, using only proprioceptive joint encoders, so there is no need for expensive force-torque sensors, tactile sensors at the contact points, or vision. We first show results of experiments with a simulated robot in a variety of situations, and we analyse the effect of various levels of observation noise on the resulting contact estimates. Finally we evaluate the performance of our method on two sets of experiments using a KUKA LWR IV manipulator, tasked with exploring and estimating the constraints caused by a horizontal surface and an inclined surface.
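The null-space computation the abstract describes can be sketched with an SVD. This is an assumed illustration, not the paper's code: the rows of the matrix are discrepancies between commanded and measured velocities recorded during contact, and the null space of that stack spans the motion directions the environment still permits. The example data are hypothetical.

```python
import numpy as np

def permitted_motion_basis(vel_errors: np.ndarray, tol: float = 1e-8):
    """Rows of vel_errors are (commanded - measured) end-effector velocities
    observed during contact. Their null space, found via SVD, spans the
    directions the environment still permits."""
    _, s, vt = np.linalg.svd(vel_errors)
    rank = int(np.sum(s > tol))
    return vt[rank:].T          # columns form an orthonormal basis

# Contact with a horizontal surface: all velocity errors lie along z.
errors = np.array([[0.0, 0.0, 0.3],
                   [0.0, 0.0, -0.1]])
free = permitted_motion_basis(errors)   # basis spanning the x-y plane
```

With noisy observations the tolerance `tol` effectively decides which singular values count as constraint directions, which is where the abstract's analysis of observation noise comes in.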


Advanced Robotics | 2017

Hybrid motion/force control: a review

Valerio Ortenzi; Rustam Stolkin; Jeffrey A. Kuo; Michael Mistry

This paper reviews hybrid motion/force control, a control scheme which enables robots to perform tasks involving both motion, in the free space, and interactive force, at the contacts. Motivated by the large amount of literature on this topic, we facilitate comparison and elucidate the key differences among the various approaches. An emphasis is placed on the decoupling of motion control and force control, and we conclude that complete decoupling is indeed possible; however, this feature can be relaxed or sacrificed to reduce the robot's joint torques while still completing the task.
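The decoupling under review is usually expressed with a selection matrix. The sketch below is the textbook construction, assumed here for illustration rather than taken from the paper: in a frame aligned with the contact, a diagonal selection matrix S picks the motion-controlled axes and its complement picks the force-controlled ones, so the two control loops act in orthogonal subspaces.

```python
import numpy as np

# Selection matrix in a contact-aligned frame: diagonal 1s mark
# motion-controlled axes, 0s mark force-controlled axes.
S = np.diag([1.0, 1.0, 0.0])            # x, y: motion control; z: force control

v_des = np.array([0.10, 0.00, 0.05])    # desired velocity (z part discarded)
f_des = np.array([0.0, 0.0, 5.0])       # desired contact force along z

v_cmd = S @ v_des                       # motion-subspace command
f_cmd = (np.eye(3) - S) @ f_des         # force-subspace command

# v_cmd has no z component and f_cmd has no x-y component, so the motion
# and force loops never fight over the same direction.
```

The relaxed variants the review concludes with trade away exactly this strict orthogonality to let contacts absorb part of the task load.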


International Symposium on Safety, Security, and Rescue Robotics | 2016

Towards robotic decommissioning of legacy nuclear plant: Results of human-factors experiments with tele-robotic manipulation, and a discussion of challenges and approaches for decommissioning

Mohammed Talha; E. A. M. Ghalamzan; Chie Takahashi; Jeffrey A. Kuo; W. Ingamells; Rustam Stolkin

This paper explores the problems of developing robotic systems for decommissioning legacy nuclear infrastructure, such as the many contaminated gloveboxes present in the UK, USA and other countries. We begin with a discussion of these decommissioning challenges. We review the current manual methods for decommissioning alpha-contaminated plant, and review robotic approaches which might replace such direct human interventions. We then present our initial experiments with human test-subjects, exploring the ability of humans to control a remote robot to perform complex manipulation tasks. Our preliminary results reveal a number of interesting lessons: conventional tele-manipulation is very difficult and very slow without significant training; metrics for usability of such technology can be conflicting and hard to interpret; aptitude for tele-manipulation varies significantly between individuals, though such aptitude may be predicted by using spatial awareness tests to select prospective robot operators; and the abilities of people with different initial aptitudes appear to converge somewhat as learning progresses. An additional contribution of this paper is to show how rigorous scientific methodologies, drawn from the psychology and human-factors research fields, can be used to analyse the performance of humans using robots to perform practical tasks.


Robotics and Autonomous Systems | 2016

Visual classification of waste material for nuclear decommissioning

Affan Shaukat; Yang Gao; Jeffrey A. Kuo; Bob A. Bowen; Paul Mort


IEEE-ASME Transactions on Mechatronics | 2018

Vision-Based Framework to Estimate Robot Configuration and Kinematic Constraints

Valerio Ortenzi; Naresh Marturi; Michael Mistry; Jeffrey A. Kuo; Rustam Stolkin


Archive | 2017

Towards Advanced Robotic Manipulations for Nuclear Decommissioning

Naresh Marturi; Alireza Rastegarpanah; Valerio Ortenzi; Vijaykumar Rajasekaran; Yasemin Bekiroglu; Jeffrey A. Kuo; Rustam Stolkin


Science & Engineering Faculty | 2015

Projected inverse dynamics control and optimal control for robots in contact with the environment: A comparison

Valerio Ortenzi; Rustam Stolkin; Jeffrey A. Kuo; Michael Mistry

Collaboration


Top co-authors of Jeffrey A. Kuo:

Rustam Stolkin, University of Birmingham
Michael Mistry, University of Birmingham
Chie Takahashi, University of Birmingham
Mohammed Talha, University of Birmingham
Yasemin Bekiroglu, Royal Institute of Technology