Network


Alexander Lenz's most recent external collaborations, aggregated at the country level.

Hotspot


The research topics in which Alexander Lenz is active.

Publications


Featured research published by Alexander Lenz.


IEEE Transactions on Systems, Man, and Cybernetics, Part B (Cybernetics) | 2009

Cerebellar-Inspired Adaptive Control of a Robot Eye Actuated by Pneumatic Artificial Muscles

Alexander Lenz; Sean R. Anderson; Anthony G. Pipe; Chris Melhuish; Paul Dean; John Porrill

In this paper, a model of cerebellar function is implemented and evaluated in the control of a robot eye actuated by pneumatic artificial muscles. The investigated control problem is stabilization of the visual image in response to disturbances. This is analogous to the vestibuloocular reflex (VOR) in humans. The cerebellar model is structurally based on the adaptive filter, and the learning rule is computationally analogous to least-mean squares, where parameter adaptation at the parallel fiber/Purkinje cell synapse is driven by the correlation of the sensory error signal (carried by the climbing fiber) and the motor command signal. Convergence of the algorithm is first analyzed in simulation on a model of the robot and then tested online in both one and two degrees of freedom. The results show that this model of neural function successfully works on a real-world problem, providing empirical evidence for validating: 1) the generic cerebellar learning algorithm; 2) the function of the cerebellum in the VOR; and 3) the signal transmission between functional neural components of the VOR.
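The learning rule sketched in this abstract (parallel-fibre weights adapted by correlating the climbing-fibre error signal with the filter inputs) is a decorrelation variant of least-mean-squares. The Python fragment below is only a generic illustration of that rule under assumed signal names and gains; it is not the controller implemented in the paper.

```python
import numpy as np

def lms_adaptive_filter_step(weights, pf_inputs, error, learning_rate=0.01):
    """One update of an LMS-style cerebellar adaptive filter.

    weights    : parallel-fibre/Purkinje-cell synaptic weights
    pf_inputs  : parallel-fibre signals (e.g. filtered copies of the motor command)
    error      : sensory error carried by the climbing fibre (e.g. retinal slip)
    Returns the filter output and the updated weights.
    """
    output = np.dot(weights, pf_inputs)                     # filter contribution to the motor command
    weights = weights - learning_rate * error * pf_inputs   # decorrelation (covariance) learning rule
    return output, weights

# Hypothetical usage: the filter learns to cancel an unknown linear disturbance.
rng = np.random.default_rng(0)
w_true = np.array([0.5, -0.2, 0.1, 0.3, -0.4])   # stand-in disturbance weights
w = np.zeros(5)
for _ in range(2000):
    p = rng.standard_normal(5)                   # stand-in parallel-fibre inputs
    e = float(np.dot(p, w_true + w))             # residual image slip after the filter acts
    _, w = lms_adaptive_filter_step(w, p, e)
# w now approximately cancels the disturbance (w is close to -w_true)
```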


IEEE Transactions on Autonomous Mental Development | 2012

Towards a Platform-Independent Cooperative Human Robot Interaction System: III. An Architecture for Learning and Executing Actions and Shared Plans

Stéphane Lallée; Ugo Pattacini; Séverin Lemaignan; Alexander Lenz; Chris Melhuish; Lorenzo Natale; Sergey Skachek; Katharina Hamann; Jasmin Steinwender; Emrah Akin Sisbot; Giorgio Metta; Julien Guitton; Rachid Alami; Matthieu Warnier; Tony Pipe; Felix Warneken; Peter Ford Dominey

Robots should be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real-time. An important aspect of the robot's behavior will be the ability to acquire new knowledge of the cooperative tasks by observing and interacting with humans. The current research addresses this challenge. We present results from a cooperative human-robot interaction system that has been specifically developed for portability between different humanoid platforms via abstraction layers at the perceptual and motor interfaces. In the perceptual domain, the resulting system is demonstrated to learn to recognize objects and to recognize actions as sequences of perceptual primitives, and to transfer this learning, and recognition, between different robotic platforms. For execution, composite actions and plans are shown to be learnt on one robot and executed successfully on a different one. Most importantly, the system provides the ability to link actions into shared plans that form the basis of human-robot cooperation, applying principles from human cognitive development to the domain of robot cognitive systems.
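The action hierarchy described here can be pictured as composite actions built from sequences of atomic actions, which in turn slot into shared plans whose steps are assigned to either the human or the robot. The sketch below only illustrates that structure with assumed class and action names; it is not the authors' architecture.

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class AtomicAction:
    name: str                                     # e.g. "grasp", "move", "release"
    arguments: List[str] = field(default_factory=list)

@dataclass
class CompositeAction:
    name: str                                     # learned label for a sequence of atomic actions
    steps: List[AtomicAction] = field(default_factory=list)

@dataclass
class SharedPlan:
    name: str
    steps: List[tuple] = field(default_factory=list)   # (agent, action) pairs

# Hypothetical example: a cooperative "tidy up" plan split between human and robot.
put_away = CompositeAction("put-toy-in-box", [
    AtomicAction("grasp", ["toy"]),
    AtomicAction("move", ["toy", "box"]),
    AtomicAction("release", ["toy"]),
])
plan = SharedPlan("tidy-up", [("human", AtomicAction("point", ["toy"])),
                              ("robot", put_away)])
```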


Autonomous Robots | 2014

A variable compliance, soft gripper

Maria Elena Giannaccini; Ioannis Georgilas; I. Horsfield; B. H. P. M. Peiris; Alexander Lenz; Anthony G. Pipe; Sanja Dogramadzi

Autonomous grasping is an important but challenging task and has therefore been intensively addressed by the robotics community. One of the important issues is the ability of the grasping device to accommodate varying object shapes in order to form a stable, multi-point grasp. Particularly in the human environment, where robots are faced with a vast set of objects varying in shape and size, a versatile grasping device is highly desirable. Solutions to this problem have often involved discrete continuum structures that typically comprise compliant sections interconnected with mechanically rigid parts. Such devices require more complex control and planning of the grasping action than intrinsically compliant structures, which passively adapt to objects with complex shapes. In this paper, we present a low-cost, soft cable-driven gripper, featuring no stiff sections, which is able to adapt to a wide range of objects due to its entirely soft structure. Its versatility is demonstrated in several experiments. In addition, we also show how its compliance can be passively varied to ensure a compliant but also stable and safe grasp.


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2010

Towards a platform-independent cooperative human-robot interaction system: I. Perception

Stéphane Lallée; Séverin Lemaignan; Alexander Lenz; Chris Melhuish; Lorenzo Natale; Sergey Skachek; Tijn van Der Zant; Felix Warneken; Peter Ford Dominey

One of the long term objectives of robotics and artificial cognitive systems is that robots will increasingly be capable of interacting in a cooperative and adaptive manner with their human counterparts in open-ended tasks that can change in real-time. In such situations, an important aspect of the robot behavior will be the ability to acquire new knowledge of the cooperative tasks by observing humans. At least two significant challenges can be identified in this context. The first challenge concerns development of methods to allow the characterization of human actions such that robotic systems can observe and learn new actions, and more complex behaviors made up of those actions. The second challenge is associated with the immense heterogeneity and diversity of robots and their perceptual and motor systems. The associated question is whether the identified methods for action perception can be generalized across the different perceptual systems inherent to distinct robot platforms. The current research addresses these two challenges. We present results from a cooperative human-robot interaction system that has been specifically developed for portability between different humanoid platforms. Within this architecture, the physical details of the perceptual system (e.g. video camera vs IR video with reflecting markers) are encapsulated at the lowest level. Actions are then automatically characterized in terms of perceptual primitives related to motion, contact and visibility. The resulting system is demonstrated to perform robust object and action learning and recognition on two distinct robotic platforms. Perhaps most interestingly, we demonstrate that knowledge acquired about action recognition with one robot can be directly imported and successfully used on a second distinct robot platform for action recognition. This will have interesting implications for the accumulation of shared knowledge between distinct heterogeneous robotic systems.
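A rough way to see why the learned knowledge transfers between platforms is that an action is stored as an ordered sequence of abstract perceptual primitives (motion, contact, visibility) rather than as raw sensor data. The following sketch illustrates such an encoding and a naive matching step; the primitive set and matching rule are assumptions for illustration, not the paper's implementation.

```python
from typing import List, Tuple

# An action is represented as an ordered sequence of perceptual primitives,
# each a (primitive, object) pair, independent of which sensor produced it.
Primitive = Tuple[str, str]          # e.g. ("moving", "cup"), ("contact", "table")

def recognize(observed: List[Primitive], library: dict) -> str:
    """Return the name of the learned action whose primitive sequence matches."""
    for name, template in library.items():
        if template == observed:
            return name
    return "unknown"

# Hypothetical learned library, portable between robots because it only
# refers to abstract primitives, not raw camera or marker data.
library = {
    "put-down": [("moving", "cup"), ("contact", "table"), ("still", "cup")],
    "pick-up":  [("contact", "cup"), ("moving", "cup"), ("visible", "cup")],
}

print(recognize([("moving", "cup"), ("contact", "table"), ("still", "cup")], library))  # put-down
```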


IEEE-RAS International Conference on Humanoid Robots | 2010

The BERT2 infrastructure: An integrated system for the study of human-robot interaction

Alexander Lenz; Sergey Skachek; Katharina Hamann; Jasmin Steinwender; Anthony G. Pipe; Chris Melhuish

Bristol Elumotion Robot Torso Version 2 (BERT2) is a humanoid robot currently in development at Bristol Robotics Laboratory (BRL). In this paper we present the current state of development and demonstrate how the integration of several advanced subsystems (of commercial and non-commercial nature) within a heterogeneous computing infrastructure enables us to construct a unique platform ideally suited to investigate complex human-robot interaction (HRI). We particularly focus on two important domains of non-verbal communication, namely gaze and pointing gestures in a real-world 3D setting and outline our thinking in terms of safety, ambiguities and further experimental work.


Proceedings of the FIRA RoboWorld Congress 2009 on Advances in Robotics | 2009

Robotic Implementation of Realistic Reaching Motion Using a Sliding Mode/Operational Space Controller

Adam Spiers; Guido Herrmann; Chris Melhuish; Tony Pipe; Alexander Lenz

It has been shown that a task-level controller with minimal-effort posture control produces human-like motion in simulation. This control approach is based on the dynamic model of a human skeletal system superimposed with realistic muscle-like actuators whose effort is minimised. In practical application, there is often a degree of error between the dynamic model of a system used for controller derivation and the actual dynamics of the system. We present a practical application of the task-level control framework with simplified posture control in order to produce life-like and compliant reaching motions for a redundant task. The addition of a sliding mode controller improves the performance of the physical robot by compensating for unknown parametric and dynamic disturbances without compromising the human-like posture.
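As a textbook illustration of how a sliding-mode term can be layered on top of a nominal model-based command to absorb parametric and dynamic model error, consider a sliding surface s = e_dot + lambda * e and the boundary-layer term -K * sat(s / phi). The gains and signal names below are assumptions; this is not the controller derived in the paper.

```python
import numpy as np

def sliding_mode_term(e, e_dot, lam=5.0, K=2.0, phi=0.05):
    """Robustifying term added to a nominal (model-based) task-space command.

    e, e_dot : task-space position and velocity error
    lam      : slope of the sliding surface s = e_dot + lam * e
    K        : switching gain (bounds the tolerated model/disturbance error)
    phi      : boundary-layer width; sat() replaces sign() to reduce chattering
    """
    s = e_dot + lam * e
    sat = np.clip(s / phi, -1.0, 1.0)
    return -K * sat

# Usage: total command = nominal model-based command + sliding-mode correction.
u_nominal = 0.8                      # stand-in output of the operational-space controller
u = u_nominal + sliding_mode_term(e=0.02, e_dot=-0.1)
```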


Conference Towards Autonomous Robotic Systems (TAROS) | 2011

Towards safe human-robot interaction

Elena Corina Grigore; Kerstin Eder; Alexander Lenz; Sergey Skachek; Anthony G. Pipe; Chris Melhuish

The development of human-assistive robots challenges engineering and introduces new ethical and legal issues. One fundamental concern is whether human-assistive robots can be trusted. Essential components of trustworthiness are usefulness and safety; both have to be demonstrated before such robots could stand a chance of passing product certification. This paper describes the setup of an environment to investigate safety and liveness aspects in the context of human-robot interaction. We present first insights into setting up and testing a human-robot interaction system in which the role of the robot is that of serving drinks to a human. More specifically, we use this system to investigate when the robot should release the drink such that the action is both safe and useful. We briefly outline follow-on research that uses the safety and liveness properties of this scenario as a specification.
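The release decision described above can be read as a guard that must establish both a safety condition and a usefulness (liveness) condition before the robot lets go of the drink. The check below is a purely illustrative sketch with made-up signals and thresholds; the actual properties and sensing used in the paper differ.

```python
def ready_to_release(grip_force_n: float, robot_holding: bool, human_pulling: bool) -> bool:
    """Illustrative handover guard: release only when it is both safe and useful.

    Safe   : the human is supporting the cup (pull/grip force above a threshold).
    Useful : the robot is still holding, so releasing actually transfers the object.
    """
    SAFE_GRIP_THRESHOLD_N = 2.0                       # hypothetical threshold
    safe = human_pulling and grip_force_n > SAFE_GRIP_THRESHOLD_N
    useful = robot_holding
    return safe and useful
```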


International Journal of Humanoid Robotics | 2014

Compliance Control and Human–Robot Interaction: Part II — Experimental Examples

Said Ghani Khan; Guido Herrmann; Alexander Lenz; Mubarak Al Grafi; Tony Pipe; Chris Melhuish

Compliance control is highly relevant to human safety in human–robot interaction (HRI). This paper presents multi-dimensional compliance control of a humanoid robot arm. A dynamic model-free adaptive controller with an anti-windup compensator is implemented on four degrees of freedom (DOF) of a humanoid robot arm. The paper is intended to complement the associated review paper on compliance control. The scheme is a model reference adaptive compliance scheme which employs end-effector forces (measured via joint torque sensors) as feedback. The robot's body-own torques are separated from external torques via a simple but effective algorithm. In addition, an experiment on physical human-robot interaction is conducted, employing the above-mentioned adaptive compliance control along with a speech interface. The experiment focuses on passing an object (a cup) between a human and a robot. Compliance provides an immediate layer of safety in this HRI scenario by avoiding pushing, pulling or clamping and by minimizing the effect of collisions with the environment.
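One simple reading of the torque separation mentioned above is to subtract a model-predicted (inverse-dynamics) torque from the joint-torque-sensor reading and treat the remainder as the external interaction torque. The sketch below illustrates that idea with an assumed dynamics model; it is not the authors' specific algorithm.

```python
import numpy as np

def external_torque(measured_tau, q, dq, ddq, dynamics_model):
    """Estimate externally applied joint torque from a joint-torque-sensor reading.

    measured_tau   : torque reported by the joint torque sensors
    q, dq, ddq     : joint positions, velocities, accelerations
    dynamics_model : callable returning the torque the robot's own motion and
                     gravity would produce at this state (inverse dynamics)
    """
    body_own_tau = dynamics_model(q, dq, ddq)
    return np.asarray(measured_tau) - np.asarray(body_own_tau)

# Hypothetical usage with a stand-in gravity-only model for a single joint.
gravity_only = lambda q, dq, ddq: 9.81 * 0.5 * np.cos(q)   # m*g*l*cos(q) with assumed m*l = 0.5
tau_ext = external_torque(measured_tau=3.0, q=0.2, dq=0.0, ddq=0.0, dynamics_model=gravity_only)
```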


IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS) | 2012

When shared plans go wrong: From atomic- to composite actions and back

Alexander Lenz; Stéphane Lallée; Sergey Skachek; Anthony G. Pipe; Chris Melhuish; Peter Ford Dominey

As elaborate human-robot interaction capabilities continue to develop, humans will increasingly be in proximity with robots, and the management of the ongoing control in case of breakdown becomes increasingly important: taking care of what happens when cooperation goes wrong. The current research addresses three categories of breakdowns where cooperation can go wrong. In the first category, the human detects some type of problem and generates a self-issued stop signal, with a physical palm up posture. In the second category, the human becomes distracted, and physically changes his orientation away from the shared space of cooperation. In the final category that we investigate, the human becomes physically close to the robot such that safety limits are reached and detected by the robot. In each of these three cases, the robot cognitive system detects the failure via the perception of distinct physical states from motion capture: the hand up posture; change in head orientation; and physical distance reaching a minimum threshold. In each case the robot immediately halts the current action. Then, the system should recover appropriately. Each error type returns a specific code, allowing the Supervisor system to handle the specific type of error. Our cognitive system allows the robot to learn composite actions, as a sequence of atomic actions. These composite actions can then be composed into higher level plans. When a plan fails at the level of a composite action, the recovery method is not trivial: should recovery take place at the level of the composite action, or the actual atomic action which physically failed? As the best recovery may depend on the physical context, we expand the plan into atomic actions, and recover at this level, allowing the user to specify whether the action should be skipped or retried. We demonstrate that this system allows graceful recovery from three principal categories of interaction breakdown, and provides an invaluable mechanism for preserving the integrity of cooperative HRI.
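The recovery scheme can be pictured as follows: each breakdown category maps to a distinct error code, the composite-action plan is expanded into atomic actions, and the failed atomic action is either retried or skipped at the user's request. The error codes, names and policy in the sketch below are assumptions for illustration only.

```python
from enum import Enum
from typing import List

class Breakdown(Enum):
    NONE = 0
    HUMAN_STOP_GESTURE = 1     # palm-up stop signal
    HUMAN_DISTRACTED = 2       # head/torso turned away from the shared space
    SAFETY_DISTANCE = 3        # human closer than the safety threshold

def execute_plan(composite_actions: List[List[str]], run_atomic, ask_user) -> bool:
    """Expand composite actions into atomic ones and recover at the atomic level."""
    atomic_plan = [a for composite in composite_actions for a in composite]   # flatten the plan
    i = 0
    while i < len(atomic_plan):
        error = run_atomic(atomic_plan[i])           # returns a Breakdown code
        if error is Breakdown.NONE:
            i += 1
            continue
        # The robot has already halted inside run_atomic; recover at this atomic step.
        if ask_user(f"'{atomic_plan[i]}' failed with {error.name}; retry?"):
            continue                                  # retry the same atomic action
        i += 1                                        # otherwise skip it and move on
    return True
```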


FIRA RoboWorld Congress | 2011

Toward Safe Human Robot Interaction: Integration of Compliance Control, an Anthropomorphic Hand and Verbal Communication

Said Ghani Khan; Alexander Lenz; Guido Herrmann; Tony Pipe; Chris Melhuish

In this paper an integrated system for human-robot interaction is presented. It is demonstrated that safety features in human-robot interaction can be engineered by combining a robotic arm equipped with a compliant controller, an anthropomorphic robot hand and a spoken-language communication system. A simplified human-robot interaction scenario, based on a typical care-robot situation, is used to show that safety can be enhanced by monitoring torques and motor currents to detect contact with the environment. Furthermore, spoken language is utilised to resolve potentially dangerous contact situations.
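A minimal way to picture the torque and motor-current monitoring is a threshold test on the deviation between expected and measured current: a large deviation is taken as contact with the environment, which the spoken-language interface can then help resolve. The signal names and threshold below are assumptions, not the paper's values.

```python
def contact_detected(measured_current_a: float, expected_current_a: float,
                     threshold_a: float = 0.5) -> bool:
    """Flag contact when the motor current deviates from its expected value.

    measured_current_a : current drawn by the joint motor (A)
    expected_current_a : current predicted for the commanded, unobstructed motion (A)
    threshold_a        : hypothetical deviation threshold indicating external contact
    """
    return abs(measured_current_a - expected_current_a) > threshold_a

# Usage: during a hand-over, an unexpected rise in current suggests the human
# has taken hold of the cup, and the verbal interface can confirm before releasing.
if contact_detected(measured_current_a=1.4, expected_current_a=0.7):
    print("Possible contact: ask the user to confirm the hand-over.")
```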

Collaboration


An overview of Alexander Lenz's top co-authors.

Chris Melhuish, University of the West of England
Anthony G. Pipe, University of the West of England
Sergey Skachek, University of the West of England
Tony Pipe, University of the West of England
Sanja Dogramadzi, University of the West of England
Lorenzo Natale, Istituto Italiano di Tecnologia