
Publications


Featured research published by Anthony G. Pipe.


IEEE Transactions on Neural Networks | 2007

Implementing Spiking Neural Networks for Real-Time Signal-Processing and Control Applications: A Model-Validated FPGA Approach

Martin J. Pearson; Anthony G. Pipe; Benjamin Mitchinson; Kevin N. Gurney; Chris Melhuish; Ian Gilhespy; Mokhtar Nibouche

In this paper, we present two versions of a hardware processing architecture for modeling large networks of leaky-integrate-and-fire (LIF) neurons; the second version provides performance enhancing features relative to the first. Both versions of the architecture use fixed-point arithmetic and have been implemented using a single field-programmable gate array (FPGA). They have successfully simulated networks of over 1000 neurons configured using biologically plausible models of mammalian neural systems. The neuroprocessor has been designed to be employed primarily for use on mobile robotic vehicles, allowing bio-inspired neural processing models to be integrated directly into real-world control environments. When a neuroprocessor has been designed to act as part of the closed-loop system of a feedback controller, it is imperative to maintain strict real-time performance at all times, in order to maintain integrity of the control system. This resulted in the reevaluation of some of the architectural features of existing hardware for biologically plausible neural networks (NNs). In addition, we describe a development system for rapidly porting an underlying model (based on floating-point arithmetic) to the fixed-point representation of the FPGA-based neuroprocessor, thereby allowing validation of the hardware architecture. The developmental system environment facilitates the cooperation of computational neuroscientists and engineers working on embodied (robotic) systems with neural controllers, as demonstrated by our own experience on the Whiskerbot project, in which we developed models of the rodent whisker sensory system.
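A minimal sketch (not the paper's implementation) of the kind of float-to-fixed port the development system validates: a leaky-integrate-and-fire neuron update in floating point next to a Q8.8 fixed-point version, driven with the same input so their spike trains can be compared. All constants (decay, threshold, number format) are illustrative assumptions.

```python
Q = 8  # fractional bits: Q8.8 fixed-point format (assumed for illustration)

def to_fixed(x: float) -> int:
    return int(round(x * (1 << Q)))

def lif_float(v: float, i_in: float, decay: float = 0.9, thresh: float = 1.0):
    """One floating-point LIF step: leak, integrate, fire-and-reset."""
    v = v * decay + i_in
    if v >= thresh:
        return 0.0, True   # reset membrane potential on spike
    return v, False

def lif_fixed(v: int, i_in: int, decay: int = to_fixed(0.9), thresh: int = to_fixed(1.0)):
    """The same step in Q8.8 fixed point, as an FPGA datapath would compute it."""
    v = (v * decay) >> Q   # fixed-point multiply renormalises by 2^Q
    v += i_in
    if v >= thresh:
        return 0, True
    return v, False

# Drive both models with the same input train and compare spike times.
vf, vx = 0.0, 0
spikes_f, spikes_x = [], []
for t in range(50):
    i = 0.6 if t % 3 == 0 else 0.0   # periodic input current (illustrative)
    vf, sf = lif_float(vf, i)
    vx, sx = lif_fixed(vx, to_fixed(i))
    spikes_f.append(sf)
    spikes_x.append(sx)

print(spikes_f == spikes_x)  # spike trains agree when precision suffices
```

With 8 fractional bits the two models here produce identical spike trains; validating a port in practice means checking such agreement across the whole input regime of interest.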


IEEE Robotics & Automation Magazine | 2009

Whisking with robots

Tony J. Prescott; Martin J. Pearson; Benjamin Mitchinson; J.C.W. Sullivan; Anthony G. Pipe

This article summarizes some of the key features of the rat vibrissal system, including the actively controlled sweeping movements of the vibrissae known as whisking, and reviews the past and ongoing research aimed at replicating some of this functionality in biomimetic robots.


Adaptive Behavior | 2007

Whiskerbot: A Robotic Active Touch System Modeled on the Rat Whisker Sensory System

Martin J. Pearson; Anthony G. Pipe; Chris Melhuish; Benjamin Mitchinson; Tony J. Prescott

The Whiskerbot project is a collaborative project between robotics engineers, computational neuroscientists and ethologists, aiming to build a biologically inspired robotic implementation of the rodent whisker sensory system. The morphology and mechanics of the large whiskers (macro-vibrissae) have been modeled, as have the neural structures that constitute the rodent central nervous system responsible for macro-vibrissae sensory processing. There are two principal motivations for this project. First, by implementing an artificial whisker sensory system controlled using biologically plausible neural networks we hope to test existing models more thoroughly and develop new hypotheses for vibrissal sensory processing. Second, the sensory mode of tactile whiskers could be useful for general mobile robotic sensory deployment. In this article the robotic platform that has been built is detailed as well as some of the experiments that have been conducted to test the neural control algorithms and architectures inspired from neuroethological observations to mediate adaptive behaviors.


Proceedings of the Royal Society of London B: Biological Sciences | 2004

Empirically inspired simulated electro-mechanical model of the rat mystacial follicle-sinus complex

Ben Mitchinson; Kevin N. Gurney; Peter Redgrave; Chris Melhuish; Anthony G. Pipe; Martin J. Pearson; Ian Gilhespy; Tony J. Prescott

In whiskered animals, activity is evoked in the primary sensory afferent cells (trigeminal nerve) by mechanical stimulation of the whiskers. In some cell populations this activity is correlated well with continuous stimulus parameters such as whisker deflection magnitude, but in others it is observed to represent events such as whisker–stimulator contact or detachment. The transduction process is mediated by the mechanics of the whisker shaft and follicle–sinus complex (FSC), and the mechanics and electro–chemistry of mechanoreceptors within the FSC. An understanding of this transduction process and the nature of the primary neural codes generated is crucial for understanding more central sensory processing in the thalamus and cortex. However, the details of the peripheral processing are currently poorly understood. To overcome this deficiency in our knowledge, we constructed a simulated electro–mechanical model of the whisker–FSC–mechanoreceptor system in the rat and tested it against a variety of data drawn from the literature. The agreement was good enough to suggest that the model captures many of the key features of the peripheral whisker system in the rat.


Philosophical Transactions of the Royal Society B | 2011

Biomimetic vibrissal sensing for robots.

Martin J. Pearson; Ben Mitchinson; J. Charles Sullivan; Anthony G. Pipe; Tony J. Prescott

Active vibrissal touch can be used to replace or to supplement sensory systems such as computer vision and, therefore, improve the sensory capacity of mobile robots. This paper describes how arrays of whisker-like touch sensors have been incorporated onto mobile robot platforms taking inspiration from biology for their morphology and control. There were two motivations for this work: first, to build a physical platform on which to model, and therefore test, recent neuroethological hypotheses about vibrissal touch; second, to exploit the control strategies and morphology observed in the biological analogue to maximize the quality and quantity of tactile sensory information derived from the artificial whisker array. We describe the design of a new whiskered robot, Shrewbot, endowed with a biomimetic array of individually controlled whiskers and a neuroethologically inspired whisking pattern generation mechanism. We then present results showing how the morphology of the whisker array shapes the sensory surface surrounding the robot's head, and demonstrate the impact of active touch control on the sensory information that can be acquired by the robot. We show that adopting bio-inspired, low latency motor control of the rhythmic motion of the whiskers in response to contact-induced stimuli usefully constrains the sensory range, while also maximizing the number of whisker contacts. The robot experiments also demonstrate that the sensory consequences of active touch control can be usefully investigated in biomimetic robots.
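An illustrative sketch (not the Shrewbot controller) of contact-modulated whisking: a sinusoidal whisking pattern generator whose protraction is halted, with low latency, when a whisker reports contact, in the spirit of the contact-responsive control the paper investigates. The frequency, amplitude, and contact rule are assumptions for illustration.

```python
import math

def whisk_angle(t: float, freq: float = 8.0, amplitude: float = 40.0) -> float:
    """Free-whisking protraction angle (degrees) from a sinusoidal generator."""
    return amplitude * 0.5 * (1.0 - math.cos(2 * math.pi * freq * t))

def controlled_angle(t: float, contact_time, freq: float = 8.0,
                     amplitude: float = 40.0) -> float:
    """Freeze protraction at the angle reached when contact occurred."""
    if contact_time is not None and t >= contact_time:
        return whisk_angle(contact_time, freq, amplitude)
    return whisk_angle(t, freq, amplitude)

# A contact at t = 0.02 s halts protraction partway through the sweep,
# constraining the whisker's range while keeping it against the surface.
dt = 0.001
angles = [controlled_angle(k * dt, contact_time=0.02) for k in range(60)]
print(max(angles) < 40.0)  # contact limits the sweep below full amplitude
```

A real controller would also schedule the retraction phase and re-protract on the next whisk cycle; this sketch only shows the rapid-cessation idea.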


Human-Robot Interaction | 2010

Cooperative gestures: effective signaling for humanoid robots

Laurel D. Riek; Tal-Chen Rabinowitch; Paul Bremner; Anthony G. Pipe; Mike Fraser; Peter Robinson

Cooperative gestures are a key aspect of human-human pro-social interaction. Thus, it is reasonable to expect that endowing humanoid robots with the ability to use such gestures when interacting with humans would be useful. However, while people are used to responding to such gestures expressed by other humans, it is unclear how they might react to a robot making them. To explore this topic, we conducted a within-subjects, video-based laboratory experiment, measuring time to cooperate with a humanoid robot making interactional gestures. We manipulated the gesture type (beckon, give, shake hands), the gesture style (smooth, abrupt), and the gesture orientation (front, side). We also employed two measures of individual differences: negative attitudes toward robots (NARS) and human gesture decoding ability (DANVA2-POS). Our results show that people cooperate with abrupt gestures more quickly than smooth ones and front-oriented gestures more quickly than those made to the side, people's speed at decoding robot gestures is correlated with their ability to decode human gestures, and negative attitudes toward robots are strongly correlated with a decreased ability in decoding human gestures.


Autonomous Robots | 2009

Contact type dependency of texture classification in a whiskered mobile robot

Charles W. Fox; Benjamin Mitchinson; Martin J. Pearson; Anthony G. Pipe; Tony J. Prescott

Actuated artificial whiskers modeled on rat macrovibrissae can provide effective tactile sensor systems for autonomous robots. This article focuses on texture classification using artificial whiskers and addresses a limitation of previous studies, namely, their use of whisker deflection signals obtained under relatively constrained experimental conditions. Here we consider the classification of signals obtained from a whiskered robot required to explore different surface textures from a range of orientations and distances. This procedure resulted in a variety of deflection signals for any given texture. Using a standard Gaussian classifier we show, using both hand-picked features and ones derived from studies of rat vibrissal processing, that a robust rough-smooth discrimination is achievable without any knowledge of how the whisker interacts with the investigated object. On the other hand, finer discriminations appear to require knowledge of the target's relative position and/or of the manner in which the whisker contacts its surface.
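The rough-smooth discrimination approach can be sketched as a Gaussian classifier over features of the deflection signal: fit a Gaussian per texture class, then label new signals by maximum likelihood. The features below (deflection energy and a high-frequency roughness proxy) are stand-ins, not the hand-picked or vibrissa-inspired features used in the paper, and the signals are synthetic.

```python
import numpy as np

def features(signal: np.ndarray) -> np.ndarray:
    return np.array([np.mean(signal ** 2),               # deflection energy
                     np.mean(np.abs(np.diff(signal)))])  # roughness proxy

class GaussianClassifier:
    """Naive per-class Gaussian (diagonal covariance), maximum likelihood."""
    def fit(self, X, y):
        self.classes = sorted(set(y))
        self.stats = {c: (X[y == c].mean(0), X[y == c].std(0) + 1e-9)
                      for c in self.classes}
        return self

    def predict(self, x):
        def loglik(c):
            mu, sd = self.stats[c]
            return -np.sum(np.log(sd) + 0.5 * ((x - mu) / sd) ** 2)
        return max(self.classes, key=loglik)

rng = np.random.default_rng(0)
# Synthetic "rough" (high-frequency) vs "smooth" (low-frequency) deflections.
rough = [np.sin(np.linspace(0, 40, 200)) + 0.3 * rng.standard_normal(200)
         for _ in range(20)]
smooth = [np.sin(np.linspace(0, 4, 200)) + 0.05 * rng.standard_normal(200)
          for _ in range(20)]
X = np.array([features(s) for s in rough + smooth])
y = np.array(["rough"] * 20 + ["smooth"] * 20)

clf = GaussianClassifier().fit(X, y)
probe = np.sin(np.linspace(0, 40, 200)) + 0.3 * rng.standard_normal(200)
print(clf.predict(features(probe)))  # → rough
```

The paper's point survives even in this toy form: coarse rough-smooth separation needs only signal statistics, while finer texture distinctions would require contact geometry as extra inputs.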


Systems, Man and Cybernetics | 2009

Cerebellar-Inspired Adaptive Control of a Robot Eye Actuated by Pneumatic Artificial Muscles

Alexander Lenz; Sean R. Anderson; Anthony G. Pipe; Chris Melhuish; Paul Dean; John Porrill

In this paper, a model of cerebellar function is implemented and evaluated in the control of a robot eye actuated by pneumatic artificial muscles. The investigated control problem is stabilization of the visual image in response to disturbances. This is analogous to the vestibuloocular reflex (VOR) in humans. The cerebellar model is structurally based on the adaptive filter, and the learning rule is computationally analogous to least-mean squares, where parameter adaptation at the parallel fiber/Purkinje cell synapse is driven by the correlation of the sensory error signal (carried by the climbing fiber) and the motor command signal. Convergence of the algorithm is first analyzed in simulation on a model of the robot and then tested online in both one and two degrees of freedom. The results show that this model of neural function successfully works on a real-world problem, providing empirical evidence for validating: 1) the generic cerebellar learning algorithm; 2) the function of the cerebellum in the VOR; and 3) the signal transmission between functional neural components of the VOR.
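The adaptive-filter learning rule described above can be sketched as a least-mean-squares FIR filter: the weights play the role of parallel fiber/Purkinje cell synapses, and each update correlates the error signal (the climbing-fiber role) with the filter inputs. The plant, gains, and signals below are illustrative, not the robot-eye system from the paper.

```python
import numpy as np

rng = np.random.default_rng(1)
n_taps, lr = 8, 0.02
w = np.zeros(n_taps)                  # adaptive weights (synaptic efficacies)
target = rng.standard_normal(n_taps)  # unknown compensator to be learned

buf = np.zeros(n_taps)                # tap-delay line of recent inputs
errors = []
for t in range(3000):
    x = rng.standard_normal()         # motor-command / head-motion signal
    buf = np.roll(buf, 1)
    buf[0] = x
    desired = target @ buf            # compensation that would cancel the slip
    out = w @ buf                     # filter's current motor correction
    e = desired - out                 # residual error ("climbing fiber")
    w += lr * e * buf                 # LMS: correlate error with inputs
    errors.append(abs(e))

print(np.mean(errors[-100:]) < 0.05)  # error decays as the filter converges
```

Convergence here is guaranteed because the learning rate is well inside the LMS stability bound for unit-variance white input; the paper's contribution is showing that this kind of rule holds up on real pneumatic hardware.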


Autonomous Robots | 2014

A variable compliance, soft gripper

Maria Elena Giannaccini; Ioannis Georgilas; I. Horsfield; B. H. P. M. Peiris; Alexander Lenz; Anthony G. Pipe; Sanja Dogramadzi

Autonomous grasping is an important but challenging task and has therefore been intensively addressed by the robotics community. One of the important issues is the ability of the grasping device to accommodate varying object shapes in order to form a stable, multi-point grasp. Particularly in the human environment, where robots are faced with a vast set of objects varying in shape and size, a versatile grasping device is highly desirable. Solutions to this problem have often involved discrete continuum structures that typically comprise of compliant sections interconnected with mechanically rigid parts. Such devices require a more complex control and planning of the grasping action than intrinsically compliant structures which passively adapt to complex shapes objects. In this paper, we present a low-cost, soft cable-driven gripper, featuring no stiff sections, which is able to adapt to a wide range of objects due to its entirely soft structure. Its versatility is demonstrated in several experiments. In addition, we also show how its compliance can be passively varied to ensure a compliant but also stable and safe grasp.


Intelligent Robots and Systems | 2013

Joint action understanding improves robot-to-human object handover

Elena Corina Grigore; Kerstin Eder; Anthony G. Pipe; Chris Melhuish; Ute Leonards

The development of trustworthy human-assistive robots is a challenge that goes beyond the traditional boundaries of engineering. Essential components of trustworthiness are safety, predictability and usefulness. In this paper we demonstrate that the integration of joint action understanding from human-human interaction into the human-robot context can significantly improve the success rate of robot-to-human object handover tasks. We take a two layer approach. The first layer handles the physical aspects of the handover. The robot's decision to release the object is informed by a Hidden Markov Model that estimates the state of the handover. Inspired by human-human handover observations, we then introduce a higher-level cognitive layer that models behaviour characteristic for a human user in a handover situation. In particular, we focus on the inclusion of eye gaze/head orientation into the robot's decision making. Our results demonstrate that by integrating these non-verbal cues the success rate of robot-to-human handovers can be significantly improved, resulting in a more robust and therefore safer system.
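A hedged sketch of the state-estimation idea in the first layer: a small hidden Markov model over handover phases, updated with the standard forward algorithm as discrete observations arrive. The states, transition matrix, and observation model below are invented for illustration; the paper's actual model, sensing, and release logic differ.

```python
import numpy as np

states = ["approach", "contact", "load_transfer", "released"]
# Row-stochastic transitions: phases mostly persist, then advance in order.
A = np.array([[0.8, 0.2, 0.0, 0.0],
              [0.0, 0.7, 0.3, 0.0],
              [0.0, 0.0, 0.7, 0.3],
              [0.0, 0.0, 0.0, 1.0]])
# P(observation | state) for observations: 0 = no pull, 1 = pulling force.
B = np.array([[0.9, 0.1],
              [0.6, 0.4],
              [0.2, 0.8],
              [0.9, 0.1]])

belief = np.array([1.0, 0.0, 0.0, 0.0])  # start in "approach"

def forward_step(belief: np.ndarray, obs: int) -> np.ndarray:
    """One forward-algorithm update: predict, weight by likelihood, renormalise."""
    belief = (belief @ A) * B[:, obs]
    return belief / belief.sum()

for obs in [0, 0, 1, 1, 1]:   # the human starts pulling on the object
    belief = forward_step(belief, obs)

print(states[int(np.argmax(belief))])  # most probable phase: load_transfer
```

A release decision would then be gated on the belief in the load-transfer phase crossing a threshold, with the paper's cognitive layer adding gaze/head-orientation cues before committing.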

Collaboration


Top co-authors of Anthony G. Pipe:

Martin J. Pearson, University of the West of England
Brian Carse, University of the West of England
Gabriel Dragffy, University of the West of England
Mokhtar Nibouche, University of the West of England
Alan F. T. Winfield, University of the West of England
Alexander Lenz, University of the West of England