Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Samantha V. Adams is active.

Publication


Featured research published by Samantha V. Adams.


International Joint Conference on Neural Networks | 2016

Diverse, noisy and parallel: a New Spiking Neural Network approach for humanoid robot control

Ricardo de Azambuja; Angelo Cangelosi; Samantha V. Adams

How exactly our brain works is still an open question, but one thing seems to be clear: biological neural systems are computationally powerful, robust and noisy. Using the Reservoir Computing paradigm based on Spiking Neural Networks, also known as Liquid State Machines, we present results from a novel approach where diverse and noisy parallel reservoirs, totalling 3,000 modelled neurons, work together receiving the same averaged feedback. Inspired by the ideas of action learning and embodiment we use the safe and flexible industrial robot BAXTER in our experiments. The robot was taught to draw three different 2D shapes on top of a desk using a total of four joints. Together with the parallel approach, the same basic system was implemented in a serial way to compare it with our new method. The results show our parallel approach enables BAXTER to produce the trajectories to draw the learned shapes more accurately than the traditional serial one.
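The parallel-reservoir idea described above can be sketched in a few lines. This is a hypothetical, rate-based simplification in NumPy (the paper uses spiking Liquid State Machines, not tanh units): several diverse, noisy reservoirs receive the same input, and their readout outputs are averaged into a single command. All names and parameter values here are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

rng = np.random.default_rng(0)

class Reservoir:
    """One fixed random recurrent pool with per-neuron noise."""
    def __init__(self, n_in, n_res, noise=0.05):
        self.W_in = rng.normal(0, 1, (n_res, n_in))
        W = rng.normal(0, 1, (n_res, n_res))
        # Scale the recurrent weights to a spectral radius below 1
        self.W = W * 0.9 / max(abs(np.linalg.eigvals(W)))
        self.noise = noise
        self.x = np.zeros(n_res)

    def step(self, u):
        pre = self.W_in @ u + self.W @ self.x
        pre += rng.normal(0, self.noise, pre.shape)  # noisy dynamics
        self.x = np.tanh(pre)
        return self.x

def parallel_readout(reservoirs, readouts, u):
    # Every reservoir receives the same input; readouts are averaged.
    outs = [W_out @ r.step(u) for r, W_out in zip(reservoirs, readouts)]
    return np.mean(outs, axis=0)

n_in, n_res, n_out, n_pools = 3, 100, 4, 6  # e.g. four joint commands
reservoirs = [Reservoir(n_in, n_res) for _ in range(n_pools)]
readouts = [rng.normal(0, 0.1, (n_out, n_res)) for _ in range(n_pools)]

u = np.array([1.0, 0.0, -1.0])
y = parallel_readout(reservoirs, readouts, u)
print(y.shape)  # one averaged command per controlled joint
```

Averaging the readouts is what lets individual noisy pools disagree while the combined trajectory stays smooth, which is the intuition behind the parallel configuration outperforming a single serial reservoir.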


International Conference on Neural Information Processing | 2014

Towards Real-World Neurorobotics: Integrated Neuromorphic Visual Attention

Samantha V. Adams; Alexander D. Rast; Cameron Patterson; Francesco Galluppi; Kevin Brohan; José Antonio Pérez-Carrasco; Thomas Wennekers; Stephen B. Furber; Angelo Cangelosi

Neuromorphic hardware and cognitive robots seem like an obvious fit, yet progress to date has been frustrated by a lack of tangible progress in achieving useful real-world behaviour. System limitations, in particular the simple and usually proprietary nature of neuromorphic and robotic platforms, have often been the fundamental barrier. Here we present an integration of a mature “neuromimetic” chip, SpiNNaker, with the humanoid iCub robot using a direct AER (address-event representation) interface that overcomes the need for complex proprietary protocols by sending information as UDP-encoded spikes over an Ethernet link. Using an existing neural model devised for visual object selection, we enable the robot to perform a real-world task: fixating attention upon a selected stimulus. Results demonstrate the effectiveness of the interface and model in controlling the robot towards stimulus-specific object selection. Using SpiNNaker as an embeddable neuromorphic device illustrates the importance of two design features in a prospective neurorobot: universal configurability that allows the chip to be conformed to the requirements of the robot rather than the other way around, and standard interfaces that eliminate difficult low-level issues of connectors, cabling, signal voltages, and protocols. While this study is only a building block towards that goal, the iCub-SpiNNaker system demonstrates a path towards meaningful behaviour in robots controlled by neural network chips.


Scientific Reports | 2015

A Computational Model of Innate Directional Selectivity Refined by Visual Experience.

Samantha V. Adams; Christopher M. Harris

The mammalian visual system has been extensively studied since Hubel and Wiesel’s work on cortical feature maps in the 1960s. Feature maps representing the cortical neurons’ ocular dominance, orientation and direction preferences have been well explored experimentally and computationally. The predominant view has been that direction selectivity (DS) in particular is a feature entirely dependent upon visual experience and as such does not exist prior to eye opening (EO). However, recent experimental work has shown that there is in fact a DS bias already present at EO. In the current work we use a computational model to reproduce the main results of this experimental work and show that the DS bias present at EO could arise purely from the cortical architecture, without any explicit coding for DS and prior to any self-organising process facilitated by spontaneous activity or training. We explore how this latent DS (and its corresponding cortical map) is refined by training, and show that the time-course of development exhibits features similar to those seen in the experimental study. In particular we show that the specific cortical connectivity, or ‘proto-architecture’, is required for DS to mature rapidly and correctly with visual experience.


PLOS ONE | 2014

A proto-architecture for innate directionally selective visual maps.

Samantha V. Adams; Christopher M. Harris

Self-organizing artificial neural networks are a popular tool for studying visual system development, in particular the cortical feature maps present in real systems that represent properties such as ocular dominance (OD), orientation-selectivity (OR) and direction selectivity (DS). They are also potentially useful in artificial systems, for example robotics, where the ability to extract and learn features from the environment in an unsupervised way is important. In this computational study we explore a DS map that is already latent in a simple artificial network. This latent selectivity arises purely from the cortical architecture without any explicit coding for DS and prior to any self-organising process facilitated by spontaneous activity or training. We find DS maps with local patchy regions that exhibit features similar to maps derived experimentally and from previous modeling studies. We explore the consequences of changes to the afferent and lateral connectivity to establish the key features of this proto-architecture that support DS.


2014 IEEE Symposium on Computational Intelligence, Cognitive Algorithms, Mind, and Brain (CCMB) | 2014

Learning visual-motor Cell Assemblies for the iCub robot using a neuroanatomically grounded neural network

Samantha V. Adams; Thomas Wennekers; Angelo Cangelosi; Max Garagnani; Friedemann Pulvermüller

In this work we describe how an existing neural model for learning Cell Assemblies (CAs) across multiple neuroanatomical brain areas has been integrated with a humanoid robot simulation to explore the learning of associations of visual and motor modalities. The results show that robust CAs are learned, enabling pattern completion to select a correct motor response when only visual input is presented. We also show that, with some parameter tuning and the pre-processing of more realistic patterns taken from images of real objects and robot poses, the network can act as a controller for the robot in visuo-motor association tasks. This provides the basis for further neurorobotic experiments on grounded language learning.


Robotics and Autonomous Systems | 2018

Visual attention and object naming in humanoid robots using a bio-inspired spiking neural network

Daniel Hernández García; Samantha V. Adams; Alexander D. Rast; Thomas Wennekers; Steve B. Furber; Angelo Cangelosi

Recent advances in behavioural and computational neuroscience, cognitive robotics, and in the hardware implementation of large-scale neural networks, provide the opportunity for an accelerated understanding of brain functions and for the design of interactive robotic systems based on brain-inspired control systems. This is especially the case in the domain of action and language learning, given the significant scientific and technological developments in this field. In this work we describe how a neuroanatomically grounded spiking neural network for visual attention has been extended with a word learning capability and integrated with the iCub humanoid robot to demonstrate attention-led object naming. Experiments were carried out with both a simulated and a real iCub robot platform with successful results. The iCub robot is capable of associating a label to an object with a ‘preferred’ orientation when visual and word stimuli are presented concurrently in the scene, as well as attending to said object, thus naming it. After learning is complete, the name of the object can be recalled successfully when only the visual input is present, even when the object has been moved from its original position or when other objects are present as distractors.


International Conference on Neural Information Processing | 2016

Graceful Degradation Under Noise on Brain Inspired Robot Controllers

Ricardo de Azambuja; Frederico B. Klein; Martin F. Stoelen; Samantha V. Adams; Angelo Cangelosi

How can we build robot controllers that are able to work under harsh conditions without experiencing catastrophic failures? As seen in the recent Fukushima nuclear disaster, standard robots break down when exposed to high-radiation environments. Here we present the results from two arrangements of Spiking Neural Networks, based on the Liquid State Machine (LSM) framework, that were able to gracefully degrade under the effects of a noisy current injected directly into each simulated neuron. These noisy currents can be seen, in a simplified way, as the consequences of exposure to non-destructive radiation. The results show that not only can the systems withstand noise, but one of the configurations, the Modular Parallel LSM, actually improved its results within a certain range as the noise levels were increased. The robot controllers implemented in this work are also suitable to run on modern, power-efficient neuromorphic hardware such as SpiNNaker.


Toward Robotic Socially Believable Behaving Systems - Volume I | 2016

Social Development of Artificial Cognition

Tony Belpaeme; Samantha V. Adams; Joachim de Greeff; Alessandro G. Di Nuovo; Anthony F. Morse; Angelo Cangelosi

Recent years have seen a growing interest in applying insights from developmental psychology to build artificial intelligence and robotic systems. This endeavour, called developmental robotics, not only is a novel method of creating artificially intelligent systems, but also offers a new perspective on the development of human cognition. While once cognition was thought to be the product of the embodied brain, we now know that natural and artificial cognition results from the interplay between an adaptive brain, a growing body, the physical environment and a responsive social environment. This chapter gives three examples of how humanoid robots are used to unveil aspects of development, and how we can use development and learning to build better robots. We focus on the domains of word-meaning acquisition, abstract concept acquisition and number acquisition, and show that cognition needs embodiment and a social environment to develop. In addition, we argue that Spiking Neural Networks offer great potential for the implementation of artificial cognition on robots.


International Conference on Neural Information Processing | 2015

Transport-Independent Protocols for Universal AER Communications

Alexander D. Rast; Alan B. Stokes; Sergio Davies; Samantha V. Adams; Himanshu Akolkar; David R. Lester; Chiara Bartolozzi; Angelo Cangelosi; Steve B. Furber

The emergence of Address-Event Representation (AER) as a general communications method across a large variety of neural devices suggests that they might be made interoperable. If there were a standard AER interface, systems could communicate using native AER signalling, allowing the construction of large-scale, real-time, heterogeneous neural systems. We propose a transport-agnostic AER protocol that permits direct bidirectional event communications between systems over Ethernet, and demonstrate practical implementations that connect a neuromimetic chip: SpiNNaker, both to standard host PCs and to real-time robotic systems. The protocol specifies a header and packet format that supports a variety of different possible packet types while coping with questions of data alignment, time sequencing, and packet compression. Such a model creates a flexible solution either for real-time communications between neural devices or for live spike I/O and visualisation in a host PC. With its standard physical layer and flexible protocol, the specification provides a prototype for AER protocol standardisation that is at once compatible with legacy systems and expressive enough for future very-large-scale neural systems.
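The core AER idea, sending neural events as (address, timestamp) pairs over a commodity transport, can be illustrated with a small sketch. The header and field layout below are purely hypothetical: the paper defines its own standardized packet format, which this does not reproduce.

```python
import socket
import struct

# Hypothetical AER-over-UDP sketch: pack (neuron_id, timestamp_us)
# pairs into a payload and send them as one datagram. The 4-byte
# header (version, event count) is illustrative, not the paper's spec.

def pack_events(events):
    """Pack a list of (neuron_id, timestamp_us) pairs, little-endian."""
    header = struct.pack('<HH', 1, len(events))
    body = b''.join(struct.pack('<II', nid, ts) for nid, ts in events)
    return header + body

def unpack_events(payload):
    """Recover the (neuron_id, timestamp_us) pairs from a payload."""
    version, count = struct.unpack_from('<HH', payload, 0)
    return [struct.unpack_from('<II', payload, 4 + 8 * i)
            for i in range(count)]

events = [(42, 1000), (7, 1250), (42, 1300)]
payload = pack_events(events)

# Fire the datagram at a (hypothetical) board; UDP needs no listener.
sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.sendto(payload, ('127.0.0.1', 17893))  # address/port illustrative
sock.close()

print(unpack_events(payload))  # round-trips back to the event list
```

Because UDP datagrams carry no neural semantics of their own, everything rides in the payload format, which is why the protocol's header design (packet types, alignment, time sequencing, compression) is the substance of the standardization effort.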


IEEE Transactions on Neural Networks | 2018

Behavioral Learning in a Cognitive Neuromorphic Robot: An Integrative Approach

Alexander D. Rast; Samantha V. Adams; Simon Davidson; Sergio Davies; Michael Hopkins; Andrew Rowley; Alan B. Stokes; Thomas Wennekers; Steve B. Furber; Angelo Cangelosi

We present here a learning system using the iCub humanoid robot and the SpiNNaker neuromorphic chip to solve the real-world task of object-specific attention. Integrating spiking neural networks with robots introduces considerable complexity for questionable benefit if the objective is simply task performance. But, we suggest, in a cognitive robotics context, where the goal is understanding how to compute, such an approach may yield useful insights to neural architecture as well as learned behavior, especially if dedicated neural hardware is available. Recent advances in cognitive robotics and neuromorphic processing now make such systems possible. Using a scalable, structured, modular approach, we build a spiking neural network where the effects and impact of learning can be predicted and tested, and the network can be scaled or extended to new tasks automatically. We introduce several enhancements to a basic network and show how they can be used to direct performance toward behaviorally relevant goals. Results show that using a simple classical spike-timing-dependent plasticity (STDP) rule on selected connections, we can get the robot (and network) to progress from poor task-specific performance to good performance. Behaviorally relevant STDP appears to contribute strongly to positive learning: “do this” but less to negative learning: “don’t do that.” In addition, we observe that the effect of structural enhancements tends to be cumulative. The overall system suggests that it is by being able to exploit combinations of effects, rather than any one effect or property in isolation, that spiking networks can achieve compelling, task-relevant behavior.
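The classical pair-based STDP rule mentioned above has a compact form: a presynaptic spike shortly before a postsynaptic spike potentiates the synapse, the reverse order depresses it, each exponentially weighted by the spike-time gap. The sketch below uses illustrative parameter values, not those of the paper.

```python
import numpy as np

# Pair-based STDP: dw = +A_plus * exp(-dt/tau_plus) if pre precedes
# post (dt = t_post - t_pre > 0), and -A_minus * exp(dt/tau_minus)
# if post precedes pre. Amplitudes and time constants are illustrative.

A_PLUS, A_MINUS = 0.01, 0.012     # potentiation / depression amplitudes
TAU_PLUS, TAU_MINUS = 20.0, 20.0  # time constants (ms)

def stdp_dw(t_pre, t_post):
    """Weight change for a single pre/post spike pair (times in ms)."""
    dt = t_post - t_pre
    if dt > 0:    # pre before post: potentiate
        return A_PLUS * np.exp(-dt / TAU_PLUS)
    elif dt < 0:  # post before pre: depress
        return -A_MINUS * np.exp(dt / TAU_MINUS)
    return 0.0

w = 0.5
for t_pre, t_post in [(10.0, 15.0), (40.0, 38.0), (60.0, 61.0)]:
    w = np.clip(w + stdp_dw(t_pre, t_post), 0.0, 1.0)  # bounded weight

print(w)
```

Applying such a rule only on selected connections, as the paper does, confines plasticity to the pathways whose timing statistics are behaviorally relevant, which matches the observation that STDP contributes more to "do this" than to "don't do that" learning.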

Collaboration


Dive into Samantha V. Adams's collaborations.

Top Co-Authors

Alan B. Stokes

University of Manchester


Sergio Davies

University of Manchester
