Publication


Featured research published by Yulia Sandamirskaya.


international conference on development and learning | 2010

Serial order in an acting system: A multidimensional dynamic neural fields implementation

Yulia Sandamirskaya; Gregor Schöner

Learning and generating serially ordered sequential behavior in a real, embodied agent situated in a partially unknown environment requires that noisy sensory information be used both to control appropriate motor actions and to determine that a particular action has been successfully terminated. While most current models do not address these conditions of embodied sequence generation, we have earlier proposed a neurally inspired model based on Dynamic Field Theory that enables sequences in which each action may take an unpredictable amount of time. Here we extend this earlier work to accommodate heterogeneous sets of actions. We show that a set of matching conditions-of-satisfaction can be used to stably represent the terminal condition of each action and to trigger the cascade of instabilities that switches the system from one stable state to the next. A robotic implementation on a vehicle with a camera and a simple robot arm demonstrates the stability of the resulting scheme.
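The stable states that Dynamic Field Theory builds on can be sketched with a one-dimensional Amari-type neural field: localized input plus local excitation and global inhibition produce a self-stabilized activation peak. This is an illustrative sketch only; the resting level, kernel widths, and gains below are assumed values, not parameters from the paper.

```python
import numpy as np

N = 101
x = np.arange(N)
tau, h, dt = 10.0, -2.0, 1.0          # time scale, resting level, Euler step

def f(u):
    # sigmoid output nonlinearity
    return 1.0 / (1.0 + np.exp(-4.0 * u))

# lateral interaction: local excitatory kernel plus global inhibition
d = np.arange(-10, 11)
w_exc = 0.5 * np.exp(-d**2 / (2 * 3.0**2))
g_inh = 0.1

# localized external input centered at x = 30
S = 5.0 * np.exp(-(x - 30)**2 / (2 * 3.0**2))

u = np.full(N, h)                      # field starts at the resting level
for _ in range(300):
    out = f(u)
    lateral = np.convolve(out, w_exc, mode="same") - g_inh * out.sum()
    u += dt / tau * (-u + h + S + lateral)

# a single peak forms at the input location; the rest of the field stays off
```

The peak is the field's stable state: it persists against noise in the input, and destabilizing it (e.g., through a condition-of-satisfaction signal) is what triggers a transition to the next element of a sequence.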


international conference on artificial neural networks | 2012

A dynamic field architecture for the generation of hierarchically organized sequences

Boris Durán; Yulia Sandamirskaya; Gregor Schöner

A dilemma arises when sequence generation is implemented on embodied autonomous agents. While achieving an individual action goal, the agent must be in a stable state to remain linked to fluctuating, time-varying sensory inputs. To transition to the next goal, the previous state must then be released from stability. A previously proposed neural dynamics solved this dilemma by inducing an instability when a condition of satisfaction signals that an action goal has been reached. The required structure of dynamic coupling limited the complexity and flexibility of sequence generation, however. We address this limitation by showing how the neural dynamics can be generalized to generate hierarchically structured behaviors. Directed couplings downward in the hierarchy initiate chunks of actions; directed couplings upward in the hierarchy signal their termination. We analyze the mathematical mechanisms and demonstrate the flexibility of the scheme in simulation.


Paladyn: Journal of Behavioral Robotics | 2015

Parsing of action sequences: A neural dynamics approach

David Lobato; Yulia Sandamirskaya; Mathis Richter; Gregor Schöner

Parsing of action sequences is the process of segmenting observed behavior into individual actions. In robotics, this process is critical for imitation learning from observation and for representing an observed behavior in a form that may be communicated to a human. In this paper, we develop a model for action parsing based on our understanding of principles of grounded cognitive processes, such as perceptual decision making, behavioral organization, and memory formation. We present a neural-dynamic architecture in which action sequences are parsed using a mathematical and conceptual framework for embodied cognition, the Dynamic Field Theory. In this framework, we introduce a novel mechanism that allows us to detect and memorize actions that are extended in time and parametrized by the target object of an action. The core properties of the architecture are demonstrated in a set of simple, proof-of-concept experiments.


Frontiers in Neurorobotics | 2017

Obstacle Avoidance and Target Acquisition for Robot Navigation Using a Mixed Signal Analog/Digital Neuromorphic Processing System

Moritz B. Milde; Hermann Blum; Alexander Dietmüller; Dora Sumislawska; Jörg Conradt; Giacomo Indiveri; Yulia Sandamirskaya

Neuromorphic hardware emulates the dynamics of biological neural networks in electronic circuits, offering an alternative to the von Neumann computing architecture that is low-power, inherently parallel, and event-driven. This hardware makes it possible to implement neural-network-based robotic controllers in an energy-efficient way with low latency, but requires solving the problem of device variability, characteristic of analog electronic circuits. In this work, we interfaced a mixed-signal analog-digital neuromorphic processor (ROLLS) to a neuromorphic dynamic vision sensor (DVS) mounted on a robotic vehicle and developed an autonomous neuromorphic agent that performs neurally inspired obstacle avoidance and target acquisition. We developed a neural network architecture that can cope with device variability and verified its robustness in different environmental situations, e.g., moving obstacles, a moving target, clutter, and poor lighting conditions. We demonstrate how this network, combined with the properties of the DVS, allows the robot to avoid obstacles using simple biologically inspired dynamics. We also show how a Dynamic Neural Field for target acquisition can be implemented in spiking neuromorphic hardware. This work demonstrates a working implementation of obstacle avoidance and target acquisition on mixed-signal analog/digital neuromorphic hardware.


Biological Cybernetics | 2017

Affective–associative two-process theory: a neurocomputational account of partial reinforcement extinction effects

Robert Lowe; Alexander Almér; Erik Billing; Yulia Sandamirskaya; Christian Balkenius

The partial reinforcement extinction effect (PREE) is an experimentally established phenomenon: behavioural response to a given stimulus is more persistent when previously inconsistently rewarded than when consistently rewarded. This phenomenon is, however, controversial in animal/human learning theory. Contradictory findings exist regarding when the PREE occurs. One body of research has found a within-subjects PREE, while another has found a within-subjects reversed PREE (RPREE). These opposing findings constitute what is considered the most important problem of the PREE for theoreticians to explain. Here, we provide a neurocomputational account of the PREE, which helps to reconcile these seemingly contradictory findings from within-subjects experimental conditions. The performance of our model demonstrates how omission expectancy, learned under low-probability reward, comes to control response choice following discontinuation of reward presentation (extinction). We find that a PREE will occur when multiple responses become controlled by omission expectation in extinction, but not when only one omission-mediated response is available. Our model exploits the affective states of reward acquisition and reward omission expectancy in order to differentially classify stimuli and differentially mediate response choice. We demonstrate that stimulus–response (retrospective) and stimulus–expectation–response (prospective) routes are required to provide a necessary and sufficient explanation of the PREE versus RPREE data, and that omission representation is key to explaining the nonlinear nature of extinction data.


Proceedings of the 10th International Conference on Distributed Smart Camera | 2016

A Neuromorphic Approach for Tracking using Dynamic Neural Fields on a Programmable Vision-chip

Julien N. P. Martel; Yulia Sandamirskaya

In artificial vision applications, such as tracking, a large amount of data captured by sensors is transferred to processors to extract information relevant to the task at hand. Smart vision sensors offer a means to reduce the computational burden of visual processing pipelines by placing more processing capability next to the sensor. In this work, we use a vision-chip in which a small processor with memory is located next to each photosensitive element. The architecture of this device is optimized to perform local operations. To perform a task like tracking, we implement a neuromorphic approach using a Dynamic Neural Field (DNF), which makes it possible to segregate, memorize, and track objects. Our system, consisting of the vision-chip running the DNF, outputs only the activity that corresponds to the tracked objects. These outputs reduce both the bandwidth needed to transfer information and the amount of further post-processing, since computation happens at the pixel level.


IEEE Transactions on Circuits and Systems I-regular Papers | 2018

Real-Time Depth From Focus on a Programmable Focal Plane Processor

Julien N. P. Martel; Lorenz K. Müller; Stephen J. Carey; Jonathan Müller; Yulia Sandamirskaya; Piotr Dudek

Visual input can be used to recover the 3-D structure of a scene by estimating distances (depth) to the observer. Depth estimation is performed in various applications, such as robotics, autonomous driving, or surveillance. We present a low-power, compact, passive, and static imaging system that computes a semi-dense depth map in real time for a wide range of depths. This is achieved by using a focus-tunable liquid lens to sweep the optical power of the system at a high frequency, computing depth from focus on a mixed-signal programmable focal-plane processor. The use of local and highly parallel processing directly on the focal plane removes the sensor-processor bandwidth limitations typical in conventional imaging and processor technologies and allows real-time performance to be achieved.
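The depth-from-focus principle can be sketched offline in a few lines: across a stack of frames captured during a focal sweep, the frame index that maximizes a local sharpness measure (here, squared Laplacian) at each pixel serves as a depth proxy. This is an illustrative NumPy sketch under those assumptions, not the paper's on-chip mixed-signal implementation; the synthetic scene and blur model are made up for the demonstration.

```python
import numpy as np

def laplacian(img):
    # 4-neighbour discrete Laplacian; border pixels left at zero
    lap = np.zeros_like(img)
    lap[1:-1, 1:-1] = (img[:-2, 1:-1] + img[2:, 1:-1]
                       + img[1:-1, :-2] + img[1:-1, 2:]
                       - 4.0 * img[1:-1, 1:-1])
    return lap

def depth_from_focus(stack):
    # per pixel, pick the focal step with maximal local sharpness
    sharpness = np.stack([laplacian(frame) ** 2 for frame in stack])
    return sharpness.argmax(axis=0)

def box_blur(img):
    # crude defocus model: average with the 4-neighbour shifted copies
    return (img + np.roll(img, 1, 0) + np.roll(img, -1, 0)
            + np.roll(img, 1, 1) + np.roll(img, -1, 1)) / 5.0

# synthetic focal sweep over a textured scene:
# left half is sharpest in frame 0, right half in frame 2
checker = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
sharp, soft = checker, box_blur(checker)
cols = np.arange(32)[None, :]
frames = [np.where(cols < 16, sharp, soft),
          box_blur(soft),                    # out of focus everywhere
          np.where(cols < 16, soft, sharp)]

depth = depth_from_focus(frames)             # per-pixel index of best focus
```

On the focal-plane processor, the same idea runs with the Laplacian and argmax computed in each pixel's local processor as the lens sweeps, which is what removes the sensor-to-processor bandwidth bottleneck.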


international symposium on circuits and systems | 2017

Obstacle avoidance with LGMD neuron: Towards a neuromorphic UAV implementation

Llewyn Salt; Giacomo Indiveri; Yulia Sandamirskaya

We present a neuromorphic adaptation of a spiking neural network model of the locust Lobula Giant Movement Detector (LGMD), which detects objects increasing in size in the field of vision (looming) and can be used to facilitate obstacle avoidance in robotic applications. Our model is constrained by the parameters of a mixed signal analog-digital neuromorphic device, developed by our group, and is driven by the output of a neuromorphic vision sensor. We demonstrate the performance of the model and how it may be used for obstacle avoidance on an unmanned aerial vehicle (UAV).
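The looming response the LGMD computes is often summarized by a firing-rate abstraction, the eta-function η(t) = C·θ̇(t)·exp(−α·θ(t)), where θ is the object's angular size on the retina: the response grows with the rate of expansion but is suppressed once the object already fills much of the view, so it peaks shortly before collision. The sketch below uses that common simplification, not the spiking network of the paper; the object geometry and constants are made-up values.

```python
import numpy as np

# object of half-size l approaching at constant speed v from distance d0
l, v, d0 = 0.5, 1.0, 10.0          # metres, m/s (assumed values)
C, alpha = 1.0, 5.0                # eta-function constants (assumed)

t = np.arange(0.0, 9.4, 0.1)       # stop before collision (d = d0 - v*t > 0)
d = d0 - v * t
theta = 2.0 * np.arctan(l / d)               # angular size on the retina
theta_dot = 2.0 * l * v / (d**2 + l**2)      # rate of angular expansion
eta = C * theta_dot * np.exp(-alpha * theta) # looming response

peak = int(np.argmax(eta))          # response peaks before contact
```

Triggering avoidance when η crosses a threshold gives a collision warning with a roughly fixed lead time, which is the property that makes the LGMD attractive for lightweight UAV obstacle avoidance.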


international symposium on circuits and systems | 2017

Live demonstration: Depth from focus on a focal plane processor using a focus tunable liquid lens

Julien N. P. Martel; Lorenz K. Müller; Stephen J. Carey; Jonathan Müller; Yulia Sandamirskaya; Piotr Dudek

We demonstrate a 3D imaging system that produces sparse depth maps. It consists of a liquid focus-tunable lens, whose focal power can be changed at high speed, placed in front of a SCAMP5 vision-chip that embeds processing capabilities in each pixel. The focus-tunable lens performs focal sweeps with a shallow depth of field. These are sampled by the vision-chip, which takes multiple images at different focus settings and analyzes them on-chip to produce a single depth frame. The combination of the focus-tunable lens with the vision-chip, enabling near-focal-plane processing, allows us to present a compact passive system that is static, monocular, real-time (>25 FPS), and low-power (<1.6 W).


international symposium on circuits and systems | 2017

Obstacle avoidance and target acquisition in mobile robots equipped with neuromorphic sensory-processing systems

Moritz B. Milde; Alexander Dietmüller; Hermann Blum; Giacomo Indiveri; Yulia Sandamirskaya

Event-based sensors and neural processing architectures represent a promising technology for implementing low-power and low-latency robotic control systems. However, the implementation of robust and reliable control architectures using neuromorphic devices is challenging due to the limited precision and variable nature of their underlying computing elements. In this paper, we demonstrate robust obstacle-avoidance and target-acquisition behaviors in a compact mobile platform controlled by a neuromorphic sensory-processing system and validate its performance in a number of robotic experiments.

Collaboration


Dive into Yulia Sandamirskaya's collaboration.

Top Co-Authors

Matthew D. Luciw

Dalle Molle Institute for Artificial Intelligence Research
