Finding the Gap: Neuromorphic Motion Vision in Cluttered Environments
Thorben Schoepe, Ella Janotte, Moritz B. Milde, Olivier J.N. Bertrand, Martin Egelhaaf, Elisabetta Chicca
Faculty of Technology and Cognitive Interaction Technology Center of Excellence (CITEC), Bielefeld University, Germany. Bio-Inspired Circuits and Systems (BICS) Lab, Zernike Institute for Advanced Materials, University of Groningen, Netherlands. CogniGron (Groningen Cognitive Systems and Materials Center), University of Groningen, Netherlands. Event Driven Perception for Robotics, Italian Institute of Technology, iCub facility, Genoa, Italy. International Centre for Neuromorphic Systems, MARCS Institute, Western Sydney University, Penrith, Australia. Neurobiology, Faculty of Biology, Bielefeld University, Bielefeld, Germany. * [email protected]

ABSTRACT
Many animals meander in environments and avoid collisions. How the underlying neuronal machinery can yield robust behaviour in a variety of environments remains unclear. In the fly brain, motion-sensitive neurons indicate the presence of nearby objects, and directional cues are integrated within an area known as the central complex. Such neuronal machinery, in contrast with the traditional stream-based approach to signal processing, uses an event-based approach, with events occurring when changes are sensed by the animal. Contrary to classical von Neumann computing architectures, event-based neuromorphic hardware is designed to process information asynchronously and in a distributed manner. Inspired by the fly brain, we model, for the first time, a neuromorphic closed-loop system mimicking essential behaviours observed in flying insects, such as meandering in clutter and crossing of gaps, both of which are also highly relevant for autonomous vehicles. We implemented our system both in software and on neuromorphic hardware. While moving through an environment, our agent perceives changes in its surroundings and uses this information for collision avoidance. The agent's manoeuvres result from a closed action-perception loop implementing probabilistic decision-making processes. This loop-closure is thought to have driven the development of neural circuitry in biological agents since the Cambrian explosion. In the fundamental quest to understand neural computation in artificial agents, we come closer to understanding and modelling biological intelligence by closing the loop also in neuromorphic systems. As a closed-loop system, our system deepens our understanding of processing in neural networks and their computations in both biological and artificial systems. With these investigations, we aim to set the foundations for neuromorphic intelligence in the future, moving towards leveraging the full potential of neuromorphic systems.
While navigating through the environment, our proprioception informs us about our posture, our eyes look for a familiar direction or goal, and our ears watch out for dangers. The brain deals with multiple data-streams in a continuous and parallel manner. Autonomous vehicles required to safely manoeuvre in their environment also have to deal with such high-dimensional data-streams, which are conventionally acquired and analysed at a fixed sampling frequency. A fixed sampling frequency limits the temporal resolution of data-processing and the amount of data which can be processed. To address these limitations, two approaches can be combined. First, data-streams can be sparsified by sending information only when an observed quantity changes, i.e. when it is required. Second, the data-stream can be processed in a parallel and asynchronous fashion. This calls for an alternative approach to sensing and computing which, much like the brain, acquires and processes information completely asynchronously and in a distributed network of computing elements, e.g. neurons and synapses. To fully demonstrate the advantages of this approach we use the example of autonomous navigation, as it is well studied and algorithmically understood in a variety of environments, be they water, ground, air, or space. In the last decades, part of the engineering community has sought inspiration from animals. For example, flying insects such as bees and flies share the same requirements as light-weight flying vehicles manoeuvring in various habitats, from almost object-free terrains to overly cluttered forests via human-made landscapes. They need to avoid collisions to prevent damaging their wings, and they accomplish this task using limited neuronal resources (less than 1M and 100k neurons for honeybees and fruit flies respectively). At the core of this machinery is a well-described subset of neurons responding to the apparent motion of surrounding objects. While the animal translates in its environment, the responses of such neurons provide estimates of the time-to-contact to nearby objects by approximating the apparent motion of the objects on the retina (i.e. the optic flow). These neurons are thought to steer the animal away from obstacles or toward gaps, resulting in a collision-free path.

The collision avoidance machinery in insects is thought to be driven by a large array of motion-sensitive neurons, distributed in an omnidirectional visual field. These neurons operate asynchronously. Hence, biology has found an asynchronous and distributed solution to the problem of collision avoidance. We seek to emulate such a solution in bio-inspired neuromorphic hardware, which has the advantage of being low-volume and low-power. More importantly, it also requires an asynchronous and parallel information processing implementation, yielding a better understanding of neural computation.

To date, most mimics of the collision avoidance machinery rely on traditional cameras, from which every pixel at every time point (i.e. at a fixed sampling frequency) needs to be processed. The processing occurs even when nothing is changing in the agent's surroundings. This constant processing leads to a dense stream of data and consequently a high energy consumption. To reduce this, an efficient means of communication can be employed, such as the action potentials observed in biological neural circuits. Action potentials, or spikes, enable information to be transmitted only when necessary, i.e. event-driven.
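The contrast between the two sampling schemes can be made concrete with a small sketch. The following Python snippet is illustrative only; the function names, the threshold and the test signal are ours and not part of the model. It compares fixed-rate (Riemann) sampling with send-on-delta (Lebesgue) event generation on the same signal:

```python
import numpy as np

def riemann_sample(signal, t, period):
    """Riemann (fixed-rate) sampling: read the signal at every clock tick,
    regardless of whether it changed."""
    ticks = np.arange(t[0], t[-1], period)
    return ticks, np.interp(ticks, t, signal)

def lebesgue_events(signal, t, eps):
    """Lebesgue (send-on-delta) sampling: emit an event only when the signal
    has moved by at least eps since the last event."""
    events = []
    last = signal[0]
    for ti, si in zip(t, signal):
        if abs(si - last) >= eps:
            events.append((ti, np.sign(si - last)))  # timestamp and polarity
            last = si
    return events

t = np.linspace(0.0, 1.0, 10_000)
signal = np.where(t < 0.5, 0.0, 1.0)  # a single step: nothing changes elsewhere

ticks, samples = riemann_sample(signal, t, period=0.001)
events = lebesgue_events(signal, t, eps=0.5)
print(f"fixed-rate samples: {len(samples)}, events: {len(events)}")
# The event stream is sparse (one event at the step), while the
# fixed-rate stream carries thousands of redundant samples.
```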
In an analogous way, event-based cameras send events asynchronously, only when a change in luminance over time is observed. This sampling scheme is referred to as Lebesgue sampling. Contrary to frame-based cameras, which employ Riemann sampling, bandwidth and power demands are significantly reduced (see Section Event-Based Cameras in Gazebo for more details).

Open-loop collision avoidance based on optic flow can use event-streams (for a more detailed comparison of the mentioned approaches, refer to the literature), and an insect-inspired motion pathway has been suggested for collision avoidance. Closed-loop collision avoidance behaviour has been demonstrated previously using fully conventional sensory-processing (frame-based sensor and CPUs/GPUs) approaches (for an extensive review, refer to the literature). These insect-inspired approaches reduce the computational demands for collision avoidance by reducing the bandwidth of the visual input. This reduction is achieved by collapsing the visual field into left and right components; later processing only needs to compare left versus right signals. These approaches, however, are hardwired processing of visual features, and the hard-coded features may not be relevant in other environments. Mixed-system (event-based camera and conventional processing) approaches, on the other hand, do not reduce the visual input by separating left-right signal pathways, but utilise event-based cameras which only transmit changes. In contrast to biological systems, they do not, however, leverage the advantages of event-based processing all the way to the actuation of the motors. Finally, fully neuromorphic (event-based camera and parallel, asynchronous processing) approaches rely on spike-based information processing from sensing to actuation of motors. To date, these approaches rely on hardwired, deterministic decision making. The hard-coded decisions, i.e. creating a reflex-like machine, may lead to sub-optimal decisions when multiple directions to avoid collisions are viable. Here, we aim for the first time at closing the action-perception loop, while explicitly extracting insect-inspired visual features, making active decisions, and using neuromorphic spike-based computation from sensing to actuation. Inspired by the collision avoidance algorithm proposed for flies and bees, we developed a spiking neural network (SNN) that profits from the parsimony of event-based cameras and is compatible with state-of-the-art digital and mixed-signal neuromorphic processing systems. The response of the visual motion pathway of our network resembles the activity of motion-sensitive neurons in the visual system of flies. We ran closed-loop experiments with an autonomous agent in a variety of conditions to assess the collision avoidance and gap finding capabilities of our network. These conditions were chosen from the biological evidence for collision avoidance obtained for flying insects (empty box, corridors, gap crossing, and cluttered environments). Our agent, utilising its underlying neural network, manages to stay away from walls in a box, centres in corridors, crosses gaps and meanders in cluttered environments. Therefore, it may find applications for autonomous vehicles. Besides, it may serve as a theoretical playground to understand biological systems by using neuromorphic principles replicating an entire action-perception loop.

The SNN model proposed in this work consists of two main components, namely a retinotopical map of insect-inspired motion detectors, i.e.
spiking Elementary Motion Detectors (sEMDs), and an inverse soft Winner-Take-All (WTA) network (see Figure 1d and Methods Figure 4). The former extracts optic flow (OF) which, during a translation, is anti-proportionally related to the agent's relative distance to objects in the environment. The latter searches for a region of low apparent motion, hence an obstacle-free direction (see Figure 1a-c). After the detection of such a path in the environment, the agent executes a turn towards the new movement course. We characterised the network in two steps. First, we evaluated the sEMD's response and discussed similarities to its biological counterpart, i.e. the T4/T5 neurons, which are thought to be at the core of elementary motion processing in fruit flies. Second, to further prove the real-world applicability of sEMD-based gap finding in an SNN, we performed closed-loop experiments. We simulated an agent seeing the world through an event-based camera in the Neurorobotics physical simulation platform. (Spiking Neural Network: massively parallel network consisting of populations of spike-based artificial neurons and synapses.) The camera output was processed by the SNN, resulting in a steering command. We selected a set of parameters that lead the agent to keep at least a mean clearance of ~ to objects in a box and to enter corridors only with a width greater than 10 a.u. (see Appendix section The Motion-Vision Network). We tested the performance of this simulated agent with these parameters in all experimental conditions reported hereafter. These experimental conditions were inspired by previous experiments with flying insects.

The sEMD represents an event-driven adaptation, for neuromorphic sensory-processing systems, of the well-established correlation-based elementary motion detector. To evaluate the response of the sEMD in the Nest simulator, we compared the normalised velocity tuning curves of its ON-pathway (with recorded event-based camera input) to the corresponding normalised tuning curve of Drosophila's
T4 and T5 neurons. Both velocity tuning curves are determined in response to square-wave gratings with 100 % contrast and a wavelength of 20°, moving at a range of constant velocities (with temporal frequencies from 0.1 to 10 Hz), as measured for Drosophila's T4 cells. While
Drosophila's velocity tuning curves peak at 3 Hz in a drug-induced flying state, the sEMD's preferred-direction velocity tuning curve peaks at 5 Hz. This suggests that, based on the reported parameter set of the sEMD, it is tuned to higher relative velocities. The model performs in a robust way for a wide range of illuminations (from 5 lux to 5000 lux) and relative contrasts (50 % response reached at approximately 35 % relative contrast), as shown in Figure A.2. The sEMD approximates the elementary motion processing in the fly brain. This processing is part of the input to the flight control and collision avoidance machinery, hence it can be used as an input for determining a collision-free path.
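The anti-proportionality between apparent motion and object distance exploited here follows from the standard geometry of translational optic flow; as a reminder (this is a textbook relation, not a formula taken from our network):

ω(θ) = (v / d(θ)) · sin θ

where v is the agent's translational speed, θ the azimuth of the viewing direction relative to the heading, and d(θ) the distance to the object seen under θ. For fixed v and θ, the apparent angular velocity ω grows as 1/d, so strong apparent motion signals nearby objects, and a minimum of apparent motion signals open space.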
The robot's collision avoidance performance was evaluated in an experiment with the agent moving through environments with varying obstacle density. To further understand the mechanisms underlying the robot's movement performance, two more experiments were designed: the agent's gap crossing behaviour and its tunnel centering behaviour were investigated. These behaviours were analysed in insects in a plane; therefore, little is known about the effect of flying altitude on most behaviours. We limited our agent to 2D motion due to this limited understanding.
We evaluated the agent's collision avoidance performance in an arena with an obstacle density between 0 and 38 % (0.05 objects per square a.u.). The simulation stops either when the robot collides with an obstacle, when it leaves the arena, or when the simulation real-world time of six hours is over (see Figure 2f). At low obstacle densities (< ~ %) the agent keeps a mean obstacle clearance of 7 a.u. (see Figure A.5 left), so that it stays close to its start location (see Figure A.5 right and Figure 2c,f). As the obstacle density increases, the robot starts to crash into obstacles, reaching a minimum success rate of around 60 % at 22 % obstacle density. For higher obstacle densities the success rate increases again (see Figure 2i). A collision of the robot is generally caused by the robot's reaction time being too long.

A.u.: arbitrary unit, distance divided by robot size; see section Closed-loop simulation in environments. Obstacle density: percentage of the total area covered with objects. Collision: the simulated robot's outline overlaps with the area occupied by an object. Obstacle clearance: the robot's distance to the center of the closest object.

Figure 1. (a-c) Network response in a cluttered environment, (d) collision avoidance network, (e) normalised sEMD mean response to a square-wave grating and (f) robot used in the real-world experiment. a) Cluttered Neurorobotics Platform environment. The obstacle walls are covered with vertical square-wave gratings only visible to the event-based camera. b) Green areas: simulated event-based camera events directly extracted from the Neurorobotics visual front-end while the agent is slowly moving through the scene in a). c) Bright blue and orange dots: Time Difference Encoder (TDE) left-right and right-left spike response to the scene in a), binned over ~.

Our agent regulates its speed based on the global OF and, consequently, moves slower in denser regions of the environment (see Figure A.7). To examine the effect of the velocity dependency, we ran a second experiment with the robot moving at constant velocity (see Figure 2i and Figure A.6). With velocity control, collisions were encountered in only a few runs; when the velocity was kept constant, however, the number of collisions significantly increased for obstacle densities higher than 24 %.

Gaps

When presented with a choice between two gaps of different size, bees prefer to pass through the larger gap. This behaviour decreases the insect's collision probability significantly. While bees might choose the gap in a complex decision process, our agent's preference underlies a simple probabilistic integration mechanism. The simulated robot's upcoming movement direction is determined by an inverse WTA spike occurring in an obstacle-free direction, as shown in Figure 1a-c.

Figure 2. Agent's behaviour in different environments. a-c) Trajectories recorded in arenas with increasing obstacle densities. d) Comparison of real-world centering behaviour (red) to Neurorobotics Platform behaviour (black) in a corridor with normalised corridor width and an absolute corridor length of approximately one meter. e) Simulated robot's trajectory in the gap crossing experiment in a large arena. Colour represents time (t: light blue, t_end: magenta). f) Simulated robot's performance in different environments as shown in a-c with modulated velocity: simulation time at which the simulated robot leaves the arena, collides, or the time is over. g) Trajectories in tunnels with a tunnel width of 15, 12.5 and 11.25 a.u. h) Gap crossing probability in dependency of the gap width, for a large and a small arena. i) Simulated robot's performance in cluttered environments as shown in a-c with modulated velocity (black, calculated from data in f) and fixed velocity (grey): agent's success rate, hence the number of runs without collisions. j-l) Agent's variance from the tunnel center for different tunnels.

When confronted with a small and a large gap, the probability of an inverse WTA spike appearing in the greater gap is higher. Hence, we assume that the robot automatically follows pathways with a larger gap size. To evaluate this assumption, we observed the robot's gap crossing in an arena with two alternative gaps (see Figure 2e). The robot can decide to cross either of the two gaps or to stay in one half of the arena. There is a competition between staying in the open space and crossing a gap. The larger the gap, the more likely the robot will cross it. We investigated the probability to cross gaps by having two gaps, one with a fixed size (10 times the agent width), the other with a size between 5 a.u. and 13 a.u. We calculated the gap entering probability by comparing the number of passes through both gaps. As expected, the entering probability increases with gap size up to a width of 10 a.u. (see Figure 2h). For a larger gap width the entering probability does not change significantly. However, for smaller gap sizes the probability of a spike pointing towards open space in the inverse WTA becomes significantly higher. Therefore, the robot prefers to pass through gaps of larger size. Besides the gap width, the arena size changes the passing probability. In a smaller arena the simulated robot stays closer to the gap entry, which increases the relative gap size sensed by the agent: a larger part of the vehicle's visual field is occupied by the gap entry, which increases the probability of a spike occurring in the gap area. In a smaller arena we observed that the robot's gap entering probability is higher for gaps smaller than 10 a.u. than in a big arena (see Figure 2h). A decrease in arena size can be compared to an increase in obstacle density, since both parameters reduce the robot's mean obstacle clearance (see Figure A.5, left). Therefore, the agent tends to enter gaps of smaller size in densely cluttered environments. This automatic scaling mechanism keeps the agent's collision probability very low in sparsely cluttered environments by staying away from small gaps. In environments with high obstacle density the robot still keeps its mobility by passing through smaller gaps. Finally, when the obstacle density exceeds 20 %, most gaps fall below the gap entering threshold, so that the robot cannot leave the arena anymore (see Figure A.5, right and Figure 2c,f).

Corridors

One common experiment to characterise an agent's motion vision response is to observe its centering behaviour in a tunnel equipped with vertical stripes on the walls. The simple geometry of the environment enables the observer to directly relate the received visual input to the agent's actions. In bees and flies, an increase in flight velocity proportional to the tunnel width has been observed. In very narrow tunnels insects show a pronounced centering behaviour, which declines with increasing tunnel width. We evaluated the robot's performance in three tunnels of different widths. Similar to the biological role model, the robot's velocity stands in a positive linear relationship with the tunnel width. The measured velocities in a.u. per second are ~, ~ and ~ for the three tunnels.
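The probabilistic preference for wider gaps described above can be illustrated with a toy model. In the following sketch, all rates, gains and the number of directions are illustrative choices, not the parameters of our network: each inverse-WTA direction receives Poisson background drive that is inhibited by optic flow, and the direction with the shortest first-spike latency wins.

```python
import numpy as np

rng = np.random.default_rng(0)

def choose_direction(of_profile, background_rate=100.0, gain=5.0):
    """Toy inverse-WTA decision: each direction receives Poisson background
    drive; optic-flow input inhibits it. The first neuron to 'spike' wins,
    so obstacle-free directions (low OF) win more often."""
    drive = np.maximum(background_rate - gain * of_profile, 1e-9)
    # The first-spike latency of a Poisson process with rate r is Exp(r);
    # the direction with the shortest latency wins.
    latencies = rng.exponential(1.0 / drive)
    return int(np.argmin(latencies))

# 64 directions; walls (high OF) everywhere except a wide and a narrow gap.
of_profile = np.full(64, 20.0)
of_profile[10:20] = 0.0   # wide gap: 10 directions
of_profile[40:44] = 0.0   # narrow gap: 4 directions

wins = [choose_direction(of_profile) for _ in range(10_000)]
wide = sum(10 <= w < 20 for w in wins) / len(wins)
narrow = sum(40 <= w < 44 for w in wins) / len(wins)
print(f"P(wide gap) = {wide:.2f}, P(narrow gap) = {narrow:.2f}")
# The wider gap collects more probability mass simply because it spans
# more inverse-WTA neurons.
```

In this toy model, the wide gap wins roughly in proportion to the number of inverse-WTA neurons it covers, mirroring the automatic preference for larger gaps reported above.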
Autonomous agents need to successfully avoid obstacles in a variety of different environments, be they human-made or of natural origin. Our investigations present a closed-loop proof of concept of how obstacle avoidance could be performed in a parsimonious, asynchronous and fully distributed fashion. While most results reported here are based on computer simulations, the implementation on digital or mixed-signal neuromorphic hardware of each building block of the simulated SNN has been demonstrated: event-based cameras, the sEMD (see Figure A.2), artificial neurons and synapses, as well as the inverse WTA. We demonstrated for the first time a simulation of a neuromorphic system that takes informed decisions while moving in its environment by closing the action-perception loop. We emulated this system on neuromorphic sensory-processing hardware carried by a physical robot (see Figure 1f, 2d, A.8 and A.9), tested it in a corridor centering experiment, and obtained similar results to the simulation. These real-world experiments suggest that the underlying computational primitives lead to robust decision making in operational real-time. Due to the physical simulation with the engine Gazebo, which captures the physics of the movements, and our real-world proof of implementation, our simulations are likely to translate to real-world situations. While producing relatively simple, yet crucial decisions, the proposed model represents a critical milestone towards enabling parallel, asynchronous and purely event-driven neuromorphic systems.

Our proposed SNN architecture comprises approximately
4k neurons and 300k synapses, which yields a low-power, lightweight and robust neural algorithm. When implemented on mixed-signal neuromorphic processing hardware, the payload required to perform on-board processing will be drastically reduced. This reduction stems from the low volume and lower power requirements of neuromorphic hardware. In addition, such a hardware implementation would ensure operational real-time decision making capabilities. The features outlined above are quite desirable in the context of highly constrained autonomous systems such as drones or other unmanned vehicles.

We investigated the performance of the sEMDs, the apparent motion encoders in our SNN, in detail. The sEMDs show a velocity response curve similar to that of motion-sensitive neurons (e.g. T4 and T5 neurons in the fruit fly's brain) when presented with a grating of 20° spatial wavelength and temporal frequencies between 0.1 and 10 Hz. Drosophila's optic lobe performs contrast normalisation through inhibitory recurrent feedback to evoke a contrast-independent response. In a next step we will implement contrast normalisation in our motion vision network to improve its performance in natural environments.

Besides the similarities in neural response, the agent showed many similarities to flying insects in its behaviour in spatially constrained environments. It meandered in cluttered terrain (Section Densely Cluttered Environments), modulated its speed as a function of object proximity (Section Corridors), selected wider gaps (Section Gaps), and centered in tunnels (Section Corridors), while using an active gaze strategy known as saccadic flight control (Section Collision Avoidance Network). The agent moved collision-free through cluttered environments with an obstacle density between 0 and 38 % with a mean success rate of 81 %. We further examined the simulated robot's performance to understand the essential behavioural components which led to a low collision rate. The most significant ingredient in that regard was the implementation of an OF-strength-dependent locomotion velocity. This insect-inspired control mechanism improved the collision avoidance performance of the agent from a mean success rate of 76 % to 81 % (compare Figure 2i and Figure A.6). We propose that this velocity adaptation mechanism could be regulated in insects by a simple feedback control loop. This loop changes the agent's velocity anti-proportionally to the global OF integrated by a subset of neurons (for further explanations see Section Collision Avoidance Network). Several closed-loop, insect-inspired approaches have been demonstrated; however, due to a missing unifying benchmark and evaluation metric to compare insect-inspired collision avoidance algorithms, we cannot provide a quantitative comparison.

An OF-dependent control of locomotion velocity is only one of at least three mechanisms which decreased the agent's rate of collision. When moving in environments of high obstacle density, the simulated robot follows locally low obstacle density paths. We suggest that a probabilistic decision process in the network model automatically keeps the agent's collision probability low by following these pathways. We further investigated this path choice mechanism in a second experiment. Here, the agent had to cross two gaps of different size. The dependence of the agent's probability to cross the gap resembled that of bees. Similar to insects, the agent preferred gaps of larger size.
Bees cross gaps with a gap size as small as 1.5 times their wingspan. In contrast, our agent crossed gaps of 5 times its body width. This discrepancy in performance may be due to the absence of a goal. A goal can be understood as providing an incentive to cross a gap despite a risk of collision. Indeed, in behavioural experiments, bees had to cross the gap to return to their home. Combining different directions, such as a collision-free path and a goal, requires an integration of the two signal representations. Such networks have been proposed for navigating insects. Integration of similar streams of information has been demonstrated to work in neuromorphic systems; we envision that a dynamic competition between collision avoidance and goal reaching neural representations could allow our robot to cross gaps 1.5 times its width.

The findings reported here indicate an alternative point of view on how flies and bees could use motion-vision input to move through the environment: not by collision avoidance but by gap finding. As also stated by Baird and Dacke, flies and bees might not actively avoid obstacles but fly towards open space, i.e. gaps. Looking at our network, we suggest that WTA-like structures in flying insect brains might integrate different inhibitory and excitatory sensory inputs with previously acquired knowledge to take navigational decisions. One could think of the central complex as such a structure, which has been described recently in several insect species.

The third mechanism is the agent's centering behaviour. By staying in the middle of a tunnel with similar patterns on both walls, the simulated robot minimises its risk of colliding with a wall. The agent's deviation from the tunnel center changes approximately linearly with the tunnel width. These results show a very strong resemblance to experimental data from blowflies (see Figure 2j-l). So far, centering behaviour was suggested to result from balancing the OF on both eyes. Centering in a tunnel can be seen as crossing an elongated gap, and our agent is also able to cross gaps. Two hypotheses have been suggested for gap crossing in flying insects: using the OF contrast and using the brightness. Our results suggest that collision avoidance could be mediated by identifying minimum optic flow to center in tunnels, cross gaps, or meander in cluttered environments. This strategy has so far not been investigated in flying insects. The main hypothesis to control flight in clutter is to balance either the average or the maximum OF on both eyes. Further behavioural experiments are required to disentangle the different strategies and their potential interaction. Building on previous work, the different hypotheses could be placed into conflict by creating a point-symmetric OF around the gap center (leading to centering), a brightest point away from the gap center, and a minimum OF away from the center (e.g. by using an OF amplitude following a Mexican hat function of the radius from the geometric center).

Our model shares several similarities with the neural correlate of visually-guided behaviour in insects, including motion-sensitive neurons, an integration of direction, efference copies to motion-sensitive neurons, and neurons controlling the saccade amplitude. Our agent was able to adopt an active gaze strategy thanks to a saccadic suppression mechanism (due to an inhibitory efference copy from the motor neurons to the inverse WTA and motion-sensitive neurons). When the inverse WTA layer did not "find" a collision-free path (i.e.
a solution to the gap finding task), an alternative response (here a U-turn) was triggered thanks to global inhibitory neurons and excitatory-inhibitory networks (GI-WTA-ET; for more details see Section Collision Avoidance Network). The neuronal correlate of such a switch has, to our knowledge, not been described in flying insects. Our model thus serves as a working hypothesis for such a neuronal correlate. Furthermore, by varying the connection between the sEMDs and the inverse WTA, we could allow the agent to cross smaller gaps. We hypothesise that differences in clearance or centering behaviour observed between insect species could be due to different wiring or modulation between motion-sensitive neurons and the direction selection layer, likely located in the central complex.

In this study we demonstrated a system-level analysis of a distributed, parallel and asynchronous neural algorithm to enable neuromorphic hardware to perform insect-inspired collision avoidance. To perform a wide variety of biologically relevant behaviours, the network comprised approximately 4k neurons and 300k synapses. The agent guided by the algorithm robustly avoided collisions in a variety of situations and environments, from centering in a tunnel to crossing densely cluttered terrain and even gap finding, as solved by flying insects. These behaviours were accomplished with a single set of parameters, which had not been optimised for any of them. From the investigation of the agent and its underlying behaviour, we hypothesise that insects control their flight by identifying regions of low apparent motion, and that excitatory-inhibitory neural structures drive switches between different behaviours. With these investigations we hope to advance our understanding of closed-loop artificial neural computation and start to bridge the gap between biological intelligence and its neuromorphic aspiration.

Methods
Most experiments in this article were conducted in simulation, using either the Nest spiking neural network (SNN) simulator or the Neurorobotics Platform environment. A corridor centering experiment was conducted in the real world using a robotic platform equipped with the embedded Dynamic Vision Sensor as visual input and a SpiNN-5 board for SNN simulation in computational real-time. Sensory data for the sEMD characterisation were recorded with an event-based camera in a real-world environment. The hardware, software, SNN models and methodologies used in this article are explained in the following.

In contrast to conventional processing as postulated by von Neumann, which is characterised by synchronous and inherently sequential processing, neural networks, whether rate-based or spike-based, feature parallel and distributed processing. While artificial neural networks, the rate-based counterpart of SNNs, perform synchronous and clock-driven processing, SNNs additionally feature an asynchronous and event-driven processing style. SNNs represent a promising alternative to conventional von Neumann processing, and hence computing, potentially featuring low-latency, low-power, distributed and parallel computation. Neuromorphic hardware presents a solution to the aforementioned limitations of conventional von Neumann architectures, including parallel, distributed processing in the absence of a central clock, as well as co-localisation of memory and computation. Moreover, neuromorphic processors benefit from the underlying algorithm being implemented as an SNN. Emulating an SNN on a neuromorphic processor (especially a mixed-signal one) enables the network to operate in continuous time, as time represents itself. SNNs consist of massively parallel connected networks of artificial synapses and spiking neurons. SNNs, as any processing algorithm, aim to structure and represent incoming information (e.g. measurements) in a stable, robust and compressed manner (e.g. memory). Measurements sampled at fixed time intervals have the disadvantage that the collected data is highly redundant and prone to aliasing if the signal of interest varies faster than half the sampling frequency. Event-driven approaches to sampling alleviate these limitations. As incoming measurements shouldn't be sampled at fixed temporal intervals, they need to be taken based on fixed or relative amplitude changes to take full advantage of the time-continuous nature of SNNs and neuromorphic hardware. Such measurements can be obtained from different sensory domains (e.g. touch, smell, audition and vision), with vision being the most studied and well understood sensory pathway, both in the brain and in its artificial aspiration. While images taken with conventional cameras can be converted to spike trains which are proportional to the pixel intensity, event-based cameras directly sample only relative changes of log intensity and transmit events. A variety of event-based cameras have been proposed in the last two decades, all featuring an asynchronous, parallel sampling scheme in which changes are reported at the time of occurrence, in a completely time-continuous manner. The output of event-based cameras is hence ideally suited to be processed by an SNN implemented on a neuromorphic processor.
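As an illustration of the frame-to-spike conversion mentioned above, the following sketch encodes pixel intensities as Poisson spike trains, one of the encoding schemes listed in the footnote below. The maximum rate, the time window and the function name are arbitrary choices of ours:

```python
import numpy as np

rng = np.random.default_rng(42)

def poisson_rate_code(image, t_window=0.1, max_rate=200.0):
    """Convert pixel intensities in [0, 1] to Poisson spike trains whose
    rates are proportional to intensity (one hypothetical encoding among
    several, e.g. rank-order or timing codes)."""
    spikes = []
    rates = image.ravel() * max_rate
    for idx, r in enumerate(rates):
        n = rng.poisson(r * t_window)                  # spike count in window
        times = np.sort(rng.uniform(0.0, t_window, n)) # spread spikes in time
        spikes.append((idx, times))
    return spikes

image = rng.uniform(0.0, 1.0, size=(4, 4))             # a tiny test "frame"
trains = poisson_rate_code(image)
for idx, times in trains[:3]:
    print(f"pixel {idx}: {len(times)} spikes")
```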
We collected real-world data using the DVS128 event-based camera to characterise the sEMD response (see Figure 1e). The event-based camera comprises 128 × 128 independently operating pixels which respond to relative changes in log-intensity, i.e. to temporal contrast. When the change in light intensity exceeds an adaptive threshold, the corresponding pixel produces an event. The address and polarity of the pixel are communicated through an Address-Event Representation bus. Light increments lead to ON-events, whereas light decrements lead to OFF-events. The sensor reaches a dynamic range of more than 120 dB and is highly invariant to the absolute level of illumination due to the logarithmic nature of the switched-capacitor differencing circuit. (The embedded Dynamic Vision Sensor follows the same operational principles of event-based cameras as described in Section Event-Based Cameras in Gazebo, but features a much more compact design.)

A time-continuous mode of operation, in contrast to a time-varying one, is characterised by the absence of a fixed sampling frequency. To perform the intensity-to-spike conversion one can use different encoding schemes, including rank-order code, timing code, or Poisson rate code. Level sampling means that a given time-continuous signal is sampled when the level changes by a fixed (relative) amount ε, whereas time sampling, i.e. Nyquist-Shannon sampling, means that the signal is sampled when the time has advanced by a fixed amount ε.

In 2018 we proposed a new insect-inspired building block for motion vision in the framework of SNNs, designed to operate on the output event-stream of event-based cameras: the sEMD. The sEMD is inspired by the computation of apparent motion, i.e. optic flow (OF), in flying insects. In contrast to its correlation-based role model, the sEMD is spike-based. It translates the time-to-travel of a spatio-temporally correlated pair of events into a direction-dependent output burst of spikes. While the sEMD provides OF estimates with higher precision when the entire burst is considered (rate code), the interspike interval distribution (temporal code) within the burst provides low-latency estimates. The sEMD consists of two building blocks: a retina to extract visual information from the environment, and the TDE, which translates the temporal difference into output spikes (see Figure 3a). When the sEMD receives an input spike at its facilitatory pathway, an exponentially decreasing gain variable is generated. The magnitude of the synaptic gain variable at the arrival of a spike at the trigger synapse defines the amplitude of the excitatory post-synaptic current generated. This current integrates onto the sEMD's membrane potential, which generates a short burst of output spikes. Therefore, the number of output spikes encodes, direction-sensitively and anti-proportionally, the stimulus' time-to-travel between two adjacent input pixels (see Figure 3e). We implemented and evaluated the motion detector model in various software applications (Brian2, Nengo, Nest), in neuromorphic digital hardware (SpiNNaker, Loihi) and also as an analog CMOS circuit.
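A minimal sketch of the TDE mechanism just described may help. The parameters below (time constants, weight, threshold) are illustrative choices, not the values used in our experiments:

```python
import numpy as np

def tde_response(dt, tau_fac=0.05, tau_mem=0.01, w=2.0,
                 v_thresh=1.0, sim_dt=1e-4, t_sim=0.1):
    """Minimal TDE sketch. A facilitatory spike at t=0 starts an
    exponentially decaying gain; a trigger spike at t=dt samples that gain
    as the EPSC amplitude, which a leaky integrate-and-fire neuron turns
    into a burst of output spikes."""
    if dt < 0:
        return 0                        # null direction: trigger arrives first
    epsc = w * np.exp(-dt / tau_fac)    # gain left when the trigger arrives
    v, spikes = 0.0, 0
    for _ in range(int(t_sim / sim_dt)):
        v += (sim_dt / tau_mem) * (epsc - v)   # leaky integration of the EPSC
        epsc *= np.exp(-sim_dt / tau_fac)      # the EPSC itself decays
        if v >= v_thresh:
            spikes += 1                        # output spike ...
            v = 0.0                            # ... followed by a reset
    return spikes

for dt in (0.005, 0.02, 0.05, -0.02):
    print(f"dt = {dt:+.3f} s -> {tde_response(dt)} spikes")
```

The spike count falls off as the time difference grows and vanishes for the null direction, qualitatively reproducing the tuning in Figure 3e.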
Figure 3. Spiking Elementary Motion Detector model, adapted from previous work. a) sEMD model consisting of visual input and TDE unit. Two adjacent retina inputs are connected to the facilitatory synapse (fac) and the trigger synapse (trig). The fac synapse controls the gain of the trig synapse's postsynaptic current (epsc), which integrates onto the Leaky Integrate-and-Fire (LIF) neuron's membrane potential, producing output spikes (out). b) Model behaviour for small positive ∆t. c) Behaviour for large positive ∆t. d) Behaviour for negative ∆t. e) Number of output spikes over ∆t.

The collision avoidance network (see Figure 4) extracts a collision-free direction from its sEMD outputs and translates this spatial information into a steering command towards open space. The first layer, the event-based camera, generates an event when a relative change in log-illumination, i.e. temporal contrast, is perceived by a pixel. A macropixel consists of 2 × 2 event-based camera pixels. Each macropixel projects onto a single current-based exponential LIF neuron (hereafter referred to as LIF for the sake of clarity) in the Spatio-Temporal Correlation (SPTC) layer (in Nest the neuron model used throughout this study is called iaf_psc_exp). Each SPTC neuron emits a spike only when more than 50 % of the pixels within its macropixel elicit an event within a rolling window of 20 ms. Therefore, the SPTC population removes uncorrelated events, which can be interpreted as noise. Additionally, it decreases the network resolution from 128 × 40 pixels to 64 × 20 neurons. The next layer extracts OF information from the filtered visual stimulus. It consists of two TDE populations, sensitive to the two horizontal cardinal directions respectively. Each TDE receives facilitatory input from its adjacent SPTC neuron and trigger input from its corresponding SPTC neuron. The facilitatory input may arise either from the left (left-right population) or from the right (right-left population). The TDE output encodes the OF as a number of spikes in a two-dimensional retinotopical map. Since the agent moves on the ground, it only estimates the amount of horizontal OF. Hence, the subsequent INT population integrates the spikes of each TDE column in a single LIF neuron. This layer encodes the OF in a one-dimensional retinotopical map. The subsequent population, an inverse soft Winner-Take-All (WTA), determines the agent's movement direction as a minimum of OF in the one-dimensional retinotopical map. Since OF encodes the relative distance to objects during a translational movement, this direction represents an object-free pathway; the Winner-Take-All (WTA) is inverted by sending feed-forward inhibition into the neural population. A population of Poisson sources (POIS) injects Poisson-distributed background spikes, which ensures that a neuron within the inverse WTA wins at any moment in time, even in the absence of OF. In the absence of INT input, the inverse WTA neuron with the strongest POIS input wins and suppresses the activity of all others through the GI neuron. Local lateral connections in the inverse WTA population strengthen the winning neuron through excitatory feedback (for the sake of clarity, recurrent excitation is not shown in Figure 4).
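The SPTC filtering rule lends itself to a compact sketch. The following Python fragment (data layout and values are illustrative) emits a macropixel spike only when more than half of its four pixels were active within the rolling 20 ms window:

```python
from collections import defaultdict

def sptc_filter(events, win=0.020, frac=0.5):
    """Sketch of the spatio-temporal correlation (SPTC) layer: each 2x2
    macropixel spikes only when more than `frac` of its four pixels produced
    an event within a rolling `win`-second window, suppressing uncorrelated
    (noise) events. `events` is a list of (t, x, y) tuples sorted by time."""
    active = defaultdict(dict)        # macropixel -> {pixel: last event time}
    out = []
    for t, x, y in events:
        key = (x // 2, y // 2)
        pixels = active[key]
        pixels[(x, y)] = t
        for p in [p for p, tp in pixels.items() if t - tp > win]:
            del pixels[p]             # forget pixels outside the rolling window
        if len(pixels) > frac * 4:    # >50% of the 4 pixels recently active
            out.append((t, *key))
            pixels.clear()            # one correlated burst -> one SPTC spike
    return out

events = [(0.000, 10, 10), (0.001, 11, 10), (0.002, 10, 11),  # correlated burst
          (0.050, 40, 40)]                                     # isolated noise
print(sptc_filter(events))  # -> [(0.002, 5, 5)]
```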
Figure 4. Collision avoidance network. The macropixels (2 × 2 pixels) of the Event-Based Camera (EBC) project onto single neurons of the Spatio-Temporal Correlation (SPTC) population, removing uncorrelated noise. Two adjacent SPTC neurons are connected to one Time Difference Encoder (TDE) in the left-right sub-population and the right-left sub-population respectively. Trigger and facilitator connections are opposite in the two populations. The Integrator (INT) population reduces the two-dimensional retinotopical map to a one-dimensional map by integrating the spikes of each TDE column onto a single LIF neuron. The inverse Winner-Take-All (WTA) population and the Escape Turn (ET) population are excited by Poisson spike sources. The winner-take-all mechanism is driven by recurrent suppression through the Global Inhibition (GI) neuron. The two Motor (MOT) populations are activated by a spike in the inverse WTA population. The id of the spiking inverse WTA neuron defines which MOT becomes activated and for how long. When the ET neuron spikes, the left MOT population becomes activated for the maximal time duration. When the MOT populations are inactive, the robot moves straight forward, collecting apparent motion information. When one MOT population is active, the robot turns. All-to-all inhibition between the MOT sub-populations guarantees the disambiguity of the steering commands. Inhibition from the MOT to the SPTC population suppresses rotational OF input, which contains no relative depth information. Inhibition from MOT to inverse WTA hinders the network from taking any new decision during a turn.

Because of the consistently changing nature of the POIS spike trains, the winner changes frequently and the agent executes a random walk (see Figure 2a). When the agent approaches an object, the position of the obstacle is indicated by a number of spikes in the INT population. These spikes strongly inhibit the inverse WTA at the corresponding position and its closest neighbours, so that this inverse WTA direction cannot win. Therefore, the active neurons in the inverse WTA always represent an obstacle-free direction. In case no object-free direction has been found for approximately
700 milliseconds since the start of an intersaccade, the ET neuron emits a spike. This neuron is only weakly excited by the POIS population and is connected to the GI neuron similarly to the inverse WTA population. Only when the ET has not been inhibited for a long time, hence when the inverse WTA was not able to generate a spike due to strong overall inhibition, does the ET neuron win. The final layer, the MOT population, translates the inverse WTA population and ET neuron activity into a turn direction and duration, using pulse-width modulation to control the motors. The left-turn MOT population becomes activated by inverse WTA neurons on the left side, and the right-turn population by inverse WTA neurons on the right side. Since the turning velocity is always constant, the angle of rotation is defined by the duration of the turn. The duration of the excitation wave in the MOT population relates proportionally to the distance of the inverse WTA neuron from the center of the horizontal visual field. The duration saturates for neuron distances greater than nine. Since a left turn and a right turn are exclusive events, strong inhibition between the two MOT populations ensures disambiguation of the MOT layer outputs. In case the ET neuron emits a spike, the excitation wave passes through most neurons of the left MOT population. Hence, the turning duration is slightly longer than for any turn induced by the inverse WTA population; the agent turns completely away from the faced scene, since no collision-free path was found in that direction. During the execution of a turn, the gap finding network receives mainly rotational OF. This type of apparent motion does not contain any depth information, and therefore no new movement direction should be chosen during or shortly after a turn. Because of that, the MOT layer strongly inhibits the inverse WTA and SPTC populations as well as the ET neuron. After a turn has finished and none of the MOT populations is spiking anymore, the agent moves purely translatory. The movement speed during this phase, v_ints, is defined in Equation 1, where f̄_OFI is the mean firing rate of the OFI population. During this movement phase, called intersaccade, the agent integrates translational OF information in its INT population. The inverse WTA population slowly depolarises from its strongly inhibited state and releases a spike indicating the new movement direction. This spike triggers the next saccadic turn of the robot, while the id of the winning neuron defines the direction and duration of the movement.

v_ints (m/s) = − f̄_OFI × 0.001    (1)
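Read as code, the speed law of Equation 1 could look as follows. Note that the leading term of the equation did not survive extraction; we assume here that it is the agent's maximum translational speed (the 2.5 a.u./s maximum reported for the simulated robot), which makes the slope 0.001 an OF-dependent slowdown. Both assumptions are ours:

```python
def intersaccade_speed(mean_rate_ofi, v_max=2.5, k=0.001):
    """Hypothetical reading of Eq. (1): intersaccadic speed decreases
    anti-proportionally with the mean OFI firing rate. v_max (assumed to be
    the 2.5 a.u./s maximum translational speed of the simulated robot) and
    the slope k = 0.001 are assumptions, not confirmed by the source."""
    return max(v_max - k * mean_rate_ofi, 0.0)

for rate in (0.0, 500.0, 2000.0):
    print(f"mean OFI rate {rate:6.1f} Hz -> "
          f"speed {intersaccade_speed(rate):.2f} a.u./s")
```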
To perform our behavioural experiments we decided to simulate the entire system, from visual input to actions, using the Neurorobotics Platform. This platform combines simulated SNNs with physically realistic robot models in a simulated 3D environment. The platform consists of three main parts: the world simulator Gazebo, the SNN simulator Nest and the transfer function manager Brain Interface and Body Integrator (BIBI). The BIBI middleware consists of a set of transfer functions which enable the communication between Gazebo and Nest via Robot Operating System (ROS) and PyNN adapters. The Closed Loop Engine (CLE) synchronises the two simulators Gazebo and Nest and controls the data exchange through transfer functions. The simulation front-end Virtual Coach is useful to control the whole simulation procedure through a single Python script. Furthermore, the State Machines Manager of the SMACH framework can be used to write state machines which manipulate the robot or world environment during the experiment.

The robot receives visual input from the embedded Dynamic Vision Sensor with a 60 degrees lens. The event-based camera sends its events to a SpiNN-5 board which simulates a simplified version of the collision avoidance network described in the section Collision Avoidance Network. The robot's visual field consists of 128 × 128 pixels which project onto 32 × 32 SPTCs. The robot computes ON-events and OFF-events in two separate pathways from the retina to the sEMDs. The INT layer integrates the spikes of the ON- and OFF-pathways in a single population. The network does not contain any OFI neuron, and the agent moves with a constant velocity of around 0.5 m/s. There are also no MOT populations and no ET population. The inhibition from the MOT population to the SPTC population is replaced by inhibition from the inverse WTA to the SPTC. The motor control is regulated on an Odroid mini-computer. The computer receives inverse WTA spikes from the SpiNN-5 board via Ethernet and translates these spikes into a motor command, which is then sent via USB to the motor controller. This causes a long delay between perception and action. The motor controller drives the six-wheeled robot in a differential manner.
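The Odroid-side translation from a winning inverse-WTA neuron to a differential-drive command is not spelled out above. A plausible sketch, with the saturation behaviour borrowed from the simulated MOT populations and all constants hypothetical, could look as follows:

```python
def wta_to_motor(winner_id, n_dirs=32, turn_ms_per_step=50, max_steps=9):
    """Hypothetical translation of an inverse-WTA spike into a
    differential-drive command: the winning neuron's side selects the turn
    direction, and its distance from the visual-field centre sets the turn
    duration (saturating at nine steps, as for the simulated MOT
    populations). All constants are illustrative."""
    centre = n_dirs // 2
    offset = winner_id - centre
    direction = "left" if offset < 0 else "right"
    steps = min(abs(offset), max_steps)
    return direction, steps * turn_ms_per_step   # (turn side, duration in ms)

print(wta_to_motor(3))    # far-left winner   -> ('left', 450)
print(wta_to_motor(18))   # slightly right    -> ('right', 100)
```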
Kaiser et al. 2016 developed a Neurorobotics Platform implementation of an event-based camera based on the world simulator Gazebo. This model samples the environment with a fixed update rate and produces an event when the brightness change between the old and the new frame exceeds a threshold. We used this camera model in our closed-loop simulations as visual input to the collision avoidance network. Even though Gazebo produces an event-stream from regularly sampled, synchronous frame differences, our sEMD characterisation and open-loop experiments (see Section sEMD Characterisation) confirmed the working principle of the motion detector model with real-world event-based camera data. We could further demonstrate the real-world, fully neuromorphic applicability in closed loop of most parts of the simulated agent, including the apparent motion computation by the sEMDs and the saccadic suppression. We set the resolution of the Gazebo event-based camera model to 128 × 40 pixels. The reduction of the vertical resolution from 128 to 40 pixels was done to speed up the simulation and to make the model fit onto a SpiNN-3 board. To further accelerate the simulation we limited the number of events per update cycle to 1000 and set the refresh rate to 200 Hz. Therefore, the sEMD can only detect time differences with a resolution of 5 ms. We decided for a large horizontal visual angle of 140 degrees so that the robot does not crash into unforeseen objects after a strong turn. At the same time, the uniform distribution of 128 pixels over a 140 degrees horizontal visual field leads to an inter-pixel angle of approximately 1.1 degrees. This visual acuity lies in the biologically plausible range of inter-ommatidial angles measured in Diptera and Hymenoptera, which varies between 0.4 and 5.8 degrees.
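The Gazebo camera model described above amounts to thresholded frame differencing; a compact sketch (the threshold value is illustrative, the resolution and event cap are those of our setup) is:

```python
import numpy as np

def frame_to_events(prev, curr, t, thresh=0.1, max_events=1000):
    """Sketch of the Gazebo event-camera model: compare consecutive frames
    (refreshed at a fixed 200 Hz in our simulations) and emit an ON/OFF
    event wherever the brightness change exceeds a threshold, capped at
    `max_events` per update cycle."""
    diff = curr.astype(float) - prev.astype(float)
    ys, xs = np.nonzero(np.abs(diff) > thresh)
    events = [(t, x, y, 1 if diff[y, x] > 0 else -1) for y, x in zip(ys, xs)]
    return events[:max_events]

rng = np.random.default_rng(1)
prev = rng.uniform(size=(40, 128))       # 128 x 40 pixels, as in our model
curr = prev.copy()
curr[:, 60:70] += 0.5                    # a bright vertical bar appears
print(len(frame_to_events(prev, curr, t=0.005)))   # -> 400 ON events
```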
We designed a four-wheeled simulated robot Gazebo model. The robot's dimensions are 20 × ~ × 10 cm, and it is equipped with an event-based camera (see Section Event-Based Cameras in Gazebo) and the husky differential motor controller plugin. The BIBI connects the robot with the collision avoidance network implemented in Nest (see Section Collision Avoidance Network). The connections consist of one transfer function from the vision sensor to the SPTC population and another one from the MOT population to the differential motor controller, as well as two Poisson input spike sources. The first transfer function sends visual input events. The second transfer function controls the agent's insect-inspired movement pattern. During inactivity of the MOT populations, the robot drives purely translatory with a maximum speed of 2.5 a.u./s. The movement velocity changes anti-proportionally to the environment's obstacle density, as explained in the section Densely Cluttered Environments. When one of the two MOT populations spikes, the robot fixes its forward velocity to 0.38 a.u./s and turns either to the left or to the right with an angular velocity of 4°/s. The two Poisson spike source populations send spikes with a mean spike rate of 100 Hz to the inverse soft WTA population and the ET neuron (for more details see Table 5 and Table 4).

For the sEMD characterisation we stimulated an event-based camera with a 79° lens (see Section Event-Based Cameras in Gazebo) using square-wave gratings with a wavelength of 20° and various constant velocities (from 0.1 to 10 Hz). These recordings were performed in a controlled environment containing an event-based camera, an LED light ring and a moving screen which projects exchangeable stimuli (see Figure A.1). The controllable light ring illuminates the screen. The camera's lens is positioned in the light ring's centre to ensure a homogeneous illumination of the pattern. The screen itself is moved by an Arduino-controlled motor. During recordings, the box can be closed and thus be isolated from interfering light sources. The contrast refers to absolute grey-scale values printed on white paper to form the screen. Given the printed contrast, we calculated the Michelson contrast as follows:

(I_max − I_min) / (I_max + I_min) = (I_max − I_max(1 − C_printed)) / (I_max + I_max(1 − C_printed)) = C_printed / (2 − C_printed)    (2)

To show the model's robustness to a wide range of environments, we varied the following three parameters in the recordings: the illumination, the grating velocity and the grating's contrast (see Table 1). Each possible parameter combination was recorded three times, with a recording duration of four seconds, to allow statistical evaluation of the results. The event-based camera was biased for slow velocities. The model (the first three populations in Figure 4) was simulated in Nest with the connections and neuron parameters defined in Table 5 and Table 4 respectively. The network was simulated for four seconds, receiving the events emitted by the event-based camera as spike-source array input. To define a response to the various stimuli, the mean population activities of the preferred-direction and null-direction populations were calculated from the simulation results (see Figure 1e). For the closest comparability to the biologically imposed environment parameters, we chose to compare and discuss the sEMD's velocity tuning curve for a grating contrast of 100 % and an illumination of 5000 lux.
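For convenience, Equation 2 as a function, with a few example values (the 100 % and ~35 % printed contrasts correspond to values discussed in the sEMD characterisation; the function name is ours):

```python
def michelson_from_printed(c_printed):
    """Eq. (2): Michelson contrast of a grating printed with grey-scale
    contrast c_printed on white paper, where I_min = I_max * (1 - c_printed)."""
    return c_printed / (2.0 - c_printed)

for c in (0.35, 0.5, 1.0):
    print(f"printed {c:.2f} -> Michelson {michelson_from_printed(c):.2f}")
# A 100 % printed contrast yields a Michelson contrast of 1.0;
# 50 % printed contrast corresponds to only ~0.33.
```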
Five different environments were designed to evaluate the agent's performance: a cluttered environment with randomly distributed obstacles sizing 1 × ~.

Figure 5. State machine to create the cluttered environment and check the agent's collision avoidance performance.

Additionally, a Virtual Coach script was composed which starts and stops the single runs in a for-loop. After creating the simulation environment, the Virtual Coach script starts the simulation for 10 seconds so that the state machine becomes activated. After that, the simulation stops for five minutes, which is long enough for the state machine to place all objects in the environment. When the five minutes have passed, the simulation is activated again and the agent starts moving through the environment. CSV files containing the spiking data of the network, the robot position and angular alignment, as well as the placement of the objects in the arena, were saved for all experiments. 100 data points were collected for the collision avoidance experiment in a cluttered environment with adaptive velocity; 70 data points were collected for the experiment with fixed velocity (see Figure 2f,i, A.5 and A.7). The tunnel centering experiment, the gap entering experiment and all other simulation experiments in the appendix were repeated three times for each individual configuration (see Table 2).

Obstacle densities were calculated by plotting the cluttered environment and counting the number of pixels occupied by the objects. The occurrence of collisions was also measured visually, by plotting the cluttered environment with the robot's trajectory while considering the agent's size and angular alignment. Since the can_collide feature of the objects in the cluttered environment was turned off, the agent moves through the obstacles when colliding. Therefore, an overlap of obstacle and robot can be interpreted as a collision. A collision avoidance run was marked as failed when such an overlap occurred, and the first time of overlap was noted as the collision time. Since there is no physical collision, the robot's size can be varied during the analysis to evaluate the effect of agent size on performance. To enhance the comparability of the robotic system to its biological role model, flying insects, we normalised all distance measures by dividing them by the chosen robot size of 40 × 40 centimeters. The normalised distance measures are reported in arbitrary units (a.u.).

The data generated during this study will be available at dataverse.nl/dataset.xhtml?persistentId=doi:10.34894/QTOJJP.
The code generated during this study will be made available at dataverse.nl/dataset.xhtml?persistentId=doi:10.34894/QTOJJP.
The authors would like to thank Daniel Gutierrez-Galan and Florian Hofmann for their technical support. The authors would also like to acknowledge the SpiNNaker Manchester team for their help with the sEMD implementation and the robot setup. Furthermore, the authors acknowledge the Neurorobotics Platform team for their technical support.
T.S., E.C. and E.J. conceived and designed the experiments. M.B.M., T.S. and E.J. designed and optimised the tested algorithm. T.S. and E.J. carried out and analysed the experiments. M.B.M., O.J.B., E.C. and M.E. developed the original sEMD model. O.J.B., M.B.M., T.S., E.J., E.C. and M.E. wrote the manuscript.
The authors declare no competing interests.
References
Kelasidi, E. et al. Path Following, Obstacle Detection and Obstacle Avoidance for Thrusted Underwater Snake Robots. Front. Robotics AI, 57, DOI: 10.3389/frobt.2019.00057 (2019).
Floreano, D., Ijspeert, A. & Schaal, S. Robotics and Neuroscience. Curr. Biol., R910–R920, DOI: 10.1016/j.cub.2014.07.058 (2014).
Barca, J. C. & Sekercioglu, Y. A. Swarm robotics reviewed. Robotica, 345–359, DOI: 10.1017/S026357471200032X (2013).
Obstacle avoidance in space robotics: Review of major challenges and proposed solutions, DOI: 10.1016/j.paerosci.2018.07.001 (2018).
Pandey, A. Mobile Robot Navigation and Obstacle Avoidance Techniques: A Review. Int. Robotics & Autom. J., DOI: 10.15406/iratj.2017.02.00023 (2017).
Serres, J. R. & Viollet, S. Insect-inspired vision for autonomous vehicles, DOI: 10.1016/j.cois.2018.09.005 (2018).
Dickinson, M. H. Death Valley, Drosophila, and the Devonian Toolkit. Annu. Rev. Entomol., 51–72, DOI: 10.1146/annurev-ento-011613-162041 (2014).
Baird, E. & Dacke, M. Finding the gap: a brightness-based strategy for guidance in cluttered environments. Proc. Royal Soc. B: Biol. Sci., 1794–1799, DOI: 10.1098/rspb.2015.2988 (2016).
Mountcastle, A. M., Alexander, T. M., Switzer, C. M. & Combes, S. A. Wing wear reduces bumblebee flight performance in a dynamic obstacle course. Biol. Letters, 20160294, DOI: 10.1098/rsbl.2016.0294 (2016).
Witthöft, W. Absolute Anzahl und Verteilung der Zellen im Hirn der Honigbiene. Zeitschrift für Morphol. der Tiere, 160–184, DOI: 10.1007/BF00298776 (1967).
Zheng, Z. et al. A Complete Electron Microscopy Volume of the Brain of Adult Drosophila melanogaster. Cell, 730–743, DOI: 10.1016/j.cell.2018.06.019 (2018).
Borst, A., Haag, J. & Mauss, A. S. How fly neurons compute the direction of visual motion, DOI: 10.1007/s00359-019-01375-9 (2019).
Fu, Q., Wang, H., Hu, C. & Yue, S. Towards computational models and applications of insect visual systems for motion perception: A review. Artif. Life, 263–311, DOI: 10.1162/artl_a_00297 (2019).
Egelhaaf, M. & Lindemann, J. P. Texture dependence of motion sensing and free flight behavior in blowflies (2012).
Bertrand, O. J. N., Lindemann, J. P. & Egelhaaf, M. A Bio-inspired Collision Avoidance Model Based on Spatial Information Derived from Motion Detectors Leads to Common Routes. PLOS Comput. Biol., e1004339, DOI: 10.1371/journal.pcbi.1004339 (2015).
A Model for an Angular Velocity-Tuned Motion Detector Accounting for Deviations in the Corridor-Centering Response of the Bee. PLOS Comput. Biol., e1004887, DOI: 10.1371/journal.pcbi.1004887 (2016).
Lecoeur, J., Dacke, M., Floreano, D. & Baird, E. The role of optic flow pooling in insect flight control in cluttered environments. Sci. Reports (2019).
Mauss, A. S. & Borst, A. Optic flow-based course control in insects. Curr. Opinion Neurobiol., 21–27, DOI: 10.1016/j.conb.2019.10.007 (2020).
Baird, E. & Dacke, M. Finding the gap: a brightness-based strategy for guidance in cluttered environments. Proc. Royal Soc. B: Biol. Sci., 20152988, DOI: 10.1098/rspb.2015.2988 (2016).
Ravi, S. et al. Gap perception in bumblebees. J. Exp. Biol., jeb184135, DOI: 10.1242/JEB.184135 (2019).
Ravi, S. et al. Bumblebees perceive the spatial layout of their environment in relation to their body size and form to minimize inflight collisions. Proc. Natl. Acad. Sci.
Li, J., Lindemann, J. P. & Egelhaaf, M. Local motion adaptation enhances the representation of spatial structure at EMD arrays. PLOS Comput. Biol., e1005919, DOI: 10.1371/journal.pcbi.1005919 (2017).
Zingg, S., Scaramuzza, D., Weiss, S. & Siegwart, R. MAV navigation through indoor corridors using optical flow. In , 3361–3368 (IEEE, 2010).
Blösch, M., Weiss, S., Scaramuzza, D. & Siegwart, R. Vision based MAV navigation in unknown and unstructured environments. In , 21–28 (IEEE, 2010).
Posch, C., Matolin, D. & Wohlgenannt, R. A QVGA 143 dB dynamic range asynchronous address-event PWM dynamic image sensor with lossless pixel-level video compression. In Digest of Technical Papers - IEEE International Solid-State Circuits Conference, vol. 53, 400–401, DOI: 10.1109/ISSCC.2010.5433973 (2010).
Lichtsteiner, P., Posch, C. & Delbruck, T. A 128 × 128 120 dB 15 µs latency asynchronous temporal contrast vision sensor. IEEE J. Solid-State Circuits, 566–576, DOI: 10.1109/JSSC.2007.914337 (2008).
Brandli, C., Berner, R., Yang, M., Liu, S. C. & Delbruck, T. A 240 × 180 130 dB 3 µs latency global shutter spatiotemporal vision sensor. IEEE J. Solid-State Circuits, 2333–2341, DOI: 10.1109/JSSC.2014.2342715 (2014).
Posch, C., Serrano-Gotarredona, T., Linares-Barranco, B. & Delbruck, T. Retinomorphic event-based vision sensors: Bioinspired cameras with spiking output. Proc. IEEE, 1470–1484, DOI: 10.1109/JPROC.2014.2346153 (2014).
Son, B. et al. A 640 × 480 dynamic vision sensor with a 9 µm pixel and 300 Meps address-event representation. In Digest of Technical Papers - IEEE International Solid-State Circuits Conference, vol. 60, 66–67, DOI: 10.1109/ISSCC.2017.7870263 (IEEE, 2017).
Astrom, K. J. & Bernhardsson, B. M. Comparison of Riemann and Lebesgue sampling for first order stochastic systems. In Proceedings of the 41st IEEE Conference on Decision and Control, vol. 2, 2011–2016 (IEEE, 2002).
Benosman, R., Clercq, C., Lagorce, X., Ieng, S.-H. & Bartolozzi, C. Event-based visual flow. IEEE Transactions on Neural Networks and Learning Systems, 407–417 (2013).
Conradt, J. On-board real-time optic-flow for miniature event-based vision sensors. In , 1858–1863, DOI: 10.1109/ROBIO.2015.7419043 (IEEE, 2015).
Milde, M. B., Bertrand, O. J., Benosman, R., Egelhaaf, M. & Chicca, E. Bioinspired event-driven collision avoidance algorithm based on optic flow. In , 1–7, DOI: 10.1109/EBCCSP.2015.7300673 (IEEE, 2015).
Liu, M. & Delbruck, T. Block-matching optical flow for dynamic vision sensors: Algorithm and FPGA implementation. In Proceedings - IEEE International Symposium on Circuits and Systems, DOI: 10.1109/ISCAS.2017.8050295 (IEEE, 2017).
Rueckauer, B. & Delbruck, T. Evaluation of Event-Based Algorithms for Optical Flow with Ground-Truth from Inertial Measurement Sensor. Front. Neurosci., 176, DOI: 10.3389/fnins.2016.00176 (2016).
Gallego, G., Rebecq, H. & Scaramuzza, D. A unifying contrast maximization framework for event cameras, with applications to motion, depth, and optical flow estimation. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (CVPR) (2018).
Haessig, G., Cassidy, A., Alvarez, R., Benosman, R. & Orchard, G. Spiking optical flow for event-based sensors using IBM's TrueNorth neurosynaptic system. IEEE Transactions on Biomedical Circuits and Systems, 860–870 (2018).
Martel, J. N., Chau, M., Dudek, P. & Cook, M. Toward joint approximate inference of visual quantities on cellular processor arrays. In , 2061–2064 (IEEE, 2015).
Milde, M. B., Bertrand, O. J. N., Ramachandran, H., Egelhaaf, M. & Chicca, E. Spiking elementary motion detector in neuromorphic systems. Neural Comput., 2384–2417, DOI: 10.1162/neco_a_01112 (2018).
Serres, J. R. & Ruffier, F. Optic flow-based collision-free strategies: From insects to robots. Arthropod Structure & Development, 703–717 (2017).
Fu, Q., Wang, H., Hu, C. & Yue, S. Towards computational models and applications of insect visual systems for motion perception: A review. Artif. Life, 263–311 (2019).
Müller, G. R. & Conradt, J. A miniature low-power sensor system for real time 2D visual tracking of LED markers. In , 2429–2434, DOI: 10.1109/ROBIO.2011.6181669 (2011).
Milde, M. B., Dietmuller, A., Blum, H., Indiveri, G. & Sandamirskaya, Y. Obstacle avoidance and target acquisition in mobile robots equipped with neuromorphic sensory-processing systems. In Proceedings - IEEE International Symposium on Circuits and Systems, DOI: 10.1109/ISCAS.2017.8050984 (IEEE, 2017).
Kreiser, R., Renner, A., Sandamirskaya, Y. & Pienroj, P. Pose estimation and map formation with spiking neural networks: towards neuromorphic SLAM. In , 2159–2166 (IEEE, 2018).
Indiveri, G. & Sandamirskaya, Y. The importance of space and time for signal processing in neuromorphic agents: the challenge of developing low-power, autonomous agents that interact with the environment. IEEE Signal Process. Mag., 16–28 (2019).
Schilstra, C. & van Hateren, J. H. Blowfly flight: kinematics of the thorax. 1481–1490 (1999).
Serres, J. R., Masson, G. P., Ruffier, F. & Franceschini, N. A bee in the corridor: centering and wall-following. Naturwissenschaften, 1181–1187, DOI: 10.1007/s00114-008-0440-6 (2008).
Baird, E., Srinivasan, M. V., Zhang, S. & Cowling, A. Visual control of flight speed in honeybees. J. Exp. Biol., 3895–3905 (2005).
Kern, R., Boeddeker, N., Dittmar, L. & Egelhaaf, M. Blowfly flight characteristics are shaped by environmental features and controlled by optic flow information. J. Exp. Biol., 2501–2514, DOI: 10.1242/jeb.061713 (2012).
Linander, N., Baird, E. & Dacke, M. Bumblebee flight performance in environments of different proximity. J. Comp. Physiol. A, 97–103, DOI: 10.1007/s00359-015-1055-y (2016).
Arenz, A., Drews, M. S., Richter, F. G., Ammer, G. & Borst, A. The temporal tuning of the Drosophila motion detectors is determined by the dynamics of their input elements. Curr. Biol., 929–944 (2017).
Drews, M. S. et al. Dynamic signal compression for robust motion vision in flies. Curr. Biol., 209–221 (2020).
Falotico, E. et al. Connecting artificial brains to robots in a comprehensive simulation framework: The Neurorobotics Platform. Front. Neurorobotics, 2, DOI: 10.3389/fnbot.2017.00002 (2017).
Hassenstein, B. & Reichardt, W. Systemtheoretische Analyse der Zeit-, Reihenfolgen- und Vorzeichenauswertung bei der Bewegungsperzeption des Rüsselkäfers Chlorophanus. Z. Naturforsch., 513–524, DOI: 10.1515/znb-1956-9-1004 (1956).
Diesmann, M. & Gewaltig, M.-O. NEST: An environment for neural systems simulations (2003).
Haag, J., Arenz, A., Serbe, E., Gabbiani, F. & Borst, A. Complementary mechanisms create direction selectivity in the fly. eLife (2016).
Ong, M., Bulmer, M., Groening, J. & Srinivasan, M. V. Obstacle traversal and route choice in flying honeybees: Evidence for individual handedness. PLoS One, DOI: 10.1371/journal.pone.0184343 (2017).
Schoepe, T. et al. Neuromorphic sensory integration for combining sound source localization and collision avoidance. 1–4 (2019).
Indiveri, G., Chicca, E. & Douglas, R. A VLSI array of low-power spiking neurons and bistable synapses with spike-timing dependent plasticity. IEEE Transactions on Neural Networks, 211–221 (2006).
Bartolozzi, C. & Indiveri, G. Synaptic dynamics in analog VLSI. Neural Computation, 2581–2603 (2007).
Horiuchi, T. K. A spike-latency model for sonar-based navigation in obstacle fields. IEEE Transactions on Circuits and Systems I: Regular Papers, 2393–2401, DOI: 10.1109/TCSI.2009.2015597 (2009).
Moradi, S., Qiao, N., Stefanini, F. & Indiveri, G. A scalable multicore architecture with heterogeneous memory structures for dynamic neuromorphic asynchronous processors (DYNAPs). IEEE Transactions on Biomedical Circuits and Systems, 106–122 (2017).
Painkras, E. et al. SpiNNaker: A 1-W 18-core system-on-chip for massively-parallel neural network simulation. IEEE J. Solid-State Circuits, 1943–1953 (2013).
Wang, R. & van Schaik, A. Breaking Liebig's law: an advanced multipurpose neuromorphic engine. Front. Neurosci., 593 (2018).
Schnell, B., Ros, I. G. & Dickinson, M. A descending neuron correlated with the rapid steering maneuvers of flying Drosophila. Curr. Biol., 1200–1205 (2017).
Sun, X., Yue, S. & Mangan, M. A decentralised neural model explaining optimal integration of navigational strategies in insects. bioRxiv.
Kreiser, R., Cartiglia, M., Martel, J. N., Conradt, J. & Sandamirskaya, Y. A neuromorphic approach to path integration: a head-direction spiking neural network with vision-driven reset. In , 1–5 (IEEE, 2018).
Blum, H. et al. A neuromorphic controller for a robotic vehicle equipped with a dynamic vision sensor. Robotics Sci. Syst. RSS 2017 (2017).
Honkanen, A., Adden, A., da Silva Freitas, J. & Heinze, S. The insect central complex and the neural basis of navigational strategies. J. Exp. Biol., DOI: 10.1242/jeb.188854 (2019).
Sun, X., Yue, S. & Mangan, M. A decentralised neural model explaining optimal integration of navigational strategies in insects. bioRxiv.
Kim, A. J., Fitzgerald, J. K. & Maimon, G. Cellular evidence for efference copy in Drosophila visuomotor processing. Nat. Neurosci., 1247–1255 (2015).
Alex, L. The Von Neumann architecture topic paper. Comput. Science, 360–8771 (2009).
Thakur, C. S. et al. Large-scale neuromorphic spiking array processors: A quest to mimic the brain. Front. Neurosci., 891, DOI: 10.3389/fnins.2018.00891 (2018).
Mead, C. Analog VLSI and Neural Systems (Addison Wesley Publishing Company, 1989).
Liu, S.-C., Delbruck, T., Indiveri, G., Whatley, A. & Douglas, R. Event-Based Neuromorphic Systems (John Wiley and Sons, 2015).
Payvand, M., Nair, M. V., Müller, L. K. & Indiveri, G. A neuromorphic systems approach to in-memory computing with non-ideal memristive devices: From mitigation to exploitation. Faraday Discuss., 487–510 (2019).
Serb, A. et al. Memristive synapses connect brain and silicon spiking neurons. Sci. Reports, 1–7 (2020).
Gerstner, W. & Kistler, W. M. Spiking Neuron Models: Single Neurons, Populations, Plasticity (Cambridge University Press, 2002).
Khalil, A. A., Valle, M., Chible, H. & Bartolozzi, C. CMOS dynamic tactile sensor. In , 269–272, DOI: 10.1109/NGCAS.2017.48 (2017).
Drix, D. & Schmuker, M. Resolving fast transients with metal-oxide gas sensors (2020). arXiv:2010.01903.
Liu, S.-C., van Schaik, A., Minch, B. A. & Delbruck, T. Asynchronous binaural spatial audition sensor with 2 × 64 × 4 channel output. IEEE Transactions on Biomedical Circuits and Systems, 453–464 (2014).
Chan, V., Liu, S. & van Schaik, A. AER EAR: A matched silicon cochlea pair with address event representation interface. IEEE Transactions on Circuits and Systems I: Regular Papers, 48–59, DOI: 10.1109/TCSI.2006.887979 (2007).
Mahowald, M. VLSI analogs of neural visual processing: a synthesis of form and function. Ph.D. dissertation (1992).
Lichtsteiner, P., Posch, C. & Delbruck, T. A 128 × 128 120 dB 30 mW asynchronous vision sensor that responds to relative intensity change. 2060–2069 (2006).
Olshausen, B. A. & Field, D. J. How close are we to understanding V1? Neural Comput., 1665–1699, DOI: 10.1162/0899766054026639 (2005).
Thorpe, S. & Gautrais, J. Rank order coding. In Bower, J. M. (ed.) Computational Neuroscience (Springer), DOI: 10.1007/978-1-4615-4831-7_19 (1998).
Thorpe, S. Spike arrival times: A highly efficient coding scheme for neural networks (1990).
Masquelier, T. Relative spike time coding and STDP-based orientation selectivity in the early visual system in natural continuous and saccadic vision: a computational model. J. Comput. Neurosci., 425–441 (2012).
D'Angelo, G. et al. Event-based eccentric motion detection exploiting time difference encoding. Front. Neurosci. (2020).
Quigley, M. ROS: an open-source Robot Operating System. In ICRA 2009 (2009).
Kaiser, J. et al. Towards a framework for end-to-end control of a simulated vehicle with spiking neural networks. In , 127–134, DOI: 10.1109/SIMPAR.2016.7862386 (2016).
Schoepe, T. et al. Live demonstration: Neuromorphic sensory integration for combining sound source localization and collision avoidance. In , 1–1, DOI: 10.1109/ISCAS45731.2020.9181257 (2020).
Land, M. F. Visual acuity in insects. Annu. Rev. Entomol., 147–177, DOI: 10.1146/annurev.ento.42.1.147 (1997).

Appendix
A.1 sEMD Characterization Setup
To ensure repeatability and reproducibility we recorded the grating in a controlled environment, see Figure A.1. The Dynamic Vision Sensor (DVS) is mounted in a light-sealed box at a variable distance to the screen. An LED ring (with 32 LEDs) homogeneously illuminates the DVS's field of view. The LEDs themselves are powered by an external power source. The moving screen consists of a thick paper tube, glued together at the ends with double-sided adhesive tape. This tube is clamped over two horizontally mounted cylinders. The lower cylinder is mounted with a floating bearing in the y-direction. The upper cylinder is driven by a stepper motor controlled by an Arduino Uno and translates its movement to the screen. The possible velocities of the screen range from 23 mm s⁻¹ to 210 mm s⁻¹. The grating itself is printed on dull thick paper forming the paper tube and is stored in the dark to avoid fading.

Figure A.1. Controlled environment for the recordings of the grating. The screen can move either from bottom to top or from top to bottom. The upper roll of the screen contraption is driven by an Arduino-controlled stepper motor. The LED ring illuminates the screen and the event-driven camera is located in its center.
A.2 sEMD Implementation on SpiNNaker
To demonstrate the sEMD's wide range of operation and its applicability on multiple platforms, we characterised the model's behaviour on SpiNNaker. We further investigated the sEMD's robustness to contrast and illumination. Figure A.2a shows that the model operates well over a wide range of illuminations at 100 % contrast and produces similar velocity tuning curves on SpiNNaker and NEST (see Figure 1e for comparison). Regarding the contrast sensitivity, we found that with the given parameter set the model reaches half activity at a relative contrast of 45 %.

Figure A.2. sEMD population response on SpiNNaker for varying illuminations and contrasts. a) Normalised sEMD population preferred-direction and null-direction response for 100 % contrast and all illuminations from 5 lux to 5000 lux. b) Normalised preferred-direction response for 5000 lux illumination over contrasts varying from 0 % to 100 % at a temporal frequency of 5 Hz. For further information on the model parameters see Table 3.
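For readers without SpiNNaker access, the facilitate-and-trigger mechanism underlying these tuning curves can be reproduced with a few lines of plain Python. In the sketch below, a facilitator event arms an exponentially decaying gain, a later trigger event injects a current proportional to the remaining gain, and a leaky integrate-and-fire neuron turns that current into a spike count, so shorter time differences (faster motion in the preferred direction) yield more spikes. All constants are deliberately simplified illustrations, not the SpiNNaker parameters of Table 3.

```python
# Minimal time-difference encoder (sEMD) unit: the facilitator arms a
# decaying gain, the trigger reads it out, and a LIF neuron converts the
# resulting current to spikes. Parameters are illustrative only.
def semd_spike_count(t_fac, t_trig, t_end=0.2, dt=1e-4):
    tau_fac = tau_syn = tau_m = 20e-3                # time constants (s)
    v_rest, v_thresh, v_reset = -60.0, -50.0, -60.0  # LIF potentials (mV)
    w_trig = 3000.0                                  # trigger weight (assumed)
    gain = i_syn = 0.0
    v, spikes = v_rest, 0
    for k in range(int(t_end / dt)):
        t = k * dt
        if abs(t - t_fac) < dt / 2:
            gain = 1.0                       # facilitator arms the unit
        if abs(t - t_trig) < dt / 2:
            i_syn += w_trig * gain           # trigger reads the decayed gain
        gain -= dt / tau_fac * gain          # facilitation decay
        i_syn -= dt / tau_syn * i_syn        # EPSC decay
        v += dt / tau_m * (v_rest - v) + dt * i_syn
        if v >= v_thresh:                    # fire and reset
            spikes += 1
            v = v_reset
    return spikes

# Preferred direction, decreasing speed -> fewer spikes; the null direction
# (trigger before facilitator) stays silent:
# semd_spike_count(0.0, 0.001) > semd_spike_count(0.0, 0.01) > semd_spike_count(0.0, 0.05)
```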
A.3 The Motion-Vision Network
One very important parameter for collision avoidance is the knowledge of one's own body size. Orchid bees with a wingspan of approximately 20 mm avoid passing circular apertures smaller than 25 mm because the collision risk would be too high. Some kind of self-representation in the bee's brain has to drive the insect's decision that the gap is too small for it. Similarly, we can tune the connectivity of our SNN to indirectly include relevant body size information. Our neural network model needs to consider its own body measures when moving through a gap. This decision to move or not to move through a gap can be purely driven by the agent's relative perception of the gap. In our collision avoidance network, this perception is modifiable by a change of the synaptic connections between the integrator neuron population and the inverse WTA population. OF is encoded in a retinotopic map of the integrator neuron population. This neuron population is initially one-to-one connected to the inverse WTA network. By connecting each integrator neuron to its corresponding inverse WTA neuron and its closest neighbours, the size of the perceived OF caused by an object increases. Therefore, small gaps between objects are closed with an increasing number of neighbouring INT to inverse WTA connections, which leads to an increase of a perceived gap's minimum size. The angle occupied by a gap has to be bigger than gap_min to be considered a movement direction, as shown in Equation 3. α_INT, the angle of perception of a single INT neuron, amounts to ~2°, while n_connect represents the number of neighbouring connections.

gap_min = (2 × n_connect + 1) × α_sEMD    (3)

We evaluated how the minimum gap size entered by the robotic agent changes with the OF perception. As expected, with an increasing number of neighbouring connections small gaps were no longer entered (see Figure A.3). By fixing the number of neighbouring connections to the 4 nearest neighbours for all following experiments, the robot would not enter gaps that were too small but was still able to navigate through larger corridors.
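Equation 3 and the neighbourhood fan-out translate directly into code. The sketch below uses the reconstructed form of Equation 3 together with the ~2° per-neuron viewing angle quoted above; the population size and the index-pair connection-list format are assumptions for illustration.

```python
# Sketch of Equation 3 and the INT -> inverse-WTA fan-out it describes.
N_INT = 64     # assumed number of INT / inverse-WTA neurons
ALPHA = 2.0    # approximate viewing angle of one INT neuron (degrees)

def gap_min(n_connect, alpha=ALPHA):
    """Minimum angular gap size still perceived as open (Equation 3)."""
    return (2 * n_connect + 1) * alpha

def int_to_wta_connections(n_connect, n=N_INT):
    """One-to-one connections plus n_connect nearest neighbours per side."""
    return [(i, j) for i in range(n)
            for j in range(max(0, i - n_connect),
                           min(n, i + n_connect + 1))]

# With the 4 nearest neighbours used in all following experiments:
# gap_min(4) == 18 degrees of visual angle
```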
Figure A.3. Agent's trajectories in a narrowing corridor with varied connectivity between the INT and inverse WTA populations, as explained in section The Motion-Vision Network. The legend refers to the number of neighbouring connections. The simulated robot's start point is on the left side.
A.4 The Movement Behaviour
When exposed to a densely cluttered environment, a narrow tunnel or a nearby object, flying insects decrease their movement velocity. This mechanism reduces the agent's collision probability by increasing the time-of-flight: due to its lower speed, the agent has more time to react and turn away from the potential threat. We tested the effect of a change in velocity with the agent in an empty arena. As expected, the simulated robot's minimum wall distance increased at lower velocities (see Figure A.4a,b). Therefore, an adaptive, obstacle-density-dependent velocity can be a helpful tool to extend the agent's working range towards higher obstacle densities.
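The actual velocity controller is part of the network described in the main text; purely as an illustration of the idea, the intersaccadic velocity could be scaled down as the summed motion (INT population) activity, and hence the perceived nearness of clutter, grows. The scaling constants below are assumptions.

```python
# Assumed stand-in for a density-dependent slowdown: higher summed INT
# activity (more nearby clutter) -> lower intersaccadic velocity.
def intersaccadic_velocity(int_rate, v_max=2.5, v_min=0.5, k=0.01):
    """Map summed INT population rate (spikes/s) to velocity (a.u./s)."""
    return max(v_min, v_max / (1.0 + k * int_rate))
```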
A.5 Gap finding behaviour in cluttered environments
Quantifying the relative motion perception and collision avoidance behaviour in controlled environments (see Figures A.1 and A.4) allows us to assess the fundamental capabilities of our agent. However, these tests do not fully capture the conditions an agent will encounter in the real world, which include urban areas, indoor spaces and outdoor forest environments. A simple yet effective test environment should therefore be characterised by a variable amount of clutter, i.e. obstacle density, of vertical obstacles placed in a random configuration. We introduced the agent into an arena, varied the obstacle density from 0 % up to 38 %, and measured the mean clearance (see Figure A.5a) and the maximum distance (see Figure A.5b) as a function of increasing obstacle density. The mean clearance quickly drops, in a roughly exponential fashion, from 25 a.u. to a minimum of 5 a.u. If the obstacle density is greater than ≈15 %, the mean clearance stays constant; however, the collision rate starts to increase (see Figure A.6). Interestingly, due to the employed adaptive movement strategy, the agent's velocity decreases almost linearly with increasing obstacle density (see Figure A.7). This adaptive behaviour ensures that, despite high clutter, the agent successfully identifies gaps in the environment and steers towards them, consequently avoiding collisions with its surroundings.

Figure A.4. Agent's trajectories in an empty box. a) Agent's trajectory for different fixed intersaccadic velocities in a.u./s. b) Trajectories with the parameters used for the cluttered environment experiment in section Densely Cluttered Environments, with fixed and adaptive intersaccadic velocity in a.u./s.
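Both quantities in Figure A.5 follow directly from the saved trajectories and object placements. A minimal sketch, assuming arrays of robot positions and obstacle centres and using the 40 cm normalisation from the Methods; the names are illustrative:

```python
# Mean obstacle clearance and maximum distance from start, in a.u.
import numpy as np

ROBOT_SIZE = 0.4  # metres; distances are normalised by this value

def clearance_and_range(trajectory, obstacle_centres):
    """trajectory: (T, 2) robot positions; obstacle_centres: (K, 2)."""
    traj = np.asarray(trajectory, dtype=float)
    obs = np.asarray(obstacle_centres, dtype=float)
    # distance to the nearest obstacle at every time step
    nearest = np.linalg.norm(traj[:, None, :] - obs[None, :, :],
                             axis=-1).min(axis=1)
    mean_clearance = nearest.mean() / ROBOT_SIZE
    max_distance = np.linalg.norm(traj - traj[0], axis=1).max() / ROBOT_SIZE
    return mean_clearance, max_distance
```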
Figure A.5. Agent's mean obstacle clearance (a) and maximum distance to the start location (b), calculated for the data from Figure 2d.

Figure A.6. Agent's behaviour in cluttered environments with the parameters from Tables 4 and 5, moving with a fixed intersaccadic velocity of 2.5 a.u./s. Top: real-world time at which the simulated robot leaves the arena, collides, or the simulation time runs out. Bottom: agent's success rate, i.e. the number of runs without collisions.

Figure A.7. Agent's mean velocity over obstacle density, calculated for the data from Figure 2d.

A.6 Corridor-Centering in the Real World
To prove the real-time capability and robustness of the SNN on neuromorphic hardware, we evaluated the system in a real-world scenario. A robotic platform, described in section Real World Robot, was assembled and tested in a corridor (see Figure A.8). In five out of six cases the agent centers very well in the 80 centimeters wide corridor, with a standard deviation from the corridor center between four and eight centimeters (see Figure A.9a,b). In a control experiment without the corridor, the robot moved randomly in different directions, showing that the robot's centering was caused by the corridor itself (see Figure A.9c,d).
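The statistics reported in Figure A.9b reduce to the mean and standard deviation of the lateral offset from the corridor midline, computed separately for the segments before and after the position at which the corridor leaves the field of view. A minimal sketch, assuming recorded x/y positions in metres; the split position and names are illustrative:

```python
# Centering statistics for the two corridor segments of Figure A.9b.
import numpy as np

CORRIDOR_WIDTH = 0.8  # metres

def centering_stats(x, y, x_split):
    """Mean and std of the offset from the midline before/after x_split."""
    x, y = np.asarray(x, dtype=float), np.asarray(y, dtype=float)
    offset = y - CORRIDOR_WIDTH / 2          # deviation from corridor centre
    before, after = offset[x <= x_split], offset[x > x_split]
    return (before.mean(), before.std()), (after.mean(), after.std())
```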
Figure A.8. Robot and setup used to conduct the real-world experiment. a) The robot receives visual input from the embedded Dynamic Vision Sensor. The event-based camera sends its events to a SpiNN-5 board, which simulates a simplified version of the collision avoidance network described in section Collision Avoidance Network. For more details on the real-world robotic implementation see section Real World Robot. b) One meter long corridor.

Figure A.9. Real-world corridor centering experiment results. a) Robot's movement trajectories through the corridor shown in Figure A.8. The movement direction is from left to right. The yellow triangle indicates the position at which the robot cannot see the corridor anymore. b) Mean and standard deviation for two regions, from the beginning until the yellow triangle and from the yellow triangle until the end. The standard deviation increases for the second region since the robot no longer sees the corridor. c) Control experiment with the robot moving in the environment without the corridor. d) Mean and standard deviation of c).
Tables

Table 1. Parameters of grating recordings. Three four-second recordings were made for each possible parameter combination.

Contrast | Temporal frequency (Hz) | Illumination (lux)
0.0      | 0.1                     | 5
0.2      | 0.5                     | 50
0.4      | 1.0                     | 100
0.6      | 2.5                     | 500
0.8      | 5.0                     | 1000
1.0      | 10.0                    | 5000

Table 2. Parameters of simulations and real world experiment.

Simulation                | Figures         | Repetitions          | Real time duration (min)
Clutter adaptive velocity | 2f,i, A.5, A.7a | 100                  | 360
Clutter fixed velocity    | 2i, A.6         | 70                   | 360
Corridors                 | 2g,j,k,l        | 3 per corridor width | 60
Real World Corridor       | 2d              | 6                    | -
Gaps                      | 2e,h            | 3 per gap size       | 180
Narrowing Corridor        | A.3             | 3 per configuration  | 90
Empty Box                 | A.4, A.7b       | 3 per configuration  | 30

Table 3. Neuron parameters and connections on SpiNNaker.

Name | Type | C_m  | tau_m | tau_ref | v_reset | v_rest | v_thresh | tau_syn_E | tau_syn_I | I_offset | Popsize (col × row × pop)
DVS  | SSA  |      |       |         |         |        |          |           |           |          | 128 × 128 × 1
SPTC | LIF  | 0.25 | 20    | 1       | -85     | -60    | -50      | 20        | 20        | 0        | 32 × 32 × 1
sEMD | TDE  | 0.25 | 20    | 1       | -85     | -60    | -50      | 20        | 20        | 0        | 32 × 32 × 2

From | To             | Weight (nA) | Connection type                                 | Synapse type | Delay (ms)
DVS  | SPTC           | 0.2         | (int(i/(128*4)*32) + int(i % (128*4) / 3)) to i | excitatory   | 1
SPTC | TDE top-bottom | 0.2         | one_to_one                                      | facilitator  | 1
SPTC | TDE top-bottom | 0.2         | i to i+32                                       | trigger      | 1
SPTC | TDE bottom-top | 0.2         | one_to_one                                      | trigger      | 1
SPTC | TDE bottom-top | 0.2         | i+32 to i                                       | facilitator  | 0.1

Table 4. Neuron parameters of the Neurorobotics Platform NEST network.

Name  | Type                    | E_L (mV) | C_m (pF) | tau_m (ms) | t_ref (ms) | tau_syn_exc (ms) | tau_syn_inh (ms) | V_th (mV) | V_reset (mV) | V_m (mV) | Popsize (col × row × pop)
SPTC  | LIF                     | -60.5    | 25       | 20         | 1          | 10               | 10               | -60       | -60.5        | -60.5    | 64 × 20 × 1
sEMD  | TDE                     | -60.0    | 250      | 10         | 1          | 10               | 10               | -30       | -85          | -60      | 64 × 20 × 2
INT   | LIF                     | -70      | 250      | 20         | 1          | 5                | 5                | -40       | -70          | -65      | 64 × …
…     | …                       | …        | …        | …          | …          | …                | …                | …         | …            | …        | …
POIS1 | Spike Source (rate 100) | …        | …        | …          | …          | …                | …                | …         | …            | …        | 64 × …

From           | To              | Weight (nA) | Connection type                       | Synapse type | Delay (ms)
DVS NRP        | SPTC            | default     | (i and i+1 and i+128 and i+129) to i  | excitatory   | 0.1
DVS real world | SPTC            | 0.002       | (i and i+1 and i+128 and i+129) to i  | excitatory   | 0.1
SPTC           | TDE left-right  | 4           | one_to_one                            | trigger      | 0.1
SPTC           | TDE left-right  | 4           | i to i+1                              | facilitator  | 0.1
SPTC           | TDE right-left  | 4           | one_to_one                            | facilitator  | 0.1
SPTC           | TDE right-left  | 4           | i+1 to i                              | trigger      | 0.1
TDE right-left | INT right-left  | 1           | i mod 64 to i                         | excitatory   | 0.1
TDE left-right | INT left-right  | 1           | i mod 64 to i                         | excitatory   | 0.1
INT right-left | WTA             | -5          | one_to_one                            | inhibitory   | 0.1
INT right-left | WTA             | -3          | i to i ± … − all_to_all               | excitatory   | 0.1
INT left-right | WTA             | -5          | one_to_one                            | inhibitory   | 0.1
INT left-right | WTA             | -3          | i to i ± … − all_to_all               | excitatory   | 0.1
WTA(0-8)       | MOT1            | 10          | i to 50                               | excitatory   | 0.1
WTA(9-31)      | MOT1            | 10          | i to 2i + 32                          | excitatory   | 0.1
WTA(32-53)     | MOT2            | 10          | 63 − i to 2i + 32                     | excitatory   | 0.1
WTA(54-63)     | MOT2            | 10          | i to 50                               | excitatory   | 0.1
WTA            | GI              | 10          | all_to_all                            | excitatory   | 0.1
ET             | MOT1            | 10          | 0 to 0                                | excitatory   | 0.1
ET             | GI              | 10          | all_to_all                            | excitatory   | 0.1
GI             | ET              | -10         | all_to_all                            | inhibitory   | 0.1
GI             | WTA             | -10         | all_to_all                            | inhibitory   | 0.1
MOT1           | WTA             | -30         | all_to_all                            | inhibitory   | 0.1
MOT1           | ET              | -30         | all_to_all                            | inhibitory   | 0.1
MOT1           | MOT2            | -10         | all_to_all                            | inhibitory   | 0.1
MOT1           | Sensors         | -30         | all_to_all                            | inhibitory   | 0.1
MOT1           | MOT1            | 10          | i to i + 1                            | excitatory   | 10
MOT1           | MOT1            | -10         | one_to_one                            | inhibitory   | 0.1
MOT2           | WTA             | -30         | all_to_all                            | inhibitory   | 0.1
MOT2           | ET              | -30         | all_to_all                            | inhibitory   | 0.1
MOT2           | MOT1            | -10         | all_to_all                            | inhibitory   | 0.1
MOT2           | Sensors         | -30         | all_to_all                            | inhibitory   | 0.1
MOT2           | MOT2            | 10          | i to i + 1                            | excitatory   | 10
MOT2           | MOT2            | -10         | one_to_one                            | inhibitory   | 0.1
POIS1          | WTA             | 1           | one_to_one                            | excitatory   | 0.1
POIS2          | ET              | 0.3         | one_to_one                            | excitatory   | 0.1

Table 5.