Daniel Neil
University of Zurich
Publications
Featured research published by Daniel Neil.
Frontiers in Neuroscience | 2013
Peter O'Connor; Daniel Neil; Shih-Chii Liu; Tobi Delbruck; Michael Pfeiffer
Deep Belief Networks (DBNs) have recently shown impressive performance on a broad range of classification problems. Their generative properties allow better understanding of the performance, and provide a simpler solution for sensor fusion tasks. However, because of their inherent need for feedback and parallel update of large numbers of units, DBNs are expensive to implement on serial computers. This paper proposes a method based on the Siegert approximation for Integrate-and-Fire neurons to map an offline-trained DBN onto an efficient event-driven spiking neural network suitable for hardware implementation. The method is demonstrated in simulation and by a real-time implementation of a 3-layer network with 2694 neurons used for visual classification of MNIST handwritten digits with input from a 128 × 128 Dynamic Vision Sensor (DVS) silicon retina, and sensory-fusion using additional input from a 64-channel AER-EAR silicon cochlea. The system is implemented through the open-source software in the jAER project and runs in real-time on a laptop computer. It is demonstrated that the system can recognize digits in the presence of distractions, noise, scaling, translation and rotation, and that the degradation of recognition performance by using an event-based approach is less than 1%. Recognition is achieved in an average of 5.8 ms after the onset of the presentation of a digit. By cue integration from both silicon retina and cochlea outputs we show that the system can be biased to select the correct digit from otherwise ambiguous input.
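The Siegert approximation referenced above maps an analog input statistic to the mean firing rate of an integrate-and-fire neuron, which is what allows an offline-trained unit to be replaced by a spiking neuron with a matched transfer function. A numerical sketch using simple trapezoidal integration (the neuron parameters here are illustrative assumptions, not values from the paper):

```python
import math

def siegert_rate(mu, sigma, v_reset=0.0, v_th=1.0,
                 tau_m=0.02, t_ref=0.002, steps=2000):
    """Mean firing rate (Hz) of a leaky integrate-and-fire neuron under
    the diffusion approximation (Siegert formula). mu and sigma are the
    mean and standard deviation of the effective membrane input."""
    lo = (v_reset - mu) / sigma
    hi = (v_th - mu) / sigma
    # Trapezoidal integration of exp(u^2) * (1 + erf(u)) over [lo, hi]
    h = (hi - lo) / steps
    total = 0.0
    for i in range(steps + 1):
        u = lo + i * h
        f = math.exp(u * u) * (1.0 + math.erf(u))
        total += 0.5 * f if i in (0, steps) else f
    integral = total * h
    return 1.0 / (t_ref + tau_m * math.sqrt(math.pi) * integral)
```

As expected for such a transfer function, a larger mean input drives a higher predicted rate, bounded above by the refractory limit 1/t_ref.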
international symposium on neural networks | 2015
Peter U. Diehl; Daniel Neil; Jonathan Binas; Matthew Cook; Shih-Chii Liu; Michael Pfeiffer
Deep neural networks such as Convolutional Networks (ConvNets) and Deep Belief Networks (DBNs) represent the state-of-the-art for many machine learning and computer vision classification problems. To overcome the large computational cost of deep networks, spiking deep networks have recently been proposed, given the specialized hardware now available for spiking neural networks (SNNs). However, this has come at the cost of performance losses due to the conversion from analog neural networks (ANNs) without a notion of time, to sparsely firing, event-driven SNNs. Here we analyze the effects of converting deep ANNs into SNNs with respect to the choice of parameters for spiking neurons such as firing rates and thresholds. We present a set of optimization techniques to minimize performance loss in the conversion process for ConvNets and fully connected deep networks. These techniques yield networks that outperform all previous SNNs on the MNIST database to date, and many networks here are close to maximum performance after only 20 ms of simulated time. The techniques include using rectified linear units (ReLUs) with zero bias during training, and using a new weight normalization method to help regulate firing rates. Our method for converting an ANN into an SNN enables low-latency classification with high accuracies already after the first output spike, and compared with previous SNN approaches it yields improved performance without increased training time. The presented analysis and optimization techniques boost the value of spiking deep networks as an attractive framework for neuromorphic computing platforms aimed at fast and efficient pattern recognition.
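The weight normalization idea can be sketched as follows: record each layer's maximum ReLU activation over a sample set, then rescale weights and biases so no activation exceeds 1, keeping the required firing rates within the attainable range while preserving the argmax output. A minimal pure-Python sketch (the paper's exact normalization procedure may differ in details):

```python
def layer_out(w, b, x):
    """One ReLU layer: w[j][i] connects input i to unit j."""
    return [max(0.0, sum(wi * xi for wi, xi in zip(row, x)) + bi)
            for row, bi in zip(w, b)]

def normalize(layers, samples):
    """Data-based weight normalization sketch: divide each layer by its
    maximum activation over `samples`, correcting for the previous
    layer's scale, so every activation stays <= 1."""
    # 1. Record the max activation per layer in the original network.
    maxima = []
    for w, b in layers:
        m = 0.0
        outs = []
        for x in samples:
            y = layer_out(w, b, x)
            m = max(m, max(y))
            outs.append(y)
        maxima.append(m)
        samples = outs  # feed activations forward to the next layer
    # 2. Rescale: w' = w * lambda_{l-1} / lambda_l, b' = b / lambda_l.
    scaled, prev = [], 1.0
    for (w, b), lam in zip(layers, maxima):
        scaled.append(([[wij * prev / lam for wij in row] for row in w],
                       [bi / lam for bi in b]))
        prev = lam
    return scaled
```

Because each layer's output is only divided by a positive constant, the ranking of the output units (and thus the classification) is unchanged.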
IEEE Transactions on Very Large Scale Integration Systems | 2014
Daniel Neil; Shih-Chii Liu
Current neural networks are accumulating accolades for their performance on a variety of real-world computational tasks including recognition, classification, regression, and prediction, yet there are few scalable architectures that have emerged to address the challenges posed by their computation. This paper introduces Minitaur, an event-driven neural network accelerator, which is designed for low power and high performance. As a field-programmable gate array (FPGA)-based system, it can be integrated into existing robotic systems or it can offload computationally expensive neural network tasks from the CPU. The version presented here implements a spiking deep network which achieves 19 million postsynaptic currents per second on 1.5 W of power and supports up to 65 K neurons per board. The system records 92% accuracy on the MNIST handwritten digit classification task and 71% accuracy on the 20 newsgroups classification data set. Due to its event-driven nature, it allows for trading off between accuracy and latency.
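Event-driven operation means work is performed only when a spike arrives: each input event adds one weight column to the membrane potentials, and a neuron emits an output spike when it crosses threshold. A minimal software analogue (not the Minitaur hardware pipeline; the subtractive reset rule is an assumption):

```python
def event_driven_layer(weights, in_spikes, threshold=1.0):
    """Process a stream of input spike events through one layer.
    weights[i][j] connects input i to neuron j; in_spikes is a
    sequence of input indices in arrival order."""
    n = len(weights[0])
    v = [0.0] * n            # membrane potentials
    out = []
    for i in in_spikes:
        for j, wij in enumerate(weights[i]):
            v[j] += wij      # one synaptic update per connection
            if v[j] >= threshold:
                out.append(j)        # neuron j fires...
                v[j] -= threshold    # ...and resets by subtraction
    return out
```

The cost per event is the fan-out of the spiking input, which is what makes sparse activity cheap compared with a dense frame-based pass.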
Frontiers in Neuroscience | 2015
Evangelos Stromatias; Daniel Neil; Michael Pfeiffer; Francesco Galluppi; Steve B. Furber; Shih-Chii Liu
Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs), are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The ongoing work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations, is studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
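Limited weight precision of the kind studied here can be modeled by snapping weights to a fixed-point grid before (or during) execution. A sketch (the symmetric signed format and clipping range are assumptions; actual hardware formats vary):

```python
def quantize(w, bits, w_max=1.0):
    """Round a weight to a signed fixed-point grid with `bits` total
    bits over [-w_max, w_max], clipping values outside the range."""
    levels = 2 ** (bits - 1) - 1      # symmetric signed range
    step = w_max / levels
    q = round(w / step)
    q = max(-levels, min(levels, q))  # saturate at the representable edge
    return q * step
```

At 8 bits the rounding error is at most half a step; at 2 bits only the values {-1, 0, +1} (scaled) survive, which is the regime where the paper reports that precision-aware training becomes important.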
international symposium on neural networks | 2015
Evangelos Stromatias; Daniel Neil; Francesco Galluppi; Michael Pfeiffer; Shih-Chii Liu; Steve B. Furber
Deep neural networks have become the state-of-the-art approach for classification in machine learning, and Deep Belief Networks (DBNs) are one of their most successful representatives. DBNs consist of many neuron-like units, which are connected only to neurons in neighboring layers. Larger DBNs have been shown to perform better, but scaling up poses problems for conventional CPUs, which calls for efficient implementations on parallel computing architectures, in particular ones that reduce the communication overhead. In this context we introduce a realization of a spike-based variation of previously trained DBNs on the biologically inspired parallel SpiNNaker platform. The DBN on SpiNNaker runs in real time and achieves a classification performance of 95% on the MNIST handwritten digit dataset, which is only 0.06% less than that of a pure software implementation. Importantly, using a neurally inspired architecture yields additional benefits: during network run-time on this task, the platform consumes only 0.3 W with classification latencies on the order of tens of milliseconds, making it suitable for implementing such networks on a mobile platform. The results in this paper also show how the power dissipation of the SpiNNaker platform and the classification latency of a network scale with the number of neurons and layers in the network and the overall spike activity rate.
International Conference on Event-Based Control, Communication, and Signal Processing | 2016
Diederik Paul Moeys; Federico Corradi; Emmett Kerr; Philip Vance; Gautham P. Das; Daniel Neil; Dermot Kerr; Tobi Delbruck
This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor "frames" that consist of a constant number of DAVIS ON and OFF events. The network is thus "data driven" at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, left, center, and non-visible. After off-line training on labeled data, the network is deployed on-board the Summit XL robot, which runs jAER and receives steering directions in real time. Successful closed-loop trials, with accuracies up to 87% or 92% (depending on the evaluation criteria), are reported. Although the proposed approach discards the precise DAVIS event timing, it offers the significant advantage of compatibility with conventional deep learning technology without giving up the advantage of data-driven computing.
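A DVS "frame" of the kind described is built by accumulating a fixed number of events rather than sampling on a fixed clock, so the frame rate tracks scene activity. A rough sketch with a single signed channel (an assumption; the DAVIS pipeline may keep ON and OFF events in separate channels):

```python
def events_to_frame(events, width, height, n=100):
    """Accumulate the first n DVS events into a 2D histogram "frame".
    events: iterable of (x, y, polarity) with polarity in {-1, +1}.
    Returns the frame plus the leftover events for the next frame."""
    ev = list(events)
    batch, rest = ev[:n], ev[n:]
    frame = [[0] * width for _ in range(height)]
    for x, y, pol in batch:
        frame[y][x] += pol   # signed ON/OFF accumulation per pixel
    return frame, rest
```

When the scene is fast-moving, events arrive quickly and n is reached sooner, which is exactly why the effective sample rate in the paper rises from 15 Hz to 240 Hz with robot speed.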
International Symposium on Circuits and Systems | 2016
Daniel Neil; Shih-Chii Liu
The use of spiking neuromorphic sensors with state-of-the-art deep networks is currently an active area of research. Still relatively unexplored are the pre-processing steps needed to transform spikes from these sensors and the types of network architectures that can produce high-accuracy performance using these sensors. This paper discusses several methods for preprocessing the spiking data from these sensors for use with various deep network architectures. The outputs of these preprocessing methods are evaluated using different networks, including a deep fusion network composed of Convolutional Neural Networks and Recurrent Neural Networks, to jointly solve a recognition task using the MNIST (visual) and TIDIGITS (audio) benchmark datasets. With only 1000 visual input spikes from a spiking hardware retina, the classification accuracy of 64.5% achieved by a particular trained fusion network increases to 98.31% when combined with inputs from a spiking hardware cochlea.
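One plausible preprocessing step of the kind discussed is reducing a spike train to a fixed-length feature vector via exponentially decayed traces, so that recent spikes count more than old ones (an illustrative choice for a sketch, not necessarily one of the paper's methods):

```python
import math

def spikes_to_features(spikes, n_inputs, tau=0.01, t_end=None):
    """Convert a list of (input_index, timestamp) spikes into a dense
    vector of exponentially decayed spike traces evaluated at t_end."""
    if t_end is None:
        t_end = max(t for _, t in spikes)
    feat = [0.0] * n_inputs
    for i, t in spikes:
        feat[i] += math.exp(-(t_end - t) / tau)  # older spikes decay
    return feat
```

The resulting dense vector can be fed to a conventional CNN or RNN input layer, which is the bridge between event-based sensors and frame-based deep networks that the paper evaluates.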
ACM Symposium on Applied Computing | 2016
Daniel Neil; Michael Pfeiffer; Shih-Chii Liu
Recent advances have allowed Deep Spiking Neural Networks (SNNs) to perform at the same accuracy levels as Artificial Neural Networks (ANNs), but have also highlighted a unique property of SNNs: whereas in ANNs, every neuron needs to update once before an output can be created, the computational effort in an SNN depends on the number of spikes created in the network. While higher spike rates and longer computing times typically improve classification performance, very good results can already be achieved earlier. Here we investigate how Deep SNNs can be optimized to reach desired high accuracy levels as quickly as possible. Different approaches are compared which either minimize the number of spikes created, or aim at rapid classification by enforcing the learning of feature detectors that respond to few input spikes. A variety of networks with different optimization approaches are trained on the MNIST benchmark to perform at an accuracy level of at least 98%, while monitoring the average number of input spikes and spikes created within the network to reach this level of accuracy. The majority of SNNs required significantly fewer computations than frame-based ANN approaches. The most efficient SNN achieves an answer in less than 42% of the computational steps necessary for the ANN, and the fastest SNN requires only 25% of the original number of input spikes to achieve equal classification accuracy. Our results suggest that SNNs can be optimized to dramatically decrease the latency as well as the computation requirements for Deep Neural Networks, making them particularly attractive for applications like robotics, where real-time restrictions to produce outputs and low energy budgets are common.
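The computational comparison above can be made concrete by counting operations: a frame-based ANN pays one multiply-accumulate per connection per input, while an SNN pays one synaptic update per spike per fan-out connection. A sketch with illustrative layer sizes and spike counts (not figures from the paper):

```python
def ann_ops(layer_sizes):
    """Multiply-accumulates for one frame-based forward pass."""
    return sum(a * b for a, b in zip(layer_sizes, layer_sizes[1:]))

def snn_ops(spike_counts, fan_outs):
    """Synaptic updates actually performed: each spike emitted in a
    layer touches only its fan-out connections."""
    return sum(s * f for s, f in zip(spike_counts, fan_outs))
```

With a 784-500-10 network and sparse activity (say 200 input-layer and 50 hidden-layer spikes), the SNN performs a fraction of the ANN's operations, which is the kind of saving the paper's optimized networks exploit.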
International Symposium on Circuits and Systems | 2015
Evangelos Stromatias; Daniel Neil; Francesco Galluppi; Michael Pfeiffer; Shih-Chii Liu; Steve B. Furber
We demonstrate an interactive handwritten digit recognition system with a spike-based deep belief network running in real-time on SpiNNaker, a biologically inspired many-core architecture. Results show that during the simulation a SpiNNaker chip can deliver spikes in under 1 μs, with a classification latency in the order of tens of milliseconds, while consuming less than 0.3 W.
International Symposium on Circuits and Systems | 2016
Ilya Kiselev; Daniel Neil; Shih-Chii Liu
This paper presents a real-time multi-modal spiking Deep Neural Network (DNN) implemented on an FPGA platform. The hardware DNN system, called n-Minitaur, demonstrates a 4-fold improvement in computational speed over the previous DNN FPGA system. The proposed system directly interfaces two different event-based sensors: a Dynamic Vision Sensor (DVS) and a Dynamic Audio Sensor (DAS). The DNN for this bimodal hardware system is trained on the MNIST digit dataset and a set of unique audio tones for each digit. When tested on the spikes produced by each sensor alone, the classification accuracy is around 70% for DVS spikes generated in response to displayed MNIST images, and 60% for DAS spikes generated in response to noisy tones. The accuracy increases to 98% when spikes from both modalities are provided simultaneously. In addition, the system shows a fast response latency of only 5 ms.
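The benefit of combining modalities can be illustrated with a simple late-fusion rule over per-class scores (the n-Minitaur system's actual fusion stage is not detailed here; the weighted-sum rule and weighting are assumptions for the sketch):

```python
def fuse_scores(vision, audio, w_v=0.5):
    """Late fusion sketch: weighted sum of per-class scores from the
    two sensor pathways, then classify by argmax."""
    fused = [w_v * v + (1.0 - w_v) * a for v, a in zip(vision, audio)]
    return max(range(len(fused)), key=fused.__getitem__)
```

When each pathway alone is noisy but their errors are uncorrelated, the summed evidence tends to favor the true class, which is consistent with the jump from ~60-70% unimodal accuracy to 98% bimodal accuracy reported above.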