Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Diederik Paul Moeys is active.

Publication


Featured research published by Diederik Paul Moeys.


International Conference on Event-Based Control, Communication, and Signal Processing | 2016

Steering a predator robot using a mixed frame/event-driven convolutional neural network

Diederik Paul Moeys; Federico Corradi; Emmett Kerr; Philip Vance; Gautham P. Das; Daniel Neil; Dermot Kerr; Tobi Delbruck

This paper describes the application of a Convolutional Neural Network (CNN) in the context of a predator/prey scenario. The CNN is trained and run on data from a Dynamic and Active Pixel Sensor (DAVIS) mounted on a Summit XL robot (the predator), which follows another one (the prey). The CNN is driven by both conventional image frames and dynamic vision sensor "frames" that consist of a constant number of DAVIS ON and OFF events. The network is thus "data-driven" at a sample rate proportional to the scene activity, so the effective sample rate varies from 15 Hz to 240 Hz depending on the robot speeds. The network generates four outputs: steer right, steer left, center, and non-visible. After off-line training on labeled data, the network is deployed on board the Summit XL robot, which runs jAER and receives steering directions in real time. Successful closed-loop trials, with accuracies up to 87% or 92% (depending on the evaluation criterion), are reported. Although the proposed approach discards precise DAVIS event timing, it offers compatibility with conventional deep-learning technology without giving up data-driven computation.
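
As a rough illustration of the event-frame input described above, here is a minimal Python sketch (the event count and all names are illustrative assumptions, not taken from the paper) that accumulates a fixed number of DVS ON/OFF events into a 2D frame; because the frame fills at a rate proportional to scene activity, the effective sample rate is data-driven.

import numpy as np

def events_to_frame(events, sensor_shape=(180, 240), num_events=2000):
    """Build one event frame from the next num_events DVS events.

    events is assumed to be an iterable of (x, y, polarity) tuples with
    polarity in {+1, -1} (ON/OFF).
    """
    frame = np.zeros(sensor_shape, dtype=np.float32)
    for i, (x, y, pol) in enumerate(events):
        if i >= num_events:
            break
        frame[y, x] += pol          # ON events add, OFF events subtract
    # Normalize so frames are comparable as CNN inputs.
    peak = np.max(np.abs(frame))
    return frame / peak if peak > 0 else frame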


International Symposium on Circuits and Systems | 2016

Combined frame- and event-based detection and tracking

Hongjie Liu; Diederik Paul Moeys; Gautham P. Das; Daniel Neil; Shih-Chii Liu; Tobi Delbruck

This paper reports an object-tracking algorithm for a moving platform using the dynamic and active-pixel vision sensor (DAVIS). It takes advantage of both the active pixel sensor (APS) frame and dynamic vision sensor (DVS) event outputs from the DAVIS. The tracking is performed in three steps: regions of interest (ROIs) are generated by cluster-based tracking of the DVS output, likely target locations are detected by a convolutional neural network (CNN) that classifies the ROIs in the APS output as foreground or background, and finally a particle filter infers the target location from the ROIs. Performing convolutions only within the ROIs boosts the speed by a factor of 70 compared with full-frame convolutions on the 240×180 DAVIS input. The tracking accuracy on a predator-and-prey robot database reaches 90% at a cost of less than 20 ms/frame in MATLAB on an ordinary PC without a GPU.
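
A minimal sketch of the ROI-restricted classification step follows, with placeholder names (classify_rois, cnn_forward) and an illustrative ROI size that are assumptions, not the paper's API: restricting the CNN to DVS-generated ROIs rather than the full 240×180 frame is where the reported factor-of-70 speedup comes from.

import numpy as np

def classify_rois(aps_frame, rois, cnn_forward, roi_size=36):
    """Score each ROI crop of the APS frame as foreground or background.

    rois are (cx, cy) cluster centers from the DVS-based tracker;
    cnn_forward is any function mapping a roi_size x roi_size patch to a
    foreground probability.
    """
    half = roi_size // 2
    scores = []
    for cx, cy in rois:
        x0, y0 = max(cx - half, 0), max(cy - half, 0)
        patch = aps_frame[y0:y0 + roi_size, x0:x0 + roi_size]
        # Zero-pad edge patches so the CNN always sees a fixed input size.
        patch = np.pad(patch, ((0, roi_size - patch.shape[0]),
                               (0, roi_size - patch.shape[1])))
        scores.append(cnn_forward(patch))
    return scores  # a particle filter would then fuse these scores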


International Symposium on Circuits and Systems | 2016

Retinal ganglion cell software and FPGA model implementation for object detection and tracking

Diederik Paul Moeys; Tobias Delbrück; Antonio Rios-Navarro; Alejandro Linares-Barranco

This paper describes the software and FPGA implementation of a Retinal Ganglion Cell (RGC) model that detects moving objects. It is shown how this processing, with a Dynamic Vision Sensor as its input, can be used to extract information about object position. In software, a system based on an array of these RGCs has been developed that provides up to two trackers. These can track objects in a scene viewed by a stationary observer and are inhibited when saccadic camera motion occurs. The entire processing takes on average 1000 ns/event. A simplified version of this mechanism, with a mean latency of 330 ns/event at 50 MHz, has also been implemented on a Spartan-6 FPGA.
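
The sketch below is a hypothetical software analogue of the RGC array just described: each cell leakily integrates the DVS events falling in its receptive field, the tracker follows the most active cell, and the output is inhibited when activity spreads over many cells at once, as during a saccade. Grid size, decay, and threshold are illustrative values, not the paper's.

import numpy as np

class RGCArray:
    def __init__(self, grid=(12, 16), rf_px=15, decay=0.999,
                 saccade_thresh=0.5):
        self.act = np.zeros(grid, dtype=np.float32)  # per-cell activity
        self.rf_px = rf_px                           # receptive-field size (px)
        self.decay = decay
        self.saccade_thresh = saccade_thresh

    def on_event(self, x, y):
        self.act *= self.decay                       # leaky integration
        self.act[y // self.rf_px, x // self.rf_px] += 1.0

    def tracker(self):
        peak = float(np.max(self.act))
        if peak == 0.0:
            return None  # no activity yet
        # Inhibit output when activity is spread over many cells (saccade).
        if np.mean(self.act) > self.saccade_thresh * peak:
            return None
        gy, gx = np.unravel_index(int(np.argmax(self.act)), self.act.shape)
        return (gx * self.rf_px + self.rf_px // 2,
                gy * self.rf_px + self.rf_px // 2)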


Entropy | 2018

Approaching Retinal Ganglion Cell Modeling and FPGA Implementation for Robotics

Alejandro Linares-Barranco; Hongjie Liu; Antonio Rios-Navarro; Francisco Gomez-Rodriguez; Diederik Paul Moeys; Tobi Delbruck

Taking inspiration from biology to solve engineering problems using the organizing principles of biological neural computation is the aim of the field of neuromorphic engineering. This field has demonstrated success in sensor-based applications (vision and audition) as well as in cognition and actuation. This paper focuses on mimicking the approach-detection functionality of the retina, computed by one type of Retinal Ganglion Cell (RGC), and its application to robotics. These RGCs transmit action potentials when an expanding object is detected. In this work we compare software and FPGA hardware-logic implementations of this approach-detection function and measure the hardware latency when applied to robots as an attention/reaction mechanism. The visual input for these cells comes from an asynchronous event-driven Dynamic Vision Sensor, which leads to an end-to-end event-based processing system. The software model has been developed in Java and runs with an average processing time of 370 ns per event on a NUC embedded computer. The output firing rate for an approaching object depends on the cell parameters that represent the number of input events needed to reach the firing threshold. For the hardware implementation on a Spartan-6 FPGA, the processing time is reduced to 160 ns/event with the clock running at 50 MHz. The entropy of the response has been calculated to demonstrate that, because of several bio-inspired characteristics, the system is not totally deterministic in its response to approaching objects. A Summit XL mobile robot was measured to react to an approaching object in 90 ms, which can be used as an attentional mechanism. This is faster than similar event-based approaches in robotics and comparable to human reaction latencies to visual stimuli.
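
As a sketch of the entropy measurement mentioned above (the exact procedure is not spelled out here, so this is an assumption): compute the Shannon entropy of the empirical distribution of output spike counts over repeated presentations of the same approaching stimulus. A strictly deterministic system would always produce the same count and yield 0 bits.

import numpy as np

def response_entropy(spike_counts):
    """Shannon entropy (bits) of the empirical spike-count distribution."""
    _, freqs = np.unique(spike_counts, return_counts=True)
    p = freqs / freqs.sum()
    return float(-np.sum(p * np.log2(p)))

# Identical counts -> 0 bits; varying counts -> positive entropy.
print(response_entropy([42, 42, 42, 42]))      # 0.0
print(response_entropy([40, 42, 42, 44, 45]))  # ~1.92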


International Conference on Artificial Neural Networks | 2017

Neuromorphic Approach Sensitivity Cell Modeling and FPGA Implementation

Hongjie Liu; Antonio Rios-Navarro; Diederik Paul Moeys; Tobi Delbruck; Alejandro Linares-Barranco

Neuromorphic engineering takes inspiration from biology to solve engineering problems using the organizing principles of biological neural computation. This field has demonstrated success in sensor-based applications (vision and audition) as well as in cognition and actuation. This paper focuses on mimicking an interesting functionality of the retina that is computed by one type of Retinal Ganglion Cell (RGC): the early detection of approaching (expanding) dark objects. It presents software and FPGA hardware-logic implementations of this approach-sensitivity cell, which can be used in later cognition layers as an attention mechanism. The input of this hardware-modeled cell comes from an asynchronous spiking Dynamic Vision Sensor, which leads to an end-to-end event-based processing system. The software model has been developed in Java and runs with an average processing time of 370 ns per event on a NUC embedded computer. The output firing rate for an approaching object depends on the cell parameters that represent the number of input events needed to reach the firing threshold. For the hardware implementation on a Spartan-6 FPGA, the processing time is reduced to 160 ns/event with the clock running at 50 MHz.
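
A minimal event-driven sketch of the approach-sensitivity cell, under the assumption that OFF (darkening) events excite the cell while ON events inhibit it; the fire_threshold parameter plays the role of the "number of input events needed to reach the firing threshold". The spatial subunit structure of the real model is omitted, and all constants are illustrative.

def approach_cell(events, fire_threshold=100, on_weight=-1.0,
                  off_weight=1.0, leak=0.01):
    """Yield True per event when the cell crosses its firing threshold.

    events is assumed to be an iterable of (x, y, polarity) tuples with
    polarity +1 for ON and -1 for OFF brightness changes.
    """
    v = 0.0  # membrane potential, in units of weighted input events
    for x, y, pol in events:
        v -= leak * v                                 # passive decay
        v += off_weight if pol < 0 else on_weight     # OFF excites, ON inhibits
        if v >= fire_threshold:
            v = 0.0       # reset after an output spike
            yield True    # cell signals an approaching dark object
        else:
            yield False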


International Symposium on Circuits and Systems | 2016

Live demonstration: Retinal ganglion cell software and FPGA implementation for object detection and tracking

Diederik Paul Moeys; Tobias Delbrück; Antonio Rios-Navarro; Alejandro Linares-Barranco

This demonstration shows how object detection and tracking are possible thanks to a new implementation that takes inspiration from the visual processing of a particular type of retinal ganglion cell.


International Symposium on Circuits and Systems | 2015

Current-mode automated quality control cochlear resonator for bird identity tagging

Diederik Paul Moeys; Tobias Delbrück; Shih-Chii Liu

This paper describes a VLSI automatic quality-control pitch-detector circuit that can be used to identify an individual bird. The detector is based on a previous VLSI model of the local gain-control mechanism of the outer hair cells of the biological cochlea. This work presents characterization results from a 20-channel chip fabricated in a 4-metal 2-poly 0.35 μm CMOS technology, with an estimated dynamic range of 70 dB, a power consumption of 825 nW per channel, a frequency range covering 0.4-10 kHz, and a mean Q of 6.31. Results are shown for a pitch-detection experiment with a tuned resonator.
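
For intuition about a single resonator channel, here is a hypothetical discrete-time analogue (not the current-mode VLSI circuit itself): a two-pole band-pass filter whose pole radius is set by the quality factor Q, plus a toy pitch detector that picks the most energetic channel of a log-spaced 0.4-10 kHz bank. Sample rate, channel count, and normalization are assumptions.

import numpy as np

def resonator_channel(x, f0, fs=44100.0, q=6.31):
    """Filter signal x through a two-pole resonator centered at f0 Hz."""
    w0 = 2.0 * np.pi * f0 / fs
    r = 1.0 - w0 / (2.0 * q)   # pole radius from Q (narrow-band approximation)
    b0 = 1.0 - r               # rough gain normalization
    y = np.zeros(len(x), dtype=np.float64)
    for n in range(len(x)):
        y[n] = (b0 * x[n]
                + 2.0 * r * np.cos(w0) * (y[n - 1] if n >= 1 else 0.0)
                - r * r * (y[n - 2] if n >= 2 else 0.0))
    return y

def detect_pitch(x, fs=44100.0, n_channels=20):
    """Return the center frequency of the most energetic channel."""
    freqs = np.geomspace(400.0, 10000.0, n_channels)
    energies = [np.sum(resonator_channel(x, f, fs) ** 2) for f in freqs]
    return freqs[int(np.argmax(energies))]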


International Symposium on Circuits and Systems | 2018

Live Demonstration: Front and Back Illuminated Dynamic and Active Pixel Vision Sensors Comparison

Gemma Taverni; Diederik Paul Moeys; Chenghan Li; Tobi Delbruck; Celso Cavaco; Vasyl Motsnyi; David San Segundo Bello


arXiv: Computer Vision and Pattern Recognition | 2018

PRED18: Dataset and Further Experiments with DAVIS Event Camera in Predator-Prey Robot Chasing

Diederik Paul Moeys; Daniel Neil; Federico Corradi; Emmett Kerr; Philip Vance; Gautham P. Das; Sonya A. Coleman; Tm McGinnity; Dermot Kerr; Tobi Delbruck


IEEE Transactions on Circuits and Systems II: Express Briefs | 2018

Front and Back Illuminated Dynamic and Active Pixel Vision Sensors Comparison

Gemma Taverni; Diederik Paul Moeys; Chenghan Li; Celso Cavaco; Vasyl Motsnyi; David San Segundo Bello; Tobi Delbruck

Collaboration


Dive into Diederik Paul Moeys's collaborations.

Top Co-Authors

David San Segundo Bello (Katholieke Universiteit Leuven)
Vasyl Motsnyi (Katholieke Universiteit Leuven)
Celso Cavaco (Katholieke Universiteit Leuven)