Joni Dambre
Ghent University
Publications
Featured research published by Joni Dambre.
Monthly Notices of the Royal Astronomical Society | 2015
Sander Dieleman; Kyle W. Willett; Joni Dambre
Measuring the morphological parameters of galaxies is a key requirement for studying their formation and evolution. Surveys such as the Sloan Digital Sky Survey have resulted in the availability of very large collections of images, which have permitted population-wide analyses of galaxy morphology. Morphological analysis has traditionally been carried out mostly via visual inspection by trained experts, which is time-consuming and does not scale to large (≳10^4) numbers of images. Although attempts have been made to build automated classification systems, these have not been able to achieve the desired level of accuracy. The Galaxy Zoo project successfully applied a crowdsourcing strategy, inviting online users to classify images by answering a series of questions. Unfortunately, even this approach does not scale well enough to keep up with the increasing availability of galaxy images. We present a deep neural network model for galaxy morphology classification which exploits translational and rotational symmetry. It was developed in the context of the Galaxy Challenge, an international competition to build the best model for morphology classification based on annotated images from the Galaxy Zoo project. For images with high agreement among the Galaxy Zoo participants, our model is able to reproduce their consensus with near-perfect accuracy (>99 per cent) for most questions. Confident model predictions are highly accurate, which makes the model suitable for filtering large collections of images and forwarding challenging images to experts for manual annotation. This approach greatly reduces the experts' workload without affecting accuracy. The application of these algorithms to larger sets of training data will be critical for analysing results from future surveys such as the Large Synoptic Survey Telescope.
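The abstract above does not spell out the implementation; as a hedged illustration of how rotational and translational symmetry can be exploited at prediction time, the sketch below averages a classifier's outputs over rotated and mirrored views of a galaxy image. The predict_fn callable and the chosen angles are assumptions for the example, not the authors' model.

```python
import numpy as np
from scipy import ndimage

def symmetric_predict(image, predict_fn, angles=(0, 45, 90, 135, 180, 225, 270, 315)):
    """Average a classifier's outputs over rotated and mirrored views of an image.

    image      -- 2D (or HxWxC) numpy array containing the galaxy image
    predict_fn -- hypothetical callable mapping one view to a probability vector
    """
    views = []
    for angle in angles:
        rotated = ndimage.rotate(image, angle, reshape=False, mode="nearest")
        views.append(rotated)
        views.append(np.fliplr(rotated))            # add the mirrored view
    probs = np.stack([predict_fn(v) for v in views])
    return probs.mean(axis=0)                       # consensus over all symmetric views
```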
IEEE Transactions on Pattern Analysis and Machine Intelligence | 2016
Di Wu; Lionel Pigou; Pieter-Jan Kindermans; Nam Le; Ling Shao; Joni Dambre; Jean-Marc Odobez
This paper describes a novel method called Deep Dynamic Neural Networks (DDNN) for multimodal gesture recognition. A semi-supervised hierarchical dynamic framework based on a Hidden Markov Model (HMM) is proposed for simultaneous gesture segmentation and recognition, where skeleton joint information, depth and RGB images are the multimodal input observations. Unlike most traditional approaches that rely on the construction of complex handcrafted features, our approach learns high-level spatiotemporal representations using deep neural networks suited to the input modality: a Gaussian-Bernoulli Deep Belief Network (DBN) to handle skeletal dynamics, and a 3D Convolutional Neural Network (3DCNN) to manage and fuse batches of depth and RGB images. This is achieved through the modeling and learning of the emission probabilities of the HMM required to infer the gesture sequence. This purely data-driven approach achieves a Jaccard index score of 0.81 in the ChaLearn LAP gesture spotting challenge. The performance is on par with a variety of state-of-the-art hand-tuned feature-based approaches and other learning-based methods, thereby opening the door to the use of deep learning techniques for further exploring multimodal time series data.
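As a rough sketch of the hybrid idea described above, where a neural network supplies the per-frame emission scores that an HMM decodes into a gesture sequence, the snippet below runs standard Viterbi decoding over such scores. The arrays and their shapes are assumptions for illustration; this is not the authors' DDNN code.

```python
import numpy as np

def viterbi_decode(frame_scores, log_trans, log_prior):
    """Most likely HMM state sequence given per-frame emission scores.

    frame_scores -- (T, S) array of log emission scores from a neural network
    log_trans    -- (S, S) array of log transition probabilities
    log_prior    -- (S,) array of log initial-state probabilities
    """
    T, S = frame_scores.shape
    score = log_prior + frame_scores[0]
    backpointers = np.zeros((T, S), dtype=int)
    for t in range(1, T):
        cand = score[:, None] + log_trans          # indexed as (previous state, state)
        backpointers[t] = cand.argmax(axis=0)
        score = cand.max(axis=0) + frame_scores[t]
    path = [int(score.argmax())]                   # trace back the best path
    for t in range(T - 1, 0, -1):
        path.append(int(backpointers[t, path[-1]]))
    return path[::-1]
```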
IEEE Transactions on Neural Networks | 2011
Kristof Vandoorne; Joni Dambre; David Verstraeten; Benjamin Schrauwen; Peter Bienstman
Reservoir computing (RC), a computational paradigm inspired by neural systems, has become increasingly popular in recent years for solving a variety of complex recognition and classification problems. Thus far, most implementations have been software-based, limiting their speed and power efficiency. Integrated photonics offers the potential for a fast, power-efficient and massively parallel hardware implementation. We have previously proposed a network of coupled semiconductor optical amplifiers as an interesting test case for such a hardware implementation. In this paper, we investigate the important design parameters and the consequences of process variations through simulations. We use an isolated word recognition task with babble noise to evaluate the performance of the photonic reservoirs with respect to traditional software reservoir implementations, which are based on leaky hyperbolic tangent functions. Our results show that the use of coherent light in a well-tuned reservoir architecture offers significant performance benefits. The most important design parameters are the delay and the phase shift in the system's physical connections. With optimized values for these parameters, coherent semiconductor optical amplifier (SOA) reservoirs can achieve better results than traditional simulated reservoirs. We also show that process variations hardly degrade the performance, but amplifier noise can be detrimental. This effect must therefore be taken into account when designing SOA-based RC implementations.
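The software baseline mentioned here is a reservoir of leaky hyperbolic tangent neurons. A minimal echo-state-style sketch of such a reservoir is shown below; the sizes, leak rate and spectral radius are illustrative values, not the settings used in the paper.

```python
import numpy as np

def run_leaky_tanh_reservoir(inputs, n_res=200, leak=0.3, spectral_radius=0.9,
                             input_scaling=0.5, seed=0):
    """Drive a reservoir of leaky tanh neurons with an input sequence.

    inputs -- (T, n_in) array; returns the (T, n_res) reservoir state trajectory,
    on which a linear readout (e.g. ridge regression) would then be trained.
    """
    rng = np.random.default_rng(seed)
    n_in = inputs.shape[1]
    W_in = input_scaling * rng.uniform(-1, 1, size=(n_res, n_in))
    W = rng.uniform(-1, 1, size=(n_res, n_res))
    W *= spectral_radius / np.max(np.abs(np.linalg.eigvals(W)))   # rescale recurrent weights
    x = np.zeros(n_res)
    states = np.zeros((len(inputs), n_res))
    for t, u in enumerate(inputs):
        x = (1 - leak) * x + leak * np.tanh(W @ x + W_in @ u)     # leaky integration
        states[t] = x
    return states
```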
Optics Express | 2012
Thomas Van Vaerenbergh; Martin Fiers; Pauline Mechet; Thijs Spuesens; Rajesh Kumar; Geert Morthier; Benjamin Schrauwen; Joni Dambre; Peter Bienstman
To emulate a spiking neuron, a photonic component needs to be excitable. In this paper, we theoretically simulate and experimentally demonstrate cascadable excitability near a self-pulsation regime in high-Q-factor silicon-on-insulator microrings. For the theoretical study we use Coupled Mode Theory. While neglecting the fast energy and phase dynamics of the cavity light, we can still preserve the most important microring dynamics by keeping only the temperature difference with the surroundings and the amount of free carriers as dynamical variables of the system. We can therefore analyse the microring dynamics in a 2D phase portrait. For some wavelengths, when changing the input power, the microring undergoes a subcritical Andronov-Hopf bifurcation at the self-pulsation onset. As a consequence, the system shows class II excitability. Experimental single-ring excitability and self-pulsation behaviour follow the theoretical predictions. Moreover, simulations and experiments show that this excitation mechanism is cascadable.
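The reduced microring equations are not reproduced in this abstract. Purely as an illustration of analysing excitability in a two-variable phase portrait, the sketch below integrates a generic FitzHugh-Nagumo-type excitable system driven by a short input pulse; the equations, parameters and pulse are placeholders, not the microring model.

```python
import numpy as np
from scipy.integrate import solve_ivp

def excitable_2d(t, y, drive):
    """Generic two-variable excitable system (illustrative stand-in only)."""
    v, w = y
    dv = v - v**3 / 3 - w + drive(t)    # fast variable
    dw = 0.08 * (v + 0.7 - 0.8 * w)     # slow recovery variable
    return [dv, dw]

# A short supra-threshold pulse triggers one large excursion in the (v, w)
# phase plane before the system relaxes back to rest.
pulse = lambda t: 1.0 if 10.0 < t < 12.0 else 0.0
sol = solve_ivp(excitable_2d, (0.0, 100.0), [-1.2, -0.6], args=(pulse,), max_step=0.05)
print("peak of fast variable:", sol.y[0].max())
```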
Journal of the Optical Society of America B: Optical Physics | 2012
Martin Fiers; Thomas Van Vaerenbergh; Ken Caluwaerts; Dries Vande Ginste; Benjamin Schrauwen; Joni Dambre; Peter Bienstman
We present a tool that aids in the modeling of optical circuits, both in the frequency and in the time domain. The tool is based on the definition of a node, which can have both an instantaneous input-output relation and different state variables (e.g., temperature and carrier density) and differential equations for these states. Furthermore, each node has access to part of its input history, allowing the creation of delay lines or digital filters. Additionally, a node can contain subnodes, allowing the creation of hierarchical networks. This tool can be used in numerous applications such as frequency-domain analysis of optical ring filters, time-domain analysis of optical amplifiers, microdisks, and microcavities. Although we mainly use this tool to model optical circuits, it can also be used to model other classes of dynamical systems, such as electrical circuits and neural networks.
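To make the node abstraction concrete, the hypothetical sketch below combines the three ingredients named above: an instantaneous input-output relation, internal state variables with their own differential equations, and access to a window of input history. The class and method names are invented for the example and are not the published tool.

```python
from collections import deque
import numpy as np

class Node:
    """Circuit node with an instantaneous map, internal states and input history."""

    def __init__(self, n_states=0, history_len=1):
        self.state = np.zeros(n_states)            # e.g. temperature, carrier density
        self.history = deque(maxlen=max(history_len, 1))

    def state_derivatives(self, inputs, t):
        """Differential equations of the state variables (override per device)."""
        return np.zeros_like(self.state)

    def output(self, inputs, t):
        """Instantaneous input-output relation (override per device)."""
        return inputs

    def step(self, inputs, t, dt):
        self.history.append(inputs)                                       # keep recent inputs
        self.state = self.state + dt * self.state_derivatives(inputs, t)  # forward Euler update
        return self.output(inputs, t)

class DelayLine(Node):
    """Example subclass: returns the oldest sample in its history buffer."""
    def output(self, inputs, t):
        return self.history[0]
```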
International Journal of Computer Vision | 2018
Lionel Pigou; Aäron van den Oord; Sander Dieleman; Mieke Van Herreweghe; Joni Dambre
Recent studies have demonstrated the power of recurrent neural networks for machine translation, image captioning and speech recognition. For the task of capturing temporal structure in video, however, there still remain numerous open research questions. Current research suggests using a simple temporal feature pooling strategy to take into account the temporal aspect of video. We demonstrate that this method is not sufficient for gesture recognition, where temporal information is more discriminative compared to general video classification tasks. We explore deep architectures for gesture recognition in video and propose a new end-to-end trainable neural network architecture incorporating temporal convolutions and bidirectional recurrence. Our main contributions are twofold: first, we show that recurrence is crucial for this task; second, we show that adding temporal convolutions leads to significant improvements. We evaluate the different approaches on the Montalbano gesture recognition dataset, where we achieve state-of-the-art results.
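As a hedged sketch of the architectural idea of combining temporal convolutions with bidirectional recurrence (assuming per-frame features have already been extracted by a spatial network), the PyTorch module below stacks 1D convolutions over time followed by a bidirectional GRU and a per-frame classifier. Layer sizes and the choice of GRU are illustrative, not the paper's configuration.

```python
import torch
import torch.nn as nn

class TemporalConvBiRNN(nn.Module):
    """Temporal convolutions plus bidirectional recurrence for per-frame classification."""

    def __init__(self, feat_dim, n_classes, hidden=128):
        super().__init__()
        # Temporal convolutions over the feature sequence, laid out as (batch, feat, time).
        self.tconv = nn.Sequential(
            nn.Conv1d(feat_dim, hidden, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=3, padding=1), nn.ReLU(),
        )
        # Bidirectional recurrence over the convolved sequence.
        self.rnn = nn.GRU(hidden, hidden, batch_first=True, bidirectional=True)
        self.classifier = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):                      # x: (batch, time, feat_dim)
        h = self.tconv(x.transpose(1, 2))      # -> (batch, hidden, time)
        h, _ = self.rnn(h.transpose(1, 2))     # -> (batch, time, 2 * hidden)
        return self.classifier(h)              # per-frame class scores
```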
International Symposium on Neural Networks | 2010
David Verstraeten; Joni Dambre; Xavier Dutoit; Benjamin Schrauwen
Reservoir Computing (RC) is increasingly being used as a conceptually simple yet powerful method for harnessing the temporal processing capabilities of recurrent neural networks (RNNs). However, because fundamental insight into the exact functionality of the reservoir is still lacking, in practice optimizing these systems still involves a lot of manual parameter tweaking or brute-force searching. In this contribution we aim to enhance the insight into reservoir operation by experimentally studying the interplay of the two crucial reservoir properties, memory and non-linear mapping. For this, we introduce a novel metric which measures the deviation of the reservoir from a linear regime and use it to define different regions of dynamical behaviour. Next, we study the influence of two important reservoir parameters, input scaling and spectral radius, on two properties of an artificial task, namely memory and non-linearity.
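The abstract does not define the metric itself; one hedged way to quantify how far a driven reservoir deviates from a linear regime is to run the same input through the tanh reservoir and through its linearisation around the resting state, and compare the trajectories. The function below is an illustration of that idea, not necessarily the paper's metric.

```python
import numpy as np

def nonlinearity_score(inputs, W, W_in, leak=0.3):
    """Normalised deviation of a leaky-tanh reservoir from its linearisation.

    inputs -- (T, n_in) input sequence; W, W_in -- recurrent and input weight matrices.
    """
    x_nl = np.zeros(W.shape[0])
    x_lin = np.zeros(W.shape[0])
    diff, norm = 0.0, 0.0
    for u in inputs:
        x_nl = (1 - leak) * x_nl + leak * np.tanh(W @ x_nl + W_in @ u)   # non-linear update
        x_lin = (1 - leak) * x_lin + leak * (W @ x_lin + W_in @ u)       # linearised update
        diff += np.sum((x_nl - x_lin) ** 2)
        norm += np.sum(x_nl ** 2)
    return np.sqrt(diff / norm)   # 0 in a perfectly linear regime, grows with non-linearity
```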
IEEE Transactions on Very Large Scale Integration (VLSI) Systems | 2007
Ian O'Connor; Faress Tissafi-Drissi; F. Gaffiot; Joni Dambre; M. De Wilde; J. Van Campenhout; D. Van Thourhout; Dirk Stroobandt
Integrated optical interconnect has been identified by the ITRS as a potential solution to overcome predicted interconnect limitations in future systems-on-chip. However, the multiphysics nature of the design problem and the lack of a mature integrated photonic technology have contributed to severe difficulties in assessing its suitability. This paper describes a systematic, fully automated synthesis method for integrated microsource-based optical interconnect capable of optimally sizing the interface circuits based on system specifications, CMOS technology data, and optical device characteristics. The simulation-based nature of the design method means that its results are relatively accurate, even though the generation of each data point requires only 5 min on a 1.3-GHz processor. This method has been used to extract typical performance metrics (delay, power, interconnect density) for optical interconnect of length 2.5-20 mm in three predictive technologies at 65-, 45-, and 32-nm gate length.
IEEE Journal of Selected Topics in Quantum Electronics | 2006
I. Artundo; Lieven Desmet; Wim Heirman; C. Debaes; Joni Dambre; J. Van Campenhout; Hugo Thienpont
Recent advances in the development of optical interconnect technologies suggest the possible emergence of optical interconnects within distributed shared memory (DSM) machines in the near future. Moreover, current developments in wavelength-tunable devices could soon allow for the fabrication of low-cost, adaptable interconnection networks with varying switching times. The objective of this paper is to investigate whether such reconfigurable networks can boost the performance of DSM machines further. In this respect, we propose a system concept of a passive optical broadcasting component to be used as the scalable key element in such a reconfigurable network. We briefly discuss the necessary opto-electronic components and the limitations they impose on network performance. We show, through detailed full-system simulations of benchmark executions, that the proposed system architecture can provide a significant speedup for shared-memory machines, even when taking into account the limitations imposed by the opto-electronics and the optical broadcast component.
System-Level Interconnect Prediction | 2008
Wim Heirman; Joni Dambre; Dirk Stroobandt; Jan Van Campenhout
In VLSI systems, Rent's rule characterizes the locality of interconnect between different subsystems, and allows an efficient layout of the circuit on a chip. With rising complexities of both hardware and software, Systems-on-Chip are converging to multiprocessor architectures connected by a Network-on-Chip. Here, packets are routed instead of wires, and threads of a parallel program are distributed among processors. Still, Rent's rule remains applicable, as it can now be used to describe the locality of network traffic. In this paper, we analyze network traffic on an on-chip network and observe the power-law relation between the size of clusters of network nodes and their external bandwidths. We then use the same techniques to study the time-varying behavior of the application, and derive the implications for future on-chip networks.
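The power-law relation referred to here is Rent's rule, B = k * N^p, linking the external bandwidth B of a cluster of N network nodes to its size. The sketch below shows how the Rent exponent p could be estimated from measured (cluster size, external bandwidth) pairs with a log-log least-squares fit; the sample data are placeholders.

```python
import numpy as np

def fit_rent_exponent(cluster_sizes, external_bandwidths):
    """Fit B = k * N**p in log-log space and return (k, p)."""
    log_n = np.log(np.asarray(cluster_sizes, dtype=float))
    log_b = np.log(np.asarray(external_bandwidths, dtype=float))
    p, log_k = np.polyfit(log_n, log_b, 1)      # slope is the Rent exponent p
    return np.exp(log_k), p

# Placeholder measurements: cluster sizes and their external traffic.
sizes = [1, 2, 4, 8, 16, 32]
bandwidths = [1.0, 1.7, 2.8, 4.9, 8.2, 14.0]
k, p = fit_rent_exponent(sizes, bandwidths)
print("Rent exponent p = %.2f" % p)
```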