Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Enea Ceolini is active.

Publication


Featured research published by Enea Ceolini.


International Conference on Acoustics, Speech, and Signal Processing (ICASSP) | 2017

Impact of low-precision deep regression networks on single-channel source separation

Enea Ceolini; Shih-Chii Liu

Recent work on training methods for reduced-precision Deep Convolutional Networks shows that these networks can perform with accuracy similar to full-precision networks when tested on a classification task. Reduced-precision networks decrease the demands on the memory and computational power of the computing platform. This paper investigates the impact of reduced-precision deep Recurrent Neural Networks (RNNs) trained on a regression task, in this case a monaural source separation task. The effect of reduced precision is explored for two popular recurrent architectures: vanilla RNNs and RNNs using Long Short-Term Memory (LSTM) units. The results show that the performance of the networks, as measured by blind source separation metrics and speech intelligibility tests on two datasets, decreases very little even when the weight precision goes down to 4 bits.
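
The paper's training scheme is not reproduced here, but the core idea can be illustrated with a minimal sketch of uniform k-bit weight quantization applied to an LSTM-sized weight matrix; the function name, matrix shape, and bit width below are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (assumption): uniform k-bit quantization of a recurrent
# weight matrix, of the kind evaluated in reduced-precision RNN studies.
import numpy as np

def quantize_weights(w, num_bits=4):
    """Uniformly quantize a weight matrix to num_bits while keeping its dynamic range."""
    levels = 2 ** num_bits - 1                 # e.g. 15 distinct steps for 4 bits
    w_max = np.max(np.abs(w)) + 1e-12          # symmetric range around zero
    step = 2 * w_max / levels
    return np.round(w / step) * step           # snap each weight onto the grid

# Toy example: quantize the recurrent weights of a hypothetical LSTM layer
# (128 units, 4 gates -> 512 columns) and inspect the quantization error.
rng = np.random.default_rng(0)
w_rec = rng.normal(scale=0.1, size=(128, 512))
w_rec_q4 = quantize_weights(w_rec, num_bits=4)
print("mean absolute quantization error:", np.mean(np.abs(w_rec - w_rec_q4)))
```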


bioRxiv | 2018

A Comparison of Temporal Response Function Estimation Methods for Auditory Attention Decoding

Daniel D. E. Wong; Søren A. Fuglsang; Jens Hjortkjær; Enea Ceolini; Malcolm Slaney; Alain de Cheveigné

The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on temporal response functions (TRFs). In the current context, a TRF is a function that facilitates a mapping between features of sound streams and EEG responses. It has been shown that when the envelope of attended speech and EEG responses are used to derive TRF mapping functions, the TRF model predictions can be used to discriminate between attended and unattended talkers. However, the predictive performance of the TRF models is dependent on how the TRF model parameters are estimated. There exist a number of TRF estimation methods that have been published, along with a variety of datasets. It is currently unclear if any of these methods perform better than others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different TRF estimation methods to classify attended speakers from multi-channel EEG data. The performance of the TRF estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams.
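
One common TRF estimation route, ridge regression on a time-lagged stimulus matrix, can be sketched as follows; the data are synthetic and the lag count and regularization value are illustrative assumptions, not the settings compared in the paper.

```python
# Minimal sketch (assumption): forward TRF estimation by regularized least
# squares, mapping a speech envelope to one EEG channel via time lags.
import numpy as np

def lagged_matrix(envelope, n_lags):
    """Stack time-lagged copies of the speech envelope as regression features."""
    n = len(envelope)
    X = np.zeros((n, n_lags))
    for lag in range(n_lags):
        X[lag:, lag] = envelope[:n - lag]
    return X

def estimate_trf(envelope, eeg_channel, n_lags=32, ridge=1.0):
    """Solve the regularized normal equations for the TRF kernel."""
    X = lagged_matrix(envelope, n_lags)
    XtX = X.T @ X + ridge * np.eye(n_lags)
    return np.linalg.solve(XtX, X.T @ eeg_channel)

# Synthetic check: EEG built from a known kernel should be recovered approximately.
rng = np.random.default_rng(1)
env = rng.random(2000)
true_trf = np.hanning(32)
eeg = lagged_matrix(env, 32) @ true_trf + 0.1 * rng.standard_normal(2000)
trf_hat = estimate_trf(env, eeg)
print("correlation with true kernel:", np.corrcoef(true_trf, trf_hat)[0, 1])
```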


Frontiers in Neuroscience | 2018

A Comparison of Regularization Methods in Forward and Backward Models for Auditory Attention Decoding

Daniel D. E. Wong; Søren A. Fuglsang; Jens Hjortkjær; Enea Ceolini; Malcolm Slaney; Alain de Cheveigné

The decoding of selective auditory attention from noninvasive electroencephalogram (EEG) data is of interest in brain computer interface and auditory perception research. The current state-of-the-art approaches for decoding the attentional selection of listeners are based on linear mappings between features of sound streams and EEG responses (forward model), or vice versa (backward model). It has been shown that when the envelope of attended speech and EEG responses are used to derive such mapping functions, the model estimates can be used to discriminate between attended and unattended talkers. However, the predictive/reconstructive performance of the models is dependent on how the model parameters are estimated. There exist a number of model estimation methods that have been published, along with a variety of datasets. It is currently unclear if any of these methods perform better than others, as they have not yet been compared side by side on a single standardized dataset in a controlled fashion. Here, we present a comparative study of the ability of different estimation methods to classify attended speakers from multi-channel EEG data. The performance of the model estimation methods is evaluated using different performance metrics on a set of labeled EEG data from 18 subjects listening to mixtures of two speech streams. We find that when forward models predict the EEG from the attended audio, regularized models do not improve regression or classification accuracies. When backward models decode the attended speech from the EEG, regularization provides higher regression and classification accuracies.
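
The attended-talker decision rule used with a backward model can be sketched as follows: reconstruct a speech envelope from the multi-channel EEG and pick whichever talker's envelope correlates more with the reconstruction. The decoder weights and signals below are random placeholders, not a trained model from the study.

```python
# Minimal sketch (assumption): classification step for a backward model.
import numpy as np

def classify_attention(eeg, decoder, env_a, env_b):
    """Return 'A' or 'B' depending on which envelope the EEG reconstruction matches."""
    reconstruction = eeg @ decoder                    # (time, channels) @ (channels,)
    r_a = np.corrcoef(reconstruction, env_a)[0, 1]
    r_b = np.corrcoef(reconstruction, env_b)[0, 1]
    return "A" if r_a > r_b else "B"

rng = np.random.default_rng(2)
eeg = rng.standard_normal((1000, 64))                 # one 64-channel EEG segment
decoder = rng.standard_normal(64)                     # placeholder backward-model weights
env_a, env_b = rng.random(1000), rng.random(1000)     # the two talkers' envelopes
print("decoded attended talker:", classify_attention(eeg, decoder, env_a, env_b))
```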


Local Computer Networks (LCN) | 2017

WHISPER: Wirelessly Synchronized Distributed Audio Sensor Platform

Ilya Kiselev; Enea Ceolini; Daniel D. E. Wong; Alain de Cheveigné; Shih-Chii Liu

This paper describes a distributed wireless acoustic sensor network (WASN) platform called WHISPER that is capable of synchronous multichannel sampling at different spatial locations with a sampling clock whose relative jitter is less than 300 ns. The platform comprises up to four data acquisition modules with onboard computing capabilities that can form an ad-hoc Wi-Fi network, allowing an additional processing module such as a laptop or a smartphone to be connected if needed. Each acquisition module holds four digital SPI microphones, for a total of 16 microphones in the entire system. Wireless synchronization of the sampling on each platform is implemented using a separate wireless module operating in the 902-928 MHz ISM band. Usage of this system is demonstrated in a real-time application involving spatial sound filtering through a beamforming algorithm.
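
As a rough illustration of the kind of spatial filtering mentioned above, the sketch below applies delay-and-sum beamforming to synchronized microphone channels; the sample rate, per-microphone delays, and signals are toy assumptions and do not reflect the WHISPER hardware or the beamforming algorithm used in the demo.

```python
# Minimal sketch (assumption): delay-and-sum beamforming over 16 channels.
import numpy as np

def delay_and_sum(signals, delays_samples):
    """Align each channel by its integer sample delay, then average across channels."""
    n_ch, n = signals.shape
    out = np.zeros(n)
    for ch in range(n_ch):
        out += np.roll(signals[ch], -delays_samples[ch])
    return out / n_ch

fs = 16000                                    # assumed sample rate in Hz
rng = np.random.default_rng(3)
source = rng.standard_normal(fs)              # 1 s of a broadband source signal
delays = np.arange(16) * 2                    # toy per-microphone delays in samples
mics = np.stack([np.roll(source, d) for d in delays]) + 0.05 * rng.standard_normal((16, fs))
enhanced = delay_and_sum(mics, delays)
print("correlation of beamformed output with source:", np.corrcoef(enhanced, source)[0, 1])
```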


International Conference on Event-Based Control, Communication, and Signal Processing (EBCCSP) | 2016

Temporal sequence recognition in a self-organizing recurrent network

Enea Ceolini; Daniel Neil; Tobi Delbruck; Shih-Chii Liu

A big challenge of reservoir-based Recurrent Neural Networks (RNNs) is the optimization of the connection weights within the network so that the network performance is optimal for the intended task of temporal sequence recognition. One particular RNN, the Self-Organizing Recurrent Network (SORN), avoids the mathematical normalization required after each initialization. Instead, three types of cortical plasticity mechanisms optimize the weights within the network during the initial part of the training. The success of this unsupervised training method was demonstrated on temporal sequences that use input symbols with a binary encoding and that activate only one input pool in each time step. This work extends the analysis to different types of symbol encoding, including methods that activate multiple input pools and encoding levels that are not strictly binary but analog in nature. Preliminary results show that the SORN model classifies temporal sequences well with symbols using these encoding methods, and that the advantage of this network over a static network in a classification task is retained.
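
The input-encoding idea described above (symbols that drive several input pools at graded, analog levels rather than a single binary pool) can be sketched as follows; the codebook, pool sizes, and activation levels are illustrative assumptions, not the encodings studied in the paper.

```python
# Minimal sketch (assumption): building the per-time-step input drive when a
# symbol activates multiple input pools with analog levels.
import numpy as np

def encode_symbol(symbol, codebook, n_pools, pool_size):
    """Return the input drive vector for one time step from a symbol's pool activations."""
    drive = np.zeros(n_pools * pool_size)
    for pool_idx, level in codebook[symbol]:      # (pool index, analog level) pairs
        start = pool_idx * pool_size
        drive[start:start + pool_size] = level
    return drive

# Example codebook: symbol 'a' drives pools 0 and 2 at different strengths,
# symbol 'b' drives pool 1 only, at half strength.
codebook = {"a": [(0, 1.0), (2, 0.4)], "b": [(1, 0.5)]}
for sym in "ab":
    print(sym, encode_symbol(sym, codebook, n_pools=3, pool_size=4))
```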


International Conference on Learning Representations (ICLR) | 2018

Towards better understanding of gradient-based attribution methods for Deep Neural Networks

Marco Ancona; Enea Ceolini; Cengiz Oztireli; Markus H. Gross


Field-Programmable Gate Arrays (FPGA) | 2018

DeltaRNN: A Power-efficient Recurrent Neural Network Accelerator

Chang Gao; Daniel Neil; Enea Ceolini; Shih-Chii Liu; Tobias Delbrück


arXiv: Learning | 2018

Sensor Transformation Attention Networks

Stefan Braun; Daniel Neil; Enea Ceolini; Jithendar Anumula; Shih-Chii Liu


International Symposium on Circuits and Systems (ISCAS) | 2018

An event-driven probabilistic model of sound source localization using cochlea spikes

Jithendar Anumula; Enea Ceolini; Zhe He; Adrian E. G. Huber; Shih-Chii Liu


Conference of the International Speech Communication Association (INTERSPEECH) | 2018

Speaker Activity Detection and Minimum Variance Beamforming for Source Separation.

Enea Ceolini; Jithendar Anumula; Adrian E. G. Huber; Ilya Kiselev; Shih-Chii Liu

Collaboration


Dive into Enea Ceolini's collaborations.

Top Co-Authors

Daniel D. E. Wong (École Normale Supérieure)
Søren A. Fuglsang (Technical University of Denmark)