Guillaume Garreau
University of Cyprus
Publications
Featured research published by Guillaume Garreau.
Conference on Information Sciences and Systems | 2015
Kayode Sanni; Guillaume Garreau; Jamal Lottier Molin; Andreas G. Andreou
Deep Neural Networks (DNNs) have proven very effective for classification and generative tasks, and are widely adopted in a variety of fields including vision, robotics, speech processing, and more. Specifically, Deep Belief Networks (DBNs), graphical models constructed of multiple layers of nodes connected as Markov random fields, have been successfully applied to such tasks. However, because of the numerous connections between nodes, DBNs suffer the drawback of being computationally intensive. In this work, we exploit an alternative approach based on computation with probabilistic unary streams to design a more efficient deep neural network architecture for classification.
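As an illustration of the unary-stream idea the paper builds on (a minimal sketch, not the authors' implementation; the stream length and test values below are arbitrary), multiplying two probabilities reduces to a bitwise AND of independent Bernoulli bit streams, replacing multiply-accumulate hardware with simple logic:

import numpy as np

rng = np.random.default_rng(0)

def unary_stream(p, length=8192):
    # Encode a probability p in [0, 1] as a Bernoulli bit stream
    # whose average value equals p.
    return rng.random(length) < p

# For independent streams, a bitwise AND multiplies the encoded values.
a, b = unary_stream(0.6), unary_stream(0.5)
print((a & b).mean())  # approximately 0.6 * 0.5 = 0.3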
Human Behavior Understanding | 2011
Salvador Dura-Bernal; Guillaume Garreau; Charalambos M. Andreou; Andreas G. Andreou; Julius Georgiou; Thomas Wennekers; Susan L. Denham
The spectrotemporal representation of an ultrasonic wave reflected by an object contains frequency shifts corresponding to the velocities of the object's moving parts, also known as the micro-Doppler signature. The present study describes how the micro-Doppler signatures of human subjects, collected in two experiments, can be used to categorize the action performed by the subject. The proposed method segments the spectrogram into temporal events, learns prototypes, and categorizes the events using a nearest-neighbour approach. Results show an average accuracy above 95%, with some categories reaching 100%, and strong robustness to variations in the model parameters. The low computational cost of the system, together with its high accuracy even for short inputs, makes it appropriate for real-time implementation, with applications to intelligent surveillance, monitoring and related disciplines.
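A minimal sketch of this pipeline (not the authors' code; the window sizes, event length and distance metric are assumptions for illustration):

import numpy as np
from scipy.signal import spectrogram

def doppler_events(echo, fs, frames_per_event=32):
    # Log-magnitude spectrogram of the sonar return; columns are time frames.
    _, _, S = spectrogram(echo, fs=fs, nperseg=256, noverlap=192)
    S = np.log1p(S)
    # Segment the frame sequence into fixed-length temporal events.
    n = S.shape[1] // frames_per_event
    return [S[:, i * frames_per_event:(i + 1) * frames_per_event].ravel()
            for i in range(n)]

def classify(event, prototypes, labels):
    # Nearest-neighbour vote against the learned prototype events.
    dists = [np.linalg.norm(event - p) for p in prototypes]
    return labels[int(np.argmin(dists))]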
Conference on Information Sciences and Systems | 2011
Julius Georgiou; Philippe O. Pouliquen; Andrew S. Cassidy; Guillaume Garreau; Charalambos M. Andreou; Guillermo Stuarts; Cyrille d'Urbal; Andreas G. Andreou; Susan L. Denham; Thomas Wennekers; Robert Mill; István Winkler; Tamás Bohm; Orsolya Szalárdy; Georg M. Klump; Simon J. Jones; Alexandra Bendixen
We report on the design and the collection of a multi-modal data corpus for cognitive acoustic scene analysis. Sounds are generated by stationary and moving sources (people), that is, by omni-directional speakers mounted on people's heads. One or two subjects walk along predetermined systematic and random paths, in synchrony and out of sync. Sound is captured by multiple microphone systems, including a directional array of four MEMS microphones, two electret microphones situated in the ears of a stuffed gerbil head, and a Head Acoustics head-and-shoulders unit with ICP microphones. Three micro-Doppler units operating at different frequencies were employed to capture the gait and articulatory signatures as well as the locations of the people in the scene. Three ground vibration sensors recorded the footsteps of the walking people. A 3D MESA camera and a web-cam provided 2D and 3D visual data for system calibration and ground truth. Data were collected in three environments: a well-controlled environment (anechoic chamber), an indoor environment (large classroom), and the natural environment of an outdoor courtyard. A software tool has been developed for browsing and visualizing the data.
Conference on Information Sciences and Systems | 2011
Guillaume Garreau; Nicoletta Nicolaou; Charalambos M. Andreou; Cyrille d'Urbal; Guillermo Stuarts; Julius Georgiou
In this paper we present a micro-Doppler (mD) system and a computationally efficient classifier for distinguishing different means of human transport (pedestrians, inline skaters and cyclists) based on their mD time-frequency signatures. Accuracies as high as 97% are obtained while keeping the overall computational cost low.
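The abstract does not spell out the feature set; as a hedged sketch of what a computationally cheap micro-Doppler feature vector could look like (the carrier split and Doppler-spread statistics below are illustrative assumptions, not the paper's features):

import numpy as np

def cheap_features(S, f, f0):
    # S: spectrogram (freq x time) of the sonar return; f: frequency bins;
    # f0: carrier frequency of the micro-Doppler unit.
    upper = S[f > f0].sum(axis=0)   # energy from parts moving toward the sensor
    lower = S[f < f0].sum(axis=0)   # energy from parts moving away
    spread = (S * np.abs(f - f0)[:, None]).sum(axis=0) / (S.sum(axis=0) + 1e-12)
    # A few summary statistics over time form a tiny, fast feature vector.
    return np.array([upper.mean(), lower.mean(), spread.mean(), spread.std()])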
International Journal of Neural Systems | 2013
Salvador Dura-Bernal; Guillaume Garreau; Julius Georgiou; Andreas G. Andreou; Susan L. Denham; Thomas Wennekers
The ability to recognize the behavior of individuals is of great interest in the general field of safety (e.g. building security, crowd control, transport analysis, independent living for the elderly). Here we report a new real-time acoustic system for human action and behavior recognition that integrates passive audio and active micro-Doppler sonar signatures over multiple time scales. The system architecture is based on a six-layer convolutional neural network, trained and evaluated using a dataset of 10 subjects performing seven different behaviors. Probabilistic combination of system output through time for each modality separately yields 94% (passive audio) and 91% (micro-Doppler sonar) correct behavior classification; probabilistic multimodal integration increases classification performance to 98%. This study supports the efficacy of micro-Doppler sonar systems in characterizing human actions, which can then be efficiently classified using ConvNets. It also demonstrates that the integration of multiple sources of acoustic information can significantly improve the system's performance.
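One plausible reading of the probabilistic combination step (a sketch under naive-Bayes independence assumptions, not the published implementation):

import numpy as np

def combine_over_time(frame_probs):
    # frame_probs: (T, C) per-frame class posteriors from one modality.
    # Summing log-probabilities accumulates evidence over time.
    log_p = np.log(frame_probs + 1e-12).sum(axis=0)
    p = np.exp(log_p - log_p.max())
    return p / p.sum()

def fuse_modalities(p_audio, p_sonar):
    # Multiply the per-modality posteriors and renormalise.
    p = p_audio * p_sonar
    return p / p.sum()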
Frontiers in Neuroscience | 2014
L. B. Shestopalova; Tamás M. Bőhm; Alexandra Bendixen; Andreas G. Andreou; Julius Georgiou; Guillaume Garreau; Botond Hajdu; Susan L. Denham; István Winkler
An audio-visual experiment using moving sound sources was designed to investigate whether the analysis of auditory scenes is modulated by synchronous presentation of visual information. Listeners were presented with an alternating sequence of two pure tones delivered by two separate sound sources. In different conditions, the two sound sources were either stationary or moving on random trajectories around the listener. Both the sounds and the movement trajectories were derived from recordings in which two humans were moving with loudspeakers attached to their heads. Visualized movement trajectories modeled by a computer animation were presented together with the sounds. In the main experiment, behavioral reports on sound organization were collected from young healthy volunteers. The proportion and stability of the different sound organizations were compared between the conditions in which the visualized trajectories matched the movement of the sound sources and when the two were independent of each other. The results corroborate earlier findings that separation of sound sources in space promotes segregation. However, no additional effect of auditory movement per se on the perceptual organization of sounds was obtained. Surprisingly, the presentation of movement-congruent visual cues did not strengthen the effects of spatial separation on segregating auditory streams. Our findings are consistent with the view that bistability in the auditory modality can occur independently from other modalities.
Conference on Information Sciences and Systems | 2011
Philippe O. Pouliquen; Andrew S. Cassidy; Andreas G. Andreou; Guillaume Garreau; Julius Georgiou
We present the design of a distributed data acquisition system for passive and active multi-modal sensing that is capable of synchronized signal sampling within ±5 microseconds. The system is employed in a “wireless cortex” architecture for multi-modal cognitive scene analysis.
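To make the synchronization figure concrete (a hypothetical software check, not the hardware mechanism the paper describes): nodes that sample a shared pulse should timestamp it within the stated tolerance of each other.

def max_skew_us(pulse_times_us):
    # Timestamps (in microseconds) at which each node recorded
    # the same synchronisation pulse.
    skew = max(pulse_times_us) - min(pulse_times_us)
    assert skew <= 10, f"nodes drifted apart by {skew} us"  # +/-5 us window
    return skew

print(max_skew_us([1000003, 1000001, 999998]))  # spread of 5 us: OK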
Computer Vision and Pattern Recognition | 2017
Arnon Amir; Brian Taba; David J. Berg; Timothy Melano; Jeffrey L. McKinstry; Carmelo di Nolfo; Tapan Kumar Nayak; Alexander Andreopoulos; Guillaume Garreau; Marcela Mendoza; Jeff Kusnitz; Michael DeBole; Steven K. Esser; Tobi Delbruck; Myron Flickner; Dharmendra S. Modha
We present the first gesture recognition system implemented end-to-end on event-based hardware, using a TrueNorth neurosynaptic processor to recognize hand gestures in real-time at low power from events streamed live by a Dynamic Vision Sensor (DVS). The biologically inspired DVS transmits data only when a pixel detects a change, unlike traditional frame-based cameras which sample every pixel at a fixed frame rate. This sparse, asynchronous data representation lets event-based cameras operate at much lower power than frame-based cameras. However, much of the energy efficiency is lost if, as in previous work, the event stream is interpreted by conventional synchronous processors. Here, for the first time, we process a live DVS event stream using TrueNorth, a natively event-based processor with 1 million spiking neurons. Configured here as a convolutional neural network (CNN), the TrueNorth chip identifies the onset of a gesture with a latency of 105 ms while consuming less than 200 mW. The CNN achieves 96.5% out-of-sample accuracy on a newly collected DVS dataset (DvsGesture) comprising 11 hand gesture categories from 29 subjects under 3 illumination conditions.
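TrueNorth consumes the event stream natively, but the sparse data representation is easy to illustrate by accumulating events into a dense frame (a sketch assuming (timestamp, x, y, polarity) tuples and a 128x128 sensor such as the DVS128; the window length is arbitrary):

import numpy as np

def events_to_frame(events, shape=(128, 128), t0_us=0, window_us=10_000):
    # Accumulate signed DVS events from a short time window into a 2D frame;
    # pixels that saw no brightness change contribute nothing at all.
    frame = np.zeros(shape, dtype=np.float32)
    for t, x, y, pol in events:
        if t0_us <= t < t0_us + window_us:
            frame[y, x] += 1.0 if pol else -1.0
    return frame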
Journal of Computational Physics | 2017
Jung Hee Seo; Hani Bakhshaee; Guillaume Garreau; Chi Zhu; Andreas G. Andreou; William R. Thompson; Rajat Mittal
A computational method for direct simulation of the generation and propagation of blood-flow-induced sounds is proposed. This computational hemoacoustic method is based on the immersed boundary approach and employs high-order finite difference methods to resolve wave propagation and scattering accurately. The method uses a two-step, one-way coupled approach for sound generation and its propagation through the tissue. The blood flow is simulated by solving the incompressible Navier-Stokes equations using the sharp-interface immersed boundary method, and the equations governing the generation and propagation of the three-dimensional elastic wave corresponding to the murmur are resolved with a high-order, immersed-boundary-based finite-difference method in the time domain. The proposed method is applied to a model problem of an aortic stenosis murmur, and the simulation results are verified and validated by comparison with known solutions as well as experimental measurements. Murmur propagation in a realistic model of a human thorax is also simulated using the computational method. The roles of hemodynamics and elastic wave propagation in the murmur are discussed based on the simulation results.
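As a toy illustration of the wave-propagation step (a second-order, one-dimensional sketch; the paper's solver is high-order, three-dimensional and immersed-boundary based):

import numpy as np

def wave_1d(c=1540.0, length_m=0.1, nx=201, nt=400):
    # Explicit finite differences for the 1D wave equation u_tt = c^2 u_xx,
    # with c roughly the speed of sound in soft tissue (m/s).
    dx = length_m / (nx - 1)
    dt = 0.9 * dx / c                      # CFL condition for stability
    r2 = (c * dt / dx) ** 2
    u_prev = np.zeros(nx)
    u = np.zeros(nx)
    u[nx // 2] = 1e-3                      # initial pressure pulse (source)
    for _ in range(nt):
        u_next = np.zeros(nx)              # fixed (u = 0) boundaries
        u_next[1:-1] = (2 * u[1:-1] - u_prev[1:-1]
                        + r2 * (u[2:] - 2 * u[1:-1] + u[:-2]))
        u_prev, u = u, u_next
    return u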
International Symposium on Circuits and Systems | 2016
Andreas G. Andreou; Andrew Dykman; Kate D. Fischl; Guillaume Garreau; Daniel R. Mendat; Garrick Orchard; Andrew S. Cassidy; Paul A. Merolla; John V. Arthur; Rodrigo Alvarez-Icaza; Bryan L. Jackson; Dharmendra S. Modha
The IBM TrueNorth (TN) Neurosynaptic System is a chip multiprocessor with a tightly coupled processor/memory architecture that yields energy-efficient neurocomputing, and it is a significant milestone in over 30 years of neuromorphic engineering. It comprises 4096 cores, each with 65K of local memory (6T SRAM) serving as synapses and 256 arithmetic logic units serving as neurons, which operate on a unary number representation and compute by counting up to a maximum of 19 bits. The cores are event-driven, using custom asynchronous and synchronous logic, and they are globally connected through an asynchronous packet-switched mesh network-on-chip (NOC). The chip development board includes a Xilinx Zynq FPGA that handles housekeeping and provides standard communication through an Ethernet UDP interface. The asynchronous Address-Event Representation (AER) in the NOC is also exposed to the user for connection to AER-based peripherals through a full-duplex, bundled-data packet interface. The unary data values represented on the system buses can take on a wide variety of spatial and temporal encoding schemes: pulse-density coding (the number of events Ne represents a number N), thermometer coding, time-slot encoding, and stochastic encoding are examples. Additional low-level interfaces are available for communicating directly with the TrueNorth chip to aid programming and parameter setting. A hierarchical, compositional programming language, Corelet, is available to aid the development of TN applications. IBM provides support and a development system, as well as "Compass", a scalable simulator. The software environment runs under standard Linux installations (Red Hat, CentOS and Ubuntu) and has standard interfaces to Matlab and to Caffe, which is employed to train deep neural network models. The TN architecture can be interfaced using native AER to a number of bio-inspired sensory devices developed over many years of neuromorphic engineering (silicon retinas and silicon cochleas). In addition, the architecture is well suited to implementing deep neural networks, with many applications in computer vision, speech recognition and language processing. In a sensory information processing system architecture, one desires both pattern processing in space and time, to extract features in symbolic sub-spaces, and natural language processing, to provide contextual and semantic information in the form of priors. In this paper we discuss results from ongoing experimental work on real-time sensory information processing using the TN architecture in three different areas: (i) spatial pattern processing (computer vision), (ii) temporal pattern processing (speech processing and recognition), and (iii) natural language processing (word similarity). A real-time demonstration will be given at ISCAS 2016 using the TN system and neuromorphic event-based sensors for audition (silicon cochlea) and vision (silicon retina).
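The unary, count-based representation is simple to demonstrate (a minimal sketch of the pulse-density coding described above; the tick count and test value are arbitrary):

import numpy as np

rng = np.random.default_rng(0)

def rate_encode(value, ticks):
    # Pulse-density code: the integer 'value' becomes 'value' spikes
    # scattered over 'ticks' time steps.
    spikes = np.zeros(ticks, dtype=np.uint8)
    spikes[:value] = 1
    rng.shuffle(spikes)
    return spikes

def rate_decode(spikes):
    # Decoding is just counting events, which is what the cores do.
    return int(spikes.sum())

print(rate_decode(rate_encode(42, ticks=256)))  # 42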