Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Daniel R. Mendat is active.

Publication


Featured research published by Daniel R. Mendat.


Conference on Information Sciences and Systems | 2013

Audio-visual saliency map: Overview, basic models and hardware implementation

Sudarshan Ramenahalli; Daniel R. Mendat; Salvador Dura-Bernal; Eugenio Culurciello; Ernst Niebur; Andreas G. Andreou

In this paper we provide an overview of audio-visual saliency map models. In the simplest model, the location of the auditory source is modeled as a Gaussian, and different methods are used to combine the auditory and visual information. We then provide experimental results with applications of simple audio-visual integration models for cognitive scene analysis. We validate the simple audio-visual saliency models with a hardware convolutional network architecture and real data recorded from moving audio-visual objects. The latter system was developed in the Torch language by extending the attention.lua (code) and attention.ui (GUI) files that implement Culurciello's visual attention model.
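The simplest fusion model described here lends itself to a short sketch. The following is a minimal illustration, not the paper's Torch code: the auditory source is rendered as a 2D Gaussian and combined with a visual saliency map by a weighted sum or a pointwise product. The grid size, sigma, and fusion weight are assumptions.

```python
# Minimal sketch of the simplest audio-visual fusion model described above.
# Grid size, sigma, and the fusion weight are illustrative assumptions.
import numpy as np

def auditory_map(shape, source_xy, sigma=10.0):
    """Model the auditory source location as an isotropic 2D Gaussian."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (xs - source_xy[0]) ** 2 + (ys - source_xy[1]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def fuse(visual, auditory, method="sum", w=0.5):
    """Two simple ways of combining the modalities: weighted sum or product."""
    if method == "sum":
        fused = w * visual + (1.0 - w) * auditory
    else:  # pointwise product emphasizes locations salient in both modalities
        fused = visual * auditory
    return fused / fused.max()

visual = np.random.rand(120, 160)   # stand-in for a visual saliency map
fused = fuse(visual, auditory_map((120, 160), (80, 60)), method="sum")
```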


Latin American Symposium on Circuits and Systems | 2016

Neuromorphic sampling on the SpiNNaker and Parallella chip multiprocessors

Daniel R. Mendat; Sang Chin; Steve B. Furber; Andreas G. Andreou

We present a bio-inspired, hardware/software architecture to perform Markov Chain Monte Carlo sampling on probabilistic graphical models using energy-aware hardware. We have developed algorithms and programming data flows for two recently developed multiprocessor architectures, the SpiNNaker and the Parallella. We employ a neurally inspired sampling algorithm that abstracts the functionality of neurons in a biological network and exploits the neural dynamics to implement the sampling process. This algorithm maps naturally onto the two hardware systems. Speedups as high as 1000-fold over algorithms running on traditional engineering workstations are achieved when performing inference using this approach.
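As a rough illustration of the neurally inspired sampling idea, the sketch below runs Gibbs sampling on a small Boltzmann-machine graphical model, where each binary "neuron" stochastically updates from its local input so that the network dynamics are themselves the MCMC process. The weights, biases, and update schedule are illustrative assumptions, not the paper's SpiNNaker or Parallella data flows.

```python
# Sketch: Gibbs sampling on a small Boltzmann machine, with each binary
# "neuron" firing stochastically from its local input. All parameters are
# illustrative assumptions.
import numpy as np

rng = np.random.default_rng(0)
n = 5
W = rng.normal(0, 0.5, (n, n)); W = (W + W.T) / 2; np.fill_diagonal(W, 0)
b = rng.normal(0, 0.5, n)
s = rng.integers(0, 2, n).astype(float)    # initial binary network state

samples = []
for step in range(10_000):
    i = step % n                           # sweep the neurons in order
    u = W[i] @ s + b[i]                    # local "membrane" input
    s[i] = float(rng.random() < 1 / (1 + np.exp(-u)))  # stochastic firing
    samples.append(s.copy())

marginals = np.mean(samples[2_000:], axis=0)   # discard burn-in, estimate P(s_i=1)
```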


International Symposium on Circuits and Systems | 2016

Real-time sensory information processing using the TrueNorth Neurosynaptic System

Andreas G. Andreou; Andrew Dykman; Kate D. Fischl; Guillaume Garreau; Daniel R. Mendat; Garrick Orchard; Andrew S. Cassidy; Paul A. Merolla; John V. Arthur; Rodrigo Alvarez-Icaza; Bryan L. Jackson; Dharmendra S. Modha

Summary form only given. The IBM TrueNorth (TN) Neurosynaptic System is a chip multiprocessor with a tightly coupled processor/memory architecture that results in energy-efficient neurocomputing, and it is a significant milestone in over 30 years of neuromorphic engineering. It comprises 4096 cores, each with 65K of local memory (6T SRAM), the synapses, and 256 arithmetic logic units, the neurons, that operate on a unary number representation and compute by counting up to a maximum of 19 bits. The cores are event-driven, using custom asynchronous and synchronous logic, and they are globally connected through an asynchronous packet-switched mesh network on chip (NOC). The chip development board includes a Zynq Xilinx FPGA that does the housekeeping and provides standard communication support through an Ethernet UDP interface. The asynchronous Address-Event Representation (AER) in the NOC is also exposed to the user for connection to AER-based peripherals through a full-duplex packet interface with bundled data. The unary data values represented on the system buses can take on a wide variety of spatial and temporal encoding schemes: pulse-density coding (the number of events Ne represents a number N), thermometer coding, time-slot encoding, and stochastic encoding are examples. Additional low-level interfaces are available for communicating directly with the TrueNorth chip to aid programming and parameter setting. A hierarchical, compositional programming language, Corelet, is available to aid the development of TN applications. IBM provides support and a development system as well as “Compass,” a scalable simulator. The software environment runs under standard Linux installations (Red Hat, CentOS and Ubuntu) and has standard interfaces to Matlab and to Caffe, which is employed to train deep neural network models. The TN architecture can be interfaced using native AER to a number of bio-inspired sensory devices developed over many years of neuromorphic engineering (silicon retinas and silicon cochleas). In addition, the architecture is well suited for implementing deep neural networks, with many applications in computer vision, speech recognition and language processing. In a sensory information processing system architecture one desires both pattern processing in space and time, to extract features in symbolic sub-spaces, and natural language processing, to provide contextual and semantic information in the form of priors. In this paper we discuss results from ongoing experimental work on real-time sensory information processing using the TN architecture in three different areas: (i) spatial pattern processing (computer vision), (ii) temporal pattern processing (speech processing and recognition), and (iii) natural language processing (word similarity). A real-time demonstration will be done at ISCAS 2016 using the TN system and neuromorphic event-based sensors for audition (silicon cochlea) and vision (silicon retina).
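Among the encoding schemes listed, pulse-density coding is the simplest to illustrate. The sketch below is an assumption for illustration rather than TrueNorth tooling: a value N is carried as N spike events within a fixed window, so decoding is just counting. The window length is arbitrary.

```python
# Sketch of unary pulse-density coding: encode a value as that many events
# in a binary window; decode by counting. Window length is an assumption.
import numpy as np

def encode_pulse_density(value, window=32, rng=None):
    """Place `value` events at random slots of a binary window (value <= window)."""
    rng = rng or np.random.default_rng()
    slots = rng.choice(window, size=value, replace=False)
    spikes = np.zeros(window, dtype=np.uint8)
    spikes[slots] = 1
    return spikes

def decode_pulse_density(spikes):
    """Decoding a pulse-density code is simply counting the events."""
    return int(spikes.sum())

spikes = encode_pulse_density(13)
assert decode_pulse_density(spikes) == 13
```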


Conference on Information Sciences and Systems | 2017

Neuromorphic self-driving robot with retinomorphic vision and spike-based processing/closed-loop control

Kate D. Fischl; Gaspar Tognetti; Daniel R. Mendat; Garrick Orchard; John Rattray; Christos Sapsanis; Laura F. Campbell; Laxaviera Elphage; Tobias E. Niebur; Alejandro Pasciaroni; Valerie E. Rennoll; Heather Romney; Shamaria Walker; Philippe O. Pouliquen; Andreas G. Andreou

We present our work on a neuromorphic self-driving robot that employs retinomorphic visual sensing and spike-based processing. The robot senses the world through a spike-based visual system - the Asynchronous Time-based Image Sensor (ATIS) - and processes the sensory data stream using IBM's TrueNorth Neurosynaptic System. A convolutional neural network (CNN) running on the TrueNorth determines the steering direction based on what the ATIS “sees.” The network was trained on data from three different environments (indoor hallways, large campus sidewalks, and narrow neighborhood sidewalks) and achieved steering decision accuracies from 68% to 82% on development data from each dataset.
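The abstract does not spell out how the ATIS event stream is prepared for the CNN, but one common approach, sketched below purely as an assumption, is to accumulate events into signed 2D histograms over fixed time windows. The sensor resolution and window length here are illustrative.

```python
# Sketch: accumulate an event-camera stream into frames a CNN can consume.
# Resolution and window length are assumptions, not the paper's pipeline.
import numpy as np

def events_to_frame(events, shape=(180, 240), t0=0.0, dt=0.05):
    """Accumulate (t, x, y, polarity) events in [t0, t0+dt) into one frame."""
    in_win = (events[:, 0] >= t0) & (events[:, 0] < t0 + dt)
    frame = np.zeros(shape, dtype=np.float32)
    for _, x, y, p in events[in_win]:
        frame[int(y), int(x)] += 1.0 if p > 0 else -1.0  # signed event count
    return frame

# Toy event stream; a steering classifier would then map each frame to
# {left, straight, right}.
events = np.array([[0.010, 120, 90, +1],
                   [0.020, 121, 90, -1],
                   [0.030,  60, 40, +1]])
frame = events_to_frame(events)
```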


Proceedings of SPIE | 2015

The Johns Hopkins University multimodal dataset for human action recognition

Thomas S. Murray; Daniel R. Mendat; Philippe O. Pouliquen; Andreas G. Andreou

The Johns Hopkins University MultiModal Action (JHUMMA) dataset contains a set of twenty-one actions recorded with four sensor systems in three different modalities. The data were collected with an acquisition system that includes three independent active sonar devices operating at three different frequencies and a Microsoft Kinect sensor that provides both RGB and depth data. We have developed algorithms for human action recognition from active acoustics and provide benchmark baseline recognition performance results.


Conference on Information Sciences and Systems | 2017

Action recognition using micro-Doppler signatures and a recurrent neural network

Jeff Craley; Thomas S. Murray; Daniel R. Mendat; Andreas G. Andreou

This paper explores the long short-term memory (LSTM) recurrent neural network for human action recognition from micro-Doppler signatures. The recurrent neural network model is evaluated using the Johns Hopkins MultiModal Action (JHUMMA) dataset. In testing we use only the active acoustic micro-Doppler signatures. We compare classification performed using hidden Markov model (HMM) systems trained on both micro-Doppler sensor and Kinect data with LSTM classification trained only on the micro-Doppler signatures. For the HMM systems we evaluate the performance of product-of-experts systems and systems trained on concatenated sensor data. By testing with leave-one-user-out (LOUO) cross-validation we verify the ability of these systems to generalize to new users. We find that LSTM systems trained only on micro-Doppler signatures outperform the other models evaluated.
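The leave-one-user-out protocol is easy to make concrete. The sketch below uses scikit-learn's grouped splitter with a stand-in linear classifier in place of the LSTM; the feature arrays, the six hypothetical users, and per-sequence features are all placeholders, with only the 21 action classes taken from the JHUMMA description above.

```python
# Sketch of leave-one-user-out (LOUO) cross-validation: each fold holds out
# every sequence from one user. Classifier and data are placeholders.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut
from sklearn.linear_model import LogisticRegression   # stand-in for the LSTM

X = np.random.rand(120, 64)          # per-sequence micro-Doppler features (toy)
y = np.random.randint(0, 21, 120)    # 21 action labels, as in JHUMMA
users = np.repeat(np.arange(6), 20)  # which (hypothetical) user produced each sequence

accs = []
for train_idx, test_idx in LeaveOneGroupOut().split(X, y, groups=users):
    clf = LogisticRegression(max_iter=1000).fit(X[train_idx], y[train_idx])
    accs.append(clf.score(X[test_idx], y[test_idx]))  # accuracy on unseen user
print(f"Mean LOUO accuracy: {np.mean(accs):.3f}")
```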


Conference on Information Sciences and Systems | 2017

Audio-Visual beamforming with the Eigenmike microphone array, an omni-camera, and cognitive auditory features

Daniel R. Mendat; James E. West; Sudarshan Ramenahalli; Ernst Niebur; Andreas G. Andreou

Audio-visual beamforming involves both an acoustic sensor and an omni-camera to form a composite 3D audio-visual representation of the environment. Information from the respective modalities is combined in the process of acoustic localization, taking into account high-level cognitive features of the signals, namely the presence of specific sounds - speech and tones - which have characteristic signatures in specific spectral bands. We compare the results from two systems. One is a custom-built architecture based on the MH Acoustics Eigenmike microphone array and a consumer-grade omni-camera (Bloggie). The other is the commercially available VisiSonics Audio-Visual (AV) camera. We show that the performance of the two systems is comparable.
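As one hedged illustration of using such "cognitive" spectral-band features in localization (not the Eigenmike processing chain itself), the sketch below weights a toy per-direction acoustic power map by the fraction of signal energy falling in the speech band. The band edges, sample rate, and the power map are all assumptions.

```python
# Sketch: gate acoustic localization evidence by energy in a characteristic
# spectral band (here, speech). Band edges and the toy power map are assumptions.
import numpy as np

def band_energy(signal, fs, lo, hi):
    """Fraction of spectral energy falling in the [lo, hi] Hz band."""
    spec = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    mask = (freqs >= lo) & (freqs <= hi)
    return spec[mask].sum() / spec.sum()

fs = 16_000
mic = np.random.randn(fs)                  # 1 s stand-in for one microphone channel
speech_weight = band_energy(mic, fs, 300, 3400)   # speech-band evidence

power_map = np.random.rand(36, 18)         # toy azimuth x elevation acoustic power
weighted_map = speech_weight * power_map   # localization gated by the feature
```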


Latin American Symposium on Circuits and Systems | 2016

Bio-inspired system architecture for energy efficient, BIGDATA computing with application to wide area motion imagery

Andreas G. Andreou; Tomas Figliolia; Kayode Sanni; Thomas S. Murray; Gaspar Tognetti; Daniel R. Mendat; Jamal Lottier Molin; Martin Villemur; Philippe O. Pouliquen; Pedro Julián; Ralph Etienne-Cummings; Isidoros Doxas

In this paper we discuss a brain-inspired system architecture for real-time, big-velocity BIGDATA processing that originates in the large-format tiled imaging arrays used in wide area motion imagery for ubiquitous surveillance. High performance and high throughput are achieved through approximate computing and fixed-point arithmetic in a variable-precision (6 to 18 bits) architecture. The architecture implements a variety of processing algorithm classes, ranging from convolutional networks (ConvNets) to linear and non-linear morphological processing, probabilistic inference using exact and approximate Bayesian methods, and ConvNet-based classification. The processing pipeline is implemented entirely using event-based neuromorphic and stochastic computational primitives. The system is capable of processing 160 × 120 raw pixel data in real time, running on a reconfigurable computing platform (5 Xilinx Kintex-7 FPGAs). The reconfigurable computing implementation was developed to emulate the computational structures for a 3D System on Chip (3D-SOC) that will be fabricated in 55 nm CMOS technology, and it has a dual goal: (i) algorithm exploration and (ii) architecture exploration.
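As a small illustration of the variable-precision fixed-point idea, the sketch below quantizes values to b fractional bits with b chosen between the stated 6 and 18; round-to-nearest is an assumed rounding scheme, not necessarily the one used in the pipeline.

```python
# Sketch of variable-precision fixed-point quantization, with the fractional
# bit width swept over the 6-18 bit range stated in the abstract.
import numpy as np

def to_fixed_point(x, bits):
    """Quantize x to fixed point with `bits` fractional bits (round-to-nearest)."""
    scale = 2 ** bits
    return np.round(x * scale) / scale

x = np.random.randn(4)
for bits in (6, 12, 18):   # coarser stages save energy, finer ones save accuracy
    err = np.abs(to_fixed_point(x, bits) - x).max()
    print(f"{bits:2d} bits: max quantization error {err:.2e}")
```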


Conference on Information Sciences and Systems | 2013

Auditory modulation of visual proto-object formation in a hierarchical auditory-visual saliency map

Tomas Figliolia; Daniel R. Mendat; Alexander F. Russell; Thomas S. Murray; Ernst Niebur; Ralph Etienne-Cummings; Andreas G. Andreou

Organisms use the process of selective attention to optimally allocate their computational resources to interesting subsets of a visual scene, ensuring that they can parse the scene in real time. Many models of attention assume that basic image features (e.g. intensity, color and orientation) behave as attractors for attention. Gestalt psychologists, however, argue that humans perceive the whole before they analyze individual features. This is supported by recent psychophysical studies showing that objects predict eye fixations better than features. In this paper we present a neurally inspired algorithm for object-based, bottom-up attention, extending the original object-based visual saliency map using border ownership to include auditory integration. The integration is implemented by modulating the receptive field for border ownership in the process of proto-object formation. We report preliminary experimental results from our model using data from two types of objects, people and cars.
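A rough sketch of the modulation mechanism described, under assumed map sizes and gain parameters: auditory input multiplicatively gains up border-ownership responses near the sound source before proto-object pooling.

```python
# Sketch: auditory modulation of border-ownership (BO) responses during
# proto-object formation. Map sizes and gain parameters are assumptions.
import numpy as np

def auditory_gain(shape, source_xy, sigma=15.0, strength=0.5):
    """Multiplicative gain field peaked at the auditory source location."""
    ys, xs = np.mgrid[0:shape[0], 0:shape[1]]
    d2 = (xs - source_xy[0]) ** 2 + (ys - source_xy[1]) ** 2
    return 1.0 + strength * np.exp(-d2 / (2.0 * sigma ** 2))

border_ownership = np.random.rand(120, 160)   # stand-in BO responses
modulated = border_ownership * auditory_gain((120, 160), (40, 90))
# Proto-object saliency would then be pooled from the modulated BO map.
```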


Conference on Information Sciences and Systems | 2015

Markov Chain Monte Carlo inference on graphical models using event-based processing on the SpiNNaker neuromorphic architecture

Daniel R. Mendat; Sang Chin; Steve B. Furber; Andreas G. Andreou

Collaboration


Dive into Daniel R. Mendat's collaborations.

Top Co-Authors

Kate D. Fischl (Johns Hopkins University)
Kayode Sanni (Johns Hopkins University)
Sang Chin (Johns Hopkins University)