Network


Latest external collaboration at the country level.

Hotspot


Dive into the research topics where Jamal Lottier Molin is active.

Publication


Featured research published by Jamal Lottier Molin.


Conference on Information Sciences and Systems | 2015

FPGA implementation of a Deep Belief Network architecture for character recognition using stochastic computation

Kayode Sanni; Guillaume Garreau; Jamal Lottier Molin; Andreas G. Andreou

Deep Neural Networks (DNNs) have proven very effective for classification and generative tasks, and are widely adopted in a variety of fields including vision, robotics, speech processing, and more. Specifically, Deep Belief Networks (DBNs), graphical models constructed of multiple layers of nodes connected as Markov random fields, have been successfully applied to such tasks. However, because of the numerous connections between nodes in the network, DBNs suffer the drawback of being computationally intensive. In this work, we exploit an alternative approach based on computation on probabilistic unary streams to design a more efficient deep neural network architecture for classification.
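The core trick behind computation on probabilistic unary streams is that a single AND gate multiplies two values encoded as Bernoulli bitstreams, replacing a full digital multiplier. The abstract does not give the authors' circuit details; the sketch below is only a minimal software illustration of that general stochastic-computing principle, with illustrative function names and stream lengths.

```python
import random

def unary_stream(p, n, rng):
    """Encode a probability p in [0, 1] as a length-n Bernoulli bitstream."""
    return [1 if rng.random() < p else 0 for _ in range(n)]

def stochastic_multiply(a, b, n=10000, seed=0):
    """Multiply two encoded values by AND-ing their unary streams.

    The fraction of 1s in the AND-ed stream estimates a * b, since the
    two streams are independent Bernoulli sequences."""
    rng = random.Random(seed)
    sa = unary_stream(a, n, rng)
    sb = unary_stream(b, n, rng)
    anded = [x & y for x, y in zip(sa, sb)]
    return sum(anded) / n
```

The estimate is noisy with variance shrinking as the stream lengthens, which is the usual stochastic-computing trade of precision for hardware area.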


International Midwest Symposium on Circuits and Systems | 2015

FPGA emulation of a spike-based, stochastic system for real-time image dewarping

Jamal Lottier Molin; Tomas Figliolia; Kayode Sanni; Isidoros Doxas; Andreas G. Andreou; Ralph Etienne-Cummings

Image dewarping is vital for systems performing image processing tasks in real-time. We introduce a real-time emulation of a low-power, event-based system for image dewarping implemented on an FPGA. The system utilizes stochastic computation in conjunction with a neuromorphic system called the Integrate-and-Fire Array Transceiver (IFAT) for low hardware area and to reduce computational complexity. It models the dynamics of integrate-and-fire neurons and takes advantage of the reconfigurability of the IFAT system for performing image dewarping in real-time. When real-time camera motion data is readily available, we show that this system can accurately perform the dewarping task. Such a system is ideal for devices such as micro aerial vehicles performing visual processing tasks in-flight.
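In event-based dewarping, each incoming address-event can simply be routed to a corrected pixel address, which is the role the reconfigurable IFAT connectivity plays in hardware. The sketch below illustrates that routing idea with a precomputed remap table; the simple radial-distortion model, the coefficient `k`, and all names are assumptions for illustration, not the paper's actual warp model.

```python
def build_remap(width, height, k=-0.2):
    """Precompute a dewarp lookup table for a simple radial distortion.

    Maps each distorted pixel (x, y) to its corrected address, using a
    single radial term: scale = 1 + k * r^2 in normalized coordinates."""
    cx, cy = width / 2, height / 2
    remap = {}
    for y in range(height):
        for x in range(width):
            dx, dy = (x - cx) / cx, (y - cy) / cy
            r2 = dx * dx + dy * dy
            scale = 1 + k * r2  # radial correction factor
            ux = int(round(cx + dx * scale * cx))
            uy = int(round(cy + dy * scale * cy))
            if 0 <= ux < width and 0 <= uy < height:
                remap[(x, y)] = (ux, uy)
    return remap

def dewarp_events(events, remap):
    """Route each (x, y) address-event through the table; drop out-of-range."""
    return [remap[(x, y)] for (x, y) in events if (x, y) in remap]
```

Because the correction is a pure address remapping, it costs one table lookup per event rather than per pixel per frame, which is where the event-based approach saves computation.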


Conference on Information Sciences and Systems | 2015

How is motion integrated into a proto-object based visual saliency model?

Jamal Lottier Molin; Ralph Etienne-Cummings; Ernst Niebur

When computing visual saliency on natural scenes, many current models do not consider temporal information that may exist within the visual stimuli. Most models are designed for predicting salient regions of static images only. However, the world is dynamic and constantly changing. Furthermore, motion is a naturally occurring phenomenon that plays an essential role in both human and computer visual processing. Hence, the most efficient model of visual saliency should consider motion that may be exhibited within the visual scene. In this paper, we investigate the most advantageous and biologically plausible manner in which motion should be applied to our current model of proto-object based visual saliency. We investigate the type of motion that should be extracted in such a bottom-up, feed-forward model, as well as where the motion should be incorporated into the model. Two final approaches are suggested and compared by how well each predicts human eye saccades on a set of videos from the Itti dataset. Each is validated using the KL divergence metric. We conclude by selecting the model that better predicts saccades across the videos in the dataset. Our results also give general insight into how motion should be integrated into a visual saliency model.
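The KL divergence metric mentioned here compares the distribution predicted by a saliency map against the distribution of actual human fixations. A minimal version of that comparison, assuming both are flattened into discrete histograms (the smoothing constant and names are illustrative, not the paper's exact protocol), can be sketched as:

```python
import math

def kl_divergence(p, q, eps=1e-9):
    """KL(P || Q) between two discrete distributions, e.g. a human
    fixation histogram P and a normalized saliency map Q.

    A small eps avoids log(0) for empty bins; both inputs are
    renormalized so arbitrary nonnegative histograms are accepted."""
    p = [v + eps for v in p]
    q = [v + eps for v in q]
    zp, zq = sum(p), sum(q)
    return sum((pi / zp) * math.log((pi / zp) / (qi / zq))
               for pi, qi in zip(p, q))
```

A score of zero means the saliency map matches the fixation distribution exactly; larger values mean worse prediction, so between two candidate models the one with lower divergence across videos would be preferred.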


Biomedical Circuits and Systems Conference | 2015

Dynamically reconfigurable silicon array of generalized integrate-and-fire neurons

Vigil Varghese; Jamal Lottier Molin; Christian Brandli; Shoushun Chen; Ralph Etienne Cummings

In this paper, we present a highly scalable, dynamically reconfigurable, energy-efficient silicon neuron model for large-scale neural networks. This model is a simplification of the generalized linear integrate-and-fire neuron model. The presented model is capable of reproducing 9 of the 20 prominent biologically relevant neuron behaviors. The circuits are designed for a 0.5 μm process and occupy an area of 1029 μm², while consuming an average power of only 0.38 nW at 1 kHz.
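The generalized linear integrate-and-fire model the circuit simplifies builds on basic leaky integrate-and-fire dynamics: a membrane voltage integrates input current, leaks toward rest, and emits a spike with a reset when it crosses threshold. A minimal Euler-integration sketch of those base dynamics (parameter values are illustrative, not the chip's):

```python
def simulate_lif(current, steps, dt=1e-3, tau=0.02, r=1.0,
                 v_rest=0.0, v_thresh=1.0, v_reset=0.0):
    """Leaky integrate-and-fire neuron: tau * dV/dt = -(V - v_rest) + R*I.

    Integrates with forward Euler; on crossing v_thresh the neuron
    records a spike time (step index) and resets to v_reset."""
    v = v_rest
    spikes = []
    for t in range(steps):
        v += dt / tau * (-(v - v_rest) + r * current)
        if v >= v_thresh:
            spikes.append(t)
            v = v_reset
    return spikes
```

With a constant input whose steady-state voltage exceeds threshold the neuron fires regularly; below that the voltage saturates subthreshold and no spikes occur, which is the basic behavior richer models like this one extend.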


Biomedical Circuits and Systems Conference | 2013

Proto-object based visual saliency model with a motion-sensitive channel

Jamal Lottier Molin; Alexander F. Russell; Stefan Mihalas; Ernst Niebur; Ralph Etienne-Cummings

The human visual system has the inherent capability of using selective attention to rapidly process visual information across visual scenes. Early models of visual saliency are purely feature-based and compute visual attention for static scenes. However, to model the human visual system, it is important to also consider temporal change that may exist within the scene when computing visual saliency. We present a biologically plausible model of dynamic visual attention that computes saliency as a function of proto-objects modulated by an independent motion-sensitive channel. This motion-sensitive channel extracts motion information via biologically plausible temporal filters modeling simple cell receptive fields. Using KL divergence measurements, we show that this model performs significantly better than chance in predicting eye fixations. Furthermore, in our experiments, this model outperforms the Itti 2005 dynamic saliency model and does not differ significantly in performance from the graph-based dynamic visual saliency model.
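One common way temporal filters model simple-cell responses to change is a difference of fast and slow exponential averages per pixel: static input cancels out, while changing (moving) input produces a transient response. The sketch below is only that generic idea, not the paper's specific filter bank; the time constants and names are assumptions.

```python
def temporal_bandpass(frames, tau_fast=1.0, tau_slow=4.0):
    """Per-pixel difference-of-exponentials temporal filter.

    Each pixel keeps a fast and a slow running average of its intensity;
    their difference responds to temporal change and decays to zero for
    static input, a crude stand-in for simple-cell temporal filtering."""
    h, w = len(frames[0]), len(frames[0][0])
    fast = [[0.0] * w for _ in range(h)]
    slow = [[0.0] * w for _ in range(h)]
    out = []
    for frame in frames:
        resp = [[0.0] * w for _ in range(h)]
        for y in range(h):
            for x in range(w):
                fast[y][x] += (frame[y][x] - fast[y][x]) / tau_fast
                slow[y][x] += (frame[y][x] - slow[y][x]) / tau_slow
                resp[y][x] = fast[y][x] - slow[y][x]
        out.append(resp)
    return out
```

A motion channel built this way can then modulate the proto-object saliency computation wherever motion energy is high.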


International Symposium on Circuits and Systems | 2016

Stochastic image processing and simultaneous dewarping for aerial vehicles

Jamal Lottier Molin; John Rattray; Ralph Etienne-Cummings

There is increasing interest in having aerial vehicles perform image processing tasks (e.g., object recognition and detection) in real-time. Such systems should have minimal data throughput, low computational complexity, and low power consumption. Traditional frame-based digital cameras are not ideal for meeting such specifications. More recent cameras, inspired by biology, drastically reduce data throughput by representing information in event streams, and in doing so, represent image information temporally. In this work, we utilize the ATIS (Asynchronous Time-based Image Sensor) in conjunction with a Field-Programmable Gate Array implementation of the Integrate-and-Fire Array Transceiver for performing an event-based, simultaneous image dewarping and filtering task. The ATIS output is inherently event-based and stochastic, giving our system the low data throughput and low power consumption that we seek, as it more directly mimics the communication protocol of biological neurons. We further emphasize how our system can be coupled with aerial vehicles that must perform visual tasks in real-time on a coherent representation of what the camera has captured.


International Symposium on Circuits and Systems | 2017

Low-power, low-mismatch, highly-dense array of VLSI Mihalas-Niebur neurons

Jamal Lottier Molin; Adebayo Eisape; Chetan Singh Thakur; Vigil Varghese; Christian Brandli; Ralph Etienne-Cummings

We present an array of Mihalas-Niebur neurons with dynamically reconfigurable synapses implemented in 0.5 μm CMOS technology, optimized for low power, low mismatch, and high density. This neural array has two modes of operation: in the first, each cell in the array operates as an independent leaky integrate-and-fire neuron; in the second, two cells work together to model the Mihalas-Niebur neuron dynamics. Depending on the mode of operation, this implementation consists of 2040 Mihalas-Niebur neurons or 4080 I&F neurons within a 3 mm × 3 mm area. Each I&F neuron cell occupies an area of 1495 μm², and the neural array dissipates 360 pJ of energy per synaptic event measured at a 5.0 V power supply (∼14 pJ at 1.0 V, estimated from SPICE simulation).
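The distinguishing feature of the Mihalas-Niebur model over a plain integrate-and-fire neuron is an adaptive threshold that tracks the membrane voltage, giving behaviors like spike-frequency adaptation. The sketch below captures only that qualitative structure in simplified, dimensionless form; all parameter values, the threshold-jump rule, and names are illustrative assumptions, not the published equations or the chip's implementation.

```python
def simulate_mn(current, steps, dt=0.1,
                g=0.5, e_l=0.0,
                a=0.2, b=0.5, th_inf=1.0,
                v_reset=0.0, th_plus=0.2):
    """Simplified Mihalas-Niebur-style dynamics: a leaky membrane V plus
    an adaptive threshold Theta that rises with V and decays toward
    th_inf; the neuron spikes when V crosses Theta."""
    v, th = e_l, th_inf
    spikes = []
    for t in range(steps):
        v += dt * (current - g * (v - e_l))        # leaky integration
        th += dt * (a * (v - e_l) - b * (th - th_inf))  # threshold tracks V
        if v >= th:
            spikes.append(t)
            v = v_reset
            th += th_plus  # threshold jumps after a spike (adaptation)
    return spikes
```

Because the threshold climbs after each spike and only slowly relaxes, sustained input produces adapting spike trains rather than the fixed-rate firing of a static-threshold neuron.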


International Symposium on Circuits and Systems | 2017

Live demonstration: Real-time, dynamic visual saliency computation in a VR environment seeing through the eyes of a mobile robot

Jamal Lottier Molin; Christopher Simmons; Garrett Nixon; Ralph Etienne-Cummings

We demonstrate dynamic visual saliency computation within a virtual environment using the Oculus Rift with a custom eye tracker. The visual display is representative of the real-time view of a mobile robot with two mounted first-person view cameras for stereoscopic vision.


International Symposium on Circuits and Systems | 2017

Neuromorphic visual saliency implementation using stochastic computation

Chetan Singh Thakur; Jamal Lottier Molin; Tao Xiong; Jie Zhang; Ernst Niebur; Ralph Etienne-Cummings

Visual saliency models are difficult to implement in hardware for real-time applications due to their computational complexity. The conventional digital implementation is not optimal because of the large number of convolution operations required for filtering on several feature channels across multiple image pyramids [1], [2]. Here, we propose an alternative approach to implement a neuromorphic visual saliency algorithm [3] in digital hardware using stochastic computation, which can achieve very low power and small area. We show the real-time implementation of important building blocks of the system and compare the overall system with its software implementation. Our implementation will be useful for facilitating high-fidelity selective rendering in computer graphics applications using the output of the saliency model, and for communications, where the non-salient parts of an image can be compressed more heavily than the salient parts. It will also find several applications as a front-end co-processor for information triaging, compression, and analysis in computer vision tasks. Our proposed SC-based convolution circuit could be a potential building block for implementing deep convolutional neural networks (CNNs) in hardware.
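A standard stochastic-computing recipe for a convolution tap sum is to combine AND-gate multiplication with a multiplexer for scaled addition: the MUX selects one input stream per clock with probability equal to its (normalized) kernel weight. The paper's actual circuit is not described here; the code below is a software sketch of that textbook construction, with illustrative names, a nonnegative normalized kernel, and a stream length chosen for accuracy rather than hardware realism.

```python
import random

def sc_convolve_pixel(window, kernel, n=20000, seed=1):
    """Stochastic-computing dot product of an image window (values in
    [0, 1]) with a nonnegative kernel.

    Per clock: a MUX selects kernel tap i with probability k_i / total,
    then outputs one Bernoulli bit of the selected pixel's stream.
    The mean of the output stream, rescaled by total, estimates the
    exact dot product sum_i k_i * pixel_i."""
    rng = random.Random(seed)
    flat_w = [p for row in window for p in row]
    flat_k = [k for row in kernel for k in row]
    total = sum(flat_k)
    ones = 0
    for _ in range(n):
        r = rng.random() * total   # MUX select, weighted by kernel
        acc = 0.0
        for pix, k in zip(flat_w, flat_k):
            acc += k
            if r < acc:
                ones += 1 if rng.random() < pix else 0
                break
    return ones / n * total
```

The appeal in hardware is that this replaces multipliers and adders with single gates and a selector, at the cost of long bitstreams, which is the low-power, small-area trade-off the abstract describes.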


International Symposium on Circuits and Systems | 2017

Live demonstration: Event-based image processing on CMOS Mihalas-Niebur neuron array transceiver

Jamal Lottier Molin; Adebayo Eisape; Ralph Etienne-Cummings

Neuromorphic systems mimic the functionality and communication protocol of biological neurons with efforts to design the most size- and power-efficient computing systems. We demonstrate the capability of an in-house neural array of 4080 Mihalas-Niebur neurons designed in 0.5 μm CMOS technology to perform various event-based image processing tasks, including warping and filtering.

Collaboration


Dive into Jamal Lottier Molin's collaboration.

Top Co-Authors

Kayode Sanni (Johns Hopkins University)
Ernst Niebur (Johns Hopkins University)
Adebayo Eisape (Johns Hopkins University)
John Rattray (Johns Hopkins University)
Vigil Varghese (Nanyang Technological University)