Publication


Featured research published by Sakyasingha Dasgupta.


Frontiers in Neurorobotics | 2017

A Neurocomputational Model of Goal-Directed Navigation in Insect-Inspired Artificial Agents

Dennis Goldschmidt; Poramate Manoonpong; Sakyasingha Dasgupta

Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation-directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector, which guides them back home on a straight path. They further acquire and retrieve path integration-based vector memories anchored globally to the nest or to visual landmarks. Although existing computational models have reproduced similar behaviors, a neurocomputational model of vector navigation including the acquisition of vector representations has not been described before. Here we present a model of neural mechanisms in a modular closed-loop control enabling vector navigation in artificial agents. The model consists of a path integration mechanism, reward-modulated global learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation under realistic conditions. Consequently, we provide a novel approach for vector learning and navigation in a simulated, situated agent, linking behavioral observations to their possible underlying neural substrates.
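The circular-array path integration and home-vector readout described in this abstract can be sketched in a few lines. The snippet below is a hedged illustration, not the authors' implementation: the number of array cells, the bump shape of the compass code, and the decoding step are assumptions chosen for readability.

```python
# Minimal sketch (not the authors' code) of the circular-array path
# integration idea: heading is population-coded over N preferred directions,
# speed is rate-coded, and the home vector is decoded from the accumulated
# displacement pattern.
import numpy as np

N = 18                                                 # cells in the circular array (assumed)
pref = np.linspace(0, 2 * np.pi, N, endpoint=False)    # preferred directions

def compass_activity(heading, kappa=3.0):
    """Population code for the current heading (von-Mises-like bump)."""
    return np.exp(kappa * (np.cos(heading - pref) - 1.0))

def integrate_path(headings, speeds):
    """Accumulate odometric steps into a home-vector estimate."""
    hv = np.zeros(N)                       # rate-coded displacement per direction cell
    for h, s in zip(headings, speeds):
        hv += s * compass_activity(h)      # odometry gates the compass bump
    # Decode the vector pointing back to the start (opposite of outbound travel)
    x = hv @ np.cos(pref)
    y = hv @ np.sin(pref)
    return np.arctan2(-y, -x), np.hypot(x, y)

# Example: a noisy outbound run of 200 unit-speed steps
rng = np.random.default_rng(0)
headings = rng.uniform(0, 2 * np.pi, 1) + np.cumsum(rng.normal(0, 0.1, 200))
angle, dist = integrate_path(headings, speeds=np.ones(200))
print(f"home vector: angle={angle:.2f} rad, length={dist:.1f}")
```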


International Conference on Pattern Recognition | 2016

Regularized dynamic Boltzmann machine with Delay Pruning for unsupervised learning of temporal sequences

Sakyasingha Dasgupta; Takayuki Yoshizumi; Takayuki Osogami

We introduce Delay Pruning, a simple yet powerful technique to regularize dynamic Boltzmann machines (DyBM). The recently introduced DyBM provides a particularly structured Boltzmann machine as a generative model of a multi-dimensional time series. This Boltzmann machine can have infinitely many layers of units but allows exact inference and learning based on its biologically motivated structure. DyBM uses the idea of conduction delays in the form of fixed-length first-in first-out (FIFO) queues: each neuron is connected to another via such a queue, and spikes from a pre-synaptic neuron travel along the queue to the post-synaptic neuron with a constant period of delay. Here, we present Delay Pruning as a mechanism to prune the FIFO queues by setting some delays to one with a fixed probability (so that the corresponding queue lengths become zero), and finally selecting the best-performing model with fixed delays. The unique structure and non-sampling-based learning rule of DyBM make it difficult to apply previously proposed regularization techniques such as Dropout or DropConnect, leading to poor generalization. First, we evaluate the performance of Delay Pruning when DyBM learns a multidimensional temporal sequence generated by a Markov chain. Finally, we show the effectiveness of Delay Pruning in learning high-dimensional sequences using the moving MNIST dataset, and compare it with the Dropout and DropConnect methods.
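A rough illustration of the pruning step described above (set each conduction delay to one with a fixed probability, then keep the best-performing fixed-delay model) is given below. It is a sketch only, not tied to any released DyBM code; `train_and_score` is a hypothetical placeholder for training and validating one pruned candidate.

```python
# Illustrative sketch of Delay Pruning: each synaptic delay is independently
# reduced to 1 (emptying its FIFO queue) with probability p, several pruned
# candidates are trained, and the best-performing configuration is kept.
import numpy as np

rng = np.random.default_rng(0)

def prune_delays(delays, p):
    """Set each delay to 1 with probability p, keep it unchanged otherwise."""
    delays = np.asarray(delays)
    mask = rng.random(delays.shape) < p
    return np.where(mask, 1, delays)

def delay_pruning_search(delays, p, n_candidates, train_and_score):
    """Return the candidate delay configuration with the best validation score.

    `train_and_score` stands in for training a model with the given fixed
    delays and returning a validation metric (higher is better).
    """
    best_cfg, best_score = None, -np.inf
    for _ in range(n_candidates):
        cfg = prune_delays(delays, p)
        score = train_and_score(cfg)
        if score > best_score:
            best_cfg, best_score = cfg, score
    return best_cfg, best_score

# Toy usage: pretend that smaller total delay generalizes better
initial_delays = rng.integers(2, 9, size=(5, 5))
best, score = delay_pruning_search(
    initial_delays, p=0.3, n_candidates=10,
    train_and_score=lambda cfg: -cfg.sum())
print(best, score)
```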


International Symposium on Neural Networks | 2015

A neural path integration mechanism for adaptive vector navigation in autonomous agents

Dennis Goldschmidt; Sakyasingha Dasgupta; Florentin Wörgötter; Poramate Manoonpong

Animals show remarkable capabilities in navigating their habitat in a fully autonomous and energy-efficient way. In many species, these capabilities rely on a process called path integration, which enables them to estimate their current location and to find their way back home after long-distance journeys. Path integration is achieved by integrating compass and odometric cues. Here we introduce a neural path integration mechanism that interacts with neural locomotion control to simulate homing behavior and path integration-related behaviors observed in animals. The mechanism is applied to a simulated six-legged artificial agent. Input signals from an allothetic compass and odometry are sustained through leaky neural integrator circuits, which are then used to compute the home vector via local excitation-global inhibition interactions. The home vector is computed and represented in circular arrays of neurons, where compass directions are population-coded and linear displacements are rate-coded. The mechanism allows for robust homing behavior in the presence of external sensory noise. The emergent behavior of the controlled agent not only provides a robust solution to the problem of autonomous agent navigation, but also reproduces various aspects of animal navigation. Finally, we discuss how the proposed path integration mechanism may be used as a scaffold for spatial learning in terms of vector navigation.
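The two neural building blocks named in this abstract, leaky integrators that sustain the compass/odometry signals and a local-excitation/global-inhibition interaction on a ring of heading cells, can be sketched as follows. This is an illustrative toy, assuming NumPy; the kernel width, leak rate, and inhibition strength are invented parameters, not values from the paper.

```python
# Sketch (not the authors' implementation) of a leaky integrator plus a
# local-excitation / global-inhibition step that sharpens activity on a
# circular array of heading cells.
import numpy as np

def leaky_integrate(prev, inp, leak=0.05):
    """Sustain input over time; `leak` controls how quickly activity decays."""
    return (1.0 - leak) * prev + inp

def local_excitation_global_inhibition(activity, sigma=1.5, inhibition=0.4):
    """Each cell excites its ring neighbours (Gaussian kernel) while all cells
    share a uniform inhibition, yielding a single stable activity bump."""
    n = activity.size
    idx = np.arange(n)
    # circular distance between cells i and j
    dist = np.minimum(np.abs(idx[:, None] - idx[None, :]),
                      n - np.abs(idx[:, None] - idx[None, :]))
    w = np.exp(-(dist ** 2) / (2 * sigma ** 2))        # local excitation
    out = w @ activity - inhibition * activity.sum()    # global inhibition
    return np.maximum(out, 0.0)                         # rectified firing rates

# Toy run: a noisy bump around cell 6 of an 18-cell ring gets cleaned up
rng = np.random.default_rng(1)
act = np.exp(-((np.arange(18) - 6) ** 2) / 4.0) + 0.2 * rng.random(18)
for _ in range(5):
    act = leaky_integrate(act, 0.1 * local_excitation_global_inhibition(act))
print(np.round(act, 2))
```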


bioRxiv | 2016

Neural mechanisms for reward-modulated vector learning and navigation: from social insects to embodied agents

Dennis Goldschmidt; Poramate Manoonpong; Sakyasingha Dasgupta

Despite their small size, insect brains are able to produce robust and efficient navigation in complex environments. Specifically in social insects, such as ants and bees, these navigational capabilities are guided by orientation-directing vectors generated by a process called path integration. During this process, they integrate compass and odometric cues to estimate their current location as a vector, called the home vector, which guides them back home on a straight path. They further acquire and retrieve path integration-based vector memories anchored globally to the nest or to visual landmarks. Although existing computational models reproduced similar behaviors, they largely neglected evidence for possible neural substrates underlying the generated behavior. Therefore, we present here a model of neural mechanisms in a modular closed-loop control enabling vector navigation in embodied agents. The model consists of a path integration mechanism, reward-modulated global and local vector learning, random search, and action selection. The path integration mechanism integrates compass and odometric cues to compute a vectorial representation of the agent's current location as neural activity patterns in circular arrays. A reward-modulated learning rule enables the acquisition of vector memories by associating the local food reward with the path integration state. A motor output is computed based on the combination of vector memories and random exploration. In simulation, we show that the neural mechanisms enable robust homing and localization, even in the presence of external sensory noise. The proposed learning rules lead to goal-directed navigation and route formation under realistic conditions. This provides an explanation for how view-based navigational strategies are guided by path integration. Consequently, we provide a novel approach for vector learning and navigation in a simulated embodied agent, linking behavioral observations to their possible underlying neural substrates.

Author Summary: Desert ants survive under harsh conditions by foraging for food at temperatures over 60 °C. In this extreme environment, they cannot, like other ants, use pheromones to track their long-distance journeys back to their nests. Instead, they apply a computation called path integration, which involves integrating skylight compass and odometric stimuli to estimate their current position. Path integration is not only used to return safely to their nests, but also helps in learning so-called vector memories. Such memories are sufficient to produce goal-directed and landmark-guided navigation in social insects. How can small insect brains generate such complex behaviors? Computational models are often useful for studying behaviors and their underlying control mechanisms. Here we present a novel computational framework for the acquisition and expression of vector memories based on path integration. It consists of multiple neural networks and a reward-based learning rule, where vectors are represented by the activity patterns of circular arrays. Our model not only reproduces goal-directed navigation and route formation in a simulated agent, but also offers predictions about neural implementations. Taken together, we believe that it demonstrates the first complete model of vector-guided navigation linking observed behaviors of navigating social insects to their possible underlying neural mechanisms.
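The reward-modulated acquisition of vector memories described here (associating the local food reward with the path integration state) can be illustrated with a simple reward-gated update. The rule in the paper may differ; the sketch below assumes a plain reward-gated delta rule with hypothetical parameter values.

```python
# Hedged sketch of reward-modulated vector-memory learning: the memory
# weights move toward the current path-integration (PI) state whenever a
# food reward is received.
import numpy as np

def update_vector_memory(weights, pi_state, reward, lr=0.1):
    """Move the stored vector memory toward the current PI state when rewarded.

    weights : (N,) vector memory over the N cells of the circular PI array
    pi_state: (N,) current path-integration activity pattern
    reward  : scalar, nonzero only when food is encountered
    """
    return weights + lr * reward * (pi_state - weights)

# Toy usage: repeated rewards at one location make the memory converge to the
# PI activity pattern recorded at the food site.
pi_at_food = np.exp(-((np.arange(18) - 11) ** 2) / 3.0)
memory = np.zeros(18)
for _ in range(30):
    memory = update_vector_memory(memory, pi_at_food, reward=1.0)
print(np.allclose(memory, pi_at_food, atol=0.05))   # True after convergence
```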


National Conference on Artificial Intelligence | 2017

Nonlinear Dynamic Boltzmann Machines for Time-Series Prediction

Sakyasingha Dasgupta; Takayuki Osogami


arXiv: Robotics | 2017

Transfer learning from synthetic to real images using variational autoencoders for robotic applications

Tadanobu Inoue; Subhajit Chaudhury; Giovanni De Magistris; Sakyasingha Dasgupta


IBM Journal of Research and Development | 2017

Learning the values of the hyperparameters of a dynamic Boltzmann machine

Takayuki Osogami; Sakyasingha Dasgupta


International Conference on Image Processing | 2018

Transfer Learning from Synthetic to Real Images Using Variational Autoencoders for Precise Position Detection

Tadanobu Inoue; Subhajit Chaudhury; Giovanni De Magistris; Sakyasingha Dasgupta


arXiv: Learning | 2018

Internal Model from Observations for Reward Shaping

Daiki Kimura; Subhajit Chaudhury; Ryuki Tachibana; Sakyasingha Dasgupta


arXiv: Learning | 2017

Conditional generation of multi-modal data using constrained embedding space mapping

Subhajit Chaudhury; Sakyasingha Dasgupta; Asim Munawar; Md. A. Salam Khan; Ryuki Tachibana
