Featured Research

Neurons And Cognition

Gibbs Sampling with People

A core problem in cognitive science and machine learning is to understand how humans derive semantic representations from perceptual objects, such as color from an apple, pleasantness from a musical chord, or seriousness from a face. Markov Chain Monte Carlo with People (MCMCP) is a prominent method for studying such representations, in which participants are presented with binary choice trials constructed such that the decisions follow a Markov Chain Monte Carlo acceptance rule. However, while MCMCP has strong asymptotic properties, its binary choice paradigm generates relatively little information per trial, and its local proposal function makes it slow to explore the parameter space and find the modes of the distribution. Here we therefore generalize MCMCP to a continuous-sampling paradigm, where in each iteration the participant uses a slider to continuously manipulate a single stimulus dimension to optimize a given criterion such as 'pleasantness'. We formulate both methods from a utility-theory perspective, and show that the new method can be interpreted as 'Gibbs Sampling with People' (GSP). Further, we introduce an aggregation parameter to the transition step, and show that this parameter can be manipulated to flexibly shift between Gibbs sampling and deterministic optimization. In an initial study, we show GSP clearly outperforming MCMCP; we then show that GSP provides novel and interpretable results in three other domains, namely musical chords, vocal emotions, and faces. We validate these results through large-scale perceptual rating experiments. The final experiments use GSP to navigate the latent space of a state-of-the-art image synthesis network (StyleGAN), a promising approach for applying GSP to high-dimensional perceptual spaces. We conclude by discussing future cognitive applications and ethical implications.
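As a toy illustration (not the authors' implementation), the GSP loop can be simulated with a synthetic participant whose slider response on the exposed dimension is their hidden ideal value plus response noise. The `n_aggregate` parameter below stands in for the paper's aggregation parameter: averaging more responses per trial pushes the chain from Gibbs sampling toward deterministic optimization. The utility model and all names are assumptions:

```python
import numpy as np

def gsp_chain(ideal, n_iters=200, n_dims=2, noise=0.3, n_aggregate=1, seed=0):
    """Simulate Gibbs Sampling with People: each iteration exposes one
    stimulus dimension on a 'slider', and a simulated participant sets it
    near their ideal point with Gaussian response noise.  Averaging
    n_aggregate responses per trial mimics the aggregation parameter:
    large values approach deterministic optimization."""
    rng = np.random.default_rng(seed)
    x = rng.uniform(-1, 1, size=n_dims)        # current stimulus
    samples = []
    for t in range(n_iters):
        d = t % n_dims                          # cycle through dimensions
        # simulated slider responses: ideal value corrupted by noise
        responses = ideal[d] + noise * rng.standard_normal(n_aggregate)
        x[d] = responses.mean()                 # aggregate, then transition
        samples.append(x.copy())
    return np.array(samples)

ideal = np.array([0.5, -0.2])                   # hypothetical 'ideal' stimulus
chain = gsp_chain(ideal, n_iters=1000)
print(chain[200:].mean(axis=0))                 # ≈ [0.5, -0.2] after burn-in
```

Because every update resamples a full coordinate from the conditional, each trial moves the chain much farther than a local binary-choice proposal would, which is the intuition behind GSP's faster mode-finding.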


Go with the FLOW: Visualizing spatiotemporal dynamics in optical widefield calcium imaging

Widefield calcium imaging has recently emerged as a powerful experimental technique to record coordinated large-scale brain activity. These measurements present a unique opportunity to characterize spatiotemporal coherent structures that underlie neural activity across many regions of the brain. In this work, we leverage analytic techniques from fluid dynamics to develop a visualization framework that highlights features of flow across the cortex, mapping wave fronts that may be correlated with behavioral events. First, we transform the time series of widefield calcium images into time-varying vector fields using optic flow. Next, we extract concise diagrams summarizing the dynamics, which we refer to as FLOW (flow lines in optical widefield imaging) portraits. These FLOW portraits provide an intuitive map of dynamic calcium activity, including regions of initiation and termination, as well as the direction and extent of activity spread. To extract these structures, we use the finite-time Lyapunov exponent (FTLE) technique developed to analyze time-varying manifolds in unsteady fluids. Importantly, our approach captures coherent structures that are poorly represented by traditional modal decomposition techniques. We demonstrate the application of FLOW portraits on three simple synthetic datasets and two widefield calcium imaging datasets, including cortical waves in the developing mouse and spontaneous cortical activity in an adult mouse.
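The FTLE step behind FLOW portraits can be sketched on a synthetic steady flow: advect a grid of particles, differentiate the numerical flow map, and take the leading eigenvalue of the Cauchy-Green strain tensor. This is a minimal sketch (forward-Euler advection on an invented linear saddle field), not the paper's pipeline:

```python
import numpy as np

def ftle(vel, T=1.0, n=50, dt=0.01, extent=1.0):
    """Finite-time Lyapunov exponent field for a steady 2-D velocity
    function vel(x, y) -> (u, v) on an n x n grid over [-extent, extent]^2.
    Particles are advected with forward Euler; the FTLE comes from the
    Cauchy-Green strain tensor of the numerical flow map."""
    xs = np.linspace(-extent, extent, n)
    X, Y = np.meshgrid(xs, xs)
    Px, Py = X.copy(), Y.copy()
    for _ in range(int(T / dt)):                 # advect the particle grid
        u, v = vel(Px, Py)
        Px, Py = Px + dt * u, Py + dt * v
    dx = xs[1] - xs[0]                           # flow-map gradients
    dPx_dx = np.gradient(Px, dx, axis=1); dPx_dy = np.gradient(Px, dx, axis=0)
    dPy_dx = np.gradient(Py, dx, axis=1); dPy_dy = np.gradient(Py, dx, axis=0)
    out = np.zeros_like(X)
    for i in range(n):
        for j in range(n):
            F = np.array([[dPx_dx[i, j], dPx_dy[i, j]],
                          [dPy_dx[i, j], dPy_dy[i, j]]])
            C = F.T @ F                           # Cauchy-Green tensor
            out[i, j] = np.log(np.linalg.eigvalsh(C)[-1]) / (2 * T)
    return out

saddle = lambda x, y: (x, -y)                     # stretching along x only
field = ftle(saddle)
print(field.mean())                               # ≈ 1.0 for this linear flow
```

Ridges of the resulting field mark material barriers to transport, which is what FLOW portraits use to delineate regions of initiation, termination, and spread.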


Going in circles is the way forward: the role of recurrence in visual inference

Biological visual systems exhibit abundant recurrent connectivity. State-of-the-art neural network models for visual recognition, by contrast, rely heavily or exclusively on feedforward computation. Any finite-time recurrent neural network (RNN) can be unrolled along time to yield an equivalent feedforward neural network (FNN). This important insight suggests that computational neuroscientists may not need to engage recurrent computation, and that computer-vision engineers may be limiting themselves to a special case of FNN if they build recurrent models. Here we argue, to the contrary, that FNNs are a special case of RNNs and that computational neuroscientists and engineers should engage recurrence to understand how brains and machines can (1) achieve greater and more flexible computational depth, (2) compress complex computations into limited hardware, (3) integrate priors and priorities into visual inference through expectation and attention, (4) exploit sequential dependencies in their data for better inference and prediction, and (5) leverage the power of iterative computation.
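The unrolling argument is easy to make concrete: a finite-time RNN and the feedforward network obtained by copying its step once per time step (with tied weights) compute exactly the same function. A minimal numpy sketch, with hypothetical weight shapes:

```python
import numpy as np

rng = np.random.default_rng(0)
W_in = rng.standard_normal((4, 3))    # input weights (shared every step)
W_rec = rng.standard_normal((4, 4))   # recurrent weights (shared every step)

def rnn(xs):
    """Recurrent net: one weight pair reused at every time step."""
    h = np.zeros(4)
    for x in xs:
        h = np.tanh(W_in @ x + W_rec @ h)
    return h

def unrolled_fnn(xs, T=5):
    """The same computation written as a T-layer feedforward net: each
    'layer' is a copy of the recurrent step with tied weights."""
    layers = [(W_in, W_rec)] * T          # weight sharing across depth
    h = np.zeros(4)
    for (Wi, Wr), x in zip(layers, xs):
        h = np.tanh(Wi @ x + Wr @ h)
    return h

xs = rng.standard_normal((5, 3))          # a length-5 input sequence
print(np.allclose(rnn(xs), unrolled_fnn(xs)))   # True
```

The catch the abstract highlights is visible here: the unrolled network needs T copies of the step to reach depth T, whereas the RNN reuses one block, which is the hardware-compression argument for recurrence.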


Graph Convolutional Networks Reveal Neural Connections Encoding Prosthetic Sensation

Extracting stimulus features from neuronal ensembles is of great interest to the development of neuroprosthetics that project sensory information directly to the brain via electrical stimulation. Machine learning strategies that optimize stimulation parameters as the subject learns to interpret the artificial input could improve device efficacy, increase prosthetic performance, ensure stability of evoked sensations, and improve power consumption by eliminating extraneous input. Recent advances extending deep learning techniques to non-Euclidean graph data provide a novel approach to interpreting neuronal spiking activity. In this study, we apply graph convolutional networks (GCNs) to infer the underlying functional relationship between neurons that are involved in the processing of artificial sensory information. Data were collected from a freely behaving rat using an ICMS-based neuroprosthesis with four infrared (IR) sensors to localize IR light sources. We use GCNs to predict the stimulation frequency across four stimulating channels in the prosthesis, which encode relative distance and directional information to an IR-emitting reward port. Our GCN model achieves a peak performance of 73.5% on a modified ordinal regression performance metric in a multiclass classification problem consisting of 7 classes, where chance is 14.3%. Additionally, the inferred adjacency matrix provides an adequate representation of the underlying neural circuitry encoding the artificial sensation.
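A single graph-convolution layer of the kind GCNs stack can be sketched in a few lines. This follows the common symmetrically normalized form H' = ReLU(D^(-1/2)(A+I)D^(-1/2) H W) on a toy channel graph; it is an illustration of the layer type, not the study's actual architecture, and the graph and weights below are invented:

```python
import numpy as np

def gcn_layer(A, H, W):
    """One graph-convolution layer: add self-loops, symmetrically
    normalize the adjacency, then mix each node's features with its
    neighbours' before the linear map and ReLU."""
    A_hat = A + np.eye(A.shape[0])               # self-loops
    d = A_hat.sum(axis=1)
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))
    return np.maximum(D_inv_sqrt @ A_hat @ D_inv_sqrt @ H @ W, 0.0)

# toy 'functional connectivity' graph over 4 recording channels
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
H = np.eye(4)                                    # one-hot channel features
W = np.full((4, 2), 0.5)                         # hypothetical weights
out = gcn_layer(A, H, W)
print(out.shape)                                 # (4, 2)
```

Because the layer's output depends on A, the adjacency itself can be treated as a learnable quantity, which is how an inferred adjacency matrix can come to reflect functional circuitry.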


Great expectations in music: violation of rhythmic expectancies elicits late frontal gamma activity nested in theta oscillations

Rhythm processing involves building expectations according to the hierarchical temporal structure of auditory events. Although rhythm processing has been addressed in the context of predictive coding, the properties of the oscillatory response in different cortical areas are still not clear. We explored the oscillatory properties of the neural response to rhythmic incongruence and examined the cross-frequency coupling between multiple frequency bands to link the concepts of predictive coding and rhythm perception. We designed an experiment to investigate the neural response to rhythmic deviations in which a tone either arrived earlier than expected or the tone in the same metrical position was omitted. These two manipulations modulate the rhythmic structure differently, with the former creating a larger violation of the general structure of the musical stimulus than the latter. Both deviations resulted in an MMN response, whereas only the rhythmic deviant resulted in a subsequent P3a. Rhythmic deviants due to the early occurrence of a tone, but not omission deviants, elicited a late high-gamma response (60-80 Hz) at the end of the P3a over the left frontal region, which, interestingly, correlated with the P3a amplitude over the same region and was nested in theta oscillations. The timing of the elicited high-frequency gamma oscillations suggests that they might be related to the update of the predictive neural model, corresponding to the temporal structure of the events in higher-level cortical areas.
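The nesting of gamma amplitude in theta phase is typically quantified with a phase-amplitude coupling index. Below is a sketch using the mean-vector-length (Canolty-style) modulation index on synthetic data; the bands and the index choice are illustrative assumptions, not necessarily the authors' exact analysis:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def bandpass(x, lo, hi, fs, order=4):
    b, a = butter(order, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

def modulation_index(x, fs, phase_band=(4, 8), amp_band=(60, 80)):
    """Mean-vector-length index of phase-amplitude coupling: how strongly
    high-gamma amplitude is locked to theta phase."""
    phase = np.angle(hilbert(bandpass(x, *phase_band, fs)))
    amp = np.abs(hilbert(bandpass(x, *amp_band, fs)))
    return np.abs(np.mean(amp * np.exp(1j * phase)))

fs = 500
t = np.arange(0, 20, 1 / fs)
theta = np.sin(2 * np.pi * 6 * t)
# gamma bursts whose amplitude rides the theta cycle -> nested oscillation
coupled = theta + (1 + theta) * 0.3 * np.sin(2 * np.pi * 70 * t)
uncoupled = theta + 0.3 * np.sin(2 * np.pi * 70 * t)
print(modulation_index(coupled, fs) > modulation_index(uncoupled, fs))  # True
```

A nonzero index on the coupled signal but not the uncoupled one is the signature of gamma "nested in" theta that the abstract describes.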


Grid Cells Are Ubiquitous in Neural Networks

Grid cells are believed to play an important role in both spatial and non-spatial cognition tasks. A recent study observed the emergence of grid cells in an LSTM trained for path integration. The connection between biological and artificial neural networks underlying this seeming similarity, as well as the application domain of grid cells in deep neural networks (DNNs), awaits further exploration. This work demonstrates that grid cells can be replicated in both purely vision-based and vision-guided path-integration DNNs for navigation, under a proper setting of training parameters. We also show that grid-like behaviors arise in feedforward DNNs trained on non-spatial tasks. Our findings support the view that grid coding is an effective representation for both biological and artificial networks.
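Grid-like tuning in network units is usually visualized with spatial rate maps: bin a trajectory and average the unit's activity per bin. A sketch with a hypothetical periodically tuned unit (the tuning function and trajectory are invented for illustration, and real grid analyses go on to score hexagonal symmetry):

```python
import numpy as np

def rate_map(positions, activity, n_bins=20):
    """Average a unit's activity in spatial bins -- the standard way
    grid-like firing is visualized for both biological and artificial
    units."""
    H_act, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                 bins=n_bins, weights=activity)
    H_occ, _, _ = np.histogram2d(positions[:, 0], positions[:, 1],
                                 bins=n_bins)
    return H_act / np.maximum(H_occ, 1)          # guard empty bins

rng = np.random.default_rng(0)
pos = rng.uniform(0, 1, size=(20000, 2))         # simulated trajectory
# hypothetical unit with periodic tuning along both spatial axes
act = np.cos(2 * np.pi * 3 * pos[:, 0]) * np.cos(2 * np.pi * 3 * pos[:, 1])
m = rate_map(pos, act)
print(m.shape)                                   # (20, 20)
```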


GuessTheMusic: Song Identification from Electroencephalography response

The music signal comprises different features such as rhythm, timbre, melody, and harmony. Its impact on the human brain has been an exciting research topic for the past several decades. Electroencephalography (EEG) enables non-invasive measurement of brain activity. Leveraging recent advancements in deep learning, we propose a novel approach for song identification using a convolutional neural network (CNN) given electroencephalography (EEG) responses. We recorded EEG signals from a group of 20 participants while they listened to a set of 12 song clips, each approximately 2 minutes long, presented in random order. The repeating nature of music is captured by a data-slicing approach that treats brain signals of 1-second duration as representative of each song clip. More specifically, we predict the song corresponding to one second of EEG data given as input, rather than a complete two-minute response. We also discuss pre-processing steps to handle the large dimensions of the dataset and various CNN architectures. For all experiments, we include each participant's EEG response for each song in both the train and test data. We obtained 84.96% accuracy in this setting. The observed performance supports the notion that listening to a song creates specific patterns in the brain, and that these patterns vary from person to person.
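The 1-second data-slicing step can be sketched directly: cut each (channels × samples) response into non-overlapping windows, each inheriting the song label. The sampling rate, channel count, and random data below are placeholders, not the study's recording parameters:

```python
import numpy as np

def slice_epochs(eeg, fs, win_sec=1.0):
    """Cut a (channels, samples) EEG response into non-overlapping
    windows of win_sec seconds, returned as (n_windows, channels, win)."""
    win = int(fs * win_sec)
    n = eeg.shape[1] // win
    return eeg[:, :n * win].reshape(eeg.shape[0], n, win).transpose(1, 0, 2)

fs, n_channels = 128, 14                     # hypothetical recording setup
songs = {s: np.random.randn(n_channels, fs * 120) for s in range(12)}
X, y = [], []
for song_id, eeg in songs.items():
    epochs = slice_epochs(eeg, fs)           # 120 windows per 2-min clip
    X.append(epochs)
    y.extend([song_id] * len(epochs))
X = np.concatenate(X)
print(X.shape, len(y))                       # (1440, 14, 128) 1440
```

Each 1-second window becomes one training example for the classifier, multiplying the dataset size by two orders of magnitude relative to whole-clip labels.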


Habit learning supported by efficiently controlled network dynamics in naive macaque monkeys

Primates display a marked ability to learn habits in uncertain and dynamic environments. The associated perceptions and actions of such habits engage distributed neural circuits. Yet, precisely how such circuits support the computations necessary for habit learning remains far from understood. Here we construct a formal theory of network energetics to account for how changes in brain state produce changes in sequential behavior. We exercise the theory in the context of multi-unit recordings spanning the caudate nucleus, prefrontal cortex, and frontal eye fields of female macaque monkeys engaged in 60-180 sessions of a free-scan task that induces motor habits. The theory relies on the determination of effective connectivity between recording channels, and on the stipulation that a brain state is the trial-specific firing rate across those channels. The theory then predicts how much energy is required to transition from one state to another, given the constraint that activity can spread solely through effective connections. Consistent with the theory's predictions, we observed smaller energy requirements for transitions between more similar and more complex trial saccade patterns, and for sessions characterized by less entropic selection of saccade patterns. Using a virtual lesioning approach, we demonstrate that the observed relationships between minimum control energy and behavior are resilient to significant disruptions in the inferred effective connectivity. Our theoretically principled approach to the study of habit learning paves the way for future efforts examining how behavior arises from changing patterns of activity in distributed neural circuitry.
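In network control theory, the minimum input energy for steering a linear system x' = Ax + Bu between states over a fixed horizon is computed from the finite-horizon controllability Gramian. A sketch on a toy two-node "effective connectivity" matrix (the matrices here are invented; the study's networks are inferred from multi-unit recordings, and the trapezoid discretization is a simplification):

```python
import numpy as np
from scipy.linalg import expm

def min_control_energy(A, B, x0, xf, T=1.0, n_steps=500):
    """Minimum input energy to steer x' = Ax + Bu from x0 to xf in time T,
    via the Gramian W_T = integral_0^T e^{At} B B' e^{A't} dt
    (trapezoid-discretized): E = d' W_T^{-1} d with d = xf - e^{AT} x0."""
    dt = T / n_steps
    W = np.zeros_like(A)
    for k in range(n_steps + 1):
        E = expm(A * k * dt)
        w = 0.5 if k in (0, n_steps) else 1.0    # trapezoid weights
        W += w * (E @ B @ B.T @ E.T) * dt
    diff = xf - expm(A * T) @ x0
    return float(diff @ np.linalg.solve(W, diff))

A = np.array([[-1.0, 0.2], [0.1, -1.0]])         # toy effective connectivity
B = np.eye(2)                                    # control input at every node
x0 = np.zeros(2)
near, far = np.array([0.1, 0.0]), np.array([1.0, 0.0])
print(min_control_energy(A, B, x0, near) < min_control_energy(A, B, x0, far))  # True
```

The qualitative prediction tested in the paper follows the same shape: transitions to nearby states cost less energy than transitions to distant ones, under the constraint that influence spreads only along effective connections.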


Hierarchical emotion-recognition framework based on discriminative brain neural network topology and ensemble co-decision strategy

Brain neural networks exhibit distinct information-propagation patterns for different emotional states. However, statistical features based on traditional graph theory may overlook spatial network differences. To reveal these inherent spatial features and increase the stability of emotion recognition, we propose a hierarchical framework that performs multiple-emotion recognition using multiple emotion-related spatial network topology patterns (MESNP), combining supervised learning with an ensemble co-decision strategy. To evaluate the performance of the proposed MESNP approach, we conducted both off-line and simulated on-line experiments with two public datasets, MAHNOB and DEAP. The results demonstrate that MESNP significantly enhances classification performance for multiple emotions. The highest off-line accuracies reached 99.93% (3 classes) on MAHNOB-HCI and 83.66% (4 classes) on DEAP. In the simulated on-line experiments, the proposed MESNP achieved the best classification accuracies of 100% (3 classes) on MAHNOB and 99.22% (4 classes) on DEAP. These results further demonstrate the efficiency of MESNP for structured feature extraction in multi-class emotion tasks.
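An ensemble co-decision step can be as simple as majority voting over base classifiers. A sketch with three hypothetical classifiers (the framework's actual co-decision strategy may weight ensemble members differently; the labels below are invented):

```python
import numpy as np

def co_decision(predictions):
    """Ensemble co-decision by majority vote: each row of `predictions`
    holds one base classifier's labels for the same trials; ties go to
    the smallest label."""
    preds = np.asarray(predictions)
    n_classes = preds.max() + 1
    # per-trial vote counts, shape (n_classes, n_trials)
    votes = np.apply_along_axis(np.bincount, 0, preds, minlength=n_classes)
    return votes.argmax(axis=0)

# three hypothetical classifiers, e.g. one per MESNP feature set
clf_a = [0, 1, 2, 2]
clf_b = [0, 1, 1, 2]
clf_c = [0, 2, 2, 2]
print(co_decision([clf_a, clf_b, clf_c]))        # [0 1 2 2]
```

Voting stabilizes the final label against any single classifier's errors, which is the "increase the stability" role the framework assigns to co-decision.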


Hippocampal representations emerge when training recurrent neural networks on a memory dependent maze navigation task

Can neural networks learn goal-directed behaviour using strategies similar to the brain's, by combining knowledge of the organism's current state with the consequences of future actions? Recent work has shown that recurrent neural networks trained on goal-based tasks can develop representations resembling those found in the brain, such as entorhinal cortex grid cells. Here we explore the evolution of the dynamics of their internal representations and compare this with experimental data. We observe that once a recurrent network is trained to learn the structure of its environment solely based on sensory prediction, an attractor-based landscape forms in the network's representation, which parallels hippocampal place cells in structure and function. Next, we extend the predictive objective to include Q-learning for a reward task, where rewarding actions depend on delayed cue modulation. Mirroring experimental findings from hippocampal recordings in rodents performing the same task, this training paradigm causes nonlocal neural activity to sweep forward in space at decision points, anticipating the future path to a rewarded location. Moreover, prevalent choice- and cue-selective neurons form in this network, again recapitulating experimental findings. Together, these results indicate that combining predictive, unsupervised learning of the structure of an environment with reinforcement learning can help us understand the formation of hippocampus-like representations containing both spatial and task-relevant information.
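The reinforcement-learning component can be illustrated with tabular Q-learning on a toy linear track, where the temporal-difference update propagates reward-predictive value backwards along the path toward the rewarded location. The environment and parameters are invented for illustration and are far simpler than the cue-modulated maze task in the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n_states, n_actions = 6, 2                   # linear track; move left/right
Q = np.zeros((n_states, n_actions))
alpha, gamma = 0.5, 0.9

def step(s, a):
    """Move along the track; reward only at the rightmost state."""
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, float(s2 == n_states - 1)

for episode in range(500):
    s = 0
    for _ in range(20):
        a = int(rng.integers(n_actions))     # random behaviour (off-policy)
        s2, r = step(s, a)
        # temporal-difference update toward reward-predictive values
        Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
        s = s2
print(Q.argmax(axis=1))                      # right action preferred everywhere
```

After training, the greedy policy points toward the reward from every state: value has swept backwards from the goal, a tabular analogue of the forward-sweeping, reward-anticipating activity described in the abstract.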
