Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Ivilin Stoianov is active.

Publication


Featured research published by Ivilin Stoianov.


Proceedings of the National Academy of Sciences of the United States of America | 2012

Functional organization of the insula and inner perisylvian regions

Ahmad Jezzini; Fausto Caruana; Ivilin Stoianov; Vittorio Gallese; Giacomo Rizzolatti

In the last few years, the insula has been the focus of many brain-imaging studies, mostly devoted to clarifying its role in emotions and social communication. However, physiological data on which one might ground these correlative findings are almost totally lacking. Here, we investigated the functional properties of the insular cortex in behaving monkeys using intracortical microstimulation. Behavioral responses and heart rate changes were recorded. The results showed that the insula is functionally formed by two main subdivisions: (i) a sensorimotor field occupying the caudal–dorsal portion of the insula and appearing as an extension of the parietal lobe; and (ii) a mosaic of orofacial motor programs located in the anterior and centroventral sectors of the insula. These programs show a progressive shift from dorsally located nonemotional motor programs (ingestive activity) to ventral ones laden with emotional and communicative content. The relationship between ingestive and other behaviors is discussed from an evolutionary perspective.


Frontiers in Psychology | 2013

Modeling language and cognition with deep unsupervised learning: a tutorial overview

Marco Zorzi; Alberto Testolin; Ivilin Stoianov

Deep unsupervised learning in stochastic recurrent neural networks with many layers of hidden units is a recent breakthrough in neural computation research. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. In this article we discuss the theoretical foundations of this approach and we review key issues related to training, testing and analysis of deep networks for modeling language and cognitive processing. The classic letter and word perception problem of McClelland and Rumelhart (1981) is used as a tutorial example to illustrate how structured and abstract representations may emerge from deep generative learning. We argue that the focus on deep architectures and generative (rather than discriminative) learning represents a crucial step forward for the connectionist modeling enterprise, because it offers a more plausible model of cortical learning as well as a way to bridge the gap between emergentist connectionist models and structured Bayesian models of cognition.
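
To make the training scheme concrete, here is a minimal sketch of a single restricted Boltzmann machine layer trained with one-step contrastive divergence (CD-1), the generic building block of the deep networks the tutorial discusses. The toy data and layer sizes are illustrative assumptions; stacking such layers, each trained on the hidden activities of the one below, yields the deep generative hierarchy described above.

```python
# Minimal RBM layer trained with CD-1 (a sketch; sizes and data are toy assumptions).
import numpy as np

rng = np.random.default_rng(0)

n_visible, n_hidden = 16, 8              # illustrative layer sizes
W = rng.normal(0, 0.01, (n_visible, n_hidden))
b_v = np.zeros(n_visible)                # visible biases
b_h = np.zeros(n_hidden)                 # hidden biases

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

# Toy binary "sensory" data standing in for real input patterns.
data = (rng.random((500, n_visible)) < 0.3).astype(float)

lr = 0.1
for epoch in range(20):
    for v0 in data:
        # Upward pass: infer the hidden representation from the data.
        p_h0 = sigmoid(v0 @ W + b_h)
        h0 = (rng.random(n_hidden) < p_h0).astype(float)
        # Downward pass: reconstruct the input from the hidden sample.
        p_v1 = sigmoid(h0 @ W.T + b_v)
        p_h1 = sigmoid(p_v1 @ W + b_h)
        # CD-1 update: move model statistics toward data statistics.
        W += lr * (np.outer(v0, p_h0) - np.outer(p_v1, p_h1))
        b_v += lr * (v0 - p_v1)
        b_h += lr * (p_h0 - p_h1)
```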


Journal of Cognitive Neuroscience | 2016

Prefrontal goal codes emerge as latent states in probabilistic value learning

Ivilin Stoianov; Aldo Genovesio; Giovanni Pezzulo

The prefrontal cortex (PFC) supports goal-directed actions and exerts cognitive control over behavior, but the underlying coding and mechanism are heavily debated. We present evidence for the role of goal coding in PFC from two converging perspectives: computational modeling and neuronal-level analysis of monkey data. We show that neural representations of prospective goals emerge by combining a categorization process that extracts relevant behavioral abstractions from the input data and a reward-driven process that selects candidate categories depending on their adaptive value; both forms of learning have a plausible neural implementation in PFC. Our analyses demonstrate a fundamental principle: goal coding represents an efficient solution to cognitive control problems, analogous to efficient coding principles in other (e.g., visual) brain areas. The novel analytical–computational approach is of general interest because it applies to a variety of neurophysiological studies.
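
A hedged toy sketch, not the paper's actual model, of the two interacting processes described above: an unsupervised categorization that extracts abstractions from the input, and a reward-driven process that selects among the learned categories according to their value. All sizes, learning rates, and the reward rule are illustrative assumptions.

```python
# Toy combination of competitive categorization and reward-driven selection.
import numpy as np

rng = np.random.default_rng(1)

n_categories, n_features = 4, 10
prototypes = rng.normal(size=(n_categories, n_features))  # category prototypes
values = np.zeros(n_categories)                           # learned category values
alpha_cat, alpha_val, temp = 0.05, 0.1, 0.5

for trial in range(1000):
    x = rng.normal(size=n_features)                  # observation on this trial
    # Categorization: pull the closest prototype toward the input.
    k = np.argmin(((prototypes - x) ** 2).sum(axis=1))
    prototypes[k] += alpha_cat * (x - prototypes[k])
    # Selection: softmax over learned values picks a candidate category (goal).
    p = np.exp(values / temp)
    p /= p.sum()
    choice = rng.choice(n_categories, p=p)
    # Reward-driven update; the rule (reward only for category 0) is arbitrary.
    reward = 1.0 if choice == 0 else 0.0
    values[choice] += alpha_val * (reward - values[choice])
```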


Psychonomic Bulletin & Review | 2011

Interactions between perceptual and numerical space

Peter Kramer; Ivilin Stoianov; Carlo Umiltà; Marco Zorzi

Interactions between numbers and space have become a major issue in numerical cognition. Neuropsychological studies suggest that these interactions occur, before response selection, at a spatially organized representation of numbers (the mental number line). Reaction time (RT) studies, on the other hand, usually point to associations between response codes that do not necessarily imply a number line. Only one such study has found a spatial–numerical interaction between perception and semantics (SNIPS) effect before response selection. Here, in Experiment 1, we isolated the SNIPS effect from other numerical effects and corroborated the prediction that it can be induced by both left and right spatial cues. In Experiment 2, we isolated the peak of the time course of the SNIPS effect and corroborated the prediction that it occurs when a cue follows a target, and not when both appear simultaneously. The results reconcile the neuropsychological and RT studies and support the hypothesis that numbers are represented along a left-to-right spatially organized mental number line.


Frontiers in Psychology | 2013

Deep Unsupervised Learning on a Desktop PC: A Primer for Cognitive Scientists

Alberto Testolin; Ivilin Stoianov; Michele De Filippo De Grazia; Marco Zorzi

Deep belief networks hold great promise for the simulation of human cognition because they show how structured and abstract representations may emerge from probabilistic unsupervised learning. These networks build a hierarchy of progressively more complex distributed representations of the sensory data by fitting a hierarchical generative model. However, learning in deep networks typically requires big datasets and can involve millions of connection weights, which implies that simulations on standard computers are unfeasible. Developing realistic, medium-to-large-scale learning models of cognition would therefore seem to require expertise in programming parallel-computing hardware, and this might explain why the use of this promising approach is still largely confined to the machine learning community. Here we show how simulations of deep unsupervised learning can be easily performed on a desktop PC by exploiting the processors of low-cost graphics cards (graphics processing units, GPUs) without any specific programming effort, thanks to the use of high-level programming routines (available in MATLAB or Python). We also show that even an entry-level graphics card can outperform a small high-performance computing cluster in terms of learning time, with no loss of learning quality. We therefore conclude that graphics card implementations pave the way for a widespread use of deep learning among cognitive scientists for modeling cognition and behavior.
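
The paper's practical point, that GPU speedups come from high-level matrix routines with no specific programming effort, can be illustrated with a modern Python sketch. PyTorch is an assumption here for convenience; the paper itself predates it and relied on the MATLAB and Python GPU libraries of the time.

```python
# RBM-style batched positive phase: the same one-line matrix code runs on CPU or GPU.
import time
import torch

device = "cuda" if torch.cuda.is_available() else "cpu"

v = torch.rand(4096, 1024, device=device)        # batch of visible vectors
W = torch.randn(1024, 512, device=device) * 0.01 # weight matrix
b = torch.zeros(512, device=device)              # hidden biases

t0 = time.perf_counter()
for _ in range(100):
    p_h = torch.sigmoid(v @ W + b)               # identical code on CPU and GPU
if device == "cuda":
    torch.cuda.synchronize()                     # wait for GPU work before timing
print(f"100 batched passes on {device}: {time.perf_counter() - t0:.3f}s")
```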


Cognitive Science | 2016

Learning Orthographic Structure With Sequential Generative Neural Networks

Alberto Testolin; Ivilin Stoianov; Alessandro Sperduti; Marco Zorzi

Learning the structure of event sequences is a ubiquitous problem in cognition and particularly in language. One possible solution is to learn a probabilistic generative model of sequences that allows making predictions about upcoming events. Though appealing from a neurobiological standpoint, this approach is typically not pursued in connectionist modeling. Here, we investigated a sequential version of the restricted Boltzmann machine (RBM), a stochastic recurrent neural network that extracts high-order structure from sensory data through unsupervised generative learning and can encode contextual information in the form of internal, distributed representations. We assessed whether this type of network can extract the orthographic structure of English monosyllables by learning a generative model of the letter sequences forming a word training corpus. We show that the network learned an accurate probabilistic model of English graphotactics, which can be used to make predictions about the letter following a given context as well as to autonomously generate high-quality pseudowords. The model was compared to an extended version of simple recurrent networks, augmented with a stochastic process that allows autonomous generation of sequences, and to non-connectionist probabilistic models (n-grams and hidden Markov models). We conclude that sequential RBMs and stochastic simple recurrent networks are promising candidates for modeling cognition in the temporal domain.
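
Of the models compared, the simplest baseline is easy to sketch: a letter bigram model of graphotactics that scores the letter following a given context and can autonomously generate pseudowords. The tiny corpus below is an illustrative stand-in for the English monosyllable corpus used in the paper.

```python
# Letter bigram model: estimate transition probabilities, then sample pseudowords.
from collections import defaultdict
import random

corpus = ["cat", "can", "cap", "man", "map", "mat", "tan", "tap"]  # toy corpus

# Count letter transitions, using '^' and '$' as word start/end markers.
counts = defaultdict(lambda: defaultdict(int))
for word in corpus:
    seq = "^" + word + "$"
    for a, b in zip(seq, seq[1:]):
        counts[a][b] += 1

def next_letter_probs(context):
    """P(next letter | previous letter), by maximum likelihood."""
    total = sum(counts[context].values())
    return {c: n / total for c, n in counts[context].items()}

def generate_pseudoword(rng=random.Random(0)):
    """Sample letters until the end marker, yielding a pseudoword."""
    out, ch = [], "^"
    while True:
        probs = next_letter_probs(ch)
        ch = rng.choices(list(probs), weights=list(probs.values()))[0]
        if ch == "$":
            return "".join(out)
        out.append(ch)

print([generate_pseudoword() for _ in range(5)])
```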


International Conference on Artificial Neural Networks | 2002

Associative Arithmetic with Boltzmann Machines: The Role of Number Representations

Ivilin Stoianov; Marco Zorzi; Suzanna Becker; Carlo Umiltà

This paper presents a study of associative mental arithmetic with mean-field Boltzmann machines. We examined the role of number representations, showing theoretically and experimentally that cardinal number representations (e.g., numerosity) are superior to symbolic and ordinal representations with respect to learnability and cognitive plausibility. Only the network trained on numerosities exhibited the problem-size effect, the core phenomenon in human behavioral studies. These results urge a reevaluation of current cognitive models of mental arithmetic.
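
A hedged illustration of how the three compared codes might look for the number 3 over five input units; the exact encodings used in the paper may differ.

```python
# Three candidate input codes for the number 3 on a field of N = 5 units.
import numpy as np

N, n = 5, 3
rng = np.random.default_rng(0)

symbolic = np.eye(N)[n - 1]                 # [0,0,1,0,0]: one arbitrary unit per numeral
ordinal = (np.arange(N) < n).astype(float)  # [1,1,1,0,0]: magnitude as ordered extent
numerosity = np.zeros(N)                    # cardinal code: any n of N units active,
numerosity[rng.choice(N, size=n, replace=False)] = 1  # position-independent

print(symbolic, ordinal, numerosity, sep="\n")
```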


Nature Human Behaviour | 2017

Letter perception emerges from unsupervised deep learning and recycling of natural image features

Alberto Testolin; Ivilin Stoianov; Marco Zorzi

The use of written symbols is a major achievement of human cultural evolution. However, how abstract letter representations might be learned from vision is still an unsolved problem. Here, we present a large-scale computational model of letter recognition based on deep neural networks, which develops a hierarchy of increasingly more complex internal representations in a completely unsupervised way by fitting a probabilistic, generative model to the visual input. In line with the hypothesis that learning written symbols partially recycles pre-existing neuronal circuits for object recognition, earlier processing levels in the model exploit domain-general visual features learned from natural images, while domain-specific features emerge in upstream neurons following exposure to printed letters. We show that these high-level representations can be easily mapped to letter identities even for noise-degraded images, producing accurate simulations of a broad range of empirical findings on letter perception in human observers. Our model shows that by reusing natural visual primitives, learning written symbols only requires limited, domain-specific tuning, supporting the hypothesis that their shape has been culturally selected to match the statistical structure of natural environments.
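
The mapping step from high-level representations to letter identities can be illustrated with a linear readout trained on frozen features. Everything below is a toy stand-in, with random clusters in place of the model's learned representations, not the paper's actual pipeline.

```python
# Linear readout (least squares onto one-hot targets) over frozen toy features.
import numpy as np

rng = np.random.default_rng(0)
n_letters, n_features, n_per_class = 26, 64, 50

# Toy frozen features: one noisy cluster per letter identity.
centers = rng.normal(size=(n_letters, n_features))
X = np.vstack([c + 0.3 * rng.normal(size=(n_per_class, n_features)) for c in centers])
y = np.repeat(np.arange(n_letters), n_per_class)

T = np.eye(n_letters)[y]                         # one-hot targets
Xb = np.hstack([X, np.ones((X.shape[0], 1))])    # add a bias column
W, *_ = np.linalg.lstsq(Xb, T, rcond=None)       # fit the readout weights
pred = (Xb @ W).argmax(axis=1)
print(f"training accuracy of the readout: {(pred == y).mean():.2f}")
```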


PLOS ONE | 2010

Perinatal asphyxia affects rat auditory processing: implications for auditory perceptual impairments in neurodevelopmental disorders

Fabrizio Strata; Ivilin Stoianov; Etienne de Villers-Sidani; Ben H. Bonham; Tiziana Martone; Tal Kenet; Edward F. Chang; Vincenzo Vincenti; Michael M. Merzenich

Perinatal asphyxia, a naturally and commonly occurring risk factor in birthing, represents one of the major causes of neonatal encephalopathy, with long-term consequences for infants. Here, degraded spectral and temporal responses to sounds were recorded from neurons in the primary auditory cortex (A1) of adult rats exposed to asphyxia at birth. Response onset latencies and durations were increased. Response amplitudes were reduced. Tuning curves were broader. Degraded successive-stimulus masking inhibitory mechanisms were associated with a reduced capability of neurons to follow higher-rate repetitive stimuli. The architecture of the peripheral inner-ear sensory epithelium was preserved, suggesting that the recorded abnormalities are of central origin. Some implications of these findings for the genesis of the language perception deficits and impaired language expression seen in developmental disorders to which perinatal asphyxia contributes, such as autism spectrum disorders, are discussed.


Entropy | 2017

Model-Based Approaches to Active Perception and Control

Giovanni Pezzulo; Francesco Donnarumma; Pierpaolo Iodice; Domenico Maisto; Ivilin Stoianov

There is an ongoing debate in cognitive (neuro)science and philosophy between classical cognitive theory and embodied, embedded, extended, and enactive (“4-Es”) views of cognition—a family of theories that emphasize the role of the body in cognition and the importance of brain-body-environment interaction over and above internal representation. This debate touches on foundational issues, such as whether the brain internally represents the external environment and “infers” or “computes” something. Here we focus on two 4-Es-based criticisms of traditional cognitive theories—of the notions of passive perception and of serial information processing—and discuss alternative ways to address them, by appealing to frameworks that use, or do not use, notions of internal modelling and inference. Our analysis illustrates that: an explicitly inferential framework can capture some key aspects of embodied and enactive theories of cognition; some claims of computational and dynamical theories can be reconciled rather than seen as alternative explanations of cognitive phenomena; and some aspects of cognitive processing (e.g., detached cognitive operations, such as planning and imagination) that are sometimes puzzling to explain from enactive and non-representational perspectives can, instead, be captured nicely from the perspective that internal generative models and predictive processing mediate adaptive control loops.
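
As a deliberately minimal illustration of the final point: a one-dimensional loop in which an internal generative model both revises its state from prediction errors (perception) and drives action that makes its predictions come true (control). The setup and gains are illustrative assumptions, not a model from the article.

```python
# Minimal predictive-processing control loop: infer from error, act on belief.
import numpy as np

rng = np.random.default_rng(0)
target = 2.0            # hidden environmental state the agent must track
belief = 0.0            # the generative model's internal estimate
position = 0.0          # the agent's actual position (changed by acting)
k_infer, k_act = 0.2, 0.5

for step in range(30):
    obs = target - position + 0.05 * rng.normal()  # noisy sensed discrepancy
    predicted_obs = belief - position              # what the model expects to sense
    error = obs - predicted_obs                    # prediction error
    belief += k_infer * error                      # perception: revise the model
    position += k_act * (belief - position)        # action: make the prediction true

print(f"final position {position:.2f}, belief {belief:.2f}, target {target}")
```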

Collaboration


Dive into Ivilin Stoianov's collaborations.
