Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Iballa Burunat is active.

Publications


Featured research published by Iballa Burunat.


PLOS ONE | 2015

Action in Perception: Prominent Visuo-Motor Functional Symmetry in Musicians during Music Listening

Iballa Burunat; Elvira Brattico; Tuomas Puoliväli; Tapani Ristaniemi; Mikko Sams; Petri Toiviainen

Musical training leads to sensory and motor neuroplastic changes in the human brain. Motivated by findings on enlarged corpus callosum in musicians and asymmetric somatomotor representation in string players, we investigated the relationship between musical training, callosal anatomy, and interhemispheric functional symmetry during music listening. Functional symmetry was increased in musicians compared to nonmusicians, and in keyboardists compared to string players. This increased functional symmetry was prominent in visual and motor brain networks. Callosal size did not significantly differ between groups except for the posterior callosum in musicians compared to nonmusicians. We conclude that the distinctive postural and kinematic symmetry in instrument playing cross-modally shapes information processing in sensory-motor cortical areas during music listening. This cross-modal plasticity suggests that motor training affects music perception.
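
The listing above does not describe how interhemispheric functional symmetry was quantified. As a hypothetical illustration only, the sketch below computes one plausible proxy, the mean correlation between homotopic left- and right-hemisphere ROI time series; all data, shapes, and names are assumptions, not the published method.

```python
import numpy as np

def homotopic_symmetry(left_ts, right_ts):
    """Mean correlation between each left-hemisphere ROI time series and
    its right-hemisphere homologue; higher values are read here as greater
    interhemispheric functional symmetry."""
    # left_ts, right_ts: arrays of shape (n_rois, n_timepoints)
    corrs = [np.corrcoef(l, r)[0, 1] for l, r in zip(left_ts, right_ts)]
    return float(np.mean(corrs))

# Hypothetical data: 10 homotopic ROI pairs, 200 fMRI volumes
rng = np.random.default_rng(0)
left = rng.standard_normal((10, 200))
right = 0.5 * left + 0.5 * rng.standard_normal((10, 200))  # partially coupled
print(f"mean homotopic correlation: {homotopic_symmetry(left, right):.2f}")
```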


Journal of Neuroscience Methods | 2014

Key issues in decomposing fMRI during naturalistic and continuous music experience with independent component analysis

Fengyu Cong; Tuomas Puoliväli; Vinoo Alluri; Tuomo Sipola; Iballa Burunat; Petri Toiviainen; Asoke K. Nandi; Tapani Ristaniemi

BACKGROUND: Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block, and event-related designs, because as a data-driven method it requires no explicit model of the task. Only a few exploratory studies have applied ICA to fMRI data from free-listening experiments.
NEW METHOD: To process fMRI data elicited by a 512-s modern tango, an FFT-based band-pass filter was first applied to remove sources of no interest and noise. Then, a fast model order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose temporal courses were significantly correlated with musical features were selected. Finally, for individual ICA, components common across the majority of participants were identified with diffusion maps and spectral clustering.
RESULTS: The spatial maps extracted by the new ICA approach that were common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices, and their temporal courses were associated with the musical features.
COMPARISON WITH EXISTING METHOD(S): Compared with the conventional ICA approach, more participants shared the common spatial maps extracted by the new ICA approach. Conventional model order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for individual ICA.
CONCLUSIONS: Pre-processing fMRI data with a suitable band-pass digital filter can greatly benefit subsequent model order selection and ICA for naturalistic paradigms. Diffusion maps and spectral clustering are straightforward tools for finding common ICA spatial maps.
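
A minimal sketch of the kind of pipeline summarized above: an FFT-based band-pass filter, spatial ICA, and selection of components whose temporal courses correlate with a musical feature. The data shapes, cutoff frequencies, component count, and correlation threshold are illustrative assumptions, and scikit-learn's FastICA stands in for the authors' ICA implementation.

```python
import numpy as np
from sklearn.decomposition import FastICA

def fft_bandpass(ts, fs, low, high):
    """Zero out FFT bins outside [low, high] Hz (a simple FFT-based band-pass)."""
    freqs = np.fft.rfftfreq(ts.shape[-1], d=1.0 / fs)
    spec = np.fft.rfft(ts, axis=-1)
    spec[..., (freqs < low) | (freqs > high)] = 0
    return np.fft.irfft(spec, n=ts.shape[-1], axis=-1)

# Hypothetical data: 500 voxels x 256 volumes at TR = 2 s (fs = 0.5 Hz),
# plus one musical-feature time course sampled at the same rate.
rng = np.random.default_rng(1)
fs = 0.5
fmri = rng.standard_normal((500, 256))
musical_feature = rng.standard_normal(256)

# 1) Band-pass the voxel time series to suppress drift and high-frequency noise
filtered = fft_bandpass(fmri, fs, low=0.01, high=0.1)

# 2) Spatial ICA: voxels are observations, time points are features
ica = FastICA(n_components=20, random_state=0, max_iter=1000)
spatial_maps = ica.fit_transform(filtered)   # (voxels, components)
time_courses = ica.mixing_                   # (time, components)

# 3) Keep components whose temporal courses track the musical feature
corrs = np.array([np.corrcoef(time_courses[:, k], musical_feature)[0, 1]
                  for k in range(time_courses.shape[1])])
print("components tracking the feature:", np.where(np.abs(corrs) > 0.3)[0])
```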


Neuropsychologia | 2016

Hidden sources of joy, fear, and sadness: Explicit versus implicit neural processing of musical emotions

Brigitte Bogert; Taru Numminen-Kontti; Benjamin P. Gold; Mikko Sams; Jussi Numminen; Iballa Burunat; Jouko Lampinen

Music is often used to regulate emotions and mood. Typically, music conveys and induces emotions even when one does not attend to them. Studies on the neural substrates of musical emotions have, however, only examined brain activity when subjects have focused on the emotional content of the music. Here we used functional magnetic resonance imaging (fMRI) to address the neural processing of happy, sad, and fearful music with a paradigm in which 56 subjects were instructed either to classify the emotions (explicit condition) or to attend to the number of instruments playing (implicit condition) in 4-s music clips. In the implicit vs. explicit condition, stimuli bilaterally activated the inferior parietal lobule, premotor cortex, caudate, and ventromedial frontal areas. The cortical dorsomedial prefrontal and occipital areas activated during explicit processing were those previously shown to be associated with the cognitive processing of music and with emotion recognition and regulation. Moreover, happiness in music was associated with activity in the bilateral auditory cortex, left parahippocampal gyrus, and supplementary motor area, whereas the negative emotions of sadness and fear corresponded with activation of the left anterior cingulate and middle frontal gyrus and down-regulation of the orbitofrontal cortex. Our study demonstrates for the first time in healthy subjects the neural underpinnings of the implicit processing of brief musical emotions, particularly in frontoparietal, dorsolateral prefrontal, and striatal areas of the brain.
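
As a hypothetical illustration of the implicit-versus-explicit comparison, the sketch below reduces it to a paired test on per-subject activation estimates for a single region; the numbers are simulated and unrelated to the study's data.

```python
import numpy as np
from scipy import stats

# Hypothetical per-subject activation estimates (e.g., mean beta values)
# for one region of interest in each condition, for 56 subjects.
rng = np.random.default_rng(2)
n_subjects = 56
beta_implicit = rng.normal(loc=0.6, scale=0.4, size=n_subjects)
beta_explicit = rng.normal(loc=0.4, scale=0.4, size=n_subjects)

# Paired t-test of the implicit vs. explicit contrast within subjects
t_val, p_val = stats.ttest_rel(beta_implicit, beta_explicit)
print(f"implicit vs. explicit: t({n_subjects - 1}) = {t_val:.2f}, p = {p_val:.3f}")
```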


Scientific Reports | 2018

Decoding Musical Training from Dynamic Processing of Musical Features in the Brain

Pasi Saari; Iballa Burunat; Elvira Brattico; Petri Toiviainen

Pattern recognition on neural activations from naturalistic music listening has been successful at predicting neural responses of listeners from musical features, and vice versa. Inter-subject differences in decoding accuracy have arisen partly from musical training, which has widely recognized structural and functional effects on the brain. We propose and evaluate a decoding approach aimed at predicting the musicianship class of an individual listener from dynamic neural processing of musical features. Whole-brain functional magnetic resonance imaging (fMRI) data were acquired from musicians and nonmusicians while they listened to three musical pieces from different genres. Six musical features, representing low-level (timbre) and high-level (rhythm and tonality) aspects of music perception, were computed from the acoustic signals, and classification into musicians and nonmusicians was performed on the musical feature and parcellated fMRI time series. Cross-validated classification accuracy reached 77% with nine regions, comprising frontal and temporal cortical regions, caudate nucleus, and cingulate gyrus. The processing of high-level musical features at the right superior temporal gyrus was most influenced by listeners’ musical training. The study demonstrates the feasibility of decoding musicianship from how individual brains listen to music, attaining accuracy comparable to current results from automated clinical diagnosis of neurological and psychological disorders.
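
A minimal sketch of the general decoding idea, cross-validated classification of musicianship from one feature vector per listener; the feature construction, classifier, and data below are assumptions for illustration, not the published pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

# Hypothetical data: one feature vector per listener, e.g. correlations
# between parcellated fMRI time series and six musical-feature time courses.
rng = np.random.default_rng(3)
n_musicians, n_nonmusicians, n_features = 18, 18, 9 * 6
X = rng.standard_normal((n_musicians + n_nonmusicians, n_features))
X[:n_musicians] += 0.5          # inject a group difference for illustration
y = np.array([1] * n_musicians + [0] * n_nonmusicians)  # 1 = musician

# Cross-validated musician vs. nonmusician classification
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"cross-validated accuracy: {scores.mean():.2f}")
```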


Psychomusicology: Music, Mind and Brain | 2018

Musical training predicts cerebello-hippocampal coupling during music listening

Iballa Burunat; Martin Hartmann; Peter Vuust; Teppo Särkämö; Petri Toiviainen

Cerebello-hippocampal interactions occur during accurate spatiotemporal prediction of movements. In the context of music listening, differences in cerebello-hippocampal functional connectivity may result from differences in predictive listening accuracy. Using functional MRI, we studied differences in this network between 18 musicians and 18 nonmusicians while they listened to music. Musicians possess a predictive listening advantage over nonmusicians, facilitated by strengthened coupling between produced and heard sounds through lifelong musical experience. Thus, we hypothesized that musicians would exhibit greater functional connectivity than nonmusicians as a marker of accurate online predictions during music listening. To this end, we estimated the functional connectivity between cerebellum and hippocampus as modulated by a perceptual measure of the predictability of the music. Results revealed increased predictability-driven functional connectivity in this network in musicians compared with nonmusicians, which was positively correlated with the length of musical training. Findings may be explained by musicians’ improved predictive listening accuracy. Our findings advance the understanding of cerebellar integrative function.
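
The abstract describes cerebello-hippocampal connectivity modulated by a predictability measure. A common way to estimate such stimulus-modulated coupling is a psychophysiological-interaction-style regression; the sketch below illustrates that general idea with simulated time courses and is not the study's actual model.

```python
import numpy as np

# Hypothetical ROI time series (200 volumes) and a stimulus-predictability
# time course sampled at the same rate.
rng = np.random.default_rng(4)
n_vols = 200
cerebellum = rng.standard_normal(n_vols)
predictability = rng.standard_normal(n_vols)
# Simulate hippocampal activity that tracks the cerebellum more strongly
# when predictability is high, so the interaction term has something to find.
hippocampus = ((0.2 + 0.4 * (predictability > 0)) * cerebellum
               + rng.standard_normal(n_vols))

# PPI-style regression: hippocampus ~ cerebellum + predictability + interaction
interaction = cerebellum * predictability
design = np.column_stack([np.ones(n_vols), cerebellum, predictability, interaction])
betas, *_ = np.linalg.lstsq(design, hippocampus, rcond=None)
print(f"predictability-modulated coupling (interaction beta): {betas[3]:.2f}")
```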


Journal of Neuroscience Methods | 2018

On application of kernel PCA for generating stimulus features for fMRI during continuous music listening

Valeri Tsatsishvili; Iballa Burunat; Fengyu Cong; Petri Toiviainen; Vinoo Alluri; Tapani Ristaniemi

BACKGROUND: There has been growing interest in naturalistic neuroimaging experiments, which deepen our understanding of how the human brain processes and integrates incoming streams of multifaceted sensory information, as commonly occurs in the real world. Music is a good example of such a complex, continuous phenomenon. In a few recent fMRI studies examining neural correlates of music in continuous listening settings, multiple perceptual attributes of the music stimulus were represented by a set of high-level features, produced as linear combinations of acoustic descriptors computationally extracted from the stimulus audio.
NEW METHOD: fMRI data from a naturalistic music listening experiment were employed here. Kernel principal component analysis (KPCA) was applied to acoustic descriptors extracted from the stimulus audio to generate a set of nonlinear stimulus features. Subsequently, perceptual and neural correlates of the generated high-level features were examined.
RESULTS: The generated features captured musical percepts that were hidden from the linear PCA features, namely Rhythmic Complexity and Event Synchronicity. Neural correlates of the new features revealed activations associated with the processing of complex rhythms in auditory, motor, and frontal areas.
COMPARISON WITH EXISTING METHOD: Results were compared with the findings of a previously published study, which analyzed the same fMRI data but applied linear PCA to generate stimulus features. To enable comparison of the results, the methodology for finding stimulus-driven functional maps was adopted from that study.
CONCLUSIONS: Exploiting nonlinear relationships among acoustic descriptors can yield novel high-level stimulus features, which can in turn reveal additional brain structures involved in music processing.
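
A minimal sketch of the feature-generation step summarized above, kernel PCA applied to a matrix of acoustic descriptors alongside linear PCA for comparison; the descriptor matrix, RBF kernel, and component counts are illustrative assumptions.

```python
import numpy as np
from sklearn.decomposition import KernelPCA, PCA
from sklearn.preprocessing import StandardScaler

# Hypothetical acoustic descriptors: 256 stimulus time frames x 25 descriptors
rng = np.random.default_rng(5)
descriptors = StandardScaler().fit_transform(rng.standard_normal((256, 25)))

# Linear PCA features, as in the earlier study this paper compares against
linear_features = PCA(n_components=5).fit_transform(descriptors)

# Nonlinear stimulus features via kernel PCA with an RBF kernel
kpca = KernelPCA(n_components=5, kernel="rbf", gamma=0.05)
nonlinear_features = kpca.fit_transform(descriptors)

# Each column is a candidate high-level stimulus feature whose time course
# could then be related to fMRI responses, as in the study design.
print(linear_features.shape, nonlinear_features.shape)
```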


Cortex | 2014

Dynamics of brain activity underlying working memory for music in a naturalistic condition

Iballa Burunat; Vinoo Alluri; Petri Toiviainen; Jussi Numminen; Elvira Brattico


NeuroImage | 2016

The reliability of continuous brain responses during naturalistic listening to music

Iballa Burunat; Petri Toiviainen; Vinoo Alluri; Brigitte Bogert; Tapani Ristaniemi; Mikko Sams; Elvira Brattico


Psychomusicology: Music, Mind and Brain | 2015

Musical expertise modulates functional connectivity of limbic regions during continuous music listening

Vinoo Alluri; Elvira Brattico; Petri Toiviainen; Iballa Burunat; Brigitte Bogert; Jussi Numminen; Marina Kliuchko


Human Brain Mapping | 2017

Connectivity patterns during music listening: Evidence for action-based processing in musicians

Vinoo Alluri; Petri Toiviainen; Iballa Burunat; Marina Kliuchko; Peter Vuust

Collaboration


Dive into Iballa Burunat's collaboration.

Top Co-Authors

Vinoo Alluri

University of Jyväskylä

Tapani Ristaniemi

Information Technology University

Tuomas Puoliväli

Information Technology University
