Vinoo Alluri
University of Jyväskylä
Publications
Featured research published by Vinoo Alluri.
NeuroImage | 2012
Vinoo Alluri; Petri Toiviainen; Iiro P. Jääskeläinen; Enrico Glerean; Mikko Sams
We investigated the neural underpinnings of timbral, tonal, and rhythmic features of a naturalistic musical stimulus. Participants were scanned with functional Magnetic Resonance Imaging (fMRI) while listening to a stimulus with a rich musical structure, a modern tango. We correlated temporal evolutions of timbral, tonal, and rhythmic features of the stimulus, extracted using acoustic feature extraction procedures, with the fMRI time series. Results corroborate those obtained with controlled stimuli in previous studies and highlight additional areas recruited during musical feature processing. While timbral feature processing was associated with activations in cognitive areas of the cerebellum, and sensory and default mode network cerebrocortical areas, musical pulse and tonality processing recruited cortical and subcortical cognitive, motor and emotion-related circuits. In sum, by combining neuroimaging, acoustic feature extraction and behavioral methods, we revealed the large-scale cognitive, motor and limbic brain circuitry dedicated to acoustic feature processing during listening to a naturalistic stimulus. In addition to these novel findings, our study has practical relevance as it provides a powerful means to localize neural processing of individual acoustic features, be they of music, speech, or soundscapes, in ecological settings.
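To make the feature-to-fMRI correlation idea concrete, here is a minimal sketch in Python. It is not the authors' pipeline: the librosa descriptor, the 2-s TR, the file name, and the random fMRI matrix are all stand-ins, and hemodynamic lag handling is omitted.

```python
import numpy as np
import librosa
from scipy.stats import pearsonr

# Load the stimulus audio (file name is a placeholder).
y, sr = librosa.load("stimulus.wav", sr=None)

# One timbral descriptor: frame-wise spectral centroid.
centroid = librosa.feature.spectral_centroid(y=y, sr=sr)[0]

# Average frames within each scan (TR = 2 s assumed; librosa's
# default hop length is 512 samples per frame).
TR = 2.0
frames_per_scan = int(TR * sr / 512)
n_scans = len(centroid) // frames_per_scan
feature_ts = centroid[: n_scans * frames_per_scan].reshape(n_scans, -1).mean(axis=1)

# Stand-in fMRI matrix (n_scans x n_voxels); real data would be loaded
# from preprocessed images.
fmri = np.random.randn(n_scans, 1000)

# Voxel-wise Pearson correlation between the feature and the BOLD signal.
r = np.array([pearsonr(feature_ts, fmri[:, v])[0] for v in range(fmri.shape[1])])
```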
Frontiers in Psychology | 2011
Vinoo Alluri; Brigitte Bogert; Thomas Jacobsen; Nuutti Vartiainen; Sirke Nieminen; Mari Tervaniemi
Musical emotions, such as happiness and sadness, have been investigated using instrumental music devoid of linguistic content. However, pop and rock, the most common musical genres, utilize lyrics for conveying emotions. Using participants’ self-selected musical excerpts, we studied their behavior and brain responses to elucidate how lyrics interact with musical emotion processing, as reflected by emotion recognition and activation of limbic areas involved in affective experience. We extracted samples from subjects’ selections of sad and happy pieces and sorted them according to the presence of lyrics. Acoustic feature analysis showed that music with lyrics differed from music without lyrics in spectral centroid, a feature related to perceptual brightness, whereas sad music with lyrics did not diverge from happy music without lyrics, indicating the role of other factors in emotion classification. Behavioral ratings revealed that happy music without lyrics induced stronger positive emotions than happy music with lyrics. We also acquired functional magnetic resonance imaging data while subjects performed affective tasks regarding the music. First, using ecological and acoustically variable stimuli, we broadened previous findings about the brain processing of musical emotions and of songs versus instrumental music. Additionally, contrasts between sad music with versus without lyrics recruited the parahippocampal gyrus, the amygdala, the claustrum, the putamen, the precentral gyrus, the medial and inferior frontal gyri (including Broca’s area), and the auditory cortex, while the reverse contrast produced no activations. Happy music without lyrics activated structures of the limbic system and the right pars opercularis of the inferior frontal gyrus, whereas auditory regions alone responded to happy music with lyrics. These findings point to the role of acoustic cues for the experience of happiness in music and to the importance of lyrics for sad musical emotions.
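As a rough illustration of the spectral centroid comparison, the sketch below computes the mean centroid per excerpt and runs a two-sample t-test between the lyrics and no-lyrics groups. librosa, the file names, and the choice of test are assumptions for illustration, not the study's actual analysis.

```python
import librosa
from scipy.stats import ttest_ind

def mean_centroid(path):
    """Mean spectral centroid (Hz) of one excerpt, a rough brightness proxy."""
    y, sr = librosa.load(path, sr=None)
    return float(librosa.feature.spectral_centroid(y=y, sr=sr).mean())

# Placeholder file lists for the two stimulus categories.
with_lyrics = ["excerpt_lyrics_01.wav", "excerpt_lyrics_02.wav"]
without_lyrics = ["excerpt_instr_01.wav", "excerpt_instr_02.wav"]

t, p = ttest_ind([mean_centroid(f) for f in with_lyrics],
                 [mean_centroid(f) for f in without_lyrics])
print(f"spectral centroid difference: t = {t:.2f}, p = {p:.3f}")
```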
NeuroImage | 2013
Vinoo Alluri; Petri Toiviainen; Torben E. Lund; Mikkel Wallentin; Peter Vuust; Asoke K. Nandi; Tapani Ristaniemi
We aimed at predicting the temporal evolution of brain activity in naturalistic music listening conditions using a combination of neuroimaging and acoustic feature extraction. Participants were scanned using functional Magnetic Resonance Imaging (fMRI) while listening to two musical medleys, including pieces from various genres with and without lyrics. Regression models were built to predict voxel-wise brain activations, which were then tested in a cross-validation setting to evaluate the robustness of the resulting models across stimuli. To further assess the generalizability of the models we extended the cross-validation procedure by including another dataset, which comprised continuous fMRI responses of musically trained participants to an Argentinean tango. Individual models for the two musical medleys revealed that activations in several areas in the brain belonging to the auditory, limbic, and motor regions could be predicted. Notably, activations in the medial orbitofrontal region and the anterior cingulate cortex, relevant for self-referential appraisal and aesthetic judgments, could be predicted successfully. Cross-validation across musical stimuli and participant pools helped identify a region of the right superior temporal gyrus, encompassing the planum polare and Heschl's gyrus, as the core structure that processed complex acoustic features of musical pieces from various genres, with or without lyrics. Models based on purely instrumental music were able to predict activation in the bilateral auditory cortices, parietal, somatosensory, and left hemispheric primary and supplementary motor areas. The presence of lyrics, on the other hand, weakened the prediction of activations in the left superior temporal gyrus. Our results suggest spontaneous emotion-related processing during naturalistic listening to music and provide supportive evidence for the hemispheric specialization for categorical sounds with realistic stimuli. We herewith introduce a powerful means to predict brain responses to music, speech, or soundscapes across a large variety of contexts.
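A minimal sketch of the cross-stimulus prediction scheme, assuming ridge regression as the encoding model (the abstract does not specify the regression type) and random stand-in data:

```python
import numpy as np
from sklearn.linear_model import Ridge

# Stand-in feature matrices (scans x features) and one voxel's BOLD
# series for two stimuli; real inputs would come from preprocessing.
X_a, y_a = np.random.randn(200, 25), np.random.randn(200)
X_b, y_b = np.random.randn(200, 25), np.random.randn(200)

model = Ridge(alpha=1.0).fit(X_a, y_a)   # fit on medley A
pred = model.predict(X_b)                # predict the response to medley B
r = np.corrcoef(pred, y_b)[0, 1]         # cross-stimulus prediction accuracy
print(f"cross-stimulus prediction r = {r:.2f}")
```

Repeating this per voxel and swapping the train/test stimuli gives the cross-validated prediction maps the abstract describes.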
Frontiers in Human Neuroscience | 2016
Elvira Brattico; Brigitte Bogert; Vinoo Alluri; Mari Tervaniemi; Tuomas Eerola; Thomas Jacobsen
Emotion-related areas of the brain, such as the medial frontal cortices, amygdala, and striatum, are activated during listening to sad or happy music as well as during listening to pleasurable music. Indeed, in music, like in other arts, sad and happy emotions might co-exist and be distinct from emotions of pleasure or enjoyment. Here we aimed at discerning the neural correlates of sadness or happiness in music as opposed to those related to musical enjoyment. We further investigated whether musical expertise modulates the neural activity during affective listening to music. To these aims, 13 musicians and 16 non-musicians brought to the lab their most liked and disliked musical pieces with a happy and sad connotation. Based on a listening test, we selected the most representative 18-s excerpts of the emotions of interest for each individual participant. Functional magnetic resonance imaging (fMRI) recordings were obtained while subjects listened to and rated the excerpts. The cortico-thalamo-striatal reward circuit and motor areas were more active during liked than disliked music, whereas only the auditory cortex and the right amygdala were more active for disliked over liked music. These results discern the brain structures responsible for the perception of sad and happy emotions in music from those related to musical enjoyment. We also obtained novel evidence for functional differences in the limbic system associated with musical expertise, by showing enhanced liking-related activity in fronto-insular and cingulate areas in musicians.
IEEE Transactions on Multimedia | 2013
Fengyu Cong; Vinoo Alluri; Asoke K. Nandi; Petri Toiviainen; Rui Fa; Basel Abu-Jamous; Liyun Gong; Bart G. W. Craenen; Hanna Poikonen; Minna Huotilainen; Tapani Ristaniemi
This study proposes a novel approach for the analysis of brain responses in ongoing EEG elicited by a naturalistic, continuous music stimulus. The 512-second-long EEG data (recorded with 64 electrodes) are first decomposed into 64 components by independent component analysis (ICA) for each participant. Then, the spatial maps showing dipolar brain activity are selected in terms of the residual dipole variance through a single dipole model in brain imaging, and clustered into a pre-defined number (estimated by the minimum description length) of clusters. Subsequently, the temporal courses of the EEG theta and alpha oscillations of each component for each cluster are produced and correlated with the temporal courses of tonal and rhythmic features of the music. Using this approach, we found that the extracted temporal courses of the theta and alpha oscillations along the central and occipital areas of the scalp in two of the selected clusters significantly correlated with the musical features representing progressions in the rhythmic content of the stimulus. We suggest that this demonstrates that the proposed approach can uncover the brain responses elicited while a participant listens continuously to a long piece of naturalistic music.
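The sketch below illustrates the core of this pipeline (ICA on continuous EEG, then correlating a component's theta-band envelope with a music feature), using MNE-Python as a stand-in toolbox. Dipole fitting and clustering are omitted, and the component count, filter settings, file name, and feature are illustrative.

```python
import numpy as np
import mne
from scipy.signal import hilbert
from scipy.stats import pearsonr

raw = mne.io.read_raw_fif("music_eeg_raw.fif", preload=True)  # placeholder file
raw.filter(1.0, 40.0)

# Decompose the continuous EEG into independent components.
ica = mne.preprocessing.ICA(n_components=20, random_state=0).fit(raw)
sources = ica.get_sources(raw).get_data()     # (n_components, n_samples)

# Theta-band (4-8 Hz) amplitude envelope of one component.
theta = mne.filter.filter_data(sources[0], raw.info["sfreq"], 4.0, 8.0)
envelope = np.abs(hilbert(theta))

# Stand-in for a rhythmic feature resampled to the EEG sampling rate.
music_feature = np.random.randn(envelope.size)
r, p = pearsonr(envelope, music_feature)
```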
Journal of Neuroscience Methods | 2014
Fengyu Cong; Tuomas Puoliväli; Vinoo Alluri; Tuomo Sipola; Iballa Burunat; Petri Toiviainen; Asoke K. Nandi; Tapani Ristaniemi
BACKGROUND Independent component analysis (ICA) has often been used to decompose fMRI data, mostly for resting-state, block, and event-related designs, owing to its advantages as a data-driven method. For fMRI data from free-listening experiences, only a few exploratory studies have applied ICA. NEW METHOD For processing the fMRI data elicited by a 512-s modern tango, an FFT-based band-pass filter was first used to further pre-process the fMRI data and remove sources of no interest and noise. Then, a fast model order selection method was applied to estimate the number of sources. Next, both individual ICA and group ICA were performed. Subsequently, ICA components whose temporal courses were significantly correlated with musical features were selected. Finally, for individual ICA, components common across the majority of participants were found by diffusion map and spectral clustering. RESULTS The spatial maps extracted by the new ICA approach that were common across most participants evidenced slightly right-lateralized activity within and surrounding the auditory cortices, and were associated with the musical features. COMPARISON WITH EXISTING METHOD(S) Compared with the conventional ICA approach, the new approach yielded the common spatial maps in more participants. Conventional model order selection methods underestimated the true number of sources in the conventionally pre-processed fMRI data for the individual ICA. CONCLUSIONS Pre-processing fMRI data with a reasonable band-pass digital filter can greatly benefit subsequent model order selection and ICA under naturalistic paradigms. Diffusion map and spectral clustering are straightforward tools for finding common ICA spatial maps.
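The FFT-based band-pass pre-filtering step can be sketched in a few lines: zero out frequency bins outside a pass band and invert the transform. The 0.01-0.1 Hz cutoffs and the 2-s TR are illustrative assumptions, not the paper's settings.

```python
import numpy as np

def fft_bandpass(ts, tr, low=0.01, high=0.1):
    """Band-pass one voxel time series via the FFT (cutoffs in Hz)."""
    freqs = np.fft.rfftfreq(ts.size, d=tr)
    spec = np.fft.rfft(ts)
    spec[(freqs < low) | (freqs > high)] = 0.0   # zero bins outside the band
    return np.fft.irfft(spec, n=ts.size)

ts = np.random.randn(256)           # stand-in voxel series, TR = 2 s
filtered = fft_bandpass(ts, tr=2.0)
```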
Neuroscience | 2016
Hanna Poikonen; Vinoo Alluri; Elvira Brattico; Olivier Lartillot; Mari Tervaniemi; Minna Huotilainen
Brain responses to discrete short sounds have been studied intensively using the event-related potential (ERP) method, in which the electroencephalogram (EEG) signal is divided into epochs time-locked to stimuli of interest. Here we introduce and apply a novel technique that enables one to isolate ERPs elicited in humans by continuous music. The ERPs were recorded during listening to a Tango Nuevo piece, a deep techno track, and an acoustic lullaby. Acoustic features related to timbre, harmony, and dynamics of the audio signal were computationally extracted from the musical pieces. The negative deflection occurring around 100 milliseconds after stimulus onset (N100) and the positive deflection occurring around 200 milliseconds after stimulus onset (P200) in response to peak changes in the acoustic features were distinguishable and were often largest for the Tango Nuevo piece. In addition to large changes in these musical features, long phases of low feature values preceding a rapid increase (which we call Preceding Low-Feature Phases) enhanced the amplitudes of the N100 and P200 responses. These ERP responses resembled those to simpler sounds, making it possible to extend the tradition of ERP research to naturalistic paradigms.
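A minimal sketch of feature-triggered ERP extraction, assuming MNE-Python: detect peaks in an acoustic feature resampled to the EEG rate, treat them as events, and average epochs around them. The peak threshold, epoch window, file name, and feature are illustrative, not the authors' criteria.

```python
import numpy as np
import mne
from scipy.signal import find_peaks

raw = mne.io.read_raw_fif("music_eeg_raw.fif", preload=True)  # placeholder

# Stand-in for an acoustic feature resampled to the EEG sampling rate.
feature = np.abs(np.random.randn(raw.n_times))
peaks, _ = find_peaks(feature, height=feature.mean() + 2 * feature.std())

# Build an MNE events array: (sample index, 0, event id).
events = np.column_stack([peaks, np.zeros_like(peaks), np.ones_like(peaks)])
epochs = mne.Epochs(raw, events, tmin=-0.1, tmax=0.4, baseline=(None, 0))
erp = epochs.average()   # N100/P200 should appear around 0.1-0.2 s
```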
International Workshop on Machine Learning for Signal Processing | 2013
Tuomo Sipola; Fengyu Cong; Tapani Ristaniemi; Vinoo Alluri; Petri Toiviainen; Elvira Brattico; Asoke K. Nandi
Functional magnetic resonance imaging (fMRI) produces data about activity inside the brain, from which spatial maps can be extracted by independent component analysis (ICA). A dataset contains n spatial maps of p voxels each, where the number of voxels is very high compared to the number of analyzed spatial maps. Clustering of the spatial maps is usually based on correlation matrices. This usually works well, although such a similarity matrix can inherently explain only part of the total variance contained in high-dimensional data where n is relatively small but p is large. For such high-dimensional data, it is reasonable to perform dimensionality reduction before clustering. In this research, we used the recently developed diffusion map for dimensionality reduction in conjunction with spectral clustering. The results show that diffusion-map-based clustering worked as well as the more traditional methods and produced more compact clusters when needed.
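A compact sketch of the idea, assuming a Gaussian kernel and a k-means step in the embedded space (the exact kernel, bandwidth, and clustering settings are not given in the abstract):

```python
import numpy as np
from sklearn.cluster import KMeans

maps = np.random.randn(40, 5000)     # n = 40 spatial maps, p = 5000 voxels

# Gaussian kernel on pairwise squared distances between maps.
d2 = ((maps[:, None, :] - maps[None, :, :]) ** 2).sum(-1)
K = np.exp(-d2 / np.median(d2))      # median heuristic for the bandwidth

# Row-normalize to a Markov matrix; its leading non-trivial eigenvectors
# give the diffusion-map coordinates.
P = K / K.sum(axis=1, keepdims=True)
vals, vecs = np.linalg.eig(P)
order = np.argsort(-vals.real)
embedding = vecs.real[:, order[1:4]]  # skip the constant first eigenvector

labels = KMeans(n_clusters=3, n_init=10).fit_predict(embedding)
```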
International Conference on Acoustics, Speech, and Signal Processing | 2013
Tuomas Puoliväli; Fengyu Cong; Vinoo Alluri; Qiu-Hua Lin; Petri Toiviainen; Asoke K. Nandi; Tapani Ristaniemi
This study presents a method to analyze blood-oxygen-level-dependent (BOLD) functional magnetic resonance imaging (fMRI) signals associated with listening to continuous music. Semi-blind independent component analysis (ICA) was applied to decompose the fMRI data into source-level activation maps and their respective temporal courses. The unmixing matrix in the source separation process of ICA was constrained by a variety of acoustic features derived from the piece of music used as the stimulus in the experiment. Compared with conventional ICA methods, this allowed more stable estimation and the extraction of more activation maps of interest.
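The constrained unmixing itself has no off-the-shelf implementation in common Python libraries, so the sketch below substitutes a simpler two-step stand-in: blind FastICA followed by selecting components whose temporal courses correlate with an acoustic feature. The paper's actual method builds the feature constraint into the unmixing optimization itself.

```python
import numpy as np
from sklearn.decomposition import FastICA

X = np.random.randn(200, 5000)       # stand-in data: scans x voxels
feature = np.random.randn(200)       # one acoustic feature time course

ica = FastICA(n_components=20, random_state=0)
courses = ica.fit_transform(X)       # temporal courses: scans x components

# Keep the components whose time courses best track the acoustic feature.
r = np.array([np.corrcoef(courses[:, k], feature)[0, 1] for k in range(20)])
selected = np.argsort(-np.abs(r))[:3]
maps = ica.components_[selected]     # their spatial maps (components x voxels)
```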
Journal of Neuroscience Methods | 2018
Valeri Tsatsishvili; Iballa Burunat; Fengyu Cong; Petri Toiviainen; Vinoo Alluri; Tapani Ristaniemi
BACKGROUND There has been growing interest in naturalistic neuroimaging experiments, which deepen our understanding of how the human brain processes and integrates incoming streams of multifaceted sensory information, as commonly occurs in the real world. Music is a good example of such a complex, continuous phenomenon. In a few recent fMRI studies examining the neural correlates of music in continuous listening settings, multiple perceptual attributes of the music stimulus were represented by a set of high-level features, produced as linear combinations of acoustic descriptors computationally extracted from the stimulus audio. NEW METHOD fMRI data from a naturalistic music listening experiment were employed here. Kernel principal component analysis (KPCA) was applied to the acoustic descriptors extracted from the stimulus audio to generate a set of nonlinear stimulus features. Subsequently, the perceptual and neural correlates of the generated high-level features were examined. RESULTS The generated features captured musical percepts that were hidden from the linear PCA features, namely Rhythmic Complexity and Event Synchronicity. Neural correlates of the new features revealed activations associated with the processing of complex rhythms, including auditory, motor, and frontal areas. COMPARISON WITH EXISTING METHOD(S) Results were compared with the findings of a previously published study, which analyzed the same fMRI data but applied linear PCA for generating the stimulus features. To enable comparison of the results, the methodology for finding stimulus-driven functional maps was adopted from that study. CONCLUSIONS Exploiting nonlinear relationships among acoustic descriptors can yield novel high-level stimulus features, which can in turn reveal new brain structures involved in music processing.
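The contrast between the linear and nonlinear feature generation steps can be sketched with scikit-learn; the RBF kernel, gamma, component count, and descriptor matrix are illustrative assumptions, not the paper's settings.

```python
import numpy as np
from sklearn.decomposition import PCA, KernelPCA

# Stand-in descriptor matrix: one row per analysis frame, one column per
# acoustic descriptor extracted from the stimulus audio.
D = np.random.randn(1000, 25)

linear_feats = PCA(n_components=5).fit_transform(D)
nonlinear_feats = KernelPCA(n_components=5, kernel="rbf", gamma=0.1).fit_transform(D)
# Either feature set would then be correlated voxel-wise with the fMRI data.
```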