Network


Latest external collaborations at the country level. Dive into the details by clicking on the dots.

Hotspot


Dive into the research topics where Parag Chordia is active.

Publication


Featured research published by Parag Chordia.


European Journal of Neuroscience | 2013

Inter-subject synchronization of brain responses during natural music listening.

Daniel A. Abrams; Srikanth Ryali; Tianwen Chen; Parag Chordia; Amirah Khouzam; Daniel J. Levitin; Vinod Menon

Music is a cultural universal and a rich part of the human experience. However, little is known about common brain systems that support the processing and integration of extended, naturalistic ‘real‐world’ music stimuli. We examined this question by presenting extended excerpts of symphonic music, and two pseudomusical stimuli in which the temporal and spectral structure of the Natural Music condition were disrupted, to non‐musician participants undergoing functional brain imaging and analysing synchronized spatiotemporal activity patterns between listeners. We found that music synchronizes brain responses across listeners in bilateral auditory midbrain and thalamus, primary auditory and auditory association cortex, right‐lateralized structures in frontal and parietal cortex, and motor planning regions of the brain. These effects were greater for natural music compared to the pseudo‐musical control conditions. Remarkably, inter‐subject synchronization in the inferior colliculus and medial geniculate nucleus was also greater for the natural music condition, indicating that synchronization at these early stages of auditory processing is not simply driven by spectro‐temporal features of the stimulus. Increased synchronization during music listening was also evident in a right‐hemisphere fronto‐parietal attention network and bilateral cortical regions involved in motor planning. While these brain structures have previously been implicated in various aspects of musical processing, our results are the first to show that these regions track structural elements of a musical stimulus over extended time periods lasting minutes. Our results show that a hierarchical distributed network is synchronized between individuals during the processing of extended musical sequences, and provide new insight into the temporal integration of complex and biologically salient auditory sequences.
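In practice, inter-subject synchronization is often quantified with a leave-one-out inter-subject correlation: each listener's voxel time series is correlated with the average time series of the remaining listeners. The sketch below illustrates that general scheme; the array shapes and random data are hypothetical stand-ins, not the study's pipeline.

```python
import numpy as np

def intersubject_correlation(data):
    """Leave-one-out inter-subject correlation.

    data: array of shape (n_subjects, n_voxels, n_timepoints)
    Returns the mean ISC per voxel, averaged over subjects.
    """
    n_subjects, n_voxels, _ = data.shape
    isc = np.zeros((n_subjects, n_voxels))
    for s in range(n_subjects):
        # Average time series of all listeners except subject s
        others = np.delete(data, s, axis=0).mean(axis=0)
        for v in range(n_voxels):
            isc[s, v] = np.corrcoef(data[s, v], others[v])[0, 1]
    return isc.mean(axis=0)

# Hypothetical usage: 17 subjects, 1000 voxels, 300 fMRI volumes
rng = np.random.default_rng(0)
bold = rng.standard_normal((17, 1000, 300))
print(intersubject_correlation(bold).shape)  # (1000,)
```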


Proceedings of the National Academy of Sciences of the United States of America | 2012

Musical rhythm spectra from Bach to Joplin obey a 1/f power law

Daniel J. Levitin; Parag Chordia; Vinod Menon

Much of our enjoyment of music comes from its balance of predictability and surprise. Musical pitch fluctuations follow a 1/f power law that precisely achieves this balance. Musical rhythms, especially those of Western classical music, are considered highly regular and predictable, and this predictability has been hypothesized to underlie rhythm's contribution to our enjoyment of music. Are musical rhythms indeed entirely predictable, and how do they vary with genre and composer? To answer this question, we analyzed the rhythm spectra of 1,788 movements from 558 compositions of Western classical music. We found that an overwhelming majority of rhythms obeyed a 1/fβ power law across 16 subgenres and 40 composers, with β ranging from ∼0.5–1. Notably, classical composers, whose compositions are known to exhibit nearly identical 1/f pitch spectra, demonstrated distinctive 1/f rhythm spectra: Beethoven's rhythms were among the most predictable, and Mozart's among the least. Our finding of the ubiquity of 1/f rhythm spectra in compositions spanning nearly four centuries demonstrates that, as with musical pitch, musical rhythms also exhibit a balance of predictability and surprise that could contribute in a fundamental way to our aesthetic experience of music. Although music compositions are intended to be performed, the fact that the notated rhythms follow a 1/f spectrum indicates that such structure is no mere artifact of performance or perception, but rather, exists within the written composition before the music is performed. Furthermore, composers systematically manipulate (consciously or otherwise) the predictability in 1/f rhythms to give their compositions unique identities.
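Estimating the exponent β of a 1/fβ spectrum typically amounts to fitting a line to the power spectrum on log-log axes. The sketch below shows that standard technique on a synthetic duration sequence; it illustrates the general method, not the authors' analysis code.

```python
import numpy as np

def spectral_exponent(signal):
    """Estimate beta in S(f) ~ 1/f**beta via a log-log linear fit."""
    signal = np.asarray(signal, dtype=float)
    signal = signal - signal.mean()
    spectrum = np.abs(np.fft.rfft(signal)) ** 2
    freqs = np.fft.rfftfreq(len(signal))
    mask = freqs > 0                      # drop the DC bin before taking logs
    slope, _ = np.polyfit(np.log(freqs[mask]), np.log(spectrum[mask]), 1)
    return -slope                          # S(f) ~ f**(-beta), so beta = -slope

# Hypothetical usage on a synthetic rhythm-like sequence of note durations
rng = np.random.default_rng(1)
durations = np.cumsum(rng.standard_normal(4096))   # integrated noise, beta ~ 2
print(round(spectral_exponent(durations), 2))
```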


Journal of New Music Research | 2011

Predictive Tabla Modelling Using Variable-length Markov and Hidden Markov Models

Parag Chordia; Avinash Sastry; Sertan Şentürk

Tabla is a sophisticated, centuries-old percussion tradition from North India based on timbral sequences. We model these sequences in a predictive framework with Variable-length Markov Models (VLMMs). Using a database containing nearly 30,000 strokes in 35 compositions, we show that VLMMs have high predictive accuracy, with an average perplexity of 1.80 and a median perplexity of 1.19, on a task with 42 distinct symbols. This basic framework is extended by the introduction of several new smoothing techniques that determine how to integrate predictions from the different order models. The model is then extended to include parallel representations of the sequence, a technique known as Multiple Viewpoint modelling. The work is then extended to the problem of recognizing strokes from audio. In this hidden context, the identity of the previous stroke is not revealed at each time step. A Variable-length Hidden Markov Model (VLHMM) is used to determine the next-symbol distribution that is used in computing the perplexity. We detail how the forward probabilities can be efficiently computed for the VLHMM by traversing a prediction suffix tree (PST) that is used to represent sequences. Using a VLHMM with a maximum order of 3, we obtain an average perplexity of 2.31, with a median of 1.16, on a nine-target task. To the best of our knowledge, this is the first use of Variable-length Hidden Markov Models for music modelling or prediction.
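To make the prediction framework concrete, the toy sketch below implements a variable-length Markov model with longest-suffix backoff and computes perplexity from its next-symbol probabilities. The add-one smoothing and the tiny stroke vocabulary are illustrative assumptions, not the paper's PST implementation or smoothing schemes.

```python
import math
from collections import defaultdict

class VLMM:
    """Toy variable-length Markov model with longest-suffix backoff."""

    def __init__(self, max_order):
        self.max_order = max_order
        self.counts = defaultdict(lambda: defaultdict(int))
        self.alphabet = set()

    def train(self, seq):
        self.alphabet.update(seq)
        for i in range(len(seq)):
            for order in range(self.max_order + 1):
                if i - order < 0:
                    break
                ctx = tuple(seq[i - order:i])
                self.counts[ctx][seq[i]] += 1

    def prob(self, ctx, sym):
        # Back off from the longest stored context to shorter suffixes.
        for k in range(min(len(ctx), self.max_order), -1, -1):
            suffix = tuple(ctx[len(ctx) - k:])
            if suffix in self.counts:
                dist = self.counts[suffix]
                total = sum(dist.values())
                # Add-one smoothing keeps unseen symbols at nonzero probability.
                return (dist[sym] + 1) / (total + len(self.alphabet))
        return 1 / len(self.alphabet)

def perplexity(model, seq):
    logsum = sum(math.log2(model.prob(seq[:i], s)) for i, s in enumerate(seq))
    return 2 ** (-logsum / len(seq))

strokes = "dha ge na ti na ge dha ge".split()   # hypothetical stroke names
m = VLMM(max_order=3)
m.train(strokes)
print(round(perplexity(m, strokes), 2))
```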


Computer Music Modeling and Retrieval | 2007

Understanding Emotion in Raag: An Empirical Study of Listener Responses

Parag Chordia; Alex Rae

A survey of emotion in North Indian classical music was undertaken to determine the type and consistency of emotional responses to raag. Participants listened to five one-minute raag excerpts and recorded their emotional responses after each. They were asked to describe the emotions each excerpt evoked and then to adjust six different sliders indicating the degree to which they felt the following: happy, sad, peaceful, tense, romantic, longing. A total of 280 responses were received. We find that both free-response and quantitative judgments of emotions are significantly different for each raag and quite consistent across listeners. We hypothesized that the primary predictors of emotion in these excerpts would be pitch-class distribution, pitch-class dyad entropy, overall sensory dissonance, and note density. Multiple regression analysis was used to determine the most important factors, their relative importance, and their total predictive value (R²). The features in combination explained between 11% (peaceful) and 33% (happy) of response variance. For all models, a subset of the features was significant, with the interplay between “minor” and “major” scale degrees playing an important role. Although the explanatory power of the current models is limited, the results thus far strongly suggest that raags do consistently elicit specific emotions that are linked to musical properties. The responses did not differ significantly for enculturated and non-enculturated listeners, suggesting that musical rather than cultural factors are dominant.
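A minimal sketch of the regression analysis: fit ordinary least squares of an emotion rating on the candidate features and report R². The feature matrix and ratings below are random stand-ins; only the 280-response count and the four-feature setup come from the abstract.

```python
import numpy as np

def r_squared(X, y):
    """Ordinary least squares fit and coefficient of determination."""
    X = np.column_stack([np.ones(len(X)), X])        # add intercept column
    beta, *_ = np.linalg.lstsq(X, y, rcond=None)
    residuals = y - X @ beta
    return 1 - residuals.var() / y.var()

# Illustrative stand-ins for the abstract's predictors: pitch-class
# distribution summary, dyad entropy, sensory dissonance, note density
rng = np.random.default_rng(2)
features = rng.standard_normal((280, 4))             # 280 responses, 4 features
happy = features @ np.array([0.5, 0.2, -0.3, 0.1]) + rng.standard_normal(280)
print(round(r_squared(features, happy), 2))
```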


Computer Music Journal | 2013

Joint recognition of raag and tonic in North Indian music

Parag Chordia; Sertan Şentürk

In many non-Western musical traditions, such as North Indian classical music (NICM), melodies do not conform to the major and minor modes, and they commonly use tunings that have no fixed reference (e.g., A = 440 Hz). We present a novel method for joint tonic and raag recognition in NICM from audio, based on pitch distributions. We systematically compare the accuracy of several methods using these tonal features when combined with instance-based (nearest-neighbor) and Bayesian classifiers. We find that, when compared with a standard twelve-dimensional pitch class distribution that estimates the relative frequency of each of the chromatic pitches, smoother and more continuous tonal representations offer significant performance advantages, particularly when combined with appropriate classification techniques. Best results are obtained using a kernel-density pitch distribution along with a nearest-neighbor classifier using Bhattacharyya distance, attaining a tonic error rate of 4.2 percent and raag error rate of 10.3 percent (with 21 different raag categories). These experiments suggest that tonal features based on pitch distributions are robust, reliable features that can be applied to complex melodic music.
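The best-performing configuration pairs a kernel-density pitch distribution with a nearest-neighbor classifier under the Bhattacharyya distance. A minimal sketch of that distance and the 1-NN rule follows, assuming pre-computed, normalized pitch distributions; the raag names and the 120-bin grid are hypothetical.

```python
import numpy as np

def bhattacharyya_distance(p, q):
    """Distance between two discrete probability distributions."""
    bc = np.sum(np.sqrt(p * q))          # Bhattacharyya coefficient
    return -np.log(bc)

def nearest_neighbor(query, templates, labels):
    """1-NN classification of a pitch distribution against labeled templates."""
    dists = [bhattacharyya_distance(query, t) for t in templates]
    return labels[int(np.argmin(dists))]

# Hypothetical usage with three raag templates over a fine-grained pitch grid
rng = np.random.default_rng(3)
templates = rng.random((3, 120))
templates /= templates.sum(axis=1, keepdims=True)    # normalize to distributions
labels = ["Yaman", "Bhairavi", "Darbari"]
query = templates[1] + 0.01 * rng.random(120)
query /= query.sum()
print(nearest_neighbor(query, templates, labels))    # -> "Bhairavi"
```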


International Conference on Machine Learning | 2010

Evaluating multiple viewpoint models of tabla sequences

Parag Chordia; Avinash Sastry; Aaron Albin

We describe a realtime tabla generation system based on a variable-length n-gram model trained on a large symbolic tabla database. A novel, parametric smoothing algorithm based on a family of exponential curves is introduced to control the relative weight of high- and low-order models. This technique is shown to lead to improvements over back-off smoothing for our tabla database. We find that cross-entropy is lowest when the coefficient of the exponential curve is between 1 and 2, and increases for values outside of this optimal range. The basic n-gram model is extended to model dependencies between duration, stroke-type, and meter using cross-products in a Multiple Viewpoints (MV) framework, leading to improvements in most cases when compared with independent stroke and duration models.
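The exponential weighting idea can be sketched as follows: predictions from models of order 0 through K are mixed with weights that grow (or shrink) exponentially with order, and the curve's coefficient controls how strongly high orders dominate. The weighting function below is an assumed form of the "family of exponential curves", not the paper's exact parameterization.

```python
import numpy as np

def blend_orders(order_predictions, coeff):
    """Mix next-symbol distributions from models of order 0..K.

    order_predictions: array (K+1, vocab) of per-order distributions
    coeff: coefficient controlling how strongly high orders are favored
    """
    orders = np.arange(len(order_predictions), dtype=float)
    weights = coeff ** orders            # exponential weight per model order
    weights /= weights.sum()
    return weights @ order_predictions   # weighted mixture distribution

# Hypothetical usage: 4 model orders over a 5-symbol stroke vocabulary
preds = np.array([
    [0.20, 0.20, 0.20, 0.20, 0.20],     # order 0: near uniform
    [0.10, 0.40, 0.20, 0.20, 0.10],
    [0.05, 0.60, 0.15, 0.10, 0.10],
    [0.02, 0.80, 0.08, 0.05, 0.05],     # order 3: most confident
])
print(blend_orders(preds, coeff=1.5).round(3))
```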


International Conference on Acoustics, Speech, and Signal Processing | 2010

Characterization of movie genre based on music score

Aida Austin; Elliot Moore; Udit Gupta; Parag Chordia

While it is clear that the full emotional effect of a movie scene is carried through the successful interpretation of audio and visual information, music still carries a significant impact for interpretation of the director's intent and style. The intent of this study was to provide, using a new database, a preliminary understanding of the impact of timbral and select rhythm features in characterizing the differences among movie genres based on their film scores. For this study, a database of film scores from 98 movies was collected, containing instrumental (non-vocal) music from 25 romance, 25 drama, 23 horror, and 25 action movies. Both pair-wise genre classification and classification with all four genres were performed using support vector machines (SVM) in a ten-fold cross-validation test. The results of the study support the notion that high-intensity movies (i.e., Action and Horror) have musical cues that are measurably different from the musical scores of movies with more measured expressions of emotion (i.e., Drama and Romance).
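The classification setup maps directly onto standard tooling: an SVM evaluated with ten-fold cross-validation. A minimal scikit-learn sketch follows, with a random feature matrix standing in for the study's timbral and rhythm features.

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

# Hypothetical stand-in for timbral/rhythm features from 98 film scores
rng = np.random.default_rng(4)
X = rng.standard_normal((98, 20))
y = np.repeat(["romance", "drama", "horror", "action"], [25, 25, 23, 25])

clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=10)           # ten-fold cross-validation
print(f"mean accuracy: {scores.mean():.2f}")
```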


International Symposium/Conference on Music Information Retrieval | 2007

Raag Recognition Using Pitch-Class and Pitch-Class Dyad Distributions.

Parag Chordia; Alex Rae


International Computer Music Conference | 2006

Automatic Raag Classification of Pitch-tracked Performances Using Pitch-class and Pitch-class Dyad Distributions

Parag Chordia


International Symposium/Conference on Music Information Retrieval | 2012

Chord Recognition Using Duration-explicit Hidden Markov Models.

Ruofeng Chen; Weibin Shen; Ajay Srinivasamurthy; Parag Chordia

Collaboration


Dive into Parag Chordia's collaborations.

Top Co-Authors

Alex Rae, Georgia Institute of Technology

Ajay Srinivasamurthy, Georgia Institute of Technology

Avinash Sastry, Georgia Institute of Technology

Mark Godfrey, Georgia Institute of Technology

Aaron Albin, Georgia Institute of Technology

Sertan Şentürk, Georgia Institute of Technology

Weibin Shen, Georgia Institute of Technology