Publication


Featured research published by Aniruddh D. Patel.


Frontiers in Systems Neuroscience | 2014

The evolutionary neuroscience of musical beat perception: the Action Simulation for Auditory Prediction (ASAP) hypothesis

Aniruddh D. Patel; John R. Iversen

Every human culture has some form of music with a beat: a perceived periodic pulse that structures the perception of musical rhythm and serves as a framework for synchronized movement to music. What are the neural mechanisms of musical beat perception, and how did they evolve? One view, which dates back to Darwin and implicitly informs some current models of beat perception, is that the relevant neural mechanisms are relatively general and are widespread among animal species. On the basis of recent neural and cross-species data on musical beat processing, this paper argues for a different view. Here we argue that beat perception is a complex brain function involving temporally precise communication between auditory regions and motor planning regions of the cortex (even in the absence of overt movement). More specifically, we propose that simulation of periodic movement in motor planning regions provides a neural signal that helps the auditory system predict the timing of upcoming beats. This “action simulation for auditory prediction” (ASAP) hypothesis leads to testable predictions. We further suggest that ASAP relies on dorsal auditory pathway connections between auditory regions and motor planning regions via the parietal cortex, and suggest that these connections may be stronger in humans than in non-human primates due to the evolution of vocal learning in our lineage. This suggestion motivates cross-species research to determine which species are capable of human-like beat perception, i.e., beat perception that involves accurate temporal prediction of beat times across a fairly broad range of tempi.


Journal of the Acoustical Society of America | 2008

Perception of rhythmic grouping depends on auditory experience

John R. Iversen; Aniruddh D. Patel; Kengo Ohgushi

Many aspects of perception are known to be shaped by experience, but others are thought to be innate universal properties of the brain. A specific example comes from rhythm perception, where one of the fundamental perceptual operations is the grouping of successive events into higher-level patterns, an operation critical to the perception of language and music. Grouping has long been thought to be governed by innate perceptual principles established a century ago. The current work demonstrates instead that grouping can be strongly dependent on culture. Native English and Japanese speakers were tested for their perception of grouping of simple rhythmic sequences of tones. Members of the two cultures showed different patterns of perceptual grouping, demonstrating that these basic auditory processes are not universal but are shaped by experience. It is suggested that the observed perceptual differences reflect the rhythms of the two languages, and that native language can exert an influence on general auditory perception at a basic level.


Psychonomic Bulletin & Review | 2009

Making psycholinguistics musical: Self-paced reading time evidence for shared processing of linguistic and musical syntax

L. Robert Slevc; Jason C. Rosenberg; Aniruddh D. Patel

Linguistic processing, especially syntactic processing, is often considered a hallmark of human cognition; thus, the domain specificity or domain generality of syntactic processing has attracted considerable debate. The present experiments address this issue by simultaneously manipulating syntactic processing demands in language and music. Participants performed self-paced reading of garden path sentences, in which structurally unexpected words cause temporary syntactic processing difficulty. A musical chord accompanied each sentence segment, with the resulting sequence forming a coherent chord progression. When structurally unexpected words were paired with harmonically unexpected chords, participants showed substantially enhanced garden path effects. No such interaction was observed when the critical words violated semantic expectancy or when the critical chords violated timbral expectancy. These results support a prediction of the shared syntactic integration resource hypothesis (Patel, 2003), which suggests that music and language draw on a common pool of limited processing resources for integrating incoming elements into syntactic structures. Notations of the stimuli from this study may be downloaded from pbr.psychonomic-journals.org/content/supplemental.


Hearing Research | 2014

Can nonlinguistic musical training change the way the brain processes speech? The expanded OPERA hypothesis.

Aniruddh D. Patel

A growing body of research suggests that musical training has a beneficial impact on speech processing (e.g., perception of speech in noise and of prosody). As this research moves forward, two key questions need to be addressed: 1) Can purely instrumental musical training have such effects? 2) If so, how and why would such effects occur? The current paper offers a conceptual framework for understanding such effects based on mechanisms of neural plasticity. The expanded OPERA hypothesis proposes that when music and speech share sensory or cognitive processing mechanisms in the brain, and music places higher demands on these mechanisms than speech does, this sets the stage for musical training to enhance speech processing. When these higher demands are combined with the emotional rewards of music, the frequent repetition that musical training engenders, and the focused attention that it requires, neural plasticity is activated and makes lasting changes in brain structure and function that impact speech processing. Initial data from a new study motivated by the OPERA hypothesis are presented, focusing on the impact of musical training on speech perception in cochlear-implant users. Suggestions for the development of animal models to test OPERA are also presented, to help motivate neurophysiological studies of how auditory training using non-biological sounds can impact the brain's perceptual processing of species-specific vocalizations. This article is part of a Special Issue.


PLOS Biology | 2014

The Evolutionary Biology of Musical Rhythm: Was Darwin Wrong?

Aniruddh D. Patel

Are other species able to process basic musical rhythm in the same way that humans do? Darwin supported this intuitive idea, but it is being challenged by new cross-species research.


Cognition | 2015

Synchronization to auditory and visual rhythms in hearing and deaf individuals

John R. Iversen; Aniruddh D. Patel; Brenda Nicodemus; Karen Emmorey

A striking asymmetry in human sensorimotor processing is that humans synchronize movements to rhythmic sound with far greater precision than to temporally equivalent visual stimuli (e.g., to an auditory vs. a flashing visual metronome). Traditionally, this finding is thought to reflect a fundamental difference in auditory vs. visual processing, i.e., superior temporal processing by the auditory system and/or privileged coupling between the auditory and motor systems. It is unclear whether this asymmetry is an inevitable consequence of brain organization or whether it can be modified (or even eliminated) by stimulus characteristics or by experience. With respect to stimulus characteristics, we found that a moving, colliding visual stimulus (a silent image of a bouncing ball with a distinct collision point on the floor) was able to drive synchronization nearly as accurately as sound in hearing participants. To study the role of experience, we compared synchronization to flashing metronomes in hearing and profoundly deaf individuals. Deaf individuals performed better than hearing individuals when synchronizing with visual flashes, suggesting that cross-modal plasticity enhances the ability to synchronize with temporally discrete visual stimuli. Furthermore, when deaf (but not hearing) individuals synchronized with the bouncing ball, their tapping patterns suggest that visual timing may access higher-order beat perception mechanisms for deaf individuals. These results indicate that the auditory advantage in rhythmic synchronization is more experience- and stimulus-dependent than has been previously reported.
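
In tapping paradigms like this one, synchronization precision is standardly quantified from tap-to-beat asynchronies; the variability (SD) of those asynchronies is lower for auditory metronomes than for flashes. A minimal sketch of that generic measure (a hypothetical helper, not the paper's own analysis code):

```python
import numpy as np

def tap_asynchronies(taps, beats):
    """Asynchrony of each tap relative to the nearest metronome onset.

    taps, beats: 1-D sorted arrays of onset times in seconds.
    The spread (SD) of the returned values is the usual index of
    synchronization precision; the mean indexes anticipatory tapping.
    """
    idx = np.searchsorted(beats, taps)          # insertion point of each tap
    idx = np.clip(idx, 1, len(beats) - 1)
    prev, nxt = beats[idx - 1], beats[idx]      # neighbouring beats
    nearest = np.where(taps - prev <= nxt - taps, prev, nxt)
    return taps - nearest
```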


Neuroscience & Biobehavioral Reviews | 2017

Temporal modulations in speech and music

Nai Ding; Aniruddh D. Patel; Lin Chen; Henry Butler; Cheng Luo; David Poeppel

Speech and music have structured rhythms. Here we discuss a major acoustic correlate of spoken and musical rhythms, the slow (0.25–32 Hz) temporal modulations in sound intensity, and compare the modulation properties of speech and music. We analyze these modulations using over 25 h of speech and over 39 h of recordings of Western music. We show that the speech modulation spectrum is highly consistent across 9 languages (including languages with typologically different rhythmic characteristics). A different, but similarly consistent, modulation spectrum is observed for music, including classical music played by single instruments of different types, as well as symphonic, jazz, and rock music. The temporal modulations of speech and music show broad but well-separated peaks, around 5 Hz and 2 Hz respectively. These acoustically dominant time scales may be intrinsic features of speech and music, a possibility that should be investigated using more culturally diverse samples in each domain. Distinct modulation timescales for speech and music could facilitate their perceptual analysis and neural processing.
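
As a rough illustration of the analysis described above, the sketch below estimates the temporal modulation spectrum of a sound: extract the slow intensity envelope, then Fourier-analyze it over 0.25–32 Hz. This is a deliberately simplified, hypothetical version (the study's pipeline used a cochlear-filterbank front end), assuming a mono audio array `audio` sampled at `fs` Hz:

```python
import numpy as np
from scipy.signal import hilbert, resample_poly

def modulation_spectrum(audio, fs, env_fs=100):
    """Rough temporal modulation spectrum of a sound.

    audio: 1-D array of samples; fs: sampling rate in Hz.
    Returns (freqs, power) restricted to modulation rates of 0.25-32 Hz.
    """
    # 1. Extract the slow intensity envelope via the Hilbert transform.
    envelope = np.abs(hilbert(audio))
    # 2. Downsample: slow modulations need only ~100 Hz resolution.
    envelope = resample_poly(envelope, env_fs, fs)
    envelope -= envelope.mean()                # remove DC before the FFT
    # 3. Fourier-analyze the envelope to get the modulation spectrum.
    power = np.abs(np.fft.rfft(envelope)) ** 2
    freqs = np.fft.rfftfreq(len(envelope), d=1.0 / env_fs)
    keep = (freqs >= 0.25) & (freqs <= 32.0)   # band examined in the paper
    return freqs[keep], power[keep]
```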


Scientific Reports | 2015

Musical training, individual differences and the cocktail party problem

Jayaganesh Swaminathan; Christine R. Mason; Timothy Streeter; Virginia Best; Gerald Kidd; Aniruddh D. Patel

Are musicians better able to understand speech in noise than non-musicians? Recent findings have produced contradictory results. Here we addressed this question by asking musicians and non-musicians to understand target sentences masked by other sentences presented from different spatial locations, the classic ‘cocktail party problem’ in speech science. We found that musicians obtained a substantial benefit in this situation, with thresholds ~6 dB better than non-musicians. Large individual differences in performance were noted, particularly for the non-musically trained group. Furthermore, in different conditions we manipulated the spatial location and intelligibility of the masking sentences, thus changing the amount of ‘informational masking’ (IM) while keeping the amount of ‘energetic masking’ (EM) relatively constant. When the maskers were unintelligible and spatially separated from the target (low in IM), musicians and non-musicians performed comparably. These results suggest that the characteristics of speech maskers and the amount of IM can influence the magnitude of the differences found between musicians and non-musicians in multiple-talker “cocktail party” environments. Furthermore, considering the task in terms of the EM-IM distinction provides a conceptual framework for future behavioral and neuroscientific studies that explore the underlying sensory and cognitive mechanisms contributing to enhanced “speech-in-noise” perception by musicians.
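
Thresholds in such tasks are expressed as the target-to-masker ratio in dB at which a criterion level of intelligibility is reached. A minimal sketch of mixing a target with a masker at a given ratio (a hypothetical helper, not the study's stimulus code):

```python
import numpy as np

def mix_at_snr(target, masker, snr_db):
    """Scale a masker so the target-to-masker power ratio equals snr_db.

    target, masker: 1-D audio arrays of equal length.
    """
    p_target = np.mean(target ** 2)            # mean signal power
    p_masker = np.mean(masker ** 2)
    # Choose gain so 10*log10(p_target / (gain**2 * p_masker)) == snr_db.
    gain = np.sqrt(p_target / (p_masker * 10 ** (snr_db / 10)))
    return target + gain * masker
```

On this scale, a ~6 dB advantage means reaching the same performance with a masker of roughly four times the power (about twice the amplitude).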


Proceedings of the National Academy of Sciences of the United States of America | 2011

The motor origins of human and avian song structure.

Adam Tierney; Frank A. Russo; Aniruddh D. Patel

Human song exhibits great structural diversity, yet certain aspects of melodic shape (how pitch is patterned over time) are widespread. These include a predominance of arch-shaped and descending melodic contours in musical phrases, a tendency for phrase-final notes to be relatively long, and a bias toward small pitch movements between adjacent notes in a melody [Huron D (2006) Sweet Anticipation: Music and the Psychology of Expectation (MIT Press, Cambridge, MA)]. What is the origin of these features? We hypothesize that they stem from motor constraints on song production (i.e., the energetic efficiency of their underlying motor actions) rather than being innately specified. One prediction of this hypothesis is that any animals subject to similar motor constraints on song will exhibit similar melodic shapes, no matter how distantly related those animals are to humans. Conversely, animals that do not share similar motor constraints on song will not exhibit convergent melodic shapes. Birds provide an ideal case for testing these predictions, because their peripheral mechanisms of song production have notable similarities to, and differences from, human vocal mechanisms [Riede T, Goller F (2010) Brain Lang 115:69–80]. We use these similarities and differences to make specific predictions about shared and distinct features of human and avian song structure and find that these predictions are confirmed by empirical analysis of diverse human and avian song samples.
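
The melodic-shape biases listed above are straightforward to operationalize. The sketch below computes two illustrative descriptors for a phrase given as note pitches and durations; these are hypothetical measures for illustration, not the authors' exact analysis:

```python
import numpy as np

def phrase_features(pitches, durations):
    """Two simple descriptors of a melodic phrase.

    pitches : note pitches in semitones, in temporal order.
    durations: note durations in seconds.
    Returns (mean absolute interval between adjacent notes, reflecting
    the 'small pitch movements' bias; final-note duration relative to
    the mean of the preceding notes, reflecting phrase-final lengthening).
    """
    pitches = np.asarray(pitches, dtype=float)
    durations = np.asarray(durations, dtype=float)
    mean_interval = np.abs(np.diff(pitches)).mean()
    final_lengthening = durations[-1] / durations[:-1].mean()
    return mean_interval, final_lengthening
```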


Neuropsychologia | 2015

A music perception disorder (congenital amusia) influences speech comprehension.

Fang Liu; Cunmei Jiang; Bei Wang; Yi Xu; Aniruddh D. Patel

This study investigated the underlying link between speech and music by examining whether and to what extent congenital amusia, a musical disorder characterized by degraded pitch processing, would impact spoken sentence comprehension for speakers of Mandarin, a tone language. Sixteen Mandarin-speaking amusics and 16 matched controls were tested on the intelligibility of news-like Mandarin sentences with natural and flat fundamental frequency (F0) contours (created via speech resynthesis) under four signal-to-noise ratio (SNR) conditions (no noise, +5, 0, and -5 dB SNR). While speech intelligibility in quiet and extremely noisy conditions (SNR = -5 dB) was not significantly compromised by flattened F0, both amusic and control groups achieved better performance with natural-F0 sentences than flat-F0 sentences under moderately noisy conditions (SNR = +5 and 0 dB). Relative to normal listeners, amusics demonstrated reduced speech intelligibility in both quiet and noise, regardless of whether the F0 contours of the sentences were natural or flattened. This deficit in speech intelligibility was not associated with impaired pitch perception in amusia. These findings provide evidence for impaired speech comprehension in congenital amusia, suggesting that the deficit of amusics extends beyond pitch processing and includes segmental processing.
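
The flat-F0 condition described above was created with speech resynthesis; the core manipulation is replacing the natural F0 contour with a constant value before resynthesizing. A minimal sketch of that step, assuming a frame-wise F0 track in Hz with NaN marking unvoiced frames (`flatten_f0` is a hypothetical helper, and the constant used here, the utterance mean, is an assumption):

```python
import numpy as np

def flatten_f0(f0_track):
    """Replace a time-varying F0 contour with a constant (utterance-mean) value.

    f0_track: 1-D array of frame-wise F0 estimates in Hz, with np.nan
    in unvoiced frames. All voiced frames receive the mean F0, yielding
    the monotone 'flat-F0' contour to feed back into resynthesis.
    """
    flat = np.full(f0_track.shape, np.nan)
    voiced = ~np.isnan(f0_track)
    flat[voiced] = np.nanmean(f0_track)    # one constant pitch throughout
    return flat
```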

Collaboration


Dive into Aniruddh D. Patel's collaborations.

Top Co-Authors

John R. Iversen

The Neurosciences Institute


Adam Tierney

Northwestern University
