Publication


Featured research published by Micah R. Bregman.


Annals of the New York Academy of Sciences | 2009

Studying Synchronization to a Musical Beat in Nonhuman Animals

Aniruddh D. Patel; John R. Iversen; Micah R. Bregman; Irena Schulz

The recent discovery of spontaneous synchronization to music in a nonhuman animal (the sulphur‐crested cockatoo Cacatua galerita eleonora) raises several questions. How does this behavior differ from nonmusical synchronization abilities in other species, such as synchronized frog calls or firefly flashes? What significance does the behavior have for debates over the evolution of human music? What kinds of animals can synchronize to musical rhythms, and what are the key methodological issues for research in this area? This paper addresses these questions and proposes some refinements to the “vocal learning and rhythmic synchronization hypothesis.”


Language and Linguistics Compass | 2011

How Talker Identity Relates to Language Processing

Sarah C. Creel; Micah R. Bregman

Speech carries both linguistic content – phonemes, words, sentences – and talker information, sometimes called ‘indexical information’. While talker variability materially affects language processing, it has historically been regarded as a curiosity rather than a central influence, possibly because talker variability does not fit with a conception of speech sounds as abstract categories. Despite this relegation to the periphery, a long history of research suggests that phoneme perception and talker perception are interrelated. The current review argues that speech perception itself may arise from phylogenetically earlier vocal recognition, and discusses evidence that many cues to talker identity are also cues to speech-sound identity. Rather than brushing talker differences aside, explicit examination of the role of talker variability and talker identity in language processing can illuminate our understanding of the origins of spoken language, and the nature of language representations themselves.

Spoken language contains a great amount of communicative information. Speech can refer to things, but it also identifies or classifies the person speaking. This dual function links language to other types of vocal communication systems, and means that recognizing speech and recognizing speakers are intertwined. Imagine someone says the word cat. The vocal pitch of ‘cat’ is 200 Hz, and the delay between opening of the vocal tract and the beginning of vocal-cord vibration is 80 ms. The listener has to process all of this acoustic information to identify this word as /kæt/. Many would agree that the vocal pitch is relatively unimportant, while the 80-ms voice onset time is crucial because it distinguishes the phoneme /k/ from the very similar phoneme /g/. But what happens to the ‘unimportant’ information? On a modular view, information not linked to phoneme identity is discarded by the speech-processing system (e.g. Liberman and Mattingly 1985). Nonetheless, details such as vocal pitch may influence comprehension because they indicate the speaker’s identity, and can refine the listener’s expectations about what particular speakers are likely to talk about. Differing theories of language processing have different accounts of talker identification. On one view, two separate, independent systems process speech and talker identification.


Cognition | 2012

Stimulus-dependent flexibility in non-human auditory pitch processing

Micah R. Bregman; Aniruddh D. Patel; Timothy Q. Gentner

Songbirds and humans share many parallels in vocal learning and auditory sequence processing. However, the two groups differ notably in their abilities to recognize acoustic sequences shifted in absolute pitch (pitch height). Whereas humans maintain accurate recognition of words or melodies over large pitch height changes, songbirds are comparatively much poorer at recognizing pitch-shifted tone sequences. This apparent disparity may reflect fundamental differences in the neural mechanisms underlying the representation of sound in songbirds. Alternatively, because non-human studies have used sine-tone stimuli almost exclusively, tolerance to pitch height changes in the context of natural signals may be underestimated. Here, we show that European starlings, a species of songbird, can maintain accurate recognition of the songs of other starlings when the pitch of those songs is shifted by as much as ±40%. We observed accurate recognition even for songs pitch-shifted well outside the range of frequencies used during training, and even though much smaller pitch shifts in conspecific songs are easily detected. With similar training using human piano melodies, recognition of the pitch-shifted melodies is very limited. These results demonstrate that non-human pitch processing is more flexible than previously thought and that the flexibility in pitch processing strategy is stimulus dependent.
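For concreteness, a ±40% change in pitch height corresponds to a sizeable musical transposition when expressed on the log-frequency (semitone) scale that human melody recognition tolerates. A minimal sketch of that conversion (the function name `shift_to_semitones` is my own, not from the paper):

```python
import math

def shift_to_semitones(ratio: float) -> float:
    """Convert a frequency ratio to a signed semitone offset (12-TET)."""
    return 12 * math.log2(ratio)

# A +40% pitch shift is a ratio of 1.4:
up = shift_to_semitones(1.4)    # roughly +5.8 semitones
# A -40% shift (ratio 0.6) is larger in log-frequency terms:
down = shift_to_semitones(0.6)  # roughly -8.8 semitones
```

Note the asymmetry: because semitones are logarithmic, a 40% downward shift spans more of the musical scale than a 40% upward shift, which is one reason pitch shifts are often reported in semitones in the human literature.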


Proceedings of the National Academy of Sciences of the United States of America | 2016

Songbirds use spectral shape, not pitch, for sound pattern recognition

Micah R. Bregman; Aniruddh D. Patel; Timothy Q. Gentner

Significance: Past work characterizes songbirds as having a strong bias to rely on absolute pitch for the recognition of tone sequences. In a series of behavioral experiments, we find that the human percepts of both pitch and timbre are poor descriptions of the perceptual cues used by birds for melody recognition. We suggest instead that auditory sequence recognition in some species reflects more direct perception of acoustic spectral shape. Signals that preserve this shape, even in the absence of pitch, allow for generalization of learned patterns.

Humans easily recognize “transposed” musical melodies shifted up or down in log frequency. Surprisingly, songbirds seem to lack this capacity, although they can learn to recognize human melodies and use complex acoustic sequences for communication. Decades of research have led to the widespread belief that songbirds, unlike humans, are strongly biased to use absolute pitch (AP) in melody recognition. This work relies almost exclusively on acoustically simple stimuli that may belie sensitivities to more complex spectral features. Here, we investigate melody recognition in a species of songbird, the European Starling (Sturnus vulgaris), using tone sequences that vary in both pitch and timbre. We find that small manipulations altering either pitch or timbre independently can drive melody recognition to chance, suggesting that both percepts are poor descriptors of the perceptual cues used by birds for this task. Instead we show that melody recognition can generalize even in the absence of pitch, as long as the spectral shapes of the constituent tones are preserved. These results challenge conventional views regarding the use of pitch cues in nonhuman auditory sequence recognition.


Communicative & Integrative Biology | 2009

Avian and human movement to music: Two further parallels

Aniruddh D. Patel; John R. Iversen; Micah R. Bregman; Irena Schulz

It has recently been demonstrated that a nonhuman animal (the medium sulphur-crested cockatoo Cacatua galerita eleonora) can entrain its rhythmic movements to the beat of human music across a wide range of tempi. Entrainment occurs in “synchronized bouts”, occasional stretches of synchrony embedded in longer sequences of rhythmic movement to music. Here we examine non-synchronized rhythmic movements made while dancing to music, and find strong evidence for a preferred tempo around 126 beats per minute [bpm]. The animal shows best synchronization to music when the musical tempo is near this preferred tempo. The tendency to dance to music at a preferred tempo, and to synchronize best when the music is near this tempo, parallels how young humans move to music. These findings support the idea that avian and human synchronization to music have similar neurobiological foundations.


Encyclopedia of Animal Behavior | 2010

Syntactically Complex Vocal Systems

Micah R. Bregman; Timothy Q. Gentner

The temporal patterning of vocal communication signals is widespread and suggests parallels to human linguistic capacities. Two levels of syntactic structure may be defined: phonological and lexical. Lexical syntax involves the patterning of meaningful units into longer, meaningful sequences, as in human language. In phonological syntax, a sequence of otherwise meaningless sound units conveys meaning. The syntactic complexity of animal vocalizations has been measured using information theoretic analyses and formal grammars across a range of taxa. Birds and nonhuman primates have been the most extensively studied. Phonological syntax is common across a wide range of groups and can involve rule-governed or rule-like organization of vocal sequences. Lexical syntax is very rare – having been observed only in nonhuman primates, and then only in a limited capacity that may lack open compositional rules.


Cognition | 2014

Gradient language dominance affects talker learning

Micah R. Bregman; Sarah C. Creel


Archive | 2008

Investigating the human-specificity of synchronization to music

Aniruddh D. Patel; John R. Iversen; Micah R. Bregman; Irena Schulz; Charles Schulz


Empirical Musicology Review | 2013

A method for testing synchronization to a musical beat in domestic horses (Equus ferus caballus)

Micah R. Bregman; John R. Iversen; David Lichman; Meredith Reinhart; Aniruddh D. Patel


Cognitive Science | 2012

Learning to recognize unfamiliar voices: the role of language familiarity and music experience

Micah R. Bregman; Sarah C. Creel

Collaboration


Micah R. Bregman's top co-authors:

John R. Iversen

The Neurosciences Institute


Irena Schulz

University of California


Sarah C. Creel

University of California


Charles Schulz

University of California