Publication


Featured research published by Josh H. McDermott.


Nature Neuroscience | 2003

The evolution of the music faculty: a comparative perspective

Marc D. Hauser; Josh H. McDermott

We propose a theoretical framework for exploring the evolution of the music faculty from a comparative perspective. This framework addresses questions of phylogeny, adaptive function, innate biases and perceptual mechanisms. We argue that comparative studies can make two unique contributions to investigations of the origins of music. First, musical exposure can be controlled and manipulated to an extent not possible in humans. Second, any features of music perception found in nonhuman animals cannot be part of an adaptation for music, and must instead be side effects of more general features of perception or cognition. We review studies that use animal research to target specific aspects of music perception (such as octave generalization), as well as studies that investigate more general and shared systems of the mind/brain that may be relevant to music (such as rhythm perception and emotional encoding). Finally, we suggest several directions for future work, following the lead of comparative studies on the language faculty.


Current Biology | 2010

Individual differences reveal the basis of consonance

Josh H. McDermott; Andriana J. Lehr; Andrew J. Oxenham

Some combinations of musical notes are consonant (pleasant), whereas others are dissonant (unpleasant), a distinction central to music. Explanations of consonance in terms of acoustics, auditory neuroscience, and enculturation have been debated for centuries. We utilized individual differences to distinguish the candidate theories. We measured preferences for musical chords as well as nonmusical sounds that isolated particular acoustic factors, specifically the beating and the harmonic relationships between frequency components, two factors that have long been thought to potentially underlie consonance. Listeners preferred stimuli without beats and with harmonic spectra, but across more than 250 subjects, only the preference for harmonic spectra was consistently correlated with preferences for consonant over dissonant chords. Harmonicity preferences were also correlated with the number of years subjects had spent playing a musical instrument, suggesting that exposure to music amplifies preferences for harmonic frequencies because of their musical importance. Harmonic spectra are prominent features of natural sounds, and our results indicate that they also underlie the perception of consonance.


Current Opinion in Neurobiology | 2008

Music perception, pitch, and the auditory system

Josh H. McDermott; Andrew J. Oxenham

The perception of music depends on many culture-specific factors, but is also constrained by properties of the auditory system. This has been best characterized for those aspects of music that involve pitch. Pitch sequences are heard in terms of relative as well as absolute pitch. Pitch combinations give rise to emergent properties not present in the component notes. In this review we discuss the basic auditory mechanisms contributing to these and other perceptual effects in music.


Current Biology | 2009

The cocktail party problem

Josh H. McDermott

Natural auditory environments, be they cocktail parties or rain forests, contain many things that concurrently make sounds. The cocktail party problem is the task of hearing a sound of interest, often a speech signal, in this sort of complex auditory setting (Figure 1). The problem is intrinsically quite difficult, and there has been longstanding interest in how humans manage to solve it.


Cognition | 2007

Nonhuman primates prefer slow tempos but dislike music overall

Josh H. McDermott; Marc D. Hauser

Human adults generally find fast tempos more arousing than slow tempos, with tempo frequently manipulated in music to alter tension and emotion. We used a previously published method [McDermott, J., & Hauser, M. (2004). Are consonant intervals music to their ears? Spontaneous acoustic preferences in a nonhuman primate. Cognition, 94(2), B11-B21] to test cotton-top tamarins and common marmosets, two New World primates, for their spontaneous responses to stimuli that varied systematically with respect to tempo. Across several experiments, we found that both tamarins and marmosets preferred slow tempos to fast. It is possible that the observed preferences were due to arousal, and that this effect is homologous to the human response to tempo. In other respects, however, these two monkey species showed striking differences compared to humans. Specifically, when presented with a choice between slow tempo musical stimuli, including lullabies, and silence, tamarins and marmosets preferred silence, whereas humans, when similarly tested, preferred music. Thus, despite the possibility of homologous mechanisms for tempo perception in human and nonhuman primates, there appear to be motivational ties to music that are uniquely human.


The Journal of Neuroscience | 2013

Cortical Pitch Regions in Humans Respond Primarily to Resolved Harmonics and Are Located in Specific Tonotopic Regions of Anterior Auditory Cortex

Sam Norman-Haignere; Nancy Kanwisher; Josh H. McDermott

Pitch is a defining perceptual property of many real-world sounds, including music and speech. Classically, theories of pitch perception have differentiated between temporal and spectral cues. These cues are rendered distinct by the frequency resolution of the ear, such that some frequencies produce “resolved” peaks of excitation in the cochlea, whereas others are “unresolved,” providing a pitch cue only via their temporal fluctuations. Despite longstanding interest, the neural structures that process pitch, and their relationship to these cues, have remained controversial. Here, using fMRI in humans, we report the following: (1) consistent with previous reports, all subjects exhibited pitch-sensitive cortical regions that responded substantially more to harmonic tones than frequency-matched noise; (2) the response of these regions was mainly driven by spectrally resolved harmonics, although they also exhibited a weak but consistent response to unresolved harmonics relative to noise; (3) the response of pitch-sensitive regions to a parametric manipulation of resolvability tracked psychophysical discrimination thresholds for the same stimuli; and (4) pitch-sensitive regions were localized to specific tonotopic regions of anterior auditory cortex, extending from a low-frequency region of primary auditory cortex into a more anterior and less frequency-selective region of nonprimary auditory cortex. These results demonstrate that cortical pitch responses are located in a stereotyped region of anterior auditory cortex and are predominantly driven by resolved frequency components in a way that mirrors behavior.


Nature | 2008

The evolution of music

Josh H. McDermott

In the second of a nine-part essay series, Josh McDermott explores the origins of the human urge to make and hear music.


Perception | 2001

Beyond junctions: nonlocal form constraints on motion interpretation

Josh H. McDermott; Yair Weiss; Edward H. Adelson

Because of the aperture problem, local motion measurements must be combined across space. However, not all motions should be combined. Some arise from distinct objects and should be segregated, and some are due to occlusion and should be discounted because they are spurious. Humans have little difficulty ignoring spurious motions at occlusions and correctly integrating object motion, and are evidently making use of form information to do so. There is a large body of theoretical and empirical evidence supporting the importance of form processes involving junctions in the way motion is integrated. To assess the role of more complex form analysis, we manipulated nonlocal form cues that could be varied independently of local junctions. Using variants on diamond and plaid stimuli used in previous studies, we found that manipulations distant from the junctions themselves could cause large changes in motion interpretation. Nonlocal information often overrides the integration decisions that would be expected from local cues. The mechanisms implicated appear to involve surface segmentation, amodal completion, and depth ordering.


Proceedings of the National Academy of Sciences of the United States of America | 2012

The basis of musical consonance as revealed by congenital amusia

Marion Cousineau; Josh H. McDermott; Isabelle Peretz

Some combinations of musical notes sound pleasing and are termed “consonant,” but others sound unpleasant and are termed “dissonant.” The distinction between consonance and dissonance plays a central role in Western music, and its origins have posed one of the oldest and most debated problems in perception. In modern times, dissonance has been widely attributed to “beating”: interference between frequency components in the cochlea that is believed to be more pronounced in dissonant than in consonant sounds. However, harmonic frequency relations, a higher-order sound attribute closely related to pitch perception, have also been proposed to account for consonance. To tease apart theories of musical consonance, we tested sound preferences in individuals with congenital amusia, a neurogenetic disorder characterized by abnormal pitch perception. We assessed amusics’ preferences for musical chords as well as for the isolated acoustic properties of beating and harmonicity. In contrast to control subjects, amusic listeners showed no preference for consonance, rating the pleasantness of consonant chords no higher than that of dissonant chords. Amusics also failed to exhibit the normally observed preference for harmonic over inharmonic tones, nor could they discriminate such tones from each other. Despite these abnormalities, amusics exhibited normal preferences and discrimination for stimuli with and without beating. This dissociation indicates that, contrary to classic theories, beating is unlikely to underlie consonance. Our results instead suggest that harmonicity is a foundation of music preferences, and illustrate how amusia may be used to investigate normal auditory function.


Proceedings of the National Academy of Sciences of the United States of America | 2011

Recovering sound sources from embedded repetition

Josh H. McDermott; David Wrobleski; Andrew J. Oxenham

Cocktail parties and other natural auditory environments present organisms with mixtures of sounds. Segregating individual sound sources is thought to require prior knowledge of source properties, yet these presumably cannot be learned unless the sources are segregated first. Here we show that the auditory system can bootstrap its way around this problem by identifying sound sources as repeating patterns embedded in the acoustic input. Due to the presence of competing sounds, source repetition is not explicit in the input to the ear, but it produces temporal regularities that listeners detect and use for segregation. We used a simple generative model to synthesize novel sounds with naturalistic properties. We found that such sounds could be segregated and identified if they occurred more than once across different mixtures, even when the same sounds were impossible to segregate in single mixtures. Sensitivity to the repetition of sound sources can permit their recovery in the absence of other segregation cues or prior knowledge of sounds, and could help solve the cocktail party problem.

Collaboration


Dive into Josh H. McDermott's collaborations.

Top Co-Authors

Sam Norman-Haignere

Massachusetts Institute of Technology


Nancy Kanwisher

Massachusetts Institute of Technology


James Traer

Massachusetts Institute of Technology


Edward H. Adelson

Massachusetts Institute of Technology


Maria Chait

University College London


Makio Kashino

Tokyo Institute of Technology