Publications


Featured research published by Gregory A. Bryant.


Human Nature | 2003

Music and dance as a coalition signaling system

Edward H. Hagen; Gregory A. Bryant

Evidence suggests that humans might have neurological specializations for music processing, but a compelling adaptationist account of music and dance is lacking. The sexual selection hypothesis cannot easily account for the widespread performance of music and dance in groups (especially synchronized performances), and the social bonding hypothesis has severe theoretical difficulties. Humans are unique among the primates in their ability to form cooperative alliances between groups in the absence of consanguineal ties. We propose that this unique form of social organization is predicated on music and dance. Music and dance may have evolved as a coalition signaling system that could, among other things, credibly communicate coalition quality, thus permitting meaningful cooperative relationships between groups. This capability may have evolved from coordinated territorial defense signals that are common in many social species, including chimpanzees. We present a study in which manipulation of music synchrony significantly altered subjects’ perceptions of music quality, and in which subjects’ perceptions of music quality were correlated with their perceptions of coalition quality, supporting our hypothesis. Our hypothesis also has implications for the evolution of psychological mechanisms underlying cultural production in other domains such as food preparation, clothing and body decoration, storytelling and ritual, and tools and other artifacts.


Proceedings of the Royal Society of London B: Biological Sciences | 2010

Adaptations in humans for assessing physical strength from the voice

Aaron Nathaniel Sell; Gregory A. Bryant; Leda Cosmides; John Tooby; Daniel Sznycer; Christopher von Rueden; Andre Krauss; Michael Gurven

Recent research has shown that humans, like many other animals, have a specialization for assessing fighting ability from visual cues. Because it is probable that the voice contains cues of strength and formidability that are not available visually, we predicted that selection has also equipped humans with the ability to estimate physical strength from the voice. We found that subjects accurately assessed upper-body strength in voices taken from eight samples across four distinct populations and language groups: the Tsimane of Bolivia, Andean herder-horticulturalists, and United States and Romanian college students. Regardless of whether raters were told to assess height, weight, strength or fighting ability, they produced similar ratings that tracked upper-body strength independent of height and weight. Male voices were more accurately assessed than female voices, which is consistent with ethnographic data showing a greater tendency among males to engage in violent aggression. Raters extracted information about strength from the voice that was not supplied from visual cues, and were accurate with both familiar and unfamiliar languages. These results provide, to our knowledge, the first direct evidence that both men and women can accurately assess men’s physical strength from the voice, and suggest that estimates of strength are used to assess fighting ability.


Metaphor and Symbol | 2002

Recognizing Verbal Irony in Spontaneous Speech

Gregory A. Bryant; Jean E. Fox Tree

We explored the differential impact of auditory information and written contextual information on the recognition of verbal irony in spontaneous speech. Based on relevance theory, we predicted that speakers would provide acoustic disambiguation cues when speaking in situations that lack other sources of information, such as a visual channel. We further predicted that listeners would use this information, in addition to context, when interpreting the utterances. People were presented with spontaneously produced ironic and nonironic utterances from radio talk shows in written or auditory form, with or without written contextual information. When the utterances were read without written contextual information, all utterances were rated as equally ironic. But when they were heard as opposed to read, or when they were presented in irony-biasing contexts, originally ironic utterances were rated as more sarcastic than originally nonironic utterances. This evidence suggests both acoustic and contextual information are used when inferring ironic intent in spontaneous speech, and validates previous manipulations of intonation in studies of irony understanding.


Language and Speech | 2005

Is there an Ironic Tone of Voice?

Gregory A. Bryant; Jean E. Fox Tree

Research on nonverbal vocal cues and verbal irony has often relied on the concept of an ironic tone of voice. Here we provide acoustic analysis and experimental evidence that this notion is oversimplified and misguided. Acoustic analyses of spontaneous ironic speech extracted from talk radio shows, both ambiguous and unambiguous in written form, revealed only a difference in amplitude variability compared to matched nonironic speech from the same sources, and that was only among the most clear-cut items. In a series of experiments, participants rated content-filtered versions of the same ironic and nonironic utterances on a range of affective and linguistic dimensions. Listeners did not rely on any set of vocal cues to identify verbal irony that was separate from other emotional and linguistic judgments. We conclude that there is no particular ironic tone of voice and that listeners interpret verbal irony by combining a variety of cues, including information outside of the linguistic context.
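
For readers unfamiliar with the technique, content-filtered speech is usually produced by low-pass filtering: the words become unintelligible while pitch, loudness, and rhythm survive. Below is a minimal Python sketch of that idea, assuming a roughly 400 Hz cutoff and hypothetical file names; the paper's exact filtering parameters may differ.

    import librosa
    import soundfile as sf
    from scipy.signal import butter, filtfilt

    # Load an utterance at its native sampling rate (file name is hypothetical)
    y, sr = librosa.load("utterance.wav", sr=None)

    # 4th-order Butterworth low-pass: strips the spectral detail needed to
    # recognize words while leaving prosodic contours largely intact
    b, a = butter(N=4, Wn=400.0, btype="low", fs=sr)
    y_filtered = filtfilt(b, a, y)

    # Write out the content-filtered stimulus for use in listening tasks
    sf.write("utterance_filtered.wav", y_filtered, sr)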


Biology Letters | 2009

Vocal cues of ovulation in human females

Gregory A. Bryant; Martie G. Haselton

Recent research has documented a variety of ovulatory cues in humans, and in many nonhuman species, the vocal channel provides cues of reproductive state. We collected two sets of vocal samples from 69 normally ovulating women: one set during the follicular (high-fertility) phase of the cycle and one set during the luteal (low-fertility) phase, with ovulation confirmed by luteinizing hormone tests. In these samples we measured fundamental frequency (pitch), formant dispersion, jitter, shimmer, harmonics-to-noise ratio and speech rate. When speaking a simple introductory sentence, women’s pitch increased during high- as compared with low-fertility, and this difference was the greatest for women whose voices were recorded on the two highest fertility days within the fertile window (the 2 days just before ovulation). This pattern did not occur when the same women produced vowels. The high- versus low-fertility difference in pitch was associated with the approach of ovulation and not menstrual onset, thus representing, to our knowledge, the first research to show a specific cyclic fertility cue in the human voice. We interpret this finding as evidence of a fertility-related enhancement of femininity consistent with other research documenting attractiveness-related changes associated with ovulation.
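
As a rough illustration of how such voice measures are commonly computed, the Python sketch below uses the praat-parselmouth library to extract mean pitch, jitter, shimmer, and harmonics-to-noise ratio from a recording. The file name and analysis parameters are placeholder assumptions, not the authors' actual pipeline.

    import parselmouth
    from parselmouth.praat import call

    # Load a mono voice recording (hypothetical file name)
    snd = parselmouth.Sound("voice_sample.wav")

    # Fundamental frequency (pitch) via Praat's autocorrelation method
    pitch = snd.to_pitch(pitch_floor=75.0, pitch_ceiling=500.0)
    f0 = pitch.selected_array["frequency"]
    mean_f0 = f0[f0 > 0].mean()  # average over voiced frames only

    # Jitter and shimmer are computed from a point process of glottal pulses
    pulses = call(snd, "To PointProcess (periodic, cc)", 75.0, 500.0)
    jitter = call(pulses, "Get jitter (local)", 0, 0, 0.0001, 0.02, 1.3)
    shimmer = call([snd, pulses], "Get shimmer (local)",
                   0, 0, 0.0001, 0.02, 1.3, 1.6)

    # Harmonics-to-noise ratio (Praat codes unvoiced frames as -200 dB)
    harmonicity = snd.to_harmonicity_cc()
    hnr = harmonicity.values[harmonicity.values != -200].mean()

    print(f"F0: {mean_f0:.1f} Hz, jitter: {jitter:.4f}, "
          f"shimmer: {shimmer:.4f}, HNR: {hnr:.1f} dB")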


Psychological Science | 2007

Recognizing Intentions in Infant-Directed Speech: Evidence for Universals

Gregory A. Bryant; H. Clark Barrett

In all languages studied to date, distinct prosodic contours characterize different intention categories of infant-directed (ID) speech. This vocal behavior likely exists universally as a species-typical trait, but little research has examined whether listeners can accurately recognize intentions in ID speech using only vocal cues, without access to semantic information. We recorded native-English-speaking mothers producing four intention categories of utterances (prohibition, approval, comfort, and attention) as both ID and adult-directed (AD) speech, and we then presented the utterances to Shuar adults (South American hunter-horticulturalists). Shuar subjects were able to reliably distinguish ID from AD speech and were able to reliably recognize the intention categories in both types of speech, although performance was significantly better with ID speech. This is the first demonstration that adult listeners in an indigenous, nonindustrialized, and nonliterate culture can accurately infer intentions from both ID speech and AD speech in a language they do not speak.


Journal of Cognition and Culture | 2008

Vocal Emotion Recognition Across Disparate Cultures

Gregory A. Bryant; H. Clark Barrett

There exists substantial cultural variation in how emotions are expressed, but there is also considerable evidence for universal properties in facial and vocal affective expressions. This is the first empirical effort examining the perception of vocal emotional expressions across cultures with little common exposure to sources of emotion stimuli, such as mass media. Shuar hunter-horticulturalists from Amazonian Ecuador were able to reliably identify happy, angry, fearful and sad vocalizations produced by American native English speakers by matching emotional spoken utterances to emotional expressions portrayed in pictured faces. The Shuar performed similarly to English speakers who heard the same utterances in a content-filtered condition. These data support the hypothesis that vocal emotional expressions of basic affective categories manifest themselves in similar ways across quite disparate cultures.


Discourse Processes | 2010

Prosodic Contrasts in Ironic Speech

Gregory A. Bryant

Prosodic features in spontaneous speech help disambiguate implied meaning not explicit in linguistic surface structure, but little research has examined how these signals manifest themselves in real conversations. Spontaneously produced verbal irony utterances generated between familiar speakers in conversational dyads were acoustically analyzed for prosodic contrasts. A prosodic contrast was defined as a statistically reliable shift between adjacent phrasal units in at least 1 of 5 acoustic dimensions (mean fundamental frequency, fundamental frequency variability, mean amplitude, amplitude variability, and mean syllable duration). Overall, speakers contrasted prosodic features in ironic utterances with utterances immediately preceding them at a higher rate than between adjacent nonironic utterance pairs from the same interactions. Across multiple speakers, ironic utterances were spoken significantly slower than preceding speech, but no other acoustic dimensions changed consistently. This is the first acoustic analysis examining relative prosodic changes in spontaneous ironic speech. Prosodic contrasts are argued to be an important mechanism for communicating implicit emotional and intentional information in speech—and a means to understanding traditional notions of an ironic tone.
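
The contrast measure lends itself to a compact operationalization. The Python sketch below is a toy version of that logic, not the paper's actual statistical procedure: the input format, the test choices (Welch's t-test for means, Levene's test for variability), and the threshold are all assumptions.

    import numpy as np
    from scipy import stats

    ALPHA = 0.05  # illustrative significance threshold

    def prosodic_contrast(unit_a, unit_b):
        # Each unit is a dict of per-frame / per-syllable measurements:
        #   "f0"   - fundamental frequency of voiced frames (Hz)
        #   "amp"  - RMS amplitude per frame
        #   "syll" - duration of each syllable (seconds)
        tests = [
            stats.ttest_ind(unit_a["f0"], unit_b["f0"], equal_var=False),    # mean F0
            stats.levene(unit_a["f0"], unit_b["f0"]),                        # F0 variability
            stats.ttest_ind(unit_a["amp"], unit_b["amp"], equal_var=False),  # mean amplitude
            stats.levene(unit_a["amp"], unit_b["amp"]),                      # amplitude variability
            stats.ttest_ind(unit_a["syll"], unit_b["syll"], equal_var=False),  # syllable duration
        ]
        # A contrast is flagged when any of the five dimensions shifts reliably
        return any(t.pvalue < ALPHA for t in tests)

    # Synthetic example: an ironic utterance spoken slower than the one before it
    rng = np.random.default_rng(0)
    prev = {"f0": rng.normal(120, 10, 200), "amp": rng.normal(0.10, 0.02, 200),
            "syll": rng.normal(0.18, 0.03, 30)}
    ironic = {"f0": rng.normal(120, 10, 200), "amp": rng.normal(0.10, 0.02, 200),
              "syll": rng.normal(0.24, 0.03, 30)}
    print(prosodic_contrast(prev, ironic))  # True: syllable duration shifted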


Language and Linguistics Compass | 2012

Is Verbal Irony Special?

Gregory A. Bryant

The way we speak can reveal much about what we intend to communicate, but the words we use often only indirectly relate to the meanings we wish to convey. Verbal irony is a commonly studied form of indirect speech in which a speaker produces an explicit evaluative utterance that implicates an unstated, opposing evaluation. Producing and understanding ironic language, as well as many other types of indirect speech, requires the ability to recognize mental states in others, sometimes described as a capacity for metarepresentation. This article aims to connect common elements between the major theoretical approaches to verbal irony to recent psycholinguistic, developmental, and neuropsychological research demonstrating the necessity for metarepresentation in the effective use of verbal irony in social interaction. Here I will argue that verbal irony is one emergent, strategic possibility given the interface between people’s ability to infer mental states and their ability to use language. Rather than think of ironic communication as a specialized cognitive ability, I will claim that it arises from the same set of abilities that underlie a wide range of inferential communicative behaviors.


Proceedings of the National Academy of Sciences of the United States of America | 2016

Detecting affiliation in colaughter across 24 societies

Gregory A. Bryant; Daniel M. T. Fessler; Riccardo Fusaroli; Edward K. Clint; Lene Aarøe; Coren L. Apicella; Michael Bang Petersen; Shaneikiah T. Bickham; Alexander H. Bolyanatz; Brenda Lía Chávez; Delphine De Smet; Cinthya Díaz; Jana Fančovičová; Michal Fux; Paulina Giraldo-Perez; Anning Hu; Shanmukh V. Kamble; Tatsuya Kameda; Norman P. Li; Francesca R. Luberti; Pavol Prokop; Katinka Quintelier; Brooke A. Scelza; HyunJung Shin; Montserrat Soler; Stefan Stieger; Wataru Toyokawa; Ellis A. van den Hende; Hugo Viciana-Asensio; Saliha Elif Yildizhan

Significance: Human cooperation requires reliable communication about social intentions and alliances. Although laughter is a phylogenetically conserved vocalization linked to affiliative behavior in nonhuman primates, its functions in modern humans are not well understood. We show that judges all around the world, hearing only brief instances of colaughter produced by pairs of American English speakers in real conversations, are able to reliably identify friends and strangers. Participants’ judgments of friendship status were linked to acoustic features of laughs known to be associated with spontaneous production and high arousal. These findings strongly suggest that colaughter is universally perceivable as a reliable indicator of relationship quality, and contribute to our understanding of how nonverbal communicative behavior might have facilitated the evolution of cooperation.

Laughter is a nonverbal vocal expression that often communicates positive affect and cooperative intent in humans. Temporally coincident laughter occurring within groups is a potentially rich cue of affiliation to overhearers. We examined listeners’ judgments of affiliation based on brief, decontextualized instances of colaughter between either established friends or recently acquainted strangers. In a sample of 966 participants from 24 societies, people reliably distinguished friends from strangers with an accuracy of 53–67%. Acoustic analyses of the individual laughter segments revealed that, across cultures, listeners’ judgments were consistently predicted by voicing dynamics, suggesting perceptual sensitivity to emotionally triggered spontaneous production. Colaughter affords rapid and accurate appraisals of affiliation that transcend cultural and linguistic boundaries, and may constitute a universal means of signaling cooperative relationships.
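
As a very rough illustration, voicing dynamics can be approximated by the proportion of a laugh that is voiced. The Python sketch below estimates that with librosa's probabilistic-YIN pitch tracker; the file name and pitch bounds are placeholder assumptions, and the study's acoustic analysis was considerably richer than this single summary.

    import librosa

    # Load a short colaughter clip at its native sampling rate (file name is hypothetical)
    y, sr = librosa.load("colaughter_clip.wav", sr=None)

    # Probabilistic YIN returns a per-frame voiced/unvoiced decision
    f0, voiced_flag, voiced_prob = librosa.pyin(y, fmin=100.0, fmax=600.0, sr=sr)

    # Fraction of frames judged voiced: one crude index of voicing dynamics
    print(f"voiced proportion: {voiced_flag.mean():.2f}")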
