
Publication


Featured research published by Daniel L. Bowling.


Science | 2012

A Distinct Role of the Temporal-Parietal Junction in Predicting Socially Guided Decisions

R. McKell Carter; Daniel L. Bowling; Crystal Reeck; Scott A. Huettel

You Must Be Human. Are there specific brain structures associated with social cognition, or with aspects of information processing that frequently occur together with social cognition? Carter et al. (p. 109) invited subjects to play a simplified virtual poker game against either a human or a computer and examined brain scans collected during the game. Brain scans taken when the cards were shown were used to predict the participants' decisions 6 seconds later. Activity in one region, the temporal-parietal junction, was one of the best predictors of future decisions against a human opponent, but the single worst predictor against a computer opponent. A single region of the brain is particularly engaged when one is beating an opponent at poker. To make adaptive decisions in a social context, humans must identify relevant agents in the environment, infer their underlying strategies and motivations, and predict their upcoming actions. We used functional magnetic resonance imaging, in conjunction with combinatorial multivariate pattern analysis, to predict human participants’ subsequent decisions in an incentive-compatible poker game. We found that signals from the temporal-parietal junction provided unique information about the nature of the upcoming decision, and that information was specific to decisions against agents who were both social and relevant for future behavior.


Journal of the Acoustical Society of America | 2010

Major and minor music compared to excited and subdued speech

Daniel L. Bowling; Kamraan Z. Gill; Jonathan Choi; Joseph A. Prinz; Dale Purves

The affective impact of music arises from a variety of factors, including intensity, tempo, rhythm, and tonal relationships. The emotional coloring evoked by intensity, tempo, and rhythm appears to arise from association with the characteristics of human behavior in the corresponding condition; however, how and why particular tonal relationships in music convey distinct emotional effects are not clear. The hypothesis examined here is that major and minor tone collections elicit different affective reactions because their spectra are similar to the spectra of voiced speech uttered in different emotional states. To evaluate this possibility the spectra of the intervals that distinguish major and minor music were compared to the spectra of voiced segments in excited and subdued speech using fundamental frequency and frequency ratios as measures. Consistent with the hypothesis, the spectra of major intervals are more similar to spectra found in excited speech, whereas the spectra of particular minor intervals are more similar to the spectra of subdued speech. These results suggest that the characteristic affective impact of major and minor tone collections arises from associations routinely made between particular musical intervals and voiced speech.


Frontiers in Psychology | 2014

Chorusing, synchrony, and the evolutionary functions of rhythm

Andrea Ravignani; Daniel L. Bowling; W. Tecumseh Fitch

A central goal of biomusicology is to understand the biological basis of human musicality. One approach to this problem has been to compare core components of human musicality (relative pitch perception, entrainment, etc.) with similar capacities in other animal species. Here we extend and clarify this comparative approach with respect to rhythm. First, whereas most comparisons between human music and animal acoustic behavior have focused on spectral properties (melody and harmony), we argue for the central importance of temporal properties, and propose that this domain is ripe for further comparative research. Second, whereas most rhythm research in non-human animals has examined animal timing in isolation, we consider how chorusing dynamics can shape individual timing, as in human music and dance, arguing that group behavior is key to understanding the adaptive functions of rhythm. To illustrate the interdependence between individual and chorusing dynamics, we present a computational model of chorusing agents relating individual call timing with synchronous group behavior. Third, we distinguish and clarify mechanistic and functional explanations of rhythmic phenomena, often conflated in the literature, arguing that this distinction is key for understanding the evolution of musicality. Fourth, we expand biomusicological discussions beyond the species typically considered, providing an overview of chorusing and rhythmic behavior across a broad range of taxa (orthopterans, fireflies, frogs, birds, and primates). Finally, we propose an “Evolving Signal Timing” hypothesis, suggesting that similarities between timing abilities in biological species will be based on comparable chorusing behaviors. 
We conclude that the comparative study of chorusing species can provide important insights into the adaptive function(s) of rhythmic behavior in our “proto-musical” primate ancestors, and thus inform our understanding of the biology and evolution of rhythm in human music and language.
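The interplay between individual call timing and group synchrony described in this abstract can be illustrated with a minimal coupled-oscillator sketch. This is a generic Kuramoto-style model, not the authors' actual implementation; the agent count, coupling strength, and natural call-rate spread below are illustrative assumptions.

```python
import math
import random

def simulate_chorus(n_agents=10, coupling=0.5, steps=200, dt=0.1, seed=42):
    """Kuramoto-style sketch: each agent has a call phase that advances at its
    own natural rate and is nudged toward the other agents' phases.
    Returns the order parameter r in [0, 1], where 1 means perfect synchrony."""
    rng = random.Random(seed)
    phases = [rng.uniform(0, 2 * math.pi) for _ in range(n_agents)]
    # natural call rates, slightly different per individual
    freqs = [1.0 + rng.gauss(0, 0.05) for _ in range(n_agents)]
    for _ in range(steps):
        new_phases = []
        for i, phi in enumerate(phases):
            # mean pull toward the rest of the chorus
            pull = sum(math.sin(pj - phi) for pj in phases) / len(phases)
            new_phases.append(phi + dt * (freqs[i] + coupling * pull))
        phases = new_phases
    re = sum(math.cos(p) for p in phases) / len(phases)
    im = sum(math.sin(p) for p in phases) / len(phases)
    return math.hypot(re, im)
```

With moderate coupling the group locks into near-synchronous calling despite individual differences in natural rate, while with zero coupling the agents drift independently, echoing the abstract's point that group dynamics, not just individual timing, shape the emergent rhythm.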


PLOS ONE | 2012

Expression of Emotion in Eastern and Western Music Mirrors Vocalization

Daniel L. Bowling; Janani Sundararajan; Shui’er Han; Dale Purves

In Western music, the major mode is typically used to convey excited, happy, bright or martial emotions, whereas the minor mode typically conveys subdued, sad or dark emotions. Recent studies indicate that the differences between these modes parallel differences between the prosodic and spectral characteristics of voiced speech sounds uttered in corresponding emotional states. Here we ask whether tonality and emotion are similarly linked in an Eastern musical tradition. The results show that the tonal relationships used to express positive/excited and negative/subdued emotions in classical South Indian music are much the same as those used in Western music. Moreover, tonal variations in the prosody of English and Tamil speech uttered in different emotional states are parallel to the tonal trends in music. These results are consistent with the hypothesis that the association between musical tonality and emotion is based on universal vocal characteristics of different affective states.


PLOS ONE | 2011

Co-Variation of Tonality in the Music and Speech of Different Cultures

Shui’er Han; Janani Sundararajan; Daniel L. Bowling; Jessica I. Lake; Dale Purves

Whereas the use of discrete pitch intervals is characteristic of most musical traditions, the size of the intervals and the way in which they are used is culturally specific. Here we examine the hypothesis that these differences arise because of a link between the tonal characteristics of a culture's music and its speech. We tested this idea by comparing pitch intervals in the traditional music of three tone language cultures (Chinese, Thai and Vietnamese) and three non-tone language cultures (American, French and German) with pitch intervals between voiced speech segments. Changes in pitch direction occur more frequently and pitch intervals are larger in the music of tone compared to non-tone language cultures. More frequent changes in pitch direction and larger pitch intervals are also apparent in the speech of tone compared to non-tone language cultures. These observations suggest that the different tonal preferences apparent in music across cultures are closely related to the differences in the tonal characteristics of voiced speech.


Proceedings of the National Academy of Sciences of the United States of America | 2015

A biological rationale for musical consonance

Daniel L. Bowling; Dale Purves

The basis of musical consonance has been debated for centuries without resolution. Three interpretations have been considered: (i) that consonance derives from the mathematical simplicity of small integer ratios; (ii) that consonance derives from the physical absence of interference between harmonic spectra; and (iii) that consonance derives from the advantages of recognizing biological vocalization and human vocalization in particular. Whereas the mathematical and physical explanations are at odds with the evidence that has now accumulated, biology provides a plausible explanation for this central issue in music and audition.


Scientific Reports | 2017

Body size and vocalization in primates and carnivores

Daniel L. Bowling; Maxime Garcia; Jacob C. Dunn; R. Ruprecht; A. Stewart; K.-H. Frommolt; W. T. Fitch

A fundamental assumption in bioacoustics is that large animals tend to produce vocalizations with lower frequencies than small animals. This inverse relationship between body size and vocalization frequencies is widely considered to be foundational in animal communication, with prominent theories arguing that it played a critical role in the evolution of vocal communication, in both production and perception. A major shortcoming of these theories is that they lack a solid empirical foundation: rigorous comparisons between body size and vocalization frequencies remain scarce, particularly among mammals. We address this issue here in a study of body size and vocalization frequencies conducted across 91 mammalian species, covering most of the size range in the orders Primates (n = 50; ~0.11–120 kg) and Carnivora (n = 41; ~0.14–250 kg). We employed a novel procedure designed to capture spectral variability and standardize frequency measurement of vocalization data across species. The results unequivocally demonstrate strong inverse relationships between body size and vocalization frequencies in primates and carnivores, filling a long-standing gap in mammalian bioacoustics and providing an empirical foundation for theories on the adaptive function of call frequency in animal communication.
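Inverse size–frequency relationships of this kind are conventionally quantified as allometric power laws, i.e., a linear regression of log frequency on log body mass, where a negative slope captures "bigger animal, lower voice." A minimal sketch of that calculation follows; the three data points are made-up illustrative values, not measurements from the study.

```python
import math

def loglog_slope(masses_kg, freqs_hz):
    """Ordinary least-squares slope of log10(frequency) on log10(body mass).
    A negative slope indicates the inverse size-frequency relationship."""
    xs = [math.log10(m) for m in masses_kg]
    ys = [math.log10(f) for f in freqs_hz]
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    var = sum((x - mx) ** 2 for x in xs)
    return cov / var

# illustrative values only: a mouse-lemur-sized primate, a macaque-sized
# monkey, and a gorilla-sized ape with hypothetical call frequencies
slope = loglog_slope([0.1, 8.0, 150.0], [8000.0, 1000.0, 100.0])
```

Fitting on log-log axes is the standard allometric convention because a power law f ∝ m^b appears as a straight line with slope b, making species spanning three orders of magnitude in mass directly comparable.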


PLOS ONE | 2013

Social Origins of Rhythm? Synchrony and Temporal Regularity in Human Vocalization

Daniel L. Bowling; Christian T. Herbst; W. Tecumseh Fitch

Humans have a capacity to perceive and synchronize with rhythms. This is unusual in that only a minority of other species exhibit similar behavior. Study of synchronizing species (particularly anurans and insects) suggests that simultaneous signal production by different individuals may play a critical role in the development of regular temporal signaling. Accordingly, we investigated the link between simultaneous signal production and temporal regularity in our own species. Specifically, we asked whether inter-individual synchronization of a behavior that is typically irregular in time, speech, could lead to evenly-paced or “isochronous” temporal patterns. Participants read nonsense phrases aloud with and without partners, and we found that synchronous reading resulted in greater regularity of durational intervals between words. Comparison of same-gender pairings showed that males and females were able to synchronize their temporal speech patterns with equal skill. These results demonstrate that the shared goal of synchronization can lead to the development of temporal regularity in vocalizations, suggesting that the origins of musical rhythm may lie in cooperative social interaction rather than in sexual selection.


Cognition & Emotion | 2017

More than words (and faces): evidence for a Stroop effect of prosody in emotion word processing.

Piera Filippi; Sebastian Ocklenburg; Daniel L. Bowling; Larissa Heege; Onur Güntürkün; Albert Newen; Bart de Boer

Humans typically combine linguistic and nonlinguistic information to comprehend emotions. We adopted an emotion identification Stroop task to investigate how different channels interact in emotion communication. In experiment 1, synonyms of “happy” and “sad” were spoken with happy and sad prosody. Participants had more difficulty ignoring prosody than ignoring verbal content. In experiment 2, synonyms of “happy” and “sad” were spoken with happy and sad prosody, while happy or sad faces were displayed. Accuracy was lower when two channels expressed an emotion that was incongruent with the channel participants had to focus on, compared with the cross-channel congruence condition. When participants were required to focus on verbal content, accuracy was significantly lower also when prosody was incongruent with verbal content and face. This suggests that prosody biases emotional verbal content processing, even when conflicting with verbal content and face simultaneously. Implications for multimodal communication and language evolution studies are discussed.


Trends in Cognitive Sciences | 2015

Do Animal Communication Systems Have Phonemes?

Daniel L. Bowling; W. Tecumseh Fitch

Biologists often ask whether animal communication systems make use of conceptual entities from linguistics, such as semantics or syntax. A new study of an Australian bird species argues that its communication system has phonemes, but we argue that imposing linguistic concepts obscures, rather than clarifies, communicative function.
