
Publication


Featured research published by Gabriel J. L. Beckers.


Neuroreport | 2012

Birdsong neurolinguistics: songbird context-free grammar claim is premature

Gabriel J. L. Beckers; Johan J. Bolhuis; Kazuo Okanoya

There are remarkable behavioral, neural, and genetic similarities between song learning in songbirds and speech acquisition in human infants. Previously, we have argued that this parallel cannot be extended to the level of sentence syntax. Although birdsong can indeed have a complex structure, it lacks the combinatorial complexity of human language syntax. Recently, this conclusion has been challenged by a report purporting to show that songbirds can learn so-called context-free syntactic rules and then use them to discriminate particular syllable patterns. Here, we demonstrate that the design of this study is inadequate to draw such a conclusion, and offer alternative explanations for the experimental results that do not require the acquisition and use of context-free grammar rules or a grammar of any kind, only the simpler hypothesis of acoustic similarity matching. We conclude that the evolution of vocal learning involves both neural homologies and behavioral convergence, and that human language reflects a unique cognitive capacity.


Frontiers in Evolutionary Neuroscience | 2012

A Bird’s Eye View of Human Language Evolution

Gabriel J. L. Beckers; Kazuo Okanoya; Johan J. Bolhuis

Comparative studies of linguistic faculties in animals pose an evolutionary paradox: language involves certain perceptual and motor abilities, but it is not clear that this serves as more than an input–output channel for the externalization of language proper. Strikingly, the capability for auditory–vocal learning is not shared with our closest relatives, the apes, but is present in such remotely related groups as songbirds and marine mammals. There is increasing evidence for behavioral, neural, and genetic similarities between speech acquisition and birdsong learning. At the same time, researchers have applied formal linguistic analysis to the vocalizations of both primates and songbirds. What have all these studies taught us about the evolution of language? Is the comparative study of an apparently species-specific trait like language feasible? We argue that comparative analysis remains an important method for the evolutionary reconstruction and causal analysis of the mechanisms underlying language. On the one hand, common descent has been important in the evolution of the brain, such that avian and mammalian brains may be largely homologous, particularly in the case of brain regions involved in auditory perception, vocalization, and auditory memory. On the other hand, there has been convergent evolution of the capacity for auditory–vocal learning, and possibly for structuring of external vocalizations, such that apes lack the abilities that are shared between songbirds and humans. However, significant limitations to this comparative analysis remain. While all birdsong may be classified in terms of a particularly simple kind of concatenation system, the regular languages, there is no compelling evidence to date that birdsong matches the characteristic syntactic complexity of human language, arising from the composition of smaller forms like words and phrases into larger ones.


Proceedings of the Royal Society of London B: Biological Sciences | 2010

Zebra finches exhibit speaker-independent phonetic perception of human speech

Verena R. Ohms; Arike Gill; Caroline A. A. van Heijningen; Gabriel J. L. Beckers; Carel ten Cate

Humans readily distinguish spoken words that closely resemble each other in acoustic structure, irrespective of audible differences between individual voices or sex of the speakers. There is an ongoing debate about whether the ability to form phonetic categories that underlie such distinctions indicates the presence of uniquely evolved, speech-linked perceptual abilities, or is based on more general ones shared with other species. We demonstrate that zebra finches (Taeniopygia guttata) can discriminate and categorize monosyllabic words that differ in their vowel and transfer this categorization to the same words spoken by novel speakers independent of the sex of the voices. Our analysis indicates that the birds, like humans, use intrinsic and extrinsic speaker normalization to make the categorization. This finding shows that there is no need to invoke special mechanisms, evolved together with language, to explain this feature of speech perception.


PLOS ONE | 2010

Vocal Tract Articulation in Zebra Finches

Verena R. Ohms; Peter Snelderwaard; Carel ten Cate; Gabriel J. L. Beckers

Background: Birdsong and human vocal communication are both complex behaviours which show striking similarities, mainly thought to be present in the areas of development and learning. Recent studies, however, suggest that there are also parallels in vocal production mechanisms. While it has long been thought that vocal tract filtering, as it occurs in human speech, plays only a minor role in birdsong, an increasing number of studies indicate the presence of sound filtering mechanisms in bird vocalizations as well.

Methodology/Principal Findings: Correlating high-speed X-ray cinematographic imaging of singing zebra finches (Taeniopygia guttata) with song structures, we identified beak gape and the expansion of the oropharyngeal-esophageal cavity (OEC) as potential articulators. We subsequently manipulated both structures in an experiment in which we played sound through the vocal tract of dead birds. Comparing acoustic input with acoustic output showed that OEC expansion causes an energy shift towards lower frequencies and an amplitude increase, whereas a wide beak gape emphasizes frequencies around 5 kilohertz and above.

Conclusion: These findings confirm that birds can modulate their song by using vocal tract filtering, and demonstrate how OEC and beak gape contribute to this modulation.


PLOS ONE | 2010

Neural Processing of Short-Term Recurrence in Songbird Vocal Communication

Gabriel J. L. Beckers; Manfred Gahr

Background: Many situations involving animal communication are dominated by recurring, stereotyped signals. How do receivers optimally distinguish between frequently recurring signals and novel ones? Cortical auditory systems are known to be pre-attentively sensitive to short-term delivery statistics of artificial stimuli, but it is unknown whether this phenomenon extends to the level of behaviorally relevant delivery patterns, such as those used during communication.

Methodology/Principal Findings: We recorded and analyzed complete auditory scenes of spontaneously communicating zebra finch (Taeniopygia guttata) pairs over a week-long period, and show that they can produce tens of thousands of short-range contact calls per day. Individual calls recur at time scales (median interval 1.5 s) matching those at which mammalian sensory systems are sensitive to recent stimulus history. Next, we presented to anesthetized birds sequences of frequently recurring calls interspersed with rare ones, and recorded, in parallel, action and local field potential responses in the medio-caudal auditory forebrain at 32 unique sites. Variation in call recurrence rate over natural ranges leads to widespread and significant modulation in the strength of neural responses. Such modulation is highly call-specific in secondary auditory areas, but not in the main thalamo-recipient, primary auditory area.

Conclusions/Significance: Our results support the hypothesis that pre-attentive neural sensitivity to short-term stimulus recurrence is involved in the analysis of auditory scenes at the level of delivery patterns of meaningful sounds. This may enable birds to efficiently and automatically distinguish frequently recurring vocalizations from other events in their auditory scene.


The Journal of Experimental Biology | 2012

Vocal tract articulation revisited: the case of the monk parakeet

Verena R. Ohms; Gabriel J. L. Beckers; Carel ten Cate; Roderick A. Suthers

Summary: Birdsong and human speech share many features with respect to vocal learning and development. However, the vocal production mechanisms have long been considered to be distinct. The vocal organ of songbirds is more complex than the human larynx, leading to the hypothesis that vocal variation in birdsong originates mainly at the sound source, while in humans it is primarily due to vocal tract filtering. However, several recent studies have indicated the importance of vocal tract articulators such as the beak and oropharyngeal–esophageal cavity. In contrast to most other bird groups, parrots have a prominent tongue, raising the possibility that tongue movements may also be of significant importance in vocal production in parrots, but evidence is rare and observations are often anecdotal. In the current study we used X-ray cinematographic imaging of naturally vocalizing monk parakeets (Myiopsitta monachus) to assess which articulators are possibly involved in vocal tract filtering in this species. We observed prominent changes in tongue height, beak opening movements and changes in tracheal length, which suggests that all of these components play an important role in modulating vocal tract resonance. Moreover, the observation of tracheal shortening as a vocal articulator in live birds has, to our knowledge, not been described before. We also found strong positive correlations between beak opening and amplitude, as well as between changes in tongue height and amplitude, in several types of vocalization. Our results suggest considerable differences between parrot and songbird vocal production, while at the same time parrot vocal articulation might more closely resemble human speech production, in the sense that both make extensive use of the tongue as a vocal articulator.


BMC Biology | 2014

Plumes of neuronal activity propagate in three dimensions through the nuclear avian brain

Gabriel J. L. Beckers; Jacqueline van der Meij; John A. Lesku; Niels C. Rattenborg

Background: In mammals, the slow oscillations of neuronal membrane potentials (reflected in the electroencephalogram as high-amplitude slow waves), which occur during non-rapid eye movement sleep and anesthesia, propagate across the neocortex largely as two-dimensional traveling waves. However, it remains unknown if the traveling nature of slow waves is unique to the laminar cytoarchitecture and associated computational properties of the neocortex.

Results: We demonstrate that local field potential slow waves and correlated multiunit activity propagate as complex three-dimensional plumes of neuronal activity through the avian brain, owing to its non-laminar, nuclear neuronal cytoarchitecture.

Conclusions: The traveling nature of slow waves is not dependent upon the laminar organization of the neocortex, and is unlikely to subserve functions unique to this pattern of neuronal organization. Finally, the three-dimensional geometry of propagating plumes may reflect computational properties not found in mammals that contributed to the evolution of nuclear neuronal organization and complex cognition in birds.


Human Biology | 2011

Bird Speech Perception and Vocal Production: A Comparison with Humans

Gabriel J. L. Beckers

Research into speech perception by nonhuman animals can be crucially informative in assessing whether specific perceptual phenomena in humans have evolved to decode speech, or reflect more general traits. Birds share with humans not only the capacity to use complex vocalizations for communication but also many characteristics of the underlying developmental and mechanistic processes; birds are thus a particularly interesting group for comparative study. This review first discusses commonalities between birds and humans in the perception of speech sounds. Several psychoacoustic studies have shown striking parallels in seemingly speech-specific perceptual phenomena, such as categorical perception of voice-onset-time variation, categorization of consonants that lack phonetic invariance, and compensation for coarticulation. Such findings are often regarded as evidence for the idea that the objects of human speech perception are auditory or acoustic events rather than articulations. Next, I highlight recent research on the production side of avian communication that has revealed the existence of vocal tract filtering and articulation in bird species-specific vocalization, which has traditionally been considered a hallmark of human speech production. Together, findings in birds show that many characteristics of human speech perception are not uniquely human, but also that a comparative approach to the question of whether the objects of perception are articulatory or auditory events requires careful consideration of species-specific vocal production mechanisms.


Neuroscience & Biobehavioral Reviews | 2015

An in depth view of avian sleep

Gabriel J. L. Beckers; Niels C. Rattenborg

Brain rhythms occurring during sleep are implicated in processing information acquired during wakefulness, but this phenomenon has almost exclusively been studied in mammals. In this review we discuss the potential value of utilizing birds to elucidate the functions and underlying mechanisms of such brain rhythms. Birds are of particular interest from a comparative perspective because even though neurons in the avian brain homologous to mammalian neocortical neurons are arranged in a nuclear, rather than a laminar manner, the avian brain generates mammalian-like sleep-states and associated brain rhythms. Nonetheless, until recently, this nuclear organization also posed technical challenges, as the standard surface EEG recording methods used to study the neocortex provide only a superficial view of the sleeping avian brain. The recent development of high-density multielectrode recording methods now provides access to sleep-related brain activity occurring deep in the avian brain. Finally, we discuss how intracerebral electrical imaging based on this technique can be used to elucidate the systems-level processing of hippocampal-dependent and imprinting memories in birds.


Neuroscience & Biobehavioral Reviews | 2017

What do animals learn in artificial grammar studies?

Gabriel J. L. Beckers; Kazuo Okanoya; Johan J. Bolhuis

Highlights: Artificial grammars are often used to assess syntactic capabilities in animals. Often, biases can be found in acoustic overlap between training and test stimuli. Neural systems can assess acoustic similarity without syntactic computations. Acoustic similarity is a blind spot in many studies, and should be controlled for.

Abstract: Artificial grammar learning is a popular paradigm for studying syntactic ability in nonhuman animals. Subjects are first trained to recognize strings of tokens that are sequenced according to grammatical rules. Next, to test whether recognition depends on grammaticality, subjects are presented with grammar-consistent and grammar-violating test strings, which they should discriminate between. However, simpler cues may underlie discrimination if they are available. Here, we review stimulus design in a sample of studies that use particular sounds as tokens, and that claim or suggest their results demonstrate a form of sequence rule learning. To assess the extent of acoustic similarity between training and test strings, we use four simple measures corresponding to cues that are likely salient. All stimulus sets contain biases in similarity measures, such that grammatical test stimuli resemble training stimuli acoustically more than non-grammatical test stimuli do. These biases may contribute to response behaviour, reducing the strength of grammatical explanations. We conclude that acoustic confounds are a blind spot in artificial grammar learning studies in nonhuman animals.
