Ole Næsbye Larsen
University of Southern Denmark
Publications
Featured research published by Ole Næsbye Larsen.
Journal of the Acoustical Society of America | 1993
Ole Næsbye Larsen; Simon Boel Pedersen
The habitat‐induced degradation of the full song of the blackbird (Turdus merula) was quantified by measuring excess attenuation, reduction of the signal‐to‐noise ratio, and blur ratio, the latter measure representing the degree of blurring of amplitude and frequency patterns over time. All three measures were calculated from changes of the amplitude functions (i.e., envelopes) of the degraded songs using a new technique which allowed a compensation for the contribution of the background noise to the amplitude values. Representative songs were broadcast in a deciduous forest without leaves and rerecorded. Speakers and microphones were placed at typical blackbird emitter and receiver positions. Analyses showed that the three degradation measures were mutually correlated, and that they varied with log distance. Their variation suggests that the broadcast song could be detected across more than four, and discriminated across more than two territories. The song’s high‐pitched twitter sounds were degraded more rapidly than its low‐pitched motif sounds. Motif sounds with a constant frequency projected best. The effect of microphone height was pronounced, especially on motif sounds, whereas the effect of speaker height was negligible. Degradation was inversely proportional to microphone height. Changing the reception site from a low to a high position reduced the degradation by the same amount as by approaching the sound source across one‐half or one‐whole territory. This suggests that the main reason for a male to sing from a high perch is to improve the singer’s ability to hear responses to its songs, rather than to maximize the transmission distance. The difference in degradation between low and high microphone heights may explain why females, which tend to perch on low brush, disregard certain degradable components of the song.
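The three degradation measures lend themselves to a compact numerical sketch. The snippet below is a deliberately simplified illustration, not the authors' exact procedure: excess attenuation is taken as level loss beyond spherical spreading, the signal-to-noise ratio is noise-power-corrected (echoing the paper's compensation for background noise in the envelopes), and a cross-correlation term stands in crudely for the blur ratio. All function names and the 1 m reference distance are assumptions.

```python
import numpy as np

def rms(x):
    """Root-mean-square amplitude of a waveform or envelope."""
    return np.sqrt(np.mean(np.square(x)))

def excess_attenuation_db(emitted, rerecorded, distance_m, ref_m=1.0):
    # Level loss beyond what spherical spreading (20*log10(r/r0)) predicts.
    total_loss = 20.0 * np.log10(rms(emitted) / rms(rerecorded))
    spreading = 20.0 * np.log10(distance_m / ref_m)
    return total_loss - spreading

def snr_db(signal_plus_noise, noise):
    # Noise-corrected SNR: subtract the background-noise power estimate
    # before forming the ratio.
    sig_power = max(np.mean(np.square(signal_plus_noise)) -
                    np.mean(np.square(noise)), 1e-12)
    return 10.0 * np.log10(sig_power / np.mean(np.square(noise)))

def blur_ratio(clean_env, degraded_env):
    # Crude stand-in: one minus the peak normalized cross-correlation of
    # the amplitude envelopes (0 = no blurring of the pattern).
    c = clean_env - clean_env.mean()
    d = degraded_env - degraded_env.mean()
    r = np.correlate(c, d, mode="full") / (np.linalg.norm(c) * np.linalg.norm(d))
    return 1.0 - r.max()
```

An envelope attenuated by a factor of 100 over 10 m, for example, yields 40 dB of total loss, of which 20 dB is spreading, leaving 20 dB of excess attenuation.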
Journal of the Acoustical Society of America | 1998
Jo Holland; Simon Boel Pedersen; Ole Næsbye Larsen
The effects of bird song imply a transfer of information between conspecifics. This communication channel is constrained by habitat-induced degradation. Many studies suggest that birds can use features of degraded song to assess their relative distance to the signaller (ranging). The degradation of transmitted song in the wren Troglodytes troglodytes is quantified to assess the opportunities offered in received song for both information transfer and ranging. This quantification incorporates three measurable aspects of degradation: signal-to-noise ratio, excess attenuation, and blur ratio. Each aspect varies more or less predictably with transmission distance, i.e., a criterion for ranging. Significant effects of speaker and microphone elevation indicate a potential for birds to optimize both the opportunity for information transfer and ranging by considering perch location. Song elements, the smallest units of a song, are defined as a continuous trace on a sonagram. Main and second-order effects of element ...
Proceedings of the Royal Society of London B: Biological Sciences | 1999
Ole Næsbye Larsen; Franz Goller
The sound-generating mechanism in the bird syrinx has been the subject of debate. Recent endoscopic imaging of the syrinx during phonation provided evidence for vibrations of membranes and labia, but could not provide quantitative analysis of the vibrations. We have now recorded vibrations in the intact syrinx directly with an optical vibration detector together with the emitted sound during brain stimulation-induced phonation in anaesthetized pigeons, cockatiels, and a hill myna. The phonating syrinx was also filmed through an endoscope inserted into the trachea. In these species vibrations were always present during phonation, and their frequency and amplitude characteristics were highly similar to those of the emitted sound, including nonlinear acoustic phenomena. This was also true for tonal vocalizations, suggesting that a vibratory mechanism can account for all vocalizations presented in the study. In some vocalizations we found differences in the shape of the waveform between vibrations and the emitted sound, probably reflecting variations in oscillatory behaviour of syringeal structures. This study therefore provides the first direct evidence for a vibratory sound-generating mechanism (i.e., lateral tympaniform membranes or labia acting as pneumatic valves) and does not support pure aerodynamic models. Furthermore, the data emphasize a potentially high degree of acoustic complexity.
Journal of Comparative Physiology A-neuroethology Sensory Neural and Behavioral Physiology | 2002
Franz Goller; Ole Næsbye Larsen
The physical mechanisms of sound generation in the vocal organ of songbirds, the syrinx, have been investigated mostly with indirect methods. Recent direct endoscopic observation identified vibrations of the labia as the principal sound source. This model suggests sound generation in a pulse-tone mechanism similar to human phonation, with the labia forming a pneumatic valve. The classical avian model proposed that vibrations of the thin medial tympaniform membranes are the primary sound-generating mechanism. As a direct test of these two hypotheses, we ablated the medial tympaniform membranes in two species (cardinal and zebra finch) and found that both were still able to phonate and sing without functional membranes. Small changes in song structure (harmonic emphasis, frequency control) occurred after medial tympaniform membrane ablation and suggest that the medial tympaniform membranes play a role in adjusting tension on the labia. Such a role is consistent with the fact that the medial tympaniform membranes are directly attached to the medial labia. There is no experimental support for a third hypothesis, which proposes an aerodynamic model for the generation of tonal sounds. Both indirect tests (song in a heliox atmosphere) and direct measurements of syringeal vibrations (labial vibration during tonal sound) support a vibration-based sound-generating mechanism even for tonal sounds.
Nature Communications | 2015
Coen P. H. Elemans; Jeppe Have Rasmussen; Christian T. Herbst; Daniel Normen Düring; Sue Anne Zollinger; Henrik Brumm; K. Srivastava; Niels Svane; Ming Ding; Ole Næsbye Larsen; Samuel J. Sober; Jan G. Švec
As animals vocalize, their vocal organ transforms motor commands into vocalizations for social communication. In birds, the physical mechanisms by which vocalizations are produced and controlled remain unresolved because of the extreme difficulty in obtaining in vivo measurements. Here, we introduce an ex vivo preparation of the avian vocal organ that allows simultaneous high-speed imaging, muscle stimulation and kinematic and acoustic analyses to reveal the mechanisms of vocal production in birds across a wide range of taxa. Remarkably, we show that all species tested employ the myoelastic-aerodynamic (MEAD) mechanism, the same mechanism used to produce human speech. Furthermore, we show substantial redundancy in the control of key vocal parameters ex vivo, suggesting that in vivo vocalizations may also not be specified by unique motor commands. We propose that such motor redundancy can aid vocal learning and is common to MEAD sound production across birds and mammals, including humans.
Bioinspiration & Biomimetics | 2008
Axel Michelsen; Ole Næsbye Larsen
Directional sound receivers are useful for locating sound sources, and they can also partly compensate for the signal degradation caused by noise and reverberation. Ears may become inherently directional if sound can reach both surfaces of the eardrum. Attempts to understand the physics of such pressure-difference receiving ears have been hampered by a lack of suitable experimental methods. Here we review methods for collecting reliable data on the binaural directional cues at the eardrums, on how the eardrum vibrations depend on the direction of sound incidence, and on how sound waves behave in the air spaces leading to the interior surfaces of the eardrums. A linear mathematical model with well-defined inputs is used to explore how the directionality varies with the binaural directional cues and with the amplitude and phase gain of the sound pathway to the inner surface of the eardrum. The mere existence of sound transmission to the inner surface does not ensure useful directional hearing, since a proper amplitude and phase relationship must exist between the sounds acting on the two surfaces of the eardrum. The gain of the sound pathway must match the amplitude and phase of the sounds at the outer surfaces of the eardrums, which are determined by diffraction and by the arrival time of the sound, that is, by the size and shape of the animal and by the frequency of the sound. Many users of hearing aids do not obtain a satisfactory improvement in their ability to localize sound sources. We suggest that some of the mechanisms of directional hearing that evolved in nature may serve as inspiration for technical improvements.
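The core idea of such a linear model can be sketched as a toy computation. Everything here is illustrative and not the review's actual model: the outer drum surface sees a unit-amplitude plane wave, the inner surface sees a copy transmitted through the interaural pathway with an adjustable gain and phase, and the interaural delay follows simple plane-wave geometry. The function name, the 1 cm head radius, and the 2 kHz example are invented.

```python
import numpy as np

def drum_drive_db(theta_deg, freq_hz, head_radius_m=0.01,
                  gain=1.0, phase_rad=0.0, c=343.0):
    # Net drive on one eardrum of an idealized pressure-difference receiver:
    # the outer surface sees a unit-amplitude wave; the inner surface sees a
    # copy with amplitude `gain` and extra phase `phase_rad`, delayed by the
    # direction-dependent interaural travel time tau(theta).
    theta = np.radians(theta_deg)
    tau = (2.0 * head_radius_m / c) * np.sin(theta)   # plane-wave delay
    omega = 2.0 * np.pi * freq_hz
    inner = gain * np.exp(1j * (phase_rad - omega * tau))
    drive = np.abs(1.0 - inner)                       # pressure difference
    return 20.0 * np.log10(np.maximum(drive, 1e-12))  # floor avoids log(0)
```

With gain = 1 and zero phase the drive vanishes for frontal sound and grows toward the side (a figure-of-eight-like pattern), while gain = 0 recovers a non-directional pure pressure receiver, mirroring the review's point that useful directionality hinges on the amplitude and phase relationship between the two drum surfaces.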
Journal of Comparative Physiology A-neuroethology Sensory Neural and Behavioral Physiology | 1992
Georg M. Klump; Ole Næsbye Larsen
The physical measurements reported here test whether the European starling (Sturnus vulgaris) evaluates the azimuth direction of a sound source with a peripheral auditory system composed of (1) two acoustically coupled pressure-difference receivers or (2) two decoupled pressure receivers. A directional pattern of sound intensity in the free field was measured at the entrance of the auditory meatus using a probe microphone, and at the tympanum using laser vibrometry. The maximum differences in the sound-pressure level measured with the microphone between various speaker positions and the frontal speaker position were 2.4 dB at 1 and 2 kHz, 7.3 dB at 4 kHz, 9.2 dB at 6 kHz, and 10.9 dB at 8 kHz. The directional amplitude pattern measured by laser vibrometry did not differ from that measured with the microphone, nor did the directional pattern of travel times to the ear. Measurements of the amplitude and phase transfer function of the starling's interaural pathway using a closed sound system were in accord with the results of the free-field measurements. In conclusion, although some sound transmission via the interaural canal occurred, the present experiments support hypothesis (2) above: the starling's peripheral auditory system is best described as consisting of two functionally decoupled pressure receivers.
Proceedings of the Royal Society of London B: Biological Sciences | 2007
Kenneth K. Jensen; Brenton G. Cooper; Ole Næsbye Larsen; Franz Goller
The principal physical mechanism of sound generation is similar in songbirds and humans, despite large differences in their vocal organs. Whereas vocal fold dynamics in the human larynx are well characterized, the vibratory behaviour of the sound-generating labia in the songbird vocal organ, the syrinx, is unknown. We present the first high-speed video records of the intact syrinx during induced phonation. The syrinx of anaesthetized crows shows a vibration pattern of the labia similar to that of the human vocal fry register. Acoustic pulses result from short openings of the labia, and pulse generation alternates between the left and right sound sources. Spontaneously calling crows can also generate similar pulse characteristics with only one sound generator. Airflow recordings in zebra finches and starlings show that pulse tone sounds can be generated unilaterally, synchronously or by alternating between the two sides. Vocal fry-like dynamics therefore represent a common production mechanism for low-frequency sounds in songbirds. These results also illustrate that complex vibration patterns can emerge from the mechanical properties of the coupled sound generators in the syrinx. The use of vocal fry-like dynamics in the songbird syrinx extends the similarity with mammalian sound production mechanisms to this unusual vocal register.
Animal Biology | 2003
Coen P. H. Elemans; Ole Næsbye Larsen; Marc R. Hoffmann; Johan L. van Leeuwen
We review current quantitative models of the biomechanics of bird sound production. A quantitative model of the vocal apparatus was proposed by Fletcher (1988). He represented the syrinx (i.e., the portions of the trachea and bronchi with labia and membranes) as a single membrane. This membrane acts as a valve that rapidly closes and opens during phonation. This model can be used as a basis to address comparative morphological and physiological questions. More recently, the syrinx was modelled as a simple modified oscillator. Many features of the sound were captured remarkably well. The parameter values, however, did not represent the distribution of the actual material properties of the syrinx. These models demonstrated the minimum number of parameters required to describe the essential dynamics of the sound signal. Finally, we discuss promising directions for future modelling work.
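The flavour of such "modified oscillator" models can be conveyed with a generic van der Pol-type oscillator: at small displacement the damping term is negative (energy is fed in, as when airflow drives the labia), at large displacement it is positive, so the valve settles into a self-sustained limit cycle. This is a toy sketch with made-up parameters, not a reimplementation of any model reviewed here.

```python
import numpy as np

def simulate_valve(mu=1.5, dt=1e-3, steps=40000, x0=0.01):
    # Van der Pol-type oscillator: x'' = mu*(1 - x^2)*x' - x.
    # The (1 - x^2) factor makes damping negative for |x| < 1 (self-excitation)
    # and positive for |x| > 1 (saturation), producing a stable limit cycle.
    x, v = x0, 0.0
    xs = np.empty(steps)
    for i in range(steps):
        a = mu * (1.0 - x * x) * v - x  # acceleration
        v += a * dt                     # semi-implicit Euler step
        x += v * dt
        xs[i] = x
    return xs
```

Starting from a tiny perturbation, the oscillation grows and then saturates at an amplitude near 2 regardless of the initial condition, which is the qualitative self-oscillating behaviour such syrinx models aim to capture with few parameters.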
Journal of the Acoustical Society of America | 2011
Ida G. Eskesen; Magnus Wahlberg; Malene Simon; Ole Næsbye Larsen
The source characteristics of biosonar signals from sympatric killer whales and long-finned pilot whales in a Norwegian fjord were compared. A total of 137 pilot whale and more than 2000 killer whale echolocation clicks were recorded using a linear four-hydrophone array. Of these, 20 pilot whale clicks and 28 killer whale clicks were categorized as being recorded on-axis. The clicks of pilot whales had a mean apparent source level of 196 dB re 1 μPa pp and those of killer whales 203 dB re 1 μPa pp. The duration of pilot whale clicks was significantly shorter (23 μs, S.E.=1.3) and the centroid frequency significantly higher (55 kHz, S.E.=2.1) than killer whale clicks (duration: 41 μs, S.E.=2.6; centroid frequency: 32 kHz, S.E.=1.5). The rate of increase in the accumulated energy as a function of time also differed between clicks from the two species. The differences in duration, frequency, and energy distribution may make it possible to distinguish pilot whale clicks from killer whale clicks using automated detection routines for acoustic monitoring.
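Two of the click parameters compared here, centroid frequency and duration, are straightforward to compute from a digitized waveform. The sketch below uses common operational definitions (the power-spectrum centroid, and the time between the first and last envelope samples within 10 dB of the peak); the paper's exact criteria may differ, and the function names and sampling rates are invented for illustration.

```python
import numpy as np

def centroid_frequency(click, fs):
    # Power-spectrum-weighted mean frequency of the click (Hz).
    power = np.abs(np.fft.rfft(click)) ** 2
    freqs = np.fft.rfftfreq(len(click), d=1.0 / fs)
    return np.sum(freqs * power) / np.sum(power)

def duration_us(click, fs, threshold_db=-10.0):
    # Time (microseconds) between the first and last envelope samples within
    # `threshold_db` of the envelope peak. |click| is a crude envelope; a
    # Hilbert-transform envelope would be smoother for oscillatory waveforms.
    env = np.abs(click)
    above = np.flatnonzero(env >= env.max() * 10.0 ** (threshold_db / 20.0))
    return (above[-1] - above[0]) / fs * 1e6
```

For a 50 kHz tone burst the centroid lands at 50 kHz, and for a Gaussian pulse the -10 dB duration comes out at roughly three times the Gaussian width, on the same tens-of-microseconds scale as the click durations reported above.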