
Publications


Featured research published by Christian T. Herbst.


Logopedics Phoniatrics Vocology | 2006

A comparison of different methods to measure the EGG contact quotient

Christian T. Herbst; Sten Ternström

The results from six published electroglottographic (EGG)-based methods for calculating the EGG contact quotient (CQEGG) were compared to closed quotients derived from simultaneous videokymographic imaging (CQKYM). Two trained male singers phonated in falsetto and in chest register, with two degrees of adduction in both registers. The maximum difference between methods in the CQEGG was 0.3 (out of 1.0). The CQEGG was generally lower than the CQKYM. Within subjects, the CQEGG co-varied with the CQKYM, but with changing offsets depending on the method. The CQEGG cannot be calculated for falsetto phonation with little adduction, since there is no complete glottal closure. Basic criterion-level methods with thresholds of 0.2 or 0.25 gave the best match to the CQKYM data. The results suggest that contacting and de-contacting in the EGG might not refer to the same physical events as do the beginning and cessation of airflow.
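The criterion-level approach mentioned above can be sketched in a few lines: each glottal cycle of the EGG signal is amplitude-normalized, and the contact quotient is taken as the fraction of the cycle spent above a threshold such as 0.25. The Python snippet below is a minimal illustration under those assumptions; the function name, the contact-positive polarity, and the synthetic test cycle are illustrative choices, not taken from the paper.

```python
import numpy as np

def contact_quotient_criterion(egg_cycle, criterion=0.25):
    """Contact quotient of one glottal cycle via a criterion-level method.

    Assumes a contact-positive EGG polarity and a single, pre-segmented
    cycle. The cycle is amplitude-normalized to [0, 1]; the contact quotient
    is the fraction of samples at or above the criterion level.
    """
    x = np.asarray(egg_cycle, dtype=float)
    x = (x - x.min()) / (x.max() - x.min())   # amplitude-normalize to [0, 1]
    return float(np.mean(x >= criterion))     # fraction of the cycle "in contact"

# Example with a hypothetical, roughly EGG-shaped cycle (raised cosine)
t = np.linspace(0.0, 1.0, 441, endpoint=False)
cycle = np.maximum(0.0, np.cos(2.0 * np.pi * (t - 0.5)))
print(contact_quotient_criterion(cycle, criterion=0.25))
```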


Science | 2012

How Low Can You Go? Physical Production Mechanism of Elephant Infrasonic Vocalizations

Christian T. Herbst; Angela S. Stoeger; Roland Frey; Jörg Lohscheller; Ingo R. Titze; Michaela Gumpenberger; W. Tecumseh Fitch

The Song of the Elephant: In mammals, vocal sound production generally occurs in one of two ways: either through muscular control, as when a cat purrs, or, more commonly, by air passing through the vocal folds, as in human speech and in the production of extremely high-frequency bat calls. Over the past 20 years, it has been recognized that elephants can communicate through extremely low-frequency infrasonic sounds. Taking advantage of the natural death of an elephant in a zoo, Herbst et al. (p. 595) examined the biomechanics of elephant sound production in an excised elephant larynx. Self-sustained vocal-fold vibrations, without any neural control, were sufficient to produce infrasonic elephant sounds, via the same mechanism that underlies singing in humans and echolocation in bats. Elephants thus produce low-frequency sounds via intrinsic vocal-fold vibrations similar to those in humans.

Elephants can communicate using sounds below the range of human hearing (“infrasounds” below 20 hertz). It is commonly speculated that these vocalizations are produced in the larynx, either by neurally controlled muscle twitching (as in cat purring) or by flow-induced self-sustained vibrations of the vocal folds (as in human speech and song). We used direct high-speed video observations of an excised elephant larynx to demonstrate flow-induced self-sustained vocal fold vibration in the absence of any neural signals, thus excluding the need for any “purring” mechanism. The observed physical principles of voice production apply to a wide variety of mammals, extending across a remarkably large range of fundamental frequencies and body sizes, spanning more than five orders of magnitude.


Current Biology | 2012

An Asian elephant imitates human speech.

Angela S. Stoeger; Daniel Mietchen; Sukhun Oh; Shermin de Silva; Christian T. Herbst; Soowhan Kwon; W. Tecumseh Fitch

Vocal imitation has convergently evolved in many species, allowing learning and cultural transmission of complex, conspecific sounds, as in birdsong [1, 2]. Scattered instances also exist of vocal imitation across species, including mockingbirds imitating other species or parrots and mynahs producing human speech [3, 4]. Here, we document a male Asian elephant (Elephas maximus) that imitates human speech, matching Korean formants and fundamental frequency in such detail that Korean native speakers can readily understand and transcribe the imitations. To create these very accurate imitations of speech formant frequencies, this elephant (named Koshik) places his trunk inside his mouth, modulating the shape of the vocal tract during controlled phonation. This represents a wholly novel method of vocal production and formant control in this or any other species. One hypothesized role for vocal imitation is to facilitate vocal recognition by heightening the similarity between related or socially affiliated individuals [1, 2]. The social circumstances under which Koshik’s speech imitations developed suggest that one function of vocal learning might be to cement social bonds and, in unusual cases, social bonds across species.


Nature Communications | 2015

Universal mechanisms of sound production and control in birds and mammals

Coen P. H. Elemans; Jeppe Have Rasmussen; Christian T. Herbst; Daniel Normen Düring; Sue Anne Zollinger; Henrik Brumm; K. Srivastava; Niels Svane; Ming Ding; Ole Næsbye Larsen; Samuel J. Sober; Jan G. Švec

As animals vocalize, their vocal organ transforms motor commands into vocalizations for social communication. In birds, the physical mechanisms by which vocalizations are produced and controlled remain unresolved because of the extreme difficulty in obtaining in vivo measurements. Here, we introduce an ex vivo preparation of the avian vocal organ that allows simultaneous high-speed imaging, muscle stimulation and kinematic and acoustic analyses to reveal the mechanisms of vocal production in birds across a wide range of taxa. Remarkably, we show that all species tested employ the myoelastic-aerodynamic (MEAD) mechanism, the same mechanism used to produce human speech. Furthermore, we show substantial redundancy in the control of key vocal parameters ex vivo, suggesting that in vivo vocalizations may also not be specified by unique motor commands. We propose that such motor redundancy can aid vocal learning and is common to MEAD sound production across birds and mammals, including humans.


Journal of the Acoustical Society of America | 2015

Toward a consensus on symbolic notation of harmonics, resonances, and formants in vocalization

Ingo R. Titze; Ronald J. Baken; Kenneth Bozeman; Svante Granqvist; Nathalie Henrich; Christian T. Herbst; David M. Howard; Eric J. Hunter; Dean Kaelin; Ray D. Kent; Jody Kreiman; Malte Kob; Anders Löfqvist; Scott McCoy; Donald G. Miller; Hubert Noé; Ronald C. Scherer; John Smith; Brad H. Story; Jan G. Švec; Sten Ternström; Joe Wolfe



The Journal of Experimental Biology | 2014

Glottal opening and closing events investigated by electroglottography and super-high-speed video recordings

Christian T. Herbst; Jörg Lohscheller; Jan G. Švec; Nathalie Henrich; G. E. Weissengruber; W. Tecumseh Fitch

Previous research has suggested that the peaks in the first derivative (dEGG) of the electroglottographic (EGG) signal are good approximate indicators of the events of glottal opening and closing. These findings were based on high-speed video (HSV) recordings with frame rates 10 times lower than the sampling frequencies of the corresponding EGG data. The present study attempts to corroborate these previous findings, utilizing super-HSV recordings. The HSV and EGG recordings (sampled at 27 and 44 kHz, respectively) of an excised canine larynx phonation were synchronized by an external TTL signal to within 0.037 ms. Data were analyzed by means of glottovibrograms, digital kymograms, the glottal area waveform and the vocal fold contact length (VFCL), a new parameter representing the time-varying degree of ‘zippering’ closure along the anterior–posterior (A–P) glottal axis. The temporal offsets between glottal events (depicted in the HSV recordings) and dEGG peaks in the opening and closing phase of glottal vibration ranged from 0.02 to 0.61 ms, amounting to 0.24–10.88% of the respective glottal cycle durations. All dEGG double peaks coincided with vibratory A–P phase differences. In two out of the three analyzed video sequences, peaks in the first derivative of the VFCL coincided with dEGG peaks, again co-occurring with A–P phase differences. The findings suggest that dEGG peaks do not always coincide with the events of glottal closure and initial opening. Vocal fold contacting and de-contacting do not occur at infinitesimally small instants of time, but extend over a certain interval, particularly under the influence of A–P phase differences.
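As a rough illustration of the dEGG-based event detection discussed above, one can differentiate the EGG signal and take its strongest positive and negative peaks within a glottal cycle as contacting and de-contacting candidates. The sketch below assumes a contact-positive EGG polarity and a single, already-segmented cycle; it does not handle the double peaks or A–P phase differences reported in the paper.

```python
import numpy as np

def degg_event_candidates(egg_cycle, fs):
    """Approximate contacting/de-contacting instants from dEGG peaks.

    Assumes a contact-positive EGG polarity and one pre-segmented glottal
    cycle. Returns candidate event times in seconds relative to cycle start.
    """
    degg = np.gradient(np.asarray(egg_cycle, dtype=float)) * fs  # first derivative (dEGG)
    t_contacting = np.argmax(degg) / fs     # steepest increase in vocal fold contact
    t_decontacting = np.argmin(degg) / fs   # steepest decrease in vocal fold contact
    return t_contacting, t_decontacting
```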


Journal of Voice | 2010

Using Electroglottographic Real-Time Feedback to Control Posterior Glottal Adduction during Phonation

Christian T. Herbst; David M. Howard; Josef Schlömicher-Thier

The goal of this pilot study was to determine whether the ability to change the degree of posterior glottal adduction (PGA) during phonation can be acquired more easily with the aid of electroglottographic (EGG) real-time feedback. The subject was a 37-year-old vocally untrained female with a habitually breathy voice. Before the experiment, she participated in one voice coaching session in which exercises for increasing PGA were explained and executed. During the experiment, phonation was monitored simultaneously with videostroboscopy, electroglottography, and audio recording. While phonating, the subject saw an amplitude- and period-normalized EGG waveform representing one glottal cycle, consecutively changing over time. The assignment was to increase the width of the EGG waveform during phonation. Laryngeal imaging revealed a posterior glottal chink during habitual phonation. The subject could only introduce intentional changes into the EGG waveform after its relevance had been explained and after recapitulation of the exercises from the voice coaching session: an increase of the EGG waveform width coincided with an increase of high-frequency partials and an increase of PGA. For pitches B3 and B4, full glottal closure could be achieved. At G5, a reduction of the posterior glottal chink occurred. The findings of this study suggest that the skill to control the degree of PGA can be acquired, and that EGG real-time feedback can be a crucial element in optimizing the process of skill acquisition, but only if (1) the context and nature of the feedback are explained and (2) proper instructions are provided. The EGG contact quotient might not be sensitive to changes of PGA in falsetto phonation.
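The feedback display described above shows one amplitude- and period-normalized EGG cycle at a time. A minimal sketch of such a normalization step is given below, assuming the cycle has already been segmented from the running EGG signal; the function name and resampling length are illustrative, not taken from the paper.

```python
import numpy as np

def normalize_egg_cycle(egg_cycle, n_points=100):
    """Amplitude- and period-normalize one glottal EGG cycle for display.

    The cycle is resampled to a fixed number of points (period
    normalization) and scaled to [0, 1] (amplitude normalization), so that
    consecutive cycles can be drawn on top of each other as a feedback trace.
    """
    x = np.asarray(egg_cycle, dtype=float)
    resampled = np.interp(np.linspace(0.0, 1.0, n_points),
                          np.linspace(0.0, 1.0, x.size), x)
    return (resampled - resampled.min()) / (resampled.max() - resampled.min())
```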


Journal of the Royal Society Interface | 2013

Visualization of system dynamics using phasegrams

Christian T. Herbst; Hanspeter Herzel; Jan G. Švec; Megan T. Wyman; W. Tecumseh Fitch

A new tool for visualization and analysis of system dynamics is introduced: the phasegram. Its application is illustrated with both classical nonlinear systems (logistic map and Lorenz system) and with biological voice signals. Phasegrams combine the advantages of sliding-window analysis (such as the spectrogram) with well-established visualization techniques from the domain of nonlinear dynamics. In a phasegram, time is mapped onto the x-axis, and various vibratory regimes, such as periodic oscillation, subharmonics or chaos, are identified within the generated graph by the number and stability of horizontal lines. A phasegram can be interpreted as a bifurcation diagram in time. In contrast to other analysis techniques, it can be automatically constructed from time-series data alone: no additional system parameter needs to be known. Phasegrams show great potential for signal classification and can act as the quantitative basis for further analysis of oscillating systems in many scientific fields, such as physics (particularly acoustics), biology or medicine.
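A toy version of the phasegram construction described above, assuming a fixed two-dimensional delay embedding and a Poincaré section at the embedding's mean level (the published method selects these parameters more carefully), might look as follows: for each sliding window, the section-crossing values are collected, and plotting them against window time produces the horizontal lines whose number and stability indicate the vibratory regime.

```python
import numpy as np

def phasegram(signal, fs, win_s=0.05, hop_s=0.01, delay=20):
    """Sketch of a phasegram: window times vs. Poincaré-section values.

    For each window, the signal is delay-embedded in two dimensions and the
    embedded trajectory is intersected with a Poincaré section (here: upward
    crossings of the embedding's mean level). One horizontal line of section
    values suggests periodic oscillation, two suggest period doubling, and a
    broad band suggests irregular or chaotic vibration.
    """
    x = np.asarray(signal, dtype=float)
    win, hop = int(win_s * fs), int(hop_s * fs)
    times, sections = [], []
    for start in range(0, x.size - win - delay, hop):
        seg = x[start:start + win + delay]
        a, b = seg[:-delay], seg[delay:]            # 2-D delay embedding (a, b)
        level = a.mean()
        up = (a[:-1] < level) & (a[1:] >= level)    # upward crossings of the section
        times.append(start / fs)
        sections.append(b[:-1][up])                 # embedded values at the crossings
    return times, sections
```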


Journal of Voice | 2016

Relationship Between the Electroglottographic Signal and Vocal Fold Contact Area.

Vit Hampala; Maxime Garcia; Jan G. Švec; Ronald C. Scherer; Christian T. Herbst

OBJECTIVE: Electroglottography (EGG) is a widely used noninvasive method that purports to measure changes in relative vocal fold contact area (VFCA) during phonation. Despite its broad application, the putative direct relation between the EGG waveform and VFCA has to date only been formally tested in a single study, which suggested an approximately linear relationship. However, that study did not investigate flow-induced vocal fold (VF) vibration. A rigorous empirical evaluation of EGG as a measure of VFCA under proper physiological conditions is therefore still needed. METHODS/DESIGN: Three red deer larynges were phonated in an excised hemilarynx preparation using a conducting glass plate. The time-varying contact between the VF and the glass plate was assessed by high-speed video recordings at 6000 fps, synchronized to the EGG signal. RESULTS: The average differences between the normalized [0, 1] VFCA and EGG waveforms for the three larynges were 0.180 (±0.156), 0.075 (±0.115), and 0.168 (±0.184) in the contacting phase and 0.159 (±0.112), -0.003 (±0.029), and 0.004 (±0.032) in the decontacting phase. DISCUSSION AND CONCLUSIONS: Overall, there was better agreement between VFCA and the EGG waveform in the decontacting phase than in the contacting phase. Disagreements may be caused by nonuniform tissue conductance properties, electrode placement, and electroglottograph hardware circuitry. Pending further research, the EGG waveform may be a reasonable first approximation of the change in medial contact area between the VFs during phonation. However, any quantitative and statistical data derived from EGG should be interpreted cautiously, allowing for potential deviations from true VFCA.
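The waveform comparison reported above can be illustrated with a small sketch: both signals are normalized to [0, 1] and the mean difference is computed separately for the contacting and de-contacting phases. Assumptions in the snippet below: the two waveforms cover the same single cycle, are time-aligned and equally sampled, and the phase boundary is taken at the VFCA maximum, which is a simplification of the paper's analysis.

```python
import numpy as np

def phase_split_difference(vfca, egg):
    """Mean (VFCA - EGG) difference in the contacting and de-contacting phases.

    Both waveforms are normalized to [0, 1]; the cycle is split at the VFCA
    maximum (assumed to fall strictly inside the cycle) into a contacting
    phase (up to the maximum) and a de-contacting phase (after it).
    """
    def norm(x):
        x = np.asarray(x, dtype=float)
        return (x - x.min()) / (x.max() - x.min())

    v, e = norm(vfca), norm(egg)
    split = int(np.argmax(v))       # boundary between contacting and de-contacting
    d = v - e
    return d[:split + 1].mean(), d[split + 1:].mean()
```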


PLOS ONE | 2013

Social Origins of Rhythm? Synchrony and Temporal Regularity in Human Vocalization

Daniel L. Bowling; Christian T. Herbst; W. Tecumseh Fitch

Humans have a capacity to perceive and synchronize with rhythms. This is unusual in that only a minority of other species exhibit similar behavior. Study of synchronizing species (particularly anurans and insects) suggests that simultaneous signal production by different individuals may play a critical role in the development of regular temporal signaling. Accordingly, we investigated the link between simultaneous signal production and temporal regularity in our own species. Specifically, we asked whether inter-individual synchronization of a behavior that is typically irregular in time, speech, could lead to evenly-paced or “isochronous” temporal patterns. Participants read nonsense phrases aloud with and without partners, and we found that synchronous reading resulted in greater regularity of durational intervals between words. Comparison of same-gender pairings showed that males and females were able to synchronize their temporal speech patterns with equal skill. These results demonstrate that the shared goal of synchronization can lead to the development of temporal regularity in vocalizations, suggesting that the origins of musical rhythm may lie in cooperative social interaction rather than in sexual selection.
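The notion of temporal regularity used above can be quantified in several ways; one simple proxy (not necessarily the measure used in the paper) is the coefficient of variation of inter-word intervals, where lower values indicate more isochronous speech. The onset times in the example below are hypothetical.

```python
import numpy as np

def interval_regularity(word_onsets_s):
    """Coefficient of variation of inter-word intervals (lower = more regular).

    Expects a sorted array of word onset times in seconds.
    """
    intervals = np.diff(np.asarray(word_onsets_s, dtype=float))
    return float(intervals.std() / intervals.mean())

# Hypothetical onset times for a solo and a synchronized reading
solo = np.array([0.0, 0.42, 0.95, 1.31, 2.02, 2.39])
duet = np.array([0.0, 0.45, 0.91, 1.37, 1.82, 2.28])
print(interval_regularity(solo), interval_regularity(duet))
```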

Collaboration


Dive into Christian T. Herbst's collaborations.

Top Co-Authors

Jörg Lohscheller, University of Erlangen-Nuremberg
Michael Döllinger, Pacific Lutheran University
Jakob Unger, RWTH Aachen University
Sten Ternström, Royal Institute of Technology