Publication


Featured research published by Christian Geng.


Journal of the International Phonetic Association | 2008

Acoustic and articulatory manifestations of vowel reduction in German

Christine Mooshammer; Christian Geng

Recent phonological approaches incorporate phonetic principles in the motivation of phonological regularities, e.g. vowel reduction and neutralization in unstressed position by target undershoot. So far, evidence for this hypothesis is based on impressionistic and acoustic data but not on articulatory data. The major goal of this study is to compare formant spaces and lingual positions during the production of German vowels for combined effects of stress, accent and corrective contrast. In order to identify strategies for vowel reduction independent of speaker-specific vocal-tract anatomies and individual biomechanical properties, an approach similar to the Generalized Procrustes Analysis was applied to formant spaces and lingual vowel target positions. The data basis consists of the German stressed and unstressed full vowels /iː ɪ yː ʏ eː ɛ ɛː øː œ aː a oː ɔ uː ʊ/ from seven speakers recorded by means of electromagnetic midsagittal articulography (EMMA). Speaker-normalized articulatory and formant spaces gave evidence for a greater degree of coarticulation with the consonant context for unstressed vowels as compared to stressed vowels. However, only for tense vowels could spatial reduction patterns be attributed to vowel shortening, whereas lax vowels were reduced without shortening. The results are discussed in the light of current theories of vowel reduction, i.e. target undershoot, Adaptive Dispersion Theory and Prominence Alignment.
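
As a rough illustration of the normalization idea, the sketch below aligns one speaker's vowel formant configuration to a reference configuration with ordinary Procrustes analysis. The study itself applies a Generalized Procrustes-style procedure across all seven speakers, so this is only a minimal two-configuration stand-in, and all formant values are invented for the example, not taken from the paper.

```python
# A minimal sketch (not the authors' implementation) of Procrustes-based
# speaker normalization: centre, scale, and rotate one speaker's vowel
# space onto a reference speaker's. Values are hypothetical.
import numpy as np
from scipy.spatial import procrustes

# Rows are vowel targets, columns are (F1, F2) in Hz; invented values.
reference = np.array([[300.0, 2300.0],   # /i:/
                      [450.0, 1900.0],   # /e:/
                      [750.0, 1300.0],   # /a:/
                      [350.0,  800.0]])  # /u:/
speaker   = np.array([[340.0, 2500.0],
                      [500.0, 2050.0],
                      [820.0, 1400.0],
                      [390.0,  900.0]])

# procrustes() standardizes both configurations, then rotates the second
# onto the first; `disparity` is the residual sum of squared differences.
ref_std, spk_aligned, disparity = procrustes(reference, speaker)
print(f"residual disparity after alignment: {disparity:.4f}")
```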


IEEE Transactions on Audio, Speech, and Language Processing | 2012

Foreign Accent Conversion Through Concatenative Synthesis in the Articulatory Domain

Daniel Felps; Christian Geng; Ricardo Gutierrez-Osuna

We propose a concatenative synthesis approach to the problem of foreign accent conversion. The approach consists of replacing the most accented portions of nonnative speech with alternative segments from a corpus of the speaker's own speech, based on their similarity to those from a reference native speaker. We propose and compare two approaches for selecting units, one based on acoustic similarity [e.g., mel frequency cepstral coefficients (MFCCs)] and a second one based on articulatory similarity, as measured through electromagnetic articulography (EMA). Our hypothesis is that articulatory features provide a better metric for linguistic similarity across speakers than acoustic features. To test this hypothesis, we recorded an articulatory-acoustic corpus from a native and a nonnative speaker, and evaluated the two speech representations (acoustic versus articulatory) through a series of perceptual experiments. Formal listening tests indicate that the approach can achieve a 20% reduction in perceived accent, but also reveal a strong coupling between accent and speaker identity. To address this issue, we disguised original and resynthesized utterances by altering their average pitch and normalizing vocal tract length. An additional listening experiment supports the hypothesis that articulatory features are less speaker dependent than acoustic features.
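
The acoustic-similarity selection step can be pictured with the toy sketch below: each native-speaker segment is matched to the most similar segment from the learner's own corpus by distance between time-averaged MFCC vectors. This is not the authors' system (a real pipeline would add time alignment and join costs); the data and helper names are hypothetical, and the MFCC matrices are assumed to be precomputed.

```python
# Illustrative sketch of acoustic-similarity unit selection only.
# Arrays are assumed precomputed MFCC matrices (frames x coefficients).
import numpy as np

def segment_cost(native_seg: np.ndarray, candidate_seg: np.ndarray) -> float:
    """Euclidean distance between time-averaged MFCC vectors.
    (A real system would use dynamic time warping plus join costs.)"""
    return float(np.linalg.norm(native_seg.mean(axis=0)
                                - candidate_seg.mean(axis=0)))

def select_units(native_segs, learner_corpus):
    """For each native segment, return the index of the most similar
    segment from the learner's own recordings."""
    return [min(range(len(learner_corpus)),
                key=lambda j: segment_cost(seg, learner_corpus[j]))
            for seg in native_segs]

# Hypothetical toy data: 3 native segments, 5 learner candidates, 13 MFCCs.
rng = np.random.default_rng(0)
native = [rng.normal(size=(20, 13)) for _ in range(3)]
corpus = [rng.normal(size=(25, 13)) for _ in range(5)]
print(select_units(native, corpus))
```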


Journal of the Acoustical Society of America | 2009

How to stretch and shrink vowel systems: results from a vowel normalization procedure.

Christian Geng; Christine Mooshammer

One of the goals of phonetic investigations is to find strategies for vowel production independent of speaker-specific vocal-tract anatomies and individual biomechanical properties. In this study, techniques for speaker normalization derived from Procrustes methods were applied to acoustic and articulatory data. More precisely, the data consist of the first two formants and EMMA fleshpoint markers of stressed and unstressed vowels of German from seven speakers in the consonantal context /t/. Main results indicate that (a) for the articulatory data, the normalization can be related to anatomical properties (palate shapes), (b) the recovery of phonemic identity is of comparable quality for acoustic and articulatory data, (c) the procedure outperforms the Lobanov transform in the acoustic domain in terms of phoneme recovery, and (d) this advantage comes at the cost of also partly changing ellipse orientations, which is in accordance with the formulation of the algorithms.
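
For reference, the Lobanov transform mentioned in (c) is simply a per-speaker z-scoring of each formant. A minimal sketch follows; the formant values are invented for illustration.

```python
# Lobanov normalization: z-score each formant within one speaker's data.
import numpy as np

def lobanov(formants: np.ndarray) -> np.ndarray:
    """Normalize formants of shape (n_tokens, n_formants), e.g. F1/F2
    in Hz, using that speaker's own means and standard deviations."""
    return (formants - formants.mean(axis=0)) / formants.std(axis=0, ddof=1)

# Hypothetical F1/F2 tokens for a single speaker.
speaker_f1f2 = np.array([[310.0, 2250.0],
                         [480.0, 1900.0],
                         [780.0, 1350.0],
                         [360.0,  850.0]])
print(lobanov(speaker_f1f2))
```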


HNO | 2004

MRT-Sequenzen als Datenbasis eines visuellen Artikulationsmodells (MRI sequences as a database for a visual articulatory model)

Bernd J. Kröger; Philip Hoole; Robert Sader; Christian Geng; B. Pompino-Marschall; Christiane Neuschaefer-Rube

Articulatory models can be used in phoniatrics for the visualisation of speech disorders, and can thus be used in teaching, in the counselling of patients and their relatives, and in speech therapy. The articulatory model developed here was based on static MRI data of sustained sounds. MRI sequences are now being used to further refine the model with respect to speech movements. For this corpus, medio-sagittal MRI sections were recorded for 12 consonants in the symmetrical context of the three point vowels [i:], [a:] and [u:]. The recording rate was eight images/s. The data show a strong influence of the vocalic context on the articulatory target positions of all consonants. A method for reducing the MRI data for subsequent qualitative and quantitative analyses is presented.


Wearable and Implantable Body Sensor Networks | 2011

Accelerometer-Based Respiratory Measurement During Speech

Andrew Bates; Martin J. Ling; Christian Geng; Alice Turk; D. K. Arvind

Accelerometer-based respiratory monitoring is a recent area of research based on the observation of small rotations at the chest wall due to breathing. Previous studies of this technique have begun to address some sources of interference, e.g. subject movements, but have not investigated operation during speech production, when breathing patterns are known to be substantially different from normal respiration. We demonstrate measurement of speech breathing with a wireless tri-axial accelerometer in a synchronously captured dataset, including annotated audio and electromagnetic articulograph data. We find agreement between peaks in the accelerometer-derived rotation signal and manually annotated breath timings, and correlation between peak rotations and the duration of audible in-breaths. In speech breathing the rotation-rate signal does not appear to be a good proxy for airflow rate, as previously suggested, and instead seems to better reflect the role of specific muscles around the accelerometer location. We conclude that the method is usable during speech breathing, but that this difference should be taken into account. The method has some advantages for speech breathing research due to its unobtrusive nature.
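
Under a quasi-static assumption (during slow chest-wall motion the sensor mostly measures gravity), a breathing-related rotation signal can be sketched as the inclination of the gravity vector and its time derivative. This is a generic tilt estimate, not the authors' processing pipeline; the sample rate and data below are invented.

```python
# Sketch: chest-wall tilt and rotation rate from a tri-axial accelerometer,
# valid only when motion is slow enough that gravity dominates the signal.
import numpy as np

FS = 50.0  # assumed sample rate in Hz

def tilt_angle(acc: np.ndarray) -> np.ndarray:
    """Pitch-like inclination (radians) from samples shaped (n, 3),
    columns (ax, ay, az) in units of g."""
    ax, ay, az = acc[:, 0], acc[:, 1], acc[:, 2]
    return np.arctan2(ax, np.hypot(ay, az))

def rotation_rate(acc: np.ndarray, fs: float = FS) -> np.ndarray:
    """Numerical derivative of the tilt angle (rad/s)."""
    return np.gradient(tilt_angle(acc), 1.0 / fs)

# Synthetic example: a 0.25 Hz breathing oscillation tilting the sensor
# by +/- 2 degrees about the gravity axis.
t = np.arange(0, 10, 1 / FS)
theta = np.deg2rad(2.0) * np.sin(2 * np.pi * 0.25 * t)
acc = np.stack([np.sin(theta), np.zeros_like(t), np.cos(theta)], axis=1)
print(f"peak rotation rate: {np.abs(rotation_rate(acc)).max():.4f} rad/s")
```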


Journal of Phonetics | 2013

Recording speech articulation in dialogue: Evaluating a synchronized double Electromagnetic Articulography setup

Christian Geng; Alice Turk; James M. Scobbie; Cedric Macmartin; Philip Hoole; Korin Richmond; Alan Wrench; Marianne Pouplier; Ellen Gurman Bard; Ziggy Campbell; Catherine Dickie; Eddie Dubourg; William J. Hardcastle; Evia Kainada; Simon King; Robin J. Lickley; Satsuki Nakai; Steve Renals; Kevin White; Ronny Wiegand

We demonstrate the workability of an experimental facility that is geared towards the acquisition of articulatory data from a variety of speech styles common in language use, by means of two synchronized electromagnetic articulography (EMA) devices. This approach combines the advantages of real dialogue settings for speech research with a detailed description of the physiological reality of speech production. We describe the facility's method for acquiring synchronized audio streams of two speakers and the system that enables communication among control room technicians, experimenters and participants. Further, we demonstrate the feasibility of the approach by evaluating two problems inherent to this specific setup: the first is the accuracy of temporal synchronization of the two EMA machines, the second is the severity of electromagnetic interference between the two machines. Our results suggest that the synchronization method used yields an accuracy of approximately 1 ms. Electromagnetic interference was derived from the complex-valued signal amplitudes. This dependent variable was analyzed as a function of the recording status (i.e. on/off) of the interfering machine's transmitters. The intermachine distance was varied between 1 m and 8.5 m. Results suggest that a distance of approximately 6.5 m is appropriate to achieve data quality comparable to that of single-speaker recordings.
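
One common way to quantify the residual offset between two nominally synchronized streams is to cross-correlate a reference signal captured by both systems and read off the lag of the correlation peak. The sketch below, with invented data, illustrates that kind of measurement; it is not the facility's actual synchronization method.

```python
# Sketch: estimate the time offset between two recordings of the same
# reference event via the peak of their cross-correlation.
import numpy as np
from scipy.signal import correlate, correlation_lags

FS = 16000  # assumed common audio sample rate in Hz

def estimate_offset(sig_a: np.ndarray, sig_b: np.ndarray,
                    fs: int = FS) -> float:
    """Time in ms by which sig_b lags sig_a (positive = sig_b later)."""
    corr = correlate(sig_b, sig_a, mode="full")
    lags = correlation_lags(len(sig_b), len(sig_a), mode="full")
    return 1000.0 * lags[np.argmax(corr)] / fs

# Synthetic check: a click recorded by both systems, 1.5 ms apart.
n = FS  # one second of samples
click = np.zeros(n); click[8000] = 1.0
shifted = np.zeros(n); shifted[8000 + 24] = 1.0  # 24 samples = 1.5 ms
print(f"estimated offset: {estimate_offset(click, shifted):.2f} ms")
```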


Journal of the Acoustical Society of America | 2010

The Edinburgh Speech Production Facility’s articulatory corpus of spontaneous dialogue.

Alice Turk; James M. Scobbie; Christian Geng; Cedric Macmartin; Ellen Gurman Bard; Barry Campbell; Catherine Dickie; Eddie Dubourg; Bill Hardcastle; Phil Hoole; Evia Kainada; Robin J. Lickley; Satsuki Nakai; Marianne Pouplier; Simon King; Stephen Renals; Korin Richmond; Sonja Schaeffler; Ronny Wiegand; Kevin White; Alan Wrench

The EPSRC-funded Edinburgh Speech Production Facility is built around two synchronized Carstens AG500 electromagnetic articulographs (EMAs) in order to capture articulatory/acoustic data from spontaneous dialogue. An initial articulatory corpus was designed with two aims. The first was to elicit a range of speech styles/registers from speakers, and therefore provide an alternative to fully scripted corpora. The second was to extend the corpus beyond monologue, by using tasks that promote natural discourse and interaction. A subsidiary driver was to use dialects from outwith North America: dialogues paired up a Scottish English and a Southern British English speaker. Tasks. Monologue: story reading of "Comma Gets a Cure" [Honorof et al. (2000)], lexical sets [Wells (1982)], spontaneous story telling, diadochokinetic tasks. Dialogue: map tasks [Anderson et al. (1991)], "Spot the Difference" picture tasks [Bradlow et al. (2007)], story recall. Shadowing of the spontaneous story telling by the second participant. Each...


Archive | 2003

Beyond 2D in articulatory data acquisition and analysis

Philip Hoole; Andreas Zierdt; Christian Geng


Archive | 2003

What role does the palate play in speech motor control? Insights from tongue kinematics for German alveolar obstruents

Susanne Fuchs; Pascal Perrier; Christian Geng; Christine Mooshammer

Collaboration


Dive into Christian Geng's collaboration.

Top Co-Authors

Alice Turk (University of Edinburgh)
Simon King (University of Edinburgh)
Alan Wrench (Queen Margaret University)