Publications


Featured research published by Catherine I. Watson.


Journal of the Acoustical Society of America | 1999

Acoustic evidence for dynamic formant trajectories in Australian English vowels.

Catherine I. Watson; Jonathan Harrington

The extent to which it is necessary to model the dynamic behavior of vowel formants to enable vowel separation has been the subject of debate in recent years. To investigate this issue, a study has been made of the vowels of 132 Australian English speakers (male and female). The degree of vowel separation from the formant values at the target was contrasted with that from modeling the formant contour with discrete cosine transform coefficients. The findings are that, although it is necessary to model the formant contour to separate out the diphthongs, the formant values at the target, plus vowel duration, are sufficient to separate out the monophthongs. However, further analysis revealed that there are formant contour differences which benefit the within-class separation of the tense/lax monophthong pairs.
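
As a rough illustration of the contour modelling described above (not the paper's code), the sketch below summarises a single formant track with its first few discrete cosine transform coefficients and reconstructs a smoothed contour from them; the sample F2 values and the choice of three coefficients are assumptions for illustration only.

    # Minimal sketch: summarising a formant contour with a few DCT coefficients.
    # The F2 track below is invented for illustration; it is not data from the study.
    import numpy as np
    from scipy.fft import dct, idct

    f2_track = np.array([1850.0, 1900.0, 1980.0, 2050.0, 2100.0, 2120.0, 2110.0, 2080.0])

    k = 3  # coefficient 0 ~ mean level, 1 ~ overall slope, 2 ~ curvature
    coeffs = dct(f2_track, norm='ortho')[:k]

    # Reconstruct the smoothed contour from the truncated coefficient vector
    smoothed = idct(np.pad(coeffs, (0, len(f2_track) - k)), norm='ortho')
    rms_error = np.sqrt(np.mean((smoothed - f2_track) ** 2))

    print("DCT coefficients:", np.round(coeffs, 1))
    print("RMS error of the 3-coefficient fit (Hz):", round(float(rms_error), 1))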


Nature | 2000

Does the Queen speak the Queen's English?

Jonathan Harrington; Sallyanne Palethorpe; Catherine I. Watson

The pronunciation of all languages changes subtly over time, mainly owing to the younger members of the community. What is unknown is whether older members unwittingly adapt their accent towards community changes. Here we analyse vowel sounds from the annual Christmas messages broadcast by HRH Queen Elizabeth II between the 1950s and the 1980s. Our analysis reveals that the Queen's pronunciation of some vowels has been influenced by the standard southern-British accent of the 1980s, which is more typically associated with speakers who are younger and lower in the social hierarchy.


Journal of the International Phonetic Association | 2000

Monophthongal vowel changes in Received Pronunciation: an acoustic analysis of the Queen's Christmas broadcasts

Jonathan Harrington; Sallyanne Palethorpe; Catherine I. Watson

In this paper we analyse the extent to which an adult's vowel space is affected by vowel changes in the community, using a database of nine Christmas broadcasts made by Queen Elizabeth II spanning three time periods (the 1950s; the late 1960s/early 70s; the 1980s). An analysis of the monophthongal formant space showed that the first formant frequency was generally higher for open vowels, and lower for mid-high vowels, in the 1960s and 1980s data than in the 1950s data, which we interpret as an expansion of phonetic height from earlier to later years. The second formant frequency showed a more modest compression in later compared with earlier years: in general, front vowels had a decreased F2 in later years, while F2 of the back vowels was unchanged except for [u], which had a higher F2 in the 1960s and 1980s data. We also show that the majority of these F1 and F2 changes were in the direction of the vowel positions of 1980s Standard Southern British speakers reported in Deterding (1997). Our general conclusion is that there is evidence of accent change within the same individual over time and that the Queen's vowels in the Christmas broadcasts have shifted in the direction of a more mainstream form of Received Pronunciation.


Language Variation and Change | 2000

Acoustic evidence for vowel change in New Zealand English

Catherine I. Watson; Margaret Maclagan; Jonathan Harrington

This study provides acoustic evidence that in the last 50 years New Zealand English (NZE) has undergone a substantial vowel shift. Two sets of data are studied: the Otago corpus, recorded in 1995, and the Mobile Unit corpus, recorded in 1948. Both corpora have male and female speakers. The corpora were labeled, accented vowels were extracted, and formant values were calculated. The results of the formant analysis from the two corpora are contrasted. We provide evidence that in NZE /i/ has centralized, /e/ and /ae/ have raised, and the diphthongs /i@/ and /e@/ have merged. We argue that /i/ changed in quality not only because of crowding in the front vowel space, but also because it would be less likely to be misperceived as an unaccented vowel (i.e., as /@/).
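
The corpus contrast described above amounts to comparing per-vowel formant means across recordings made decades apart. The sketch below shows that comparison in miniature; the token values, the corpus labels as written here, and the single vowel shown are all hypothetical, chosen only to illustrate the direction of a centralising shift.

    # Minimal sketch of contrasting per-vowel formant means between two corpora.
    # The tokens are invented; they are not data from either corpus.
    from collections import defaultdict

    # (corpus, vowel, F1 in Hz, F2 in Hz)
    tokens = [
        ("MobileUnit_1948", "/i/", 400, 2300),
        ("MobileUnit_1948", "/i/", 390, 2280),
        ("Otago_1995",      "/i/", 480, 1900),
        ("Otago_1995",      "/i/", 470, 1950),
    ]

    totals = defaultdict(lambda: [0.0, 0.0, 0])   # (corpus, vowel) -> [sum F1, sum F2, count]
    for corpus, vowel, f1, f2 in tokens:
        entry = totals[(corpus, vowel)]
        entry[0] += f1
        entry[1] += f2
        entry[2] += 1

    means = {key: (s[0] / s[2], s[1] / s[2]) for key, s in totals.items()}
    old_f1, old_f2 = means[("MobileUnit_1948", "/i/")]
    new_f1, new_f2 = means[("Otago_1995", "/i/")]
    print("Mean shift for /i/: F1 %+.0f Hz, F2 %+.0f Hz" % (new_f1 - old_f1, new_f2 - old_f2))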


Intelligent Robots and Systems | 2010

Deployment of a service robot to help older people

Chandimal Jayawardena; I-Han Kuo; U. Unger; Aleksandar Igic; R. Wong; Catherine I. Watson; Rebecca Q. Stafford; Elizabeth Broadbent; Priyesh Tiwari; J. Warren; J. Sohn; Bruce A. MacDonald

This paper presents the first version of a mobile service robot designed for older people. Six service application modules were developed with the key objective being successful interaction between the robot and the older people. A series of trials were conducted in an independent living facility at a retirement village, with the participation of 32 residents and 21 staff. In this paper, challenges of deploying the robot and lessons learned are discussed. Results show that the robot could successfully interact with people and gain their acceptance.


New Zealand International Two-Stream Conference on Artificial Neural Networks and Expert Systems | 1995

The development of the Otago speech database

S. J. Sinclair; Catherine I. Watson

A collection of digits and words, spoken with a New Zealand English accent, has been systematically and formally collected. This collection, along with the beginning and end points of the realised phonemes within the words, comprises the Otago Speech Corpora. A relational database management system has been developed to house the speech data. This system provides much more usability, flexibility and expandability than file-based speech corpora such as TIMIT.
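
A relational layout of the kind described above might look like the sketch below; the table and column names are assumptions for illustration and are not the Otago database's actual schema.

    # Minimal sketch of a relational schema for a speech corpus: words linked to
    # speakers, and phoneme labels with their start/end points linked to words.
    # Table and column names are assumed, not taken from the Otago system.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.executescript("""
    CREATE TABLE speaker   (speaker_id INTEGER PRIMARY KEY, sex TEXT, accent TEXT);
    CREATE TABLE utterance (utterance_id INTEGER PRIMARY KEY,
                            speaker_id INTEGER REFERENCES speaker(speaker_id),
                            word TEXT, wav_path TEXT);
    CREATE TABLE phoneme   (phoneme_id INTEGER PRIMARY KEY,
                            utterance_id INTEGER REFERENCES utterance(utterance_id),
                            label TEXT, start_ms REAL, end_ms REAL);
    """)

    # The flexibility claim: one query pulls every realised phoneme, with its
    # boundaries, for a given word -- awkward to do against flat corpus files.
    rows = conn.execute("""
        SELECT p.label, p.start_ms, p.end_ms
        FROM phoneme p
        JOIN utterance u ON p.utterance_id = u.utterance_id
        WHERE u.word = ?
        ORDER BY p.start_ms
    """, ("seven",))
    print(rows.fetchall())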


International Journal of Social Robotics | 2011

The Effects of Synthesized Voice Accents on User Perceptions of Robots

Rie Tamagawa; Catherine I. Watson; I-Han Kuo; Bruce A. MacDonald; Elizabeth Broadbent

Human voice accents have been shown to affect people's perceptions of the speaker, but little research has looked at how synthesized voice accents affect perceptions of robots. This research investigated people's perceptions of three synthesized voice accents. Three male robot voices were generated: British (UK), American (US), and New Zealand (NZ). In study one, twenty adults listened through headphones to a recorded script repeated in the three different accents, rated the nationality, roboticness, and overall impression of each voice, and chose their preferred accent. Study two used these voices on a healthcare robot to investigate the influence of accent on user perceptions of the robot. Ninety-one individuals were randomized to one of three conditions. In each condition they interacted with a healthcare robot that assisted with blood pressure measurement, but the conditions differed in the accent the robot spoke with. In study one, each accent was correctly identified. There was no difference in impression ratings of each voice, but the US accent was rated as more robotic than the NZ accent, and the UK accent was preferred to the US accent. Study two showed that people randomized to the NZ accent had more positive feelings towards the robot and rated the robot's overall performance higher than those who heard the US voice. These results suggest that using a less robotic voice with a local accent may positively affect user perceptions of robots.


International Conference on Acoustics, Speech, and Signal Processing | 2008

Modelling and synthesising F0 contours with the discrete cosine transform

Jonathan Teutenberg; Catherine I. Watson; Patricia Riddle

The discrete cosine transform is proposed as a basis for representing fundamental frequency (F0) contours of speech. The advantages over existing representations include deterministic algorithms for both analysis and synthesis and a simple distance measure in the parameter space. A two-tier model using the DCT is shown to be able to model F0 contours to around 10 Hz RMS error. A proof-of-concept system for synthesising DCT parameters is evaluated, showing that the benefits do not come at the expense of speech synthesis applications.
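
One convenience the abstract points to is that contours represented by DCT coefficients can be compared with a simple distance in the parameter space. The sketch below illustrates this with synthetic F0 contours; the contour values, coefficient count, and the choice of Euclidean distance are assumptions, not the paper's implementation.

    # Minimal sketch: represent F0 contours by their first few DCT coefficients
    # and compare them with a plain Euclidean distance in that parameter space.
    # The contours are synthetic and the details are assumed for illustration.
    import numpy as np
    from scipy.fft import dct

    def dct_params(f0_contour_hz, k=4):
        """First k orthonormal DCT-II coefficients of an F0 contour (Hz)."""
        return dct(np.asarray(f0_contour_hz, dtype=float), norm='ortho')[:k]

    rising  = dct_params([110, 115, 122, 130, 140, 152])
    falling = dct_params([150, 142, 133, 124, 116, 110])
    level   = dct_params([120, 121, 120, 119, 120, 121])

    print("rising vs falling:", round(float(np.linalg.norm(rising - falling)), 1))
    print("rising vs level:  ", round(float(np.linalg.norm(rising - level)), 1))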


Journal of Voice | 2014

Using the Perturbation of the Contact Quotient of the EGG Waveform to Analyze Age Differences in Adult Speech

Stephen D. Bier; Catherine I. Watson; Clare M. McCann

This study examines electroglottographic (EGG) recordings from 15 young and 14 old male speakers of New Zealand English. Analysis was performed on the sustained vowels /i:/ and /a:/ at three target levels for both pitch and loudness. Jitter was greater for older speakers, and the contact quotient (Qx) was significantly lower for older speakers. The greater jitter for older speakers indicates a decrease in the stability of their vocal production mechanism. Because jitter is an acoustic measure, a perturbation measure of the Qx was developed and applied to the EGG recordings to examine stability at a physiological level. The contact quotient perturbation (CQP) showed a significant increase for older speakers (1.55% and 3.54% for young and old, respectively), revealing more about the variability than the jitter data alone. When loudness is also considered, the Qx was significantly greater for louder vowels, whereas its perturbation was significantly lower for louder vowels. This relationship combined with the age effect: at all three loudness levels the CQP was greater for the older speakers. The findings of this study will contribute to the development of vocal fold models that account for aging.
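
The paper's exact CQP formula is not reproduced above, but a jitter-style local perturbation measure applied to cycle-by-cycle contact quotient values would look roughly like the sketch below; both the formula and the Qx values are assumptions for illustration.

    # Minimal sketch of a jitter-style perturbation measure applied to
    # cycle-by-cycle contact quotient (Qx) values: the mean absolute
    # cycle-to-cycle difference as a percentage of the mean Qx.
    # This definition and the sample values are assumed, not taken from the paper.
    def local_perturbation_percent(values):
        diffs = [abs(b - a) for a, b in zip(values, values[1:])]
        return 100.0 * (sum(diffs) / len(diffs)) / (sum(values) / len(values))

    qx_per_cycle = [0.52, 0.50, 0.53, 0.51, 0.54, 0.50, 0.52]  # hypothetical Qx values
    print("CQP-style perturbation: %.2f%%" % local_perturbation_percent(qx_per_cycle))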


Intelligent Robots and Systems | 2009

Expressive facial speech synthesis on a robotic platform

Xingyan Li; Bruce A. MacDonald; Catherine I. Watson

This paper presents our expressive facial speech synthesis system, Eface, for a social or service robot. Eface aims to enable a robot to deliver information clearly with empathetic speech and an expressive virtual face. The empathetic speech is built on the Festival speech synthesis system and gives robots the capability to speak with different voices and emotions. Two versions of a virtual face have been implemented to display the robot's expressions. One, with just over 100 polygons, has a lower hardware requirement but looks less natural. The other has over 1000 polygons; it looks realistic, but consumes more CPU resources and requires better video hardware. The whole system is incorporated into the popular open-source robot interface Player, which makes client programs easy to write and debug and allows the same system to be used on different robot platforms. We have implemented this system on a physical robot and tested it in a robotic nurse assistant scenario.

Collaboration


Dive into Catherine I. Watson's collaborations.

Top Co-Authors

Jeanette King

University of Canterbury

Ray Harlow

University of Canterbury
