
Publication


Featured research published by Donald Derrick.


Nature | 2009

Aero-tactile integration in speech perception

Bryan Gick; Donald Derrick

Visual information from a speaker’s face can enhance or interfere with accurate auditory perception. This integration of information across auditory and visual streams has been observed in functional imaging studies, and has typically been attributed to the frequency and robustness with which perceivers jointly encounter event-specific information from these two modalities. Adding the tactile modality has long been considered a crucial next step in understanding multisensory integration. However, previous studies have found an influence of tactile input on speech perception only under limited circumstances, either where perceivers were aware of the task or where they had received training to establish a cross-modal mapping. Here we show that perceivers integrate naturalistic tactile information during auditory speech perception without previous training. Drawing on the observation that some speech sounds produce tiny bursts of aspiration (such as English ‘p’), we applied slight, inaudible air puffs on participants’ skin at one of two locations: the right hand or the neck. Syllables heard simultaneously with cutaneous air puffs were more likely to be heard as aspirated (for example, causing participants to mishear ‘b’ as ‘p’). These results demonstrate that perceivers integrate event-relevant tactile information in auditory perception in much the same way as they do visual information.


Journal of the Acoustical Society of America | 2012

Biomechanical modeling of English /r/ variants

Ian Stavness; Bryan Gick; Donald Derrick; Sidney S. Fels

This study reports an investigation of the well-known context-dependent variation in English /r/ using a biomechanical tongue-jaw-hyoid model. The simulation results show that preferred /r/ variants require less volume displacement, relative strain, and relative muscle stress than variants that are not preferred. This study also uncovers a previously unknown mechanism in tongue biomechanics for /r/ production: torque in the sagittal plane about the mental spine. This torque enables raising of the tongue anterior for retroflexed [ɻ] by activation of the hyoglossus and relaxation of the anterior genioglossus. The results provide a deeper understanding of the articulatory factors that govern contextual phonetic variation.


Journal of the Acoustical Society of America | 2005

ArtiSynth: Designing a modular 3D articulatory speech synthesizer

Florian Vogt; Oliver Guenther; Allan Hannam; Kees van den Doel; John E. Lloyd; Leah Vilhan; Rahul Chander; Justin Lam; Charles R. Wilson; Kalev Tait; Donald Derrick; Ian Wilson; Carol Jaeger; Bryan Gick; Eric Vatikiotis-Bateson; Sidney S. Fels

ArtiSynth is a modular, component-based system for performing dynamic 3D simulations of the human vocal tract and face. It provides a test bed for research in areas such as speech synthesis, linguistics, medicine, and dentistry. ArtiSynth's framework enables researchers to construct, refine, and exchange models of all parts of the vocal tract and surrounding structures. ArtiSynth introduces a probe concept to unify input and output data flow, which allows control of and access to models with time-varying data series. ArtiSynth supports interconnected heterogeneous models, such as rigid body, mass-spring, and parametric, using a point-set connection method, called markers, for constraint satisfaction. Using ArtiSynth, we created a muscle-driven rigid body jaw model, a parametric principal component tongue model from MRI images, a parametric lip model, and a mass-spring face tissue model. We combined them in various ways. Data from medical imaging (MRI, CT, and ultrasound) and other technologies such as optica...
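ArtiSynth itself is a Java system; as a language-neutral sketch of the probe concept described above (a uniform way to handle time-varying input and output series attached to a model), consider the following. All names here are hypothetical and not from the ArtiSynth API.

```python
from bisect import bisect_right

class Probe:
    """Minimal probe: a named, time-stamped data series that can either
    drive a model property (input) or record one (output)."""
    def __init__(self, name):
        self.name = name
        self.times, self.values = [], []

    def add(self, t, value):
        # Keep samples ordered by time
        i = bisect_right(self.times, t)
        self.times.insert(i, t)
        self.values.insert(i, value)

    def sample(self, t):
        # Zero-order hold: return the most recent value at or before t
        i = bisect_right(self.times, t) - 1
        return self.values[i] if i >= 0 else None

# Input probe driving a (hypothetical) jaw-muscle activation over time
activation = Probe("jaw_activation")
activation.add(0.0, 0.0)
activation.add(0.1, 0.6)
print(activation.sample(0.15))  # -> 0.6
```

The point of the abstraction is that the same object can record simulation output or play back experimental data as model input, which is how the abstract describes probes unifying data flow.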


Journal of the Acoustical Society of America | 2015

Using a radial ultrasound probe's virtual origin to compute midsagittal smoothing splines in polar coordinates

Matthias Heyne; Donald Derrick

Tongue surface measurements from midsagittal ultrasound scans are effectively arcs with deviations representing tongue shape, but smoothing spline analyses of variance (SSANOVAs) assume variance around a horizontal line. Therefore, calculating SSANOVA average curves of tongue traces in Cartesian coordinates [Davidson, J. Acoust. Soc. Am. 120(1), 407-415 (2006)] creates errors that are compounded at the tongue tip and root, where average tongue shape deviates most from a horizontal line. This paper introduces a method for transforming data into polar coordinates, similar to the technique by Mielke [J. Acoust. Soc. Am. 137(5), 2858-2869 (2015)], but using the virtual origin of a radial ultrasound transducer as the polar origin, allowing data conversion in a manner that is robust against between-subject and between-session variability.
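A minimal sketch of the coordinate transform the abstract describes, assuming the transducer's virtual origin is already known as a point (x0, y0); the function and variable names are illustrative, not from the authors' code.

```python
import numpy as np

def to_polar(x, y, origin):
    """Convert Cartesian tongue-trace points to polar coordinates
    (angle, radius) about the probe's virtual origin."""
    x0, y0 = origin
    dx, dy = np.asarray(x) - x0, np.asarray(y) - y0
    theta = np.arctan2(dy, dx)   # angle about the virtual origin
    r = np.hypot(dx, dy)         # distance from the virtual origin
    return theta, r

# Example: a tongue-like arc above a virtual origin at (0, -70) mm
x = np.linspace(-30.0, 30.0, 7)
y = np.sqrt(80.0**2 - x**2) - 70.0
theta, r = to_polar(x, y, origin=(0.0, -70.0))
```

Fitting smoothing splines to r as a function of theta makes the modeled variance roughly perpendicular to the tongue surface along the whole arc, which is the motivation the abstract gives for leaving Cartesian coordinates.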


Journal of the Acoustical Society of America | 2013

Acoustic correlates of flaps in North American English

Donald Derrick; Benjamin Schultz

Using B/M-mode ultrasound, Derrick and Gick (2010) identified four categorical variants of flaps in North American English: up-flaps, down-flaps, alveolar taps, and postalveolar taps. These variants can be used to test hypotheses about constraints on speech articulation, such as local context, gravity and elasticity, speech rate, and longer-distance anticipatory coarticulation. This study examines acoustic correlates of flap variation in order to make connections between the results of larger, and easier to collect, acoustic databases and the tongue movements underlying flap production. Preliminary analyses using smoothing spline ANOVAs of z-score normalized f0, F1, F2, F3, F4, and F5 indicate significant differences in each dependent variable for flaps in non-rhotic vowel contexts. The results for flaps adjacent to rhotic vowels are more complex, requiring more detailed analysis. Based on these results, we are currently planning supervised hierarchical clustering to aid in probabilist...
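A sketch of the per-speaker z-score normalization of the kind the abstract mentions for f0 and F1-F5; the data frame and column names are hypothetical.

```python
import pandas as pd

# Hypothetical measurements: one row per flap token
df = pd.DataFrame({
    "speaker": ["s1", "s1", "s2", "s2"],
    "f0": [110.0, 125.0, 210.0, 190.0],
    "F2": [1500.0, 1650.0, 1800.0, 1700.0],
})

# Z-score each acoustic measure within speaker, so speakers with
# different baseline f0 and formant ranges become comparable
for col in ["f0", "F2"]:
    df[col + "_z"] = df.groupby("speaker")[col].transform(
        lambda s: (s - s.mean()) / s.std()
    )
```

Normalizing within speaker before pooling is what lets flap measurements from large multi-speaker acoustic databases be compared on a single scale.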


Literary and Linguistic Computing | 2010

TreeForm: Explaining and exploring grammar through syntax trees

Donald Derrick; Daniel W. Archambault

Linguists studying grammar often describe their models using a syntax tree. Drawing a syntax tree involves the depiction of a rooted tree with additional syntactic features using specific domain conventions. TreeForm assists users in developing syntax trees, complete with movement lines, coreference, and feature association, in order to explore their syntactic theories and explain them to their colleagues. It is a drag-and-drop alternative to the LaTeX and labelled bracket notation tools already available, which many linguists find difficult to use. We compare the output of TreeForm to that of those existing tools and show that TreeForm better respects the conventions of the domain. We assess how easily linguists learn to use TreeForm through a series of cognitive walkthroughs. Our reviews find that TreeForm is a viable alternative to existing tools.
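For readers unfamiliar with the labelled bracket notation that TreeForm is positioned against, here is a small example parsed and rendered with NLTK; NLTK is a common choice for illustration, not one of the specific tools the paper compares.

```python
from nltk import Tree

# Labelled bracket notation: each bracketed constituent carries a label
t = Tree.fromstring("(S (NP (N Linguists)) (VP (V draw) (NP (N trees))))")
t.pretty_print()  # ASCII rendering of the same syntax tree
```

Writing trees this way requires balancing brackets by hand, which is the usability gap the drag-and-drop interface of TreeForm targets.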


Journal of the Acoustical Society of America | 2016

Effects of aero-tactile stimuli on continuous speech perception

Donald Derrick; Greg A. O'Beirne; Jennifer Hay; Romain Fiasson

We follow up on research demonstrating that aero-tactile information can enhance accurate identification of stop- and fricative-onset syllables in two-way forced-choice experiments (Derrick et al., 2014) to include open-set identification tasks. We recorded audio and speech airflow simultaneously from the lips of two New Zealand English (NZE) speakers (one female, one male), and used these recordings to produce an auditory/aero-tactile matrix sentence test. The airflow signal is used to drive a piezoelectric air pump that delivers airflow to the right temple simultaneously with presentation of noise-degraded auditory recordings. Participants (including native NZE speakers with and without hearing impairment, and normal-hearing native non-NZE and non-native English speakers) listen to and repeat 5-word sentences presented in noise with and without simultaneous airflow. Their open-set responses are scored by the researchers. Custom-written software identifies the SNRs for 20% and 80% word identification acc...
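The custom software is not described in detail; as a hedged illustration, SNRs for 20% and 80% word identification accuracy could be read off a logistic psychometric function fitted to the scored responses, as sketched here with invented data.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, midpoint, slope):
    """Psychometric function: proportion of words correct vs. SNR."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - midpoint)))

# Hypothetical scored data: SNR (dB) and proportion of words correct
snr = np.array([-18, -15, -12, -9, -6, -3], dtype=float)
p_correct = np.array([0.05, 0.15, 0.40, 0.65, 0.85, 0.95])

(mid, slope), _ = curve_fit(logistic, snr, p_correct, p0=(-10.0, 0.5))

def snr_at(p):
    # Invert the logistic to find the SNR giving proportion correct p
    return mid + np.log(p / (1.0 - p)) / slope

print(snr_at(0.2), snr_at(0.8))  # SNRs at 20% and 80% accuracy
```

Comparing these two threshold SNRs across the airflow and no-airflow conditions is one straightforward way to quantify any aero-tactile enhancement.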


Journal of the Acoustical Society of America | 2018

Stop, approximant, and timing slot: The changing faces of the velar stop in Iwaidja

Jason A. Shaw; Christopher Carignan; Tonya Agostini; Robert Mailhammer; Mark Harvey; Donald Derrick

Limited access to speakers and incomplete lexical knowledge are common challenges facing phonetic description of under-documented languages. We address these challenges by taking a multi-dimensional approach, seeking to constrain our phonetic description by covariation across acoustic and articulatory parameters. We demonstrate the approach through an analysis of velar consonants in the Australian Aboriginal language Iwaidja. Existing accounts contrast a velar stop /k/ with a velar approximant /ɰ/ in word-medial position (Evans 2009). Converging evidence from ultrasound images of the tongue body and acoustic analysis of intensity data reveals that the posited opposition is not consistent across speakers (N = 4) and lexical items. Unsupervised categorization of the phonetic data indicates two phonetic categories, appropriately labelled as [k] and [ɰ], which do not map consistently to dictionary labels in existing descriptions. We conclude that speaker-specific allophonic variation is the result of an ongoing process of lenition of /k/ between sonorant segments which has not yet phonologized. More broadly, integrating phonetic dimensions revealed categories that were ill-defined on the basis of acoustic or articulatory measures alone. Depth of analysis, characterized by phonetic multi-dimensionality, may support robust generalization where broad analysis (multiple speakers, large corpora) is impractical or impossible.
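The abstract does not name the categorization algorithm; a generic sketch of two-way unsupervised categorization over combined acoustic and articulatory parameters might look like the following, with k-means as an assumed stand-in and the feature names invented for illustration.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.preprocessing import StandardScaler

# Hypothetical per-token features: intensity dip (dB) and tongue-body
# raising (mm), standing in for the acoustic/articulatory parameters
rng = np.random.default_rng(0)
stop_like = rng.normal([12.0, 6.0], 1.5, size=(40, 2))    # [k]-like tokens
approx_like = rng.normal([4.0, 2.0], 1.5, size=(40, 2))   # [ɰ]-like tokens
X = StandardScaler().fit_transform(np.vstack([stop_like, approx_like]))

# Two clusters, to be compared afterwards against dictionary labels
labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(X)
```

Cross-tabulating the cluster labels against the dictionary's /k/ vs. /ɰ/ labels is what reveals the inconsistent mapping the abstract reports.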


Journal of the Acoustical Society of America | 2016

Visual-tactile integration in speech perception: Evidence for modality neutral speech primitives

Katie Bicevskis; Donald Derrick; Bryan Gick

Audio-visual [McGurk and MacDonald (1976). Nature 264, 746-748] and audio-tactile [Gick and Derrick (2009). Nature 462(7272), 502-504] speech stimuli enhance speech perception over audio stimuli alone. In addition, multimodal speech stimuli form an asymmetric window of integration that is consistent with the relative speeds of the various signals [Munhall, Gribble, Sacco, and Ward (1996). Percept. Psychophys. 58(3), 351-362; Gick, Ikegami, and Derrick (2010). J. Acoust. Soc. Am. 128(5), EL342-EL346]. In this experiment, participants were presented with video of faces producing /pa/ and /ba/ syllables, both alone and with air puffs occurring synchronously and at different timings up to 300 ms before and after the stop release. Perceivers were asked to identify the syllable they perceived, and were more likely to respond that they perceived /pa/ when air puffs were present, with an asymmetrical preference for puffs following the video signal, consistent with the relative speeds of visual and air puff signals. The results demonstrate that visual-tactile integration in speech perception occurs much as it does with audio-visual and audio-tactile stimuli. This finding contributes to the understanding of multimodal speech perception, lending support to the idea that speech is not perceived as an audio signal that is supplemented by information from other modes, but rather that the primitives of speech perception are, in principle, modality neutral.


Journal of the Acoustical Society of America | 2016

The articulation of /ɹ/ in New Zealand English

Matthias Heyne; Xuan Wang; Kieran Dorreen; Donald Derrick; Kevin Watson

A large number of studies have investigated the articulation of approximant /ɹ/ in American English (AE) (e.g., Delattre & Freeman, 1968). This research has found that a low third formant (F3), the main acoustic cue signaling rhoticity, can be achieved using many different tongue configurations; the two main tongue shapes used for /ɹ/ are “tip-down” (“bunched”) and “tip-up” (“retroflex”) (cf. Hagiwara, 1994). While speakers likely employ various “trading relationships” to maintain a constantly low F3 across production strategies (Guenther et al., 1999), they have access to a pool of variation, which some use to form complex and idiosyncratic patterns of allophony (Mielke et al., 2016). Such patterns may arise during speech acquisition (Magloughlin, 2016). This study focuses on a non-rhotic dialect, New Zealand English (NZE), to test whether dialect rhoticity constrains idiosyncratic allophony. Ultrasound video was collected for 63 speakers articulating 13 words containing tokens of /ɹ/ in different phonet...

Collaboration


Dive into Donald Derrick's collaborations.

Top Co-Authors

Bryan Gick, University of British Columbia
Jason A. Shaw, University of Western Sydney
Matthias Heyne, University of Canterbury
Michael Proctor, University of Southern California
Romain Fiasson, University of Canterbury
Ian Stavness, University of Saskatchewan
Catherine T. Best, University of Western Sydney
Jennifer Hay, University of Canterbury
Wei-rong Chen, National Tsing Hua University