
Publication


Featured research published by Chenhao Chiu.


Computer Methods in Biomechanics and Biomedical Engineering: Imaging & Visualization | 2014

Speech function of the oropharyngeal isthmus: a modelling study

Bryan Gick; Peter Anderson; Hui Chen; Chenhao Chiu; Ho Beom Kwon; Ian Stavness; Ling Tsou; Sidney S. Fels

A finite element method-based numerical model of upper airway structures (jaw, tongue, maxilla, soft palate) was implemented to observe interactions between the soft palate and tongue, and, in particular, to distinguish the contributions of individual muscles in producing speech-relevant constrictions of the oropharyngeal isthmus (OPI) or ‘uvular’ region of the oral tract. Simulations revealed a sphincter-like general operation for the OPI, particularly with regard to the function of the palatoglossus muscle. Furthermore, as has been observed with the lips, the OPI can be controlled by multiple distinct muscular mechanisms, each reliably producing a different-sized opening and robust to activation noise, suggestive of modular motor control for speech. As off-midline structures of the OPI are difficult to observe during speech production, biomechanical simulation offers a promising approach to studying these structures.


Journal of the Acoustical Society of America | 2012

Producing whole speech events: differential facial stiffness across the labial stops

Bryan Gick; Naomi Francis; Chenhao Chiu; Ian Stavness; Sidney S. Fels

It has long been assumed that the labial stops (e.g., [p], [b], [m]) are articulatorily identical. However, recent evidence [Abel et al. ISSP, 2011] shows that these labial stops are visually distinct. This distinction could result from differential passive responses to air pressure differences across the stops, or could reflect an active difference in facial muscle activation. An active difference would challenge the simplicity of unidimensional physical target-based speech production models. A pilot study was conducted in which air was blown simultaneously into a speaker's mouth and nose just at the onset of /p/ and /m/ closures. Preliminary results show displacement of the cheeks and lips at /m/ onset, but not at /p/ onset. These results indicate different initial muscular settings for these sounds, presumably to stiffen the face in anticipation of the increased oral air pressure for /p/. Biomechanical simulation using ArtiSynth (www.artisynth.org) confirms that this outcome is consistent with activati...


Journal of the Acoustical Society of America | 2011

Feed‐forward control of phonetic gestures in consonant–vowel syllables: Evidence from responses to auditory startle.

Chenhao Chiu; Andrew James Thomas Stevenson; Dana Maslovat; Romeo Chua; Bryan Gick; Ian M. Franks

Speech production, like other limb movements, relies on both feed‐forward and feedback mechanisms. Use of a startling auditory stimulus (>90 dB) has been shown to trigger fast, accurate feed‐forward performances in upper limb movements prior to access to feedback information [Valls‐Sole et al. (1999), J. Physiol. 516: 931–938; Carlsen et al. (2004), J. Mot. Behav. 36: 253–264]. This startle paradigm is applied to test whether pre‐programed, feed‐forward speech production differs in phonetic detail from production with access to feedback. The experiment examined the production of the CV syllable [ba], starting with the mouth either open or closed. This speech production was triggered either by a control stimulus (82 dB) or by a startling stimulus (124 dB). Results from ten participants showed that lip compression occurred for both starting conditions (mouth open and mouth closed), and also indicated that the timing relationships of the articulators were stable across control trials and startle trials. The ac...


Journal of Chinese Linguistics | 2017

上东谷霍尔语的发声态对立 = Contrastive phonation in Upper Donggu Horpa (in Chinese)

Jackson T.-S. Sun (孙天心); Qianzi Tian (田阡子); Chenhao Chiu (邱振豪)

Horpic denotes a cluster of little-explored languages under the Rgyalrongic subgroup of Qiangic in Sino-Tibetan. Upper Donggu, spoken in Rongbrag County of Dkarmdzes Prefecture, is a previously unknown dialect of Central Horpa, a major language within Horpic. Upper Donggu makes a phonemic distinction between modal and slack phonation. This remarkable feature not only contrasts lexical meanings, but also plays a role in verb-stem and other morphological formations. Sorting out the origins of slack phonation in Upper Donggu is still a daunting task, but partial correspondences in vocabulary and inflectional morphology can be established between Upper Donggu slack syllables and syllables in other Central Horpa varieties bearing low tone or voiceless aspirated fricative onsets. This suggests that phonation in Upper Donggu is a conservative feature that provides valuable clues to the origin of contrastive tone and fricative aspiration in Horpic, and also to the internal ramifications of this important Rgyalrongic subgroup.


Journal of the International Phonetic Association | 2016

Uvular approximation as an articulatory vowel feature

Jonathan P. Evans; Jackson T.-S. Sun; Chenhao Chiu; Michelle Liou

This study explores the phenomenon of uvularization in the vowel systems of two Heishui County varieties of Qiang, a Sino-Tibetan language of Sichuan Province, China. Ultrasound imaging (one speaker) shows that uvularized vowels have two tongue gestures: a rearward gesture, followed by movement toward the place of articulation of the corresponding plain vowel. Time-aligned acoustic and articulatory data show how movement toward the uvula correlates with changes in the acoustic signal. Acoustic correlates of uvularization (taken from two speakers) are seen most consistently in raising of vowel F1, lowering of F2 and in raising of the difference F3-F2. Imaging data and the formant structure of [l] show that uvular approximation can begin during the initial consonant that precedes a uvularized vowel. Uvularization is reflected phonologically in the phonotactic properties of vowels, while vowel harmony aids in the identification of plain–uvularized vowel pairs. The data reported in this paper argue in favor of a revision of the catalog of secondary articulations recognized by the International Phonetic Alphabet, in order to include uvularization, which can be marked with the symbol [ʶ] in the case of approximation and [χ] for secondary uvular frication.


Journal of the Acoustical Society of America | 2016

Articulatory setting as global coarticulation: Simulation, acoustics, and perception

Bryan Gick; Chenhao Chiu; Francois Roewer-Despres; Murray Schellenberg; Ian Stavness

Articulatory settings, language-specific default postures of the speech articulators, have been difficult to distinguish from segmental speech content [see Gick et al. 2004, Phonetica 61, 220-233]. The simplest construal of articulatory setting is as a constantly maintained set of tonic muscle activations that coarticulates globally with all segmental content. In his early Overlapping Innervation Wave theory, Joos [1948, Language Monogr. 23] postulated that all coarticulation can be understood as simple overlap, or superposition [Bizzi et al. 1991, Science 253, 287-291], of muscle activation patterns. The present paper describes an implementation of Joos’ proposals within a modular neuromuscular framework [see Gick & Stavness 2013, Front. Psych. 4, 977]. Results of a simulation and perception study will be reported in which muscle activations corresponding to English-like and French-like articulatory settings are simulated and superposed on activations for language-neutral vowels using the ArtiSynth biome...


Frontiers in Psychology | 2014

Startling speech: eliciting prepared speech using startling auditory stimulus

Chenhao Chiu; Bryan Gick

Speech research has recently seen a good deal of activity surrounding forward models (Tian and Poeppel, 2012; Pickering and Garrod, 2013; Scott, 2013), expanding on a long tradition of work in preprogramming of speech motor plans (e.g., Lashley, 1951; Keele, 1981; Klapp, 2003). Despite the volume of activity and interest in this area, few studies have offered insight into the detailed content of these forward plans. The content of such plans should presumably specify, at minimum, those aspects of speech that are essential in determining linguistic contrast, independent of the many aspects of a physical speech utterance that may be determined or altered through feedback mechanisms. Our previous work has attempted to uncover some of the detailed content of such forward plans using behavioral methods (Scott et al., 2013), while other studies have used neuroimaging methods (e.g., Heinks-Maldonado et al., 2006). Both approaches have given suggestive results, though not without concerns regarding interpretation (Niziolek et al., 2013). A novel experimental methodology employing startling auditory stimuli (SAS, >120 dB) has been used to demonstrate the execution of prepared non-speech motor behaviors (e.g., head rotation and upper limb movements) with little or no interference from feedback regulation (Valls-Sole et al., 1999; Oude Nijhuis et al., 2007; Carlsen et al., 2012). Accelerated release of prepared movements (as short as 70 ms for EMG response onset) in response to SAS has been termed the StartReact effect (Valls-Sole et al., 1999, 2008). Because of their very short onset latency, SAS-induced actions may be fully executed before they are affected by sensory feedback, thus enabling study of the forward plan. It is our opinion that this experimental paradigm is ideally suited for investigating speech production and uncovering the detailed contents of forward speech plans. 
Early analyses hypothesized that the rapid release of SAS-induced responses is the result of triggering subcortically stored information with faster neural transmissions (Carlsen et al., 2004; see also Castellote et al., 2012; Nonnekes et al., 2014). However, recent transcranial magnetic stimulation (TMS) studies show that the StartReact effect may not be limited to subcortically stored programs, but can also be observed in cortically dependent processes (Alibiglou and MacKinnon, 2012; Stevenson et al., 2014). These studies found that, when a cortical silent period was induced by applying TMS to motor cortex, the StartReact response was delayed in startle trials. If only subcortical processes were involved, TMS should not have affected the StartReact response. The delay in the StartReact response suggests that the pathways for a StartReact response would be mediated by, rather than bypassing, cortical areas. Following these studies of finger movement, Stevenson et al. (2014) apply the startle paradigm to prepared spoken syllables, observing that voluntary lip movements were released at shorter latencies by a SAS, while the timing of kinematic displacement remained unaffected and formant profiles were performed as intended with no disruption. These results support the view that prepared syllables encode sufficient kinematic and acoustic information as part of the forward plan, and that this information may be subject to rapid release by a SAS. Extending this paradigm to pitch control in speech, Chiu and Gick (in press) show that a SAS induces an elevated pitch level in prepared syllables. Speakers show no evidence of an attempt to correct this elevated pitch to a baseline level even though auditory and somatosensory feedback is likely available before the end of the response. 
These findings raise questions as to the extent to which feedback information may affect SAS-induced responses, and suggest that uncorrected contents of forward speech plans may be observable even for longer (i.e., multisyllabic) responses using SAS. The observed StartReact effect in syllable production also suggests that SAS-induced motor tasks, including upper limb and speech movements, may involve similar neural pathways. It is noteworthy that the pathways mediating the StartReact effect overlap with those involved in speech production. As summarized in Carlsen et al. (2012), the StartReact response is mediated via an ascending thalamo-cortical pathway, generated by activation from the reticular formation acting on the thalamus. Increased activation in the thalamus provides input to primary motor cortex, which initiates the cortically prepared movement via a descending corticospinal pathway. Similarly, speech production may also rely on thalamo-cortical circuits and a descending corticospinal pathway. Specifically, receiving inputs from the cerebellum, the thalamus projects to primary motor cortex and Broca's area, and the commands are mediated via the putamen and reticular formation and sent down to the phonatory motoneurons (Iwata et al., 1996; Jurgens, 2002; Guenther et al., 2006). Given that speech production involves a thalamo-cortical pathway similar to the one found in upper limb movements, limb and speech movements may share the same StartReact pathways when elicited by a SAS. Shorter reaction times in StartReact responses are accounted for by increased neural activation reaching the initiation threshold more quickly (see Carlsen et al., 2012 for details). As with the calculation of the required time span for voluntary limb movements, we can also conservatively calculate the time required for a speech response. First, Schroeder and Foxe (2002) report a response latency of 10 ~ 25 ms from the onset of an auditory stimulus to activation in the auditory cortex.
Second, another 5 ~ 10 ms is required for the stimulus to be conducted between the lateral lemniscus and the thalamus for auditorily evoked responses (Stockard et al., 1977). Third, transcortical and thalamus-primary motor cortex transmissions require 2 ~ 4 ms for conduction (Guenther et al., 2006; Carlsen et al., 2012). Last, the orofacial muscle EMG response to TMS on the face area of the motor cortex has a latency of about 11 ~ 12 ms (Meyer et al., 1994), and the motor time for the muscle movement adds a further 30 ms. Adding these values gives a minimum lag of 58 ~ 81 ms in response to a SAS. As reported in Stevenson et al. (2014), the onset of SAS-induced responses was 75 ms, suggesting that the shared neural pathway used for limb movements and speech movements does lead to a StartReact effect for speech movement. Insofar as programming is necessary for speech production, we believe that the SAS methodology provides a new perspective that can help us to uncover the kinematic and linguistic contents of forward speech plans. Neural correlates and pathways for SAS-induced responses also support the view that SAS-induced speech responses may contain unaltered details of speech plans, allowing researchers a window into forward speech planning that bypasses afferent feedback information.
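As a quick sanity check on the arithmetic above, the four conduction latencies plus motor time can be summed stage by stage. This is an illustrative sketch added for clarity, not part of the original article; the stage names and values are taken directly from the ranges cited in the text.

```python
# Minimum and maximum latency (ms) for each stage of a SAS-induced speech
# response, as cited in the text.
stages = {
    "auditory stimulus to auditory cortex (Schroeder & Foxe, 2002)": (10, 25),
    "lateral lemniscus to thalamus (Stockard et al., 1977)": (5, 10),
    "transcortical / thalamus to motor cortex (Guenther et al., 2006)": (2, 4),
    "motor cortex to orofacial EMG (Meyer et al., 1994)": (11, 12),
    "motor time to visible muscle movement": (30, 30),
}

low = sum(lo for lo, _ in stages.values())
high = sum(hi for _, hi in stages.values())
print(f"predicted minimum lag: {low} ~ {high} ms")  # prints "predicted minimum lag: 58 ~ 81 ms"
```

The observed 75 ms onset reported by Stevenson et al. (2014) falls inside this 58 ~ 81 ms window, which is the consistency the passage relies on.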


Journal of the Acoustical Society of America | 2013

Producing whole speech events: Anticipatory lip compression in bilabial stops

Chenhao Chiu; Bryan Gick

Bilabial stops /b/, /p/, and /m/ ostensibly share a common lip constriction. Recent evidence shows that different bilabial stops involve distinct facial muscle activations, suggesting that oral speech movements anticipate aerodynamic conditions [Gick et al. 2pSC1 Proc. Acoust. 2012 H.K.]. The present study investigates how the lips themselves behave in whole speech events. Existing models of speech production governing only articulatory motions predict that lip compression would respond to changes in aerodynamic conditions rather than anticipating such changes; a model that includes whole events predicts anticipatory activation of lip muscles with concomitant kinematic lip compression, but only in cases where a real increase in air pressure is expected. Lip kinematics were recorded using OptoTrak to trace lip movements of bilabial stops in response to imperative acoustic stimuli. Results show consistent anticipatory lip compression in spoken /b/, but not in non-speech jaw opening movements and only sporad...


Neuroscience | 2014

Cortical involvement in the StartReact effect

Andrew James Thomas Stevenson; Chenhao Chiu; Dana Maslovat; Romeo Chua; Bryan Gick; Jean-Sébastien Blouin; Ian M. Franks


Canadian Acoustics | 2011

Categorical variation in lip posture is determined by quantal biomechanical-articulatory relations

Bryan Gick; Ian Stavness; Chenhao Chiu; Sidney S. Fels

Collaboration


Dive into Chenhao Chiu's collaborations.

Top Co-Authors

Bryan Gick, University of British Columbia

Ian Stavness, University of Saskatchewan

Sidney S. Fels, University of British Columbia

Dana Maslovat, University of British Columbia

Ian M. Franks, University of British Columbia

Romeo Chua, University of British Columbia

Hui Chen, University of British Columbia

Jean-Sébastien Blouin, University of British Columbia