Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Megan Keough is active.

Publication


Featured research published by Megan Keough.


Journal of the Acoustical Society of America | 2018

Visual-aerotactile perception and congenital hearing loss

Charlene Chang; Megan Keough; Murray Schellenberg; Bryan Gick

Previous research on multimodal speech perception with hearing-impaired individuals has focused on audiovisual integration, with mixed results. Cochlear-implant users integrate audiovisual cues better than perceivers with normal hearing when the cross-modal cues are congruent [Rouger et al. 2007, PNAS, 104(17), 7295–7300] but not when they are incongruent [Rouger et al. 2008, Brain Research 1188, 87–99], leading to the suggestion that early auditory exposure is required for typical speech integration processes to develop [Schorr 2005, PNAS, 102(51), 18748–18750]. If a deficit in one modality does indeed lead to a deficit in multimodal processing, then hard-of-hearing perceivers should show different patterns of integration in other modality pairings. The current study builds on research showing that gentle puffs of air on the skin can push individuals with normal hearing to perceive silent bilabial articulations as aspirated. We report on a visual-aerotactile perception task comparing individuals with congenital hearing loss to those with normal hearing. Results indicate that aerotactile information facilitated identification of /pa/ for all participants (p < 0.001), and we found no significant difference between the two groups (normal hearing and congenital hearing loss). This suggests that typical multimodal speech perception does not require access to all modalities from birth. [Funded by NIH.]
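The reported pattern (facilitation by airflow in both groups, with no group difference) can be illustrated with a simple analysis sketch in Python. This is not the authors' analysis; the data file and column names (response_pa, airflow, group) are hypothetical placeholders.

# Hypothetical sketch: logistic regression testing whether an air puff
# shifts /pa/ responses, and whether that shift differs by hearing group.
# Not the authors' analysis; the CSV file and column names are placeholders.
import pandas as pd
import statsmodels.formula.api as smf

# Expected columns: response_pa (0/1), airflow (0/1), group ("NH" or "CHL")
trials = pd.read_csv("aerotactile_trials.csv")

model = smf.logit("response_pa ~ airflow * C(group)", data=trials).fit()
print(model.summary())
# A significant 'airflow' term together with a non-significant interaction
# would mirror the reported pattern: facilitation in both groups, no group difference.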


Journal of the Acoustical Society of America | 2018

The stability of visual–aerotactile effects across multiple presentations of a single token

Sharon Kwan; Megan Keough; Ryan C. Taylor; Terrina Chan; Murray Schellenberg; Bryan Gick

Previous research has shown that the sensation of airflow causes bilabial stop closures to be perceived as aspirated even when paired with silent articulations rather than an acoustic signal [Bicevskis et al. 2016, JASA 140(5): 3531–3539]. However, some evidence suggests that perceivers integrate this cue differently if the silent articulations come from an animated face [Keough et al. 2017, Canadian Acoustics 45(3): 176–177] rather than a human one. In that study, participants shifted from a strong initial /ba/ bias to a strong /pa/ bias by the second half of the experiment, suggesting that they learned to associate the video with the aspirated articulation through experience with the airflow. One explanation for these findings is methodological: participants saw a single video clip, while previous work exposed participants to multiple videos. The current study reports two experiments using a single clip with a human face (originally from Bicevskis et al. 2016). We found no evidence of a bias shift, indicating...


Journal of the Acoustical Society of America | 2018

Perceiving prosodic prominence via unnatural visual information in avatar communication

Ryan C. Taylor; Dimitri Prica; Megan Keough; Bryan Gick

Listeners integrate information from simulated faces in multimodal perception [Cohen & Massaro 1990, Behav. Res. Meth. Instr. Comp. 22(2), 260–263], but not always in the same way as real faces [Keough et al. 2017, Can. Acoust. 45(3): 176–177]. This is increasingly relevant given the dramatic increase in avatar communication in virtual spaces [https://www.bloomberg.com/professional/blog/computings-next-big-thing-virtual-world-may-reality-2020/]. Prosody is especially relevant because, compared to segmental speech sounds, the visual factors indicating prosodic prominence (e.g., eyebrow raises and hand gestures) frequently bear no biomechanical relation to the production of the acoustic features of prominence, yet are nonetheless highly reliable [Krahmer & Swerts 2007, JML 57(3): 396–414]. Avatar communication systems may convey prosodic information through unnatural means, e.g., by expressing amplitude via oral aperture (louder sound = larger opening). The present study examines whether this unnatural but reliable indicator of speech amplitude is integrated in prominence perception. We report an experiment describing whether and how perceivers take this reliable but unnatural visual information into account in the detection of prosodic prominence. Preliminary evidence suggests that oral aperture increases perceived prominence, with differences by sentence position.
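As a rough illustration of the oral-aperture mapping the abstract describes (louder sound = larger opening), the Python sketch below maps per-frame RMS amplitude of an audio signal to a normalized mouth-opening value. The frame length and scaling constants are illustrative assumptions, not any avatar platform's actual behavior.

# Hypothetical sketch of a "louder sound = larger opening" mapping.
# Frame size and normalization are assumptions for illustration only.
import numpy as np

def aperture_from_amplitude(samples: np.ndarray, frame_len: int = 1024,
                            max_aperture: float = 1.0) -> np.ndarray:
    """Map per-frame RMS amplitude to a normalized mouth-opening value."""
    n_frames = len(samples) // frame_len
    frames = samples[: n_frames * frame_len].reshape(n_frames, frame_len)
    rms = np.sqrt((frames ** 2).mean(axis=1))
    # Normalize against the loudest frame so aperture spans [0, max_aperture].
    return max_aperture * rms / (rms.max() + 1e-9)

# Example: a 440 Hz tone that ramps up in loudness opens the mouth wider.
t = np.linspace(0, 1, 16000, endpoint=False)
signal = np.sin(2 * np.pi * 440 * t) * np.linspace(0.1, 1.0, t.size)
aperture = aperture_from_amplitude(signal)
print(aperture[:3], aperture[-3:])  # small openings early, large ones late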


Journal of the Acoustical Society of America | 2018

Perceiving audiovisual speech articulation in virtual reality

Megan Keough; Ryan C. Taylor; Dimitri Prica; Esther Y. Wong; Bryan Gick

Listeners incorporate visual speech information produced by computer-simulated faces when the articulations are precise and pre-programmed [e.g., Cohen & Massaro 1990, Behav. Res. Meth. Instr. Comp. 22(2), 260–263]. Advances in virtual reality (VR) and avatar technologies have created new platforms for face-to-face communication in which visual speech information is presented through avatars. The avatars’ articulatory movements may be generated in real time based on an algorithmic response to acoustic parameters. While the communicative experience in VR has become increasingly realistic, the visual speech articulations remain intentionally imperfect and focused on synchrony to avoid uncanny valley effects [https://developers.facebook.com/videos/f8-2017/the-making-of-facebook-spaces/]. Depending on the VR platform, vowel rounding may be represented reasonably faithfully while mouth opening size may convey gross variation in amplitude. It is unknown whether and how perceivers make use of such underspecified and at times misleading visual cues to speech. The current study investigates whether reliable segmental information can be extracted from visual speech algorithmically generated through a popular VR platform. We report on an experiment using a speech-in-noise task with audiovisual stimuli in two conditions (with articulatory movement and without) to see whether the visual information improves or degrades identification.
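For readers unfamiliar with speech-in-noise stimuli, the sketch below shows one common way to mix a recorded syllable with babble noise at a target signal-to-noise ratio. The file names and the -6 dB level are assumptions for illustration, not the stimuli used in this study.

# Hypothetical sketch: mix mono speech and noise recordings at a target SNR.
# File names and the -6 dB level are placeholders.
import numpy as np
import soundfile as sf  # any WAV I/O library would do

def mix_at_snr(speech: np.ndarray, noise: np.ndarray, snr_db: float) -> np.ndarray:
    noise = noise[: len(speech)]
    speech_rms = np.sqrt(np.mean(speech ** 2))
    noise_rms = np.sqrt(np.mean(noise ** 2))
    # Scale noise so that 20*log10(speech_rms / scaled_noise_rms) == snr_db.
    scale = speech_rms / (noise_rms * 10 ** (snr_db / 20))
    mix = speech + scale * noise
    return mix / np.max(np.abs(mix))  # rescale to avoid clipping

speech, sr = sf.read("syllable.wav")   # assumed mono
noise, _ = sf.read("babble.wav")       # assumed mono, at least as long as speech
sf.write("stimulus_minus6dB.wav", mix_at_snr(speech, noise, -6.0), sr)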


Journal of the Acoustical Society of America | 2018

Cross-linguistic lateral bracing: An ultrasound study

Felicia Tong; Yadong Liu; Dawoon Choi; Megan Keough; Bryan Gick

Lateral bracing refers to intentional stabilizing of tongue contact with the roof of the mouth along the upper molars or the hard palate. Previous research has found evidence of lateral bracing in individual speakers of six languages [Cheng et al. 2017. Can. Acoust. 45, 186]. The current study examines lateral bracing cross-linguistically at a larger scale using ultrasound technology to image tongue movement. We tracked and measured the magnitude of vertical tongue movement at three positions (left, right, and middle) in the coronal plane over time using Flow Analyzer [Barbosa, 2014. J. Acoust. Soc. Am. 136, 2105] for optical flow analysis. Preliminary results across all languages (Cantonese, English, French, Korean, Mandarin, and Spanish) show that the sides of the tongue are more stable than the center and maintain a relatively high position in the mouth throughout speech. The magnitude of movement at the sides is significantly smaller than at the center of the tongue. Further, lateral releases vary in frequency for different languages. This evidence supports the view that bracing is a physiological property of speech production that occurs irrespective of the language spoken. [Funding from NSERC.]
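The style of measurement described above can be sketched with dense optical flow: estimate frame-to-frame motion in an ultrasound video and summarize vertical movement in the left, middle, and right thirds of the image. The snippet below uses OpenCV's Farneback flow as a stand-in for the Flow Analyzer tool cited in the abstract; the video file name and the simple three-way region split are illustrative assumptions.

# Hypothetical sketch: per-region vertical motion from dense optical flow.
# Uses OpenCV's Farneback algorithm as a stand-in for Flow Analyzer.
import cv2
import numpy as np

cap = cv2.VideoCapture("ultrasound_coronal.avi")  # placeholder file name
ok, prev = cap.read()
if not ok:
    raise SystemExit("could not read video")
prev = cv2.cvtColor(prev, cv2.COLOR_BGR2GRAY)
vertical_motion = []  # per-frame mean |vertical flow| in (left, middle, right) thirds

while True:
    ok, frame = cap.read()
    if not ok:
        break
    gray = cv2.cvtColor(frame, cv2.COLOR_BGR2GRAY)
    flow = cv2.calcOpticalFlowFarneback(prev, gray, None,
                                        0.5, 3, 15, 3, 5, 1.2, 0)
    vy = np.abs(flow[..., 1])                      # vertical component magnitude
    left, middle, right = np.array_split(vy, 3, axis=1)
    vertical_motion.append([left.mean(), middle.mean(), right.mean()])
    prev = gray

motion = np.array(vertical_motion)
print("mean |vertical motion| (left, middle, right):", motion.mean(axis=0))
# Smaller values at the sides than at the center would be consistent with
# the lateral-bracing pattern reported above.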


Journal of the Acoustical Society of America | 2016

Spatial congruence in multimodal speech perception

Megan Keough; Murray Schellenberg; Bryan Gick

A growing literature provides evidence for the importance of synchronicity of cross-modal information in speech perception [e.g., audio-visual, Munhall et al. 1996, Perception & Psychophysics 58: 351–362; audio-aerotactile, Gick et al. 2010, JASA 128: 342–346; visual-aerotactile, Bicevskis et al. submitted ms]. While considerable work has investigated this role of temporal congruence, no research has directly explored the role of spatial congruence (i.e., co-directionality) of stimulus sources. If perceivers are picking up a localized distal speech event [e.g., Fowler 1986, Status Report of Speech Research: 139–169], cross-modal sources of information are predicted to be more likely to integrate when presented codirectionally than contradirectionally. An audio-aerotactile pairing lends itself well to this question as both modalities can easily be presented laterally. The current study draws on methodology from previous work [Gick & Derrick 2009, Nature 462: 502–504] to ask whether cross-modal integration ...


Journal of the Acoustical Society of America | 2015

Smiled speech in a context-invariant model of coarticulation

Samuel Akinbo; Thomas J. Heins; Megan Keough; Elise Kedersha McClay; Avery Ozburn; Michael D. Schwan; Murray Schellenberg; Jonathan de Vries; Bryan Gick

Smiling during speech requires concurrent and often conflicting demands on the articulators. Thus, speaking while smiling may be modeled as a type of coarticulation. This study explores whether a context-invariant or a context-sensitive model of coarticulation better accounts for the variation seen in smiled versus neutral speech. While context-sensitive models assume some mechanism for planning of coarticulatory interactions [see Munhall et al., 2000, Lab Phon. V, 9–28], the simplest context-invariant models treat coarticulation as superposition [e.g., Joos, 1948, Language 24, 5–136]. In such a model, the intrinsic biomechanics of the body have been argued to account for many of the complex kinematic interactions associated with coarticulation [Gick et al., 2013, POMA 19, 060207]. Largely following the methods described in Fagel [2010, Dev. Multimod. Interf. 5967, 294–303], we examine articulatory variation in smiled versus neutral speech to test whether the local interactions of smiling and speech can be resolved in a context-invariant superposition model. Production results will be modeled using the ArtiSynth simulation platform (www.artisynth.org). Implications for theories of coarticulation will be discussed. [Research funded by NSERC.]
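A context-invariant superposition account of the kind described above can be illustrated with a toy example in which the articulatory commands for a speech gesture and for smiling are simply added, with no context-specific re-planning. The muscle names and activation values below are toy assumptions, not ArtiSynth output or the authors' model.

# Toy illustration of coarticulation as superposition of concurrent commands.
# Muscle names and activation values are illustrative assumptions only.
import numpy as np

MUSCLES = ["orbicularis_oris", "zygomaticus_major", "risorius", "mentalis"]

# Toy activation vectors (0-1) for a bilabial speech gesture and for smiling.
speech_bilabial = np.array([0.8, 0.0, 0.1, 0.3])
smile           = np.array([0.0, 0.9, 0.6, 0.0])

# Context-invariant superposition: the combined command is just the sum,
# clipped to a physiologically plausible range; neither gesture is re-planned.
smiled_speech = np.clip(speech_bilabial + smile, 0.0, 1.0)

for muscle, activation in zip(MUSCLES, smiled_speech):
    print(f"{muscle}: {activation:.2f}")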


Annual Review of Linguistics | 2019

Cross-Modal Effects in Speech Perception

Megan Keough; Donald Derrick; Bryan Gick


Journal of the Acoustical Society of America | 2018

Lateral bias in lingual bracing during speech

Bryan Gick; Megan Keough; Oksana Tkachman; Yadong Liu


Canadian Acoustics | 2017

Sensory Integration from an Impossible Source: Perceiving Simulated Faces

Megan Keough; Ryan C. Taylor; Donald Derrick; Murray Schellenberg; Bryan Gick

Collaboration


Dive into Megan Keough's collaborations.

Top Co-Authors

Bryan Gick, University of British Columbia
Murray Schellenberg, University of British Columbia
Ryan C. Taylor, University of British Columbia
Avery Ozburn, University of British Columbia
Elise Kedersha McClay, University of British Columbia
Michael D. Schwan, University of British Columbia
Samuel Akinbo, University of British Columbia
Donald Derrick, University of Canterbury
Janet F. Werker, University of British Columbia
Jonathan de Vries, University of British Columbia