Network


Latest external collaborations at the country level.

Hotspot


Dive into the research topics where Murray Schellenberg is active.

Publication


Featured research published by Murray Schellenberg.


Journal of the Acoustical Society of America | 2018

Dagaare [a] is not neutral to ATR harmony

Avery Ozburn; Samuel Akinbo; Alexander Angsongna; Murray Schellenberg; Douglas Pulleyblank

The literature on Dagaare (Gur; Ghana) describes a single low vowel, [a], which is neutral to ATR harmony [1]. This paper reports on an acoustic study of Dagaare showing that this description is incorrect. A list of sentences was elicited from five native speakers of Dagaare. Each sentence contained [a] in one of four verbal particles, situated in one of four contexts: ATR __ ATR, ATR __ RTR, RTR __ ATR, and RTR __ RTR. Formants of the low vowel were measured and compared across contexts. Results showed a substantial, significant difference in F1 values, and a smaller but still significant difference in F2 values, in contexts where [a] is followed by an ATR word compared to when it is followed by an RTR word. All speakers and all particles showed the same pattern. We conclude that, contrary to previous claims, Dagaare has two distinct low vowels that are not neutral to harmony: [a] occurs in RTR contexts, and [ʌ] occurs in ATR contexts.
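
As a rough illustration of the measurement pipeline this abstract describes, the sketch below pulls F1 and F2 at the vowel midpoint and compares ATR-following and RTR-following contexts. It assumes the parselmouth Praat bindings and SciPy; the file names, time stamps, and token layout are hypothetical, not the study's materials.

import parselmouth
from scipy import stats

def midpoint_formants(wav_path, start, end):
    """Return (F1, F2) in Hz at the temporal midpoint of a vowel interval."""
    snd = parselmouth.Sound(wav_path)
    formant = snd.to_formant_burg()  # Praat's Burg-method formant tracker
    t = (start + end) / 2.0
    return (formant.get_value_at_time(1, t),
            formant.get_value_at_time(2, t))

# One entry per measured token; in practice these would come from
# annotations of the elicited sentences (hypothetical values).
tokens = [
    {"wav": "dagaare_s1_i01.wav", "start": 0.42, "end": 0.51, "next_word": "ATR"},
    {"wav": "dagaare_s1_i02.wav", "start": 0.38, "end": 0.47, "next_word": "RTR"},
    # ... remaining tokens for all five speakers and four particles
]

f1 = {"ATR": [], "RTR": []}
for tok in tokens:
    first, _ = midpoint_formants(tok["wav"], tok["start"], tok["end"])
    f1[tok["next_word"]].append(first)

# A substantially lower F1 before ATR words is the signature of a
# distinct advanced low vowel [ʌ] rather than a single neutral [a].
t_stat, p_val = stats.ttest_ind(f1["ATR"], f1["RTR"])
print(f"F1, ATR vs. RTR context: t = {t_stat:.2f}, p = {p_val:.4f}")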


Journal of the Acoustical Society of America | 2018

Visual-aerotactile perception and congenital hearing loss

Charlene Chang; Megan Keough; Murray Schellenberg; Bryan Gick

Previous research on multimodal speech perception with hearing-impaired individuals has focused on audiovisual integration, with mixed results. Cochlear-implant users integrate audiovisual cues better than perceivers with normal hearing when perceiving congruent [Rouger et al. 2007, PNAS, 104(17), 7295–7300] but not incongruent cross-modal cues [Rouger et al. 2008, Brain Research 1188, 87–99], leading to the suggestion that early auditory exposure is required for typical speech integration processes to develop [Schorr 2005, PNAS, 102(51), 18748–18750]. If a deficit in one modality does indeed lead to a deficit in multimodal processing, then hard-of-hearing perceivers should show different patterns of integration in other modality pairings. The current study builds on research showing that gentle puffs of air on the skin can push individuals with normal hearing to perceive silent bilabial articulations as aspirated. We report on a visual-aerotactile perception task comparing individuals with congenital hearing loss to those with normal hearing. Results indicate that aerotactile information facilitated identification of /pa/ for all participants (p < 0.001), and we found no significant difference between the two groups (normal hearing and congenital hearing loss). This suggests that typical multimodal speech perception does not require access to all modalities from birth. [Funded by NIH.]
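
The abstract reports an airflow effect (p < 0.001) with no group difference, but does not name the statistical model; one plausible analysis is a logistic regression of /pa/ responses on airflow, hearing group, and their interaction, sketched below on simulated data. The column names, effect sizes, and model choice are assumptions, not the authors' actual analysis.

import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated trials: airflow raises the odds of a /pa/ response equally
# for normal-hearing (NH) and congenital-hearing-loss (CHL) listeners.
rng = np.random.default_rng(0)
n = 200
airflow = rng.integers(0, 2, n)                 # air puff present?
group = rng.choice(["NH", "CHL"], n)
p_pa = 1.0 / (1.0 + np.exp(-(-0.5 + 2.0 * airflow)))
df = pd.DataFrame({
    "pa_response": rng.binomial(1, p_pa),       # 1 = heard /pa/
    "airflow": airflow,
    "group": group,
})

# A significant airflow term reflects aerotactile facilitation; a null
# airflow:group interaction means both groups integrate the cue alike.
model = smf.logit("pa_response ~ airflow * C(group)", data=df).fit()
print(model.summary())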


Journal of the Acoustical Society of America | 2018

The stability of visual–aerotactile effects across multiple presentations of a single token

Sharon Kwan; Megan Keough; Ryan C. Taylor; Terrina Chan; Murray Schellenberg; Bryan Gick

Previous research has shown that the sensation of airflow causes bilabial stop closures to be perceived as aspirated even when paired with silent articulations rather than an acoustic signal [Bicevskis et al. 2016, JASA 140(5): 3531–3539]. However, some evidence suggests that perceivers integrate this cue differently if the silent articulations come from an animated face [Keough et al. 2017, Canadian Acoustics 45(3): 176–177] rather than a human one. In that study, participants shifted from a strong initial /ba/ bias to a strong /pa/ bias by the second half of the experiment, suggesting the participants learned to associate the video with the aspirated articulation through experience with the airflow. One explanation for these findings is methodological: participants saw a single video clip, while previous work exposed participants to multiple videos. The current study reports two experiments using a single clip with a human face (originally from Bicevskis et al. 2016). We found no evidence of a bias shift, indicating...
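
A minimal version of the split-half comparison implied here, assuming a simple two-proportion z-test from statsmodels; the counts are placeholders, not the study's data.

from statsmodels.stats.proportion import proportions_ztest

# /pa/ responses out of all trials in each half of the experiment
# (placeholder counts). A non-significant difference is the "no bias
# shift" outcome reported above for the single human-face clip.
pa_counts = [22, 25]     # first half, second half
n_trials = [60, 60]

z, p = proportions_ztest(pa_counts, n_trials)
print(f"/pa/ rate: {pa_counts[0]/n_trials[0]:.2f} -> "
      f"{pa_counts[1]/n_trials[1]:.2f}, z = {z:.2f}, p = {p:.3f}")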


Clinical Linguistics & Phonetics | 2017

Effects of cosmetic tongue bifurcation on English fricative production

Alyson Budd; Murray Schellenberg; Bryan Gick

Tongue bifurcation (also called ‘splitting’ or ‘forking’) is an increasingly popular cosmetic procedure in the body modification community that involves splitting the anterior tongue down the centre line. The implications of this procedure for speech have not been systematically studied; the few case studies that have been published suggest that there may be effects, primarily on fricatives. This article presents the first attempt to examine the acoustic implications of tongue bifurcation for speech production using a larger population sample. It compares the speech of 12 individuals with bifurcated tongues with a normative control group of equal size. Both qualitative and quantitative assessments are carried out, looking specifically at fricative production and perception. The speech of subjects with bifurcated tongues, while intelligible, shows a higher proportion of perceptibly atypical fricatives and significantly greater variance than seen in the control group.
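
The abstract does not spell out its quantitative measures; one standard acoustic metric for fricatives is the spectral centre of gravity, and the variance comparison could be run as sketched below. The metric choice, file names, and all numbers are assumptions for illustration.

import numpy as np
from scipy import stats
from scipy.io import wavfile

def spectral_cog(wav_path):
    """Spectral centre of gravity (Hz) of a recorded fricative token."""
    rate, sig = wavfile.read(wav_path)
    sig = sig.astype(float)
    if sig.ndim > 1:                  # mix stereo down to mono
        sig = sig.mean(axis=1)
    spectrum = np.abs(np.fft.rfft(sig))
    freqs = np.fft.rfftfreq(len(sig), d=1.0 / rate)
    return float(np.sum(freqs * spectrum) / np.sum(spectrum))

# e.g. cog_bifurcated = [spectral_cog(p) for p in bifurcated_wavs]
cog_bifurcated = [5200.0, 6900.0, 4100.0, 7800.0]   # invented /s/ values
cog_control    = [6100.0, 6300.0, 5900.0, 6200.0]

# Levene's test targets the *variance* difference, matching the
# "significantly greater variance" finding in the bifurcated group.
w, p = stats.levene(cog_bifurcated, cog_control)
print(f"Levene W = {w:.2f}, p = {p:.3f}")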


Journal of the Acoustical Society of America | 2016

Acoustics of speech, articulatory compensation, and dental overjet in Cantonese

Lauretta Cheng; Murray Schellenberg; Bryan Gick

Studies relating dental anomalies to misarticulations have noted that potential correlations appear to be obscured by articulatory compensation. Accommodation of tongue or mandible positions can help even individuals with severe malocclusion approximate perceptually typical speech [Johnson and Sandy, Angle Orthod. 69, 306-310 (1999)]. However, associations between malocclusion and articulation could surface if examined with acoustic analysis. The present study investigates the acoustic correlates of Cantonese speech as it relates to degree of overjet (horizontal overlap of upper and lower incisors). Production data was collected from native Cantonese-speaking adults, targeting the vowels /i, u, a/, and fricatives /f, s, ts, tsh/, previously found to be vulnerable phonemes in Cantonese speakers with dentofacial abnormalities [Whitehill et al., J Med Speech Lang Pathol. 9, 177-190 (2001)]. Measures of dental overjet and language background were included as well. Preliminary results from trained listeners sh...
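
A toy version of the correlational logic described here: relate each speaker's overjet measurement to an acoustic measure of one vulnerable phoneme. The measure (a spectral centroid for /s/) and every number below are placeholders, not the study's data.

from scipy import stats

overjet_mm = [2.1, 3.4, 5.0, 6.8, 8.2, 4.4]        # per-speaker overjet
s_centroid_hz = [6900.0, 6500.0, 6100.0, 5600.0, 5200.0, 6000.0]

# A reliable negative correlation would mean greater overjet shifts /s/
# energy downward even after whatever compensation speakers manage.
r, p = stats.pearsonr(overjet_mm, s_centroid_hz)
print(f"overjet vs. /s/ centroid: r = {r:.2f}, p = {p:.3f}")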


Journal of the Acoustical Society of America | 2016

Articulatory setting as global coarticulation: Simulation, acoustics, and perception

Bryan Gick; Chenhao Chiu; Francois Roewer-Despres; Murray Schellenberg; Ian Stavness

Articulatory settings, language-specific default postures of the speech articulators, have been difficult to distinguish from segmental speech content [see Gick et al. 2004, Phonetica 61, 220-233]. The simplest construal of articulatory setting is as a constantly maintained set of tonic muscle activations that coarticulates globally with all segmental content. In his early Overlapping Innervation Wave theory, Joos [1948, Language Monogr. 23] postulated that all coarticulation can be understood as simple overlap, or superposition [Bizzi et al. 1991, Science 253, 287-291], of muscle activation patterns. The present paper describes an implementation of Joos’ proposals within a modular neuromuscular framework [see Gick & Stavness 2013, Front. Psych. 4, 977]. Results of a simulation and perception study will be reported in which muscle activations corresponding to English-like and French-like articulatory settings are simulated and superposed on activations for language-neutral vowels using the ArtiSynth biome...
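
The superposition idea can be shown in a few lines: hold a constant vector of tonic "setting" activations and add it to every segmental activation trajectory. Muscle names and magnitudes below are invented for illustration; the actual simulations used the ArtiSynth platform rather than this toy arithmetic.

import numpy as np

muscles = ["genioglossus", "hyoglossus", "orbicularis_oris"]

# Language-specific articulatory setting: small tonic activations held
# throughout the utterance (invented French-like values).
setting = np.array([0.05, 0.00, 0.10])

# Segmental activations for a language-neutral vowel over five time
# steps (rows = time, columns = muscles).
t = np.linspace(0.0, 1.0, 5)
segment = np.outer(np.sin(np.pi * t), [0.40, 0.20, 0.10])

# Superposition: the setting coarticulates globally with the segment.
combined = segment + setting
for time, row in zip(t, combined):
    acts = ", ".join(f"{m} = {a:.2f}" for m, a in zip(muscles, row))
    print(f"t = {time:.2f}: {acts}")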


Journal of the Acoustical Society of America | 2016

Spatial congruence in multimodal speech perception

Megan Keough; Murray Schellenberg; Bryan Gick

A growing literature provides evidence for the importance of synchronicity of cross-modal information in speech perception [e.g., audio-visual, Munhall et al. 1996, Perception & Psychophysics 58: 351-362; audio-aerotactile, Gick et al. 2010, JASA 128: 342-346; visual-aerotactile, Bicevskis et al. submitted ms]. While considerable work has investigated this role of temporal congruence, no research has directly explored the role of spatial congruence (i.e., co-directionality) of stimulus sources. If perceivers are picking up a localized distal speech event [e.g., Fowler 1986, Status Report on Speech Research: 139-169], cross-modal sources of information are predicted to be more likely to integrate when presented codirectionally than contradirectionally. An audio-aerotactile pairing lends itself well to this question, as both modalities can easily be presented laterally. The current study draws on methodology from previous work [Gick & Derrick 2009, Nature 462: 502-504] to ask whether cross-modal integration ...


Journal of the Acoustical Society of America | 2015

Smiled speech in a context-invariant model of coarticulation

Samuel Akinbo; Thomas J. Heins; Megan Keough; Elise Kedersha McClay; Avery Ozburn; Michael D. Schwan; Murray Schellenberg; Jonathan de Vries; Bryan Gick

Smiling during speech requires concurrent and often conflicting demands on the articulators. Thus, speaking while smiling may be modeled as a type of coarticulation. This study explores whether a context-invariant or a context-sensitive model of coarticulation better accounts for the variation seen in smiled versus neutral speech. While context-sensitive models assume some mechanism for planning of coarticulatory interactions [see Munhall et al., 2000, Lab Phon. V, 9–28], the simplest context-invariant models treat coarticulation as superposition [e.g., Joos, 1948, Language 24, 5–136]. In such a model, the intrinsic biomechanics of the body have been argued to account for many of the complex kinematic interactions associated with coarticulation [Gick et al., 2013, POMA 19, 060207]. Largely following the methods described in Fagel [2010, Dev. Multimod. Interf. 5967, 294–303], we examine articulatory variation in smiled versus neutral speech to test whether the local interactions of smiling and speech can be resolved in a context-invariant superposition model. Production results will be modeled using the ArtiSynth simulation platform (www.artisynth.org). Implications for theories of coarticulation will be discussed. [Research funded by NSERC.]


Journal of the Acoustical Society of America | 2011

The realization of rising tone contours in sung Cantonese

Murray Schellenberg

Singers in tone languages are often thought to rely entirely on song melody to realize the tonal speech melody of the language. There is, however, some limited evidence that singers directly modify vowel F0 corresponding to lexical tone while singing. This paper analyzes the acoustic output of ten Cantonese singers singing a specially composed song to test whether singers adjust their performances to reflect mismatches between speech melody and song melody. It is found that singers include the dynamic portion of rising contour tones (falling tones were not included in this study) but that they do not adjust their performance to mark register components of tone.
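
A sketch of the F0 measurement this kind of study relies on, using the parselmouth Praat bindings: extract the pitch track, window down to one sung note carrying a rising tone, and measure the onset-to-offset rise. The file name and note boundaries are hypothetical.

import parselmouth

snd = parselmouth.Sound("cantonese_song_take1.wav")  # hypothetical file
pitch = snd.to_pitch()
f0 = pitch.selected_array["frequency"]   # Hz; 0 where unvoiced
times = pitch.xs()

# Window down to one sung note that carries a rising lexical tone
# (hypothetical boundaries from a score-aligned annotation).
note_start, note_end = 2.10, 2.65
voiced = (times >= note_start) & (times <= note_end) & (f0 > 0)
f0_note = f0[voiced]

# A positive rise within a musically flat note indicates the singer is
# realising the dynamic portion of the rising tone on top of the melody.
rise_hz = f0_note[-1] - f0_note[0]
print(f"F0 rise within note: {rise_hz:.1f} Hz")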


Canadian Acoustics | 2016

Schlieren study of external airflow during the production of nasal and oral vowels in French

Jeffrey Rowell; Masaki Noguchi; B. May Bernhardt; Anthony T. Herdman; Bryan Gick; Murray Schellenberg

Collaboration


Dive into Murray Schellenberg's collaborations.

Top Co-Authors (all University of British Columbia):

Bryan Gick
Megan Keough
Avery Ozburn
Samuel Akinbo
Alyson Budd
Elise Kedersha McClay
Heather Bliss
Lauretta Cheng
Masaki Noguchi
Michael D. Schwan