Alexander L. Francis
Purdue University
Publications
Featured research published by Alexander L. Francis.
Journal of Phonetics | 2008
Alexander L. Francis; Valter Ciocca; Lian Ma; Kimberly M. Fenn
Two groups of listeners, one of native speakers of a tone language (Mandarin Chinese) and one of native speakers of a non-tone language (English), were trained to recognize Cantonese lexical tones. Performance before and after training was measured using closed response-set identification and pairwise difference rating tasks. Difference ratings were submitted to multidimensional scaling (MDS) analyses to investigate training-related changes in listeners’ perceptual space. Both groups showed comparable initial performance and significant improvement in tone identification following training. However, the two groups differed in terms of the tones they found most difficult to identify and in terms of the tones that were learned best. Differences between the two groups’ training-induced changes in identification (confusions) and perceptual spaces demonstrated that listeners’ native language experience with intonational as well as tone categories affects the perception and acquisition of non-native suprasegmental categories.
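For readers unfamiliar with the analysis, here is a minimal sketch of how pairwise difference ratings can be submitted to MDS to recover a low-dimensional perceptual space. The dissimilarity matrix and tone labels are hypothetical placeholders, not the study's data.

```python
# Minimal sketch: recovering a 2-D perceptual tone space from pairwise
# difference ratings via multidimensional scaling (MDS).
# The dissimilarity values below are made-up placeholders.
import numpy as np
from sklearn.manifold import MDS

tones = ["T1", "T2", "T3", "T4", "T5", "T6"]  # six Cantonese tones

# Symmetric matrix of averaged pairwise difference ratings
# (larger = more different); random placeholder values.
rng = np.random.default_rng(0)
upper = rng.uniform(1, 7, size=(6, 6))
dissim = np.triu(upper, k=1)
dissim = dissim + dissim.T  # symmetrize; diagonal stays zero

mds = MDS(n_components=2, dissimilarity="precomputed", random_state=0)
coords = mds.fit_transform(dissim)

for tone, (x, y) in zip(tones, coords):
    print(f"{tone}: ({x:+.2f}, {y:+.2f})")
print("stress:", round(mds.stress_, 3))
```

Comparing the fitted coordinates before and after training (one MDS solution per session) is one way to visualize the training-related changes in perceptual space described above.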
Journal of the Acoustical Society of America | 2002
Valter Ciocca; Alexander L. Francis; Rani Aisha; Lena Wong
This study investigated whether cochlear implant users can identify Cantonese lexical tones, which differ primarily in their F0 pattern. Seventeen early-deafened children (age = 4 years, 6 months to 8 years, 11 months; postoperative period = 11–41 months) took part in the study. Sixteen children were fitted with the Nucleus 24 cochlear implant system; one child was fitted with a Nucleus 22 implant. Participants completed a two-alternative forced-choice (2AFC) picture identification task in which they identified one of the six contrastive Cantonese tones produced on the monosyllabic target word /ji/. Each target stimulus represented a concrete object and was presented within a carrier phrase in sentence-medial position. Group performance was significantly above chance for three contrasts. However, the cochlear implant listeners performed much worse than a 6½-year-old, moderately hearing-impaired control listener who was tested on the same task. These findings suggest that this group of cochlear implant users had great difficulty in extracting the pitch information needed to accurately identify Cantonese lexical tones.
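As a sketch of how "significantly above chance" can be assessed for a 2AFC tone contrast, a binomial test against p = 0.5 is the standard approach; the trial counts below are illustrative, not the study's data.

```python
# Sketch: testing whether 2AFC tone identification exceeds chance (p = 0.5).
# Counts are illustrative placeholders, not data from the study.
from scipy.stats import binomtest

correct, trials = 41, 60  # e.g., pooled responses for one tone contrast
result = binomtest(correct, trials, p=0.5, alternative="greater")
print(f"{correct}/{trials} correct, p = {result.pvalue:.4f}")
```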
Attention Perception & Psychophysics | 2003
Alexander L. Francis; Valter Ciocca; Brenda Kei Chit Ng
Identification and discrimination of lexical tones in Cantonese were compared in the context of a traditional categorical perception paradigm. Three lexical tone continua were used: one ranging from low level to high level, one from high rising to high level, and one from low falling to high rising. Identification data showed steep slopes at category boundaries, suggesting that lexical tones are perceived categorically. In contrast, discrimination curves generally showed much weaker evidence for categorical perception. Subsequent investigation showed that the presence of a tonal context played a strong role in the identification of target tones and less of a role in discrimination. The results are consistent with the hypothesis that tonal category boundaries are determined by a combination of regions of natural auditory sensitivity and the influence of linguistic experience.
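A minimal sketch of the identification side of a categorical perception analysis: fit a logistic function along the continuum to estimate the category boundary and the slope at that boundary. The step numbers and response proportions here are hypothetical.

```python
# Sketch: fitting a logistic function to identification proportions along
# a lexical tone continuum to estimate category boundary and slope.
# Proportions below are hypothetical placeholders.
import numpy as np
from scipy.optimize import curve_fit

steps = np.arange(1, 10)  # 9-step continuum (hypothetical)
p_high = np.array([0.02, 0.05, 0.08, 0.15, 0.45, 0.85, 0.93, 0.97, 0.99])

def logistic(x, boundary, slope):
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

(boundary, slope), _ = curve_fit(logistic, steps, p_high, p0=[5.0, 1.0])
print(f"category boundary at step {boundary:.2f}, slope {slope:.2f}")
```

A steep fitted slope corresponds to the sharp identification boundaries reported above; categorical perception additionally predicts a discrimination peak at that boundary, which the discrimination curves in this study only weakly showed.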
Attention Perception & Psychophysics | 2000
Alexander L. Francis; Kate Baldwin; Howard C. Nusbaum
Learning new phonetic categories in a second language may be thought of in terms of learning to focus one’s attention on those parts of the acoustic-phonetic structure of speech that are phonologically relevant in any given context. As yet, however, no study has demonstrated directly that training can shift listeners’ attention between acoustic cues given feedback about the linguistic phonetic category alone. In this paper we discuss the results of a training study in which subjects learned to shift their attention from one acoustic cue to another using only category-level identification as feedback. Results demonstrate that training redirects listeners’ attention to acoustic cues and that this shift of attention generalizes to novel (untrained) phonetic contexts.
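One common way to quantify the attention-to-cues shift described here is to estimate perceptual cue weights as standardized logistic-regression coefficients before and after training. The two cues and the simulated responses below are hypothetical illustrations, not the study's stimuli.

```python
# Sketch: estimating perceptual cue weights from identification responses
# with logistic regression; a larger |coefficient| indicates heavier
# reliance on that cue. Cues and responses are simulated placeholders.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(1)
cue_a = rng.normal(size=500)  # e.g., a spectral cue (hypothetical)
cue_b = rng.normal(size=500)  # e.g., a durational cue (hypothetical)

# Simulated post-training listener: relies mainly on cue A.
logit = 2.5 * cue_a + 0.4 * cue_b
responses = rng.random(500) < 1 / (1 + np.exp(-logit))

X = StandardScaler().fit_transform(np.column_stack([cue_a, cue_b]))
weights = LogisticRegression().fit(X, responses).coef_[0]
print(f"cue A weight: {weights[0]:.2f}, cue B weight: {weights[1]:.2f}")
```

Fitting the same model to pre- and post-training responses would show the weight migrating from one cue to the other, which is the attentional shift the training targeted.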
Language and Cognitive Processes | 2008
Yunxia Tong; Alexander L. Francis; Jackson T. Gandour
The aim of this study was to examine processing interactions between segmental (consonant, vowel) and suprasegmental (tone) dimensions of Mandarin Chinese. Using a speeded classification paradigm, processing interactions were examined between each pair of dimensions. Listeners were asked to attend to one dimension while ignoring the variation along another. Asymmetric interference effects were observed between segmental and suprasegmental dimensions, with segmental dimensions interfering more with tone classification than the reverse. Among the three dimensions, vowels exerted greater interference on consonants and tones than vice versa. Comparisons between each pair of dimensions revealed greater integrality between tone and vowel than between tone and consonant. Findings suggest that the direction and degree of interference between segmental and suprasegmental dimensions in spoken word recognition reflect differences in acoustic properties as well as other factors of an informational nature.
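In a Garner-style speeded classification design like this one, interference is typically quantified as the reaction-time cost of the orthogonal (filtering) condition relative to baseline, computed separately for each attended dimension. A sketch with made-up RTs:

```python
# Sketch: Garner interference = mean RT in the orthogonal (filtering)
# condition minus mean RT in the baseline condition, per attended
# dimension. RTs (ms) are made-up placeholders.
import numpy as np

rt = {
    ("tone", "baseline"): np.array([512, 498, 530, 505]),
    ("tone", "orthogonal"): np.array([590, 575, 610, 588]),
    ("vowel", "baseline"): np.array([470, 465, 480, 472]),
    ("vowel", "orthogonal"): np.array([495, 488, 502, 490]),
}

for dim in ("tone", "vowel"):
    cost = rt[(dim, "orthogonal")].mean() - rt[(dim, "baseline")].mean()
    print(f"{dim}: Garner interference = {cost:.1f} ms")
```

An asymmetry in these costs (here, a larger cost when attending to tone) is the kind of pattern the study reports, with segmental variation interfering more with tone classification than the reverse.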
Journal of the Acoustical Society of America | 2003
Alexander L. Francis; Valter Ciocca; Jojo Man Ching Yu
Five commonly used methods for determining the onset of voicing of syllable-initial stop consonants were compared. The speech and glottal activity of 16 native speakers of Cantonese with normal voice quality were investigated during the production of consonant-vowel (CV) syllables in Cantonese. Syllables consisted of the initial consonants /pʰ/, /tʰ/, /kʰ/, /p/, /t/, and /k/ followed by the vowel /a/. All syllables had a high level tone and were all real words in Cantonese. Measurements of voicing onset were made based on the onset of periodicity in the acoustic waveform, and on spectrographic measures of the onset of a voicing bar (f0), the onset of the first formant (F1), second formant (F2), and third formant (F3). These measurements were then compared against the onset of glottal opening as determined by electroglottography. Both the accuracy and the variability of each measure were calculated. Results suggest that the presence of aspiration in a syllable decreased the accuracy and increased the variability of spectrogram-based measurements, but did not strongly affect measurements made from the acoustic waveform. Overall, the acoustic waveform provided the most accurate estimate of voicing onset; measurements made from the acoustic waveform were also the least variable of the five measures. These results can be explained as a consequence of differences in spectral tilt of the voicing source in breathy versus modal phonation.
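A minimal sketch of the waveform-based measure: scan short frames for the onset of periodicity using the normalized autocorrelation. The frame sizes, F0 range, and threshold are illustrative choices, not the study's measurement protocol.

```python
# Sketch: estimating voicing onset as the first short frame whose
# normalized autocorrelation peak (within a plausible F0 range)
# exceeds a threshold. All settings are illustrative.
import numpy as np

def voicing_onset(signal, fs, frame_ms=25, hop_ms=5,
                  f0_min=75, f0_max=400, threshold=0.5):
    frame = int(fs * frame_ms / 1000)
    hop = int(fs * hop_ms / 1000)
    lag_lo, lag_hi = int(fs / f0_max), int(fs / f0_min)
    for start in range(0, len(signal) - frame, hop):
        x = signal[start:start + frame] - signal[start:start + frame].mean()
        energy = np.dot(x, x)
        if energy == 0:
            continue  # silent frame
        ac = np.correlate(x, x, mode="full")[frame - 1:] / energy
        if ac[lag_lo:lag_hi].max() > threshold:
            return start / fs  # onset time in seconds
    return None

# Usage (hypothetical mono signal sampled at 16 kHz):
# onset_s = voicing_onset(samples, 16000)
```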
Attention Perception & Psychophysics | 2009
Alexander L. Francis; Howard C. Nusbaum
Understanding low-intelligibility speech is effortful. In three experiments, we examined the effects of intelligibility on working memory (WM) demands imposed by perception of synthetic speech. In all three experiments, a primary speeded word recognition task was paired with a secondary WM-load task designed to vary the availability of WM capacity during speech perception. Speech intelligibility was varied either by training listeners to use available acoustic cues in a more diagnostic manner (as in Experiment 1) or by providing listeners with more informative acoustic cues (i.e., better speech quality, as in Experiments 2 and 3). In the first experiment, training significantly improved intelligibility and recognition speed; increasing WM load significantly slowed recognition. A significant interaction between training and load indicated that the benefit of training on recognition speed was observed only under low memory load. In subsequent experiments, listeners received no training; intelligibility was manipulated by changing synthesizers. Improving intelligibility without training improved recognition accuracy, and increasing memory load still decreased it, but more intelligible speech did not produce more efficient use of available WM capacity. This suggests that perceptual learning modifies the way available capacity is used, perhaps by increasing the use of more phonetically informative features and/or by decreasing use of less informative ones.
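A sketch of the key analysis implied here: testing the training × memory-load interaction on recognition speed with a two-way ANOVA. The data frame is simulated and the factor levels are placeholders.

```python
# Sketch: testing a training x working-memory-load interaction on
# recognition RT with a two-way ANOVA. Data are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf
from statsmodels.stats.anova import anova_lm

rng = np.random.default_rng(3)
rows = []
for training in ("pre", "post"):
    for load in ("low", "high"):
        # Simulated pattern: training speeds RT only under low load,
        # mirroring the interaction reported in Experiment 1.
        mean = 900 - (150 if (training, load) == ("post", "low") else 0)
        for rt in rng.normal(mean, 80, size=40):
            rows.append({"training": training, "load": load, "rt": rt})

df = pd.DataFrame(rows)
model = smf.ols("rt ~ C(training) * C(load)", data=df).fit()
print(anova_lm(model, typ=2))
```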
Brain Research | 2006
Natalya Kaganovich; Alexander L. Francis; Robert D. Melara
This study combined behavioral and electrophysiological measurements to investigate interactions during speech perception between native phonemes and talker voice. In a Garner selective attention task, participants either classified each sound as one of two native vowels ([ɛ] and [æ]), ignoring the talker, or as one of two male talkers, ignoring the vowel. The dimension to be ignored was held constant in baseline tasks and changed randomly across trials in filtering tasks. Irrelevant variation in talker produced as much filtering interference (i.e., poorer performance in filtering relative to baseline) in classifying vowels as vice versa, suggesting that the two dimensions strongly interact. Event-related potentials (ERPs) were recorded to identify the processing origin of the interference: an early disruption in extracting dimension-specific information or a later disruption in selecting appropriate responses. Processing in the filtering task was characterized by a sustained negativity starting 100 ms after stimulus onset and peaking 200 ms later. The early onset of this negativity suggests that interference originates in the cognitive effort required by listeners to extract dimension-specific information, a process that precedes response selection. In agreement with these findings, our results revealed numerous dimension-specific effects, most prominently in the filtering tasks.
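As a sketch of how such a sustained negativity is typically quantified, one averages trials per condition and takes the mean amplitude of the filtering-minus-baseline difference wave in a fixed time window. The epoch arrays below are simulated placeholders, not the recorded data.

```python
# Sketch: quantifying a sustained negativity as the mean amplitude of the
# filtering-minus-baseline ERP difference wave in a fixed window.
# Epochs are simulated placeholders (trials x timepoints).
import numpy as np

fs = 500  # EEG sampling rate (Hz), illustrative
times = np.arange(-0.1, 0.6, 1 / fs)
rng = np.random.default_rng(2)
baseline_epochs = rng.normal(0, 2, size=(100, times.size))
filtering_epochs = rng.normal(0, 2, size=(100, times.size))
filtering_epochs[:, (times > 0.1) & (times < 0.5)] -= 1.5  # injected effect

difference = filtering_epochs.mean(axis=0) - baseline_epochs.mean(axis=0)
window = (times >= 0.1) & (times <= 0.3)
print(f"mean difference amplitude 100-300 ms: {difference[window].mean():.2f} uV")
```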
Speech Communication | 2006
Felicia Roberts; Alexander L. Francis; Melanie Morgan
The forms, functions, and organization of sounds and utterances are generally the focus of speech communication research; little is known, however, about how the silence between speaker turns shades the meaning of the surrounding talk. We use an experimental protocol to test whether listeners’ perception of trouble in interaction (e.g., disagreement or unwillingness) varies when prosodic cues are manipulated in the context of two speech acts (requests and assessments). The prosodic cues investigated were inter-turn silence and the duration, absolute pitch, and pitch contour of affirmative response tokens (“yeah” and “sure”) that followed the inter-turn silence. Study participants evaluated spoken dialogues simulating telephone calls between friends in which the length of silence following a request/assessment (i.e., the inter-turn silence) was manipulated in Praat, as were prosodic features of the responses. Results indicate that with each incremental increase in pause duration (0, 600, 1200 ms) listeners perceived increasingly less willingness to comply with requests and increasingly weaker agreement with assessments. Inter-turn silence and duration of the response token proved to be stronger cues to unwillingness and disagreement than did the response token’s pitch characteristics. However, listeners tended to perceive response token duration as a cue to “trouble” when inter-turn silence cues were, apparently, ambiguous (less than 1 s).
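A minimal sketch of the kind of pause-duration manipulation described, using the praat-parselmouth Python interface rather than a Praat script: splice a silent gap of 0, 600, or 1200 ms between two recorded turns. The file names are hypothetical, and the sketch assumes mono recordings at a shared sampling rate.

```python
# Sketch: inserting a silent inter-turn gap of a chosen duration between
# two recorded turns, in the spirit of the Praat manipulation described.
# Uses praat-parselmouth; file names are hypothetical placeholders.
import numpy as np
import parselmouth

def insert_gap(turn1_wav, turn2_wav, gap_ms, out_wav):
    s1 = parselmouth.Sound(turn1_wav)
    s2 = parselmouth.Sound(turn2_wav)
    fs = s1.sampling_frequency
    assert s2.sampling_frequency == fs  # assumes matching sample rates
    gap = np.zeros((1, int(fs * gap_ms / 1000)))  # assumes mono audio
    combined = np.concatenate([s1.values, gap, s2.values], axis=1)
    parselmouth.Sound(combined, sampling_frequency=fs).save(out_wav, "WAV")

for gap_ms in (0, 600, 1200):
    insert_gap("request.wav", "yeah.wav", gap_ms, f"dialogue_{gap_ms}ms.wav")
```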
Attention Perception & Psychophysics | 2010
Alexander L. Francis
Perception of speech in competing speech is facilitated by spatial separation of the target and distracting speech, but this benefit may arise at either a perceptual or a cognitive level of processing. Load theory predicts different effects of perceptual and cognitive (working memory) load on selective attention in flanker task contexts, suggesting that this paradigm may be used to distinguish levels of interference. Two experiments examined interference from competing speech during a word recognition task under different perceptual and working memory loads in a dual-task paradigm. Listeners identified words produced by a talker of one gender while ignoring a talker of the other gender. Perceptual load was manipulated using a nonspeech response cue, with response conditional upon either one or two acoustic features (pitch and modulation). Memory load was manipulated with a secondary task consisting of one or six visually presented digits. In the first experiment, the target and distractor were presented at different virtual locations (0° and 90°, respectively), whereas in the second, all the stimuli were presented from the same apparent location. Results suggest that spatial cues improve resistance to distraction in part by reducing working memory demand.