Lawrence Brancazio
Southern Connecticut State University
Publications
Featured research published by Lawrence Brancazio.
Language and Speech | 2000
Carol A. Fowler; Lawrence Brancazio
We explored the variation in the resistance that lingual and nonlingual consonants exhibit to coarticulation by following vowels in the schwa+CV disyllables of two native speakers of English. Generally, lingual consonants other than /g/ were more resistant to coarticulation than the labial consonants /b/ and /v/. Coarticulation resistance in the consonant also affected articulatory evidence for trans-consonantal vowel-to-vowel coarticulation, but did not show consistent acoustic effects. As for effects of coarticulation resistance in the following vowel, articulatory and acoustic effects were quite large at consonant release but much weaker farther into the following stressed vowel. Correlations between coarticulation resistance effects at consonant release and locus equation slopes were highly significant, consistent with the view that variation in coarticulation resistance explains differences among consonants in locus equation slopes.
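The locus equation referenced above is a linear regression of second-formant (F2) frequency measured at vowel onset against F2 measured at vowel midpoint, computed across vowel contexts for a given consonant; shallower slopes indicate greater coarticulation resistance. A minimal Python sketch of that computation, using made-up F2 values rather than data from the study:

import numpy as np

# Hypothetical F2 measurements (Hz) for one consonant across five
# vowel contexts: F2 at vowel onset and at vowel midpoint.
f2_onset = np.array([1650.0, 1500.0, 1400.0, 1250.0, 1100.0])
f2_mid = np.array([2200.0, 1800.0, 1500.0, 1100.0, 800.0])

# Locus equation: regress F2 at vowel onset on F2 at vowel midpoint.
# A slope near 1 means the following vowel strongly colors the consonant
# (low coarticulation resistance); a slope near 0 means the consonant
# largely overrides the vowel (high resistance).
slope, intercept = np.polyfit(f2_mid, f2_onset, 1)
print(f"locus equation: F2_onset = {slope:.2f} * F2_mid + {intercept:.0f} Hz")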
Journal of Experimental Psychology: Human Perception and Performance | 2004
Lawrence Brancazio
Phoneme identification with audiovisually discrepant stimuli is influenced by information in the visual signal (the McGurk effect). Additionally, lexical status affects identification of auditorily presented phonemes. The present study tested for lexical influences on the McGurk effect. Participants identified phonemes in audiovisually discrepant stimuli in which lexical status of the auditory component and of a visually influenced percept was independently varied. Visually influenced (McGurk) responses were more frequent when they formed a word and when the auditory signal was a nonword (Experiment 1). Lexical effects were larger for slow than for fast responses (Experiment 2), as with auditory speech, and were replicated with stimuli matched on physical properties (Experiment 3). These results are consistent with models in which lexical processing of speech is modality independent.
Child Development | 2011
Julia R. Irwin; Lauren A. Tornatore; Lawrence Brancazio; D. H. Whalen
This study used eye-tracking methodology to assess audiovisual speech perception in 26 children ranging in age from 5 to 15 years, half with autism spectrum disorders (ASD) and half with typical development. Given the characteristic reduction in gaze to the faces of others in children with ASD, it was hypothesized that they would show reduced influence of visual information on heard speech. Responses were compared on a set of auditory, visual, and audiovisual speech perception tasks. Even when fixated on the face of the speaker, children with ASD were less visually influenced than typically developing controls. This indicates fundamental differences in the processing of audiovisual speech in children with ASD, which may contribute to their language and communication impairments.
Attention Perception & Psychophysics | 2005
Lawrence Brancazio; Joanne L. Miller
The McGurk effect, where an incongruent visual syllable influences identification of an auditory syllable, does not always occur, suggesting that perceivers sometimes fail to use relevant visual phonetic information. We tested whether another visual phonetic effect, which involves the influence of visual speaking rate on perceived voicing (Green & Miller, 1985), would occur in instances when the McGurk effect does not. In Experiment 1, we established this visual rate effect using auditory and visual stimuli matching in place of articulation, finding a shift in the voicing boundary along an auditory voice-onset-time continuum with fast versus slow visual speech tokens. In Experiment 2, we used auditory and visual stimuli differing in place of articulation and found a shift in the voicing boundary due to visual rate when the McGurk effect occurred and, more critically, when it did not. The latter finding indicates that phonetically relevant visual information is used in speech perception even when the McGurk effect does not occur, suggesting that the incidence of the McGurk effect underestimates the extent of audiovisual integration.
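Voicing-boundary shifts of the kind described above are conventionally estimated by fitting a psychometric function to identification responses along the voice-onset-time (VOT) continuum and taking its 50% crossover as the boundary. A hedged Python sketch of that analysis; the response proportions are invented for illustration, and the logistic fit via scipy is a common convention, not necessarily the authors' procedure:

import numpy as np
from scipy.optimize import curve_fit

def logistic(vot, boundary, slope):
    """Proportion of voiceless responses as a function of VOT (ms)."""
    return 1.0 / (1.0 + np.exp(-slope * (vot - boundary)))

# Hypothetical identification data: VOT steps (ms) and the proportion of
# voiceless responses at each step, for fast vs. slow visual speech.
vot_ms = np.array([5.0, 15.0, 25.0, 35.0, 45.0, 55.0])
p_fast = np.array([0.02, 0.10, 0.45, 0.85, 0.97, 1.00])
p_slow = np.array([0.01, 0.05, 0.20, 0.60, 0.90, 0.99])

for label, props in [("fast", p_fast), ("slow", p_slow)]:
    # The fitted `boundary` parameter is the 50% crossover, i.e. the
    # voicing boundary; a rate effect appears as a shift in this value.
    (boundary, slope), _ = curve_fit(logistic, vot_ms, props, p0=[30.0, 0.3])
    print(f"{label} visual rate: voicing boundary at about {boundary:.1f} ms VOT")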
Frontiers in Psychology | 2014
Julia R. Irwin; Lawrence Brancazio
Using eye-tracking methodology, gaze to a speaking face was compared in a group of children with autism spectrum disorders (ASD) and a group with typical development (TD). Patterns of gaze were observed under three conditions: audiovisual (AV) speech in auditory noise, visual-only speech, and an AV non-face, non-speech control. Children with ASD looked less to the face of the speaker and fixated less on the speaker's mouth than TD controls. No differences in gaze were observed for the non-face, non-speech control task. Since the mouth holds much of the articulatory information available on the face, these findings suggest that children with ASD may have reduced access to critical linguistic information. This reduced access to visible articulatory information could be a contributor to the communication and language problems exhibited by children with ASD.
Language and Speech | 2006
Lawrence Brancazio; Catherine T. Best; Carol A. Fowler
We report four experiments designed to determine whether visual information affects judgments of acoustically specified nonspeech events as well as speech events (the "McGurk effect"). Previous findings have shown only weak McGurk effects for nonspeech stimuli, whereas strong effects are found for consonants. We used click sounds that serve as consonants in some African languages, but that are perceived as nonspeech by American English listeners. We found a significant McGurk effect for clicks presented in isolation that was much smaller than that found for stop-consonant-vowel syllables. In subsequent experiments, we found strong McGurk effects, comparable to those found for English syllables, for click-vowel syllables, and weak effects, comparable to those found for isolated clicks, for excised release bursts of stop consonants presented in isolation. We interpret these findings as evidence that the potential contributions of speech-specific processes to the McGurk effect are limited, and discuss the results in relation to current explanations for the McGurk effect.
Attention Perception & Psychophysics | 2003
Lawrence Brancazio; Joanne L. Miller; Matthew A. Paré
Previous work has demonstrated that the graded internal structure of phonetic categories is sensitive to a variety of contextual factors. One such factor is place of articulation: The best exemplars of voiceless stop consonants along auditory bilabial and velar voice onset time (VOT) continua occur over different ranges of VOTs (Volaitis & Miller, 1992). In the present study, we exploited the McGurk effect to examine whether visual information for place of articulation also shifts the best-exemplar range for voiceless consonants, following Green and Kuhl’s (1989) demonstration of effects of visual place of articulation on the location of voicing boundaries. In Experiment 1, we established that /p/ and /t/ have different best-exemplar ranges along auditory bilabial and alveolar VOT continua. We then found, in Experiment 2, a similar shift in the best-exemplar range for /t/ relative to that for /p/ when there was a change in visual place of articulation, with auditory place of articulation held constant. These findings indicate that the perceptual mechanisms that determine internal phonetic category structure are sensitive to visual, as well as to auditory, information.
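Best-exemplar ranges of the kind measured by Volaitis and Miller (1992) are derived from goodness ratings collected at each step of a VOT continuum: the range covers the VOT values rated close to the peak. A toy Python illustration; the ratings and the 90%-of-peak criterion are assumptions for the example, not values from the study:

import numpy as np

# Hypothetical mean goodness ratings (1-7 scale) at each VOT step (ms).
vot_ms = np.array([20, 40, 60, 80, 100, 120, 140])
ratings = np.array([2.1, 4.8, 6.3, 6.5, 6.1, 4.9, 3.0])

# Treat all VOT steps rated within 90% of the peak rating as the
# best-exemplar range (the criterion here is an arbitrary choice).
criterion = 0.9 * ratings.max()
best_range = vot_ms[ratings >= criterion]
print(f"best-exemplar range: {best_range.min()}-{best_range.max()} ms VOT")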
Clinical Linguistics & Phonetics | 2015
Julia R. Irwin; Jonathan L. Preston; Lawrence Brancazio; Michael D'angelo; Jacqueline Turcios
Perception of spoken language requires attention to acoustic as well as visible phonetic information. This article reviews the known differences in audiovisual speech perception in children with autism spectrum disorders (ASD) and specifies the need for interventions that address this construct. Elements of an audiovisual training program are described. This researcher-developed program, delivered via an iPad app, presents natural speech in the context of increasing noise, but supported with a speaking face. Children are cued to attend to visible articulatory information to assist in perception of the spoken words. Data from four children with ASD, ages 8–10, are presented showing that the children improved their performance on an untrained auditory speech-in-noise task.
Journal of the Acoustical Society of America | 1999
Lawrence Brancazio; Joanne L. Miller; Matthew A. Paré
This research investigated effects of the visually specified place of articulation on perceived voicing. It is known that the /b/–/p/ boundary along a voice‐onset‐time (VOT) continuum falls at a shorter VOT than the /d/–/t/ boundary. Green and Kuhl [Percept. Psychophys. 45, 34–42 (1989)] demonstrated that tokens from an auditory /ibi/–/ipi/ continuum dubbed with an /igi/ video and perceived as /idi/ and /iti/ due to the ‘‘McGurk effect’’ had a voicing boundary at a longer VOT than when presented only auditorily and perceived as /ibi/ and /ipi/. We extended this finding in two directions. First, using an auditory /bi/–/pi/ series with a video /ti/ for which the McGurk effect did not always occur, we compared visually influenced (/d,t/) and visually uninfluenced (/b,p/) responses for these audiovisually discrepant stimuli. The /d/–/t/ boundary was at a longer VOT than the /b/–/p/ boundary, affirming the boundary shift’s perceptual, not stimulus‐based, origin. Second, we tested the generalizability of Green ...
Journal of the Acoustical Society of America | 1997
Lawrence Brancazio
Much research has been devoted to the study of whether lexical knowledge affects lower‐level phonetic processing. For example, in the Ganong effect (Ganong, JEP:HPP 6, 110–125), the lexical status of the end points of a /b/–/p/ voicing continuum affects the point at which a shift from /b/ to /p/ responses is made (e.g., more /b/ responses in a /bif/–/pif/ continuum than in a /bis/–/pis/ one). This study addresses whether lexical effects influence the perception of discrepant audiovisual stimuli (the McGurk effect). Pairs of words and nonwords were constructed that differed only in the place of articulation of the initial phoneme; in each pair either both members were words, only one was a word, or neither was a word. The stimuli were presented audiovisually, with the audio from one pair member and the video from the other; subjects identified the initial phoneme. Analyses of the proportion of video responses (McGurk responses) indicated significant effects of the lexical status of the auditorily‐presented to...