Publication


Featured research published by Julie M. Brown.


Attention, Perception, & Psychophysics | 2000

Perceptual parsing of acoustic consequences of velum lowering from information for vowels

Carol A. Fowler; Julie M. Brown

Three experiments were designed to investigate how listeners to coarticulated speech use the acoustic speech signal during a vowel to extract information about a forthcoming oral or nasal consonant. A first experiment showed that listeners use evidence of nasalization in a vowel as information for a forthcoming nasal consonant. A second and third experiment attempted to distinguish two accounts of their ability to do so. According to one account, listeners hear nasalization in the vowel as such and use it to predict that a forthcoming consonant is nasal. According to a second, they perceive speech gestures and hear nasalization in the acoustic domain of a vowel as the onset of a nasal consonant. Therefore, they parse nasal information from the vowel and hear the vowel as oral. In Experiment 2, evidence in favor of the parsing hypothesis was found. Experiment 3 showed, however, that parsing is incomplete.


Attention, Perception, & Psychophysics | 1997

Intrinsic f0 differences in spoken and sung vowels and their perception by listeners

Carol A. Fowler; Julie M. Brown

We explore how listeners perceive distinct pieces of phonetic information that are conveyed in parallel by the fundamental frequency (f0) contour of spoken and sung vowels. In a first experiment, we measured differences in f0 of /i/ and /a/ vowels spoken and sung by unselected undergraduate participants. Differences in “intrinsic f0” (with f0 of /i/ higher than that of /a/) were present in spoken and sung vowels; however, differences in sung vowels were smaller than those in spoken vowels. Four experiments tested a hypothesis that listeners would not hear the intrinsic f0 differences as differences in pitch on the vowel, because they provide information, instead, for production of a closed or open vowel. The experiments provide clear evidence of “parsing” of intrinsic f0 from the f0 that contributes to perceived vowel pitch. However, only some conditions led to an estimate of the magnitude of parsing that closely matched the magnitude of produced intrinsic f0 differences.


Journal of the Acoustical Society of America | 1998

Visual influences on auditory distance perception

Julie M. Brown; Krista L. Anderson; Carol A. Fowler; Claudia Carello

Much of the work on sound localization has focused on sound sources in the horizontal plane. This study looks at the auditory perception of distance using a reaching task [L. D. Rosenblum, A. P. Wuestefeld, and K. L. Anderson, Ecological Psych. 8, 1–24 (1996)]. In this task, blindfolded participants judged the reachability of a rattle based on a straight right arm reach. In order to investigate visual influences on auditory distance perception, in another condition, participants judged the reachability of an “auditory” rattle while simultaneously viewing a “visual” rattle that did not produce sound. The “visual” rattle was placed 8 cm in front of the “auditory” rattle or 8 cm behind it. Compared to an auditory-alone condition, participants were more likely to judge that the rattle was within reach when the visual stimulus was closer to the participant than the auditory stimulus. These results show that vision influences the perception of sound source distance.


Journal of the Acoustical Society of America | 1998

The effect of consonant context on vowel goodness rating

Alice Faber; Julie M. Brown

The effect on within‐category discrimination of supposed phonetic prototypes for the sounds of a given language depends on the assumption that the number of prototypes used by speakers of that language is of the same order of magnitude as the number of phonemes in that language. In the present experiment, listeners provided goodness ratings for three sets of synthetic tokens varying in F2 and F3. One set consisted of isolated /i/ tokens while the other two contained appropriate transitions and release bursts for BEEP and GEEK, respectively. Goodness ratings depended not only on a token’s position in the grid but also on its phonetic context (zero vs b——p vs g——k), reflecting the well‐known coarticulatory effects of consonant context on vowel production. The listener judgments thus reflect the relative appropriateness of a given set of formant values for a consonantal context and not an abstract phonological target underlying all three contexts. [Work supported by NIH Grant No. HD‐01994.]


Journal of the Acoustical Society of America | 1996

Voice effects in implicit memory tasks.

Julie M. Brown; Carol A. Fowler; Jay Rueckl

Spoken words are easier to identify if they have been heard recently. This phenomenon, known as repetition priming, can be used to investigate the processes underlying word recognition. Using an implicit memory paradigm, this study looked at the effect of changing the voice of the speaker on repetition priming. Voice effects occur if repetition priming is reduced when a spoken word has been heard in different voices at study and test. Voice effects have been found in implicit memory tasks, such as word identification and word‐stem completion [B. A. Church and D. L. Schacter, JEP:LMC 20, 521–533 (1994); S. D. Goldinger, doctoral dissertation, 1992]. This study investigated if these results could be generalized to other implicit memory tasks—specifically, lexical decision, naming, and a new task, auditory fragment completion. No voice effects were found with any of these three tasks. These results are problematic for current accounts of word recognition and imply that as yet unidentified factors control the...


Journal of the Acoustical Society of America | 2006

Audiovisual asymmetries in speech and nonspeech

Julie M. Brown

The individual contribution of the various perceptual systems to multimodal perception is typically examined by placing information from the various perceptual systems in conflict and measuring how much each contributes to the resulting perception. When one perceptual system dominates perception more than another, it is called intersensory bias. The present research examines intersensory bias using a tapping task in which participants were asked to tap to the frequency of either an auditory or visual stimulus. The stimuli that were used were either nonspeech stimuli, such as a tone and a flashing ellipse, or speech, such as a visual or auditory syllable. In past experiments that have used nonspeech stimuli, audition has been shown to dominate visual temporal perception. Based on the influence of visible speech on auditory speech perception found in the McGurk effect, it was thought that vision might influence temporal perception more when using speech stimuli than nonspeech stimuli. It was found that the ...


Journal of the Acoustical Society of America | 2002

Effects of 3‐D projection on audiovisual speech perception

D. H. Whalen; Richard Gans; Carol A. Fowler; Julie M. Brown

Visual information about speech influences speech perception, leading to better perception in noise and to illusions such as the McGurk effect. Here, the question was addressed of whether visual influences would be greater with a three‐dimensional visual speaker [the patented Life Imaging Projection System (L.I.P.S.)] than the two‐dimensional one. Perception in noise was tested with 54 monosyllabic English words of moderate frequency. The McGurk effect was tested with acoustic words having bilabial onset consonants and visual nonwords with alveolar onsets. Visual information improved perception in noise, as found previously; the 3‐D version (a videotape projected onto a life cast of the speaker’s head) elicited better performance than the 2‐D. However, the 3‐D version elicited fewer McGurk responses. Since the presence of noise increases visual fixation on the lips [Vatikiotis‐Bateson et al., Percept. Psychophys. 60, 926–940 (1998)], it is possible that more direct information about mouth shape was availa...


Journal of the Acoustical Society of America | 1997

Congruence of articulatory and acoustic variability

Alice Faber; Julie M. Brown

Johnson et al. [J. Acoust. Soc. Am. 94, 701–714 (1993)] suggest, on the basis of observed inter‐speaker variability in discriminant analyses of articulatory measures, that speakers utilize acoustically defined targets in speech production. The present paper compares inter‐speaker variability in simultaneously recorded acoustic and articulatory data (s–t words) from five New England speakers. The articulatory data were x and y coordinates of coils on the tongue, lips, and jaw, transduced by the Haskins Laboratories EMMA system and recorded at three locations in the target vowel; acoustic data were F1, F2, and F3 measures at the same temporal locations. Discriminant analyses of the articulatory and acoustic data sets reveal congruent patterns of inter‐speaker variability in the two domains. The inter‐speaker differences do not reflect superficial dialect or idiolect differences (e.g., extent to which /■/ and /■/ contrast, or tendency to glottalize syllable‐final /t/). Rather, they reflect differences in the...


Journal of Memory and Language | 2003

Rapid access to speech gestures in perception: Evidence from choice and simple response time tasks.

Carol A. Fowler; Julie M. Brown; Laura Sabadini; Jeffrey Weihing


Journal of Memory and Language | 1997

Reductions of spoken words in certain discourse contexts

Carol A. Fowler; Elena T. Levy; Julie M. Brown

Collaboration


Dive into Julie M. Brown's collaborations.

Top Co-Authors

Claudia Carello
University of Connecticut

D. H. Whalen
City University of New York